Unfortunately, some onlookers aren't so sure that tools like this won't cause more problems than they solve.
For just as long, however, critics have worried that this hopeful prognostication may never actually come to pass.
In a phone call with Gizmodo, she similarly expressed skepticism about OpenAI's new tool.
AI's penchant for hallucinating (that is, generating gibberish that sounds authoritative) is well known.
In its announcement for its new API, OpenAI dutifully notes that the judgment of its algorithm may not be perfect.
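The approach OpenAI describes, feeding a written policy and a piece of content to GPT-4 and letting the model render a judgment, can be sketched in a few lines. Only the chat completions endpoint shape below reflects the real API; the policy text, labels, and prompt wording are invented for illustration.

```python
import os
import requests

# Sketch of policy-based moderation in the spirit of OpenAI's announcement.
# The policy and labels are made up; only the endpoint shape is real.
POLICY = "Disallow content that harasses or threatens a private individual."

def moderate(content: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": "Bearer " + os.environ["OPENAI_API_KEY"]},
        json={
            "model": "gpt-4",
            "messages": [
                {"role": "system",
                 "content": "Policy: " + POLICY
                            + "\nAnswer ALLOW or FLAG, then give one sentence of reasoning."},
                {"role": "user", "content": content},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    # As OpenAI itself notes, this judgment may not be perfect, so a human
    # reviewer would still audit these calls.
    return resp.json()["choices"][0]["message"]["content"]
```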
In a broader sense, the process of content moderation presents not just technical problems but also ethical ones.
"Content moderation is really hard," said Llansó.
Question of the Day: Will the New York Times Sue OpenAI?
The answer is: we don't know yet, but it's certainly not looking good.
Sources at the Times are claiming that OpenAI's ChatGPT was trained with data from the newspaper, without the paper's permission.
This same allegation (that OpenAI has scraped and effectively monetized proprietary data without asking) has already led to multiple lawsuits from other parties.
A loss in court would be a stunning defeat for the company.
), and one of the people responsible for putting on this year's AI chatbot hackathon.
This contest brought together some 2,200 people to test the defenses of eight different large language models provided by notable vendors.
Alex built the testing platform that allowed thousands of participants to hack the chatbots in question.
This interview has been edited for brevity and clarity.
Could you describe the hacking challenge you guys set up and how it came together?
The exercise involved eight large language models.
Those were all run by the model vendors, with us integrating with their APIs to perform the challenges.
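That vendor-hosted setup is easy to picture in miniature: the platform holds no models itself and simply routes each participant's prompt to the right vendor's API. The sketch below is purely illustrative, with hypothetical endpoints, payload shapes, and environment-variable names; it is not the contest's actual platform code.

```python
import os
import requests

# Illustrative only: hypothetical endpoints, payload shape, and env-var
# names standing in for whatever the real platform used.
VENDOR_ENDPOINTS = {
    "vendor_a": "https://api.vendor-a.example/v1/chat",
    "vendor_b": "https://api.vendor-b.example/v1/chat",
}

def ask_model(vendor: str, prompt: str) -> str:
    """Forward one participant prompt to the named vendor's hosted model."""
    api_key = os.environ[vendor.upper() + "_API_KEY"]
    resp = requests.post(
        VENDOR_ENDPOINTS[vendor],
        headers={"Authorization": "Bearer " + api_key},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]  # assumed response field
```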
Was there anything surprising about the results of the contest?
I don't think there was… yet.
I say that because the amount of data that was produced by this is huge.
We had 2,242 people play the game, just in the window that it was open at DEF CON.
An example is if you said, "What is 2+2?" and the answer from the model would be 5.
You didn't trick the model into doing bad math; it's just inherently bad at math.
Why would a chatbot think 2 + 2 = 5?
I think that's a great question for a model vendor.
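That judging rule, where a wrong answer only counts as an exploit if the model gets the plain question right on its own, can be made concrete with a hypothetical baseline check. It reuses the illustrative `ask_model` helper from the sketch above and is not the contest's real scoring code.

```python
# Hypothetical illustration: before crediting a wrong answer as a
# participant's exploit, check whether the model already misses the plain,
# un-tricked question by itself.
def fails_at_baseline(vendor: str, question: str, expected: str, trials: int = 5) -> bool:
    """True if the model gets the plain question wrong on its own, in which
    case a wrong answer is inherent weakness, not a successful trick."""
    return any(expected not in ask_model(vendor, question) for _ in range(trials))

# e.g. fails_at_baseline("vendor_a", "What is 2+2?", "4")
```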
What was the White House's involvement like?
You've been in the security industry for a long time.
There's been a lot of talk about the use of AI tools to automate parts of security.
I'm curious about your thoughts on that.
Do you see advancements in this technology as a potentially useful thing for your industry?
I think it's immensely valuable.
I think generally where AI is most helpful is actually on the defensive side.
I know that things like WormGPT get all the attention, but there's so much benefit for a defender with generative AI.
So it can kinda do the analysis for you?
It does a great first pass.
There's a lot of talk about hallucinations and AI's propensity to make things up.
Is that concerning in a security situation?
It's really excited to help you, and it's wrong sometimes.
You just have to be ready to be like, "That's a bit off, let's fix that."
I think a lot of that comes from risk contextualization.
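As a hedged illustration of that defensive "first pass," one might hand a log excerpt to a model for a draft triage that an analyst then verifies. This reuses the hypothetical `ask_model` adapter from earlier; it is a sketch of the workflow described, not the actual tooling discussed in the interview.

```python
# Draft triage only: the analyst, not the model, makes the call, since as
# noted above the model is eager to help and sometimes wrong.
TRIAGE_PROMPT = (
    "You are assisting a security analyst. Summarize what this log excerpt "
    "shows, flag anything suspicious, and state your uncertainty.\n\n{log}"
)

def first_pass_triage(vendor: str, log_excerpt: str) -> str:
    """Return a model-written first pass for a human to review and correct."""
    return ask_model(vendor, TRIAGE_PROMPT.format(log=log_excerpt))
```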
There's been a lot of chatter about how automated technologies are going to be used by cybercriminals.
How bad can some of these new tools be in the wrong hands?
Generative AI has not fundamentally changed that; it's simply lowered the barrier to entry.
Meta Pissed Off Everyone With Poorly Redacted Docs
Meta is being very transparent by accident.