Clinton Bicknell was let in on one of the technology world's biggest secrets last September. The head of AI at language learning app Duolingo was given rare access to GPT-4, a new artificial intelligence model created by Microsoft-backed OpenAI.
He soon discovered that the new AI system was even more advanced than OpenAI's earlier model, which powers the hit ChatGPT chatbot that produces realistic answers in response to text prompts.
Within six months, Bicknell's team had used GPT-4 to build a sophisticated chatbot of its own, allowing users to practice conversational French, Spanish, and English as though they were interacting in real-world settings such as an airport or cafe.
"It was remarkable how the model had such detailed and specialized knowledge of how languages work and the correspondence between different languages," Bicknell said. "With GPT-3, which we were already using, this just wouldn't have been a viable feature."
Duolingo is among a handful of companies, including Morgan Stanley Wealth Management and online education group Khan Academy, that were granted early access to GPT-4 ahead of its more widespread launch this week.
The release shows how OpenAI has grown from a research-focused group into a nearly $30bn company, racing giants such as Google to commercialize AI technologies.
OpenAI announced that GPT-4 demonstrated "human-level" performance on a range of standardized tests, such as the US bar exam and SAT school tests, and showed how its partners were using the AI software to create new products and services.
But for the first time, OpenAI did not reveal any details about the technical aspects of GPT-4, such as what data it was trained on or the hardware and computing capacity used to deploy it, citing both the "competitive landscape and the safety implications".
This represents a shift for OpenAI, which was founded in 2015 as a non-profit, in part the brainchild of some of the tech world's most radical thinkers, including Elon Musk and Peter Thiel. It was built on the principles of making AI accessible to all through scientific publications and developing the technology safely.
A pivot in 2019 turned it into a for-profit venture, backed by a $1bn investment from Microsoft. That was followed by a further multibillion-dollar funding round from the tech giant this year, with OpenAI quickly becoming a key part of Microsoft's bet that AI systems will transform its business model and products.
The change led Musk, who left OpenAI's board in 2018, to tweet this week that he was "still confused as to how a non-profit to which I donated ~$100mn somehow became a $30bn market cap for-profit. If this is legal, why doesn't everyone do it?"
OpenAI’s lack of transparency regarding the technical details of GPT-4 has drawn criticism from others within the AI community.
“It’s very opaque, they’re saying ‘trust us, we did the right thing’,” said Alex Hanna, director of research at the Distributed AI Research Institute (DAIR) and former member of Google’s ethical AI team. “They are cherry-picking these tasks, because there is no scientifically agreed upon set of benchmarks.”
GPT-4, which can be accessed through the $20 paid version of ChatGPT, has shown rapid improvement over earlier AI models on some tasks. For example, GPT-4 scored in the 90th percentile on the Uniform Bar Examination taken by prospective lawyers in the US; ChatGPT only reached the 10th percentile.
While OpenAI did not provide details, AI experts believe the model is larger than previous generations and that far more human input went into training it.
The most obvious new feature is that GPT-4 can accept input in both text and image form, although it responds using text only. This means users can upload a photo and ask the model to describe it in detail, request ideas for a meal made from the ingredients in the image, or ask it to explain the joke behind a meme.
GPT-4 is also capable of generating and ingesting far larger amounts of text than other models of its type: users can feed in up to 25,000 words, compared with 3,000 words for ChatGPT. This means it can handle detailed financial documents, literary works, or technical manuals.
Its more advanced reasoning and parsing capabilities make it far more efficient at analyzing complex legal contracts for risk, said Winston Weinberg, co-founder of Harvey, an AI chatbot built using GPT-4 and used by PwC and magic circle law firm Allen & Overy.
Despite these advances, OpenAI warns of several risks and limitations of GPT-4. These include its ability to provide detailed information on how to carry out illegal activities, including developing biological weapons, and to generate hateful and discriminatory speech.
OpenAI put GPT-4 through a safety testing process known as red-teaming, in which more than 50 outside experts in fields ranging from medicinal chemistry to nuclear physics and misinformation were asked to try to break the model.
Paul Rottger, an AI researcher at the Oxford Internet Institute who focuses on identifying toxic content online, was contracted by OpenAI for six months to provide feedback and try to elicit harmful responses from GPT-4, including content on topics related to suicide or self-harm, graphic descriptions of violence, and examples of extremism and hate speech.
He said that overall the model's responses improved over the months of testing: where it would initially hedge its answers, it later became more explicit in responding to bad prompts.
“On the one hand, there have been advances in security research since GPT-3, and there are a lot of good ideas that went into making this model secure,” he said. “But at the same time, this model is so much more powerful and can do a lot more things than GPT-3, so the risk surface becomes much bigger.”
Original post: OpenAI’s GPT-4 shows progress—and the potential to make money