The company behind the now-famous chatbot ChatGPT released GPT-4, its latest artificial intelligence (AI) model, on Wednesday, the newest step in a technology that has caught the world’s attention.
The new software can calculate tax deductions and answer questions in the style of a Shakespearean pirate, for example, but it still “hallucinates” facts and makes reasoning errors.
Here’s a look at San Francisco-based startup OpenAI’s latest enhancement of the generative AI model that can deliver readable text and unique images:
OpenAI says that GPT-4 “exhibits human-level performance.” In its announcement, OpenAI claims that the new program is more reliable and creative and can handle “more nuanced instructions” than its predecessor, GPT-3.5, on which ChatGPT was built.
In an online demo on Tuesday, Greg Brockman, president of OpenAI, went through some scenarios that showcased GPT-4 capabilities that seemed like a radical improvement over previous versions.
He demonstrated how the system could quickly calculate the proper income tax deduction after being fed reams of tax code, something he said he couldn’t figure out himself.
“It’s not perfect, but neither are you. And with you is this amplification tool that will allow you to reach new heights,” Brockman stated.
WHY DOES THIS MATTER?
Generative AI technology like GPT-4 could be the future of the internet, at least according to Microsoft, which has invested at least $1 billion in OpenAI and made a splash by integrating AI chatbot technology into its Bing search engine.
It is part of a new generation of machine-learning systems that can apparently converse, generate readable text on demand, and produce novel images and videos based on what they have learned from a vast database of digital books and online texts.
These new AI advances have the potential to transform many professions and the Internet search business, long dominated by Google, which is trying to catch up with its own AI chatbot.
“With GPT-4, we are one step closer to life imitating art,” said Mirella Lapata, Professor of Natural Language Processing at the University of Edinburgh. She was referring to the anthology television series “Black Mirror,” which focuses on the dark side of technology.
“Humans are not fooled by the AI in ‘Black Mirror,’ but they tolerate it,” added Lapata. “Similarly, GPT-4 is not perfect, but it paves the way for AI to be used on a daily basis as a basic tool.”
WHAT ARE THE IMPROVEMENTS EXACTLY?
GPT-4 is a “large multimodal model,” meaning it can accept both text and images as input and use them to generate responses.
In an example posted on the OpenAI website, GPT-4 is asked, “What’s weird about this image?” Its response: “What is unusual about this image is that a man is ironing clothes on an ironing board attached to the roof of a moving taxi.”
GPT-4 is also “steerable,” so instead of getting a response in the “classic” fixed tone and verbosity of ChatGPT, users can customize it and ask for responses in the style of a Shakespearean pirate, for example.
In his demo, Brockman asked both GPT-3.5 and GPT-4 to summarize in one sentence an article that explained the difference between the two systems. The condition was that every word begin with the letter “g.”
GPT-3.5 didn’t even try, producing an ordinary sentence instead. The newer version responded promptly: “GPT-4 generates groundbreaking, grandiose gains, greatly galvanizing generalized AI goals.”
HOW WELL DOES IT WORK?
ChatGPT can write silly poems and songs or quickly explain anything you find on the internet. It also gained notoriety for producing results that could be seriously wrong, such as confidently providing a detailed but false description of the Super Bowl days before it was played, and even being dismissive of users.
OpenAI acknowledged that GPT-4 still has limitations and warned users to be careful. GPT-4 is “not completely reliable yet,” the company warned, because it still “hallucinates” facts and makes reasoning errors.
“Great care should be taken when using the results of the language model, particularly in high-risk contexts,” the company emphasized, though it added that hallucinations have been dramatically reduced.
The experts also advised caution.
“We must remember that language models like GPT-4 do not think in a human-like way, and we must not be fooled by their language fluency,” said Nello Cristianini, Professor of Artificial Intelligence at the University of Bath.
Another problem is that GPT-4 knows little about anything that happened after September 2021, the cut-off date for the data it was trained on.