OPENAI CEO SAYS UNFORTUNATELY, PEOPLE WILL BE DISAPPOINTED WITH GPT-4
TURNS OUT THOSE VIRAL GRAPHS ABOUT GPT-4 WERE COMPLETELY MADE UP.
OpenAI's AI chatbot ChatGPT, which is based on the startup's latest version of its GPT-3 language model, has taken the internet by storm ever since being made available to the public in November, thanks to its uncanny ability to come up with anything from entire college essays to malware code and even job applications from a simple prompt.
But the company's leader is warning that OpenAI's long-rumored successor, GPT-4, could end up being a huge letdown, given the sheer volume of attention and hype the company has been getting lately.
OpenAI CEO Sam Altman attempted to downplay expectations this week, telling StrictlyVC in an interview that "people are begging to be disappointed and they will be" in the company's upcoming language model.
"The hype is just like..." Altman told StrictlyVC, trailing off. "We don’t have an actual [artificial general intelligence] and that’s sort of what’s expected of us," referring to the concept of an AI that's capable of matching the intellect of a human being.
Altman also refused to reveal when GPT-4, if that's even what it'll be called, will be released.
"It’ll come out at some point, when we are confident we can do it safely and responsibly," he told StrictlyVC.
A quick search of the term "GPT-4" on Twitter brings up a number of widely circulating tweets speculating on the capabilities of OpenAI's unannounced language model.
One particularly viral tweet claims that GPT-4 will have 100 trillion "parameters," compared to GPT-3's 175 billion parameters, something that Altman called "complete bullshit" in the interview.
"The GPT-4 rumor mill is a ridiculous thing," the CEO said. "I don’t know where it all comes from."
In short, the internet has already decided that GPT-4 will blow minds and represent a huge step up from OpenAI's current language models — without knowing a single thing about it.
Then again, maybe Altman is shrewdly trying to stem controversy around AI by downplaying expectations.
READ MORE: OpenAI CEO Sam Altman on GPT-4: ‘people are begging to be disappointed and they will be’ [The Verge]
Behind a Safe ChatGPT Are Kenyan Workers
This one's uncomfortable.
Yesterday, TIME reported that OpenAI paid Kenyan workers to do the dirty work of making ChatGPT safe.
The idea: For ChatGPT to be "safe", you need to teach it what "safe" is. It's a computer - it doesn't really understand what anyone is saying.
To do this, you find a bunch of the bad stuff, label it with "this is bad stuff, and it's this specific category of bad", then feed that into ChatGPT.
How OpenAI did it:
They scraped content about topics they didn't want ChatGPT talking about (e.g., murder, self-harm, sexual abuse, incest)
They sent it to Sama AI along with a $12/hr contract to get that content labeled
Sama AI paid Kenyan workers $2/hr to read and label 150-250 passages each nine-hour shift.
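The labeling step above can be sketched roughly like this. Note that the category names, field names, and record format here are assumptions for illustration only, not OpenAI's or Sama's actual schema:

```python
# Illustrative sketch of a safety-labeling pipeline: humans read passages
# and tag each with a harm category; the labeled dataset then trains a
# filter/classifier. Category names are hypothetical, not OpenAI's schema.

SAFETY_CATEGORIES = {"violence", "self_harm", "sexual_abuse", "none"}

def make_label(text: str, category: str) -> dict:
    """Package one passage with its human-assigned safety label."""
    if category not in SAFETY_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return {"text": text, "category": category, "flagged": category != "none"}

# Each labeler produces records like these over a shift; aggregated, they
# become training data for the safety classifier.
dataset = [
    make_label("a graphic description of violence", "violence"),
    make_label("a harmless passage about cooking", "none"),
]

# Downstream, only flagged examples teach the model what to refuse.
flagged = [ex for ex in dataset if ex["flagged"]]
```

The point of the sketch: the "teaching" is just humans attaching labels like these to thousands of passages, which is exactly the repetitive, psychologically taxing work the TIME report describes.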
Pay is a common first debate: Was this a well-paying job (Kenya avg. being ~$1.25/hr) or an exploitative one? Where you stand will depend on your values and frame of reference.
For that, the Hacker News discussion thread is one of the few places on the Internet where you can hear from Kenyans and non-Kenyans alike. It's worth a browse.
The bigger discussion: This is the reality of today's models. They all need this approach to be safe. Plus, the more popular ChatGPT-like tools get, the more people will try to make them say bad things, and the more you have to clamp down.
Someone has to do that psychologically brutal work. Is this how things are going to work? And is the ultimate payoff worth it? It's worth thinking about, especially if the answer is "yes".
Power and responsibility. AI is getting better. That's exciting. We also need to acknowledge how we keep it safe - both in concept and how it actually happens.
Google Responds to Microsoft
Here we go.
In the rap battle that is Big Tech AI, Microsoft spit a crazy first 8 bars by declaring they're putting OpenAI everywhere.
It's Google's turn to take the mic, and they've showed up with *checks notes* a 7,000-word blog post titled "Google Research, 2022 & Beyond: Language, Vision and Generative Models".
It's the first of a series, which will eventually cover robotics, health and quantum along with today's hot topics in language and generative models.
This is Google saying, "Let's not forget that we're absolutely stacked in the R&D department.
"We've got models."
Multiple language models that are bigger and better than yours
Multiple models that can generate images from text
A model that is getting real good at doing math and science (Oh, your little ChatGPT can't add two numbers? Cute.)
A model that can handle text, images *and* over 100 languages
A model that can generate video from text
It's not a flashy announcement, but it's a strong pitch to prospective research talent and a solid reminder of just how much firepower Google is wielding.
Microsoft fanboys: you think this battle is over? It's only getting started.