Executives generally do not encourage heavy regulation of their own industries. But ChatGPT and tools like it are so powerful, and their impact on society will be so profound, that regulators need to get involved now.

That's according to Mira Murati, chief technology officer at OpenAI, the company behind ChatGPT.

"We're a small group of people and we need a lot more input into this system, and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else," Murati said in a Time interview published on Sunday.
ChatGPT is an example of "generative AI," a class of tools that can, among other things, produce answers, images, or even music within seconds from simple text prompts. But ChatGPT could also be used to power AI-infused cyberattacks, BlackBerry researchers warned this week.
To offer such tools, AI companies need cloud computing resources that only a handful of tech giants can provide, so they are forging lucrative partnerships with the likes of Microsoft, Google, and Amazon. Beyond raising antitrust concerns, such arrangements make it more likely that generative AI tools will quickly reach large audiences — perhaps faster than society is ready for.
"We weren't expecting this level of excitement from bringing our baby into the world," Murati told Time, referring to ChatGPT. "In fact, there was some trepidation about even putting it out there."
Yet since its release in late November, ChatGPT has reached 100 million monthly active users faster than TikTok or Instagram did, analysts at UBS noted this week. "In 20 years following the Internet space, we cannot recall a faster ramp-up in a consumer Internet app," they wrote.
Meanwhile, pressured by Microsoft's tie-up with OpenAI, Google is ramping up its own efforts to get such AI tools to consumers. On Friday, Google announced a $300 million investment in Anthropic, which has developed a ChatGPT rival called Claude.

Anthropic, in turn, was launched largely by former OpenAI employees who worried that commercial concerns were outweighing safety concerns at the ChatGPT developer.
"Artificial intelligence can be misused, or it can be used by bad actors," Murati told Time. "So, there are questions about how you regulate the use of this technology globally. How do you govern the use of AI in a way that's aligned with human values?"
Elon Musk helped start OpenAI in 2015 as a nonprofit, a status it has since shed. The Tesla CEO has warned about the danger advanced AI poses to humanity, and in December he called ChatGPT "scary good," adding, "We are not far from dangerously strong AI." He has also tweeted that his confidence in OpenAI's commitment to safety was "not high," noting that the venture began as open-source and nonprofit and that "neither is still true."
Microsoft co-founder Bill Gates recently said, "AI is going to be debated as the hottest topic of 2023. And you know what? That's appropriate. This is as important as the PC, as the Internet."
Billionaire entrepreneur Mark Cuban said last month, "Just imagine what GPT 10 is going to look like." He called generative AI "the real deal," but noted, "we're still in the infancy of it."
Asked whether it was too early for regulators to get involved, Murati told Time, "It's not too early. It's very important for everyone to start getting involved, given the impact these technologies are going to have."