By Tom Ozimek
Twitter CEO Elon Musk has announced that he’s forming a new artificial intelligence company called xAI.
Musk made the announcement in a brief tweet on Wednesday, in which he did not elaborate on the details of the new venture except to say that its aim is to “understand reality.”
“What are the most fundamental unanswered questions?” was the first tweet by the xAI account on Twitter.
Musk replied to the tweet: “And what are the most fundamental unknown questions? Once you know the right question to ask, the answer is often the easy part.”
The xAI website states that a Twitter Spaces chat has been scheduled for July 14 in order to “meet the team and ask us questions” about the new initiative, about which details remain scarce.
The new company is separate from X Corp, the Musk-owned firm that has absorbed Twitter as part of Musk’s long-awaited move to turn the social media platform into an “everything app.”
xAI will, however, work closely with Twitter, Tesla, and other companies “to make progress towards our mission,” per its website.
The team is led by Musk and includes members who have previously worked on AI projects at Google's DeepMind and at OpenAI, the company behind the chatbot ChatGPT.
The website lists Igor Babuschkin, Manuel Kroiss, Yuhuai (Tony) Wu, Christian Szegedy, Jimmy Ba, Toby Pohlen, Ross Nordeen, Kyle Kosic, Greg Yang, Guodong Zhang, and Zihang Dai as team members, with a note indicating that xAI is “actively recruiting” researchers and engineers.
The team is advised by Dan Hendrycks, who now serves as the director of the Center for AI Safety, a nonprofit that seeks to “reduce societal-scale risks associated with AI.”
Musk’s announcement of the formation of xAI comes on the heels of the recent rollout of Twitter competitor Threads, launched by Meta CEO Mark Zuckerberg, Musk’s arch-rival, who has agreed to fight the Twitter chief in a cage match.
It also comes as Twitter has faced turbulence over what Musk has called “extreme” levels of system manipulation and data scraping, including by AI projects. Citing that scraping and its negative impact on user experience, Musk recently imposed limits on how many tweets users could read on the platform.
Earlier, Twitter announced it would require users to have an account on the social media platform to view tweets, a move that Musk called a “temporary emergency measure.”
At the time, Musk said that hundreds of organizations or more were scraping Twitter data “extremely aggressively.” The remark followed his earlier complaints that artificial intelligence firms such as OpenAI had used Twitter’s data to train their large language models.
In April, Musk threatened to sue Microsoft, which has invested billions into OpenAI, after accusing the company of using Twitter data for training.
“They trained illegally using Twitter data. Lawsuit time,” Musk wrote on Twitter on April 19, without providing further details regarding the allegations.
While Musk did not provide evidence of Microsoft’s alleged “illegal training” and did not state what the training was for, ChatGPT is trained using reinforcement learning from human feedback (RLHF) and large bodies of text from various sources across the internet, including human conversations.
Microsoft did not respond to a request for comment from The Epoch Times on Musk’s lawsuit threat.
Earlier, Musk joined more than 1,100 individuals, including experts and industry executives such as Apple co-founder Steve Wozniak, in signing an open letter calling on all artificial intelligence labs to pause the training of systems more powerful than GPT-4 for at least six months.
The letter doesn’t call for a halt to AI development in general, only to the most advanced systems, describing the requested pause as “merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
Musk, along with other signatories of the letter, cited concerns over AI’s possible “risks to society and humanity.”
‘Catastrophic’ Impacts on Society
Signatories of the letter warned that AI systems with human-competitive intelligence could pose “profound risks to society and humanity” and should be planned for and managed carefully to avoid potentially “catastrophic” impacts on the world and its people.
“Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt,” the experts said.
“Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall,” they argued.
They called for AI labs and independent experts to use the six-month moratorium to develop and implement a set of safety protocols for advanced AI design that would ensure that these systems are “safe beyond a reasonable doubt.”