Multibillionaire Elon Musk might have been taking notes from his fictional counterpart, Tony Stark, when it comes to the dangers of artificial intelligence (AI).
For context, in the Marvel Cinematic Universe, Tony Stark created an AI named ‘Ultron’ as a global defense mechanism to safeguard the world against imminent dangers. However, Ultron eventually developed a twisted perspective that deemed humans the primary threat to the world and vowed to eradicate them from the planet.
Though many may not know it, Elon Musk was in fact one of the founding backers of OpenAI, the organization behind the globally acclaimed ChatGPT, widely regarded as the most sophisticated commercially available AI chatbot.
Musk’s concerns about the possibility of AI turning rogue and “destroying humanity” compelled him to take action by initiating a project called ‘TruthGPT‘.
Musk had already floated the idea of a counterweight to for-profit AI products in a tweet in February 2023, and he has been assembling his own machine learning A-team for some time. But the idea gained renewed public attention when the billionaire discussed the potential hazards of AI in an interview with Fox News host Tucker Carlson.
TruthGPT: Musk’s New Attempt to Make AI Safe Again
During the interview, Elon Musk outlined numerous hazards that threaten the safety of AI, highlighting the need for a new and more secure approach to development.
The first risk he discussed is OpenAI’s shift toward for-profit ventures, a move that clearly contradicts the organization’s founding goal of being an open-source, non-profit entity. Musk expressed regret over his involvement in OpenAI’s inception, given that the organization has now produced an AI chatbot that frequently hallucinates and is vulnerable to jailbreaks and hacking attempts.
Furthermore, OpenAI’s behavior has shifted, as shown by its decision to withhold select crucial information about its new GPT-4 large language model (LLM).
The second risk is the absence of a government AI regulator, which leaves the field without strong competition or ethical rules. Although there is growing demand for regulation in the industry, the absence of an overseeing body places public safety in jeopardy.
The third risk stems from Elon Musk’s interactions with Google co-founder Larry Page. In the interview, Musk expressed his doubts about and disappointment with Page over his apparent disinterest in AI safety and disregard for the technology’s impact on the masses.
He was even baffled by the Google co-founder’s response when he insisted on prioritizing human safety before launching an ambitious superintelligent AI, or ‘AI god’. According to Musk, Page mockingly called him a ‘speciesist’ for his concern over human beings. That was the ‘last straw’ for Musk, and it compelled him to take a more active role in AI safety.
TruthGPT, X.AI, and GPUs
At present, TruthGPT remains primarily a concept, as few details about its development have been made public. But another of the billionaire’s AI initiatives, X.AI, is slowly but surely taking shape: it was incorporated in March 2023 and already has two directors, Musk (of course) and Jared Birchall, who manages his family office.
Recent reports also indicate that Elon Musk has purchased numerous graphics processing units (GPUs), which are crucial components in the development of foundational large language models. He is also reportedly seeking funding from Tesla and SpaceX investors to give his new company a much-needed boost.
What are the Clear Risks of AI?
Musk highlighted two major AI risks in the interview: the sophisticated manipulation of information, and humanity’s unpreparedness for when the Singularity takes place.
“It has the potential of civilization destruction.”
Elon Musk
He believes that AI’s advanced algorithms and evolving capabilities will soon enable it to manipulate the perceptions and decisions of the public. Although it may not be as terrifying as a Skynet/Terminator-level threat, the ability of advanced chatbots to spread propaganda and conceal the truth could potentially lead to a catastrophic outcome for our society.
Just imagine manipulating the public to go against a specific religion, reject and change previously held values, or even support wars. These are just some of the massive manipulations that AI could possibly exercise in the future.
The next threat he highlighted is humanity’s unpreparedness for when the Singularity is eventually achieved. For millennia, humans have been the most intelligent beings on the planet; that intelligence has become our defining trait.
But now, we have created a new being that could soon exceed our collective intelligence and be vastly smarter than us, a chilling scenario we’ve never dealt with before.