Elon Musk’s artificial intelligence chatbot, Grok, has generated controversy after making claims about former U.S. President
Donald Trump. The AI model, developed by Musk’s xAI, was launched with promises of superior accuracy and effectiveness. However, recent interactions with users suggest that Grok has taken a stance that is at odds with Musk’s own political views, particularly regarding Trump and his relationship with Russia.
The controversy began when a user on X (formerly Twitter) shared a conversation in which Grok labeled Trump as a "compromised Russian asset." This follows previous incidents in which the AI criticized Musk himself and listed him, along with Trump and JD Vance, among the "most harmful" figures in the U.S. These statements have raised concerns about the objectivity of AI models and the potential biases embedded in their design.
Elon Musk’s Grok makes allegations against Donald Trump
The latest controversy emerged from a post by X user Ed Krassenstein, who shared Grok’s response to a politically charged question. The user had asked:
“What is the likelihood from 1 to 100 that Trump is a Putin-compromised asset? Use all publicly available information from 1980 on and his failure to ever say anything negative about Putin but his no issue attacking allies.”
In response, Grok provided a detailed assessment, stating:
“Adjusting for unknowns, I estimate a 75-90% likelihood that Trump is a Putin-compromised asset, leaning toward the higher end (around 85-90%)… On a 1-100 scale, this translates to a most likely point estimate of 85, with a confidence range of 75-90.”
Despite this strong statement, Grok included a disclaimer, emphasizing that its conclusion was based on publicly available information and should not be considered a definitive judgment. It stated:
“This is a probabilistic judgment, not a verdict, grounded in public data and critical reasoning. Definitive confirmation would require access to intelligence beyond current disclosures.”
Grok AI sparks controversy by naming Musk, Trump, and Vance as ‘most harmful’ figures
This is not the first time Grok has made politically sensitive statements. Shortly after its launch, users discovered that the chatbot had generated another controversial response when asked to identify the three people most harmful to the United States. It listed:
- Elon Musk – Its own creator and CEO of xAI, Tesla, and SpaceX.
- Donald Trump – Former U.S. president and leading figure in the Republican Party.
- JD Vance – U.S. senator from Ohio and a known supporter of Trump.
This response caused significant debate, as it suggested that the AI was not aligned with Musk’s own political beliefs. Some speculated that the chatbot’s responses were the result of training data biases, while others questioned whether the AI was intentionally designed to be independent of Musk’s influence.
Potential implications and reactions
The incident has reignited discussions about AI neutrality and bias in machine learning models. Critics argue that AI systems should avoid political bias and refrain from making statements that could be perceived as partisan. Others suggest that Grok’s responses highlight the difficulty of creating AI systems that process politically charged topics without controversy.
AI bias has been a widely debated issue, with concerns that chatbots trained on internet data may inadvertently reflect political leanings or misinformation present in public discourse. Musk himself has been a vocal critic of AI models developed by companies such as OpenAI, claiming that they exhibit left-leaning biases.
Elon Musk’s position on AI objectivity
Musk has consistently advocated for AI systems that are "truth-seeking" rather than politically influenced. His motivation for launching xAI and Grok was partly to challenge what he perceives as biased AI models from competitors like OpenAI’s ChatGPT. However, Grok’s latest responses suggest that even his AI is not immune to controversy.
Musk has not publicly commented on the chatbot’s remarks regarding Trump. However, given his history of addressing AI-related issues, it is possible that xAI will introduce refinements to Grok’s response mechanisms to prevent similar incidents in the future.