Facebook parent company Meta recently launched its Llama 4 AI models. The Mark Zuckerberg-led company said it aims to address one of the biggest problems in large language models (LLMs): bias, acknowledging the historical tendency of leading LLMs to lean left on politically and socially debated topics. It attributes the problem to the nature of training data sourced from the internet.
In a blog post published soon after the launch, the company said that mitigating this bias has been a core objective driving the development of Llama 4.
“Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue. As part of this work, we’re continuing to make Llama more responsive so that it answers questions, can respond to a variety of different viewpoints without passing judgment, and doesn’t favor some views over others,” it added.
Meta says Llama 4 demonstrates advancements in reducing bias
Meta says Llama 4 shows improvements in bias reduction compared to its predecessor, Llama 3, and is comparable to Grok, the model developed by Elon Musk-owned xAI. These advancements include reduced refusal rates: the model declines to answer questions on debated political and social topics less often.
Meta says the model takes a significantly more balanced approach in its refusals, and that testing indicates Llama 4’s tendency to respond with a strong political lean is now comparable to Grok’s.
Google CEO Sundar Pichai’s comment on Gemini image generation controversy
Google faced criticism last year when its AI chatbot Gemini produced historically inaccurate and racially biased images, including depictions of America’s founding fathers and war heroes. The chatbot also reportedly declined to depict white people in certain scenarios. In response, Google suspended Gemini’s image generation feature and apologised for “missing the mark.”
Google CEO Sundar Pichai addressed the controversy, stating, “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.” He emphasised that Google is working to resolve these issues, while acknowledging that “No AI is perfect, especially at this emerging stage of the industry’s development.”