Talking Points: Is the AI arc always towards truthfulness?
- Wil James

- Nov 4
With the announcement that Elon Musk’s latest venture is an AI-powered encyclopaedia (“Grokipedia”), is it time to challenge the assumption that using AI in research will ultimately lead to better outcomes?
The launch of Grokipedia has garnered a fair bit of media attention, with plenty of fun poked at some questionable assertions, not to mention the gargantuan length of the entries: the page for the relatively low-profile former Conservative backbencher Michael Fabricant runs to 6,500 words, compared with the 2,000 words Wikipedia musters for him.
Much of the coverage, as is generally the case with AI stories, focuses on the veracity of the information Grokipedia offers. This is not surprising: over the past 18 months we have all become more familiar with “AI hallucination”. Anyone using publicly available research to inform their decisions should be worried about whether the “facts” they find online are really true, or fabrications created by a large language model (LLM) that is too eager to please.
But this isn’t really a fundamental critique of using AI for research. It is reasonable to expect that, as AI models improve, they will get better at weeding out “alternative facts”. Hallucination looks more like a bug than a disqualifying flaw.
The bigger issue, which is particularly relevant to us in the business intelligence industry (and indeed to anyone whose research is being infiltrated by AI), is where AI sources its information and how this might affect research.
A basic tenet of all the discussions I’ve heard to date on the use of AI in business intelligence is that the “data” LLMs draw on is a neutral resource, not subject to manipulation, selectivity or omission. Underlying this assumption is the fact that many of the answers currently served up by AI assistants draw on human-generated content from sites like Wikipedia and Answers.com, as well as discussion boards like Reddit.
But what is to stop AI being used against AI to distort the factual picture? As highly political actors like Elon Musk step into the free-information business, the problem is not that his encyclopaedia is powered by AI; it is the creation of an information environment weaponised in pursuit of a particular agenda.
Musk has been unequivocal that his platform is intended to counteract the “woke” bias he alleges in other platforms and professional media organisations. Whatever you think of Musk and his politics (or Jimmy Wales, for that matter), we can no longer pretend that pointing LLMs at “data” in the public domain is a neutral exercise. Will others now also seek to pollute the pool from which AI draws information?
For anyone reliant on fact-based research to inform good decisions, the arrival of Grokipedia makes the deployment of AI research tools a fraught prospect. If we can’t assume that the “data” available to LLMs is neutral or unadulterated, we need to ask hard questions about how we maintain factual reliability while using technology to accelerate research.