What is the most significant risk of ChatGPT right now?
The Dunning-Kruger effect!
The Dunning-Kruger effect describes a cognitive bias where people with low ability in a task overestimate their ability to perform it well.
Not only that, but ChatGPT goes as far as providing answers that always seem grounded and factual, even when they are fake and misleading!
An example?
I asked ChatGPT to tell me “what’s FourWeekMBA,” but I also told it to cite its sources.
ChatGPT first defined it, and the definition made total sense, leading you to believe that the answer was grounded in facts it had found.
Yet when it cited its sources, most of them were invented!
In short, as long as something is plausible and makes sense, the AI will make it up, leading you to believe that what it says is factual when it is not!
Of course, the AI doesn't know what it's doing, nor is it trying to deceive, as it's not conscious.
In short, in the example, it answers the question of what FourWeekMBA is by making up sources that do not exist on the website!
And to make its argument convincing, ChatGPT produces links to those fake sources as if they really existed on my website, when they do not!
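One way to catch this kind of fabricated citation is simply to check whether the cited URLs resolve at all. Below is a minimal Python sketch of that idea; the helper name `url_exists` and the second URL are hypothetical, made up for illustration (only the FourWeekMBA homepage is a real address), and some servers block automated requests, so treat it as a rough check rather than a definitive one.

```python
# Rough sketch: check whether URLs cited by a chatbot actually resolve.
# A fabricated citation typically returns a 404, or the request fails entirely.
from urllib.request import Request, urlopen
from urllib.error import URLError

def url_exists(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL responds with a non-error status code."""
    try:
        request = Request(url, method="HEAD")
        with urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except URLError:
        return False

cited_urls = [
    "https://fourweekmba.com/",               # real homepage
    "https://fourweekmba.com/made-up-post/",  # hypothetical fabricated citation
]

for url in cited_urls:
    print(url, "->", "resolves" if url_exists(url) else "does not resolve")
```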
This can generate a huge amount of misinformation if employed at scale…
Therefore:
1. AI-generated content is, in many cases, not factually correct
Beware of these limitations when you use AI-generated content like this.
2. AI-generated content as a misinformation wave
Right now, this is the greatest threat to Google: if AI-generated content gets employed at scale on the web, it might quickly destroy the value of Google's index.
3. Information vs. Knowledge and Understanding
It shows that the form, or the machine's grasp of how to structure an argument, is one thing; the substance, or whether that argument is grounded in reality and experience, is another!
That is an incredible limitation of AI right now.
Information can be vague, noisy, ambiguous and even misleading.
Knowledge and understanding, on the other hand, are grounded in reality and real-world experience!
4. Negative externalities for society
That is a major obstacle to the scalability of these AI assistants.
As of now, with a limited user base, misinformation has a low externality. Yet if it were carried over to a large user base, the externality might become unbearable.
5. Staged rollout vs. mass release
To enable scale, these AI assistants might need proper guardrails and confidence scores before giving answers, and they will need to be grounded in reality, as the risk of hallucination is substantial.
Therefore, to be viable, they'll initially need to be closed assistants available for very specific features before they can be employed as general-purpose engines!
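To make the guardrail and confidence-score idea concrete, here is a minimal Python sketch of what a confidence-threshold check could look like. Everything in it is an assumption for illustration: the stand-in model call, the 0.8 threshold, and the fallback message. It is not a real ChatGPT feature or API.

```python
# Hypothetical sketch of a confidence-score guardrail for an AI assistant.
import random

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; a real system would calibrate this

def fake_model_call(question: str) -> tuple[str, float]:
    """Stand-in for a real model: returns (answer, self-reported confidence)."""
    return f"A plausible-sounding answer to: {question!r}", random.random()

def answer_or_decline(question: str) -> str:
    answer, confidence = fake_model_call(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    # Below the threshold, declining is safer than risking a hallucinated answer.
    return "I'm not confident enough to answer that reliably."

print(answer_or_decline("What is FourWeekMBA?"))
```

The design choice is simple: below the threshold, declining to answer is preferable to confidently producing a hallucination.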