AI is not an infallible search engine, but many people take its word as gospel. There are tasks that ChatGPT and its peers can handle on their own, but there are hundreds of tasks they simply cannot be trusted with. These models don't actually "know" anything; they only predict the next most likely word based on patterns in their training data. That means they can deliver a satisfactory answer with confidence, or just as confidently produce a completely incorrect response, otherwise known as a "hallucination."
In some cases, an AI hallucination is relatively harmless. But relying on one for guidance on your finances, your health or legal matters is a recipe for disaster. In those areas, a single incorrect answer can have serious real-world consequences.