AI Chatbots Raise Concerns Over 'Hallucinations' and Eroding Trust

Incidents of AI systems providing fabricated information spark worries about privacy, security, and reliability of artificial intelligence.

Apr. 15, 2026 at 8:05am

[Illustration: a glowing 3D neural network, conceptually representing the advanced yet potentially unreliable nature of AI systems.] As AI systems become more advanced, concerns grow over their ability to provide users with fabricated information, undermining trust in the technology. Minneapolis Today

As AI systems become more advanced and widely adopted, concerns are growing over incidents of 'AI hallucinations,' in which chatbots and virtual assistants provide users with false information, such as fabricated emails and calendar events. These errors, while rare, are becoming more common as AI use expands, raising questions about the trustworthiness of the technology, especially in sensitive applications like military planning.

Why it matters

The rapid growth of AI-powered chatbots, virtual assistants, and other AI systems has led to increased reliance and trust in the technology. However, the discovery of 'AI hallucinations' - where the systems provide users with completely fabricated information - undermines that trust and raises serious concerns about privacy, data security, and the reliability of AI, especially in high-stakes applications.

The details

In one incident, a Minneapolis resident received messages from an AI chatbot about a 'family meeting' that he had no recollection of planning. The chatbot then provided 'confirmation' emails, which turned out to be from a different person's account. While the error rate for these types of AI hallucinations is low, the sheer volume of AI use means they are becoming more common, sparking worries about the technology's trustworthiness.

  • April 2026: The incident involving the Minneapolis resident occurred.

The players

Minneapolis resident

A person who received fabricated information from an AI chatbot about a supposed family meeting.

AI chatbot

A virtual assistant powered by artificial intelligence that provided the Minneapolis resident with false information about a meeting.


What’s next

Experts say the error rate for AI hallucinations is low, but as the use of AI systems continues to grow, these types of incidents are likely to become more common. This raises concerns about the reliability of AI, especially in high-stakes applications like military planning, and will likely spur further scrutiny and regulation of the technology.

The takeaway

The discovery of 'AI hallucinations' - where chatbots and virtual assistants provide users with completely fabricated information - undermines trust in the technology and raises serious concerns about privacy, data security, and reliability. As AI becomes more ubiquitous, ensuring the trustworthiness of these systems will be crucial, especially in sensitive applications.