Bad data, bad answers. Avoiding AI missteps in government

Artificial intelligence has the potential to simplify and improve access to government services. But when AI gets something wrong, the consequences can still be significant.
If a resident asks about renewing a driver’s license and is instead directed to a fishing permit page, it’s not just an inconvenience; it’s a breach of trust. Even small errors can erode confidence in public systems. Misdirection or incorrect answers can lead to missed payments, legal complications, or lost benefits.
In fact, 51% of Americans say they feel more concerned than excited about the future of AI, and it’s easy to see why. Trust, once lost, is hard to regain.
Are “hallucinations” still a thing?
AI tools like large language models are adept at sounding confident, but that doesn’t make them accurate. On the extreme end, OpenAI’s own research has shown that some of its latest models can hallucinate up to 48% of the time. In more typical usage, hallucination rates of 2% to 5% are still commonplace.
Structured data helps AI get it right
AI works best when the information it uses is organized. Feeding models unstructured content that overlaps services or mixes use cases makes it difficult for them to match user questions to correct answers.
By contrast, when content is tagged into service categories, such as “vehicle renewals” or “property taxes,” AI can surface accurate results more reliably. Investing in data labeling and structuring can reduce hallucination rates by up to 40%.
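To make that concrete, here is a minimal sketch of what service-category tagging might look like behind the scenes. The category names, content entries, and the `find_answers` helper are illustrative assumptions, not a specific product’s API; the point is simply that every piece of content carries an explicit service tag the AI can filter on before it answers.

```python
# Minimal sketch: agency content tagged by service category so an AI
# assistant can narrow its search before answering. The entries and the
# find_answers helper are illustrative, not a particular vendor's API.

CONTENT = [
    {
        "category": "vehicle_renewals",
        "title": "Renew a driver's license online",
        "url": "https://example.gov/dmv/license-renewal",
        "summary": "Licenses can be renewed online up to 6 months before expiration.",
    },
    {
        "category": "property_taxes",
        "title": "Pay property taxes",
        "url": "https://example.gov/treasurer/property-tax",
        "summary": "Property tax payments are due twice a year.",
    },
]

def find_answers(question: str, category: str) -> list[dict]:
    """Return only content tagged with the category inferred from the question."""
    return [entry for entry in CONTENT if entry["category"] == category]

# A question classified as "vehicle_renewals" never sees property-tax or
# fishing-permit pages, which cuts down on mismatched answers.
print(find_answers("How do I renew my driver's license?", "vehicle_renewals"))
```

The structure matters more than the tooling: whether the tags live in a CMS, a spreadsheet, or a database, the AI only sees content that has already been sorted by service.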
So who is responsible for information accuracy?
Many vendors rely on disclaimers like “AI may be wrong” to manage expectations. But disclaimers don’t help a resident who missed a deadline due to incorrect information. In government, misinformation isn’t just frustrating; it can have real consequences.
A better approach is to design AI systems that guide users only where data is verified. That often means limiting AI to informational guidance, not transactions or legal advice.
It also means ensuring there’s always human oversight and ownership of AI output. Some agencies are adopting hybrid approaches, where AI can answer common questions based on approved data but routes more complex or sensitive inquiries to staff. This ensures the information remains accurate while freeing up employees to focus on tasks that require their expertise.
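One way to picture that hybrid pattern is as a simple routing rule: answer from approved content only when the match is confident, and hand everything else to a person. The topics, threshold, and scoring function below are placeholder assumptions for illustration, not a prescribed implementation.

```python
# Minimal sketch of a hybrid routing rule: answer only from approved,
# verified content when the match is strong; otherwise escalate to staff.
# Topics, threshold, and scoring are illustrative placeholders.

APPROVED_ANSWERS = {
    "license renewal": "You can renew online at example.gov/dmv/license-renewal.",
    "property tax due dates": "Property taxes are due in April and October.",
}

CONFIDENCE_THRESHOLD = 0.8  # below this, a human takes over

def score_match(question: str, topic: str) -> float:
    """Toy word-overlap score; a real system would use proper retrieval."""
    q_words, t_words = set(question.lower().split()), set(topic.split())
    return len(q_words & t_words) / len(t_words)

def route(question: str) -> str:
    topic, score = max(
        ((t, score_match(question, t)) for t in APPROVED_ANSWERS),
        key=lambda pair: pair[1],
    )
    if score >= CONFIDENCE_THRESHOLD:
        return APPROVED_ANSWERS[topic]  # verified, pre-approved content only
    return "Routing your question to a staff member for review."  # human handles it

print(route("What are my property tax due dates"))   # answered from approved data
print(route("Can I appeal my benefits decision"))     # escalated to staff
```

The design choice is the important part: the AI never improvises outside the approved set, and anything ambiguous defaults to a person rather than a guess.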
Designing for flexibility
Use proven AI models, but concentrate your effort on structuring the underlying content they depend on.
When data is well-organized, switching AI providers becomes far easier without starting from scratch. Structured content also enables easy-to-follow audit trails and supports human review, empowering agencies to refine AI performance over time.
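As a rough illustration of what an audit trail can capture, the sketch below records each interaction with the question, the answer given, the source page it came from, and a timestamp, so a reviewer can trace any wrong answer back to the data that produced it. The field names and JSON-lines format are assumptions, not a standard.

```python
# Minimal sketch of an interaction audit log: every AI answer is recorded
# with its source content and a timestamp so reviewers can trace errors
# back to the underlying data. Fields and file format are illustrative.

import json
from datetime import datetime, timezone

def log_interaction(question: str, answer: str, source_url: str,
                    path: str = "ai_audit_log.jsonl") -> None:
    """Append one interaction as a JSON line for later human review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "source_url": source_url,  # the verified page the answer came from
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction(
    question="How do I renew my driver's license?",
    answer="You can renew online up to 6 months before expiration.",
    source_url="https://example.gov/dmv/license-renewal",
)
```

Because the log is keyed to content rather than to a particular model, it stays useful even if the agency later switches AI providers.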
What your agency can do today
If your agency is exploring AI, here are practical, low‑risk steps to begin:
- Use AI for service discovery or public information, where risk is lower and value is clear
- Map and tag content by service area before feeding it to AI tools
- Avoid chatbot designs that try to carry on long conversations. Instead, focus on directing users quickly to verified information
- Choose tools that log interactions and support audit review for ongoing improvement
Getting started with AI doesn’t require a full system overhaul. The most effective improvements often come from behind the scenes: how your content is organized and how clearly it’s connected to real services.
When residents get accurate, trustworthy answers the first time, it improves service delivery and builds long-term trust.