Trust in government AI could affect the digital divide

Artificial intelligence has made its way into digital government services, from predictive utility maintenance to multilingual chatbots. Yet although more than half of agencies are already using AI or working to deploy it, public comfort with artificial intelligence hasn't improved; among Gen Z and Gen X, it has actually declined.
Consumers are wary about AI use in government
Residents want faster, more personalized service but fear overreach in surveillance and misuse of personal data. In a recent Pew Research Center study, a little more than half of Americans rated the societal risks of AI as high, and 50% said they are more concerned than excited about the increased use of AI in daily life (up from 37% in 2021). Consumer comfort with AI in government is flat year over year, and research reveals a critical hurdle: only 54% of people trust government institutions to "do what is right."
Overall, consumers lean more toward concern than optimism about the use and impact of artificial intelligence, and that skepticism extends to tech companies and government alike.
AI is now a component of the digital divide
AI adds a new dimension to the digital divide: residents don't all have equal access to AI tools or to information about emerging technologies. Part of government's role will be to ensure that deploying AI doesn't marginalize parts of the community.
These pillars can help prevent digital disadvantages:
- Explainability: help people understand how the AI tool works
- Accountability: monitor AI deployments for indicators of success and failure (like bias and accuracy) and correct when necessary
Consumers expect transparency
Skepticism about AI is amplified when it’s government agencies deploying the powerful new technology — especially without clearly communicating guardrails. Although people see AI’s potential for faster service and cost savings, they remain uneasy about issues like privacy, biased decisions, and job losses.
Building (and keeping) resident trust requires more than technically and ethically sound AI execution. Research suggests that a long-term, comprehensive approach addressing people’s underlying concerns and experiences is necessary to build trust in governmental AI projects.
Agencies need to have sustained transparency and dialogue:
- Publish AI roadmaps
- Explain data protections in plain language
- Invite residents to pilot programs before full rollout
- Host advisory councils and public forums
- Produce regular reports on outcomes
Proactive communication and educational opportunities can help make AI feel less like a black box and more like a shared civic tool. Even something seemingly simple like sharing how a conversational workflow reduced call wait times or how predictive analytics prevented costly equipment failures can help residents see tangible benefits.
People are more likely to trust AI if there’s human oversight
Without supervision, or if the underlying data is flawed or biased, errors are repeated and compounded, leading to adverse consequences. Residents want reassurance that a qualified person reviews automated decisions and corrects any mistakes. Research from MIT GOV/LAB supports this: retaining a human in the loop increases trust in government AI use. (A 75/25 ratio of human-made to AI-made decisions was the sweet spot for resident acceptance.)
By pairing innovation with accountability, governments can show that AI tools can enhance, rather than replace, human judgment.
Successful integration of AI in government hinges on public trust
As awareness of AI has increased, so too has concern about its misuse. Flawed AI deployments could create doubts about other digital services, and potentially, could undermine trust in government altogether.
To leverage AI successfully, governments must prioritize transparency, mitigate risk, and preserve a vital role for human decision-making, proceeding incrementally and deliberately. Transparency and a commitment to ethical deployment will determine whether AI becomes a trust-builder or a trust-breaker in the public sector.
The 2025 Consumer Digital Government Adoption Index has more details on balancing AI tools and human oversight.