
Artificial intelligence in the public sector: What residents really think

[Illustration: a woman using a laptop beside a friendly green robot, captioned "When it comes to AI, people still have mixed feelings," with contrasting five-star and two-star ratings.]

Artificial intelligence (AI) and machine learning have been popular topics of discussion within public sector organizations for a while; many have begun rolling out internal AI solutions and testing use cases. However, public sentiment remains mixed, especially about AI's role in government services.

A recent Gallup-Bentley University survey found that 31% of Americans believe AI does more harm than good, while only 13% see it as more beneficial. Additionally, 77% of respondents distrust businesses’ and government agencies’ use of AI, highlighting a major hurdle in AI adoption.

While AI can enhance government operations, from automating routine tasks to improving service delivery, the public remains skeptical about its ethical implications. According to one Gallup study, 75% of Americans fear AI will reduce job opportunities, while a 2023 study from Pew Research found that 70% of Americans don’t trust companies to use AI responsibly. Despite these concerns, AI is increasingly being used for digital government initiatives, including chatbots, natural language processing, and automation for public engagement.

The challenge now lies in balancing emerging technologies with public trust, ensuring that AI adoption in public administration is transparent, fair, and beneficial for all.

People still have mixed feelings about AI adoption in government services

Although using AI effectively could improve government operations, it's wise to exercise caution when deploying AI solutions for your residents.

We found that, as with other emerging technologies, consumers remain skeptical about the use of AI for government services. In our recent survey, respondents were fairly evenly split on whether they felt comfortable with AI in digital government:

  • 56% of respondents are somewhat or very comfortable with government agencies using AI
  • 44% of respondents are somewhat or very uncomfortable with that use

People seem most comfortable with using AI to complete mundane tasks or automate manual processes, but overall trust in AI remains shaky at best. An April 2024 YouGov poll found that 62% of people don't trust AI to make ethical decisions, and 45% don't trust that AI's information is accurate.

Respondents recognized that AI could have benefits, but reservations remained. While AI has transformed the private sector, its role in the public sector faces more scrutiny over accountability: concerns about data management, algorithmic bias, and a lack of transparency in decision-making are key barriers to trust.

What concerns people about government agencies using AI?

A lack of regulation and the potential for error give people pause. Distrust in AI technologies and worries about the misuse of large data sets are prevalent, alongside concerns about data security, scams, and bugs. The most common issues revolve around misinformation, ethics, and the possibility of job losses. Still, according to one policy researcher who focuses on state and local communities with strong economic ties to agriculture and manufacturing, residents expressed both significant concern about AI's impact on jobs and optimism about its potential to strengthen those industries.

This tension between anxiety and excitement seems to be the baseline for people trying to decide how they should feel about AI.

1. Misinformation and bias in AI systems

By the nature of their design, AI tools hallucinate. Even when the information presented is wrong, there's a good chance it will sound plausible, increasing the risk that residents receive misinformation. That's especially worrisome to consumers: one poll found that 76% of people are concerned about misinformation from AI tools.

And false information goes beyond text. Deepfake videos are increasingly common, and AI-generated or AI-manipulated images are getting more realistic. Using either, whether deliberately or accidentally, could raise red flags for consumers. In fact, a whopping 98% of consumers reported that "authentic" images and videos are pivotal in establishing trust.

2. Job displacement and automation in public services 

Numerous respondents in our study expressed concerns about the human cost of AI, including job losses, and other studies support that finding. One reported that 77% of people are worried AI will cause job losses within the next 12 months. Another, somewhat more reassuring, poll found that 30% of consumers are worried about AI replacing workers.

Interestingly, people seem more concerned about some jobs being replaced by technology than others, and customer support roles rank high on the list. A Gartner survey reported that 53% of customers would consider switching to a competitor if they found out a company was planning to transition to AI for customer service. (That's a sharp contrast to business leaders' excitement about AI support tools like chatbots.)

Consumers seem cautious about the growing use of AI in areas where transparency and honesty matter most; integrating AI into product descriptions, online reviews, chatbots, and the hiring process raises concern.

3. Privacy and data security in AI use cases

Privacy concerns in digital government remain high, especially as the use of AI expands into tracking, surveillance, and predictive analytics for public safety. To address these risks, agencies are increasingly adopting mitigation strategies, such as AI ethics audits and stricter data governance policies, to ensure AI applications are used responsibly while safeguarding resident data. As public sector organizations adopt more AI solutions, policymakers are emphasizing transparency, accountability, and regulatory frameworks that balance innovation with privacy protection.

Consumers are positive about AI in some situations

While consumers seem very knowledgeable about the risks of AI, they're also optimistic about productivity and efficiency gains:

  • 60% of people think AI could be helpful in education
  • 55% of people anticipate workplace efficiencies
  • 67% of consumers said they’d use AI tools as a search engine

People don’t want AI forcing people out of work, but they are interested in using AI to assist with tasks like drafting emails, answering financial questions, and providing quick summaries. Despite the understandable caution, and recognizing that opinions about this emerging technology are extremely fluid, 65% of consumers said they still trust businesses that use AI technology. They just want to know it’s being used.

Keep a pulse on resident trust levels

Transparency is important, especially if your agency is testing or deploying AI tools. Research is already finding that merely mentioning AI lowers consumers' emotional trust. While that may seem daunting, it's useful context as your team frames its messaging: specifically address what your agency is doing to prevent bias, hallucinations, overreach, environmental impacts, and job losses.

Trustworthy AI is the only way to move forward

Even if there’s no ill intent, AI gone wrong isn’t a good look. We know that consumers don’t fully embrace and trust AI yet, but given the trajectory of AI development, it doesn’t make sense for government agencies to sit back and wait until a perfect product exists. 

As government agencies continue to explore use cases for AI, use these tactics to build the most trustworthy solution possible: 

  • Develop with human benefit in mind: Technology should make life easier for people, not replace their livelihoods; aim for tools that make tasks simple, fast, and safe
  • Operate with transparency and accountability: Residents should know what data was used for training, understand the basics of security measures, and get reassurance that should an “oops” occur, the agency will resolve it without stress on the consumer
  • Make it a community discussion: Ask residents how they feel about AI tools so you know what they’re comfortable with
  • Provide education: Create content or host learning sessions in your community to help people understand the technology your agency is using and why you’re using it 

Check out our 2024 Consumer Digital Government Adoption Index to learn more about what residents think about digital government services.
