What AI can’t do: The case for human-led digital government

When you work in the public sector, it’s not enough to just recognize facts or faces. Forming relationships, understanding the changing dynamics of social interactions, and recognizing bias are key parts of working with the public — and they’re also blind spots in current AI systems.
Artificial Intelligence (AI) is an incredible technological advance that offers numerous benefits. Unlike people, however, it has no agency and doesn't seek meaningful purpose or impact. As government agencies integrate AI into their operations, it's crucial to recognize its limitations and ensure that human oversight is maintained and prioritized.
The importance of human oversight
Human-led digital government uses technology to support (not replace) the expertise, empathy, and ethical responsibility of public servants. Human oversight ensures that decisions made with AI assistance are transparent and accountable.
Without a human to manage and course-correct AI tools, your agency opens itself up to considerable risk. And without that oversight, it's hard to explain or justify AI-driven decisions, especially when they affect residents' lives.
Maintaining public trust is paramount. While AI can improve efficiency, it can also erode residents' comfort: when people feel excluded or uncertain about how systems affect them, they're less likely to trust government officials.
Disengagement could mean fewer people accessing vital services, more skepticism toward institutions, and reduced civic participation, all of which underscores the need for human involvement. When residents know that humans are overseeing AI systems, they're more likely to trust the outcomes.
People know when empathy and understanding are needed. Humans can adapt to unforeseen circumstances and make decisions grounded in empathy, ethics, and societal norms, areas where AI currently falls short.
Limits of AI in the public sector
AI works well for repetitive tasks inside a closed system, where the rules are clear and aren't influenced by external forces. The real world, however, doesn't function that way. AI underperforms in areas that require intuition and cultural sensitivity:
- Ethical decision-making
AI lacks the moral compass inherent to human judgment. In scenarios requiring ethical considerations, such as social services or law enforcement, relying solely on AI can lead to unfair outcomes. Studies have shown that algorithmic decisions can reinforce existing social inequalities. Similarly, the misuse of AI in legal settings has raised concerns about undermining the integrity of the justice system.
- Data bias and representation
AI systems are only as good as the data they’re trained on. If the training data contains biases or lacks representation from certain groups, the AI’s outputs will reflect these shortcomings, potentially leading to discriminatory practices.
- Emotional intelligence
Without lived experience to draw on, AI tools can't pick up on nuanced emotions. Humans can imagine, read shifts in a situation or conversation, and anticipate what comes next, abilities that remain unique to people.
Best practices for human-centered AI integration
- Make procurement transparent
Agencies should prioritize transparency in AI procurement. Understanding how tools are developed, what data is used, and how decisions are made is essential for ethical deployment.
- Implement explainable AI (XAI)
Most people won't trust an AI tool they don't understand. For more successful tech adoption, make sure the AI system's decision-making process is clear: it should provide explanations for its decisions, allowing humans to understand and, if necessary, challenge outcomes.
- Conduct regular audits and monitoring
Continuously monitor AI systems for biases and inaccuracies. Regular audits can help identify and rectify issues before they escalate; a minimal example of such an audit follows this list.
- Ensure inclusive data collection
Gather diverse and representative data to train AI systems, minimizing biases and ensuring fair outcomes for all community segments.
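To make the auditing step concrete, here's a minimal Python sketch of one common check: comparing positive-outcome rates across demographic groups in a decision log and flagging large gaps for human review. The data fields, group labels, and the four-fifths (0.8) threshold are illustrative assumptions, not a standard any particular agency's tools use.

```python
# A minimal bias-audit sketch (not any agency's actual pipeline): compare
# positive-outcome rates across demographic groups in a decision log and
# flag large gaps for human review. Field names and the 0.8 threshold are
# illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """Share of positive outcomes ("approved") per demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        positives[record["group"]] += record["approved"]
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact(rates):
    """Ratio of the lowest group rate to the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

# Toy decision log standing in for a real system's audit records.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

rates = selection_rates(decisions)
print("Selection rates by group:", rates)
if disparate_impact(rates) < 0.8:  # the common "four-fifths" rule of thumb
    print("Potential disparity detected: route these decisions to a human reviewer.")
```

In practice, a check like this would run on real decision logs on a regular schedule, and a flagged disparity would trigger human review of both the data and the model rather than an automatic fix.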
A collaboration between humans and AI is the future of public sector work
Artificial intelligence will lead to a new way of working, but this isn’t a zero-sum situation. While AI offers promising tools for enhancing public sector efficiency, it does have significant limitations.
AI is unlikely to replace hands-on roles, positions that require a high level of personal interaction, or jobs in less predictable environments. Traits like creativity, imagination, and empathy can't be replicated by a machine, yet they're all necessary for people working in the public sector.
The challenge will be balancing AI capabilities with human judgment: integrating new technologies ambitiously and strategically into the organization while upholding public trust and ethical standards.