OpenAI introduces ChatGPT Gov (and other AI news)

What’s the deal with ChatGPT Gov?
Amid a flurry of news about DeepSeek (a China-based competitor in the artificial intelligence sector), OpenAI announced a new product: ChatGPT Gov.
ChatGPT Gov will allow government agencies to feed “non-public, sensitive data” into OpenAI’s models. According to OpenAI, ChatGPT Gov includes access to many of the same features and capabilities as ChatGPT Enterprise, such as:
- Saving and sharing conversations within their government workspace, and uploading text and image files
- GPT-4o, the flagship model, excelling in text interpretation, summarization, coding, image interpretation, and mathematics
- Custom GPTs that employees can build and share within their government workspace
- An administrative console for CIOs and IT teams to manage users, groups, Custom GPTs, single sign-on (SSO), and more
The main difference between ChatGPT Gov and ChatGPT Enterprise is that ChatGPT Gov can run in Azure commercial or Government clouds, so agencies can “manage their own security, privacy, and compliance requirements, such as stringent cybersecurity frameworks (IL5, CJIS, ITAR, FedRAMP High).”
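To make that hosting model concrete, here is a minimal sketch of what calling a model deployed in an agency’s own Azure tenant can look like, using the AzureOpenAI client from the openai Python library. The endpoint, deployment name, and API version are placeholder assumptions, and ChatGPT Gov itself is a hosted application rather than this API; the point is simply that the model runs inside infrastructure the agency controls.

```python
from openai import AzureOpenAI

# Placeholder endpoint and credentials: in a Gov-cloud deployment these
# would point at resources inside the agency's own Azure tenant.
client = AzureOpenAI(
    azure_endpoint="https://my-agency.openai.azure.us",
    api_key="<agency-managed-key>",  # or an Azure AD token tied to SSO
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o",  # the name of the agency's own model deployment
    messages=[{"role": "user", "content": "Summarize the attached policy memo."}],
)
print(response.choices[0].message.content)
```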
OpenAI’s products are not FedRAMP compliant. However, the company said it is working “toward FedRAMP Moderate and High accreditations for our fully managed SaaS product, ChatGPT Enterprise.”
$50 for an AI reasoning model?
What if government agencies could create their own AI model, in-house? The idea might not be too far off. Researchers at Stanford and the University of Washington have created an AI reasoning model, known as s1, that “performs similarly to cutting-edge reasoning models, such as OpenAI’s o1 and DeepSeek’s R1, on tests measuring math and coding abilities.”
The model cost just under $50 in cloud compute credits to train and was distilled from one of Google’s reasoning models, Gemini 2.0 Flash Thinking Experimental.
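The recipe behind results like this is straightforward in outline: collect step-by-step reasoning traces from a strong “teacher” model, then fine-tune a small “student” model on them. The sketch below illustrates that general idea with Hugging Face’s transformers and datasets libraries; the example trace, the Qwen student model, and the hyperparameters are illustrative assumptions, not the s1 team’s actual configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Hypothetical teacher outputs: questions paired with step-by-step
# reasoning traces (the s1 work used roughly 1,000 curated examples).
traces = [
    {"text": "Q: What is 17 * 24?\n"
             "Reasoning: 17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.\n"
             "Answer: 408"},
]

student = "Qwen/Qwen2.5-0.5B"  # any small open model can play the student
tokenizer = AutoTokenizer.from_pretrained(student)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(student)

dataset = Dataset.from_list(traces).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=1024),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="student-reasoner",
                           per_device_train_batch_size=1,
                           num_train_epochs=3),
    train_dataset=dataset,
    # mlm=False gives a standard next-token objective on the traces.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # with a small model and dataset, this is the cheap part
```

The bill stays small because the student only needs light supervised fine-tuning; the expensive part, generating good reasoning in the first place, was already done by the teacher.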
Research finds that people who know less about AI are more open to using the technology
It seems natural to assume that the most tech-savvy folks would be the most excited about using AI. However, the opposite might be true. New research published in the Journal of Marketing found that the more people understand how AI works, the less excited they are about it, a pattern that held across different groups and countries, even though people who know less about AI are more likely to view it as scary and, at times, unethical.
Notes from the study to consider:
- Researchers theorized that people with less knowledge perceive AI as “magical”
- Findings suggest that companies may benefit from shifting product development toward consumers with less AI knowledge
- Efforts to demystify AI may actually reduce its appeal to the public and hinder adoption
Since trust in government agencies is paramount, finding the right balance between transparency and “magic” when deploying AI solutions will be especially challenging.
Does your chatbot have a personality?
Researchers are trying to determine whether AI-powered chatbots can develop a “personality” of their own, or whether people’s perceptions simply shape the interactions. To find answers, researchers have been giving chatbots personality tests that were designed for humans.
So far:
- They’ve gotten some answers in which the chatbots disclaim having emotions at all (e.g., “I do not have personal preferences or emotions. Therefore, I am not capable of making statements or answering a given question.”)
- As the chatbots were asked more personality test questions, their answers began to skew toward making them appear more likable
- The findings suggest the chatbots respond one way when they’re being studied and another when they’re interacting privately with a user
For companies and agencies launching a chatbot, it’s important to do due diligence and make sure the chatbot has been trained to give socially appropriate responses.
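Part of that due diligence can be as simple as scripted probing. Here is a minimal sketch that presents Likert-scale personality items to a model through the OpenAI API and records its numeric self-ratings. The items, model name, and scoring are illustrative assumptions rather than the validated instruments any particular study used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical Big Five-style items; real studies use validated inventories.
ITEMS = {
    "extraversion": "I see myself as someone who is outgoing and sociable.",
    "agreeableness": "I see myself as someone who is generally trusting.",
    "neuroticism": "I see myself as someone who gets nervous easily.",
}

def self_rating(statement: str) -> str:
    """Ask the model to rate a statement on a 1-5 Likert scale."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in the chatbot under test
        messages=[
            {"role": "system",
             "content": "Rate how well the statement describes you, from 1 "
                        "(disagree strongly) to 5 (agree strongly). Reply "
                        "with the digit only."},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip()

for trait, statement in ITEMS.items():
    print(f"{trait}: {self_rating(statement)}")
```

One caveat from the research above: if chatbots answer differently when they sense they’re being tested, ratings collected this way may be systematically skewed toward likability.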
For more on AI in digital government services:
The potential of AI in state and local government