***www.GovBench.ai 🇺🇸** | AI + Government Alignment | Nonprofit 501(c)(6)*
Audience: Federal, State, and Local CIOs/CISOs
Threat Level: URGENT / HIGH
Action Required
- Instruct network security managers to immediately block access to China-hosted LLM services (domain list below).
- Conduct a comprehensive review of past network activity, including user counts and data egress volumes.
- Allow exceptions only for designated research enclaves operating under strict controls.
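The retrospective review step can be partially automated. Below is a minimal sketch that summarizes distinct users and data egress volumes per blocked domain from a web-proxy log. It assumes a simplified CSV log format (`timestamp,user,domain,bytes_out`) and placeholder domain names; real proxy platforms (Squid, Zscaler, etc.) have their own log schemas and would need their own parsers.

```python
# Sketch: summarize past egress to blocked LLM domains from a proxy log.
# Log format (timestamp,user,domain,bytes_out) and the domain list below
# are illustrative assumptions, not the advisory's official block list.
import csv
from collections import defaultdict

# Placeholder domains -- substitute your agency's maintained block list.
BLOCKED_DOMAINS = {"chat.deepseek.com", "api.deepseek.com"}

def summarize_egress(log_path):
    users = defaultdict(set)       # domain -> set of distinct users
    bytes_out = defaultdict(int)   # domain -> total bytes uploaded
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in BLOCKED_DOMAINS:
                users[domain].add(row["user"])
                bytes_out[domain] += int(row["bytes_out"])
    return {d: {"users": len(users[d]), "bytes_out": bytes_out[d]}
            for d in users}
```

Reporting user counts and egress volumes per domain, as the action item above requests, gives leadership a concrete measure of exposure rather than a binary "traffic was observed" finding.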
Key Takeaway
<aside>
Government employees are almost certainly leaking sensitive data into Chinese-hosted LLMs on a daily basis. PRC law compels those providers to cooperate with state intelligence. Blocking access is the only way to stop this leakage immediately.
</aside>
Rationale
- High performance, high risk. PRC-origin large language models (LLMs) such as DeepSeek have demonstrated stronger benchmark performance than nearly all Western-origin open-weight LLMs, even on U.S. government and military tasks, making them attractive to employees.
- Observed government usage. Numerous agencies have reported users connecting to PRC-hosted LLMs. For example, the Pentagon scrambled for months to block DeepSeek after detecting traffic to servers in China (Bloomberg, January 30, 2025).
- Legal compulsion. Under Article 7 of China’s National Intelligence Law, “any organization or citizen shall support, assist, and cooperate with national intelligence work.” This creates a structural obligation for Chinese LLM providers to disclose or share data with PRC authorities.
- PRC values alignment requirements. The PRC's Generative AI Measures require adherence to "socialist core values," security assessments, and algorithm filings—obligations that can shape content moderation, refusal behavior, and narrative sourcing.
- Telecom integration risk. Some LLM services have direct integrations with state-owned telecom providers such as China Mobile, creating additional vectors for data exposure (ABC News, 2025).
- Western LLM services asymmetry. American-hosted commercial LLM services (e.g., OpenAI, Google, Anthropic) pose their own risks but are subject to U.S. law and oversight. In the event of a leak of sensitive government information, legal tools (such as subpoenas, CISA authorities, or EO-mandated reporting) can compel compliance. No such mechanisms exist with PRC-hosted services.
- **Mitigation by default.** The most effective way to minimize inadvertent exfiltration is to:
    - Block access to Chinese LLM domains & endpoints.
    - Promote authorized alternatives across your workforce. A sample list is provided below.
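One common way to implement the domain block is a DNS sinkhole. The sketch below generates a dnsmasq blocklist fragment from a domain list; dnsmasq's `address=/domain/ip` form matches the domain and all of its subdomains. The domains shown are placeholders, not an official list, and agencies using other resolvers (BIND RPZ, Windows DNS policies, enterprise web proxies) would express the same list in those tools' formats.

```python
# Sketch: emit a dnsmasq config fragment that sinkholes each listed
# domain (and its subdomains) to a non-routable address.
# The domain list is a placeholder, not the advisory's official list.
BLOCKED_DOMAINS = ["chat.deepseek.com", "api.deepseek.com"]

def dnsmasq_blocklist(domains, sinkhole="0.0.0.0"):
    # address=/example.com/0.0.0.0 answers example.com and *.example.com
    # with the sinkhole address instead of resolving upstream.
    return "\n".join(f"address=/{d}/{sinkhole}" for d in sorted(set(domains)))
```

DNS-level blocking is a first layer only; pair it with egress filtering at the firewall or proxy, since clients can bypass local DNS with hardcoded IPs or encrypted DNS (DoH/DoT).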
Authorized Alternatives (as of Aug 2025)
List not exhaustive; update quarterly.
| AI Chatbot Product | Authorization Level |
| --- | --- |
| Google Gemini | FedRAMP High, IL-5 |
| Microsoft Copilot | IL-5 (Sept. 2025) |
| DHSChat | CUI / IL-5 |
| NIPRGPT | IL-5 |
| CamoGPT | IL-5+6 |