The following is a guest post and opinion from Ahmad Shadid, founder of O.xyz.
Under the thin pretext of efficiency, the Department of Government Efficiency (DOGE) is gutting the federal workforce. An independent report suggests that DOGE drove around 222,000 job cuts in March alone. The cuts hit hardest in areas where the US can't afford to fall behind – artificial intelligence and semiconductor development.
The bigger problem goes beyond the lost workforce: Musk's DOGE is reportedly using artificial intelligence to snoop on federal employees' communications, hunting for anything it deems disloyal. It's already creeping through the EPA.
DOGE's AI-first push to shrink federal agencies feels like Silicon Valley gone rogue: grab the data, automate the functions, and rush to justify cuts with half-baked tools like GSA's "intern-level" chatbot. That's reckless.
Additionally, according to the report, DOGE's "technicians" are deploying Musk's Grok AI to monitor Environmental Protection Agency employees as part of the government's cost-cutting plans.
Federal workers, long accustomed to transparency in their email under public records laws, now face hyperintelligent tools that can analyze their every word.
How can federal employees trust a system in which AI surveillance is paired with mass layoffs? Is the US quietly drifting toward a surveillance dystopia, with artificial intelligence amplifying the threat?
AI-driven monitoring
Are AI models trained on government data even reliable? Using AI to navigate complex bureaucracy also invites classic pitfalls – bias among them – risks that GSA's own help page flags, yet with no clear enforcement behind the warnings.
Concentrating ever more information inside AI models poses an escalating threat to privacy. Musk and DOGE may also be running afoul of the Privacy Act of 1974, which was passed in the wake of the Watergate scandal to curb the misuse of government-held data.
Under that law, no official – not even a special government employee – may access agency records without proper authorization. Right now, DOGE appears to be violating privacy law in the name of efficiency. Is the drive for government efficiency worth putting Americans' privacy at risk?
Surveillance is no longer just about cameras and keywords. It's about who processes the signals, who owns the models, and who matters. Without strong public governance, this path ends with corporations controlling the infrastructure that shapes how government operates – a dangerous precedent. Public trust in AI weakens when people believe decisions are made by opaque systems outside democratic control. The federal government is supposed to set standards, not outsource them.
What is at risk?
The National Science Foundation (NSF) recently cut more than 150 employees, and internal reports suggest even deeper cuts are coming. NSF funds critical AI and semiconductor research across universities and public institutions – programs that support everything from foundational machine-learning models to innovations in chip architecture. The White House is also proposing a two-thirds cut to NSF's budget. That would wipe out the very foundation of America's competitiveness in AI.
The National Institute of Standards and Technology (NIST) faces similar damage. Nearly 500 NIST employees are on the chopping block, including most of the teams responsible for the CHIPS Act incentive programs and R&D strategy. NIST also runs the US AI Safety Institute and created the AI Risk Management Framework.
Is DOGE feeding confidential public data to the private sector?
DOGE's involvement also raises deeper concerns about confidentiality. The department has quietly gained sweeping access to federal records and agency data sets. Reports suggest AI tools are combing through this data to identify functions that can be automated. In effect, the administration is letting private actors process sensitive information about government operations, public services, and regulatory workflows.
That is a risk multiplier. AI systems trained on sensitive data need oversight, not just efficiency targets. The move shifts public data into private hands without clear policy guardrails, and it opens the door to biased or inaccurate systems making decisions that affect real lives. An algorithm is no substitute for accountability.
There is no transparency about what data DOGE uses, which models it deploys, or how agencies validate the output. Federal workers have been fired based on AI recommendations, yet the logic, weights, and assumptions behind those models are largely unavailable. That is a governance failure.
What comes next?
This surveillance has no rules, no oversight, and not even basic transparency. And when artificial intelligence is used to monitor words like "loyalty" and "diversity," we are not streamlining government.
Federal workers shouldn't have to wonder whether they are being watched as they do their jobs, or whether they said the wrong thing in a meeting. This underscores the need for better, more trustworthy AI models – ones built to meet the specific challenges and standards of public service.