
The New Federal AI Rules and What They Mean for Your Contracts

Rachel Phillips · March 9, 2026

The Department of War just blacklisted one of the world's leading AI companies, signed deals with two others, and launched a platform giving 3 million military personnel access to AI tools — all in the span of a few weeks. If you are a small business owner working in government contracting, or thinking about getting into it, this matters to you. AI is no longer a buzzword floating around federal agencies. It is a regulated capability with real rules, real deadlines, and real consequences for contractors who are not paying attention. Whether you are building AI tools or simply using them in your day-to-day operations, the federal government now has specific expectations for how AI fits into the contracting process — and your cybersecurity and compliance readiness is part of that picture.

Here is what happened, what changed, and what you should be thinking about right now.

The Department of War, Anthropic, and OpenAI — What Just Happened

In September 2025, President Trump signed Executive Order 14347, officially rebranding the Department of Defense as the Department of War. The website moved to war.gov, and Secretary Pete Hegseth now goes by Secretary of War. The legal name has not been formally changed by Congress yet, but the department uses the new name in all its public communications and official correspondence.

That rebrand came alongside a much bigger shift: the Department of War is going all in on artificial intelligence.

In July 2025, AI company Anthropic became the first frontier AI provider to deploy its models on the Department of War's classified networks, under a $200 million prototype agreement. For months, Anthropic's Claude was the only advanced AI model available in that classified environment.

Then things fell apart. The Department of War wanted Anthropic to agree to let the military use Claude for any lawful purpose, with no restrictions. Anthropic pushed back on two specific points: it did not want its technology used for mass domestic surveillance of Americans, and it did not want it used in fully autonomous weapons systems. In a statement dated February 26, 2026, Anthropic said: "We cannot in good conscience accede to their request."

The response was swift. On February 27, President Trump directed all federal agencies to stop using Anthropic's technology. Secretary Hegseth then designated Anthropic a "supply chain risk to national security" — the first time the U.S. government has ever applied that label to an American company, according to NPR.

Within days, OpenAI stepped in. The company announced an agreement with the Department of War to deploy its models on classified networks — the very networks Anthropic had been removed from. OpenAI said its agreement includes guardrails: no domestic surveillance of U.S. persons, no deployment on edge devices that could enable autonomous weapons, and no use by Department of War intelligence agencies like the NSA. CEO Sam Altman later acknowledged the deal "looked opportunistic and sloppy" and amended the contract to add stronger language around surveillance protections.

As of early March 2026, Anthropic CEO Dario Amodei is reportedly back at the negotiating table with the Department of War, according to CNBC and the Financial Times.

This is not just tech industry drama. For government contractors, it is a signal that AI compliance, vendor selection, and data security are now front-and-center issues in federal procurement — and the rules are actively being written.

GenAI.mil — The Government's New AI Platform

While the Anthropic situation dominated headlines, the Department of War quietly built something significant: GenAI.mil, a centralized AI platform available to all 3 million military, civilian, and contractor employees.

GenAI.mil launched in late 2025 and hit 1 million unique users in its first two months. It is designed as a multi-vendor environment, meaning the government is not tying itself to a single AI provider. Google Cloud's Gemini was the first model onboarded, followed by xAI's Grok, and now OpenAI's ChatGPT, according to Department of War press releases on war.gov.

The platform's rapid expansion tells you something important about where the government is heading. AI is not an experiment anymore. It is becoming standard infrastructure — the same way email, cloud storage, and cybersecurity tools became non-negotiable parts of how the government operates.

For small business contractors, this means the agencies you work with are already using AI internally. They are going to expect their contractors to understand it, work alongside it, and in many cases, comply with new rules about how AI is used in contract performance.

The New Procurement Rules Contractors Need to Know

The headline-grabbing stories about Anthropic and OpenAI are dramatic, but the quieter policy changes may actually matter more to your business. Two major regulatory actions are reshaping how the government buys and manages AI — and both directly affect contractors.

OMB Memorandum M-25-22: AI Procurement Rules (Already in Effect)

On April 3, 2025, the Office of Management and Budget issued Memorandum M-25-22, titled "Driving Efficient Acquisition of Artificial Intelligence in Government." This is the current rule of the road for how federal agencies buy AI systems and services. It replaced the previous administration's guidance and has been in effect for new solicitations since September 30, 2025.

Here is what M-25-22 requires that contractors should know about:

Your government data is protected. Contracts must permanently prohibit vendors from using non-public government data to train publicly or commercially available AI models without the agency's explicit written consent. If you are an AI vendor or a contractor using AI tools that touch government data, this is a hard line.

Intellectual property rights must be clearly defined. Every AI contract must spell out who owns what — the government's data, the contractor's models, and any derived products. Agencies are required to standardize these processes across their contracts.

Agencies will test your AI before awarding contracts. The memo directs agencies to test proposed AI solutions in environments that mirror real-world conditions on agency networks before making award decisions. Vendors need to be prepared for hands-on demonstrations.

You may need to disclose AI use even when it is not required. M-25-22 warns agencies that vendors will increasingly use AI during contract performance in ways the government did not anticipate. Agencies are directed to consider requiring contractors to disclose when they are using AI, even if AI was not part of the original contract scope.

Buy American applies to AI. The memo includes a clear preference for American-developed AI products and services, consistent with Executive Order 14179.

Ongoing monitoring is mandatory. Contracts must give agencies the ability to regularly evaluate AI system performance, risks, and effectiveness — quarterly or biannually. Vendors must provide access for independent testing and cannot block agencies from sharing test results internally.

FY 2026 NDAA: A CMMC-Style Framework for AI Security

The National Defense Authorization Act for Fiscal Year 2026, signed into law in December 2025, includes Section 1513 — a provision that directs the Department of War to develop a cybersecurity framework specifically for AI systems acquired by the military.

Think of it as CMMC for AI. The framework must cover workforce risks, supply chain risks, adversarial tampering, and security monitoring. It will be built as an extension of the existing CMMC program and incorporated into DFARS — the defense contracting regulations.

The NDAA does not set a hard implementation deadline, but it requires the Department of War to submit a plan with timelines and milestones to Congress by June 16, 2026. For defense contractors, this means AI-specific security requirements are coming, and the smart move is to start preparing now rather than scrambling later.

What This Means for Small Businesses

I know what you might be thinking: "I am not an AI company, so does any of this affect me?" The honest answer is yes, and probably sooner than you expect.

If you are a defense contractor: The Anthropic supply chain risk designation means defense vendors and subcontractors may need to certify which AI tools they use in their work with the Department of War. If your team uses AI-powered tools for anything — proposal writing, data analysis, project management — you need to know where those tools come from and whether they create compliance issues.

If you use AI in contract performance: Under M-25-22, agencies can now require you to disclose AI use during contract performance, even if it was not in the original scope. Using AI to draft deliverables, analyze data, or manage workflows could trigger disclosure requirements depending on the contract.

If you are bidding on new contracts: Every solicitation issued after September 30, 2025, falls under the new M-25-22 rules. That means stricter data protections, clearer IP terms, and potentially more rigorous testing of any AI components in your proposal. Being prepared for this makes your bid stronger.

If you are not thinking about AI at all: According to Federal News Network, agencies are already favoring contractors who demonstrate AI maturity — not necessarily building AI, but showing they understand how to use it responsibly and govern it within their operations. In best-value evaluations, that kind of readiness can be the difference between winning and losing.

The opportunity is real. Federal AI spending continues to grow, with set-aside contracts under SBA programs like 8(a), HUBZone, SDVOSB, and WOSB still available for small businesses. But the compliance requirements are growing alongside the opportunities. Certifications, security frameworks, and data governance are becoming just as important as technical capability.

The Bottom Line

The government contracting landscape around AI changed dramatically in just the last few weeks — and it is still moving. Between the Department of War's vendor shakeup, the GenAI.mil platform rollout, and the procurement rules already in effect under M-25-22, contractors at every level need to understand where things stand.

The businesses that take the time to understand these new requirements — and get their compliance, certifications, and registrations in order now — are the ones that will be positioned to win when the next wave of AI-related solicitations hits. This is not a future problem. The rules are already here.

If you are a small business owner trying to make sense of how these changes affect your government contracting goals, FEDCON's advisors work with businesses every day to navigate exactly this kind of complexity — from certifications and registrations to cybersecurity readiness to understanding what agencies are actually looking for in their next round of awards. Start a conversation with our team and find out where you stand.

Ready to take the next step?

Book your free Market Assessment. A senior FEDCON advisor will review your business and show you exactly where the opportunities are.