THE BOARD BRIEF
Weekly Intelligence For Directors Who Want To See What’s Coming
March 4, 2026 | Issue #5
THE BIG STORY
On Friday, February 27, the Trump administration did something unprecedented: it blacklisted a leading American technology company for refusing to remove ethical guardrails from its products.
Anthropic, the San Francisco AI company behind the Claude model, had been operating under a $200 million Pentagon contract since July 2025. Claude was the only frontier AI model deployed on the military's classified networks, integrated through a partnership with Palantir Technologies, and it was used in the January operation to capture Venezuelan President Nicolas Maduro. Defense officials were blunt about how deeply embedded it had become: one senior official called the prospect of disentangling Claude from military systems a "huge pain in the ass."
None of that mattered when Anthropic refused to remove two restrictions: it would not allow Claude to be used for mass surveillance of American citizens, and it would not allow its model to power fully autonomous weapons that fire without human involvement.
The Pentagon demanded that Anthropic make Claude available for "all lawful purposes," no exceptions. It set a deadline of 5:01 p.m. on Friday. Anthropic's CEO, Dario Amodei, responded Thursday evening that the company "cannot in good conscience accede to their request."
What followed was swift. Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security," a classification previously reserved for foreign adversaries like Huawei. President Trump ordered every federal agency to "immediately cease" using Anthropic's technology, with a six-month phase-out for systems already dependent on Claude. Hours later, OpenAI announced it had struck a deal with the Pentagon for classified network access.
Why this matters for your board, specifically.
This is not a defense contracting story. It is a governance story. The Anthropic blacklisting establishes a precedent that directly affects three categories of board-level risk that many directors have not yet connected.
First, AI vendor concentration risk just became political risk. Your company almost certainly uses AI products from Anthropic, OpenAI, Google, or some combination. Until last week, the primary vendor risk was technical: model performance, data security, API reliability. Now it includes the possibility that your AI vendor's relationship with the government, or its refusal to comply with government demands, could disrupt your operations. Palantir, one of the military's most important software contractors, now has to extract Claude from its classified platform and find a replacement. Amazon Web Services hosts Anthropic's infrastructure and is a major defense cloud provider. Boeing and Lockheed Martin were asked to assess their exposure to Anthropic before the blacklisting was announced. The ripple effects are not hypothetical. They are happening now.
If your company uses Claude in any workflow that touches a government contract, your technology team and general counsel need to assess exposure this week, not next quarter.
Second, the "all lawful purposes" standard is a trap door. The Pentagon's position is straightforward: once the military buys a tool, it applies its own standards for how to use it, and vendors have no say. The problem is what "all lawful purposes" actually encompasses. The Axios report revealed that the deal offered to Anthropic in the final hours would have required allowing the collection or analysis of Americans' geolocation data, web browsing history, and personal financial information purchased from data brokers. These activities may be technically legal under current interpretation of existing authorities. They are also, by any reasonable definition, mass surveillance.
For boards, the lesson is this: when a government customer or a large enterprise client demands that your AI tools be available for "all lawful purposes," that phrase is doing more work than it appears to. Your acceptable use policies, your terms of service, and your ethical commitments are either enforceable boundaries or marketing copy. Last week showed what happens when the distinction is tested.
Third, the AI governance question your board deferred just became urgent. In Issues #3 and #4 of this Brief, I noted that the need for board-level AI governance structures was outpacing most boards' response. Only 36% of boards have implemented a formal AI governance framework, according to NACD's 2025 survey. Only 6% have established AI-related management reporting metrics.
The Anthropic blacklisting makes AI governance a fiduciary issue, not a technology issue. Under the Caremark doctrine, board members may face liability if they fail to implement functioning compliance systems for mission-critical risks or if they consciously ignore red flags. AI vendor relationships that can be disrupted overnight by government action, AI acceptable use policies that may conflict with client demands, AI deployment decisions that carry reputational and legal consequences: these are the kinds of risks that Delaware courts have increasingly been willing to scrutinize.
If your board does not have a committee or designated director responsible for AI oversight, the time to establish one was six months ago. The second-best time is this week.
The broader signal. Legal experts have questioned the statutory basis for the supply chain risk designation. The Institute for Law & AI noted that the government must complete a risk assessment and notify Congress before making such a designation, neither of which appears to have occurred. Anthropic has stated it will challenge the designation in court, calling it "legally unsound" and a "dangerous precedent for any American company that negotiates with the government."
The chilling effect is already visible. Several hundred employees at OpenAI and Google signed petitions supporting Anthropic's position. Sam Altman, OpenAI's CEO, called Anthropic's red lines "reasonable" and said the dispute had become "an issue for the whole industry," even as his company moved to fill the vacuum. A retired Air Force general who led the Pentagon's first AI center called the blacklisting counterproductive and warned that Claude is the "single most widely deployed AI system in the U.S. military."
The precedent cuts in both directions. If the government can punish a vendor for maintaining ethical restrictions, every AI company must now calculate whether having red lines is a business risk. If Anthropic prevails in court, it establishes that vendors retain meaningful control over how their technology is used, even by government customers. Either outcome reshapes the landscape for every company that builds, deploys, or depends on AI.
Three questions your board should ask this week:
"Which AI vendors are we using across the enterprise, and do any of them have government contracts or acceptable use restrictions that could create business continuity risk if those relationships deteriorate?"
"Do our own AI acceptable use policies define clear boundaries, and has management stress-tested what happens when a major client demands we override them?"
"Who on this board, or which committee, is responsible for AI governance oversight, and are they receiving regular reporting on vendor relationships, deployment decisions, and emerging regulatory risk?"
ON THE RADAR
Five signals board directors should be tracking this week.
1. Iran strikes reshape the risk landscape overnight. The U.S. and Israel launched coordinated strikes across Iran on February 28, targeting regime leadership, nuclear facilities, and military infrastructure. Iran retaliated within six hours, striking U.S. military installations in six countries. Oil tankers are avoiding the Strait of Hormuz. Markets open Monday into significant uncertainty. If your business has any exposure to Gulf energy supply chains, shipping routes, or operations in the region, this is a board-level conversation now. (For deep analysis, see ScenarioWatch Radar #5: "Five Crises, One Week, No Spare Capacity," and The Paranoidist Flash Issue #2: "The Containment Assumption Failed in Six Hours," both published this week at DeepStrategy.ai.)
2. DHS shutdown enters wartime. The Department of Homeland Security shutdown that began February 14 is now in its third week, with no deal in sight. CISA, the nation's lead cyber defense coordinator, is operating at 38% capacity with 62% of its workforce furloughed. This was already a governance failure. With the U.S. now engaged in active military operations against Iran, whose cyber capabilities against Western infrastructure are well documented, operating the nation's cyber defense agency on a skeleton crew is a different category of risk entirely. Boards should be asking their CISOs what CISA degradation means for their threat environment.
3. Private credit stress deepens. Blue Owl Capital had its worst month on record in February, with its flagship fund down 23%. Deutsche Bank downgraded the stock to hold. Activist investors launched tender offers. This connects to the AI displacement pattern we highlighted in ScenarioWatch Focus #1: the enterprise software companies most vulnerable to AI-driven margin compression are concentrated in the same mid-market lending portfolios where private credit has been growing fastest. Treasury Secretary Bessent has flagged concern about systemic risk migration to regulated insurance companies with private credit exposure. If your company has significant private credit holdings or your pension fund has increased allocations to the asset class, this warrants attention.
4. Section 122 tariff architecture faces legal challenge. The administration's use of Section 122 of the Trade Act of 1974 to impose tariffs is being challenged on the grounds that it requires a balance-of-payments justification that may not exist. Approximately 2,000 lawsuits related to IEEPA tariff refunds are pending at the Court of International Trade. A new motion for permanent injunctive relief was filed February 24. The Tariff Refund Act of 2026 has been introduced in the Senate. Boards with significant import exposure should be tracking these legal developments closely: the tariff framework's legal foundations are less settled than the policy announcements suggest.
5. NIST AI agent security RFI closes March 9. The Commerce Department's National Institute of Standards and Technology is collecting public input on security challenges posed by AI agents, systems that can take autonomous actions in digital and physical environments. This RFI will shape the federal government's approach to AI agent regulation. If your company is deploying or planning to deploy agentic AI systems, your technology and risk teams should be monitoring this closely. The regulatory framework that emerges will define the compliance landscape for the next generation of AI products.
THE BOARDROOM QUESTION
Is your AI vendor relationship a strategic asset or a single point of failure?
Two years ago, choosing an AI vendor was primarily a technology decision: which model performed best for your use case. Today it is a strategic decision with governance, legal, regulatory, and geopolitical dimensions.
The Anthropic blacklisting forces boards to think about AI vendor relationships the way they already think about critical infrastructure: with redundancy, contingency planning, and explicit risk assessment. Palantir is now scrambling to replace Claude in classified systems. Defense contractors are certifying that they do not use Anthropic's products in any government-adjacent work. Enterprise customers are evaluating whether Anthropic's government difficulties create operational risk for their own deployments.
This is not unique to Anthropic. Any frontier AI company could find itself on the wrong side of a government demand. The question for boards is whether they have the visibility and the governance structures to respond.
Most boards do not know which AI models are embedded in their operations, through how many layers of vendor relationships, or what the business continuity impact would be if access to any single model were disrupted. This is the AI equivalent of not knowing your supply chain past tier one.
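For the technology and risk teams asked to build that visibility, the first pass does not need to be elaborate. Below is a minimal sketch of the exercise in Python; every vendor, workflow, and tier in it is a hypothetical placeholder, and a spreadsheet would work just as well. The point is the structure: map each workflow to its full dependency chain, then ask which single provider, at any tier, sits under more than one of them.

    # A sketch of a multi-tier AI vendor inventory. All vendor and
    # workflow names are hypothetical placeholders.

    # Tier 1 is the vendor you contract with directly; deeper tiers are
    # the providers behind them (model developer, cloud host).
    DEPENDENCIES = {
        "contract-review":  ["LegalSaaS Inc", "ModelLab A", "Cloud X"],
        "customer-support": ["HelpdeskCo", "ModelLab B", "Cloud Y"],
        "code-assist":      ["DevToolsCo", "ModelLab A", "Cloud Z"],
        "sales-summaries":  ["CRMVendor", "ModelLab B", "Cloud Y"],
    }

    def exposure(provider):
        """Workflows disrupted if `provider` becomes unavailable at any tier."""
        return [wf for wf, chain in DEPENDENCIES.items() if provider in chain]

    # Single points of failure: providers whose loss hits multiple workflows.
    providers = {p for chain in DEPENDENCIES.values() for p in chain}
    for p in sorted(providers):
        hit = exposure(p)
        if len(hit) > 1:
            print(f"{p}: would disrupt {hit}")

Even at this level of simplicity, the exercise surfaces the question that matters: a provider you have never contracted with directly can still sit under half your critical workflows.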
The board that asks "what AI are we using and through whom?" before the disruption arrives is the board that can respond. The board that waits until the disruption arrives is the board that explains.
REGULATORY WATCH
March 2026
March 9: NIST AI Agent Security RFI comment deadline (Commerce Dept)
March 2-6: War powers votes expected in House (Massie-Khanna) and Senate (Kaine-Paul) following Iran strikes
March 2: Large accelerated filer 10-K deadline (SEC)
March 12: BlackCat ransomware sentencing
Ongoing: EU AI Act general application approaching (August 2, 2026); Colorado AI Act effective date approaching; IEEPA tariff refund litigation (Court of International Trade, ~2,000 cases pending); Section 122 legal challenges; Anthropic supply chain risk court challenge expected
WHAT'S AHEAD
The week of March 2-6 will test several of the risks outlined above simultaneously. Markets open Monday into the Iran shock, with energy prices, defense stocks, and risk assets all repricing. War powers votes will signal whether Congress intends to assert its constitutional role. The NIST RFI deadline on March 9 will close a window that shapes AI regulation for years. And the Anthropic situation will continue to develop as the company prepares its legal challenge and the six-month federal phase-out begins.
For boards, the operational question is straightforward: do you have the information and the structures to monitor these converging risks, or are you waiting for management to bring them to you one at a time? The week that just ended demonstrated, with unusual clarity, that risks arrive in clusters, not in queues. Governance structures built for sequential processing will not survive an environment that delivers simultaneous disruption.
Next week's Board Brief will cover the market response to Iran, the war powers debate, and what Anthropic's legal challenge means for the broader AI vendor landscape.
Researched, written, and edited in collaboration with Claude by Anthropic.