OpenAI published a 13-page policy paper this week arguing that superintelligence demands a new AI governance policy from governments — one as sweeping as the New Deal. Sam Altman compared himself to FDR in an Axios interview. Critics called it "regulatory nihilism" dressed in progressive language.
Both readings are partly right. And neither one changes what your team should do this quarter.
The most important observation came from outside OpenAI. Lucia Velasco, senior economist at the Inter-American Development Bank and former head of AI policy at the United Nations, noted that governments are treating AI as a technology problem when it is actually a structural economic shift. That gap is real. It is not closing fast. And for small teams running AI tools in production today, that gap is your AI governance policy problem to solve — not Washington's.
Key Takeaways
- OpenAI's "Industrial Policy for the Intelligence Age" proposes sweeping social changes — public wealth funds, shorter workweeks, AI literacy programs — framed as preparation for superintelligence.
- Critics argue the paper provides cover for blocking near-term state regulation while gesturing at hypothetical federal solutions.
- Even critics agree that federal AI governance policy is years behind AI deployment reality.
- The governance gap the paper acknowledges is your problem to solve internally, now.
- Small teams can act without waiting: build an incident log, audit your vendors quarterly, and write your AI governance policy before you are required to.
- None of this requires a legal team or a federal mandate.
Summary
On April 6, 2026, OpenAI's global affairs team published a 13-page paper called "Industrial Policy for the Intelligence Age." It arrived the same day The New Yorker published a year-and-a-half-long investigation into CEO Sam Altman's conduct on AI safety — a coincidence that critics noted loudly.
The paper frames superintelligence as a disruption on the order of industrialization. It argues that societies need proactive policy — not reactive regulation — to distribute its benefits fairly. The proposed AI governance policy ideas include: sovereign wealth-style public AI funds, universal basic income pilots, shorter workweeks, AI literacy programs, infrastructure investment, and democratic oversight mechanisms.
OpenAI positioned this as a "starting point for discussion," explicitly inviting others to refine or challenge the ideas. Altman called it his version of the New Deal.
Soribel Feliz, former senior AI policy advisor for the U.S. Senate, gave the most grounded assessment: "The acknowledgment that U.S. institutions and safety nets are falling behind AI deployment is correct, and the conversation needs to happen at this level." But she added that nine Senate AI policy forums between 2023 and 2024 discussed virtually identical proposals — and most went nowhere.
What OpenAI's Industrial Policy Paper Proposes
The paper organizes around three themes: distributing the gains of AI broadly, building infrastructure to develop and deploy it safely, and protecting workers from displacement.
On distribution, it endorses public wealth funds seeded from AI productivity gains, progressive tax structures designed to handle sudden wealth concentration, and democratized access to AI tools for small businesses and individuals. These are not new ideas — versions have circulated in AI governance policy circles since at least 2022 — but having them in an OpenAI document raises their profile.
On infrastructure, the paper endorses open data access for AI development, investment in energy and compute capacity, and a national AI education initiative. It also explicitly recommends government restrictions on certain high-risk AI uses — a notable concession from a company that has lobbied against some of these restrictions at the state level.
On labor, it proposes policies to shorten the standard workweek, provide income support for displaced workers, and give employees more voice in how AI is deployed in their workplaces.
Nathan Calvin, vice president at Encode AI, called this a genuine improvement over earlier OpenAI policy documents: "Some of the concrete suggestions around things like auditing or incident reporting and government restrictions on certain uses of AI are good ideas." The problem is not the ideas themselves. The problem is who is proposing them and why now.
Risks to Watch — The "Regulatory Nihilism" Argument
The timing is hard to ignore. The paper arrived the same day a major investigative piece raised questions about Altman's credibility. That alone does not invalidate the substance, but it is context worth having.
More substantive is the lobbying record. OpenAI's political arm — the Leading the Future PAC, led by global affairs head Chris Lehane and bankrolled in part by OpenAI president Greg Brockman — has actively fought state-level AI regulation. It lobbied against the New York RAISE Act, a safety and transparency bill recently signed into law. It worked against California's SB 53. It implied, during its legal dispute with Elon Musk, that critics including Encode AI were secretly Musk-funded — a charge Calvin called an intimidation tactic.
Anton Leicht of the Carnegie Endowment for International Peace wrote on X that the proposed changes are "fundamental societal changes and heavy political lifts" that won't emerge organically. "On that read, this is comms work to provide cover for regulatory nihilism."
The pattern is familiar in tech policy. Invoke the scale of disruption. Argue existing frameworks don't fit. Position yourself as the reasonable actor willing to engage on "real" solutions. Keep those solutions at the ideas-paper stage rather than enforceable law. This is not unique to OpenAI — it is the standard Silicon Valley regulatory strategy, and it has worked for two decades.
The risk for your team is not that the paper will mislead policymakers. The risk is that the pattern holds: federal AI governance policy remains unresolved for years while deployment accelerates, and teams without an internal AI governance policy get caught without controls when liability questions arrive.
Governance Goals for Small Teams in the Policy Gap
The practical consequence of this political environment is straightforward: your AI governance policy needs to exist whether or not anyone requires it.
Feliz worked on nine AI policy forums between 2023 and 2024 where nearly identical proposals were raised. "All of this was already said, all of it," she wrote. "The problem is the gap between naming the solutions and building real mechanisms to achieve them."
That gap is your starting point. Your AI governance policy goals should not be calibrated to what Washington might eventually require. They should be calibrated to what creates genuine accountability in your own operations today.
Accountability. One person on your team should be responsible for knowing what AI tools you use, on what data, and with what safeguards. This does not need to be a full-time role — it needs to be a named person with a quarterly calendar reminder.
Documentation. The basics of your AI governance policy — approved tools, data handling rules, escalation procedures for AI failures — should be in writing. A one-page document that actually gets reviewed is worth more than a 30-page policy that sits unread in a shared folder.
Incident tracking. A simple log of AI failures, hallucinations, and unexpected outputs is the foundation of any credible AI governance policy. Without it, you cannot identify patterns, demonstrate due diligence to auditors, or improve over time.
None of this requires legislation to exist. All of it becomes more valuable the moment regulation does arrive — and it will arrive eventually.
Build Your AI Governance Policy Before Rules Arrive
The teams that will navigate AI regulation well when it comes are the ones building their AI governance policy now — not because they are forced to, but because it protects the business.
Start with an AI use policy. Write down which tools are approved, which are not, and why. Define what data each tool can and cannot touch. Specify where human review is required before acting on AI output. This document does not need to be long. It needs to be specific enough that a new hire can read it and understand what they are allowed to do.
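To make "specific enough" concrete, here is a minimal sketch of such a policy captured as data rather than prose. The tool names and data categories are hypothetical placeholders, and a plain shared document works just as well; the point is that the policy answers a new hire's question mechanically.

```python
# Minimal sketch of an AI use policy as data. All tool names and data
# categories here are hypothetical examples, not recommendations.
USE_POLICY = {
    "chat-assistant": {
        "approved": True,
        "allowed_data": {"public-content", "internal-docs"},
        "human_review_before": {"external-communications"},
    },
    "code-copilot": {
        "approved": True,
        "allowed_data": {"source-code"},
        "human_review_before": {"production-code"},
    },
    "unvetted-transcriber": {
        "approved": False,  # pending vendor security review
    },
}

def check_use(tool: str, data_category: str) -> str:
    """The question a new hire asks: may I use this tool on this data?"""
    rule = USE_POLICY.get(tool)
    if rule is None or not rule.get("approved", False):
        return f"'{tool}' is not approved. Do not use it for work data."
    if data_category in rule.get("allowed_data", set()):
        return f"'{tool}' may be used on '{data_category}'."
    return f"'{data_category}' is not cleared for '{tool}'. Ask the governance contact."

print(check_use("chat-assistant", "customer-data"))
```

The default-deny behavior is the useful property: anything not explicitly approved routes to the governance contact instead of silently proceeding.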
Set a vendor review cadence. Quarterly is achievable for any team. For each AI tool you pay for, answer: what data does it see, how is it retained, and what happens in a breach? For practical guidance on exactly what to ask, the AI developer tool vendor security questionnaire covers the key questions to put to every vendor.
Build an incident log before you have an incident. The OpenAI paper recommends AI incident reporting as a federal policy mechanism. You can implement this today. A simple spreadsheet — date, tool, what happened, what action was taken — reviewed monthly by one person is sufficient. When something serious happens, the AI vendor security incident response guide gives you the full response playbook.
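For teams that prefer a file in version control over a spreadsheet, here is a minimal sketch of the same log in Python. The file path, column names, and example entry are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of the incident log described above: date, tool, what
# happened, what action was taken.
import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("ai_incident_log.csv")
FIELDS = ["date", "tool", "what_happened", "action_taken"]

def log_incident(tool: str, what_happened: str, action_taken: str) -> None:
    """Append one row, writing the header if the file is new."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "what_happened": what_happened,
            "action_taken": action_taken,
        })

# Hypothetical example entry:
log_incident(
    tool="chat-assistant",
    what_happened="Cited a nonexistent court case in a client draft",
    action_taken="Draft corrected before sending; flagged for monthly review",
)
```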
Map data access. Know which AI tools have access to what data. Customer data, employee data, financial records — each category should have a clear rule about whether AI tools can touch it and under what conditions.
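A data access map can be as small as a dictionary with one explicit rule per category. The categories, tool names, and conditions below are hypothetical; the value is that lookups default to deny.

```python
# Minimal sketch of a data access map: one explicit rule per data
# category. Categories, tools, and conditions are hypothetical.
DATA_ACCESS_MAP = {
    "customer-data": {
        "allowed_tools": set(),  # no AI access until a DPA is signed
        "condition": "Requires a signed data processing agreement",
    },
    "employee-data": {
        "allowed_tools": set(),
        "condition": "No AI access",
    },
    "financial-records": {
        "allowed_tools": {"spreadsheet-copilot"},
        "condition": "Human review of any output before it is acted on",
    },
    "public-content": {
        "allowed_tools": {"chat-assistant", "code-copilot"},
        "condition": "None",
    },
}

def can_access(tool: str, category: str) -> bool:
    """True only if the category exists and explicitly lists the tool."""
    rule = DATA_ACCESS_MAP.get(category)
    return rule is not None and tool in rule["allowed_tools"]

print(can_access("chat-assistant", "customer-data"))  # False: default-deny
```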
An AI governance policy built today will serve as the foundation for compliance when obligations eventually arrive. Teams that have been running this practice for two years will look far stronger than teams scrambling to produce something on a deadline.
Controls Your Team Can Apply This Quarter
The concrete controls in the OpenAI paper — auditing, incident reporting, use restrictions — are exactly what a small team can implement without waiting for law. Here is what each looks like in practice.
Model and vendor inventory. List every AI tool in use, the vendor, the model version, and the primary data it touches. Update it when you add or change tools. This inventory is the foundation of any future audit.
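As one way to structure it, here is a minimal sketch of an inventory record with the fields listed above. Vendor and model names are hypothetical, and a spreadsheet serves equally well; the fields, not the format, are what matter.

```python
# Minimal sketch of a model and vendor inventory entry. Vendor and
# model names are hypothetical examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class ToolRecord:
    name: str            # internal name for the tool
    vendor: str          # who operates it
    model_version: str   # pin the version so silent changes are visible
    primary_data: str    # the main data category the tool touches
    last_reviewed: date  # updated at each quarterly check

INVENTORY = [
    ToolRecord("chat-assistant", "ExampleVendor", "model-4.1",
               "internal-docs", date(2026, 4, 1)),
    ToolRecord("code-copilot", "OtherVendor", "code-model-2",
               "source-code", date(2026, 1, 15)),
]

def overdue_reviews(as_of: date, max_age_days: int = 100) -> list[ToolRecord]:
    """Flag entries that have missed roughly a quarterly review."""
    return [t for t in INVENTORY if (as_of - t.last_reviewed).days > max_age_days]
```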
Use restrictions in writing. Define where AI is not appropriate in your workflow. Common examples include: final legal or financial decisions, external communications that represent the company without human review, or processing of sensitive personal data without a signed data processing agreement.
Output review protocol. For high-stakes AI outputs — drafts that go to clients, code that goes into production, recommendations that affect people — define a human review step. Document who is responsible for that review and what they are checking.
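The protocol can be written as a short lookup table. The output types, reviewers, and checks below are hypothetical examples; substitute whatever categories match your workflow.

```python
# Minimal sketch of an output review protocol as a lookup table.
# Output types, reviewers, and checks are hypothetical examples.
REVIEW_RULES = {
    "client-draft": {
        "reviewer": "account lead",
        "checks": "facts, tone, commitments made on the company's behalf",
    },
    "production-code": {
        "reviewer": "senior engineer",
        "checks": "correctness, security, license provenance",
    },
    "people-decision": {
        "reviewer": "manager plus HR",
        "checks": "fairness and accuracy of the underlying inputs",
    },
}

def review_step(output_type: str) -> str:
    """Return the required review step, defaulting to escalation."""
    rule = REVIEW_RULES.get(output_type)
    if rule is None:
        return "No rule on file: escalate to the governance contact."
    return f"Review by {rule['reviewer']}; checking {rule['checks']}."
```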
Quarterly vendor check. For each vendor, spend a few minutes reviewing: has anything changed in their terms of service, data retention policy, or security posture? Have any incidents been reported? This is not an audit — it is a roughly 30-minute check that most teams can complete in one meeting.
Visible policy communication. Your team should know what the AI governance policy is and why it exists. The hidden AI features governance gap is a persistent operational risk: employees use unapproved tools because no one ever told them a policy existed. A brief team meeting each quarter — covering approved tools, the incident log, and what changed — is sufficient.
AI Governance Policy — Implementation Steps
Building an AI governance policy from scratch sounds large. It is not. Here is how to do it in thirty days.
Week 1 — Inventory. Spend two hours listing every AI tool your team uses. Include tools individual team members use on their own, not just company-licensed software. For each: what is it, who uses it, what data does it see, who is the vendor? This is your starting point. You will update it quarterly.
Week 1 — Assign ownership. Name one person as the AI governance point of contact. This does not need to be a compliance professional. It can be an engineer, an operations lead, or a founder. What matters is that one person has the responsibility and a calendar reminder.
Week 2 — Write the AI use policy. One page. Approved tools. Prohibited uses. Data handling rules. Human review requirements. Incident reporting procedure. Keep it short enough that someone will read it. A good AI governance policy gets used; a long one does not.
Week 2 — Build the incident log. Create a simple shared document: date, tool, description of the issue, action taken, follow-up needed. Share it with the team and explain what it is for. Teams that have a log report issues; teams that do not have one tend not to.
Week 3 — Vendor review. For each tool in your inventory, spend 15 minutes reviewing their current data processing agreement, privacy policy, and any known incidents. Flag anything that has changed since you started using the tool.
Week 3 — Data access map. For each category of data your team handles — customer data, employee data, financial records, sensitive communications — write one line: which AI tools can access this, under what conditions, with what human oversight.
Week 4 — Team communication. Share the AI governance policy with your team. Run a 30-minute meeting to walk through it. Answer questions. The goal is that everyone knows what the rules are and why they exist — not that you have produced a document that satisfies a hypothetical auditor.
Week 4 — Set the quarterly cadence. Put four review meetings on the calendar for the year. Each covers: what changed in the tool inventory, any incidents in the log, any vendor policy updates, and any changes to the AI governance policy. A policy that is never reviewed stops being a policy.
The whole process takes ten to fifteen hours of effort spread across one month. After that, it is a quarterly review. That is achievable for any team, regardless of size or legal resources.
Action Checklist
Use this to audit your current AI governance posture:
- AI tool inventory is documented and current
- Each tool has a named data category — what data it can and cannot see
- A written AI governance policy exists and is accessible to all team members
- Prohibited uses are specified, not just approved uses
- Human review requirements are defined for high-stakes outputs
- An incident log exists and has been shared with the team
- One person is named as responsible for quarterly governance review
- Vendor data processing agreements are on file for each AI tool
- Quarterly review meetings are on the calendar for the full year
- Team members know the reporting procedure when AI behaves unexpectedly
A team with all ten boxes checked has a stronger AI governance posture than most organizations of any size. Most teams starting from scratch can reach six to eight within thirty days.
Frequently Asked Questions
What is OpenAI's Industrial Policy for the Intelligence Age?
It is a 13-page policy paper published by OpenAI in April 2026 outlining proposed government responses to superintelligent AI. It covers wealth redistribution, shorter workweeks, public AI infrastructure, and AI literacy programs — comparing the scale of disruption to the New Deal era. The paper was co-authored by OpenAI's global affairs team and released as a starting point for discussion, not a final policy position.
What does "regulatory nihilism" mean in the context of AI policy?
Critics use the term to describe a pattern where tech companies invoke the scale of AI's impact to argue that existing regulations do not fit, while simultaneously lobbying against new AI-specific legislation. The result is a regulatory vacuum that benefits incumbents. Anton Leicht of the Carnegie Endowment used it specifically to describe OpenAI's paper: proposing sweeping societal changes as cover for blocking near-term, enforceable rules.
Does the OpenAI policy paper affect small businesses directly?
No — the paper targets federal policymakers, not businesses. It proposes broad societal changes, not immediate compliance requirements. However, the AI governance policy gap it acknowledges is real and affects every team deploying AI tools today. Federal action is years away. An internal AI governance policy is something you can build this month.
What AI governance policy should small teams implement first?
Start with three basics: a tool inventory (what AI you use and on what data), a written AI use policy (approved uses, prohibited uses, human review rules), and an incident log (a record of AI failures and unexpected outputs). These three documents form the foundation of any serious AI governance policy, and building all three takes less than two weeks.
How do I build an AI governance policy without a legal team?
Focus on documentation over legal precision. A one-page AI governance policy that lists approved tools, data handling rules, and human review requirements is more useful than a 30-page document no one reads. Review it quarterly and update it when you adopt new tools. Legal precision can come later — when you have the resources and when regulation makes it necessary. What matters now is that the AI governance policy exists and your team knows what it says.
References
- Sam Altman says AI superintelligence is so big that we need a 'New Deal' — Fortune, April 6, 2026
- OpenAI, "Industrial Policy for the Intelligence Age," April 2026
- Carnegie Endowment for International Peace — Anton Leicht commentary on OpenAI policy paper, April 2026
- Inter-American Development Bank — Lucia Velasco commentary, April 2026
- Federal AI preemption and what state law compliance means for your team in 2026
