Key Takeaways
- Anthropic's Claude Mythos Preview autonomously found thousands of zero-day vulnerabilities in major OS and browser codebases — with no human involvement after the initial prompt.
- Project Glasswing gives coalition members 90 days to patch before findings go public; over 99% of vulnerabilities are not yet patched.
- Small teams are affected indirectly: your stack runs on the same Linux kernels, browsers, and TLS libraries where vulnerabilities were found.
- The gap between vulnerability disclosure and weaponised exploit is narrowing — patch cadence and vendor accountability are now active governance requirements, not background intentions.
- Immediate actions: subscribe to security advisories, set a written patch SLA, add AI-assisted autonomous exploitation to your threat model.
Summary
In April 2026, Anthropic disclosed that its unreleased Claude Mythos Preview model had autonomously discovered and exploited zero-day vulnerabilities across every major operating system and every major browser — with no human involvement after the initial prompt.
To manage the fallout, Anthropic stood up Project Glasswing: a coalition of twelve major technology and security companies, plus over forty critical-infrastructure organisations, all receiving early access to scan their own systems before findings go public within a 90-day window.
This article explains what happened, why it matters to small teams, and what you should do before the disclosure clock runs out.
What Was Announced
Anthropic announced two things simultaneously: the Mythos Preview benchmark results and the Glasswing coalition.
The model. Claude Mythos Preview is unreleased and positioned above Opus 4.6. On SWE-Bench Pro it scored 77.8% (Opus 4.6 was 53.4%). On Terminal-Bench 2.0 it scored 82.0%. These are significant margins, not incremental ones.
The findings. When Anthropic tested Mythos Preview against real production codebases, it autonomously found and exploited novel zero-day vulnerabilities. The reported examples are technically serious:
- A 27-year-old bug in OpenBSD's TCP stack
- A 17-year-old unauthenticated remote code execution flaw in FreeBSD, exploited via a 20-gadget ROP chain — without human involvement
- JIT heap sprays escaping browser sandboxes
- Vulnerabilities in TLS and AES-GCM implementations
For comparison: given known Firefox JavaScript engine bugs, Opus 4.6 produced two working exploits. Mythos Preview produced 181.
The coalition. Project Glasswing includes AWS, Apple, Google, Microsoft, CrowdStrike, Cisco, NVIDIA, JPMorganChase, Palo Alto Networks, Broadcom, and the Linux Foundation, with more than 40 additional organisations from critical infrastructure. Anthropic committed $100 million in usage credits to let members scan their own systems, and $4 million in direct donations to the Linux Foundation and Apache Software Foundation to fund the engineers who will actually fix the bugs.
The disclosure model. All findings go public within 90 days of vendor notification — the same timeline Google Project Zero uses.
Why This Is Different From Previous AI Security Warnings
The usual narrative around AI and cybersecurity has been about augmentation: AI helps junior attackers write better phishing emails, or automates reconnaissance that would otherwise take hours. That is real but familiar.
What Glasswing describes is different in two ways.
It is autonomous. The model was not given a list of known CVEs and asked to write exploits. It found novel vulnerabilities and constructed working exploits end-to-end. The 17-year-old FreeBSD RCE with a 20-gadget ROP chain was built without a human in the loop after the first prompt. That is qualitatively different from a tool that assists a skilled attacker.
It was not trained for this. Anthropic says the cybersecurity capability is emergent — Mythos was trained on coding, reasoning, and autonomous task completion, not specifically on security research. The implication is that any frontier-class model with strong code and reasoning capabilities may have similar reach, whether or not the developer intended it.
Governance Goals
For small teams, Project Glasswing clarifies three concrete governance goals that were previously vague intentions:
- Establish a written patch SLA. Decide how quickly your team applies critical security updates — and document it. A policy that exists in writing is a commitment. A patch cadence that lives in someone's head is an intention. Given that AI zero-days can narrow the time between disclosure and exploitation, 72 hours for critical patches is a reasonable ceiling.
- Maintain an accurate vendor security register. Know which cloud providers, SaaS tools, and managed services your team depends on, and whether each has a defined security advisory and patch process. Vendors who were not part of the Glasswing coalition are patching on the same public timeline as everyone else — that is a risk input, not just a curiosity.
- Update your threat model to include AI-assisted autonomous exploitation as a category. This is not about modelling Mythos specifically. It is about acknowledging that the category exists and adjusting your assumptions about how quickly a known vulnerability becomes an exploited one.
AI Zero-Days: Risks to Watch
The second-order risks from Project Glasswing are more relevant to small teams than the first-order ones:
- Unpatched foundations. Over 99% of the vulnerabilities found are not yet patched at time of writing. Small teams running Linux-based infrastructure, mainstream browsers, and software with TLS or AES-GCM dependencies are running on those foundations.
- Widening attacker capability gap. Frontier-model access is unevenly distributed. Organisations with access to capable AI models gain an asymmetric advantage in offensive security research. The gap between well-resourced attackers and under-resourced defenders widens.
- Vendor blind spots. SaaS and cloud vendors who were not part of Glasswing have no advance notice of findings that affect their stacks. If a vendor's security posture is unclear, their patch timeline is unknown.
- Faster exploit weaponisation. The window between public vulnerability disclosure and working exploit used to be measured in days to weeks. Autonomous exploit generation compresses that window. Teams that apply patches on a monthly cadence will increasingly find themselves exposed.
- Scope creep of AI security capability. Mythos was not designed as a security tool. Its capabilities emerged from general coding and reasoning training. Future frontier models — from Anthropic or others — may have comparable reach, whether or not developers intend it.
AI Zero-Days: What It Means for Small Teams
Small teams are not the direct targets of the Glasswing findings. The model was run against major OS and browser codebases, not against your internal tools. But several second-order effects matter.
Your software stack is built on the affected foundations
Every team uses a browser. Most use Linux-based infrastructure. Many use software that depends on TLS and AES-GCM. The vulnerabilities Mythos found are in the foundations of modern software — not in niche or obscure projects. Over 99% of the thousands of vulnerabilities found are not yet patched.
Until the 90-day window closes and patches are released, your exposure to those unpatched bugs is real. Attackers who independently discover the same bugs — or who get access to leaked findings — will have working exploit templates.
Patch cadence matters more now
Most small teams apply security updates on a loose schedule: when a tool prompts, when an engineer has time, or when a breach makes the news. That approach made sense when the attack surface was relatively stable. It makes less sense when autonomous exploit generation means that disclosure triggers a race between defenders and attackers.
What to change: Apply security updates to operating systems, browsers, runtimes, and TLS libraries within days of release, not weeks. This is especially important in the 90-day window following any Glasswing-related disclosure.
Vendor security due diligence is now load-bearing
If you use SaaS tools, cloud services, or managed infrastructure, the security of those vendors just became more important. Vendors who were part of the Glasswing coalition had early access. Vendors who were not are patching on the same public timeline as everyone else.
When you evaluate a vendor — or review an existing one — ask directly whether they have a security advisory subscription, a defined patch SLA, and whether they track disclosures from coalitions like Glasswing. Our guide on AI governance for small teams covers the baseline vendor questions in a broader governance context.
Your threat model needs a new row
If your AI policy baseline includes a threat landscape section, it probably lists things like: phishing, ransomware, supply chain compromise, and insider misuse. Add a row for AI-assisted autonomous exploitation. The characteristics of this threat class are:
- High technical sophistication with lower attacker skill requirement
- Faster time-to-exploit after vulnerability disclosure
- Novel vulnerability discovery, not just known-CVE exploitation
- Likely to widen the gap between attackers who have access to frontier models and those who do not
You do not need to model Mythos specifically. You need to recognise that the category is real and update your assumptions about how quickly a disclosed vulnerability becomes an exploited one.
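If your threat model lives in a structured document or register, the new row can be captured as data rather than prose. A minimal sketch in Python — the field names here are illustrative, not a standard schema; match whatever format your risk register already uses:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative schema -- adapt field names to your existing register.
@dataclass
class ThreatModelRow:
    category: str
    characteristics: list
    source: str
    added: date

ai_exploitation = ThreatModelRow(
    category="AI-assisted autonomous exploitation",
    characteristics=[
        "high sophistication, low attacker skill requirement",
        "faster time-to-exploit after disclosure",
        "novel vulnerability discovery, not just known-CVE exploitation",
    ],
    source="Project Glasswing (Anthropic, April 2026)",
    added=date.today(),
)
```

Recording the source and date matters: it is the evidence that the risk was assessed, not ignored.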
Controls (What to Actually Do)
These are practical controls you can implement now, without a dedicated security team.
1. Subscribe to security advisories for your critical stack. Most OS and browser vendors publish security bulletins by email. Sign up the person responsible for infrastructure updates. For Linux, this typically means the advisories list for your distribution (Ubuntu, Debian, RHEL, etc.). For browsers, Chrome and Firefox both publish release notes with CVE lists.
2. Set a patch SLA. Decide and document how quickly critical security updates will be applied. A reasonable default for small teams: critical patches within 72 hours of release, all others within two weeks. Write this in your security policy document so it is a decision, not an intention.
3. Add AI-assisted attacks to your incident response thinking. Your incident response documentation probably covers ransomware and data breach scenarios. Add a scenario where an attacker uses AI tooling to identify and exploit a vulnerability in your stack. The response steps are largely the same — isolate, assess, notify, remediate — but the trigger and speed are different. See our guide on voluntary cloud rules and AI compliance for related vendor accountability frameworks.
4. Ask your cloud and SaaS vendors one new question. "Are you tracking the Glasswing 90-day disclosure window for any of the vulnerabilities in your stack?" This is a reasonable question to raise at the next vendor review or in a support ticket. Their response tells you something about their security posture.
5. Update your AI risk assessment. Add the new threat class described above. Note the date and source of this update. If you have a risk register, log it there. This lightweight governance work — thirty minutes — creates a record that your team assessed the risk rather than ignoring it. Our usage limits and compliance guide covers how usage controls intersect with security governance.
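For teams that want the SLA from control 2 to be executable rather than aspirational, the deadline can be computed from the written policy instead of remembered. A minimal sketch — the severity labels and windows are the suggested defaults from above, not an industry standard:

```python
from datetime import datetime, timedelta

# Suggested default SLA windows (critical = 72 hours, standard = two weeks);
# replace with whatever your written policy actually says.
PATCH_SLA = {
    "critical": timedelta(hours=72),
    "standard": timedelta(weeks=2),
}

def patch_deadline(released: datetime, severity: str) -> datetime:
    """Return the date by which a patch must be applied under the written SLA."""
    return released + PATCH_SLA[severity]

advisory_published = datetime(2026, 4, 10, 9, 0)
print(patch_deadline(advisory_published, "critical"))  # 72 hours after release
```

A computed deadline can feed a calendar reminder or ticket due-date, which keeps the SLA a commitment rather than an intention.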
Implementation Steps
- This week: Subscribe to security advisories for your OS distribution, browsers, and any services handling TLS termination. Assign the subscription to a named person with a calendar reminder to act on bulletins within 72 hours.
- This week: Add "AI-assisted autonomous exploitation" as a threat category to your risk register or threat model document. Note the source (Project Glasswing, April 2026) and the date.
- Within two weeks: Draft a one-paragraph patch SLA and add it to your security policy or AI governance document. It does not need legal review — it needs to exist in writing and be communicated to the team.
- Within two weeks: Run a vendor security review for your top three critical vendors. Ask each whether they track coalition disclosures, have a defined patch SLA, and subscribe to relevant security advisories.
- Within 90 days: Apply any security updates released for your Linux distribution, browsers, and TLS libraries during this period. Treat this window as elevated-alert — the Glasswing findings will be disclosed publicly as patches are released.
- Quarterly ongoing: Review your threat model and vendor security register. Update as new AI-assisted security capabilities emerge from Anthropic or other frontier-model developers.
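The advisory-subscription steps above can be lightly automated: most distro and browser advisories are published as RSS feeds, and a few lines of parsing will flag the entries that mention a CVE. This is a sketch against an invented sample feed — the advisory ID is made up, and real feed URLs vary by distribution, so check your distro's security page:

```python
import re
import xml.etree.ElementTree as ET

# Invented sample feed content; in practice, fetch your distribution's
# security advisory RSS feed instead.
SAMPLE_FEED = """<rss><channel>
  <item><title>USN-9999-1: kernel update</title>
    <description>Fixes CVE-2026-0001 in the TCP stack.</description></item>
  <item><title>Docs refresh</title>
    <description>No security content.</description></item>
</channel></rss>"""

def security_items(feed_xml: str):
    """Return titles of feed items whose description mentions a CVE."""
    root = ET.fromstring(feed_xml)
    hits = []
    for item in root.iter("item"):
        desc = item.findtext("description", default="")
        if re.search(r"CVE-\d{4}-\d+", desc):
            hits.append(item.findtext("title"))
    return hits

print(security_items(SAMPLE_FEED))  # ['USN-9999-1: kernel update']
```

Even a filter this crude reduces the advisory stream to the items worth a human look within the 72-hour window.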
Checklist
Copy this checklist into your security review doc or task manager:
- Subscribe to OS security advisories (distro mailing list or RSS)
- Subscribe to browser security release notes (Chrome, Firefox, Safari as applicable)
- Subscribe to OpenSSL / TLS library advisories if you handle HTTPS termination
- Assign a named owner for security advisory monitoring
- Document patch SLA: critical = 72 hours, standard = two weeks
- Add "AI-assisted autonomous exploitation" row to threat model
- Ask top three vendors: "Do you track Glasswing disclosures? What is your patch SLA?"
- Log vendor responses in your vendor security register
- Apply all critical security updates released in the next 90 days within SLA
- Schedule quarterly threat model review
What to Watch in the Next 90 Days
The 90-day window rolls from the date each vendor was notified; it is not a single fixed deadline. Some patches will arrive quickly; others will take the full window. Here is what to track:
- Linux kernel security releases — subscribe to the kernel security mailing list or your distro's advisories
- Browser releases with security fixes — Chrome, Firefox, and Safari all publish CVE lists in release notes
- OpenSSL and TLS library updates — if your infrastructure includes services that handle HTTPS termination, these are in scope
- FreeBSD and OpenBSD security advisories — relevant if any of your infrastructure runs on BSD-based systems or appliances
This is not an exhaustive list, and Anthropic has not published the full vulnerability index. The practical approach is to watch the normal security channels more closely than usual for the next quarter.
The Bigger Picture
Project Glasswing represents a bet by Anthropic that coordinated, responsible disclosure at scale is better than either sitting on findings or releasing the model publicly. The coalition model — give defenders early access, fund the engineers who fix things, then disclose — is a coherent response to a genuinely hard problem.
Whether it works depends on how quickly affected vendors patch, how well the coalition holds information before disclosure, and whether similar capability appears in models whose developers take less care in how they handle it.
For small teams, the practical implication is simple: the gap between vulnerability disclosure and weaponised exploit is narrowing. Governance that assumed a comfortable lag between "known" and "exploited" needs to be updated. The controls are not exotic — patch management, vendor accountability, and a clear threat model — but they need to be treated as active commitments, not background intentions.
Frequently Asked Questions
Q: What is Project Glasswing? A: Project Glasswing is a coalition announced by Anthropic in April 2026, involving AWS, Apple, Google, Microsoft, CrowdStrike, Cisco, NVIDIA, JPMorganChase, Palo Alto Networks, Broadcom, the Linux Foundation, and over 40 additional critical-infrastructure organisations. The coalition was formed after Anthropic's Claude Mythos Preview model autonomously discovered thousands of zero-day vulnerabilities across major operating systems and browsers. Members receive controlled access to scan their own systems before findings are disclosed publicly within 90 days.
Q: Does this affect small teams directly? A: Yes — indirectly but materially. Small teams run software built on the same stacks where Mythos found vulnerabilities: Linux, BSD kernels, mainstream browsers, TLS implementations, and AES-GCM. If your software stack includes any of these, patch cycles and vendor security practices now matter more. The 90-day disclosure window is the window you have to patch before exploits become public.
Q: Should we panic? A: No. The controlled disclosure model exists precisely to give defenders a head start. What you should do is tighten your patch management cadence, add AI-assisted attack capability to your threat model, and revisit your vendor security questions — particularly for software providers who were not part of the Glasswing coalition.
Q: What is the 90-day disclosure window? A: Anthropic and the Glasswing coalition committed to disclosing all findings publicly within 90 days of notification to affected vendors. This mirrors the model used by Google Project Zero. For small teams, it means security updates for affected software are coming — and you should be ready to apply them quickly.
Q: What should we ask our cloud or SaaS vendors about Glasswing? A: Ask whether they track the Glasswing 90-day disclosure window for vulnerabilities in their stack, whether they have a defined patch SLA for critical security updates, and whether they subscribe to the relevant OS and library security advisories. A vendor who cannot answer these questions has an unclear security posture — factor that into your risk assessment.
References
- Anthropic. (2026, April). Project Glasswing and Claude Mythos Preview security research. Anthropic Security Blog. Retrieved from https://www.anthropic.com/research/project-glasswing
- Google Project Zero. Disclosure policy. Retrieved from https://googleprojectzero.blogspot.com/p/vulnerability-disclosure-faq.html
- NIST. Artificial Intelligence Risk Management Framework (AI RMF). Retrieved from https://www.nist.gov/artificial-intelligence
- Linux Foundation. Security resources and advisories. Retrieved from https://www.linuxfoundation.org/security
