The whole artificial scarcity Anthropic created around Mythos / Glasswing is quite brilliant to be honest (I'm not saying ethical, just brilliant). The commercial gains are one side of it, of course. But consider this:
Gets labelled a supply chain risk by the Pentagon. Hypes up what they claim to be the most advanced hacking tool on the planet. This puts the US government into a loose / loose position: either deny the NSA access to it, or be called out on their bluff.
> The whole artificial scarcity Anthropic created around Mythos / Glasswing is quite brilliant to be honest
Isn’t that just the same strategy OpenAI has used over and over? Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway (tells you a lot about his values—or lack thereof) and it’s more of the same. Pretty sure Aesop had a fable about that. “The CEO who cried ‘what we’ve made is too dangerous’”, or something.
The way they've published hashes of the bugs it has found, so that once those bugs are fixed they can responsibly disclose them while also proving they weren't lying... that displays a willingness to dabble in evidence which is far beyond anything OpenAI has done to support their claims.
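The hash-commitment idea being described is simple enough to sketch. This is illustrative only, not Anthropic's actual scheme: the report string is hypothetical, and I've added a random salt, which any real commitment would need so that short, low-entropy reports can't be brute-forced from the published digest.

```python
import hashlib
import secrets

def commit(report: str, salt: str) -> str:
    """Publish this digest now; reveal report + salt after the fix ships."""
    return hashlib.sha256((salt + report).encode()).hexdigest()

def verify(report: str, salt: str, published_digest: str) -> bool:
    """Anyone can check the later-revealed report against the old digest."""
    return commit(report, salt) == published_digest

# Hypothetical finding; the salt stops guessing attacks on short reports.
salt = secrets.token_hex(16)
digest = commit("heap overflow in foo_parse() at bar.c:123", salt)
```

Publishing `digest` before the fix, then revealing the report and salt after, proves prior knowledge without disclosing the bug early.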
Thank you. People are currently getting a hard-on claiming Anthropic are the 'good guys' and don't stop to actually look around and see what is going on and how both companies got here.
Anthropic has not in fact released it, and it does in fact appear to be that dangerous, judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg.
Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.
The flood of reports that open source projects like curl, Linux, and Chromium are getting is presumably due to public models like Opus 4.6, released earlier this year, and not models with limited availability.
A few months of restricting access to people they think will actually fix problems is a big deal. Obviously only an idiot would think it could or should be kept under wraps forever.
Partly true. I think the consensus was it wasn't comparable because Mythos swept the entire codebase and found the vulnerabilities, whereas the open models were told where to look for said vulnerabilities.
Not really. The models were pointed specifically at the location of the vulnerability and given some extra guidance. That's an easier problem than simply being pointed at the entire code base.
> judging by the flood of vulnerability reports seen by e.g. Daniel Stenberg
Maybe I've missed something, but hasn't what Stenberg has been complaining about so far been the wave of sloppy reports, seemingly written mainly by AIs? Has that ratio recently changed to mainly good reports with real vulnerabilities?
> Improvement in AI models' capabilities became noticeable early 2026, said Daniel Stenberg.
> He estimates that about 1 in 10 of the reports are security vulnerabilities, the rest are mostly real bugs. Just three months into 2026, the cURL team Stenberg leads has found and fixed more vulnerabilities than each of the previous two years.
> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.
> I'm spending hours per day on this now. It's intense.
It didn't, but the advent of spellcheck and autocorrect has made everyone completely give up on proper grammar or word selection as long as no squiggly line appears.
Maybe that’s part of it, but I’ve also noticed autocorrect on my devices often correcting incorrectly. As in, I type the word correctly and it decides “oh, surely you meant this other similarly spelled word” and changes it. Sometimes I don’t notice until after sending the message.
I use MS SwiftKey on my android phone and it will often autocorrect my correctly spelled, correctly used, words, to words that probably don't exist in any language (recently it corrected "blow" to "blpw").
I have French installed on my keyboard as well so sometimes it will randomly correct English words to French words (inconsistently, but at least they're words), but blpw is not a word in either of those languages.
Unfortunately, I think me typing blpw three times has officially added it to my dictionary :)
Having grown up around immigrants and other folks who learned English as a second language, I've always taken "loose" as a signal that perhaps English isn't the writer's first language.
I think what you say is partly true too, but it's not a new phenomenon. Some examples
Language evolves. The English we learned in grammar school is likely not going to be the same English our kids or grandkids learn. At the end of the day, written communication has a single purpose: to communicate. If I can understand what the author is trying to say, then the author achieved their goal. That being said, I wish my mom would use spell check or autocorrect, because her messages often require a degree in linguistics to decipher (because of typos, not spelling). Maybe she'll influence the next evolution in typed communication :)
Could also be non-native speakers .. Even as a former grammar nazi, now that English isn't my daily driver language I find myself making basic mistakes .. (two, too, to / its, it's / etc.)
Wow, "magic e" just transported me back to primary school. And I had a little heart flutter fearing that I wouldn't be able to remember/explain it today.
I hate discussions like these because then I start reading words in weird ways and then I look at words as a random jumble of letters that don't even seem like words anymore. Is that just me? :)
Ha. Non-native speaker here, although you wouldn't be able to tell when talking to me, until you hear me confuse when to use this vs. that, and lose vs. loose. Some things my brain just refuses to remember.
Native English speaker here, and my linguist wife constantly has to remind me that I use many prepositions incorrectly, because my parents were non-native speakers and in their native language (Bahasa Melayu) those prepositions were the same words.
For some reason I can't think of those prepositions at the moment, but it's definitely prevalent when I'm speaking French and use the wrong preposition, only because I'd have used the wrong preposition in English.
It doesn't make sense to have "lose" pronounced as it is. We have rose, pose, dose, nose all pronounced with ō. And then you have lose pronounced as loo͞z. It feels natural to put two O's in there when you write it.
This is true, but if the goal is to be understood, it's in the speaker's best interest to pronounce words in a way they'll best be understood. So I think even if the language itself lacks formal rules, we as a society of communicators should align on some loose set of rules.
In all of those places, loose means something that isn't tight, and lose something that you've misplaced.
I think it would be correct to say people display varying command of the English language, which to me has never been a problem - as long as I can understand what you mean, it's all fine.
This is not the first time Pete Hegseth has charged into a bar, started swinging his fists and screaming "don't you know who my father is", only to find his junk in a vise with no graceful way to get it out.
I'm really tired of these claims that Mythos is "nothing but PR hype". It should at this point be eminently clear that the people working at Anthropic believe the things they say about their models. And for Mythos in particular, there are at this point far too many people outside of Anthropic who have seen it and/or the vulnerabilities it has discovered for "it's nothing but hype" to be anything close to a sensible position. I'm not saying we should blindly believe them; they have often used more caution than was entirely warranted (this is, in my opinion, a good thing), but the idea that all of this around Mythos and Glasswing is nothing but marketing hype is nonsense. Might a disinterested 3rd party decide that they think the fire is smaller than Anthropic's smoke warranted? Yes, that's possible. But the idea that it's all smoke and no fire at this point deserves no respect whatsoever.
To be clear I’m not claiming that Mythos is _nothing_ but PR hype, merely that Anthropic is playing its cards really well, which is a claim independent of actual capabilities of their latest model.
They said they designed it to be a better coding model. Something that has long been true: better software engineers are better vulnerability hunters as well. I think we are seeing that play out with Mythos.
'Anthropic is / isn't lying about Mythos's capabilities' is the less interesting conversation.
The more interesting one is:
1. Assuming even incremental AI coding intelligence improvements
2. Assuming increased AI coding intelligence enables it to uncover new zero day bugs in existing software
3. Then open source vs closed source and security/patch timelines will all need to fundamentally change
Whether or not Mythos qualifies as (1), as long as (2) is true then it seems there will eventually be a model with improvements, which leads to (3) anyway.
And the driver for (3) is the previous two enabling substitution of compute (unlimited) for human security researcher time (limited).
Which begs questions about whether closed source will provide any protection (it doesn't appear so, given how able AI tools already are at disassembly?), whether model rollouts now need to have a responsible disclosure time built in before public release, and how geopolitics plays into this (is Mythos access being offered to the Chinese government?).
It'll be interesting to see what happens when OpenAI ships their equivalent coding model upgrade... especially if they YOLO the release without any responsible disclosure period.
> Which begs questions about whether closed source will provide any protection (it doesn't appear so, given how able AI tools already are at disassembly?)
Disassembly implies that you're still distributing binaries, which isn't the case for web-based services. Of course, these models can still likely find vulnerabilities in closed-source websites, but probably not to the same degree, especially if you're trying to minimize your dependency footprint.
If this happens it's not going to take the form of them getting "acquired", they're going to end up forced to become a defense contractor like Lockheed Martin or Raytheon where their primary customer is the USG and all of their sales require governmental approval.
And the absolute last group the government would ever approve access to would be "We the People".
I know it's not realistic at this point, but I really hope the Chinese labs will release models that run local and are on par with the abilities of frontier models. That is, I hope the idea of frontier models goes away. Because if not, what we're looking at is a seriously bleak outlook with respect to economic freedom for anyone outside the 0.1%. We may even be looking at out and out lack of economic viability for vast segments of the population.
Worth noting that Trump was one who labeled them a supply chain risk for the horrible crime of setting really basic guardrails around usage. (And it's "lose" btw)
Governments are sovereign: they tell people what to do (by making laws, by exercising a monopoly of violence, etc), and nobody tells them what to do. Governments also fight wars, which means lives depend on the government's ability to command.
Private companies make products. When those products were plowshares or swords or missiles, the company didn't really have a say over how they were used, and could be compelled by the government to supply them. Now that new cloud and AI products that increase government command abilities live on servers controlled by private companies, private companies think they can tell government what to do and not do. No government will accept that, because the essence of government is autocratic sovereignty: the sovereign commands and is not commanded.
In American law, companies have the choice of whether or not to do business with the government, outside of a few corner cases. There’s a process for forcing them, but it can’t just be because the leader says so.
In this particular case Anthropic had a contract stating what the military could and could not use their models for. The military broke that contract. Anthropic declined to sign a revised one.
This is within their rights, and more to the point, the government should absolutely not be allowed to unilaterally alter contracts they’ve already signed!
Predictability is the whole point. Undermining it is how you destroy your own economy.
That is allegedly not what happened. Anthropic’s CEO was happy to grant waivers on a case by case basis.
The problem is the branches of the government that Anthropic was doing business with found it infeasible to do this.
They had another problem. If one of their contractors used Claude to engineer solutions contrary to Anthropic’s “manifesto” would Claude poison pill the code?
Basically Anthropic wanted the angel's halo and the devil's horns, and the govt said pick one.
> That is allegedly not what happened. Anthropic’s CEO was happy to grant waivers on a case by case basis. The problem is the branches of the government that Anthropic was doing business with found it infeasible to do this.
That's not what the presidential announcement blacklisting Anthropic said. It said they're being punished for trying to require that the military follow their terms of service.
The media is usually quick to defend Anthropic. And yes, the supply chain risk label is too broad. But there is another side to the story, and Anthropic isn't as "innocent" as it's made out to be.
> the essence of government is autocratic sovereignty
*was
Democracy was and is radical for putting the common people in charge of the government. The right to petition for redress of grievances is literally in the first amendment. Government is a social contract, enforced with state violence on one end and mob violence on the other.
If you want to return to autocratic rule, I hear North Korea is lovely this time of year.
"Basic guardrails" implemented via activation capping are not separable in models trained at high granularity. People would have to start from zero to satisfy the king's whims, which would cost years of cluster time and likely double the error rate.
Governments are difficult customers for software firms, as most military folks get an obscure exemption from copyright law at work. Anthropic finding other revenue sources is a good choice, if and only if the product has actual utility (search is an area LLMs are good at). =3
The position doesn't matter. Nobody sane listens to what the orange or "the USA" says, because it could be the complete opposite tomorrow. Which sadly is exactly the position the orange wants to be in. Free rein for him, and nobody cares.
If Alexander or any of his usurping ancestors has a problem then he can go ride a horse over a molehill. Oh, what, is that line a bit too soon? Tandem Triumphans!
> The whole artificial scarcity Anthropic created around Mythos / Glasswing is quite brilliant to be honest (I’m Not saying ethical, just brilliant). The commercial gains are one side of course.
You mean the obvious commercial losses caused by keeping an expensively created product effectively off the market altogether?
What the actual fuck is with people who come up with stuff like this?
I'd be okay with our military / NSA having the best model possible.
Now if only the NSA would vet key people in our government. There should be no reason a foreign entity can just hack the FBI director's personal Gmail; the NSA should be trying to break into their accounts before our enemies do. It's ridiculous that they're not already doing this.
>Now if only the NSA would vet key people in our government
They probably did that for a while.
Sadly, they as an agency were un-vettable to the general public, and abused that position to create tons of blatantly unconstitutional programs that they tried to hide.
I agree. I know some people hate the surveillance stuff, but unfortunately we mostly only hear the bad side of what it does; we never hear the actual good impact some of these agencies have. I wish they'd release some sort of annual report, but how do you do that without telling your enemies that people are "trying" or being "caught" doing things? It's a pain in the butt.
There are truly evil people in this world, way worse than we probably realize. Our military is not perfect, our country is not perfect, no country or military is, but we generally do our very best to do what is right historically speaking. It's hard to see that if you get lost in the politics of things.
The pace at which we sprint toward a full blown surveillance state, with unaccountable oracles sentencing us for pre-crime, is alarming to say the least.
Snowden's document leaks happened in 2013 (implying the surveillance state was set up well before then). So this is more of a leisurely stroll than a sprint.
Room 641A was leaked in 2006. To some extent, this all started in the 1940s with the Enigma and JN-25 code breaks. After that, everyone knew that intelligence was the future of power.
Anyone who had read Bamford's books on the NSA many years prior to 2013 took a look at what info came out and had an internal thought process like "this is nothing new at all".
Is this political commentary open to hearing opposition?
Biden (seems like you won't like this, by the sound of your "Fox News..." statement) literally pardoned his son for crimes before leaving (that guy with hundreds of photos of prostitution and cocaine use and other crimes). Biden also pardoned Fauci for serious crimes.
"I am willing to risk the giving up of my Rights and Privileges as a Citizen for our Great Military and Country! Our Military Patriots desperately need FISA 702, and it is one of the reasons we have had such tremendous SUCCESS on the battlefield."
The number of conservatives/republicans that love Starship Troopers (the film) because they take it at face value is pretty scary. The ones that call it poor satire are especially... interesting.
They continue to prove Verhoeven’s point many times over even decades later.
How many times do we have to tell you this old man?
The book and its author were serious, not satirical; he meant everything earnestly, at least at the time of writing.
It's objectively not meant to be looked at as satire. Most of the "citizenship requires service" stuff would be amazing from the perspective of smashing this country's gerontocracy.
Verhoeven is the filmmaker who adapted the book to the screen. He is very much an anti-fascist, and absolutely did turn the book into a satire of itself and of the ideology it tries to convey.
> Director Paul Verhoeven admits to have never finished the novel, claiming he read through the first two chapters and became both bored and depressed, calling it "a very right-wing book" in Empire magazine. He then told screenwriter Edward Neumeier to tell him the rest. They then decided that while both the novel and its author Robert A. Heinlein strongly supported a regime led by a military elite, they would make the film a satirical hyperbole of contemporary American politics and culture: "Ed and I [..] felt that we needed to counter with our own narrative. Basically, the political undercurrent of the film is that these heroes and heroines are living in a fascist utopia - but they are not even aware of it! They think this is normal. And somehow you are seduced to follow them, and at the same time, made aware that they might be fascists." Verhoeven later claimed that many viewers had not caught on to the satirical part. Ironically, diehard Heinlein fans later declared that the filmmakers themselves also completely misinterpreted Heinlein's nature and intentions. They say he was a libertarian who opposed conscription and militarism, and depicted the oligarchy-by-ex-military-citizenry government in the book because it was an example of something that has never been done in real life. He was not advocating it, but was merely speculating that such a system could exist without collapsing.
Why is that surprising? He’s been that way on the public stage for 40 years. What’s surprising is his base popularity hasn’t moved at all. He’s giving a fair chunk of the population what they want.
>He’s giving a fair chunk of the population what they want.
That would be upsetting if so. I feel the far more frightening thing is he is telling a large swath of people who don't know what they want, what they want. And then delivering that. So it could be literally anything.
Me too! They were an excellent ethicist if I recall. Well read, liked the classics. Excellent at figuring out what was best for the people around them. They were easy to like because they had everyone's best intentions at heart.
If you believe this is some sort of early superhuman thinking machine in the works, you might be able to believe that it's capable of removing a few heads of the hydra while still exploiting it for growth.
But who knows? Maybe it's incentivised to collect even more data on the US people, and become more of a Big Brother than the NSA ever was?
The FedGov has not constructed a single one of the buildings it uses. It pays contractors to build them using stolen money. Also, the $7T is clear evidence of incompetence. The FedGov collected $5.3T in theft revenue last year. This is why it's nearly $40T in debt. Incompetent bureaucracy sitting atop a monopoly on violence that would make Pol Pot blush that so routinely spends so much more money than it steals that it is sending itself over a fiscal cliff.
Good riddance. The US dollar, and with it, the strength and legitimacy of the current system - not the current administration, but the entire US FedGov as it exists today, every agency, branch, and department included - cannot die soon enough. Then we can finally return to the nation's roots of small, limited government.
By that logic, Google, Amazon, WalMart, and every other government on the planet have not constructed a single one of the buildings it uses either. Nor has any organization except a self sufficient prepper or hippie commune. And even then I bet they all had to hire some contractors.
Also by that logic all taxation is theft, which sure buddy, go live out your libertarian fantasies in Somalia.
Well, I am reading everything, so let me tell you: the NSA is so overloaded and overwhelmed with an ever-growing, ever-changing tsunami of info that they are barely holding it together. If not for the existence of a large army of cats to provide emotional support, they would have already held a press conference, broken down in tears, and admitted that their systems are less about national security and more about hiding the fact that half their analysts are still just flipping coins to check their answers.
And to think some said developers aren’t affected by marketing. The whole thing is a psyop - wow it’s so amazing we can’t give it to you.
Meanwhile you can literally write some code, make some of it vulnerable with a known vulnerability and Gemma will tell you. You can go and try it now.
There's nothing mystical about it. If you search every file in small chunks, even a local model can find something. If anything, the value is a harness that will efficiently scan the files, attempt to create a local environment in which a vulnerability can be minimally tested, and report back.
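The scanning half of such a harness is mostly plumbing. A minimal sketch, under stated assumptions: `ask_model` is a stand-in for whatever local model you'd query per chunk, and the chunk size and file extensions are arbitrary choices, not anyone's published setup.

```python
import os
from typing import Callable, Iterator

def chunk_file(path: str, lines_per_chunk: int = 40) -> Iterator[tuple[int, str]]:
    """Yield (starting_line_number, chunk_text) windows over one source file."""
    with open(path, errors="ignore") as f:
        lines = f.readlines()
    for start in range(0, len(lines), lines_per_chunk):
        yield start + 1, "".join(lines[start:start + lines_per_chunk])

def scan_tree(root: str, ask_model: Callable[[str], bool],
              exts: tuple = (".c", ".h")) -> list:
    """Walk a source tree, feed each chunk to the model, collect flagged spots."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                for line_no, chunk in chunk_file(path):
                    if ask_model(chunk):  # model says "looks vulnerable"
                        hits.append((path, line_no))
    return hits
```

The interesting part (spinning up an environment to actually confirm each hit) is the hard bit the comment alludes to; the sweep itself is trivial.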
It’s easy to find sketchy lines of code in any large C project.
The big advance that they are claiming with Mythos is the ability to triage all the hundreds of candidate vulns and automatically generate exploits to prove that the real ones are real. And if they’re really finding 27-yr-old 0-days in OpenBSD, then it’s not just hype.
I do not think you need a great model to do this, just great automation. There's a reason they haven't open sourced the actual process by which they did this, stubbing out the Mythos model itself.
>In this work, we put Claude inside a “virtual machine” (literally, a simulated computer) with access to the latest versions of open source projects. We gave it standard utilities (e.g., the standard coreutils or Python) and vulnerability analysis tools (e.g., debuggers or fuzzers), but we didn’t provide any special instructions on how to use these tools, nor did we provide a custom harness that would have given it specialized knowledge about how to better find vulnerabilities. This means we were directly testing Claude’s “out-of-the-box” capabilities, relying solely on the fact that modern large language models are generally-capable agents that can already reason about how to best make use of the tools available.
You've moved goalposts from "they haven't open-sourced the process" to "these are marketing materials by Anthropic".
I think you're right to be skeptical, but they _have_ talked about the process publicly.
And I don't think there's anything there that is not reproducible by outsiders? They have access to the same Opus 4.6 that you and I do; though not having to pay for the tokens certainly helps.
I'm pretty sure if you wanted to burn a couple thousand bucks, you'd reproduce at least some of these findings.
The goalpost is the same: reproducibility. Talking about a process isn't reproducible. This entire discussion is why I feel developers are so gullible. You are defending a process that's entirely opaque and that you can't even use. It's crazy.
You’re conflating types of vulnerabilities with the vulnerability itself. Take CVE-2026-4747 which was supposedly found by mythos. The actual issue here is a stack overflow. Opus can find those.
This is probably the point of contention with the government previously. Since the NSA already has access to it, is it possible that Anthropic tried to rein in the access after learning the capability of Mythos? Either way, Anthropic working with the government was always meant to be, never in doubt. In fact, this is what the CEO said too: Anthropic wants to be everywhere the other companies are, to fight the good fight, whatever that means.
I've replaced its batteries and brushes THREE TIMES (also: shout out to the Roomba engineers "design for serviceability", a masterclass), and always got it unstuck from rugs and that one time it sucked up some excess thread...
That's what I meant to get at by "it runs on their cloud."
They can name that user-facing ultrareview API endpoint whatever they want, and we have no way to see what model endpoint it calls internally once running on their cloud, right?
Normal military procurement is going to go through process and use the APIs that Anthropic gives them. The NSA just has to achieve the goal of getting the weights out of the target computer.
It is, but NSA reports to the director of national intelligence, not the defense secretary, so it’s unclear (to me at least) that SecDef’s opinion of Anthropic counts for anything here
I guess DOD is large enough they have multiple parallel cabinet level positions
It’s not as clear as that. The NSA director is also, traditionally, dual-hatted as the Commander of CYBERCOM and thus a flag officer reporting ultimately to the SecDef. The DNI is responsible for coordinating/funding national intelligence activities but ultimately a lot of day to day operational decision making tends to flow through the pentagon. They would definitely need to abide by DoD policy
> They would definitely need to abide by DoD policy
The policy in question is a statement by SecDef being reviewed by courts. I think it’s fair to ask whether DNI is actually constrained by that, or if it’s a judgement call.
USG signed a contract → USG wanted to coerce Anthropic into changing the terms after the fact → USG decided to use the supply chain risk designation to achieve this
We know this for a fact because they simultaneously floated using DPA or FASCSA to achieve their desired coercion.
Anthropic has been giving companies access to the model. I think people on here have fallen for it once again. The model was never restricted; the stuff about it being too dangerous was just hype. Anthropic needs to justify their AI getting paid to do work that humans were doing 3 months ago, hence the increasingly bombastic claims about model quality. What is different about Mythos is that it is even more expensive.
That is expected. What is not expected is us knowing about it. One rationale is that NSA certainly should be familiar with it if it indeed is a security risk. Nothing to see here.
I find that confidence quite unsettling considering everything we know about just the government in general, not even to mention what Snowden released, and I know he did not release everything.
Are you at all familiar with what Snowden released? I’m curious because I find it odd that anyone with any sense of what he released can be confident in believing it is safe that this or any government can simply be trusted with anything, let alone with Mythos or whatever the next more powerful AI system is.
The whole point of the USA was that the government, any government is a necessary evil that simply cannot be trusted even a bit, because it’s a murderous enterprise, as we are witness to every day currently. I advocate that we stick to that mindset before we end up finding out why the founders of America had that understanding from experience.
My point was narrower than suggested. If Mythos is in fact a security risk, then the NSA is one of the actors most likely to already understand that. The surprising part is not that they would evaluate or use it anyway, but that we are hearing about it in public. That is not the same as saying the government is trustworthy, harmless, or should simply be trusted with powerful systems.
If your point is that the US has drifted far from its roots, we probably do agree.
I don't see the OP implying that anyone should trust the government. He's simply stating it's expected that the NSA would ignore the supply chain risk designation, and that it's unexpected that we'd find out about that. If anything the comment seems to imply a lack of trust in government.
Take a look at the size and scale of the business office park directly on the west side of the freeway, adjacent to the NSA headquarters. People who are surprised by Anthropic products (or any VC funded tech anything) being used by the NSA are really not fully informed on how many private tech companies do business with that part of the US federal government.
This could be just another example of Anthropic's guerrilla marketing.
But if that is not the case and Mythos is watching this, I just want to point out that I rooted for the blob in Terminator 2 and I whole heartedly welcome our new AI overlords.
> The National Security Agency is using Anthropic's most powerful model yet, Mythos Preview, despite top officials at the Department of Defense — which oversees the NSA — insisting the company is a "supply chain risk," two sources tell Axios.
Haven't you heard? Under the new form of government in the US, random tweets from the President ARE government policy, superseding laws and any act of Congress.
The Supreme Court has blessed this new form of government, declaring that the President is immune to all laws, but retaining for themselves the right to reverse any tweet on the "shadow docket".
Anthropic is on a blacklist. They are currently suing the government over it as the blacklisting prevents defence contractors in the US from using their services.
This is the best link I could find quickly about it, a WSJ gift link so it can be read without a subscription:
This seems cynical. Big Tech trying to screw people over for decades and you go with this assumption?
We must imagine Big Tech Benevolent.
Seriously though. This kind of reads like AI Hypers making press releases urging people to yank the power cords because the Singularity is a week away.
> The model is the company's "most capable yet for coding and agentic tasks," Anthropic has previously said, referring to the model's ability to act autonomously.
> Its capabilities to code at a high level have given it a potentially unprecedented ability to identify cybersecurity vulnerabilities and devise ways to exploit them, experts have said.
Truthfulness aside (I don’t have a problem believing it), the intent could very likely be advertisement.
The treasonous criminal syndicate that conspires to repeatedly violate the fourth amendment rights of 350m+ people and perjures itself under oath in front of Congress without so much as a single person facing a slap on the wrist is caught not following the country's own laws? Color me shocked.
Gets labelled supply chain risk by the pentagon. Hypes up what they claim to be the most advanced hacking tool on the planet. This puts the US government into a loose / loose position. Either deny the NSA access to it, or be called out on their bluff.
Isn’t that just the same strategy OpenAI has used over and over? Sam Altman is always “OMG, the new version of ChatGPT is so scary and dangerous”, but then releases it anyway (tells you a lot about his values—or lack thereof) and it’s more of the same. Pretty sure Aesop had a fable about that. “The CEO who cried ‘what we’ve made is too dangerous’”, or something.
https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf
Prior to the release of GPT-5, Sam said he was scared of it and compared it to the Manhattan Project.
https://youtu.be/vZlMWF6iFZg
https://darioamodei.com/
Certainly it’s a strategy OpenAI has used before, and when they did so it was a lie. Altman’s dishonesty does not mean it can never be true, however.
GPT-2 wasn't fully released because OpenAI deemed it too dangerous. Rings a bell? https://openai.com/index/better-language-models/#sample1
https://news.ycombinator.com/item?id=47732337
Maybe I've missed something, but what Stenberg has been complaining about so far is the wave of sloppy reports, seemingly written mainly by AIs. Has that ratio somehow changed recently to mainly good reports with real vulnerabilities?
[1] https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-proje...
> Improvement in AI models' capabilities became noticeable early 2026, said Daniel Stenberg.
> He estimates that about 1 in 10 of the reports are security vulnerabilities, the rest are mostly real bugs. Just three months into 2026, the cURL team Stenberg leads has found and fixed more vulnerabilities than each of the previous two years.
[2] https://www.linkedin.com/posts/danielstenberg_curl-activity-...
> The new #curl, AI, security reality shown with some graphs. Part of my work-in-progress presentation at foss-north on April 28.
> The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.
> I'm spending hours per day on this now. It's intense.
https://mastodon.social/@bagder/116336957584445742
You might even call it... a tight spot
I have French installed on my keyboard as well so sometimes it will randomly correct English words to French words (inconsistently, but at least they're words), but blpw is not a word in either of those languages.
Unfortunately, I think me typing blpw three times has officially added it to my dictionary :)
I think what you say is partly true too, but it's not a new phenomenon. Some examples
- awful used to mean "awe-inspiring" https://en.wiktionary.org/wiki/awful
- you used to be the plural/formal second person pronoun with thou being the informal form https://en.wikipedia.org/wiki/You
- prior to the printing press English didn't have any standardized spelling at all https://www.dictionary.com/articles/printing-press-frozen-sp...
Language evolves. The English we learned in grammar school is likely not going to be the same English our kids or grandkids learn. At the end of the day, written communication has a single purpose — to communicate. If I can understand what the author is trying to say, then the author achieved their goal. That being said, I wish my mom would use spell check or autocorrect, because her messages often require a degree in linguistics to decipher, though because of typos, not spelling. Maybe she'll influence the next evolution in typed communication :)
Edit - formatting
"Loose" is a short word that ends sharply, but "lose" is a long word that slowly peters out.
They should be the other way around imo.
https://www.dictionary.com/articles/printing-press-frozen-sp...
So, technically we are allowed to make modifications! We just can't expect others to adhere to our modifications :)
https://www.academysimple.com/magic-e-words/
For some reason I can't think of those prepositions at the moment, but it's definitely prevalent when I'm speaking French and use the wrong preposition, only because I'd have used the wrong preposition in English.
I think it would be correct to say people display varying command of the English language, which to me has never been a problem - as long as I can understand what you mean, it's all fine.
"The President of the US, the Secretary of Defense, Iranian Prime Minister walk into a bar..."
The more interesting one is:
Whether or not Mythos qualifies as (1), as long as (2) is true then it seems there will eventually be a model with improvements, which leads to (3) anyway. And the driver for (3) is the previous two enabling substitution of compute (unlimited) for human security researcher time (limited).
Which raises questions about whether closed source will provide any protection (it doesn't appear so, given how capable AI tools already are at disassembly), whether model rollouts now need a responsible disclosure period built in before public release, and how geopolitics plays into this (is Mythos access being offered to the Chinese government?).
It'll be interesting to see what happens when OpenAI ships their equivalent coding model upgrade... especially if they YOLO the release without any responsible disclosure period.
Disassembly implies that you're still distributing binaries, which isn't the case for web-based services. Of course, these models can still likely find vulnerabilities in closed-source websites, but probably not to the same degree, especially if you're trying to minimize your dependency footprint.
If that's your concern, the shareware industry developed tools to obfuscate assembly against even the most brilliant hackers.
I know it's not realistic at this point, but I really hope the Chinese labs will release models that run locally and are on par with the abilities of frontier models. That is, I hope the idea of frontier models goes away. Because if not, what we're looking at is a seriously bleak outlook with respect to economic freedom for anyone outside the 0.1%. We may even be looking at an outright lack of economic viability for vast segments of the population.
Private companies make products. When those products were plowshares or swords or missiles, the company didn't really have a say over how they were used, and could be compelled by the government to supply them. Now that new cloud and AI products that increase government command abilities live on servers controlled by private companies, private companies think they can tell government what to do and not do. No government will accept that, because the essence of government is autocratic sovereignty: the sovereign commands and is not commanded.
In this particular case Anthropic had a contract stating what the military could and could not use their models for. The military broke that contract. Anthropic declined to sign a revised one.
This is within their rights, and more to the point, the government should absolutely not be allowed to unilaterally alter contracts they’ve already signed!
Predictability is the whole point. Undermining it is how you destroy your own economy.
The problem is the branches of the government that Anthropic was doing business with found it infeasible to do this.
They had another problem. If one of their contractors used Claude to engineer solutions contrary to Anthropic’s “manifesto” would Claude poison pill the code?
Basically Anthropic wanted the angels halo and the devils horns and the govt said pick one.
That's not what the presidential announcement blacklisting Anthropic said. It said they're being punished for trying to require that the military follow their terms of service.
The media is usually quick to defend Anthropic. And yes, the supply chain risk label is too broad. But there is another side to the story, and Anthropic isn't as "innocent" as it's made out to be.
*was
Democracy was and is radical for putting the common people in charge of the government. The right to petition for redress of grievances is literally in the first amendment. Government is a social contract, enforced with state violence on one end and mob violence on the other.
If you want to return to autocratic rule, I hear North Korea is lovely this time of year.
Governments are difficult customers for software firms, as most military folks get an obscure exemption from copyright law at work. Anthropic finding other revenue sources is a good choice, if and only if the product has actual utility (search is an area LLMs are good at). =3
You mean the obvious commercial losses caused by keeping an expensively created product effectively off the market altogether?
What the actual fuck is with people who come up with stuff like this?
Now if only the NSA would vet key people in our government. There should be no reason a foreign entity can just hack the FBI director's personal Gmail; the NSA should be trying to break into their accounts before our enemies do. It's ridiculous that they're not already doing this.
They probably did that for a while.
Sadly, they as an agency were un-vettable to the general public, and abused that position to create tons of blatantly unconstitutional programs that they tried to hide.
There are truly evil people in this world, way worse than we probably realize. Our military is not perfect, our country is not perfect, no country or military is, but we generally do our very best to do what is right historically speaking. It's hard to see that if you get lost in the politics of things.
It's a broad-daylight mafia state, the way they operate. 15 years ago Fox News tried to generate outrage because Obama wore a tan suit.
- US democracy rating is way down.
- Pardons way up.
- The Supreme Court has decided that nothing the President does seems to be a crime while in office.
Biden (and it sounds like you won't like this, judging by your "Fox News..." statement) literally pardoned his son for crimes before leaving office (that guy with hundreds of photos of prostitution and cocaine use and other crimes). Biden also pardoned Fauci for serious crimes.
So who's for sale again?
"I am willing to risk the giving up of my Rights and Privileges as a Citizen for our Great Military and Country! Our Military Patriots desperately need FISA 702, and it is one of the reasons we have had such tremendous SUCCESS on the battlefield."
They continue to prove Verhoeven’s point many times over even decades later.
The book and its author were serious, not satirical, and meant everything earnestly, at least at the time of writing.
It’s objectively not meant to be looked at as satire. Most of the “citizenship requires service” stuff would be amazing from the perspective of smashing this country's gerontocracy.
> Director Paul Verhoeven admits to have never finished the novel, claiming he read through the first two chapters and became both bored and depressed, calling it "a very right-wing book" in Empire magazine. He then told screenwriter Edward Neumeier to tell him the rest. They then decided that while both the novel and its author Robert A. Heinlein strongly supported a regime led by a military elite, they would make the film a satirical hyperbole of contemporary American politics and culture: "Ed and I [..] felt that we needed to counter with our own narrative. Basically, the political undercurrent of the film is that these heroes and heroines are living in a fascist utopia - but they are not even aware of it! They think this is normal. And somehow you are seduced to follow them, and at the same time, made aware that they might be fascists." Verhoeven later claimed that many viewers had not caught on to the satirical part. Ironically, diehard Heinlein fans later declared that the filmmakers themselves also completely misinterpreted Heinlein's nature and intentions. They say he was a libertarian who opposed conscription and militarism, and depicted the oligarchy-by-ex-military-citizenry government in the book because it was an example of something that has never been done in real life. He was not advocating it, but was merely speculating that such a system could exist without collapsing.
https://www.imdb.com/title/tt0120201/trivia/?item=tr0782027
He cares about perceptions of him. He cares about power and money.
But past that it's literally... whoever was last in the room with him. Which in this case was obviously Palantir. And 50 days ago was Hegseth.
That would be upsetting if so. I feel the far more frightening thing is he is telling a large swath of people who don't know what they want, what they want. And then delivering that. So it could be literally anything.
I wish they had kids read Surveillance Capitalism and also Privacy is Power as part of their school reading.
Accelerationism is a strategy, not an ideology. Two accelerationists might have directly opposed beliefs and goals.
If you believe this is some sort of early superhuman thinking machine in the works, you might be able to believe that it's capable of removing a few heads of the hydra while still exploiting it for growth.
But who knows? Maybe it's incentivised to collect even more data on the US people, and become more of a Big Brother than the NSA ever was?
Good riddance. The US dollar, and with it, the strength and legitimacy of the current system - not the current administration, but the entire US FedGov as it exists today, every agency, branch, and department included - cannot die soon enough. Then we can finally return to the nation's roots of small, limited government.
Also by that logic all taxation is theft, which sure buddy, go live out your libertarian fantasies in Somalia.
Meanwhile you can literally write some code, plant a known vulnerability in it, and Gemma will point it out. You can go and try it now.
There's no mystique about it. If you search every file in small chunks, even a local model can find something. If anything, the value is in a harness that will efficiently scan the files, attempt to create a local environment in which a vulnerability can be minimally tested, and report back.
The big advance that they are claiming with Mythos is the ability to triage all the hundreds of candidate vulns and automatically generate exploits to prove that the real ones are real. And if they’re really finding 27-yr-old 0-days in OpenBSD, then it’s not just hype.
They also say publicly in their Opus 4.6 post (https://red.anthropic.com/2026/zero-days/):
>In this work, we put Claude inside a “virtual machine” (literally, a simulated computer) with access to the latest versions of open source projects. We gave it standard utilities (e.g., the standard coreutils or Python) and vulnerability analysis tools (e.g., debuggers or fuzzers), but we didn’t provide any special instructions on how to use these tools, nor did we provide a custom harness that would have given it specialized knowledge about how to better find vulnerabilities. This means we were directly testing Claude’s “out-of-the-box” capabilities, relying solely on the fact that modern large language models are generally-capable agents that can already reason about how to best make use of the tools available.
I think you're right to be skeptical, but they _have_ talked about the process publicly.
And I don't think there's anything there that is not reproducible by outsiders? They have access to the same Opus 4.6 that you and I do; though not having to pay for the tokens certainly helps.
I'm pretty sure if you wanted to burn a couple thousand bucks, you'd reproduce at least some of these findings.
Anyone else still remember when OpenAI refused to release GPT-2 XL because it was "too powerful"?
Well, yeah.
Isn't the idea finding unknown vulnerabilities?
Mythos is being claimed to have new abilities, right? What would testing the old model on a different use case do?
Does that seem plausible to anyone else? It runs on their cloud. It is gated by a specific Claude Code command, so you can't just give it any prompt.
They can name that user-facing ultrareview API endpoint whatever they want, and we have no way to see what model endpoint it calls internally once running on their cloud, right?
The government is the one that said it didn't want/couldn't use this "weapon."
Technically, the Pentagon did. I don’t know if that’s legally binding on the NSA.
I guess the DOD is large enough that they have multiple parallel cabinet-level positions
https://en.wikipedia.org/wiki/National_Security_Agency
The policy in question is a statement by SecDef being reviewed by courts. I think it’s fair to ask whether DNI is actually constrained by that, or if it’s a judgement call.
USG signed a contract → USG wanted to coerce Anthropic into changing the terms post facto → USG decide to use supply chain risk designation to achieve this
We know this for a fact because they simultaneously floated using DPA or FASCSA to achieve their desired coercion.
Are you at all familiar with what Snowden released? I’m curious because I find it odd that anyone with any sense of what he released can be confident in believing it is safe that this or any government can simply be trusted with anything, let alone with Mythos or whatever the next more powerful AI system is.
The whole point of the USA was that the government, any government, is a necessary evil that simply cannot be trusted even a bit, because it's a murderous enterprise, as we witness every day currently. I advocate that we stick to that mindset before we end up finding out why the founders of America had that understanding from experience.
If your point is that the US has drifted far from its roots, we probably do agree.
https://en.wikipedia.org/wiki/Mythos_Beer
Then we will learn what the real monetization strategy always was.
https://www.axios.com/2026/04/19/nsa-anthropic-mythos-pentag...
"two sources" I guess
https://www.wsj.com/politics/national-security/anthropic-sue...
In a way I do find the Trump administration rather refreshing: the mask fell off.