Hacker News | tbrownaw's comments

The claim isn't that the LLMs are democratized. The claim is that LLMs are causing software development to be democratized. As in, people who want software are more able to make it themselves rather than having to go ask the elites for some. As in, the elites in IT now have less power to govern what software other people can have.

(Or alternatively, it's getting harder to stamp out "shadow IT" and all the risks and headaches it causes.)


> human-level to run on a single 16GB GPU before the end of the decade.

That's apparently about 6k books' worth of data.


For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books' worth of data by the same metric.

How many humans do you know who can recite 6000 books, word for word, exactly?
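For concreteness, a quick sketch of the arithmetic behind these comparisons. The genome size and the 2-bits-per-base encoding are my assumptions, not figures anyone in the thread computed, and the per-book sizes are just what the thread's own round numbers imply:

```python
# Back-of-envelope check of the "books' worth of data" comparisons.
# Assumed inputs (mine, not the commenters'):
#   - 16 GB of GPU memory, as quoted upthread
#   - human genome: ~3.2 billion base pairs at 2 bits per base

GB = 10**9

gpu_bytes = 16 * GB
genome_bytes = 3_200_000_000 * 2 // 8  # 2 bits/base -> 0.8 GB

# Per-book size implied by the thread's own round numbers:
implied_book_from_gpu = gpu_bytes / 6_000   # "about 6k books"
implied_book_from_dna = genome_bytes / 600  # "about 600 books"

print(f"GPU figure implies ~{implied_book_from_gpu / 1e6:.1f} MB per book")
print(f"DNA figure implies ~{implied_book_from_dna / 1e6:.1f} MB per book")
```

The two figures imply different per-book sizes (roughly 2.7 MB vs 1.3 MB), which is the usual fuzziness of "books' worth" as a unit; either way, both are order-of-magnitude estimates only.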

Most road damage here appears after events of the form "it rained and then the temperature crossed freezing twice a day for a week".

> committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate

That doesn't even make sense.

What stops one model from spouting wrongthink and suicide HOWTOs might not work for a different model, and fine-tuning things away uses the base model as a starting point.

You don't know the thing's failure modes until you've characterized it, and for LLMs the way you do that is by first training it and then exercising it.


This is something they've been working on "in recent months". The Pentagon thing was today.

This cannot have been caused by that, unless they've also invented time travel.


You heard about the Pentagon thing today. Doesn't mean it wasn't started because of political pressure.

9 days ago: https://www.axios.com/2026/02/15/claude-pentagon-anthropic-c...

And I suspect that was not the first time the topic was discussed.


Definitely not the first time. Wall Street Journal reported it back on Jan 29:

https://www.wsj.com/tech/ai/anthropic-ai-defense-department-...


My theory is that Anthropic has been wanting to make this change, and doing it now, while they’re taking a (leaked to the) public stand in the name of ethics, was a good opportunity.

Honest question: why have an elaborate theory with no evidence when the simple facts support a much simpler conclusion?

Anthropic is free to do what they want. I can’t imagine the board meeting where this triple bank shot of goading the government into threatening the company into doing what they already wanted was planned out.


I don't think it's that elaborate. I didn't mean to suggest they intentionally goaded the government into this confrontation. I figure it's a simpler "Oh look, we now have a good opportunity to make that announcement that we were worried about." Considering it's probably the same high-level decision makers on both choices it doesn't need a board meeting. And yes they're absolutely free to do what they want, but they're also not blind to how the public will view their decisions.

> The Pentagon thing was today.

Right, because we are 100% aware of everything the Pentagon does, minute by minute...


It might have been contingency planning: you don't need a weatherman...

The Pentagon issue was reported before today. It only made headlines again because of Hegseth’s comments.

Prediction is hard, especially of the future.

This isn't the first time that new technology has reshaped society, or even just the economy. How well were the results of prior shifts predicted ahead of time?


No.

But giving someone who isn't the government the power to tell the military what it can and can't do seems like something they should object to categorically rather than case-by-case.


A bit late for that. Diego Garcia etc.

> A source familiar with the Tuesday meeting says the Pentagon said it would terminate Anthropic’s contract by Friday if the company does not agree to its terms. Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands.

So they're saying they won't use it if it comes with restrictions.

Either (a) it can be offered without restrictions; (b) they can take it; or (c) the government won't use it. That sounds like a comprehensive list of all the possible things that don't involve someone telling the government what it can and can't do.


The funny thing that no one seems to be talking about is that all the other LLM vendors have already agreed, and Anthropic is the only one holding out.

Anthropic was the only one that was cleared for use in classified systems, though. So it doesn't really seem to matter much if it wasn't being used in those types of systems? xAI has now reached a deal to be used in these circumstances and has signed the paper.

> or (c) the government won't use it

And coerce other defence contractors into not using it.


Not just companies that we think of as defense contractors but a whole ton of corporations that do business with the federal government. They'd be treating Anthropic like it was controlled by the CCP or Revolutionary Guards.

Because then your linter won't be able to tell you when you're done migrating the calls that can be migrated.
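A minimal sketch of that pattern (all names here are hypothetical, not from the thread): keep the old entry point as a thin wrapper that raises a deprecation warning, so a linter, a test run with warnings-as-errors, or even a grep for the old name shows exactly which call sites still need migrating.

```python
import warnings

def fetch_user_v2(user_id: int) -> dict:
    # Hypothetical new API that migrated callers should use.
    return {"id": user_id}

def fetch_user(user_id: int) -> dict:
    # Legacy entry point kept only as a migration aid. Every call that
    # reaches this wrapper emits a DeprecationWarning, so tooling can
    # point at the remaining un-migrated call sites; when the warning
    # stops firing, the migration is done and the wrapper can be deleted.
    warnings.warn(
        "fetch_user is deprecated; use fetch_user_v2",
        DeprecationWarning,
        stacklevel=2,
    )
    return fetch_user_v2(user_id)
```

If the old calls are deleted wholesale instead, that signal disappears, which is the objection being made above.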

Grokipedia is a tool for converting money into improvements in AI (by iterating on it). Any outward resemblance to an encyclopedia is incidental, despite apparently being the intended purpose.

No, it's a tool for converting money into influence. Musk already has a fairly direct way to disseminate his thoughts towards any Twitter users, but that leaves out many people. With Grokipedia he can automatically inject his biases and ideas into search results, ensuring that any AI query could be influenced towards his views.

This is literally already happening: Grokipedia can be a source returned by current AI tools.

