The claim isn't that the LLMs are democratized. The claim is that LLMs are causing software development to be democratized. As in, people who want software are more able to make it themselves rather than having to go ask the elites for some. As in, the elites in IT now have less power to govern what software other people can have.
(Or alternatively, it's getting harder to stamp out "shadow IT" and all the risks and headaches it causes.)
For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books worth of data by the same metric.
> committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate
That doesn't even make sense.
What stops one model from spouting wrongthink and suicide HOWTOs might not work for a different model, and fine-tuning things away uses the base model as a starting point.
You don't know the thing's failure modes until you've characterized it, and for LLMs the way you do that is by first training it and then exercising it.
My theory is that Anthropic has been wanting to make this change, and doing it now, while they're taking a (leaked to the) public stand in the name of ethics, was a good opportunity.
Honest question: why have an elaborate theory with no evidence when the simple facts support a much simpler conclusion?
Anthropic is free to do what they want. I can't imagine the board meeting where someone pitched this triple bank shot of goading the government into threatening the company to do what it already wanted to do.
I don't think it's that elaborate. I didn't mean to suggest they intentionally goaded the government into this confrontation. I figure it's a simpler "Oh look, we now have a good opportunity to make that announcement we were worried about." Considering it's probably the same high-level decision-makers on both choices, it doesn't need a board meeting. And yes, they're absolutely free to do what they want, but they're also not blind to how the public will view their decisions.
This isn't the first time that new technology has reshaped society, or even just the economy. How well were the results of prior shifts predicted ahead of time?
But giving someone who isn't the government the power to tell the military what it can and can't do seems like something they should object to categorically rather than case-by-case.
> A source familiar with the Tuesday meeting says the Pentagon said it would terminate Anthropic’s contract by Friday if the company does not agree to its terms. Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands.
So they're saying they won't use it if it comes with restrictions.
Either (a) it can be offered without restrictions; (b) they can take it; or (c) the government won't use it. That sounds like a comprehensive list of all the possible things that don't involve someone telling the government what it can and can't do.
Anthropic was the only one that was cleared for use in classified systems, though. So it doesn't really seem to matter much if it wasn't being used in those types of systems? xAI has now reached a deal to be used in these circumstances and has signed the paper.
Not just companies that we think of as defense contractors but a whole ton of corporations that do business with the federal government. They'd be treating Anthropic like it was controlled by the CCP or Revolutionary Guards.
Grokipedia is a tool for converting money into improvements in AI (by iterating on it). Any outward resemblance to an encyclopedia is incidental, despite apparently being the intended purpose.
No, it's a tool for converting money into influence. Musk already has a fairly direct way to disseminate his thoughts towards any Twitter users, but that leaves out many people. With Grokipedia he can automatically inject his biases and ideas into search results, ensuring that any AI query could be influenced towards his views.
This is literally already happening: Grokipedia can be a source returned by current AI tools.