
When I read your quote, I was agreeing with you. However, according to the article, it is very far from the very graphic content of the book in question!

It feels like a strawman quote.


It feels like you didn't read or even skim the full article and instead are just reacting to the title.


Feels like you think social media is bad for other people but not you. Every single one of you is posting on social media right now, whilst making the case that it's evil or a problem or bad or some negative descriptor. People who think it's only bad for kids are quick to bring porn up, but that issue is itself an emotional reaction. Remember when, prior to the 1950s, they said homosexuality was bad for mental health, then once it became socially acceptable there was suddenly "evidence" to the contrary.


Note this is the abstract, so please let's not debate the abstract...

The link to download the paper is here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623


I already debated this on HN when this was posted two days ago, but this paper is not peer-reviewed and is a draft. The examples it uses of DOGE and of the FDA using AI are not well researched or cited.

Just as an example, they criticize the FDA for using an AI that can hallucinate whole studies, but they don't mention that it's used for product recalls. The source they cite for this criticism is an Engadget article covering a CNN article that got the facts wrong, since it relied on anonymous sources: disgruntled employees who had since left the agency.

Basically what I'm saying is the more you dig into this paper, the more you realize it's an opinion piece.


Not only is it an opinion piece disguised as a scientific "Article" with a veneer of law, it has all the hallmarks of quackery: flowery language full of allegory and poetic comparisons, and hundreds of superficial references from every area imaginable sprinkled throughout, including but not limited to Medium blog posts, news outlets, IBM one-page explainers, random sociology literature from the 40's, 60's, and 80's, etc.

It reads like a trademark attorney turned academic got himself interested in "data" and "privacy," wrote a book about it in 2018, and proceeded to be informed on the subject of AI almost exclusively by journalists from popular media outlets like Wired/Engadget/Atlantic, then brought it all together by shoddily referencing his peers at Harvard and curious-sounding 80's sociology. But who cares, as long as AI bad, am I right?


Are there any particular points you want to refute?


I'm finding it hard to identify any particulars in this piece, considering the largely self-defeating manner in which the arguments are presented, or should I say compiled, from popular media. Had it not been endorsed by Stanford in some capacity, and sensationalised by means of a punchy headline, we wouldn't be having this conversation in the first place! Now, much has been said about various purported externalities of LLM technology, and continues to be said on a daily basis, here in Hacker News comments if not elsewhere. Between wannabe ethicists and LessWrong types contemplating the meaning of the word "intelligence," we're in no short supply of opinions on AI.

If you'd like to hear my opinion, I happen to think that LLM technology is the most important thing, arguably the only thing, to have happened in philosophy since Wittgenstein; indeed, Wittgenstein presents the only viable framework for comprehending AI in all of the humanities. Partly because it's what an LLM "does": compute arbitrary discourses. And partly because that is what all good humanities end up doing: examining arbitrary discourses, not unlike the current affairs they cite in the opinion piece at hand, for the arguments they present and, ultimately, the language used to construct those arguments. If we're going to be concerned with AI like that, we should start by making an effort to avoid the kinds of language games that allow frivolously substituting "what AI does" for "what people do with AI."

This may sound simple, obvious even, but it also happens to be much easier said than done.

That is not to say that AI doesn't make a material difference to what people would otherwise do without it. But exactly like all of language, it is a tool: a hammer, if you will, that only gains meaning during use. AI is no different in that respect. For the longest time, humans had a monopoly on computing arbitrary discourses. This is why lawyers exist, too: so that we may compute certain discourses reliably. What has changed is that now computers get to do it, too; currently, with varying degrees of success. For "AI" to "destroy institutions," or in other words, for it to do someone's bidding to some undesirable end, something in the structure of said institutions must allow that in the first place! If it so happens that AI can help illuminate these things, as all good tools in the philosophy of language do, it also means that we're in luck, and there's hope for better institutions.


You seem to be relying too heavily on your own "language games". For instance, flip flopping between using "LLM technology" and "AI" to refer to what appears to be the same thing in your argument. I find it all quite incomprehensible.

> If you'd like to hear my opinion, I happen to think that LLM technology is the most important, arguably the only thing, to have happened in philosophy since Wittgenstein;

So, assume cognitive bias and a penchant for hyperbole.

> LLM technology is the most important, arguably the only thing, to have happened in philosophy

Why would "LLM technology" be important to philosophy?

> arguably the only thing, to have happened in philosophy

Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?

> indeed, Wittgenstein presents the only viable framework for comprehending AI in all of humanities.

What could this even mean?

Linguistics would appear to be at least one other of the humanities applicable to large language models.

Wittgenstein was famously critical of Turing's claim that a machine can think, to the extent that he claimed it led Turing into misunderstandings even in his mathematics.

Wittgenstein also disliked Cantor, and even the concept of 'sets'.

I am struggling to see how this all adds up to being the "only viable framework for comprehending AI".

> If it so happens that AI can help illuminate these things, like all good tools in philosophy of language do, it also means that we're in luck, and there's hope for better institutions.

This is a wild ride.

So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions," and it's a good thing because we can improve the institutions by fixing the exploitable areas; which is also a wholly speculative outcome with many counterexamples in real life.

Reads like: "Sure, I broke your window and robbed your store, but you should be thanking me and encouraging me to break more windows and rob more people because I illuminated that glass is susceptible to breaking when a rock is thrown at it. Oh, your shit? I'm keeping it. You're welcome."


My writing can be erratic sometimes, but "flip flopping" is a bit unfair, don't you think? When they say "AI," I assume they mean LLM technology and its applications above all else; the so-called "intelligent agent" discourse is a big one, but it's important to remember why it works in the first place. Well, because the pretraining stage already captures all the necessary information, right? Moreover, mechanistic studies show that the most significant information is preserved in the dense layers, not the attention heads. So there's something very fundamental, albeit conceptually simple, going on that allows for a whole bunch of emergent behaviour, enabling much more complex discourses.

> Why would "LLM technology" be important to philosophy?

Well, because it has empirically proved that Wittgenstein was more or less right all along, and linguists like Chomsky (I would go as far as saying Kripke, too, but that's a different story) were ultimately wrong! To put it simply: in order to learn language, and by extension compute arbitrary discourses, you don't ever need to learn definitions of words. All you need is demonstrations of language use. The same goes for syntax, grammar, and a bunch of other things linguists obsessed over for decades, like modality. (But that's a different story altogether!) Computer science people call this the bitter lesson, but that is only a statement about predictive power, not emergent power. If it were only the case for learning existing discourses, that wouldn't be remotely as surprising. Computing arbitrary discourses is a much stronger proposition!

> Did "LLM technology" "happen in philosophy"? What does it mean to "happen in philosophy"?

LLMs were a bit of a shock, and a lot of people are not receptive to this idea that the Wittgensteinians won, basically, game over. There will be more flailing, but ultimately they will adapt. You can already see this with Askell and other traditionally-trained philosophy people adopting language games, it's only that they call it alignment. It's no coincidence she went to Cambridge. It will take a bit of time for "academic philosophy" to recognise this, but eventually they will, because why wouldn't they?

Game over.

> Linguistics would appear at least one other of the applicable humanities to large language models.

Yeah, not really. All the interesting stuff that is happening has very little to do with linguistics. There's prefill from grammar, but it would be a stretch to attribute it to linguistics. In the linguistic literature, word2vec was a big deal for a while, but they did fuck-all with it ever since. I'm not trying to be hyperbolic here, either.

> Wittgenstein was famously critical of Turing's claim that a machine can think

I never understood this line of reasoning. So what if Wittgenstein and Turing had disagreements at the time? Wittgenstein never had a chance to see LLMs, or anything remotely like them. This was an unexpected result, you know? We could have guessed that it would be the case, but there was no evidence. We still don't have a solid theory to go from Frege to something like modern LLMs, and we may never have one, but the evidence is there: Wittgenstein was right about what you need for language to work.

> Wittgenstein also disliked Cantor. and even the concept of 'sets'.

I don't see what this has to do with anything.

> So, "AI" exploits weaknesses in institutions, but this is different from "destroying institutions", and its a good thing because we can improve the institutions by fixing the exploitable areas; which is also a wholly speculative outcome with many counterexamples in real life.

I never said AI "exploits" anything. I only ever said that being able to compute arbitrary discourses opens many more doors than a pigeonholing insinuation like that would suggest. What wasn't obvious before is becoming obvious now. (This is why all these people are coming out with "revelations" about how AI is destroying institutions.) And it's not because of material circumstance. Just that some magic was dispelled, so stuff became obvious, and this is philosophy at work.

This is real philosophy at hand, not some academic wanking :-)


Again, I find it very difficult to get past your own personal "language games".

> Game over.

Is a perfect example. What "game" is "over"?

Chomsky's philosophical linguistics have long been derided and stripped for parts, and he was friends with Epstein and his cohorts, so he can fuck right on off to disgrace and obscurity; but his goals within linguistics, as I understand them, were to identify why humanity has its faculty of language.

Wittgenstein was uninterested in answering the same question, and large language models are about as far from an answer to that question as one can get.

So, again, I am unsure what has been settled to the point of decrying "Game over".

Does this game only have two "teams"? One possible "outcome"?

Who's on what side of the "game"?

What have they said that shows their allegiance to one idea, and what have they said in opposition to the other?

What about large language models either support or contradict, respectively, said ideas?

As a huge fan of the ideas and writings of Wittgenstein I find it hard to believe that there are contemporary 'philosophers' who disagree with his ideas, namely that words take on meaning through context, but there are certainly trolls and conservatives in every field.


> an Engaget article that is covering a CNN article that got the facts wrong, since it relied on anonymous sources that were disgruntled employees that had since left the agency.

Disgruntled doesn't mean inaccurate.


It is inaccurate though. Those employees never used the system, and incorrectly cited what it is used for. I did some legwork before I drew my conclusions.

EDIT: citing some resources here for those that are curious.

Original article cited by the paper: https://www.engadget.com/ai/fda-employees-say-the-agencys-el...

Actual CNN article the Engadget article is based on: https://www.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-reg...

Neither present a serious take about what's going on with Elsa at the FDA. They both glom onto a handful of anonymous sources without looking deeper.

A far more serious take about Elsa's use at the FDA in a journal that prides itself on scientific rigor and ethical behavior: https://publichealthpolicyjournal.com/elsa-llm-at-the-fda-a-...


This is what drafts are for. It's either a very rough draft with some errors and room for improvement, or a very bad draft sitting on the wrong foundation.

Either way, it's an effort, and at least the authors will learn what not to do.


No, it’s definitely not what drafts are for. Fundamental issues of the nature pointed out by the parent comment are way too serious to make it into a draft. Drafts are for minor fixes and changes, as per the usual meaning of the word draft.


> Institutions like higher education, medecine, and law inform the stable and predictable patterns of behavior within organizations such as schools, hospitals, and courts., respectively,, thereby reducing chaos and friction.

Hard to take seriously with so many misspellings and duplicate punctuation.

I vibe with the general "AI is bad for society" tone, but this argument feels a lot to me like "piracy is bad for the film industry" in that there is no recognition of why it has an understandable appeal with the masses, not just cartoon villains.

Institutions bear some responsibility for what makes AI so attractive. Institutional trust is low in the US right now; journalism, medicine, education, and government have not been living up to their ideals. I can't fault anyone for asking AI medical questions when it is so complex and expensive to find good, personalized healthcare, or for learning new things from AI when access to an education taught by experts is so costly and selective.


> Hard to take seriously with so many misspellings and duplicate punctuation.

Very bad writing, too, with unnecessarily complicated constructions and big words seemingly used without a proper understanding of what they mean (machinations, affordances).


[flagged]


It's funny how many of us know the shortcomings of AI, yet we can't be bothered to do the thing ourselves and read, or at least skim, an in-depth research paper to increase our depth.

Even if we don't agree with what we read, or find its flaws.

Paradox of the century.

P.S.: Using ChatGPT to summarize something you don't bother to skim, while claiming AI is a scam, is the cherry on top.


I read the entire paper a couple of days ago and have done a lot of work to critique it because I think it is flawed in several ways. Ironically, this AI summary is actually quite accurate. You're getting downvoted because posting AI output is not condoned, but that doesn't mean that in this case it is not correct.


They're getting downvoted because without even taking a look at the paper, they felt that "please create a summary of the stupid, bad faith, idiot, fake science paper" is a reasonable way to ask for a summary.


Okay, but does anybody care that the paper that's in the link doesn't substantiate its central claim with empirical evidence?


It's ok, I'll just read the AI summary...


366k views in 4 days hardly qualifies as a worldwide hit. It's decent, but other ads saw more views faster this year, like that American Eagle ad with Sweeney.

It's hard to measure on YouTube due to the weight of paid views, but still.

Anyway, it's a cute ad.


I think it mostly blew up via unofficial reposts, since that original version was in French without subtitles.

This one copy on X has 27 million views after 2 days: https://x.com/pawcord/status/1998361498713038874


Ok thanks, this changes things! X exaggerates how it counts views, but overall I do believe millions saw it.


True on Lutnick, but he is playing into Trump's deeply held belief (despite every data point saying the opposite) that Americans want manufacturing jobs.


In a darkly humorous way, it is the effect Trump intended, but really, how many Americans dream of a sweatshop job? Reminds me of a famously documented conversation between Cohn and Trump during the first administration (from Bob Woodward's book):

Cohn starts assembling every piece of economic data to try and convince Trump that American workers did not aspire to work in assembly factories. “See,” he says to Trump at one point, “the biggest leavers of jobs – people leaving voluntarily – is from manufacturing.” “I don’t get it,” replies Trump. Cohn soldiers on. “I can sit in a nice office with air conditioning and a desk, or I can stand on my feet eight hours a day. Which one would you rather do for the same pay?” Trump still wasn’t buying it. Eventually, exasperated, Cohn simply asks Trump: “Why do you have these views?” “I just do,” Trump replies. “I’ve had these views for 30 years.” “That doesn’t mean they’re right,” says Cohn. “I had the view for 15 years I could play professional football.”

https://www.gq-magazine.co.uk/article/the-best-fights-betwee...


Well this seems like a fun read about a guy telling Trump, “just shut the fuck up and listen.”


Specifically here, he is under oath in France, so an American gag order wouldn't protect him from the French justice system.

This makes it less likely he's lying. It could be possible Microsoft France has a "rogue" employee system where a key person obeys only Microsoft US orders rather than his French boss and French law. Then the boss can swear to the Senate that they're complying.

This is exactly the system the US Congress accused TikTok of having set up.


If the data center is operated by a "trusted subsidiary" as the article mentions and everyone in key roles is a French citizen with no connection to the US then there is no one to give a gag order.

In practice the US HQ could mandate a security update that secretly uploads all data to the US but that's a whole other can of worms that I don't think anyone is ready to open.


The data center runs software written and controlled by US companies and likely has a 24/7 software-related support team distributed across the world...

In a modern cloud data center you don't need someone physically plugging a USB stick into a server. You just need a back door in a cloud software stack many times the size of any modern operating system, which often even involves custom firmware for very low-level components, and where the attacker has the capability to convince your CPU vendor to help them...


... a backdoor that is a necessity anyway, because it is constantly used to upgrade the cluster software.


>In practice the US HQ could mandate a security update that secretly uploads all data to the US but that's a whole other can of worms that I don't think anyone is ready to open.

Incredibly ambiguous/unsatisfying sentence. If this French hearing is concerned about French data security, then asking a question about your "in practice" is exactly the can of worms the French would like to open.


Everyone and everything is automatically downloading updates of US software. It's a door that is impossible to close in the short term.


> This is exactly the system the US Congress accused TikTok of having set up.

"Every accusation is a confession" remains undefeated


Agreed.

--

As a side rant:

"Specious accusations are often confessions."

I understand the psychology and casual use of absolutely worded reactions, and that their extreme expression is not taken literally but as emphasis. But I still prefer balanced wording.

A surprisingly large number of people tragically clash and talk past each other over charged non-issues that normal, undramatic language would render moot.

I.e. "We must believe all X", vs. "We should listen to all X", ... and many more.

"Black Lives Matter Too" isn't as pithy. Nor should the last word be necessary for anyone to understand the three-word version. But the fourth word, nodding to the wider context, pre-counters a lot of ridiculous responses to the original line. I'm not actually suggesting a sea change in a well-recognized movement banner line. But it is a widely observed example of how any lack of pedantic clarity is seized upon by motivated reactionaries to achieve politically significant impact via obtuse reinterpretation.

A little verbal pedantry is an effective speed bump against the siren song of motivated or inadvertent polarization.


It's naive or foolish to think that the problem with "Black Lives Matter" was insufficient specificity.

People who are not operating in good faith won't operate in good faith. There were thousands of words written on the phenomenon protested by BLM, but those are easily ignored. Three words are twisted and co-opted by propagandists. Consider a function that describes "comprehension by bigots" as a function of word count. We know that 0 words yields 0 comprehension. Evidence suggests that 10k words also yields 0 comprehension. There is no evidence that this Laffer curve will ever achieve anything other than zero.

It's possible to reach and change bigots' minds, but it requires human connections. Not sloganeering, prose, or reels.


I agree.

I wasn’t making a hard argument.

Words are not everything. Still, they matter.

To the degree that pushback against anti-minority mistreatment can be framed as pro-universal (reciprocal) respect, I think it helps. Given the latter is in fact the real, most general, and most relevant principle.

That avoids the framing created and imposed by biases, i.e. that somehow race or some other category is the question, instead of being (logically and morally) irrelevant to the value of reciprocal respect. Not forgetting the point of it all avoids actual or perceived reverse biasing: minority rights and equality being interpreted by either side as anti-majority, or as coming at anyone's expense.

Some shrill minority defenders do manage to imply that, as well the people having trouble respecting some group.

These are just thoughts based on what I find works better in personal encounters with people I know or ran into who had or have difficulty seeing the world without in-group/out-group filters of various kinds.

Keep the simple, general, most important thing clear and center.

Avoid letting the conversation be artificially narrowed by exactly the destructive framing we want to push back on. The narrower the framing the more people forget, ignore, and successfully distract from the main principle. The more people get bogged down in narrower and narrower arguments, the less people understand each other.


Until this happened, MS was still going around trying to convince lawyers to use their cloud and telling them that there is no issue.

Including certain contractual "standard"(1) agreements which would make some of their higher management _personally_ liable for undue data access, even under the CLOUD Act from the US!!!

(1) As in standard agreements for providers which store lawyer data, including highly sensitive details about ongoing cases, etc.

So you can't really trust MS anymore at all, even when personal liability (e.g. for lying under oath) is at stake. And the maximum ceiling on the penalties for lying under oath seems to be less than what you can run into in the previously mentioned case...

You also have to look a bit closer at what it even means if "the French MS CEO swears they are complying": it means he doesn't know about any non-compliance, told his employees to comply, hired someone to verify it, etc.

But the US doesn't need the French CEO to know; they just need to gain access to the French/EU servers through US employees, which, given that most of the infra software is written in the US and admin teams are international for 24/7 support, is really not that hard...

And even if you want to sue the French CEO after a breach, or after he (hypothetically) lied, he would just say he didn't lie because he was lied to as well, leading to an endless goose chase, and "oops", by now the French CEO is somehow living in the US.

And that is if you ever learn about it happening at all, but thanks to the US having pretty bad gag-order/secret-court stuff, the chance of that is very low.

So from my POV it looks like MS has been knowingly and systematically lying to and deceiving customers, including those with highly sensitive data, and EU governments about how "safe" the data is, even where it leads to personal legal liability for management.

And I seem to remember that AWS was giving similar guarantees it most likely can't keep, but I'm not fully sure. I don't know about Google.

Oh, and if you hope that the whole Sovereign Cloud thing will help, it won't. It's a huge make-pretend theater, moving millions upon millions into the hands of US cloud providers while not providing a realistic solution to the problem it is supposed to solve, and neglecting the local competition which actually could make a difference, smh.


The max penalty for things like this is actually life imprisonment, though. If you gather certain types of information to aid a foreign power without authorization, it's espionage.

There wouldn't be any lawsuit. If you do this kind of thing you get arrested, get a trial, and then you are in prison forever.


Except we are speaking about lying under oath, not espionage; you don't get a trial for espionage just because you lied under oath.

And leading management also technically doesn't need to know that it happens for it to be doable. In other words, they have a lot of reason to "accidentally" not know about it / have it overlooked.

This means that even if it happens, they are very unlikely to be charged with anything more than negligence.

But the contracts I mentioned above basically state: "it doesn't matter why it happened, whether you knew, or whether it was your fault; as long as there was the smallest bit of negligence on your side, you are on the hook for it personally." So in a situation where they can effectively avoid espionage trials (because they didn't commit espionage, just negligence) they are still held responsible.

If high-level management reliably went to prison for things like that, you wouldn't need additional contracts to make sure they actually have an incentive to actively try to find/prevent anything like this and act very non-negligently.


He wouldn't even be charged with lying under oath if he lied and it became apparent, because there'd be no point considering the much more serious espionage charges. They'd only prosecute the espionage part.

Participating in a plot to supply French state information to the US is espionage. France also apparently has a broad definition of espionage relative to some other EU countries.

States have a tendency to come down rather harshly on this kind of thing, so this idea about negligence is, I think, unlikely. If you know about it, the charges will be espionage charges. If it happened, it would be the biggest thing ever. They'd arrest most Microsoft employees in the relevant teams, as well as the leadership, and probably many others too. Just the interrogations would probably take half a year due to a lack of interrogators.


> This make it less likely he's lying. It could be possible Microsoft France has a "rogue" employee system where a key person only obeys to Microsoft US orders rather than his French boss and French law. Then the boss can swear to the Senate that they're complying.

It's also possible that US employees had access to French servers without anyone in France knowing.


Less likely doesn't say much though. He may have simply weighed the chances of the French government ever finding out that he lied.

> It could be possible Microsoft France has a "rogue" employee system where a key person only obeys to Microsoft US orders rather than his French boss and French law.

I would think that is not just a possibility, but a certainty.


Well, in a few notorious cases the tax services cared and the voters cared.


So you're saying the entire part about ALBMs (the crux of this article) is fake. Why not start with this then?

Maybe conspiracy theories are not welcome here.


They are incompatible. He is lying or not smart enough to filter out propaganda.


> He is lying or not smart enough to filter out propaganda.

This kind of divisive rhetoric helps no one.

