Hacker News | Havoc's comments

Very much feels like OpenAI trying to PR-manage their weaker ethical stance

Both their stances are flawed because their ethics apparently end at the border - neither has a problem being unethical internationally (all the red-lines talk is about what they don’t want to do in the US)

? We're talking about autonomous weapons systems. That would be international.

Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.

(But they do not find an issue with international intelligence gathering, which is a legitimate purpose of a national security apparatus.)


I don’t think deploying “80% right” tools for mass surveillance (or anything that can remotely impact human life) counts as lawful in any context.

Just because the US currently lacks a functioning legislative branch doesn’t magically make it OK when gaps in the law are reworded into “national security”.


I'm really not sure what you're trying to say or assert; could you put it more clearly?

One of Anthropic's lines in the sand was domestic mass-surveillance.

> > Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.

> One of Anthropic's lines in the sand was domestic mass-surveillance.

And?


I think the person you are replying to takes issue with the thing which you have simply asserted.

Which thing? Helping intelligence / international surveillance?

There's an obvious difference.

Surveillance within the border is oppressive 1984-style surveillance-state behavior.

International spying is a universal tradition.


I used S3 as a vibe code case study for whether I can just make my own. Learnings:

* Yes you can absolutely spin up a DIY S3 server

* When you run your server against a credible bench suite, it surfaces a bunch of issues (ceph's s3-tests was a disheartening 5 passes out of 800)

* Vibe coding can address the core issues & make significant progress on the 800 issues. Most of those 800 don't actually matter

* Low trust in resulting outcome, but I do plan on running some personal infra off DIY s3 - shopping list etc.

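For anyone curious what a suite like that actually exercises, here's a toy in-memory sketch of a couple of S3 behaviors conformance tests typically probe (illustrative only, not the actual ceph/s3-tests suite; the real suite also covers auth, multipart uploads, ACLs, and much more):

```python
import hashlib

class ToyS3Bucket:
    """In-memory stand-in for a single S3 bucket (illustrative only)."""
    def __init__(self):
        self._objects = {}

    def put_object(self, key, body: bytes):
        # For simple (non-multipart) PUTs, S3's ETag is the MD5 hex of the body
        etag = hashlib.md5(body).hexdigest()
        self._objects[key] = (body, etag)
        return {"ETag": f'"{etag}"'}

    def get_object(self, key):
        body, etag = self._objects[key]
        return {"Body": body, "ETag": f'"{etag}"'}

    def list_objects(self, prefix=""):
        # S3 list operations return keys in ascending lexicographic order
        return sorted(k for k in self._objects if k.startswith(prefix))

bucket = ToyS3Bucket()
bucket.put_object("a/1.txt", b"hello")
bucket.put_object("a/0.txt", b"world")
print(bucket.list_objects(prefix="a/"))  # ['a/0.txt', 'a/1.txt']
```

Details like key ordering and ETag format are exactly the sort of thing a vibe-coded server gets subtly wrong and a good test suite catches.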


Good.

I do hope corporations in general take a harder stance on this. From a society perspective, people with inside knowledge fleecing randoms is not a win. We've got that somewhat under control on the stock exchange, but we have this absurd situation where on prediction markets it's a free-for-all and everyone pretends this is fine.

I also think corporations should distance themselves from individuals willing to fleece randoms. Trading in general is very wild-west, survival-of-the-fittest, but active exploitation of insider knowledge speaks of very poor moral character


Honestly it seems stupid but fine to me. Like if someone random comes up to me on the sidewalk and says hey if OpenAI announces a browser tomorrow, you give me $100. If not I'll give you $1000. Obviously I'm not going to take them up on it, they clearly have inside information.

If you're betting on a prediction market without insider information then you're just... The fool who is soon parted from his money one way or another.

I generally feel like people should be free to do whatever insane stuff they want with their own lives.


> I generally feel like people should be free to do whatever insane stuff they want with their own lives.

The problem with people doing insane stuff with their "own money" is the burden they often exact on their family or society.

Perhaps the realm of independence starts when loans are reasonable and current, there is sufficient child support, and they are meeting a base savings rate for their retirement.

Speaking of which, perhaps any UBI could also use a minimal criteria, reviewed annually but without any barriers on first year eligibility.


>Like if someone random comes up to me on the sidewalk and says hey if OpenAI announces

Then you hopefully understand that randoms approaching you is not the same as reality.


Stuff like this just makes the anti-woke gang look more reasonable.

Not enjoying this verification-heavy future


Advances in this space are always welcome.

I see the change in KLD values is pretty modest vs the prior version. Does anyone know how that translates to the real world? Is it more of a linear-type situation, or exponential, etc.?


Yes, the new blog post https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks has some benchmarks from community members comparing our quants vs others, e.g. on LiveCodeBench!
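For background, the KLD being discussed is typically the KL divergence between the full-precision model's next-token distribution and the quantized model's, averaged over a corpus. A minimal sketch (the example distributions here are made up):

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i), in nats.
    P: reference distribution (e.g. full-precision model's next-token probs),
    Q: approximation (e.g. the quantized model's probs)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

full  = [0.70, 0.20, 0.10]   # hypothetical next-token distribution
quant = [0.65, 0.24, 0.11]   # slightly perturbed by quantization
print(kl_divergence(full, full))   # 0.0 for identical distributions
print(kl_divergence(full, quant))  # small positive number
```

KLD is always non-negative and zero only when the distributions match; how a given reduction in mean KLD maps onto downstream benchmark scores is an empirical question rather than a fixed linear or exponential law.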

Reviewing history is not a great way to approach groundbreaking tech

"Not learning from history because the present is the present" is a pretty accurate description of the world in 2026, at least.

You are not going to stop people from reading into history, ever. If anything, people need to learn more about what happened in the past.

We have yet to invent groundbreaking tech that transcends either human nature or the banal depravity that stems from the profit motive at scale. Prior history of major tech innovations therefore may have some insight to offer regarding expected outcomes of the current hype wave around AI. The notion that technology so cleanly breaks from underlying social paradigms as to be wholly unpredictable is one of the tech industry's most persistently naive and destructive mythologies.

It is democratising from the perspective of non-programmers: they can now make their own tools.

What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to do open models. The first hints are already showing: advance access to ds4 only for Chinese hardware makers.


Programming is probably the most democratized profession ever.

The problem was never access barriers, but the fact that people are too lazy to study even 200-300 pages on something as simple as Ruby on Rails.


I think there’s an actual barrier. I’ve seen it, especially since the (until recently) brisk market for programmers was sucking people out of traditional engineering.

It’s puzzling because programming seems so easy and fun. And even before LLM’s, we had StackOverflow after all.

But for some reason a lot of people just hit a wall when they try to learn programming, and we don’t know why. The “CS 101” course at colleges has extremely high attrition.

A minor secondary effect may have been that if you were not a software developer, your boss didn’t want to see you programming.


They can rent their own tools, more like.

No, they can make their own tools. They rent someone else's tools in the process of making their own tools.

Not entirely true. For instance, if I use LLMs to build an iOS app I still need to pay Apple $100 to use my own app for an undetermined amount of time.

If I build a web app I still need to pay for a domain and a server, plus egress.

We are just renting. Wouldn’t be surprised if in the future this gets even more depressing


One day people will not even be able to own computers anymore. They will be owned, controlled, and rented out by corporate elites for limited purposes only. The personal computer will probably either cease to exist due to economic factors or be made illegal for citizens to own freely. We'll probably need licenses to operate one.

The mere concept of people "making their own tools" is just comical in this bleak timeline.


They can continue renting to maintain the tools they make.

Terrible argument. They always could learn and DIY.

You have to have a knack for it; most people are not programmer types

I don't think it's about being a "type" so much as choosing what to specialize in.

I could learn plumbing skills and do the plumbing around my house. I've chosen not to.


There’s definitely a type. My wife is much smarter and harder working than me, near perfect SAT score, made it through an engineering degree at a much better school than I went to. Then did med school, residency, and fellowship.

She’s insanely quick. I once told her about one-way hashing, and before I was even halfway through the explanation, before I had said a thing about what they were used for, she stops me and says “oh, so that’s why websites can’t just send you your password when you forget it”.

At her job she has to call time of death for kids, tell people their kid has cancer, deal with people who literally want her dead, work shifts where she is the one ultimately responsible for the life and death of every patient that walks in the door, and work 7a-4p one day then 10p-7a the next.

She can do all that, but she says that she hated her Matlab class in college more than anything else, and she could absolutely never do my job because she doesn’t have it in her to bang her head against a wall for an hour chasing down a bug that turns out to be a typo.
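Her intuition is exactly right: sites store only a one-way hash, so there is no password to send back. A minimal sketch using Python's stdlib (real systems use dedicated password hashes like bcrypt or argon2; the parameters here are illustrative):

```python
import hashlib, hmac, os

def hash_password(password: str, salt=None):
    # Salted, slow one-way hash; the site stores (salt, digest), never the password
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Re-hash the attempt and compare; the stored digest can't be run backwards
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong",   salt, digest))  # False
```

Since only (salt, digest) is stored, a "forgot password" flow can only reset the password, never recover it.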


... if they are privileged enough to be able to take time away from family and jobs.

The current crop of LLMs are subsidised enough to make this learning less expensive for those with little of both time and money. That's what's meant by democratised.


The people taking the lead in most of AI in America are bootlickers of fascism. So, not much difference from China on a long enough timeline.

The US losing the plot doesn’t change the fact that the tech is fundamentally democratising on a personal level.

If all the frontier models disappear into autocratic dark holes then yeah, we have a problem, but the fundamental freedom gain of “individuals can make tools without knowing how to code” isn’t going anywhere


Given the number of planes, this isn’t going to be a single precision strike

I doubt either of them is keen to enter the fray here. Russia is making Shaheds at home now anyway

You just need access to the videos, and then the pedo cabal does whatever you want
