Hacker News | woah's comments

All they did was prompt an LLM over and over again to execute one iteration of a towers of hanoi algorithm. Literally just using it as a glorified scripting language:

```
Rules:
- Only one disk can be moved at a time.
- Only the top disk from any stack can be moved.
- A larger disk may not be placed on top of a smaller disk.

For all moves, follow the standard Tower of Hanoi procedure: If the previous move did not move disk 1, move disk 1 clockwise one peg (0 -> 1 -> 2 -> 0).
If the previous move did move disk 1, make the only legal move that does not involve moving disk 1.

Use these clear steps to find the next move given the previous move and current state.

Previous move: {previous_move} Current State: {current_state} Based on the previous move and current state, find the single next move that follows the procedure and the resulting next state.
```

This is buried down in the appendix, while the main paper is full of agentic swarms this and millions of agents that, with plenty of fancy math symbols and graphs. Maybe there is more to it, but the fact that they decided to publish with such a trivial task, which could be much more easily accomplished by having an LLM write a simple Python script, is concerning.
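For reference, the quoted procedure really is a few lines of Python. A rough sketch (function names and the state representation are my own, not from the paper):

```python
def next_move(state, prev_move):
    """Return (disk, src, dst) for the next move, per the quoted procedure.

    state: three stacks (lists), largest disk at the bottom, top at the end.
    prev_move: the previous (disk, src, dst) tuple, or None at the start.
    """
    if prev_move is None or prev_move[0] != 1:
        # Previous move did not move disk 1: move disk 1 clockwise (0 -> 1 -> 2 -> 0).
        src = next(i for i, peg in enumerate(state) if peg and peg[-1] == 1)
        return (1, src, (src + 1) % 3)
    # Previous move did move disk 1: make the only legal move between the
    # two pegs whose top disk is not disk 1 (smaller top onto larger/empty).
    a, b = [i for i in range(3) if not (state[i] and state[i][-1] == 1)]
    top_a = state[a][-1] if state[a] else float("inf")
    top_b = state[b][-1] if state[b] else float("inf")
    return (top_a, a, b) if top_a < top_b else (top_b, b, a)

def solve(n):
    """Apply the procedure until the tower is rebuilt; return the move count."""
    state = [list(range(n, 0, -1)), [], []]
    prev, count = None, 0
    while len(state[1]) < n and len(state[2]) < n:
        disk, src, dst = next_move(state, prev)
        state[dst].append(state[src].pop())
        prev = (disk, src, dst)
        count += 1
    return count
```

With the clockwise rule as stated, solve(n) finishes in the optimal 2^n - 1 moves (e.g. solve(3) == 7), which is exactly the loop the paper has the LLM play one step of.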


Good lord, I can only imagine the wasted electricity.

Just work on more ambitious projects?

Exactly, go where the AI can't go.

I am building a project that AI is incapable of doing. I really need to think hard and progress slowly, but hopefully it will create real value.


The realtor might pay for it or even do it themselves. It would take 5 minutes with a reciprocating saw. Or the scammer tells the realtor "never mind that" and the realtor tells the buyer.

Does title insurance cover it if the sale actually goes through?

Warning: it's a Gary Marcus article. This is a guy who started out dissing LLMs to pump his own symbolic AI startup, was (likely to his surprise) hoisted onto the shoulders of a mass of luddites, and has now pivoted to a career as an anti-AI influencer.

He didn't "start out" when LLMs were taking off, or when he founded a symbolic AI startup.

He "started out" a lot earlier: he wrote a book in 2001, has written 8 books in total, and has publications in academic journals like Cognitive Psychology dating back to 1995.

The world didn't start when LLMs got popular.


Great, can't wait to balance the ultra-pro-AI views I get every day from mainstream media, X, Hacker News, Reddit, etc.

I made a similar comment and was flagged. Seems like AI is now in the same category as Elon Musk on HN: negative sentiment = autoflag.

I wish we would see these warnings on all articles and comments from pro-AI influencers as well.

Except you do get it all the time, just not as politely. Under every Simon Willison article you can see people call him a grifter. Even under the Redis developer's posts you can see people insulting him for being pro-AI.

https://garymarcus.substack.com/archive?sort=new

Yeah, this guy is... something. The text-form equivalent of YouTube Shorts.


Meh, he’s been very fairly calling out AI companies for over-promising and under-delivering, along with critiquing the idea that training LLMs on bigger data will solve AGI.

He’s vocal and perhaps sometimes annoying, but who cares. A number of his articles have made great points at times when people are losing themselves in hype for AI. Reality is somewhere in the middle, and it’s good to have more people critiquing what AI companies are doing. Who cares if a lot of the blog posts are short and not that interesting.


> Meh, he’s been very fairly calling out AI companies for over-promising and under-delivering, along with critiquing the idea that training LLMs on bigger data will solve AGI.

But we don't want that! We want blind faith in the promises of SV AI companies. We want positivity!


[flagged]


Because the overall "discourse" on this has devolved into tribal politics that have very little to do with the technology anymore.

I think that the tribalism is one sided.

On one side you have people who know how to build deep neural networks saying one thing, and on the other there seem to be people who don’t even know what tanh is and are very sure of their “strong” opinions.

Do you have an example of someone who actually knows how LLMs work who has a tribalistic view?


“people who don’t even know what tanh is” sounds like something a tribe-member criticizing outsiders would say :)

Lol, I like that as a joke, but surely you aren’t saying that the opinion of a person who has no idea how something works should be given equal weight to that of someone who actually knows? Maybe you are; that seems to be how things work now.

I think you already get what I am saying, but it seems that there are maybe three groups: two who know how things work under the hood, have differing opinions, and are curious to hear the other side, and one group who have no idea how things work, are very loud, have sci-fi fantasies, and spout strong opinions.

I wouldn't call that discourse; I would call it ignorance.


It's weird, though: the critics of LLMs have very good points, usually very reasonable ones, but when they share them they get downvoted and criticized like someone who was critical of NFTs in 2022.

I wonder why that is, and what it portends regarding the future of that "tribe"


Your username lol

Was doing some back-of-the-envelope math with ChatGPT, so take it with a grain of salt, but it sounds like in ideal conditions a 1 m² radiator could dissipate 300 W. If this is the case, then it seems like you could approach a viable solution if putting stuff in space were free. What I can't figure out is how the cost of launch makes sense, and what the benefit over building it on the ground could be.

What temperature were you assuming?

Because the amount of energy radiated varies with the temperature to the fourth power (P=εσT^4).

Assuming very good emissivity (ε = 0.95) and a ~75 °C (~350 K) operating temperature, I get 808 W/m².
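For anyone who wants to sanity-check those numbers, here's a quick Stefan–Boltzmann sketch using the values from the comments above (the helper name is mine):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_w_per_m2(emissivity, temp_k):
    """Power radiated per square metre: P = epsilon * sigma * T^4."""
    return emissivity * SIGMA * temp_k ** 4

print(round(radiated_w_per_m2(0.95, 350)))  # ~75 C: prints 808
print(round(radiated_w_per_m2(0.95, 300)))  # ~27 C: prints 436
```

The T^4 dependence is why the assumed operating temperature matters so much: dropping from 350 K to 300 K cuts the radiated power nearly in half.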


I was adding some generous padding and rounding up. I assume they'd try to get it to operate as hot as possible

They would most likely launch with TPUs designed for space and target lower temperatures, closer to 60C.

The various GPU-accelerated terminal projects always make me chuckle

Not sure why; terminals have literally been GPU-accelerated text-rendering solutions since the very beginning of rendering text.

Heck, not even just a separate card or whatever: back in the terminal days you practically had a whole separate small computer just to display the output of the bigger computer on a screen instead of paper.

It's messed up that Grok underwrote all those subprime mortgages in 2008

I think the argument is that it's messed up that a large debt swap from xAI kept Musk's margin on Twitter from being called by his investors, and now that debt is being absorbed by SpaceX.

> Musk's margin on Twitter from being called by his investors,

Primary and largest investors in X are: Elon Musk, Saudi Prince Alwaleed bin Talal, Larry Ellison, Jack Dorsey.

I don't know that you need to worry about their financial well-being or that they are getting a raw deal.


I think people are more concerned about SpaceX getting the raw deal here.

And specifically that, if the music is about to stop, SpaceX has an implicit government backstop.

It doesn't have to; the government's rescue of GM in 2008 killed a bunch of brands that they owned.

But given the current administration, I don't have a lot of faith in the government looking out for anyone else's interests here.


And TARP destroyed 4 of the 5 largest investment banks in the US, but it still left a bad taste in a lot of people's mouths

Starlink is about to get billions and billions from the BEAD program, on top of this.

> SpaceX getting the raw deal here.

Have they complained?


You’re really asking whether anyone at a private company is publicly speaking up against the famously emotional and vindictive owner?

Yes. People are saying they’re worried that the poor private investors of SpaceX are getting the short end of the stick.

That seems like misplaced concern for an investor class that really isn’t suffering.


This thread specifically excluded the big investors, but they too have nothing to gain and everything to lose from popping the bubble: Musk has been talking up the value of their investment. If they criticize in public, they’re just costing themselves money; much safer to sell and walk away.

Well, no, the worry is that xAI's bondholders, who are also Twitter's bondholders, will be indemnified from any loss on those bonds at public expense because they are now also SpaceX bondholders and SpaceX is a national security interest of the US.

> bonds at public expense because they are now also SpaceX bondholders and SpaceX is a national security interest of the US.

If our elected officials have done a poor job diversifying risk by depending on just one single supplier, they are to blame and we should hold them accountable.

But, is that even the case?


I think unsavory business practices actually affect approximately everyone, even those not directly connected to any one particular instance of unsavory business practices.

Culture exists, after all.


Well this was just announced, and I'll be surprised if nobody gripes about a $2T dilution of their equity.

Yeah, the financial well-being of those investors is not what people are worried about here

Whoa, I had to do a double-take on the Dorsey mention -- like, didn't he take the money and run while laughing at the folks that overpaid? But it seems he's retained a 2.4% ownership stake in Twitter/X, according to Wikipedia:

https://en.wikipedia.org/wiki/Jack_Dorsey#Twitter

Still, don't make the mistake I did, which was to read the above comment to mean "he put more money in at the time of the buyout", since he was called an "investor in X".


Shouldn't the government be aiming to pay the lowest price for the best goods and services rather than using procurement as a way to promote or suppress certain political opinions?

> Simple cooling sinks

What? You're in a huge vacuum thermos

