All they did was prompt an LLM over and over again to execute one iteration of a Tower of Hanoi algorithm. They are literally just using it as a glorified scripting language:
```
Rules:
- Only one disk can be moved at a time.
- Only the top disk from any stack can be moved.
- A larger disk may not be placed on top of a smaller disk.
For all moves, follow the standard Tower of Hanoi procedure:
If the previous move did not move disk 1, move disk 1 clockwise one peg (0 -> 1 -> 2 -> 0).
If the previous move did move disk 1, make the only legal move that does not involve moving disk 1.
Use these clear steps to find the next move given the previous move and current state.
Previous move: {previous_move}
Current State: {current_state}
Based on the previous move and current state, find the single next move that follows the
procedure and the resulting next state.
```
This is buried down in the appendix, while the main paper is full of "agentic swarms" this and "millions of agents" that, plus plenty of fancy math symbols and graphs. Maybe there is more to it, but the fact that they decided to publish with such a trivial task, one that could be accomplished far more easily by having an LLM write a simple Python script, is concerning.
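For reference, the iterative procedure the prompt spells out fits comfortably in a few lines of Python (a sketch of that procedure, not code from the paper):

```python
def solve_hanoi(n):
    """Solve Tower of Hanoi iteratively, following the quoted prompt's rule:
    - if the previous move did not move disk 1, move disk 1 clockwise one peg
      (0 -> 1 -> 2 -> 0);
    - otherwise, make the only legal move that does not involve disk 1.
    """
    pegs = [list(range(n, 0, -1)), [], []]  # bottom-first disk stacks
    moves = []
    prev_moved_disk1 = False
    for _ in range(2 ** n - 1):  # the optimal solution takes 2^n - 1 moves
        if not prev_moved_disk1:
            # Find disk 1 and move it one peg clockwise.
            src = next(i for i, p in enumerate(pegs) if p and p[-1] == 1)
            dst = (src + 1) % 3
            prev_moved_disk1 = True
        else:
            # Exactly one legal move exists that doesn't touch disk 1:
            # a smaller top disk onto a larger top disk (or an empty peg).
            src, dst = next(
                (a, b)
                for a in range(3) for b in range(3)
                if a != b and pegs[a] and pegs[a][-1] != 1
                and (not pegs[b] or pegs[b][-1] > pegs[a][-1])
            )
            prev_moved_disk1 = False
        pegs[dst].append(pegs[src].pop())
        moves.append((pegs[dst][-1], src, dst))
    return moves, pegs

moves, pegs = solve_hanoi(3)
print(len(moves))  # 7 moves; the full tower ends up stacked on one peg
```

With the clockwise rule, an odd-sized tower lands on peg 1 and an even-sized one on peg 2, but either way the point stands: this is a trivially scriptable loop.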
The realtor might pay for it or even do it themselves. It would take 5 minutes with a reciprocating saw. Or the scammer tells the realtor "never mind that" and the realtor tells the buyer.
Warning: it's a Gary Marcus article. This is a guy who started out dissing LLMs to pump his own symbolic AI startup, was (likely to his surprise) hoisted onto the shoulders of a mass of luddites, and has now pivoted to a career as an anti-AI influencer.
He didn't "start out" when LLMs were growing, or at the time he founded a symbolic AI startup.
He "started out" a lot earlier: he wrote a book in 2001, has written 8 books in total, and has publications in academic journals like Cognitive Psychology dating back to 1995.
Except you got it all the time, just not as politely. Under every Simon Willison article you can see people call him a grifter. Even under the Redis developer's posts you can see people insulting him for being pro-AI.
Meh, he’s been very fairly calling out AI companies for over-promising and under-delivering, along with critiquing the idea that training LLMs on bigger data will solve AGI.
He’s vocal and perhaps sometimes annoying, but who cares. A number of his articles have made great points at times when people are losing themselves in hype for AI. Reality is somewhere in the middle, and it’s good to have more people critiquing what AI companies are doing. Who cares if a lot of the blog posts are short and not that interesting.
> Meh, he’s been very fairly calling out AI companies for over-promising and under-delivering, along with critiquing the idea that training LLMs on bigger data will solve AGI.
But we don't want that! We want blind faith in the promises of SV AI companies. We want positivity!
On one side you have people who know how to build deep neural nets saying one thing, and on the other there seem to be people who don’t even know what tanh is and are very sure of their “strong” opinions.
Do you have an example of someone who actually knows how LLMs work who has a tribalistic view?
Lol, I like that as a joke, but I wouldn’t think you are saying that the opinion of someone who has no idea how something works should be given equal weight to that of someone who actually knows? Maybe you are; that seems to be how things work now.
I think you already get what I am saying, but it seems there are maybe three groups: two who know how things work under the hood, have differing opinions, and are curious to hear the other side; and one group who have no idea how things work, are very loud, have sci-fi fantasies, and spout strong opinions.
I wouldn't call that discourse, I would call it ignorance.
It's weird, though: the critics of LLMs have very good points, usually very reasonable ones, but when they share them they get downvoted and criticized like someone who was critical of NFTs in 2022.
I wonder why that is, and what it portends regarding the future of that "tribe".
I was doing some back-of-the-envelope math with ChatGPT, so take it with a grain of salt, but it sounds like in ideal conditions a radiator of one square meter could dissipate about 300 W. If that's the case, then it seems you could approach a viable solution if putting stuff in space were free. What I can't figure out is how the cost of launch makes sense, and what the benefit over building it on the ground could be.
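For what it's worth, ~300 W is in the right ballpark. A quick sanity check against the Stefan-Boltzmann law, P = εσAT⁴ (the emissivity and radiator temperature here are my own assumptions, not numbers from the comment):

```python
# Radiated power from one side of an ideal flat plate in vacuum,
# via the Stefan-Boltzmann law: P = epsilon * sigma * A * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_power(area_m2, temp_k, emissivity=0.9):
    # emissivity ~0.9 is an assumed value typical of radiator coatings
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# 1 m^2 plate held at an assumed electronics-friendly ~280 K:
power = radiated_power(area_m2=1.0, temp_k=280.0)
print(round(power))  # on the order of ~300 W, consistent with the estimate
```

Note this ignores absorbed sunlight and Earthshine, which is why real spacecraft radiators are angled away from the Sun and perform worse than the ideal figure.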
Heck, not even just a separate card or whatever: back in the terminal days you practically had a whole separate small computer just to display the output of the bigger computer on a screen instead of on paper.
I think the argument is that it's messed up that a large debt swap from xAI kept Musk's margin on Twitter from being called by his investors, and now that debt is being absorbed by SpaceX.
This thread specifically excluded the big investors, but they too have nothing but loss popping the bubble: Musk has been talking up the value of their investment. If they criticize in public, they’re just costing themselves money — much safer to sell and walk away.
Well, no, the worry is that xAI's bondholders, who are also Twitter's bondholders, will be indemnified from any loss on those bonds at public expense because they are now also SpaceX bondholders and SpaceX is a national security interest of the US.
> bonds at public expense because they are now also SpaceX bondholders and SpaceX is a national security interest of the US.
If our elected officials have done a poor job diversifying risk by depending on just one single supplier, they are to blame and we should hold them accountable.
I think unsavory business practices actually affect approximately everyone, even those not directly connected to any one particular instance of unsavory business practices.
Whoa, I had to do a double-take on the Dorsey mention -- like, didn't he take the money and run while laughing at the folks who overpaid? But it seems he retained a 2.4% ownership stake in Twitter/X, according to Wikipedia.
Still, don't make the mistake I did, which was to read the above comment to mean "he put more money in at the time of the buyout", since he was called an "investor in X".
Shouldn't the government be aiming to pay the lowest price for the best goods and services rather than using procurement as a way to promote or suppress certain political opinions?