> and for whatever reason 2.4GHz-only devices can't find the SSID if there is a name conflict with the 5GHz frequency
Huh? Is this true? It doesn’t make intuitive sense to me—if the device doesn’t have a 5ghz radio I would expect it to be physically impossible for the 5ghz network to interfere.
I've had this issue too on older devices, until I made the SSIDs different by suffixing 2ghz and 5ghz to each one. I think I've had it happen both on an older Android and older MacBook but it was a while ago, could be misremembering.
I think enough of us were running dual-band networks sharing the same SSID back in the day that doing so now cannot be the entire reason for things not working.
> Huh? Is this true? It doesn’t make intuitive sense to me—if the device doesn’t have a 5ghz radio I would expect it to be physically impossible for the 5ghz network to interfere.
It's not an issue with the device itself, it's an issue with the device setup process.
For whatever reason (I assume it's easier on some common device platform), a lot of IoT devices do not use the SSID for discovering WiFi APs. Instead they connect directly to the BSSID (read: WiFi MAC) of a specific radio on the AP. These devices always rely on a phone app for setup, and the phone app has you select the WiFi network by its SSID, but passes the BSSID to the device over Bluetooth.
When your phone is connected to the 5 GHz (or now 6 GHz) radio as would be normal for a modern device in a combined network, the BSSID it sees is invisible to the 2.4-only IoT device and thus it doesn't work unless you force your phone to only see the 2.4 GHz radio.
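To make that concrete, here's a hypothetical sketch of the failure mode (all names and MAC addresses are made up, and real setup flows vary by vendor):

```python
from dataclasses import dataclass

@dataclass
class Radio:
    bssid: str       # per-radio MAC; differs between bands even on the same AP
    ssid: str        # the shared network name the user actually sees
    band_ghz: float

ap_radios = [
    Radio("aa:bb:cc:00:00:01", "HomeWiFi", 2.4),
    Radio("aa:bb:cc:00:00:02", "HomeWiFi", 5.0),
]

def setup_app_payload(phone_radio: Radio) -> str:
    # The app shows the user "HomeWiFi", but what it sends over Bluetooth
    # is the BSSID of whichever radio the *phone* happens to be on.
    return phone_radio.bssid

def iot_device_can_join(target_bssid: str, radios: list[Radio]) -> bool:
    # The 2.4 GHz-only device scans just its own band and matches on BSSID.
    return any(r.bssid == target_bssid for r in radios if r.band_ghz < 5.0)

payload = setup_app_payload(ap_radios[1])                  # phone is on 5 GHz
print(iot_device_can_join(payload, ap_radios))             # False: setup fails
print(iot_device_can_join(ap_radios[0].bssid, ap_radios))  # True once the phone is forced onto 2.4 GHz
```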
The problem also comes up if you have a larger network with multiple access points, set up a device, and then move it (or if your phone just happens to be hanging on to a more distant AP that the IoT device's puny antenna can't see).
It's been a stupid problem from the beginning, it'd be trivial to solve permanently in software if these device vendors would get their heads out of their collective asses, and yet the "you have to disable 5GHz" nonsense persists for the same reason as software vendors still insist on admin privileges for everything, any/any firewall rules, etc.
It's called Band Steering and it messes up older devices. It's truly an L direction the WiFi industry has gone: a reaction to overcrowding of the 2.4 GHz spectrum that automatically moves capable devices to the 5 GHz SSID.
It just so happens that it isn't very backwards compatible with 2.4 GHz-only devices. Not because of the frequency itself, but because of the band steering implementation on the router.
Also, if you are a human who has taste, it's very difficult to get an AI to create exactly what you want. You can nudge it, and little by little get closer to what you're imagining, but you're never really in control.
This matters less for text (including code) because you can always directly edit what the AI outputs. I think it's a lot harder for video.
> Also, if you are a human who has taste, it's very difficult to get an AI to create exactly what you want.
I wonder if it would be possible to fine-tune an AI model on my own code. I've probably got about 100k lines of code on GitHub. If I fed all that code into a model, it would probably get much better at programming like me, including matching my commenting style and all of my little obsessions.
Talking about a "taste gap" sounds good. But LLMs seem like they'd be spectacularly good at learning to mimic someone's "taste" through fine-tuning.
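If you want to try it, here's a rough, untested sketch of the kind of LoRA fine-tune I mean, using Hugging Face transformers + peft. The base model, glob path, and hyperparameters are all placeholders, not recommendations:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "bigcode/starcoder2-3b"   # placeholder; pick whatever fits your hardware
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.pad_token or tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"]))

# Your own repos, checked out locally; the "text" loader yields one example
# per line of source by default.
ds = load_dataset("text", data_files={"train": "my_code/**/*.py"})["train"]
ds = ds.filter(lambda ex: ex["text"].strip())
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
            remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```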
> LLMs always seem to be inarguably worse than the “original”.
True. But quantity has a quality of its own.
I'm personally delighted at the idea of outsourcing all the boring cookie cutter programming work to an AI. Things like writing CSS, plumbing between my database, backend server and web UI. Writing and maintaining tests. All the stuff that I've done 100 times before and I just hate doing by hand over and over again.
There's lots of areas where it doesn't really matter that the code it produces isn't beautifully terse and performant. Sometimes you just need to get something working. AIs can do weeks of work in an afternoon. The quality isn't as good. But for some tasks, that's an excellent trade.
My M1 16GB Mini and M2 16GB Air both deliver insane local transcription performance without eating up much memory. The M line + Parakeet gives you great local performance, and you get privacy for free.
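For anyone curious, a minimal sketch of what local Parakeet transcription looks like via NVIDIA NeMo (the exact model name and audio path are assumptions; there are also MLX ports for Apple Silicon):

```python
import nemo.collections.asr as nemo_asr

# Downloads the checkpoint on first run; at ~0.6B params it fits in a few GB of RAM.
model = nemo_asr.models.ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v2")

out = model.transcribe(["meeting_recording.wav"])
# Depending on the NeMo version this is a list of strings or of Hypothesis objects.
print(out[0])
```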
Yeah, that model is amazing. It even runs reasonably well on my mid-range Android phone with this quite simple but very useful application, as long as you don't speak for too long and pause every once in a while to let it transcribe. I do have handy.computer on my Mac too.
I find the model works surprisingly well and in my opinion surpasses all other models I've tried. Finally a model that can mostly understand my not-so-perfect English and handle language switching mid-sentence (compare that to Gemini's voice input, which is literally THE WORST, always trying to transcribe in the wrong language, and even when the language is correct it produces the most utter crap imaginable).
Ack for dictations, but Gemini voice is fun for interactive voice experiments -> https://hud.arach.dev/ Honestly blown away by how much Gemini could assist with, with basically no dev effort.
NVIDIA is showing training at 4 bits (NVFP4), and 4 bit quants have been standard for running LLMs at home for quite a while because performance was good enough.
I mean, GPT-OSS is delivered as a 4 bit model; and apparently they even trained it at 4 bits. Many train at 16 bits because it provides improved stability for gradient descent, but there are methods that allow even training at smaller quantizations efficiently.
There was a paper I had been looking at that demonstrated what I mentioned: it showed only imperceptible changes down to 6 bit quants, then performance decreasing more and more rapidly until it crossed over the next smaller model at 1 bit. Unfortunately, I can't seem to find it again.
There's this article from Unsloth, where they show MMLU scores for quantized Llama 4 models. They are of an 8 bit base model, so not quite the same as comparing to 16 bit models, but you see no reduction in score at 6 bits, while it starts falling after that. https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs/uns...
Anyhow, like anything in machine learning, if you want to be certain you probably need to run your own evals. But when researching, I found enough evidence that down to 6 bit quants you really lose very little performance, and that even at much smaller quants the number of parameters tends to matter more than the quantization, all the way down to 2 bits, that it acts as a good rule of thumb. I'll generally grab a 6 to 8 bit quant to save on RAM without really thinking about it, and I'll try models down to 2 bits if that's what it takes to fit them on my system.
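The back-of-the-envelope RAM math is easy to sanity-check yourself; a quick sketch (weights only, ignoring the KV cache and the scale/zero-point overhead that real quant formats add):

```python
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a dense model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 6, 4, 2):
    print(f"70B model at {bits:>2}-bit: ~{weight_gb(70, bits):.0f} GB of weights")
# fp16 is ~140 GB; a 4-bit quant cuts that to ~35 GB, which is why 4-bit
# became the default for home rigs.
```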
This isn't the paper that I was thinking of, but it shows a similar trend to the one I was looking at. In this particular case, even 5 bit quants showed no measurable reduction in performance (actually a slight increase, but that probably just means you're within the noise of what this test can distinguish), and then performance drops off rapidly across the various 3 bit quants: https://arxiv.org/pdf/2601.14277
There was another paper that did a similar test, but with several models in a family, and all the way down to 1 bit, and it was only at 1 bit that it crossed over to having worse performance than the next smaller model. But yeah, I'm having a hard time finding that paper again.
Why do you think ChatGPT doesn't use a quant? GPT-OSS, which OpenAI released as open weights, uses a 4 bit quant, which is in some ways a sweet spot, it loses a small amount of performance in exchange for a very large reduction in memory usage compared to something like fp16. I think it's perfectly reasonable to expect that ChatGPT also uses the same technique, but we don't know because their SOTA models aren't open.
This. Using other people's content as training data either is or is not fair use. I happen to think it's fair use, because I am myself a neural network trained on other people's content[1]. But that goes in both directions.
Attak by JohnnyTwoShoes[0,1] is one of the standout games from that era: super-saiyan type battles using an underlying physics/ragdoll engine with fully destructible buildings and terrain.
Half the strategy was digging out an underground bunker to shield yourself from your enemy's 500m vertical drop attacks. It was an incredible feat of programming for the time.
The game unfortunately falls in that final tiny percentage of AS3 games that rely on features Ruffle hasn't implemented yet, so it doesn't work in Ruffle, I suspect because the game was so cutting edge.
There's honestly a ton more, you can download the archive and go through the various community lists in there. I've spent a few evenings just having a few drinks and playing some old games! :D
Overall, I love this essay. However, the entire argument hinges on one assertion, buried about halfway through:
> Robots are improving fast, but I do not believe that this cute fellow will be stuffing envelopes or affixing stamps anytime soon.
Is this correct? I don't feel qualified to say. But if it's wrong... well, then there's a missing pixel in the magic circle, and flood fill will make the whole thing unrecognizable.
I also love this essay, but I think there's a much larger, scarier breach in the magic circle.
We humans consume information on the Internet, it changes our ideas, and those ideas directly inform our very physical and material behavior. We ourselves are essentially 3D printers for our thoughts, running 24/7.
Flashmobs, scenic spots that get overrun with tourists after an Instagram post goes viral, teens eating tide pods, adults failing to cure COVID with Ivermectin, fashion trends, everyone kind of getting into sourdough during the pandemic, Kate Bush making almost half a million bucks in two weeks because of Stranger Things, the near-fatal stabbing of Payton Isabella Leutner, millions of people protesting for Black Lives Matter, and thousands more are real-world events that would not have happened without the Internet infecting brains.
Elections are decided based on what people learn online, and those elections have world-sized potentially catastrophic impact when you consider things like climate change policy.
I fear there is no meaningful separation between the digital world and the physical world, because it's really about the separation between ideas and material reality. Living beings exist entirely to span that bridge.
Like Rodney Brooks says, "No one has managed to get articulated fingers (i.e., fingers with joints in them) that are robust enough, have enough force, nor enough lifetime, for real industrial applications."
Here, I'll link to that piece directly; it's long and detailed and illustrated, and it also counters the idea of just throwing AI at the problem until robot dexterity emerges from whatever physical parts are available.
"there have now been fifteen different families of neurons discovered that are involved in touch sensing and that are found in the human hand" ... "a human hand has about 17,000 low-threshold mechanoreceptors" ... "These receptors come in four varieties (slow vs fast adapting, and a very localized area of sensitivity vs a much larger area)"
You might ask, do robots that interact with the real world need such complicated bio-mimicking physical tech, or can they cut corners? But they can't cut all the corners, anyway. Somebody has to make a high-bandwidth robot hand with flexible strength and a self-repair ability. Or, hey, cyborgs maybe? Reanimate cadavers with AI, that could do the trick.
> However, the entire argument hinges on one assertion, buried about halfway through:
>> Robots are improving fast, but I do not believe that this cute fellow will be stuffing envelopes or affixing stamps anytime soon.
Okay, let's presume he is correct; the conclusion is still "We will do the unthinking manual work requiring physical dexterity while the computers will direct us".
Which brings us neatly to why many people are opposed to “AI” and automation.
We’re automating away the pleasurable work and leaving the drudgery for humans, when it should be the other way around.
Robots should be toiling while humans create art and music and whatever else they desire.
AI image generation doesn’t “democratize” art. Art has always been available to everyone. Anyone can learn to make art. AI image generation devalues artists and robs everyone else of the desire to learn art skills themselves.
It turns out there's a counter–magic circle, and that's the economy. Even if people are happy to move to a commune without internet, they won't produce efficiently, so they won't be able to pay property taxes. The system consumes everything that doesn't already follow it.
His (compelling) evidence for that assertion is that printers still jam after 40 years. For humans, writing something on a piece of paper is absolutely trivial, and if something goes wrong, grabbing a new piece of paper or a pen is also trivial. Computers _can_ now write on paper tolerably fast and well, but they absolutely can't handle even simple failure modes. And the real world is _massively_ failure-prone, in contrast to the digital domain.
Think about Tesla's pivot to "AI robots". My guess is that they'll get to something that can very slowly pick up a dropped sock and put it in the washing basket. But that it will fall over occasionally on the stairs, wrecking your kid's photos and the vase standing at the bottom, and dinging the wall. It might do a passable job of picking up the shards of pottery, but gluing the picture frames together, plastering the wall and repainting it... well, maybe in Elon's chemical dreams.
I like to think about it this way: why do printers need me to give them more paper? Why do I need to go buy paper from the store? These are the most trivial real-world things a human can do, but I can't imagine any robot doing them for a very long time.
Forget self-driving cars, how about a printer I don’t have to unjam or fill with paper?
But I doubt that kind of thing will happen in my lifetime.
I have worked with automation robots in manufacturing for over 15 years. Robust robots have only one degree of freedom, and robust robots aren't very useful. Useful robots have two or three degrees of freedom, and useful robots break down all the time from use and need a pit crew to repair them. I'm sure the use of robotic automation is about to explode, but robotics are limited.
Even if the assertion is correct, what would life be like with computers incomprehensibly smarter and faster thinking than humans?
All management decisions from the top down to individual manual workers handled by an AI (LLM or otherwise)?
Owner has a company-wide AI, instructs it to maximize profit and lets it run. It handles hiring and firing, market research, advertisement, ordering supplies, ... It generates individualized instructions for each worker on what to do throughout the day. Any communication between humans would be redundant; the AI would have microphones and cameras everywhere, and humans would only be needed for physical interaction with the world. Even communication with other companies, suppliers and clients, would be done between AIs, which would be better and faster at negotiating.
1) It sounds like a dystopian nightmare - constant surveillance and taking orders from a machine which only cares about your productivity.
2) Would it lead to a devolution of the human race? What makes us different from other animals is intelligence. If all humans are good at (= economical to use for) is manual labor, would intelligence stop being a sexually attractive trait?
3) It would completely remove any social mobility. If those who own companies continue owning them after an AI revolution and there's little economic value in human labor except the most menial, then there would be a 2 class society with almost no way for non-owners to become owners.
Human hands have absolutely crazy performance. Human hands have 15+ degrees of freedom. Sub-millimeter precision, no backlash. Strong enough to lift 100 lbs. Gentle enough to catch a thrown egg without breaking it. Rigid enough to hammer a nail without dropping the hammer. A compact forearm for reaching into tight spaces. Water- and dust-proof. Oh, and it'll last for decades without maintenance.
Even a $100k robot hand like a Shadow Hand can't compare.