An analogy for cryptocurrency that I like is lasers. I remember reading a Usborne book about lasers as a kid and thinking they were the coolest thing ever and would doubtless find their way into every technology, because glowing beams of pure light energy -- how could they not transform the world!
Lasers turn out to be useful for... eye surgery, and pointing at things, and reading bits off plastic discs, and probably a handful of other niche things. There just aren't that many places where what they can do is actually needed, or better than other more pedestrian ways of accomplishing the same thing. I think the same is true of crypto, including NFTs.
In 1982 I joined Leeds University as a computer operator/assistant. First job out of a CS degree at another uni. The department head was a laser physics specialist, and I and the other snobs used to say "what does a laser man know about REAL computer science" and act outraged.
More fool us. More power to him. He was well ahead of the curve, him and his laser physics friends worldwide.
> Imagine the damage one bad kid could cause using deepfakes
Deepfakes are highly damaging right now because much of the world still doesn't realise that people can make deepfakes.
When everyone knows that a photo is no longer reliable evidence by itself, the harm that can be done with a deepfake will drop to a similar level as that of other unreliable forms of evidence, like spoken or written claims. (Which is not to say that they won't be harmful at all -- you can still damage someone's reputation by circulating completely fabricated rumours about them -- but people will no longer treat photorealistic images as gospel.)
I think it will be big, but I don't think it's bigger than the automation of manufacturing that began during the Industrial Revolution.
Think about the physical objects in the room you're in right now. How many of them were made from start to finish by human hands? Maybe your grandmother knitted the woollen jersey you're wearing -- made from wool shorn using electric shears. Maybe there's a clay bowl on the mantelpiece that your kid made in a pottery class. Anything else?
A robot that automatically untangles a rope is pretty much the coolest project idea I have ever heard of. It hits all the right buttons: extremely technically difficult with many design possibilities, completely novel, and of marginal utility. You cannot say that it would never be useful!
This was interesting, thanks. Was hoping to see a bit more about type hinting, but there's already a lot here.
A question about efficiency: IIUC, in your initial bitmap rasterizing implementation, you process a row of target bitmap pixels at once, accumulating a winding number count to know whether the pen should be up or down at each x position. It sounds like you are solving for t given the known x and y positions on every curve segment at every target pixel, and then checking whether t is in the valid range [0, 1). Is that right?
Because if so, I think you could avoid doing most of this computation by using an active edge list. Basically, in an initial step, compute bounds on the y extents of each curve segment -- upper bounds for the max y, lower bounds for the min y. (The max and min y values of all 3 points work fine for these, since a quadratic Bezier curve is fully inside the triangle they form.) For each of the two extents of each curve segment, add a (y position, reference to curve segment, isMin) triple to an array -- so twice as many array elements as curve segments. Then sort the array by y position.

Now during the outer rendering loop that steps through increasing y positions, you can maintain an index in this list that steps forward whenever the next element crosses the new y value: whenever this new element has isMin=true, add the corresponding curve segment to the set of "active segments" that you will solve for; whenever it's false, remove it from this set. This way, you never need to solve for t on the "inactive segments" that you know are bounded out on the y axis, which is probably most of them.
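If it helps, here's a rough Python sketch of the bookkeeping I mean. Segment, build_events and render_rows are illustrative names of my own (not from your code), and the actual per-row work is left as a callback since I don't know your internals:

```python
# Rough sketch of the event-list / active-segment idea, assuming a
# scanline loop over integer rows.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    # Quadratic Bezier control points, each an (x, y) tuple.
    p0: tuple
    p1: tuple
    p2: tuple

    def y_bounds(self):
        # Conservative y extent: the curve lies inside the triangle
        # (convex hull) of its three control points, so the min/max
        # control-point y values bound the curve's min/max y.
        ys = (self.p0[1], self.p1[1], self.p2[1])
        return min(ys), max(ys)

def build_events(segments):
    # Two events per segment: one at (or just before) its min y to
    # activate it, one just past its max y to deactivate it. Snapping
    # outward with floor/ceil only costs a little extra work; it can
    # never drop a segment from a row it might actually touch.
    events = []
    for seg in segments:
        y_min, y_max = seg.y_bounds()
        events.append((math.floor(y_min), True, seg))      # activate
        events.append((math.ceil(y_max) + 1, False, seg))  # deactivate
    events.sort(key=lambda e: e[0])
    return events

def render_rows(segments, height, solve_segment_for_row):
    # solve_segment_for_row(seg, y) stands in for whatever per-segment
    # work the rasteriser already does on a row (solving for t,
    # checking t in [0, 1), accumulating winding, ...).
    events = build_events(segments)
    active = set()
    i = 0
    for y in range(height):
        # Consume every event whose y position we have now reached.
        while i < len(events) and events[i][0] <= y:
            _, is_min, seg = events[i]
            if is_min:
                active.add(seg)
            else:
                active.discard(seg)
            i += 1
        # Only segments that can overlap this row get solved.
        for seg in active:
            solve_segment_for_row(seg, y)

if __name__ == "__main__":
    segs = [Segment((0, 1.5), (5, 8.0), (10, 1.5)),
            Segment((0, 20.0), (5, 28.0), (10, 20.0))]
    render_rows(segs, height=32,
                solve_segment_for_row=lambda s, y: print(f"row {y}: solve {s.p0}->{s.p2}"))
```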
Thanks. I recently bookmarked an article that I thought was about that, but haven't read it yet. Your explanation lays a very good foundation for understanding that technique.
If I understood you correctly, this might be an issue if you have multiple strokes (so multiple mins and maxes that you need to stay within) on a row of pixels (think all strokes of an N).
What I'm suggesting is just a way to do less computation to get the same result as before; it doesn't change the correctness of the algorithm (if implemented correctly!). Instead of testing every curve segment at each (x, y) pixel location in the target bitmap, you only need to test those curve segments that overlap (or, more precisely, aren't known not to overlap) that y location, and what I described is a way to do that efficiently.
Gemini tells me that for thousands of years, the swastika was used as "a symbol of positivity, luck and cosmic order". Try drawing it on something now and showing it to people. Is this an effective way to fight Nazism?
I think it's brave to keep using em dashes, but I don't think it's smart, because we human writers who like using them (myself very much included) will never have the mindshare to displace the culturally dominant meaning. At least, not until the dominant forces in AI decide of their own accord that they don't want their LLMs emitting so many of them.
I think it's safe to assume they meant it within their specific cultural context. That the symbol has different connotations in other cultures doesn't really change the point being made.
My point is just: if a test for what a symbol ‘really means’ depends on choosing an audience that conveniently erases everyone who uses it differently, that’s not describing intrinsic meaning, that’s describing the author’s cultural bubble and bias.
And on em dashes—most people outside tech circles see no “AI fingerprint,” and designers like myself have loved them since early Mac DTP, so the suspicion feels hilariously retroactive and very knee-jerk. So what if somebody thinks my text here is written by a bot?
> So what if somebody thinks my text here is written by a bot?
Then they might not read it at all. I often zone out as soon as I suspect I'm reading slop, and that's the reason I try to ensure my own writing isn't slop adjacent.
I'm also not sure there is an "AI bubble." Everyone I know is using it in every industry. Museum education, municipal health services, vehicle engineering, publishing, logistics, I'm seeing it everywhere.
As mentioned elsewhere I've seen non-tech people refer to them as "AI dashes."
> if a test for what a symbol ‘really means’
There was no suggestion of such a test. No symbol has an intrinsic meaning. The point GP was making was about considering how your output will be received.
That point was very obviously made within a specific cultural context, at the very least limited to the world of the Latin alphabet. I'm sure there are other LLM signifiers outside of that bubble.
> I often zone out as soon as I suspect I'm reading slop, and that's the reason I try to ensure my own writing isn't slop adjacent.
And how is this a problem someone else has to address? Some people zone out when they see a text is too long: are we supposed to only publish short form then? I have 10 years of writing on my site; if someone in 2026 sees my use of em dashes and suddenly starts thinking that my content is AI generated, that's their problem, not mine.
Too many people are willingly bending to adapt to what AI companies are doing. I'm personally not gonna do it. Because again, now it's em dashes, tomorrow it could be a set of words, or a way to structure text. I say fuck that.
> And how is this a problem someone else has to address?
Where has anyone made the claim that it is?
> Some people zone out when they see a text is too long: are we supposed to only publish short form then?
No, but a good writer will generally consider if their text is needlessly verbose and try to make it palatable to their audience.
> starts thinking that my content is AI generated, that's their problem, not mine.
If you want to reach them with your writing, then it might become a problem. Obviously the focus on em dashes alone isn't enough, but it's undoubtedly one of the flags.
> Too many people are willingly bending to adapt to what AI companies are doing.
It's bending, rather, to what readers are feeling. It's not following the top-down orders of a corporation; it's being aware of how technology shapes readers' expectations and adapting your writing to that.
I'm not confident that the average person is aware of what an em dash is, nor that it is widely associated with AI; I think the current culturally dominant meaning is just a fat hyphen (which most people just call a dash anyway).
My wife was working from home recently and I overheard a meeting she was having. It's a very non technical field. She and her team were working on a presentation and her boss said "let's use one of those little AI dashes here."
> Gemini tells me that for thousands of years, the swastika was used as "a symbol of positivity, luck and cosmic order". Try drawing it on something now and showing it to people. Is this an effective way to fight Nazism?
I'm happy to change my position when some 13 million people are killed by lunatics who use the em dash as the symbol of their ideology. Until then, I'll keep using it everywhere it's appropriate.
Also, if we don't have the guts to resist even when the stakes are this low and the consequences for our resistance are basically non-existent, then society is doomed. We might as well roll over and die.
> At least, not until the dominant forces in AI decide of their own accord that they don't want their LLMs emitting so many of them.
It's not a power I'm willing to give them. What if tomorrow they tweak something and those tools start to use a specific word more often? Or a different punctuation mark? What do we do then? Do we constantly adapt, playing whack-a-mole? What if AI starts asking a lot more questions in its writing? Do we stop asking them as a result?
You feel free to adapt and bend. I'm personally not going to do it, and if someone starts thinking that I'm using AI to write my thoughts as a result, that's on them.