It certainly can matter in any scope. `x` or even `delay` will lead to more bugs down the line than `delay_in_milliseconds`. It can be incredibly frustrating to debug why `delay = 1` does not appear to lead to a delay if your first impression is that `delay` (or `x`) is in seconds.
No - "delay_in_milliseconds" will let you find the error and resolve it faster. With the less descriptive name, you need to notice the mismatch between the definition and the use site, which are further apart in context. Imagine you see in your debugger: "delay_in_milliseconds: 3" in your HttpTimeout - you'll instantly know that's wrong.
If you believe your reductive argument, your function and variable names would all be minimally descriptive, right?
For your specific example, there would never be a “delay in milliseconds” variable in the first place. That’s just throat clearing.
“sleep 1” is the complete expression. Because sleep takes a parameter measured in seconds, it’s already understood.
You do not need “delay_in_seconds = 1” and then a separate “sleep delay_in_seconds”. That accomplishes nothing; you might as well add a comment like “// seconds” if you want some kind of clarity.
Years later, when all memory of intent is long gone, I'd much rather work on a large code base that errs on the side of too much "throat clearing" than one that errs on the side of too little. `sleep 1` tells you what was written, which may or may not match intent.
Many bugs come from writing something that does not match intent. For example, someone writes most of their code in another language where `sleep` takes milliseconds. They meant to check the docs when they wrote it in this language, but the alarm for the annual fire drill went off just as they were about to. So `sleep 1000` went in, in a branch of the code that only runs occasionally. Years later, did they really mean 16 minutes and 40 seconds, or did they mean 1 second?
Leaving clues about intent helps detect such issues in review and helps debug the problems that slip through review. Comments are better than nothing, but they are easier to ignore than variable names.
Weird to claim that the LLM doing all the boring learning and boilerplate for you is a selling point, but then also insist we still need to responsibly read all the output, and that if you can't understand it, it's a "skill issue".
Also the emphasis on greenfield projects? Starting is by FAR the easiest part. That's not impressive to me. When do we get to code greenfield for important systems? Reminds me of the equally absurd example of language choice. You think you get to choose? What?
Imagine all the code these agents are going to pump out that can never be reviewed in a reasonable time frame. The noise generated at the whim of bike-shedding vibe coders is going to drown all the senior reviewers soon enough. I'll call that Cowboy Coders on Steroids. Anyone with skills will be buried in reviews, won't have time for anything else, and I predict stricter code gen policies to compensate.
I don't have a huge dog in this fight apart from AI advocates being annoying... but I would say that for greenfield projects the interesting thing is that I can get a skeleton of a working iOS app for something simple in like an hour of some copy/pasting stuff from ChatGPT. Instead of spending a good amount of time trying to get through learning material to do it.
It's nice to build throwaway things _so fast_, especially in the sort of fuzzy stuff like frontend where it's fine for it to be completely wrong. And then I can just use my own sense of how computers work to fix up what I care about, delete a bunch of what I don't care about... It's pretty amazing.
For existing projects I have only witnessed garbage output. I know people have success. I haven't seen it.
I have witnessed PMs taking a bullet-pointed list of requirements and then using ChatGPT to generate paragraphs of text for some reason. You had the list!
This is just obviously not true. I had a full-time job of reviewing code for roughly 15 years and it was never true, but it's also just intuitively not true that engineers spend 10 hours reviewing their peers' code for every 1 they spend writing it.
What you mean to claim here is that verification is 10x harder than authorship. That's true, but unhelpful to skeptics, because LLMs are extremely useful for verification.
I once graded over 100 exams in an introductory programming course (Python). The main exercise was to implement a simple game, graded without being able to run the code.
Some answers were trivial to grade—either obviously correct or clearly wrong. The rest were painful and exhausting to evaluate.
Checking whether the code was correct and tracing it step by step in my head was so draining that I swore never to grade programming again.
Right, sure. So: this doesn't generally happen with LLM outputs, but if it does, you simply kill the PR. A lot of people seem to be hung up on the idea that LLM agents don't have a 100% hit rate, let alone a 100% one-shot hit rate. A huge part of the idea is that it does not matter whether an agent's output is immediately workable. Just take the PRs where the code is straightforwardly reviewable.
But your reply was to "reviewing code is easily 10x harder than writing it". Of course that's not true if you just kill all PRs that are difficult to review.
Sometimes, code is hard to review. It's not very helpful if the reviewer just kills it because it's hard.
> It's not very helpful if the reviewer just kills it because it's hard.
I am absolutely still an AI skeptic, but like: we do this at work. If a dev has produced some absolutely nonsense overcomplicated impossible to understand PR, it gets rejected and sent back to the drawing board (and then I work with them to find out what happened, because thats a leadership failure more than a developer one IMO)
You're massively burying the lede here with your statement of 'just take the PRs where the code is straightforwardly reviewable'. It's honestly such an odious statement that it makes me doubt your expertise in reviewing code and PRs.
A lot of code cannot and will not be straightforwardly reviewable, because it all depends on context. Using an LLM adds an additional layer of abstraction between you and the context, because now you have to untangle whether or not it accomplished the context you gave it.
I have no idea what you mean. Tptacek is correct. An LLM does not add an additional layer because, at the end of the day, code is code. You read it, and you can tell whether it does what you want, because you were the person who gave the instructions. It is no different from reviewing code written by a junior (who also does not add an additional layer of abstraction).
That's exactly what an additional layer is! If I outsource coding to someone else whether it's a junior engineer, an outside firm or an LLM that is an additional layer of context you need to understand. You need to figure out if the junior engineer grasped the problem set, if the firm understood the requirements or if the LLM actually generated decent code.
While I'm still a skeptic (despite my Cursor usage at work), I absolutely agree. Careful review has never been 10x more difficult than writing the code; I'm not sure where this comes from. And I've had about the same experience as you.
Also, and I know this doesn't matter, but it's so weird to see this downvoted. That's not an "I disagree" button...
I think the karma thresholds here are kind of silly given how low they are, but Dan and Tom actually run this place and they know things I don't know about what chaos would be unleashed if we eliminated them.
I really don't think that's true at all. Anyone here can read and sign off on 1000 lines of code faster than they can write the same 1000 lines of code, pretty much no matter what the code is.
I can review 100x more Go code in a set amount of time than I can, say React.
With Go there are repetitive structures (`if err != nil`) and not that many keywords, so it's easy to sniff out the suspicious bits and focus on them.
With Javascript and all of the useWhatevers and cyclic dependencies and functions returning functions that call functions, it's a lot harder to figure out what the code does just by reading it.
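The repetitive shape that makes Go skimmable looks like this (a hypothetical `parsePort` helper, just for illustration): each step is followed by an error guard, so a reviewer can skim the happy path and focus on the branches.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort converts a string to a valid TCP port number.
// Every step has the same guard-clause shape a reviewer can scan.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse port %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	p, err := parsePort("8080")
	fmt.Println(p, err) // 8080 <nil>
}
```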
I definitely get your point, but it's also pretty annoying to write weird indirect or meta-functional code, not just to read it. It's still almost always faster to read than to write.
I can think of edge cases where a certain section of code is easier and faster to write than to read, but in general - in our practical day-to-day experiences - reading a lot of code is faster than writing a lot of code. Not just faster but less mentally and physically taxing. Still mentally taxing, yes, but less.
The idea is that people will spend 10x more time reading your code in all future time stacked together. Not that reading and understanding your code once takes 10x the effort of writing it, which is obviously untrue.
Here is the quote from Clean Code, where this idea seems to originate from:
> Indeed, the ratio of time spent reading versus writing code is well over 10 to 1.
I think the point is that the person orchestrating the agent(s) reviews the code. It doesn't make sense to have 5 Juniors using agents to submit PRs and a senior or two reviewing it all. You just have the senior(s) orchestrating agents and reviewing it themselves. Maybe one or two juniors because we still need to train new devs, but maybe the junior doesn't even use an LLM. Maybe the junior writes code manually so they can learn programming properly before they start being an "AI lead".
Everyone is still responsible for the code they produce. I review my own PRs before I expect others to, and I don't even use AI. I think what the article describes seems interesting though.
Yeah, I remember five years ago (before being sold this latest VC crock of bull) when reviewing shitty thoughtless code was the tedious and soul-sucking part of programming.
No matter to what extent you believe in the freedom of information, few believe anyone should be free to profit from someone else's work without attribution.
You seem to think it would be okay for disney to market and charge for my own personal original characters and art, claiming them as their own original idea. Why is that?
Yes. I 100% unironically believe that anyone should be able to use anyone else's work royalty/copyright free after 10-20 years instead of 170 in the UK. Could you please justify why 170 years is in any way a reasonable amount of time?
The copyright lasts 70 years after the death of the author, so 170 years would be rare (indeed 190 years would be possible). This was an implementation of a 1993 EU directive:
That itself was based on the 1886 Berne Convention. "The original goal of the Berne Convention was to protect works for two generations after the death of the author". 50 years, originally. But why? Apparently Victor Hugo (he of Les Miserables) is to blame. But why was he bothered?
Edit: it seems the extension beyond the death of the author was not what Hugo wanted. "any work of art has two authors: the people who confusingly feel something, a creator who translates these feelings, and the people again who consecrate his vision of that feeling. When one of the authors dies, the rights should totally be granted back to the other, the people." So I'm still trying to figure out who came up with it, and why.
So far as I can tell, the idea behind extending copyright two generations after the author's death was so that they could leave the rights to their children and grandchildren, and this would keep old or terminally ill authors motivated.
I mean, it's fun. Ever listened to the KLF, and things from the era before sampling was heavily sat on, such as the album 1987 (What the Fuck Is Going On?)? I don't claim it's very good, but it was definitely fun. And the motivation for using existing works, instead of creating your own, is similar to the motivation for using existing words, instead of creating your own. They're reference points; people recognize them; you can communicate with them instead of asking the audience to learn a new language for each work. And of course in practice the rules are fuzzy, so everybody sails close to the wind by imitating others, and in this way we share a culture. Stealing their work is just sharing the culture more closely.
> is similar to the motivation for using existing words
I don't think it's like that. If we take music, for example, the existing word would be a note or a scale or a musical instrument or a style, but a melody would be an existing sentence. As for sampling, there is creative usage of samples, like Prodigy for example where it is difficult to even recognize the source.
Also today there is some leeway in copyright enforcement. For example, I often see non-commercial amateur covers of commercial songs and the videos don't get taken down.
I put it to you that it's the same difference. These matters of degree are what copyright lawyers haggle over. It implies to me that the whole edifice is forced into being, for its desirable (?) effects, and has no concrete foundation. Nothing pure and elegant and necessary about copyright.
Well, you asked why, anyway, and there's why: it's a natural thing to do.
> Lacto-ferment chillis with your choice of veg and/or fruit in a brine solution for a couple of weeks at room temperature.
Room temperature is no kind of standard. Optimal fermentation tops out at about 75°F. I live in the South, and most of the year room temp for me is at least 76°F. I ruined several batches of lacto-fermented experiments before making that connection.
Don't get me started on vague salt measurements like "seawater" taste.
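For what it's worth, the vague-salt problem has a simple fix: brine by weight instead of by taste. A small sketch (the 2.5% figure is a commonly cited starting point for lacto-fermented vegetables, an assumption on my part, not something from the comment):

```go
package main

import "fmt"

// saltGrams returns the salt needed for a brine of the given
// percentage, computed against the combined weight of water and
// vegetables (the "total weight" method).
func saltGrams(waterG, vegG, brinePct float64) float64 {
	return brinePct / 100 * (waterG + vegG)
}

func main() {
	// 1 kg water + 500 g chillis at a 2.5% brine:
	fmt.Println(saltGrams(1000, 500, 2.5)) // 37.5
}
```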
This did not work for me in the USA and Canada, where the pre-authorisation attempt failed with a debit card but worked with a credit card (from the same bank) in 3 different places when renting cars.
Pre-auth amounts often exceed the maximum daily withdrawal limits for debit cards. Those limits are frequently furiously low, like 300-500 dollars a day. Call your bank and find out.
That’s interesting. I suppose having an “old bank” also affects things.
For example, my debit card sometimes rejects big purchases, first transactions in a new country, or a hold on funds when I check in to a new hotel. But because the bank is fairly modern, their app notifies me of the rejection at the same moment the terminal is told, so I can open the app, confirm the charge is actually OK, try again, and then it goes through.
Maybe what people need is really this kind of thing like what I have with my card and the app.
I have zero interest in arguing with a candidate who, when asked to implement bubble sort, argues the requirements instead of the implementation. I have NDAs that may preclude me from telling you the details of the stack you'll be working on. The job listing should reasonably describe the environment (embedded, data warehousing, frontend, whatever). You're being asked to implement something super basic to prove you can code yourself out of a wet paper bag, basically.
Many cannot, and those that argue the premise don't even get a chance to try.
If you think you're being evaluated on your knowledge of the stdlib's sort(), you misunderstand the purpose. You're not being tested on whether you can hit compile and have it pass; you're being tested on whether you can follow directions, work within scope, solve problems, and explain your thinking. If your thinking stops at "I'd never do that", so does mine.
Evidently not enough for libraries to abandon books from publishers.
If they had the selection and quality people were seeking, people wouldn’t be arguing to kill copyright but rather just pointing people to free equivalents. The Internet Archive would just distribute those alternatives.
When a free equivalent does exist, that is what happens. Nobody is demanding legislation to force Oracle to lower database licence prices. We just use Postgres.
I think the point the commenter was making is, why not just archive those books in that case?
Just leave out the books whose authors released their work for monetary gain. I don't know whether such an archive would be attractive to anyone, but maybe over time it might start to move the needle?
Until AI is compiling straight to machine language, code needs to be readable.