Hacker News

> Python was meant as a scripting language that was easy to learn and work with on all levels.

Being fast isn't contradictory with this goal. If anything, this is a lesson that so many developers forget. Things should be fast by default.



When you only have so many hours to go around, you concentrate on the main goals.


My point is that you can write fast code just as easily as you can write slow code. So engineers should write fast code when possible. Obviously you can spend a lot of time making things faster, but that doesn't mean you can't be fast by default.
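A tiny illustration of the point (my own example, not from the thread): picking the right built-in container costs no extra effort to write, but changes the asymptotics, so the code is fast by default.

```python
import timeit

# Both versions are equally easy to write, but one is fast by default:
# membership tests on a list are O(n), on a set they are O(1) on average.
items_list = list(range(100_000))
items_set = set(items_list)

slow = timeit.timeit(lambda: 99_999 in items_list, number=100)
fast = timeit.timeit(lambda: 99_999 in items_set, number=100)
print(f"list lookup: {slow:.4f}s, set lookup: {fast:.6f}s")
```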


> you can write fast code just as easily as you can write slow code

I think some people can do this and some can't. For some, writing slow code is much easier, and their contributions are still valuable. Once the bottlenecks are problems, someone with more performance-oriented skills can help speed up the critical path, and slow code outside of the critical path is just fine to leave as-is.

If you somehow limited contributions only to those who write fast code, I think you'd be leaving way too much on the table.


You usually need more tricks for fast code. Bubble sort is easy to program (it's my default when I have to sort manually and the data has only about 10 items).

There are a few much better options like mergesort or quicksort, but they have their tricks.

But to sort real data really fast, you should use something like timsort, which detects whether the data is just the union of two (or a few) sorted runs, so it's faster in many cases where the usual sorting methods miss those already-sorted sections. https://en.wikipedia.org/wiki/Timsort

Are you sorting integers? Strings? ASCII-only strings? Perhaps the code should detect some of them and run a specialized version.
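A toy sketch of the run-detection idea (heavily simplified; CPython's actual list.sort also handles descending runs, minimum run lengths, and galloping merges):

```python
import heapq

def find_runs(data):
    """Split data into maximal non-decreasing runs, as Timsort's initial
    scan does (simplified: real Timsort also reverses descending runs and
    enforces a minimum run length)."""
    runs, start = [], 0
    for i in range(1, len(data)):
        if data[i] < data[i - 1]:
            runs.append(data[start:i])
            start = i
    runs.append(data[start:])
    return runs

def merge_runs(data):
    """Sort by merging the detected runs; k sorted runs of total length n
    merge in O(n log k), so nearly-sorted input is cheap."""
    return list(heapq.merge(*find_runs(data)))
```

For data that is the union of two sorted halves, `find_runs` returns exactly those two runs and the sort degenerates into a single linear merge.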


Being fast requires effort. It's not always about the raw performance of the language you use; it's about using the right structures, algorithms, and tradeoffs, and solving the right problems. It's not trivial, and I've seen so many bad implementations in "fast" compiled languages.


This isn't true in general, and it is especially not true in the context of language interpreters / VMs.


Not true. Premature optimization is the root of all evil. You first write clean code, and then you profile and optimize. I refer you to the internals of dicts through the years (https://www.youtube.com/watch?v=npw4s1QTmPg) as an example of that optimization taking years of incremental changes. Once you see the current version, it's easy to claim that you would have arrived at this best version in the first place, as obvious as it looks in hindsight.


CPython and the Python design in general clearly show that writing clean code and optimizing later is significantly harder and takes much more effort than keeping optimizations in mind from the start. It doesn't mean you need to write optimal code from day one, just that you need to be careful not to program yourself into a corner.


> Being fast isn't contradictory with this goal. If anything, this is a lesson that so many developers forget. Things should be fast by default.

It absolutely is contradictory. If you look at the development of programming languages interpreters/VMs, after a certain point, improvements in speed become a matter of more complex algorithms and data structures.

Check out garbage collectors - it's true that Golang keeps a simple one, but other languages progressively increase its sophistication - think about Java or Ruby.

Or JITs, for example, which are the latest and greatest in terms of programming languages optimization; they are complicated beasts.


Yes, you can spend a large amount of time making things faster. But note that Go's GC is fast, even though it is simple. It's not the fastest, but it is acceptably fast.


Funny you should pick that example in a sub-thread that you started with the assertion that code should be fast by default.

Go’s GC was intentionally slow at first. They wanted to get it right THEN make it fast.

No offense but you’re not making a strong case. You’re sounding like an inexperienced coder that hasn’t yet learned that premature optimization is bad.


Far from it; Go was designed to be optimizable from the start. The GC was obviously not optimal, but the language semantics were such that the GC could be replaced with a better one with relatively minimal disruption.

Of course one can't release optimal code from version one, that would be absurd.

Also your last sentence is extremely condescending.


Just FYI, Go's GC mantra is "GC latency is an existential threat to Go."


> Things should be fast by default.

In over 90% of my work in the SW industry, being fast(er) was of no benefit to anyone.

So no, it should not be fast by default.


> no, it should not be fast by default.

Maybe better to elaborate on what it should be, if not fast? Surely you aren’t advocating things should be intentionally slow by default, or carelessly inefficient?

There’s a valid tradeoff between perf and developer time, and it’s fair to want to prioritize developer time. There’s a valid reason to not care about fast if the process is fast enough that a human doesn’t notice.

That said, depends on what your work is, but defaulting to writing faster, more efficient code might benefit a lot of people indirectly. Lower power is valuable for server code and for electricity bills and at some level for air quality in places where power isn’t renewable. Faster benefits parallel processing, it leaves more room for other processes than yours. Faster means companies and users can buy cheaper hardware.


> Maybe better to elaborate on what it should be, if not fast?

It should satisfy the needs of the customer and it should be reasonably secure.

Everything else is a luxury.

My point was that in most of my time in the industry being faster would not have benefited my customers in any manner worth measuring.

I'm not anti-performance. I fantasize about how to make my software faster to maintain my geek credentials. But neither my bosses nor my customers pay for it.

If a customer says they want it faster we'll oblige.


Clean code, easy to maintain is not luxury when talking about a programming language implementation.


Being first to market is usually more important than being faster than those who got there first.


Ha! Good luck convincing end-users that that's true.


Client doesn't care. As long as their advertising spend leads to conversion into your "slow" app, they're happy.


I don't need to. Less than 10% of end users have put in requests for performance improvements.


Fast typically comes with trade offs.

Languages that tried to be all things to all people really haven't done so well.


Fast in which regard? Fast coding? Fast results after hitting "run"? ;)


Or at the very least they should be designed in such a way that optimizing them later is still possible.


"Premature optimization is the root of all evil."

-- Donald Knuth


You should probably read the full context around that quote, I'm sick and tired of everyone repeating it mindlessly:

https://softwareengineering.stackexchange.com/a/80092

> Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.


> You should probably read the full context around that quote, I'm sick and tired of everyone repeating it mindlessly:

I'm confused. What do you think the context changes? At least as I read it, both the short form and full context convey the same idea.


That quote has been thrown around every time in order to justify writing inefficient code and never optimizing it. Python is 10-1000x slower than C, but sure, let's keep using it because premature optimization is the root of all evil, as Knuth said. People really love to ignore the "premature" word in that quote.

Instead, what he meant is that you should profile what part of the code is slow and focus on it first. Knuth didn't say you should be fine with 10-1000x slower code overall.


The thing is, when I see people using this quote, I don't see them generally using it to mean you should never optimize. I think people don't ignore the premature bit in general. Now, throwing this quote out there generally doesn't contribute to the conversation. But then, I think, neither does telling people to read the context when the context doesn't change the meaning of the quote.


Right but if they post just that part, they're probably heavily implying that now is not the time to optimize. I've seen way more people using it to argue that you shouldn't be focussing on performance at this time, than saying "sometimes you gotta focus on that 3% mentioned in the part of the quote that I deliberately omitted"


No, they don't deliberately omit the part of the quote. They are either unaware of that part of the quote or don't think it matters to the point they are making.

Yes, if you quote Knuth here (whether the short quote or a longer version) you are probably responding to someone whom you believe is engaged in premature optimization.

It remains that the person quoting Knuth isn't claiming that there isn't such a thing as justified optimization. As such, pointing to the context doesn't really add to the conversation. (Nor does a thoughtless quote of Knuth either)


I dunno, I guess it's hard to say given we're talking about our own subjective experiences. I completely believe there are people who just know the "premature optimization is the root of all evil" part and love to use it because quoting Knuth makes them sound smart. And I'm sure there are also people who know it all and who quote that part in isolation (and in good faith) because they want to emphasise that they believe you're jumping the gun on optimization.

But either way I think the original statement is so uncontroversial and common-sense I actually think it doesn't help any argument unless you're talking to an absolutely clueless dunce or unless you're dealing with someone who somehow believes every optimization is premature.


You certainly can accept that slowdown if the total program run-time remains within acceptable limits and the use of a rapid-prototyping language reduces development time. There are times, when doing computationally heavy, long-running processing, where speed is important, but if the 1000x speedup is not noticeable to the user, then is it really a good use of development time to convert the code to a more optimized language?

As was said, profile, find user-impacting bottlenecks, and then optimize.
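That workflow can be as short as the following sketch using the standard-library profiler (the `parse`/`summarize` function names are made up for illustration):

```python
import cProfile
import io
import pstats

def parse(records):
    # Hypothetical hot path: per-record string work.
    return [r.strip().split(",") for r in records]

def summarize(rows):
    # Hypothetical cheap step.
    return len(rows)

def pipeline():
    records = ["a,b,c\n"] * 50_000
    return summarize(parse(records))

profiler = cProfile.Profile()
profiler.enable()
result = pipeline()
profiler.disable()

# Rank by cumulative time to find the user-impacting bottleneck first,
# then spend the optimization effort only there.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```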


I would note that the choice of programming language is a bit different. Projects are pretty much locked into that choice. You've got to decide upfront whether the trade off in a rapid prototyping language is good or not, not wait until you've written the project and then profile it.


> Projects are pretty much locked into that choice.

But, they aren't.

I mean, "profile, identify critical components, and rewrite them in C (or some other low-level language)" is a normal performance workflow for most scripting languages.

> You've got to decide upfront whether the trade off in a rapid prototyping language is good or not, not wait until you've written the project and then profile it.

No, you absolutely don't.


Yes, it's true, if you use Python you can rewrite portions in C to get improved performance. But my point was rather that you can't later decide you should have written the entire project in another language like Rust or C++ or Java or Go. You've got to make the decision about your primary language up-front.

Or to look at it another way: Python with C extensions is effectively another language. You have to consider it as an option along with Pure Python, Rust, Go, C++, Java, FORTRAN, or what have you. Each language has different trade-offs in development time vs performance.


Certainly, but Python is flexible enough that it readily works with other binaries. If a specific function is slowing down the whole project, an alternate implementation of that function in another language can smooth over that performance hurdle. The nice thing about Python is that it is quite happy interacting with C or go or Fortran libraries to do some of the heavy lifting.
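As a minimal sketch of that interaction, Python's stdlib ctypes can call into an already-compiled C library directly; here it calls `sqrt` from the system C math library (library lookup varies by platform, and in practice you'd load your own compiled extension the same way):

```python
import ctypes
import ctypes.util

# Locate and load the C math library; on some platforms the math functions
# live in libc itself, so fall back to that.
libname = ctypes.util.find_library("m") or ctypes.util.find_library("c")
libm = ctypes.CDLL(libname)

# Declare the C signature so ctypes converts arguments and results correctly.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))
```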


The quote at least contextualises "premature". As it is, premature optimisation is by definition inappropriate -- that's what "premature" means. The context:

a) gives a rule-of-thumb estimate of how much optimisation to do (maybe 3% of all opportunities);

b) explains that non-premature optimisation is not just not the root of all evil but actually a good thing to do; and

c) gives some information about how to do non-premature optimisation, by carefully identifying performance bottlenecks after the unoptimised code has been written.

I agree with GP that unless we know what Knuth meant by "premature" it is tempting to use this quote to justify too little optimisation.


I agree with you, the context changes nothing (and I upvoted you for this reason). However programming languages and infrastructure pieces like this are a bit special, in that optimizations here are almost never premature.

- Some of the many applications relying on these pieces could almost certainly use the speedup, and for those it wouldn't be premature
- The return on investment is massive due to the scale
- There are tremendous productivity gains from raising the performance baseline, because that reduces the time people have to spend optimizing applications

This is very different from applications, where you can probably define performance objectives and say much more clearly what is and isn't premature.


I don't know about that. Even with your programming language/infrastructure you still want to identify the slow bits and optimize those. At the end of the day, you only have a certain amount of bandwidth for optimization, and you want to use it where you'll get the biggest bang for your buck.


This certainly does not mean, "tolerate absurd levels of technical debt, and only ever think about performance in retrospect."


Python was meeting needs well enough to be one of, if not the single, most popular language for a considerable time and continuing to expand and become dominant in new application domains while languages that focussed more heavily on performance rose and fell.

And it's got commercial interests willing to throw money at performance now because of that.

Seems like the Python community, whether as top-down strategy or emergent aggregate of grassroots decisions made the right choices here.


Python had strengths that drove its adoption, namely that it introduced new ideas about a language's accessibility and readability. I'm not sure it was ever really meeting the needs of application developers. People have been upset about Python's performance and how painful it is to write concurrent code for a long time. The innovations in accessibility and readability have been recognized as valuable, and adopted by other languages (Go comes to mind). More recently, it seems like Python is playing catch-up, bringing in innovations from other languages that have become the norm, such as asyncio, typing, even match statements.

Languages don't succeed on their technical merit. They succeed by being good enough to gain traction, after which it is more about market forces. People choose Python for its great ecosystem and the availability of developers, and they accept the price they pay in performance. But that doesn't imply that performance wasn't an issue in the past, or that Python couldn't have been even more successful if it had been more performant.

And to be clear, I use Python every day, and I deeply appreciate the work that's been put into 3.10 and 3.11, as well as the decades prior. I'm not interested in prosecuting the decisions about priorities that were made in the past. But I do think there are lessons to be learned there.


> tolerate absurd levels of technical debt

In my experience it's far more common for "optimizations" to be technical debt than the absence of them.

> only ever think about performance in retrospect

From the extra context it pretty much does mean that. "but only after that code has been identified" - 99.999% of programmers who think they can identify performance bottlenecks other than in retrospect are wrong, IME.


Well it's entirely possible that Knuth and I disagree here, but if you architect an application without thinking about performance, you're likely going to make regrettable decisions that you won't be able to reverse.

It is not possible to predict bottlenecks in computation, no. But the implications of putting global state behind a mutex in a concurrent application should be clear to the programmer, and they should think seriously before making a choice like that. If you think of a different way to do that while the code is still being written, you'll avoid trapping yourself in an irreversible decision.
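A toy sketch of that architectural choice (my own illustration, not from the thread; in CPython the GIL already serializes much of this, so read it as shape rather than benchmark):

```python
import threading

N_THREADS, N_INCR = 4, 10_000

# Design A: every thread contends on a single lock around global state.
shared = {"count": 0}
lock = threading.Lock()

def worker_locked():
    for _ in range(N_INCR):
        with lock:
            shared["count"] += 1

threads = [threading.Thread(target=worker_locked) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Design B: each thread owns its own state; results are combined once at
# the end, so workers never contend while running.
results = [0] * N_THREADS

def worker_sharded(i):
    local = 0
    for _ in range(N_INCR):
        local += 1
    results[i] = local

threads = [threading.Thread(target=worker_sharded, args=(i,))
           for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both designs compute the same answer; only the contention profile differs.
assert shared["count"] == sum(results) == N_THREADS * N_INCR
```

Swapping design A for design B after the fact means touching every call site that reads the global, which is exactly the kind of irreversible decision worth thinking about up front.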


Python isn't premature. Python is more than 30 years old now; Python 3 was released more than 10 years ago.


It's been at least 5 years since I read an angry post about the 2 to 3 version change, so I guess it's finally been accepted by the community.


I would say this quote does not apply here. VM implementations are in the infamous 3% Donald Knuth is warning us about.



