Properly manage PATH for the context you're in and this is a non-issue. This is the solution used by most programming environments these days, you don't carry around the entire npm or PyPI ecosystem all the time, only when you activate it.
Then again, I don't really believe in performing complex operations manually and directly from a shell, so I don't really understand the use-case for having many small utilities in PATH to begin with.
Macro hygiene, static initialization ordering, control over symbol export (no more detail namespaces), slightly higher ceiling for compile-time and optimization performance.
If these aren't compelling, there's no real reason.
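To make the symbol-export point concrete, here's a minimal sketch of a named module (C++20 module syntax; compiler support still varies, so treat the build details as an exercise):

    // geo.cppm -- a named module interface unit.
    export module geo;

    // Not exported: importers cannot name this at all,
    // so there's no need to hide it in a `detail` namespace.
    int scale_factor() { return 2; }

    // Exported: this is the module's entire public surface.
    export int scaled_area(int w, int h) {
        return w * h * scale_factor();
    }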
import std; is an order of magnitude faster to compile than #including the STL headers individually, if that's evidence enough for you. It's faster than #include <iostream> alone.
Chuanqi says "The data I have obtained from practice ranges from 25% to 45%, excluding the build time of third-party libraries, including the standard library."[1]
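For a rough sense of what that buys you, here's a minimal sketch (assuming a toolchain with standard-library module support, e.g. recent MSVC or Clang with libc++; the point is that the whole standard library arrives as one prebuilt module instead of being textually re-parsed in every translation unit):

    // hello.cpp -- standard-library module version (C++23).
    import std;

    int main() {
        std::cout << "hello, modules\n";
    }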
Yeah, but now compare this to pre-compiled headers. Maybe we should be happy with getting a standard way to have pre-compiled std headers, but now my build has a "scanning" phase which takes up some time.
> even senior C++ developers are always going to be able to deduce the correct value category
Depends what "senior" means in this context. Someone with 20 years of domain experience in utility billing, who happened to be writing C++ for those 20 years? Probably not.
Someone who has been studying and teaching C++ for 20 years? Yes, they can tell you the value category at a glance.
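For what it's worth, this is the kind of thing I mean. A self-contained sketch using the decltype((expr)) probe, which yields T& for lvalues, T&& for xvalues, and plain T for prvalues:

    #include <type_traits>
    #include <utility>

    int   x  = 0;
    int&& rr = 0;
    int   f();
    int&& g();

    // decltype((expr)) encodes the value category of expr in its type.
    static_assert(std::is_lvalue_reference_v<decltype((x))>);            // lvalue
    static_assert(std::is_rvalue_reference_v<decltype((std::move(x)))>); // xvalue
    static_assert(std::is_rvalue_reference_v<decltype((g()))>);          // xvalue
    static_assert(!std::is_reference_v<decltype((f()))>);                // prvalue
    static_assert(std::is_lvalue_reference_v<decltype((rr))>);           // rr names an lvalue!

The last one is the classic trap: rr is declared as an rvalue reference, but the expression `rr` is itself an lvalue.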
Language expertise is not something you develop accidentally; you don't slip into it just because you're using the language. Such tacit experience quickly plateaus. If you make the language itself the object of study, you will quickly surpass "mere" practitioners.
This is true of most popular programming languages, in my experience. I find very, very few Python programmers understand the language at an implementation level: can explain the iterator protocol, what `@coroutine` actually used to do, how `__slots__` works, etc.
C++ is not unique in this, although it is old and has had a lot more time to develop strange corners.
The author has succeeded only in arguing one meaningless image factory produces images they find more aesthetically pleasing than another meaningless image factory.
The framing implies they understand little of art at all, beyond gurgling and clapping like a child at the colors and shapes they find most stimulating.
I don't think it is some secret. Many say that art is not just the painting itself, but the process of making it, and the motivation and goals behind it. Generative "AI" has none of that. It does not labor like a human would. It has no motivation, because it is not a thinking being. It has no intention in producing a digital output; it just executes. It derives no meaning from the process of creating. Michelangelo working on something amazing for years, that is something that has meaning.
It is also not inventive. It's rehashing and regurgitating. That point is a bit muddy, because many humans do that too. But ask a generative "AI" to make something new and better than what it has learned from, and you will probably be disappointed.
I am not an art buff, but I can sort of see why one wouldn't consider it proper art.
> Is true art a hermetic endeavour which must be gate-kept to seal out the lesser folk?
Kind of. If everyone on the planet can paint the Sistine Chapel’s ceiling, then it’s not anything special anymore, is it? Especially if it reduces the process to asking the world’s most prolific counterfeit machine to do it for you.
Is art then just the outcome? The artifact that was produced?
What are your criteria, then, for who is allowed to produce art? If allowing everyone to create it lessens its value such that it becomes worthless, there must be a cutoff.
If your goal is to ensure the continuity of human expression, limiting who is allowed to create art and narrowly defining art to great works kind of misses the point.
Well, birthdays are merely symbolic of how another year's gone by and how little we've grown. No matter how desperate we are that someday a better self will emerge, with each flicker of the candles on the cake we know it's not to be. That for the rest of our sad, wretched, pathetic lives, this is who we are to the bitter end. Inevitably, irrevocably. Happy birthday? No such thing.
Reading COW memory doesn't cause a fault (only writing does), so "unused" doesn't literally mean unused.
And even if it's not COW, there's nothing wrong or inefficient about opportunistically allocating pages ahead of time to avoid syscall latency. Or mmapping files and deciding halfway through you don't need the whole thing.
There are plenty of reasons overcommit is the default.
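As a sketch of the "reserve big, touch little" pattern (Linux-specific, assumes 64-bit and the default overcommit heuristic; sizes are illustrative):

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>

    int main() {
        // Reserve 64 GiB of address space. With overcommit this is nearly
        // free: no physical memory is committed until pages are written.
        const std::size_t reserve = std::size_t{1} << 36;
        char* p = static_cast<char*>(mmap(nullptr, reserve,
                                          PROT_READ | PROT_WRITE,
                                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
        if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

        // Only these writes fault in physical pages (one per 4 KiB page),
        // so the process actually consumes about 1 MiB of RAM here.
        for (std::size_t i = 0; i < (std::size_t{1} << 20); i += 4096)
            p[i] = 1;

        munmap(p, reserve);
    }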
People keep saying "O3 has bugs," but that's not true. At least, no more bugs than O2. It did and does more aggressively expose code with undefined behavior, but that isn't why people avoid O3.
You generally avoid O3 because it's slower: slower to compile, and often slower to run. Aggressive loop unrolling and larger inlining windows bloat code size to the degree that it hurts the icache.

The optimization levels aren't "how fast do you want the code to go," they're "how aggressive do you want the optimizer to be." The most aggressive optimizations are largely unproven and stay in O3 until they're shown to be generally useful, at which point they move to O2.
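To be concrete about "exposing UB" rather than "having bugs": the classic demonstration is signed overflow, which the optimizer is entitled to assume never happens. Behavior varies by compiler and version, so this is a sketch of the failure mode, not a guarantee:

    #include <cstdio>

    int main() {
        // Signed overflow is UB. At -O0 this typically wraps and the loop
        // terminates; at -O2/-O3 the compiler may assume i + 100 > i
        // always holds and compile this into an infinite loop.
        for (int i = 2147483600; i > 0; i += 100)
            std::printf("%d\n", i);
    }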
Sure. All I am saying is that there are still plenty of compiler bugs related to optimization, which is reason enough for me to recommend being careful with optimization in contexts where correctness is important.
Sure, I guess? In my experience I turn on the optimizer mostly without fear, because in the rare case I need to track down an optimizer bug, the process looks the same as for any other sort of crazy bug, and this kind at least has a straightforward resolution.
More aggressive optimization is necessarily going to be more error prone. In particular, the fact that -O3 is "the path less traveled" means that a higher number of latent bugs exist. That said, if code breaks under -O3, then either it needs to be fixed or a bug report needs to be filed.
Unity builds have been largely supplanted by LTO. They still have uses for build time improvements in one-off builds, as LTO on a non-incremental build is usually slower than the equivalent unity build.
I would expect a little benefit from devirt (but maybe in-TU optimizations are getting that already?), but if a program is pessimized enough, LTO's improvements won't be measurable.
And programs full of pointer-chasing are quite pessimized; highly-OO code is a common example, which includes almost all GUIs, even in C++.
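For reference, the transformation LTO is hoping to make is the one you can force locally with `final`. A sketch; whole-program devirt has to prove the same single-implementation property across translation units, which is the hard part:

    struct Shape {
        virtual ~Shape() = default;
        virtual int area() const = 0;
    };

    struct Rect final : Shape {          // final: no subclass can override
        int w = 2, h = 3;
        int area() const override { return w * h; }
    };

    int use(const Rect& r) {
        // The static type is a final class, so the compiler may replace
        // the vtable dispatch with a direct (and inlinable) call.
        return r.area();
    }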
Do you link against a version of the Qt library that provides IR objects?
In any case, even with whole-program optimization, I would expect effectively devirtualizing a heavily object-oriented application to be very hard.
Except for `xchg eax, eax`, which was the canonical nop on x86. Because it is supposed to do nothing, having it zero out the top 32 bits of rax would be quite surprising. So it doesn't.

Instead you need to use the multi-byte, general-purpose encoding of `xchg` if you want `xchg eax, eax` to behave like every other 32-bit register write.
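You can watch the difference directly with raw byte encodings (GCC/Clang inline asm, x86-64 only; a sketch for illustration):

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint64_t x = 0xDEADBEEFCAFEBABEull;

        // 0x90: historically `xchg eax, eax`, but architecturally NOP
        // on x86-64, so the upper 32 bits of rax survive.
        asm volatile(".byte 0x90" : "+a"(x));
        std::printf("after 0x90:      %016llx\n", (unsigned long long)x);

        // 0x87 0xC0: `xchg eax, eax` via the general ModRM encoding.
        // Like any other 32-bit register write, it zeroes the upper half.
        asm volatile(".byte 0x87, 0xC0" : "+a"(x));
        std::printf("after 0x87 0xC0: %016llx\n", (unsigned long long)x);
    }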
Yeah, OP is shadowboxing. There is absolutely no need for any of these things.
Tons of open source exists as only source code and a license, nothing else. No docs, no issue tracker, nothing. People who need it use it, learn from it, remix it, whatever, but there need not be any engagement at all from the given repo's maintainer.
Seriously. If I throw something up somewhere, you get a tarball, a README, and no way to get in touch with me. If the code helps you, fantastic! If it doesn't, then I hope you at least got something out of the experience. But "as-is" means what it says on the tin. I'm not sure why people are so hellbent on treating every message from every stranger as important.
Because open source is not just about the code and the license. It is first and foremost about a community of people who want to make software better for everyone, not just for themselves or a select few. The code and license are ancillary to this goal.
I won't get into this discussion again. I'll just say that if you think otherwise, whatever good you think you're putting out into the world is not much better than keeping the software proprietary.
You have this entirely backwards. Open source is, definitionally, the code and a license. It is "first and foremost" those things. The community of people cannot exist without the code and the license. The code and the license can and often does exist without dedicated communities.
Everything else in open source is a cultural projection entirely ancillary to the code and the license.
> I'll just say that if you think otherwise, whatever good you think you're putting out into the world is not much better than keeping the software proprietary.
I have never seen someone so entirely miss the point of open source. This is not a house party, this is not a community support network. There are genuine disagreements about open source philosophy, if it should be more focused on user freedoms or developer convenience, but they are all incompatible with the idea that open-source licensed code in and of itself "is not much better than keeping the software proprietary".
Stallman did not invent the GPL because he wanted an issue tracker and complete documentation from Xerox. He invented the GPL because he needed to fix his printer drivers.
A ton of very important open source code was thrust into the world, created immense value, but was never further supported or developed by its original developers. Off the top of my head: git, Doom, Bitcoin, and basically everything Fabrice Bellard has ever done.
Code existed before FOSS. Code that people collaborated on existed before FOSS. Code given away for free existed before FOSS. FOSS code, by itself, is not anything special.
Licences also existed before FOSS, but licences enabling the kind of freedoms that open source allows did not. And as it happens, a licence is not a technical artefact but a social contract. Stallman is an activist, not simply a neutral combination of a technician and a lawyer.
The social contract and political vision are consequently not ancillary, but core to FOSS. Code is the medium, but the license is the innovation. Without that social contract, 'open' code is just abandonware.
The community doesn't need to be a 'house party,' but the license guarantees the right for a community to form when the original author walks away.
> The community doesn't need to be a 'house party,' but the license guarantees the right for a community to form when the original author walks away.
Which is why the license is the only thing that matters. Without the license you don't have the community. It will happen with some code and not with other code; but without the license, or without the code, it never happens.
The only thing you need to do as an open source software developer is release your code under an open source license. You don't need to respond to or even maintain an issue tracker, you don't need to accept MRs into your upstream, you don't need to care about anyone else using your code.
Open source places no other obligations on a developer other than the license. To say otherwise is to fundamentally misunderstand what open source is.
Maybe you are lasering in on a term we use to describe software, but they are talking more broadly about maintaining open source (lower case, btw) collaborative software.
Though I have to be very charitable to grant your point.
Even your examples support their point of "people who want to make software better for everyone, not just for themselves or a select few". Stallman just cared about code, like fixing his printer, and not a whole social movement?
> Stallman just cared about code, like fixing his printer, and not a whole social movement?
Stallman created a social movement that just cared about code, yes. He needed the social movement to create an environment in which he could fix his printer.
The social movement was about the license and the code, not about providing support for, documentation of, or continuing development of any particular code.
By creating an environment where code is open, you allow for communities to organically form around code and maintain it. Without the environment, without the code and the license, the communities cannot form.
> The community of people cannot exist without the code and the license.
That is obviously false. Communities form around any common interest. They also exist around proprietary software, where no code is shared.
When code is freely available, it is the community of people who make the project successful—not the code, and certainly not a piece of legalese text.
> The code and the license can and often does exist without dedicated communities.
Technically true, but such projects languish in obscurity. They're driven by the will of a small group of people, often the original lone author, and once that diminishes, they are abandoned and forgotten. The vast majority of software which can technically be described as "open source" is mostly inconsequential to computing or anyone's lives. It once scratched the itch of a single person, and now sits unread on some storage device.
Thus, communities are what make software successful. Not just free software, but software in general. We write software for people, and we publish its source code to help others. We do so because software is better when shared and improved by a community of passionate users, rather than written by one or a few people who wanted it to exist.
It's wild that you would bring up Stallman as an example, since everything he's done goes completely against your point. That printer story served as a good example to illustrate to others why free software is necessary—not just for him, or for the team and company he worked with at the time, but for the world at large. He didn't need to invent a social movement and philosophy to fix his printer issues. He probably could've hacked around it and found a solution that worked for their specific case, and called it a day. And yet he didn't. He believed that software could be built and shared in a different way. In a way that would benefit everyone, and not just the people who wrote it. He believed in the power of sharing knowledge freely, of collaborating, and building communities of like-minded people. The source code is important, and the license less so, but it is this philosophy that brings the most value to the world.
> A ton of very important open source code was thrust into the world, created immense value, but was never further supported or developed by its original developers. Off the top of my head: git, Doom, Bitcoin, and basically everything Fabrice Bellard has ever done.
Whether the original developers supported it or not is irrelevant. All of the examples you mentioned are projects supported by someone, and have communities of passionate people around them. That is the point. Individuals may come and go. The author is no more important than any talented and passionate member of the community. But someone cares enough to continue maintaining the software, and to nurture the community of users around it, without which none of these projects would be remotely as successful as they are today.
It is fundamentally true. You cannot have a Pokemon community without Pokemon, a knitting community without yarn, or a software community without software.
> Technically true
You should have stopped here. It is true. Period, full stop. Everything else is fluff.
> The vast majority of software which can technically be described as "open source" is mostly inconsequential to computing or anyone's lives.
This is because the open source software movement was so overwhelming in its success it became the norm.
> He didn't need to invent a social movement and philosophy to fix his printer issues.
Yes he did. The philosophy is about the freedom to fix your printer. It is not about engaging others to fix your printer, or obliging maintainers to fix your printer.
Those things are follow ons to the core philosophy. Once you have the freedom to fix your printer, you can form communities of people also interested in fixing printers. The freedom comes first.
> Whether the original developers supported it or not is irrelevant.
It's literally the only thing we're talking about. Open source enables others to come along and support software abandoned by or simply never championed by its original creator. Without open source you do not have those later "someones".