IMO, the bad parts of Unix are of two semi-distinct kinds: the skin-deep kind, and the genetic defects.
The superficial problems are everywhere, easy to spot, and fun to complain about! The names of commands are obscure, and the flags are inconsistent. Any utility's feature set is inherently arbitrary and its limitations are equally so. Just how to accomplish any particular task is often a puzzle, the tradeoffs between using different utilities for the same task are inscrutable, and the time spent contemplating alternative approaches within the toolkit is utterly worthless. Utilities' inputs, outputs, and messages aren't for humans; they're either for coprocesses or for the control layer running the utility, and so users are supposed to learn to conform to the software, rather than vice versa. There is a hodgepodge of minilanguages (sed, find, test, bc, dc, expr, etc.), but they're unnecessary and inefficient if you're working in an even halfway capable language (let's say anything at about the level of awk is potentially "halfway capable"); and so "shelling out" to such utilities is a code smell. The canonical shells are somewhat expressive for process control, but terrible at handling data: consequently, safe, correct, and robust use of the shell layer of Unix is hard, maybe impossible; so nowadays most any use of the Unix shell in any "real" application is also a bad code smell.
I say these are skin-deep in the sense that in theory any particular utility, minilanguage, or shell can be supplanted by something better. Some have tried, but uptake is slow/rare. The conventional rationale for why this doesn't happen is economic: it's either not worth anyone's time to learn new tools that replace old ones, or the network-effect-induced value of the old ones is so high (because every installation has the old ones) that any prospective replacement has to be loads better to get market traction. I have a different theory, which I'll get to below.
But I also think there's a deeper set of problems in the "genetics" of Unix, in that it supports a "reductive" form of problem solving, but doesn't help at all if you want to build abstractions. Let's say one of the core ideas in Unix is "everything is a file" (i.e., read/write/seek/etc. is the universal interface across devices, files, pipes, and so on). "Everything is a file" insulates a program from some (but not all!) irrelevant details of the mechanics of moving bytes into and out of RAM... by forcing all programs to contend with even more profoundly irrelevant details about how those bytes in RAM should be interpreted as data in the program! While it is sometimes useful to be able to peek or poke at bits in stray spots, most programs implicitly or explicitly traffic in data relevant to that program. While every such datum must be /realized/ as bytes somewhere, operating on some datum's realization /as bytes/ (or, by convention, as text) is mostly a place to make mistakes.
Here's an example: consider the question "who uses bash as their login shell?" A classical "Unixy" methodology for attacking such a problem is supposed to be to (a) figure out how to get a byte stream containing the information you want, and then (b) figure out how to apply some tools to extract and perhaps transform that stream into the desired output. So maybe you know that /etc/passwd is one way to get that stream on your system, and you decide to use awk for this problem, and type
awk -F: '$6 ~ "bash$" { print $1 }' /etc/passwd
That's a nicely compact expression! Sadly, it's an incorrect one to apply to /etc/passwd to get the desired answer (at least on my hosts), because the login shell is in the 7th field, not the 6th, so the test should be against $7. Now, this is just a trivial little error, but that's why I like it as an example. Even in the most trivial cases, reducing anything to a byte stream does mean you can apply any general purpose tool to a problem, but it also means that any such usage is going to reinvent the wheel in exact proportion to how directly it's using that byte stream; and that reinvention is a source of needless error.
Of course the sensible thing to do in all but the most contrived cases is to perform your handling of byte-level representations with a dedicated library that provides at least some abstraction over the representation details; even thin and unsafe abstractions like C structs are better than nothing. (Anything less than a library is imperfect: if all you've got is a separate process on a pipe, you've just traded one byte stream problem for another. Maybe the one you get is easier than the one you started with, but it still admits the same kinds of incorrect byte interpretation errors.) And so "everything is a file", which was supposed to be a great facility to help put things together, is usually just an utterly irrelevant implementation detail beneath libraries.
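To make that concrete, here's a minimal sketch of the library route in Python (my illustration, not something from the original comment): the standard pwd module exposes the passwd database as typed records, so the 6th-vs-7th-field slip above simply can't happen.

import pwd  # standard-library interface to the system's user database

# The same question as the awk one-liner, but asked of typed records rather
# than a byte stream: the library owns the details of how bytes become fields.
for entry in pwd.getpwall():
    if entry.pw_shell.endswith("bash"):   # same test as the "bash$" regex
        print(entry.pw_name)

(A side benefit: getpwall() goes through the system's account-lookup machinery, so it can also see accounts that aren't literally in /etc/passwd, yet another assumption the byte-stream version silently bakes in.)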
And this gets me back around to why I think the superficial stuff hasn't changed all that much: I doubt that the "Unix way" of putting things together has really truly mattered enough to bother making the tools or shells substantially better. I got started on Unix in 1999, by which time it was already customary for most people I knew to solve problems inside a capable language for which libraries existed, rather than to use pipelines of Unix tools. (Back then there was lots of Perl, Java, Tcl, Python, et al.; nowadays less Perl and Tcl, more Ruby and JavaScript.) Sure, you've needed a kernel to host your language and make your hard drive get warm, but once you have a halfway capable language (defined above), if it also has libraries and some way to call C functions (which awk didn't), you don't need the Unix toolkit, or a wide range of the original features of Unix itself (pipes, fork, job control, hierarchical process structure, separate address spaces, etc.).
And that's just stuff related to I/O and pipes. One could look at the relative merits of Unix's take on the file namespace, Plan 9's revision of the idea, and then observe that "logical devices" addressed much of that set of problems as early as the early-to-mid 70s on TOPS-20 and VMS, without (AFAICT) accompanying propaganda about how simple and orthogonal and critical it is that there be a tree-shaped namespace (except that it's a DAG) and everything in the namespace works like a file (except when it doesn't).
My point is that people have said about Unix that it's good because it's got a small number of orthogonal ideas, and look how those ideas can hang together to produce some results! That's all fine, though in practice the attempt to combine the small number of ideas ends up giving fragile, inefficient, and unmaintainable solutions; and what you need to do to build more robust solutions on Unix is to ignore Unix, and just treat it as a host for an ecology of your own invention or selection, which ecology will probably make little use of Unix's Unix-y-ness.
(As to why Unix-like systems are widespread, it's hard not to observe some accidents of history: it was of no commercial value to its owner at a moment when hardware vendors needed a cheap operating system. Commercial circumstances later changed so that it made sense for some hardware vendors to subsidize free Unix knockoffs. Commercial circumstances have changed again, and it still makes sense for some vendors to continue subsidizing Unix knockoffs. But being good for a vendor to sell and being good for someone to use can very often be different things...)
> It's not a normal thing for code that needs that record that wasn't found.
No offense, but I don't know where that idea comes from. Systems are checking for records all the time in ways where the absence of data doesn't necessitate throwing an exception.
For instance, a page, user, or piece of media on a website may have existed at one point but has since been deleted, yet still has a permalink floating around the net. Is it useful, in the case that someone clicks on such a link, to throw an exception when I can instead choose to render different page content when a record wasn't found? I'll never need a stack trace for that.
> They tell you lots: you asked for something your code path wanted and your request couldn't be satisfied. Also, you've been helpfully kicked onto the alternative execution path to handle that situation.
Why would I want a code path for expected behavior? I agree for cases like a failed connection, where the system is actually broken, but there isn't anything fundamentally broken about data absence.
> Also, you've been helpfully kicked onto the alternative execution path to handle that situation.
That's not only presumptuous, but if I wanted that to happen, I can do so myself.
> Exceptions are a hell of a lot better than littering your code with null checks or error code checks, especially when you forget one and get a null pointer error.
Hmmm...
const record = store.findRecord(params.id);
if (record) {
  render('show-record');
} else {
  render('record-not-found');
}
I feel like floating point is a completely separate branch of programming that a lot of coders never use. When they do, it's often a mistake, like for currency. Yet floating point is very useful for scientific calculations and simulations, which is what computers were all about for the first couple of decades.
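A quick Python illustration of the currency point (my example, not the parent's): binary floating point can't represent 0.1 exactly, which is why decimal types are the usual answer for money.

from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so sums drift:
print(0.1 + 0.2)                          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                   # False

# A decimal type keeps exact cents, which is what currency code needs:
print(Decimal("0.10") + Decimal("0.20"))  # 0.30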
I have a suspicion that on a fundamental level, floating point isn't actually good for games or machine learning. They're just used because existing computers are so good at floating point number crunching, especially GPUs.
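For what it's worth, the usual alternative is fixed-point (integer) arithmetic. Here's a tiny Python sketch of a Q16.16 format (my illustration; the names and the format choice are assumptions, not anything from the comment above), the kind of representation games have reached for when they needed exact, machine-independent results:

# Q16.16 fixed point: 16 integer bits, 16 fractional bits, stored in an int.
SCALE = 1 << 16

def to_fixed(x):
    return round(x * SCALE)

def from_fixed(a):
    return a / SCALE

def fmul(a, b):
    # The product of two Q16.16 values carries 32 fractional bits;
    # shift back down to 16 to stay in format.
    return (a * b) >> 16

pos = to_fixed(1.5)
vel = to_fixed(0.25)
pos += fmul(vel, to_fixed(2.0))  # advance by vel * dt, with dt = 2.0
print(from_fixed(pos))           # prints 2.0: exact, and identical on every machine

(Quantized integer inference in ML is a similar move away from floats.)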
The lack of deep friendships feels like a 3-fold problem.
1. You can't ever be real; if you are real, you are likely to be recorded doing something, on the largest stage in the world (the public web), that someone somewhere will disapprove of, and someone else will raise their own profile by mining your impiety to prove their own concern and moral superiority.
2. Everyone is so mobile and connected online, they never have to break the ice and talk to those around them in the breakroom or geographical space, so all of our social skills have atrophied at best, or were never learned at worst. We know just enough civility to not get in fights, but we don't know how to easily break the ice or become acquaintances.
3. All the people that live in the cities are not close with each other, they didn't grow up together and don't go to church / rotary club / male-only spaces any longer because we are all supposed to pretend to be cool liberated yuppies in a hookup culture. Can't have real ties or any strongly held beliefs, that would make you religious (or worse, Religious on an actual religion), those people are bad. So I'm okay, you're okay, and we all smile. And inside, no real connections are ever made.
Not to mention testosterone levels dropping, schools being geared towards women, always co-ed spaces, and a breakup of younger and older generations because of cultural differences there too...not that the old people are always nice.
> The real problem is acceptance of non-word/latex papers
Some scientific journals, which only provide a Word template, require you to print to PDF to submit, then ship this PDF to India, where a team recreates the look of the submission in LaTeX, which is then used to compose the actual journal. I wish this were hyperbole. For these journals, you can safely create a LaTeX template that looks _almost_ the same, and get away with it.
> Is there academic research on this by any chance
Read "Secrets of Sand Hill Road".
It's written by Scott Kupor - one of the Managing Partners at A16Z - and explains in depth how the VC model works.
It's also required reading if you want to interview at a VC fund, and highly recommended by other founders to learn how to better strategize when raising funds.
Secrets of Sand Hill Road, Monetizing Innovation, and Working Backwards are the only books you need to successfully understand how to raise and build a VC funded startup. The pricing strategy guide (by Sequoia, I think) is also useful.
How can a fully backed coin offer 5-8% interest without being a scam? The math doesn’t work out since you can’t get 5-8% yields from short term treasuries.
Why do people use Tether scamcoin over Gemini stablecoin GUSD? It's supposedly backed 1:1 GUSD:USD and has US-regulated books. I get 5-8% interest on it, and it hasn't had an issue in the handful of years it's been issued.
You might be right. Posted by a KF user regarding the threat:
"It's a 2020 account that wasn't active till a month ago with 1 post in the CWC forum and the other 42 in the keffals thread. The post was deletedly nearly instantly, yet within 10 minutes of it being posted Keffals had contacted CF, CF pulled the plug, and articles (which you can find in A&N right now) were being posted. Also it's notable that Keffals removed the quote/reply portion of the post which he accidently revealed before indicating he has an account here. This was so obviously coordinated, it glows more than nuclear blast."
One can perfect the software without perfecting the code.
A bad overall architecture is unmaintainable.
An ugly 30 line pure function that has unit tests and hasn't needed changes in 5 years is just meh. 50 of those ugly pure functions are still not going to break a project. They could be untangled at any time as needed.
50 decent but not great functions are definitely not going to break a project.
So much perfectionism is at the level of functions and variable names. Some of it makes things worse, as when people reinvent nontrivial functionality instead of using widely trusted libraries, as if work projects were their personal playground for weekend educational projects.
There needs to be an anti-patent-troll membership organization. You pay a fee relative to some metric and the organization acts as insurance against patent trolls by fully defending any patent lawsuits that are obviously unjustified. And to keep costs low, membership in this organization would be public to deter patent trolls from even trying to sue a member in the first place.
Not necessarily, sure. But the odds that you just happen to know some fact that "obviously falsifies" a decades old scientific controversy are very close to zero.
In fact, if you are right, you should write up a paper. There are numerous journals that would be thrilled to resolve this controversy once and for all.
I'm a moderator at Julia's discourse site. We've seen this happen and we're working on improving this, but I'm also (biased and) sympathetic to the "Julia community" writ large.
Julia's optimal workflows are different from what many expect — especially for folks coming from static languages. Julia is a dynamic language. But also, Julia isn't Python. There are ways in which your workflows from other languages just aren't optimal in Julia. So when someone comes in "hot" and rants about how their workflow isn't working out how they expect, others jump in and — yes, defensively at times — point out alternatives that work for them.
I think some of this tension comes from an expectation mismatch. Some of the changes noted in these comments about how Julia is presented on julialang.org were made specifically due to this sort of expectation mismatch.
This doesn't mean that nobody in the community cares about improving workflows — in fact I know that's not true. But if you want to use Julia today, there are definitely some happy paths that we should guide folks towards.
See the tracking consent page on https://basecamp.com/? No? That's because there isn't one.
Every time you see one of those cookie popups it is a sign, right there front and centre, that the website you are trying to use is trying to play fast and loose with your data.
Complaining about these notices would be like complaining that restaurants are forced to put up a sign on their front door "Kitchen employees don't wash their hands" when they get caught not doing so.
> This is very reminiscent of Google/YouTube circa 2006.
In that it was a startup acquired by a big company? Yep.
> When Google bought YT it was a small team of people and a pretty nascent product that people really loved, and the usage numbers were out of control.
Like most startup acquisitions, the team size is relatively small and there is significant traction in the market with headroom to mature their footprint.
> They left the product mostly untouched and let it grow on its own. Though there was major criticism at the time, it is one of the best tech acquisitions of the past decade.
This is not going to be one of the best tech acquisitions of the next decade. YouTube helped to propel Google into content. It also helped to commoditise web video in a massive way: reminiscent of the way in which Google commoditised search (YouTube is probably just short of being a byword for online video at this point).
Instagram is a photo service in a sea of other photo services. Photography has been around on the web in meaningful ways for a long time. Flickr lost out to Facebook in the community stakes, and Instagram is doing great in whatever-the-fuck market it's in (the share-to-my-twitter-followers market?), but this is not Google acquiring YouTube.
Yeah I mean it's mostly interesting to see that at $0/mo Onavo was a fantastic deal for FB and they are willing to pay users at least $20/mo for the same quality data. I wonder what price this instrumentation is worth to them if $20/mo and PR risk was okay -- like, what is the upper bound on good quality iOS Onavo data?
Seemed like the WSJ described this tool pretty well [ https://www.wsj.com/articles/facebooks-onavo-gives-social-me... ] -- reportedly Facebook employees can just plug in "Snapchat" into the Onavo metrics and see "we estimate [XX] MAU, declining [Y%] year over year and [Z%] month over month", and they can use this info to short/long SNAP or to prioritize building/buying a competitor. Such a great idea.
I do feel bad for whatever PM in Facebook (on Project Atlas or whatever) has been watching this news for the past few days and saying "whoa, this seems disproportionately unfair, given that Google and others do the same thing on iOS". I'm just wildly speculating here but that project team is probably getting a firsthand lesson in the "New York Times test" rule: if what you are doing were published on the front page of the NYT, would you regret it? (This is a particularly rough area because I think a lot of current employees probably feel like the NYT and peers have some kind of vendetta against them and probably don't really understand the hostility.)
I wonder if there is a competitive advantage to be had there? UMaine is a pretty good system. When I retired, I taught math at UMF for a little while, so I got to see the system from the inside.
UMF has one of the best teaching programs in the country. They even have a CS path that is well regarded.
Maybe recruiting at the small State universities is a potential edge?
Of course there aren't, because BMW absolutely squashed the possibility of that at the outset. But Google, apparently, deserves a $5B fine because they made the mistake of powering AOSP phones, Amazon's products, countless variations in China, set-top boxes, etc.
It is incredible seeing Google being seriously attacked on here for having an open source path as well. I feel like this is some sort of alternative universe.
- Google had an operating income of 32.9 billion in 2017 [0]. A $5B fine for anti-trust practices is 15.2% (5 / 32.9) of that.
- HSBC had an operating income of 63.8 billion in 2017 [1]. In 2012, HSBC suffered a $1.9B fine for _willfully_ relaxing its anti-money laundering filters in order to profit from the illegal drug industry, regimes that are embargoed, and entities or individuals who are suspected of financing terrorism [2]. That's a 1.9 / 63.8 = 3.0% fine for helping finance murderers.
That puts the Google fine into perspective, but it probably says more about the HSBC case than it does about Google's.
It should be noted that HSBC struck a deal to avoid prosecution, thanks to then-Attorney General Eric Holder, who overruled prosecutors' recommendation to pursue criminal charges. As part of the deal, HSBC confessed to the above allegations, which were just a theory at the time.