Those that keep trying to make a point of Android being UNIX never really shipped anything on the Play Store.
Yes, it uses the Linux kernel and several components born in the Linux world, and that is about it.
Only Google, OEM partners, and those who root their devices get to see that layer.
Nothing in the Android userspace, the Java and Kotlin frameworks, or the official NDK APIs has anything to do with Linux in any way, shape, or form.
As for putting UNIX on a pedestal: most of the so-called culture circulates in nerd circles, was hardly ever noticed on big-iron commercial UNIXes, and now that most of those have faded away, it gets passed along as a kind of old myth told on cold nights around the fire.
As for the technology, the UNIX-Haters Handbook exists for a reason. Had AT&T been allowed to sell UNIX from the get-go, it most likely wouldn't have picked up steam in the market.
Source tapes being available for symbolic prices, compared with the real cost of a commercial OS in the 1970s, is of course a different matter.
I find it great that at least mobile OSes and cloud infrastructure nowadays focus on other stacks instead of being yet another UNIX clone; there is no need for an OS monoculture stuck in the past.
Well, you can get a shell on an Android phone through the USB developer tools (adb shell), and you get a bunch of Unix commands like ls, df, etc.
Anyway, what's wrong with Unix is that it assumes users need to be protected from each other (good) but that installed binaries can somehow be trusted (bad). We're in the internet age now, where we download so much executable code that we need better sandboxing as part of the OS.
So what? Regular users (whatever they are) don't know or care that a Mac is Unix under there somewhere.
At moments like these I feel the loss of the Nokia N900, which was glibc, GTK+, Linux all the way down and up. Best phone I ever owned, by a margin. Also arguably not a machine owned by Apple/Google/Samsung/etc., whose purpose is surveillance of the schmuck (me) who just paid for it.
Its UNIX influence is a Pyrrhic victory, without any real value, as Google can completely replace it with Fuchsia any time they feel like it, and those regular users won't even notice.
For the same reason that is behind complaints about undefined behaviour in C/C++: because there are people who can do, and people who can't. Those who can't need to complain in order to relieve their frustration.
“Some people who can use other systems fine, can’t use my preferred system” is possibly the worst defense of a system. It’s bad. It’s not something to be proud of.
Secondly, unless you are doing classical VM deployments, the only thing we care about is type 1 hypervisors.
Whether the language runtimes are running directly on top of a type-1 hypervisor, some kind of container, or bare metal is mostly irrelevant for most languages, with the exception of C and C++ and their direct dependency on POSIX-related APIs.
Taken to the extreme, in serverless workloads you can even upload the code and have it magically transformed into a deployment, mostly using managed languages that are OS-agnostic.
At the end of the day, whether those type-1 hypervisors are self-contained or need a guest OS as a management node is completely transparent to the managed runtime workloads.
> with exception of C and C++ with their direct dependency on POSIX related APIs
...why would POSIX be special if the POSIX layer is statically linked with the application and is just a translation layer to whatever lies below? The same way musl works on Linux as an alternative C library that bypasses glibc and instead uses syscalls directly to talk to Linux.
(also, technically the POSIX standard doesn't have anything to do with the C and C++ standards)
The POSIX standard definitely has something to do with C standards:
> Parts of the ISO/IEC 9899:1999 standard (hereinafter referred to as the ISO C standard) are referenced to describe requirements also mandated by this volume of POSIX.1-2017. Some functions and headers included within this volume of POSIX.1-2017 have a version in the ISO C standard; in this case CX markings are added as appropriate to show where the ISO C standard has been extended (see Codes). Any conflict between this volume of POSIX.1-2017 and the ISO C standard is unintentional.
Well ok, the two standards are overlapping where it makes sense, I stand corrected there.
But nothing prevents me from writing C or C++ programs on Windows which don't have a single POSIX (or even libc) call in them, but exclusively use Windows system DLL calls; those are still entirely valid C or C++ programs (although I'm aware the situation is muddy for some libc calls which some compilers treat like builtins, e.g. memset and memcpy).
It's a system inside a system shipped as an app, isolated from other apps and with no integrations with the wider operating system. It's not far from WSL or a VM.
It forks and execs new programs using actual Linux system calls. It's a real Linux terminal emulator that opens and drives /dev/ptmx. Not sure what else it needs to be real Linux.
Termux is very well integrated. Termux:API allows access to phone sensors, location, and cameras; sharing data with Android apps; issuing notifications... all from the command line. Things like xdg-open work fine.
>Those that keep trying to make a point of Android being UNIX, never really shipped anything on the Play Store.
An old wise saying goes: never say never.
>Yes, it uses the Linux kernel, and several components born on the Linux world and that is about it
It's the OS engine of the world's most popular mobile software framework, namely Android, not unlike how Unreal is the game engine behind the world's most popular 3D game, namely PUBG. Is it wrong to claim Android is Unix, as PUBG is Unreal?
>Only Google and OEM partners get to see that layer, and those that root their devices
I think the sibling comments have already answered these allegations.
>As for the technology, the UNIX haters book exists for a reason, had AT&T been allowed to sell UNIX from the get go, and most likely it wouldn't have picked up steam on the market.
Haters gonna hate, but the most ardent Unix hater, the author of the most successful desktop OSes VMS and Windows NT, namely the legendary David Cutler, did copy many of the "flawed" Unix ideas into modern Windows.
>recovered UNIX zealot
Personally I have never met a Unix zealot, but never say never.
The classic example is that the original VMS (of which Dave is one of the original authors) has several types of native files, but modern Windows NT has forgone this many-types approach (e.g. record and indexed) for a simpler flat-file approach similar to Unix.
I guess. I'm not sure I buy that just removing some shitty idea from a system counts as copying a system that never had it in the first place.
The actual meat of how the filesystems and namespaces work (and Unix has several types of files: block and character special, to start with) is still wildly different between the two systems.
Remember that Ken Thompson and Dennis Ritchie were part of the original Multics project, and the Unix name was a joke, or more exactly a pun, on the former.
I strongly believe that the absence of complicated file structures in Unix is by design, not by chance, though at the time it was perhaps mainly due to hardware limitations. But since we now have computers with TBs of storage and RAM (that's not a typo), and apparently no OS after VMS has copied the record and index file structures, they were probably a very bad idea. The fact that modern OS textbooks don't bother to cover them in detail, but only in passing mention, is really telling.
> I strongly believe that the non-existent of complicated file structures are actually by design not by chance by Unix authors, and perhaps at the time is mainly due to the limitation of the hardware
And Unix was just a return to the basics of CTSS, which was also the precursor to ITS. This is not to mention the numerous micro OSes independent of Unix that didn't live in the bubble of Multics. I don't think Unix was really original in that regard (but it won, so everybody just assumes they did everything first), and claiming that everything they undid from an all-but-forgotten, overengineered ancient system was an invention is a bit contrived.
“In most ways UNIX is a very conservative system. Only a handful of its ideas are genuinely new. In fact, a good case can be made that it is in essence a modern implementation of MIT’s CTSS system. This claim is intended as a compliment to both UNIX and CTSS” - dmr
I wonder how much of the hate Windows gets is simply because Windows has nothing to do with Unix's bloodlines and heritage.
Aside from cursory accommodations like the Microsoft POSIX Subsystem (what we would in theory call the Windows Subsystem for POSIX if it existed today), that is.
Windows gets hate because as a desktop OS it's an unreliable user-hostile experience with obvious security vulnerabilities.
Unix gets hate because as a command line OS it's an unreliable user-hostile experience with obvious security vulnerabilities.
They're both examples of worse-is-better, and (mostly) ignore the history of mainframe operating systems with more sophisticated security and reliability and - sometimes - a friendlier shell/UI.
My disdain for Windows has nothing to do with that. It comes from trying multiple operating systems. From a development and usability standpoint I do not find Windows to be good, based on personal usage, IT management and consulting, Internet hosting, and development of full-stack, desktop, service, and embedded applications.
Open-file locking built into the OS is not a good feature. I remember sysadmin days of having to wait to delete an empty folder because Windows Explorer kept thumbs.db locked with no other files left in the directory, whereas UNIX environments let me delete an open file, replace it, or rename it. Nothing like downloading a PDF and having to rename it because its file name is a GUID: renaming it based on the PDF's title, in Windows, requires using Notepad to type the new file name, closing the PDF, and then renaming. UNIX removes the need for Notepad or a text editor to do this; just rename it while viewing it.
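A minimal sketch of the Unix side of that difference, assuming a Linux/Unix shell; the file names are made up, and `tail -f` stands in for a viewer holding the file open:

```shell
# On Unix, an open file can be renamed or unlinked while a process
# still holds it open; the directory entry and the inode are separate.
printf 'dummy content\n' > 9f8c0a.pdf   # stand-in for the GUID-named download
tail -f 9f8c0a.pdf > /dev/null &         # the "viewer" keeps the file open
viewer=$!
sleep 1                                  # give the viewer time to open it
mv 9f8c0a.pdf report.pdf                 # rename succeeds despite the open handle
rm report.pdf                            # unlink succeeds too; the inode is only
                                         # freed once the viewer closes it
kill "$viewer"
```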
A means to run a service and also have that service run a GUI application as the currently signed-in user is simple in Unix, where Windows requires a bunch of API hoops. I recommend reviewing the UltraVNC source code if this needs to be done.
WinForms and WPF touch event design is broken. I had to circumvent the API for standard message pump parsing to work around faulty drivers and WPF latching on to UI elements after the fingers had left the screen.
I have hit edge cases where Windows backwards compatibility breaks with new features, and had to hack together a virtual keyboard as a workaround. SetForegroundWindow() breaks with UWP while operating Windows in tablet mode, which we needed since the main applications run on a touch screen without any other physical inputs. This would be BringToFront() in WinForms too. This edge case is not listed in the Windows documentation either.
The irony is that Windows 10 in tablet mode has the best All Applications display, utilizing the whole screen, versus standard desktop mode with single-column infinite scrolling.
The registry is just a bad design. There is no way to recover a corrupt hive unless there is a backup, whereas UNIX configuration files are easy to reconstruct if they get corrupted. I have never had a UNIX system that could not be revived, where Windows must be reinstalled.
All of this grew out of continual use, from DOS to Win 3.1 to Win 10, and recently a trial of the terrible Win 11 regressions. You cannot even place the taskbar at the top of the screen, as you could from Windows 95 through Windows 10.
Windows gets hate because it's a de facto monopoly on enterprise computing. Most medium-sized businesses or organizations don't really have much choice but to go with the industry standard. It's natural to feel resentment against an oppressive monopoly. POSIX compatibility has nothing to do with it.
I don't believe that's correct. macOS has been a certified UNIX for ages, but to my knowledge Apple has never claimed nor sought to have their "smaller" operating systems certified as UNIX.
This article just keeps repeating the two concepts of Unix technology and Unix ideas but doesn't clarify what it means by either. The links, seemingly complaining about iOS or Android, don't really clarify it either.
It seems many want Unix technology or ideas to be about what Unix meant in the 70s and 80s and don't have much interest in moving things forward from that plateau. (To be clear, I think all the major operating systems of today leave a lot to be desired, but each one has their own pros and cons.)
Aside from the file system parts, my understanding of the idea of Unix is that it is a bunch of small utilities interacting with each other. I think almost anyone agrees with that idea; see functional programming and node-based tools, which link up data between dedicated functions and nodes, respectively. This idea is quite pervasive, and I don't think it was necessarily Unix that conceived of it. But in reality, the implementation of this idea in Unix leaves a lot to be desired. The collection of utilities has very few standards for interfacing, documentation, naming, usability, and, more importantly, data structures. The idea that everything should be text is implicit and highly limiting. This is why programming languages that are somewhat OS-like and have their own scripting are so well liked: they remove the need to worry about the various OSes' implementations of these ideas, leaving one to stay at a virtual machine level. For example, Elixir and the BEAM.
It's typically viewed as blasphemous, but I think the PowerShell model is the more powerful one if we're just considering OS-provided interactions. But even then, restricting oneself to the terminal is limiting. I find the ideas in Smalltalk to be much more powerful and forward-thinking, despite being as old as Unix itself.
The communal spirit of Unix as described by Ritchie back in the day revolves around minicomputers (aka PDP-11) shared in a single organisation, accessible via multiple terminals. A close community of users would log into the system and would eventually come up with their own small tools for a particular purpose. Those oddly-named tools would be adopted by the small community without need for extensive documentation. This is the "Unix spirit" of the 1970s that everyone seems to be referring to.
And I'm disappointed that things have stagnated so much. I mean, cool, a command line is a useful concept that sometimes fits a given task excellently well. But why in the hell are we still emulating a VT100? How is a modern Linux console still stuck at a 50-year-old level of functionality?
And why are we scripting in something as painful as Bash?
I agree with what you've said here and in other comments. Terminals are useful, to a degree, but there's so much fruit to be had. And I don't see why we need to be stuck in terminals (I guess terminal emulators is more accurate) as the primary mode of interaction.
Warp is interesting enough, but it's a shame it isn't cross-platform to begin with, and I'm dubious about the business model of a terminal product. I think the best hope is that they're purchased by Microsoft, and it gets integrated into VS Code. It also continues to lean into the terminal idea, although it does approach it from scratch, which is nice.
I think the idea of notebook type interfaces is perhaps underrated for scripting purposes. Microsoft's Polyglot Notebooks and .dib format addresses this.
Terminals are a central idea affecting Unix process control and hierarchy, and signal delivery. Emulating them was the easier decision that didn't require any changes to the rest of the system. You introduce a virtual terminal device and the rest stays the same. We are still trapped in those basic abstractions of the 1970s without even noticing them, building shiny stuff on top.
But why, in 2023, are people still coming up with fancier versions of "top" that try to draw graphs in text? Every modern computer boots to a graphical console. Every terminal emulator runs in a graphical environment. Wouldn't it be much nicer and much less awkward to actually use pixels?
Reducing everything to a monospaced text character is a beautiful and simplifying constraint, IMO. It gives me as a user a sense of predictable consistency that I really appreciate.
Because the UNIX terminal model is mostly "good enough" (and with a recent incarnation like zsh plus Fish-style autosuggestion-plugin the user experience is very different from a plain VT100 terminal from the 70's).
PowerShell isn't such a big revolution in user experience that it justifies switching IMHO (and FWIW I don't agree that objects are better than a plain text stream, at least not when the whole thing stops in the terminal - it would probably be different if every Windows application - including all UI applications - would expose their internals as "PowerShell-scriptable objects", then we could probably get an actual productivity jump by glueing small, specialised UI apps together - this was a central idea in the last days of the Amiga which didn't make the jump to other platforms before its demise).
Same for the concept of a hierarchical filesystem. Sure, there may be better ideas floating around, but when confronted with reality they are usually not so much better than the "good enough" status quo that current operating systems have arrived at.
TL;DR: long-term incremental progress usually beats revolutionary ideas that are poorly executed.
Think of doing that in classic Unix. How does that command output a tree-shaped structure, where items have sub-items? How do you parse that? How do you make sure that your parsing doesn't get confused by the presence of an unexpected = or :? How do you deal with the fact that some certificates may have extensions?
Oh, we actually have commands that do that sort of thing: GPG does it.
And it's awful. It's colon-separated, so I'm not sure what happens if somebody puts one in their name. And you have to count fields. And it's completely non-extensible; good luck inserting an extra column. But hey, you can parse it with awk.
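For illustration, the field-counting style looks like this (the record below is invented for the example, not real `gpg --with-colons` output):

```shell
# Fields are addressed purely by position; a ':' inside a value would
# silently shift every field after it, and inserting a new column
# renumbers everything downstream.
printf 'uid:u:Alice Example:alice@example.com\n' |
awk -F: '{ print $3 " <" $4 ">" }'
# prints: Alice Example <alice@example.com>
```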
Point taken, but there's nothing prohibiting commands from outputting their data in a different text format that's better suited for tree-like data (like JSON), and another set of minimal commands which processes JSON-formatted data - easy extensibility by the user is also part of the "UNIX philosophy".
That's also the point where I'd reach for Python or Deno/TS to write such command line tools, because both Bash and C are not really well suited for text processing - and neither are the traditional UNIX tools like sed or awk IMHO.
JSON would be better, true, but PowerShell actually has objects, so dates are actually dates and not just a string containing a date. So you can actually do timezone conversions and calculations on what you get directly, rather than having to feed the date string into a parser.
But yeah, JSON would be a good start. But for some reason there seems to be little interest even in that. Wouldn't it be nice to have `df --json` or `mount --json`?
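Purely as a sketch of what such a flag could enable: the JSON shape below is invented (df has no `--json` today), with jq doing the filtering that would otherwise need fragile column counting:

```shell
# With JSON, fields are selected by name instead of by position, and new
# keys can be added later without breaking existing queries.
echo '{"filesystems":[{"target":"/","use_pct":83},{"target":"/home","use_pct":41}]}' |
jq -r '.filesystems[] | select(.use_pct > 50) | .target'
# prints: /
```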
And that makes me sad, because the system could be so much better if it moved forward a bit instead of being stuck in the 80s.
Honest question: How do you do discovery on PowerShell objects?
That is, with the Unix approach, when I'm trying to do something ad hoc I can do one step of a pipe, read the results on the terminal, see what the format is, and figure out how I have to set up the next step of the pipe. How do I do that with PowerShell?
Unix was a couple of guys in an attic. Hack was one guy. Unix was collaborative, personally collaborative, between some extraordinarily intelligent people. If you read Kernighan's memoir, it talks about the second-version problem. Unix solved a problem. GNU solved a problem. Linux solved a problem, and WSL2 solved a problem. Using a hammer to rid your house of ants is a solution that does not fit the problem. I do not consider a hammer harmful in itself, if it's used to drive nails. "Everything needs a tersely named tool" is outdated: useful for troubleshooting, but outdated UX/UI. We need to continue with our tools and develop them further, but not with extraneous bells and whistles. (This bit of wisdom came from a 5th grader, who now has a degree in Political Economics, and I do not understand a single thing he says.) Read Kernighan's memoir.
What's often lost in a simple review of the history is that in modern times certain elements of the Unix philosophy [1] have become more important than ever. In fact as technology becomes more pervasive I would argue that the ideas it encapsulates are no longer only relevant to computer architecture, they're now relevant to the survival of a free and open society. This isn't just about how to build the best widget anymore (though I still think the Unix philosophy is a great contender for that).
In particular I'm referring to the principles of composability and universal interfaces. These principles make the Unix philosophy inherently democratizing - when you apply it you're producing something that's intended to interoperate with other agents in a diverse ecosystem beyond your control. This is in stark contrast to the monoliths that tend to emerge from proprietary software development where the socioeconomic goals are to control everything strictly and limit interoperability because there's commercial value in having a "moat" or a "walled garden" which people can't escape.
As computers become interwoven with our lives and our society the general purpose computer has become a tool like the printing press. The printing press was a precursor for democracy as we know it because it enabled the dissemination of information on a scale that was previously impossible.
What sort of social progress we might ultimately achieve through the general-purpose computer is yet to be fully determined. But in the modern era we have many examples of what the competing philosophy of proprietary monoliths looks like: it's a hellscape where your time and attention are harnessed by the monolith's owner to serve his own ends, even to the detriment of your health and well-being [2].
Facebook and Instagram wouldn't be killing our children if they had been designed around the Unix principles of composability and a universal interface. How many people have lamented their inability to take their data, content and relationships away from Meta's silo? It's a constant refrain in our industry that people wish they could get away from a service that started out great and gradually "enshittified" - had these services been designed with the Unix philosophy in mind, it would be possible.
Either way, the Unix philosophy is now just as much about your human freedom and well-being, as it is about whether your code works well.
Stallman of course saw this coming decades ago and predicted with disturbing prescience - that "either the user would control the program, or the program would control the user."
The principles are good, the implementation sadly is not.
Plain data streams have outstayed their welcome. Pretty much nothing but the classic old Unix tools communicates with plain text streams. Smarter, new commandline tools tend to provide at least JSON, but there's plenty that don't and that makes them absolutely painful to reuse. I've written large amounts of code that amount to running a process, parsing its output, and then translating that to the actual API I want: methods and events. Basically all that work is a waste that doesn't actually get what I want done.
Of all things, Microsoft had a better idea with Powershell: it does the same concept, but far better.
The internet has seen something similar: a huge number of users take a TCP socket and proceed to build an improvised messaging protocol on top, because that fulfills the needs of many applications far better, yet the API doesn't actually provide it.
IMO, there's been too much Unix-philosophy worship. We've stuck with an early, simple idea, and in the long term it's a huge detriment. Since it's so lacking, everyone builds their own proprietary hacks on top.
Fully agree.
Also, jq exists: a simple example of how open and simple interfaces (text streams) allow for extensibility. There is a reason why so much is built on top of "simple terminals".
Not only Microsoft: PowerShell's ideas can be traced back to the Xerox PARC REPLs across Smalltalk, Interlisp-D, and Mesa/Cedar, or the keyboard+mouse+REPL from Oberon.
Also some influences from AmigaDOS and IBM's REXX.
> Stallman of course saw this coming decades ago and predicted with disturbing prescience - that "either the user would control the program, or the program would control the user."
Do you have a source for this quote? Best I can find is this text from 2013:
The issue with "Unix", both as a technology and as an idea, is that it has remained substantially incomplete. It never developed the next layer(s) that would help embrace all users, not just the technical ones. The fact that the project/ideology/mindset is old does not mean it is rounded, finished, and fulfilled.
"Unix" missed the popularization of computing not once but twice: first with the infamous "Linux desktop", then with the mobile revolution.
There does not seem to be a technical reason why this "completion" has not happened. The history of computing is idiosyncratic, very far from exploring the phase space of possibilities, let alone converging on optimal ones. Random economic and regulatory/political constructs helped fill the gap in all the abominable ways that dominate today's landscape.
Yet Unix 2.0 - the human-centered version is just a finite number of keystrokes away...
> There does not seem to be a technical reason why this "completion" has not happened.
I'd say there is. Unix is a wonderful toolkit of tools that all work together in the most wonderful way. You can build your own perfectly custom anything by piecing tools together exactly how you want. I can't think of a more satisfying playground for a technical person.
Thing is, the nontechnical users don't want that. So building something on top of Unix that can "embrace all users, not just the technical ones" will by definition hide or remove nearly all the interesting lego-like interlocking tool functionality.
So at that level, the Unix desktop is an impossible contradiction. Sure, you can build plenty of desktops on top of Unix, and plenty have been written. But at that point it isn't quite Unix anymore, unless you ignore the desktop and open an xterm.
I am not sure that follows. Maybe the "hiding" part in some sense, but without any negative impact. One approach is to pre-package a set of components for the tasks people use frequently and for which they expect to have a simple interface. Such an arrangement does not prevent power users from clicking on the proverbial "developer" option and unlocking the components, creating new mashups.
For sure such a natural and intuitive higher level interface is non-trivial to design. It would be as important an innovation as the underlying unix itself.
Early on, Unix was in use across the phone company and normal people used it for their every day tasks. All Unix users read the manual. Otherwise you're not really a Unix user.
If we analyze the computational complexity of software architectures the way we do regular algorithms, we see that UNIX was much more efficient than previous OSes/approaches.
A narrow waist is an idea or interface that gives you O(M x N) functionality for O(M + N) code. It's a way to reuse code and data by interoperating.
If you can't interoperate, you end up writing bad versions of the same application code, over and over again.
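The arithmetic can be sketched in a few lines of shell: three producers and three consumers (six small programs, O(M + N)) compose into nine working pipelines (O(M x N)), because byte streams are the one shared interface. The particular commands are arbitrary:

```shell
# 3 producers x 3 consumers = 9 combinations from only 3 + 3 programs,
# with no pairwise glue code anywhere: each side only speaks "text".
count=0
for producer in 'ls /' 'env' 'echo hello'; do
  for consumer in 'wc -l' 'sort' 'head -n 1'; do
    $producer | $consumer > /dev/null && count=$((count + 1))
  done
done
echo "$count working pipelines"
# prints: 9 working pipelines
```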
More concretely, I remember there was this whole "re-architect google3" idea >10 years ago, which seemed to get brushed aside for the cloud. I think they were trying to have more of a narrow waist, not 10 different distributed databases made by 10 competing teams, 10 different auth libraries, etc.
The management's perception then was that product dev was extremely slow and had poor-quality results.
That video is about the application side of Google (e.g. docs and maps), but the internal side had similar architectural problems.
---
As another example, it seems like the dev tools group at Google wanted this article to be published, and I have no idea why:
Google Is 2 Billion Lines of Code—And It's All in One Place
It's saying Microsoft Windows has 50 M lines of code, while Google has 2 B lines of code.
There are all sorts of problems with those numbers, but in any case it's not a flattering comparison, not something you want to brag about.
I think they wanted to emphasize the scalability of the source control system, and ended up saying that Google has a lot of bad code that doesn't work well.
Yeah it's not really a good comparison, but I guess my point is that when you have that much code, it's not a surprise that it's hard to get work done, and the products work poorly
That is a great video. If Google is the new Multics and the cloud is the new mainframe, does that mean that AI is the new Unix? AI is the command line?
I would argue that while Unix the technology is helpful, Unix the idea is now actively harmful.
The Unix idea just doesn’t scale well or have the necessary security.
It seems we are moving more and more toward an idea that is even older than Unix: the mainframe.
From fault tolerance to distributed computing to security to memory channels, our modern architectures resemble mainframes more than Unix, and that is a good thing.
Everything is a file is a very, very leaky abstraction. There are all sorts of issues with latency, cost, atomicity, security, persistence.
I feel at this point, that this constant looking back to Unix “purity” is holding us back and closing our mind to new solutions.
There's a book I like called "Design Rules: The Power of Modularity"; it was written by some business professors in 2000 and is loosely about industrial organization and the history of IBM's System/360 (the first "modular" computer family), and more generally about how modularity enables innovation and growth.
They enumerate what they call "modular operators" which are basic ways a system can be changed:
* "splitting" a design into modules
* "substituting" one module design for another
* "augmenting" adding a new module to the system
* "excluding" a module from the system
* "inverting" to create new design rules
* "porting" a module to another system
It doesn't matter here whether the modules are functions, computer programs, hardware components, business units, 2 pizza teams, or companies in an industry (Intel, AMD, RAM manufacturers...).
Modularity is a core part of the Unix philosophy. Unix itself is an operating system and involves some particular concepts (pipes, files, processes).
It seems obvious that the concepts for an operating system (pipes, files) will be less relevant in a different domain (say, web programming).
While I agree that maybe the Unix abstractions aren't appropriate for new problems in different domains, I think some parts of the Unix philosophy (i.e. modularity, minimalism, and compositionality) still deserve consideration.
> “purity” is holding us back and closing our mind to new solutions.
Maybe. But consider the shebang. It means that any program (or script) written in any language known to the system can call any other program written in any other language. That keeps open the option of which language we should use for our next module. Right?
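A small sketch of that mechanism, assuming a Unix shell and python3 on the PATH (the file name `greet` is arbitrary):

```shell
# The kernel reads the '#!' line and execs the named interpreter on the
# file, so the caller never needs to know what language it is written in.
cat > greet <<'EOF'
#!/usr/bin/env python3
print("hello from python")
EOF
chmod +x greet
./greet
# prints: hello from python
```

Any shell, Makefile, or other program can now invoke `./greet` exactly as it would a compiled binary.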
That's a constrained example of the general concept of a file association (having a file plus the info of what application executes that file).
As such, it predates UNIX, exists in other OSes as well, and can be even more dynamic (shebangs are hardcoded into the file, whereas file-to-executable associations can be external).
Linux offers another way anyway: binfmt_misc
If I'm not mistaken that's also how jar files can be executed by themselves without calling java on them.
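For the curious, a sketch of how that jar trick works (interpreter path hypothetical; activating it requires root, so treat this as a config fragment, not something to paste blindly). binfmt_misc rules have the form `:name:type:offset:magic:mask:interpreter:flags`, and a rule keyed on the zip magic bytes `PK` routes jar files to a wrapper that invokes `java -jar` on them:

```shell
# binfmt_misc registration rule (config fragment, illustrative names):
#   :name:type:offset:magic:mask:interpreter:flags
rule=':jarexec:M::PK::/usr/bin/jar-wrapper:'

# As root, writing the rule activates it, after which ./app.jar
# execs /usr/bin/jar-wrapper ./app.jar directly:
#   echo "$rule" > /proc/sys/fs/binfmt_misc/register
```

The `M` type means "match by magic bytes at the given offset" (offset defaults to 0 when empty), which is why a zip-format jar can be recognized without any extension convention.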
It is nifty and mind-blowing. It basically means there is a way to "call" a program-file and let it decide who should interpret the program.
It is not too different from the way in which html-pages can declare what is the character-set that should be used to interpret the binary contents of the file.
The deep thing to me is that programs can not only encode instructions for the computer, they can also encode how to do the decoding. Obviously you then need some common convention (like shebang) to say how you can interpret the part that tells how to interpret the content.
There would be a step anyway between the user selecting even a regular binary executable (via pressing enter after writing its name on the shell, by double-clicking on it in a desktop file manager, by running a shell script which on some line invokes it, etc) and it starting to execute (the shell or GUI would have to interpret the action and tell the OS to execute the binary).
So adding an intermediary to choose an executor and/or hand over in between (whether at the shell/file-manager level, or at the OS-call level), but still before execution, is quite easy.
IMO, all technology in the computer space is ideas, and Unix is one that is intentional about it. I view creating a different software program as a "technological argument". Ultimately, almost everything a computer is is a set of agreements (or a standard, a special type of agreement). A programming language is one, the ISAs of the popular CPU companies are one (along with the fact that they are popular), all the way up to the interfaces of software.
Android is a great NEGATIVE example of the Unix philosophy! The code is based on the Linux kernel, which might make you think it's Unix, but it isn't Unix [1]
An OS is where 3 things meet: code, data, and people.
AKA apps, files, and users.
AKA CPU, storage, and networking when viewed from hardware (in the old days, you'd have terminals and serial ports, not Internet)
I'd argue that (if you take away the historical "hairs"), Unix is a minimal and efficient implementation of those ideas.
It's efficient enough to build heterogeneous "platforms" on top -- Lisp, JVM, JavaScript, Python, etc. Databases, hardware virtual machines, the entire modern cloud
The shebang line is basically "polymorphism" for those runtimes -- everything looks like ./foo arg1 arg2
The args are often unstructured files that you share between different programs
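To make that "polymorphism" concrete, here's a toy sketch (all file names hypothetical): two interchangeable word-count tools, one in sh and one in Python, invoked identically on the same plain file. The caller can't tell which runtime is behind the name.

```shell
# Two implementations of the same "interface": ./foo file
cat > /tmp/wc-sh <<'EOF'
#!/bin/sh
wc -w < "$1"
EOF
cat > /tmp/wc-py <<'EOF'
#!/usr/bin/env python3
import sys
print(len(open(sys.argv[1]).read().split()))
EOF
chmod +x /tmp/wc-sh /tmp/wc-py

# One unstructured file shared between both programs:
printf 'one two three\n' > /tmp/data.txt
/tmp/wc-sh /tmp/data.txt   # each prints 3 (modulo whitespace)
/tmp/wc-py /tmp/data.txt
```

Swapping one implementation for the other is the "substituting" operator from the modularity list above, done at the file-system level with zero coordination.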
--
Android breaks from that design in many, many ways
- Applications are written to the Java runtime, not to the OS
- They're written in Java or Kotlin. Escaping to C++ is a "special" thing; you use the "NDK" for that
- It doesn't expose raw files to applications, or at least it seems apps can't really share files. It uses its own security model, not Unix security (which was almost certainly necessary!)
Hm I clicked on the original link, and Rob Pike said it better: The very idea of a "Files app" is high on my list of world-defeating idiocies perpetrated by technology.
So Android is more like a "monoglot" operating system on top of Unix. It's just using Linux as a bunch of device drivers basically
(Makes me think of the quote from the 90's regarding browser/JVM: "Windows should just be a bunch of poorly debugged device drivers", and that rhymes a bit with how Android turned out.)
---
I think the cloud is missing the "Unix" layer, i.e. a good factoring of mechanism and policy, C and shell, which you can build other platforms on top of.
I'll refer to my own post: Kubernetes is our generation's Multics
The cloud is kind of an incoherent architectural mess -- it doesn't really have the code / data / users model.
It basically has the incoherent "Files app" architecture, as Pike says
What's code in the cloud? How do I invoke it? What are files? What are users? There's a million answers, and it causes a lot of work for us programmers.
Most app code seems to be papering over the differences between different representations of these fundamental things, which ultimately makes them slow and unreliable
[1] Incidentally, Android is also not GNU/Linux -- it's the Linux kernel, without GNU at the platform layer
> So Android is more like a "monoglot" operating system on top of Unix. It's just using Linux as a bunch of device drivers basically
Not even that since Project Treble.
From Project Treble's point of view, the Linux kernel is a microkernel: the only in-kernel drivers are legacy drivers, all modern drivers should run in their own userspace processes (written in C++ or Java), and use Android IPC to talk to the kernel.
Android has zero to do with the UNIX philosophy; they could ship a version using Zircon instead of the Linux kernel and nothing in userspace would notice, unless it is using private APIs, or belongs to OEM partners.
Android could in the future replace the Linux kernel and libs with their own Fuchsia kernel/OS and (aside from some light porting effort for some minor breaking behaviors affecting very few items) nobody would even know.
That's how non-essential Linux/UNIX/POSIX is to Android.
And that's for the kernel and libs and APIs. When it comes to the philosophy, experience, etc, it's even more remote.
I'm having trouble understanding what he's criticising in that Mastodon post
If he's saying he doesn't have sufficient OS access rights to his files, surely the existence of a "Files App" (aka a file explorer, hardly new) disproves his claim. Android also has multiple-choice file associations just like a desktop OS
If he's complaining that the formats are opaque, or that we keep reinventing very similar CRUD apps, then isn't the Unix model of "a file is just a bag of bytes in a hierarchical namespace" the cause of his problem? It explicitly rejects having standard formats and the result is every application having to bring its own, with all the resultant opaqueness and reinvented wheels
I haven't used the Files app in a while, but I think it might have special permissions? I know I had a real hard time deleting music to free up space
(As mentioned, I don't think the Unix security model would have been enough -- not even close, it's a hard problem)
I think the problem is that APPS own data, not USERS
This is a bit incoherent. Like the Amazon Music app has its own private data area to store files. And then I can put my own mp3s on my phone, but that experience sucks.
And those files are in no way related to the Amazon mp3s.
A related thing is that Android doesn't seem to expose USB drives as files -- it's a custom protocol (MTP) now, and you need a custom app on Ubuntu/Windows/OS X.
I don't even remember -- the experience was so bad, that I stopped trying to use Android that way
---
I mean Windows is a bit like this too. It's full of bespoke/proprietary files that you're not supposed to touch. It's commercial, so it's code-centric, not data-centric.
Unix is data-centric. Your files come first, and then you can use different programs to open them and manipulate them.
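A tiny sketch of what data-centric means in practice: one plain-text file, and three unrelated tools that all operate on it without any of them "owning" it.

```shell
# One file, many programs -- none of them owns the data.
printf 'beta\nalpha\nbeta\n' > /tmp/notes.txt

sort /tmp/notes.txt | uniq -c          # count duplicate lines
grep -c beta /tmp/notes.txt            # search: prints 2
awk 'END { print NR }' /tmp/notes.txt  # line count: prints 3
```

In the app-centric model each of those operations would need the owning app to have implemented a feature for it.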
As a real concrete example, see this restoration of a Flash video game -- it has different programs for code, visual assets, audio assets, etc.
You could argue that the Unix data-centric model got pushed into the developer niche, and end users have chosen the app-centric model. But I would say there are tons of computer-literate people out there (making Minecraft mods, professors doing experiments, etc.) who are limited by the app-centric model.
Computers could just do a lot more, but you have to wait for the person who owns your data to implement the features you want (e.g. the whole Adobe ecosystem, Autodesk ecosystem, Google sheets, docs, and more)
What an interesting book and read. Although, I question the thought that Macs are elegant. At least nowadays, macOS is such an amalgam of technologies, it is a mess.
I agree with many of the sage points here, however, I think Unix is more of a general philosophy as well as a set of related operating systems that are loosely based on the original UNIX. My recent KW-LUG presentation revolved around this, specifically: https://archive.org/details/kwlug_meeting_2023-07-10/KWLUG+2...
Do one thing and do it well - like the everything-is-a-file OS model, the C library, and the C language.
Sigh, the same cannot be said of the terrible wonky state of the web - JS is just so shit, and everything that followed it is just a new mess on top of the existing pile of messes.