No one in their right mind would design a general purpose computer system from scratch with an 8-bit word size unless they were forced to. Neither Zuse nor Eckert and Mauchly ever did.
> Plenty of older microcomputers got by just fine with only 8 bits in a word.
They got by fine on your kid’s desk. Those processors were better suited for implementing smart terminals that people would use to interface with the computer that would actually solve your problem.
> That's enough room to fit two binary-coded decimal digits and thus perform all the common arithmetic operations one or two digits at a time.
Why perform them one digit at a time when you can perform them 10 digits at a time? Why force your users to adopt a programming model where you have to implement arbitrary-precision arithmetic any time you want to use a number larger than 255? Or have to implement a useful floating point in software? Or be forced to pull in libraries or use some environment where that stuff is pre-implemented in order to do basic operations that previously could be done in a single line of assembly?
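To put it concretely, here is a minimal C sketch (illustrative only, not tied to any particular 8-bit instruction set) of what a single 32-bit addition turns into when the hardware can only add one byte at a time:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch: adding two 32-bit numbers on an ALU that is only 8 bits wide
 * means four partial adds with an explicit carry chain. On a machine
 * with a 32-bit word it is one add instruction. */
static uint32_t add32_bytewise(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 4; i++) {                  /* one pass per byte */
        unsigned sum = ((a >> (8 * i)) & 0xFFu)
                     + ((b >> (8 * i)) & 0xFFu)
                     + carry;
        carry = sum >> 8;                          /* carry into the next byte */
        result |= (uint32_t)(sum & 0xFFu) << (8 * i);
    }
    return result;                                 /* equivalent to plain a + b */
}

int main(void)
{
    printf("%u\n", add32_bytewise(70000u, 300u));  /* 70300; both inputs exceed 255 */
    return 0;
}
```

And that is only addition; multiplication, division, and comparison on multi-byte values all need the same kind of hand-rolled routine.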
> ...albeit at speeds far below the billions of operations per second expected of modern computers.
They performed at speeds below what was expected of computers from the 50s and 60s.
> The advantage of 32-bit word size is more for interoperability with modern systems and familiarity to modern programmers and programming languages.
No, it is not about modern computing or modern programmers. 32-bit or similar word sizes have been the standard since the earliest days of electronic computing because there are intrinsic advantages. Nobody did 8-bit computing before the microprocessors of the 70s, because it would have been stupid to limit yourself that way if you didn’t need to.
But the designers of early micros did need to limit themselves, and they proved that those limits were not insurmountable. The luxuries afforded by kilowatts of electricity and cubic meters of space were beyond the reach of early micros. You may sneer at the fruits of their labors, but for an entire generation of people these devices were accessible when room-sized computers weren't.
Taking such a hostile tone over what is obviously already settled seems strange to me. Of course you don't design modern systems with 8-bit words. There are many advantages to larger words besides those I mentioned or implied. But calling the elements which were good enough for the second wave of the computer revolution completely impractical is also absurd to me.
> Of course you don't design modern systems with 8-bit words.
Again, this is not about contemporary computing or the level of computer technology in general, as I have already made clear. What is good or bad design for general computing is independent of those things; it is timeless. Why do you keep referring to some arbitrary “modern computing” as if that were somehow relevant, when word size in particular is a design decision that is not really dictated by technology level? It is often possible to compromise on other aspects of the design in order to have the width you want.
What you mean to say is “you don't design any general purpose system with 8-bit words,” but you won’t, because it would undermine your whole argument.
> But the designers of early micros did need to limit themselves,
It would have been more difficult, but not impossible, to create an IC with a larger word size in the early 70s. The 8-bit word size was not chosen entirely for technical reasons but also because they were targeting a specific market, one that did not have to meet all the demands of general computing and one for which the general computers were ill suited, so this made the most sense. Would you really not say they were designed for, and are better suited to, the subset of computing applications that is called “embedded” computing? Don’t forget that most of them were designed before the desktop computer market existed and were never intended to be used that way. You can force them to be more than what they are (and people did) and pretend that they are, but that doesn’t mean you should.
> But calling the elements which were good enough for the second wave of the computer revolution completely impractical is also absurd to me.
Design concessions which are intrinsically bad, awkward, or inappropriate do not become good because they “still worked” or succeeded in a particular market. Not that it counts for much, since those computers were not marketed in the same way or purchased for the same reasons. You say it was “good enough”, but a lot of people didn’t think so; I guess most of them are dead now (like Wirth). The word size was not a selling point; it was merely not enough of a hindrance to outweigh the novelty of the overall product. What it takes to introduce people to electronic computing is not the measure of general computing.
What is absurd is people like yourself insisting those particular machines, out of all machines, should be used to inform us as to what is reasonable for general computing when, considering the context of their creation, it is completely unwarranted.
Despite the disdain of ivory-tower academics who had access to resources well beyond their individual means, practical computing for the masses was only made possible with the "wrong" parts and the "wrong" number of bits and all these other "mistakes". Even today, early microcomputers (and their remakes) remain more broadly accessible than contemporaneous minicomputers.
We're now entering an era where, despite sophisticated hardware with wide words and layered caches and security rings and SIMD units and all this other fanciness, the end-user increasingly doesn't own their computing experience. While 8-bit and 16-bit computing may not meet your arbitrary definition of sufficiency, it still provides an avenue for people to access, understand, and control their computing. If it wasn't good enough for Wirth, so what?
As I mentioned in my very first post, it is based on what humans want out of computing. We have ALWAYS wanted to do arithmetic with large numbers and with fractional values. We have wanted to compute trigonometric functions since antiquity. We wanted human computers to do all these things and more in the past, so why shouldn’t we want the same of automatic computers now?
What we want out of computing is constantly evolving but in most cases the new things boil down to solving the same old set of basic problems. A large portion of modern cryptography is simply arithmetic with large numbers for instance.
An 8-bit machine can be used to solve these problems, but often inefficiently or awkwardly, which defeats the point of automatic computing; hence the impracticality of that word size in a general purpose computer.
Since word size mostly determines the magnitude and precision with which we may conveniently represent a quantity, it is pertinent that we consider what scales we might be expected to work at. There are 13 orders of magnitude (~43 binary orders) between a pauper with a single dollar and the nominal GDP of the USA; it is therefore unlikely your adding machine will need more than that many columns. There are 61 orders of magnitude (~203 binary orders) between the diameter of the observable universe and a Planck length. Those are two extremes, and we could reason that common calculations would involve more Earthly scales. We should also consider other natural quantities, but the point is that in this world we must expect to operate at scales which have been determined by economic or natural forces. A machine that doesn’t acknowledge this in its design is not practical. Continually applying one’s reason in this way, we can arrive at a sensible range of scales with good generality.
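For anyone who wants to check those conversions, it is just the standard change of base from decimal to binary orders of magnitude:

$$ d \text{ decimal orders} \approx d \cdot \log_2 10 \approx 3.32\,d \text{ binary orders}, \qquad 13 \times 3.32 \approx 43, \qquad 61 \times 3.32 \approx 203. $$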
These are the principles which should guide us when determining a practical word length. Similar considerations have guided the design of all our calculating tools. What good would an abacus be if it had so few rods as to make common mercantile calculations more awkward? Perhaps it seems less arbitrary now.
Many aspects of the universe appear continuous, so it is natural we would need to do calculations involving numbers with fractional parts (rationals are a useful abstraction for many things besides). We could use fixed-point arithmetic, but we have found that floating point is generally more efficient for most applications. A general purpose computer should have a floating point unit. Even Zuse’s Versuchsmodell 1 had one in 1938. A 16-bit float is useful in some applications, but a 32-bit or higher floating point is what scientists and engineers have always asked for in their computers.
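For concreteness, the standard IEEE 754 figures for those two widths (the standard itself postdates most of the machines discussed here, so take these only as representative of 16-bit versus 32-bit floats in general):

$$ \text{binary32: } (-1)^s \cdot 1.f \cdot 2^{e-127}, \quad 24\text{-bit significand} \approx 7 \text{ decimal digits}, \quad |x| \lesssim 3.4 \times 10^{38} $$

$$ \text{binary16: } 11\text{-bit significand} \approx 3 \text{ decimal digits}, \quad |x| \le 65504 $$

Three significant digits and a ceiling around sixty-five thousand do not go far in scientific or engineering work, which is why the wider format has always been the baseline request.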
If you want to talk about market success, the fact that computers across many different eras, technologies, and sizes, with word sizes from 32/36 to 60/64 bits, have existed and continue to exist despite our evolving computing needs suggests some level of generality which a smaller word size does not provide. The short career of 8-bit words in (semi) general purpose computing counts against it more than anything. It is even being displaced in its traditional avenues.
> the end-user increasingly doesn't own their computing experience. While 8-bit and 16-bit computing may not meet your arbitrary definition of sufficiency, it still provides an avenue for people to access, understand, and control their computing.
What does this have to do with word-size? Why even bother mentioning this? You seem to endlessly conflate larger word-sizes with “modern computing” and smaller word-sizes with specific past implementations or genres of computers. Don’t bother arguing the merits of a smaller word-size by appealing to unrelated virtues or ills.
By the way, Wirth was not against the idea of small computers; he just thought it was being done badly, so he designed his own. People have always thought that computing could be done better; this shouldn’t be a surprise to anyone here.
> If it wasn't good enough for Wirth, so what?
It means that there are people with a far better sense for what is acceptable and unacceptable in computing than you.
This argument has long ceased to have any real substance. We are talking past each other and just get more and more disconnected with each volley. Practically speaking, the number of people alive today who have had the opportunity to sit at and use a mainframe or minicomputer is basically zero. Maybe that represents some missed opportunity for a modern downsized PDP-11 or S/360 or whatever, but the fact remains that you can actually get microcomputers, whether archaic or modern, easily, and so they are all that matters for home use.