I'd really like to see the full ISA, etc. for the chip. A few years ago, I was doing research on building a bytecode-based VM after working through Peter Michaux's "Scheme From Scratch" http://peter.michaux.ca/articles/scheme-from-scratch-introdu.... (I highly recommend working through his code, but do the GC earlier; it's easier to get it right from the start than to try to add it later.) I couldn't find anything online listing the kinds of instructions you'd want in a Lisp chip.
The original Lisp Machines were conventional machines with hardware features like tagged pointers that let them execute Lisp more quickly. I'm under the impression that the machine in the article is far more radical.
They were not that conventional. The first Lisp Machines used microcoded processors with special instruction sets tailored for compiled and interpreted Lisp:
* tagged architecture
* stack-oriented architecture with large stack buffers
* hardware assisted GC
* support for generic Lisp operators, e.g. a single + instruction that works across numeric types
* support for basic Lisp data structures like cons cells
The result is that Lisp programs compile to very compact machine code.
I'm guessing that by "far more radical" the GP meant "directly evals s-expressions." Especially since, IIRC, that's how PicoLisp works. I don't think there is much benefit to that approach over simpler register machines, but it's an interesting idea at least.
I have been told that TI have given formal permission to redistribute their code, I will chase up the person who got hold of it to get it properly labelled.
The LMI stuff is more interesting to me as it is complete, I'm still a fair bit of work away from being able to build it though. I have spoken with RG about it but we didn't discuss the licence.
Texas Instruments sold their computer systems division at the start of the 1990s. I've read that that sale did not include the Explorer copyrights.
At the start of 2011 I got in touch with Robby Holland, then the head of TI's patent licensing division. Holland could not find anything about the computer systems division or its sale, or tell me whether TI still owned the rights. They did not have any of the project materials on hand. Holland said he would have someone look into the archives, but when I tried to get in touch a few months later he was no longer with Texas Instruments.
Yes, it can be done, but the half-dozen or so LISP machines of the 1980s were not very successful. Price/performance was worse than compiled LISP on common CPUs.
There's no indication of hardware support for garbage collection. It would probably be more useful to have tag bits to support GC than a LISP-oriented instruction set, especially if it allowed concurrent GC.
Talking about Lisp machines. I don't know much about these things but I was thinking about them recently when Hewlett Packard announced its so-called Machine^1. They want to build a new kind of OS for it, but wouldn't a Lisp machine just do?
I've heard a few people suggest that with single core performance stagnating we may see more ASICs. I admit I'm skeptical, but this line of development does seem worth exploring.
This is PilMCU, a hardware design which apparently runs PicoLisp.
Right now, only the design exists, not physical hardware. But you can run the design in a simulator. ...Something, something, Verilog. (This is not my area of expertise.)
[Edit: the devs use the Icarus verilog compiler to provide simulations. It's available in Debian repositories.]
Verilog is a hardware description language. You describe the hardware connections, sometimes with high-level constructs like "for" loops, sometimes by describing the gates themselves. In a sense it is similar to HTML describing a web page, but with higher-level constructs like loops available. So maybe HTML with a templating engine is a closer analogy.
For simulation, you compile Verilog with a software tool into something executable by a VM or natively. Execution is heavily event based: events are edge transitions (a signal going from 0->1 or 1->0) occurring at specific times, for most (but not all) cases.
For producing something usable by an FPGA, or by a foundry for an ASIC, you synthesize instead of compiling. Different tool. Synthesis takes higher-level hardware descriptions and outputs a lower-level description, usually called a netlist. It's akin to translating C into assembly, for example.
Device-specific tools can take that netlist and create a bitstream for configuring an FPGA. Alternatively, a foundry can take the netlist through a process called "physical synthesis": choosing from the foundry's cell library the components that will let the netlist operate at speed, figuring out where to place them on the die, and inserting buffers as needed.
What the GP was asking: is this design small enough to fit in an FPGA. This question is orthogonal to the language used to describe the hardware.
I'd say the answer is yes, depending on the FPGA you choose. Some FPGAs are pretty high capacity these days, and fast too.