Hacker News

I like macros and have used them heavily for most of my career. But in practice, macros have some limitations:

1. Macros allow you to easily improve your syntax, but offer less guidance for improving your semantics. To use a mathematical analogy, it's nice to have a better notation, but what you really want are better definitions and theorems. Most macros are thin syntactic wrappers; few offer profound semantic insights. (Although there are some lovely examples of the latter in PG's excellent On Lisp: http://www.paulgraham.com/onlisp.html .)

2. Macros almost never call macroexpand recursively, except inside of code bodies. This means that macros like defclass usually cannot be extended in an incremental or modular fashion. Instead, you need to declare a def-your-class macro that can't usefully compose with any other def-her-class macro that you might encounter.

3. Weaker mechanisms than macros can accomplish nearly as much, with fewer issues similar to (2). For example, you can re-implement many Lisp macros using Ruby's block syntax, metaobject protocol, and ability to call class methods from inside a class body. And you can usually compose the resulting DSLs freely, even when they're analogous to defclass extensions—witness all the ActiveRecord add-ons.
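As a concrete illustration of that point, here is a small, hypothetical Ruby sketch: a Lisp-style `with-logging` macro rebuilt as an ordinary method taking a block, and a `defclass`-style declaration rebuilt as a class method called from inside a class body. All names are invented for the example.

```ruby
# Hypothetical sketch: a with-logging "macro" as a plain Ruby method.
# No code transformation is needed; the block is an ordinary closure
# evaluated at runtime.
def with_logging(label)
  puts "enter #{label}"
  result = yield
  puts "exit #{label}"
  result
end

# Class methods called from inside a class body give much of the
# declarative feel of a defclass-style macro:
class Widget
  def self.attribute(name)   # runs at class-definition time
    attr_accessor(name)      # defines reader and writer methods
  end

  attribute :color
  attribute :size
end
```

Because `attribute` is just a method, any number of libraries can define similar class methods and they compose freely in one class body, which is the property the ActiveRecord ecosystem relies on.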

So I like macros, but I no longer think that they're the ultimate high-level abstraction. They're mostly a nice way to incrementally improve your notation, except in the hands of programming language designers who are prepared to pull out all the stops.

Vladimir Sedach: The Haskell gang is primarily interested in advancing applied type theory.

If we're talking about high-level abstractions, this is probably not the best way to describe Haskell. (In fact, Haskell's type system is notoriously ad hoc, compared to languages like ML.)

Originally, Haskell started out as a project to explore lazy evaluation and programming without side effects. But in recent years, the Haskell community has been investigating ways to build better domain-specific languages using ideas from mathematics: combinators, monads, co-monads, arrows, derivatives of types, and so on.

So far, many of these Haskell ideas have a steep learning curve, but the results are nonetheless impressive. If I were trying to design a "hundred-year language", I would definitely pay close attention to the Haskell community—they just bubble with fascinating ideas. And mathematical ideas tend to endure.



> Macros allow you to easily improve your syntax, but offer less guidance for improving your semantics.

This IMO is the biggest problem with macro usage today and needs to be overcome. I've started a campaign against the "macros are syntactic sugar, and I have a sweet tooth!" camp (http://carcaddar.blogspot.com/2010/08/input-needed.html). I recommend reading this excellent parody blog post (written at the height of PG-inspired Lisp mania) for background: http://classic-web.archive.org/web/20070706135848/brucio.blo...

> Macros almost never call macroexpand recursively, except inside of code bodies.

This is by design. The expansion of a macro needs to be opaque - otherwise where is the abstraction? When you macroexpand, you're opening the black box. You can only rely on macroexpanding macros that you have some control over.

> This means that macros like defclass usually cannot be extended in an incremental or modular fashion.

This is because it's the same problem as extending an arbitrary grammar. You can't compose two grammars and expect the result to be unambiguous.

> Instead, you need to declare a def-your-class macro that can't usefully compose with any other def-her-class macro that you might encounter.

This is completely, totally wrong. Things like def-your-class are the reason I started my education campaign. You should never write a def-your-class macro. Macros are the wrong way to solve this problem, which is why CLOS and the CLOS metaobject protocol include all the facilities you could want for extending defclass in a composable way (The Art of the Metaobject Protocol is a comprehensive, but confusing, reference).


You can only rely on macroexpanding macros that you have some control over.

I think we're looking at this from exactly opposite directions. I'm not talking about code-walking pre-existing macros, but rather providing hooks for other people to extend yours:

  ;; Completely hypothetical package-management system.
  (define-module foo
    (require 'bar)
    (file "foo.ss")
    (file "foo-tests.ss" :in-mode 'test))
There are a lot of macros like this in the Lisp world, and they define non-extensible DSLs. For example, you can't define a macro tested-file to eliminate the duplication above:

  (define-module foo
    (require 'bar)
    ;; Doesn't work:
    (tested-file "foo"))
This could be supported if define-module internally called macroexpand-1 on unrecognized child forms until it saw either require or file. Then you could define a tested-file macro to extend define-module.

Note that the actual advisability of this approach varies greatly between Lisp and Scheme dialects.
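The expansion loop described above can be sketched in Ruby terms (all names here are hypothetical): forms are plain arrays, user-defined "macros" are procs in a registry, and the module processor rewrites unrecognized forms until everything bottoms out in a primitive like require or file.

```ruby
# Hypothetical sketch of the macroexpand-1 idea, transposed to Ruby
# data: forms are arrays like [:file, "foo.ss"], and user-defined
# extensions are procs registered in EXPANDERS.
PRIMITIVES = [:require, :file]
EXPANDERS  = {}

# A user-supplied extension, analogous to the tested-file macro:
EXPANDERS[:tested_file] = lambda do |name|
  [[:file, "#{name}.ss"],
   [:file, "#{name}-tests.ss", :in_mode, :test]]
end

# define-module's loop: expand unrecognized child forms, one step at a
# time, until every form is a known primitive.
def expand_forms(forms)
  forms.flat_map do |(head, *args)|
    if PRIMITIVES.include?(head)
      [[head, *args]]
    elsif (expander = EXPANDERS[head])
      expand_forms(expander.call(*args))  # one expansion step, then recurse
    else
      raise "unknown form: #{head}"
    end
  end
end
```

With this in place, `expand_forms([[:require, :bar], [:tested_file, "foo"]])` produces the same three primitive forms as the hand-written version, which is exactly the kind of extension hook the define-module macro above lacks.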

Contrast the following ActiveRecord example:

  class Person < ActiveRecord::Base
    belongs_to :organization
    
    state_machine :initial => :starting do
      state :starting
      state :online
      state :stopping
      state :offline
    end
  end
Here, we have a third-party state machine library extending ActiveRecord. This is just one of several areas where many Lisp macros tend to be inflexible or break down.
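The reason this composes is simple: a library like state_machine only needs to add a class method, which then runs like any other declaration in the class body. A minimal, hypothetical sketch of that mechanism (not the real state_machine gem's implementation):

```ruby
# Hypothetical sketch of why third-party class-body DSLs compose:
# the library just mixes in a class method.
module TinyStateMachine
  def state_machine(initial:, &block)
    @states  = []
    @initial = initial
    instance_eval(&block)  # evaluate the DSL block with the class as self
  end

  def state(name)
    @states << name
  end

  def states
    @states
  end
end

class Person
  extend TinyStateMachine  # any number of such extensions can coexist

  state_machine(initial: :starting) do
    state :starting
    state :online
  end
end
```

Because `state_machine` is an ordinary method call inside the class body, it coexists with `belongs_to` and any other extension without either library knowing about the other.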

Of course, macros are a very useful tool. But they have limitations, and other languages also provide interesting mechanisms for "writing programs that write programs", with different tradeoffs.

You should never write a def-your-class macro.

Well, even if the CLOS metaobject protocol does solve this problem, it doesn't fix defsystem, or any of the other thousand non-extensible defblah forms in the Lisp community.


The amount of OO runtime metaprogramming crap out there far outweighs the amount of macro metaprogramming crap, from the simple fact that hardly anyone knows anything about macro metaprogramming. I've seen only a few non-extensible defblah macros and a metric ton of non-extensible OO bullshit in major projects.

I agree that you have some valid points but your overall argument holds little water. In good Lisps you have access to both runtime and compile time abstractions. Used wisely they will always trump a system that can only lean on runtime abstractions.

I'll also note that in my experience runtime metaprogramming is horribly painful. And many, many, many others have pointed out this fact. There are numerous cases where structural transformation is far simpler to reason about.


Oh, I absolutely agree that OO languages do gross things, too. If somebody gets me started on AbstractRequestProcessorFactoryFactory, or inappropriate use of instance_eval, I can keep going for hours. :-)

Overall, macros are a very useful tool. But there's a school of thought that sees them as the ultimate, universal abstraction. I'm no longer convinced by that argument.

However, I do agree with your remark, elsewhere in this discussion, that The Reasoned Schemer is an extraordinarily fine example of what macros can be used for. It's one of my all-time favorite programming books.


> This could be supported if define-module internally called macroexpand-1 on unrecognized child forms until it saw either require or file. Then you could define a tested-file macro to extend define-module.

If tested-file expanded to just file, it means that the only thing it can do is have side-effects at compile-time. Those side-effects won't be preserved in compiled files. So tested-file has to expand to something that includes file and the extra code, and define-module has to specify what that expansion can look like and how it will be handled. The design of this protocol can't be automated, because code-walking the expansion of tested-file is potentially Turing-complete.

So now you have two problems:

1. Every define-foo macro needs a protocol to specify how it can be customized.

2. This only works at compile-time.

The need for this kind of customization is where the meta-object protocol arose from. Read up on Gregor Kiczales' work on open implementations (http://www2.parc.com/csl/groups/sda/projects/oi/ieee-softwar...). Putting the required hooks into the interpreter (object system) via a restricted protocol provides a common system that all customizations can share (no need to design your own define-foo protocol) and is available at all stages (compile-time and run-time).


> If we're talking about high-level abstractions, this is probably not the best way to describe Haskell. (In fact, Haskell's type system is notoriously ad hoc, compared to languages like ML.)

In what sense is Haskell's type system any more ad-hoc than ML's?

They're almost equivalent, except for Haskell's type-class extension, which is used for ad-hoc polymorphism. Support for ad-hoc polymorphism does not make the type system ad hoc. It means you can use (*) for multiplication between any numeric type, which is very useful.

Many GHC extensions to the type systems are also based on sound theory (Rank 2/N types, Type families, GADTs) and not ad-hoc. ML lacks these too.

Haskell is at the forefront of applied type theory; AFAIK, ML seems to stagnate in this area.


In what sense is Haskell's type system any more ad-hoc than ML's?

According to Mark P Jones, the author of "Typing Haskell in Haskell":

Haskell benefits from a sophisticated type system, but implementors, programmers, and researchers suffer because it has no formal description. http://web.cecs.pdx.edu/~mpj/thih/

Haskell's type system was designed to make programming delightful. I vastly prefer it to some of the contortions of ML's type system, such as the separate + and +. operators. But by functional programming standards, I think that any type system without an official description can be fairly described as ad hoc.


I posted the following comment earlier today but it was moderated by the infantile censorship police on HN:

Before muddying the water for future generations of potential Lisp programmers in this forum please go and master macros first instead of spewing unfounded bullshit about their limitations.

1. Macros were never meant to 'offer guidance for improving your semantics' however they are the mechanism that makes it possible to create your own new semantics (which is just not possible in any other family of languages). The semantics that you want to create are often specific to the domain you may be working in. Learn from those who have mastered the domain and created their own semantics and then come up with your own.

2. It's clear that you've been drinking the OO koolaid for a while. Let it go and try really learning macros. Leave the baggage at the door, it's just not required where you can go with macros.

3. Ruby is a badly diluted hack that borrowed some ideas from Lisp and other languages. No, you cannot accomplish in Ruby what is possible with Lisp macros. For a start, all the Ruby mechanisms that you suggest execute at runtime. Ruby does not have direct access to its AST the way that Lisp does. Lisp and Ruby are not even in the same class of languages. Contrast Ruby with Java, and then yes, it looks great.

If you're talking about the "hundred-year language" then just forget Haskell. Lisp is the future, go and learn Qi. Qi has a Turing-complete type system; this is already light-years beyond what Haskell's type system will ever have.



