(Trying to refactor a thread between AdamSpitz and others.)
The burning question is: Do we really need macros? What do macros allow us to do which we can't
do without them?
One example is PatternMatching: it's a replacement for a whole bunch of nested ifs. Here's an example (PltScheme syntax using their match.ss library; square brackets mean the same as the usual parens):
 (define (map fn list)
   (match list
     [(head . rest) (cons (fn head) (map fn rest))]
     [() '()]))
The more complex the destructuring (think folding over a tree of complex data types - e.g. a compiler), the more useful it becomes. There are two macro-ish aspects to this example:
- The patterns themselves aren't meant to be evaluated; they're used as data structures, not code.
- The names specified inside each pattern need to become variable names in the block of code associated with it.
Since we're wondering what macros allow us to do which we couldn't do without them, we asked ourselves whether this PatternMatching macro could be implemented as a function. The disadvantage of writing the macro as a function would be that you'd have to type a quote in front of each pattern, and you'd have to type "lambda" in front of each associated code block.
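To make that trade-off concrete, here's a deliberately toy sketch (in Common Lisp rather than PltScheme) of pattern matching as a plain function. MATCH-FN and MY-MAP are made-up names, and only two pattern shapes are supported; no real matcher is this naive. Note the two costs just mentioned: every pattern is quoted data, and every body is wrapped in an explicit lambda whose parameter names merely repeat the names written in the pattern - the function version has no way to bind them for you.

```lisp
;; Toy pattern matching as an ordinary function.  Supported pattern
;; shapes: NIL, matching the empty list, and any cons, matching a
;; non-empty list (whose car and cdr are passed to the handler).
(defun match-fn (value clauses)
  (loop for (pattern handler) in clauses
        do (cond ((and (null pattern) (null value))
                  (return (funcall handler)))
                 ((and (consp pattern) (consp value))
                  (return (funcall handler (car value) (cdr value)))))
        finally (error "No match")))

;; The caller must quote each pattern and type LAMBDA for each body:
(defun my-map (fn lst)
  (match-fn lst
            `(((head . rest) ,(lambda (head rest)
                                (cons (funcall fn head) (my-map fn rest))))
              (() ,(lambda () '())))))

;; (my-map #'1+ '(1 2 3))  =>  (2 3 4)
```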
Lisp already has the quote operator to deal with #1. Does it have anything that deals with #2?
The match construct creates a lambda for the RHS (i.e. the body of the expression) in the scope of the surrounding expressions
. So this works as well:
 (define (map fn list)
   (match list
     [(head . rest) (display list) (cons (fn head) (map fn rest))]
     [() '()]))
Note the (display list) call in the first case.
So you could manually expand the above into:
 (define (map fn list)
   (if (pair? list)
       ((lambda (head rest) (display list) (cons (fn head) (map fn rest))) (car list) (cdr list))
       (if (null? list)
           ((lambda () '()))
           (error "No match"))))
In fact this is what the macro does.
The question you should be asking is not "can Lisp without macros do what Lisp with macros can do?" The answer must be yes, as Lisp without macros is Turing complete, so any macro can be rewritten in Lisp without macros if you are prepared to jump through enough hoops. It's also trivially yes, because macros must expand into Lisp without macros if the macro expansion is to terminate! The question to ask is "do macros allow constructs that are significantly easier to use than their no-macro counterparts?" -- NoelWelsh
Yes. That's exactly the question that I'm asking. And I'm suggesting that if the answer is "Yes, lots of them," then that might be an indicator that there's something missing from the Lisp-without-macros language.
Or it might not be. I keep sensing a tremendous reluctance to change anything about the Lisp-without-macros language; maybe there's a good reason for it. I don't understand what it is yet, though. I understand the value of keeping the language simple; it just seems that if you're forced to rely on macros so often, then it's probably too simple.
I suspect that you disagree, but I don't know why. I guess that's what I'm trying to learn here.
Yes, what is missing from Lisp-without-macros is macros! -- AnonymousDonor
Heh. Fair enough. :) But I've tried to describe below why abstractions represented as macros don't make me as happy as abstractions represented by runtime constructs. Macros are a wonderfully general EscapeHatch, but I think there are still useful ways of improving Lisp by allowing more ideas to be represented directly in Lisp-without-macros. -- AdamSpitz
Lisp programmers generally share your concern about macros. They're not as satisfying as abstractions represented by runtime constructs. But when you *can't* represent the abstraction you're after with runtime constructs, you need macros. Anyway, your distinction between Lisp-with-macros and Lisp-without-macros is puzzling to us Lispers. Note that things implemented with macros, such as (loop) and CLOS are, as far as the CL standard is concerned, just as much a part of the language as are (car) and (cdr).
"Represented directly in Lisp" is of no semantic difference from "implemented with a macro in Lisp."
I don't understand how the first half of your paragraph fits with the second half. This distinction between abstractions-represented-as-macros and abstractions-represented-as-runtime-constructs is exactly what I'm trying to get at. If one is less satisfying than the other, then why do you say there's no semantic difference? -- AdamSpitz
What I was/am trying to get at is that the use of macros is largely an implementation detail. It doesn't concern the user of the macro particularly, and it doesn't feel any different from an implementation using (defun ...) or other Lisp. Macros are avoided where possible because they can have tricky side effects that can be difficult to debug. And I maintain that "represented directly in Lisp" versus "implemented with a macro in Lisp" is a funny distinction, because so much of Lisp is implemented with macros.
Way back at the beginning of this discussion, the thing that started bugging me about macros was a comment about how you can't, say, MAPCAR over a macro, or pass a macro into a function. I'll believe you if you say that you don't want to do that very often, but it seems to me that it's not *quite* just a hidden implementation detail; there are real technical reasons why runtime constructs are different from macros. -- AdamSpitz
I am not sure I understand what you mean; it seems to be either confusion or a tautology (not sure if it is you or I who is confused, though!). You can only mapcar over a list. Of course, if you pass mapcar a form that happens to be a macro that expands to a list, that is fine. Similarly, (foo (bar baz)) will call foo with an argument that is the result of the form (bar baz). If bar happens to be a macro, (bar baz) will be expanded at read time to, say, bar-expansion, and the result will be (foo bar-expansion). Runtime constructs are different from read-time ones, as you say - but this is a benefit. For example, I could use a macro to force loop unrolling for a particular construct. This happens at read time, before the compiler sees the forms; thus it doesn't matter whether the compiler can do this optimization or not. Note that this probably isn't the right thing to do, but it gives you an example of something simple that can be done at read time but *not* at runtime. Of course, since code is data, you *could* construct macros at runtime, etc....
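Here's a minimal sketch of that loop-unrolling idea. REPEAT-UNROLLED is an invented name; the point is only that the duplication happens during macro expansion, before the compiler ever sees the code.

```lisp
;; REPEAT-UNROLLED splices N literal copies of its body into the
;; program, so the compiler never sees a loop at all.  N must be a
;; literal number, since the copying happens at expansion time.
(defmacro repeat-unrolled (n &body body)
  `(progn ,@(loop repeat n append (copy-list body))))

;; (macroexpand-1 '(repeat-unrolled 3 (tick)))
;; =>  (PROGN (TICK) (TICK) (TICK))
```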
Suppose I write (mapcar #'square '(1 2 3)). This works just fine, because "square" is an abstraction that Lisp can implement nicely as a function.
Lisp has macros because there are some abstractions that Lisp can't implement so nicely as functions. These abstractions don't exist after read-time; we can't pass them around at runtime. If "square" had been implemented as (defmacro square (x) `(* ,x ,x)), we would not be able to write (mapcar #'square '(1 2 3)).
No Lisp programmer in the universe would implement "square" as a macro, because implementing it as a function is just as convenient and has the advantage of making #'square a first-class value at runtime.
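A short Common Lisp illustration of the difference (SQUARE-M is a made-up name, used only to keep the function and macro versions distinct):

```lisp
;; As a function, SQUARE is a first-class value at runtime:
(defun square (x) (* x x))
(mapcar #'square '(1 2 3))         ; =>  (4 9 16)

;; As a macro, it exists only as a rewrite rule.  A direct call is
;; fine, but #'SQUARE-M is an error - there is no function object.
(defmacro square-m (x) `(* ,x ,x))
(square-m 5)                       ; =>  25
;; (mapcar #'square-m '(1 2 3))    ; error: SQUARE-M names a macro
```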
The "with-scarce-resource" abstraction described on the LispMacros page was given as an example of a macro. Because it is implemented as a macro, we can't pass it around at runtime. This is probably not a problem.
In other languages, though (say, Self or Smalltalk or Ruby), "with-scarce-resource" *could* be implemented as an ordinary function, in such a way that the syntax for calling it would be just as convenient as the syntax for calling the "with-scarce-resource" macro in Lisp. If Ruby had macros, Ruby programmers would still implement "with-scarce-resource" as an ordinary function, for exactly the same reason that Lisp programmers implement "square" as an ordinary function.
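The same shape can be had in Lisp itself by passing a closure to an ordinary function. This is only a sketch: CALL-WITH-SCARCE-RESOURCE, ACQUIRE-RESOURCE, and RELEASE-RESOURCE are hypothetical names, not any real API. The macro version's only real advantage is sparing the caller the LAMBDA.

```lisp
;; "with-scarce-resource" as a plain higher-order function.  Because
;; it is a function, #'CALL-WITH-SCARCE-RESOURCE is itself a value
;; that can be passed around, stored, or MAPCARed over at runtime.
(defun call-with-scarce-resource (fn)
  (let ((resource (acquire-resource)))   ; hypothetical
    (unwind-protect
        (funcall fn resource)
      (release-resource resource))))     ; hypothetical

;; The caller wraps the body in a closure:
(call-with-scarce-resource
 (lambda (r) (do-something-with r)))     ; hypothetical body
```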
We prefer to implement abstractions using runtime constructs rather than macros, whenever it's convenient. There are ways of changing Lisp so that more abstractions can be implemented directly using runtime constructs. And yet Lisp programmers seem content to leave the language the way it is and implement lots of abstractions using macros. That's the mindset that I don't understand yet.
Am I still confused?
I'll leave that judgment up to you. :-) The reason lisp programmers don't worry about such issues (I think I've said this before) is that they express their *solutions* in terms of functions, and their *problems* in terms of macros. If you're trying to solve a problem, you express it, and the READer transforms it into functions which yield the solution.
Look, macros and functions are *different*. You never want to MAPCAR over a macro any more than you want to eat a rock. This is turning into one of those theological issues; I think, fundamentally, Common Lispers (Schemers having a very different mindset) are very pragmatic, and don't worry about such problems because they're *never a problem*. It's like the so-called "unhygienic" macro "issue"; the Common Lisper answers: if it's never a problem, why is it a problem? and looks blankly at you and gets back to work. Maybe I'll describe the CommonLispMindset? in more detail.
That's a page I'd be interested in reading. Please do create it.
One last thing that strikes me, though: Whenever I hear someone say, "My language can't do that, but why would I ever want to?" I think of what PaulGraham
wrote about "Blub" programmers. Probably that's not what's happening here... but how would we be able to tell?
I asked a C++ programmer about the problems of memory leaks, and he said that with well-written destructors and stack allocation it's *never a problem*. I suggested that GC handles all of this automatically, and he replied that C++ programmers are very pragmatic, and don't worry about such problems because they're *never a problem*. It's like the so-called "unhygienic" #define "issue"; the C++ programmer answers: if it's never a problem, why is it a problem? and looks blankly at you and gets back to work. Maybe I'll describe the CeePlusPlusMindset? in more detail.
Unfortunately, your C++ programmer was lying to you. I have never seen a significantly sized C++ project that didn't have memory management problems of one type or another. Notwithstanding the more subtle problems of managing lifetimes, the only ones that didn't even leak were ones where real money had been spent, in both tools and time, to find and plug leaks. Some of these were horrible band-aid jobs, too (e.g., "we have no idea how this resource is leaking in our 10MLOC, but we are pretty sure it is dead at this point, so let's explicitly delete it here"). A C++ programmer who thinks that memory management is "never a problem" is simply incompetent. There are a lot of (especially shrink-wrap) applications where this is not considered a problem, since the app will be closed (or crash) long before it exhausts available memory. I can even remember sitting in a design meeting where it was decided from above that the remaining leaks would not be fixed, since the target machine (Win 95) would never stay up long enough for this to be an issue. Not the sort of meeting you leave feeling a rush of pride in your profession. If you are selling to the PC crowd, you can sometimes get away with murder in this regard, but that doesn't mean it isn't a problem. It's just a problem that parts of the industry have been systematically sweeping under the rug. Of course, it is possible to write code in C++ that manages memory correctly. The people who do this understand what is needed, and think hard about memory issues. Often they add garbage collection of one type or another to the system (Greenspun's Tenth Rule at work). They are nowhere near a majority in the C++ world, more's the pity.
What if the C++ paragraph was meant to be ironic?
Then let the answer stand as an object lesson in the pitfalls of reading-wiki-before-first-coffee! In any case, there is a vast difference between the two contexts: "never a problem" meaning "I understand all the issues, and have empirical evidence that this theoretical problem doesn't occur in the real world," and "never a problem" meaning "I can't see a problem, so I don't worry about it - why worry about all those details anyway?" Unfortunately, the second variant appears in industry (and not just the C++ part of it) a disturbingly large amount.
I believe the original post had two points:
- To try to get you to examine your assumptions about macros. It is not necessary that macros be first-order; in TemplateHaskell, for example, macros are higher-order, at the cost of introducing a splicing operator to denote macro application.
- To make a point about hygienic macros. If there weren't a problem with variable capture, there wouldn't be gensym. You can code around it, but it would be better to have the compiler code around it for you - after all, it knows what names are visible at any given point. The syntax-case macro system allows both hygienic pattern-matching macros a la SyntaxRules? and defmacro-style non-hygienic macros.
No - that wasn't the point. We don't have GENSYM because of non-hygienic DEFMACRO. We have GENSYM because we often have to generate new symbols. One of the occasions on which we may need to generate new symbols is in writing correct macro code. There have been long religious flame wars about this; suffice it to say most CL programmers don't *want* the syntax-rewriting macros - they want DEFMACRO. And slinging gensyms *still* isn't a problem.
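For readers unfamiliar with the idiom, here is the standard sort of gensym-slinging under discussion: a SWAP macro needs a temporary variable, and GENSYM guarantees its name can't capture a caller's variable.

```lisp
;; Without GENSYM, naming the temporary TMP would silently break
;; any call site that already uses a variable called TMP.
(defmacro swap (a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (let ((x 1) (y 2)) (swap x y) (list x y))  =>  (2 1)
```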
My point was that macros are *different* from functions; that's precisely *why* they're useful, so trying to bash them for being irregular (i.e. different from functions) is kind of missing the point.
Lisp (CommonLisp in particular) is a GeneralPurposeProgrammingLanguage, as opposed to a DomainSpecificLanguage. Hence the language is not *simple*, it is *general*; however, no real-world program is general - they are all specific to a particular domain. Every GeneralPurposeProgrammingLanguage tries to bridge this gap (from the general language to the specific problem) in some way; most do it by providing libraries - libraries for text processing, network access, the NextBigThing, whatever. Lisp does it in two ways: libraries and macros.
There is definitely something missing from the Lisp-without-macros language, unless the problem you are trying to solve is to implement Lisp. The Lisp-without-macros language is an excellent tool (as opposed to solution) for solving many fundamentally different problems; compare this with Perl, which is an excellent tool for solving a specific problem. The reason Lisp is so good at solving many fundamentally different problems is that you can easily transform Lisp from a general tool (see SwissArmyChainSaw?) into a specific tool (see LaserScalpel?), without ever giving up anything. Macros are the metatool which allows you to do this.
While looking at these examples, it could appear that all macros allow you to do is save typing, which is true, especially considering (as NoelWelsh pointed out) that all macros are turned into Lisp code. However, the point is: would you have typed it? CeePlusPlus's template system is a way to save typing, CommonLisp's loop is a way to save typing, but we wouldn't have typed it had it not been *easy*. Which is the point of macros: they allow you to do something you would not have done before, not because it's impossible, but because it's too hard.
Let me add my 2 pence in the same vein as MarcoBaringer. All you need for a TuringComplete language is recursive functions and function application. This language is known as the LambdaCalculus. Everything else - variables, conditional statements, numbers, etc. - is syntactic sugar. [Wrong; syntactic sugar is some trivial rearrangement of syntax for a banal notational convenience. Some of the language features you describe are much more; they are SemanticSugar?.] Stuff the language designer has put in to make life a bit easier. Some stuff is so general, like conditionals, that almost every language provides it. Some syntax is only useful for specific tasks. Take Perl's regex operations, for example. This syntax makes text processing easier but doesn't add anything to, say, numerical code. Now the question to ask yourself is: "who knows your problem better, the language designer or you?" Unless you are the language designer, it has to be you. So who should decide what special syntax the language has? It should be you, because you are in the best position to decide what is most useful for your task. Most languages don't allow this. They put the language designer in a privileged position and say users aren't allowed to adapt the language to their needs. Lisp doesn't do this. It puts you on par with the language designer and lets you fit the language to your problem. This is certainly a different mindset from most languages, but it can be incredibly powerful. It lets you define, for example, little sublanguages for text processing (see OlinShivers's awk macros in the SchemeShell?), for XML processing (syntax-case is an excellent XML transform tool), and any other task.
Your response indicates that you consider macros to be a bad thing ("it just seems that if you're forced to rely on macros so often, then it's probably too simple."). I disagree. Relying on macros shows you've adapted your language to the problem domain, making your solutions simpler. PaulGraham seems to agree in his writing about ViaWeb?. If you haven't read them already, reading his articles may help you understand this point of view. Otherwise I must ask what you'd add to or change about Lisp to reduce this reliance on macros, and what makes this more the RightThing than implementing it as macros. -- NoelWelsh
As people here have said, there are two ways to extend a language: with runtime constructs (i.e. functions, in Lisp), and with macros.
We use functions and macros to construct bigger ideas out of smaller ones, ideas that are closer to the problem domain than the general ones provided by the system. The job of the language is to allow me to construct those new ideas in such a way that I can think in terms of them. It's creating an illusion; the ideas don't really exist, but the language tricks me into thinking that they do. The more completely the language maintains the illusion, the easier it is for me to think in terms of the higher-level ideas I've built.
We use macros to represent ideas that the language can't fully reify. Ultimately, everything has to be translated down into the runtime constructs, and if the runtime constructs aren't powerful enough to represent my ideas directly, then I need to use a macro. The macro does an imperfect job of representing the idea; it allows me to type it concisely, but doesn't maintain the illusion, because for some purposes the macro doesn't exist. So sometimes I'm still forced to think in terms of the lower-level runtime constructs that the macro expands into.
That's better than nothing. If my language doesn't have macros at all, I'm forced to write out the idea the long way every time I want to express it. So I still want macros, because at least they guarantee that I'll be able to give each of my ideas a concise read-time representation. But having an equally-concise read-and-run-time representation is better. And so when I find myself with a large class of ideas that can be represented more concisely as macros than as ordinary functions, I see it as a failure of either the syntax or the runtime language. (And when I say "failure of the syntax", I might mean, "failure of the macro system", if the macro system is supposed to be flexible enough to let the programmer do anything the language designer could do.)
*I* want to be able to adapt the language to my problem domain. But I want to be able to do it using constructs that don't disappear at runtime.
Anyway, I've probably rambled on for too long already. If any of you are still willing to talk to me, I'll keep going; otherwise, I'll stop. I think I'm happy with what I've learned here.
Do you mind that 'if' or 'defun' disappears at run time? I just have *never* had a situation where I'd need a macro to be around at run time. Macros really aren't even intended to be around at run time; they're intended to write code for you, and in so doing make what you write concise and clear.
I would argue that macros *are* available at runtime. You can stop a running program in Lisp and issue some commands at it in the ReadEvalPrintLoop. These commands can and do include macros. Where exactly in Common Lisp is there a runtime where macros don't get expanded? You can even use macros recursively in other macros! Additionally, in the debugger, you see the code as you wrote it, not the macroexpanded version.
Don't forget, REPL stands for *read*, eval, print, loop. I still maintain that macros are *not* available at run-time; the fact that you can type an expression containing macros in the REPL proves nothing more than that those forms are READ before being EVALed. And if your debugger lets you see macro code, you have a very nice environment, but that is not universal (or maybe only works with interpreted code, or something). Which lisp do you use, BTW? -- AlainPicard
I'll think about this more and write a proper response, but for now it just struck me as funny that in Smalltalk, 'if' *is* just an ordinary method, which I could do higher-order things with if I wanted to - but I've never wanted to. :) -- AdamSpitz
Ok, I think I see what you're trying to do. Your basic question, if I have it right, is: "what changes could we make to Lisp, in terms of evaluation model or syntax, that would render a large number of macros as functions?" In answer to this question: if Lisp were a lazy language, control-flow macros would be unnecessary. Alternatively, people might be prepared to fake them with closures, as done in Smalltalk, if 1) the syntax were easier and 2) the overhead were guaranteed to be low. You may be interested to know that the with-input-from-file etc. macros in Common Lisp are functions in Scheme. I suppose there could be simpler syntax for closures - say, fn, as suggested by PaulGraham - but I don't see it as a major complaint. This does away with a class of simple utility macros but still leaves the big 'uns like match. I don't think you can get rid of them. -- NoelWelsh
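The Smalltalk-style faking mentioned above looks like this in Lisp itself (MY-IF is an invented name; real code would just use IF): the construct becomes an ordinary function, and the price is a LAMBDA around each branch to delay its evaluation.

```lisp
;; A control-flow "construct" as a plain function: both branches are
;; passed as thunks, so only the chosen one is ever evaluated.
(defun my-if (test then-thunk else-thunk)
  (if test (funcall then-thunk) (funcall else-thunk)))

;; (my-if (> 3 2) (lambda () 'yes) (lambda () 'no))  =>  YES
```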
As a side note: It used to be that Lisps allowed you to define "fexprs" that would hold off on evaluating the arguments of the function until and unless you needed them. KentPitman? wrote an article explaining why programmers should be satisfied with macros, and why fexprs are evil, and this article is a major reason why CommonLisp and SchemeLanguage do not have fexprs. I, for one, disagree mightily with this: if fexprs are so bad, then why does CommonLisp have 25 of them? (They are called Special Forms.) Rather than remove the ability to create new special forms entirely, programmers should have been given the power to use them, with explanations of their pros and cons. That way, programmers could decide for themselves when to use functions, fexprs, or macros.
Yes! That's exactly what I'm trying to get at. And I think it's perfectly fine to "solve" one class of macros but leave the rest; I don't want to bloat up the language, so I'd like to only add things that will have a large impact. The more macros we write, the more patterns we'll see, and every so often maybe we'll be smart enough to come up with a simple feature that'll have a big effect.
Uh-oh. I'm about to start saying weird things. You should stop reading now.
Lately I've been working with the SelfLanguage, which places a strong emphasis on concreteness - like Smalltalk, there's no such thing as "compile time," only runtime, and the environment does a very good job of making you feel like you're directly manipulating your objects. The idea is that if you can make the program world feel like the real world, you can draw on a whole host of skills that humans have ingrained in their subconscious from working with the real world.
In the real world, there's no "compile time." :) The universe exists, and everything in it exists. (Unless you think that it doesn't. But that's a philosophical thing; I'm just talking about the way our brains have evolved to see the world.) If I pick up a ball, it's a real object, and I can poke it and see what colour it is and throw it around; there's no such thing as a "virtual" ball that only exists at compile time. There are factories that *create* balls, but those factories are also real objects in the universe.
Anyway, I think DavidUngar has corrupted me, but the more I think about it, the weirder it seems to have this idea of a concept that doesn't exist at runtime. So that's why I've been arguing with you guys. Three months ago I would have been on your side. :)
P.S. In Self, we *do* use blocks (lambdas) for all sorts of things (including control flow), and the syntax is lightweight enough that it feels perfectly natural, and the VM is smart enough to inline them away a lot of the time.
I will come back to this, but for now I'd like to point out that TheRealWorld? and my mental image of TheRealWorld?, more often than not, differ. I look to my left and I see a chair; I do not worry about details like whether the legs are steel or wood. TheRealWorld?'s *compile time* would be those moments when we abandon the abstractions/simplifications which allow us to live, and go into the nitty-gritty details. -- MarcoBaringer
See how cool the human brain is? :) You look at a chair - a real, concrete thing - and you see an abstraction.
You say that "compile time" is "those moments when we abandon the abstractions." Is that analogy really right? Notice that there's no separate "compile" step in the real world - your brain just shifts its focus. One second you could be thinking about the abstract "chairness" of the chair, and the next second you could be thinking about how the chair has wooden legs. It's like that optical illusion, where you're given a picture and you see either a vase or two faces. The faces don't cease to exist when you're seeing the vase. Your brain just shifts its focus.
I want my programming language/environment to work the same way. I want it to fool my brain into thinking that the objects inside the program are *real*, and let my brain shift between different views of those objects the same way I shift between seeing the vase and the faces.
The more convincingly the environment presents this illusion, the more advantage I can take of all this cool stuff that my brain can do. That's why macros kinda bug me - they wreck the illusion.
Kind of a weird way to look at programming, huh? :) But it's growing on me.
No doubt Self is cool. Last time I looked it was only available for Sun and Mac, so I've never used it, but I'd like to get the chance some day. Note that the phase separation between macro expansion and compilation in Lisp is somewhat arbitrary. Some systems (notably more advanced Scheme systems like DrScheme) do allow you to clearly separate compilation and runtime, but with eval and load around you can always do compile-time things at run time. Sometimes the Self world-view is nice. Other times you want a clearly defined compilation time (e.g. when speed is important). -- NoelWelsh
Yes. In situations where the machine isn't fast enough or smart enough to adapt to us, we have to adapt to it. Luckily, the human brain is good at *that*, too. :) -- AdamSpitz
It isn't just about performance. Many applications (e.g. web applications) transform code through a variety of stages. Think of transforming templates (with values filled in from a database, say) to XML to HTML. It happens. Macros allow you to express these different stages in a natural way. (See WebIt for a Scheme framework for transforming XML using macros.)
Now for an example of a macro that lives on at runtime, consider encoding a design pattern (such as Singleton) using a macro. The macro expands into a class definition that may implement a Singleton interface and certainly lives at runtime. The macro is just a shorthand expression. Then there are macros that don't make any sense living at runtime, like the pattern matching example. The whole point of this is to specify a visual way of doing cond statements. It makes no sense to have this at runtime as the runtime can't see.
Finally consider that a macro can do arbitrary computation. Where does macro expansion stop and runtime start? -- NoelWelsh
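A sketch of that Singleton example (DEFINE-SINGLETON and its generated accessor are inventions for this illustration): the macro itself is gone by runtime, but the class and closure it expands into live on.

```lisp
;; DEFINE-SINGLETON expands into an ordinary class definition plus
;; an accessor that lazily creates and caches the single instance.
(defmacro define-singleton (name &rest slots)
  `(progn
     (defclass ,name () ,slots)
     (let ((instance nil))
       (defun ,(intern (format nil "GET-~A" name)) ()
         (or instance (setf instance (make-instance ',name)))))))

;; (define-singleton config)
;; (eq (get-config) (get-config))  =>  T
```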
Wow. I'm having a lot of trouble figuring out how to communicate with you; you seem to see the world very differently than I do. (I have this same problem with Java people, and C people, and Perl people. I guess language really does shape the way we think. Or else it's just that I'm so weird that I can't communicate with anyone.)
When you said that there are things that "don't make any sense living at runtime," I was thinking the exact opposite. In my mind, it doesn't make any sense having these things *not* exist at runtime. Why should the concepts represented by macros be any different from the concepts represented by functions? I think it's a completely artificial division. (It might be a useful division for the machine to make, for efficiency reasons, but I don't think it's a division that has any meaning in the user's problem domain, and therefore the system should present the illusion that the division doesn't exist.)
But maybe I have a different idea of what "runtime" means than you do. You said that "the runtime can't see." But can *you* see the program at runtime? How does the Lisp debugger work?
That last thing that you said made perfect sense to me. If you can't tell the difference between read-time and run-time, then I withdraw some of my objections. (I suspect that you *can*, though, because otherwise AlainPicard never would have mentioned the terms, because they wouldn't have existed as concepts in his mind.) So my question is: does Lisp maintain the illusion? When people say that macros disappear after read time, what do they mean?
They mean this (gross simplification follows - see the HyperSpec for the real McCoy). When the lisp environment (the READER) reads a form (for compilation or evaluation), the READ function checks whether the form is a list, and if so, whether the CAR of that list corresponds to a symbol for which a macroexpansion function has been defined. If so, that function is applied to its arguments (which, of course, are first READ in, doing this whole process recursively).
Note further that Lisp separates the concepts of READ-time, LOAD-time and COMPILE-time. The lack of a READ-time in other languages is the cause of much confusion and misunderstanding of lisp macros - programmers lacking a mental concept of READ-time are unable to move past the CPP-style "substitute textual tokens" model of what a macro does. Lisp macros apply functions to *program structures*, to yield new program structures to be (recursively) read and eventually compiled or evaluated. Now, the important thing to note is that at READ-time, you have the FULL lisp environment at your disposal. Literally. You can get the macroexpansion code to run any lisp program you want to return the new form to be executed/compiled.
Finally, remember that the READer and compiler are available in your final executable lisp program (usually).
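You can watch a single step of this process yourself (DOUBLE is a throwaway macro for the demo):

```lisp
;; MACROEXPAND-1 applies the macro's expansion function to the
;; unevaluated form and returns the new program structure.
(defmacro double (x) `(+ ,x ,x))

(macroexpand-1 '(double (f y)))   ; =>  (+ (F Y) (F Y))
```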
If this is still too vague or unclear, say so, and I'll try to come up with something more cogent soon. -- AlainPicard
Right. That's all fine. My question now is about runtime. Can you see the program at runtime? If so, does it look different at runtime than it does at "programming time"? If it does, what extra cognitive burden do you think this places on the programmer? -- AdamSpitz
What, in the above paragraph, is the *program* at runtime? I'm not sure where you're going, or how this is related to lisp macros. At runtime, a lisp image has available to it: normal introspection (type-of, change-class, etc.) plus EVAL and LOAD. If I write:
 (defun foo ()
   (with-dr-seuss-macro (thing1 thing2)
     (play-with thing1 thing2)))
Are you asking about what information the function PLAY-WITH has (as it executes) about the macro WITH-DR-SEUSS-MACRO? (I suspect you're not, because I'm pretty sure you've understood my explanation about read time.) Can you rephrase your question? I need an AHA! moment to see what's really bugging you, so I can (try to) explain. -- AlainPicard
I'm not sure how to explain it. Let me try to do it by describing how I work in Self.
In Self, there's no difference between programming-time and runtime. Or, to put it another way, all programming is done at runtime. Or, to put it yet another way, there's just "time." :) The Self environment is a running Self program. Every Self object knows how to do things like, "Give me a list of your slots," or, "Give me the text of the method in this slot." And because, just like Lisp, Self can do things like compiling and "eval"-ing at runtime, you can implement a whole Self programming environment that way - you just write a Self program that can put up a visual representation of a live object (with all its methods), and when you click on a method and change its text and hit Save, it compiles the method and tells the object to save it. (In Self lingo, we call that an "outliner" - you can open up an outliner on any object, and it lets you view and modify the slots of that object, as well as giving you menus for doing some fancier things.)
So the Self environment is just a running Self program that lets me manipulate the objects in the running system. There's no concept of "compile time" - the world just exists, and you manipulate it, and that's it. Inside the debugger, I can look up and down the call stack and see the source code for each of the method activations on the stack, and that source code is the same as what I'd see if I were editing the code through an ordinary outliner (and I can edit the code while I'm in the debugger, if I want to - it's just editing the code at runtime, just like I'm always doing). For each method activation on the stack, I can click on any of the parameters or local variables and it brings me an ordinary outliner on that object.
That's what I mean by "looking at the program at runtime." The Self environment lets me see my objects and manipulate them, and everything is uniform (everything is an object, and I can see any object and manipulate it), and there's no such thing as "an object that doesn't exist at runtime." That's why macros seem so weird to me.
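For readers who haven't used Self or Smalltalk, the kind of live inspection and editing Adam describes can be approximated in any sufficiently reflective language. A rough Python sketch, with the class and names invented purely for illustration:

```python
# A rough analogue of "everything is inspectable and editable at runtime":
# look at a live object's slots, then replace a method while the program runs.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(3, 4)
print(vars(p))           # the object's "slots": {'x': 3, 'y': 4}
print(p.magnitude())     # 5.0

# "Edit the method and hit Save": rebind it on the live class.
Point.magnitude = lambda self: abs(self.x) + abs(self.y)
print(p.magnitude())     # 7 - existing objects immediately see the new code
```

This is far weaker than a Self outliner or debugger, of course, but it illustrates the key property: the code you see and edit is the code that is running, with no separate expansion step in between.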
So when you told me that one of the disadvantages of macros was that they disappear after read time, I thought three things:
- Why does this non-uniformity exist? Macros don't seem to fit in orthogonally with the rest of the system. (I can't pass a macro to mapcar.)
- A lot of the macro examples I've seen are doing things that Smalltalk and Self do with ordinary methods. Given that Lisp programmers seem to be aware that functions have some advantages over macros, why don't they change the language to make it easier to use ordinary functions for those things?
- Don't Lisp programmers look at their programs while they're running? Doesn't it bother them that they look different at runtime than at programming time?
What does the Lisp debugger look like? Can you step through a macro call the same way you step through a function call? (I'm not even sure what that would mean.)
Well, no, because there are no 'steps' to follow. What you can do is use macroexpand and friends to see exactly what the effect of the macro is. If you execute the resulting form, you can step through it in a debugger, of course.
For what it's worth, I have the impression that this debate is missing something crucial about just what a macro is: it's something that takes some arguments (up to and including entire bodies of s-expressions) and returns a series of s-expressions that will likely be valid Lisp code, i.e. Lisp forms - and these s-expressions had better be valid Lisp forms, because they're going to be "inlined" right into the code, and run as if they were Lisp code.
It is for this reason that macros can't be passed as arguments like functions can, at least in Common Lisp and Scheme. Now, it can be argued that it should be possible; indeed, Paul Graham mentioned that he wanted to explore this possibility in Arc. There may be technical reasons why it's unreasonable to do so, but that doesn't mean that the idea shouldn't be explored.
Take my explanation with a little bit of a grain of salt, though: I'm not an experienced Lisp programmer, although I aspire to be one. I have read a lot of material (about half of PracticalCommonLisp, and about half of OnLisp, for example), and there's a LOT I still need to read (but then, don't we all?). -- Alpheus
Some stuff that got too big moved reluctantly over to LispMacroDiscussionTwo. We'll fix it soon, I'm sure. -- as
A few points in response:
- This non-uniformity is necessary. There are things you can't do at run-time. I think you understand that because we discussed it at the beginning of this thread.
- Dunno. lambda is a bit of a handful. I've always thought a smart editing system that inserted code templates (like IntelliSense in Visual Studio, but better) would be good for Lisp. PaulGraham is renaming lambda as fn in Arc.
- Dunno. I don't, but I use DrScheme. DrScheme correlates errors back to source anyway so you can always see the original pre-macro expansion source.
PS: I don't think we're having difficulties communicating. At least I understand your position but I see my role in this discussion as advocating macros in line with the original premise (what are macros used for/are they useful). I assume Self reflects the runtime (lexer, parser, compiler) into the, err, runtime. If so you have the parser available at runtime, and you can extend the parser and ...
I think that the non-uniformity between macros and everything else in CommonLisp is not very important in this discussion. You can macroexpand code at runtime. It just looks different:
(apply #'foo (bar baz))
(macroexpand (cons 'foo (bar baz)))
On the other hand, you can't grab hold of that macro object and store it in data structures or pass it to functions. However, this is just a minor deficiency of CommonLisp, and is ultimately beside the point if this discussion is concerning LispMacros in general.
In the spirit of using macros to make up for deficiencies in the language, here's a read-macro that lets you grab a macro as a first-class object in the form of a function:
(set-dispatch-macro-character #\# #\,
  (lambda (stream subchar arg)
    (declare (ignore subchar arg))
    (let ((sym (read stream t nil t)))
      (cond ((not (symbolp sym)) (error "Bad syntax for #, form"))
            ((not (macro-function sym)) (error "No such macro: ~A" sym))
            (t `#'(lambda (&rest args) (macroexpand (cons ',sym args))))))))
Now we can write:
(apply #,foo (foo bar))
for the macro FOO.
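The same trick, stripped of the reader-macro syntax, is just closing over an expander function so the "macro" becomes an ordinary value you can store or pass around. A hypothetical Python sketch of the idea (all names invented):

```python
# The idea behind #, without reader syntax: treat a macro's expander as an
# ordinary first-class function, so it can be stored in data structures or
# mapped over a list of forms - the thing a real macro name can't do.

def expand_unless(cond, *body):
    """A toy 'macro': (unless c body...) => (if (not c) body...)."""
    return ["if", ["not", cond], *body]

# A first-class handle on the "macro": just the expander function itself.
first_class_unless = lambda *args: expand_unless(*args)

forms = [("raining?", ["stay-home"]), ("tired?", ["nap"])]
print([first_class_unless(c, b) for c, b in forms])
# → [['if', ['not', 'raining?'], ['stay-home']], ['if', ['not', 'tired?'], ['nap']]]
```

Note that, as with the #, reader macro above, what you get back is the expansion (code-as-data), not the evaluated result; evaluating it is a separate step.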
I think we're having trouble communicating because we see programming in completely different ways. I'm trying to think about programming from a human perspective, and Lisp people seem to think about it from a mathematical perspective. (I once suggested to a Self guy that Lisp's lambdas were the same as Self's or Smalltalk's blocks. "They're not the same at all," he said. "They may have the same mathematical properties, but their cognitive properties are completely different." And he's right - blocks are much more lightweight.) Lisp is appealing to my mathematician side, but not to my human side.
I don't know how else to say what I want to say. I think I'm starting to get annoying. :) Maybe we need a page on HumanOrientedProgramming?
Are you saying that mathematicians aren't human? While I'd be among the first to expound on the Platonic non-worldliness of mathematics, I would also be among the first to defend the innate human-ness of mathematics as well. After all, isn't mathematical knowledge, at its core, just a description of how humans have explored the world of ideas? Thus, HumanOrientedProgramming would have to be shaped, if not engulfed almost entirely, by mathematical thought and reasoning.
It's *because* of their mathematical purity that I find languages like Lisp and Haskell so attractive. For that matter, Forth and Smalltalk are also impressive tiny mathematical systems in their own right! Indeed, Turing machines (and the Algol-based languages like C that are designed around them) are often given a bad rap, because they tend to be so...machine-like...but Forth demonstrates that even a Turing machine can have a certain stark mathematical purity and flexibility to it.
Ultimately, mathematics is abstraction, and abstractions can become powerful levers to help us understand the world in ways that were impossible before. It is mathematics that enables us to build skyscrapers and go to the moon. And it is mathematics that enables us to put a million transistors on a piece of silicon the size of a thumbnail. And any computer language that cannot leverage the innate power of mathematics is doomed to be inferior to those that can.
Now, I am not familiar with Smalltalk, except for a brief attempt to tinker with it, and a bit of reading of its history, as told by AlanKay. And I am not at all familiar with Self. I am, however, familiar enough with Smalltalk to tell you this: Smalltalk, at its core, is a very simple language. Because of this, it belongs in the ranks of Lisp and Forth, and maybe Haskell, as languages that can leverage abstract mathematical thought to a high degree. Thus, it is fated to be more powerful than C, or Java, or Fortran, or certainly C++, because these languages are so tied to the underlying Turing machine that it's difficult to do anything flexibly or organically. Indeed, of these latter languages, C is probably the most powerful, due to its simplicity; but C demonstrates that it's possible to have a simple language, yet have enough of a rigid structure that it's difficult to do anything organic with it.
Again, take my words with a grain of salt. I am, after all, a mathematician! :-) In addition to that, I'm not an expert in many of these languages--although I sure would like to be! (Well, maybe not anything Algol-based. If I ever needed to hack on the Linux kernel, I'd grudgingly accept C, but beyond that, about the only other Algol languages I wouldn't mind using would be Python (which I have to use extensively for work) and maybe Ruby.) -- Alpheus
One of the major advantages of macros is language extension, such as a Scheme pattern matcher implemented as a set of macros.
For example, consider the following concise code for matching on an algebraic datatype:
(define-struct Lam (args body))
(define-struct Var (s))
(define-struct Const (n))
(define-struct App (fun args))
(define (unparse e)
  (match e
    [($ Var s) s]
    [($ Const n) n]
    [($ Lam args body) `(lambda ,args ,(unparse body))]
    [($ App f args) `(,(unparse f) ,@(map unparse args))]))
Here define-struct is a macro that expresses a record abstraction. The match macro expresses an algebraic sum type abstraction in a way the bare Scheme code
(if (var? x)
    ((lambda (s) s) (var-s x))
    (if (const? x)
        ((lambda (n) n) (const-n x))
        (if (lam? x)
            ((lambda (body args) `(lambda ,args ,(unparse body))) (lam-body x) (lam-args x))
            (if (app? x)
                ((lambda (args f) `(,(unparse f) ,@(map unparse args))) (app-args x) (app-fun x))
                (error "No match")))))
does not (at least not to the naked eye). In fact, the full expansion (not shown here) does runtime error checking in addition to some static expand-time error checking enforcing the correctness of the pattern templates.
Is the code above correct? As originally posted, the parentheses didn't balance: the match example was missing its enclosing (define (unparse e) (match e ...)) form, and the manual expansion was missing its closing parentheses and fall-through error case.
Note that the templates do not stand for expressions or lambda closures.
Many languages have algebraic datatypes, records and pattern matching facilities. However, in Lisp or Scheme you have the flexibility of adding your own language features. For example, if you like nonlinear patterns, they can be added. If you want to match on hashtables, you can add that. If you would like to use pattern matching to implement different views of an underlying structure, that's possible too.
In languages without macros, I have found that a lot of my programming effort tends to be wasted in workaround patterns for missing language features. That happens much less in Lisp and Scheme.
If you give a man a fish he'll eat for a day. If you give him a fishing rod, he'll eat for the rest of his life.
There's an awful lot of good info and viewpoints between LispMacros and LispMacroDiscussion. I'm going to try and refactor the two into something a bit more readable, as opposed to this back and forth. Wanna help? -- MarcoBaringer
Also don't forget the page DoWeWantLispMacros
I'll try to help if I can. In any case, you've certainly got my permission, for what it's worth. I don't think that we've reached a conclusion yet, though, so I'm not sure what the refactored version should look like. -- AdamSpitz
I think a good way to "see the light" with macros (the right tool for the right job) is by real-world example. Let's say someone came in here and posted a video of "me using macros in this situation" and a comparison of "me using no macros". Or someone came in here and posted a program to download that showed us why and where macros were useful, and how they are superior.
Depending on the priority of what needs to get done, or the right tool for the right job - when *are* macros useful? The question needs to be answered by example. A video might show that in some situations macros are absolute garbage, and the programmer would just be wasting time (or the exact opposite.. show us!). It would take 50000 words to explain why in an essay, but with example, like a video, or an application that operates and "does some stuff really well" one might be able to "see the light" in only a few seconds "Aha, oh, I see so if I wanted to do this, I would use macros". Or "Oh, well, now I realize that macros are just a bunch of garbage. Sure, they have their uses in weird situations, but since the uses are virtually useless for my situation - which is a situation that doesn't include weird situations - I don't need them!" or "Oh, so macros are good in the minds of people who have been using them for 50 years, but in reality, they could be less stubborn and just use so and so method" or "Oh, well, that was a good video, I can see now in only a few seconds why macros are useful/useless for my situation - whereas if I would have tried to read this in text form, I wouldn't have 'got the point'. The real world example helped a lot.".
Proving by example usually is faster than proving by explaining. Say I needed to quickly show you how a computer mouse works, how it connects to the computer, etc. Would it be easier to invite you over, show you the mouse, and show you how to use it? Or would it be easier to send you a postcard with text on it, explaining in great detail (more than 500 words) how a computer mouse works?
So some of you have posted some code up here on the site, trying to prove situations where macros are useful. But there is something missing... it's not complete. Say you pulled out your stopwatch, and told me that you made the app in 5 seconds using macros, and it took you 6 hours using no macros. Well, that would be the completion of the "hard message to get across".. If you only said or theorized that it would take you longer using no macros (but you didn't offer complete real concrete evidence), that is not good enough. I think the human brain needs to see sheer proof, or concrete proof, or some sort of real world evidence.
The use of macros is better because:
so and so made an app in only 7 seconds, and he recorded this with his stop watch. He also did this and that, and this and that. The advantages of this situation are: yada yada yada. The disadvantages are little to nil.
The use of macros is not better, because:
so and so made an app in 8 hours using macros, and he recorded this with his stop watch. He also did this and that, and this and that. The advantages of this situation are: yada yada yada. The disadvantages are large, using this method - they are: yada yada yada.
Or, vice versa. But the point is - there must be some concrete evidence, or situations that prove that macros are good in some situations, that macros are bad in some situations, that macros are bad in almost all situations known to humans, in comparison to another method, etc., etc., etc..
Show us something that means something.. take a stop watch, and prove that you drove to the store in only 6 hours with carB, but it took you 70 hours in carA. But carA might last longer and not break down as often. Or, carB might last longer and not break down as often - making it even better, since it also goes faster.
Ultimate decision: in 99 percent of all situations carA is good for this this and this and has numerous advantages. CarB is good for this this and this, but no one ever uses these (1 percent of situations). So carA is good for 99 percent of all people. Or, vice versa.
Two opinions on a situation, but what's the easiest way to get the message across?
-- -- -- -- -- -- -- -- -- -- -- --
- Opinion 1 (A talking situation, no proof - just a theory, no real world example)
1+1= some number
The opinion of this speaker: It should be 3.
I think it's some number between one and ten, one of the lower ones.
I've seen my good friend telling people that the answer is 3. So it is 3.
-- -- -- -- -- -- -- -- -- -- -- --
- Opinion 2 (A "real proof" situation, with a real world example)
1+1= some number
The opinion of this speaker: I think it's 2.
I took 1 apple and put it beside another apple and there were two.
I also checked with my calculator and it said 2 also.
-- -- -- -- -- -- -- -- -- -- -- --
Which answer has the proof behind it to back it up?
Those are two different opinions, but the one situation uses proof and real world application. Both people are just talking, and giving their opinion, but there is a definite feeling of "well, I'm going to have to side with Opinion 2, because he came in and showed us why 1+1=2 . Opinion 1 may be a nice person, and may have good friends, but that doesn't make me "see the light".
I'm not saying that Opinion 1 isn't right.. he COULD have proved that 1+1=3, but he didn't. He didn't come in and give us some real-world applications or some proof. If he came in and showed us a suitcase with two shirts in it, and another suitcase with one shirt in it, he would have proven that one suitcase plus one suitcase is 3 shirts. But he didn't do this. He tried to explain it in words, rather than by using a real-world example.
Someone give proof that macros are useful. And I don't mean just blabbering proof, I mean some real proof. Show by example, instead of text... maybe it's fun to write a 5000-word essay on "how to use a computer mouse", but it would be better to just show someone how to use the mouse by example - say, a video tape of a person using a mouse.
Show me a video of you coding a program, and how stupid and inefficient it is to do it "so and so way". Then show me why it's actually very efficient, in another situation (right tool for the right job). Depending on what the situation is, maybe it would in fact be useful in situationA AND situationB , but at different times or in different situations. Maybe it wouldn't in 99 percent of all situations. Then we know where to spend time.
Someone said macros are useful because of this reason, and because because because. Great, but lets see something concrete that portrays this. Or, if you have already written something that you thought was concrete - well, I didn't understand it, so it wasn't in fact concrete like you thought it was! I would much rather see some guy come in with oranges (concrete) showing that 5+5=10, rather than see some person writing an essay why 5+5 is 10. There are just so many numerous advantages in one situation.
There must be either some or a lot, or almost no situations where macros are either not useful or are useful. Prove it so that people can see it. Or, just keep talking and talking, and maybe prove it eventually. There have to be EXAMPLES out there of why macros are useful. It does not matter how many books are written about them. The examples or the uses must be present and operating. If you pointed me to some high tech website that was using Lisp and this website was beyond what I'd ever seen before, then fine. But until then?
Have you looked at Chapter 3 of Practical Common Lisp? There's a good example there of how macros can be helpful.
I would also add "The Nature of Lisp". The author of that piece goes to great length tying together Lisp, XML, and Ant.
Beyond that, PaulGraham's essay "Beating the Averages" has a testimonial about how macros were crucial in developing ViaWeb, as well as an explanation as to why it's difficult to provide a "brief" example of why macros are so powerful (if I recall correctly).
And, finally, there are just some things that you don't get, and indeed cannot get, unless you put in hours, or maybe even months or years, of studying. I have taken three semesters of Algebraic Geometry. To this day, I can only barely tell you (if that) what "schemes" or "pseudo-varieties" or "sheaves" are; yet these complex structures are crucial for reducing Algebraic Geometry to something understandable. Indeed, one of my professors described an attempt by one graduate student to understand Algebraic Geometry before these concepts were introduced: the poor student wandered the halls muttering to himself, having gone a little insane.
Similarly, I don't yet get Haskell's monads or arrows, for similar reasons.
Having said that, the above two resources ("Practical Common Lisp" Chapter 3 and "The Nature of Lisp") are powerful examples of what macros are intended to do.
Finally, I would recommend taking up the relatively simple challenge provided in the introduction (or at least the near-beginning, perhaps in the first chapter) of "Structure and Interpretation of Computer Programs", aka SICP. They suggest picking up a beginning Pascal book, and looking at the topics they cover in the class, and then compare it with the topics covered in SICP. If you want, you could probably repeat the exercise with C, C++ or Java in place of "Pascal", and any Lisp/Scheme (or perhaps Smalltalk) book in place of SICP. But, above all, please keep in mind that SICP is fully intended to be a textbook for an Introduction to Programming class--for people who have never used a computer language before! (When I finally came to that realization, it blew a fuse in my mind.)
If you still want a video, the SICP lectures, presented in the early 1980s to HP employees, are available online. -- Alpheus
I'm a Smalltalker as well, so perhaps I can offer a little clarity, because I can see Adam's point, but I also know why macros are useful, so I'll try and give my spin on it. Let's say I'm building a system, and in that system, I want my objects to have accessors; however, more than just simple accessors, I want each accessor to follow a particular pattern. I want to declare an accessor foo, but on the runtime object, I want that to equate to...
foo: aFoo
	self testFoo: aFoo.
	self doSetFoo: aFoo
doSetFoo: aFoo
	foo := aFoo
testFoo: aFoo
	aFoo ifNil: [self error: 'Nil not allowed.']
that's a public reader/writer, a private reader/writer that bypasses business rules, and a tester for testing possible foo values. Never mind the reason for this; the point is, it's a repetitive pattern that I simply can't avoid in Smalltalk, because it lacks macros. In Smalltalk, I have to write all those each time I want an accessor, and once written, those things simply exist. A Smalltalk system never shuts down and starts over; this is an important point, and the reason macros don't work well in Smalltalk. Were I in Lisp, or Scheme, I'd create a macro call such as (accessor 'foo)
that'd be macro-expanded into all that code, and save myself the burden of typing all those little accessor methods. But Lisp and Scheme runtimes are only temporary; the program stops, everything disappears; the next time it is run, the macro expands again. One can also redefine macros in the runtime, but let's just ignore that for the sake of simplicity. If I've changed the source code, and renamed (accessor 'foo) to (accessor 'bar), then at the next runtime the support methods for foo no longer exist; they have been replaced by bar. Were Smalltalk to do this, all the foo junk would still be lying around, because Smalltalk "is" its environment; there is no shutting it down and re-expanding it. Generating code in Smalltalk would leave all the generated code lying around as actual source; you couldn't tell the macro expansion from the handwritten code, because the Smalltalk code browser runs on the live system. When one looks at a Lisp program, one looks at files, the original source; when one looks at Smalltalk source, one is looking at the running program directly; there are no files. In Smalltalk, and Self, one doesn't edit source files that get run to create a running program; rather, one edits classes, which are running objects.
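The "expands fresh on every load" behavior Ramon describes can be mimicked in languages with runtime metaprogramming. In the Python sketch below, a class decorator regenerates the accessor methods each time the class definition is executed, so renaming the accessor leaves no stale methods behind; `accessor`, `Thing`, and all method names are invented for illustration:

```python
def accessor(name):
    """Generate a validated public writer plus private setter and tester
    for `name`, regenerated each time the class definition runs - a rough
    analogue of a Lisp macro re-expanding on every load."""
    def decorate(cls):
        def test(self, value):
            # The "business rule" check; the public writer goes through it.
            if value is None:
                raise ValueError("Nil not allowed.")
        def do_set(self, value):
            # The private setter that bypasses the business rules.
            setattr(self, "_" + name, value)
        def write(self, value):
            test(self, value)
            do_set(self, value)
        setattr(cls, "test_" + name, test)
        setattr(cls, "do_set_" + name, do_set)
        setattr(cls, "set_" + name, write)
        return cls
    return decorate

@accessor("foo")
class Thing:
    pass

t = Thing()
t.set_foo(42)
print(t._foo)   # 42
```

Unlike a Lisp macro, though, these generated methods do live in the running image; like Smalltalk, Python has no separate expansion artifact to throw away, which is exactly the distinction being drawn above.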
Now, one can tackle problems like this differently in Smalltalk to avoid that repetitive code, by hacking doesNotUnderstand and dynamically handling those message calls with objects instead of generated code, so I could just write...
self accessor: #foo
in an object's initialize method, and get exactly the same behavior, but it'd be handled by looking up objects in a hash and dispatching the appropriate one, likely much slower than actual accessor code. But this doesn't change the fact that Smalltalk misses out by not having macros, for those times when macros are really handy. However, by having a clean syntax for lambdas, i.e. blocks, one finds that many cases where a Lisper would use a macro are simply unnecessary in Smalltalk, which makes much more liberal use of blocks, which are already pretty and need no macro to gussy them up. Many, many Lisp macros do nothing but clean up ungainly use of (lambda) to make the underlying functions more presentable to the programmer, totally unnecessary in Smalltalk. Now imagine if, when a Lisp macro expanded, it replaced the original source it expanded from in the original files, and when you browsed the source code later you only saw the expansions; that's what macros would do in Smalltalk, and I'm guessing that's why Adam doesn't see them as a Lisper does; he's thinking too Smalltalky. -- RamonLeon
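The doesNotUnderstand: alternative Ramon describes has a close analogue in Python's __getattr__, which is invoked only when normal attribute lookup fails. A sketch of the same dispatch-from-a-hash approach (class and names invented for illustration):

```python
class DynamicAccessors:
    """Handle set_<name>/get_<name> messages dynamically instead of
    generating real methods - the doesNotUnderstand: approach."""
    def __init__(self, *names):
        self._names = set(names)
        self._values = {}

    def __getattr__(self, message):
        # Called only when normal lookup fails, like doesNotUnderstand:.
        if message.startswith("set_") and message[4:] in self._names:
            return lambda v: self._values.__setitem__(message[4:], v)
        if message.startswith("get_") and message[4:] in self._names:
            return lambda: self._values[message[4:]]
        raise AttributeError(message)

obj = DynamicAccessors("foo")
obj.set_foo(10)
print(obj.get_foo())   # 10
```

As Ramon notes for Smalltalk, this trades speed for uniformity: every access goes through a dictionary lookup instead of a compiled accessor, but there is no generated code lying around to confuse a browser.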
Macros add a third dimension to programming: syntactic abstraction. Using or creating language constructs that express more with less. Reduction of program complexity by elimination of repeating constructs. -- ChrisEineke
Probably all this discussion could be summed up as: is the ability to extend the compiler worth breaking reflection? -- IvanTikhonov