makepp takes over from make.pl

Since my project was progressing very slowly, I took a new look at its closest competitor, makepp.  It is so far superior to GNU make that I am putting up with make's strange syntax.  With that, make.pl is dead, and I will also contribute my builtin commands to makepp.

I had overrated the advantage of my 100% Perl syntax in a makefile.  It makes a makefile rather hard to read, and forces build-system maintainers to know Perl.  The same is true of an upcoming hybrid, PBS, which writes its makefiles in Perl, yet puts the commands in strings, like make.


make.pl – Frequently Asked Questions

Why is it not modularized?

While CPAN is easy to use, only Perl hackers are familiar with it.  But it is they who will write the Perl makefiles, which anybody may then use to compile a project.  Hence, as long as its size permits, and until it reaches a stage where distributions package it, make.pl will stay as easy as possible to install.

How can I build subdirectories?

Traditionally make does this by calling itself recursively.  This is not only inefficient, but plain wrong, as Peter Miller's excellent paper "Recursive Make Considered Harmful" explains.  Always build the whole project from the top directory by including all directory-specific makefiles.
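
A hypothetical sketch of such an inclusion, assuming the makefiles are plain Perl files (the directory and file names are made up):

    # The top-level makefile pulls in one makefile per subdirectory,
    # so a single make.pl run sees the complete dependency graph.
    for my $dir (qw(src lib doc)) {
        my $ret = do "$dir/Makefile.pl";
        die "couldn't parse $dir/Makefile.pl: $@" if $@;
        die "couldn't read $dir/Makefile.pl: $!" unless defined $ret;
    }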

What's the difference between a command and a normal function?

A command is a normal function with some additional special behaviour, which allows make.pl to echo the command before it is executed, and to react automatically to its return code, wherever it may get called.
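
A hypothetical illustration of the idea (not make.pl's actual implementation): wrap a plain function so that every call is echoed first and a failure aborts the run.

    use strict;
    use warnings;

    # Turn an ordinary function into a "command": echo each call, then
    # let the return code steer the run by aborting on failure.
    sub make_command {
        my ($name, $code) = @_;
        no strict 'refs';
        *{$name} = sub {
            print "$name @_\n";            # echo before executing
            $code->(@_)
                or die "$name failed\n";   # react to the return code
        };
    }

    # Usage sketch:
    make_command(remove => sub { unlink(@_) == @_ });
    remove('a.o', 'b.o');   # prints "remove a.o b.o", dies if an unlink fails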

Why so many builtin commands?

The builtins behave identically on all platforms, which saves a configure step to find out how exactly the native tools work.  They are also more efficient than fork/exec, which matters on the very slow POSIX emulation of the mainframe I have to work on.
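
A hedged comparison of the two styles; the cp builtin is an assumption here, standing in for whatever file commands make.pl provides:

    system 'cp', 'config.h.in', 'config.h';   # forks and execs an external cp
    cp 'config.h.in', 'config.h';             # hypothetical builtin: runs in-process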

How can I pipe data between builtin commands?

This is an unsolved problem.  A pipe is a system resource allowing processes to communicate asynchronously.  But the point of builtins is not having to fork.

For small amounts of data, one command could write to an "in memory" file, from which another could then read.
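
Perl 5.8 can open a file handle on a plain scalar, which gives exactly such an in-memory file; a minimal sketch:

    my $buffer = '';
    open my $out, '>', \$buffer or die "open: $!";
    print $out "hello from the first command\n";   # "command" one writes
    close $out;

    open my $in, '<', \$buffer or die "open: $!";
    print while <$in>;                             # "command" two reads it back
    close $in;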

Otherwise this would require the commands in a pipeline to run in separate threads.  And we'd need some buffer-based I/O mechanism that goes beyond plain strings, allowing concurrent reading and writing up to the point where the writer closes it.
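
A rough sketch of that thread idea, using Perl 5.8 ithreads with Thread::Queue as the buffer; an undef sentinel plays the role of the writer closing the pipe:

    use threads;
    use Thread::Queue;

    my $q = Thread::Queue->new;

    my $writer = threads->create(sub {
        $q->enqueue("line $_\n") for 1 .. 3;
        $q->enqueue(undef);                  # writer "closes" the pipe
    });
    my $reader = threads->create(sub {
        while (defined(my $line = $q->dequeue)) {
            print $line;                     # reader consumes concurrently
        }
    });
    $_->join for $writer, $reader;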

Why not "use Shell" instead of "use command"?

Commands have a special behaviour, in that their return code steers the progress of the make.pl run.  Besides, the Shell module impolitely mixes I/O redirection, which it leaves to the shell to parse, with arguments to the command itself.
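
An illustration of that mixing, assuming the stock Shell module that ships with Perl, which joins its arguments and hands the whole string to the shell:

    use Shell qw(echo);
    echo('hello', '>out.txt');   # the shell parses ">out.txt" as a redirection,
                                 # so "hello" lands in out.txt instead of the output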

Why fsort instead of sort?

Perl has the widely used builtin function sort, which has nothing to do with file operations.  So the command that sorts files gets the separate name fsort.
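
A usage sketch; fsort's exact signature is an assumption here, the point is only that it must not shadow the builtin:

    my @ordered = sort @names;   # Perl's builtin: sorts a list in memory
    fsort 'wordlist.txt';        # make.pl's command: sorts the lines of a file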

What are "deferred variables"?

Actually they're normal variables.  The trick is that they don't hold their value directly, but are references to either scalar or list values.  Now, Perl allows you to take a reference to an undefined variable, and only give that variable a value later.  You will always get the value at the time you dereference.
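
A minimal sketch of the mechanism in plain Perl (the variable names are made up):

    our $CC;                           # no value yet
    my $compiler = \$CC;               # reference taken now
    # ... later, e.g. after reading the configuration:
    $CC = 'gcc';
    print "compiler: $$compiler\n";    # prints "gcc", the value at dereference time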

Why "deferred variables"?

The foremost reason is predefined rules, be they builtin or from your master makefile.  This way a rule can be configured with variables which will only be given a value later.  Another advantage is that costly values (e.g. file-system based ones) need only be calculated when they are actually used.

Why does shellparse ignore my variable?

Give it a chance: shellparse "echo $PATH" is very different from the probably intended shellparse 'echo $PATH'.  In the double-quoted string, Perl interpolates $PATH before shellparse ever sees it.
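
Side by side, assuming shellparse itself expands shell variables from the environment:

    shellparse "echo $PATH";   # Perl interpolates first, so this is probably "echo "
    shellparse 'echo $PATH';   # shellparse sees the literal $PATH and can expand it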

Last modified: 2003-07-14
Powered by the GPL and the Artistic License