automake [Was: libhttp]

Matt Zimmerman mdz@debian.org
Sat, 22 Sep 2001 20:12:26 -0400


On Sat, Sep 22, 2001 at 01:05:28PM -0700, Ian Zimmerman wrote:

> Matt> If you use symlinks instead of VPATH, you still have to make a
> Matt> distinction between source files and derived files, in order to
> Matt> know what must be symlinked to the build directory.  This seems
> Matt> like more trouble than just using $(srcdir) where appropriate.
> 
> You make the symlinks _once_, before any building is done.  It can
> even be done from configure.  The entire source tree is symlinked,
> including derived files that come with the distribution.  If the user
> modifies any source files (implying they have write access to the
> source tree), the derived files are updated in place, which is the
> Right Thing, and it is also the same as would happen with VPATH +
> srcdir unless you deleted the derived file first.

Derived files should only be placed in the build tree, not the source tree,
since they are usually specific to the architecture and configuration being
built.  This happens automatically with VPATH, since the current directory
is always in the build tree, and files in the source tree are accessed via
$(srcdir).  Accidental references to a source file in the build tree are
errors, and easily caught.
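
For example, a configure-generated Makefile in the build tree typically
begins with something along these lines (a minimal sketch; foo.c and
config.h are made-up names):

srcdir = @srcdir@
VPATH = @srcdir@

# foo.c ships in the tarball, so make finds it via VPATH in $(srcdir);
# config.h is generated by configure, so it sits in "." (the build tree)
foo.o: foo.c config.h
	$(CC) $(CFLAGS) -I. -I$(srcdir) -c $< -o $@

Run make from an empty build directory and any rule that wrongly expects a
source file to be in "." fails on the spot.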

> Matt> What other complications are you referring to?  
> 
> I remember when I stopped using automake: I was trying to decide whether
> to use $(srcdir) or not in a given dependency.  I thought for minutes and
> minutes, and couldn't figure it out.  And I am a programmer.  Now imagine
> someone who's trying to compile the package, it breaks, and they have to
> play with the build.

If the file is something that you ship with the distribution, it will be in
$(srcdir).  If it's something that is created as part of the build process,
it will be in "." (or a path relative to the current directory or
$(top_builddir)).  The autoconf manual's "Build Directories" node should
probably say something along those lines, but doesn't.
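
A hypothetical fragment to make that rule of thumb concrete (defs.h.in and
the sed substitution are invented for illustration):

# defs.h.in ships with the distribution -> refer to it via $(srcdir);
# defs.h is created during the build -> refer to it relative to "."
defs.h: $(srcdir)/defs.h.in
	sed -e 's|%PREFIX%|$(prefix)|g' $(srcdir)/defs.h.in > defs.h

main.o: $(srcdir)/main.c defs.h
	$(CC) $(CFLAGS) -I. -I$(srcdir) -c $(srcdir)/main.c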

> Also consider all possible interactions with .PRECIOUS, .INTERMEDIATE,
> .SECONDARY etc.  Are you sure you really understand them?

These are GNU make features, and as far as I can tell, they don't have
anything to do with automake.  I can't find any references to them in any
Makefile.in's generated by automake in my packages and projects, either.  I
have never used them in my own projects, and of those I was only familiar
with .PRECIOUS.

All intermediate files go in the build directory.

> Matt> If any changes need to be made, configure.in and Makefile.am
> Matt> should be adjusted so that the situation is automatically
> Matt> detected and handled appropriately.
> 
> I think this is naive.  What if I want to rebuild just part of the
> package?  A library, say?  Darn, just _finding the right target_ in
> the Automake maze is a problem.

cd libfoo
make

works for my library packages.  If you want to build lib/bar.la, cd lib &&
make bar.la.  If you have more than one high-level target per directory, and
want to make things easier for your users, add to the top-level Makefile.am:

# use $(MAKE) rather than plain make so flags propagate to the sub-make
libfoo:
	$(MAKE) -C libfoo all
foo:
	$(MAKE) -C foo all

etc.  If you're using a non-recursive makefile setup, you can just do:

libfoo: libfoo/libfoo.la

or what have you.

> itz> 2 - The generated makefile includes a dependency of Makefile.in
> itz> on configure.in.  That means if the installing user has to modify
> itz> configure.in (something I had to do quite often in my pre-Debian
> itz> days when I compiled everything from source, to make tarballs
> itz> install cleanly), they must have automake on their machine (even
> itz> if they never touch Makefile.am).
> 
> Matt> This dependency is correct, since changes in configure.in can
> Matt> require corresponding changes in Makefile.in.  
> 
> Yes, but this is an artifact of automake.  The dependency is required
> the way automake works; that is what I object to.

It's the same if you aren't using automake; changing configure.in to, say,
alter the way some variables are grouped and used also means changing
Makefile.in correspondingly.  I don't think it's practical to try to
isolate them into completely separate layers.
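
For instance (a minimal sketch; GTK_LIBS is an invented variable name): if
configure.in says

AC_SUBST(GTK_LIBS)

then a hand-written Makefile.in has to pick it up with matching lines such
as

GTK_LIBS = @GTK_LIBS@
prog: $(OBJS)
	$(CC) -o prog $(OBJS) $(GTK_LIBS)

Rename or drop the variable on the configure.in side and the Makefile.in
side has to change in step, automake or no automake.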

> Matt> autoconf will already be required in order to recreate
> Matt> configure; requiring automake is not unreasonable.
> 
> In the recent past, mutual version dependencies among autoconf, automake
> and libtool were very fragile.  Getting the required version of automake
> frequently meant upgrading -- or worse, downgrading! -- autoconf as well.

This happens from time to time with almost any sufficiently complex tool.
If you have a working setup, though, you usually don't need to mess with it
for a long time; automake 1.4 + autoconf 2.13 + libtool 1.3.x worked
together for a very long time.

> itz> 4 - It assumes a recursive make strategy.  So following the
> itz> excellent advice in
> 
> Matt> The nice thing, of course, being that you could modify automake
> Matt> to use a one-big-makefile-split-into-chunks strategy without
> Matt> modifying any of your automake-based projects at all.  That is
> Matt> the value of abstraction.
> 
> I didn't say I hated abstraction, high level project description, or even
> Makefile.am's as they are.  I said I hated automake, the implementation.
> And I think it would be highly nontrivial to modify it this way.

Nontrivial, perhaps, but not difficult.  The largest missing piece is
allowing for source files in subdirectories, which automake currently
explicitly does not support.  I suspect this is because automake makes some
assumptions about where to place working files (e.g. with libtool).  If that
were fixed, you could write:

Makefile.am:
include foo/Makefile.am
include bar/Makefile.am

foo/Makefile.am:
bin_PROGRAMS  = foo/foo
foo_foo_SOURCES = foo/hello.c

bar/Makefile.am:
bin_PROGRAMS += bar/bar
bar_bar_SOURCES = bar/hello.c

...and use a single-makefile build system.

-- 
 - mdz