Distributed processing by make


make is a popular, proven solution for describing and executing tasks with dependencies among them. Its original purpose was automating software builds, but many people have noticed that it is also good at describing workflows composed of existing programs. Makefile syntax may be somewhat cryptic for beginners, but it goes straight to the point. make has a natural fault tolerance model: when some tasks fail due to machine crashes or other temporary problems, simply re-running make performs only the remaining work rather than starting over, because intermediate files essentially serve as checkpoints. It also has a natural parallel execution model in which tasks with no dependencies between them may run in parallel. Finally, it has been used by so many programmers that the learning barrier is low for most people. It is natural for us to take advantage of these strengths.
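This restart behavior is easy to observe with plain GNU make. The sketch below (target names are illustrative; no GXP involved) builds three targets, deletes one, and re-runs make:

```shell
# Sketch of make's checkpoint-like restart, using plain GNU make.
dir=$(mktemp -d); cd "$dir"
# Recipe lines need a leading TAB, hence printf with \t.
printf 'all : 1.result 2.result 3.result\n%%.result :\n\techo built $@ >> build.log\n\ttouch $@\n' > Makefile
make            # first run builds all three targets
rm 2.result     # simulate one lost/failed intermediate result
make            # second run re-executes only the missing target
cat build.log
```

The log shows three "built" lines from the first run and a single extra "built 2.result" line from the second; the surviving result files acted as checkpoints.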

There seem to be many similar tools and research projects out there (SGE's built-in dmake, pgmake, distmake, ...), but ours differs in several respects.

  • It uses GNU make with no modification. All the nice GNU extensions to the original UNIX make come for free, and will continue to do so in the future. GNU make supports -j (parallel execution); our software is a thin layer built on top of this feature.
  • It uses GXP as the underlying execution engine, so it is easy to distribute jobs across clusters using a variety of underlying resource access protocols (SSH, RSH, Torque, SGE, etc.)

Getting Started


  • You must have GNU make installed. Check by typing 'make --version'. If you see a message like the following, congratulations, you are done.
 make --version
 GNU Make 3.81
 Copyright (C) 2006  Free Software Foundation, Inc.
 This is free software; see the source for copying conditions.
 There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
 PARTICULAR PURPOSE.
 This program built for x86_64-pc-linux-gnu

If the command is not found, run 'apt-get install make' on Debian systems. A package is available on virtually all Unix platforms. The last resort is to build it from source.

Step 1: Prepare Makefile

There are two things you need to do to your Makefile.

  • You must insert "SHELL=mksh" in your Makefile.
  • You must put "=" in front of command lines you want to distribute.

For example, let's say the original Makefile looks like this:

 all : 1.result 2.result 3.result 4.result 5.result
 %.result :
     your_program $< $@

Running make with this Makefile will result in executing:

 your_program 1.result
 your_program 2.result
 your_program 3.result
 your_program 4.result
 your_program 5.result

in sequence, all on the local host. If you want to run them distributed, modify the Makefile as follows.

 SHELL=mksh
 all : 1.result 2.result 3.result 4.result 5.result
 %.result :
     = your_program $< $@


  • A line "SHELL=mksh" has been inserted.
  • An equal sign ("=") has been put before the command line.
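Before introducing GXP, you can sanity-check a Makefile like this with a plain-make dry run. The sketch below uses ordinary make without the SHELL=mksh line; your_program is just the placeholder name from the example and need not exist, because -n only prints the commands:

```shell
# Dry-run the example pattern rule with plain GNU make (-n prints
# the expanded recipe lines without executing anything).
dir=$(mktemp -d); cd "$dir"
printf 'all : 1.result 2.result\n%%.result :\n\tyour_program $< $@\n' > Makefile
make -n > dryrun.txt
cat dryrun.txt   # shows the your_program ... lines that would run
```

Since the rule has no prerequisites, $< expands to nothing and each printed line ends with the target name ($@).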

Step 2: Grab resources with GXP

I assume you are familiar with how to do this.

Step 3: Run it

 gxpc make -j

will launch the above five tasks in parallel (because of the -j option passed to make) and dispatch them to free resources (by the job scheduler started by the above command). Each host executes one task at a time, so if there are only, say, two hosts, three tasks will wait in a queue and be dispatched as hosts become free.
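The -j behavior itself can be tried with plain GNU make on a single machine; gxpc make -j additionally dispatches the parallel tasks to remote hosts. A minimal sketch:

```shell
# Plain-make illustration of -j: targets with no dependencies among
# them may be built in parallel (here, locally).
dir=$(mktemp -d); cd "$dir"
printf 'all : 1.result 2.result 3.result 4.result 5.result\n%%.result :\n\ttouch $@\n' > Makefile
make -j 5
ls
```

All five result files appear regardless of the order in which the parallel jobs happened to finish.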


  • It does not modify GNU make at all. The 'gxpc make' command above simply passes all command line arguments to GNU make without interpretation, so all the nice features and command line options of GNU make remain available here too. For example, 'gxpc make -n' prints what would be done, 'gxpc make --help' shows GNU make's command line help, advanced features such as $(shell ...) can be used, and so on.
  • It has a modest monitoring and reporting interface via the web. By default, it writes job and host status to state.html in the current directory. Pointing your browser at that file, either locally or remotely, shows a status page.
  • It uses GNU make's "SHELL=" variable, with which one can specify the shell that executes individual commands. It is normally /bin/sh, and one rarely wants anything else. By substituting 'mksh' (bundled with the GXP distribution) for the regular /bin/sh, we effectively intercept the subprocess invocations of GNU make and redirect them to our job scheduler. The job scheduler is launched automatically when you type 'gxpc make.'
  • For this kind of system, the central issue is the scalability limit of the task manager. The key is to minimize the resource consumption of outstanding jobs, i.e., jobs that have been spawned by GNU make but have not finished. Due to the way GNU make detects termination of subprocesses, it seems unavoidable to keep at least one process alive for each outstanding job (I assume GNU make detects termination of children by the wait system call or its relatives, perhaps with SIGCHLD handling; I have not confirmed this by source code inspection, but it is likely the case). It is therefore important to minimize the memory usage of such jobs. In this system, one such process consumes a few hundred kilobytes of memory, so a modest machine with 1 GB of memory can handle a thousand outstanding jobs comfortably. In general, there is no point in spawning more tasks than you can actually run in parallel, so if you have 500 processors, it is a good idea to limit the number of outstanding jobs to somewhere around 500 with 'gxpc make -j 500'.
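The SHELL= interception mechanism can be sketched with a toy wrapper. Here 'logsh' is a made-up stand-in for mksh: it merely logs each recipe line and then runs it locally, whereas mksh hands the command to the GXP job scheduler:

```shell
# Toy demonstration of GNU make's SHELL= hook: each recipe line is
# passed to $(SHELL) -c '<command>', so a wrapper shell can see it.
# 'logsh' is a hypothetical stand-in for mksh.
dir=$(mktemp -d); cd "$dir"
cat > logsh <<'EOF'
#!/bin/sh
# make invokes: ./logsh -c '<recipe line>', so $2 is the recipe line
echo "intercepted: $2" >> intercept.log
exec /bin/sh "$@"
EOF
chmod +x logsh
printf 'SHELL=./logsh\nall : a.result\n%%.result :\n\techo ok > $@\n' > Makefile
make
cat intercept.log
```

The recipe still produces a.result as usual, but intercept.log records the exact command line the wrapper was handed; mksh does the same interception and forwards the command to the scheduler instead of running it in place.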

© 2007 Taura lab. All Rights Reserved.
Powered by Pukiwiki / Last-modified: 2016-07-12 (Tue) 15:12:44