The goal of the Jupiter project is to investigate scalable Java Virtual Machine (JVM) architectures. Specifically, the project aims to design and implement a JVM that scales well on our 128-processor cluster of PC workstations, which is interconnected by a Myrinet network and provides shared-memory support in software (see the related ATHLOS project). We believe that such JVM scalability can be achieved by examining four main aspects of its design: 
  • Memory locality. At present, objects are allocated on the heap with little or no regard for locality. While this approach may be appropriate for uniprocessors or small-scale SMPs, it is unlikely to work well on a cluster of workstations, where remote memory access is one to two orders of magnitude slower than local memory access. Hence, one of our goals is to develop allocation heuristics that enhance locality.

  • Parallel garbage collection. Garbage collection can consume a considerable amount of application time. Typically, JVMs employ "stop-the-world" garbage collectors, where program threads are halted during garbage collection. This approach will not work for large numbers of processors, for two reasons. First, the cost of "stopping the world" is considerably higher when the number of processors is large. Second, using a single thread to collect garbage results in an unacceptably large sequential fraction for any application. Consequently, we are developing a multi-threaded "on-the-fly" garbage collector that scales well to large numbers of processors.

  • Memory consistency model. To achieve scalable performance on a large number of processors, it is important to exploit the "relaxed" Java Memory Model (JMM). At present, no JVM implements the JMM faithfully, and indeed many implement it incorrectly, leading to a lack of coherence and to lost optimization opportunities. The specification of the JMM is presently under revision. We will investigate the use of this revised model within a JVM and determine its impact on performance.

  • Efficient threads and synchronization. With a large number of processors, it is critical to provide efficient threading support and synchronization mechanisms that scale well. We are examining means of providing such support.
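As a concrete illustration of why the memory model matters, the following small Java fragment (a sketch for exposition, not Jupiter code) shows a visibility problem that a relaxed memory model permits. Without the volatile modifier, the JMM allows the reader thread to spin forever, or to observe ready as true while still seeing the default value of data; with it, the volatile write/read pair establishes the required ordering.

```java
public class VisibilityExample {
    // Remove 'volatile' and the JMM no longer guarantees that the reader
    // ever observes the writer's stores, or observes them in order.
    private static volatile boolean ready = false;
    private static int data = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile write becomes visible */ }
            // The write to 'data' happens-before the volatile write to 'ready',
            // which happens-before this read, so 42 is guaranteed here.
            System.out.println(data);
        });
        reader.start();
        data = 42;    // ordinary write, ordered before the volatile write below
        ready = true; // volatile write publishes 'data' to the reader
        reader.join();
    }
}
```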

In order to carry out our research, we have embarked on the design and implementation of a modular and extensible JVM infrastructure, called Jupiter. Jupiter implements design patterns that enhance the ability of developers to modify or replace discrete parts of the system in order to experiment with new ideas. Further, to the extent feasible, Jupiter maintains a separation between orthogonal modifications, so that the contributions of independent researchers can be combined with a minimum of effort. This flexible structure, similar to UNIX shells that build complex command pipelines out of discrete programs, allows the rapid prototyping of our research ideas by confining changes in JVM design to a small number of modules. In spite of this flexibility, Jupiter delivers good performance. Experimental evaluation of the current implementation of Jupiter using the SPECjvm98 benchmarks shows that it is on average 2.65 times faster than Kaffe and 2.20 times slower than the Sun Microsystems JDK (interpreter versions only). By providing a flexible JVM infrastructure that delivers competitive performance, we believe we have developed a framework that supports further research into JVM scalability.
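The pipeline-style modularity described above can be sketched as interface-based composition. The names below are hypothetical and do not come from Jupiter's source; they only illustrate how one researcher's component (here, an allocation-counting wrapper) can be layered over another's without either knowing about the other, in the same way shell commands compose.

```java
// Hypothetical pluggable component: the interpreter would depend only on
// this interface, never on a concrete allocator.
interface ObjectAllocator {
    Object allocate(int sizeInBytes);
}

// One interchangeable strategy: plain allocation from the default heap.
class SimpleAllocator implements ObjectAllocator {
    public Object allocate(int sizeInBytes) {
        return new byte[sizeInBytes];
    }
}

// An orthogonal modification, layered on like a pipeline stage: it adds
// bookkeeping (e.g., for locality experiments) around any inner allocator.
class CountingAllocator implements ObjectAllocator {
    private final ObjectAllocator inner;
    private long allocations = 0;

    CountingAllocator(ObjectAllocator inner) {
        this.inner = inner;
    }

    public Object allocate(int sizeInBytes) {
        allocations++;
        return inner.allocate(sizeInBytes);
    }

    public long allocationCount() {
        return allocations;
    }
}
```

Swapping in a different inner allocator, or stacking further wrappers, requires no change to the code that calls `allocate`, which is the property that lets independent modifications be combined cheaply.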

The current implementation of the Jupiter infrastructure is a working JVM that provides the basic facilities required to execute Java programs. It has an interpreter with multithreading capabilities. It gives Java programs access to the Java standard class libraries via a customized version of the GNU Classpath library, and it is capable of invoking native code through the Java Native Interface. It provides memory allocation and collection using the Boehm garbage collector. On the other hand, it currently has no bytecode verifier, no JIT compiler, and no support for class loaders written in Java, though the design allows all of these to be added in a straightforward manner.
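For readers unfamiliar with the Java Native Interface mentioned above, the mechanism looks roughly like the following from the Java side. The class and library names here are purely illustrative, not part of Jupiter: a class declares a method as native, and the JVM resolves it at run time from a shared library (the C implementation would follow the JNI naming convention, e.g. Java_NativeClock_currentTimeMicros).

```java
public class NativeClock {
    // Declared but implemented in native code loaded at run time.
    public static native long currentTimeMicros();

    public static void main(String[] args) {
        try {
            // Loads libnativeclock.so (Linux) / nativeclock.dll (Windows).
            System.loadLibrary("nativeclock");
            System.out.println(currentTimeMicros());
        } catch (UnsatisfiedLinkError e) {
            // Reached whenever the illustrative library is not installed.
            System.out.println("native library not available");
        }
    }
}
```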

This web page is maintained by Tarek S. Abdelrahman. Last update: July 2002.

Address: http://www.eecg.toronto.edu/~tsa