PALATIN: A Platform for Interactive Algorithms
Description:
Writing distributed programs is a challenging task. Many researchers have tried to overcome this problem with the help of a suitable programming environment, using message passing in the 1970s, remote procedure calls in the 1980s, and distributed shared memory (DSM) in the 1990s. Still, the percentage of distributed applications based on these environments is very small. One possible explanation is that each round is an evolutionary stage of the distributed programming paradigm, and each attempt brings us closer to a methodology that will ultimately be easy to use and efficient. A less optimistic explanation is that no matter what methodology people come up with, ease of use and efficiency are two goals that cannot be achieved at the same time. We believe that this is in fact true for the direction in which the design of universal platforms for distributed computing is currently heading.

The basic dilemma is that procedural thinking and the von Neumann machine have dominated the way we design and reason about programs for distributed environments, yet these approaches are inadequate for distributed computing. Thus, a different paradigm is needed, which we call the Spheres model (similar in flavor to, but much more powerful than, cellular computing models). To illustrate the basic ideas behind Spheres, we contrast it with DSM models. The DSM approach suffers from the following deficiencies:
- Processes and data are separate entities. This creates access and sharing problems that have to be handled with great care to avoid inefficiencies and inconsistencies.
- Read and write are inherently synchronous operations. Thus, executions of tasks in distributed applications are not decoupled from each other. Also, read requests can potentially block the execution of a task.
- There is a linear addressable memory. This makes it possible to use implicit and/or non-local strategies such as hashing that appear to be efficient but that are actually expensive to emulate in a distributed environment.
- The memory is assumed to be reliable. This has made it tempting in the past to design data structures in which only the accesses are parallelized but not the structure itself, creating data structures that are highly vulnerable in an unreliable distributed setting.
- The memory management (including the mapping and scheduling of read/write requests) is not under the control of the application. This makes the design and verification of distributed applications a highly non-trivial task, and it prevents an application from optimizing the data placement to its needs.
- It is not specified how processes can join an application, so this has to be handled by hand in distributed applications.
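To make the synchrony deficit concrete, here is a minimal sketch (in Python, with a hypothetical DSMCell interface invented for illustration) of how a synchronous read couples tasks: the reading task is blocked until a writer elsewhere supplies the value.

```python
import threading

# Toy DSM cell (hypothetical interface, for illustration only):
# read() is synchronous and blocks the calling task until the value exists.
class DSMCell:
    def __init__(self):
        self._value = None
        self._ready = threading.Event()

    def write(self, value):
        self._value = value
        self._ready.set()

    def read(self):
        # Synchronous semantics: the caller cannot proceed until the
        # (possibly remote) value arrives -- the read couples both tasks.
        self._ready.wait()
        return self._value

cell = DSMCell()
result = []

def reader_task():
    result.append(cell.read())   # blocks here until some writer runs

t = threading.Thread(target=reader_task)
t.start()
assert not result                # the reader is stuck waiting on the read
cell.write(7)                    # a writer finally publishes the value
t.join()
print(result)                    # → [7]
```

The fire-and-forget operations of the Spheres model avoid exactly this kind of blocking dependency between tasks.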
The deficits above show that major changes are needed in order to arrive at a paradigm useful for distributed applications:
- Instead of separating processes and data, organize all computation and storage into spheres: autonomous and atomic entities with their own private processing and data.
- Allow concurrency outside spheres but only the sequential execution of tasks inside spheres. This tremendously simplifies the design and verification of operations within spheres.
- Instead of read and write operations, only allow operations that are strictly asynchronous (i.e., fire-and-forget).
- Instead of a linear addressable memory, provide no memory besides the private memory within spheres; spheres are interconnected by an explicit link structure that is under the control of the application.
- Instead of assuming reliable resources, take into account that spheres may fail, and therefore focus on self-stabilizing distributed applications.
- Instead of a hidden resource management, put sphere placement and migration under the control of the application, and allow a connection between spheres only if it is directly supported by the underlying communication layer.
- Provide a rendezvous service that allows spheres to join the system without juggling IP addresses.
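Under the assumptions above, a sphere can be sketched as follows (in Python; the class and operation names are illustrative, not the PALATIN API). Each sphere keeps private state, executes the tasks in its inbox strictly sequentially, and communicates only via fire-and-forget messages along explicit links.

```python
import queue
import threading
import time

class Sphere:
    """Sketch of a sphere (hypothetical API): private state, one worker
    that runs tasks strictly sequentially, fire-and-forget messaging."""

    def __init__(self, name):
        self.name = name
        self._state = {}             # private data: never shared directly
        self._inbox = queue.Queue()  # incoming asynchronous operations
        self._links = {}             # explicit link structure to other spheres
        threading.Thread(target=self._run, daemon=True).start()

    def link(self, other):
        self._links[other.name] = other

    def send(self, target_name, op, *args):
        # Fire-and-forget: enqueue at the linked sphere and return
        # immediately; there is no reply and no blocking.
        self._links[target_name]._inbox.put((op, args))

    def post(self, op, *args):
        self._inbox.put((op, args))

    def _run(self):
        # Sequential execution inside the sphere: one task at a time, so
        # operations on the private state never need internal locking.
        while True:
            op, args = self._inbox.get()
            op(self, *args)

# Example operations (assumed for illustration):
def store(sphere, key, value):
    sphere._state[key] = value

def forward(sphere, target, key, value):
    sphere.send(target, store, key, value)

a, b = Sphere("A"), Sphere("B")
a.link(b)
a.post(forward, "B", "x", 42)  # asynchronous: returns immediately
time.sleep(0.2)                # give the daemon workers time to process
print(b._state)                # → {'x': 42}
```

Note how the sequential inbox makes each operation on a sphere's state atomic by construction, which is precisely what simplifies design and verification within spheres.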
These are the basic ingredients of the Spheres model. The goal of this project is to implement the PALATIN platform, which allows the easy design of efficient programs under the Spheres model. An early Spheres simulation environment was already used in the course "Theory of Network Communication" in the fall 2003 semester, and an improved environment will be used for the same course in the fall 2004 semester.
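The rendezvous service from the list above might, in its simplest form, look like a name registry (a sketch with an assumed interface, not the actual PALATIN service): spheres join under symbolic names and discover each other without ever handling IP addresses.

```python
# Minimal sketch of a rendezvous service (hypothetical interface):
# spheres register under symbolic names and look each other up by name.
class Rendezvous:
    def __init__(self):
        self._registry = {}

    def register(self, name, sphere):
        # A joining sphere announces itself under a symbolic name.
        self._registry[name] = sphere

    def lookup(self, name):
        # Returns the registered sphere, or None if it has not joined yet.
        return self._registry.get(name)

rv = Rendezvous()
rv.register("worker-1", object())
print(rv.lookup("worker-1") is not None)  # → True
print(rv.lookup("worker-2") is None)      # → True
```

A real implementation would additionally have to tolerate failing spheres, in line with the self-stabilization goal above.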
Faculty members:
PhD students:
Publications:
Implementations:
Christian Scheideler
Last modified: Wed June 2 2004