Parallel and Distributed Computing in Education

The natural world is certainly not organised through a central thread of control. Things happen as the result of the actions and interactions of (unimaginably) large numbers of independent agents, operating at all levels of scale from nuclear to astronomic. Computer systems aiming to be of real use in this real world need to model, at the appropriate level of abstraction, the part of it they are to serve. If that modelling can reflect the natural concurrency in the system, it ought to be much simpler.

Yet, traditionally, concurrent programming is considered to be an advanced and difficult topic - certainly much harder than serial computing, which, therefore, needs to be mastered first. But this tradition is wrong.

At Kent, we have been teaching parallel computing at the undergraduate level for the past ten years. Originally, this was presented to first-year students before they became too set in the ways of serial logic. When this course was expanded into a full unit (about 30 hours of teaching), timetable pressure moved it into the second year. Either way, the material is easy to absorb and, after only 5 hours of teaching, students have no difficulty in grappling with the interactions of 25 threads of control, appreciating and eliminating race hazards and deadlock.
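The race hazards mentioned above take only a few lines to demonstrate. The following is a minimal sketch of our own (not course material): two threads increment a shared counter, once with a bare update and once with the update protected by a monitor.

```java
// Two threads each add 100000 to a shared counter. The bare increment is a
// read-modify-write that is not atomic, so concurrent updates can be lost;
// declaring the method synchronized turns the object into a monitor and
// restores the expected total. (Class and method names are illustrative.)
class Counter {
    private long value = 0;
    void unsafeInc() { value++; }               // race hazard: not atomic
    synchronized void safeInc() { value++; }    // monitor entry: atomic
    long get() { return value; }
}

public class RaceDemo {
    static long run(boolean safe) throws InterruptedException {
        Counter c = new Counter();
        Runnable body = () -> {
            for (int i = 0; i < 100_000; i++) {
                if (safe) c.safeInc(); else c.unsafeInc();
            }
        };
        Thread a = new Thread(body), b = new Thread(body);
        a.start(); b.start();
        a.join(); b.join();                      // join gives a happens-before
        return c.get();                          // edge, so this read is safe
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("unsafe total: " + run(false)); // often < 200000
        System.out.println("safe total:   " + run(true));  // always 200000
    }
}
```

The unsafe total varies from run to run, which is exactly what makes such bugs hard to find by testing and why students are taught to reason them away instead.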

Parallel computing is still an immature discipline with many conflicting cultures. Our success in educating people into successful exploitation of parallel mechanisms comes from focussing upon parallelism as a powerful tool for simplifying the description of systems, rather than simply as a means for improving their performance. One pre-requisite for this is never to start with an existing serial algorithm and say: "OK, parallelise that!". Another is to find a model of concurrency that has a semantics that is compositional - a fancy word for WYSIWYG - since, without that property, combinatorial explosions of complexity (in logic and/or run-time overhead) always get us. That seems to rule out low-level concurrency mechanisms, such as spin-locks, as well as higher-level primitives like semaphores and monitors!
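The non-compositionality complained of above is easy to exhibit. In the hypothetical sketch below (lock names and timing constants are ours), each thread is correct on its own, yet composing them with opposite lock orders produces the classic deadlock; `tryLock` with a timeout stands in for a blocking `lock()` so the demo terminates instead of hanging.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockDemo {
    static final ReentrantLock a = new ReentrantLock();
    static final ReentrantLock b = new ReentrantLock();

    // Take `first`, pause so the other thread can take its own first lock,
    // then try for `second`. Returns false where a plain lock() would deadlock.
    static boolean acquireBoth(ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        first.lock();
        try {
            Thread.sleep(50);
            if (second.tryLock(200, TimeUnit.MILLISECONDS)) {
                second.unlock();
                return true;
            }
            return false;
        } finally {
            first.unlock();
        }
    }

    // ordered == true: both threads take a then b (a global lock order).
    // ordered == false: opposite orders, the classic deadlock recipe.
    static boolean[] runPair(boolean ordered) throws InterruptedException {
        boolean[] ok = new boolean[2];
        Thread t1 = new Thread(() -> {
            try { ok[0] = acquireBoth(a, b); } catch (InterruptedException e) {}
        });
        Thread t2 = new Thread(() -> {
            try { ok[1] = ordered ? acquireBoth(a, b) : acquireBoth(b, a); }
            catch (InterruptedException e) {}
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return ok;
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] bad = runPair(false);   // typically neither gets both locks
        System.out.println("opposite order, both succeed: " + (bad[0] && bad[1]));
        boolean[] good = runPair(true);   // the ordered run always succeeds
        System.out.println("global order, both succeed:   " + (good[0] && good[1]));
    }
}
```

The repair (a global lock ordering) is a whole-system property that no individual component can enforce, which is precisely the kind of non-local reasoning a compositional semantics avoids.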

We have always based our teaching upon Hoare's CSP, in the past as implemented within occam and transputers. We can now run occam on many parallel architectures and we have recently developed JavaPP, a binding of CSP within Java. This talk will argue that the compositional semantics of CSP does make a difference and compare it against Java's monitor methods for some classical problems of concurrency control. We will also make the case for the use of CSP to describe and reason about (irregular) shared-memory parallelism in ways that are as powerful as its traditional use for reactive message-passing systems.
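To give the flavour of the CSP style (the following is our own illustrative sketch, not the JavaPP API), a one-to-one synchronous channel can itself be built on Java's monitor primitives; processes then interact only through `read` and `write`, and compose without sharing state.

```java
// Illustrative sketch, NOT the JavaPP API: a one-to-one channel with CSP
// rendezvous semantics - write() blocks until read() has taken the value.
class One2OneChannel<T> {
    private T slot;
    private boolean full = false;

    public synchronized void write(T value) throws InterruptedException {
        while (full) wait();          // wait for the previous value to be taken
        slot = value;
        full = true;
        notifyAll();
        while (full) wait();          // rendezvous: block until the reader takes it
    }

    public synchronized T read() throws InterruptedException {
        while (!full) wait();
        T value = slot;
        full = false;
        notifyAll();                  // release the blocked writer
        return value;
    }
}

public class ChannelDemo {
    // Run a producer that writes 0..n-1 down the channel; read and sum here.
    static int sumFromProducer(int n) throws InterruptedException {
        One2OneChannel<Integer> ch = new One2OneChannel<>();
        Thread producer = new Thread(() -> {
            try { for (int i = 0; i < n; i++) ch.write(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < n; i++) sum += ch.read();
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("sum = " + sumFromProducer(5));  // prints "sum = 10"
    }
}
```

Note that all the delicate `wait`/`notifyAll` reasoning is buried once inside the channel; user processes never touch a monitor directly, which is the compositional discipline the talk advocates.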