Implicit Parallel Programming in pH

  • 1st Edition - May 21, 2001
  • Latest edition
  • Authors: Rishiyur Nikhil, Arvind
  • Language: English


Description

Parallel machines are now affordable and available to many users in the form of small symmetric shared-memory multiprocessors (SMPs). Unfortunately, programming practices have not kept pace with this hardware advance. The vast majority of developers still write applications in sequential programming languages that do not exploit multiple processors. The traditional approaches for adding parallelism to applications are prone to introducing new, strange, and difficult-to-eliminate bugs.


In this important new text, the authors offer a completely different vision of the future, where parallel programming is the default and sequential programming is a special case. The foundation of this vision is an implicitly parallel programming language, pH, which is the result of two decades of research by the authors. A dialect and extension of the standard nonstrict, purely functional language Haskell, pH is essentially Haskell with implicitly parallel semantics. pH's extensions to Haskell comprise a disciplined approach to shared parallel state, so that a pH program, even a beginner's program, is implicitly parallel.
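
To make the flavor concrete, here is a minimal sketch in standard GHC Haskell (pH itself is a research dialect and is not shown here; nfib is a classic toy benchmark, not an example from the book). Under pH's implicitly parallel semantics the two recursive calls below would be evaluated in parallel with no annotations at all; in ordinary Haskell one must request parallelism explicitly, for example with `par` and `pseq` from the parallel package.

```haskell
-- A hedged sketch: standard GHC Haskell, not pH. Under pH's implicitly
-- parallel semantics no annotations would be needed; here we ask for
-- parallelism explicitly with `par`/`pseq` from the `parallel` package.
import Control.Parallel (par, pseq)

-- The two subproblems are independent: a nonstrict, purely functional
-- program has no hidden ordering constraints between them, so they may
-- safely be evaluated at the same time.
nfib :: Int -> Int
nfib n
  | n < 2     = 1
  | otherwise = x `par` (y `pseq` x + y + 1)  -- spark x, evaluate y, combine
  where
    x = nfib (n - 1)
    y = nfib (n - 2)

main :: IO ()
main = print (nfib 25)
```

pH's layers for shared parallel state go further than this purely functional core: I-structures are write-once cells on which readers block until a value arrives, and M-structures add take/put synchronization, a design closely analogous to GHC's MVar.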


The authors have developed this text over ten years while teaching implicit parallel programming to graduate students at MIT and specialized short courses to undergraduates and software professionals in the U.S., Japan, and India.

Key features

* Provides a complete treatment of the language, the programming philosophy it embraces, and its theoretical underpinnings.
* Includes many clear yet small examples.
* Features programs, problems, solutions, and a downloadable pH implementation for SMP machines and related software.
* Is designed for students and professionals with a thorough knowledge of a high-level programming language but with no previous experience in parallel programming.

Readership

Programmers and designers of parallel computer systems.

Table of contents

Preface

Chapter 1 From Sequential to Implicit Parallel Programming
1.1 How Sequential Languages Obscure Parallelism
1.1.1 Parallelism in Matrix Multiplication
1.1.2 Matrix Multiplication in a Sequential Language
1.2 How to Achieve Parallel Execution?
1.3 Automatic Parallelization of Sequential Programs
1.4 Data Parallel Programming Languages
1.5 Explicit Parallelism: Shared-Memory Programming with Threads and Locks
1.5.1 Threads
1.5.2 Locks
1.5.3 Higher-Level Constructs for Shared-Memory Programming
1.6 Explicit Parallelism: Message-Passing
1.6.1 Matrix Multiplication with Message-Passing
1.6.2 Communication Issues and Their Effect on Programming
1.6.3 Message-Passing Programming Is Not Easy
1.7 Implicit Parallel Programming: Declarative Programming Languages
1.7.1 Implicit Parallelism Due to Lack of Antidependences
1.7.2 Limitations of Functional Languages
1.7.3 Prominent Functional Languages
1.8 pH: An Implicitly Parallel, Declarative Language
1.8.1 The Functional Layer
1.8.2 The I-structure Layer
1.8.3 The M-structure Layer
1.9 The Effect of Architecture on Programming

Chapter 2 Functions and Reduction
2.1 Basics
2.1.1 Numbers and Identifiers
2.1.2 Function Application
2.1.3 Spaces and Formatting
2.1.4 Comments
2.1.5 Operators
2.2 Higher-Order Functions and Currying
2.3 Data Types
2.3.1 Every Expression Has a Type
2.3.2 Type Correctness and Type Checking
2.4 Conditional Expressions
2.5 Recursion
2.5.1 Integrating a Function
2.5.2 Higher-Order Computation Structures
2.5.3 Syntactic Spice: Infix Operators as Identifiers
2.6 Blocks
2.6.1 The Function integrate, Revisited
2.6.2 Square Roots Using Newton's Method
2.6.3 More Syntactic Spice: "Layout" and the "Offside Rule"
2.7 Static Scoping
2.7.1 Scopes
2.7.2 Free Identifiers
2.7.3 Why Not Dynamic Scoping?
2.7.4 α-Renaming
2.7.5 Scoping in Blocks
2.8 Loops
2.8.1 for Loops
2.9 More on Infix Operators
2.10 Pragmatics: Inlining Function Calls

Chapter 3 Types and Type Checking
3.1 Type Expressions
3.2 Static Type Checking and Type Inference
3.2.1 Type Rule for Functions
3.2.2 Type Rule for Conditionals
3.2.3 Type Rule for Blocks
3.3 Polymorphism
3.3.1 What Does a Polymorphic Type Mean?
3.3.2 Instantiating a Polymorphic Type
3.3.3 Which Identifiers Are Polymorphic?
3.4 Type Classes and Overloading
3.4.1 Some Useful Overloaded Functions
3.4.2 Implementation Considerations of Overloading
3.5 Typeful Programming

Chapter 4 Rewrite Rules, Reduction Strategies, and Parallelism
4.1 Syntax
4.2 Syntactic Desugaring: Translating into the Core
4.3 Answers and Simple Expressions
4.4 Syntactic Equivalences and Renaming
4.5 Rewrite Rules
4.5.1 Constants, Conditionals, and δ Rules
4.5.2 Function Application (β-Reduction)
4.5.3 Instantiation (a.k.a. Substitution)
4.5.4 Block Flattening and Expression Lifting
4.5.5 Canonical Form
4.6 Sharing and Nonstrictness
4.6.1 Argument Sharing
4.6.2 Nonstrictness
4.7 Reduction Strategies: Determinacy and Termination
4.7.1 Determinacy
4.7.2 Termination
4.8 Specific Strategies
4.8.1 Marking Redexes for Lazy Evaluation
4.8.2 Marking Redexes for Call-by-Value Evaluation
4.8.3 Marking Redexes for Parallel Evaluation of Functional pH
4.8.4 Marking Redexes for Parallel Evaluation of Full pH
4.8.5 Reducing Marked Redexes
4.9 Semantics of Loops

Chapter 5 Tuples and Algebraic Product Types
5.1 Tuple Construction
5.2 Tuple Component Selection: Pattern Matching
5.3 Tuple Types
5.3.1 Composition of Transformations
5.3.2 Nested Tuples
5.3.3 Type Synonyms
5.3.4 More Higher-Order Computation Structures
5.4 Algebraic Product Types
5.4.1 Algebraic Types versus Type Synonyms
5.4.2 Polymorphism in Algebraic Types
5.4.3 Syntax of Constructors and Variables
5.4.4 Field Names in Algebraic Types
5.5 Tupled versus Curried Arguments
5.6 Rewrite Rules for Algebraic Types
5.6.1 Abstract Syntax
5.6.2 Static Translation (Syntactic Desugaring)
5.6.3 Rewrite Rule
5.6.4 Nonstrictness
5.7 Abstract Data Types
5.8 Modules and Large Programs

Chapter 6 Lists and Algebraic Sum Types
6.1 Sums of Products
6.1.1 Pattern Matching: Case Expressions
6.1.2 The Maybe Type
6.1.3 Subtleties in Pattern Matching
6.1.4 Enumerated Types (Simple Sums)
6.1.5 Field Names in Multiple Disjuncts
6.2 Lists: A Recursive Algebraic Type
6.2.1 Syntactic Sugar for Lists
6.2.2 Arithmetic Sequences
6.2.3 Recursive Functions on Lists
6.2.4 Sparse Vectors
6.2.5 Sorting a List
6.2.6 A Lexical Analyzer
6.3 Higher-Order Functions on Lists
6.3.1 The Art of Folding Maps
6.3.2 General Linear Recursion
6.3.3 Eliminating Intermediate Lists
6.3.4 Nonstrictness and Pipeline Parallelism
6.4 List Comprehensions
6.4.1 List Comprehensions as a Database Query Language
6.4.2 Syntax and Semantics of List Comprehensions
6.4.3 The Expressive Power of List Comprehensions
6.4.4 Desugaring List Comprehensions
6.5 More Recursive Algebraic Types
6.5.1 Graphs
6.5.2 Binary Trees
6.6 Implications of Nonstrictness
6.6.1 Nonstrictness and Parallelism
6.6.2 Nonstrictness and Cyclic and Infinite Structures
6.7 Semantics of Algebraic Sum Types and Case Expressions

Chapter 7 Arrays: Fast Indexed Data Structures
7.1 Arrays as Caches for Functions
7.2 Arrays with Other Index Types: Overloading
7.3 Multidimensional Arrays
7.4 Nonstrictness of Arrays
7.5 Generalizing Arrays Using Functions
7.5.1 Arrays versus Functions
7.5.2 Generalized Arrays
7.6 Array Comprehensions
7.7 Accumulators
7.8 Parallel Blocked Matrix Multiplication
7.9 LU Decomposition
7.9.1 Gaussian Elimination
7.9.2 Crout's Method
7.9.3 LU Decomposition of Skyline Matrices
7.10 The "Paraffins" Problem
7.10.1 Radicals, Orderings, and Canonical Forms
7.10.2 More Efficient Representations
7.10.3 Generating Radicals
7.10.4 First Attempt
7.10.5 Critique of rads_of_size_n (Blind Carbon Copies)
7.10.6 Second Attempt
7.10.7 Paraffins from Radicals (Molotov Cocktails)
7.10.8 The Solution to the Paraffins Problem

Chapter 8 Sequencing, Input/Output, and Side Effects
8.1 Monadic I/O
8.1.1 Preliminaries: The "Unit" Data Type
8.1.2 The Basic Intuition
8.1.3 Monadic I/O in Haskell and pH
8.1.4 Counting Characters, Words, and Lines in a File
8.1.5 Limitations of Monadic I/O
8.2 Parallel I/O
8.3 Explicit Sequencing
8.3.1 Sequencing and Strictness
8.3.2 Nested Sequential and Parallel Forms, and Scoping
8.3.3 Implicit versus Explicit Sequencing
8.4 Rewrite Rules for Explicit Sequencing (Barriers)
8.4.1 Syntax
8.4.2 Syntactic Equivalence Rules
8.4.3 Rewrite Rules
8.4.4 Properties of Sequentialization
8.5 Updatable Cells
8.5.1 Random Number Generators, Unique Identifier Generators, and So On
8.5.2 Rewrite Rules for Updatable Cells

Chapter 9 I-structures
9.1 A Motivating Example
9.2 I-fields: I-structure Fields in Algebraic Types
9.3 I-structure Arrays
9.3.1 Basic I-array Operations
9.3.2 Expressions as Statements
9.3.3 Run-Time Errors Due to Multiple Definitions
9.3.4 Converting I-arrays into Functional Arrays
9.3.5 Desugaring Functional Arrays into I-arrays
9.3.6 Implementing Array Comprehensions Using I-arrays
9.3.7 map_foldl
9.4 Gaussian Elimination
9.5 I-structure Lists
9.5.1 Tail-Recursive map (Tail Recursion Modulo Cons)
9.5.2 Converting I-lists to Lists
9.5.3 Implementing List Comprehensions with Open Lists
9.6 Semantics of I-structure Arrays, Functional Arrays, and I-fields
9.6.1 Syntax
9.6.2 Desugaring and Rewrite Rules
9.7 Discussion

Chapter 10 M-structures: Mutable Synchronized State
10.1 M-structure Algebraic Types
10.1.1 The mFetch and mStore Operations
10.1.2 The "Examine" and "Replace" Operations
10.2 A Shared Parallel Queue
10.3 M-structure Arrays
10.4 Parallel Histograms
10.4.1 A Solution Using M-structures
10.4.2 A Comparison with Functional Solutions
10.5 M-lists
10.6 Mutable Lists
10.7 Atomicity Issues: Mutable Association Lists
10.8 Parallel Hash Tables
10.9 Parallel Graph Manipulation
10.9.1 A Simple M-structure Solution
10.9.2 An M-structure Solution Allowing Multiple Concurrent Traversals
10.9.3 A Functional Solution
10.9.4 Graphs That Change
10.10 Rewrite Rules for M-structures

Chapter 11 Conclusion
11.1 pH in Context
11.1.1 Parallel, Concurrent, and Distributed Languages
11.1.2 Where Does pH Sit in This Space?
11.2 What Remains for pH to Become a Production Language?
11.2.1 Efficient Implementation of pH
11.2.2 Interoperability and Application Domains
11.3 The Final Word

Appendix A An Introduction to the λ-Calculus
A.1 λ-Notation
A.1.1 Syntax of the Pure λ-Calculus
A.1.2 Free and Bound Variables
A.2 β-Substitution
A.3 The λ-Calculus as a Reduction System
A.4 The Expressive Power of the λ-Calculus
A.4.1 Booleans and Conditionals
A.4.2 The Empty List and Testing for It
A.4.3 Recursion
A.4.4 Integers: Church's Representation
A.5 The Church-Rosser Property and Interpreters for the λ-Calculus
A.6 From the λ-Calculus to Practical Programming Languages

Appendix B Collected Rewrite Rules for pH
B.1 Core Abstract Syntax
B.1.1 Values, Simple Expressions, and Heap Terms
B.2 α-Renaming
B.3 Rewrite Rules
B.3.1 Primitive Functions (δ Rules)
B.3.2 Conditionals
B.3.3 Function Application (β-Reduction)
B.3.4 Instantiation (a.k.a. Substitution)
B.3.5 Block Flattening and Expression Lifting
B.3.6 Sequentialization (Barriers)
B.3.7 I-structure and M-structure Cells
B.3.8 I-structure and M-structure Arrays

References

Index

A marker beside certain entries in the book's table of contents indicates advanced or optional material that may be skipped in the interest of time.

Product details

  • Edition: 1
  • Latest edition
  • Published: May 21, 2001
  • Language: English

About the authors

Rishiyur Nikhil

Rishiyur S. Nikhil is the Director of Software at Sandburst Corporation, where he manages the development of software for hardware synthesis. He has devoted seventeen years to designing and implementing languages and architectures as a researcher at the Cambridge Research Laboratory of DEC and Compaq Computer Corporation and as a Professor of Computer Science and Engineering at MIT.

Affiliations and expertise
Sandburst Corporation

Arvind

Arvind is the Johnson Professor of Computer Science and Engineering at MIT and President and founder of Sandburst Corporation, a chip company that exploits his recent research on high-level specification and description of architectures and protocols using term rewriting systems. He is an IEEE Fellow and a recipient of the Charles Babbage Outstanding Scientist Award.

Affiliations and expertise
Massachusetts Institute of Technology