Advantages of parallel over serial computation for route-planning problems, and how increased data size may affect them.

Lab Contact: Dr Timothy G. Griffin
Supervisor: Dr Jagdish J. Modi
Special Resources: None

This should appeal to students interested in the design of parallel algorithms, but does not assume any previous parallel computing expertise.

In the past we have usually assumed that the number of processing elements and the problem size are comparable. Here, however, the emphasis is on a much larger data size handled with the smaller number of processing elements available in practice [3,6].

Students have considered rail and road networks, minimizing a single parameter such as distance or time, and found an algorithm analogous to matrix multiplication to be suitable for parallel computation [1].
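By way of illustration, the following is a minimal sketch (in Python with NumPy; the function names are illustrative, not taken from any past project) of one such matrix-style formulation: treating the weight matrix under the (min, +) semiring, repeated "squaring" yields all-pairs shortest-path lengths, and the computation has exactly the structure of ordinary matrix multiplication [2].

    import numpy as np

    INF = np.inf

    def min_plus_product(A, B):
        # (min, +) matrix product: C[i, j] = min over k of A[i, k] + B[k, j].
        # Structurally the same as ordinary matrix multiplication, which is
        # what makes it a natural candidate for parallel decomposition.
        n = A.shape[0]
        C = np.full((n, n), INF)
        for k in range(n):
            # Broadcasting forms all the sums A[i, k] + B[k, j] for this k.
            C = np.minimum(C, A[:, k:k+1] + B[k:k+1, :])
        return C

    def all_pairs_shortest_paths(W):
        # W[i, j] is the direct edge weight (INF if no edge, 0 on the diagonal).
        # Repeated "squaring" converges after ceil(log2(n-1)) rounds, so every
        # entry of the result is a shortest-path length.
        n = W.shape[0]
        D = W.copy()
        m = 1
        while m < n - 1:
            D = min_plus_product(D, D)
            m *= 2
        return D

    if __name__ == "__main__":
        # Tiny 4-node example; INF marks missing edges.
        W = np.array([[0.0, 3.0, INF, 7.0],
                      [3.0, 0.0, 2.0, INF],
                      [INF, 2.0, 0.0, 1.0],
                      [7.0, INF, 1.0, 0.0]])
        print(all_pairs_shortest_paths(W))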

The main aim of the project is to implement such an algorithm, applying it, for example, to the California road network. If possible, the project should also consider developments such as improving the efficiency of the computation by avoiding large sparse areas of the network so as to minimise data movement, for example by adapting the Cannon or Fox-Otto algorithm [4,5].
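As a rough illustration of the kind of adaptation envisaged, the sketch below simulates the block layout and shift pattern of Cannon's algorithm in a single Python process, with the local (min, +) product standing in for ordinary multiplication; on a real machine each grid cell would be a processing element and each shift a message pass [4,5]. The names and structure are illustrative only. A sparsity-aware local kernel that skips empty blocks would be one way to pursue the idea of avoiding large sparse areas.

    import numpy as np

    INF = np.inf

    def min_plus_block(A, B):
        # Local (min, +) product of two dense blocks.
        C = np.full((A.shape[0], B.shape[1]), INF)
        for k in range(A.shape[1]):
            C = np.minimum(C, A[:, k:k+1] + B[k:k+1, :])
        return C

    def cannon_min_plus(A, B, q):
        # Simulate Cannon's algorithm on a q x q grid of blocks.  Each grid
        # cell (i, j) holds one block of A and one of B; after the initial
        # skew, q rounds of (local product, shift A left, shift B up)
        # accumulate the full (min, +) product.
        n = A.shape[0]
        assert n % q == 0, "matrix size must be a multiple of the grid size"
        b = n // q
        Ab = [[A[i*b:(i+1)*b, j*b:(j+1)*b] for j in range(q)] for i in range(q)]
        Bb = [[B[i*b:(i+1)*b, j*b:(j+1)*b] for j in range(q)] for i in range(q)]
        Cb = [[np.full((b, b), INF) for _ in range(q)] for _ in range(q)]
        # Initial alignment: shift row i of A left by i, column j of B up by j.
        Ab = [[Ab[i][(j + i) % q] for j in range(q)] for i in range(q)]
        Bb = [[Bb[(i + j) % q][j] for j in range(q)] for i in range(q)]
        for _ in range(q):
            for i in range(q):
                for j in range(q):
                    Cb[i][j] = np.minimum(Cb[i][j],
                                          min_plus_block(Ab[i][j], Bb[i][j]))
            # Rotate A blocks one step left along rows, B blocks one step up.
            Ab = [[Ab[i][(j + 1) % q] for j in range(q)] for i in range(q)]
            Bb = [[Bb[(i + 1) % q][j] for j in range(q)] for i in range(q)]
        return np.block(Cb)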

It will then be of interest to examine how the design copes with different ratios of data size to the number of processing elements on a given system, and whether the algorithm needs to change in any fundamental way as the data size grows still further.
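To make that ratio concrete, the following sketch (assuming Python's standard multiprocessing module; not part of the project brief) distributes one (min, +) squaring over p worker processes using a 1-D row-block decomposition, so each worker handles roughly n/p rows. Sending the whole matrix to every worker is precisely the kind of data movement that becomes costly as n grows, which is what motivates the 2-D block schemes above.

    import numpy as np
    from multiprocessing import Pool

    INF = np.inf

    def _strip_min_plus(args):
        # Compute one horizontal strip of the (min, +) square of A.
        A_strip, A = args
        C = np.full(A_strip.shape, INF)
        for k in range(A.shape[0]):
            C = np.minimum(C, A_strip[:, k:k+1] + A[k:k+1, :])
        return C

    def min_plus_square_rowblocks(A, p):
        # Square A under (min, +) using a 1-D row-block decomposition: each
        # of the p workers handles roughly n / p rows, so the ratio of data
        # size to processing elements appears directly as the strip height.
        # Call this from code guarded by `if __name__ == "__main__":` so that
        # worker processes start cleanly on every platform.
        n = A.shape[0]
        bounds = np.linspace(0, n, p + 1, dtype=int)
        strips = [(A[bounds[i]:bounds[i + 1], :], A) for i in range(p)]
        with Pool(p) as pool:
            parts = pool.map(_strip_min_plus, strips)
        return np.vstack(parts)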

The dataset can be built from the California Road Network data available at https://www.cs.utah.edu/~lifeifei/SpatialDataset.htm, which provides the data required for the proposed project.
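A possible starting point, assuming the edge file on that page contains whitespace-separated lines of the form edge_id start_node end_node distance (the parsing should be adjusted if the actual fields differ), is a loader along these lines; note that a dense matrix is only practical for subsets of the network, which is itself part of the project's motivation.

    import numpy as np

    INF = np.inf

    def load_weight_matrix(edge_file, num_nodes):
        # Build a dense (min, +) weight matrix from an edge-list file,
        # assuming each line reads:  edge_id  start_node  end_node  distance
        # (adjust the parsing if the actual fields differ).  A dense matrix
        # is only practical for subsets of the full network; at full scale a
        # sparse or blocked representation is needed.
        W = np.full((num_nodes, num_nodes), INF)
        np.fill_diagonal(W, 0.0)
        with open(edge_file) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 4:
                    continue  # skip blank or malformed lines
                u, v, d = int(fields[1]), int(fields[2]), float(fields[3])
                # Treat road segments as traversable in both directions.
                W[u, v] = min(W[u, v], d)
                W[v, u] = min(W[v, u], d)
        return W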

Extension

If time permits, the implementation could make use of the FPGA execution resources available in the lab, which should significantly reduce computation time. Furthermore, the SIMD nature of the algorithms makes them well suited to the FPGA architecture.

References:

[1] Batty, M. Parallel Route Planning for the London Underground. Diploma in Computer Science dissertation, Computer Laboratory, 2007.

[2] Cormen, T. H., Leiserson, C. E., Rivest, R. L. and Stein, C. Introduction to Algorithms, second edition. MIT Press, 2003.

[3] Modi, J. J. Parallel Algorithms and Matrix Computation. Oxford University Press, 1988.

[4] Grama, A., Gupta, A., Karypis, G. and Kumar, V. Introduction to Parallel Computing, second edition. Pearson Education, 2005.

[5] Akpan, O. Efficient Parallel Implementation of the Fox Algorithm. Computer Science Department, Bowie State University, 2003.