Project for the Advanced Algorithms and Parallel Programming course, Academic Year 2018-2019.
The project uses NVIDIA CUDA to implement the parallel algorithm proposed in *Fast minimum spanning tree for large graphs on the GPU* for solving the minimum spanning tree (MST) problem. The MST problem consists in determining a subset of edges that connects all the nodes of a given graph while minimizing the total cost of the selected edges.
Example |
---|
The MST consists of the edges highlighted in colors other than black. |
Red and blue edges result from the 1st and the 2nd iteration of the algorithm. |
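The algorithm in the paper is a Borůvka-style (recursive contraction) formulation: in every iteration each vertex picks its minimum-weight outgoing edge, the picked edges are added to the MST, and the connected components they form are contracted into supervertices until a single vertex remains. The CUDA sketch below illustrates only the edge-selection step of one such iteration on a toy CSR graph; the kernel name, array layout and launch configuration are illustrative assumptions, not the ones used in the FastMST project.

```cuda
// Sketch: each vertex selects its minimum-weight outgoing edge (one Boruvka step).
// CSR layout and names are hypothetical, not the FastMST implementation.
#include <climits>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void pickMinEdge(const int *rowPtr, const int *colIdx, const int *weight,
                            int *minEdge, int numVertices) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= numVertices) return;

    int best = -1;        // index of the cheapest edge leaving v
    int bestW = INT_MAX;
    for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
        if (weight[e] < bestW) {      // ties could also be broken on colIdx[e]
            bestW = weight[e];
            best = e;
        }
    }
    minEdge[v] = best;    // -1 if v has no outgoing edges
}

int main() {
    // Toy undirected triangle in CSR form: 0-1 (w=4), 0-2 (w=1), 1-2 (w=3).
    const int n = 3;
    int rowPtr[] = {0, 2, 4, 6};
    int colIdx[] = {1, 2, 0, 2, 0, 1};
    int weight[] = {4, 1, 4, 3, 1, 3};

    int *dRow, *dCol, *dW, *dMin;
    cudaMalloc(&dRow, sizeof(rowPtr));
    cudaMalloc(&dCol, sizeof(colIdx));
    cudaMalloc(&dW, sizeof(weight));
    cudaMalloc(&dMin, n * sizeof(int));
    cudaMemcpy(dRow, rowPtr, sizeof(rowPtr), cudaMemcpyHostToDevice);
    cudaMemcpy(dCol, colIdx, sizeof(colIdx), cudaMemcpyHostToDevice);
    cudaMemcpy(dW, weight, sizeof(weight), cudaMemcpyHostToDevice);

    pickMinEdge<<<1, 32>>>(dRow, dCol, dW, dMin, n);

    int minEdge[n];
    cudaMemcpy(minEdge, dMin, n * sizeof(int), cudaMemcpyDeviceToHost);
    for (int v = 0; v < n; ++v)
        printf("vertex %d -> edge index %d\n", v, minEdge[v]);

    cudaFree(dRow); cudaFree(dCol); cudaFree(dW); cudaFree(dMin);
    return 0;
}
```

After the contraction that follows this step, the same selection runs again on the supervertex graph, which is why the example figure above distinguishes the edges picked in the 1st and 2nd iterations.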
The solution was developed using Microsoft Visual Studio 2015 and was tested on both:
- NVIDIA GeForce GT 520 with 1 GB DDR3
- NVIDIA GeForce GTX 1060 with 6 GB GDDR5 (a big thanks to @EmilioImp for this!)
using 9th DIMACS Implementation Challenge graphs and [graphs randomly generated with the Boost R-MAT generator](https://www.boost.org/doc/libs/1_53_0/libs/graph_parallel/doc/html/scalable_rmat_generator.html).
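For reference, the sketch below shows how an R-MAT graph can be generated with Boost; it uses the sequential `boost::rmat_iterator` (the linked page documents the distributed variant), and the sizes and probabilities are placeholder values, not the parameters actually used for testing.

```cpp
// Sketch: generating a random R-MAT graph with Boost (placeholder parameters).
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/rmat_graph_generator.hpp>
#include <boost/random/linear_congruential.hpp>
#include <iostream>

typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS> Graph;
typedef boost::rmat_iterator<boost::minstd_rand, Graph> RMATGen;

int main() {
    boost::minstd_rand gen;
    // 100 vertices, 400 edges; the four R-MAT quadrant probabilities must sum to 1.
    Graph g(RMATGen(gen, 100, 400, 0.57, 0.19, 0.19, 0.05), RMATGen(), 100);
    std::cout << num_vertices(g) << " vertices, " << num_edges(g) << " edges\n";
    return 0;
}
```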
The results were verified against the Boost implementation of Kruskal's minimum spanning tree algorithm, which was taken as the golden model.
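As a hint of what the verification looks like, the sketch below computes a reference MST cost with Boost's Kruskal implementation on a hypothetical toy graph; the project performs the same comparison on the test instances described above.

```cpp
// Sketch: reference MST cost via Boost's Kruskal algorithm (toy graph, not a test instance).
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/kruskal_min_spanning_tree.hpp>
#include <iostream>
#include <vector>

typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS,
                              boost::no_property,
                              boost::property<boost::edge_weight_t, int>> Graph;
typedef boost::graph_traits<Graph>::edge_descriptor Edge;

int main() {
    Graph g(4);
    add_edge(0, 1, 1, g);
    add_edge(1, 2, 2, g);
    add_edge(2, 3, 1, g);
    add_edge(0, 3, 4, g);

    std::vector<Edge> mst;
    kruskal_minimum_spanning_tree(g, std::back_inserter(mst));

    int cost = 0;
    boost::property_map<Graph, boost::edge_weight_t>::type weights = get(boost::edge_weight, g);
    for (const Edge &e : mst) cost += get(weights, e);
    std::cout << "Reference MST cost: " << cost << "\n";   // compare against the GPU result
    return 0;
}
```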
The Visual Studio solution consists of the following projects:
- FastMST, which implements the algorithm proposed in the paper and the main functionality of the graph object (loading from a DIMACS file, adding edges, printing in Graphviz format, etc.); the DIMACS input format is sketched after this list;
- MST, which invokes the algorithm implemented in FastMST to solve the MST problem on a given graph and outputs the MST cost;
- repeatMST, which invokes the algorithm implemented in FastMST to solve the MST problem a given number of times on a given graph;
- verifyMST, which invokes the algorithm implemented in FastMST to solve the MST problem on a given graph and verifies both that the final MST cost coincides with the one produced by the golden model and that the intermediate graphs are correct;
- generateDimacsGraph, which generates a random graph using the Boost library.
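The FastMST loader mentioned above reads graphs in the 9th DIMACS Implementation Challenge format (`c` comment lines, a `p sp <nodes> <arcs>` header and `a <from> <to> <weight>` arc lines, with 1-indexed nodes). The sketch below is a hypothetical, minimal reader for that format into an edge list; the struct and function names are not those of the actual project.

```cpp
// Sketch: minimal DIMACS ".gr" reader (hypothetical names, not the FastMST loader).
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct DimacsEdge { int from, to, weight; };

std::vector<DimacsEdge> loadDimacs(const std::string &path, int &numNodes) {
    std::ifstream in(path);
    std::vector<DimacsEdge> edges;
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == 'c') continue;       // skip comments
        std::istringstream ss(line);
        char tag;
        ss >> tag;
        if (tag == 'p') {                                    // "p sp <nodes> <arcs>"
            std::string fmt; int numArcs;
            ss >> fmt >> numNodes >> numArcs;
            edges.reserve(numArcs);
        } else if (tag == 'a') {                             // "a <from> <to> <weight>", 1-indexed
            DimacsEdge e;
            ss >> e.from >> e.to >> e.weight;
            edges.push_back(e);
        }
    }
    return edges;
}

int main(int argc, char **argv) {
    if (argc < 2) { std::cerr << "usage: reader <file.gr>\n"; return 1; }
    int numNodes = 0;
    std::vector<DimacsEdge> edges = loadDimacs(argv[1], numNodes);
    std::cout << numNodes << " nodes, " << edges.size() << " arcs\n";
    return 0;
}
```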
The Results directory contains the outputs of the runs and the final presentation.