Vol. 2, Issue 5, February 2016
Published by: Chitkara University

OpenMP: An excellent API to implement parallel programs based on multithreading

Parallel programming is one of the most important branches of Computer Science. It improves the efficiency and performance of multi-core machines through the concurrent execution of the instructions in a parallel program. In parallel programming, large complex problems are decomposed into smaller parts, and these parts are then executed concurrently.

Thereafter, the results obtained from all the parts are combined to produce the final result. Various APIs have been introduced that make it possible to design efficient parallel programs. OpenMP is one such API, used to design parallel programs for HPC (high-performance computing). OpenMP is an open-source API that uses multithreading to achieve parallelism. It supports a number of programming languages, such as C, C++ and Fortran, on diverse parallel architectures. To parallelize a program, OpenMP constructs are inserted into it in the form of compiler directives. With OpenMP, programs become scalable and adapt easily to execution in a multi-core environment.

There are various models of parallel programming, such as the shared memory model, thread-based parallelism, explicit parallelism and the fork-join model. OpenMP programming is based on the fork-join concept of parallel programming.

Fork-Join Model of OpenMP
For parallelization, OpenMP uses the fork-join model. Every OpenMP program begins as a single process handled by the master thread. This master thread executes serially until the first parallel region construct is reached. The fork-join procedure is shown in the figure below.

  • Fork

The master thread generates a team of child threads, and the instructions that are to be executed in parallel are executed by these child threads concurrently. This branching of the master thread into multiple child threads is called the fork.

  • Join

When the child threads finish their execution at the end of the parallel region, they wait for the other threads to synchronize and then terminate, leaving only the master thread. This termination of the child threads is called the join.

The part of the program that is meant to be executed in parallel is marked with compiler directives, which instruct the compiler to generate the threads before the parallel region is executed. Each thread is identified by an integer thread_ID, which can be obtained by calling the OpenMP function omp_get_thread_num(). The thread_ID of the master thread is 0. At the end of the parallel region, all threads join back into the master thread, which continues executing the program to the end.

Conclusion
OpenMP is a powerful API that can be used to design efficient, high-performance parallel programs based on multithreading. OpenMP constructs can be inserted into existing code as compiler directives, giving the programmer full control to exploit multi-core processing efficiently.


By - Dr Sapna Saxena, Associate Professor, CSE, Chitkara University H.P.

About Technology Connect
The aim of this weekly newsletter is to share with students and faculty the latest developments, technologies and updates in the fields of Electronics and Computer Science, and thereby to promote knowledge sharing. All our readers are welcome to contribute content to Technology Connect; just drop an email to the editor. The first volume of Technology Connect featured 21 issues published between June 2015 and December 2015. This is Volume 2.
Happy Reading!

Disclaimer: The content of this newsletter is contributed by Chitkara University faculty and taken from resources that are believed to be reliable. The content is verified by the editorial team to the best of its accuracy, but the editorial team denies any ownership pertaining to validation of the source and accuracy of the content. The objective of the newsletter is limited to spreading awareness among faculty and students about technology, not to impose on or influence the decisions of individuals.