Introduction to Parallel Programming
1. What is the primary benefit of parallel programming in high-performance computing (HPC)?
- A. Reducing code complexity
- B. Accelerating computations and solving larger problems
- C. Simplifying debugging processes
- D. Improving data security
Click to reveal the answer
Answer: B. Accelerating computations and solving larger problems

2. True or False: GPGPU programming is based on a SIMT architecture, which allows for massive parallel processing with many threads.
Click to reveal the answer
Answer: True

3. Parallel programming models are generally classified based on which criteria?
- A. Memory architecture they target
- B. Programming language used
- C. Processor speed
- D. Compiler type
Click to reveal the answer
Answer: A. Memory architecture they target

4. Which of the following is NOT a parallel programming model?
- A. Shared Memory Programming
- B. Distributed Memory Programming
- C. Object-Oriented Programming
- D. GPGPU Programming
Click to reveal the answer
Answer: C. Object-Oriented Programming

5. True or False: Shared memory programming is similar to GPGPU programming, as both involve multiple threads accessing a shared memory space.
Click to reveal the answer
Answer: True

Shared Memory Programming - OpenMP
1. OpenMP primarily uses which type of parallelism?
- A. Data-level parallelism
- B. Thread-level parallelism
- C. Task-level parallelism
- D. Instruction-level parallelism
Click to reveal the answer
Answer: B. Thread-level parallelism

2. True or False: In OpenMP, the fork-join model involves creating a team of threads at the beginning of a parallel section and then returning to a single thread after the section completes.
Click to reveal the answer
Answer: True
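
To make the fork-join idea concrete, here is a minimal sketch in C (not part of the quiz; the printed messages are illustrative): execution starts on a single thread, a team of threads is forked for the parallel region, and the threads join back into one thread at its end.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    printf("Serial region: one thread\n");          /* before the fork */

    /* fork: a team of threads executes this block concurrently */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }                                               /* join: implicit barrier */

    printf("Serial region again: one thread\n");    /* after the join */
    return 0;
}
```

Built with an OpenMP-capable compiler (e.g. `gcc -fopenmp`), each thread in the team prints its own ID before control returns to the single initial thread.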
3. Which of the following are components of OpenMP?
- A. Compiler directives, runtime libraries, and environmental variables
- B. Data storage, cache management, and memory optimization
- C. Communication protocols, data encryption, and security protocols
- D. Memory mapping, kernel routines, and GPU scheduling
Click to reveal the answer
Answer: A. Compiler directives, runtime libraries, and environmental variables
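
As a rough sketch of how the three components interact (the loop and array are illustrative, not from the quiz): the environmental variable selects the team size, a runtime library routine queries it, and a compiler directive parallelizes the loop.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    double a[1000];

    /* environmental variable: e.g. `export OMP_NUM_THREADS=4` before running */

    /* runtime library routine: query the thread count available to parallel regions */
    printf("Max threads: %d\n", omp_get_max_threads());

    /* compiler directive: split the loop iterations across the thread team */
    #pragma omp parallel for
    for (int i = 0; i < 1000; i++) {
        a[i] = 2.0 * i;
    }

    printf("a[999] = %.1f\n", a[999]);
    return 0;
}
```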
4. In a Uniform Memory Access (UMA) model:
- A. Memory access times vary across processors
- B. Memory access time is consistent for all processors
- C. Each processor has exclusive access to its own memory
- D. Memory access is limited to one processor at a time
Click to reveal the answer
Answer: B. Memory access time is consistent for all processors

5. True or False: Non-Uniform Memory Access (NUMA) allows each processor to access all memory regions with the same latency.
Click to reveal the answer
Answer: False

Distributed Memory Programming Model - Message Passing Interface (MPI)
1. Which memory model is commonly used in supercomputers and clusters where each processor has its own memory?
- A. Shared Memory Model
- B. Distributed Memory Model
- C. Hybrid Memory Model
- D. Direct Access Model
Click to reveal the answer
Answer: B. Distributed Memory Model

2. True or False: MPI is suitable for Single Program Multiple Data (SPMD) applications.
Click to reveal the answer
Answer: True

3. Which MPI function is used for sending data directly from one process to another?
- A. MPI_Bcast
- B. MPI_Send
- C. MPI_Allreduce
- D. MPI_Scatter
Click to reveal the answer
Answer: B. MPI_Send
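
A minimal point-to-point sketch, assuming an MPI installation and a run with at least two ranks (the tag and payload are illustrative): rank 0 sends an integer directly to rank 1, which posts a matching receive.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 0, tag = 0;
    if (rank == 0) {
        value = 42;
        /* send the value directly to rank 1 */
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* matching receive from rank 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Built with `mpicc` and launched with something like `mpirun -np 2 ./a.out`, both processes run the same program (SPMD) and branch on their rank.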
4. In distributed memory programming, processes communicate with each other through:
- A. Shared variables
- B. Network-based message passing
- C. Direct cache access
- D. Thread synchronization
Click to reveal the answer
Answer: B. Network-based message passing

5. True or False: In MPI, the memory of each process is shared across all nodes, allowing direct access by other processes.
Click to reveal the answer
Answer: False

GPU Programming - General-Purpose Computing on GPUs (GPGPU)
1. Which of the following programming models is commonly used for GPGPU programming on NVIDIA GPUs?
- A. OpenMP
- B. MPI
- C. CUDA
- D. POSIX
Click to reveal the answer
Answer: C. CUDA

2. True or False: GPGPU programming requires efficient management of data transfers between CPU and GPU to avoid bottlenecks.
Click to reveal the answer
Answer: True
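
A sketch of the explicit staging this implies, using the CUDA runtime API (buffer size and names are illustrative): data is copied from host (CPU) memory to device (GPU) memory before any GPU work, and results are copied back afterwards. Keeping these transfers few and large is what avoids the bottleneck.

```cuda
#include <stdlib.h>
#include <cuda_runtime.h>

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h_x = (float *)malloc(bytes);   /* host (CPU) buffer   */
    float *d_x = NULL;                     /* device (GPU) buffer */
    for (int i = 0; i < n; i++) h_x[i] = 1.0f;

    cudaMalloc((void **)&d_x, bytes);

    /* host -> device: stage the data on the GPU before launching kernels */
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);

    /* ... kernels operating on d_x would be launched here ... */

    /* device -> host: bring results back for the CPU */
    cudaMemcpy(h_x, d_x, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_x);
    free(h_x);
    return 0;
}
```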
3. In GPGPU programming, what is a kernel?
- A. A function that runs in parallel on the GPU
- B. A data structure for memory management
- C. A debugging tool for GPU code
- D. A process management routine
Click to reveal the answer
Answer: A. A function that runs in parallel on the GPU
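
A minimal kernel sketch in CUDA C (the function and variable names are illustrative, not from the quiz): `__global__` marks a function that executes on the GPU, and the launch creates many threads, each handling one array element.

```cuda
#include <cuda_runtime.h>

/* kernel: run in parallel by every thread in the launch grid */
__global__ void scale(float *x, float alpha, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   /* this thread's global index */
    if (i < n) {
        x[i] = alpha * x[i];
    }
}

int main(void) {
    const int n = 1 << 20;
    float *d_x = NULL;
    cudaMalloc((void **)&d_x, n * sizeof(float));
    cudaMemset(d_x, 0, n * sizeof(float));

    /* launch configuration: enough 256-thread blocks to cover all n elements */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_x, 2.0f, n);

    cudaDeviceSynchronize();   /* wait for the kernel to complete */
    cudaFree(d_x);
    return 0;
}
```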
4. What is the main advantage of GPGPU programming?
- A. Simplified coding
- B. Reduced power consumption
- C. Massive parallel processing capabilities
- D. Enhanced security
Click to reveal the answer
Answer: C. Massive parallel processing capabilities

5. True or False: In GPGPU programming, the CPU and GPU share the same memory space, eliminating the need for data transfer.
Click to reveal the answer
Answer: False