
CUDA and HIP: Thread Organization, Memory Model, Warps, and Wavefronts

Thread Organization in CUDA and HIP

1. Which variables in CUDA and HIP provide the dimensions of the grid?

  • A. blockDim.x/y/z
  • B. threadIdx.x/y/z
  • C. gridDim.x/y/z
  • D. warpSize
Answer: C. `gridDim.x/y/z`
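
For reference, a minimal CUDA sketch of how these built-ins combine to give each thread a unique global index (the kernel name `fill_index` and the launch sizes are illustrative, not from the quiz):

```cuda
#include <cuda_runtime.h>

// Each thread combines the built-in variables to get a unique global index:
//   gridDim.x  - number of blocks in the grid
//   blockDim.x - number of threads per block
//   blockIdx.x / threadIdx.x - this thread's block and position in it
__global__ void fill_index(int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i;  // guard the last, possibly partial, block
}

int main() {
    const int n = 1000;
    int *d_out;
    cudaMalloc(&d_out, n * sizeof(int));

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // ceil(n / threads) = 4
    fill_index<<<blocks, threads>>>(d_out, n);  // inside the kernel, gridDim.x == 4

    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```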

Memory Model in CUDA and HIP

2. Which type of memory in CUDA and HIP is accessible to all threads across blocks and the host?

  • A. Register
  • B. Shared
  • C. Local
  • D. Global
Answer: D. Global
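
A short sketch of what makes global memory "global": a buffer allocated with `cudaMalloc` is written by threads in different blocks and read back by the host (in HIP the analogous calls are `hipMalloc`/`hipMemcpy`; the kernel name and sizes below are illustrative):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// d_data lives in global memory: every thread in every block can read and
// write it, and the host reaches it through cudaMemcpy.
__global__ void increment(int *d_data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d_data[i] += 1;
}

int main() {
    const int n = 8;
    int h_data[n] = {0};

    int *d_data;
    cudaMalloc(&d_data, n * sizeof(int));  // allocate global memory
    cudaMemcpy(d_data, h_data, n * sizeof(int), cudaMemcpyHostToDevice);

    increment<<<2, 4>>>(d_data, n);  // two different blocks share the same buffer

    cudaMemcpy(h_data, d_data, n * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%d ", h_data[i]);  // prints: 1 1 1 1 1 1 1 1
    printf("\n");

    cudaFree(d_data);
    return 0;
}
```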

Warps in CUDA and Wavefronts in HIP

3. How many threads are in a warp in CUDA?

  • A. 16
  • B. 32
  • C. 64
  • D. 128
Answer: B. 32
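
A small sketch using the built-in `warpSize` variable to compute each thread's lane and warp within a block (the kernel name and launch configuration are illustrative):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// With warpSize == 32, threads 0-31 of a block form warp 0,
// threads 32-63 form warp 1, and so on.
__global__ void show_warps() {
    int lane = threadIdx.x % warpSize;  // position within the warp (0..31)
    int warp = threadIdx.x / warpSize;  // which warp within the block
    if (lane == 0)
        printf("warp %d starts at thread %d\n", warp, threadIdx.x);
}

int main() {
    show_warps<<<1, 128>>>();  // 128 threads -> 4 warps of 32
    cudaDeviceSynchronize();   // flush device-side printf output
    return 0;
}
```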

4. True or False: In HIP, a wavefront consists of 64 threads, while in CUDA, a warp consists of 32 threads.

Answer: True
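
One nuance: 64-wide wavefronts hold for AMD GCN/CDNA GPUs, while newer RDNA GPUs also support a 32-wide mode, so portable code should query the size at runtime rather than hard-code 32 or 64. A minimal sketch with the CUDA runtime (the HIP analogue is `hipGetDeviceProperties`, whose properties struct has the same `warpSize` field):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // properties of device 0
    // prop.warpSize is 32 on NVIDIA GPUs; the HIP analogue
    // (hipGetDeviceProperties) reports 64 on AMD GCN/CDNA hardware.
    printf("warp/wavefront size: %d\n", prop.warpSize);
    return 0;
}
```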

Thread Synchronization

5. Which function ensures all threads within a block reach the same execution point in CUDA and HIP?

  • A. cudaDeviceSynchronize
  • B. __syncthreads
  • C. hipDeviceSynchronize
  • D. printf
Answer: B. `__syncthreads`
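
A classic pattern where the barrier is required: every thread writes one element of a `__shared__` tile, and `__syncthreads()` guarantees those writes are visible before any thread reads. This in-block array reversal is an illustrative sketch (HIP uses the same `__syncthreads()` intrinsic):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Reverse an array within a single 64-thread block using shared memory.
// __syncthreads() guarantees every thread's write to tile[] is finished
// and visible before any thread reads from it.
__global__ void reverse_block(int *d, int n) {
    __shared__ int tile[64];   // sized for this illustrative 64-thread launch
    int i = threadIdx.x;
    tile[i] = d[i];            // every thread writes one element
    __syncthreads();           // barrier: without it, the reads below could race
    d[i] = tile[n - 1 - i];
}

int main() {
    const int n = 64;
    int h[n];
    for (int i = 0; i < n; ++i) h[i] = i;

    int *d;
    cudaMalloc(&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);
    reverse_block<<<1, n>>>(d, n);
    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("h[0] = %d, h[%d] = %d\n", h[0], n - 1, h[n - 1]);  // 63 ... 0
    cudaFree(d);
    return 0;
}
```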

Memory Access and Performance

6. What is a common cause of thread divergence (and the resulting performance loss) in CUDA and HIP?

  • A. Using printf in kernels
  • B. Running all threads in a single warp
  • C. Conditional branching (e.g., if statements) within warps or wavefronts
  • D. Using shared memory exclusively
Answer: C. Conditional branching (e.g., `if` statements) within warps or wavefronts
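
A sketch of the difference (kernel names and sizes are illustrative): in the first kernel, even and odd lanes of the same warp take different branches, so the warp executes both paths serially with half its lanes masked off; in the second, the condition is uniform across each warp, so no divergence occurs:

```cuda
#include <cuda_runtime.h>

// Divergent: even and odd lanes of the same warp take different branches,
// so the hardware runs both paths one after the other, masking off the
// inactive lanes each time.
__global__ void divergent(float *x) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i % 2 == 0)
        x[i] = x[i] * 2.0f;
    else
        x[i] = x[i] + 1.0f;
}

// Uniform: the condition depends only on the warp index, so all 32 lanes
// of a warp agree and each warp executes exactly one path.
__global__ void uniform(float *x) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if ((i / warpSize) % 2 == 0)
        x[i] = x[i] * 2.0f;
    else
        x[i] = x[i] + 1.0f;
}

int main() {
    const int n = 256;
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    divergent<<<1, n>>>(d);  // each warp serializes two paths
    uniform<<<1, n>>>(d);    // each warp takes a single path
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```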