Unified Memory in CUDA
1. What primary benefit does CUDA unified memory offer for memory management between CPU and GPU?
- A. It enables CPU and GPU to share memory without explicit data transfers
- B. It increases GPU memory size
- C. It automatically optimizes code performance
- D. It allows direct access to GPU registers from the CPU
Click to reveal the answer
Answer: A. It enables CPU and GPU to share memory without explicit data transfers

Using cudaMallocManaged() vs __managed__ Variables
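The two allocation styles named in this section's heading can be sketched side by side. This is a minimal illustration, not from the quiz itself: the kernel, variable names, and sizes are made up, and it assumes a device that supports managed memory (compute capability 6.0+ for full demand paging).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Style 1: a file-scope __managed__ variable, accessible from host and device.
__managed__ int counter;

__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] += 1;
        atomicAdd(&counter, 1);
    }
}

int main() {
    const int n = 1 << 10;
    int *data;
    // Style 2: runtime-allocated managed memory via cudaMallocManaged().
    cudaMallocManaged(&data, n * sizeof(int));
    for (int i = 0; i < n; i++) data[i] = i;  // host writes, no cudaMemcpy needed

    counter = 0;
    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();  // wait for the GPU before the host reads again

    printf("counter = %d, data[0] = %d\n", counter, data[0]);
    cudaFree(data);
    return 0;
}
```

Both give a single pointer (or variable) valid on CPU and GPU; `__managed__` fixes the size at compile time, while `cudaMallocManaged()` allocates at runtime.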
2. Which of the following functions allocates unified memory in CUDA?
- A. `cudaMalloc()`
- B. `cudaMemcpy()`
- C. `cudaMallocManaged()`
- D. `cudaFree()`
Click to reveal the answer
Answer: C. `cudaMallocManaged()`

Example: Without Unified Memory
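A minimal sketch of the non-unified workflow this section refers to (the kernel and buffer names are illustrative): the host and device each get their own allocation, and data must be moved with explicit `cudaMemcpy()` calls in both directions.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);

    float *h_x = (float *)malloc(bytes);      // host-side buffer
    for (int i = 0; i < n; i++) h_x[i] = 1.0f;

    float *d_x;
    cudaMalloc(&d_x, bytes);                              // device-only allocation
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);  // explicit copy in

    scale<<<(n + 255) / 256, 256>>>(d_x, n);

    cudaMemcpy(h_x, d_x, bytes, cudaMemcpyDeviceToHost);  // explicit copy out
    printf("h_x[0] = %f\n", h_x[0]);

    cudaFree(d_x);
    free(h_x);
    return 0;
}
```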
3. In a non-unified memory setup, which operation is required to transfer data from CPU to GPU?
- A. `cudaMalloc()`
- B. `cudaMemcpy()`
- C. `cudaDeviceSynchronize()`
- D. `cudaFree()`
Click to reveal the answer
Answer: B. `cudaMemcpy()`

Example: With Unified Memory
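For contrast, the same workflow with unified memory can be sketched as follows (again with illustrative names): a single `cudaMallocManaged()` allocation is visible to both host and device, so the explicit copies disappear.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));  // one pointer for CPU and GPU
    for (int i = 0; i < n; i++) x[i] = 1.0f;   // host initializes directly

    scale<<<(n + 255) / 256, 256>>>(x, n);     // kernel uses the same pointer
    cudaDeviceSynchronize();                   // required before the host reads x again

    printf("x[0] = %f\n", x[0]);               // no cudaMemcpy anywhere
    cudaFree(x);
    return 0;
}
```

Note that `cudaDeviceSynchronize()` is still needed: unified memory removes the copies, not the need to wait for the kernel before the host touches the data.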
4. True or False: When using unified memory, we no longer need to call `cudaMemcpy()` to transfer data between the host and device.
Click to reveal the answer
Answer: True

Prefetching Pageable Memory
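Prefetching can be sketched by extending the unified-memory example above (names and sizes are illustrative): `cudaMemPrefetchAsync()` migrates managed pages to a target processor before they are touched, so the kernel does not stall on demand-paging faults.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *x;
    cudaMallocManaged(&x, bytes);
    for (int i = 0; i < n; i++) x[i] = 1.0f;  // pages now resident on the host

    int device;
    cudaGetDevice(&device);
    cudaMemPrefetchAsync(x, bytes, device);   // migrate to the GPU before the kernel

    scale<<<(n + 255) / 256, 256>>>(x, n);

    cudaMemPrefetchAsync(x, bytes, cudaCpuDeviceId);  // bring pages back for host reads
    cudaDeviceSynchronize();

    printf("x[0] = %f\n", x[0]);
    cudaFree(x);
    return 0;
}
```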
5. What is the purpose of `cudaMemPrefetchAsync()` in CUDA unified memory?
- A. To allocate unified memory
- B. To free unified memory
- C. To move memory to a specific device in advance, reducing page faults
- D. To synchronize memory between CPU and GPU
Click to reveal the answer
Answer: C. To move memory to a specific device in advance, reducing page faults

Benefits of Using Unified Memory
6. Which of the following is NOT a benefit of CUDA unified memory?
- A. Simplified code without explicit memory copy operations
- B. Automatic memory paging between CPU and GPU
- C. Guaranteed best performance in all applications
- D. Easier management of memory resources between CPU and GPU
Click to reveal the answer
Answer: C. Guaranteed best performance in all applications