torch.cuda.caching_allocator_alloc

torch.cuda.caching_allocator_alloc(size, device=None, stream=None)[source]

Perform a memory allocation using the CUDA memory allocator.

Memory is allocated for a given device and stream. This function is intended for interoperability with other frameworks. Allocated memory is released through caching_allocator_delete().

Parameters
  • size (int) – number of bytes to be allocated.

  • device (torch.device or int, optional) – selected device. If it is None, the default CUDA device is used.

  • stream (torch.cuda.Stream or int, optional) – selected stream. If it is None, the default stream for the selected device is used.

Note

See Memory management for more details about GPU memory management.
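Example

A minimal usage sketch: the 1 MiB size and the reliance on the default device and stream are illustrative choices, not requirements. The returned value is an integer device pointer that can be passed to another framework or a custom CUDA extension, and it must be released with caching_allocator_delete().

```python
import torch

if torch.cuda.is_available():
    # Allocate 1 MiB of raw device memory on the current CUDA device
    # via the caching allocator (size chosen for illustration).
    ptr = torch.cuda.caching_allocator_alloc(1024 * 1024)
    try:
        # `ptr` is a plain integer device pointer suitable for interop.
        print(hex(ptr))
    finally:
        # Return the memory to the caching allocator.
        torch.cuda.caching_allocator_delete(ptr)
```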