Batch Processing and Performance

Batch Processing
To take advantage of the parallel processing power of a GPU, all modules render audio in batches; larger batches enable higher throughput. The default batch size is 128, which requires \(\approx\)2.3GB of GPU memory and runs 16200x faster than realtime on a V100. (GPU memory consumption is approximately \(1216 + 8.19 \cdot \text{batch\_size}\) MB, including the torchsynth model itself.)
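As a quick sanity check, the memory formula above can be expressed as a small helper. This function is illustrative only (the name `estimated_gpu_memory_mb` is ours, not part of the torchsynth API):

```python
def estimated_gpu_memory_mb(batch_size: int) -> float:
    """Approximate GPU memory use in MB for a given batch size,
    including the torchsynth model: a fixed ~1216 MB overhead
    plus ~8.19 MB per batch item."""
    return 1216 + 8.19 * batch_size

# The default batch size of 128 lands near the quoted ~2.3 GB figure.
print(f"{estimated_gpu_memory_mb(128) / 1024:.1f} GB")
```

This linear model is useful for picking the largest batch size that fits on a given card before throughput gains flatten out.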