Batch Processing and Performance

Batch Processing

To take advantage of the parallel processing power of a GPU, all modules render audio in batches; larger batches enable higher throughput on a GPU. The default batch size is 128, which requires \(\approx\) 2.3 GB of GPU memory and is 16200x faster than realtime on a V100. (GPU memory consumption is approximately \(1216 + 8.19 \cdot \text{batch\_size}\) MB, including the torchsynth model.)
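A minimal sketch of configuring the batch size in practice, assuming the SynthConfig and Voice names from the torchsynth API (the exact contents of the forward output can differ between versions):

```python
import torch
from torchsynth.config import SynthConfig
from torchsynth.synth import Voice

# Larger batches give higher GPU throughput; memory grows roughly linearly
# with batch_size (about 1216 + 8.19 * batch_size MB, per the text above).
synthconfig = SynthConfig(batch_size=128)
voice = Voice(synthconfig=synthconfig)

# Move the whole synth to the GPU if one is available.
if torch.cuda.is_available():
    voice = voice.to("cuda")

# Render batch number 0: one audio example per batch item (128 here).
# Depending on the torchsynth version, parameters and a train/test flag
# may also be returned alongside the audio.
output = voice(0)
```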

(Figures: GPU speed profiles and GPU memory profiles.)

ADSR Batches

An example batch of 4 randomly generated ADSR envelopes is shown below, followed by a code sketch for producing one:

(Figure: a batch of 4 randomly generated ADSR envelopes.)
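A minimal sketch of generating such a batch, assuming the ADSR module and SynthConfig names from the torchsynth API; the note-on durations and the reproducibility flag below are illustrative assumptions:

```python
import torch
from torchsynth.config import SynthConfig
from torchsynth.module import ADSR

# A small batch of 4; reproducible mode is disabled here because it
# constrains the allowed batch sizes (assumption based on the docs).
synthconfig = SynthConfig(batch_size=4, reproducible=False)
adsr = ADSR(synthconfig)

# One note-on duration (in seconds) per envelope in the batch.
note_on_duration = torch.tensor([0.5, 1.0, 1.5, 2.0])

# Each row of the output is one randomly parameterized ADSR envelope.
envelopes = adsr(note_on_duration)
```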