
torchsynth.config

Global configuration for AbstractSynth and its component SynthModules.

torchsynth.config.BASE_REPRODUCIBLE_BATCH_SIZE = 32

Smallest batch size divisor that is supported for reproducible output. This is because Noise creates deterministic noise batches in advance, for speed.
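
For example, under the assumption that reproducible batch sizes must be a multiple of this constant, a batch size could be validated like this (illustrative sketch; is_reproducible_batch_size is not part of the library):

    from torchsynth.config import BASE_REPRODUCIBLE_BATCH_SIZE

    # Hypothetical helper, assuming reproducible batch sizes must be a
    # multiple of BASE_REPRODUCIBLE_BATCH_SIZE (32).
    def is_reproducible_batch_size(batch_size: int) -> bool:
        return batch_size % BASE_REPRODUCIBLE_BATCH_SIZE == 0

    assert is_reproducible_batch_size(128)       # DEFAULT_BATCH_SIZE
    assert not is_reproducible_batch_size(100)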

torchsynth.config.DEFAULT_BATCH_SIZE = 128

This batch size is a nice trade-off between speed and memory consumption. On a typical GPU this consumes ~2.3GB of memory for the default Voice. Learn more about batch processing.

torchsynth.config.N_BATCHSIZE_FOR_TRAIN_TEST_REPRODUCIBILITY = 1024

If a train/test split is desired, 10% of the samples are marked as test. Because researchers with larger GPUs seek higher throughput with batch size 1024, the first $9 \cdot 1024$ samples are designated as train, the next 1024 samples as test, and so on.
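
As an illustration of that layout, a global sample index can be mapped to the train/test split as follows (hypothetical helper, not part of torchsynth):

    from torchsynth.config import N_BATCHSIZE_FOR_TRAIN_TEST_REPRODUCIBILITY as N

    # Every block of 10 * 1024 samples holds 9 * 1024 train samples
    # followed by 1024 test samples.
    def is_test_sample(sample_idx: int) -> bool:
        return (sample_idx // N) % 10 == 9

    assert not is_test_sample(0)         # inside the first train block
    assert is_test_sample(9 * N)         # start of the first test block
    assert not is_test_sample(10 * N)    # train resumes after the test block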

class torchsynth.config.SynthConfig(batch_size=128, sample_rate=44100, buffer_size_seconds=4.0, control_rate=441, reproducible=True, no_grad=True, debug=False, eps=1e-06)

Bases: object

Any SynthModule and AbstractSynth might use these global configuration values. Every SynthModule in the same AbstractSynth should have the same SynthConfig (see the construction sketch after the parameter list).

Parameters
  • batch_size (int) – Scalar that indicates how many parameter settings there are, i.e. how many different sounds to generate.

  • sample_rate (Optional[int]) – Scalar sample rate for audio generation.

  • buffer_size_seconds – Duration of the output in seconds.

  • control_rate (Optional[int]) – Scalar sample rate for control signal generation.

  • reproducible – Reproducible results, with a small performance impact.

  • no_grad (bool) – Disables gradient computations.

  • debug (bool) – Run slow assertion tests. (Default: False, unless environment variable TORCHSYNTH_DEBUG exists.)

  • eps (float) – Epsilon to avoid log underrun and divide by zero.
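
A construction sketch using the signature above (the Voice usage in the comment assumes torchsynth.synth.Voice accepts a synthconfig keyword argument):

    from torchsynth.config import SynthConfig

    # A smaller batch for quick CPU experiments; reproducible=True keeps the
    # reproducibility checks described in check_for_reproducibility() active.
    config = SynthConfig(
        batch_size=32,
        sample_rate=44100,
        buffer_size_seconds=4.0,
        control_rate=441,
        reproducible=True,
    )

    # Every SynthModule in the same AbstractSynth should share this config,
    # e.g. torchsynth.synth.Voice(synthconfig=config).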

to(device)

For speed, we’ve noticed that it is only helpful to have sample and control rates on device, and as a float.
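
A minimal usage sketch, assuming the config is created on the CPU first and that to() accepts a torch.device:

    import torch
    from torchsynth.config import SynthConfig

    config = SynthConfig(batch_size=128)
    if torch.cuda.is_available():
        # Moves the sample rate and control rate to the GPU as floats,
        # per the note above.
        config.to(torch.device("cuda"))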

torchsynth.config.check_for_reproducibility()

This method is called automatically if your SynthConfig specifies reproducible=True.

Reproducible results are important to torchsynth and synth1B1, so we test that the expected random results are produced by torch.rand when seeded. This raises an error if reproducibility cannot be guaranteed.

Running torch.rand on CPU and GPU gives different results, so all seeded randomization where reproducibility is important occurs on the CPU and is then transferred over to the GPU, if one is being used. See https://discuss.pytorch.org/t/deterministic-prng-across-cpu-cuda/116275

torchcsprng allowed for determinism between the CPU and GPU; however, profiling indicated that torch.rand on the CPU was more efficient. See https://github.com/pytorch/csprng/issues/126
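
The pattern described above looks roughly like this (illustrative sketch, not the library's internal code):

    import torch

    # Seeded random values are generated on the CPU for determinism across
    # machines, then transferred to the GPU if one is in use.
    generator = torch.Generator(device="cpu").manual_seed(42)
    values = torch.rand(1024, generator=generator)   # deterministic on CPU

    device = "cuda" if torch.cuda.is_available() else "cpu"
    values = values.to(device)   # move after generation rather than generating on GPU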
