torchsynth.synth¶
SynthModule instances wired together form a modular synthesizer. Voice is our default synthesizer, and is used to generate synth1B1. We build on the pytorch-lightning LightningModule because it makes multi-GPU inference easy. Nonetheless, you can treat each synth as a native torch Module.
- class torchsynth.synth.AbstractSynth(synthconfig=None, *args, **kwargs)¶
Bases: pytorch_lightning.core.lightning.LightningModule
Base class for synthesizers that combine one or more SynthModule to create a full synth architecture.
- Parameters
synthconfig (Optional[SynthConfig]) – Global configuration for this synth and all its component SynthModules. If None is provided, we use our defaults.
- add_synth_modules(modules)¶
Add a set of named child SynthModules to this synth. Registers them with the torch Module so that all parameters are recognized.
- Parameters
modules (List[Tuple[str, SynthModule, Optional[Dict[str, Any]]]]) – A list of SynthModules with their names and any parameters to pass to their constructors.
- forward(batch_idx=None, *args, **kwargs)¶
Wrapper around output, which optionally randomizes the synth's ModuleParameter values in a deterministic way, and optionally disables gradient computations. This all depends on synthconfig.
- Parameters
batch_idx (Optional[int]) – If provided, we generate this batch in a deterministic random way, according to batch_size. If None (default), we just use the current module parameter settings.
- Returns
audio, parameters, is_train as a Tuple:
(batch_size x buffer_size audio tensor,
batch_size x n_parameters [0, 1] parameters tensor,
batch_size Boolean tensor indicating whether each example is train [or test]; None if batch_idx is None)
- freeze_parameters(params)¶
Freeze a set of parameters by passing in tuples of the module and parameter name.
- get_parameters(include_frozen=False)¶
Returns a dictionary of ModuleParameterRange for this synth, keyed on a tuple of the SynthModule name and the parameter name.
- property hyperparameters¶
Returns a dictionary of curve and symmetry hyperparameter values, keyed on a tuple of the module name, parameter name, and hyperparameter name.
- load_hyperparameters(nebula)¶
Load hyperparameters from a JSON file.
- Parameters
nebula (str) – The nebula to load. This can either be the name of a nebula that is included in torchsynth, or the filename of a nebula JSON file to load.
TODO: Add nebula list in docs. See https://github.com/torchsynth/torchsynth/issues/324
- on_post_move_to_device()¶
LightningModule trigger after this Synth has been moved to a different device. Use this to update the device settings of child SynthModules.
- randomize(seed=None)¶
Randomize all parameters.
- set_hyperparameter(hyperparameter, value)¶
Set a hyperparameter. Pass in the module name, parameter name, and hyperparameter to set, along with the value to set it to.
- set_parameters(params, freeze=False)¶
Set various ModuleParameters for this synth.
- test_step(batch, batch_idx)¶
This is boilerplate required by the pytorch-lightning LightningTrainer when calling test.
- unfreeze_all_parameters()¶
Unfreeze all parameters in this synth.
- class torchsynth.synth.Voice(synthconfig=None, nebula='default', *args, **kwargs)¶
Bases: torchsynth.synth.AbstractSynth
The default configuration in torchsynth is the Voice, which is the architecture used in synth1B1. The Voice architecture comprises the following modules: a MonophonicKeyboard, two LFOs, six ADSR envelopes (each LFO module includes two dedicated ADSRs: one for rate modulation and another for amplitude modulation), one SineVCO, one SquareSawVCO, one Noise generator, a VCA, a ModulationMixer and an AudioMixer. Modulation signals generated from control modules (ADSR and LFO) are upsampled to the audio sample rate before being passed to audio-rate modules.
You can find a diagram of Voice in the Synth Architectures documentation.