Introduction
This page is a collection of unordered notes related to the practice of sound processing and music making with SuperCollider.
I’m far from being an expert in SuperCollider and audio programming: this information is not 100% fact-checked by real DSP specialists, but I hope these notes will help some other confused newcomers desperately befuddled about why their patches don’t work the way they expect.
There is a lot I didn’t document while writing patches for SC: for example, I ended up writing my own mixer class because I was frustrated by the lack of an easy way to fade a synth or group in and out of the main audio bus. I barely scratched the surface, though.
It’s another permanent work-in-progress page and I will be updating it from time to time with new tidbits and typos.
Out and ReplaceOut: A warning
Beware of ReplaceOut. A hi-hat synth wrongly using this ugen instead of a simple Out replaced the sound of a kick synth when the decay was longer, creating a very weird offbeat I thought was related to the track and mixer classes I was working on.
‘if’ in a SynthDef => ERROR: Non Boolean in test
if is a client-side construct. It won’t work with UGens, which are server objects, and trying to branch on their outputs throws this error: ERROR: Non Boolean in test.
- This boils down to computation in a SynthDef just being a description for things that will be executed later.
- Alternatives for branching and boolean logic: Select, triggers, And, Or, Not, Xor.
- Using Select implies using the boolean value as an index of 0 or 1 into an array of signals. The first value in this array is therefore the “false” one.
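A minimal sketch of the Select approach (my own example, not from any particular patch): the comparison outputs 0 or 1, and that value indexes the array, so index 0 is the “false” branch.
(
{
	var selector = SinOsc.kr(0.5) > 0; // 1 while the LFO is positive, 0 otherwise
	Select.ar(selector, [
		Pulse.ar(220, 0.5, 0.1), // index 0: the "false" branch
		SinOsc.ar(440, 0, 0.1)   // index 1: the "true" branch
	]) ! 2;
}.play;
)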
list audio devices and .do syntactic sugar
x = ServerOptions.devices;
x.do(_.postln);
Here the audio devices are listed, and the _ in this .do call is syntactic sugar referring to the current item.
include and load SynthDef
// load common synthdefs
~synths = PathName(
Platform.userHomeDir ++ "/Production/SuperCollider/synthdefs"
).entries;
~synths.do({|item| item.fullPath.load});
Layering through multichannel expansion
Passing an array of values to an envelope used to shape pitch or amplitude is a way to leverage multichannel expansion and layer signals of different pitch or volume in a concise manner:
(
{
var sig = SinOsc.ar(440 * [1.02, 1.5, 1, 0.5]);
sig = sig * Env.perc(
[0.4, 0.001, 0.1, 2],
[1, 0.8, 1.2, 0.4, 3] * 2).ar;
Splay.ar(sig, 0.2) / 4;
}.play;
)
Pbind: grouping note information
It is possible to group several pieces of information about a note in a Pbind, like pitch, decay and duration, to avoid handling each parameter separately and constantly checking that each set of parameters has the right number of values and matches the right note:
~notes = [Pseq([[56, 1/2], [56, 1/3], [58, 1] ...), ...];
(
Pbind(
[\midinote, \dur], Pseq(~notes),
).play(TempoClock(120/60));
)
Here, pitch and duration information are transmitted as an array.
Hasher.ar(Sweep.ar)
Nathan Ho uses this trick to produce a deterministic burst of noise for kicks:
- Hasher produces a bipolar output unique to its input signal and will always produce the same output for the same input.
- Sweep produces a linear ramp measured in seconds.
- Hashing the second with the first guarantees the same one-second burst of noise gets added to the kick every time, giving it a “sampler” feel.
- Using a WhiteNoise ugen instead gives a more “analog”, less synthetic feel to the kick.
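A hedged sketch of the idea (values and structure are mine, not Nathan Ho’s actual patch): a short deterministic noise burst layered on a very simple kick body.
(
{
	var ampEnv = Env.perc(0.001, 0.3, curve: -6).ar(doneAction: 2);
	var body = SinOsc.ar(55); // static body, just for illustration
	// Hasher maps each Sweep value to a fixed pseudo-random value,
	// so this burst is identical on every trigger
	var noise = Hasher.ar(Sweep.ar) * Env.perc(0.001, 0.03).ar * 0.3;
	((body + noise) * ampEnv * 0.5) ! 2;
}.play;
)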
Why multiply the pitch envelope when making a kick?
Why write { 1 + (8 * Env.perc(0.001, 0.13, curve: -8).ar) }.plot; and not just { Env.perc(0.001, 0.13, curve: -8).ar }.plot;?
- Multiplying the oscillator frequency with just an envelope will produce a signal that drops way too fast to 0.
- The order of operations is not exactly like conventional mathematics: the same expression without parentheses will produce a different signal. We don’t manipulate numbers, at least not only numbers.
- We modulate frequency. A strong initial value before the sharp decrease is needed for the transient to pop out. Hence the multiplier.
- Adding 1 to the multiplied envelope guarantees the final frequency of the signal is not 0 but the base frequency we modulate:
sig = SinOsc.ar(\freq.kr(50) * envPitch) * envAmp;
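For context, a minimal kick SynthDef around that line (a sketch; the parameter values are mine):
(
SynthDef(\kick, {
	var envPitch = 1 + (8 * Env.perc(0.001, 0.13, curve: -8).ar);
	var envAmp = Env.perc(0.001, 0.4).ar(doneAction: 2);
	var sig = SinOsc.ar(\freq.kr(50) * envPitch) * envAmp;
	Out.ar(\out.kr(0), sig ! 2);
}).add;
)
Synth(\kick); // evaluate once the SynthDef has been added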
Distortion
With boundaries (or thresholds) referring to the minimum and maximum amplitude allowed for a sample:
- clipping flattens the waveform at the defined boundaries. Every sample outside the boundaries takes the value of the boundary it goes past.
- folding “reflects” the samples that fall outside the boundaries: the difference between the out-of-bound value and the boundary is folded back inside the range.
- wrapping treats the boundaries like the edges of a plane and translates out-of-bound values back to “the other side” (the opposite boundary).
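A quick way to see the three behaviours (my own check), using the built-in clip2, fold2 and wrap2 methods with a boundary of 0.5 (evaluate one line at a time):
{ SinOsc.ar(100).clip2(0.5) }.plot(0.02); // flattened at +/- 0.5
{ SinOsc.ar(100).fold2(0.5) }.plot(0.02); // excess reflected back inside
{ SinOsc.ar(100).wrap2(0.5) }.plot(0.02); // jumps to the opposite boundary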
reciprocal quality, compensation scalar for (bandpass) filter
In SuperCollider, the quality factor of a filter is expressed as the reciprocal quality (rq), i.e. bandwidth / center frequency. See the quote below. The ratio can also be called the damping ratio. In the case of a bandpass filter, Q describes the selectivity of the filter.
To express it in layman’s terms, for a BPF, the lower the reciprocal quality, the narrower the band of unfiltered frequencies will be. This also implies a drop in amplitude that has to be compensated.
Quote from E. Fieldsteel’s book:
For example, when applying a band-pass filter to broadband noise, rq values close to zero will drastically reduce the amplitude. In this specific case, a sensible starting point for a compensation scalar is the reciprocal of the square root of rq.
Which means: amplitude = 1 / rq.sqrt.
rq = bandwidth / center frequency
sqrt(rq) = rq^(0.5)
Why this formula? It’s pretty shrimple:
- The reciprocal is the ratio 1/x.
- The square root of the rq is the divisor.
- The smaller the divisor, the bigger the resulting multiplier.
- For an rq ranging from 0.01 to 1, the formula produces an amplitude multiplier ranging from 10 down to 1.
- The drop in level can reach around -10 dB? A multiplier of 10 scales the signal back closer to its original amplitude.
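A rough check of the compensation scalar (my own sketch): narrow the band, then scale by 1/sqrt(rq) to bring the level back up.
(
{
	var rq = \rq.kr(0.05);
	BPF.ar(WhiteNoise.ar(0.5), 1000, rq) * rq.sqrt.reciprocal ! 2;
}.play;
)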
In the case of most bandpass filters (BPF included, AFAIK), filter quality is
equal to center frequency divided by bandwidth. In the case of SC, we invert
the values: reciprocal quality (rq) is equal to bandwidth divided by center
frequency. So a bandpass filter with an rq of 0.1 and a center frequency of
1000 has a bandwidth of 1000Hz * 0.1 = 100Hz. So the half power points of
the filter are at 950Hz and 1050Hz (a half power point is a point on the
frequency response curve where the output signal has fallen by -3dB relative
to the input signal).
Eli
References:
- Hank Zumbahlen, F0 and Q in Filters
- https://www.analog.com/media/en/training-seminars/tutorials/mt-210.pdf
t_gate and other t* parameters (trigger control)
Arguments that begin with "t_" (e.g. t_trig), or that are specified as \tr in the def's rates argument (see below), will be made as a TrigControl. Setting the argument will create a control-rate impulse at the set value. This is useful for triggers.
Self-explanatory, but the TrigControl class is undocumented.
http://doc.sccode.org/Classes/SynthDef.html
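A small sketch of a t_ argument (mine, not from the documentation):
(
SynthDef(\ping, {
	arg out = 0, t_gate = 1;
	var env = EnvGen.kr(Env.perc(0.01, 0.5), t_gate);
	Out.ar(out, SinOsc.ar(660) * env * 0.2 ! 2);
}).add;
)
x = Synth(\ping);
x.set(\t_gate, 1); // sends a one-control-period impulse, retriggering the envelope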
arrow notation (->)
This notation is simply used to create an association between two objects. It’s an undocumented instance method from the Object class. I understand it creates an Association that can then be accessed with the key and value methods.
a = [\x -> 700, \y -> 200, \z -> 900];
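For instance (my own check), the elements can then be read back with key and value:
a[0].key;   // -> \x
a[0].value; // -> 700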
boolean expression: language vs server
Client-side, a boolean expression returns true or false, but on the server it means a signal with a value of either 1 or 0, and thus can be treated as any other signal (for example, as a multiplier for amplitude).
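A small illustration (mine): the comparison below outputs a 0/1 control signal that gates the oscillator on and off twice per second.
{ SinOsc.ar(440, 0, 0.2) * (LFSaw.kr(2) > 0) ! 2 }.play;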
filter bands through multichannel expansion
An array of filtered signals generated from a range, then mixed down to stereo with Splay:
sig = Splay.ar(BPF.ar(sig, (1..10).linlin(1, 10, 25, 800)), 0.1);
Patterns, Pbind, Events
- Patterns are blueprints. They describe behavior. Streams follow the plan laid out by the patterns.
- Getting results from a pattern means transforming it into a stream.
- Pbind is a way to give names to the values produced by the different pattern types.
- A Pbind stream produces Events, a specialized kind of Dictionary.
- The names bound to the Pbind sub-patterns are (or can be) passed as key/value pairs to create a new Event.
- The Event prototype defines a set of default values, including a synth, so that playing an empty event (().play) will still produce a sound.
- In practice, the Pbind key/value pairs can be passed as parameters to a (custom) synth to produce sound when the Event is played.
- Calling play on a Pbind is like transforming it into a stream (asStream) and calling next indefinitely to generate events. I guess?
- There are different kinds of events.
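What calling play roughly amounts to (my own illustration): turn the Pbind into a stream and pull Events out of it one by one.
(
var stream = Pbind(\degree, Pseq([0, 2, 4]), \dur, 0.25).asStream;
stream.next(Event.new).postln; // roughly: ( 'degree': 0, 'dur': 0.25 )
stream.next(Event.new).postln; // roughly: ( 'degree': 2, 'dur': 0.25 )
)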
References:
- https://scsynth.org/t/noob-addressing-synth-created-by-pbind-outside-of-pbind/4806/6
- http://doc.sccode.org/Tutorials/A-Practical-Guide/PG_08_Event_Types_and_Parameters.html
Buffers
Quick facts
- Buffer is a client-side abstraction for a server-side buffer.
- Buffers are arrays holding 32-bit floating point numbers.
- Buffers are not freed when pressing cmd+period.
- They should not be created inside a SynthDef, except for LocalBuf.
- Most operations on buffers are asynchronous.
- freeAll does, in fact, free all the buffers, provided the buffer numbers aren’t set manually.
Buffer allocation and memory
If I allocate a buffer, assign it to a variable b, then allocate another one for the same variable without freeing the first, a new buffer is stored on the server and associated with a new buffer number. The previous one is not reused. To say it another way, evaluating b = Buffer.read(...) several times without freeing the buffer first with b.free will create a new buffer, and an associated buffer number, in memory each time.
b = Buffer.read(s, "/home/user/Production/Samples/maite/LA.wav");
// -> Buffer(0, nil, nil, nil, /home/user/Production/Samples/maite/LA.wav)
b = Buffer.read(s, "/home/user/Production/Samples/maite/mais_cest_tres_joli.wav");
// -> Buffer(1, nil, nil, nil, /home/user/Production/Samples/maite/mais_cest_tres_joli.wav)
b.free;
// -> Buffer(nil, nil, nil, nil, nil)
// buffer 0 still exists in memory:
b = Buffer.read(s, "/home/user/Production/Samples/maite/et_cest_bon.wav");
// -> Buffer(1, nil, nil, nil, /home/user/Production/Samples/maite/et_cest_bon.wav)
// Mr Clean
Buffer.freeAll();
Notes on a pulsar synthesis patch
https://nathan.ho.name/posts/pulsar-synthesis/
A few things to note here that are unrelated to the actual subject (pulsar synthesis):
- linlin and linexp everywhere. Do audio ugens actually implement this method? Actually all ugens, according to the doc?
- Multichannel expansion occurs when randomLFOs (an array?) is created.
- The use of flop implies the data at this stage is a 2D array, and it is. The first array is randomLFOs, then the signal is multiplied by this same array again (so array * array). flop inverts rows and columns.
- When using Array.do or Array.collect, the index argument is optional.
- You can multiply by a boolean expression => * (pulsaretPhase < 1)?
Pbind : midinote, subarray, Event
- I should use \midinote instead of translating frequency from hertz to midi with a collect like a dumass, unless there is a specific reason to do so.
- One can group parameters into subarrays to associate, for example, pitch and duration information for each note instead of two separate sequences:
Pbind(
\instrument, \dub,
[\midinote, \dur], Pseq([[72, 1], [76, 0.5], ...]),
...
)
- A different approach can be used with Event objects, where a Pseq contains an event embedding all the information for each note played in the sequence:
Pseq([
(instrument: \piano, midinote: 72, dur: 1)
...
])
glissando / legato
This can be implemented using the Lag and VarLag ugens and the associated convenience methods lag and varlag. They should be used on control values, not audio.
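A small glide sketch (mine): lag the frequency control so that set messages slide over 0.3 seconds instead of jumping.
x = { SinOsc.ar(\freq.kr(220).lag(0.3), 0, 0.2) ! 2 }.play;
x.set(\freq, 440); // glides up instead of stepping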
Reference: Eli Fieldsteel - SuperCollider Mini Tutorial: 3. Lag UGens
On events
- A default SynthDef is loaded when the server is booted and is used by the Event class.
- The db attribute translates to amp and thus can be understood by synthdefs with this parameter.
- Frequency can be specified using any of the pitch specifications (degree, note, midinote, freq, etc).
- Events automatically close the gate of a gated synth.
- playing a Pbind returns an EventStreamPlayer.
platform, default sound, buffer
- The Platform class provides platform-specific values, such as a resourceDir, including default sounds to use for sample-based experiments.
- Buffers have a unique id that can be retrieved with .bufnum.
- An Event can be used as a key/value structure to store buffers and associate names with them.
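One way to do that last point (a sketch), using the sample that ships with SuperCollider:
(
~bufs = ();
~bufs[\default] = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
)
~bufs[\default].bufnum; // the unique id of that buffer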
sum and largest amplitude value
To keep the sum of several signals within bounds, the largest amplitude for each signal can be the reciprocal of the number of waveforms (1/<total waveforms>).
Env and EnvGen
- EnvGen is server-side, Env is language-side.
- Envs are plot-able.
- If generated with new, levels, times, and curves arrays are to be provided (times and curves hold one fewer element than levels).
- Some symbols can be passed to set the shape of a curve: \sin, \exp or \lin.
Session: Eli Fieldsteel, intro to delays - week 6 spring 2021 mus 499c
- A delay is basically a buffer of a certain duration, played back a certain amount of time later. Really makes you think.
- Delay[C|N|L] => interpolation type (cubic, none, linear).
- Allpass => delay with feedback.
- The blend method => crossfade with another signal. Can be used for the dry/wet mix when using a delay.
- collect returns an array of size n filled with objects generated by evaluating the given function. In the video, he uses collect to call 20 delay synths to build some kind of multitap delay.
- linlin is a range-mapping method: it maps a range of values to another range of values. Same for linexp and friends.
- Dynamic delay line? => Don’t use delays without interpolation (the *N ones).
- A flanger can be created by simply using a sine oscillator modulating the delay time (here with a range of 1/1000 sec to 1/100 sec); see the sketch after this list.
- MultiTap is a built-in multitap delay ugen.
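A minimal flanger sketch along those lines (the values are mine): a sine LFO sweeps the delay time between 1 ms and 10 ms.
(
{
	var sig = Saw.ar(110, 0.2);
	var delayTime = SinOsc.kr(0.2).range(0.001, 0.01);
	(sig + DelayC.ar(sig, 0.02, delayTime)) * 0.5 ! 2;
}.play;
)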
Why can’t I pass an array as a synthdef argument?
unfortunately synthdef structure must be fixed at compilation so you can’t have number of elements in array of signals as an argument sadly.
But it is still possible to pass an array of numbers to a ugen function, provided the size remains constant and the default value is declared as a literal array with the hash symbol #.
Thus, if I want to use an array of frequencies to play chords with the ugen, I can do:
[130, 196, 260].do({ arg freq; Synth(\stab, [\freq, freq]) });
or, in the synthdef:
...
arg out=0, freq=#[130, 196, 260];
...
Which is much less verbose, but assumes the input is always an integer array of size 3. Although I can pass [120, 0, 0] if I just want a monophonic synth.
References:
- FIELDSTEEL 2024
- https://scsynth.org/t/array-of-signals-in-a-synthdef/3845
- https://sc-users.bham.ac.narkive.com/sx4aovrq/problem-with-passing-array-args-to-a-synth
NamedControls (or what does \out.kr mean?)
- NamedControl is a (kinda poorly documented?) way to write arguments in a SynthDef.
"An alternate method for writing arguments in synthdefs was quietly introduced in 2008 (sc 3.3?). I call it the "NamedControl style"" (Nathan Ho)
- This would explain why it’s in every composition found online but described nowhere.
- NamedControl has a page in the documentation, but if you don’t know it’s called like that, good luck.
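The two styles side by side (my own minimal example):
(
SynthDef(\argStyle, { arg out = 0, freq = 440;
	Out.ar(out, SinOsc.ar(freq, 0, 0.1) ! 2);
}).add;

SynthDef(\namedControlStyle, {
	Out.ar(\out.kr(0), SinOsc.ar(\freq.kr(440), 0, 0.1) ! 2);
}).add;
)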
How to pass a ugen as an argument to a SynthDef?
Basically, you can’t. Either use a bus, or an extension like JITLib.
With the first method, the source of the modulation is declared as a SynthDef, and its output is passed through a bus using map after declaring the synth. Otherwise:
^^ the preceding error dump is for ERROR: can't set a control to a UGen
Bus, order of execution
- Bus is simply a language-side construct to keep a reference to an audio or control bus. Referencing a bus by its number is tedious, unclear and makes it difficult for a complex patch to evolve.
- Order of execution is crucial in signal processing. By default, synths are added to the head of the default group. When instantiating a synth, the target and add action can be specified to change the order of execution for a given synth.
- Groups are a special kind of node acting as a collection of other nodes. Group is the client-side representation of a group. They are useful to control several synths at once and manage the order of execution in a more granular way.
- Order of execution can be specified when instantiating a synth with the tail and head methods.
- It can help to visualize the order of execution as a linked list.
Scales, random notes, conversion
- Scale is a specialized class to generate pitch information.
- Use Scale.directory to list all available scales (a lot).
- degreeToFreq is used to, well, convert a degree to a frequency.
- freq.midicps is the root note in Hz, where freq is an integer matching a MIDI note number (like 120)?
- 1 (the last arg) is the octave. Negative octaves are allowed.
s.bind({
	Synth.new(\voice,
		[
			\freq, Scale.minor(\just).degreeToFreq(Scale.minor.degrees.choose, freq.midicps, 1),
			\maxrelease, waittime,
			\out, outbus.index
		]
	)
});
You can perform the same kind of operation server-side with DegreeToKey. TChoose can also be used in conjunction with a randomness provider like Dust to add variation in frequency:
TChoose.kr(Dust.ar(3), [25, 27, 30, 35, 38]).midicps * 2;
My synth doesn’t release despite the doneaction
Why doesn’t my synth release despite the doneAction on the envelope generator?
First: EnvGen expects a gate parameter, which is 1.0 by default. Meaning, without modulation of this parameter, be it by another ugen or an external signal, the gate is forever open and the synth always on.
- The envelope will release if the gate input is 0 or less.
- Now the tricky part: depending on the envelope, releasing it may or may not depend on the presence of a gate signal:
  - Sustained envelopes have a non-nil release node. To release the signal, the gate has to be set to 0 or less at some point. Examples: Env.adsr, Env.asr.
  - Timed envelopes don’t have a release node and so can finish with a gate > 0.
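Both cases in miniature (my own example):
x = { SinOsc.ar(440, 0, 0.2) * EnvGen.kr(Env.adsr, \gate.kr(1), doneAction: 2) ! 2 }.play;
x.set(\gate, 0); // sustained envelope: it only releases (and frees) once the gate closes

{ SinOsc.ar(440, 0, 0.2) * EnvGen.kr(Env.perc(0.01, 1), doneAction: 2) ! 2 }.play;
// timed envelope: frees itself after about a second, no gate needed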
Demand ugens
Demand ugens seem to be poorly documented yet commonly used in compositions I found online.
- Demand rate is a thing, alongside control and audio rate.
- Patterns are language-side and therefore can’t be used in a synthdef.
- Demand ugens produce pattern-like behaviors that can be incorporated into a ugen function. See Demand, Dseq, TDuty, Dconst, Dwhite, etc. Triggers are used in the demand ugen to cue a 'demand' for a new value from the attached specialist demand rate ugens (which all begin with d and have names analogous to patterns).
"Demand ugens are generators that run at control rate but generate a new value only when triggered ("at demand")" (Iannis Zannos)
"Whenever there is a transition in the trigger signal from 0 to 1 the demand ugen will produce a new value which it obtains from the value generator." (same)
- Describing Demand ugens as server-side generators yielding the next value on trigger makes much more sense to me.
- Example:
{ Demand.kr(Dust.kr(2), 0, Drand([2, 4, 6, 8, 12], inf)).poll(5) }.play;
by design, a reset trigger only resets the demand ugens; it does not reset the value at demand's output.
References:
- Analogue Modelling Tips and Tricks
- Thor Magnusson - Musical Patterns on SC Server
- Zannos Iannis, A very step-by-step guide to SuperCollider, 2005
formant, lfo, and trigger random generator
- The Formant ugen can be used for formant synthesis.
- There is no LFO ugen because it can be reproduced with any oscillator running at control rate. Example: SinOsc.kr(0.01).exprange(220, 230);
- TIRand.ar(1.5, 5, trigger) generates a random number on each received trigger. Here, with Dust.kr(0.8) as the trigger.
Resources:
- https://doc.sccode.org/classes/formant.html
- https://composerprogrammer.com/teaching/supercollider/sctutorial/12.2%20singing%20voice%20synthesis.html
Routine and “osc pre-emption” (latency)
Using routines directly instead of patterns means one would have to care about what is described as “osc pre-emption”.
SuperCollider client and server communicate using OSC. Bundled messages can contain a parameter indicating the exact time at which a message should be executed. Messages can be sent in advance so that the timing is more accurate. A bare Synth.new doesn’t add a time tag to the message. The default event type of the pattern system uses s.makeBundle by default to set a latency. Routines do not, hence the need to bind the synth to a server when using a routine, unless working with real-time input. bind is short for s.makeBundle(s.latency, { ... }). Without this binding operation, the timing can be inaccurate and produce wonky results.
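In practice (my own sketch), inside a Routine:
(
Routine({
	4.do {
		s.bind { Synth(\default, [\freq, [220, 330, 440].choose]) };
		0.25.wait;
	};
}).play;
)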
Using the OffsetOut ugen rather than Out ensures that the scheduled start position of a synth leads to an accurate sample start position within a control period.
References:
- https://scsynth.org/t/why-you-should-always-wrap-synth-and-synth-set-in-server-default-bind/7310
- https://nathan.ho.name/posts/supercollider-beginner-advice/
- https://composerprogrammer.com/teaching/supercollider/sctutorial/8.1%20precise%20timing%20in%20sc.html
Delay, lifespan and DoneAction
A sound source’s duration is limited by the lifespan of its shortest doneAction. A sound source with a delay will not complete if a ugen (say, the envelope) has a doneAction, because the synth will be freed before the end of the delay. To circumvent this crudely, one can use a separate terminating ugen (e.g. Line) with a duration at least equal to the one needed for the sound processing to complete. In other words, freeing the synth is offloaded to a dedicated ugen separate from the sound processing chain. Or, more sensibly, effects processing can be done on a bus (FIELDSTEEL 2024). DetectSilence can also be used as a terminating ugen.
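A crude sketch of the separate terminating ugen idea (values are mine): the envelope no longer frees the synth; a silent Line lasting long enough for the delay tail does.
(
{
	var env = Env.perc(0.01, 0.2).ar; // no doneAction here
	var sig = SinOsc.ar(880) * env * 0.2;
	var wet = CombC.ar(sig, 0.5, 0.25, 3); // roughly 3 seconds of delay tail
	Line.kr(dur: 3.5, doneAction: 2); // frees the synth after the tail
	(sig + wet) ! 2;
}.play;
)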
Control signals
What is designated as control signals refers to ugen outputs not meant to be processed as audio but rather used as modulation: envelopes, LFOs, etc. Control signals have dedicated control busses. Audio signals can be written to both control and audio busses.
Plotting envelopes
Env.adsr(0.01, 0.2, 0, 0.1, 1, -4).test(2).plot. (Using objects like TRand won’t work?)
Analysing the output of a ugen with a plot
It’s very shrimple: Env.linen(0.2, 4, 0.3).plot. Calling plot will draw a graph with a curve representing the output. This method is not implemented by every object, but works on things like Env and Function.
Example: { {|i| SinOsc.ar(1 + i.midicps)}.dup(7) }.plot(1);
ugens and methods used in Synthdef’s “drone metal” video
ugens:
- Pluck is a Karplus-Strong ugen. Karplus-Strong is a form of string synthesis. The Arturia MicroFreak also embeds a Karplus-Strong synthesis engine.
- Impulse “outputs non-bandlimited single sample impulses”. Yeah? I understand it can work as a trigger. I think the fact that it’s non-band-limited means the output signal is more “noisy” than a band-limited one, given that band-limited impulse generation is a form of synthesis.
- Dust isn’t the same thing at all. It “just” generates random impulses between 0 and 1, with a given average number of impulses per second.
- LeakDC is a DC offset filter. Taking the definition from a Renoise tutorial, DC (direct current) offset (or bias) is an unwanted displacement of amplitude from 0 leading to issues like clipping or distortion. LeakDC is a filter removing DC offset from a signal. It makes sense for it to be used in a drone metal patch.
- BLowShelf is a more advanced low shelf filter from the “B” family of filters. BHiShelf is its high shelf relative. It’s out of my depth for now, but it’s an entry point to go beyond the basics and understand how and why there are no “standard” filters or effects and why people look for specific analog filters.
- ReplaceOut is like Out in that it writes output to a given bus, but instead of adding to what is already there, it replaces the bus content with the newest output. In practice it can be used to create mixers and to fade synths in and out of an audio bus.
- LocalIn and LocalOut allow for the definition of a local bus in a synthdef. Useful to generate feedback loops. Fantastic example in the ugen documentation.
- Limiter limits the amplitude of a signal to the given maximum (1 by default). I don’t understand why quarks are needed if this ugen exists?

methods:
- normalize … normalizes the values of a collection (or any object implementing it) between a given range. It’s related to vectors:
to normalize a vector in math means to divide each of its elements by some value v so that the length/norm of the resulting vector is 1.
- linlin “wraps the receiver so that a linear input range is mapped to a linear output range.” Not 100% sure of what it does. A receiver is “the object to which a message is sent”. I think linlin’s purpose is to define lower and upper bounds for the values the object will send and receive next. Quite vague, but it seems to be used everywhere.
- tail adds a synth at the tail of a group node (nodes are organized as a tree).
References:
- synthdef - drone metal in supercollider
- https://tutorials.renoise.com/wiki/audio_effects
- https://en.wikipedia.org/wiki/karplus%e2%80%93strong_string_synthesis
- https://scsynth.org/t/how-and-why-use-advanced-filters/1900/2
- https://scsynth.org/t/underwater-my-first-piece-in-the-world-of-supercollider/7031
- https://stackoverflow.com/questions/23642694/what-does-it-mean-to-normalize-an-array
Session: Random modulation. client VS server
The issue was that I used rrand, which is evaluated client-side, as a source of randomness for modulating the different synths. It didn’t work as intended. The function is evaluated when the SynthDef is declared, not when the synth is instantiated. Once declared, the value is fixed and won’t change until the SynthDef is declared again. To change a parameter value each time the synth is instantiated, server-side objects like TRand or ExpRand must be used.
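The difference in miniature (my own example):
(
SynthDef(\clientRand, {
	// rrand is evaluated once, when the SynthDef is built: every Synth gets the same value
	Out.ar(0, SinOsc.ar(rrand(200, 800), 0, 0.1) ! 2);
}).add;

SynthDef(\serverRand, {
	// ExpRand picks a new value each time a Synth is instantiated
	Out.ar(0, SinOsc.ar(ExpRand(200, 800), 0, 0.1) ! 2);
}).add;
)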
Session: Cellular automata
I implemented an elementary cellular automaton in sclang to get the hang of the language and explore whether it could produce interesting rhythms, with the binary output used as a gate.
It sucked, but here is what I learned:
- The collect method can be used to produce a new collection from a collection (here, an array). I used it to replace a JavaScript map.
- t_gate can be used in an EnvGen to easily send a gate signal as an argument to a synth. It doesn’t seem to be documented, or maybe only through the more general description of the t_* syntax.
- slice is also undocumented. It can replace its JavaScript counterpart by passing an object (x..y), where x and y are the boundaries to slice the collection from.
- A dictionary can be created from a collection of key/value pairs with Dictionary.newFrom. The values can then be accessed using at(key) or indexing (dict[key]).
- Infinite loops can be created with inf.do(...).
- For random number generation, be sure to wrap the call inside a function to get a new number each time it’s called. Ex: next_state = Array.fill(50, { rrand(0, 1) });.
TODO: snippet
Way to generate random pitch input
midicps converts MIDI note numbers to cycles per second. So instead of writing frequencies by hand, one can use simpler numbers to represent pitch that will be converted to “tonal” frequencies. With choose or wchoose, I can select several notes from an array [edit 12/15/23 11:43: I think this is bullshit, check the doc again].
pitch = [0, 2, 3, 5, 7, 10, 12, 15].midicps.choose;
References and resources
- FIELDSTEEL Eli, Supercollider For The Creative Musician: A Practical Guide, 2024
- HARKINS, Henry James, A Practical Guide to Patterns. SuperCollider 3.3 Documentation, 2009
- The SuperCollider forum
- Norns scripting guide: https://monome.org/docs/norns/studies/
- cs203: fall 2021
- SuperCollider code
- Opinionated Advice for SuperCollider Beginners
- Composer Programmer (Nick Collins)
- companion website for SuperCollider for the Creative Musician: A Practical Guide
- Steftones/OctaGroove. An 8-track sequencer
- howto_co34pt_liveCode - Basic Rhythms
- Martin Marier’s videos. Great resource for French speaking people
- SynthDef on named controls