Introduction#

csoundengine is a library to run and control a csound process using its API (via ctcsound).

Engine#

The core of csoundengine is the Engine class. An Engine wraps a csound process transparently: it lets the user compile csound code and schedule events with minimal overhead.

from csoundengine import *
# create an Engine with default/detected options for the platform.
engine = Engine()

# Define an instrument
engine.compile('''
  instr synth
    ; pfields of the instrument
    kmidinote = p4
    kamp = p5
    kcutoff = p6
    kdetune = p7

    kfreq = mtof:k(kmidinote)
    ; A filtered sawtooth
    asig  = vco2:a(kamp*0.7, kfreq)
    asig += vco2:a(kamp*0.7, kfreq + kdetune)
    asig = moogladder2(asig, kcutoff, 0.9)
    ; Attack / Release
    aenv = linsegr:a(0, 0.01, 1, 0.2, 0)
    asig *= aenv
    outs asig, asig
  endin
''')

# Start a synth with indefinite duration. This returns the eventid (p1)
# of the running instrument, which can be used to further control it
event = engine.sched("synth", args=[48, 0.2, 3000, 4])
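
The mtof opcode used in the instrument converts a MIDI note number to a frequency in Hz. As a point of reference, the same conversion can be sketched in plain Python (the function name here is just for illustration, not part of csoundengine):

```python
def mtof(midinote: float, a4: float = 440.0) -> float:
    """Convert a MIDI note number to frequency in Hz (12-TET, A4 = 440 Hz)."""
    return a4 * 2 ** ((midinote - 69) / 12)

print(mtof(69))            # A4 -> 440.0
print(round(mtof(48), 2))  # the midinote scheduled above -> 130.81
```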

A csound process is launched by creating a new Engine. For any option that is not explicitly given, csoundengine queries the system for the audio backend, audio device, number of channels, samplerate, etc. For example, on linux csoundengine first checks whether jack is running (either as jack itself or within pipewire) and, if so, uses it as the backend, falling back to portaudio otherwise. Unless specified otherwise, csoundengine uses the backend's default audio devices and matches their number of channels and samplerate.
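
The backend selection described above amounts to a fallback chain, which can be sketched as follows (the function and backend names here are illustrative, not the actual csoundengine internals):

```python
def choose_backend(platform: str, jack_running: bool) -> str:
    # Illustrative fallback chain: on linux, prefer jack (or pipewire's
    # jack layer) when it is running, otherwise fall back to portaudio
    if platform == 'linux' and jack_running:
        return 'jack'
    return 'portaudio'

print(choose_backend('linux', jack_running=True))   # -> jack
print(choose_backend('linux', jack_running=False))  # -> portaudio
```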

An Engine uses the csound API to communicate with csound. All audio processing is run in a thread with realtime priority to avoid dropouts.

Built-in instruments#

An Engine provides built-in functionality to perform common tasks. For example:

Modulation / Automation#

Within csoundengine, instruments can declare pfields as dynamic values (k-variables), which can be modified, modulated or automated after the event has started. Notice that in the definition of the ‘synth’ instrument, kmidinote = p4 or kcutoff = p6 assigns a pfield (p4, p6) to a control variable.

# Schedule an event with a unique id
event = engine.sched("synth", dur=20, args=[48, 0.2, 3000, 4])

# Change midinote. setp means: set p-field. This sets p4 (kmidinote) to 50
engine.setp(event, 4, 50)

# Automate cutoff (p6), from 500 to 2000 hz in 3 seconds, starting in 4 seconds
engine.automatep(event, 6, (0, 500, 3, 2000), delay=4)
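
The breakpoint sequence passed to automatep() is a flat list of (time, value) pairs, with times relative to the start of the automation. A minimal sketch of how such a line is evaluated, assuming linear interpolation between breakpoints (the helper name is hypothetical):

```python
def interp_bpf(pairs, t):
    """Evaluate a flat (t0, v0, t1, v1, ...) breakpoint line at time t."""
    points = list(zip(pairs[::2], pairs[1::2]))
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            # linear interpolation within this segment
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    # past the last breakpoint, hold the final value
    return points[-1][1]

# The automation above: from 500 to 2000 over 3 seconds
print(interp_bpf((0, 500, 3, 2000), 0))    # -> 500
print(interp_bpf((0, 500, 3, 2000), 1.5))  # -> 1250.0
print(interp_bpf((0, 500, 3, 2000), 3))    # -> 2000.0
```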

Session (high level interface)#

Each Engine can have an associated Session. A Session provides a higher-level interface that allows you to:

  • Define instrument templates (an Instr), which can be instantiated at any point in the evaluation order, making it possible to implement processing chains of any complexity

  • Define named parameters with default values. When an instrument is scheduled, only parameters which diverge from their defaults need to be passed.

  • Use a series of built-in Instr’s to perform common tasks, such as playing samples from memory or from disk, performing audio analysis, etc.
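
The way default values are filled in when scheduling can be pictured as a simple dict merge. This is a sketch (not the actual implementation), using the defaults of the "synth" instrument defined below:

```python
# Declared defaults, matching |ibus, kmidi=60, kamp=0.1, ktransp=0, ifade=0.5|
defaults = {'kmidi': 60, 'kamp': 0.1, 'ktransp': 0, 'ifade': 0.5}

def resolve_params(defaults: dict, given: dict) -> dict:
    """Fill any parameter not explicitly given with its default value."""
    unknown = set(given) - set(defaults)
    if unknown:
        raise KeyError(f"Unknown parameters: {unknown}")
    return {**defaults, **given}

# Only the diverging parameter needs to be passed
print(resolve_params(defaults, {'kmidi': 67}))
# -> {'kmidi': 67, 'kamp': 0.1, 'ktransp': 0, 'ifade': 0.5}
```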

from csoundengine import *

# When a session is created, the underlying Engine is created as well. The engine
# is thus created with default values
session = Session()

# If the Engine needs to be customized in some way, it must be created first
session = Engine(nchnls=4, ksmps=32).session()

# An Engine has only one Session assigned to it. Calling .session() on the engine
# again will return the same session
assert session.engine.session() is session

# define instruments
session.defInstr("synth", r'''
  |ibus, kmidi=60, kamp=0.1, ktransp=0, ifade=0.5|
  ; a simple sawtooth
  asig vco2 kamp, mtof:k(kmidi+ktransp)
  asig *= linsegr:a(0, ifade, 1, ifade, 0)
  ; output is routed to a bus
  busout(ibus, asig)
''')

session.defInstr("filter", r'''
  |ibus, imasterbus, kcutoff=1000, kresonance=0.9|
  asig = busin(ibus)
  asig = moogladder2(asig, kcutoff, kresonance)
  busmix(imasterbus, asig)
''')

# NB: p4 is reserved, attempting to use it will result in an error
session.defInstr("master", r'''
  imasterbus = p5
  asig = busin(imasterbus)
  asig compress2 asig, asig, -120, -40, -12, 3, 0.1, 0.01, 0.05
  outch 1, asig
''')

# create a master audio bus
masterbus = session.assignBus()

# Start a master instance at the end of the evaluation chain
master = session.sched("master", imasterbus=masterbus, priority=3)

# Launch some notes
for i, midinote in enumerate(range(60, 72, 2)):
    # for each synth, we create a bus to plug it to an effect, in this case a filter
    bus = session.assignBus()

    delay = i

    # Schedule a synth
    synth = session.sched("synth", delay=delay, dur=5, kmidi=midinote, ibus=bus)

    # Automate pitch transposition so that it descends 2 semitones over the
    # duration of the event
    synth.automatep('ktransp', [0, 0, synth.dur, -2], delay=delay)

    # Schedule the filter for this synth, with a priority higher than the
    # synth, so that it is evaluated later in the chain
    filt = session.sched("filter",
                         delay=delay,
                         dur=synth.dur,
                         priority=synth.priority+1,
                         kcutoff=2000,
                         kresonance=0.92,
                         ibus=bus,
                         imasterbus=masterbus)

    # Automate the cutoff freq. of the filter, so that it starts at 2000 Hz,
    # it drops to 500 Hz by 80% of the note and goes up to 6000 Hz at the end
    filt.automatep('kcutoff', [0, 2000, synth.dur*0.8, 500, synth.dur, 6000], delay=delay)
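
As the comments above note, the priority argument determines the order of evaluation within each performance cycle: higher priorities are evaluated later, so the filter (priority 2) reads from the bus what the synth (priority 1) wrote in the same cycle, and the master (priority 3) runs last. A minimal sketch of this ordering (the data here is hypothetical, not csoundengine's internal representation):

```python
# Events as (name, priority) pairs, scheduled in arbitrary order
events = [('master', 3), ('synth', 1), ('filter', 2)]

# Within each cycle, events are evaluated in ascending priority, so each
# stage reads what the previous stage wrote to its bus
chain = [name for name, prio in sorted(events, key=lambda ev: ev[1])]
print(chain)  # -> ['synth', 'filter', 'master']
```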

Offline Rendering#

Offline rendering is implemented via the Renderer class, which has the same interface as a Session and can be used as a drop-in replacement.

from csoundengine import *
from pitchtools import *

renderer = Renderer(sr=44100, nchnls=2)

renderer.defInstr('saw', r'''
  kmidi = p5
  outch 1, oscili:a(0.1, mtof:k(kmidi))
''')

events = [
    renderer.sched('saw', 0, 2, kmidi=ntom('C4')),
    renderer.sched('saw', 1.5, 4, kmidi=ntom('4G')),
    renderer.sched('saw', 1.5, 4, kmidi=ntom('4G+10'))
]

# offline events can be modified just like real-time events
events[0].automate('kmidi', (0, 0, 2, ntom('B3')), overtake=True)

events[1].set(delay=3, kmidi=67.2)
events[2].set(kmidi=80, delay=4)
renderer.render("out.wav")

A Renderer can also be created from an existing Session, either via makeRenderer() or via the context manager rendering(). In both cases an offline Renderer is created in which all instruments and data defined in the Session are also available.

Taking the earlier example, the same music can be rendered offline by placing this code:

...

masterbus = session.assignBus()
master = session.sched("master", imasterbus=masterbus, priority=3)
for i, midinote in enumerate(range(60, 72, 2)):
    bus = session.assignBus()
    delay = i
    synth = session.sched("synth", delay=delay, dur=5, kmidi=midinote, ibus=bus)
    synth.automatep('ktransp', [0, 0, synth.dur, -2], delay=delay)
    filt = session.sched("filter", delay=delay, dur=synth.dur,
                         priority=synth.priority+1, kcutoff=2000,
                         ibus=bus,
                         imasterbus=masterbus)
    filt.automatep('kcutoff', [0, 2000, synth.dur*0.8, 500, synth.dur, 6000], delay=delay)

inside the rendering context manager:

with session.rendering("out.wav") as session:
    masterbus = session.assignBus()
    master = session.sched("master", imasterbus=masterbus, priority=3)
    for i, midinote in enumerate(range(60, 72, 2)):
        bus = session.assignBus()
        delay = i
        synth = session.sched("synth", delay=delay, dur=5, kmidi=midinote, ibus=bus)
        synth.automatep('ktransp', [0, 0, synth.dur, -2], delay=delay)
        filt = session.sched("filter", delay=delay, dur=synth.dur,
                         priority=synth.priority+1, kcutoff=2000,
                         ibus=bus,
                         imasterbus=masterbus)
        filt.automatep('kcutoff', [0, 2000, synth.dur*0.8, 500, synth.dur, 6000], delay=delay)

csoundengine vs ctcsound#

csoundengine uses ctcsound to interact with csound. ctcsound follows the csound API very closely and requires good knowledge of it to avoid crashes and achieve good performance. csoundengine bundles this knowledge into a wrapper which remains flexible for advanced use cases while enabling a casual user to start and control a csound process very easily. See below for a detailed description of csoundengine's features.

Features#

  • Detection of current environment - csoundengine queries the OS/hardware to determine the system samplerate, the number of hardware channels and the most appropriate buffer size

  • Named parameters and defaults - An instrument in csoundengine can have named parameters and default values. This makes it very easy to create instruments with many parameters. When an instance of such an instrument is scheduled, csoundengine fills any parameter which is not explicitly given with its default value. Any parg can also be modulated in real time. See Engine.setp() and Engine.getp()

  • Event ids / Modulation - in csoundengine every event is assigned a unique id, allowing the user to control it during performance, from python or from csound directly.

  • Informed use of the Csound API - csoundengine uses the most convenient part of the API for each task (create a table, communicate with a running event, load a soundfile), in order to minimize latency and/or increase performance.

  • Automation - csoundengine provides a built-in method to automate the parameters of a running event, either via break-point curves or in realtime via any python process. See Engine.automatep(), Engine.setp() or the corresponding Synth methods: set() and automate()

  • Bus system - an Engine provides a bus system (both for audio and control values) to make communication between running events much easier. See assignBus() and Bus opcodes

  • Jupyter notebook - When used inside a jupyter notebook csoundengine generates customized html output and interactive widgets. For any scheduled event csoundengine can generate an interactive UI to control its parameters in realtime. It also provides %magic routines to compile csound code and interact with a running Engine. See Inside Jupyter

  • Processing chains - An instrument defined in a Session can be scheduled at any point within a processing chain, making instrument definitions more modular and reusable

  • Built-in functions - Any Engine / Session has built-in functionality for soundfile/sample playback, loading sf2/sf3 soundfonts, jsfx effects, audio analysis, etc.