Offline Rendering#
Offline rendering is implemented via the Renderer class, which has the same interface as a Session and can be used as a drop-in replacement.
Example

```python
from csoundengine import *
from pitchtools import *

renderer = Renderer(sr=44100, nchnls=2)

renderer.defInstr('saw', r'''
  kmidi = p5
  outch 1, oscili:a(0.1, mtof:k(kmidi))
''')

events = [
    renderer.sched('saw', 0, 2, kmidi=ntom('C4')),
    renderer.sched('saw', 1.5, 4, kmidi=ntom('4G')),
    renderer.sched('saw', 1.5, 4, kmidi=ntom('4G+10'))
]

# Offline events can be automated just like real-time events
events[0].automate('kmidi', (0, 0, 2, ntom('B3')), overtake=True)
events[1].set(delay=3, kmidi=67.2)
events[2].set(kmidi=80, delay=4)

renderer.render("out.wav")
```
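The `ntom` calls above convert note names to midi note numbers. As a rough illustration of what a call like `ntom('4G+10')` computes, here is a hypothetical, simplified converter (not pitchtools' actual implementation): it accepts both `'C4'` and `'4C'` orderings, `#`/`b` accidentals, and an optional cents suffix.

```python
import re

def note_to_midi(note: str) -> float:
    """Convert a note name like 'C4', '4G' or '4G+10' to a midi note number.

    Simplified sketch of a note-name parser; pitchtools' real ntom
    supports more notations than shown here.
    """
    steps = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}
    m = re.match(r'^(?:(\d)([A-G])([#b]?)|([A-G])([#b]?)(\d))([+-]\d+)?$', note)
    if not m:
        raise ValueError(f"Could not parse note: {note}")
    if m.group(1) is not None:
        octave, letter, accidental = int(m.group(1)), m.group(2), m.group(3)
    else:
        letter, accidental, octave = m.group(4), m.group(5), int(m.group(6))
    cents = int(m.group(7) or 0)
    semitone = steps[letter] + (1 if accidental == '#' else -1 if accidental == 'b' else 0)
    # Midi convention: C4 = 60, i.e. (octave + 1) * 12 + chromatic step
    return (octave + 1) * 12 + semitone + cents / 100
```

With this sketch, `note_to_midi('C4')` is 60, `note_to_midi('4G')` is 67 and `note_to_midi('4G+10')` is 67.1, matching the pitches scheduled above.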
It is possible to create a Renderer out of an existing Session by calling session.makeRenderer. This creates a Renderer with all Instrs and resources (tables, include files, global code, etc.) in the Session already defined.
```python
from csoundengine import *

session = Session()
session.defInstr('test', ...)
table = session.readSoundfile('path/to/soundfile')
session.sched('test', ...)
session.playSample(table)

# Render offline
renderer = session.makeRenderer()
renderer.sched('test', ...)
renderer.playSample('path/to/soundfile')
renderer.render('out.wav')
```
A more convenient way to render offline, given a live Session, is to use the rendering() method:

```python
from csoundengine import *

session = Session()
session.defInstr('test', ...)
table = session.readSoundfile('path/to/soundfile')

with session.rendering('out.wav') as r:
    r.sched('test', ...)
    r.playSample(table)
```
Renderer#
- class csoundengine.offline.Renderer(sr=None, nchnls=2, ksmps=None, a4=None, priorities=None, numAudioBuses=1000, numControlBuses=10000, dynamicArgsPerInstr=16, dynamicArgsSlots=None)[source]#
A Renderer is used when rendering offline. In most cases a Renderer is a drop-in replacement for a Session when rendering offline (see makeRenderer()). Instruments with a higher priority are guaranteed to be evaluated later in the chain; instruments within a given priority are evaluated in the order they are defined (first defined is evaluated first).
- Parameters:
  - sr (int | None) – the sampling rate. If not given, the value in the config is used (see config['rec_sr'])
  - nchnls (int) – number of channels
  - ksmps (int | None) – csound ksmps. If not given, the value in the config is used (see config['ksmps'])
  - a4 (float | None) – reference frequency (see config['A4'])
  - priorities (int | None) – max. number of priority groups. This will determine how long an effect chain can be
  - numAudioBuses – max. number of audio buses. This is the max. number of simultaneous events using an audio bus. To disable bus support, set this and numControlBuses to 0
  - numControlBuses – the number of control buses
Example

```python
from csoundengine import *

renderer = Renderer(sr=44100, nchnls=2)

Instr('saw', r'''
  kmidi = p5
  outch 1, oscili:a(0.1, mtof:k(kmidi))
''').register(renderer)

score = [('saw', 0, 2, 60),
         ('saw', 1.5, 4, 67),
         ('saw', 1.5, 4, 67.1)]

events = [renderer.sched(ev[0], delay=ev[1], dur=ev[2], args=ev[3:])
          for ev in score]

# Offline events can be modified just like real-time events
events[0].automate('kmidi', pairs=[0, 60, 2, 59])
events[1].set(delay=3, kmidi=67.2)

renderer.render("out.wav")
```
Attributes:
- Samplerate
- Number of output channels
- Samples per cycle
- Reference frequency
- All events scheduled in this Renderer; maps token to event
- A stack of rendered jobs
- Csd structure for this renderer (see Csd)
- The maximum number of dynamic controls per instr
- Maps instr name to Instr instance
- Number of priorities in this Renderer
- A dict mapping soundfile paths to their corresponding TableProxy
Methods:
- The render mode of this Renderer, one of 'online', 'offline'
- initChannel(channel[, value, kind, mode]) – Create a channel and, optionally, set its initial value
- setChannel(channel, value[, delay]) – Set the value of a software channel
- commitInstrument(instrname[, priority]) – Create a concrete instrument at the given priority
- registerInstr(instr) – Register an Instr to be used in this Renderer
- defInstr(name, body[, args, init, priority, ...]) – Create an Instr and register it with this renderer
- registeredInstrs() – Returns a dict (instrname: Instr) with all registered Instrs
- getInstr(name) – Find a registered Instr, by name
- includeFile(path) – Add an #include clause to this offline renderer
- addGlobalCode(code) – Add global code (instr 0)
- schedEvent(event) – Schedule the given event
- sched(instrname[, delay, dur, priority, ...]) – Schedule an event
- makeEvent(start, dur, pfields5, instr[, ...]) – Create a SchedEvent for this Renderer
- unsched(event, delay) – Stop a scheduled event
- Returns True if this Engine was started with bus support
- assignBus([kind, value, persist]) – Assign a bus
- setCsoundOptions(*options) – Set any command line options to use by all render operations
- renderDuration() – Returns the actual duration of the rendered score, considering an end marker
- scoreTimeRange() – Returns a tuple (score start time, score end time)
- setEndMarker(time) – Set the end marker for the score
- render([outfile, endtime, encoding, wait, ...]) – Render to a soundfile
- lastRenderJob() – Returns the last RenderJob spawned by Renderer.render()
- lastRenderedSoundfile() – Returns the last rendered soundfile, or None if no jobs were rendered
- writeCsd(outfile) – Generate the csd project for this renderer, write it to outfile
- Returns the csd as a string
- getEventById(eventid) – Retrieve a scheduled event by its eventid
- getEventsByP1(p1) – Retrieve all scheduled events which have the given p1
- strSet(s[, index]) – Set a string in this renderer
- makeTable([data, size, tabnum, sr, delay, ...]) – Create a table with given data or an empty table of the given size
- readSoundfile([path, chan, skiptime, delay, ...]) – Add code to this offline renderer to load a soundfile
- playSample(source[, delay, dur, chan, gain, ...]) – Play a table or a soundfile
- automate(event, param, pairs[, mode, delay, ...]) – Automate a parameter of a scheduled event
- playPartials(source[, delay, dur, speed, ...]) – Play a packed spectrum
- sr#
Samplerate
- nchnls#
Number of output channels
- ksmps#
Samples per cycle
- a4#
Reference frequency
- scheduledEvents: dict[int, SchedEvent]#
All events scheduled in this Renderer; maps token to event
- controlArgsPerInstr#
The maximum number of dynamic controls per instr
- numPriorities: int#
Number of priorities in this Renderer
- soundfileRegistry: dict[str, TableProxy]#
A dict mapping soundfile paths to their corresponding TableProxy
- initChannel(channel, value=None, kind='', mode='rw')[source]#
Create a channel and, optionally, set its initial value
- Parameters:
  - channel (str) – the name of the channel
  - value (float | str | None) – the initial value of the channel; will also determine the type (k, S)
  - kind – one of 'k', 'S', 'a'. Leave unset to auto-determine the channel type
  - mode – r for read, w for write, rw for both
Note
The mode determines the communication direction between csound and a host when running csound via its API. For offline rendering, and when using channels for internal communication, this is irrelevant.
- setChannel(channel, value, delay=0.0)[source]#
Set the value of a software channel
- Parameters:
  - channel (str) – the name of the channel
  - value (float | str) – the new value; should match the type of the channel. Audio channels are not allowed offline
  - delay – when to perform the operation. A delay of 0 will generate a chnset instruction at the instr 0 level
- Return type: None
- commitInstrument(instrname, priority=1)[source]#
Create a concrete instrument at the given priority.
Returns the instr number
- Parameters:
  - instrname (str) – the name of the previously defined instrument to commit
  - priority – the priority of this version; defines the order of execution (higher priority is evaluated later)
- Return type: int
- Returns: the instr number (as in "instr xx ... endin" in a csound orc)
- registerInstr(instr)[source]#
Register an Instr to be used in this Renderer
- Parameters:
  - instr (Instr) – the instrument to register
- Return type: bool
- Returns: True if the instrument was registered, False if it was already registered in the current form

Example

```python
>>> from csoundengine import *
>>> renderer = Renderer(sr=44100, nchnls=2)
>>> instrs = [
...     Instr('vco', r'''
...       |kmidi=60|
...       outch 1, vco2:a(0.1, mtof:k(kmidi))
...     '''),
...     Instr('sine', r'''
...       |kmidi=60|
...       outch 1, oscili:a(0.1, mtof:k(kmidi))
...     ''')]
>>> for instr in instrs:
...     instr.register(renderer)   # This will call .registerInstr
>>> renderer.sched('vco', dur=4, kmidi=67)
>>> renderer.sched('sine', 2, dur=3, kmidi=68)
>>> renderer.render('out.wav')
```
- defInstr(name, body, args=None, init='', priority=None, doc='', includes=None, aliases=None, useDynamicPfields=None, **kws)[source]#
Create an Instr and register it with this renderer
- Parameters:
  - name (str) – the name of the created instr
  - body (str) – the body of the instrument. It can have named pfields (see example) or a table declaration
  - args (dict[str, float | str] | None) – pfields with their default values. Only needed if not using inline args
  - init (str) – init (global) code needed by this instr (read soundfiles, load soundfonts, etc.)
  - priority (int | None) – has no effect for offline rendering; only here to maintain the same interface as Session
  - doc (str) – documentation describing what this instr does
  - includes (list[str] | None) – list of files to be included in order for this instr to work
  - aliases (dict[str, str] | None) – a dict mapping arg names to real argument names
  - useDynamicPfields (bool | None) – if True, use pfields to implement dynamic arguments (arguments given as k-variables). Otherwise dynamic args are implemented as named controls, using a big global table
  - kws – any keywords are passed on to the Instr constructor. See the documentation of Instr for more information
- Return type: Instr
- Returns: the created Instr. If needed, this instr can be registered at any other Renderer/Session

Example

```python
>>> from csoundengine import *
>>> renderer = Renderer()
>>> # An Instr with named pfields
>>> renderer.defInstr('synth', '''
...   |ibus, kamp=0.5, kmidi=60|
...   kfreq = mtof:k(lag:k(kmidi, 1))
...   a0 vco2 kamp, kfreq
...   a0 *= linsegr:a(0, 0.1, 1, 0.1, 0)
...   busout ibus, a0
... ''')
>>> # An Instr with named table args
>>> renderer.defInstr('filter', '''
...   {ibus=0, kcutoff=1000, kresonance=0.9}
...   a0 = busin(ibus)
...   a0 = moogladder2(a0, kcutoff, kresonance)
...   outch 1, a0
... ''')
>>> bus = renderer.assignBus()
>>> event = renderer.sched('synth', 0, dur=10, ibus=bus, kmidi=67)
>>> event.set(kmidi=60, delay=2)  # This will set the kmidi param
>>> filt = renderer.sched('filter', 0, dur=event.dur,
...                       priority=event.priority + 1,
...                       args={'ibus': bus, 'kcutoff': 1000})
>>> filt.automate('kcutoff', [3, 1000, 6, 200, 10, 4000])
```
- registeredInstrs()[source]#
Returns a dict (instrname: Instr) with all registered Instrs
- Return type: dict[str, Instr]
- getInstr(name)[source]#
Find a registered Instr, by name
Returns None if no such Instr was registered
- Return type: Instr | None
- includeFile(path)[source]#
Add an #include clause to this offline renderer
- Parameters:
  - path (str) – the path to the include file
- Return type: None
- addGlobalCode(code)[source]#
Add global code (instr 0)
- Return type:
None
Example

```python
>>> from csoundengine import *
>>> renderer = Renderer(...)
>>> renderer.addGlobalCode("giMelody[] fillarray 60, 62, 64, 65, 67, 69, 71")
```
- sched(instrname, delay=0.0, dur=-1.0, priority=1, args=None, whenfinished=None, relative=True, **kwargs)[source]#
Schedule an event
- Parameters:
  - instrname (str) – the name of the already registered instrument
  - priority – determines the order of execution
  - delay – time offset
  - dur – duration of this event. -1: endless
  - args (Sequence[float | str] | dict[str, float] | None) – pfields beginning with p5 (p1: instrnum, p2: delay, p3: duration, p4: reserved)
  - whenfinished (Callable | None) – not relevant in the context of offline rendering
  - relative – not relevant for offline rendering
  - kwargs – any named argument passed to the instr
- Returns: a ScoreEvent, holding the csound event (p1, start, dur, args)

Example

```python
>>> from csoundengine import *
>>> renderer = Renderer(sr=44100, nchnls=2)
>>> instrs = [
...     Instr('vco', r'''
...       |kmidi=60|
...       outch 1, vco2:a(0.1, mtof:k(kmidi))
...     '''),
...     Instr('sine', r'''
...       |kamp=0.1, kmidi=60|
...       outch 1, oscili:a(kamp, mtof:k(kmidi))
...     ''')]
>>> for instr in instrs:
...     renderer.registerInstr(instr)
>>> renderer.sched('vco', dur=4, kmidi=67)
>>> renderer.sched('sine', 2, dur=3, kmidi=68)
>>> renderer.render('out.wav')
```
- makeEvent(start, dur, pfields5, instr, priority=1)[source]#
Create a SchedEvent for this Renderer
This method does not schedule the event, it only creates it. It must be scheduled via Renderer.schedEvent()
- Parameters:
  - start (float) – the start time
  - dur (float) – the duration
  - pfields5 (list[float | str]) – pfields, starting at p5
  - instr (str | Instr) – the name of the instr or the actual Instr instance
  - priority (int) – the priority
- Return type: SchedEvent
- unsched(event, delay)[source]#
Stop a scheduled event
This schedules the stop of a playing event. The event can be an indefinite event (dur=-1), or this can be used to stop an event before its actual end
- Parameters:
  - event (int | float | SchedEvent) – the event to stop
  - delay (float) – when to stop the given event
- Return type: None
- assignBus(kind='', value=None, persist=False)[source]#
Assign a bus
- Parameters:
  - kind – the bus kind, one of 'audio' or 'control'. The value, if given, will determine the kind if kind is left unset
  - value – an initial value for the bus; only valid for control buses
  - persist – if True, the bus exists until it is manually released. Otherwise the bus exists as long as it is unused and remains alive as long as there are instruments using it

Example

```python
from csoundengine import *

r = Renderer()

r.defInstr('sender', r'''
  ibus = p5
  ifreqbus = p6
  kfreq = busin:k(ifreqbus)
  asig vco2 0.1, kfreq
  busout(ibus, asig)
''')

r.defInstr('receiver', r'''
  ibus = p5
  kgain = p6
  asig = busin:a(ibus)
  asig *= a(kgain)
  outch 1, asig
''')

bus = r.assignBus('audio')
freqbus = r.assignBus(value=880)

chain = [r.sched('sender', ibus=bus.token, ifreqbus=freqbus.token),
         r.sched('receiver', priority=2, ibus=bus.token, kgain=0.5)]

# Make a glissando
freqbus.automate((0, 880, 5, 440))
```
- setCsoundOptions(*options)[source]#
Set any command line options to use by all render operations
Options can also be set when calling Renderer.render()
- Parameters:
  - *options (str) – any option will be passed directly to csound when rendering
- Return type: None

Examples

```python
>>> from csoundengine.offline import Renderer
>>> renderer = Renderer()
>>> instr = Instr("sine", ...)
>>> renderer.registerInstr(instr)
>>> renderer.sched("sine", ...)
>>> renderer.setCsoundOptions("--omacro:MYMACRO=foo")
>>> renderer.render("outfile.wav")
```
- renderDuration()[source]#
Returns the actual duration of the rendered score, considering an end marker
- Return type:
float
- Returns:
the duration of the render, in seconds
- scoreTimeRange()[source]#
Returns a tuple (score start time, score end time)
If any event is of indeterminate duration (dur == -1), the end time will be infinite. Notice that the end marker is not taken into consideration here
- Return type: tuple[float, float]
- Returns: a tuple (start of the earliest event, end of the last event). If there are no events, returns (0, 0)
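As a sketch of this rule (a hypothetical helper, not part of the API), the time range can be computed from (delay, dur) pairs, with an indeterminate duration (dur == -1) pushing the end time to infinity:

```python
import math

def score_time_range(events: list[tuple[float, float]]) -> tuple[float, float]:
    """Compute (start, end) of a score from (delay, dur) pairs.

    Mirrors the documented behaviour of scoreTimeRange(): an event with
    dur == -1 is of indeterminate duration, so the end time becomes
    infinite; the end marker is ignored. No events -> (0, 0).
    """
    if not events:
        return (0.0, 0.0)
    start = min(delay for delay, _ in events)
    end = max(math.inf if dur == -1 else delay + dur for delay, dur in events)
    return (start, end)
```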
- setEndMarker(time)[source]#
Set the end marker for the score
The end marker will extend the rendering time if it is placed after the end of the last event; it will also crop any infinite event. It has no effect if an event with determinate duration ends after it: in that case the end time of the render will be the end of the latest event.
- Return type: None
Note
To render only part of a score use the starttime and / or endtime parameters when calling
Renderer.render()
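The interaction between the score end and the end marker can be sketched with a small hypothetical helper (illustrative only, not part of the API):

```python
import math

def render_end(score_end: float, end_marker: float = 0.0) -> float:
    """Effective end of the render given the score end and an end marker.

    Mirrors the documented rules of setEndMarker(): the marker extends
    the render if it lies after the last event and crops an infinite
    score; it has no effect if a determinate event ends after it.
    """
    if end_marker <= 0:
        return score_end
    if math.isinf(score_end):
        return end_marker              # crop infinite events at the marker
    return max(score_end, end_marker)  # the marker can only extend, not crop
```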
- render(outfile='', endtime=0.0, encoding='', wait=True, verbose=None, openWhenDone=False, starttime=0.0, compressionBitrate=None, sr=None, ksmps=None, tail=0.0, numthreads=0, csoundoptions=None)[source]#
Render to a soundfile
To further customize the render set any csound options via
Renderer.setCsoundOptions()
By default, if the output is an uncompressed file (.wav, .aif) the sample format is set to float32 (csound defaults to 16 bit pcm)
- Parameters:
  - outfile – the output file to render to. The extension will determine the format (wav, flac, etc.). None will render to a temp wav file
  - sr (int | None) – the sample rate used for recording; overrides the samplerate of the renderer
  - ksmps (int | None) – the samples per cycle used when rendering
  - encoding – the sample encoding of the rendered file, given as 'pcmXX' or 'floatXX', where XX is the bit-depth ('pcm16', 'float32', etc.). If no encoding is given, a suitable default for the sample format is chosen
  - wait – if True, this method blocks until the underlying process exits
  - verbose (bool | None) – if True, all output from the csound subprocess is logged
  - endtime – stop rendering at the given time. This will either extend or crop the rendering
  - tail – extra time at the end, useful when rendering long reverbs
  - starttime – start rendering at the given time. Any event ending before this time will not be rendered, and any event between starttime and endtime will be cropped
  - compressionBitrate (int | None) – used when rendering to ogg
  - openWhenDone – open the file in the default application after rendering. At the moment this forces the operation to be blocking, waiting for the render to finish
  - numthreads – number of threads to use for rendering. If not given, the value in config['rec_numthreads'] is used
  - csoundoptions (list[str] | None) – a list of options specific to this render job. Options given to the Renderer itself will be included in all render jobs
- Returns: a tuple (path of the rendered file, subprocess.Popen object). The Popen object is only meaningful if wait is False, in which case it can be further queried, waited on, etc.
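The default-encoding rule described above can be sketched as a small hypothetical helper (the 'pcm16' fallback for other formats is an assumption for illustration, not the library's documented behaviour):

```python
import os

def default_encoding(outfile: str, encoding: str = '') -> str:
    """Pick a sample encoding for a render.

    Sketch of the documented rule: for uncompressed formats (.wav,
    .aif/.aiff) the default is 'float32' rather than csound's 16-bit
    pcm; an explicit encoding always wins.
    """
    if encoding:
        return encoding
    ext = os.path.splitext(outfile)[1].lower()
    if ext in ('.wav', '.aif', '.aiff'):
        return 'float32'
    return 'pcm16'  # hypothetical fallback for compressed/other formats
```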
- lastRenderJob()[source]#
Returns the last RenderJob spawned by Renderer.render()
- Return type: RenderJob | None
- Returns: the last RenderJob, or None if no rendering has been performed yet
- lastRenderedSoundfile()[source]#
Returns the last rendered soundfile, or None if no jobs were rendered
- Return type: str | None
- writeCsd(outfile)[source]#
Generate the csd project for this renderer, write it to outfile
- Parameters:
  - outfile (str) – the path of the generated csd
- Return type: None

If this csd includes any datafiles (tables with data exceeding the limit to include the data 'inline') or soundfiles defined relative to the csd, these datafiles are written to a subfolder with the name {outfile}.assets, where outfile is the outfile given as argument.

For example, if we call writeCsd as renderer.writeCsd('~/foo/myproj.csd'), any datafiles will be saved in '~/foo/myproj.assets' and referenced with relative paths, as 'myproj.assets/datafile.gen23' or 'myproj.assets/mysnd.wav'
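The assets-folder convention can be sketched as a small path helper (hypothetical, for illustration only):

```python
import os

def assets_paths(csdpath: str, datafile: str) -> tuple[str, str]:
    """Where writeCsd would place a datafile and how the csd refers to it.

    Sketch of the documented convention: datafiles go into a sibling
    '<name>.assets' folder and are referenced with a relative path.
    """
    base, _ = os.path.splitext(csdpath)
    assetsdir = base + '.assets'
    relpath = os.path.join(os.path.basename(assetsdir), os.path.basename(datafile))
    return assetsdir, relpath
```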
- getEventById(eventid)[source]#
Retrieve a scheduled event by its eventid
- Parameters:
  - eventid (int) – the event id, as returned by sched
- Return type: SchedEvent | None
- Returns: the ScoreEvent if it exists, or None
- getEventsByP1(p1)[source]#
Retrieve all scheduled events which have the given p1
- Parameters:
  - p1 (float) – the p1 of the scheduled event. This can be a fractional value
- Return type: list[SchedEvent]
- Returns: a list of all scheduled events with the given p1
- strSet(s, index=None)[source]#
Set a string in this renderer.
The string can be retrieved in any instrument via strget. The index is determined by the Renderer itself, and it is guaranteed that calling strSet with the same string will result in the same index
- Parameters:
  - s (str) – the string to set
  - index (int | None) – if given, forces the renderer to use this index
- Return type: int
- Returns: the string id. This can be passed to any instrument to retrieve the given string via the opcode "strget"
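The interning guarantee (the same string always yields the same index) can be sketched as a small registry. This is a hypothetical illustration; the real renderer chooses its own numbering:

```python
class StringRegistry:
    """Sketch of strSet's interning guarantee: calling str_set with the
    same string always returns the same index. Indices start at 1 here,
    for illustration only."""

    def __init__(self):
        self._indexes = {}

    def str_set(self, s, index=None):
        # Reuse the existing index if this string was already set
        if s in self._indexes:
            return self._indexes[s]
        idx = index if index is not None else len(self._indexes) + 1
        self._indexes[s] = idx
        return idx
```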
- makeTable(data=None, size=0, tabnum=0, sr=0, delay=0.0, unique=True)[source]#
Create a table with given data or an empty table of the given size
- Parameters:
  - data (ndarray | list[float] | None) – the data of the table. Use None if the table should be empty
  - size (int) – if no data is given, sets the size of the empty table created
  - tabnum (int) – 0 to self-assign a table number
  - sr (int) – the samplerate of the data, if applicable
  - delay (float) – when to create this table
  - unique – if True, create a table even if a table exists with the same data
- Return type: TableProxy
- Returns: a TableProxy
- readSoundfile(path='?', chan=0, skiptime=0.0, delay=0.0, force=False)[source]#
Add code to this offline renderer to load a soundfile
- Parameters:
  - path – the path of the soundfile to load. Use '?' to select a file via a GUI dialog
  - chan (int) – the channel to read, or 0 to read all channels
  - delay (float) – moment in the score at which to read this soundfile
  - skiptime (float) – skip this time at the beginning of the soundfile
  - force – if True, add the soundfile to this renderer even if the same soundfile has already been added
- Return type: TableProxy
- Returns: a TableProxy representing the table holding the soundfile
- playSample(source, delay=0.0, dur=0.0, chan=1, gain=1.0, speed=1.0, loop=False, pan=0.5, skip=0.0, fade=None, crossfade=0.02)[source]#
Play a table or a soundfile
Adds an instrument definition and an event to play the given table as sound (assumes that the table was allocated via readSoundfile() or any other GEN1 ftgen)
- Parameters:
  - source (int | str | TableProxy | tuple[ndarray, int]) – the table number to play, a TableProxy, the path of a soundfile or a tuple (numpy array, sr). Use '?' to select a file via a GUI dialog
  - delay – when to start playback
  - chan – the channel to output to. If the sample is stereo/multichannel, this indicates the first of a set of consecutive channels to output to
  - loop – if True, the sound will loop
  - speed – the speed to play at
  - pan – a value between 0-1. -1 means default, which is 0 for mono and 0.5 for stereo. For multichannel samples panning is not taken into account at the moment
  - gain – apply a gain to playback
  - fade (float | tuple[float, float] | None) – fade-in / fade-out ramp, in seconds
  - skip – playback does not start at the beginning of the table but at the given start time
  - dur – duration of playback. -1: indefinite duration; will stop at the end of the sample if no looping was set. 0: definite duration; the event is scheduled with dur = sampledur/speed. Do not use 0 if you plan to modify or modulate the playback speed
  - crossfade – if looping, the length of the crossfade
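The duration resolution described for the dur parameter can be sketched as a hypothetical helper (illustrative only, not the library's code):

```python
def playback_dur(sampledur, dur, speed=1.0):
    """Resolve the event duration for sample playback.

    Sketch of the documented rules: dur == 0 schedules a definite event
    lasting sampledur/speed; dur == -1 plays indefinitely (stopping at
    the sample end unless looping); any positive dur is taken as-is.
    """
    if dur == 0:
        return sampledur / speed
    return dur
```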
- automate(event, param, pairs, mode='linear', delay=None, overtake=False)[source]#
Automate a parameter of a scheduled event
- Parameters:
  - event (SchedEvent) – the event to automate, as returned by sched
  - param (str) – the name of the parameter to automate. The instr should have a corresponding line of the sort "kparam = pn". Call ScoreEvent.dynamicParams() to query the set of accepted parameters
  - pairs (Sequence[float] | ndarray) – the automation data as a flat list [t0, y0, t1, y1, ...], where the times are relative to the start of the automation event
  - mode (str) – one of "linear", "cos", "smooth", "exp=xx" (see interp1d)
  - delay (float) – start time of the automation event. If None, the start time of the automated event is used
  - overtake – if True, the first value is not used; the current value of the given parameter is used in its place
- Return type: float
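The flat [t0, y0, t1, y1, ...] pairs format and the overtake behaviour can be sketched with a hypothetical evaluator for linear mode (illustrative only; the actual automation runs inside csound):

```python
def sample_automation(pairs, t, current=None, overtake=False):
    """Evaluate a flat [t0, y0, t1, y1, ...] automation line at time t.

    Sketch of linear mode: times are relative to the automation start;
    values are held before the first and after the last breakpoint.
    With overtake=True the first value is replaced by `current`, the
    parameter's value when the automation starts.
    """
    times = pairs[0::2]
    values = list(pairs[1::2])
    if overtake and current is not None:
        values[0] = current
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for i in range(len(times) - 1):
        t0, t1 = times[i], times[i + 1]
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return values[i] + frac * (values[i + 1] - values[i])
    return values[-1]
```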
- playPartials(source, delay=0.0, dur=-1, speed=1.0, freqscale=1.0, gain=1.0, bwscale=1.0, loop=False, chan=1, start=0.0, stop=0.0, minfreq=0, maxfreq=0, maxpolyphony=50, gaussian=False, interpfreq=True, interposcil=True, position=0.0)[source]#
Play a packed spectrum
A packed spectrum is a 2D numpy array representing a fixed set of oscillators. After partial tracking analysis, all partials are arranged into such a matrix where each row represents the state of all oscillators over time.
The loristrck package is needed for both partial-tracking analysis and packing. It can be installed via pip install loristrck (see gesellkammer/loristrck). This is an optional dependency.
- Parameters:
  - source (int | str | TableProxy | ndarray) – a table number, TableProxy, path to a .mtx or .sdif file, or a numpy array containing the partials data
  - delay – when to start the playback
  - dur – duration of the synth (-1 will play indefinitely if looping, or until the end of the last partial or of the selection)
  - speed – speed of playback (does not affect pitch)
  - loop – if True, loop the selection or the entire spectrum
  - chan – channel to send the output to
  - start – start of the time selection
  - stop – end of the time selection (0 to play until the end)
  - minfreq – lowest frequency to play
  - maxfreq – highest frequency to play
  - gaussian – if True, use gaussian noise for residual resynthesis
  - interpfreq – if True, interpolate frequency between cycles
  - interposcil – if True, use linear interpolation for the oscillators
  - maxpolyphony – if a sdif is passed, compress the partials to at most this number of simultaneous oscillators
  - position – pan position
  - freqscale – frequency scaling factor
  - gain – playback gain
  - bwscale – bandwidth scaling factor
- Returns: the playing Synth

Example

```python
>>> import loristrck as lt
>>> import csoundengine as ce
>>> samples, sr = lt.util.sndread("/path/to/soundfile")
>>> partials = lt.analyze(samples, sr, resolution=50)
>>> lt.util.partials_save_matrix(partials, outfile='packed.mtx')
>>> session = ce.Engine().session()
>>> session.playPartials(source='packed.mtx', speed=0.5)
```
RenderJob#
- class csoundengine.offline.RenderJob(outfile, samplerate, encoding='', starttime=0.0, endtime=0.0, process=None)[source]#
Represents an offline render process.
A RenderJob is generated each time Renderer.render() is called. Each new job is appended to Renderer.renderedJobs. The last render job can be accessed via Renderer.lastRenderJob()
Attributes:
- The soundfile rendered / being rendered
- Samplerate of the rendered soundfile
- Encoding of the rendered soundfile
- Start time of the rendered timeline
- End time of the rendered timeline
- The csound subprocess used to render the soundfile
- The args used to render this job, if a process was used
Methods:
- openOutfile([timeout, appwait, app]) – Open outfile in external app
- wait([timeout]) – Wait for the render process to finish
- outfile: str#
The soundfile rendered / being rendered
- samplerate: int#
Samplerate of the rendered soundfile
- encoding: str = ''#
Encoding of the rendered soundfile
- starttime: float = 0.0#
Start time of the rendered timeline
- endtime: float = 0.0#
End time of the rendered timeline
- process: Popen | None = None#
The csound subprocess used to render the soundfile
- property args: list[str]#
The args used to render this job, if a process was used
- openOutfile(timeout=None, appwait=True, app='')[source]#
Open outfile in external app
- Parameters:
  - timeout – if still rendering, timeout after this number of seconds. None means wait until rendering is finished
  - app – if given, use the given application; otherwise the default application
  - appwait – if True, wait until the external app exits before returning from this method
-
outfile: