
Re: Csound and other synthesis systems

Date: 1999-06-17 03:20
From: Paul Barton-Davis
Subject: Re: Csound and other synthesis systems
James McCartney writes:

>SuperCollider is a dynamically typed, real time garbage collected,
>fully object oriented language like Smalltalk with support for true closures, 
>coroutines, positional or keyword arguments, variable length argument
>lists, default argument values, etc. 
>It has a class library of 341 classes including a full set
>of Collection classes like Dictionary, Set, SortedList, etc.
>There are currently over 200 unit generators.

While I share your conviction that neither Csound nor C++ is the right
language in which to write complex algorithmic
processes/patches/whatever, that doesn't mean I think that the same
language should be used both for those complex algorithmic processes
and for the specification of what is essentially a DSP program.

There is no justification for a DSP program to contain concepts such
as a Dictionary or a SortedList; on the other hand, there is no
justification for a sophisticated algorithmic language to be
constrained by the execution model embodied in a DSP program.

Furthermore, I don't want to have to be stuck with only a single
language to program a DSP with - Csound is bad enough, and I don't
believe that any other language is perfect for this. What's really
important is not the visible language but the inner parse tree that is
used to represent DSP programs written in any language. Quasimodo has
the basic structure to support multiple DSP languages by compiling
them all down to the same internal form, and these different languages
are themselves intended to be "plugins".
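The shared-internal-form idea can be sketched roughly like this (a hypothetical Python illustration, not Quasimodo's actual internals; every name here is made up for the example):

```python
# Hypothetical sketch: two surface languages lowering to one internal
# DSP form. None of these names come from Quasimodo.

class UGen:
    """One node in the internal DSP parse tree: an opcode plus its inputs."""
    def __init__(self, opcode, *inputs):
        self.opcode = opcode
        self.inputs = inputs

    def __eq__(self, other):
        return (isinstance(other, UGen)
                and self.opcode == other.opcode
                and self.inputs == other.inputs)

    def __repr__(self):
        return f"{self.opcode}({', '.join(map(repr, self.inputs))})"

# "Plugin" front end 1: a Csound-like textual form, e.g. "oscil 0.5, 440"
def compile_csound_like(line):
    opcode, args = line.split(None, 1)
    return UGen(opcode, *[float(a) for a in args.split(",")])

# "Plugin" front end 2: a function-call style builder
def oscil(amp, freq):
    return UGen("oscil", amp, freq)

# Both front ends lower to the same internal form:
a = compile_csound_like("oscil 0.5, 440")
b = oscil(0.5, 440.0)
assert a == b
print(a)   # oscil(0.5, 440.0)
```

The visible syntax differs, but the engine only ever sees the `UGen` graph, which is the point being made about multiple DSP languages.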

But to return to my point: I don't *want* a dynamically typed, real
time garbage collected, fully object oriented language to write DSP
code in. I *do* want such a language to write higher level
abstractions in, and possibly to have it automatically generate DSP
code. From what I could read of SuperCollider, this separation doesn't
exist. This is not necessarily so bad if one can use the language
simply. But it's important to take a lesson from the history of Csound:
things that are cool and useful ultimately end up as opcodes, not as
things written in Csound's orc language. This means that many cool
"modules" consist of very little more than a couple of opcode
calls. 

I also don't understand anything of what you said about note on
velocity controlling the complexity of a patch. I think you must be
describing something rather different than what I think of as a patch,
which is essentially the specification of what things are connected
together. The best I can imagine is that velocity is used to
essentially choose one of a particular set of possible
interconnections before any actual sound is generated, but there is no
particular reason for this to be part of the language, and in fact,
it's a lot less flexible than doing this switching visually, via a
velocity zone map that acts as a switch to route note information to a
particular patch. This can be modified without editing a program, and
in theory, the velocity can be patched through something else (e.g. a
limiter, or an expander, or whatever) before being used by the zone
map. Modular synthesis did have a point to it :)
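The zone-map idea can be sketched as follows (hypothetical Python; the zone boundaries, the expander, and all names are illustrative, not taken from any real sampler or from Quasimodo):

```python
# Hypothetical sketch of a velocity zone map acting as a switch:
# the incoming velocity (optionally pre-processed) selects which patch
# receives the note. All names and numbers are illustrative.

zones = [              # (lo, hi, patch name), bounds inclusive
    (0,   63, "soft_patch"),
    (64, 127, "hard_patch"),
]

def expander(vel, threshold=32, ratio=2.0):
    """Example pre-processing: push quiet velocities further down."""
    if vel >= threshold:
        return vel
    return max(0, int(threshold - (threshold - vel) * ratio))

def route(vel):
    """Return the patch that should receive a note with this velocity."""
    v = expander(vel)          # velocity patched through something else
    for lo, hi, patch in zones:
        if lo <= v <= hi:
            return patch
    return None

assert route(100) == "hard_patch"
assert route(40) == "soft_patch"
```

Changing the routing here means editing the `zones` table (or a GUI equivalent), not rewriting the patches themselves, which is the flexibility being claimed.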

Date: 1999-06-17 07:12
From: James McCartney
Subject: Re: Csound and other synthesis systems
At 8:20 PM -0600 6/16/99, Paul Barton-Davis wrote:

>There is no justification for a DSP program to contain concepts such
>as a Dictionary or a SortedList; on the other hand, there is no
>justification for a sophisticated algorithmic language to be
>constrained by the execution model embodied in a DSP program.

Just because you cannot justify it to yourself does not mean
that there are no justifiable uses. And the algorithmic language
is in no way constrained, or vice versa. In fact you have much
greater flexibility, because your algorithmic composition can get
down and mess with the structure of your DSP algorithms as they
are being built. I do this a lot. It is very powerful. Maybe you
need to use your imagination a bit more.

>But to return to my point: I don't *want* a dynamically typed, real
>time garbage collected, fully object oriented language to write DSP
>code in.

The SC language does not implement the UGens, it creates the patching
structure. Again I think that because you have never used it
or been exposed to what you can do, you do not realize the power there.

>From what I could read of SuperCollider, this separation doesn't
>exist. This is not necessarily so bad if one can use the language
>simply.

Sure you can. There is an OrcScore ugen that can be used just as
you would write a Csound numeric score.

>I also don't understand anything of what you said about note on
>velocity controlling the complexity of a patch. I think you must be
>describing something rather different than what I think of as a patch,
>which is essentially the specification of what things are connected
>together. The best I can imagine is that velocity is used to
>essentially choose one of a particular set of possible
>interconnections before any actual sound is generated, 

In SuperCollider your orchestra is not just some dead code.
Unit generators are real objects and can be manipulated and
combined in real time. In SuperCollider you are not defining
a static structure beforehand but are building a graph of
ugens in real time for each event.
For example, I can write the following in SuperCollider:

n.do({ z = AllpassN.ar(z, 0.05, 0.05.rand, 2); });

This creates a daisy chain of n allpass delay lines. Now I can
map n to some patch argument such as velocity and that
will cause the number of allpass delays in series to be equal to n.
In other words the structure of the patch can change per event.
Now you *could* implement this as a lookup table of patches in a 
static language, but then if I went and added more structural mappings
in SC, you'd begin to have a combinatorial explosion to match it in
a static language.
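Rendered as a hypothetical Python sketch (illustrative names only, not a real DSP implementation), the per-event construction looks like this:

```python
import random

# Hypothetical sketch of the SC one-liner's structure: daisy-chain n
# allpass nodes per event, with n derived from the event's velocity.
# The node type and the velocity mapping are illustrative only.

class AllpassN:
    def __init__(self, source, max_delay, delay, decay):
        self.source = source
        self.max_delay, self.delay, self.decay = max_delay, delay, decay

def build_patch(source, n):
    """Daisy-chain n allpass delays, as in n.do({ ... })."""
    z = source
    for _ in range(n):
        z = AllpassN(z, 0.05, random.uniform(0, 0.05), 2)
    return z

def chain_length(node):
    """Count how many allpass stages the built graph contains."""
    depth = 0
    while isinstance(node, AllpassN):
        node = node.source
        depth += 1
    return depth

velocity = 96                  # per-event control value
n = velocity // 16             # map velocity to structural complexity
patch = build_patch("input", n)
assert chain_length(patch) == 6
```

The structure of the graph, not merely a parameter inside a fixed graph, differs from event to event, which is what a static lookup table of patches would have to enumerate.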

>particular reason for this to be part of the language, and in fact,
>it's a lot less flexible than doing this switching visually, via a
>velocity zone map that acts as a switch to route note information to a
>particular patch. 

There is so much more you can do than this.

For example, in SC if I write a function to be a simple sine wave
instrument (where env is an envelope defined elsewhere):

f = { arg freq = 440.0, amp = 1.0;
	SinOsc.ar(freq, 0, EnvGen.ar(env, amp));
};

Now I can call it as follows giving it float arguments :

Synth.play({ f.value(800.0, 0.2); });

Or I can call it with other sub patches as arguments:

Synth.play({ f.value( XLine.kr(300, 2000, 4), SinOsc.kr(5, 0, 0.2, 0.4) ) });

Thus I am parameterizing a predefined patch function with other patches
to create a new one. This can be done from within a score, so you
can write glissandi and such into your score without having to worry
whether your patch was originally built to be able to do a glissando;
you just plug in any control function per event.
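A rough Python analogue of this higher-order patching (purely illustrative; here a "patch" is modeled as a function of time, which is not how SC implements it):

```python
import math

# Hypothetical sketch: a patch function that accepts either constants
# or other patches as its arguments, echoing f.value(...) above.

def const(x):
    """Wrap a plain number as a trivial patch."""
    return lambda t: x

def sine_patch(freq, amp):
    """A sine instrument parameterized by freq and amp, each of which
    may itself be a patch (a function of time)."""
    return lambda t: math.sin(2 * math.pi * freq(t) * t) * amp(t)

def lfo(rate, mul, add):
    """A slow sine control signal, standing in for SinOsc.kr."""
    return lambda t: math.sin(2 * math.pi * rate * t) * mul + add

# Called with plain constant arguments:
p1 = sine_patch(const(800.0), const(0.2))

# Called with another sub-patch as the amplitude control:
p2 = sine_patch(const(800.0), lfo(5, 0.2, 0.4))

assert abs(p1(0.0)) < 1e-9   # sin(0) times anything is 0
assert abs(p2(0.0)) < 1e-9
```

The same `sine_patch` definition serves both calls; nothing in it had to anticipate being modulated, which mirrors the glissando point above.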

Things like evolving new, programmatically constructed patches
via some performer-directed process are possible in SC.

And I have not mentioned other things like doing list processing on
arrays of channels, multichannel expansion, spawning subpatches, etc.
None of this is as flexible in a static environment.

btw: In SC all scheduled event start times are sample accurate and 
ksmps is settable *per event*.

   --- james mccartney     james@audiosynth.com   http://www.audiosynth.com
If you have a PowerMac check out SuperCollider2, a real time synth program:
dupswapdrop: the music-dsp mailing list and website
http://shoko.calarts.edu/~glmrboy/musicdsp/music-dsp.html