
Re: Csound Performance on Multiprocessor Intel Systems?

Date: 1999-03-13 14:16
From: Ben Jefferys
Subject: Re: Csound Performance on Multiprocessor Intel Systems?
Michael Gogins wrote:

> Csound could be rewritten to take advantage of multiprocessing, but the
> rewrite would be difficult and at a low level; it would involve creating a
> pool of threads, from which new instrument instances would receive one. This
> would improve performance only if the number of instrument instances was
> fairly large, because there would be some additional processor overhead
> incurred by requiring each thread to synchronize with the ksmps period, and
> to manage the threads.
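The thread-pool scheme described above might look roughly like this sketch. It is a toy stand-in, not Csound code: the "instruments" just append numbers, and KSMPS, the frequencies, and all names are invented. The barrier is the point: every worker must re-synchronize at each ksmps boundary, which is the overhead mentioned in the quote.

```python
import threading

KSMPS = 64       # samples per control period (hypothetical value)
N_PERIODS = 4    # render this many control periods

# all instrument threads must reach the barrier before any proceeds:
# this per-period synchronization is the cost Gogins mentions
barrier = threading.Barrier(3)

def worker(freq, out):
    # each instrument renders into its own buffer; periods stay aligned
    for _ in range(N_PERIODS):
        for _ in range(KSMPS):
            out.append(freq)       # placeholder for real per-sample DSP
        barrier.wait()             # re-synchronize with the ksmps period

buffers = [[] for _ in range(3)]
threads = [threading.Thread(target=worker, args=(f, b))
           for f, b in zip((1.0, 2.0, 3.0), buffers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# mix down after all threads have finished
mix = [sum(vals) for vals in zip(*buffers)]
```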

Here are a few ideas, but not being very experienced with Csound
(still!) I leave it to others to discuss their viability. All would
require changes to Csound, I think.

1. Just chop the total length of audio you wish to create into as many
pieces as you have processors. This has an obvious problem: generating
one sample depends on the internal state reached while generating the
previous sample. That is fatal if, say, with 2 processors, one sample is
created on processor 1, the next on processor 2 and so on (striping).
However, if you just partition the audio into the "first half" and
"second half" (say), then perhaps a good estimate of Csound's state just
before the second half begins can be obtained with a quick
low-resolution (low a-rate, yes?) run of the first half. The first and
second halves can then be generated simultaneously. This obviously
extends to more than 2 processors, and may be good enough for some orcs.
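Idea 1 can be sketched like so, on the assumption that a cheap coarse run gives a usable boundary-state estimate. The "synthesis" here is just a toy one-pole recursion (A, the step size, and all function names are invented); in real Csound the estimate would of course be approximate, which is the whole trade-off.

```python
from concurrent.futures import ThreadPoolExecutor

A = 0.99  # toy one-pole recursion: y[n] = A*y[n-1] + 1

def render(n_samples, state, step=1):
    """Render the toy signal; step > 1 gives the quick
    low-resolution pass used only to estimate state."""
    y = state
    out = []
    for _ in range(0, n_samples, step):
        y = A * y + 1.0            # placeholder for real synthesis
        out.append(y)
    return out

def render_partitioned(total):
    half = total // 2
    # quick low-resolution run of the first half, only to estimate
    # the engine's state just before the second half starts
    est_state = render(half, 0.0, step=8)[-1]
    # both halves can now be generated simultaneously
    with ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(render, half, 0.0)
        second = pool.submit(render, half, est_state)
        return first.result() + second.result()
```

The first half comes out exactly right; the second half is only as good as the state estimate, so the splice point is where any audible error would appear.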

2. If you are just changing a small section of the audio each time (say
between t=6 and t=7 seconds) then you could store the internal state of
Csound at t=6, make your changes, then somehow map the stored state onto
your changed orc, then just generate audio from t=6 to t=7.
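The checkpointing in idea 2 amounts to snapshotting the engine's mutable state at t=6 and re-running only the changed section from that snapshot. A minimal sketch, assuming a toy engine whose entire state is copyable (the class, sample rate, and update rule are all invented):

```python
import copy

class ToyEngine:
    """Minimal stand-in for a synthesis engine with mutable state."""
    def __init__(self):
        self.phase = 0.0
        self.samples = []

    def run(self, seconds, sr=10):
        for _ in range(int(seconds * sr)):
            self.phase += 0.1          # placeholder for real state updates
            self.samples.append(self.phase)

engine = ToyEngine()
engine.run(6)                          # render up to t = 6 s
checkpoint = copy.deepcopy(engine)     # store internal state at t = 6
engine.run(1)                          # original t = 6..7 render

# later: edit the t = 6..7 section, then re-render only that second
# from the checkpoint instead of starting over from t = 0
edited = copy.deepcopy(checkpoint)
edited.run(1)
```

The hard part glossed over here is the "somehow map the stored state onto your changed orc" step: a deep copy only works when the edited orc still has the same state layout.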

3. I'm not sure what caching Csound does already, but maybe some
performance can be gained by caching the outputs of particular elements
of the orc between runs of Csound. There is obviously some kind of
dependency tree (a directed graph, really) of what uses the output of
what, so only elements which have changed, and elements which depend on
elements which have changed, need be regenerated between sessions. If
your small change is near the final output of Csound, then all the
generation prior to it in the dependency graph can be reused from the
cache. You could keep a set of cache files, each containing the output
of some element along with the code which generated it (or some
internal representation of it). Csound may well already do this, so
sorry if I'm being obvious!
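The dependency-graph caching in idea 3 can be sketched by keying each element's cached output on a hash of its own code plus the keys of its inputs, so an upstream change automatically invalidates everything downstream of it. Everything here is invented for illustration: the "orc" is a dict of Python expressions, and the cache is in memory rather than the cache files suggested above.

```python
import hashlib

# toy orc: each element has code (a Python expression here) and inputs
GRAPH = {
    "osc": {"code": "2.0",       "deps": []},
    "env": {"code": "0.5",       "deps": []},
    "amp": {"code": "osc * env", "deps": ["osc", "env"]},
    "out": {"code": "amp + 0.1", "deps": ["amp"]},
}

cache = {}   # key -> computed output; would be cache files on disk

def key(name):
    """Hash an element's code together with its inputs' keys, so any
    change upstream invalidates everything that depends on it."""
    h = hashlib.sha256(GRAPH[name]["code"].encode())
    for d in GRAPH[name]["deps"]:
        h.update(key(d).encode())
    return h.hexdigest()

def evaluate(name):
    k = key(name)
    if k in cache:
        return cache[k]          # unchanged since last run: reuse it
    inputs = {d: evaluate(d) for d in GRAPH[name]["deps"]}
    cache[k] = eval(GRAPH[name]["code"], {}, inputs)
    return cache[k]
```

Editing only "out" would leave the cached "osc", "env", and "amp" outputs valid, which is exactly the "change near the final output" case above.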

Those are my ideas anyway.


Finally, functional languages. I've just been shown a paper about
"Haskore", a Western-music score generation system written in Haskell.
It has converters for Csound scos and MIDI, amongst others. It looks
quite neat; I will post the URL if people are interested. I feel that
functional languages are certainly the way forward, and was wondering
whether there is an actual audio synthesis system written in Haskell or
a similar functional language. I'm sure this is old ground, but maybe
something has come up recently?

Bye!
Ben.