| I think a simpler design would suffice.
Instruments that send signals to each other would of course have to be in
the same address space. But I think for now, it would be enough to manually
assign them to groups with a new opcode:
igroup depends myinsno, otherinsno1, otherinsno2, ..., otherinsnoN
Global signals could be handled with something like OSC.
Other than that, all that's necessary is for each group to skip all "i"
statements except those assigned to it, and to collect audio into a common
output (again, with a new opcode):
netout igroup, achn1, achn2, ..., achnN
This opcode would need a semaphore so that each process would wait after
output until all processes have output and the current ksmps or buffer has
been written to the master output. But this would be pretty easy to do.
Duplicating ftables, orchestras, and so on is no big deal on current
machines. Address space, disk space, and network speed are negligible in
comparison with sheer processor overhead and cache hits.
I don't think parallelizing at the opcode level is a good idea unless the
transfer of a-rate variables across address space boundaries is much more
efficient than it is likely to be -- there are dozens or hundreds of these
transfers for each input or output transfer.
I'm wondering whether this would be very usable in live performance compared
to just having a fast processor and a lot of memory, but I am quite confident
it would be extremely helpful in rendering off-line -- it would probably
enable the limit of what is practical with Csound to jump an order of
magnitude.
Regards,
Mike
----- Original Message -----
From: "Steven Yi"
To:
Sent: Sunday, April 09, 2006 11:48 PM
Subject: Re: [Cs-dev] Object design question for API use
Hi All,
Sorry for taking a while to chime in, but I thought I'd mention I've
been very interested in a Csound that could scale both across
processors for multi-core chips and across a heterogeneous network of
machines. The interest has not been in collaborative playing, though
the ideas for the design of a parallel Csound should be able to
accommodate that with some careful handling of the input data from
performers.
Some things that have been on my mind regarding this:
-there would have to be an analysis phase to determine dependencies
after compilation but before rendering
-a dependency graph could be generated by comparing opcodes against
a database of known dependency features, so that the dependency info
could be derived from the data structures currently in place; this
would require work to keep the known information in sync, but it would
not increase memory requirements by adding more information to the
data structures, and it could be developed separately. It may also be
easier to handle opcode plugins this way, since no discovery mechanism
would have to be implemented to identify their dependencies.
-there are a lot of issues involved in analyzing dependencies between
instruments (communication via global variables, zak, the software bus,
ftables, etc.) and on other things like ftables and external files (i.e.
audio files, text files used for ftables, etc.). Some of these are
easier in a shared-memory situation (multi-core processor) but trickier
in a distributed one. Ftables seem particularly troublesome for
dependency-graph analysis, especially if the ftable number is coming in
via a pfield in an i-statement.
-partitioning work between processors or between computers could use
different strategies depending on whether the render is realtime or to disk
-Endianness would be a factor for communication
-curious about strategies where other processors could render their
load ahead of time while the main host processor grabs as much work as
possible to max out the processing
It seems that the best chance of getting a noticeable performance
increase would be from a shared-memory system versus a distributed
system, which may be a good way to go as it seems Intel Core Duo
processors are becoming the standard for the current round of laptops,
and multicore chips seem to be on the horizon all around.
Most of the ideas I've had so far mainly revolve around partitioning
the problems and scheduling at the instrument/note level, but there
could also be some significant gains from parallelizing at the opcode
level, on an opcode-by-opcode basis. I'm not sure whether working at
that granular a level and at a higher level at the same time would take
away from each other, though, if they compete without knowing about
each other's demands on the CPU.
Just some of what's going on in my mind lately; I'm still just getting
into researching parallel computing in earnest and am very interested
to see something like this become possible with Canonical Csound.
steven
On 4/8/06, Michael Gogins wrote:
> Sure, I'd like to see your code. BTW, I doubt string processing is going
> to add any significant overhead either. When it comes to overhead, think
> about functions to call, bytes to move or access, and multiplications to
> do. Obviously in an audio synthesizer the audio processing overhead will
> usually far outstrip anything else. Doing 100 string transfers x 64
> characters a second (or even 1000) will be small compared to doing
> 44,100 x 2 channels x 8 bytes a channel transfers a second, and of
> course the audio multiplications are far beyond any string processing or
> GUI event processing (any stuff the GUI does to move pixels around is
> now on a separate card with its own memory and processor, so forget
> that).
>
> Regards,
> Mike
>
> ----- Original Message -----
> From: "Iain Duncan"
> To: ; "Michael Gogins"
>
> Sent: Saturday, April 08, 2006 11:20 PM
> Subject: Re: [Cs-dev] Object design question for API use
>
>
> >> It's in line with my own experience that synchronization overhead is
> >> quite low.
> >
> > Yeah, I just checked again, and if the display is not updating 128
> > knobs, the cpu diff is about 2% for me. So well worth it!
> >
> >> What I suggest is that you go ahead and do the multi-Csound,
> >> audio-mixing part now and get it out there. A lot of people would be
> >> interested in using this, especially for rendering big complex pieces
> >> off-line.
> >>
> >> If and when you do this, be SURE to make the connections/queues between
> >> different instances of Csound and the mixer that blends them ABSTRACT so
> >> that they can be network connections. Then Csound would be clusterable.
> >> Then the instrument definitions could basically be as elaborate as one
> >> would like.
> >
> > The multi-csound part is less of a pressing concern for me than certain
> > other components, *but* I very much want to get the base right to enable
> > that. If that is something you are interested in and able to help with,
> > I will make the multi-csound part a higher priority. A multi-user setup
> > with clustering csound on multi-cpus would rock for live improvised
> > computer music!
> >
> > May I send you my base module code for comment/criticism? As is, it is
> > working, but I think it would be good to add the ability to put an
> > arbitrary string into the message. However, I don't want to
> > unnecessarily slow down message transfer. Perhaps the message structure
> > should depend on the message type? Ideally, the message structure
> > should be flexible enough that ALL communication between ALL modules
> > happens as messages sent from modules to the controller and vice versa.
> > I foresee needing a string in there to send error messages to GUIs,
> > allow sending of csd filenames, and use for OSC messages if necessary.
> > At the same time, this message will also be used to write a single
> > audio sample in to csound in the case of loading audio files remotely,
> > so we want to keep that as quick as possible.
> >
> > Thanks
> > Iain
> >
>
>
>
>
>
> -------------------------------------------------------
> This SF.Net email is sponsored by xPML, a groundbreaking scripting
> language
> that extends applications into web and mobile media. Attend the live
> webcast
> and join the prime developer group breaking into this new coding
> territory!
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642
> _______________________________________________
> Csound-devel mailing list
> Csound-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/csound-devel
>