I did some preliminary work on parallel processing in Csound with the Csound API, Python and Pyro. For my purposes (realtime), I found that the added latency was not tolerable. Keep in mind that I used only very basic tools to test this, and that better results should be expected with more work and more competence. Both Michael and Victor have described ways of passing audio buffers between different instances of Csound (on the same computer), and I have used Pyro to pass data (audio buffers or other data) between different computers. The technical aspects of splitting, and of communication between the different instances, can be solved with these tools. But a more significant problem is how to split the processing load. Steven has a much better grip on this issue than I do, and I have little to add to his explanation.

I also agree with Steven that multicore/multi-CPU on one computer seems more interesting than clustering for (realtime) audio work. The current ImproSculpt version does not parallelize audio processing, but rather runs Csound in a single process on a dedicated CPU, and runs other tasks (e.g. the GUI) on another CPU or another computer.

best
Oeyvind

2008/1/16, Steven Yi :
> I had been experimenting with a multi-core implementation of Csound as
> opposed to a distributed cluster solution. I had originally thought
> clustering would be an interesting thing to do a few years ago, but I
> think that these days it is more important to support multi-core
> rather than clusters, as CPUs are heading in that direction.
>
> The problems with either way of parallelizing Csound have to do with
> the interdependence of instruments through global variables, zak
> channels, busses, etc., as well as shared resources like ftables. In a
> multi-core solution where memory is shared, ftables are not so big an
> issue and only need access to them to be limited. The instrument
> interdependence is awfully difficult to analyze, as opcodes may
> alter global data such as ftables.
> What I had come up with in the multi-core work I was doing was to
> follow Csound's natural processing order: process instances of
> instruments in instr id order (all instances of instr 1 first, then
> instr 2, etc.). I then wrote mutex opcodes, and the user would have to
> guard access to shared resources themselves, which I think is a
> reasonable expectation. This setup does not allow processing all of
> instr 1 at the same time as instr 2; it does all instances of instr 1
> first, then instr 2, which can limit its effectiveness. But then
> again, it is a much simpler solution to implement first, and I saw it
> as a required first step. Even working on this just a little made
> apparent a number of design changes required to Csound's internals to
> achieve even the simpler multi-core implementation (e.g. all goto
> opcodes work by using the curevt pointer, which points at the current
> instrument instance, but that cannot be relied on when multiple
> instrument instances may be actively processed at the same time).
>
> The problems I saw with a generic clustered solution were having to
> deal with network latency, global memory, access to resources (e.g.
> wave files: where would they reside? copies would need to be
> available to all nodes on the cluster), and heterogeneous processing
> profiles due to possibly different CPU types. This all becomes very
> tricky to balance, and profiling would be necessary to know how to
> divide the work in a way that makes the most use of each CPU, and
> does so faster than the network latency slows everything down. Custom
> clustered solutions, where one might, say, design orchestras so that
> no instruments depend on each other, or manually create one orc per
> node, are feasible and may provide performance gains. But I think a
> generic clustered solution would be very difficult, and a multi-core
> solution would yield more results for the work put in.
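The scheduling Steven describes above (all active instances of instr 1 rendered concurrently, a barrier, then all instances of instr 2, with the user guarding shared resources through mutex opcodes) can be sketched as a toy model in plain Python. This is not Csound's internals; every name below is invented for illustration, with a lock standing in for the proposed mutex opcodes:

```python
# Toy model of instr-id-ordered parallel processing (not Csound code).
import threading
from concurrent.futures import ThreadPoolExecutor

shared_ftable = [0.0] * 8          # stand-in for a shared global ftable
ftable_lock = threading.Lock()     # stand-in for a mutex opcode pair

def render_instance(instr_id, inst_num):
    # Work on instance-local state needs no lock...
    local = instr_id * 10 + inst_num
    # ...but access to the shared table must be guarded by the user.
    with ftable_lock:
        shared_ftable[inst_num % len(shared_ftable)] += local
    return local

def run_kcycle(instances_by_instr):
    """instances_by_instr: dict {instr_id: number of active instances}."""
    results = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        # instr ids are processed strictly in order (Csound's natural
        # order), so consuming each map() result acts as a barrier...
        for instr_id in sorted(instances_by_instr):
            n = instances_by_instr[instr_id]
            # ...while all instances of one instr run concurrently.
            results.extend(
                pool.map(lambda i: render_instance(instr_id, i), range(n)))
    return results

print(run_kcycle({1: 3, 2: 2}))    # instr 1's three instances, then instr 2's two
```

The barrier between instr ids is what limits effectiveness, exactly as Steven notes: instr 2's instances sit idle until every instance of instr 1 has finished, but in exchange the ordering guarantees of sequential Csound are preserved.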
>
> That's just my two cents on that! =P
> steven
>
> On Jan 16, 2008 5:28 AM, Victor Lazzarini wrote:
> > Yes, once the Csound lib is available, then it's a case of writing
> > distributed programs with tools such as MPI. Depending on the
> > cluster, and on how access to it is granted, it might even be
> > possible to collect audio data on a microcomputer connected to it
> > and play it in realtime.
> >
> > Victor
> >
> > At 12:37 16/01/2008, you wrote:
> > >Victor Lazzarini wrote:
> > >
> > >>yes, but you are talking here of using the Csound API, either from C or
> > >>perhaps from Java or Python. With C it's possible to use MPI and build
> > >>a cluster-csound host.
> > >
> > >Hmm, sounds promising. I'm not totally sure what the implications are of
> > >"using the Csound API". Would it be possible to throw a regular csd at a
> > >cluster like this and have it render to disk?
> > >
> > >>I think Oeyvind has looked into doing something with Python, perhaps
> > >>he can give you some pointers.
> > >
> > >That would be great! Looking forward to any information.
> > >
> > >--
> > >peace, love & harmony
> > >Atte
> > >
> > >http://atte.dk | http://myspace.com/attejensen
> > >http://anagrammer.dk | http://modlys.dk
> > >
> > >Send bugs reports to this list.
> > >To unsubscribe, send email sympa@lists.bath.ac.uk with body "unsubscribe
> > >csound"
> >
> > Victor Lazzarini
> > Music Technology Laboratory
> > Music Department
> > National University of Ireland, Maynooth
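On Oeyvind's point about passing audio buffers between computers: Pyro hides the transport behind remote-object calls, but the underlying plumbing is just framed binary data over a socket. A minimal, standard-library-only sketch (all names hypothetical, with a loopback connection standing in for two machines) looks roughly like this:

```python
# Bare-bones sketch (hypothetical names, stdlib only) of shipping an
# audio buffer from one process or machine to another; Pyro would wrap
# this plumbing in a remote-object call.
import socket
import struct
import threading

def send_buffer(sock, samples):
    # Frame the buffer: 4-byte big-endian count, then that many doubles.
    sock.sendall(struct.pack("!I%dd" % len(samples), len(samples), *samples))

def recv_exact(sock, n):
    # Read exactly n bytes (recv may legally return short reads).
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed early")
        data += chunk
    return data

def recv_buffer(sock):
    (count,) = struct.unpack("!I", recv_exact(sock, 4))
    return list(struct.unpack("!%dd" % count, recv_exact(sock, 8 * count)))

# Loopback demo: a "renderer" sends one block of samples to a "mixer".
server = socket.socket()
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []
def mixer():
    conn, _ = server.accept()
    with conn:
        received.append(recv_buffer(conn))

t = threading.Thread(target=mixer)
t.start()
with socket.create_connection(("127.0.0.1", port)) as renderer:
    send_buffer(renderer, [0.0, 0.25, -0.5, 1.0])
t.join()
server.close()
print(received[0])   # the mixer gets the exact block the renderer sent
```

Every block exchanged this way pays the framing and network transfer cost on top of the audio buffer period, which is consistent with Oeyvind's finding that the added latency was not tolerable for realtime use.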