[Cs-dev] Scaling multicore to cluster
Date | 2013-08-29 01:07 |
From | Andres Cabrera |
Subject | [Cs-dev] Scaling multicore to cluster |
Attachments | None |
Hi,

I'm wondering if anyone has any thoughts or ideas on whether the multicore facilities in Csound could be extended beyond multicore to multiple machines. Of course there are issues like synchronization and network protocols to consider, but I'm thinking this would be a nice match for some of the things going on here in the Allosphere: we are doing visualizations of very large datasets, so it might be good to have sonifications of very large numbers of elements.

Any thoughts?

Cheers,
Andrés
Date | 2013-08-29 08:19 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] Scaling multicore to cluster |
Back in the day, I did some initial experiments using Csound 5 and MPI on a big multiprocessor machine, but that was done by using multiple Csound instances. I'm not sure how we could do this in a neat way inside a single instance, but maybe things have moved on in the world of interprocess communication by now. What is the Allosphere?

Victor

Dr Victor Lazzarini
Senior Lecturer
Dept. of Music
NUI Maynooth
Ireland
tel.: +353 1 708 3545
Victor dot Lazzarini AT nuim dot ie

_______________________________________________
Csound-devel mailing list
Csound-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/csound-devel
Date | 2013-08-29 09:13 |
From | Michael Gogins |
Subject | Re: [Cs-dev] Scaling multicore to cluster |
Attachments | None |
If on separate machines, they would have to be essentially separate instances of Csound: some kind of map-reduce, where the map step computes one kperiod on each instance and the reduce step mixes the results down into one spout. Plus network channels.
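The map/reduce idea above can be sketched as follows. This is a minimal illustration only: `RemoteInstance` and its `perform_kperiod()` are hypothetical stand-ins for a Csound engine reachable over the network, not a real Csound API.

```python
# Sketch of the map/reduce idea: "map" asks each (hypothetical) remote
# Csound instance for one kperiod of audio; "reduce" mixes the buffers
# down into a single spout. All names here are illustrative.
import math
from concurrent.futures import ThreadPoolExecutor

KSMPS = 64  # samples per kperiod

class RemoteInstance:
    """Stand-in for a Csound engine running on another machine."""
    def __init__(self, freq):
        self.freq = freq
        self.phase = 0.0

    def perform_kperiod(self):
        # In reality this would be a network round-trip; here we just
        # synthesize one kperiod of a sine wave locally.
        buf = []
        for _ in range(KSMPS):
            buf.append(math.sin(self.phase))
            self.phase += 2 * math.pi * self.freq / 44100.0
        return buf

def mix_kperiod(instances, pool):
    # map: compute one kperiod on every instance (in parallel)
    buffers = list(pool.map(lambda inst: inst.perform_kperiod(), instances))
    # reduce: sum the per-instance buffers into one spout
    return [sum(samples) for samples in zip(*buffers)]

instances = [RemoteInstance(f) for f in (220.0, 330.0, 440.0)]
with ThreadPoolExecutor(max_workers=len(instances)) as pool:
    spout = mix_kperiod(instances, pool)
```

A real version would replace `perform_kperiod()` with a network request and would need the synchronization Victor and Andrés mention, but the map/mix shape stays the same.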
Date | 2013-08-29 09:15 |
From | Richard Dobson |
Subject | Re: [Cs-dev] Scaling multicore to cluster |
This is something I have an interest in, dating back to the time of our "LHCsound" project** to sonify collision data from the LHC. Much depends on the nature of the dataset. The LHC data (from the ATLAS detector) was typically in three columns or more, and the simplest approach was to create a different instrument for each column (where one column, typically the first, representing distance from the beam axis, was selected as the source of all event timing); these could in principle be rendered by separate Csound instances. Much would therefore depend on the nature of those large datasets.

Of course, audio synchronisation would require control by a single master sample clock; I assume the Allosphere already has audio hardware that can manage that aspect.

As it happens, we received an invitation (so I was told by Lily Asquith) from the Allosphere to be part of a collaboration when the project was at its peak of publicity back in 2010, but being only a 6-month funded project with otherwise no institutional support, this was sadly not possible. Carla Scaletti has been working on some of the "real" post-discovery Higgs data, I assume using Kyma, and is probably better placed, both technically and geographically, to contribute.

One aspect I discussed at the time but was not in a position to follow up was the possibility (highly appropriate to particle collision data) of a full periphonic surround render, e.g. using Higher-Order Ambisonic B-Format (though VBAP etc. would do very well for it too).

Since a dataset is by definition pre-created, there would seem to be no particular need to compute the audio in real time; the task would essentially be playback of N channels of precomputed audio.
In the case of my LHC examples, some of the data was timed at such a high density (rendering 10,000 data lines in a few seconds) that even though my Csound instruments comprised samples rather than synthesis, it was still too densely timed to render cleanly in real time (at least, on my machine!). It is in effect a form of granular synthesis where every grain is a new note. That might yet be a viable model for a massively distributed cluster of Csound engines.

Funding permitting, I hope to restart activity in this area in the context of outreach to UK schools (with their new emphasis on teaching programming and CS); one interesting angle could be multiple Csound engines running on a cluster of (overclocked?) Raspberry Pis.

Richard Dobson

**
http://www.lhcsound.com (the original site; flash-based, so will not play on iOS devices)

http://www.lhcsound.wordpress.com (new site focussing on Carla Scaletti's work)

My prototype and unfinished software creating Csound scores:
http://people.bath.ac.uk/masrwd/lhcsoundresources.html

This uses the wxWidgets library. The only reasons I have not (yet) posted the sources are (a) I have not got around to it, and (b) the code was put together very quickly indeed, and is embarrassing in many ways!
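The "every data line becomes a note" approach described above can be sketched roughly like this. The three-column layout, the column meanings, and the scaling are illustrative assumptions, not the actual LHCsound data format:

```python
# Hypothetical sketch: each row of a made-up three-column dataset maps
# to one Csound score i-statement, with the first column driving event
# timing (as described for the ATLAS data). Scalings are illustrative.
import csv, io

def rows_to_score(text, time_scale=0.001):
    """Turn three-column data lines into Csound 'i' statements."""
    score = []
    for row in csv.reader(io.StringIO(text)):
        t, amp, freq = (float(x) for x in row)
        start = t * time_scale          # column 1 -> onset time
        dur = 0.05                      # short, grain-like notes
        score.append(f"i 1 {start:.4f} {dur} {amp:.3f} {freq:.1f}")
    return "\n".join(score)

data = "10,0.5,440\n12,0.3,660\n15,0.8,220\n"
print(rows_to_score(data))
```

At 10,000 lines this produces a score dense enough to behave like granular synthesis, which is what makes rendering it in real time on one machine difficult.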
Date | 2013-08-29 10:32 |
From | jpff@cs.bath.ac.uk |
Subject | Re: [Cs-dev] Scaling multicore to cluster |
> What is Allosphere?

www.allosphere.ucsb.edu

A large spherical space with speakers and images. I first heard about it at ICMC Singapore 2003, when it was in the studio report from UCSB; I followed with my studio report, which was something of a contrast (no equipment, no students, no staff).

==John
Date | 2013-09-01 23:20 |
From | Andres Cabrera |
Subject | Re: [Cs-dev] Scaling multicore to cluster |
Attachments | None |
Hi,

Thanks everyone for the thoughtful replies.

The Allosphere is an "instrument" (in both the technical and the musical sense) built for scientific visualization as well as musical performance. It consists of a small computer cluster (I think the current count is around 10 machines) driving more than 20 projectors to generate realtime interactive visualization and sonification (in 3D, 360 degrees above and below). The sound system consists of 54.1 channels arranged in three rings of 12, 30, and 12 speakers (elevated, at ear height, and lower). It uses Echoaudio Audiofire interfaces (currently four, but moving to more to allow 100+ channels) connected to a single OS X machine, which handles audio (plus projector remote control and other things).

My thoughts are about building a system that automatically sets up the network and distributes the Csound instruments and events to the different machines, which render the audio and send it back using something like netjack. I'm wondering what kinds of processes are worth parallelizing in this day and age. I'm thinking of things like:

- Spatialized additive synthesis
- Heavy processing, like lots of instances of the phase vocoder (or the sliding phase vocoder)
- Ambisonics projection with many sources
- Wavefield synthesis with many sources

Also, I think multicore Csound is related to this, but it actually has somewhat different goals, so a slightly different method is needed for a cluster:

- In multicore Csound, threads are spawned depending on whether threaded execution is faster than non-threaded.
- For "cluster Csound", the goal is to spread calculations across machines when it is not possible to do them on a single machine. The calculations might be "slower" but might render more quickly because they are parallelized.

Any thoughts or ideas?
Cheers,
Andrés
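The distribution scheme sketched in the message above (dispatch score events to render machines, mix the returned audio on the master) might look something like this. `RenderNode`, the round-robin policy, and all the numbers are hypothetical, standing in for real network transport such as netjack:

```python
# Illustrative sketch (not a real Csound or netjack API): score events
# are distributed round-robin across a set of render machines, and the
# audio each machine returns is summed into one buffer on the master.
KSMPS = 64

class RenderNode:
    """Stand-in for a remote machine running its own Csound instance."""
    def __init__(self, name):
        self.name = name
        self.events = []

    def send_event(self, event):
        self.events.append(event)   # in reality: sent over the network

    def render_kperiod(self):
        # in reality: one kperiod of audio streamed back via netjack;
        # here, a dummy buffer whose level reflects the event count
        return [0.1 * len(self.events)] * KSMPS

def dispatch(events, nodes):
    # round-robin: event i goes to node i mod N
    for i, ev in enumerate(events):
        nodes[i % len(nodes)].send_event(ev)

def mix(nodes):
    buffers = [n.render_kperiod() for n in nodes]
    return [sum(samples) for samples in zip(*buffers)]

nodes = [RenderNode(f"render{i}") for i in range(3)]
dispatch([f"i 1 0 1 0.5 {220 * k}" for k in range(1, 7)], nodes)
out = mix(nodes)
```

A load-aware dispatcher (sending heavy instruments like phase-vocoder instances to the least-loaded node) would fit the "cluster Csound" goal better than plain round-robin, but the dispatch/render/mix split is the same.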