| Steven, what I was thinking of adding to your code is:
(1) Synchronize all Csound API calls that access mutable global data.
Some of these, of course, are called off the CSOUND struct in opcode
code.
(2) Synchronize a selected set of global opcode reads and writes:
busses, mixers, signal flow graph opcodes, in/out opcodes, function
table opcodes.
(3) Provide user-level synchronization opcodes as you suggest.
It's not totally clear to me what the practical tradeoffs are between
built-in synchronization of a selected set of opcodes, and providing
only user-level synchronization. In any case, I think it would take
some thought to come up with the clearest and simplest orchestra
syntax for user-level synchronization.
I think this would take us to 75%-95% of what we really need, and
probably somewhat more than Pd/Max/SuperNova.
What I am trying to figure out is what more, both in theory and in
practice, ParCS gives us.
Regards,
Mike
On Mon, Jun 7, 2010 at 1:07 PM, Steven Yi wrote:
> I'm catching up now with this thread. One thing I wanted to note with
> what I had written for parallel Csound is that for protecting
> resources, it relied on user intervention to add locks/unlocks to
> Csound code. I was concerned that trying to automate locking would
> place a lock around every opcode, instead of protecting regions of
> code. I guessed at the time that it would be more efficient to
> protect a region of code than to pay the cost of multiple
> locks/unlocks.
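The tradeoff described here can be illustrated structurally; this is a minimal Python sketch of the two locking shapes (made-up state and function names, not Csound code). Both are correct, but the region version pays for one acquire/release per region instead of one per operation:

```python
import threading

lock = threading.Lock()
state = {"a": 0, "b": 0, "c": 0}

def per_opcode_locking(n):
    # Automated scheme: one acquire/release per "opcode" -> 3*n lock operations.
    for _ in range(n):
        with lock: state["a"] += 1
        with lock: state["b"] += 1
        with lock: state["c"] += 1

def region_locking(n):
    # User-marked region: one acquire/release per pass -> n lock operations.
    for _ in range(n):
        with lock:
            state["a"] += 1
            state["b"] += 1
            state["c"] += 1

per_opcode_locking(1000)
region_locking(1000)
print(state)  # {'a': 2000, 'b': 2000, 'c': 2000}
```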
>
> Note too the design assumed that instrument order would be maintained
> in processing (process all instr 1, then all instr 2, etc.), as this
> is Csound's non-parallel instrument processing order. That's a bit
> limiting, but it also simplified issues greatly.
>
> The original design I put together really was more of an effort to get
> something going for parallel csound, a somewhat conservative approach
> to get the ball rolling. I haven't spent much time on it since, nor
> have I had a chance to look at the ParCS branch work, though I'm
> finding new interest after this thread.
>
> steven
>
>
>
> On Mon, Jun 7, 2010 at 12:14 PM, Michael Gogins
> wrote:
>> On looking again at the code, the ParCS branch appears to work at the
>> instrument level of concurrency.
>>
>> I am willing to work on the code myself, but only if it solves
>> problems that Steven Yi's version, which I did a little work on, does
>> not solve.
>>
>> The problems that need to be solved in any concurrent Csound are:
>>
>> (1) Run instruments in parallel.
>> (2) Synchronize writing to the output buffers.
>> (3) Synchronize reading and writing other global data.
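The separation between (1) and (2) can be sketched as follows: each instrument renders its block privately in parallel, and only the short mix into the shared output buffer is serialized. This is a Python illustration of the pattern only (hypothetical names, not Csound's actual buffer code):

```python
import threading

KSMPS = 8
spout = [0.0] * KSMPS          # shared output buffer for one k-cycle
spout_lock = threading.Lock()  # protects the mix-down: problem (2)

def instr(amp):
    # (1) Render privately, in parallel, touching no shared state ...
    local = [amp] * KSMPS
    # (2) ... then serialize only the brief mix into the output buffer.
    with spout_lock:
        for i in range(KSMPS):
            spout[i] += local[i]

threads = [threading.Thread(target=instr, args=(a,)) for a in (0.25, 0.5, 1.0)]
for t in threads: t.start()
for t in threads: t.join()
print(spout[0])  # 1.75 in every sample slot, regardless of thread order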
>>
>> Both the ParCS branch and the Yi code will do (1) and (2). The ParCS
>> branch appears to do (3). In addition, the ParCS branch appears to:
>>
>> (4) Assemble a signal flow graph of instruments at orchestra compile
>> time, to determine which instances can run in parallel; it may be that
>> single instances of different instrument templates can run in parallel
>> if they are on the same "level" of the signal flow graph.
>> (5) Cost out nodes in the signal flow graph of instruments, to
>> determine whether they are worth running concurrently, or perhaps
>> to prioritize and balance performance?
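If that reading of (4) is right, the leveling idea can be sketched like this (Python, with a made-up instrument dependency graph; this reflects the interpretation above, not ParCS's actual code). Instruments on the same level have no data dependency on each other and could run concurrently:

```python
# Instrument dependency graph: instr -> set of instruments it reads from.
# (Illustrative example only.)
reads_from = {
    1: set(),        # generator
    2: set(),        # generator
    3: {1, 2},       # mixes the outputs of 1 and 2
    4: {3},          # global effect on 3's bus
}

def levels(graph):
    """Longest-path level of each node; equal levels can run in parallel."""
    memo = {}
    def level(n):
        if n not in memo:
            memo[n] = 1 + max((level(d) for d in graph[n]), default=-1)
        return memo[n]
    for n in graph:
        level(n)
    return memo

print(levels(reads_from))  # {1: 0, 2: 0, 3: 1, 4: 2}
```

Here instruments 1 and 2 share level 0, so their instances could be dispatched to different cores before 3 and then 4 run.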
>>
>> What I need to know is if my understanding of (4) and (5) is correct,
>> and what the limitations of ParCS are.
>>
>> It would certainly be possible to leave aside ParCS and finish Yi's
>> approach, which would work both with the existing compiler and with
>> the new compiler. I believe that much of (3) can be accomplished
>> simply by synchronizing Csound API calls and some opcode write calls.
>> Directly writing to global variables would not be protected (or
>> perhaps it could be, in the assignment opcodes); but calling member
>> functions of the Csound API from inside opcodes and instruments, and
>> some opcode writes, _would_ be synchronized, e.g. function table
>> access would be synchronized.
>>
>> It seems like just doing that much would be somewhat ahead of what
>> Pure Data will ever do, and be roughly as concurrent as Max/MSP or
>> SuperNova. It seems like ParCS might do even more, and that is why I
>> am anxious to get answers to my questions.
>>
>> Regards,
>> Mike
>>
>> On Mon, Jun 7, 2010 at 11:42 AM, Victor Lazzarini
>> wrote:
>>> I'm not sure what you meant really, but in my opinion (backed up by
>>> the FAUST results) parallelisation works best
>>> at the instrument granularity. In some cases, it's possible that it
>>> would be good at opcode level, but I am convinced it is not
>>> a good idea at performance loop level (at least with current hardware).
>>>
>>> I would like to see the ParCS project completed and released. I would
>>> work on it, if I am shown what needs to be done.
>>>
>>> Victor
>>>
>>>
>>> On 7 Jun 2010, at 12:30, Michael Gogins wrote:
>>>
>>>> Yes, Miller Puckette also was pessimistic about doing more than poly~.
>>>>
>>>> After looking again at the parallel Csound code, I believe I was
>>>> mistaken in thinking it parallelized on the opcode level of
>>>> granularity. I think its difference from Steven Yi's code is in being
>>>> integrated into the orc compiler to define a directed acyclic graph
>>>> of instrument instances (not opcode instances) for the purpose of
>>>> properly partitioning the work between cores, and synchronizing global
>>>> variables. This of course is still an improvement.
>>>>
>>>> Because of this, I am now wondering whether a completely new design
>>>> for the signal flow graph is required for software synthesizers to take
>>>> the best possible advantage of multiple cores. As Peiman noted, there
>>>> are cases, such as the pvs opcodes, where concurrent running of opcode
>>>> instances would be musically advantageous.
>>>>
>>>> Regards,
>>>> Mike
>>>>
>>>> On Mon, Jun 7, 2010 at 12:04 AM, Tim Blechmann wrote:
>>>>>> Thanks for your comments! Are you and John ffitch in communication?
>>>>>
>>>>> Well, not really ... except that he sent me the papers on parallel
>>>>> Csound some time ago. However, I think that different projects
>>>>> require different approaches to parallelization, so SuperCollider
>>>>> and Csound require different approaches. And as far as I
>>>>> understand, there will never be a generic way to parallelize Max/Pd
>>>>> signal graphs apart from the poly~ or pd~ hacks.
>>>>>
>>>>> cheers tim
>>>>>
>>>>>
>>>>> ------------------------------------------------------------------------------
>>>>> ThinkGeek and WIRED's GeekDad team up for the Ultimate
>>>>> GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the
>>>>> lucky parental unit. See the prize list and enter to win:
>>>>> http://p.sf.net/sfu/thinkgeek-promo
>>>>> _______________________________________________
>>>>> Csound-devel mailing list
>>>>> Csound-devel@lists.sourceforge.net
>>>>> https://lists.sourceforge.net/lists/listinfo/csound-devel
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Michael Gogins
>>>> Irreducible Productions
>>>> http://www.michael-gogins.com
>>>> Michael dot Gogins at gmail dot com
>>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>