
Csound 7: New Processing Graph Proposal

Date: 2015-11-23 15:43
From: Steven Yi
Subject: Csound 7: New Processing Graph Proposal
Hi All,

I'm posting this to the user list to solicit feedback.  I was talking
with Victor today over coffee and we were reviewing what's done so far
for Csound 7 and some new things we might consider to include for the
release. One idea that came up was particularly interesting, and we
thought it would be a major addition to the system.

One of the issues that has been raised numerous times over the years
is the issue of instrument processing order and modifying that order.
The problem is that the existing processing model has been in place
for many years, is well known and easy to understand, and is somewhat
at odds with dynamic ordering.

In discussing it, we came up with the idea to create a new, separate
processing graph.  The existing system orders instances by instrument
number and is really keyed to the instrument definition, not the
instrument instance.  We can see the existing system as a fixed tree
where instrument definitions create a fixed node where instances of
that instrument are appended to for performance. It would look
something like this:

ROOT
|-INSTR 1
  |--instances
|-INSTR 2
  |--instances

The existing system now has an almost 30-year-old practice associated
with it for ordering of computation.  It is simple and easy to reason
about, if inflexible. It is also tied heavily into the score and
event processing system. By this, I mean that "i" events can be
defined as "instantiate instrument x, and also attach to processing
graph at node INSTR x".

Instead of having aspects of ordering that are tied to the instrument
definition, the proposal is to create a new processing graph that only
deals with concrete instrument instances.  This system would be an
additional system to use and would not change the existing system; the
existing practices and historical works retain their meaning and
continue to operate as is.

In the new system, the user would work with the graph entirely within
orchestra code.  Users would create instances of instruments and
explicitly attach them to target nodes.  A global root node would be
available for an engine; new nodes can be created to group instrument
instances together and allow users to specify ordering.

An example of this might look like this:

instr MyInstr
...
endin

;; create instance of MyInstr, starting now, indefinite duration, etc.
inst0:Instr = MyInstr(0, -1, 440, -12)
inst1:Instr = MyInstr(0, -1, 880, -12)

append_to_node(ROOT_NODE, inst1)
append_to_node(ROOT_NODE, inst0)

In the above, inst0 and inst1 are defined as variables of type Instr
(using new CS7 syntax).  append_to_node adds the instances to the
globally available ROOT_NODE.

Another example would be:

instrNode:Node = Node()

add_to_node(ROOT_NODE, instrNode)
add_to_node(instrNode, inst0)
add_to_node(instrNode, inst1)

The system would initially look a lot like SuperCollider's node system
and have the same qualities of using nodes to determine ordering, but
not express dependency between node items. Users would be required to
deal with communications between instrument instances, such as using a
global array or bus system, as one would in SC3.
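
To make that concrete, communication through a global bus might look
something like this (a sketch only: Source, Reverb, and gaBus are
hypothetical names; only the Instr type and add_to_node come from the
proposal, the rest is ordinary orchestra code):

gaBus init 0

instr Source
  gaBus = gaBus + oscili(0.2, p4)
endin

instr Reverb
  aL, aR reverbsc gaBus, gaBus, 0.9, 12000
  out(aL, aR)
  gaBus = 0  ;; clear the bus after reading
endin

src:Instr = Source(0, -1, 440)
rvb:Instr = Reverb(0, -1)

;; node order expresses computation order: Source runs before Reverb
add_to_node(ROOT_NODE, src)
add_to_node(ROOT_NODE, rvb)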

Also, note that the above revives the notion of instrument as opcode
that was explored by Matt Ingalls with subinstruments. We would have
to change a little of what happens: defining a named instrument would
automatically generate an opcode with the instrument's name as its
opname, taking p2 and p3 as arguments 1 and 2, and so on.
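
To sketch the mapping (the instrument body is hypothetical; the
generated opcode takes start time and duration where p2 and p3 would
go, followed by the remaining p-fields):

instr MyInstr
  iamp = ampdbfs(p5)
  out(oscili(iamp, p4))
endin

;; generated opcode signature: MyInstr(p2, p3, p4, p5)
inst:Instr = MyInstr(0, -1, 440, -12)  ;; start 0, indefinite, 440 Hz, -12 dBFS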

Some additional notes:

1. User can modify order of instr instances.  This would be done using
opcodes such as remove_from_node(node, instr), insert_into_node(node,
instr, index).  As this is using actual instances and has nothing to
do with instr definitions, the user is in complete control over
ordering (and has the complete responsibility as well).
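
For example (opcode names as proposed; MyEffect is a hypothetical
instrument):

fx:Instr = MyEffect(0, -1)
add_to_node(ROOT_NODE, fx)

;; later: a new source must be computed before the effect,
;; so insert it at the head of the node
src2:Instr = MyInstr(0, -1, 660, -12)
insert_into_node(ROOT_NODE, src2, 0)

;; or detach the effect altogether
remove_from_node(ROOT_NODE, fx)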

2. To communicate directly to an instrument instance, local channels
may be used.  This would involve overloading chnget and chnset to
take in Instr instances, such as:

chnset inst0, "cutoff", k1  ;; used outside an instrument instance
chnget this, "cutoff"  ;; used within an instrument instance

Using channels allows a good correlation with the existing global
channel system.  This should also be fairly easy to then tie into the
API for exposing sending/getting values to an instance of an
instrument.

3. Victor and I also spoke about attaching opcodes directly to the new
graph.  (This tied into earlier conversations we had about opcodes as
values.)  There are complications here that need further exploration.

4. The new graph would not work with the existing parallel processing
implementation.  Exploring something like Supernova in SC3 is a
possibility, as is modifying the existing system to analyse based on
instrument instances rather than definitions.

5. This work would be best implemented in Csound 7 due to the use of
multi-character type names.  The new types and the system could
technically be written in CS6, but then we'd have to use
single-character type names to define Nodes and Instr instances.
Using the longer type names seems more appropriate to CS7.

6. The existing system would not be modified, and the event/score
system would not change. However, this does not mean one wouldn't be
able to use events to work with the new graph. For example, one could
write:

myNode:Node = Node()
...

instr NewGraph
  instr:Instr = MyInstr(.,.,.,.)
  add_to_node(myNode, instr)
endin

...

i "NewGraph" 0 2
i "NewGraph" 2 2

7. From an application developer perspective, one would be able to do
things like create mixers with effects and dynamically modify the
effects chain without losing any state of existing effects.
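
A sketch of that scenario, using the proposed opcodes (ParamEQ,
ReverbFX, and DelayFX are hypothetical effect instruments):

mixerNode:Node = Node()
add_to_node(ROOT_NODE, mixerNode)

eq:Instr = ParamEQ(0, -1)
rvb:Instr = ReverbFX(0, -1)
add_to_node(mixerNode, eq)
add_to_node(mixerNode, rvb)

;; later, during performance: swap the EQ for a delay; the reverb
;; instance keeps running, so its tail and other state are preserved
remove_from_node(mixerNode, eq)
dly:Instr = DelayFX(0, -1)
insert_into_node(mixerNode, dly, 0)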


This is the basic proposal for the new processing graph. It should be
considered a starting point.  The implementation will require
community feedback to understand all of the features we would want
out of the system, as well as to discover potential issues.

Thanks!
steven

Csound mailing list
Csound@listserv.heanet.ie
https://listserv.heanet.ie/cgi-bin/wa?A0=CSOUND
Send bug reports to
        https://github.com/csound/csound/issues
Discussions of bugs and features can be posted here

Date: 2015-11-23 16:34
From: Michael Gogins
Subject: Re: Csound 7: New Processing Graph Proposal
This is a good direction.

I would like to see the following features in such a system:

As in the current graph, it should be possible to have Csound
dynamically create new nodes of an existing instrument-type node when
the score implicitly demands it. The new nodes could become children
of the original node, or something like that. Such dynamic creation
should be enabled/disabled programmatically, somehow, perhaps as an
attribute of the original node.

The new graph should be designed with an eye towards efficient multi-threading.

There is an enormous body of academic and institutional research on
signal flow processing graphs, as these are used in a large number of
military and commercial embedded systems. I think a review of the
literature would be advisable before doing the design.

Best,
Mike

-----------------------------------------------------
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com



Date: 2015-11-23 17:06
From: Steven Yi
Subject: Re: Csound 7: New Processing Graph Proposal
Hi Michael,

I'm not sure anything automatic would be good. I think with the
original proposal, the user is free to perform operations to the graph
when an event/note is fired. The user is then free to explicitly say
what happens.  The existing system's problem is that an "i" statement
implicitly performs both the allocation of an instance and its
attachment to the signal processing graph.  Breaking that up gives us
the opportunity to explicitly say what happens, without the baggage
of trying to implement something in the core system that we have to
keep modifying for new use cases.

I suspect that with the new system, one would take a little time to
develop a personal set of code for the "automatic" aspects, then
carry on and just use that base performance code. This gives users
complete flexibility and means that within the core we have just a
small set of functions to maintain.
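
For example, such personal convenience code might be a small UDO that
allocates and attaches in one step (purely speculative: this invents
a UDO syntax for Instr return values that the proposal has not
defined, and the spawn name is hypothetical):

opcode spawn, Instr, ii
  ifreq, iamp xin
  ;; allocate an instance and attach it to the root in one call
  inst:Instr = MyInstr(0, -1, ifreq, iamp)
  add_to_node(ROOT_NODE, inst)
  xout(inst)
endop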

Also, for multi-core, I agree it is a big concern, but at the same
time, I would note the point made in the original post.  The Node
system here does not express dependencies between Instr instances,
only order of operation.  The signals from one Instr instance do not
feed into the Node to which it is attached.  This is not a signal
processing graph, per se.  However, this is a known system from SC3
and has a history and practice associated with it. (I'm fairly sure
this is SC3's model; others with more SC3 experience please correct me
if I am wrong on this point.)

For actual signal graphs, we would need to take even further steps.  I
believe Matt did some work along these lines with making the out
opcode write to what is effectively a local spout. (At least, I seem
to remember some code at the caller of the subinstrument that would
replace and restore spout.)  We would need to move those ideas along
further to make all instruments read from local spin/spout but do so
in a way amenable to parallelism.  The one worry here is that changes
required to do this may conflict with the historical processing model;
we would need to carefully analyse the implementation and ensure any
new semantic changes of what is input and output are backwards
compatible.

steven



Date: 2015-11-23 17:06
From: Hlöðver Sigurðsson
Subject: Re: Csound 7: New Processing Graph Proposal
So if I understand you right, it would be possible to skip the
processing graph system entirely and still have all the benefits of
parallel processing (where they apply).

Just an open thought: the current orchestra ordering system is very
clear and easy to understand; you know exactly where the signal runs
and in which order it changes. Using nodes and groups would add
complexity, as in SC with its groups and heads and tails of nodes,
which is very convenient for a realtime system. Would we want to
harvest more of the realtime capabilities that these graphs would
bring along, and in doing so, would we neglect non-realtime
processing and completely stop developing the score part of Csound
(which has been turning into a less and less important part of
Csound)? What I mean by harvesting the new possibilities would be,
for example, defining a new instrument after the Csound orchestra has
been compiled. My idea would be to add a Csound terminal/REPL running
on a concurrent thread that calculates new instrument definitions
before sending them to the main thread, so as not to glitch or miss
any audio samples unnecessarily.

I'm more interested in the discussion than the answer per se. It is
my opinion that Csound excels at non-realtime, heavy calculation of
sounds; being strong at realtime performance and realtime coding
could put Csound way ahead.

2015-11-23 16:43 GMT+01:00 Steven Yi <stevenyi@gmail.com>:
Hi All,

I'm posting this to the user list to solicit feedback.  I was talking
with Victor today over coffee and we were reviewing what's done so far
for Csound 7 and some new things we might consider to include for the
release. One idea that came up today was very interesting that we
thought would be a major addition to the system.

One of the issues that has been raised numerous times over the years
is the issue of instrument processing order and modifying that order.
The problem is that the existing processing model has been in place
for many years, is well known and easy to understand, and is somewhat
at odds with dynamic ordering.

In discussing it, we came up with the idea to create a new, separate
processing graph.  The existing system orders instances by instrument
number and is really keyed to the instrument definition, not the
instrument instance.  We can see the existing system as a fixed tree
where instrument definitions create a fixed node where instances of
that instrument are appended to for performance. It would look
something like this:

ROOT
|-INSTR 1
  |--instances
|-INSTR 2
  |--instances

The existing system has an almost 30 year old practice now associated
with it for ordering of computation.  It is simple, if inflexible, but
it is easy to reason about. It is also tied heavily into the score and
event processing system. By this, I mean that "i" events can be
defined as "instantiate instrument x, and also attach to processing
graph at node INSTR x".

Instead of having aspects of ordering that is tied to the instrument
definition, the proposal is to create a new processing graph that only
deals with concrete instrument instances.  This system would be an
additional system to use and would not change the existing system; the
existing practices and historical works retain their meaning and
continue to operate as is.

In the new system, the user would work with the graph entirely within
orchestra code.  Users would create instances of instruments and
explicitly attach them to target nodes.  A global root node would be
available for an engine; new nodes can be created to group instrument
instances together and allow users to specify ordering.

An example of this might look like this:

instr MyInstr
...
endin

;; create instance of MyInstr, starting now, indefinite duration, etc.
inst0:Instr = MyInstr(0, -1, 440, -12)
inst1:Instr = MyInstr(0, -1, 880, -12)

append_to_node(ROOT_NODE, inst1)
append_to_node(ROOT_NODE, inst0)

In the above, inst0 and inst1 are defined as variables of type Instr
(using new CS7 syntax).  append_to_node adds the instances to the
globally available ROOT_NODE.

Another example would be:

instrNode:Node = Node()

add_to_node(ROOT_NODE, instrNode)
add_to_node(instrNode, inst0)
add_to_node(instrNode, inst1)

The system would initially look a lot like SuperCollider's node system
and have the same qualities of using nodes to determine ordering, but
not express dependency between node items. User's would be required to
deal with communications between instrument instances, such as using a
global array or bus system, as one would in SC3.

Also, note that the above revives the notion of instrument as opcode
that was explored by Matt Ingalls with subinstruments. We would have
to change a little bit of what happens: defining a named instrument
would automatically generate an opcode with the name as opname, then
uses arguments with p2 and p3 as arg 1 and 2, and so on.

Some additional notes:

1. User can modify order of instr instances.  This would be done using
opcodes such as remove_from_node(node, instr), insert_into_node(node,
instr, index).  As this is using actual instances and has nothing to
do with instr definitions, the user is in complete control over
ordering (and has the complete responsibility as well).

2. To communicate directly to an instrument instance, local channels
may be used.  This would involved overloading chnget and chnset to
take in Instr instances, such as:

chnset inst0, "cutoff", k1  ;; used outside an instrument instance
chnget this, "cutoff"  ;; used within an instrument instance

Using channels allows a good correlation with the existing global
channel system.  This should also be fairly easy to then tie into the
API for exposing sending/getting values to an instance of an
instrument.

3. Victor and I also spoke about attaching opcodes directly to the new
graph.  (This tied into conversations about opcodes as values we had.)
There are complications here and needs some further exploration.

4. The new graph would not work with the existing parallel processing
implementation.  Exploring something like Supernova in SC3 is a
possibility, as is modifying the existing system to analyse based on
instrument instances rather than definitions.

5. This work would be best to implement in Csound 7 due to the use of
multi-character type names.  The new types and the system could
technically be written in CS6, but then we'd have to use
single-character type names to define Nodes and Instr instances.
Using the longer type names seems more appropriate to CS7.

6. The existing system would not be modified, and the event/score
system would not change. However, this does not mean one wouldn't be
able to use events to work with the new graph. For example, one could
write:

myNode:Node = Node()
...

instr NewGraph
  instr:Instr = MyInstr(.,.,.,.)
  add_to_node(myNode, instr)
endin

...

i "NewGraph" 0 2
i "NewGraph" 2 2

7. From an application developer perspective, one would be able to do
things like create mixers with effects and dynamically modify the
effects chain without losing any state of existing effects.


This is the basic proposal for the new processing graph. It should be
considered a starting point.  The implementation will require
community feedback to understand what are all of the features we would
want out of the system as well as discover potential issues.

Thanks!
steven

Csound mailing list
Csound@listserv.heanet.ie
https://listserv.heanet.ie/cgi-bin/wa?A0=CSOUND
Send bug reports to
        https://github.com/csound/csound/issues
Discussions of bugs and features can be posted here


Date2015-11-23 17:31
FromSteven Yi
SubjectRe: Csound 7: New Processing Graph Proposal
I think this should be considered an additive change, not a
replacement, and should not take away anything from the existing
processing model.  The two models are inherently different, but we
would get the benefit of reusing everything else (instruments,
opcodes, etc.). All existing functionality is also maintained: one
could still recompile instruments, still use notes and events, etc.

This new processing model undoubtedly adds more complexity as one has
the responsibility of node and instrument instance ordering. That's
both a blessing and a curse.  However, as the older processing model
is still retained, one has options as to what would be appropriate to
use.

I can also imagine using both graphs at the same time. Using Blue as
an example, I might do all score related things by writing instruments
and notes in the classic way.  However, for the mixer aspect, I might
use the new processing model.  Assuming the two graphs are run
concurrently (first all of the old event/audio processing happens,
then the new graph is run, per k-cycle), it would give me options.  If
all aspects of mixing are done in the new graph, and all source
signals from instruments are in the older graph, it effectively
removes any concerns about instrument ordering in the old graph. This
would mean I could add new instruments or replace existing ones.  In
the new graph, since I am working with instances and not definitions,
this also means I can dynamically manage the graph like a mixer. I
could do things like instantiate 3 reverbs, place them in different
parts of the new graph, redefine what the reverb is, then add an
instance of the new reverb somewhere.
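In the proposed syntax, that last scenario might look something like
this (purely illustrative: MyReverb, its arguments, and the node layout
are made up):

revNode:Node = Node()
add_to_node(ROOT_NODE, revNode)

;; three instances of the same reverb definition, placed explicitly
r1:Instr = MyReverb(0, -1)
r2:Instr = MyReverb(0, -1)
r3:Instr = MyReverb(0, -1)
add_to_node(revNode, r1)
add_to_node(revNode, r2)
add_to_node(revNode, r3)

;; after recompiling/redefining MyReverb, new instances get the new
;; code, while r1-r3 keep running with their old code and state
r4:Instr = MyReverb(0, -1)
add_to_node(revNode, r4)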

I also think a computer music system really does require an event and
scheduler system. It's useful for timed scores, but also for things
like temporal recursion.  I don't imagine it ever going away; it is
sort of a fundamental building block for many kinds of musical
projects. However, just looking at things like MIDI keyboards, we
can see that not all computer music programs require schedulers and
events.  If there are any concerns over events and scheduling, I'd say
that if anything, the idea of notes/events and the scheduler would
only get further clarified and potentially more developed after this.
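As an aside, temporal recursion of the sort mentioned above already
works with the current scheduler, e.g. an instrument that re-schedules
itself.  A minimal sketch with the existing schedule opcode (the Pulse
instrument and its timing values are made up):

instr Pulse
  schedule "Pulse", 0.25, p3   ;; schedule the next instance 0.25s from now
  aenv expon 0.5, p3, 0.001
  a1 poscil aenv, 440
  out a1
endin

schedule "Pulse", 0, 0.1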



On Mon, Nov 23, 2015 at 5:06 PM, Hlöðver Sigurðsson  wrote:
> So if I understand you right, it would be possible to completely skip using the
> new processing graph system and one would have all the benefits of parallel
> processing (where they apply). Just an open thought: the current orchestra
> ordering system is very clear and easy to understand; you know exactly where
> the signal runs and how it changes, in which order. Using nodes and groups
> would add complexity, like in SC: groups, heads and tails of nodes. That is
> very convenient for a realtime system. Would we want to harvest more of the
> realtime capabilities that these graphs would bring along, and in doing so,
> would we neglect non-realtime processing and completely stop developing the
> score part of Csound (which has been turning into a less and less important
> part of Csound)? What I mean by harvesting the new possibilities would be,
> for example, defining a new instrument after the Csound orchestra has been
> compiled. My idea for that would be to add a Csound terminal/REPL running on
> a concurrent thread to calculate new instrument definitions before
> sending them to the main thread, so as not to glitch or miss any audio
> sample unnecessarily.  I'm more interested in this discussion rather than
> the answer per se. It is my opinion that Csound excels at non-realtime
> heavy calculation of sounds; being strong at realtime performance and
> realtime coding could possibly put Csound way ahead.


Date2015-11-24 16:45
FromAaron Krister Johnson
SubjectRe: Csound 7: New Processing Graph Proposal
I'm sure I'm not alone in my opinion that, far from the score element of Csound being "less important", it's what sets Csound apart: a flexible, generalized way to order non-realtime and realtime events.

I would never want to see that part of Csound go away. No other DSP software has anything close to the power of Csound's orc/score paradigm.

That said, I can also see why people would be excited by the prospect of improving responsiveness and power on the RT front.


Aaron Krister Johnson
http://www.untwelve.org


Date2015-11-24 17:06
FromVictor Lazzarini
SubjectRe: Csound 7: New Processing Graph Proposal
There is no plan to remove anything. It will always be there. The new model will be provided alongside it.

Victor Lazzarini
Dean of Arts, Celtic Studies, and Philosophy
Maynooth University
Ireland

On 24 Nov 2015, at 16:45, Aaron Krister Johnson <akjmicro@GMAIL.COM> wrote:

I'm sure I'm not alone in my opinion that, far from the score element of Csound being "less important", it's what sets Csound apart: a flexible, generalized way to order non-realtime, and realtime events.

I would never want to see that part of Csound go away. No other DSP software has anything close to the power of Csound's orc/score paradigm.

That said, I can also see why people would be excited by the prospect of improving responsiveness and power on the RT front.


Aaron Krister Johnson
http://www.untwelve.org

On Mon, Nov 23, 2015 at 11:31 AM, Steven Yi <stevenyi@gmail.com> wrote:
I think this should be considered an additive change, not a
replacement, and should not take away anything from the existing
processing model.  The two models are inherently different, but we
would get the benefit of reusing everything else (instruments,
opcodes, etc.). All existing functionality is also maintained: one
could still recompile instruments, still use notes and events, etc.

This new processing model undoubtedly adds more complexity as one has
the responsibility of node and instrument instance ordering. That's
both a blessing and a curse.  However, as the older processing model
is still retained, one has options as to what would be appropriate to
use.

I can also imagine using both graphs at the same time. Using Blue as
an example, I might do all score related things by writing instruments
and notes in the classic way.  However, for the mixer aspect, I might
use the new processing model.  Assuming the two graphs are run
concurrently (first all of the old event/audio processing happens,
then the new graph is run, per k-cycle), it would give me options.  If
all aspects of mixing are done in the new graph, and all source
signals from instruments are in the older graph, it effectively
removes any concerns about instrument ordering in the old graph. This
would mean I could add new instruments or replace existing ones.  In
the new graph, since I am working with instances and not definitions,
this also means I can dynamically manage the graph like a mixer. I
could do things like instantiate 3 reverbs, place them in different
parts of the new graph, redefine what the reverb is, then add an
instance of the new reverb somewhere.

I also think a computer music system really does require an event and
scheduler system. It's useful for timed scores, but also for things
like temporal recursion.  I don't imagine it going away ever, and is
sort of a fundamental building block for many kinds of musical
projects. However, just in looking at things like MIDI keyboards, we
can see that not all computer music programs require schedulers and
events.  If there is any concerns over events and scheduling, I'd say
that if anything, the idea of notes/events and the scheduler would
only get further clarified and potentially more developed after this.



On Mon, Nov 23, 2015 at 5:06 PM, Hlöðver Sigurðsson <hlolli@gmail.com> wrote:
> So I understand you right, it would be possible to completely skip using the
> processing graph system and one would have all the benefits of parallel
> processing (where they apply). Just a an open thought: the current orchestra
> ordering system is very clear and easy to understand, you know excacly where
> the signal runs and how it changes in which order. Using nodes and groups
> would add complexity, like in SC; groups, heads and tails of nodes. Which is
> very convenient for a realtime system. Would we want to harvest more of the
> realtime capabilites that these graphs would bring along, and in doing so,
> would we neglect non-realtime processing and compleatly stop developing the
> score part of csound? (which has been turning into less and less important
> part of csound.) What I mean by harvesting the new possibilities would be
> for example define a new instrument after csound orchestra has been
> compiled? My idea would be for that to add a csound terminal/repl that would
> run on concurrent thread to calculate new instrument defenitions before
> sending them to the main thread as not to glitch or miss out any audio
> sample unnecessarily.  I'm more interested in this discussion rather than
> the answer per say. It is my opinion that csound exceeds in non-realtime
> heavy calculation of sounds, being strong at realtime performance and
> realtime coding could possibly put csound way ahead.
>
> 2015-11-23 16:43 GMT+01:00 Steven Yi <stevenyi@gmail.com>:
>>
>> Hi All,
>>
>> I'm posting this to the user list to solicit feedback.  I was talking
>> with Victor today over coffee and we were reviewing what's done so far
>> for Csound 7 and some new things we might consider to include for the
>> release. One idea that came up today was very interesting that we
>> thought would be a major addition to the system.
>>
>> One of the issues that has been raised numerous times over the years
>> is the issue of instrument processing order and modifying that order.
>> The problem is that the existing processing model has been in place
>> for many years, is well known and easy to understand, and is somewhat
>> at odds with dynamic ordering.
>>
>> In discussing it, we came up with the idea to create a new, separate
>> processing graph.  The existing system orders instances by instrument
>> number and is really keyed to the instrument definition, not the
>> instrument instance.  We can see the existing system as a fixed tree
>> where instrument definitions create a fixed node where instances of
>> that instrument are appended to for performance. It would look
>> something like this:
>>
>> ROOT
>> |-INSTR 1
>>   |--instances
>> |-INSTR 2
>>   |--instances
>>
>> The existing system has an almost 30 year old practice now associated
>> with it for ordering of computation.  It is simple, if inflexible, but
>> it is easy to reason about. It is also tied heavily into the score and
>> event processing system. By this, I mean that "i" events can be
>> defined as "instantiate instrument x, and also attach to processing
>> graph at node INSTR x".
>>
>> Instead of having aspects of ordering that is tied to the instrument
>> definition, the proposal is to create a new processing graph that only
>> deals with concrete instrument instances.  This system would be an
>> additional system to use and would not change the existing system; the
>> existing practices and historical works retain their meaning and
>> continue to operate as is.
>>
>> In the new system, the user would work with the graph entirely within
>> orchestra code.  Users would create instances of instruments and
>> explicitly attach them to target nodes.  A global root node would be
>> available for an engine; new nodes can be created to group instrument
>> instances together and allow users to specify ordering.
>>
>> An example of this might look like this:
>>
>> instr MyInstr
>> ...
>> endin
>>
>> ;; create instance of MyInstr, starting now, indefinite duration, etc.
>> inst0:Instr = MyInstr(0, -1, 440, -12)
>> inst1:Instr = MyInstr(0, -1, 880, -12)
>>
>> append_to_node(ROOT_NODE, inst1)
>> append_to_node(ROOT_NODE, inst0)
>>
>> In the above, inst0 and inst1 are defined as variables of type Instr
>> (using new CS7 syntax).  append_to_node adds the instances to the
>> globally available ROOT_NODE.
>>
>> Another example would be:
>>
>> instrNode:Node = Node()
>>
>> add_to_node(ROOT_NODE, instrNode)
>> add_to_node(instrNode, inst0)
>> add_to_node(instrNode, inst1)
>>
>> The system would initially look a lot like SuperCollider's node system
>> and have the same qualities of using nodes to determine ordering, but
>> not express dependency between node items. User's would be required to
>> deal with communications between instrument instances, such as using a
>> global array or bus system, as one would in SC3.
>>
>> Also, note that the above revives the notion of instrument as opcode
>> that was explored by Matt Ingalls with subinstruments. We would have
>> to change a little bit of what happens: defining a named instrument
>> would automatically generate an opcode with the name as opname, then
>> uses arguments with p2 and p3 as arg 1 and 2, and so on.
>>
>> Some additional notes:
>>
>> 1. User can modify order of instr instances.  This would be done using
>> opcodes such as remove_from_node(node, instr), insert_into_node(node,
>> instr, index).  As this is using actual instances and has nothing to
>> do with instr definitions, the user is in complete control over
>> ordering (and has the complete responsibility as well).
>>
>> 2. To communicate directly to an instrument instance, local channels
>> may be used.  This would involve overloading chnget and chnset to
>> take in Instr instances, such as:
>>
>> chnset inst0, "cutoff", k1  ;; used outside an instrument instance
>> chnget this, "cutoff"  ;; used within an instrument instance
>>
>> Using channels allows a good correlation with the existing global
>> channel system.  This should also be fairly easy to tie into the
>> API for sending values to and getting values from an instance of an
>> instrument.
>>
>> 3. Victor and I also spoke about attaching opcodes directly to the new
>> graph.  (This tied into conversations about opcodes as values we had.)
>> There are complications here that need further exploration.
>>
>> 4. The new graph would not work with the existing parallel processing
>> implementation.  Exploring something like Supernova in SC3 is a
>> possibility, as is modifying the existing system to analyse based on
>> instrument instances rather than definitions.
>>
>> 5. This work would be best to implement in Csound 7 due to the use of
>> multi-character type names.  The new types and the system could
>> technically be written in CS6, but then we'd have to use
>> single-character type names to define Nodes and Instr instances.
>> Using the longer type names seems more appropriate to CS7.
>>
>> 6. The existing system would not be modified, and the event/score
>> system would not change. However, this does not mean one wouldn't be
>> able to use events to work with the new graph. For example, one could
>> write:
>>
>> myNode:Node = Node()
>> ...
>>
>> instr NewGraph
>>   instr:Instr = MyInstr(.,.,.,.)
>>   add_to_node(myNode, instr)
>> endin
>>
>> ...
>>
>> i "NewGraph" 0 2
>> i "NewGraph" 2 2
>>
>> 7. From an application developer perspective, one would be able to do
>> things like create mixers with effects and dynamically modify the
>> effects chain without losing any state of existing effects.
>>
>>
>> This is the basic proposal for the new processing graph. It should be
>> considered a starting point.  The implementation will require
>> community feedback to understand all of the features we would want
>> out of the system, as well as to discover potential issues.
>>
>> Thanks!
>> steven
>>
>> Csound mailing list
>> Csound@listserv.heanet.ie
>> https://listserv.heanet.ie/cgi-bin/wa?A0=CSOUND
>> Send bugs reports to
>>         https://github.com/csound/csound/issues
>> Discussions of bugs and features can be posted here

Date: 2015-11-25 05:02
From: thorin kerr
Subject: Re: Csound 7: New Processing Graph Proposal
Just a quick question on instrument instantiation. I know colons have been introduced with the functional syntax, but they still seem a little un-csoundy to me.
Couldn't an existing opcode like nstance be modified to be used like this?
So instead of 
inst0:Instr = MyInstr(0, -1, 440, -12)

you could use 

inst0 nstance "MyInstr", 0, -1, 440, -12

Thorin




Date: 2015-11-25 10:12
From: Oeyvind Brandtsegg
Subject: Re: Csound 7: New Processing Graph Proposal
I think this sounds great.
One question: if you run two concurrent graphs like you describe,
would that have any impact on multiprocessing realtime performance?
Thinking about it, one graph would have to finish all its work before
the other starts working, I guess?
(This is not at all an objection, just thinking about how it will work.)
Oeyvind

2015-11-23 18:31 GMT+01:00 Steven Yi:
> I think this should be considered an additive change, not a
> replacement, and should not take away anything from the existing
> processing model.  The two models are inherently different, but we
> would get the benefit of reusing everything else (instruments,
> opcodes, etc.). All existing functionality is also maintained: one
> could still recompile instruments, still use notes and events, etc.
>
> This new processing model undoubtedly adds more complexity as one has
> the responsibility of node and instrument instance ordering. That's
> both a blessing and a curse.  However, as the older processing model
> is still retained, one has options as to what would be appropriate to
> use.
>
> I can also imagine using both graphs at the same time. Using Blue as
> an example, I might do all score related things by writing instruments
> and notes in the classic way.  However, for the mixer aspect, I might
> use the new processing model.  Assuming the two graphs are run
> concurrently (first all of the old event/audio processing happens,
> then the new graph is run, per k-cycle), it would give me options.  If
> all aspects of mixing are done in the new graph, and all source
> signals from instruments are in the older graph, it effectively
> removes any concerns about instrument ordering in the old graph. This
> would mean I could add new instruments or replace existing ones.  In
> the new graph, since I am working with instances and not definitions,
> this also means I can dynamically manage the graph like a mixer. I
> could do things like instantiate 3 reverbs, place them in different
> parts of the new graph, redefine what the reverb is, then add an
> instance of the new reverb somewhere.
>
> I also think a computer music system really does require an event and
> scheduler system. It's useful for timed scores, but also for things
> like temporal recursion.  I don't imagine it ever going away; it is
> sort of a fundamental building block for many kinds of musical
> projects. However, just by looking at things like MIDI keyboards, we
> can see that not all computer music programs require schedulers and
> events.  If there are any concerns over events and scheduling, I'd say
> that if anything, the idea of notes/events and the scheduler would
> only get further clarified and potentially more developed after this.
>
>
>
> On Mon, Nov 23, 2015 at 5:06 PM, Hlöðver Sigurðsson wrote:
>> So I understand you right: it would be possible to completely skip using the
>> processing graph system, and one would have all the benefits of parallel
>> processing (where they apply). Just an open thought: the current orchestra
>> ordering system is very clear and easy to understand; you know exactly where
>> the signal runs and how it changes, in which order. Using nodes and groups
>> would add complexity, like in SC: groups, heads and tails of nodes. Which is
>> very convenient for a realtime system. Would we want to harvest more of the
>> realtime capabilities that these graphs would bring along, and in doing so,
>> would we neglect non-realtime processing and completely stop developing the
>> score part of Csound? (Which has been turning into a less and less important
>> part of Csound.) What I mean by harvesting the new possibilities would be,
>> for example, defining a new instrument after the Csound orchestra has been
>> compiled. My idea would be to add a Csound terminal/REPL that would
>> run on a concurrent thread to calculate new instrument definitions before
>> sending them to the main thread, so as not to glitch or miss out any audio
>> sample unnecessarily.  I'm more interested in this discussion than in
>> the answer per se. It is my opinion that Csound excels at non-realtime
>> heavy calculation of sounds; being strong at realtime performance and
>> realtime coding could possibly put Csound way ahead.
>>



-- 

Oeyvind Brandtsegg
Professor of Music Technology
NTNU
7491 Trondheim
Norway
Cell: +47 92 203 205

http://flyndresang.no/
http://www.partikkelaudio.com/
http://soundcloud.com/brandtsegg
http://soundcloud.com/t-emp


Date: 2015-11-25 12:05
From: Steven Yi
Subject: Re: Csound 7: New Processing Graph Proposal
The issue with this is that Csound is a statically-typed language and
the old syntax requires type names be one letter long.  In the nstance
example, inst0 would be analyzed as a variable with name "inst0" and
type "I".  The explicitly-typed version would have a variable name
"inst0" with a type of "Instr".  You should be able to do:

inst0:Instr nstance "MyInstr", 0, -1, 440, -12

but we would still need a way to type the inst0 variable as Instr.

In a future Csound, we would have type inference and the type of inst0
would be determined through its usage. In that case, we could write:

inst0 = MyInstr(0, -1, 440, -12)

and the compiler would determine for us that inst0 is of type Instr,
due to the return type of MyInstr().  Type inference has well-known
models (i.e., extended Hindley-Milner) that we could implement.  The
colon syntax is an evolutionary step to get there that allows us to
start defining and using data types with names longer than one letter.

As Csound continues to evolve, I think we will see the language
growing, but it should not ever break older code.  However, some
language features and system ideas require a newer syntax that cannot
work with the older opcode-call syntax. I'd just note that users won't
lose anything by continuing to use older syntax styles; they just
might not be able to use some newer features.

On Wed, Nov 25, 2015 at 5:02 AM, thorin kerr wrote:
> Just a quick question on instrument instantiation. I know colons have been
> introduced with the functional syntax, but they still seem a little
> un-csoundy to me.
> Couldn't an existing opcode like nstance be modified to be used like this?
> So instead of
> inst0:Instr = MyInstr(0, -1, 440, -12)
>
> you could use
>
> inst0 nstance "MyInstr", 0, -1, 440, -12
>
> Thorin


Date: 2015-11-25 12:15
From: Steven Yi
Subject: Re: Csound 7: New Processing Graph Proposal
I think the current idea we had come up with is that the two graphs
are run in parallel, but synchronously. If all of the current kperf
code (the code used to run one buffer of audio) is placed in a
function called kperf_classic(), we could then say that the new kperf
would look something like:

int kperf(CSOUND* cs) {
  kperf_classic(cs);
  kperf_new_graph(cs);
  return 0;  /* kperf is declared int, so return a status */
}

so that both graphs advance one buffer at a time, and ordered such
that the classic system is run first.

For parallelism, this might actually take a step backwards initially,
but only because the existing code would need to be updated for the
new graph.  As Michael noted, there is a lot of research on data flow
engines and parallelism, but as I also mentioned the new graph does
not express dependencies between nodes in the tree.

I suspect the existing SAT solver could be modified and used to
analyze the node dependencies, but we would have to track information
about nodes a little differently.  Ideally, we would still get
automated parallelism and not have to go down the SuperNova/SC3 path
where users explicitly deal with parallelism.

On Wed, Nov 25, 2015 at 10:12 AM, Oeyvind Brandtsegg
 wrote:
> I think this sounds great.
> One question, if you run two concurrent graphs like you describe,
> would that have any impact on multiprocessing realtime performance ?
> Thinking, one graph would have to finish all its work before the other
> starts working, I guess?
> (this is not at all an objection, just thinking about how it will work)
> Oeyvind
>
> 2015-11-23 18:31 GMT+01:00 Steven Yi :
>> I think this should be considered an additive change, not a
>> replacement, and should not take away anything from the existing
>> processing model.  The two models are inherently different, but we
>> would get the benefit of reusing everything else (instruments,
>> opcodes, etc.). All existing functionality is also maintained: one
>> could still recompile instruments, still use notes and events, etc.
>>
>> This new processing model undoubtedly adds more complexity as one has
>> the responsibility of node and instrument instance ordering. That's
>> both a blessing and a curse.  However, as the older processing model
>> is still retained, one has options as to what would be appropriate to
>> use.
>>
>> I can also imagine using both graphs at the same time. Using Blue as
>> an example, I might do all score related things by writing instruments
>> and notes in the classic way.  However, for the mixer aspect, I might
>> use the new processing model.  Assuming the two graphs are run
>> concurrently (first all of the old event/audio processing happens,
>> then the new graph is run, per k-cycle), it would give me options.  If
>> all aspects of mixing are done in the new graph, and all source
>> signals from instruments are in the older graph, it effectively
>> removes any concerns about instrument ordering in the old graph. This
>> would mean I could add new instruments or replace existing ones.  In
>> the new graph, since I am working with instances and not definitions,
>> this also means I can dynamically manage the graph like a mixer. I
>> could do things like instantiate 3 reverbs, place them in different
>> parts of the new graph, redefine what the reverb is, then add an
>> instance of the new reverb somewhere.
>>
>> I also think a computer music system really does require an event and
>> scheduler system. It's useful for timed scores, but also for things
>> like temporal recursion.  I don't imagine it ever going away, and it
>> is sort of a fundamental building block for many kinds of musical
>> projects. However, just looking at things like MIDI keyboards, we
>> can see that not all computer music programs require schedulers and
>> events.  If there are any concerns over events and scheduling, I'd say
>> that, if anything, the idea of notes/events and the scheduler would
>> only get further clarified and potentially more developed after this.
>>
>>
>>
>> On Mon, Nov 23, 2015 at 5:06 PM, Hlöðver Sigurðsson  wrote:
>>> So if I understand you right, it would be possible to completely skip
>>> using the processing graph system and one would still have all the
>>> benefits of parallel processing (where they apply). Just an open
>>> thought: the current orchestra ordering system is very clear and easy
>>> to understand; you know exactly where the signal runs and in which
>>> order it changes. Using nodes and groups would add complexity, as in
>>> SC (groups, heads and tails of nodes), which is very convenient for a
>>> realtime system. Would we want to harvest more of the realtime
>>> capabilities that these graphs would bring along, and in doing so,
>>> would we neglect non-realtime processing and completely stop
>>> developing the score part of Csound? (Which has been turning into a
>>> less and less important part of Csound.) What I mean by harvesting
>>> the new possibilities would be, for example, defining a new
>>> instrument after the Csound orchestra has been compiled. My idea for
>>> that would be to add a Csound terminal/REPL running on a concurrent
>>> thread to compile new instrument definitions before sending them to
>>> the main thread, so as not to glitch or drop any audio samples
>>> unnecessarily.  I'm more interested in the discussion than in the
>>> answer per se. It is my opinion that Csound excels at non-realtime
>>> heavy calculation of sounds; being strong at realtime performance and
>>> realtime coding could possibly put Csound way ahead.
>>>
>>> 2015-11-23 16:43 GMT+01:00 Steven Yi :
>>>>
>>>> Hi All,
>>>>
>>>> I'm posting this to the user list to solicit feedback.  I was talking
>>>> with Victor today over coffee and we were reviewing what's done so far
>>>> for Csound 7 and some new things we might consider to include for the
>>>> release. One idea that came up today was very interesting that we
>>>> thought would be a major addition to the system.
>>>>
>>>> One of the issues that has been raised numerous times over the years
>>>> is the issue of instrument processing order and modifying that order.
>>>> The problem is that the existing processing model has been in place
>>>> for many years, is well known and easy to understand, and is somewhat
>>>> at odds with dynamic ordering.
>>>>
>>>> In discussing it, we came up with the idea to create a new, separate
>>>> processing graph.  The existing system orders instances by instrument
>>>> number and is really keyed to the instrument definition, not the
>>>> instrument instance.  We can see the existing system as a fixed tree
>>>> where instrument definitions create a fixed node where instances of
>>>> that instrument are appended to for performance. It would look
>>>> something like this:
>>>>
>>>> ROOT
>>>> |-INSTR 1
>>>>   |--instances
>>>> |-INSTR 2
>>>>   |--instances
>>>>
>>>> The existing system has an almost 30 year old practice now associated
>>>> with it for ordering of computation.  It is simple, if inflexible, but
>>>> it is easy to reason about. It is also tied heavily into the score and
>>>> event processing system. By this, I mean that "i" events can be
>>>> defined as "instantiate instrument x, and also attach to processing
>>>> graph at node INSTR x".
>>>>
>>>> Instead of having aspects of ordering that is tied to the instrument
>>>> definition, the proposal is to create a new processing graph that only
>>>> deals with concrete instrument instances.  This system would be an
>>>> additional system to use and would not change the existing system; the
>>>> existing practices and historical works retain their meaning and
>>>> continue to operate as is.
>>>>
>>>> In the new system, the user would work with the graph entirely within
>>>> orchestra code.  Users would create instances of instruments and
>>>> explicitly attach them to target nodes.  A global root node would be
>>>> available for an engine; new nodes can be created to group instrument
>>>> instances together and allow users to specify ordering.
>>>>
>>>> An example of this might look like this:
>>>>
>>>> instr MyInstr
>>>> ...
>>>> endin
>>>>
>>>> ;; create instance of MyInstr, starting now, indefinite duration, etc.
>>>> inst0:Instr = MyInstr(0, -1, 440, -12)
>>>> inst1:Instr = MyInstr(0, -1, 880, -12)
>>>>
>>>> append_to_node(ROOT_NODE, inst1)
>>>> append_to_node(ROOT_NODE, inst0)
>>>>
>>>> In the above, inst0 and inst1 are defined as variables of type Instr
>>>> (using new CS7 syntax).  append_to_node adds the instances to the
>>>> globally available ROOT_NODE.
>>>>
>>>> Another example would be:
>>>>
>>>> instrNode:Node = Node()
>>>>
>>>> add_to_node(ROOT_NODE, instrNode)
>>>> add_to_node(instrNode, inst0)
>>>> add_to_node(instrNode, inst1)
>>>>
>>>> The system would initially look a lot like SuperCollider's node system
>>>> and share its qualities: nodes determine ordering but do not express
>>>> dependencies between node items. Users would be required to handle
>>>> communication between instrument instances, such as using a
>>>> global array or bus system, as one would in SC3.
>>>>
>>>> Also, note that the above revives the notion of instrument-as-opcode
>>>> that was explored by Matt Ingalls with subinstruments. We would have
>>>> to change a little of what happens: defining a named instrument
>>>> would automatically generate an opcode with the instrument name as
>>>> the opcode name, with p2 and p3 passed as args 1 and 2, and so on.
>>>>
>>>> Some additional notes:
>>>>
>>>> 1. Users can modify the order of instr instances.  This would be done
>>>> using opcodes such as remove_from_node(node, instr) and
>>>> insert_into_node(node, instr, index).  As this works on actual
>>>> instances and has nothing to do with instr definitions, the user is
>>>> in complete control of ordering (and has the complete responsibility
>>>> as well).
>>>>
>>>> 2. To communicate directly with an instrument instance, local channels
>>>> may be used.  This would involve overloading chnget and chnset to
>>>> take Instr instances, such as:
>>>>
>>>> chnset inst0, "cutoff", k1  ;; used outside an instrument instance
>>>> chnget this, "cutoff"  ;; used within an instrument instance
>>>>
>>>> Using channels allows a good correlation with the existing global
>>>> channel system.  This should also be fairly easy to then tie into the
>>>> API for exposing sending/getting values to an instance of an
>>>> instrument.
>>>>
>>>> 3. Victor and I also spoke about attaching opcodes directly to the new
>>>> graph.  (This tied into conversations we had about opcodes as values.)
>>>> There are complications here that need further exploration.
>>>>
>>>> 4. The new graph would not work with the existing parallel processing
>>>> implementation.  Exploring something like Supernova in SC3 is a
>>>> possibility, as is modifying the existing system to analyse based on
>>>> instrument instances rather than definitions.
>>>>
>>>> 5. This work would be best to implement in Csound 7 due to the use of
>>>> multi-character type names.  The new types and the system could
>>>> technically be written in CS6, but then we'd have to use
>>>> single-character type names to define Nodes and Instr instances.
>>>> Using the longer type names seems more appropriate to CS7.
>>>>
>>>> 6. The existing system would not be modified, and the event/score
>>>> system would not change. However, this does not mean one wouldn't be
>>>> able to use events to work with the new graph. For example, one could
>>>> write:
>>>>
>>>> myNode:Node = Node()
>>>> ...
>>>>
>>>> instr NewGraph
>>>>   instr:Instr = MyInstr(.,.,.,.)
>>>>   add_to_node(myNode, instr)
>>>> endin
>>>>
>>>> ...
>>>>
>>>> i "NewGraph" 0 2
>>>> i "NewGraph" 2 2
>>>>
>>>> 7. From an application developer perspective, one would be able to do
>>>> things like create mixers with effects and dynamically modify the
>>>> effects chain without losing any state of existing effects.
>>>>
>>>>
>>>> This is the basic proposal for the new processing graph. It should be
>>>> considered a starting point.  The implementation will require
>>>> community feedback to understand all of the features we would want
>>>> out of the system, as well as to discover potential issues.
>>>>
>>>> Thanks!
>>>> steven
>>>>
>
>
>
> --
>
> Oeyvind Brandtsegg
> Professor of Music Technology
> NTNU
> 7491 Trondheim
> Norway
> Cell: +47 92 203 205
>
> http://flyndresang.no/
> http://www.partikkelaudio.com/
> http://soundcloud.com/brandtsegg
> http://soundcloud.com/t-emp
>


Date2015-11-25 12:50
FromSteven Yi
SubjectRe: Csound 7: New Processing Graph Proposal
One other thing to note about the newer syntax: another avenue to
explore is the explicit declaration of variables, such as:
declare inst0:Instr

inst0 nstance "MyInstr", 0, -1, 440, -12

or some syntax like this.  I'm not a huge fan of pre-declaring
variables for Csound, but it's an option I thought I'd mention.

On Wed, Nov 25, 2015 at 12:05 PM, Steven Yi  wrote:
> The issue with this is that Csound is a statically-typed language and
> the old syntax requires type names to be one letter long.  In the
> nstance example, inst0 would be analyzed as a variable with the name
> "inst0" and type "I".  The explicitly-typed version would have a
> variable named "inst0" with a type of "Instr".  You should be able to do:
>
> inst0:Instr nstance "MyInstr", 0, -1, 440, -12
>
> but we would still need a way to type the inst0 variable as Instr.
>
> In a future Csound, we would have type inference, and the type of inst0
> would be determined through its usage. In that case, we could write:
>
> inst0 = MyInstr(0, -1, 440, -12)
>
> and the compiler would determine for us that inst0 is of type Instr,
> due to the return type of MyInstr().  Type inference has well-known
> models (e.g., extended Hindley-Milner) that we could implement.  The
> colon syntax is an evolutionary step toward that, allowing us to
> start defining and using data types with names longer than one letter.
>
> As Csound continues to evolve, I think we will see the language
> growing, but it should never break older code.  However, some
> language features and system ideas require a newer syntax that cannot
> work with the older opcode-call syntax. I'd just note that users won't
> lose anything by continuing to use the older syntax styles; they just
> might not be able to use some newer features.
>
> On Wed, Nov 25, 2015 at 5:02 AM, thorin kerr  wrote:
>> Just a quick question on instrument instantiation. I know colons have been
>> introduced with the functional syntax, but they still seem a little
>> un-csoundy to me.
>> Couldn't an existing opcode like nstance be modified to be used like this?
>> So instead of
>> inst0:Instr = MyInstr(0, -1, 440, -12)
>>
>> you could use
>>
>> inst0 nstance "MyInstr" 0, -1, 440, -12
>>
>> Thorin
>>
>>
>>
>> On Tue, Nov 24, 2015 at 1:43 AM, Steven Yi  wrote:

Csound mailing list
Csound@listserv.heanet.ie
https://listserv.heanet.ie/cgi-bin/wa?A0=CSOUND
Send bugs reports to
        https://github.com/csound/csound/issues
Discussions of bugs and features can be posted here