Re: [Cs-dev] some developments in Csound 6
Date | 2012-12-21 19:53 |
From | Stéphane Rollandin |
Subject | Re: [Cs-dev] some developments in Csound 6 |
> yes, if you can parse your instrument into the AST tree, then you can
> load it through csoundCompileTree(). So the path is open now for parsers
> to be implemented for other languages than the original Csound. My plan
> is that there will be enough clarity on how the tree is structured,
> through documentation and/or the API, to allow for this. Also a fast
> loadable format is planned.
>
> Effectively, with the infrastructure changes we are implementing, the
> Csound engine is getting separated from the Csound compiler, which
> should enable further developments in these areas.
>
> Victor

Brilliant!

Stef

_______________________________________________
Csound-devel mailing list
Csound-devel@lists.sourceforge.net
Date | 2012-12-22 05:30 |
From | Andres Cabrera |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Hi,

This is great news, and a nice Christmas present!

It's interesting how this presents a different paradigm from SC. In SC you define the instruments, and the ordering of the graph is determined when the instruments are instantiated, either by the order in which they are instantiated or by explicitly placing instruments before or after others. This provides an interesting flexibility: some notes in one instrument can come before or after simultaneous notes of another. However, I'm not sure whether this is so useful in practice, as instruments are generally constructed with assumptions about what comes before or after, so the rigidity of the Csound model might actually simplify things.

But I'm also thinking that for modular-synth-style chaining, a module, say ring modulation, might want to be placed anywhere in the chain regardless of its instrument number. How could this be handled? Any thoughts on this?

Other thoughts: Can the number of an instrument be changed when it has already been loaded? Can instrument definitions be deleted from the engine?

Cheers,
Andrés

On Fri, Dec 21, 2012 at 11:53 AM, Stéphane Rollandin |
Date | 2012-12-22 09:16 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] some developments in Csound 6 |
The instrument number can be changed by loading it again with a different number. With named instruments, because number assignment is done internally, the order in which they appear in the orchestra determines their number, and, at the moment, the order in which they are loaded also matters (subsequent compilations put them at the end of the list). I think the number system is a good way to define precisely what goes where. In my opinion, modules (like ring modulation in your example) are better handled by UDOs.

There is a remove opcode that has been there for a long time, which can be used to delete instruments. But with the current system there is no real need, unless you want to free memory, as new instruments replace existing ones. So if you want to swap the order of instrs 1 & 2, just send in an orchestra where the instruments have been renumbered.

Victor

On 22 Dec 2012, at 05:30, Andres Cabrera wrote:

> Other thoughts: Can the number of an instrument be changed when it has
> already been loaded?
> Can instrument definitions be deleted from the engine?

Dr Victor Lazzarini
Senior Lecturer
Dept. of Music
NUI Maynooth Ireland
tel.: +353 1 708 3545
Victor dot Lazzarini AT nuim dot ie
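The replace-by-number semantics described here can be sketched as a toy model (plain Python, purely illustrative; this is not the Csound API, and the function names are invented): recompiling a definition under an existing number replaces it in place, and render order follows the numbers.

```python
# Toy model of replace-by-number instrument compilation (illustrative only;
# not the Csound API).
instruments = {}  # instrument number -> definition

def compile_orc(number, definition):
    # Recompiling under an existing number replaces the old definition.
    instruments[number] = definition

def render_order():
    # Instruments perform in ascending numeric order.
    return [instruments[n] for n in sorted(instruments)]

compile_orc(1, "strings")
compile_orc(2, "reverb")
# Swap the two by resending an orchestra with the numbers exchanged:
compile_orc(1, "reverb")
compile_orc(2, "strings")
```

This mirrors the point above: explicit deletion is rarely needed, because a new compilation under the same number simply replaces the old instrument.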
Date | 2012-12-22 15:12 |
From | Andres Cabrera |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Hi,

I agree that modulation can be a UDO, but what if I want the flexibility of having it either before or after a filter? The solution would be coding it twice and turning one off when you turn the other on. Writing this, I'm thinking the solution would be giving the instrument two numbers (I think that can be done currently). Would this work with the new system, and would it allow you to place an instrument wherever you need it in the chain?

Cheers,

On Dec 22, 2012 1:16 AM, "Victor Lazzarini" <Victor.Lazzarini@nuim.ie> wrote:
The instrument number can be changed by loading it again with a different number. With named instruments, because number assignment is done internally, the |
Date | 2012-12-22 15:16 |
From | Jacob Joaquin |
Subject | Re: [Cs-dev] some developments in Csound 6 |
For modular-synth-style modules, and this really depends on the context, placing a ring modulator into an instrument makes more sense than a UDO. And of course, that instrument could still use a UDO. Back to my point / rant. If a composer has a string instr 1 graphed into a reverb instr 2, and then decides to add a ring modulator in between, reloading the reverb as instr 3, especially in the context of a live show, would be very disruptive. Though this might be avoided if we went back to BASIC-style numbering: instr 10, instr 20, instr 30, etc.
Best,
Jake

On Sat, Dec 22, 2012 at 1:16 AM, Victor Lazzarini <Victor.Lazzarini@nuim.ie> wrote:

> The instrument number can be changed by loading it again with a different
> number. With named instruments, because number assignment is done internally, the

codehop.com | #code #art #music
Date | 2012-12-22 15:20 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] some developments in Csound 6 |
yes, multiple instrument numbers work. The current system allows everything that was previously possible, with the exception of instr 0, which can only be loaded in the first compilation.

On 22 Dec 2012, at 15:12, Andres Cabrera wrote:
Date | 2012-12-22 15:26 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Well, you should never place "always on", single-instance effects in low-numbered instruments anyway. Any other instruments that will be instantiated/de-instantiated multiple times can be replaced without any disruption. It is a non-issue, really.

Victor

On 22 Dec 2012, at 15:16, Jacob Joaquin wrote:

> For modular-synth-style modules, and this really depends on the context,
> placing a ring modulator into an instrument makes more sense than a UDO.
> And of course, that instrument could still use a UDO. Back to my point / rant.
Date | 2012-12-22 15:34 |
From | Andres Cabrera |
Subject | Re: [Cs-dev] some developments in Csound 6 |
And another question. Does it take long to compile instruments? Would it be better in practice to compile them in a separate thread and pass them to the engine when they are ready? Cheers, On Dec 22, 2012 7:12 AM, "Andres Cabrera" <mantaraya36@gmail.com> wrote:
Date | 2012-12-22 15:36 |
From | Steven Yi |
Subject | Re: [Cs-dev] some developments in Csound 6 |
I'd imagine that what you want to accomplish could be done by a mixer instrument. I'm perhaps influenced by my own design in blue, where instruments are mostly generators, except for the mixer instrument. The mixer instrument then uses UDOs in whatever order it wants, including multiple calls to the same UDO. Using instruments as effects processors is much more limited by design. It's a problem that exists with the signal flow graph system: you can't define an effect once and apply that one definition arbitrarily in multiple parts of the signal graph.

I wrote about a mixer design here:

http://www.csounds.com/journal/issue13/emulatingMidiBasedStudios.html

It is essentially the same as what I use in blue. The mixer instrument gathers the signals, then handles applying effects and mixing down to the master outs. So IMO it's not an issue that changing the design of instrument render order needs to solve, but rather one of taking a different approach to mixing. With a mixer instrument, it all works out fine.

On Sat, Dec 22, 2012 at 10:12 AM, Andres Cabrera
Date | 2012-12-22 15:39 |
From | Jacob Joaquin |
Subject | Re: [Cs-dev] some developments in Csound 6 |
This is an issue, and I can think of a lot of real-world examples where it will be a problem. If Csound is to be flexible, it needs to have a fluid design. The ability to add instruments during runtime is a HUGE step forward. Awesome stuff here. The next step would be to allow instruments to be inserted and rearranged in the execution order.
I propose instead of having instruments execute in order of their numbering, have some sort of data structure that is more or less a list of the order. Like [1, 2, 3, 100, 101].
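The list idea can be sketched in a few lines (plain Python, a hypothetical design sketch; no such structure exists in Csound, and the function name is invented):

```python
# Execution order held as an explicit list, decoupled from instrument
# numbers (hypothetical design sketch, not an existing Csound feature).
order = [1, 2, 3, 100, 101]

def prepend_instr(order, new_instr, before):
    # Insert new_instr immediately before an existing instrument.
    order.insert(order.index(before), new_instr)

prepend_instr(order, 1000, before=2)
```

After the insertion, the execution order is [1, 1000, 2, 3, 100, 101]: instrument numbers keep their identities, but the list alone decides who runs when.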
Then somewhere, someone can insert an instrument into this data structure. Here's some very quick pseudo orchestra code that isn't meant to be a final design:

instr 1000
  prepend instr 2
  ...
endin

Then the execution order would be [1, 1000, 2, 3, 100, 101].

Best,
Jake

On Sat, Dec 22, 2012 at 7:26 AM, Victor Lazzarini <Victor.Lazzarini@nuim.ie> wrote:
Date | 2012-12-22 15:49 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] some developments in Csound 6 |
The API design will have to allow various means of compilation. The way it works now with csoundCompileOrc() is that instruments are added as soon as they're ready anyway, so if this call is placed in a separate thread, with care taken to make it thread-safe, that would be accomplished. Compilation is very fast; parsing is what takes most of the time. The two can be done separately with csoundParseOrc() and csoundCompileTree(): the first parses only, the second compiles and merges the orchestra into the engine.

Victor

On 22 Dec 2012, at 15:34, Andres Cabrera wrote:
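The parse-off-the-audio-thread pattern described here can be sketched with plain Python stand-ins (parse_orc below is a toy marker function, not csoundParseOrc; the worker/queue shape is the point):

```python
import queue
import threading

def parse_orc(src):
    # Stand-in for the expensive parse step (csoundParseOrc in the real
    # API); here it just wraps the source to mark it as "parsed".
    return ("tree", src)

ready = queue.Queue()

def compile_worker(sources):
    for src in sources:
        ready.put(parse_orc(src))  # hand each parsed tree to the engine thread

worker = threading.Thread(target=compile_worker, args=(["instr 1 ... endin"],))
worker.start()
worker.join()

# Engine side: merge whatever is ready (csoundCompileTree in the real API).
tree = ready.get_nowait()
```

The queue keeps the handoff thread-safe: the worker only ever parses, and the engine thread only ever merges results that are already complete.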
Date | 2012-12-22 16:07 |
From | Andres Cabrera |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Very interesting, thanks. On Dec 22, 2012 7:50 AM, "Victor Lazzarini" <Victor.Lazzarini@nuim.ie> wrote:
Date | 2012-12-22 18:08 |
From | Steven Yi |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Hi Jake,

Could you list some examples? I can't think of any that really requires modifying instrument performance order myself, taking into account the mixing solution I replied with earlier. Doing something like PD, but with Csound as an engine, should be possible with re-parsing instruments or UDOs (where subpatches map to UDOs, and the top-level patch maps to an instrument). If there are use cases that cannot be solved with the instrument performance ordering as-is, then we need to take them into account for architectural changes.

Note too, one of the things I am interested in myself, beyond re-parsing, is manual instrument/UDO manipulation. This would be things like adding/removing opcodes in the performance chain. The reason I'm interested in this is that applications like blue could use it to insert new effects into a mixer without triggering a new init-pass of a running mixer instrument instance. This is not vital, though, as re-parsing would work all right, but it would be a next step, as it could prevent pops on instrument changes.

Thanks,
steven

On Sat, Dec 22, 2012 at 10:39 AM, Jacob Joaquin
Date | 2012-12-22 18:53 |
From | David Akbari |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Great work! What impact does what you mentioned here about instr 0 have (if any) on initializing/ using global variables? Does instr 0 only running once imply that future/ added global variables in a given .orc must necessarily occur within a numbered instr? How does this new and amazing sounding aspect to Csound affect things like the ZAK opcodes (zakr, zakw) where instrs are trying to read or write to zak channels that may or may not exist on a given k-cycle? Best regards, David On Sat, Dec 22, 2012 at 9:20 AM, Victor Lazzarini |
Date | 2012-12-22 20:13 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] some developments in Csound 6 |
you can still add new global variables, but they need to be defined in an instrument. The instr 0 question is not cast in stone, but at the moment it appears less confusing not to load it again, as doing so would give the impression that engine parameters like sr etc. could be changed (and this is not in our current plans). If instr 0 were allowed in subsequent compilations, sr etc. would be ignored in them, so it seems better to ignore it altogether.

The zak issues should be solved by zak, as usual. The opcodes will handle nonexistent channels as they do now and issue any errors as required.

Victor

On 22 Dec 2012, at 18:53, David Akbari wrote:

> Great work!
>
> What impact does what you mentioned here about instr 0 have (if any)
> on initializing/using global variables? Does instr 0 only running
> once imply that future/added global variables in a given .orc must
> necessarily occur within a numbered instr?
>
> How does this new and amazing-sounding aspect of Csound affect things
> like the ZAK opcodes (zakr, zakw) where instrs are trying to read or
> write to zak channels that may or may not exist on a given k-cycle?
>
> Best regards,
> David
>
> On Sat, Dec 22, 2012 at 9:20 AM, Victor Lazzarini
> |
Date | 2012-12-22 20:20 |
From | Jacob Joaquin |
Subject | Re: [Cs-dev] some developments in Csound 6 |
I believe that allowing instruments to be loaded during runtime will open up Csound to new design-pattern possibilities, things we aren't even fully considering because this is brand new. One of these is that named instruments with a common bus interface, using chn for example, could be dropped into an orchestra with zero modification to the code.
In this case, there would potentially be libraries upon libraries of community-built instruments, ready to be used practically out of the box: hot-swapped, inserted, removed, and rearranged in instrument graphs. All in realtime, during live performances, without the composer/performer having to think so far ahead as to place their always-on instruments way down at the bottom.
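The common-bus idea can be made concrete with a toy (plain Python, all module and channel names invented for illustration): each module only reads and writes named channels, chn-style, so a new module can be dropped into the chain without editing the others.

```python
# Toy of the "common bus interface" idea: modules communicate only through
# named channels (chn-style), so a module can be hot-swapped into the
# chain without editing the others. All names here are invented.
def strings(bus):
    bus["strings.out"] = 1.0                     # generator writes its channel

def ringmod(bus):
    bus["fx.in"] = bus["strings.out"] * 0.5      # taps the generator's channel

def reverb(bus):
    # reads the effect channel if present, else falls back to the dry signal
    bus["master"] = bus.get("fx.in", bus["strings.out"])

chain = [strings, reverb]
bus = {}
for module in chain:
    module(bus)
dry = bus["master"]

# Hot-swap: insert the ring modulator between the existing modules,
# with zero modification to strings or reverb.
chain.insert(1, ringmod)
bus = {}
for module in chain:
    module(bus)
wet = bus["master"]
```

The reverb's fallback read is what makes the insertion non-disruptive: it picks up the ring modulator's output when present and the dry signal otherwise.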
Allowing the instrument performance order to be changed during runtime is key. I'll add more as I go, but I don't have time to write a thesis right now. :) 4 days off of work, busy almost the entire time.
Best,
Jake

On Sat, Dec 22, 2012 at 10:08 AM, Steven Yi <stevenyi@gmail.com> wrote:

> Hi Jake,
Date | 2012-12-22 21:29 |
From | Jacob Joaquin |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Here's an example.

instr myStrings
  ...
endin

instr myReverb
  ...
endin

In cases where a user utilizes named instruments, there is no room to insert an instr myRingMod between the two. This would extend to any chain of effect instruments as well.
Best, Jake On Sat, Dec 22, 2012 at 12:20 PM, Jacob Joaquin <jacobjoaquin@gmail.com> wrote:
Date | 2012-12-22 21:52 |
From | Steven Yi |
Subject | Re: [Cs-dev] some developments in Csound 6 |
And how does my other email regarding a mixer instrument not solve the larger issue? I'm using the technique in blue and it is working fine... Also, as mentioned earlier, using instruments as effects has the problem that one cannot easily reuse an effect in multiple places in the signal graph, e.g. if you have a reverb instrument used as a local reverb on individual instruments, and the same reverb code used as a global reverb further down the graph.

On Dec 22, 2012 4:30 PM, "Jacob Joaquin" <jacobjoaquin@gmail.com> wrote:
Here's an example. |
Date | 2012-12-22 22:28 |
From | Jacob Joaquin |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Using the mixer orchestra as an example, one could not insert a new automation instrument at the top in a live context. So while technically an issue with chaining an effect, it shows the same issue in a different context. It's not just effects, but generators with modulation inputs as well. Reusing the same code for multiple instruments is possible. For some reason this isn't working for me now, though I think it has worked for me in the past, and the manual says it's possible:

instr 1, 5
  ...
endin

And then, similar to your automation instruments, you can use either indexed or named buses to do the connecting in the score, so that the instrument body itself doesn't need modification. This is made a lot easier if the code is generated at a higher level in a scripting language, but it's entirely possible and can be streamlined to a point where it's user-friendly. It's a beautiful thing, too.
On Sat, Dec 22, 2012 at 1:52 PM, Steven Yi <stevenyi@gmail.com> wrote:
Date | 2012-12-22 23:08 |
From | Justin Smith |
Subject | Re: [Cs-dev] some developments in Csound 6 |
I have successfully done modular synthesis orchestras (with the old parser) where each module had some number of zak outputs and inputs supplied at init time, and was defined with multiple numbers to allow various kinds of patching.
On Sat, Dec 22, 2012 at 2:28 PM, Jacob Joaquin <jacobjoaquin@gmail.com> wrote: Using the orchestra from the mixer as an example, one could not insert a new automation instrument at the top in a live context. So while technically an issue with chaining an effect, it shows the same issue in a different context. It's not just effects, but generators with modulation inputs as well. |
Date | 2012-12-23 00:08 |
From | Steven Yi |
Subject | Re: [Cs-dev] some developments in Csound 6 |
In terms of automation, if your application uses the API, then automation values are likely going to come from the host application, outside the Csound render order. I.e. (pseudo-code):

int status = 0;
while (status == 0) {
    setValuesIntoCsound();
    status = performKsmps();
    getValuesFromCsound();
}

At least, that's my approach with blue. The application generates the signals, as it's easier to keep track of the automation data during a live render with the host app than it is to track it in Csound as score data. This allows easy handling of updating values during a render.

On the other hand, if blue is going to render a CSD to be used outside of blue (to render to disk), it will render down all of the automation information into notes and instruments. In that case, the process goes by initializing a global value in the header; then instruments that update automations run after sound-generating instruments. Even though the automations are updated after they are first read, because of the initialization in the instr 0 space, the values are correctly generated for the right ksmps frame when read by instruments.

Anyway, this is a design approach to using instruments and UDOs that works for me and is already implemented in working code. It allows inserting effects anywhere into a signal graph, and, with re-parsing and/or manual instrument manipulation, will allow graph changes in realtime. It allows generating automation values (managed by the host) dynamically. Those are the use cases that I'm most interested in for my work.

This seems to me to cover the use cases you've mentioned as well. If there's something else that isn't considered, then let's break it down into clear use cases. It's certainly possible to change instrument render ordering and other things, but it's also going to come at the price of more code to implement, test, and maintain. If it's really necessary to achieve a goal that's not supported by the current design, then great, let's sort it out.
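The host-driven loop sketched in this message can be made runnable with a stub engine (plain Python; ToyEngine and its methods are invented stand-ins, the real calls would be the Csound API's channel setters and getters around csoundPerformKsmps):

```python
# Runnable sketch of a host-driven automation loop, with a stub engine
# standing in for Csound. ToyEngine and its methods are invented; they
# model csoundSetControlChannel / csoundPerformKsmps only in shape.
class ToyEngine:
    def __init__(self, blocks):
        self.blocks = blocks        # how many ksmps blocks to "render"
        self.channels = {}
        self.rendered = []

    def set_channel(self, name, value):
        self.channels[name] = value

    def perform_ksmps(self):
        # "Render" one block using the latest channel values.
        self.rendered.append(self.channels.get("amp", 0.0))
        self.blocks -= 1
        return 0 if self.blocks > 0 else 1   # non-zero signals completion

engine = ToyEngine(blocks=4)
automation = [0.1, 0.2, 0.3, 0.4]  # host-side automation curve
status, frame = 0, 0
while status == 0:
    engine.set_channel("amp", automation[frame])  # push values before each block
    status = engine.perform_ksmps()
    frame += 1                                    # readback would go here
```

The key design point is that the host owns the automation data and pushes one value per control block, so updates take effect on the very next ksmps frame without touching the score.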
On Sat, Dec 22, 2012 at 5:28 PM, Jacob Joaquin |
Date | 2012-12-24 20:54 |
From | Jacob Joaquin |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Is this easy to do? Easy for whom?

On Sat, Dec 22, 2012 at 4:08 PM, Steven Yi <stevenyi@gmail.com> wrote:

> In terms of automation, if your application uses the API, then
Date | 2012-12-24 21:12 |
From | Steven Yi |
Subject | Re: [Cs-dev] some developments in Csound 6 |
I wrote my email coming from my experience developing an automation system using Csound and blue. If you have something else in mind, please offer it; as I said in previous emails, I'm happy to hear proposals and discuss them here and offer my points of view, as I have. That said, I'm also happy to agree to disagree and see changes to Csound I may not use, if others find it useful. My opinion is just one of many, no more or less important than any other. I've offered my opinions and experiences and you are free to take them into account or disregard them. On Mon, Dec 24, 2012 at 3:54 PM, Jacob Joaquin |
Date | 2012-12-24 22:08 |
From | Steven Yi |
Subject | Re: [Cs-dev] some developments in Csound 6 |
Just another note, I am happy to find new ways to do things too. What I've written is based on my experience, but I'd be more than happy to hear a better way. That'd only be great for me to learn something, and I'd be grateful for the new solutions. On the other hand, please take what I wrote as my best effort to describe working solutions to the problems we have discussed so far, and possible problems to consider in the proposals mentioned. Thanks, steven On Mon, Dec 24, 2012 at 4:12 PM, Steven Yi |