[Cs-dev] Channels
Date | 2009-02-05 14:36 |
From | jpff |
Subject | [Cs-dev] Channels |
As we edge towards multiprocessor Csound we need to look at places in the code that need protection. I have protected memory allocation. Now I am looking at the channels; each channel needs a lock, but the GetChannelPtr function yields the data but not the structure that includes any lock. So either we need a largely redundant GetChannelLock, or we need to change GetChannelPtr to give the lock location as well as the data. Or have I missed some alternative?

Any thoughts or ideas?

==John ffitch

------------------------------------------------------------------------------
Create and Deploy Rich Internet Apps outside the browser with Adobe(R)AIR(TM) software. With Adobe AIR, Ajax developers can use existing skills and code to build responsive, highly engaging applications that combine the power of local resources and data with the reach of the web. Download the Adobe AIR SDK and Ajax docs to start building applications today-http://p.sf.net/sfu/adobe-com
_______________________________________________
Csound-devel mailing list
Csound-devel@lists.sourceforge.net
Date | 2009-02-05 19:34 |
From | Anthony Kozar |
Subject | Re: [Cs-dev] Channels |
I suggest that we move towards "Csound 6" as was suggested here a couple of months ago. The API can then be redesigned as needed to accommodate many of the suggested features that it is currently ill-suited for.

I think we should create a branch in the csound5 module (not a new module) so that development can continue along both lines and changes will be easy to merge from the trunk to the branch. We can continue to add new opcodes and minor features to Csound 5 so that it is not a complete freeze. (I believe all of this was suggested before -- I think it's the right way to go.)

Anthony

jpff wrote on 2/5/09 9:36 AM:
> As we edge towards multiprocessor Csound [...]
> So either we need a largely redundant
> GetChannelLock or need to change GetChannelPtr to give the lock
> location as well as the data. Or have I missed some alternative?
>
> Any thoughts or ideas?
Date | 2009-02-05 20:13 |
From | Jonatan Liljedahl |
Subject | [Cs-dev] some ideas for Csound 6 |
Here are some ideas for Csound 6: encoding the tag in the decimals of a floating-point number is really hackish and old-style. Put it in the event struct as a separate variable. The same syntax can be kept in the score; just don't parse it as a float but as two separate integers.

From an API point of view, csoundScoreEvent() could return the pointer to the event, which would then be passed along to callbacks and such, so that the host program can know exactly which event called the callback.

It would also be really cool if it were possible to send continuous control data to _specific_ events. There are many alternatives for how this could be achieved. One is direct access to variables in a specific event, or special channels exported from an instrument, accessed by f(event, channel_name). This way a host program could have really flexible communication with the audio synthesis.

And it would be nice to have a more flexible and programming-language-like orchestra syntax, where

k0 zoo
a1 foo k1*k0
a2 bar a1, k2

would be written

a2 = bar(foo(k1 * zoo()), k2);

and direct access to the DSP graph for host apps, for visualisation of orchestras or even construction of them, bypassing Csound's orchestra syntax.

Anthony Kozar wrote:
> I suggest that we move towards "Csound 6" as was suggested here a couple of
> months ago. The API can then be redesigned as needed to accommodate many of
> the suggested features that it is currently ill-suited for.
>
> I think we should create a branch in the csound5 module (not a new module)
> so that development can continue along both lines and changes will be easy
> to merge from the trunk to the branch. We can continue to add new opcodes
> and minor features to Csound 5 so that it is not a complete freeze. (I
> believe all of this was suggested before -- I think it's the right way to
> go).
>
> Anthony

--
/Jonatan [ http://kymatica.com ]
Date | 2009-02-05 20:45 |
From | Michael Gogins |
Subject | Re: [Cs-dev] Channels |
For which features is the current API ill-suited? What new features would you like to see added?

Regards,
Mike

On Thu, Feb 5, 2009 at 2:34 PM, Anthony Kozar
Date | 2009-02-06 03:09 |
From | Anthony Kozar |
Subject | Re: [Cs-dev] Channels |
I was thinking mostly of cases where adding some major new feature like multiprocessing or multiple score strings would require parameter changes for existing API functions. But I am also referring to the "batch mode" orientation of the API that "Pinball" recently called attention to.

The API was a great step forward for Csound, but it shows very clearly its origins as a command-line program: it reads a bunch of options in the traditional argc/argv format, and then you have to call csoundCompile() and csoundPerform() in order to do almost anything. It doesn't matter whether you want to perform a score, create an f-table, run an analysis, or just get a list of opcodes: most of these tasks require calling csoundCompile() and/or csoundPerform(), which means you have to supply at least an orchestra file in some cases where it makes no sense.

An example of an application where improvements would be helpful is an f-table editor. It would be nice to call a single function in the Csound API to run a GEN function and return an opaque "f-table object". Further API calls could retrieve length, contents, etc. so that an editor could display an f-table as a user edits it graphically. Currently, I believe that it would be necessary to write out a simple CSD or Orc/Sco and run csoundCompile(), then probably csoundPerformKsmps() to create tables at "time zero", before you can use the API functions to read the table. That's a lot of overhead (programming and run-time).

I'm moving further discussions of these and other ideas for Csound 6 to the new thread "some ideas for Csound 6" created by Jonatan.

Anthony

Michael Gogins wrote on 2/5/09 3:45 PM:
> For which features is the current API ill suited? What new features
> would you like to see added?
>
> Regards,
> Mike
>
> On Thu, Feb 5, 2009 at 2:34 PM, Anthony Kozar
Date | 2009-02-06 03:38 |
From | Anthony Kozar |
Subject | Re: [Cs-dev] some ideas for Csound 6 |
Thanks Jonatan for starting this discussion again. I am including a list of ideas below that I think have all been suggested previously, so that we do not have to spend a lot of time rehashing them. Most of these ideas were proposed or elaborated on by other developers or users than myself. They should make a good starting point for a "Csound 6 wish list".

- multiprocessor/cluster support
- finish new orchestra parser
- multiple strings in score statements
- new score parser ??
- load or modify instruments during performance
- move away from the command-line "batch mode" orientation of the API
- allow CSDs, orcs, scos to be read from strings in memory
- allow instruments and scores to be built programmatically
- API calls to instantiate opcodes/signals and link them together
- merge Cscore and real-time event APIs (share data structs)
- embed a small scripting language (Lua?) so that instruments and scores may be constructed directly within Orcs/Scos without the need to write an API client
- allow setting options via API calls instead of building argv/argc
- eliminate the need for writing any temporary files to disk (can still be an option for debugging)
- allow other Csound "objects" to be created, manipulated, and destroyed with the API *without* having to instantiate a CSOUND object and call csoundPerform() -- e.g. f-tables or utility analyses, so that external editors for these may more easily be written
- alternative "functional style" for instrument code
- support for other/modular language "front ends" to the Csound engine (i.e. alternatives to Csound's orchestra and score langs); this will not be difficult if all of the above are implemented -- e.g. a SAOL front end
- support for a SAOL-like "control" event in the score
- API support for sending control signals to specific instrument instances
- arrays of opcodes and signals
- "always on" instruments (instantiated from orchestra, not score) ?
- SIMD support ?
- move obsolete, redundant, buggy, and confusing opcodes into a "deprecated" plugin module that users could install for backwards compatibility but which would discourage their use in new pieces

For my part, I think support for Mac OS 9 could be dropped so that threading, networking, and other advanced features can be implemented without #ifdefs and stubs.

Anthony Kozar
mailing-lists-1001 AT anthonykozar DOT net
http://anthonykozar.net/

Jonatan Liljedahl wrote on 2/5/09 3:13 PM:
> Here's some ideas for Csound 6:
Date | 2009-02-06 05:23 |
From | Steven Yi |
Subject | Re: [Cs-dev] some ideas for Csound 6 |
I would say that the most interesting things to me would be getting the parser done, which would open up the way to getting instruments to be programmatically created. The reason I see it this way is that the new parser code makes it much simpler, IMO, to see how everything works when it parses and then compiles down the code, as it already works in two passes (build AST, compile). If that's finished, it should be easy to change how the compilation step works, removing the mass allocation of memory for all variables and the erasure of which variables are set to which addresses. If we replace that with something that looks more like a scripting engine's allocation of variables one at a time in a map, we could then do some dynamic instrument creation/modification, as we would have access to the variables by name.

The downside to that is an impact on performance. It would really only affect allocation of new instruments, though, and with today's computers at the speed they are, I'd imagine it being more valuable to have the ability to modify instruments than to improve instrument allocation time. Another downside is that we wouldn't be able to highly optimize the compiled code to do things like dead code elimination or expression optimization, as we'd need a way to undo that if instruments change. Perhaps the engine could be created to run with a compiled mode for speed or a scripting mode for live changes.

If we move to creating instruments dynamically, then it shouldn't be a problem to expose API methods so that host languages/programs could use them. In addition, it'd also mean one could create a different orch language using a different grammar definition and reuse the engine. This would make Csound a much more generic audio engine, which I think would be a plus.

As for control events, I'm not too familiar with them from SAOL, but I imagine you're referring to something like MIDI controller events. blue does this with widgets for instruments to do parameter automation, by synthesizing instruments just for sending global k-rate signals to the instrument that also reads those k-rate signals. So, if a program takes care of managing the creation of the instruments and variables in the instrument code so that there are no variable name clashes, then that feature is possible already, but a separate solution may be nice.

Thanks!
steven

On Thu, Feb 5, 2009 at 7:38 PM, Anthony Kozar
Date | 2009-02-06 08:00 |
From | Anthony Kozar |
Subject | Re: [Cs-dev] some ideas for Csound 6 |
The SAOL control events are kind of similar to MIDI controllers, except that they can have arbitrary names and target a single note event. Csound can do something similar by defining extra instruments to be "controllers" and inserting i statements for control events. This is, in some ways, a more generic and thus flexible solution. But the SAOL method is less work and arguably more intuitive.

SAOL included some nice innovations but has only one decent implementation at this time (Sfront). Since SAOL is derived from Csound, I think it would be nice for Csound to move towards implementing enough features of SAOL that a SAOL front end to the Csound engine would be possible.

Here is an explanation of the SAOL control event:
http://www.cs.berkeley.edu/~lazzaro/sa/book/control/sasl/index.html#control

Another link of interest:
http://sourceforge.net/projects/saolsound

Anthony

Steven Yi wrote on 2/6/09 12:23 AM:
> As for control events, I'm not too familiar with them from SAOL, but I
> imagine you're referring to something like MIDI controller events.
> blue does this with widgets for instruments to do parameter automation
> by synthesizing instruments just for sending global k-rate signals to
> the instrument that also reads the k-rate signals. So, if a program
> takes care of managing to create the instruments and variables in the
> instrument code so that there are no variable name clashes, then that
> feature is possible already, but a separate solution may be nice.
Date | 2009-02-06 08:54 |
From | Oeyvind Brandtsegg |
Subject | Re: [Cs-dev] some ideas for Csound 6 |
Thanks for the inspiring discussion about the next version.

I wonder if the "compiled mode" vs. "scripted mode" could be selected on an instrument-by-instrument basis? This way it could be possible to "freeze" single instruments for optimization at any time during performance, would it not?

best
Oeyvind

2009/2/6 Steven Yi
Date | 2009-02-06 11:36 |
From | Jonatan Liljedahl |
Subject | Re: [Cs-dev] some ideas for Csound 6 |
Steven Yi wrote:
> I would say that the most interesting things to me would be getting
> the parser done, which would open up the way to getting instruments to
> be programmatically created. [...] If we replace that with
> something that looks more like a scripting engine's allocation of
> variables one at a time in a map, we could then do some dynamic
> instrument creation/modification as we could then have access to the
> variables by name.
>
> Downside to that is an impact on performance. [...] Perhaps the engine could be created to run with a
> compiled mode for speed or a scripting mode for live changes.

Why would anything need to be run in scripting mode (an on-the-fly interpreter) instead of compiled (interpretation already done, just go through a sequence of bytecode)?

Also, I'm not sure about dynamic instr _modification_, other than programmatically creating a new one and replacing an old one. There are not many languages out there that allow for self-modifying code...
But that depends on how you look at the DSP graph: is it a result of the actual variables and their assignments and usage (as in current Csound), or is the DSP graph created by dynamically connecting opcode instances? Then the variables would not be signals but opcode objects, like this:

o1 = vco2 ...
o2 = out
o1 -> o2

But then "o1 = vco2" means _creation_ of a vco2 instance, not the performance. So it would be strange to put control variables in the creation args. Then maybe there should be a vco2.perform(kvars, ...) method, etc.

Personally I prefer the way the Csound orch works now: it uses variables as patchcords. A self-modifying instrument could then mean that such a "patchcord" could be (dis)connected and moved between opcodes. But this could be (and can already be) managed by controllable patch matrices, like zak arrays. It would be interesting to have local arrays though, and the possibility to access them like in ordinary languages: my_array[x]

Here's another loose idea, btw: take away the distinction between opcode and instrument! Instead of creating an instr, you create a user opcode, and the top-level scope is simply the main user opcode, which then creates instances of other opcodes. Don't know how the score would work though... but it's an interesting thought to play with :)

> If we move to creating instrument dynamically, then it shouldn't be a
> problem to expose API methods then so that host languages/programs
> could use them. In addition, it'd also mean one could create a
> different orch language using a different grammar definition and reuse
> the engine. This would make Csound a much more generic audio engine
> which I think would be a plus.

I agree.

--
/Jonatan [ http://kymatica.com ]
Date | 2009-02-07 20:20 |
From | Steven Yi |
Subject | Re: [Cs-dev] some ideas for Csound 6 |
Hi Anthony,

If I understand correctly, SAOL control events seem very limited in that they only change the value discretely. I saw no option to change it slowly over time, linearly or with any other curve. If we did implement something like this in Csound, I would say we would need some method to change the value over time, but likely that would lead to just doing what blue does and having an instrument do that work. One thing blue has to do is generate an instrument for every variable, as it has to get compiled down that way; this might be a case for having the variable names maintained so that variables can be queried by name, so that a single instrument could be used for any number of variables.

steven

On Fri, Feb 6, 2009 at 12:00 AM, Anthony Kozar
Date | 2009-02-07 20:31 |
From | Steven Yi |
Subject | Re: [Cs-dev] some ideas for Csound 6 |
Hi Jonatan,

The use of scripting mode is not about recompiling the sequence to bytecode every run, but about the internals allowing live changes. Dynamic runtime modification is useful; it won't be of interest to people using Csound directly with a text editor, but for program writers, Csound becomes open enough for others to build Max-like patching programs. I could certainly see blue one day having an instrument type that would be very Reaktor-like, where a GUI is created and code is done visually.

Matt Ingalls had originally made sub-instruments so that an instrument could function as an opcode, and UDOs came after that. The interesting problem to me is that p-fields act as arguments to an instrument, but sub-instruments share those p-fields and use a secondary set of arguments when called. I actually think that having an instrument vs. opcode distinction is fine, and if I were modeling this in an object-oriented language I'd still have an Instrument class, with opcodes just being the normal code within a method. Instrument instances in Csound carry metadata for releasing, MIDI notes, their p-fields, etc. that an opcode doesn't necessarily need to worry about. So that's all just to say I don't see a problem with the split, but if a separate way could be found where the two concepts are merged, then I wouldn't mind that either.

steven

On Fri, Feb 6, 2009 at 3:36 AM, Jonatan Liljedahl
> Why would anything need to be run in scripting mode (on-the-fly
> interpreter) instead of compiled (interpretation already done, just go
> through a sequence of bytecode)?
>
> Also, I'm not sure about the dynamic instr _modification_, other than
> programmatically create a new one and replace an old one. There's not
> many languages out there that allow for self-modifying code..
> [...]
>
> --
> /Jonatan [ http://kymatica.com ]
Date | 2009-02-08 01:35 |
From | Anthony Kozar |
Subject | Re: [Cs-dev] some ideas for Csound 6 |
I suppose the SAOL control events might seem limited, but from the perspective that you can change any variable in an instrument without preparing the instrument for such control, they are quite flexible compared to Csound. In practice, such tweaking may not be useful without some forethought, since most parameter changes need to be continuous to avoid clicks. But still, it is a simple matter to ensure continuity with a portamento opcode. Anyway, a more flexible facility would be fine too. Even if the Csound score language is not augmented to support it, if the Csound engine can support the SAOL-like control feature (or more), then it will be possible to have a SAOL language interpreter with Csound as the back end.

I have been combing through old musings on Csound 6, and here are some additional ideas that have been proposed:

1. convert the Csound core to C++
2. sample-accurate scheduling
3. allow a "full directed acyclic DSP graph" in place of the instrument-list/opcode-list type of graph
4. remove the distinction between instruments and UDOs
5. flexible output routing for instruments without using globals, buses, or zak
6. orch opcodes as arguments to score statements:

instr 1
asig oscil p4, p5, 1
     out  asig
endin

i1 0 2 7500 440.0
i1 1 1 [[line 0, p3, 7500]] [[kp5 expon 330.0, p3, 220.0]]

The last of these would be another way of achieving more score control over instruments without preparation. No. 3 from this list would probably enable Nos. 4, 5, and 6.

It might be difficult to both implement these suggestions in an elegant way and maintain backwards compatibility with the existing orchestra and score languages. In that case, I think as long as the engine supports modifying individual instances of instruments (and Michael's concept in no. 3), then we could provide two parallel options in Csound 6: 1) the traditional orc/sco model, and 2) a new language that blends synthesis and event control with greater flexibility than the traditional model and hopefully none of its baggage.
I'm sure all of this is vague -- I'd be happy to elaborate :)

Anthony

Steven Yi wrote on 2/7/09 3:20 PM:
> If I understand correctly, SAOL control events seem very limited in
> that they only change the value discretely. I saw no option to slowly
> change over time linearly or with any other curve. [...] this might be a case for
> having the variables names maintained so that variables can be queried
> by name, so that a single instrument could be used for any number of
> variables.