[Csnd] [OT] New Blog about Computer Music Design Theory
Date | 2010-03-29 19:11 |
From | Jacob Joaquin |
Subject | [Csnd] [OT] New Blog about Computer Music Design Theory |
I'm here to announce a new blog: Slipmat -- Computer Music Design Theories http://slipmat.noisepages.com/ Slipmat will cover broad topics related to musical language design, injected with my own personal theories and philosophies, rather than focusing on any particular language. Though there will certainly be much discussion of Csound. :) Best, Jake |
Date | 2010-03-29 20:38 |
From | Anthony Palomba |
Subject | [Csnd] Re: [OT] New Blog about Computer Music Design Theory |
Hey Jacob, I really like your idea. It makes working with csound very much like supercollider or common music. In fact I have been working on something similar. It would be great to have the readability of a score but be able to spawn algorithmic processes as well. Maybe also integrate some computation math library. Ultimately I assume the python script generates a .csd file that csound runs. -ap On Mon, Mar 29, 2010 at 1:11 PM, Jacob Joaquin <jacobjoaquin@gmail.com> wrote: I'm here to announce a new blog: |
Date | 2010-03-29 23:55 |
From | Jacob Joaquin |
Subject | [Csnd] Re: Re: [OT] New Blog about Computer Music Design Theory |
> Hey Jacob, I really like your idea. It makes working with csound very much
> like supercollider or common music. In fact I have been working on
> something similar. It would be great to have the readability of a score but
> be able to spawn algorithmic processes as well. Maybe also integrate some
> computation math library.

I'd love to hear more about what you're doing.

> Ultimately I assume the python script generates a .csd file that csound
> runs.

As of right now, the code you see is purely fantasy. So it does nothing. I have experimented with writing a language that translates into Csound, but the more I play with it, the more I think it would be better to rebuild a new language from nearly the ground up. Not that I'm qualified. :) Best, Jake |
Date | 2010-03-30 01:35 |
From | Michael Gogins |
Subject | [Csnd] Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
I have at various times written prototype systems of this sort. They are not easy to get right! My current take on this idea is to use the Synthesis Toolkit in C++ for unit generators, and possibly Csound opcodes wrapped up in C++ classes, wrap it all in Lua, and use jackdmp as the synthesizer "engine" to manage how the parts send signals to each other and run in parallel. This isn't running but I might have it running sometime this year -- or next -- if I don't change the design again. With this design, the musician would write both instruments and compositions in Lua. The basic idea would be fairly similar to what Jacob has outlined in Python, but Lua is a much better choice for this purpose. (I've tried prototypes in both.) Regards, Mike On Mon, Mar 29, 2010 at 6:55 PM, Jacob Joaquin |
Date | 2010-03-30 01:56 |
From | Greg Schroeder |
Subject | [Csnd] Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
(scratches head) @Jacob, This is genuine curiosity - what is the content distinction between Slipmat and your existing csound blog on the same site? Greg On Tue, Mar 30, 2010 at 9:35 AM, Michael Gogins <michael.gogins@gmail.com> wrote: I have at various times written prototype systems of this sort. They |
Date | 2010-03-30 02:30 |
From | Jacob Joaquin |
Subject | [Csnd] Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
> With this design, the musician would write both instruments and > compositions in Lua. The basic idea would be fairly similar to what > Jacob has outlined in Python, but Lua is a much better choice for this > purpose. (I've tried prototypes in both.) I just want to make a clarification: I'm not proposing that my final design be in Python. I'm just using the Python language as my starting point as I design this mock specification over the next 6-12 months. I'm in the middle of a project that will keep me from spending time prototyping a computer music system. Since I can't do that, despite the strong urge, I'm just going to be taking notes, and making these notes public. By doing this, I've already learned that I should read up on Lua. :) Best, Jake |
Date | 2010-03-30 02:35 |
From | Jacob Joaquin |
Subject | [Csnd] Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
> This is genuine curiosity - what is the content distinction between Slipmat > and your existing csound blog on the same site? It'll probably take a week or two for the two blogs to diverge, but The Csound Blog will be focused solely on issues relating to Csound. Slipmat will be a place where I will feel free to talk about sound servers, post supercollider code, common music, virtual machines, etc. There will be discussion of Csound as well, but it will be in a much larger context than whatever random Csound instrument I'm currently working on. Best, Jake |
Date | 2010-03-30 09:13 |
From | Victor Lazzarini |
Subject | [Csnd] Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
Or the SndObj library... pysndobj. On 30 Mar 2010, at 01:35, Michael Gogins wrote: > I have at various times written prototype systems of this sort. They |
Date | 2010-03-30 09:22 |
From | DavidW |
Subject | [Csnd] Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
you beat me to it Victor! an excellent approach IMO. Else what is really needed is a python > csound parser. David On 30/03/2010, at 7:13 PM, Victor Lazzarini wrote: > Or the SndObj library... pysndobj. |
Date | 2010-03-30 16:26 |
From | Anthony Palomba |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
Does a csound python SndObj library exist? This would be a wonderful thing to have. That way my python program becomes the score and I do not have to mess with any csd rendering. At that point Csound becomes like Common Music. My python score would be a combination of real-time events and spawned algorithmic processes. Can we please add this to the official wish list? -ap On Tue, Mar 30, 2010 at 3:22 AM, DavidW <vip@avatar.com.au> wrote: you beat me to it Victor! |
Date | 2010-03-30 16:34 |
From | Michael Gogins |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
What you are asking for has existed for years, in several different forms. See "A Csound Tutorial" and "A Csound Algorithmic Composition Tutorial" by me for a few examples. The basic options are: write Python to generate a score, and then...

(1) Shell out to run Csound.
(2) Import the csnd module and add notes to the csnd.CppSound object.
(3) Import the CsoundAC module and add notes to the CsoundAC.MusicModel or CsoundAC.Score objects.
(4) Use the Python opcodes inside a Csound instrument definition to send events to Csound from Csound.

SndObj is for sounds, not scores. Nevertheless, you can use any SndObj classes in Python in any of the scenarios above. Conversely, you can generate scores using any method above and then realize them using SndObj, although this would be more work than just using Csound; in particular, I am not sure how to manage polyphony in SndObj. Hope this helps, Mike On Tue, Mar 30, 2010 at 11:26 AM, Anthony Palomba |
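For readers who want to see the shape of option (1) concretely, here is a minimal sketch, not taken from either tutorial: Python generates a score file and shells out to the csound command. The orchestra file name and the instr 1 p-fields are placeholders for whatever orchestra you already have.

# A minimal sketch of option (1): use Python only to generate a score, then
# shell out to run Csound on it. Only the standard library is needed.
# "myorc.orc" and the instr 1 p-fields (amp, freq) are placeholders.
import random
import subprocess

lines = ["f 1 0 16384 10 1"]              # a sine table, assuming instr 1 uses it
for i in range(16):
    start = i * 0.25
    freq = random.choice([220, 275, 330, 440])
    lines.append("i 1 %g 0.5 0.2 %g" % (start, freq))
lines.append("e")

with open("generated.sco", "w") as f:
    f.write("\n".join(lines) + "\n")

# Same as typing "csound -o out.wav myorc.orc generated.sco" at the shell.
subprocess.call(["csound", "-o", "out.wav", "myorc.orc", "generated.sco"])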
Date | 2010-03-30 16:37 |
From | Victor Lazzarini |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
No, SndObj is a general-purpose audio processing library, with python bindings (PySndObj), a separate and altogether different thing from Csound. Victor On 30 Mar 2010, at 16:26, Anthony Palomba wrote: Does a csound python SndObj library exist? |
Date | 2010-03-30 16:47 |
From | Anthony Palomba |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
Hello Michael, yes I am familiar with many of your examples. As I understand it, all these methods require that a csd file be created and then rendered, which is not a real-time process. What I am proposing is an environment where the python script can spawn real-time events. There would be no need to maintain a separate csd file and python program. As far as I know the python csound interface does not allow you to do that. Is that a correct assertion? -ap On Tue, Mar 30, 2010 at 10:34 AM, Michael Gogins <michael.gogins@gmail.com> wrote: What you are asking for has existed for years, in several different |
Date | 2010-03-30 16:53 |
From | Michael Gogins |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
No, you CAN send notes to Csound from Python in real time. There is an example of this in the csound/examples/python folder. There are several functions, but the easiest to use is: csnd.CppSound.inputMessage(scoreline) Hope this helps, Mike On Tue, Mar 30, 2010 at 11:47 AM, Anthony Palomba |
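As a rough illustration of the real-time case, here is a sketch only, assuming the Csound 5 csnd bindings and their CsoundPerformanceThread helper; exact names may differ between versions, and "live.csd" stands in for a CSD whose instr 1 takes amplitude and frequency.

# Sketch: Csound renders in its own thread while Python sends score events.
import time
import csnd

cs = csnd.Csound()
cs.Compile("live.csd")                    # placeholder CSD; score can be empty/held open
perf = csnd.CsoundPerformanceThread(cs)   # renders audio in a separate thread
perf.Play()

# The Python script now acts as the score: send i-statements whenever we like.
for beat in range(8):
    perf.InputMessage("i 1 0 0.5 0.3 %d" % (220 + 55 * (beat % 4)))
    time.sleep(0.5)

perf.Stop()
perf.Join()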
Date | 2010-03-30 17:15 |
From | Anthony Palomba |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
I know I can send messages to a csound instrument via python. I have already reviewed the example you mentioned. What I am proposing is an environment where the csd file and python program are the same thing. And I don't mean a csd file pasted inside a python script, or loading a csd file. We are talking about creating a computer music language environment that gives me access to csound opcodes that I can use to create a python score. Basically what I am describing is a system that works like Common Music. I would actually be using Common Music but its csound support is pretty lacking. While the existing solution we have gets the job done, it really does not compare in elegance and simplicity. In fact, getting python to work with csound is a lot of work. Just because you are used to it does not mean there might not be a better solution. -ap On Tue, Mar 30, 2010 at 10:53 AM, Michael Gogins <michael.gogins@gmail.com> wrote: No, you CAN send notes to Csound from Python in real time. There is an |
Date | 2010-03-30 17:21 |
From | Peiman Khosravi |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
What about blue? Not quite what you have in mind but as you know one can use jython inside blue (don't even need to have jython/python installed). I find that the easiest way... Best, Peiman On 30 Mar 2010, at 17:15, Anthony Palomba wrote: I know I can send messages to a csound instrument via python. |
Date | 2010-03-30 17:22 |
From | Jacob Joaquin |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
> What I am proposing is an environment where csd file and > python program are the same thing. And I don't mean > a csd file pasted inside a python script, or loading a csd file. I really do want to hear more. Sounds as if there is much overlap between our two concepts. Best, Jake |
Date | 2010-03-30 17:40 |
From | Michael Gogins |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
Ok, finally I understand you. We need either a new language or the ability to compile an instrument template in running csound by evaluating python. A new music language would be best because it would not have to fit csound's assumptions. The csound opcodes and those of other systems could probably be wrapped and reused. MKG from cell phone On Mar 30, 2010 12:16 PM, "Anthony Palomba" <apalomba@austin.rr.com> wrote: |
Date | 2010-03-31 14:18 |
From | Jacob Joaquin |
Subject | [Csnd] Re: Re: [OT] New Blog about Computer Music Design Theory |
> It would be great to have the readability of a score but be able to spawn > algorithmic processes as well. Anthony, I posted a rudimentary example that shows what a score might look like with stand-alone events combined with a generative process. See example 4: Coding in Time with the @ Scheduler http://slipmat.noisepages.com/2010/03/coding-in-time-with-the-scheduler/ Though def hat_eights() is not technically algorithmic in nature, it would be possible to make it so. In theory. Best, Jake |
Date | 2010-03-31 15:19 |
From | Steven Yi |
Subject | [Csnd] Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
Just some random thoughts: I've thought a good bit about synthesis system designs over the years, partially from having to figure out ways to support features in blue together with going through Csound. A design I had come up with was to have a generic engine that just ticked away, with everything being a sub-module of that, whether it's a Csound-style instrument, a timeline (or Csound SCO), a script, etc. I would imagine having a timeline object to which one would in turn attach score objects or script objects, UI objects, etc. Communications would travel either over a generic message bus, by advertising objects in a global memory space, or simply by passing references when building up everything. With an open internal design like this, other things like video mixing could just be another node.

I imagine most commercial software has this kind of division. Csound is close, but it ties the timeline of a SCO in somewhat tightly, as well as things like variables and instrument state in the Csound object.

The idea of a generic system would mean that one could employ it in whatever way one wanted, whether it was for a commandline system or a GUI program. One could then write scripts that use this generic engine to fine-tune things. Multiple languages could be constructed on top of the same engine, or, if the modules to parse scripts are done separately, could be mixed.

i.e. a typical script to run it would be:

engine = Engine()
timeline = Timeline()
engine.attach(timeline)

mem = MemorySystem()  # for variables

# parse Csound-style sco, attach to timeline
CsoundScore(timeline, "someFile.sco")

# parse Csound-style orc, attach to engine
CsoundOrc(engine, mem, "someFile.orc")

engine.run()
engine.wait()
engine.close()

Then a utility commandline could just be made to run this like csound does today, using orc/sco or CSD.

To move it further, one could manually do things in realtime via the nodes attached to the timeline:

class MyAlgorithmicScoreGenerator:
    # [constructor that takes in the Timeline object]
    def tick(self):
        # [do a bunch of score generation]
        for event in generatedEvents:   # where event is a tickable
            timeline.insert(event.start, event.end, event)
        timeline.remove(self)

timeline.insert(40, -1, MyAlgorithmicScoreGenerator())

By separating out the different parts of the system and clearly defining the roles of each, one can work with the objects as one wishes. Utility scripts that pre-create and organize objects could be built up so that end users could run the engine like Csound today, or build other languages or applications on top of it. Ideally, a system like this could be used to build an app like PD, Csound, or SuperCollider, as much as it could be used to build commercial sequencers like Cubase or Logic.

I had meant to prototype something like this in Java for a while but never got around to it. I would choose Java today since the JVM has so many good scripting languages built on top of it (Jython, JRuby, Clojure, Groovy, etc.) and since it has strong value for long-term software (protected from hardware changes through the VM, large business support, GPL, etc.). That would make sense for my needs, though C++ would probably be more of interest to those wanting all the extra power one can muster.

A nice thing about all this is that, for something like Jake is talking about, this kind of system could be used as a platform to build a synthesis language.

On Wed, Mar 31, 2010 at 9:18 AM, Jacob Joaquin |
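To make the design concrete, here is a small runnable Python sketch of the engine/timeline/tickable idea; every name in it (Engine, Timeline, BurstGenerator, tick) is invented for illustration and is not an existing library or Steven's actual code.

# Toy model of the generic engine idea: the engine ticks every attached
# module; the timeline is just one tickable; scheduled payloads may be
# plain events or tickables that generate more events.

class Engine:
    def __init__(self):
        self.modules = []

    def attach(self, module):
        self.modules.append(module)

    def run(self, total_ticks):
        for now in range(total_ticks):
            for module in list(self.modules):
                module.tick(now)


class Timeline:
    # Holds (start, duration, payload) entries and fires them at their start tick.
    def __init__(self):
        self.events = []

    def insert(self, start, duration, payload):
        self.events.append((start, duration, payload))

    def remove(self, payload):
        self.events = [e for e in self.events if e[2] is not payload]

    def tick(self, now):
        for start, duration, payload in list(self.events):
            if start == now:
                if hasattr(payload, "tick"):
                    payload.tick(now)          # payload is itself a tickable
                else:
                    print("t=%d  fire %s" % (now, payload))


class BurstGenerator:
    # A tickable that, when fired, schedules a burst of notes and removes itself.
    def __init__(self, timeline):
        self.timeline = timeline

    def tick(self, now):
        for i in range(4):
            self.timeline.insert(now + 1 + i, 1, "note %d" % i)
        self.timeline.remove(self)


engine = Engine()
timeline = Timeline()
engine.attach(timeline)
timeline.insert(0, 1, "hello")                     # a literal event
timeline.insert(2, -1, BurstGenerator(timeline))   # a generative process
engine.run(10)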
Date | 2010-03-31 17:44 |
From | Anthony Palomba |
Subject | [Csnd] Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
Well, I think it would be hard to create an API that would be generic enough to support everyone's interface, and, on top of that, to support all the various language bindings (Java, Lua, Python, etc.) that would want to use it. To go back to what Jacob originally suggested: it would be great to have a computer music language that allowed you to express a musical score, one that makes it easy to define start times of gestures and processes. The whole orc/sco model is pretty antiquated and cumbersome. Basically we would need a python interface that exposed all opcodes (no small task I am sure), and a way to build the csound signal chain in real time. So if we took Jacob's example function we might get the following...

def sine_arp(dur, amp, pitch, lfo_freq):

With some python macro definitions, the python script/score might look something like this...

@0 VolumeCurve(exponentialdecay, start, end)

Anthony |
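Purely as a hypothetical illustration of the kind of unified script/score being described here (sine_arp is borrowed from the fragment above; everything else is invented, and no real Csound API is touched), a plain-Python version might build its events like this:

# Hypothetical sketch: the Python script acts as the score. The i-statement
# fields assume an imaginary "instr 1" taking amp, pitch (pch), and LFO rate.

def sine_arp(start, dur, amp, pitch, lfo_freq):
    # Return a Csound i-statement for the hypothetical instr 1.
    return "i 1 %g %g %g %g %g" % (start, dur, amp, pitch, lfo_freq)

events = [
    sine_arp(0.0, 2, 0.3, 8.00, 5),    # stand-alone events...
    sine_arp(0.5, 2, 0.3, 8.07, 6),
]

for i in range(8):                     # ...mixed with a generative process
    events.append(sine_arp(4 + i * 0.5, 0.5, 0.2, 8.00 + 0.01 * (i % 4), 7))

print("\n".join(events))               # paste into a .sco or send line-by-line to Csound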
Date | 2010-03-31 18:34 |
From | Steven Yi |
Subject | [Csnd] Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
Well, I don't think it would be difficult for an API to be created, at least from what I've thought through in my head. If it were C++ based, then using SWIG would handle bindings to other languages, so that's not too complicated. For ORC/SCO, I don't think it's bad if you think of it instead as Tickable/Schedule. If SCO is just a call to a timeline to insert an event at a start time and duration with params, and any tickable can add more to the timeline, then you'd essentially get what you are talking about. With a timeline just being another object on the engine's list of tickables, one can have a synthesis tickable on the top-level engine that just runs, or one can have it be a sub-object of a timeline. The timeline can also call back to the engine to stop it when the timeline is done (the way csound does when it runs out of score), or it could repeat, or whatever. (This is an area of Csound's engine that could get revised, IMO.) Csound itself has this capability with the event opcode, but it can only schedule pre-defined instruments. In a generic engine, you could change it to create an object at that time and schedule the object, rather than just a note saying what pre-defined instrument to call at that time. Also, coding score gestures in Csound ORC code isn't the most straightforward thing to do.

As for language vs. GUI, I disagree; sometimes one is better than the other, IMO. For expressiveness and precision, text is great. For visualization and manipulation of data, I prefer GUIs. That is part of blue's design, to try to leverage the best interface (text or GUI) where appropriate. For example, I think blue's mixer and effects system gives an easy way to organize connections between instruments and effects, as well as to manipulate effect parameters. I much prefer this over trying to do it in code. However, for score work, I almost always use python, because a script is the easiest way for me to express what I am trying to do. I also think coding an instrument is easier in code, but visualizing and manipulating parameters is much easier with a GUI. This is partially why I chose the design for BlueSynthBuilder instruments and didn't go a Reaktor-style route where one would have to use a GUI to hook up components. (It's also why I like MacCsound/QuteCsound's design for single standalone projects.)

I think weighing the benefits and drawbacks of both types of interfaces helps one find where each can be better used than the other, and ultimately helps to focus on the music and not the interface during the music-making process. On Wed, Mar 31, 2010 at 12:44 PM, Anthony Palomba |
Date | 2010-03-31 18:39 |
From | Victor Lazzarini |
Subject | [Csnd] Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
I like your reasoning. I would prefer to call the 'tick()' function do() or process() or DoProcess(), though... ;) Victor On 31 Mar 2010, at 15:19, Steven Yi wrote: > Just some random thoughts: I've thought a good bit about synthesis > system designs over the years... |
Date | 2010-03-31 19:13 |
From | Steven Yi |
Subject | [Csnd] Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
:) I'm all for any name that makes sense! Sidenote: I had started a Java project to experiment with this idea a long while back but didn't spend much time on it. I may try again to use it as an experimental test-ground for engine ideas, though I don't know if I'd try it now in Java or C++. If I set up something I'll email this list for anyone curious. :P On Wed, Mar 31, 2010 at 1:39 PM, Victor Lazzarini |
Date | 2010-03-31 19:17 |
From | Michael Gogins |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
I think "tick" is appropriate in both the technical and the poetic senses. We are talking about a "synchronous" data flow graph, therefore "tick" reminds us that things happen synchronously. Poetically, it's what computer music people have often called this. If you propose an asynchronous design, then "process" would be better. Regards,. Mike On Wed, Mar 31, 2010 at 2:13 PM, Steven Yi |
Date | 2010-03-31 19:27 |
From | Michael Gogins |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
FWIW, after doing several prototypes of such systems I have the following list of problems that designers should solve:

(1) Plugin unit generators.
(2) Dynamic voice allocation, AKA automatic polyphony.
(3) Transparent multi-threading in the signal flow graph.
(4) Real-time safety in the signal flow graph.
(5) Absolutely as simple an interface as the problems will allow.
(6) Multiple sample frames per tick, but at multiple rates, i.e. some units run at different rates compared to others.
(7) Arbitrary arguments/parameters to units and connecting units.
(8) Inject new instruments/units at run time.

In my various prototypes I solved (1), (2), (3), (5), and (7); did not care about (6), but I do think it is important; did not solve (4), but I think it can be done using a custom memory allocator together with an "open" pre-tick call and a "close" post-tick call; and did not solve (8), but I also think that is important. Regards, Mike On Wed, Mar 31, 2010 at 2:17 PM, Michael Gogins |
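As a toy illustration of problem (2), dynamic voice allocation, here is a sketch under invented names; it is not code from any of the prototypes mentioned.

# Toy dynamic voice allocation: one voice object per note-on, reclaimed when
# the note ends, so polyphony grows and shrinks automatically.

class Voice:
    def __init__(self, freq, dur_ticks):
        self.freq = freq
        self.remaining = dur_ticks

    def tick(self):
        # A real voice would render one block of audio here.
        self.remaining -= 1
        return self.remaining > 0          # False once the voice has finished


class VoiceAllocator:
    def __init__(self):
        self.voices = []

    def note_on(self, freq, dur_ticks):
        self.voices.append(Voice(freq, dur_ticks))   # a new voice per note

    def tick(self):
        # Keep only the voices that are still sounding.
        self.voices = [v for v in self.voices if v.tick()]


alloc = VoiceAllocator()
alloc.note_on(440.0, 3)
alloc.note_on(660.0, 5)
for t in range(6):
    alloc.tick()
    print("tick %d: %d active voices" % (t, len(alloc.voices)))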
Date | 2010-04-05 23:24 |
From | Anthony Palomba |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
So is there a way with the existing csound python interface to get an OnTick() message or some clock pulse? -ap On Wed, Mar 31, 2010 at 1:27 PM, Michael Gogins <michael.gogins@gmail.com> wrote: FWIW, after doing several prototypes of such systems I have the |
Date | 2010-04-06 14:15 |
From | Michael Gogins |
Subject | [Csnd] Re: Re: Re: Re: Re: Re: Re: Re: [OT] New Blog about Computer Music Design Theory |
Yes, if your API host calls PerformKsmps. Then you can call your own callback immediately before that. Regards, Mike On Mon, Apr 5, 2010 at 6:24 PM, Anthony Palomba |
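A rough sketch of that pattern, assuming the Csound 5 csnd bindings (Compile, PerformKsmps, InputMessage, Reset); exact names can vary by version, and "clocked.csd" is a placeholder:

# The host drives performance one ksmps block at a time and calls its own
# callback before each block, giving Python a per-tick clock pulse.
import csnd

def on_tick(tick, cs):
    # Runs once per control period.
    if tick == 200:
        cs.InputMessage("i 1 0 1 0.5 440")   # e.g. inject an event at tick 200

cs = csnd.Csound()
if cs.Compile("clocked.csd") == 0:
    tick = 0
    while True:
        on_tick(tick, cs)
        if cs.PerformKsmps() != 0:           # non-zero once the score has finished
            break
        tick += 1
cs.Reset()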
Date | 2010-04-07 05:31 |
From | rasputin |
Subject | [Csnd] ChucK? was re New Blog about Computer Music Design Theory |
Is ChucK somewhat related to this concept? I don't get the feeling it's being worked on anymore, but it did seem to be more of a realtime tool. (http://chuck.cs.princeton.edu; discussion forum at http://electro-music.com/forum/forum-140.html) It still has a small but active user community. Steven Yi wrote: > Just some random thoughts: I've thought a good bit about synthesis > system designs over the years... |
Date | 2010-04-07 16:30 |
From | Jacob Joaquin |
Subject | [Csnd] Re: ChucK? was re New Blog about Computer Music Design Theory |
> Is ChucK somewhat related to this concept? I don't get the feeling > it's being worked on anymore, but it did seem to be more of a > realtime tool. I went through a ChucK phase about 2 years ago. Loved it. Though I eventually set it aside, as I felt it needed a little bit more maturing. I don't think there's been an update since. Might have something to do with Dr. Wang founding the company smule (http://www.smule.com/). It's worth downloading and playing with for a couple of weeks. There are a lot of cool things about ChucK. Best, Jake |
Date | 2010-04-07 16:34 |
From | Steven Yi |
Subject | [Csnd] Re: ChucK? was re New Blog about Computer Music Design Theory |
Not that I'm aware of, though I haven't gone through the internals of ChucK's engine. From what I understand from playing with it a little bit and the posted goals for realtime scheduling, I would venture to say it's not related. On Wed, Apr 7, 2010 at 12:31 AM, rasputin |
Date | 2010-04-07 16:50 |
From | Anthony Palomba |
Subject | [Csnd] Re: Re: ChucK? was re New Blog about Computer Music Design Theory |
ChucK is great for realtime performance; I could see it being good for a laptop orchestra. But when it comes to describing gestures or a score, ChucK is pretty limited. There are many things out there that do a better job. -ap On Wed, Apr 7, 2010 at 10:34 AM, Steven Yi <stevenyi@gmail.com> wrote: Not that I'm aware of, though I haven't gone through the internals of |
Date | 2010-04-07 19:20 |
From | Brian Redfern |
Subject | [Csnd] Re: Re: Re: ChucK? was re New Blog about Computer Music Design Theory |
I saw a nice use of chuck at a rave, the "dj" was flipping between different shreds on his laptop and tweaking them with a midi controller. On Wed, Apr 7, 2010 at 8:50 AM, Anthony Palomba |