It would be interesting to see the code for your realtime system, could you upload a copy somewhere?

Oeyvind

2008/4/25, Tim Mortimer :
>
> Hi Oeyvind,
>
> I did something along these lines a few months back, & have revived it to a
> degree by simply plonking relevant data into ftables by hand, & triggering /
> dispersing the patterns via MIDI while i begin consolidating the whole
> setup.
>
> i was experimenting with some FLTK-fronted interactivity & pattern
> definition for this last week, & got to a point where i thought "this has
> potential" & started looking at wx again for some cataloguing & databasing
> interfacing ideas.
>
> like a lot of my development, it gets to the point where i feel the concept
> is proven (or abandoned, but fortunately not in this case) & i start
> investigating how to push it forward from there.
>
> the nasty part with my latest prototype is that it's driving a preset-capable
> custom SDIF-data-based additive synthesis instrument that uses adsynt!
> hardly an illustrative inroad, i assure you...
>
> #include is my new friend, & the prototype clocks in at the moment at
> around 2000 lines of csound code.
>
> how do i intend to proceed? either by passing my Python dictionary data to
> csound via hosting the wx interface using pyops, or finally embracing the
> API wholeheartedly in Python & going for broke - possibly winding up with
> something that looks a bit like ImproSculpt, but that ties in to all my
> existing Parseval .txt-file-based "tracker" score format.
> my aim is to jam patterns, but record / publish them to an editable
> Parseval score.
>
> the Parseval score format already supports most of the conditional duration
> statements based on various polyphony "models" (monophonic, or "keyboard
> based" polyphony models - one voice per producible "note" / "pitch") & i
> have begun looking at how to track the motific & pattern usage throughout
> the score.
>
> things are very fluid here at the moment - the realisation that the whole
>
> i 10.1 n n
> i 10.2 n n
> i 10.3 n n
>
> polyphony thing was completely arbitrarily assignable
>
> i 10.13 n n
> i 10.27 n n
> i 10.3 n n
>
> is going to turn a lot of my work to date on its head.
>
> & i'm abandoning voice leading for the moment in favour of a more
> "feldmanesque" approach to pitch classes, in part because voice-leading /
> contrapuntal strategies were imposing too much of an implied rhythmic basis
> that ultimately i'm trying to move away from...
>
> my aim going forward is to prototype the interfacing & the more "Common
> Music / Athena"-like aspects by driving very simple csound instruments -
> this will perhaps be the more "bare bones" example you may be looking for.
>
> stay tuned, & i most certainly welcome your potential involvement & a
> consolidation of some of our ideas, Oeyvind.
>
> I better press send now before my PC crashes... sorry for any typos...
>
>
> Oeyvind Brandtsegg-2 wrote:
> >
> > Not to answer your specific question,
> > but to ask about the realtime setup:
> > How do you dispatch realtime events to Csound in your setup?
> > I have a "barebones" setup for realtime algorithmic composition here:
> > http://oeyvind.teks.no/ftp/barebones.zip
> >
>
> -----
> *******************
> www.phasetransitions.net
> hermetic music * python * csound * possibly mindless ranting
> various werk in perpetual delusions of progress....
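[Editor's note: the fractional-p1 idea in the quoted email can be sketched in a few lines of Python. In Csound score syntax the fractional part of p1 distinguishes instances of the same instrument, so `i 10.1` and `i 10.2` are two voices of instr 10 - and, as the post observes, the particular suffixes are arbitrary as long as each voice keeps a consistent one. The `allocate_voices` helper below is an invented illustration of the "keyboard based" model (one voice per producible pitch), not part of Parseval or of any Csound API.]

```python
def allocate_voices(notes, instr=10):
    """Assign each distinct pitch its own fractional voice of `instr`
    and emit Csound 'i' statements (the "keyboard based" polyphony model:
    one voice per producible pitch).

    notes: iterable of (start, dur, pitch) tuples; pitch is kept verbatim
    (e.g. the pch-notation string "8.00") and also used as the voice key.
    """
    voice_of = {}    # pitch -> fractional suffix; the numbering is arbitrary,
    next_voice = 0   # but each suffix must stay numerically distinct
    lines = []
    for start, dur, pitch in sorted(notes):
        if pitch not in voice_of:
            next_voice += 1
            voice_of[pitch] = next_voice
        lines.append(f"i {instr}.{voice_of[pitch]} {start} {dur} {pitch}")
    return lines

# Two simultaneous pitches get voices .1 and .2; the returning pitch
# reuses its original voice.
for line in allocate_voices([(0, 1, "8.00"), (0, 1, "8.04"), (1, 1, "8.00")]):
    print(line)
```

Because the suffixes are just labels, a mapping that produced `10.13` and `10.27` instead of `10.1` and `10.2` would be equally valid - which is the "completely arbitrarily assignable" realisation in the email.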
> --
> View this message in context: http://www.nabble.com/real-time-GUI-for-Python---csound-algorithmic-%22atomic-elements%22-database-%28or-something...%29-tp16886575p16894906.html
> Sent from the Csound - General mailing list archive at Nabble.com.


Send bug reports to this list.
To unsubscribe, send email to sympa@lists.bath.ac.uk with body "unsubscribe csound"
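[Editor's note: on Oeyvind's question of how to dispatch realtime events to Csound, one common route from Python is the Csound API's input-message call. The sketch below is a hedged illustration, not the code from barebones.zip: `score_event` is an invented helper, and the `ctcsound` calls (`compileCsdText`, `start`, `inputMessage`, `performKsmps`) match the current Python binding - the 2008-era `csnd` binding spelled these differently.]

```python
# Minimal sketch: build Csound 'i' statements in Python and (optionally)
# push them into a running Csound instance via the API.

CSD = """
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
instr 10
  asig oscili 0.2, cpspch(p4)
  outs asig, asig
endin
</CsInstruments>
<CsScore>
f 0 3600
</CsScore>
</CsoundSynthesizer>
"""

def score_event(instr, start, dur, *pfields):
    """Format a Csound 'i' statement, e.g. score_event(10.1, 0, 1, "8.00")."""
    fields = [instr, start, dur, *pfields]
    return "i " + " ".join(str(f) for f in fields)

RUN_REALTIME = False  # flip on where Csound and ctcsound are installed

if RUN_REALTIME:
    import ctcsound               # Python binding for the Csound API
    cs = ctcsound.Csound()
    cs.compileCsdText(CSD)        # compile the orchestra above
    cs.start()
    cs.inputMessage(score_event(10.1, 0, 1, "8.00"))  # dispatch an event
    while cs.performKsmps() == 0: # run the audio loop
        pass
```

The point of the pattern is that the composing process stays entirely in Python (dictionaries, pattern generators, a wx GUI) and only flat score strings cross into Csound, which keeps the two layers loosely coupled.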