All,
I use Csound to create music. I start with a MIDI keyboard connected to
a computer program that maps the 12 tones of the keyboard to an
arbitrary range of pitches, mostly based on Just Intonation. I noodle
around until I find something interesting, then write it down on music
manuscript paper. Then I transcribe it into a text file that feeds a
macro preprocessor I wrote, which generates the Csound input files. The
macro preprocessor supports my compositional aesthetic, which is based
on improvisation. I map the decisions a musician would make while
improvising with other musicians into indeterminate calculations, then
repeat.
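
To give a flavor of what I mean, here is a rough sketch in Python. It
is not my actual preprocessor; the ratio set, the weights, and the
instrument layout are just placeholders. It maps keyboard keys onto a
Just Intonation scale over a fixed fundamental, makes weighted random
choices the way an improviser leans toward certain notes and durations,
and prints Csound score "i" statements for a hypothetical instr 1 that
takes frequency in p4.

import random

# One octave of Just Intonation ratios above a fundamental (placeholder set).
FUNDAMENTAL_HZ = 264.0
JI_RATIOS = [1/1, 16/15, 9/8, 6/5, 5/4, 4/3, 45/32,
             3/2, 8/5, 5/3, 9/5, 15/8]

def key_to_hz(midi_note, fundamental=FUNDAMENTAL_HZ):
    """Map a MIDI note number to a Just Intonation pitch in Hz."""
    octave, degree = divmod(midi_note - 60, 12)  # 60 = middle C = the fundamental
    return fundamental * (2 ** octave) * JI_RATIOS[degree]

def improviser_choice(options, weights):
    """An indeterminate decision: pick one option, weighted like an improviser's habits."""
    return random.choices(options, weights=weights, k=1)[0]

# Emit a few Csound score lines for a hypothetical instr 1 (p4 = frequency in Hz).
start = 0.0
for _ in range(8):
    note = improviser_choice([60, 62, 64, 67, 69], [5, 3, 3, 2, 1])  # lean toward the tonic
    dur = improviser_choice([0.25, 0.5, 1.0], [2, 3, 1])
    print(f"i1 {start:.3f} {dur:.3f} {key_to_hz(note):.3f}")
    start += dur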
What I would like to see is the ability to map these actions onto a
visual representation of computer-generated performers on a screen, kind
of like MTV for the cyber set. Take a look at some of the VRML demos at
http://ligwww.epfl.ch/~babski/StandardBody/mpeg4/mpeg4.html. Wouldn't
it be interesting if the improvisations that end up as Csound
realizations in .wav files could also be rendered as VRML musicians
playing imaginary instruments?
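
To make the idea concrete, here is a toy sketch, again in Python, of
the sort of thing I imagine. Nothing like this exists yet; the sphere
stands in for one of those MPEG-4 bodies, and the file name and the
one-meter bob are made up. It takes the same note start times and
durations that go into a Csound score and writes a VRML 2.0 world in
which a placeholder performer bobs at each note onset.

def events_to_vrml(events, outfile="performer.wrl"):
    """Write a VRML 2.0 world where a placeholder performer bobs at each note onset.
    events: non-overlapping (start_seconds, duration_seconds) pairs from the score."""
    total = max(s + d for s, d in events)
    keys, values = [], []
    for start, dur in events:
        for t, y in ((start, 0.0), (start + dur / 2, 1.0), (start + dur, 0.0)):
            keys.append(t / total)           # TimeSensor fractions run 0..1
            values.append(f"0 {y} 0")        # bob one meter up and back down
    with open(outfile, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("DEF Performer Transform {\n"
                "  children Shape { geometry Sphere {} }\n}\n")
        f.write(f"DEF Clock TimeSensor {{ cycleInterval {total} loop TRUE }}\n")
        f.write("DEF Bob PositionInterpolator {\n"
                f"  key [ {' '.join(f'{k:.4f}' for k in keys)} ]\n"
                f"  keyValue [ {', '.join(values)} ]\n}}\n")
        f.write("ROUTE Clock.fraction_changed TO Bob.set_fraction\n")
        f.write("ROUTE Bob.value_changed TO Performer.set_translation\n")

events_to_vrml([(0.0, 0.5), (0.5, 0.5), (1.0, 1.0), (2.0, 0.5)])

Drop the resulting .wrl file into any VRML 2.0 browser and the figure
moves in time with the notes.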
Prent Rodgers
Mercer Island, WA
"Its cold, but its a damp cold." |