Csound Csound-dev Csound-tekno Search About

another csound environment - advice needed

Date2006-03-13 18:10
FromAtte André Jensen
Subjectanother csound environment - advice needed
Hi

I mainly use csound live, running on a linux laptop and controlled from 
3 usb keyboards. My current setup is quite flexible, very stable and 
devided into three parts:

1) Csound. Instruments are split into seperate files. Same with macro 
defs, table defs and pgmassign's. A bash script assembles the resulting 
.csd and runs csound on it. Csound reads realtime midi from the alsa 
midi-through device.

2) Python. I wrote a python script that uses pyseq to split incomming 
midi based on zones (combination of midi key range + midi channel) to 
midi channels. It also filters a few controllers and can send initial 
program change + volume for each zone.

3) Bash. The Python script is started from a bash script, one for each 
setup. This also connects the input of pyseq to the usb keyboards (each 
sending on different midi channels) and the output of pyseq to the 
midi-through device.

Although this works fine, it's tricky to generate setups, and the whole 
thing is just not so inviting to work with. So I'm thinking hard about 
redoing the whole thing in a single python script with gui. This script 
and csound would be all that's required, communication from python to 
csound is done over midi messages via alsa. Is should be a snap to 
generate setups, and instrument names should be used in the zone part.

Now, my question is. it this a clever way to go? Should I go for closer 
integration and what would it take to get rid of the via-alsa-midi 
communication? I'd prefer python over C/C++ since this is what I know 
and it's soo nice to work in. I believe Iain has done something similar 
with C++, maybe a posibility would be to add modules in C++ that do the 
csound-communication-part... Any ideas, suggestions, warnings, 
inspiration, advice are most welcome...

-- 
peace, love & harmony
Atte

http://www.atte.dk

Date2006-03-14 17:10
FromAndres Cabrera
SubjectRe: another csound environment - advice needed
Hi,
I haven't tried this, but maybe OSC is a better way to communicate than
MIDI. I've seen some implementation of OSC for python somewhere, but I
don't know where.
I'd like to hear if you're succesful with this.

Cheers,
Andrés

On Mon, 2006-03-13 at 19:10 +0100, Atte André Jensen wrote:
> Hi
> 
> I mainly use csound live, running on a linux laptop and controlled from 
> 3 usb keyboards. My current setup is quite flexible, very stable and 
> devided into three parts:
> 
> 1) Csound. Instruments are split into seperate files. Same with macro 
> defs, table defs and pgmassign's. A bash script assembles the resulting 
> .csd and runs csound on it. Csound reads realtime midi from the alsa 
> midi-through device.
> 
> 2) Python. I wrote a python script that uses pyseq to split incomming 
> midi based on zones (combination of midi key range + midi channel) to 
> midi channels. It also filters a few controllers and can send initial 
> program change + volume for each zone.
> 
> 3) Bash. The Python script is started from a bash script, one for each 
> setup. This also connects the input of pyseq to the usb keyboards (each 
> sending on different midi channels) and the output of pyseq to the 
> midi-through device.
> 
> Although this works fine, it's tricky to generate setups, and the whole 
> thing is just not so inviting to work with. So I'm thinking hard about 
> redoing the whole thing in a single python script with gui. This script 
> and csound would be all that's required, communication from python to 
> csound is done over midi messages via alsa. Is should be a snap to 
> generate setups, and instrument names should be used in the zone part.
> 
> Now, my question is. it this a clever way to go? Should I go for closer 
> integration and what would it take to get rid of the via-alsa-midi 
> communication? I'd prefer python over C/C++ since this is what I know 
> and it's soo nice to work in. I believe Iain has done something similar 
> with C++, maybe a posibility would be to add modules in C++ that do the 
> csound-communication-part... Any ideas, suggestions, warnings, 
> inspiration, advice are most welcome...
> 
> -- 
> peace, love & harmony
> Atte
> 
> http://www.atte.dk
> -- 
> Send bugs reports to this list.
> To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
> 
> 
> ,

Date2006-03-15 09:33
FromOeyvind Brandtsegg
SubjectRe: another csound environment - advice needed
I'm not sure how much difference it does in latency,
but I prefer to let the (realtime) control signals go as short a route as possible.
Sending the midi data via python seems to be such an extra step,
and in the setup I'm building, I think I will let csound handle the midi input (and routing, mapping, initializing), since csound is the application that will need to get the data with a shortest possible latency. I will split the midi data stream, and then forward any midi data that should undergo any algorithmic processing to python. The way I see it, realtime audio control data tolerates less latency than algorithmic and larger scale control data.

Oeyvind





> From: Andres Cabrera [andres@geminiflux.com]
> Sent: 2006-03-14 18:10:06 CET
> To: csound@lists.bath.ac.uk
> Subject: Re: [Csnd] another csound environment - advice needed
> 
> Hi,
> I haven't tried this, but maybe OSC is a better way to communicate than
> MIDI. I've seen some implementation of OSC for python somewhere, but I
> don't know where.
> I'd like to hear if you're succesful with this.
> 
> Cheers,
> Andrés
> 
> On Mon, 2006-03-13 at 19:10 +0100, Atte André Jensen wrote:
> > Hi
> > 
> > I mainly use csound live, running on a linux laptop and controlled from 
> > 3 usb keyboards. My current setup is quite flexible, very stable and 
> > devided into three parts:
> > 
> > 1) Csound. Instruments are split into seperate files. Same with macro 
> > defs, table defs and pgmassign's. A bash script assembles the resulting 
> > .csd and runs csound on it. Csound reads realtime midi from the alsa 
> > midi-through device.
> > 
> > 2) Python. I wrote a python script that uses pyseq to split incomming 
> > midi based on zones (combination of midi key range + midi channel) to 
> > midi channels. It also filters a few controllers and can send initial 
> > program change + volume for each zone.
> > 
> > 3) Bash. The Python script is started from a bash script, one for each 
> > setup. This also connects the input of pyseq to the usb keyboards (each 
> > sending on different midi channels) and the output of pyseq to the 
> > midi-through device.
> > 
> > Although this works fine, it's tricky to generate setups, and the whole 
> > thing is just not so inviting to work with. So I'm thinking hard about 
> > redoing the whole thing in a single python script with gui. This script 
> > and csound would be all that's required, communication from python to 
> > csound is done over midi messages via alsa. Is should be a snap to 
> > generate setups, and instrument names should be used in the zone part.
> > 
> > Now, my question is. it this a clever way to go? Should I go for closer 
> > integration and what would it take to get rid of the via-alsa-midi 
> > communication? I'd prefer python over C/C++ since this is what I know 
> > and it's soo nice to work in. I believe Iain has done something similar 
> > with C++, maybe a posibility would be to add modules in C++ that do the 
> > csound-communication-part... Any ideas, suggestions, warnings, 
> > inspiration, advice are most welcome...
> > 
> > -- 
> > peace, love & harmony
> > Atte
> > 
> > http://www.atte.dk
> > -- 
> > Send bugs reports to this list.
> > To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
> > 
> > 
> > ,
> 
> --
> Send bugs reports to this list.
> To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
> 

Date2006-03-15 10:21
FromAtte André Jensen
SubjectRe: another csound environment - advice needed
Oeyvind Brandtsegg wrote:
> I'm not sure how much difference it does in latency,
> but I prefer to let the (realtime) control signals go as short a route as possible.
> Sending the midi data via python seems to be such an extra step,

True. But implemening a easy to maintain, easy to customize zone based 
splitter in csound doesn't seem like a good idea in csound. Or am I 
missing something? If you (or anyone else) has some code that could 
change my mind, I'd be happy to change it :-)

-- 
peace, love & harmony
Atte

http://www.atte.dk

Date2006-03-15 18:53
FromJean-Pierre Lemoine
SubjectRe: another csound environment - advice needed
Hi

This is the design I am using for combining real time csound and real 
time opengl stuff. The csound engine has the responsability to handle 
all the control rate signals that are then read by the Java opengl host 
program using csound channels. Basically the csound orchestra contains 
loopsegs that are used to animate vertex and pixel shader parameters. In 
place of a loopseg, a midi control message can also be read by csound 
and pass to the host program via channel.

I have two threads: one for performing the csound csd file, another for 
displaying the graphic animation. I have found that this design is very 
stable, and the performance excellent. All that is running using a 
control rate equal to the sample rate (44100) without any glitch. Of 
course I have to lower the control rate if the instrument is becoming 
complex, but part of the game is to design csound instrument having a 
"rich" sound and low cpu requirement.

The scound csd is generated by a high level editor, encapsulating some 
of  the csound stuff.

Jean-Pierre

Oeyvind Brandtsegg wrote:

>I'm not sure how much difference it does in latency,
>but I prefer to let the (realtime) control signals go as short a route as possible.
>Sending the midi data via python seems to be such an extra step,
>and in the setup I'm building, I think I will let csound handle the midi input (and routing, mapping, initializing), since csound is the application that will need to get the data with a shortest possible latency. I will split the midi data stream, and then forward any midi data that should undergo any algorithmic processing to python. The way I see it, realtime audio control data tolerates less latency than algorithmic and larger scale control data.
>
>Oeyvind
>
>
>
>
>
>  
>
>>From: Andres Cabrera [andres@geminiflux.com]
>>Sent: 2006-03-14 18:10:06 CET
>>To: csound@lists.bath.ac.uk
>>Subject: Re: [Csnd] another csound environment - advice needed
>>
>>Hi,
>>I haven't tried this, but maybe OSC is a better way to communicate than
>>MIDI. I've seen some implementation of OSC for python somewhere, but I
>>don't know where.
>>I'd like to hear if you're succesful with this.
>>
>>Cheers,
>>Andrés
>>
>>On Mon, 2006-03-13 at 19:10 +0100, Atte André Jensen wrote:
>>    
>>
>>>Hi
>>>
>>>I mainly use csound live, running on a linux laptop and controlled from 
>>>3 usb keyboards. My current setup is quite flexible, very stable and 
>>>devided into three parts:
>>>
>>>1) Csound. Instruments are split into seperate files. Same with macro 
>>>defs, table defs and pgmassign's. A bash script assembles the resulting 
>>>.csd and runs csound on it. Csound reads realtime midi from the alsa 
>>>midi-through device.
>>>
>>>2) Python. I wrote a python script that uses pyseq to split incomming 
>>>midi based on zones (combination of midi key range + midi channel) to 
>>>midi channels. It also filters a few controllers and can send initial 
>>>program change + volume for each zone.
>>>
>>>3) Bash. The Python script is started from a bash script, one for each 
>>>setup. This also connects the input of pyseq to the usb keyboards (each 
>>>sending on different midi channels) and the output of pyseq to the 
>>>midi-through device.
>>>
>>>Although this works fine, it's tricky to generate setups, and the whole 
>>>thing is just not so inviting to work with. So I'm thinking hard about 
>>>redoing the whole thing in a single python script with gui. This script 
>>>and csound would be all that's required, communication from python to 
>>>csound is done over midi messages via alsa. Is should be a snap to 
>>>generate setups, and instrument names should be used in the zone part.
>>>
>>>Now, my question is. it this a clever way to go? Should I go for closer 
>>>integration and what would it take to get rid of the via-alsa-midi 
>>>communication? I'd prefer python over C/C++ since this is what I know 
>>>and it's soo nice to work in. I believe Iain has done something similar 
>>>with C++, maybe a posibility would be to add modules in C++ that do the 
>>>csound-communication-part... Any ideas, suggestions, warnings, 
>>>inspiration, advice are most welcome...
>>>
>>>-- 
>>>peace, love & harmony
>>>Atte
>>>
>>>http://www.atte.dk
>>>-- 
>>>Send bugs reports to this list.
>>>To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
>>>
>>>
>>>,
>>>      
>>>
>>--
>>Send bugs reports to this list.
>>To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
>>    
>>
>
>  
>


Date2006-03-15 19:22
FromAtte André Jensen
SubjectRe: another csound environment - advice needed
Iain Duncan wrote:

> As to latency, I don't think on linux that is an issue, but I use C++
> instead of python.

I already knew that we have quite different goals and just as different 
approaches. But it would still be helpful for me to understand your setup.

Could you explain a bit more. What exactly does your C++ layer do, and 
how does it communicate with csound. Do you use any midi, and if so 
who's receiving it, who's processing it how, and sending it where?

And how flexible do you need to be? I could be on stage and suddenly 
feel like having another sound assigned to the lower part of my middle 
keyboard.

-- 
peace, love & harmony
Atte

http://www.atte.dk

Date2006-03-16 02:06
FromIain Duncan
SubjectRe: another csound environment - advice needed
> Oeyvind Brandtsegg wrote:
> 
>> I'm not sure how much difference it does in latency,
>> but I prefer to let the (realtime) control signals go as short a route
>> as possible.
>> Sending the midi data via python seems to be such an extra step,
> 
> 
> True. But implemening a easy to maintain, easy to customize zone based
> splitter in csound doesn't seem like a good idea in csound. Or am I
> missing something? If you (or anyone else) has some code that could
> change my mind, I'd be happy to change it :-)

You are correct. I've done it, and once it gets complicated, it's not
pretty and not very maintainable. It's also limited to one midi channel,
a big pain. By taking your midi control to a host layer you can use all
the midi input devices you like, and still assign one directly to csound
if desired.

As to latency, I don't think on linux that is an issue, but I use C++
instead of python. It was harder to get working, but I think in the long
run its a better solution. I have tested the latency with my host and
gotten it dead low, like -b8 -B16 and stuff. The midi latency is
outweighed in my particular case by the latency of a kperiod anyway. If
I were playing virtuosic stuff manually, I would run less audio
processing and get the kperiods down smaller.

To me, another big advantage is not having to change how instruments are
built, they all get turned on from the score. But, that is partially
because of how I implement my instruments. For live use, I use
mono-synth voices ( even for the chords ) which allows one to send a new
note with amp 0 to act as a note off, keeping the voice instruments
always going. By keeping the voices always going on the heavy synths I
have a more steady load and no surprises in cpu use, it stays within
about 5% no matter what, so I can safely push the use up to about
85-90%. This also means that I can use score data as well as live input
to control all the instruments with no changes to how instruments are
implemented. Anything that would be done with midi continuous
controllers is done with table reads, and that way anything anywhere can
write to the tables and saving/loading snapshots of controller setups is
very simple. I'm not a big fan of csound midi control, it's really
difficult to use as elegantly as the score line mechanism.

Hope that helps!
Iain

Date2006-03-16 03:18
FromIain Duncan
SubjectRe: another csound environment - advice needed
> I'm not sure how much difference it does in latency,
> but I prefer to let the (realtime) control signals go as short a route as possible.
> Sending the midi data via python seems to be such an extra step,
> and in the setup I'm building, I think I will let csound handle the midi input (and routing, mapping, initializing), since csound is the application that will need to get the data with a shortest possible latency. I will split the midi data stream, and then forward any midi data that should undergo any algorithmic processing to python. The way I see it, realtime audio control data tolerates less latency than algorithmic and larger scale control data.
> 
> Oeyvind

I'm not sure that there will be a noticeable difference, unless python
is much slower than C++. For my set up, in both cases midi is handled by
portmidi, and in both cases it will not be realized acoustically until
the beginning of the next ksamp. So I don't think the midi input will
get to a sounding note any slower. Would love to know if I am confused
on this though.

Iain

Date2006-03-16 03:45
FromIain Duncan
SubjectRe: another csound environment - advice needed
> I already knew that we have quite different goals and just as different
> approaches. But it would still be helpful for me to understand your setup.

> Could you explain a bit more. What exactly does your C++ layer do, and
> how does it communicate with csound. Do you use any midi, and if so
> who's receiving it, who's processing it how, and sending it where?

At the moment, my C++ layer handles gui input to and from csound control
variables, as well as being a simple front end for loading csds,
playing, pausing, etc. I have made a testing version with midi handled
by C++ and it worked fine, but I have not integrated that yet into my
real version. Midi is handled by a midi input thread within the C++
layer, which also includes a gui thread and a csound thread. All
communication with csound uses a message structure and queues for thread
protection. This allows the csound thread to have high processor
priority so that the gui will gracefully lag without bothering audio
under cpu load. Midi messages get turned into a generic message that
goes first to a controller thread which handles all parsing and
translation into csound type data, and then to the csound thread where
it gets put into csound via api calls to make notes or write to tables.

> And how flexible do you need to be? I could be on stage and suddenly
> feel like having another sound assigned to the lower part of my middle
> keyboard.

I don't think that would be hard, because you could have the various
layers be different csound instrument channels and then dynamically
reroute how the midi input gets split as well as what voices go where.
However, the code for my setup is quite complicated as I have some long
term goals involving a multi-user client-server model that I am coding
for at the base level, and they made it a lot more complicated.

Iain
> 

Date2006-03-17 01:44
FromAndres Cabrera
SubjectRe: another csound environment - advice needed
Hi Jean Pierre,
What software are you using for your graphical stuff? Is it your own
java software?

Andŕes

On Wed, 2006-03-15 at 19:53 +0100, Jean-Pierre Lemoine wrote:
> Hi
> 
> This is the design I am using for combining real time csound and real 
> time opengl stuff. The csound engine has the responsability to handle 
> all the control rate signals that are then read by the Java opengl host 
> program using csound channels. Basically the csound orchestra contains 
> loopsegs that are used to animate vertex and pixel shader parameters. In 
> place of a loopseg, a midi control message can also be read by csound 
> and pass to the host program via channel.
> 
> I have two threads: one for performing the csound csd file, another for 
> displaying the graphic animation. I have found that this design is very 
> stable, and the performance excellent. All that is running using a 
> control rate equal to the sample rate (44100) without any glitch. Of 
> course I have to lower the control rate if the instrument is becoming 
> complex, but part of the game is to design csound instrument having a 
> "rich" sound and low cpu requirement.
> 
> The scound csd is generated by a high level editor, encapsulating some 
> of  the csound stuff.
> 
> Jean-Pierre
> 
> Oeyvind Brandtsegg wrote:
> 
> >I'm not sure how much difference it does in latency,
> >but I prefer to let the (realtime) control signals go as short a route as possible.
> >Sending the midi data via python seems to be such an extra step,
> >and in the setup I'm building, I think I will let csound handle the midi input (and routing, mapping, initializing), since csound is the application that will need to get the data with a shortest possible latency. I will split the midi data stream, and then forward any midi data that should undergo any algorithmic processing to python. The way I see it, realtime audio control data tolerates less latency than algorithmic and larger scale control data.
> >
> >Oeyvind
> >
> >
> >
> >
> >
> >  
> >
> >>From: Andres Cabrera [andres@geminiflux.com]
> >>Sent: 2006-03-14 18:10:06 CET
> >>To: csound@lists.bath.ac.uk
> >>Subject: Re: [Csnd] another csound environment - advice needed
> >>
> >>Hi,
> >>I haven't tried this, but maybe OSC is a better way to communicate than
> >>MIDI. I've seen some implementation of OSC for python somewhere, but I
> >>don't know where.
> >>I'd like to hear if you're succesful with this.
> >>
> >>Cheers,
> >>Andrés
> >>
> >>On Mon, 2006-03-13 at 19:10 +0100, Atte André Jensen wrote:
> >>    
> >>
> >>>Hi
> >>>
> >>>I mainly use csound live, running on a linux laptop and controlled from 
> >>>3 usb keyboards. My current setup is quite flexible, very stable and 
> >>>devided into three parts:
> >>>
> >>>1) Csound. Instruments are split into seperate files. Same with macro 
> >>>defs, table defs and pgmassign's. A bash script assembles the resulting 
> >>>.csd and runs csound on it. Csound reads realtime midi from the alsa 
> >>>midi-through device.
> >>>
> >>>2) Python. I wrote a python script that uses pyseq to split incomming 
> >>>midi based on zones (combination of midi key range + midi channel) to 
> >>>midi channels. It also filters a few controllers and can send initial 
> >>>program change + volume for each zone.
> >>>
> >>>3) Bash. The Python script is started from a bash script, one for each 
> >>>setup. This also connects the input of pyseq to the usb keyboards (each 
> >>>sending on different midi channels) and the output of pyseq to the 
> >>>midi-through device.
> >>>
> >>>Although this works fine, it's tricky to generate setups, and the whole 
> >>>thing is just not so inviting to work with. So I'm thinking hard about 
> >>>redoing the whole thing in a single python script with gui. This script 
> >>>and csound would be all that's required, communication from python to 
> >>>csound is done over midi messages via alsa. Is should be a snap to 
> >>>generate setups, and instrument names should be used in the zone part.
> >>>
> >>>Now, my question is. it this a clever way to go? Should I go for closer 
> >>>integration and what would it take to get rid of the via-alsa-midi 
> >>>communication? I'd prefer python over C/C++ since this is what I know 
> >>>and it's soo nice to work in. I believe Iain has done something similar 
> >>>with C++, maybe a posibility would be to add modules in C++ that do the 
> >>>csound-communication-part... Any ideas, suggestions, warnings, 
> >>>inspiration, advice are most welcome...
> >>>
> >>>-- 
> >>>peace, love & harmony
> >>>Atte
> >>>
> >>>http://www.atte.dk
> >>>-- 
> >>>Send bugs reports to this list.
> >>>To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
> >>>
> >>>
> >>>,
> >>>      
> >>>
> >>--
> >>Send bugs reports to this list.
> >>To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
> >>    
> >>
> >
> >  
> >
> 
> 
> 
> -- 
> Send bugs reports to this list.
> To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
> 
> 
> 1

Date2006-03-17 06:44
FromJean-Pierre Lemoine
SubjectRe: another csound environment - OpenGL engine
Hi Andres,
I have developped a very light graphic engine using a Java OpenGL 
wrapper: lwjgl (lwjgl.org). The graphical stuff is very basic (but the 
results great!): layers of  pictures are superposed and are transformed 
using pixel shaders. Thus, everything is done in the GPU, letting more 
cpu for the csound part.
lwjgl is great for the following reason (in my case)
    wrapper performance
    true full screen mode
    working also with the Eclipse SWT  framework (3.2), which I am using 
for the composition editor.
    many example sources, a great community (as willing to help as the 
csound one)
    portable to other platforms (linux, OSX)
Feel free to ask for more information.

Jean-Pierre

Andres Cabrera wrote:

>Hi Jean Pierre,
>What software are you using for your graphical stuff? Is it your own
>java software?
>
>Andŕes
>
>On Wed, 2006-03-15 at 19:53 +0100, Jean-Pierre Lemoine wrote:
>  
>
>>Hi
>>
>>This is the design I am using for combining real time csound and real 
>>time opengl stuff. The csound engine has the responsability to handle 
>>all the control rate signals that are then read by the Java opengl host 
>>program using csound channels. Basically the csound orchestra contains 
>>loopsegs that are used to animate vertex and pixel shader parameters. In 
>>place of a loopseg, a midi control message can also be read by csound 
>>and pass to the host program via channel.
>>
>>I have two threads: one for performing the csound csd file, another for 
>>displaying the graphic animation. I have found that this design is very 
>>stable, and the performance excellent. All that is running using a 
>>control rate equal to the sample rate (44100) without any glitch. Of 
>>course I have to lower the control rate if the instrument is becoming 
>>complex, but part of the game is to design csound instrument having a 
>>"rich" sound and low cpu requirement.
>>
>>The scound csd is generated by a high level editor, encapsulating some 
>>of  the csound stuff.
>>
>>Jean-Pierre
>>
>>    
>>
>