Hello. While trying to work out how to time events for a music sequencer in Python, it became apparent that it won't work as planned; at least, I have no idea of the best way to implement real-time scheduling from an interpreted language. Some of the features I was going to include are soundfonts, OSC, JACK, and recording to WAV.

Now someone has recommended just using Csound to take care of all the real-time work. Does anyone have any advice, or agree or disagree with that idea? If I were to import Csound to do the real-time side, then I wouldn't also need to import liblo or pyjack or any platform-specific audio library, nor find some other way to manipulate soundfonts... right?

I had a problem before using csoundapi~: t statements sent from Pd didn't work, and note times passed from Pd were treated as seconds, not beats, so controlling tempo from within Csound didn't work either. I guess one solution would be to apply the tempo within my app, adjusting p2/p3 values before sending them. However, that makes it difficult to send sync data if someone wants to use a drum synth or something.

Thanks for any tips.
-Chuckk

--
http://www.badmuthahubbard.com
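P.S. In case it's unclear, the tempo workaround I mean is just scaling the start (p2) and duration (p3) of each score event from beats to seconds before sending it to Csound. A minimal sketch of the idea; the p-field layout and the example values are made up, and the resulting tuple would still need to be formatted into an "i" statement:

```python
def beats_to_seconds(beats, bpm):
    """One beat lasts 60/bpm seconds."""
    return beats * 60.0 / bpm

def scale_event(pfields, bpm):
    """pfields is (p1, p2, p3, ...): instrument, start in beats, duration in beats.
    Returns the same event with p2 and p3 converted to seconds."""
    p1, p2, p3, *rest = pfields
    return (p1, beats_to_seconds(p2, bpm), beats_to_seconds(p3, bpm), *rest)

# At 120 BPM, a note starting on beat 2 and lasting half a beat:
print(scale_event((1, 2.0, 0.5, 0.8), 120))  # (1, 1.0, 0.25, 0.8)
```

The catch, as I said, is that once the app owns the tempo this way, Csound no longer knows about beats at all, so there's nothing to sync an external drum synth against.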