[Cs-dev] Internals of csound during realtime output
Date | 2011-05-25 09:38 |
From | Alex Weiss |
Subject | [Cs-dev] Internals of csound during realtime output |
Attachments | None
Hi, With regards to the A/V synch issue I wrote about a couple of days ago, I decided to write a lightweight application in C with the Csound API instead of an opcode. That allows me to implement my own I/O, which I realized is necessary for a tight synch between audio and video. I know that there are callbacks for realtime input and output that I could simply implement. But I'd like to understand the inner mechanics of Csound first, e.g. what exactly happens after the synthesis is done: when exactly are the callbacks called, which part of the code fills the buffers, etc. In a way, what I'm looking for is a brief summary of the internals of Csound during realtime output. I've already checked the "Inside Csound" doc on csounds.com and John ffitch's "What happens when you run csound," but neither talks about realtime output during performance.
I've also looked at Victor's CoreAudio module implementation, as I will be using CoreAudio. I see that the registered IOProc fills the output buffers, the way I've known it to work. But what does rtplay_ do then? That still leaves me puzzled.
If somebody could enlighten me a bit, I'd appreciate it. Thanks, Alex
Date | 2011-05-25 14:48 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] Internals of csound during realtime output |
Attachments | None
Basically, the rtplay() function is called by Csound when a buffer is ready for output. In the case of the CoreAudio IO module, that buffer then gets placed in a circular buffer which is read by the CoreAudio callback. It's pretty much the same for the PortAudio module, except that, as far as I can remember, the CoreAudio code is lock-free. Similarly, rtrecord() is the input interface into Csound. But in your program you can ignore these functions (they are only really useful if you are writing a new IO module for Csound) and tap either the output buffer or the spout (the ksmps output block) via the API.

Victor

Dr Victor Lazzarini
Senior Lecturer, Dept. of Music
NUI Maynooth, Ireland
tel.: +353 1 708 3545
Victor dot Lazzarini AT nuim dot ie
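For illustration, a rough sketch (not from the original messages) of the two tap points described above -- spout after csoundPerformKsmps(), or the output buffer after csoundPerformBuffer() -- assuming an already-created and compiled CSOUND *csound:

#include <csound/csound.h>

/* Tap point 1: spout -- one ksmps block of interleaved samples per call. */
static void render_one_kcycle(CSOUND *csound)
{
    if (csoundPerformKsmps(csound) == 0) {
        MYFLT *spout = csoundGetSpout(csound);
        long   n     = (long) csoundGetKsmps(csound) * csoundGetNchnls(csound);
        /* copy the n samples in spout to your own output path here */
        (void) spout; (void) n;
    }
}

/* Tap point 2: the output buffer -- one -b sized block per call. */
static void render_one_buffer(CSOUND *csound)
{
    if (csoundPerformBuffer(csound) == 0) {
        MYFLT *buf = csoundGetOutputBuffer(csound);
        long   n   = csoundGetOutputBufferSize(csound); /* in samples, per the API docs */
        /* copy the n samples in buf to your own output path here */
        (void) buf; (void) n;
    }
}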
Date | 2011-05-25 14:57 |
From | Alex Weiss |
Subject | Re: [Cs-dev] Internals of csound during realtime output |
Attachments | None
Thanks, Victor. What are the advantages of tapping the buffers directly instead of supplying my own rtplay callback?
Date | 2011-05-25 15:05 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] Internals of csound during realtime output |
Attachments | None
It's simpler, because in order to use rtplay() you will need to build a whole new IO module. Effectively, what you do is this:

1. Create a Csound instance.
2. Use an API function to tell Csound you will do your own IO (forgot the name, but it's in the docs), or add -+rtaudio=null to your compile options.
3. Compile your CSD.
4. Manage your calls to csoundPerformKsmps() or csoundPerformBuffer(), either in the same thread or on a separate thread.
5. Tap spout or the output buffer after each call to the above.
6. Send the samples to the output using the recommended method for your platform.

Victor
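To make the steps concrete, here is a minimal sketch of that sequence (not part of the original reply; error handling is omitted, and the output call in step 6 is a hypothetical placeholder, not a Csound API function):

#include <csound/csound.h>

int main(int argc, char **argv)
{
    CSOUND *csound = csoundCreate(NULL);           /* 1. create an instance */

    /* 2. tell Csound the host will do its own audio IO
          (the API alternative to -+rtaudio=null) */
    csoundSetHostImplementedAudioIO(csound, 1, 0);

    if (csoundCompile(csound, argc, argv) == 0) {  /* 3. compile the CSD */
        long spoutLen = (long) csoundGetKsmps(csound) * csoundGetNchnls(csound);

        /* 4. drive the performance, here in the calling thread */
        while (csoundPerformKsmps(csound) == 0) {
            MYFLT *spout = csoundGetSpout(csound); /* 5. tap spout after each k-cycle */
            /* 6. hand spoutLen interleaved samples to your own output here,
                  e.g. push them into the ring buffer read by your CoreAudio
                  IOProc; the call below is a placeholder:
                  my_audio_output(spout, spoutLen); */
            (void) spout; (void) spoutLen;
        }
    }
    csoundDestroy(csound);
    return 0;
}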
Date | 2011-05-25 15:12 |
From | Alex Weiss |
Subject | Re: [Cs-dev] Internals of csound during realtime output |
Attachments | None
Ah, that makes sense. I assume you were talking about csoundSetHostImplementedAudioIO? Is there any more meaning to the "state" variable, other than that it has to be non-zero if I want to implement my own IO?
Date | 2011-05-25 15:18 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] Internals of csound during realtime output |
Attachments | None
Off the top of my head, no.
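For reference (an aside added here, not part of Victor's reply; worth checking against the csound.h of your Csound version), the call in question and a typical use:

#include <csound/csound.h>

/* void csoundSetHostImplementedAudioIO(CSOUND *, int state, int bufSize);
 *
 * A non-zero state hands audio IO over to the host, so Csound only fills
 * spout / the output buffer; bufSize lets the host suggest a software
 * buffer size and, to my understanding, can simply be left at 0 to keep
 * the -b setting. Call it between csoundCreate() and compilation. */
static void use_host_audio_io(CSOUND *csound)
{
    csoundSetHostImplementedAudioIO(csound, 1 /* state */, 0 /* bufSize */);
}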
Date | 2011-05-25 16:17 |
From | Alex Weiss |
Subject | Re: [Cs-dev] Internals of csound during realtime output |
Attachments | None
OK, last question and then I think I'll be set: why the need for a temporary software buffer in Csound (the one specified with -b)? Why isn't the spout buffer copied directly into the hardware buffer?
Date | 2011-05-25 18:05 |
From | Victor Lazzarini |
Subject | Re: [Cs-dev] Internals of csound during realtime output |
Attachments | None
Because the spout buffer is the size of the ksmps vector, which can be as small as 1 sample (it's user-defined). It's a different thing altogether, and it is not best practice to mix it up with the IO buffers.

Victor
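As a small illustration of the size difference (a sketch added here, not from the reply; assumes an already-compiled CSOUND *csound):

#include <stdio.h>
#include <csound/csound.h>

static void print_block_sizes(CSOUND *csound)
{
    int  ksmps  = (int) csoundGetKsmps(csound);      /* samples per k-cycle, user-defined */
    int  nchnls = (int) csoundGetNchnls(csound);
    long buflen = csoundGetOutputBufferSize(csound); /* -b software buffer, in samples
                                                        as reported by the API */

    /* spout holds a single ksmps block of interleaved audio; the -b buffer
       collects a number of such blocks before they are handed to the device */
    printf("spout block: %d frames (%d interleaved samples)\n", ksmps, ksmps * nchnls);
    printf("-b buffer  : %ld samples\n", buflen);
}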