The software bus (ref "chani") is what I originally suggested as an
"automation bus"; similar in principle to GUI widget control of an
instrument or opcode, but the control signal comes from a generic
numbered input channel, to which control data is sent via the host app.
Indeed, it was proposed as a way of removing the FLTK opcodes from the
Csound engine itself, out to a host app where they more logically
belong. The great difference between Csound 5 and previous incarnations
is that the relationship between the Csound audio engine and the host
"wrapper" (which may be no more than a console program, but could
equally be a full-blown GUI or a python script) is made explicit, and
clearly distinguished.
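The numbered-channel idea is easy to sketch in plain Python (all names here are illustrative, not the actual Csound API): the host writes control values into indexed slots, and an opcode on the engine side polls its slot once per control cycle, much as "chani" reads a numbered input channel.

```python
# Minimal sketch of a numbered control bus between a host app and an
# audio engine. Class and method names are illustrative only.

class ControlBus:
    def __init__(self, nchannels):
        # One float slot per numbered input channel.
        self.channels = [0.0] * nchannels

    def host_set(self, index, value):
        """Host side: write automation data to a numbered channel."""
        self.channels[index] = float(value)

    def engine_get(self, index):
        """Engine side: an opcode reads its channel once per k-cycle."""
        return self.channels[index]

bus = ControlBus(16)
bus.host_set(1, 440.0)    # e.g. a GUI slider moved in the host app
freq = bus.engine_get(1)  # the instrument picks up 440.0 next cycle
```

The point of the explicit engine/host split in Csound 5 is exactly that the `host_set` half can live anywhere: a console front end, a full GUI, or a Python script.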
Nevertheless, it has to be recognised that there is a limit to how much
dynamic control over N parameters can be managed by any user reliant on
moving sliders or clicking switches. Even within legato performance,
transitions between notes can be complex, involving not just pitch but
amplitude and timbral transitions. There are three basic solutions:
(a) get more fingers
(b) use intelligent agent software (external or internal) to translate a
single gesture to multiple concurrent parameter updates
(c) compose the thing using a score file or a sequencer!
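Option (b) amounts to a mapping layer: one incoming gesture value fans out to several concurrent parameter updates. A toy sketch, with entirely made-up mapping functions:

```python
# Sketch of option (b): a single 0..1 gesture drives several
# parameters at once. The mappings chosen here are illustrative.

def map_gesture(g):
    """Translate one gesture value into pitch, amplitude and timbre."""
    g = min(max(g, 0.0), 1.0)          # clamp to the 0..1 range
    return {
        "pitch_hz": 220.0 * (2.0 ** g),  # one octave of sweep
        "amp": 0.2 + 0.8 * g,            # louder as the gesture rises
        "brightness": 1.0 + 7.0 * g,     # e.g. a filter-cutoff ratio
    }

params = map_gesture(0.5)  # three simultaneous updates from one slider
```

An "intelligent agent" in the sense above would replace these fixed curves with something context-sensitive, but the shape of the problem is the same.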
My vision was that breakpoint automation data from a DAW such as SONAR
could one day be routed directly to a Csound plugin instance; how far
from that we are I am not sure. In the meantime we also have the OSC
send/receive opcodes that also offer a mechanism for external automation
control of instruments, given an OSC-savvy host. DAW manufacturers have
basically dropped the baton (if they ever held it) with regard to
interchange of automation data.
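For what it's worth, the wire format an OSC-savvy host would emit is simple enough to sketch by hand. A minimal encoder for a message carrying one float argument (the address string is illustrative; the receiving instrument would match whatever address it registers with the OSC receive opcode):

```python
import struct

def osc_message(address, value):
    """Encode a minimal OSC message with a single float32 argument."""
    def pad(b):
        # OSC strings are NUL-terminated and padded to 4-byte multiples.
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    # address, then the type-tag string ",f", then a big-endian float32
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/csound/amp", 0.5)  # hypothetical address
```

Any host that can emit packets like this over UDP could automate an instrument parameter without the DAW vendors' cooperation.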
Richard Dobson
Art Hunkins wrote:
> This "one note" model is the way I end up building most of my real-time
> compositions.
>
> An additional reason for the model is that in a fairly dense sustained
> texture, initializing an instrument can result in a sonic "glitch." Better
> to just keep the "note" on all the time rather than stopping and restarting
> it.
>
> Art Hunkins
>
> ----- Original Message -----
> From: "Mike Coleman"
> To: ; "Developer discussions"
>
> Sent: Thursday, September 13, 2007 1:07 AM
> Subject: Re: [Cs-dev] Engine Changes, or Csound 6 (was Re: [Csnd] feature
> request: multiple strings in "i" statements)
>
>
>
>>On 9/12/07, Richard Dobson wrote:
>>
>>>The price of being able to make authentic portamento that accurately
>>>replicates what real players do, is some sort of mechanism that can tell
>>>the instrument what the next note is going to be.
>>
>>Yes, but alternatively one could give up on the idea of discretizing
>>into notes in this situation. So, for example, one could imagine a
>>part that in csound terms consisted of just one "note". During that
>>note's duration, frequency and volume could be modulated via some
>>novel mechanism (e.g., a MIDI controller wheel for real-time, or some
>>new kind of score file mechanism for non-RT). This would give you all
>>the portamento you could want.
>>
>>If we can change the model, the problem can be solved. The questions
>>are whether it is worth doing this and if so how best to do so.
>>
>>Mike
>>
_______________________________________________
Csound-devel mailing list
Csound-devel@lists.sourceforge.net