I have just received my VST 2 SDK. At this time, I am adapting Gabriel
Maldonado's DirectCsound 2.8 sources to work with the JavaSound 0.86 API and
with the VST 2 SDK. If and when a publicly available, real-time SAOL compiler
appears, I will adapt it to the same purposes. I do not currently have the
time or the skill to write my own SAOL compiler from scratch. If anyone is
interested in working with me to test or adapt such a compiler that they are
developing, please let me know, on or off the list.
So far I have only given the VST 2 SDK the most cursory once-over. It does
promise that VST hosts (such as Cubase) will be able to send MIDI channel
messages to a software synthesizer plugin and have that plugin synthesize or
process audio with sample-frame-accurate timing. Cubase will also receive
and record MIDI channel messages from plugins. In the future, plugins will
be able to read, write, and rewrite audio files through a host-based
protocol out of real time.
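To make the timing point concrete, here is a rough sketch of how such a
synthesizer plugin might receive MIDI channel messages with sample-frame
offsets and render them in the following audio block. This is only from a
first reading of the SDK: the base class and callbacks come from
audioeffectx.h, while CsoundVst and scheduleNote are placeholder names of my
own, and the exact signatures should be checked against the headers.

// Sketch only, from a first reading of the VST 2 SDK (audioeffectx.h);
// CsoundVst and scheduleNote are hypothetical names, not SDK code.
#include "audioeffectx.h"

class CsoundVst : public AudioEffectX
{
public:
    CsoundVst(audioMasterCallback master) : AudioEffectX(master, 1, 0)
    {
        isSynth(true);
        wantEvents();   // ask the host to call processEvents()
    }

    // The host delivers MIDI channel messages for the next audio block;
    // deltaFrames is each event's offset, in sample frames, into that block.
    virtual long processEvents(VstEvents* events)
    {
        for (long i = 0; i < events->numEvents; i++) {
            if (events->events[i]->type != kVstMidiType)
                continue;
            VstMidiEvent* e = (VstMidiEvent*) events->events[i];
            // e->midiData holds the raw channel message;
            // e->deltaFrames gives the sample-frame-accurate start time.
            scheduleNote(e->midiData, e->deltaFrames);
        }
        return 1;   // keep the events coming
    }

    // Render one block of audio; notes scheduled above begin exactly at
    // their deltaFrames offset within this block.
    virtual void processReplacing(float** inputs, float** outputs, long frames);

private:
    void scheduleNote(const char* midiData, long deltaFrames);
};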
On the plus side, getting Csound to work in the Cubase environment is a very
enticing prospect, particularly as the plugin can send MIDI to the host.
There is an obvious possibility of developing algorithmic composition
plugins as well as algorithmic synthesis and processing plugins.
On the minus side, MIDI channel messages are a very limiting protocol.
Fortunately, Steinberg has defined a protocol for cent-accurate pitches and
sample-frame-accurate times within the MIDI channel message.
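As far as I can tell from the header, both refinements ride along in the MIDI
event structure itself, roughly as follows (my reading of the field names, to
be verified against the SDK):

// My reading of the VST 2 SDK's VstMidiEvent; verify against the header.
struct VstMidiEvent
{
    long type;            // kVstMidiType
    long byteSize;
    long deltaFrames;     // sample frames into the block: frame-accurate timing
    long flags;
    long noteLength;      // in sample frames, 0 if not known
    long noteOffset;      // offset into the note, in sample frames
    char midiData[4];     // the raw MIDI channel message
    char detune;          // -64 to +63 cents: cent-accurate pitch
    char noteOffVelocity;
    char reserved1;
    char reserved2;
};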
By contrast, the JavaSound API provides all of the functionality and
cross-platform capability of VST 2 plugins, and more besides - but no host
application framework at all... yet.
What I am shooting for:
A composition framework that supports music notation, MIDI sequencing, and
audio recording, with overdubbing and signal processing. The framework is
designed to be extensible and to be high-precision. Algorithmic composition
and synthesis plugins can be written by anyone and will just drop into the
framework; they can be as elaborate and as precise as desired, in other
words, adequate to the demands of music research. Data peculiar to the
plugins is saved directly in the framework files, using XML text.
Such a framework would support the high end of both commercial music
production and academic music research.
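To be concrete about what "just drop into the framework" might mean, here is
a purely hypothetical sketch of the kind of abstract plugin interface I have
in mind; none of these names exist yet in Silence or anywhere else.

// Hypothetical sketch only; nothing here is existing Silence or VST code.
#include <string>

class CompositionPlugin
{
public:
    virtual ~CompositionPlugin() {}

    // Generate or transform score events between two times (in seconds for
    // now, or in beats once metrical time is supported).
    virtual void generate(double beginTime, double endTime) = 0;

    // The framework stores whatever XML text the plugin returns here,
    // verbatim, inside its own project file, so plugin data can be as
    // elaborate and precise as research work demands.
    virtual std::string saveXml() const = 0;
    virtual void restoreXml(const std::string& xmlFragment) = 0;
};

The point being that the framework itself never needs to understand the
plugin's data; it only has to carry the XML text around and hand it back.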
My own Silence framework provides some of these capabilities, but depends on
shelling out to commercial software for music notation, sequencing and
recording.
Cubase with synthesis plugins provides some of these capabilities, but
without the precision and abstraction of the Silence Music Modeling
Language.
In other words, Silence is mathematically and compositionally adequate to
the task of representing and generating music (except that it uses absolute
time throughout instead of meter), but it is not adequate to the
user-interface demands of the recording studio or of live performance.
From a software engineering point of view, what needs to be designed and
adhered to is a set of abstract interfaces or protocols that are stable,
adequate to the task of music, and easy to use and to implement. This is a
very demanding criterion that is nowhere near being met. Protocols are
required for representing performance control data, several kinds of music
representation data including notation, and signal processing network
specifications. The closest things we have at this time are, respectively,
MIDI, proprietary sequencer and notation file formats, and Csound orchestras
or SAOL bitstreams. MIDI in its present form will never be adequate, though
with additional precision it would not be far from it; the proprietary file
formats have to go in favor of XML; and SAOL (if it proves technically
adequate) could solve the DSP network representation problem.
dupswapdrop: the music-dsp mailing list and website
http://shoko.calarts.edu/~glmrboy/musicdsp/music-dsp.html