| The next step at this end is to figure out why I'm getting the same
non-results from incorrect paths in Csound's pointers to the hetro files on
the Windows/DOS platform, so I can play with this alternative idea in
greater detail and with more control. Will run some tests with various
versions of Csound this weekend to map out which work and which don't,
hopefully.. ;-) I know I've seen it work in one of the versions that has
been available.
=== back on the time-domain rotation aspect of this topic ===
I can see how it would be relatively trivial to place the time-domain
samples of a waveform on a 2D grid, rotate the grid, and read the samples
out to another wave, within the bounds of the amplitude limitations of the
digital domain. Being a Csound 'consumer' more than a 'developer' has its
limitations: absorbing C/C++ by the osmosis method through my pillow is not
panning out here.
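For what it's worth, the grid idea above can be sketched without any C/C++
at all. The following is a minimal illustration assuming numpy: lay the
samples on a square grid, rotate the grid about its centre with
nearest-neighbour resampling, read the cells back out in order, and clip to
legal amplitude. The function name and the nearest-neighbour choice are my
own illustrative assumptions, not anything from Csound:

```python
import numpy as np

def rotate_waveform(samples, angle_deg):
    """Place time-domain samples row by row on a square grid, rotate the
    grid about its centre (nearest-neighbour resampling), and read the
    result back out row by row. Cells rotated off the grid become silence.
    An illustrative sketch only, not a faithful model of tape splicing."""
    n = len(samples)
    side = int(np.ceil(np.sqrt(n)))
    grid = np.zeros(side * side, dtype=float)
    grid[:n] = samples
    grid = grid.reshape(side, side)

    # Inverse-map each output cell back to a source cell.
    theta = np.deg2rad(angle_deg)
    c = (side - 1) / 2.0
    ys, xs = np.mgrid[0:side, 0:side]
    xr = np.cos(theta) * (xs - c) + np.sin(theta) * (ys - c) + c
    yr = -np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    xi = np.round(xr).astype(int)
    yi = np.round(yr).astype(int)
    inside = (xi >= 0) & (xi < side) & (yi >= 0) & (yi < side)

    out = np.zeros_like(grid)
    out[inside] = grid[yi[inside], xi[inside]]
    # Stay within the amplitude limits of the digital domain.
    return np.clip(out.ravel()[:n], -1.0, 1.0)
```

A 0-degree rotation returns the wave unchanged and 180 degrees reverses it;
in-between angles smear samples into new time positions, which is the fun
part.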
=====
My other point, which I don't have the background to discuss in depth, is
the inherent difference between how magnetic media store audio information
and the digital domain's cut-and-dried 2D approach to storing the same
source information.
The idea of rotating chunks of audio information stored on tape relative to
the playback head, and of how those teeny magnetic particles' orientations
would be altered relative to each other, is extremely intriguing.
The smearing/altering of audio information during this type of tape
playback technique must be phenomenal, at a minimum.
I must find a recording of the subject piece immediately, if not sooner.
cheers!
Ricardo MadGello
Out & About.. . . . . . . .
-----Original Message-----
From: Sean Costello [mailto:costello@seanet.com]
Sent: Monday, March 22, 1999 7:46 PM
To: madgello@oz.net
Cc: csound@maths.ex.ac.uk
Subject: Re: Cage's Williams Mix - an alternative approach
Ricardo:
What you are describing sounds very similar to how Metasynth is used.
Metasynth is a Mac-only commercial program (http://www.uisoftware.com)
that converts PICT images into sound, and vice versa, by mapping the
pixels to a time/frequency representation that then drives a bank of
oscillators. Simple enough, but the fun comes when you start using a
program like Photoshop to alter the pictorial outputs of a sound, then
converting the resulting image back into sound. I haven't used
Metasynth, but I have played around with Coagula, a very nice shareware
program that converts images into sound (but not vice versa). Malcolm
Slaney has a paper on the net that describes such experiments dating
back to 1955.
I like the idea of rotating the sound image to see what happens to the
resulting sound. Of course, the results will be far different for
rotating a time/frequency representation than rotating a time domain
waveform, but the results would be interesting in either case.
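The time/frequency version of the experiment can be sketched in a few lines
of Python with numpy. Everything here is an illustrative stand-in, not any
real program's API: `spectrogram` is a plain Hann-windowed magnitude STFT,
and `resynth` is a crude zero-phase overlap-add resynthesis that discards
phase, so the result is lossy by design:

```python
import numpy as np

def spectrogram(x, win=64, hop=32):
    """Magnitude STFT: rows are frequency bins, columns are analysis
    frames, so the array really is an 'image' of the sound."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def resynth(mag, hop=32):
    """Crude resynthesis: inverse FFT of each magnitude column with zero
    phase, windowed and overlap-added. The frame size is derived from the
    number of rows, so a rotated image resynthesizes too."""
    bins, n_frames = mag.shape
    win = 2 * (bins - 1)
    w = np.hanning(win)
    out = np.zeros(hop * (n_frames - 1) + win)
    for t in range(n_frames):
        out[t * hop:t * hop + win] += np.fft.irfft(mag[:, t], n=win) * w
    return out
```

Rotating the image 90 degrees before resynthesis (`resynth(np.rot90(S))`)
swaps the axes, so what was a moment in time becomes a frequency band and
vice versa, which is exactly the wav2bmp experiment described below.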
Sean Costello
Ricardo MadGello wrote:
> This sounds very similar to an experiment I did with the bmp2wav (and vice
> versa) program that was mentioned here a few weeks back.
>
> I recorded some straight verbal text in wave format.
> Then used the wav2bmp program to turn it into a bmp which was essentially
> similar to what hetro does.
> The resulting bmp was brought into a graphics editor and rotated 90
> degrees, thus changing the time domain into frequency and the frequency
> domain into time.
> This bmp was run back through the bmp2wav converter and saved as a wave
> file.
> The result was an interesting sound event with an interesting side effect
> of having added reverb from somewhere I don't quite understand yet.
>
> I can see how rotating each original sound event's spectral content to
> some arbitrary angle in the graphic editor, and converting it back to wave
> in a like manner, could achieve a context similar to the Cage piece,
> though not the same physics. Putting these snippets together in some
> multitrack sound editor would be interesting to pursue.
>
> What fun. A new sound toy!
>
> Ricardo MadGello
> Out & About.. . . . . . . .
>
> -----Original Message-----
> From: owner-csound-outgoing@maths.ex.ac.uk
> [mailto:owner-csound-outgoing@maths.ex.ac.uk]On Behalf Of Jean-Michel
> DARRÉMONT
> Sent: Saturday, March 20, 1999 4:20 AM
> To: csound@maths.ex.ac.uk
> Subject: Cage's Williams Mix
>
> Hi,
>
> Reading the book "Conversing with Cage" by Richard Kostelanetz, I noticed
> Cage's commentary about his realisation of Williams Mix in 1952.
> They chopped up a recorded tape into 1097 fragments and spliced them back
> onto the tape. In that way they put the splices in any orientation
> relative to the normal horizontal reading; the splices were played mainly
> diagonally.
> He said that the sounds produced that way were "perfectly beautiful
> sounds"; no doubt they are at least quite unusual.
> Here comes to mind this question: how can a soundfile be read in Csound
> at a variable angle, where 0° is normal playback, 180° is backward, and
> 360° is normal playback again?
>
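One simple way to make the "variable angle" question concrete is to map the
angle to a playback rate of cos(θ): 0° reads forward at normal speed, 180°
reads backward, and 90° stalls on a single sample. This is only one
illustrative interpretation of "reading angle" (real tape read diagonally
behaves quite differently), and it is plain Python, not Csound; the
function name is my own:

```python
import math

def read_at_angle(samples, angle_deg, out_len=None):
    """Read a sample table at a rate of cos(angle_deg): 0 deg is normal
    forward playback, 180 deg is backward, 90 deg stalls on one sample.
    Linear interpolation between samples; position clamps at the table
    edges. One illustrative mapping of 'reading angle' to playback, not a
    model of tape physics."""
    rate = math.cos(math.radians(angle_deg))
    n = len(samples)
    if out_len is None:
        out_len = n
    pos = 0.0 if rate >= 0 else float(n - 1)
    out = []
    for _ in range(out_len):
        i = int(pos)
        frac = pos - i
        j = min(i + 1, n - 1)
        out.append(samples[i] * (1.0 - frac) + samples[j] * frac)
        pos += rate
        pos = min(max(pos, 0.0), float(n - 1))
    return out
```

In Csound terms this is essentially a table read with a variable index
increment, which suggests the question may not need hetro/adsyn at all for
the time-domain interpretation.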
> It would be interesting to try this, especially when we consider that
> they spent one year with a team of five or six people to realize Williams
> Mix, cutting and splicing tiny pieces of tape, using chance operations to
> determine the length and angle of reading, in a terribly meticulous work.
>
> Digital synthesis could do that in a click, and in that way explore the
> process and push it further.
>
> Is hetro/adsyn necessary, or pvoc, or something simpler?
>
> Any idea?
>
> Regards.
> --
> Jean-Michel DARREMONT |