
Re: Morphing

Date: 1997-03-02 19:52
From: Richard Wentk
Subject: Re: Morphing
At 18:29 01/03/97 -0800, you wrote:
>
> | To create an acceptable morph you have to be able to
> | isolate these factors and control them individually. This
> | is moderately easy for monophonic sounds but spectacularly
> | difficult for polyphonic sounds.
>
>Problem is, that `morphing', as Jean Piche pointed out, is
>not a technical term.  In fact, it wouldn't really denote
>anything at all, if its etymology didn't suggest some sort
>of treatment of something's `morphology'.  In other words,
>the concept is intuitive, but its realization is not.

Well, the concept comes from the visual field where it's much easier to do.
Yes, it's a neologism and doesn't have the same definite meaning as
something like 'wavelet transform.' 

But one of the issues in the computer music field is the gap between
technical/academic models which are mathematically based, and
perceptual/psychoacoustic experiences, which are what most people actually
respond to when they listen to music. 

This is what makes the morphing question such an interesting challenge. I
think it highlights the lack of understanding of the latter, and the
relative rigidity of the former - given the crude techniques available
today - when modelling real-world non-abstract musical information.

>Surely, for some processing to be called a true `morph',
>you'd have to have a handle on every psychoacoustically
>important feature of the signal.  But you won't be able to
>get that from any known DSP technique. 

See above. No indeed, there is no general purpose morphing algorithm which
will work on any and all material, and I doubt if there ever will be. But
it is possible to come close using a variety of techniques. If you design a
physical model that works well for a family of instruments, and then change
the parameters so that the model moves from a simulation of one member of
the family to another, is that not a morph? And of course there are other
approaches you can take. 
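To make the physical-model idea concrete, here is a minimal Python/NumPy sketch. The "model" is only a toy (a sum of exponentially decaying partials standing in for one instrument family), and all the parameter values are invented for illustration - but sliding alpha from 0 to 1 does exactly what the paragraph above describes: it moves the model from a simulation of one family member towards another.

```python
import numpy as np

def modal_tone(freqs, decays, amps, sr=8000, dur=0.5):
    """Toy stand-in for a physical model: a sum of exponentially
    decaying partials, parameterised per 'instrument'."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, decays, amps):
        out += a * np.exp(-t / d) * np.sin(2 * np.pi * f * t)
    return out

def lerp(p, q, alpha):
    """Straight-line interpolation between two parameter sets."""
    return [(1 - alpha) * x + alpha * y for x, y in zip(p, q)]

# Two members of the same model family -- parameter values are invented
wood  = dict(freqs=[200, 560, 1120], decays=[0.40, 0.20, 0.10], amps=[1.0, 0.5, 0.25])
metal = dict(freqs=[180, 700, 1510], decays=[1.50, 1.20, 0.90], amps=[1.0, 0.8, 0.60])

# Sliding alpha from 0 to 1 moves the model from one instrument to the other
halfway = modal_tone(**{k: lerp(wood[k], metal[k], 0.5) for k in wood})
```

Because both endpoints live in the same parameter space, every intermediate point is itself a playable "instrument" - which is what makes this family-of-instruments approach feel like a morph rather than a crossfade.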

There are no DSP algorithms that will do polyphonic pitch extraction in the
general case either, and again, quite possibly there never will be - but
does that mean that a full orchestral passage isn't valid as a musical
experience, or that limited examples of polyphonic pitch extraction aren't
worth pursuing? 

The idea isn't to build a full model that accounts for every possible kind
of sound or every single psychoacoustic feature, but to tease out the
information that's most audibly obvious and concentrate on that, using
whatever model works best for the job at hand. 

>Even more, if you
>look close enough, `morphing' doesn't even work in graphics.
>Typically, you'll see two images "approach" each other
>through a more or less boring, brownish and blurry swamp of
>intermediary states.

Not true. If morphing didn't work visually it wouldn't be the big budget ad
and movie gimmick that it is. In fact it has a very obvious appeal, and one
that visual designers are prepared to spend huge amounts of money to
achieve. The results may not be technically rigorous, but they satisfy
designers, directors and viewers. Certainly the commercially produced
morphs I've seen have been nothing like you describe. 

>
>The general problem here is that the salient features you
>think you're interpolating in the straightest manner
>imaginable, will inevitably engage in unforeseeable complex,
>interesting and distracting interactions (to counteract the
>noisy intermediary states, composers have always spent the
>extra effort of actually inventing and carefully shaping
>each state--as opposed to applying a mechanism).

This is only true if you insist on applying a model rigidly - and is
exactly why the FFT interpolation technique doesn't work. Success or
failure aside, is there no point in investigating those distracting
interactions? 

If you attempt the process, at least you have the chance to make
interesting perceptual discoveries. If you write it off as irrelevant -
using the excuse that there's no DSP theory to support it - that will never
happen. 

Besides, audio morphs are clearly possible and a number of composers have
created successful examples. I'd suggest there's something interesting and
useful to be learned here! 

R.

Date: 1997-03-03 03:07
From: Lonce LaMar Wyse
Subject: Re: Morphing

 > See above. No indeed, there is no general purpose morphing algorithm which
 > will work on any and all material, and I doubt if there ever will be. But
 > it is possible to come close using a variety of techniques. If you design a
 > physical model that works well for a family of instruments, and then change
 > the parameters so that the model moves from a simulation of one member of
 > the family to another, is that not a morph? And of course there are other
 > approaches you can take. 

General morphing technique in 3 easy steps:

1. Take any two sounds.
2. Represent them in the same space (Fourier, FM, physical model, or any
	of the other *infinite* number of possible model spaces).
3. Now they are each at some location in the same representation space;
	just take one of the *infinite* number of paths from one to the
	other, and you have a morph.

Obviously, this is abstract, and doesn't say how to find a space for
the two sounds to live in, but it is, in fact, what every morph
construction does. The two infinities here show how vague we are being
when we talk as if there were one "right" morph, or as if there could
be a definitive piece of morphing software.
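The three steps above can be sketched in a few lines of Python/NumPy. Here the shared representation space is the magnitude spectrum and the path is a straight line - both choices are arbitrary, which is exactly the point: any other space and any other path would give an equally valid "morph".

```python
import numpy as np

def spectral_morph(sound_a, sound_b, alpha):
    """Step 2: represent both sounds in one shared space (here,
    magnitude spectra).  Step 3: take one path -- the straight
    line -- between the two locations in that space."""
    spec_a = np.abs(np.fft.rfft(sound_a))
    spec_b = np.abs(np.fft.rfft(sound_b))
    return (1 - alpha) * spec_a + alpha * spec_b

sr = 8000
t = np.arange(sr) / sr                 # one second of samples
a = np.sin(2 * np.pi * 220 * t)        # sound A: a 220 Hz sine
b = np.sin(2 * np.pi * 440 * t)        # sound B: a 440 Hz sine
halfway = spectral_morph(a, b, 0.5)    # both partials, at half strength
```

Resynthesising the interpolated spectrum is left out, since recovering phase is itself a modelling decision - another of the infinities the formula sweeps under the rug.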

 ===
We have been exploring a slightly more complicated version of the
above "general" formula. We have a large collection of sound models,
every single one using a different algorithm. That is, their
representations exist in different parameter spaces. So how do we morph
between them?

We find two different models that have parameter regions where they
produce sounds that are perceptually similar. Then to "morph" from a
sound made with one of the models to a sound made with the other, we
choose a parameter path that brings sound A into the overlapping
perceptual region, switch models as subtly as possible, and continue
on the path in the other model's parameter space until we arrive at
sound B.

In this way, the two sounds never actually reside in the same parameter
space, but a convincing "morph" can still be achieved.
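A minimal Python/NumPy sketch of that scheme follows. The two "models" here are deliberately trivial (both just sines, so their perceptual spaces overlap by construction), the function names are invented, and a real implementation would crossfade at the seam rather than butt-join the segments - but the shape of the procedure is as described: walk model A's parameters into the overlap region, switch models there, and continue in model B's space.

```python
import numpy as np

def render_path(synth, start, end, steps):
    """Render short grains while the parameters move along a straight
    line in a single model's own parameter space."""
    grains = []
    for i in range(steps):
        a = i / max(steps - 1, 1)
        p = [(1 - a) * s + a * e for s, e in zip(start, end)]
        grains.append(synth(p))
    return np.concatenate(grains)

def cross_model_morph(synth_a, start_a, overlap_a,
                      synth_b, overlap_b, end_b, steps=20):
    """Sound A travels to the perceptually overlapping region in
    model A's space; we switch models there and continue in model
    B's space toward sound B."""
    first = render_path(synth_a, start_a, overlap_a, steps)
    second = render_path(synth_b, overlap_b, end_b, steps)
    return np.concatenate([first, second])

# Toy 'models': both are bare sines, so their spaces trivially overlap
grain_t = np.arange(800) / 8000
sine = lambda p: np.sin(2 * np.pi * p[0] * grain_t)
out = cross_model_morph(sine, [200.0], [300.0], sine, [300.0], [440.0], steps=5)
```

The hard part, of course, is what the toy hides: finding the parameter regions of two genuinely different models that are perceptually similar enough for the switch to pass unnoticed.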


- lonce