
[Csnd] Stereo Sources with hrtfmove2 Opcode

Date: 2020-03-12 13:52
From: Mark Ferguson
Subject: [Csnd] Stereo Sources with hrtfmove2 Opcode
Hi all,

I'm experimenting with the excellent hrtfmove2 opcode: http://www.csounds.com/manual/html/hrtfmove2.html.

I'm relatively new to binaural/HRTF-based processing (and this opcode specifically), so forgive the potential naivety of the question!

The opcode requires a mono source, which is moved through 3D space. How can one use a stereo source, or is it even possible? I had envisaged simply splitting the two signals (i.e. L & R from a conventional stereo source) into two mono sources, then giving them 'symmetrical' parameters (so, right signal = azimuth 90˚ and left = -90˚; both would then have the same height info).

Would this, in theory, work? I'm going to give it a go, but just wonder if anyone has a more mathematical approach or any experience with this, or if an ambisonics-based approach would yield more fruit. I just figured that ambisonics wouldn't account for HRTF and the spectral dataset used by hrtfmove2, which is working superbly through my headphones.
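For what it's worth, here's roughly what I had in mind, as a minimal, untested sketch (the stereo file name is just a placeholder, and it assumes the standard hrtf-44100-left/right.dat data files that ship with Csound, so sr must be 44100):

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr     = 44100        ; must match the hrtf-44100-*.dat files
ksmps  = 32
nchnls = 2
0dbfs  = 1

instr 1
  ; split a conventional stereo file into two mono sources
  aL, aR   diskin2 "stereo_source.wav", 1
  ; left channel at -90, right channel at +90, same elevation
  aL1, aL2 hrtfmove2 aL, -90, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  aR1, aR2 hrtfmove2 aR,  90, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  ; sum the two binaural pairs
  outs     aL1 + aR1, aL2 + aR2
endin
</CsInstruments>
<CsScore>
i1 0 30
</CsScore>
</CsoundSynthesizer>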

MF. 


Date: 2020-03-12 14:25
From: "Jeanette C."
Subject: Re: [Csnd] Stereo Sources with hrtfmove2 Opcode
Mar 12 2020, Mark Ferguson has written:
...
> The opcode requires a mono source, which is moved through 3D space. How can one use a stereo source, or is it even possible? I had envisaged simply splitting the two signals (i.e. L & R from a conventional stereo source) into two mono sources, then giving them 'symmetrical' parameters (so, right signal = azimuth 90˚ and left = -90˚; both would then have the same height info).
...
I have tried to work with stereo samples and HRTF. It depends on the
sample, I found. Of course, you can feed both channels of your stereo
audio separately into two HRTF opcodes. Putting them in the places you
suggest is fine for a balanced stereo impression.

Phasing, however, could occur. I'm sure someone here can explain when
this is most likely to happen and why. At a guess, the microphone setup
used for the original recording comes into play.

Best wishes,

Jeanette

-- 
  * Website: http://juliencoder.de - for summer is a state of sound
  * Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
  * SoundCloud: https://soundcloud.com/jeanette_c
  * Twitter: https://twitter.com/jeanette_c_s
  * Audiobombs: https://www.audiobombs.com/users/jeanette_c
  * GitHub: https://github.com/jeanette-c

I thought love was just a tingling of the skin <3
(Britney Spears)


Date: 2020-03-12 15:34
From: Anders Genell
Subject: Re: [Csnd] Stereo Sources with hrtfmove2 Opcode
Doesn't stereo already imply spatial information? Using hrtfmove2 to achieve more realistic spatialization would get muddled when combined with the less advanced spatialization already encoded in a stereo signal.
What could be done, perhaps, is to put two virtual speakers in a virtual room and auralize that setup, so that one could move around in it, listening to the effect of the room on the stereo reproduction at different positions...
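Something like this, perhaps (just an untested sketch using the same CSD skeleton as above; the file name and reverb settings are invented, and it assumes the 44.1k HRTF data files):

instr 1
  aL, aR   diskin2 "stereo_mix.wav", 1
  ; crude 'room': a little stereo reverb around the virtual speakers
  awL, awR reverbsc aL, aR, 0.7, 8000
  ; two static virtual loudspeakers at the conventional +/-30 degrees
  aL1, aL2 hrtfstat aL + awL*0.3, -30, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  aR1, aR2 hrtfstat aR + awR*0.3,  30, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  ; swap hrtfstat for hrtfmove2 with k-rate angles to 'walk around' the room
  outs     aL1 + aR1, aL2 + aR2
endin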

Regards,
Anders

Date: 2020-03-12 15:53
From: Giuseppe Silvi
Subject: Re: [Csnd] Stereo Sources with hrtfmove2 Opcode
Hi,
Left and right tracks do not in themselves mean stereo any more than a monophonic signal passed through HRTF panning does. What is inside a pair of tracks may (not must, and certainly not implicitly) describe a stereophonic panorama. So, first of all: what do you have inside those L/R tracks?
An example: a Blumlein stereo pair (two coincident figure-8 microphones with 90 degrees of divergence) is a pure intensity-difference pair. It does not describe stereophony by phase differences, only by amplitude differences. During reproduction, both channels feed amplitude differences to the loudspeakers, and the loudspeakers feed amplitude and phase differences to the ears (across the roughly 17 cm between them). What, then, is the significance of processing a Blumlein pair through HRTF?
If you put on headphones instead of reproducing the Blumlein pair through loudspeakers, there is no difference: the translation from amplitude differences to amplitude-and-phase differences is still respected.
Nevertheless, if you really want to pass an L/R pair through HRTF, the 180 degrees you described is too wide. The frontal aperture of a stereo pair is narrower. You could start with 90 degrees (-45/+45) and test by listening. Be aware, though, that the inter-channel relationship under HRTF will create phase interference between the channels.
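For instance (an untested sketch, same CSD skeleton as above, placeholder file name), make the half-aperture a score parameter so you can compare -45/+45 with -90/+90 by ear:

instr 1
  iwidth   = p4          ; half-aperture in degrees: 45, 90, ...
  aL, aR   diskin2 "stereo_source.wav", 1
  aL1, aL2 hrtfmove2 aL, -iwidth, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  aR1, aR2 hrtfmove2 aR,  iwidth, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  outs     aL1 + aR1, aL2 + aR2
endin

; in the score, two test runs:
; i1  start  dur  half-aperture
i1    0      20   45
i1    21     20   90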

The ambisonic domain does scale to HRTF rendering. But if what you need is a binaural signal, why do you need an intermediate technology? Do you need both a stereo and a binaural "master"?
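If you do want to experiment with the ambisonic route in Csound, one common pattern is to encode to first-order B-format, decode to a ring of virtual loudspeakers, and render each loudspeaker binaurally. A rough, untested sketch; the quad speaker order/angles and the bformenc1 angle convention are my assumptions from the manual, so check the signs:

instr 1
  aL, aR   diskin2 "stereo_source.wav", 1
  ; encode each channel of the pair into first-order B-format at +/-45
  awL, axL, ayL, azL bformenc1 aL,  45, 0
  awR, axR, ayR, azR bformenc1 aR, -45, 0
  aw = awL + awR
  ax = axL + axR
  ay = ayL + ayR
  az = azL + azR
  ; decode to a virtual quad rig (assumed output order: FL, FR, BL, BR)
  aFL, aFR, aBL, aBR bformdec1 2, aw, ax, ay, az
  ; render each virtual loudspeaker through a static HRTF pair
  a1L, a1R hrtfstat aFL,  -45, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  a2L, a2R hrtfstat aFR,   45, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  a3L, a3R hrtfstat aBL, -135, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  a4L, a4R hrtfstat aBR,  135, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
  outs     a1L+a2L+a3L+a4L, a1R+a2R+a3R+a4R
endin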

Giuseppe

> On 12 Mar 2020, at 14:52, Mark Ferguson  wrote:
> 
> Hi all,
> 
> I'm experimenting with the excellent hrtfmove2 opcode: http://www.csounds.com/manual/html/hrtfmove2.html.
> 
> I'm relatively new to binaural/HRTF-based processing (and this opcode specifically), so forgive the potential naivety of the question!
> 
> The opcode requires a mono source, which is moved through 3D space. How can one use a stereo source, or is it even possible? I had envisaged simply splitting the two signals (i.e. L & R from a conventional stereo source) into two mono sources, then giving them 'symmetrical' parameters (so, right signal = azimuth 90˚ and left = -90˚; both would then have the same height info).
> 
> Would this, in theory, work? I'm going to give it a go, but just wonder if anyone has a more mathematical approach or any experience with this, or if an ambisonics-based approach would yield more fruit. I just figured that ambisonics wouldn't account for HRTF and the spectral dataset used by hrtfmove2, which is working superbly through my headphones.
> 
> MF.

Csound mailing list
Csound@listserv.heanet.ie
https://listserv.heanet.ie/cgi-bin/wa?A0=CSOUND
Send bug reports to
        https://github.com/csound/csound/issues
Discussions of bugs and features can be posted here