
Sound localization




  1. Sound localization What are the factors that determine how well we can tell where a sound is coming from?

  2. Bottom line Acoustics, peripheral coding, and central processing are all important in sound localization.

  3. Importance of sound localization • Clear survival value: evolutionary importance • Involves comparisons between ears and across the spectrum: requires a brain

  4. Sound localization: the important phenomena • Effects of position in azimuth • Effects of frequency • Effects of position in elevation

  5. The minimum audible angle (MAA) The threshold for detecting a change in the spatial location of a sound source. [Demo: two speakers play Sound 1 and Sound 2; the listener judges which sound came from the right.]

  6. MAA at different positions From Gelfand (1998)

  7. MAA in azimuth [plot: MAA as a function of judged position around the head] From Blauert (1983)

  8. MAA in azimuth as a function of frequency From Blauert (1983)

  9. MAA in elevation From Blauert (1983)

  10. The MAA is • Better at midline than to the sides in azimuth • Not so good for sounds around 1500 Hz • Not as good in elevation as in azimuth WHY?

  11. Explanations for the characteristics of the MAA • Acoustic cues available • Peripheral coding • Central processing

  12. Acoustic cues used in localization • Interaural differences • Interaural intensity differences (IIDs) • Interaural time differences (ITDs) • Spectral shape cues

  13. Interaural intensity differences From Gelfand (1998)

  14. Interaural time differences

  15. Why are we better at “straight ahead” than off to the sides?

  16. Cone of confusion From Gelfand (1998)

  17. Cone of confusion [diagram: sources at 30 degrees and 150 degrees lie on the same cone and produce the same interaural differences]

  18. Why are we better at “straight ahead” than off to the sides? • Cone of confusion, front-back confusions • In the brain, more neurons respond to “straight ahead” than to “off to the side”.
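
A quick worked check of the front-back ambiguity (not from the deck's figure): under the simplest textbook approximation, the ITD varies with the sine of the azimuth angle, and the sine of 30 degrees equals the sine of 150 degrees, so a source in front and its mirror behind produce identical timing cues. A minimal Python sketch with assumed head dimensions:

```python
import math

d = 0.175   # assumed interaural distance, meters
c = 343.0   # speed of sound in air, m/s

def itd_us(azimuth_deg: float) -> float:
    """ITD in microseconds under the simple sine model."""
    return d / c * math.sin(math.radians(azimuth_deg)) * 1e6

print(itd_us(30.0), itd_us(150.0))   # both ~255 us: front and back collide
```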

  19. The brain can afford to devote more neurons to localizing straight ahead because • we can turn our heads • most sounds come from straight ahead • it isn’t necessary to localize sounds accurately off to the side • none of the above; it can’t afford to do this

  20. IIDs and frequency From Gelfand (1998)

  21. Why IIDs depend on frequency [diagram for a 1600 Hz tone] From Gelfand (1998)

  22. The reason that sound is more intense at the ear close to the sound source is • the inverse square law; sound has to travel farther to the far ear • it takes longer for sound to travel farther • the head absorbs and reflects sound on the side closer to the source • the head absorbs and reflects sound on the side farther from the source

  23. If there is a talker on your left and a talker on your right, and you really want to hear the talker on your right, you should • turn your nose toward the talker on the right • turn your nose toward the talker on the left • turn your left ear to the left talker and your right ear to the right talker (and listen to your right ear)

  24. Interaural time differences From Gelfand (1998)

  25. ITDs and frequency: How could your brain know if a sound arrived at one ear 0.2 ms later than at the other?
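
One concrete way such a comparison could work, offered here as an engineering analogue rather than a claim about the actual neural circuitry, is cross-correlation: slide one ear's signal against the other and take the lag that matches best. A minimal sketch with assumed values (48 kHz sampling, broadband noise):

```python
import numpy as np

fs = 48_000                  # assumed sample rate, Hz
delay = round(0.2e-3 * fs)   # 0.2 ms expressed in samples (~10)

rng = np.random.default_rng(0)
left = rng.standard_normal(2048)   # broadband noise at the left ear
right = np.roll(left, delay)      # the same sound, arriving 0.2 ms later

# Cross-correlate over candidate lags; the peak estimates the ITD.
lags = np.arange(-50, 51)
corr = [float(np.dot(left, np.roll(right, -k))) for k in lags]
best_lag = lags[int(np.argmax(corr))]
print(f"estimated ITD: {best_lag / fs * 1e3:.2f} ms")   # ~0.21 ms
```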

  26. Phase ambiguity [with a 0.6 ms interaural delay] At 1000 Hz the phase difference is 216 degrees. At 4000 Hz it is 864 degrees, but looks like 144 degrees (as if the delay were 0.1 ms).

  27. Phase ambiguity When the interaural delay is longer than the period of the tone, the interaural time comparison gives an ambiguous result: it reports a phase difference of x degrees, but the real difference could be 360 + x, or 720 + x, and so on.
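
The ambiguity is just modular arithmetic on phase. A worked check of the numbers on the previous slide (0.6 ms delay):

```python
def phase_deg(delay_ms: float, freq_hz: float) -> float:
    """True interaural phase difference, in degrees."""
    return delay_ms * 1e-3 * freq_hz * 360.0

for f in (1000, 4000):
    true = phase_deg(0.6, f)
    apparent = true % 360.0   # all a phase comparison can report
    print(f"{f} Hz: true {true:.0f} deg, apparent {apparent:.0f} deg")
# 1000 Hz: true 216, apparent 216 (delay shorter than the 1 ms period)
# 4000 Hz: true 864, apparent 144 (delay longer than the 0.25 ms period)
```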

  28. Interaural onset time comparisons [waveforms with a 0.6 ms onset delay] How about the first (onset) response? It can be used, but not all sounds have abrupt onsets, and it only happens once.

  29. Interaural time differences Maximum ITD = 0.65 ms. Ambiguity sets in once the period is shorter than the delay: f = 1/p = 1/0.65 ms ≈ 1538 Hz (and any higher frequency). From Gelfand (1998)
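
The 0.65 ms maximum itself can be approximated from head geometry. A sketch using Woodworth's spherical-head formula, ITD = (a/c)(theta + sin theta) for azimuths from 0 to 90 degrees, with an assumed head radius of about 8.75 cm; both outputs land close to the slide's 0.65 ms and 1538 Hz:

```python
import math

a = 0.0875   # assumed head radius, meters
c = 343.0    # speed of sound in air, m/s

def woodworth_itd_s(theta_rad: float) -> float:
    """Spherical-head ITD in seconds, valid for 0 <= theta <= pi/2."""
    return (a / c) * (theta_rad + math.sin(theta_rad))

max_itd = woodworth_itd_s(math.pi / 2)            # source directly to the side
print(f"max ITD ~ {max_itd * 1e3:.2f} ms")        # ~0.66 ms
print(f"ambiguous above ~ {1 / max_itd:.0f} Hz")  # ~1525 Hz
```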

  30. The reason that phase ambiguity occurs is • the ear does not encode the starting phase • the ear provides no information about phase • phase locking does not occur above 5000 Hz • people use the place code for frequencies above 2000 Hz

  31. Interaural cues and frequency [plot: cue goodness of IIDs and ITDs from 250 to 16000 Hz; ITDs are good at low frequencies, IIDs at high frequencies] Neither cue is so good around 1500 Hz.

  32. How do we know that (other) characteristics of the auditory system don’t also contribute?

  33. Lateralization experiments Interaural differences under earphones create the perception of a sound source located inside the head, at a position determined by the interaural difference.
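
Such stimuli are straightforward to construct digitally: impose an ITD as a small time shift and an IID as a gain on one channel. A minimal sketch with illustrative parameter values:

```python
import numpy as np

fs = 44_100
t = np.arange(int(fs * 0.5)) / fs   # 0.5 s of samples
f0 = 500.0                          # low frequency, where ITDs are usable

itd = 300e-6    # left ear leads by 300 microseconds
iid_db = 6.0    # right ear attenuated by 6 dB

left = np.sin(2 * np.pi * f0 * t)
right = np.sin(2 * np.pi * f0 * (t - itd)) * 10 ** (-iid_db / 20)
stereo = np.stack([left, right], axis=1)
# Over earphones this image lateralizes toward the leading,
# more intense (left) ear, heard inside the head.
```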

  34. Interaural intensity difference discrimination [plots for 15, 9, and 0 dB reference differences] Good performance. From Gelfand (1998)

  35. We know it’s the acoustics in the case of IIDs, because if we artificially create IIDs at low frequencies, people hear the sound source at different locations.

  36. Interaural time (phase) difference discrimination Still can’t do high frequencies; better at small phase separations (straight ahead). From Gelfand (1998)

  37. We know it’s the auditory system in the case of ITDs, because if we “artificially” create ITDs at high frequencies, people still can’t tell where the sound source is.

  38. ITD and envelope ITD [waveform diagram contrasting a fine-structure ITD with an envelope ITD]

  39. AM lateralization From Yost (1994)
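
A sketch of the kind of stimulus behind these two slides (parameter values assumed): a high-frequency carrier, useless for fine-structure phase comparison, is amplitude-modulated so that its slow envelope can carry the interaural delay instead.

```python
import numpy as np

fs = 44_100
t = np.arange(int(fs * 0.5)) / fs
fc = 4000.0       # carrier: too high for fine-structure ITDs
fm = 200.0        # modulation rate: slow enough to compare across ears
env_itd = 500e-6  # 0.5 ms delay applied to the envelope only

left = (1 + np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
right = (1 + np.cos(2 * np.pi * fm * (t - env_itd))) * np.sin(2 * np.pi * fc * t)
# The 4000 Hz fine structure is identical at the two ears; only the
# 200 Hz envelope is delayed, and listeners can lateralize on that.
```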

  40. We know there is a contribution of the auditory system in the case of differences between positions, because if we artificially create the interaural differences corresponding to different positions, people still “lateralize” better for midline positions than for lateral ones.

  41. Explanations for the characteristics of the MAA • Acoustic cues available • Peripheral coding • Central processing

  42. The MAA is • Better at midline than to the sides in azimuth • Not so good for sounds around 1500 Hz • Not as good in elevation as in azimuth WHY?

  43. A sound source moving along an arc directly overhead at the midline produces no interaural differences. What are the acoustic cues to sound elevation?

  44. What are the acoustic cues to sound elevation? [Three listening conditions compared: (1) elevation: no; azimuth: yes, but with front-back confusions; (2) elevation: no; azimuth: yes; (3) elevation: yes; azimuth: yes] Localization in elevation requires pinnas. From Blauert (1983)

  45. What do pinnas do for us? The acoustic cue used to localize in elevation is spectral shape. From Gelfand (1998)

  46. That localization is less precise in elevation than in azimuth suggests that spectral shape is not as good a cue to location as interaural differences.

  47. Localization in azimuth with one ear is similar in precision to localization in elevation. From Gelfand (1998) Spectral shape cues are available, but are used as supplemental information for localization in azimuth.

  48. Another role of spectral shape cues [audio demo: an unprocessed sound vs. the same sound shaped by “HRTF filters”]
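
Computationally, “shaping by HRTF filters” is just convolution of the dry sound with a measured head-related impulse response for each ear. A minimal sketch; the two impulse-response arrays are hypothetical stand-ins for measured data of equal length:

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(dry: np.ndarray, hrir_left: np.ndarray,
               hrir_right: np.ndarray) -> np.ndarray:
    """Impose one direction's spectral shape (plus its ITD and IID)
    on a dry monaural sound; returns stereo samples for headphones."""
    left = fftconvolve(dry, hrir_left)
    right = fftconvolve(dry, hrir_right)
    return np.stack([left, right], axis=1)
```

Played over earphones, the result tends to sound externalized in the direction whose impulse responses were used, rather than inside the head.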

  49. Reason that localization in azimuth is better straight ahead than off to the side • acoustic cues available • peripheral coding • central processing

  50. Reason that localization in azimuth is not so good for sounds around 1500 Hz • acoustic cues available • peripheral coding • central processing
