

Hybrid In-Person/Zoom Meeting held at the DigiPen Institute of Technology in Redmond, WA, Wednesday, January 31, 2024, 7:30 PM PST (UTC-8)

What Does "Accurate" Even Mean?
Presented by
James D. (jj) Johnston
Chief Scientist
Immersion Networks
Meeting Produced by
Dan Mortensen - AESPNW Committee, Dansound Inc.

Images: aes_jan2024_01.jpg, aes_jan2024_02.jpg, aes_jan2024_03.jpg
Video frames of James D. (jj) Johnston discussing accuracy at the PNW Section, January 31, 2024.


Video Hosting Courtesy Dan Mortensen and Dansound Inc.

Video Screen grabs by Gary Louie, PNW AES Secretary.


The PNW Section met in hybrid style in January 2024, both in person at the DigiPen Institute of Technology in Redmond, WA, and on Zoom, to hear noted audio researcher James D. (jj) Johnston discuss audio accuracy. About 16 people attended in person and 65 on Zoom, with 56 reporting being AES members.

jj typically gives his home AES Section an annual talk, this year on the thought-provoking use of some terms regarding audio accuracy. Some of his concepts, in a nutshell:

  • The accuracy of an audio signal that passes through any wire, any active or passive circuitry, or any ADC/DAC is relatively easy to measure thoroughly, because we have the source to compare the output against directly (see the sketch after this list).
  • We can say with a great deal of confidence that there are thresholds of accuracy in these signals where we can claim audible transparency.
  • Speakers and rooms are not so easy to measure for accuracy. They cannot be compared to an original acoustic event, such as a classical concert, because there is no direct access to the source for comparison. Speakers and rooms also have to be considered as a system: the ideal design for one depends on the other.
  • Many audiophiles and engineers fail to pay proper attention to head-related transfer functions and head movement. The advancement of audio will need to address both.
  • Preferences are inarguable. Stop arguing with people about what they like and stop demeaning them for it.
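
To illustrate the first point above: with direct access to the source, one straightforward way to quantify accuracy is to gain-match the processed signal against the source and report the residual error as a signal-to-noise ratio. The Python sketch below is illustrative only; the function and the toy 1 kHz test signal are assumptions made for this report, not material presented at the meeting.

```python
# Sketch: measuring the accuracy of a processed signal by direct comparison
# with its source, as is possible for wires, circuits, and ADC/DAC chains.
import numpy as np

def residual_snr_db(source: np.ndarray, processed: np.ndarray) -> float:
    """Gain-match the processed signal to the source, then return the ratio
    of source power to residual-error power in decibels."""
    # Least-squares gain match so a benign level difference is not counted
    # as inaccuracy.
    gain = np.dot(processed, source) / np.dot(processed, processed)
    residual = source - gain * processed
    return 10.0 * np.log10(np.sum(source**2) / np.sum(residual**2))

# Toy example: a 1 kHz tone passed through a chain that scales it slightly
# and adds a little noise.
fs = 48_000
t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 1000 * t)
processed = 0.9 * source + 1e-4 * np.random.default_rng(0).standard_normal(fs)
print(f"residual SNR: {residual_snr_db(source, processed):.1f} dB")
```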

About Our Presenter:
James D. (jj) Johnston

  • Received the BSEE and MSEE degrees from Carnegie Mellon University, Pittsburgh, PA, in 1975 and 1976, respectively.
  • Worked 26 years for AT&T Bell Labs and its successor AT&T Labs Research.
  • One of the first investigators in the field of perceptual audio coding.
  • One of the inventors and standardizers of MPEG-1/2 Audio Layer 3 and MPEG-2 AAC, as well as the AT&T Labs Research PXFM (perceptual transform coding) and PAC (perceptual audio coding) codecs and the ASPEC algorithm that provided the best audio quality in the MPEG-1 audio tests.
  • Currently working in the area of auditory perception of soundfields, electronic soundfield correction, ways to capture soundfield cues and represent them, and ways to expand the limited sense of realism available in standard audio playback for both captured and synthetic performances.
  • Mr. Johnston is an IEEE Fellow, an AES Fellow, an NJ Inventor of the Year, an AT&T Technical Medalist and Standards Awardee, and a co-recipient of the IEEE Donald Fink Paper Award.
  • In 2006, he received the James L. Flanagan Signal Processing Award from the IEEE Signal Processing Society.
  • He presented the 2012 Heyser Lecture at the AES 133rd Convention: "Audio, Radio, Acoustics and Signal Processing: the Way Forward."



Reported by Gary Louie, PNW Secretary. Additional report material from "justdafactsmaam".


Last Modified, 03/04/2024, 17:38:00, dtl