

Meeting held via ZOOM Thursday, November 17th, 2022, 6PM PST (UTC -8).

What is "Bandwidth?"
And Why Do I Care?
Presented by James D. (jj) Johnston
AES Fellow, Chief Scientist of Immersion Networks
with
Bob Smith, SoundSmith Labs, PNWAES Vice Chair
image linked to jj_1.jpg
Our presenter for the evening, James D. (jj) Johnston, AES Fellow, PNWAES Committee Member, Technical Advisor, and Chief Scientist at Immersion Networks.
image linked to jj_slide.jpg
jj Johnston begins his presentation, What is "Bandwidth" and Why Do I Care?
image linked to bobs_slide.jpg
Bob Smith, SoundSmith Labs and PNWAES Vice Chair and Technical Contributor displays plots while explaining how to interpret them.

Zoom video recording and chat log by Luke Pacholski, screenshots extracted by Gary Louie.


PNW Section met on Zoom November 17 for a presentation by AES Fellow James D. (jj) Johnston titled What is "Bandwidth" and Why Do I Care?, explaining often-misunderstood concepts about bandwidth in digital audio. PNW Vice Chair Bob Smith also presented supporting information and real-world measurements. About 39 AES members and 36 non-members attended, including a large contingent from the Audio Science Review online forum. PNW Chair Dan Mortensen presided.

After a review of some basic concepts, Mr. Johnston (jj, lower case) showed graphs of the power spectrum (meaning only the energy, not the phase) of a variety of pulses, from zero length to quite extensive lengths, pointing out along the way that any signal narrow enough to fall entirely between samples cannot have been properly band-limited for that sample rate in the first place. He then showed the problems caused by a large discontinuity at the edges of a waveform by comparing a very long pulse (sharp peak at DC, ridiculously wide bandwidth) with a proper rate-converting pulse (having a 20 kHz baseband, with the rest of the spectrum below -120 dB re. peak). After a brief discussion of filter design and filter implementations, he turned to anti-imaging filters (the same design as anti-aliasing filters). He again pointed out that the imaginary "steps" in a properly reconstructed signal are entirely out of band, by showing a "squared off" waveform, the result of filtering it with a proper anti-imaging filter, and the spectra of both. The only "proper" one, the filtered one, contains no "squareness."
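The "steps are out of band" point can be checked numerically. The following numpy sketch (my own illustration of the same idea, not taken from the slides) builds the stair-step zero-order-hold version of a sampled 1 kHz sine, measures the image energy above the audio band, then applies a windowed-sinc reconstruction low-pass; the filter choices (16x oversampling, 1023 taps, Hann window) are arbitrary assumptions for the demo.

```python
import numpy as np

fs = 48_000          # PCM sample rate (Hz)
f0 = 1_000           # test tone (Hz)
up = 16              # oversampling factor used to "draw" the stair-steps

n = np.arange(4096)
x = np.sin(2 * np.pi * f0 * n / fs)      # band-limited PCM samples of a sine

# Zero-order hold: repeat each sample -> the "squared off" stair-step waveform
zoh = np.repeat(x, up)

# Spectrum of the stair-steps: baseband tone plus images around multiples of fs
win = np.hanning(zoh.size)
freqs = np.fft.rfftfreq(zoh.size, d=1 / (fs * up))
spec = np.abs(np.fft.rfft(zoh * win))
inband = spec[freqs <= 20_000].max()
images = spec[freqs > 24_000].max()       # "squareness" lives above Nyquist

# A proper anti-imaging (reconstruction) low-pass: windowed sinc, ~20 kHz cutoff
taps = 1023
t = np.arange(taps) - (taps - 1) / 2
fc = 20_000 / (fs * up)                  # cutoff, normalized to the fine rate
h = 2 * fc * np.sinc(2 * fc * t) * np.hanning(taps)
smooth = np.convolve(zoh, h, mode="same")
resid = np.abs(np.fft.rfft(smooth * win))
print("image level, stair-steps:", 20 * np.log10(images / inband), "dB")
print("image level, filtered:  ",
      20 * np.log10(resid[freqs > 24_000].max() / resid[freqs <= 20_000].max()),
      "dB")
```

All of the "squareness" shows up as images above the Nyquist frequency, and after the reconstruction filter the residual above 24 kHz drops far below the in-band tone, leaving a plain sine.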

jj also showed some examples of "time resolution" in standard PCM systems, pointing out that it is the system bandwidth and the bit depth that control time resolution; most specifically, not the sampling rate itself, except insofar as the sampling rate must be at least twice the bandwidth. Many questions were posted in Zoom chat and submitted during the presentation.

Bob Smith then showed plots from specific real-life DAC measurements, where those unfamiliar with the measurement purpose might be confused by the displays. Detailed examination demonstrated no laws of physics or rules of digital audio were violated.

Attendees were then welcome to unmute and introduce themselves, which took quite a while as people were happy to discuss their audio work.

Our Presenters: James D. (jj) Johnston is Chief Scientist of Immersion Networks. He has a long and distinguished career in electrical engineering, audio science, and digital signal processing. His research and product invention spans hearing and psychoacoustics, perceptual encoding, and spatial audio methodologies.

He was one of the first investigators in the field of perceptual audio coding, and one of the inventors and standardizers of MPEG-1/2 Audio Layer 3 and MPEG-2 AAC. Most recently, he has been working in the area of auditory perception and ways to expand the limited sense of realism available in standard audio playback for both captured and synthetic performances.

Johnston worked for AT&T Bell Labs and its successor AT&T Labs Research for two and a half decades. He later worked at Microsoft and then Neural Audio and its successors before joining Immersion. He is an IEEE Fellow, an AES Fellow, a NJ Inventor of the Year, an AT&T Technical Medalist and Standards Awardee, and a co-recipient of the IEEE Donald Fink Paper Award. In 2006, he received the James L. Flanagan Signal Processing Award from the IEEE Signal Processing Society, and presented the 2012 Heyser Lecture at the AES 133rd Convention: Audio, Radio, Acoustics and Signal Processing: the Way Forward. In 2021, along with two colleagues, Johnston was awarded the Industrial Innovation Award by the Signal Processing Society "for contributions to the standardization of audio coding technology."

Mr. Johnston received the BSEE and MSEE degrees from Carnegie-Mellon University, Pittsburgh, PA in 1975 and 1976 respectively.

Bob Smith, SoundSmith Labs, PNW Section Vice Chair, earned a BSEE from the University of Washington and has worked in the biomedical industry for over 50 years. He has spent the last 25 years developing acoustic research and audio engineering disciplines at Stryker/Physio-Control to improve speech intelligibility for medical-device voice prompting and voice recording systems in noisy environments. He is responsible for voice prompting in more than 30 languages. His department now handles acoustic measurements of components such as drivers and microphone capsules, as well as system measurements including Thiele-Small parameters, polar plots, waterfalls, frequency response, impulse response, and several speech intelligibility methods.



Reported by Gary Louie, PNW Section Secretary.


Last Modified, 12/08/2022, 15:56:00, dtl