Actually, signal processing is already used for most machine learning on audio signals, including speech recognition. The reason is that ML algorithms, including deep learning, have a hard time learning the information you can get from a discrete Fourier transform.
Audio data in the time domain are just too noisy for most machine learning, and doing some signal processing as a preprocessing step often helps a lot.
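As a concrete illustration (a minimal sketch, not any particular library's pipeline): instead of feeding raw samples to a model, you frame the waveform and take the DFT of each frame, giving a spectrogram-like feature matrix. Function and parameter names here are made up for the example.

```python
import numpy as np

def spectral_features(signal, frame_size=256, hop=128):
    """Split a 1-D signal into overlapping frames and return the
    log-magnitude spectrum of each frame (a simple spectrogram)."""
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        # Window to reduce spectral leakage, then take the real FFT.
        spectrum = np.fft.rfft(frame * np.hanning(frame_size))
        frames.append(np.log1p(np.abs(spectrum)))
    return np.array(frames)

# One second of a 440 Hz tone sampled at 8 kHz: the result has one row
# per frame and frame_size // 2 + 1 frequency bins per row, and the
# energy concentrates in the bin nearest 440 Hz.
t = np.arange(8000) / 8000.0
features = spectral_features(np.sin(2 * np.pi * 440 * t))
```

A model trained on `features` sees the tone as a stable peak in one frequency bin, which is far easier to learn than the oscillating raw samples.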
Here it seems like he works with non-audio data, where this is less common.