The author discusses methods and problems of acoustic signal processing for systems that enable machines to understand spoken communication. Emphasis is on research outside the ARPA-sponsored SUR (Speech Understanding Research) study. Acoustic-level processing comprises three steps, not necessarily distinct: (1) preprocessing the original analog signal or its digitized form with basic techniques such as amplitude compression; (2) analyzing the preprocessed signal using fast Fourier transforms, digital filtering, etc.; and (3) parameterizing the results in phoneme-sized chunks via formants, autocorrelation techniques, etc. Problems include (1) environmental noise, (2) transducer limitations, (3) determining an appropriate parameterization technique, and (4) coping with the wide phonetic, syntactic, and semantic variability of speech. A minimal sketch of the three-step pipeline follows.
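The sketch below is not taken from the paper; it is an illustrative NumPy-only rendering of the three steps named above, under assumed choices: mu-law-style amplitude compression for preprocessing, Hamming-windowed FFT frames for analysis, and normalized autocorrelation coefficients (rather than formant tracking) for parameterization. Frame length, hop size, and lag count are arbitrary placeholders.

    import numpy as np

    def preprocess(signal, mu=255.0):
        # Step 1 (assumed): mu-law-style amplitude compression of the digitized signal.
        return np.sign(signal) * np.log1p(mu * np.abs(signal)) / np.log1p(mu)

    def analyze(signal, frame_len=256, hop=128):
        # Step 2 (assumed): short-time spectral analysis via the FFT of Hamming-windowed frames.
        window = np.hamming(frame_len)
        frames = np.array([signal[i:i + frame_len] * window
                           for i in range(0, len(signal) - frame_len + 1, hop)])
        spectra = np.abs(np.fft.rfft(frames, axis=1))
        return frames, spectra

    def parameterize(frames, n_lags=12):
        # Step 3 (assumed): per-frame autocorrelation coefficients, normalized by the
        # zero-lag energy, as one possible compact parameter set for phoneme-sized chunks.
        params = []
        for frame in frames:
            ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            params.append(ac[:n_lags] / (ac[0] + 1e-12))
        return np.array(params)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.standard_normal(8000) * 0.1       # stand-in for a digitized utterance
        frames, spectra = analyze(preprocess(x))
        print(parameterize(frames).shape)         # (n_frames, n_lags)

In a real system each stage would be tuned to the transducer, noise environment, and the chosen parameterization, which are exactly the problem areas the author lists.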