This quarter we concentrated our efforts on the unsupervised-learning approach to the very-low-rate vocoder. In particular, we detailed the design of a segment vocoder, which represents speech as a sequence of segment templates extracted automatically from running speech. In addition to designing the segment vocoder, we implemented three programs: (1) the segmentation program needed to extract segment templates, (2) an initial version of the segment vocoder itself, and (3) a display and playout program for comparing two speech files. All three programs were written to allow interactive use. We also performed some initial experiments with the segment vocoder; the results of these experiments are discussed in this QPR.

Keywords: speech compression, linear prediction, clustering, spectral template, vocoder, unsupervised learning, diphone, phonetic vocoder, phoneme recognition, time warping, segmentation, segment vocoder, segment quantization, space sampling.
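The core operation of a segment vocoder of this kind, matching each input segment against a stored inventory of templates (using a time-warping spectral distance, since segments and templates differ in length) and transmitting only the winning template indices, can be sketched as follows. The toy spectral frames, the Euclidean frame distance, and the template inventory below are illustrative assumptions, not the actual design described in this report.

```python
def frame_dist(a, b):
    """Euclidean distance between two spectral frames (lists of floats)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def dtw_dist(seg, tmpl):
    """Dynamic-time-warping distance between two frame sequences,
    allowing the segment and template to differ in length."""
    n, m = len(seg), len(tmpl)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = frame_dist(seg[i - 1], tmpl[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a template frame
                                 D[i][j - 1],      # repeat a segment frame
                                 D[i - 1][j - 1])  # advance both
    return D[n][m]

def encode(segments, templates):
    """Segment quantization: replace each segment by the index of
    its nearest template under the warped distance."""
    return [min(range(len(templates)),
                key=lambda k: dtw_dist(seg, templates[k]))
            for seg in segments]

def decode(indices, templates):
    """Reconstruct a frame sequence by concatenating the templates
    named by the transmitted indices."""
    return [frame for k in indices for frame in templates[k]]

# Hypothetical two-frame templates over two-dimensional spectral frames.
templates = [
    [[0.0, 0.0], [0.1, 0.1]],   # template 0
    [[1.0, 1.0], [1.0, 0.9]],   # template 1
]
# Input segments of differing lengths, each near one template.
segments = [
    [[0.05, 0.0], [0.1, 0.15], [0.1, 0.1]],
    [[0.9, 1.0], [1.0, 1.0]],
]
codes = encode(segments, templates)
print(codes)  # → [0, 1]
```

Only the indices in `codes` would be transmitted, which is what makes the scheme a very-low-rate coder: the bit rate is set by the segment rate and the size of the template inventory rather than by the frame rate.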