One reason we are able to recognize speech, despite all the acoustic variation in the signal, and even in very difficult listening conditions, is that the speech situation contains a great deal of redundancy – more information than is strictly necessary to decode the message. There is, firstly, our general ability to make predictions about the nature of speech, based on our previous linguistic experience – our knowledge of the speakers, subject matter, language, and so on. But in addition, the wide range of frequencies found in every signal presents us with far more information than we need in order to recognize what is being said. As a result, we are able to focus our auditory attention on just the relevant distinguishing features of the signal – features that have come to be known as acoustic cues. What are these cues, and how can we prove their role in the perception of speech? It is not possible to obtain this information simply by carrying out an acoustic analysis of natural speech: this would tell us what acoustic information is present but not what features of the signal are actually used by listeners in order to identify speech sounds. The best an acoustic description can do is give us a rough idea as to what a cue might be. But to learn about listeners' perception, we need a different approach.
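To make concrete what an acoustic analysis of this kind yields, here is a minimal sketch in Python (assuming NumPy is available; the 300 Hz pure tone is a hypothetical stand-in for a recorded speech signal, which in practice would be read from an audio file). A frequency analysis like this identifies which frequencies carry energy in the signal, but it cannot, by itself, tell us which of those frequencies listeners actually attend to.

```python
import numpy as np

# Hypothetical stand-in for a speech recording: one second of a 300 Hz
# tone sampled at 16 kHz. Real speech would contain many frequencies
# at once -- the redundancy described above.
fs = 16000                      # sampling rate in Hz
t = np.arange(fs) / fs          # one second of sample times
signal = np.sin(2 * np.pi * 300 * t)

# A basic acoustic analysis: the magnitude spectrum of the signal,
# showing how much energy is present at each frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The frequency bin with the most energy.
peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)  # prints 300.0
```

Such a description enumerates the acoustic information present, but deciding which spectral features function as perceptual cues requires the experimental methods discussed next.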