You may have driven a car that advised you to "fasten your seat belt." This is an example of a talking machine that uses output from a voice-response system.
There are two types of voice-response systems: one uses a reproduction of a human voice and other sounds, and the other uses speech synthesis. Like monitors, voice-response systems provide temporary, soft-copy output.
The first type of voice-response system selects output from user-recorded words, phrases, music, alarms, or anything else you might record on audiotape, just as a printer selects characters. In these recorded voice-response systems, the actual analog recordings of sounds are converted into digital data and then permanently stored on disk or in a memory chip. When output occurs, a particular sound is converted back into analog form before being routed to a speaker. Chips are mass-produced for specific applications, such as output for automatic teller machines, microwave ovens, smoke detectors, elevators, alarm clocks, automobile warning systems, video games, and vending machines, to mention only a few. When sounds are stored on disk, the user has the flexibility to update them to meet changing application needs.
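The analog-to-digital round trip described above can be sketched in a few lines. This is an illustrative model only: the sample rate, 8-bit quantization, and the test tone are assumptions standing in for a real recording, not any particular system's format.

```python
import math

def digitize(analog_samples, bits=8):
    """Quantize analog samples in [-1.0, 1.0] to signed integers,
    the digital data a recorded voice-response system would store."""
    levels = 2 ** (bits - 1) - 1
    return [round(s * levels) for s in analog_samples]

def to_analog(digital_samples, bits=8):
    """Convert stored digital data back to analog-range values
    before routing them to a speaker."""
    levels = 2 ** (bits - 1) - 1
    return [d / levels for d in digital_samples]

# A 440 Hz tone sampled at 8 kHz stands in for a recorded phrase.
rate, freq = 8000, 440.0
analog = [math.sin(2 * math.pi * freq * n / rate) for n in range(80)]

stored = digitize(analog)    # what would live on disk or in a chip
played = to_analog(stored)   # reconstructed for the speaker

error = max(abs(a - p) for a, p in zip(analog, played))
print(f"max quantization error: {error:.4f}")
```

The reconstruction is close to the original but not identical; the small quantization error is the price of storing sound as digital data, and using more bits per sample shrinks it.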
Speech synthesis systems, which convert raw data into electronically produced speech, are more popular in the microcomputer environment. All you need to produce speech on a PC are a sound expansion board, speakers (or a headset), and appropriate software. Such software often is packaged with the sound board. To produce speech, sounds resembling the phonemes (from 50 to 60 basic sound units) are combined to make up speech. The existing technology produces synthesized speech with only limited vocal variation and phrasing, however. Even with its limitations, the number of speech synthesizer applications is growing. For example, a visually impaired person can use a speech synthesizer to translate printed words into spoken words. Translation systems offer one of the most interesting applications for speech synthesizers and speech-recognition devices.
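The first step a synthesizer takes is breaking text into a sequence of those basic sound units. A minimal sketch of that lookup stage follows; the phoneme dictionary here is hypothetical and covers only a few words, whereas a real synthesizer draws on its full set of 50 to 60 units plus rules for unknown words.

```python
# Hypothetical phoneme dictionary: entries are illustrative,
# not a real pronunciation lexicon.
PHONEME_DICT = {
    "fasten": ["F", "AE", "S", "AH", "N"],
    "seat": ["S", "IY", "T"],
    "belt": ["B", "EH", "L", "T"],
}

def to_phonemes(phrase):
    """Break a phrase into the phoneme sequence a synthesizer would
    hand to its audio stage (words missing from this toy dictionary
    are simply skipped)."""
    sequence = []
    for word in phrase.lower().split():
        sequence.extend(PHONEME_DICT.get(word, []))
    return sequence

print(to_phonemes("Fasten your seat belt"))
```

In a full system, each phoneme in the resulting sequence would then drive the sound board to produce the corresponding snippet of audio.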
Researchers are making progress toward enabling conversations among people who speak different languages. A prototype system has already demonstrated that three people, each speaking a different language (English, German, and Japanese), can carry on a computer-aided conversation, with each person speaking and listening in his or her native language.