
Speech Synthesis - Invented by Christian Gottlieb Kratzenstein

Inventor
: Christian Gottlieb Kratzenstein
Year
: 1779
Country
: Germany
Category
: Communication

About Invention

Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech.
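To make the front-end idea concrete, here is a minimal sketch of the first stage of a TTS pipeline: normalizing input text and mapping it to a phonetic transcription. The `LEXICON` entries, phoneme symbols, and function names are purely illustrative assumptions, not taken from any real system; a real synthesizer would feed such phoneme sequences to an acoustic model.

```python
# Toy pronunciation dictionary (illustrative entries only, not a real lexicon)
LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def normalize(text: str) -> list[str]:
    """Lowercase the text and strip punctuation, returning word tokens."""
    return [w.strip(".,!?").lower() for w in text.split()]

def to_phonemes(text: str) -> list[str]:
    """Map each word to its phoneme sequence; spell out unknown words."""
    phonemes = []
    for word in normalize(text):
        phonemes.extend(LEXICON.get(word, list(word.upper())))
    return phonemes

print(to_phonemes("Hello, world!"))
# → ['HH', 'AH', 'L', 'OW', 'W', 'ER', 'L', 'D']
```

Real systems replace the dictionary lookup with large pronunciation lexicons plus letter-to-sound rules for out-of-vocabulary words.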

Long before electronic signal processing was invented, people tried to build machines that could create human speech. Early legends of speaking "Brazen Heads" involved Pope Silvester II (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294).

In 1779, the German-Danish scientist Christian Kratzenstein, working at the Russian Academy of Sciences, built models of the human vocal tract that could produce the five long vowel sounds (in International Phonetic Alphabet notation: [a], [e], [i], [o], and [u]). This was followed by the bellows-operated "acoustic-mechanical speech machine" of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, and in 1857, M. Faber built the "Euphonia". Wheatstone's design was resurrected in 1923 by Paget.
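Kratzenstein's resonators produced vowels by shaping acoustic resonances of tube-like cavities. A very crude digital analogue of that idea is to sum sinusoids at a vowel's formant (resonance) frequencies. The sketch below is illustrative only: the formant values are rough textbook ballpark figures for the first two formants, and the function and table names are assumptions, not any historical method.

```python
import numpy as np

def synthesize_vowel(formants_hz, duration_s=0.3, sample_rate=16000):
    """Sum sinusoids at the given formant frequencies (a crude resonator model)."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    wave = sum(np.sin(2 * np.pi * f * t) for f in formants_hz)
    return wave / np.max(np.abs(wave))  # normalize amplitude to [-1, 1]

# Rough first and second formant frequencies (Hz) for the five long vowels
VOWELS = {"a": (730, 1090), "e": (530, 1840), "i": (270, 2290),
          "o": (570, 840), "u": (300, 870)}

wave = synthesize_vowel(VOWELS["a"])
```

The resulting buffer could be written to a WAV file for listening; real formant synthesizers use resonant filters driven by a glottal source rather than bare sinusoids.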

In the 1930s, Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tone and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice synthesizer called The Voder (Voice Demonstrator), which he exhibited at the 1939 New York World's Fair.
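The vocoder's analysis step includes estimating the fundamental tone (pitch) of the speech signal. One classic way to do this is autocorrelation peak-picking; the sketch below is an illustrative NumPy implementation of that general idea, not Dudley's actual circuit design, and the function name and search range are assumptions.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50, fmax=500):
    """Estimate the fundamental frequency via the autocorrelation peak."""
    sig = signal - np.mean(signal)                       # remove DC offset
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # lags 0..N-1
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)  # plausible lags
    lag = lo + int(np.argmax(corr[lo:hi]))               # strongest periodicity
    return sample_rate / lag

# Sanity check on a pure 220 Hz test tone
sr = 16000
t = np.arange(4096) / sr
tone = np.sin(2 * np.pi * 220 * t)
f0 = estimate_f0(tone, sr)
```

Real vocoders analyze short overlapping frames and also track the spectral envelope (the resonances), not just the pitch.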

The Pattern Playback was built by Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories in the late 1940s and completed in 1950. There were several different versions of this hardware device, but only one currently survives. The machine converted pictures of the acoustic patterns of speech, in the form of a spectrogram, back into sound. Using this device, Alvin Liberman and colleagues were able to discover acoustic cues for the perception of phonetic segments (consonants and vowels).

The dominant systems of the 1980s and 1990s were the MITalk system, based largely on the work of Dennis Klatt at MIT, and the Bell Labs system; the latter was one of the first multilingual, language-independent systems, making extensive use of natural language processing methods.

Early electronic speech synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but output from contemporary speech synthesis systems is still clearly distinguishable from actual human speech.

As improvements in cost and performance make speech synthesizers cheaper and more widely accessible, more people will benefit from text-to-speech programs.

