
Largest searchable audio library soon

Last Updated 08 December 2009, 16:40 IST

Professor John Coleman and his team are one of four teams to win the ‘Digging into Data’ competition, set up to encourage imaginative, forward-thinking research using large-scale computing in the humanities.

The project, ‘Mining a Year of Speech’, will create the world’s largest searchable database of spoken English sound recordings, containing a full year’s worth of spoken English.

Professor Coleman said: “In a world where there’s more multimedia than text, audio searching is becoming a vital technology: even Google is moving into it now. We will provide the data so that it is searchable, but we can’t even begin to imagine the full range of questions about language that people will want to use it for.”
The team will work in partnership with Lou Burnard (Oxford University Computing Services), Mark Liberman and colleagues from the University of Pennsylvania, and the British Library Sound Archive.

While the American side of the partnership will work on sound recordings in the Linguistic Data Consortium at Penn, the English team will prepare and release the four-million-word spoken part of the British National Corpus (BNC), the largest set of recordings of “language in the wild” ever made.
Although the BNC was transcribed and published electronically many years ago, the speech recordings that accompany it have not previously been released, apart from a small sample.

It is almost unique among speech archives in that it captured huge quantities of unscripted speech recorded by hundreds of volunteers.
Professor Coleman said: “If the word ‘phonetics’ makes you think of elocution teachers, then think again. For at least a century, the scientific study of speech has sat right on the borderline of the arts and sciences, and our team is no stranger to developing cutting-edge computational technology for the analysis of spoken language.”

If this collection of sound recordings were played end-to-end, it would take over a year of continuous listening to find any particular passage. Professor Coleman and Professor Liberman’s project will use a variant of automatic speech recognition technology to label every word and every vowel and consonant in the recordings, and will build a demonstration search engine so that enquirers can rapidly find examples of the bits of spoken English they are looking for.
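
To make the idea concrete, here is a minimal sketch, in Python rather than the project’s actual toolchain, of how time-aligned labels make audio searchable. The recording IDs, timestamps and helper names are illustrative assumptions, not the project’s data or code: an aligner is assumed to have already attached start and end times to every transcribed word, and an inverted index then maps a query word straight to playable offsets.

```python
# Minimal sketch (assumptions only): a forced aligner has produced
# (word, start, end) timestamps for each recording; we build an
# inverted index so a query word maps directly to playable offsets.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class WordToken:
    recording_id: str   # which sound file the word occurs in
    word: str           # orthographic form from the transcript
    start: float        # onset in seconds, from the aligner
    end: float          # offset in seconds, from the aligner

def build_index(tokens):
    """Map each lower-cased word to every timed occurrence of it."""
    index = defaultdict(list)
    for tok in tokens:
        index[tok.word.lower()].append(tok)
    return index

# Hypothetical aligner output for two short recordings.
tokens = [
    WordToken("bnc_001", "read", 12.4, 12.7),
    WordToken("bnc_001", "my", 12.7, 12.9),
    WordToken("bnc_001", "lips", 12.9, 13.3),
    WordToken("bnc_002", "misled", 4.1, 4.6),
]

index = build_index(tokens)
for hit in index["misled"]:
    print(f"{hit.recording_id}: {hit.start:.1f}-{hit.end:.1f} s")
```

The queries described in the next paragraph rely on exactly this kind of lookup, extended down to the level of individual vowels and consonants.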

For example, someone interested in history might want to ask for the recording where George Bush said “Read my lips”, someone learning English might want to hear how “misled” is pronounced, or an English pronunciation specialist might be interested in how many people pronounce “schism” with an initial ‘s’, and how many with an initial ‘sh’.
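
The ‘schism’ question, for instance, reduces to a simple count once phone-level labels are attached to each occurrence of the word. The sketch below is again a hedged illustration: the phone labels and the flat list of tokens are invented for the example and are not the project’s actual annotation scheme.

```python
# Hedged sketch: counting pronunciation variants of "schism" from
# hypothetical phone-level entries. The labels "S"/"SH" are assumptions.
from collections import Counter

schism_tokens = [
    {"recording_id": "bnc_010", "phones": ["S", "IH", "Z", "AH", "M"]},
    {"recording_id": "bnc_011", "phones": ["SH", "IH", "Z", "AH", "M"]},
    {"recording_id": "bnc_012", "phones": ["S", "IH", "Z", "AH", "M"]},
]

initial_phone_counts = Counter(tok["phones"][0] for tok in schism_tokens)
print(initial_phone_counts)  # e.g. Counter({'S': 2, 'SH': 1})
```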

It takes a trained phonetician around 10 minutes to label every word in one minute of speech, and about 100 minutes to label every vowel and consonant, so it would take over 100 years’ work to label such a vast database by hand. But because all of the material in the database already has a corresponding written transcript, Professor Coleman and Professor Liberman’s teams will use speech recognition technology to index the “year of speech” in under 15 months.
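
The 100-year figure is easy to verify with a back-of-the-envelope calculation (assuming round-the-clock labelling, which only strengthens the point):

```python
# Back-of-the-envelope check of the figures quoted above: 100 minutes of
# expert work per minute of speech, applied to a year of audio, comes to
# a century of non-stop labelling (far more in ordinary working hours).
minutes_of_speech = 365 * 24 * 60      # one year of audio, in minutes
work_per_minute_of_speech = 100        # phonetician's labelling rate
total_work_minutes = minutes_of_speech * work_per_minute_of_speech
print(total_work_minutes / (365 * 24 * 60))  # 100.0 years of continuous work
```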

(Published 08 December 2009, 16:40 IST)
