Significant improvements to its AI speech-to-text transcription service mean Speechmatics can now use a single language model to recognise all English accents and dialects


Speech-to-text AI company Speechmatics has improved the accuracy of its transcription service (used by Red Bee Media for its automated subtitling service) by up to 16%, as part of its ‘Next Generation’ languages update.

The update combines new machine learning algorithms with refinements to Speechmatics’ existing technology. The company’s own testing showed an increase in accuracy of up to 16% for its global English transcriptions, as well as gains across a range of other languages.

Following the series of updates, the AI service now offers a single English language model that supports all major accents and dialects, removing the need to use multiple English language packs to cover different dialects.

David Pye, head of speech at Speechmatics, said: “Our innovation in machine learning means we can make big jumps in advancing speech recognition technology, including dialect-agnostic speech recognition. We’re doing away with specific dialect language models for English as our modelling is now so advanced we no longer need them.”

Hewson Maxwell, head of technology, access services at Red Bee Media, added: “We’ve been really thrilled to have seen an average of 22% improvement in accuracy using the new Speechmatics Next Generation models across all our core languages, on top of the already excellent base. These improvements will allow us to produce more subtitles for less, and we are fast-tracking the new models into production as fast as we can.”