
How Does Machine Translation Work?


February 1st, 2022   |   Updated on February 24th, 2022

The days when translating from one language to another involved the use of a bilingual dictionary are long gone.

In today’s world, if you come across words or phrases in a foreign language, all you have to do is use an online translation tool, and you’ll get a translation almost instantly. Machine translation (MT) has become so widely used that Google Translate translates more than 100 billion words every day.

In addition to personal use, machine translation supports companies and businesses in reaching out to a global audience. They can translate their website content into a multitude of languages, essentially eliminating language barriers. This not only allows them to reach new markets but also gives marginalized groups access to more information.

Machine translation (MT) is a type of automated translation in which a computer translates material without the assistance of a human translator. An algorithm is used to translate text from one language – the source language – to another language – the target language.

This algorithm needs to be trained using data samples, which can be generic or specialized. Google Translate, for example, is a generic machine translation engine: it’s designed for general use, so it isn’t trained on data samples from any specific domain. As more people use the platform, more data is collected, and the algorithm and its output improve.

In contrast, specialized machine translation engines are trained with specialized sets of data and are constantly fine-tuned by developers, so the output is more precise.
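To make this concrete, here is a minimal sketch of what calling a generic, pre-trained engine can look like in code. It assumes the Hugging Face transformers library and the publicly available Helsinki-NLP/opus-mt-en-de model; both are illustrative choices on our part rather than anything the article prescribes.

```python
# A minimal sketch of using a generic, pre-trained translation engine.
# Assumes the Hugging Face "transformers" library and the public
# Helsinki-NLP/opus-mt-en-de model (illustrative choices, not required tools).
from transformers import pipeline

# Load a general-purpose English-to-German model trained on broad, non-domain data.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

result = translator("The weather is nice today.")
print(result[0]["translation_text"])  # e.g. "Das Wetter ist heute schön."
```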

There are many different varieties of MT, but the four most common are as follows:

  • Rule-based machine translation: This type relies on rules developed by programmers in collaboration with language experts, drawing on grammar, dictionaries, and semantic patterns.
  • Statistical machine translation: This type relies on algorithms that analyze previously translated text samples to build a database of translations, organized by the likelihood that a word or phrase in the source language corresponds to a particular word or phrase in the target language (a toy sketch of this idea follows this list).
  • Syntax-based machine translation: This type translates syntactic units rather than words. It’s a subtype of statistical machine translation.
  • Neural machine translation: This type combines statistical machine translation with neural networks. It’s the most complex type but also the most powerful.

You’ll also find hybrid machine translation systems that combine several approaches.
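To give a flavor of the statistical approach from the list above, here is a toy phrase-table sketch in Python. The phrases and probabilities are invented for illustration; a real system would learn them from millions of aligned sentence pairs.

```python
# Toy sketch of the phrase-table idea behind statistical machine translation.
# Each entry maps an English phrase to candidate German phrases with the
# probability that the candidate is the right translation (numbers invented).
phrase_table = {
    "i drink":  [("ich trinke", 0.85), ("ich nehme", 0.10)],
    "too much": [("zu viel", 0.90), ("zu sehr", 0.08)],
    "coffee":   [("kaffee", 0.95), ("der kaffee", 0.04)],
}

def translate(phrases):
    """Greedily pick the most probable target phrase for each source phrase."""
    best = [max(phrase_table[p], key=lambda pair: pair[1])[0] for p in phrases]
    return " ".join(best)

# Assumes the sentence has already been split into known phrases.
print(translate(["i drink", "too much", "coffee"]))  # -> "ich trinke zu viel kaffee"
```

A production system would also weigh how natural the resulting German word order sounds, not just pick the most likely phrase in isolation.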

Neural Machine Translation

As we already mentioned, neural machine translation is a more sophisticated version of statistical machine translation. It uses a vast artificial neural network to predict long sequences of words and phrases. NMT requires less memory than statistical translation because all parts of the model are trained jointly, end to end, to maximize translation quality.

During training, these networks compare their predicted translation to the correct output and automatically adjust their parameters to improve quality. Humans need to supervise this initial learning phase, which involves large datasets.

Powered by deep learning and artificial intelligence, NMT is the most advanced approach to machine translation.

Technically, NMT is any type of machine translation that uses an artificial neural network to predict a sequence of numbers. Let’s say you give it a sentence in English that needs to be translated into German.

The sentence could be “I drink too much coffee.” This is the input sentence. Each word will correspond to a number.

The network takes this sentence, encoded as a sequence of numbers, and finds the corresponding sequence of numbers in the target language. The user then gets “Ich trinke zu viel Kaffee.” as the output, the answer to their query.
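Here is a toy illustration of that encoding step in Python. The vocabularies and ID assignments are made up, and the “translation” in the middle is faked with a hard-coded prediction; in a real system the network itself predicts the target IDs from learned vocabularies with tens of thousands of entries.

```python
# Toy illustration of "each word corresponds to a number" (IDs are invented).
en_vocab = {"i": 1, "drink": 2, "too": 3, "much": 4, "coffee": 5}
de_vocab = {1: "ich", 2: "trinke", 3: "zu", 4: "viel", 5: "kaffee"}

source = "i drink too much coffee"
source_ids = [en_vocab[word] for word in source.split()]
print(source_ids)  # [1, 2, 3, 4, 5] -- the sequence of numbers the network sees

# The network's real job is to predict the target-language sequence of numbers;
# here we fake that prediction so the decoding step can be shown.
predicted_ids = [1, 2, 3, 4, 5]
print(" ".join(de_vocab[i] for i in predicted_ids))  # "ich trinke zu viel kaffee"
```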

But how does the network turn one sequence of numbers into another? The short answer is that it uses a complicated mathematical formula.

The input sentence is turned into a string of numbers, which goes through the formula and comes out as another string of numbers.

This is done millions of times, which means millions of English sentences get turned into strings of numbers and then translated into corresponding strings of numbers in German. With each sentence, the neural network learns, changes slightly, and refines its parameters using back-propagation.
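As a rough sketch of this learn-and-refine cycle, the snippet below uses PyTorch (our choice of framework; the article doesn’t name one) to push a toy sentence pair through a tiny encoder-decoder, compare the prediction to the correct output, and adjust the parameters with back-propagation.

```python
# Rough sketch of the training cycle for a tiny neural translation model.
# PyTorch is an assumed framework; sizes, IDs and data are toy values.
import torch
import torch.nn as nn

VOCAB = 6          # toy vocabulary size (IDs 0-5)
EMB, HID = 8, 16   # tiny embedding and hidden-state sizes

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)                 # words -> vectors of numbers
        self.encoder = nn.GRU(EMB, HID, batch_first=True)     # reads the source sentence
        self.decoder = nn.GRU(EMB, HID, batch_first=True)     # writes the target sentence
        self.out = nn.Linear(HID, VOCAB)                      # hidden state -> word scores

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.embed(src_ids))          # summarize the source
        # (real systems shift the decoder input by one position; omitted for brevity)
        dec_out, _ = self.decoder(self.embed(tgt_ids), state) # generate target positions
        return self.out(dec_out)                              # scores for every target word

model = TinySeq2Seq()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# "I drink too much coffee." -> "Ich trinke zu viel Kaffee.", both as toy ID sequences
src = torch.tensor([[1, 2, 3, 4, 5]])
tgt = torch.tensor([[1, 2, 3, 4, 5]])

for step in range(100):
    logits = model(src, tgt)                                  # the "formula": the forward pass
    loss = loss_fn(logits.view(-1, VOCAB), tgt.view(-1))      # compare with the correct output
    optimizer.zero_grad()
    loss.backward()                                           # back-propagation
    optimizer.step()                                          # refine the parameters slightly
```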

Statistical machine translation also turns phrases into strings of numbers when translating, but it doesn’t capture relationships between words the way neural networks do. If a neural network receives data samples where two words have similar use cases, it will give them numerical values that are closer together to reflect this. For instance, if the data samples show that the words “but” and “however” are used in similar ways, the neural network will assign them close numerical values.
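To picture what “close numerical values” means, here is a toy comparison of word vectors. The numbers are invented for illustration; a trained network would learn them from its data samples.

```python
# Toy word vectors (invented values) showing that words with similar use cases
# end up with similar numbers, which we can measure with cosine similarity.
import numpy as np

embeddings = {
    "but":     np.array([0.91, 0.10, 0.33]),
    "however": np.array([0.88, 0.12, 0.30]),
    "coffee":  np.array([0.05, 0.95, 0.40]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["but"], embeddings["however"]))  # close to 1.0
print(cosine_similarity(embeddings["but"], embeddings["coffee"]))   # noticeably lower
```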

These networks always consider the context of the sentence. They analyze the words and their order in the sentence. As a result, they provide a higher level of fluency.

Short History Of Translation Technology

Some of the techniques we use today in translation technology date back to the 9th-century Arabic cryptographer Al-Kindi, who came up with a method based on frequency analysis. However, translation technology didn’t really take shape until the mid-20th century, when computers became more affordable and widely available.

The 1950s saw the introduction of the world’s first machine translation (MT) system, developed by Georgetown University and IBM. This system was rule-based and used dictionaries and pre-programmed rules. By today’s standards, it was slow and unreliable, but back then, it was revolutionary and paved the way for advancements in MT.

Voice-to-text technology started in the 1970s when DARPA and the US Department of Defense began researching speech recognition technology.

The 1980s saw the introduction of electronic dictionaries and terminological databases. The ALP System, developed at Coventry Lanchester Polytechnic University, was the first to introduce the concepts that would later evolve into modern translation management systems (TMS).

By the beginning of the 1990s, IBM researchers had developed statistical machine translation, and more commercial computer-assisted translation tools entered the market. In the late 1990s, IBM released a new version of its statistical translation engine, which was now phrase-based rather than word-based. It remained the market standard for years until Google’s neural machine translation (NMT) technology entered the race in 2016.

When Google launched Google Translate in 2006, it was still statistical and used predictive algorithms based on the words and sentences it had previously learned. The output often had grammatical errors.