One of the problems faced when building a Tamil spell checker, albeit a somewhat marginal one, can be phrased as follows:
Given a sequence of Tamil letters, how do you decide whether they form a true Tamil word (even one missing from the dictionary) or a transliterated English word?
e.g. Between the words ‘உகந்த’ and ‘கம்புயுடர்’, can you decide which is a true Tamil word and which is transliterated?
This is fairly simple with the help of a neural network; given sufficient “features” and “training data” we can train such a network easily. With the current interest in this area, tools are available to make this task quite easy – Keras, PyTorch or Tensorflow for the network, with Pandas for data handling.
Generally, the only thing you need to know about Artificial Intelligence (AI) here is that machines can be trained to do tasks based on two distinct learning processes: supervised and unsupervised learning.
Read more at Wikipedia – the current “problem” is a supervised classification task.
Naturally, for the task of classifying a word, we may take the following features:
- Word length
- Are all characters unique?
- Number of repeated characters
- Vowel count and consonant count
  - In Tamil this information is captured by the (Kuril, Nedil, Ayudham) and (Vallinam, Mellinam, Idayinam) letter categories
- Is the word a palindrome?
- As a next step, we can add bigram data as features
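The features above can be sketched as a small Python function. This is an illustrative sketch, not the Open-Tamil implementation: in practice a Tamil string would first be split into letters with Open-Tamil’s letter utilities, and the full Kuril/Nedil tables live in the library; here the word is assumed to be pre-split and only the vowel sets are hardcoded.

```python
def word_features(letters):
    """Compute simple per-word features for the classifier.

    `letters` is the word given as a list of letters (for Tamil text,
    a letter can span several Unicode codepoints, so the split must be
    done with a Tamil-aware utility; we assume it is already done).
    The vowel sets below are the Tamil short (kuril) and long (nedil)
    vowels, hardcoded here for illustration.
    """
    KURIL = set(u"அஇஉஎஒ")        # short vowels
    NEDIL = set(u"ஆஈஊஏஓஐஔ")   # long vowels
    n = len(letters)
    uniq = len(set(letters))
    return {
        'length': n,
        'all_unique': uniq == n,
        'num_repeated': n - uniq,              # letters appearing more than once
        'kuril_count': sum(1 for l in letters if l in KURIL),
        'nedil_count': sum(1 for l in letters if l in NEDIL),
        'is_palindrome': letters == letters[::-1],
    }
```

Each word then maps to a fixed-length numeric vector, which is exactly the input shape a feed-forward network expects.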
Basically this task can be achieved with new code checked into Open-Tamil 0.7 (dev version), called ‘tamil.utf8.classify_letter’.
To make the data sets we can use the Tamil VU dictionary as a list of valid Tamil words (label 1); next, we can use a list of English words transliterated into Tamil as a list of invalid Tamil words (label 0).
Using this 1/0-labeled data, we may use part of the combined data for training the neural network with the gradient descent algorithm, or any other method for building a supervised learning model.
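The labeling and train/test split can be sketched as follows. This is a minimal illustration; the actual word lists come from the Tamil VU dictionary and the transliterated files, and `make_dataset` and `train_fraction` are hypothetical names introduced here.

```python
import random

def make_dataset(tamil_words, transliterated_words, train_fraction=0.8):
    """Label real Tamil words 1 and transliterated English words 0,
    shuffle the combined list, and split it into train/test portions."""
    data = [(w, 1) for w in tamil_words] + \
           [(w, 0) for w in transliterated_words]
    random.shuffle(data)
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]
```

The held-out portion never touches the training loop, so it gives an honest estimate of how the classifier generalizes to unseen words.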
Building Transliterated Data
Using the Python code below and the data file from the open-tamil repository, you can build and run the code,
```python
# -*- coding: utf-8 -*-
import codecs
# transliteration tables and algorithm from the open-tamil package
from transliterate import algorithm, azhagi, combinational, jaffna

def jaffna_transliterate(eng_string):
    tamil_tx = algorithm.Iterative.transliterate(jaffna.Transliteration.table, eng_string)
    return tamil_tx

def azhagi_transliterate(eng_string):
    tamil_tx = algorithm.Iterative.transliterate(azhagi.Transliteration.table, eng_string)
    return tamil_tx

def combinational_transliterate(eng_string):
    tamil_tx = algorithm.Iterative.transliterate(combinational.Transliteration.table, eng_string)
    return tamil_tx

# 3 forms of Tamil transliteration for each English word
jfile = codecs.open('english_dictionary_words.jaffna', 'w', 'utf-8')
cfile = codecs.open('english_dictionary_words.combinational', 'w', 'utf-8')
afile = codecs.open('english_dictionary_words.azhagi', 'w', 'utf-8')

with codecs.open('english_dictionary_words.txt', 'r') as engf:
    for idx, w in enumerate(engf.readlines()):
        w = w.strip()
        if len(w) < 1:
            continue
        print(idx)
        jfile.write(u"%s\n" % jaffna_transliterate(w))
        cfile.write(u"%s\n" % combinational_transliterate(w))
        afile.write(u"%s\n" % azhagi_transliterate(w))

jfile.close()
cfile.close()
afile.close()
```
to get the following data files (the left pane shows the ‘Jaffna’ transliteration standard, while the right pane shows the source English word list); the full gist is on GitHub at this link.
In the next blog post I will share the details of training the neural network and building this classifier. Stay tuned!