Citizens and residents of Toms River, NJ, USA, have suffered a higher incidence of cancer directly attributed to untreated industrial effluents dumped into their groundwater.
The journalist Dan Fagin wrote a book, Toms River, laying out the details of the problem and its aftermath – personal for an unfortunate few, and legal in the lawsuits brought to compensate the people for their suffering – all stemming from the negligence of the industrial overlords. You can read more about his book here. It won the 2014 Pulitzer Prize for General Nonfiction.
Close to the town of Karaikudi (Ramanathapuram/Sivagangai district), we know stories of neighboring villages polluted by a certain public-sector petro-products company. Despite court orders, it continues to operate, with little respite for the citizens.
I see what is happening in Thoothukudi right now as an effort to raise awareness of a higher-than-random incidence of cancer, most likely attributable to past unsafe practices at Sterlite. People are coming together to literally save their livelihoods; they need assurances, updated practices, and concrete buy-in before questionable operations can be allowed to continue.
Our computers – interconnects, microprocessor chips, ICs in appliances like microwave ovens and toasters, smartphones, telephone lines, and cable TV – all need copper. We have to find a way to extract it safely, dispose of the effluents by internationally agreed safe processes, and continuously monitor the neighboring areas for ground-water contamination, air pollution, and other environmental hazards. Until then it should be a no-go.
The people of Tamil Nadu deserve to have their voices heard. Tamils have written on copper plates – செப்பேடு – and now we don't want our fates written by copper. Surely this can't be too much to ask!
Sometimes it's good to look around and learn from what's happening in other realms of Indian language processing. In my limited experience, language efforts in Indian-language computing revolve around the Dravidian languages, Bengali, Marathi, or Hindi. Sometimes computational linguistics must not end up like riding a horse inside a small pot (குண்டு சட்டியில் குதிரை ஓட்டுவது, i.e. working within narrow confines) – whether within a single language or across languages.
Some good open-source project efforts in Hindi language processing are reviewed in this blog. [There are projects like an open-tamil-style API for Hindi – e.g. a get_letters-like function provided by the tokenizer project here (with the caveat that it is only a small function compared to the expansive open-tamil) – but we talk about the ML/A.I.-focused projects here.]
Hindi2vec is a Hindi word embedding along the lines of the word2vec project. The idea is to associate similar words (e.g. 'பல்', 'நாக்கு', 'வாய்') with vectors that lie within a neighborhood of each other, using concepts from linear algebra – vector spaces and matrices. So when you search, mistype, or want to classify, there is a neighborhood of known words close to the potentially unknown word input by the user; identifying such a neighborhood can help decision making and drive various learning, classification, or dialogue systems.
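The neighborhood idea can be sketched with toy vectors and cosine similarity. The 3-dimensional vectors below are hand-picked purely for illustration; a real word2vec-style model would learn hundreds of dimensions from a large Tamil corpus:

```python
import math

# Toy embeddings (made up for illustration, not learned from data).
embeddings = {
    'பல்':    [0.90, 0.10, 0.00],   # tooth
    'நாக்கு': [0.80, 0.20, 0.10],   # tongue
    'வாய்':   [0.85, 0.15, 0.05],   # mouth
    'மரம்':   [0.00, 0.90, 0.40],   # tree (unrelated)
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(word, k=2):
    """Return the k words closest to `word` by cosine similarity."""
    others = [(w, cosine(embeddings[word], vec))
              for w, vec in embeddings.items() if w != word]
    return [w for w, _ in sorted(others, key=lambda t: -t[1])[:k]]

print(nearest('பல்'))   # body-part words rank above 'மரம்'
```

The same nearest-neighbor query against a trained model is what powers "did you mean" style suggestions and feature inputs to classifiers.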
Hindi Transliteration Model project and the DeepTrans project – these are really cool: they developed a reference data set of English-to-Hindi pairs and trained a model to transliterate user input from English to Hindi.
We can do this in Tamil, as we have many transliteration schemes set out in open-tamil, but even the same user will not follow a scheme strictly, nor do different users follow the same scheme – in all these cases a machine learning model may be more robust by virtue of learning the underlying rules. A very interesting project, and fairly simple to implement for Tamil with the open-tamil transliterate module and scikit-learn or other frameworks, with a high (~95%) correct prediction rate reported.
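For contrast with the learned approach, a rule-based scheme boils down to greedy longest-match lookup. This is a minimal sketch; the mapping table here is a tiny illustrative fragment I made up, NOT open-tamil's real tables (see its transliterate module for full scheme support):

```python
# Illustrative Romanization-to-Tamil fragment (not a real scheme table).
TABLE = {'tha': 'த', 'mi': 'மி', 'zh': 'ழ்', 'a': 'அ'}

def transliterate(text, table=TABLE):
    """Greedy longest-match transliteration of `text` using `table`."""
    out, i = [], 0
    maxlen = max(len(k) for k in table)
    while i < len(text):
        for n in range(maxlen, 0, -1):       # prefer the longest match
            chunk = text[i:i + n]
            if chunk in table:
                out.append(table[chunk])
                i += len(chunk)
                break
        else:
            out.append(text[i])              # pass unknown input through
            i += 1
    return ''.join(out)

print(transliterate('thamizh'))              # -> தமிழ்
```

The brittleness is visible: 'dhamizh' or 'tamil' would fail under this fixed table, which is exactly the user inconsistency a learned model can absorb.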
A Hindi–English parallel dictionary, 8 MB in size (probably 500,000 words or so, I imagine), is here – this could be a good jumping-off point for translation projects if such a resource existed for Tamil. E.g., can we have a parallel English–Tamil dictionary for the simple TVU word list/dictionary?
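Such a parallel word list could be as simple as a two-column TSV file; the sketch below shows how it might be loaded for lookup. The file shape and sample entries are my assumptions, not the format of the actual Hindi resource:

```python
import csv
import io

# Hypothetical two-column TSV: English <TAB> Tamil (sample entries only).
SAMPLE = """\
tooth\tபல்
tongue\tநாக்கு
mouth\tவாய்
"""

def load_parallel(fileobj):
    """Load an English -> Tamil lookup table from a TSV stream."""
    return {en.strip().lower(): ta.strip()
            for en, ta in csv.reader(fileobj, delimiter='\t')}

en2ta = load_parallel(io.StringIO(SAMPLE))
print(en2ta['tooth'])    # -> பல்
```

In practice you would pass `open('tvu_wordlist.tsv', encoding='utf-8')` (a hypothetical filename) instead of the in-memory sample.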
The Hindi Sentiment Analysis project does a ternary [good, bad, neutral] classification of text. They do this using a CDAC model, which makes me curious: maybe CDAC India (Pune) has a Tamil POS-tagger too? Probably they do.
Tamil POS-taggers are widely reported: AU-KBC Chennai has one, probably the best for Tamil, and Dr. Vasu Renganathan has another. Neither is currently available for open-source use, but their techniques are openly shared via their papers at INFITT conferences.
The Sorkandu project could also be revived to build an open-source POS-tagger.
Emotion Recognition in Hindi Speech project – this work from IIT KGP students builds a reference audio data set with known emotion labels, trains a machine learning model on it, and achieves audio emotion recognition from speech about five times better than a random coin-toss guess.
We probably don't have any work in this direction in the open, but interestingly NIST in the USA sponsored a Tamil Key Word Search (KWS) challenge, reports of which were published by a Singapore team in academic journals. More interestingly, the KWS challenge released 2 hrs of speech data with tagged information. In the USA, government-released data usually qualifies for the public domain – e.g. pictures from NASA – so maybe there is a way to get this data. Only God knows!
While we know that Google ASR, YouTube's Tamil closed-captioning of English videos, foreign-language-to-Tamil translation, and transliteration inputs all use perhaps the most advanced models in TensorFlow on cloud hardware, none of this technology is directly usable for free – maybe for a price via the Google Cloud API offerings. We probably don't know all the details of how they achieved these magical software applications for Tamil; anyone's guess is as good as mine: likely the massive data sets they hold from our Tamil news groups, emails, websites, and user input, plus TensorFlow A.I./ML magic. At least we have to be grateful for Google-aandavar, as some friends commented on the freetamilcomputing group. 🙂
Surprisingly, to my knowledge, there are no planned, ongoing, or completed open-source projects like these in Tamil. Maybe this is another avenue for growth – and here the Hindi projects (at least in the open-source domain) seem to have forged ahead!
My Tamil teacher asked, "Don't you know that சவம் means the corpse left after death?" Only then did I realize that in my haste I had written 'அன்பே சிவம்' ("love is Shiva") as 'அன்பே சவம்' ("love is a corpse"). To Anbazhagan sir it was simply a sweet opportunity to teach this lesson; at the time I did not know the mischief came from an edit distance of one (சிவம் -> சவம்). Learning under him was a good experience in my life.
Today’s blog topic is spell-checking.
It is well known that Bayesian methods can be used to correct spelling errors (see the book chapter by Prof. Daniel Jurafsky & James H. Martin); the above example ('அன்பே சிவம்') with a real-word error (i.e. the mistake is itself a valid dictionary word, so the error is semantic) can be easily corrected if we have word-level bi-gram and uni-gram data. This can easily be collected from Tamil Wikipedia data dumps or Project Madurai. [Hint: project tip for engineering/math/CS students.]
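A minimal sketch of the bigram idea, in the spirit of a Norvig-style corrector: pick the candidate most probable after the preceding word. The counts below are invented for illustration; in practice they would be harvested from Tamil Wikipedia dumps or Project Madurai, and the candidate set would come from edit-distance-1 neighbors in a lexicon:

```python
from collections import Counter

# Toy counts (made up for illustration only).
unigrams = Counter({'அன்பே': 55, 'சிவம்': 120, 'சவம்': 15})
bigrams = Counter({('அன்பே', 'சிவம்'): 40})

def p_next(prev, word, alpha=1.0):
    """Add-alpha smoothed estimate of P(word | prev)."""
    vocab = len(unigrams)
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)

def correct(prev, word, candidates):
    """Replace a real-word error with the candidate most likely
    to follow `prev` (the observed word itself stays in the running)."""
    return max(candidates | {word}, key=lambda c: p_next(prev, c))

# The corpus has seen 'அன்பே சிவம்' often and 'அன்பே சவம்' never.
print(correct('அன்பே', 'சவம்', {'சிவம்'}))   # -> சிவம்
```

Because 'சவம்' is a valid word, a dictionary-only checker would pass it; only the bigram context exposes the error.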
While letter-level uni-, bi-, and tri-gram data exist for Tamil in the open-tamil project and as part of my work on Tamil-TTS here, this data is not yet publicly available. Once it is made available in the public domain and integrated, the various Tamil spell checkers – Rajaraman's Vaani, Dr. Vasu Renganathan's, and our solthiruthi – can make use of it. Potentially, the hunspell and aspell tools can also be updated at their suggestion-level modules to provide appropriate suggestions.
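Collecting letter-level n-grams first requires splitting text into Tamil letters (base consonant plus vowel sign or pulli), not Unicode code points. The sketch below is a naive splitter that just attaches combining marks to the preceding character; open-tamil's tamil.utf8.get_letters does this properly and should be preferred:

```python
import unicodedata
from collections import Counter

def letters(text):
    """Naive Tamil letter splitter: glue combining marks (vowel signs,
    pulli) onto the preceding base character."""
    out = []
    for ch in text:
        if out and unicodedata.category(ch).startswith('M'):
            out[-1] += ch          # combining mark joins previous letter
        else:
            out.append(ch)
    return out

def letter_bigrams(text):
    """Count adjacent letter pairs, the raw material for n-gram data."""
    ls = letters(text)
    return Counter(zip(ls, ls[1:]))

print(letters('தமிழ்'))            # -> ['த', 'மி', 'ழ்']
```

Run over a large corpus, these counts become the letter bi-gram tables the suggestion modules need.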
Future generations will never know of ‘அன்பே சவம்’. 🙂
Adding formant synthesis will improve this simple TTS. Are you an ECE/EEE/CS/Mathematics or Engineering undergraduate student interested in improving this code? Do you want to learn more about filters? Or do you have another person in mind?
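As a taste of what formant synthesis involves, a single formant can be modeled as a two-pole digital resonator. This is a textbook sketch (pure Python, no audio I/O), not the TTS code itself; the 500 Hz / 80 Hz values are typical first-formant numbers chosen for illustration:

```python
import math

def formant_resonator(x, freq, bandwidth, fs=16000):
    """Two-pole resonator: y[n] = x[n] + a1*y[n-1] + a2*y[n-2],
    with the pole placed at the formant frequency `freq` (Hz) and
    its radius set by the -3 dB `bandwidth` (Hz)."""
    r = math.exp(-math.pi * bandwidth / fs)   # pole radius (< 1: stable)
    theta = 2 * math.pi * freq / fs           # pole angle
    a1, a2 = 2 * r * math.cos(theta), -r * r
    y, y1, y2 = [], 0.0, 0.0
    for xn in x:
        yn = xn + a1 * y1 + a2 * y2
        y.append(yn)
        y1, y2 = yn, y1
    return y

# The impulse response rings near 500 Hz and decays exponentially.
impulse = [1.0] + [0.0] * 199
resp = formant_resonator(impulse, freq=500, bandwidth=80)
```

A full formant synthesizer cascades several such resonators (F1, F2, F3, ...) over a glottal-pulse source; this single section is the building block.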