
Tamil Entry via Keypad

One problem that does not seem to draw much interest from the various actors in the digital Tamil community is Tamil input via the standard 4×3 keypad.

A standard 4×3 keypad showing digits and letters, including Japanese key entry, on a Vodafone device. Image credit: Wikipedia.

Problem Statement: Given a 4×3 matrix of keys in a phone keypad, how can we input the 13 + 18 + 12×18 = 247 letters of the Tamil alphabet (12 vowels plus the aytham, 18 consonants, and 12×18 consonant-vowel combinations) using this device?

Alternate: Clearly, 247 letters have an information content of \log_2 247 \approx 7.95 bits, or roughly 8 bits. So we can simply punch in a fixed 3-key code for each letter and we are done: provide the user a table of the 247 letters and their 3-digit key codes, and we have solved this problem in one way.
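
As a rough sketch of this direct lookup scheme (plain Python; letter indices stand in for the actual 247 letters):

    import math

    N_LETTERS = 247                      # 13 + 18 + 12*18 letters of the Tamil alphabet
    print(math.log2(N_LETTERS))          # ~7.95 bits, i.e. roughly 8 bits per letter

    # Assign every letter a fixed 3-digit code 000..246; three key presses
    # then select one letter unambiguously.
    codes = {index: "%03d" % index for index in range(N_LETTERS)}
    print(codes[0], codes[246])          # '000' and '246'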

This is not very satisfying, however; it seems to put the user to more work. We would instead like a Tamil entry method similar to the English one (where roughly 3 letters are grouped per telephone key), with the processor in the phone or the mainframe decoding any ambiguity of the keypad mapping into meaningful words or phrases.
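
For instance, here is a minimal sketch of such ambiguous decoding in Python, with a placeholder key grouping and word list (Latin letters are used only for illustration; a real design would carry groups of Tamil letters):

    from collections import defaultdict

    # Placeholder grouping: each key carries a few letters.
    KEYMAP = {'2': 'abc', '3': 'def', '4': 'ghi'}
    LETTER_TO_KEY = {ch: key for key, letters in KEYMAP.items() for ch in letters}

    def to_keys(word):
        """Key sequence the user would press for this word."""
        return ''.join(LETTER_TO_KEY[ch] for ch in word)

    def build_index(dictionary):
        """Group dictionary words by their (ambiguous) key sequence."""
        index = defaultdict(list)
        for word in dictionary:
            index[to_keys(word)].append(word)
        return index

    index = build_index(['bad', 'ace', 'hid'])
    print(index[to_keys('bad')])   # ['bad', 'ace'] share the key sequence '223'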

Ideas: We can come up with various proposals; being lazy, and the official jester of the Tamil computing community, I will try to make a simple combinatorial analysis of this problem without giving a specific solution.

Details: We can consider the factorization 247 = 19 x 13, which lays out the full Tamil alphabet as a matrix, and count the ways of partitioning this matrix onto the smaller keypad matrix. Just as the 26 letters of the Roman alphabet fit easily onto the 4 x 3 matrix at an average of a little under 3 letters per key, we can adopt a similar convention.

There are many ways to fit this large 19 x 13 matrix into a 4 x 3 matrix. Using simple combinatorial analysis we may count {19 \choose 4} ways of grouping the 19 letters along the rows onto 4 keys (ignoring the assignment of letter groups to keys – 4! ways). Similarly, we group along the columns in {13 \choose 3} ways (again ignoring the 3! column permutations). In all we have a total of {19 \choose 4}\times{13 \choose 3} = 1,108,536 key grouping combinations.

Clearly we have an alternate possibility of grouping the 19 x 13 matrix as a transposed matrix – i.e. grouping the dimension of 13 Tamil letters onto the larger keypad dimension of 4 keys, and assigning the 19 letters along the smaller keypad dimension of 3. This alternative gives us {13 \choose 4}\times{19 \choose 3} = 692,835 combinations.

Together we have a total of 1,108,536 + 692,835 = 1,801,371. That's roughly 1.8 million possibilities! Check them yourself by running this code:
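
In Python, a minimal sketch of the count using the standard library's math.comb:

    from math import comb, factorial

    # 19 letters grouped along the 4-key dimension, 13 along the 3-key dimension.
    direct = comb(19, 4) * comb(13, 3)          # 1,108,536
    # The transposed alternative: 13 letters along 4 keys, 19 along 3 keys.
    transposed = comb(13, 4) * comb(19, 3)      # 692,835

    total = direct + transposed                 # 1,801,371
    grand_total = total * factorial(4) * factorial(3)   # 259,397,424 (see below)

    print(direct, transposed, total, grand_total)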

The real grand total of possible designs includes the key permutations of each grouping we have already counted, adding a factor of 4! \times 3! = 144 to the previous 1.8 million; so we get a grand total of 1,801,371 × 144 = 259,397,424, or about 259 million keypad mapping designs in all!

Conclusion: How are we going to find a suitable keypad mapping? Well, we may need more heuristics and more cleverness to find the keypad mappings [a few definitely exist among these 259 million possibilities] which maximize a utility function.

That leads us to the next problem: what is the utility of a mapping of Tamil letters onto the keypad? Well – apparently we don't know, so it doesn't exist! This also ties into the philosophical question of what the purpose of any software is, if not to support its use.

Deep Learning (ஆழக்கற்றல்) – an e-book

Michael Nielsen, a well-known computer scientist and quantum computing expert [author of the famous 'Quantum Computation and Quantum Information' with Isaac Chuang], has written a nice, expository book on deep learning.

Front cover of "Quantum Computation and Quantum Information" by Michael Nielsen and Isaac Chuang. (C) 2000 Cambridge University Press. Google Books URL here.

Nielsen's new book, Neural Networks and Deep Learning (here), takes a more modern approach to (web) publishing: the whole book is released under a Creative Commons Non-Commercial Share-Alike [NC-SA] license.

In this book Mr. Nielsen breaks down how a computer can accomplish many tasks from data [not by a step-by-step program, but by learning from the data], along with the artificial neural networks that underlie this and their theory. Savor it like halva.

How can handwritten digits be recognized using artificial neural networks? Mr. Nielsen shows the way in his book Neural Networks and Deep Learning.

Engineers, attention! This book could be translated into Tamil – will you take up the effort?

Selva (செல்வா)

In the future a Tamil artificial intelligence will be created. It will instantly – 'instant-aa' – find or coin Tamil equivalents for English words and tell us. Yes, the machine is going to enter our discourse one way or another; let it help us too!

It should work in accord with Tamil traditions and the conventions of the language, avoiding Sanskrit words as far as possible and, going one step further, without mixing in English [can English be removed entirely? I don't know; it is only a computer – if we set the target, why shouldn't it be possible?].


R2-D2 and C-3PO, the robot characters of the Star Wars films. (c) Lucas Films, Inc. and the Star Wars franchise

If such an artificial intelligence is created, we will affectionately name it 'Selva'. Perhaps, like the robots C-3PO and R2-D2, it may turn out to be an Oracle with a knowledge of Tamil not found anywhere on Earth. As Dr. Kalam said: wake up so that dreams become reality; shake off your sleep.

 

India A.I. report – highlights

As written earlier, the head of the committee that released the India artificial intelligence report is Prof. Kamakoti of IIT Madras, Chennai. See the important points from this report in the images below:

Figure 1: India AI report – on persons with disabilities

Figure 2: India AI report – on Indian languages

காதல் -> தவம் ?

How can the word "காதல்" (love) be changed into "தவம்" (penance), with only a single letter change at each step?

காதல்
கானல்
காறல்
கால்
காழ்
சீழ்
சீவ
சீவம்
சைவம்
தவம்

How did we find this ladder? How can it be computerized?
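
One way to computerize the search is a breadth-first search over a word list, treating two words as neighbours when their letter sequences differ by a single substitution, insertion or deletion. A minimal Python sketch, assuming open-tamil's utf8.get_letters for splitting a word into Tamil letters and a Tamil word list named lexicon:

    from collections import deque
    from tamil import utf8    # open-tamil; utf8.get_letters splits a word into Tamil letters

    def one_letter_apart(a, b):
        """True if letter sequences a, b differ by one substitution, insertion or deletion."""
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):
            return sum(x != y for x, y in zip(a, b)) == 1
        if len(a) > len(b):
            a, b = b, a
        return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

    def word_ladder(start, goal, lexicon):
        """Shortest chain of words from start to goal, changing one letter per step."""
        letters = {w: utf8.get_letters(w) for w in set(lexicon) | {start, goal}}
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for word in letters:
                if word not in seen and one_letter_apart(letters[path[-1]], letters[word]):
                    seen.add(word)
                    queue.append(path + [word])
        return None

    # print(word_ladder("காதல்", "தவம்", lexicon))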

More on this soon.

GPUs powering the AI revolution

Ganapathy Raman Kasi*, Muthiah Annamalai+

[This article originally appeared in the magazine of the 2017 Tamil Internet Conference, UT-SC, Toronto, Canada]

Introduction

The current hot trend in AI revolution is “deep learning” – which is a fancy way of talking about multi-layered convolutional neural networks; this field of study has heralded a new age in computing extending human capabilities by automation and intelligent machines [1].

These neural networks aren't the same as the networks of neurons in your brain! We are talking about artificial neural networks which reside in computers and try to mimic the biological neural network, with its synapses (connections) between axons and dendrites and their activation potentials. These thinking machines have their beginnings in post-WW-II research at MIT, in the perceptron models studied by Seymour Papert (co-author of "Perceptrons") and in Norbert Wiener's "Cybernetics".
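
To make the idea concrete, here is a toy single artificial neuron (a perceptron) in Python learning the logical OR function; the learning rate and number of passes are arbitrary choices for illustration:

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 1])           # target: logical OR of the two inputs

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(20):                  # a few passes over the data
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi   # perceptron update rule
            b += lr * (yi - pred)

    print([1 if xi @ w + b > 0 else 0 for xi in X])   # [0, 1, 1, 1]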

But do we know why there is sudden interest in these biologically inspired computer models? It is due to GPUs, which have accelerated all the complex computations associated with neural networks and made them practical at such a large scale. They allow these networks to operate on gigabytes (or even terabytes) of data and have significantly reduced computation times from months to days, days to hours, or hours to minutes, usually by an order of magnitude; this was not possible in an earlier generation of computing. Before we jump into the details, let us understand why we need deep learning and convolutional neural networks in the first place.

Scientific Innovations

Science and engineering have traditionally advanced by our ability to understand phenomena in the natural world and describe them mathematically, since the times of Leonardo da Vinci, Nicolaus Copernicus, Galileo Galilei, Tycho Brahe, Johannes Kepler and Isaac Newton. However, gaining models through experimentation and scientific breakthroughs piecemeal for each problem at hand is a slow process. Outside of physics and mathematics the scientific method is largely driven by an empirical approach.

It is in such pursuits of building models of unknown processes, where the observational data far exceed our human intelligence to divine an analytical model, that the advent of deep learning and GPU-based multi-layered neural networks provides an ad-hoc computable model. System identification for particular classification tasks, image recognition, and speech recognition, up to the modern miracle of self-driving cars, are all enabled by deep learning technology. All this came about due to the seminal work of many innovators, culminating in the efficient training of deep convolutional neural networks by Prof. Geoff Hinton and co-workers, accelerated in hardware via GPUs.

An original pioneer in the field of AI from before the AI winter, Prof. Geoff Hinton, with co-workers [2], recently showed deep learning models that beat the status-quo benchmarks on classification and prediction tasks over speech, text and image datasets (Reuters, TIMIT, MNIST, CIFAR and ImageNet), setting off renewed interest in the field of AI from academia and from industry giants such as Google, Microsoft, Baidu and Facebook [3].

What is a GPU ?

GPU stands for Graphics Processing Unit [4]. GPUs were originally designed for the graphics rendering used in video games in the 1990s. They have a large number of parallel cores which are very efficient at simple mathematical computations like matrix multiplications; these computations are the fundamental basis of machine learning methods such as deep learning. While the improvement in CPUs has slowed in recent years as Moore's law has hit a bottleneck, GPU performance has continued to increase unabated, showing tremendous improvements over the generations.
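
To see why matrix multiplication is the core operation, note that a single fully-connected neural-network layer is essentially one matrix product followed by a nonlinearity. A NumPy sketch (the sizes are arbitrary):

    import numpy as np

    batch, n_in, n_out = 64, 1024, 512
    x = np.random.randn(batch, n_in).astype(np.float32)   # a batch of inputs
    W = np.random.randn(n_in, n_out).astype(np.float32)   # layer weights
    b = np.zeros(n_out, dtype=np.float32)

    y = np.maximum(x @ W + b, 0.0)   # the matrix product dominates the cost
    print(y.shape)                   # (64, 512); GPUs accelerate exactly this product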

Figure. 1 (left): Deep learning training task times as a function of various GPU processors from NVidia. Figure. 2 (right): AlexNet training throughput for 20 iterations on various CPU/GPU processing platforms.

Such GPUs, originally invented for shading algorithms, are now applied to training large machine learning models using OpenCL- or CUDA-like frameworks (variants of the C language with constructs describing parallel execution via threading) from the vendors.

The pioneering hardware vendors include Nvidia with its GPU series like GeForce and Tesla, and AMD with its Radeon and GP-GPU lines; Google has entered this race with its TPU (Tensor Processing Unit), and there are some offerings from Intel for ML training applications. Nvidia and AMD are the main players in the GPU space, with Nvidia laying special emphasis on parallel computing and deep learning over the years. Nvidia just announced the new Volta-generation GPU V100, which is about 2.5x faster than the previous-generation Pascal GP100 chip announced less than 2 years ago [5]. Compared to CPUs, however, GPUs are more than 50x faster for deep learning. Performance of GPUs as a function of various GPU families is shown in Figure. 1, and for the AlexNet benchmark in Figure. 2.

Hardware Innovation

If Harvard-architecture and RISC-architecture CPUs have been the workhorses of the personal computer revolution, then the advent of high-framerate video gaming pushed graphics rendering from CPU + video card, to CPU + GPU, and on to CPU + GPU + GP-GPU (general-purpose GPU); some of this overview is shown in Figures 3a and 3b.

Figure. 3(a,b): Evolution of GPU performance from video graphics cards and rendering from CPU; courtesy PC Magazine [4]; Figure. 3(c): NVIDIA Tesla GPU applications in scientific research.

Limitations

GPUs are suitable for large numerical algorithms where data have to be moved through a computational pipeline, often in parallel; such SIMD problems, like the genome sequencing shown in Figure. 3c, gain the maximum speedup/acceleration when solved on a GPU. However, there is a fundamental limitation to GPU acceleration due to Amdahl's law: the achievable parallel speedup saturates at the serial bottlenecks of a given computational task.
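
A small worked example of Amdahl's law: if a fraction p of the work can be parallelized (or accelerated) by a factor s, the overall speedup is 1 / ((1 - p) + p/s), so the serial remainder quickly dominates:

    def amdahl_speedup(p, s):
        """Overall speedup when a fraction p of the work is accelerated by factor s."""
        return 1.0 / ((1.0 - p) + p / s)

    # Even with a 100x faster GPU, a 10% serial portion caps the overall gain near 10x.
    print(amdahl_speedup(p=0.90, s=100))   # ~9.2x
    print(amdahl_speedup(p=0.99, s=100))   # ~50x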

Software Frameworks

To build a deep learning application, one may use one's labeled datasets to build a learning model on any of various frameworks [6] (both open-source and closed) provided by competing vendors in the industry, as follows:

  1. TensorFlow, developed by Google; Python API over a C++ engine; a low-level API, good for researchers, not commercially supported; notably, Google is in the process of developing the TPU for direct use with TensorFlow.

  2. Caffe 2, which grew out of UC Berkeley's Caffe and is used at Facebook among other places; focused on computer vision; one of the earlier frameworks to gain significant adoption; Python API over C++ and CUDA code.

  3. Scikit-learn (Python based), a general inference and machine-learning framework (a short usage sketch follows this list).

  4. Theano, written in Python; the granddaddy of deep learning frameworks.

  5. CNTK, developed by Microsoft.
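
As a concrete example of item 3, a minimal scikit-learn sketch that trains a small neural-network classifier on its bundled handwritten-digits dataset (the hidden-layer size and iteration count are arbitrary choices):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Small 8x8 handwritten-digit images bundled with scikit-learn.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One hidden layer is enough for this toy dataset.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))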

Applications

Tamil applications of deep learning include providing or improving existing solutions to the problems of:

  1. Tamil Speech Recognition
  2. Tamil Character Recognition [7,8]
  3. Natural Language Processing for Tamil

Hardware acceleration and the availability of big data (labeled datasets) will play a key role in the success of applying deep learning techniques to these problems.

References

  1. Jensen Huang, “Accelerating AI with GPUs: A New Computing Model,” link

  2. G. E. Hinton et al., "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems (2012).

  3. LeCun, Y., Bengio, Y. and Hinton, G. E., “Deep Learning” Nature, Vol. 521, pp 436-444. (2015), link.

  4. GPU definition at PC Magazine Encyclopedia, PC Magazine, (2017) link.

  5. Tesla GPU Application notes from NVidia, (2017) link.

  6. "Comparing deep learning frameworks," Deeplearning4j.org (2017), link.

  7. Prashanth Vijayaraghavan, Misha Sra, "Handwritten Tamil Recognition using a Convolutional Neural Network," NEML Poster (2015), link.

  8. R. Jagadeesh Kannan, S. Subramanian, “An Adaptive Approach of Tamil Character Recognition Using Deep Learning with Big Data-A Survey”, Proceedings of 49th Annual Convention of Computer Society of India (vol. 1) pp 557-567 (2015), link.