In 2007 I created my first Tamil software program. Today it is one of my lost programs. How did I lose it? Time.
Answer: word-ladder games (சொல் ஏணி) can help transform 'காதல்' (love) into 'தவம்' (penance) – see this.
Using this, we have already written up the 'காதல் -> தவம்' ladder.
This research paper on the subject also makes a beautiful read: 'Word Morph and Topological Structures: A Graph Generating Algorithm', Jürgen Klüver, Jörn Schmidt, Christina Klüver (2016), Complexity, Vol. 21, No. S1, Wiley Publications.
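For the curious, here is a minimal sketch of the word-ladder search in Python: a breadth-first search over a lexicon, using Open-Tamil's utf8.get_letters to split words into Tamil letters (which can span several codepoints). The lexicon argument is a hypothetical stand-in for a real dictionary.

from collections import deque
from tamil import utf8

def neighbors(word, lexicon):
    # Two words are neighbors if they have the same length in Tamil
    # letters and differ in exactly one letter.
    letters = utf8.get_letters(word)
    for other in lexicon:
        other_letters = utf8.get_letters(other)
        if len(other_letters) != len(letters):
            continue
        if sum(a != b for a, b in zip(letters, other_letters)) == 1:
            yield other

def word_ladder(start, goal, lexicon):
    # Standard BFS: returns the shortest chain of one-letter changes
    # from start to goal, or None if no ladder exists.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1], lexicon):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None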
I’m happy to announce the Open-Tamil 0.7 release today, 23rd March, 2018. Open-Tamil is distributed under the MIT license, and is available for Python 2.6, 2.7, 3+ and PyPy platforms via the Python Package Index at https://pypi.python.org/pypi/Open-Tamil/0.7
You can install the package via the '$ pip install --upgrade open-tamil' command issued in your console.
The following updates were made to the Python package:
1. tamilphonetic – convert EN input to Tamil text
2. tamilwordfilter – filter Tamil input only from all input text data (see the sketch after this list)
3. tamilurlfilter – filter Tamil text from the input website data
4. tamiltscii2utf8 – convert encoding from TSCII to UTF-8 for input file
5. tamilwordgrid – generate a crossword from Tamil input text and write to output.html file
6. tamilwordcount – like UNIX wc program but for Tamil
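To give a flavor of item 2, here is an illustrative sketch of Tamil-word filtering; it is a hypothetical reimplementation of the idea on top of the tamil.utf8 helpers, not the tamilwordfilter module itself.

from tamil import utf8

def tamil_words_only(text):
    # Keep only the words made up entirely of Tamil letters,
    # dropping English and mixed-script tokens.
    return [w for w in text.split() if utf8.all_tamil(w)]

print(tamil_words_only(u"hello உலகம் world வணக்கம்"))
# expected output: ['உலகம்', 'வணக்கம்']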
In addition to the package, a Django-based web interface for Open-Tamil was developed and hosted at http://tamilpesu.us to demonstrate some of its capabilities.
We would like to thank all our contributors in general, and in particular those members who contributed new code or bug fixes going into this release.
The previous release was v0.67 on Aug 23rd, 2017, and v0.65 was released on Oct 22nd, 2016. Please spread the word, and send us any bugs, feature requests or feedback via our GitHub page https://github.com/Ezhil-Language-Foundation/open-tamil
Muthu, for the Open-Tamil team.
Debugging – that is, how do we find and fix bugs on a computer? In Python this is quite straightforward: full details here.
Computer programs don't always work the way we want them to, so at times we need to stop a program in the middle of execution and inspect it. By looking at the variables, functions and source code in the debugger, and by stepping through the code, we can understand the problem better, trace the source of the error and arrive at a solution.
This may sound somewhat complex, but in practice it's quite repetitive and you will get the hang of it. It's the equivalent of software detective work; it is surprisingly fun, and you keep getting better at it with practice.
To debug Python we use the standard-library module 'pdb' [read the documents here]; pdb is named evocatively after the more famous and powerful gdb, the GNU source debugger. The simplest usage is to run the program that throws the error from the command line as follows,
$ python -m pdb myscript.py
Once you see the (Pdb) prompt you can do the following:
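For example, some of the most commonly used pdb commands are:

(Pdb) l          # list the source code around the current line
(Pdb) n          # next: run the current line, stepping over function calls
(Pdb) s          # step: run the current line, stepping into function calls
(Pdb) b 42       # set a breakpoint at line 42 of the current file
(Pdb) c          # continue execution until the next breakpoint
(Pdb) p expr     # print the value of an expression, e.g. p my_variable
(Pdb) w          # where: show the current stack trace
(Pdb) q          # quit the debugger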
Finally, you can figure out the cause of the problem and fix it!
Bon voyage! You are starting on a powerful journey to write cool software and fix buggy programs!
Good luck, and Godspeed.
[This article originally appeared in the magazine of the 2017 Tamil Internet Conference, UT-SC, Toronto, Canada.]
The current hot trend in the AI revolution is "deep learning" – a fancy way of talking about multi-layered convolutional neural networks. This field of study has heralded a new age in computing, extending human capabilities through automation and intelligent machines.
These neural networks aren't the same as the networks of neurons in your brain! We are talking about artificial neural networks, which reside in computers and try to mimic the biological neural network, with its synapses (connections) between axons and dendrites and their activation potentials. These thinking machines have their beginnings in post-WWII research at MIT, in the work of Seymour Papert, co-author of "Perceptrons," and Norbert Wiener's "Cybernetics".
But why is there sudden interest in these biologically inspired computer models? It is due to GPUs, which have accelerated the complex computations associated with neural networks enough to make them practical at large scale. GPUs allow these networks to operate on gigabytes (or even terabytes) of data, and they have reduced computation times from months to days, days to hours, or hours to minutes – typically an order of magnitude, which was not possible in an earlier generation of computing. Before we jump into the details, let us understand why we need deep learning and convolutional neural networks in the first place.
Science and engineering have traditionally advanced by our ability to understand phenomena in the natural world and describe them mathematically, since the times of Leonardo da Vinci, Nicolaus Copernicus, Galileo Galilei, Tycho Brahe, Johannes Kepler and Isaac Newton. However, gaining models through experimentation and piecemeal scientific breakthroughs for each problem at hand is a slow process. Outside of physics and mathematics, the scientific method is largely driven by an empirical approach.
It is in such pursuits, where the observational data describing an unknown process far exceed our human ability to divine an analytical model, that deep learning and GPU-based multi-layered neural networks provide an ad-hoc computable model. System identification for particular classification tasks, image recognition, and speech recognition, up to the modern miracle of self-driving cars, are all enabled by deep learning technology. All this came about through the seminal work of many innovators, culminating in the efficient convolutional neural networks of Prof. Geoff Hinton's group, trained with hardware acceleration via GPUs.
An original pioneer in the field of AI from before the AI winter, Prof. Geoff Hinton, with co-workers, recently showed deep learning models that beat the existing benchmarks on classification and prediction tasks for the following speech, text and image datasets: Reuters, TIMIT, MNIST, CIFAR and ImageNet, setting off renewed interest in the field of AI from academia and industry giants – Google, Microsoft, Baidu and Facebook alike.
GPU stands for Graphics Processing Unit. GPUs were originally designed for the graphics rendering used in video games in the 1990s. They have a large number of parallel cores which are very efficient at simple mathematical computations like matrix multiplication, and these computations are the fundamental building blocks of machine learning methods such as deep learning. While the improvement in CPUs has slowed down over the years as Moore's law has hit a bottleneck, the increase in GPU performance has continued unabated, showing tremendous improvements over the generations.
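As a toy illustration of the arithmetic involved, the snippet below times one dense matrix multiplication in NumPy on the CPU; a GPU framework dispatches exactly this kind of operation across thousands of parallel cores.

import time
import numpy as np

# One dense matrix multiply: the basic computational kernel that
# dominates neural-network training, and that GPUs parallelize well.
a = np.random.rand(1024, 1024).astype(np.float32)
b = np.random.rand(1024, 1024).astype(np.float32)

start = time.time()
c = a.dot(b)
print("1024x1024 matrix multiply took %.3f s on the CPU" % (time.time() - start))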
Figure 1 (left): Deep learning training task times as a function of various GPU processors from NVidia. Figure 2 (right): AlexNet training throughput for 20 iterations on various CPU/GPU processing platforms.
Such GPUs, originally invented for shading algorithms, are now applied to training large machine learning models using OpenCL- or CUDA-like frameworks from the vendors (variants of the C language with constructs for describing parallel execution via threading).
The pioneering hardware vendors include Nvidia, with GPU series like GeForce and Tesla; AMD, with its Radeon and GPGPU lines; Google, which has entered this race with its TPU (Tensor Processing Unit); and some offerings from Intel for ML training applications. Nvidia and AMD are the main players in the GPU space, with Nvidia laying special emphasis on parallel computing and deep learning over the years. Nvidia just announced the new Volta-generation V100 GPU, which is about 2.5x faster than the previous-generation Pascal GP100 chip announced less than 2 years ago. Compared to CPUs, however, GPUs are more than 50x faster for deep learning. Performance as a function of various GPU families is shown in Figure 1, and throughput on the AlexNet benchmark is shown in Figure 2.
If Harvard-architecture and RISC-architecture CPUs have been the workhorses of the personal computer revolution, then the advent of high-framerate video gaming pushed graphics rendering from CPU + video card, to CPU + GPU, to CPU + GPU + GP-GPU (general-purpose GPU); some of this overview is shown in Figures 3a and 3b.
GPUs are suitable for large numerical algorithms where data have to be moved through a computational pipeline, often in parallel; SIMD problems like the genome sequencing shown in Figure 3c gain the maximum speedup when solved on a GPU. However, there is a fundamental limitation to GPU acceleration due to Amdahl's law, which caps the achievable parallel speedup by the serial bottlenecks of a given computational task.
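Amdahl's law can be stated concisely: if a fraction p of a task can be parallelized and is spread over s processors, the overall speedup is

S(s) = 1 / ((1 - p) + p / s)

so even with unlimited parallel hardware the speedup saturates at 1 / (1 - p), a ceiling set by the serial fraction of the task.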
To build a deep learning application, one may use one's labeled datasets to build a learning model on any of the various frameworks (both open-source and closed) provided by competing vendors in the industry, as follows (a minimal example follows the list):
TensorFlow: developed by Google; a Python API over a C++ engine; a low-level API good for researchers, but not commercially supported. Notably, Google is in the process of developing the TPU, an advanced GPU-like accelerator for direct use with TensorFlow.
Caffe2: descended from UC Berkeley's Caffe and used at Facebook among other places; focused on computer vision; one of the earlier frameworks to gain significant adoption; a Python API over C++ and CUDA code.
Scikit-learn: a Python-based general machine-learning and inference framework.
Theano: written in Python; the granddaddy of deep learning frameworks.
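As promised above, here is a minimal sketch using scikit-learn, one of the frameworks listed: a small multi-layer perceptron trained on the library's bundled digits dataset stands in for the much larger convolutional networks discussed in this article.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load 8x8 images of handwritten digits and hold out 20% for testing.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# A small fully-connected network; fit() runs the training loop.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy: %.3f" % clf.score(X_test, y_test))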
Tamil applications for deep learning include providing new, or improving existing, solutions to problems such as handwritten Tamil character recognition (see the references below).
Hardware acceleration and the availability of big data (labeled datasets) will play a key role in the success of applying deep learning techniques to these problems.
Jensen Huang, "Accelerating AI with GPUs: A New Computing Model," link.
G. E. Hinton et al., "ImageNet Classification with Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems (2012).
Y. LeCun, Y. Bengio and G. E. Hinton, "Deep Learning," Nature, Vol. 521, pp. 436-444 (2015), link.
GPU definition at PC Magazine Encyclopedia, PC Magazine (2017), link.
Tesla GPU application notes from NVidia (2017), link.
"Comparing Deep Learning Frameworks," Deeplearning4j.org (2017), link.
Prashanth Vijayaraghavan, Misha Sra, "Handwritten Tamil Recognition using a Convolutional Neural Network," NEML Poster (2015), link.
R. Jagadeesh Kannan, S. Subramanian, "An Adaptive Approach of Tamil Character Recognition Using Deep Learning with Big Data: A Survey," Proceedings of the 49th Annual Convention of the Computer Society of India (Vol. 1), pp. 557-567 (2015), link.
One of the major achievements of the past year has been collecting inputs from our team and writing up two important papers: one a historical review, and the other a collective call to action on the great opportunity that is Tamil open-source software.
We also take the time to thank all the co-authors who pulled their efforts together at short notice to make these research works happen! Together these two papers represent a value of tens of thousands of Indian rupees or more in the making (going by the estimates of other Tamil software foundations).
We also thank the conference organizers for the partial travel grant toward making this presentation happen. Thank you!
The Ezhil and Open-Tamil conference articles for 2017 were presented at the Tamil Internet Conference in August 2017 in Toronto, Canada. Both papers were well received, and good academic and development points were debated at the forum.
For questions and queries on these articles, please write to us at firstname.lastname@example.org or leave your comments below.
Ezhil Language Foundation