Machine Learning and Optimization: “It’s not rocket science”
Whenever the term “artificial intelligence” comes up, we tend to assume it’s about cyborgs and AIs dominating the world. We have grown so accustomed to science fiction that we forget the real science behind it: algorithms and optimization… lots of it.
Shamane Siriwardhana, a software engineer at Enadoc, wrote in his article “Optimization isn’t Rocket Science in ML” that it isn’t science-fiction magic that makes Cortana the best afternoon date you can think of. It’s a group of algorithms, coupled with a lot of trial and error, that teaches artificial intelligence to be intelligent.
Yes, it takes quite a lot of effort to make a machine intelligent. No different than us, right?
Every day, new artificial intelligence technologies seem to sprout out of nowhere, along with new ways to teach machines. In his article, Shamane credits this to the process of optimization, which is exactly what it sounds like: the act of finding the most effective way of teaching a machine.
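To make that concrete, here is a minimal sketch of what “teaching a machine” by optimization usually means in practice (our own illustration, not code from Shamane’s article): repeatedly nudging a model’s parameter to shrink an error measure, a technique known as gradient descent.

```python
# A minimal sketch of optimization in machine learning: gradient descent
# fitting a one-parameter model y = w * x to example data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # the true relationship here is y = 2x

w = 0.0    # initial guess for the parameter
lr = 0.01  # learning rate (how big each corrective step is)

for _ in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step "downhill", reducing the error

print(round(w, 3))  # converges toward 2.0
```

The “trial and error” the article mentions lives in choices like the learning rate: too small and learning crawls, too large and it diverges.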
Machine Learning and Industry 4.0
According to The MIT Press, the interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data.
Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas.
From a bird’s-eye view, it’s basically optimization giving birth to better machine intelligence and vice versa, creating a cycle like that of the industrial revolution: a process of development and innovation in which each turn of the wheel produces something new. Machine learning is the steam engine; optimization is the coal.
Still, it’s an arduous task of trial and error. Thankfully, we have experts who can shed some light on the complicated process of optimization.
One insight from Shamane is that it’s not about modeling a problem from scratch (where the output depends on a set of features) to fit a certain equation. The secret is to compare the model’s output against a known output, an approach which in most cases is called “supervised machine learning.”
This is just one of the techniques you can find in his article. Read more of those here.
You know the drill: Fill out the form and we’ll be there faster than the speed of sound