Whether it's leaping ahead in quality assurance for telecommunications companies, increasing Right First Time rates for critical infrastructure, or bolstering operational efficiency, machine learning (ML) has become an essential component of the modern business world.
Since ChatGPT's explosive growth at the beginning of 2023, Artificial Intelligence (AI) has been top of mind in both the private and public spheres. AI and ML implementation is increasing across a multitude of use cases and industries, while governments and regulatory authorities invest in reports on the potential of AI and host summits about its practical ramifications.
But AI and ML did not come into being with Sam Altman and OpenAI. To understand the current state of machine learning and where it is going, it’s important to know how it began.
Machine learning is a subset of AI in which statistical models and algorithms are used to mimic the way a human mind learns information. After being trained on data sets and learning to recognise patterns, ML programs are designed to make decisions based on their analysis.
Following the Second World War, mathematicians and scientists began exploring the creation of AI and ML. Alan Turing, the famous computer scientist who broke the Nazis' Enigma code during the war and is often credited as the father of AI, explored the possibility of machines mimicking human thinking through computation based on available data and reason in his 1950 paper "Computing Machinery and Intelligence".
Throughout the 1950s, the first major experiments in artificial intelligence took place. While the groundwork for what would become modern machine learning was laid more than 70 years ago, it struggled with the technological limitations of the day. Besides lacking the sheer computing power needed for complex calculations, early computers could only execute commands, not store them, meaning they could not remember what they had done — a prerequisite for modern machine learning.
Computers were also very expensive, with one report stating that renting a computer could cost €150,000 a month, putting the development of AI out of reach for most. In 1956, Allen Newell, Herbert Simon, and Cliff Shaw presented a proof of concept called the Logic Theorist, widely considered the first artificial intelligence program ever created. Though research progressed into the 1970s, scientists realised that computers were still too weak to function at the level needed for the models and algorithms being created. Due to this, AI research gradually became less prominent.
Through the 1960s and 1970s, machine learning and neural networks remained a fundamental part of AI research, used to train AI systems. But towards the end of the 1970s, AI researchers moved away from ML, and the ML community established itself as a separate branch of study running parallel to the wider AI field.
While large-scale development of ML wasn't prevalent in the 1980s, important theoretical contributions were made. One prime example is Explanation-Based Learning (EBL), introduced by Gerald DeJong. One of the critical moments in the progression of ML was the rise of the internet in the 1990s. Thanks to the vast increase in data accessibility enabled by the World Wide Web and the steady growth of computing power, a resurgence occurred in ML research that enabled one of its first major breakthroughs.
In 1997, scientific history was made when IBM's Deep Blue chess program beat world chess champion Garry Kasparov. Thanks to the guiding light of Moore's Law, computational power had increased exponentially since the early days of AI research, making this victory possible. For the first time, people saw machine intelligence outperform peak human ability.
The Classical ML Era began around 2005. Classical machine learning — also known as statistical learning — uses models or algorithms to analyse massive data sets, identify patterns, and make predictions. Common models include linear regression, logistic regression, decision trees, random forests, k-nearest neighbours algorithms, and support vector machines, just to name a few.
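To make the classical approach concrete, here is a minimal sketch of one of the models listed above — a k-nearest neighbours classifier — in plain Python. The function name and toy data are illustrative, not taken from any particular library:

```python
from collections import Counter
import math

def knn_predict(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Compute the Euclidean distance from the query to every training point.
    distances = [
        (math.dist(query, point), label)
        for point, label in zip(train_points, train_labels)
    ]
    # Keep the k closest points and return the most common label among them.
    nearest = sorted(distances)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical toy data: two small clusters in 2D.
points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
labels = ["a", "a", "a", "b", "b", "b"]

print(knn_predict(points, labels, (1.1, 0.9)))  # query near the first cluster
```

The model "learns" nothing beyond storing the training data; every prediction is a direct comparison against known examples, which is why classical models like this are easy to inspect and explain.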
Unlike the deep learning neural network models that would follow, classical machine learning models are computationally lighter, though they depend more heavily on the quality of their training data. Because their training and decision processes are more straightforward to trace, these models are considered "Explainable AI."
This era is also referred to as the "Recommendation Era" due to the growth of recommendation algorithms embedded in platforms like Google and YouTube. ML was also being deployed for facial recognition and fraud detection during this time.
Let’s have a chat to explore the possibilities.