Another machine intelligence discipline gaining traction, largely because of the speed at which it can deliver results when developing smart applications, is Reinforcement Learning. This blog provides a helicopter view of what this form of computing is all about, how it differs from machine learning and deep learning, and where it sits in the computer learning landscape.
The classical approach to machine learning (ML) requires human involvement to predefine some top-level rules, so programmes know what they’re supposed to be learning and the conclusions they’re supposed to be drawing. A certain amount of data is also required from which patterns can be identified so these conclusions can be drawn. Reinforcement learning (RL), on the other hand, isn’t quite so needy. It simply requires a set of scenarios from which it can draw conclusions by trial and error, using feedback from its own actions and experiences. Because of these self-learning characteristics, some regard it as the holy grail of artificial intelligence, since next to no human input is necessary.
RL was originally used widely by gaming gurus for optimisation purposes, but the insight it provides into unforeseen scenarios makes it an extremely powerful tool when developing computer models for supply chain management or industrial robotics, for example. In these fields it’s neither feasible nor practical to predict every possible combination of circumstances that might occur in everyday situations.
Getting your head around RL can be difficult, but if you think about it in dog training terms, the concept is fairly easy to grasp because our four-legged friends are perfect examples of reinforced learners. If you throw a dog a ball and the dog retrieves said ball, it will be rewarded with a biscuit. The more times it does this, the more biscuits it will be given. However, if your dog doesn’t bring you the ball when told to do so, it will get nothing. Reinforcement learning follows very similar principles, but at machine level, and you don’t need to clean up after it!
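If you like to see ideas in code, the fetch-the-ball loop can be sketched in a few lines of Python. This is a deliberately minimal illustration of trial-and-error learning (a single-situation toy, not a full RL problem), and all the names in it are ours for the sake of the example, not taken from any particular library:

```python
import random

# A toy "dog training" environment: the only decision is whether to
# fetch the ball. Fetching earns a biscuit (reward 1); ignoring the
# ball earns nothing (reward 0).
ACTIONS = ["fetch", "ignore"]
REWARDS = {"fetch": 1.0, "ignore": 0.0}

q_values = {action: 0.0 for action in ACTIONS}  # the agent's value estimates
learning_rate = 0.1   # how strongly each experience updates an estimate
epsilon = 0.2         # chance of trying a random action (exploration)

for episode in range(500):
    # Trial: explore occasionally, otherwise exploit the best-known action.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)

    # Feedback: the environment hands back a reward (the biscuit, or nothing).
    reward = REWARDS[action]

    # Error correction: nudge the value estimate towards the observed reward.
    q_values[action] += learning_rate * (reward - q_values[action])

print(q_values)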
While machine learning, and its more advanced subset, deep learning (DL), have the ability to solve many problems previously considered to be out of bounds for computers, their respective algorithms require huge amounts of high-quality labelled data. Applications are therefore limited to certain sectors, such as audio or visual recognition, where there is an abundance of training data. These computer learning programmes are far less useful for R&D into the newest technologies, where labelled data is scarce, limited by regulatory constraints (as is the case with military data) or doesn’t exist at all – driverless cars being a prime example. This is when RL comes into its own.
Another string in the bow of this powerful AI learning tool is that the “intelligence” obtained is not restricted to the classical compartmentalised processing patterns that mimic the human mind. RL’s “non-human” learning capabilities are perfectly suited to resolving problems we might not even have been aware of in the first place. It’s particularly powerful when developing autonomous machines for industrial usage because the goal posts are constantly changing. Creating robots that can handle objects is an incredibly complicated task and involves a good deal of trial and error. Dactyl, an AI-powered robot built by research lab OpenAI, used RL to solve a Rubik’s cube one-handed. The process might well have taken several minutes, but the outcome is a major milestone in AI terms.
The shortcoming of RL, however, is that it requires expensive GPUs and hundreds of machines, thus limiting accessibility to the larger tech companies and/or research labs that: a) have specialist comms facilities to accommodate the necessary infrastructure, or b) have sufficient cash to access the required compute resources. These limitations potentially cut out the smaller players from leveraging the benefits of this powerful learning tool.
AI and tech start-ups are the seedbeds of many pioneering technologies and need easy and affordable access to the different learning algorithms that make these innovations possible. Delivering this is what makes Kao Data stand out from conventional data centre and colocation providers. Built for hyperscale computing at industrial scale, our future-ready campus in Harlow comprises the cutting-edge infrastructure needed to support AI companies pushing the boundaries of R&D.
We’re already working with several AI companies, including InstaDeep, and we’re providing a full-fibre network to connect Cambridge-based research institutions and enterprises to key sites in the UK and mainland Europe. Not only that, our technical team was instrumental in the swift deployment of Cambridge-1, a £40 million supercomputer built by NVIDIA to serve as a hub for collaboration by AI researchers, scientists and start-ups globally.