A company based in San Francisco designed an AI system that could solve a Rubik’s cube using a robotic hand. The system required not only several computing units constantly running code but also highly sophisticated graphics processing units, and it had to run for months to achieve the task. Completing the training process is estimated to have required around 2.8 gigawatt-hours of electricity. As more Artificial Intelligence systems are trained to operate self-driving cars, smart homes, and complex analytical functions, the computing power they demand, and the resource footprint of providing it, will keep growing. As Artificial Intelligence moves out of research labs and into industrial set-ups, the scale of operations will increase manifold, and so will the resource consumption.
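For a sense of how a months-long training run reaches that scale, a back-of-the-envelope calculation is enough. The figures in the sketch below are purely hypothetical assumptions, not the actual hardware inventory of that experiment; they only illustrate the arithmetic.

    # Back-of-the-envelope estimate of training energy (Python).
    # All figures are hypothetical assumptions, not the real experiment's set-up.
    num_machines = 64            # assumed number of compute nodes
    power_per_machine_kw = 1.5   # assumed average draw per node (CPU + GPU), in kW
    hours = 24 * 30 * 3          # roughly three months of continuous training
    pue = 1.3                    # assumed data-centre overhead (cooling, networking)

    energy_kwh = num_machines * power_per_machine_kw * hours * pue
    print(f"{energy_kwh / 1e6:.2f} GWh")  # about 0.27 GWh under these assumptions

Scaling up the machine count or the run time by only a factor of a few pushes such an estimate into the gigawatt-hour range quoted above.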
Many experts see this as a contributor to climate change and therefore want to be mindful of the ecological footprint of their algorithms. They have developed different indices to measure, track, and improve on it.
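One common form such an index takes is converting a run's measured or estimated energy use into CO2-equivalent emissions via the carbon intensity of the local grid. The sketch below is a minimal illustration of that idea; the function name and the default intensity figure are assumptions for illustration, not values from any specific index.

    def training_footprint_kg_co2e(energy_kwh, grid_intensity_kg_per_kwh=0.4):
        """Rough carbon-footprint estimate for a training run.

        The default grid intensity (0.4 kg CO2e per kWh) is an assumed placeholder;
        real indices look up the intensity of the grid where the job actually ran.
        """
        return energy_kwh * grid_intensity_kg_per_kwh

    # Example: the 2.8 GWh estimate mentioned above, on the assumed grid mix.
    print(training_footprint_kg_co2e(2.8e6))  # roughly 1.1 million kg CO2e

Tracking such a figure per training run makes it possible to compare algorithms on ecological cost as well as on accuracy.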
The Restricted Use Cases
Given this huge computing-power requirement, driven by long, complex code, larger neural networks, heavy data consumption, and high energy use, AI is mostly usable in scenarios where such resources are in ample supply: tech firms, offices, or large machines like drones and cars. But in many technologies such resources are not available for the specific functions they serve, including several micro-devices, medical instruments, and the like.
If this resource-intensive model were to change, Artificial Intelligence could expand into many other fields and become appropriate for simpler tasks requiring less complex functions. The extravagant model of AI is therefore detrimental not only to the climate and the ecosystem but also to the growth opportunities available to the industry. Yet there has been a dearth of innovation in more efficient code that requires less data and less computing power.
Following the Five Lines of Code for a Better Robot-Making Future
In a research paper published in May 2021, a team of scientists led by Johannes Overvelde at AMOLF, a publicly funded physics research institute in the Netherlands, reported an experiment showing that a “multi-component” robotic system could achieve its goal of unidirectional mobility with just five lines of code.
The lead scientist called the experiment “a prisoner’s dilemma with robots”: the robots had no way to communicate with each other and were each guided by a single sensor, yet they were able to coordinate to reach an optimal outcome. Such resource-optimizing algorithms can be essential to goals like autonomous nano-bots and Artificial Intelligence-driven medical equipment for intensive probes, which cannot be built with the presently available models. They also show researchers a way to reduce the climatic footprint of AI without compromising its functionality.
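The published controller itself is not reproduced here, but the sketch below illustrates the general shape of such a minimal, sensor-driven per-unit loop. The sensor and actuator objects, the gain parameter, and the update rule are all hypothetical assumptions for illustration; only the overall idea, each unit acting on its one local reading with no messaging, comes from the description above.

    # Illustrative sketch of a minimal per-unit control loop (hypothetical names).
    # It is not the actual five-line controller from the AMOLF experiment.
    def run_unit(sensor, actuator, gain=0.1):
        phase = 0.0
        while True:
            reading = sensor.read()     # the unit's single local sensor
            phase += gain * reading     # adjust timing from local feedback only
            actuator.set(phase)         # drive this unit's own actuator
            # No messages are exchanged: coordination across units emerges
            # from each one reacting to what it can sense locally.

The point of such minimalism is that the per-unit computation is trivial, so the system needs neither a central controller nor the heavy compute budget of conventional AI models.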