DeepMind is the world’s largest research-focused artificial intelligence operation, and its targeted research and development has shaped much of the AI field.
However, the company is spending heavily to run its experiments and research, and those expenses have outpaced its income every year. DeepMind has accumulated roughly $1 billion in losses over the past three years and is expecting more than $1 billion in further losses over the coming 12 months.
DeepMind runs an unusually large number of research projects in any given 12 months, and each project is expensive. The dollars involved are comparable to what some of the largest projects in science cost.
Projects such as the Large Hadron Collider, the world’s largest machine for testing theories of particle physics, cost roughly $1 billion per year to operate, and the total cost of discovering the Higgs boson has been estimated at around $10 billion.
The company has also been working toward general machine intelligence: the kind of versatile, Star Trek-style computer that can analyze everything from language to sound. That ambition carries another level of expense.
DeepMind’s losses have grown significantly, from $154 million in 2016 to $341 million in 2017 and $572 million in 2018. The question that arises is whether DeepMind is on the right track scientifically, which Wired has explored:
“DeepMind has been putting most of its eggs in one basket, a technique known as deep reinforcement learning.”
Deep reinforcement learning combines deep learning, which is mainly used for recognizing patterns, with reinforcement learning, which is geared around learning from a reward signal, such as a score in a game or victory or defeat in a game like chess.
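To make the reward-signal idea concrete, here is a minimal sketch of tabular Q-learning, one classic form of reinforcement learning. This is purely illustrative (it is not DeepMind's deep-learning-based system); the function names and hyperparameter values are the author's own assumptions:

```python
import random

# Hyperparameters (illustrative values): learning rate, discount factor, exploration rate.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def update(q, state, action, reward, next_state, actions):
    """One reward-driven update: nudge the value estimate Q(s, a)
    toward the observed reward plus the discounted best future value."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose(q, state, actions):
    """Epsilon-greedy policy: mostly exploit current estimates, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))
```

Deep reinforcement learning replaces the lookup table `q` with a neural network that estimates those values from raw inputs such as game pixels, but the reward-driven update loop is the same in spirit.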
DeepMind essentially put its name on the technique in 2013, with an exciting paper showing how a single neural network system could be trained to play different Atari games.
The system learned to play Atari games such as Breakout and Space Invaders as well as, or better than, humans. The paper was a key catalyst in DeepMind’s January 2014 sale to Google.
“Further advances of the technique have fueled DeepMind’s impressive victories in Go and the computer game StarCraft.”
The trouble is that the technique is very specific to narrow circumstances. In playing Breakout, for instance, tiny changes, such as moving the paddle up or down by a few pixels, can cause dramatic drops in performance.
DeepMind’s StarCraft results were similarly limited: better-than-human performance when playing on a single map with a single “race” of character, but relatively poor results on different maps and with different characters. To switch characters, the system must be retrained from scratch.
In some ways, deep reinforcement learning amounts to a kind of turbocharged memorization. Systems that use it can achieve awe-inspiring results, but they have only a “shallow understanding of what they are doing”.
As a consequence, current systems lack flexibility and are unable to compensate if the world changes, sometimes even in tiny ways. DeepMind’s recent results with kidney disease have been questioned in similar ways, according to the report.
Deep reinforcement learning also requires a huge amount of data: millions of self-played games, in the case of Go. That is far more than a human would require to become world-class at Go, and gathering it is often far more difficult or expensive.
The report states,
“That brings a requirement for Google-scale computer resources, which means that, in many real-world problems, the computer time alone would be too costly for most users to consider.”
One estimate put the cost of the training time alone for AlphaGo at $35 million, and on the energy expended,
“the same estimate likened the amount of energy used to the energy consumed by 12,760 human brains running continuously for three days without sleep.”
claims the report.
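A rough back-of-envelope check puts that brain comparison in more familiar units. The 20-watt figure for a human brain's power draw is a common estimate, not from the report, so the result is only an order-of-magnitude illustration:

```python
# Convert "12,760 human brains running continuously for three days"
# into kilowatt-hours, assuming ~20 W per brain (assumed figure).
BRAIN_POWER_W = 20
num_brains = 12_760
hours = 3 * 24  # three days

total_kwh = BRAIN_POWER_W * num_brains * hours / 1000  # Wh -> kWh
print(f"{total_kwh:,.0f} kWh")  # roughly 18,000 kWh
```

Under those assumptions, the training run corresponds to on the order of 18,000 kWh, comparable to what a typical household might use over several years.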
Beyond the economics, the real issue, as Ernest Davis and the author argue in the forthcoming book Rebooting AI, is trust. For now, deep reinforcement learning can only be trusted in environments that are well controlled, with few surprises; that works fine for Go, whose rules have stayed intact for a long time.
Such controlled conditions are rarely found in the real world. The report also raised questions about commercial success and staying power: few people know what DeepMind’s commercial strategy is, and backers may be less willing to let AI research run its course on open-ended funding.