That's exactly what is about to happen. Deep learning has the potential to do robot control. Researchers are currently beating tons of video games using reinforcement learning with deep networks, and applying the same methods to robots shouldn't be too hard. Machine vision has also come a long way over the past 5 years.
I'm as bullish on AGI as anyone in the medium term, but deep learning is not even playing the same game as AGI, let alone in the same ballpark or having the potential to achieve it.
Deep learning is still mere perception. It doesn't handle memory or processing; it just transforms input into output, typically trained on Big Data, way more data than should be statistically necessary given the world we live in.
AGI requires super aggressive unsupervised learning in recurrent networks, likely with specialized subsystems for episodic and procedural memory, as well as systems that condense knowledge down to layers of the network that are closer to the inputs. At a minimum. And nobody is really working on any of that yet (or at least succeeding) because it's really damn hard.
That's why everyone in "AI" is rebranding as a deep learning expert, even though deep learning is really just 1980s algos on 2016 hardware - you gotta sex up feed forward backprop or you don't get paid.
Edit: to be fair, robot control is much simpler than AGI, and might be mostly solved with deep learning somewhat soon, I forgot the context of your post.
Sure, and I probably shouldn't have glossed over that. That sort of research is definitely progress, though it's not paradigm shifting in any way. I do think that we are getting past perception slowly but surely, I just don't think we're there yet.
What really doesn't exist is any meaningful stab at unsupervised (or self-supervised) training on completely unstructured inputs or any sort of knowledge condensation/compression, at least for time dependent problems. These are of paramount importance to the way we think, and to what we can do.
There's a lot of trivial low-hanging fruit, too. I have yet to see even a grad school thesis that starts with an (N+M)-node recurrent network, trains an N-node subnetwork to match its outputs on fuzzed inputs, and then backs that out into an unsupervised learning rule that's applicable to multiple problems. Or better: a layered network that is recurrent but striated, which tries to push weights toward the lower layers while reproducing the same outputs (even with a feed-forward network this would be an interesting problem to solve if it were unsupervised). These are straightforward problems that would open up new avenues of research if good methods were found, but they're mostly unexplored right now.
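To make the first idea concrete, here's a minimal numpy sketch of that setup: a randomly initialized (N+M)-unit "teacher" network standing in for a trained one, and an N-unit "student" trained purely on fuzzed (random) inputs to match the teacher's outputs. All names and sizes are made up for illustration; this is distillation-style output matching, not a full unsupervised learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: teacher has N+M hidden units, student only N.
N, M, D_IN, D_OUT = 8, 4, 5, 3

# Fixed random "teacher" network (stands in for a trained one).
W1_t = rng.normal(size=(D_IN, N + M))
W2_t = rng.normal(size=(N + M, D_OUT))
def teacher(x):
    return np.tanh(x @ W1_t) @ W2_t

# Smaller "student" network, trained to match the teacher.
W1_s = rng.normal(size=(D_IN, N)) * 0.1
W2_s = rng.normal(size=(N, D_OUT)) * 0.1

def mse(x):
    return float(np.mean((np.tanh(x @ W1_s) @ W2_s - teacher(x)) ** 2))

x_test = rng.normal(size=(256, D_IN))
loss_before = mse(x_test)

lr = 0.05
for step in range(2000):
    x = rng.normal(size=(32, D_IN))   # fuzzed inputs, no real dataset needed
    h = np.tanh(x @ W1_s)
    err = h @ W2_s - teacher(x)       # match the teacher, not ground truth
    gW2 = h.T @ err / len(x)          # backprop through the student
    gh = (err @ W2_s.T) * (1 - h ** 2)
    gW1 = x.T @ gh / len(x)
    W2_s -= lr * gW2
    W1_s -= lr * gW1

loss_after = mse(x_test)
```

The interesting (and unexplored) step would be backing the resulting student out into a general learning rule; the sketch only covers the matching part.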
I could be wrong; if I had real confidence that we were close, I'd be working on this stuff. Instead I'm collecting a paycheck doing web dev...
Sequence-predicting RNNs are basically unsupervised, in that they can learn from lots of raw unlabelled data. And they learn useful internal representations which can be adapted for other tasks. There is lots of older work on unsupervised learning rules for RNNs, including recurrent autoencoders and history compression.
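As a concrete illustration of that point, here's a minimal numpy RNN trained by next-symbol prediction on a raw symbol stream: the training targets are just the input shifted by one step, so no labels beyond the data itself are needed. Everything here (alphabet size, hidden size, learning rate) is an arbitrary toy choice.

```python
import numpy as np

rng = np.random.default_rng(1)

V, H = 3, 16                    # alphabet size, hidden units (toy choices)
data = np.tile([0, 1, 2], 50)   # raw unlabelled symbol stream
onehot = np.eye(V)

Wxh = rng.normal(size=(V, H)) * 0.1
Whh = rng.normal(size=(H, H)) * 0.1
Why = rng.normal(size=(H, V)) * 0.1

def step_through(seq):
    """Forward pass + BPTT for next-symbol prediction. The targets are
    just the sequence shifted by one: no labels beyond the data itself."""
    h = np.zeros(H)
    hs, xs, ps = [h], [], []
    loss = 0.0
    for t in range(len(seq) - 1):
        x = onehot[seq[t]]
        h = np.tanh(x @ Wxh + h @ Whh)
        logits = h @ Why
        p = np.exp(logits - logits.max())
        p /= p.sum()
        loss -= np.log(p[seq[t + 1]])
        xs.append(x); hs.append(h); ps.append(p)
    gWxh = np.zeros_like(Wxh); gWhh = np.zeros_like(Whh); gWhy = np.zeros_like(Why)
    dh_next = np.zeros(H)
    for t in reversed(range(len(seq) - 1)):
        dy = ps[t].copy()
        dy[seq[t + 1]] -= 1                    # softmax cross-entropy gradient
        gWhy += np.outer(hs[t + 1], dy)
        dpre = (dy @ Why.T + dh_next) * (1 - hs[t + 1] ** 2)
        gWxh += np.outer(xs[t], dpre)
        gWhh += np.outer(hs[t], dpre)
        dh_next = dpre @ Whh.T                 # gradient flowing to earlier steps
    n = len(seq) - 1
    return loss / n, gWxh / n, gWhh / n, gWhy / n

lr, losses = 0.3, []
for _ in range(300):
    loss, gWxh, gWhh, gWhy = step_through(data[:30])
    losses.append(loss)
    Wxh -= lr * gWxh; Whh -= lr * gWhh; Why -= lr * gWhy
```

The hidden state that emerges is the "useful internal representation": it has to encode enough of the history to predict what comes next.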
> we introduce a form of memory-augmented neural network called a differentiable neural computer, and show that it can learn to use its memory to answer questions about complex, structured data
So it seems that deep neural nets can have memory mechanisms and be trained to solve symbolic operations.
I'm not talking about AGI at all! Just robot control. It's a difficult problem, sure, but not that difficult. There has been massive progress on it, and related problems. I have no doubt we will have 'solved' it within a decade.
The systems which learn video games work only for games where the state of the game is entirely visible, and the desired player action can be decided based only on the current state. Pac-Man, yes. Doom, not that way.
That's only because they didn't use recurrent neural networks, which save information over time. RNNs make it possible to play games with hidden state. DeepMind is currently working on that with StarCraft, which is vastly more complicated than Pac-Man. They also have some work on 3D games like Doom.
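A tiny illustration of why recurrence matters for hidden state, with hand-set toy policies (nothing here is learned, and the task is made up): the useful cue appears only at the first timestep, so a reactive policy that sees just the current observation can't act on it, while a recurrent state can carry it forward.

```python
# Toy partially observable task: the cue arrives at t=0 and later
# observations are blank, so the right action at the end depends on
# remembering the past.

def recurrent_policy(obs_seq):
    h = 0.0                          # one-unit "hidden state" as memory
    for o in obs_seq:
        h += o                       # integrates the early cue over time
    return "right" if h > 0 else "left"

def reactive_policy(obs):
    return "right" if obs > 0 else "left"   # sees only the current frame

cue_right = [1.0, 0.0, 0.0, 0.0]
cue_left = [-1.0, 0.0, 0.0, 0.0]

# At the final step the reactive policy sees 0.0 in both cases and must
# answer identically; the recurrent one still knows which cue it saw.
```

A trained RNN does the same thing with learned weights instead of a hand-set accumulator: its hidden state ends up encoding whatever past observations the reward depends on.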
A few weeks ago there was a paper posted on "synthetic gradients", which should make it much more practical to train RNNs for games. Before, backpropagation through time required saving every single computation to memory, which uses a huge amount of memory and compute. Using synthetic gradients, you only need to store a few steps into the past. And it can learn online.
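For intuition, here's a sketch of where the memory saving comes from. This shows plain truncated BPTT with a placeholder where a synthetic-gradient module would predict the gradient at the chunk boundary; it is not an implementation of the actual synthetic-gradients method, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
H, K = 8, 5                       # hidden size, truncation window (toy sizes)

Wxh = rng.normal(size=(1, H)) * 0.1
Whh = rng.normal(size=(H, H)) * 0.1

stream = rng.normal(size=(1000, 1))   # a long input stream
h = np.zeros(H)
max_stored = 0

for start in range(0, len(stream), K):
    chunk = stream[start:start + K]
    hs = []                           # only this chunk's activations are kept
    for x in chunk:
        h = np.tanh(x @ Wxh + h @ Whh)
        hs.append(h)
    max_stored = max(max_stored, len(hs))
    # Backprop would run over `hs` only. The true gradient flowing in from
    # the future is unknown at the chunk boundary; plain truncation uses
    # zero here, while a synthetic-gradient module would predict it from h.
    boundary_grad = np.zeros(H)       # placeholder (unused in this sketch)
    # `h` carries forward, so information persists even though the older
    # activations were discarded.
```

Memory per update is bounded by K rather than by the length of the stream, which is what makes online learning on long episodes feasible.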
The ability for AI to approach these problems has only emerged in the last 2-3 years. The tech was really not there in 2008, and it's still very rough and cutting-edge in 2016. But we are at least seeing the first glimpses that it's definitely possible. If AI can play StarCraft, then surely it can control a simple robot. And anyway, see my other comment.