
> The key difference is learning in animals occurs by breaking things down in terms of modular concepts, so even when things are not recognized new things can be labeled as a composition of smaller nearby concepts. Machines cannot yet do this well at all and certainly not as flexibly.

Actually, that's pretty much what deep learning is doing. For instance: https://papers.nips.cc/paper/5027-zero-shot-learning-through...

That paper was from a few years ago, I think the state of the art is better now, but it's trying to do exactly what you're talking about. More broadly, what you're talking about falls under the umbrella of transfer learning (that is, a model's ability to learn helpful information about task Y by training on related task X, preferably by learning and sharing useful features.)
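To make the transfer-learning idea concrete, here is a minimal numpy sketch: a feature extractor "pretrained" on task X is frozen, and only a new linear head is trained on task Y. Everything here (the random weights, the toy labels, the dimensions) is an illustrative assumption, not code from the linked paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these shared weights were already learned on a related task X.
W_shared = rng.normal(scale=0.1, size=(64, 32))

def features(x):
    # Frozen feature extractor: reused unchanged for the new task Y.
    return np.tanh(x @ W_shared)

# Task Y: toy data whose label depends on the first input dimension.
X_y = rng.normal(size=(100, 64))
y = (X_y[:, 0] > 0).astype(float)
H = features(X_y)

# Train only a new logistic-regression head on the shared features.
w_head = np.zeros(32)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(H @ w_head)))
    w_head -= 0.1 * H.T @ (p - y) / len(y)

acc = np.mean(((H @ w_head) > 0) == (y > 0.5))
```

Because the frozen features still carry information about the input, the cheap head does much better than chance, which is the basic promise of sharing useful features across tasks.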



I'm talking about learning modularly. Children can categorize things they've never seen before by inventing labels on the spot; they're not limited to selecting from a preexisting set (e.g. a lion is a "big cat"). They can recognize novelty and ask questions if nothing they know quite fits.

As a human, you are able to learn the general concept of a leg and understand it, even in a context you have never seen before, applied to an object you've never seen or never seen used that way. Everything you learn is also part of a set of relations, and each individual concept is modified in a precise manner as you learn something new about any of them. A big part of human intelligence, from the simple naming of things to the highest levels of science, is taking parts of things you know and putting them together in novel ways.

In neural nets, this is in line with what I mean: https://arxiv.org/abs/1511.02799


Well, not with traditional feedforward networks (LeNet, etc.). You can't run the classifier and find tires, then wheels, and then a car; but you do get composition of features.


> not with traditional feedforward networks (LeNet, etc.)

I'd argue they are implicitly doing this.

> You can't run the classifier and find tires, then wheels, and then a car;

Why can't you run a classifier for tires, one for wheels, and one for cars, then combine their outputs in a final classifier, maybe based on a decision tree? You can train all the networks at the same time, and you'll get probability distributions for all four outputs (tires, wheels, cars, blended). What am I missing?
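The setup described above can be sketched in a few lines of numpy: a shared feature extractor, one sigmoid head per part, and a final "blender" over the part probabilities. All names, shapes, and weights here are illustrative assumptions (randomly initialized, untrained), just to show the wiring, not any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Shared feature extractor, three part heads, and a blending layer.
D_IN, D_HID = 64, 32
W_shared = rng.normal(scale=0.1, size=(D_IN, D_HID))
heads = {name: rng.normal(scale=0.1, size=(D_HID, 1))
         for name in ("tire", "wheel", "car")}
W_blend = rng.normal(scale=0.1, size=(3, 1))

def forward(x):
    h = np.tanh(x @ W_shared)                              # shared features
    parts = {k: sigmoid(h @ w) for k, w in heads.items()}  # per-part probabilities
    stacked = np.concatenate([parts[k] for k in ("tire", "wheel", "car")], axis=1)
    blended = sigmoid(stacked @ W_blend)                   # combined final score
    return parts, blended

x = rng.normal(size=(5, D_IN))   # batch of 5 fake feature vectors
parts, blended = forward(x)
```

Since all four outputs are produced in one forward pass, the whole thing can be trained jointly with a loss on each head, which is exactly the "train all the networks at the same time" idea.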


That would just be your opinion; it hasn't been shown. What neural nets are actually learning in their intermediate layers is still an open research question.

You're going to need large amounts of fine-grained labeled data for each category. You've also just manually determined some sort of (brittle) object ontology. What if there are only 3 tires? What if there are four tires on the road but no car? All sorts of edge cases, and all you've done is train a classifier for cars, not actually solved driving in any meaningful way.


Doesn't scale. You don't have N brains to compose every representation.


Agreed -- if you get away from traditional feedforward networks by adding recurrence throughout, then there is at least some chance of learning scale-free features and compositionality.



