
Honest question: Why do you think that generalized and specialized AI are distinct things? Is it not possible that general AI is a specialized AI applied over the field of specialized AI generation?


What people mean by generalized and specialized AI is not consistent across the field, but everyone agrees that the current brand of AI, driven by statistical learning techniques and large-scale neural networks, is far from explaining how even the simplest nervous systems work. The key obstacle is adaptation. Several people believe that they've more or less solved the recognition problem. However, adaptation is a totally different thing. There are no tools in the current AI toolkit that we can use to make a robot that can go out unsupervised in the real world, do something useful, and come back safely. By contrast, even the nematode C. elegans, with only 302 neurons, is remarkably flexible: it can forage for food, remember cues that predict food, manage food resources, get away from danger or noxious stimuli, etc. This allows it to survive quite well in a world that is constantly changing in unpredictable ways. This is the kind of intelligence that proponents of so-called general AI want, and I agree we are a couple of major breakthroughs away from it.


And we have a complete wiring diagram for C. elegans, and no clue how it does any of the things you talked about. So, yeah, general AI is really far off.


To be honest, the wiring diagram is a bit of a distraction from the really big questions. It has its uses for sure, and is really essential in many situations but overall it gives this illusion that we understand something important about the system, where in reality we don't. Understanding a biological system from its wiring diagram is something like understanding a city by studying its road map.


I have no idea what I'm talking about, but why couldn't we build some sort of bio-computer hybrid system around a simple form of life like C. elegans, augmented with traditional CPUs?


That's another option, and there are people who do that: https://blog.inf.ed.ac.uk/insectrobotics/


> Is it not possible that general AI is a specialized AI applied over the field of specialized AI generation?

AI problems can be characterised as those where there's no clear path to a solution (otherwise we just call it "programming"); tackling them necessarily involves trial-and-error, backtracking, etc.

Since there are far too many possibilities to enumerate, solving such problems requires reasoning about the domain, e.g. finding representations which are smooth enough to allow gradient descent (or even exact derivatives); finding general patterns which will apply to unseen data; finding rules which facilitate long chains of deduction; etc.
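As a toy illustration of why that smoothness matters (my own sketch, not something from the thread): gradient descent only makes progress when small changes in the representation produce small, informative changes in the loss.

```python
# Toy sketch: gradient descent assumes a smooth representation, where small
# parameter changes produce small, informative changes in the loss.
def grad_descent(loss_grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient of a differentiable loss."""
    x = x0
    for _ in range(steps):
        x = x - lr * loss_grad(x)
    return x

# Smooth loss (x - 3)^2 has the exact derivative 2 * (x - 3),
# so the descent converges toward the minimum at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
```

On a discrete, non-smooth domain (say, the space of Turing machine programs) there is no analogue of `loss_grad`, which is exactly why those methods stop applying.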

The difficulty is that there's usually a tradeoff between the capability/expressiveness of a system and how much it can be reasoned about. If we choose a domain powerful enough to represent "the field of specialised AI generation", for example Turing machines or neural networks, methods like deduction, pattern-finding, gradient following, etc. become less and less applicable, and we end up relying more on brute force.

To me, this is where the AI breakthroughs are lurking. For example, discovering a representation for arbitrary programs which allows a meaningful form of gradient descent to be used, without degenerating into million-dimensional white noise; or to take deductive knowledge regarding one program and cheaply "patch" it to apply to another; and so on.


My two cents: they are separate because there is no current algorithm that can take us from modeling (whether classical statistics or neural nets) to intelligence. Applying our current specialized techniques to AI generation has not gotten us there, because those techniques are mostly model-tweaking techniques: the models are generated and trained separately for each problem domain. A combined solution may be developed soon, but I doubt it.

There was a great article recently on HN that highlights the current problems:

http://www.theverge.com/2016/10/10/13224930/ai-deep-learning...

https://news.ycombinator.com/item?id=12684417

Just because we may acquire the processing power estimated to be used in the brain (in operations per second) doesn't mean we know how to write the software to accomplish the task. It is very clear current algorithms won't cut it.

Also, I think we are a few orders of magnitude off on raw processing requirements because I think it is a bandwidth issue as much as an operations per second issue.

TL;DR - you could throw as much processing power and data as you want at any current deep NN or its derivatives and you wouldn't get general intelligence.

That said I don't think the winter will be as bad as before because, like OP says, specialized AI is useful.


Specialized AI is all about X,Y pairs: given X, predict Y. There are other problems it's good at too, like given X, choose a Y to optimize Z, but at its core it's largely the same. On the fringes, you have work on exploration, which is AWESOME, but still pretty niche. At least 99% of the "AI" you hear about is of the X,Y variety. More to your point, if generalized AI can be built from "given X, predict Y," then nobody's figured out how to do it, and nobody has super promising research tracks to get there.
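The "given X, predict Y" framing can be sketched with the simplest possible learner, an ordinary-least-squares line fit (illustrative only; the function names are my own):

```python
# Illustrative sketch of the "given X, predict Y" framing: fit a line to
# observed (X, Y) pairs by ordinary least squares, then predict new Ys.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Standard OLS slope: covariance of (X, Y) over variance of X.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

# Training pairs follow Y = 2X exactly, so the fit recovers that rule.
predict = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
# predict(5) → 10.0
```

Everything from this line fit up to a large neural network is, in this view, the same shape of problem, just with a more expressive function class.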

I think a lot of the early AI research (not my specialty) had the idea that if we made a bunch of systems that were each good at their own piece of the puzzle, then we could just tack them together and get real intelligence. It just didn't turn out that way. Something I'm more familiar with is graphical models, and while they in principle could do amazing things when you stick little expert components together, we've proved that the complexity grows pretty badly in exactly the general cases that would have been most amazing. I'd bet similar things happened in other "let's put a bunch of specialized systems together" tracks. Maybe we can do it, but not the naive way that would have been great.
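To make the complexity point concrete (a sketch of my own, not the actual hardness results for graphical models): exact inference by brute-force enumeration over n binary variables already costs 2^n factor evaluations, which is the flavour of blow-up that hits the most general "stick expert components together" cases.

```python
from itertools import product

# Sketch: exact inference over n binary variables by brute-force
# enumeration touches every one of the 2**n joint assignments.
def partition_function(factors, n_vars):
    """Sum the product of all factors over every joint assignment."""
    total = 0.0
    evaluated = 0
    for assignment in product([0, 1], repeat=n_vars):
        p = 1.0
        for f in factors:
            p *= f(assignment)
        total += p
        evaluated += 1
    return total, evaluated  # evaluated == 2 ** n_vars

# Two trivially uniform "expert" factors over 10 variables: 1024 states.
Z, n_evaluated = partition_function([lambda a: 1.0, lambda a: 1.0], 10)
```

Structured special cases (trees, low treewidth) avoid this, but the fully general compositions are exactly where enumeration-like costs show up.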

Then you can get interesting and philosophical about it, where you might even say that emulating intelligence and intelligence are different. Like the Chinese room argument, or a character in a story vs. a physical person. I'd rather not weigh in on that right now, but there are good, interesting arguments both ways.


>Then you can get interesting and philosophical about it, where you might even say that emulating intelligence and intelligence are different.

This would be a very surprising result. For example, if I can make a TSP-solver-emulator... I have a TSP solver.
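To make that concrete (a hypothetical sketch; the function names are mine): a TSP "solver" is defined entirely by its input/output behaviour, so anything that emulates one, i.e. reliably returns minimum-length tours, simply is one.

```python
from itertools import permutations

# Sketch: a TSP solver is defined only by its I/O behaviour, so an exact
# emulator of one is indistinguishable from the real thing.
def tsp_tour_length(dist, tour):
    """Total length of a round trip visiting the cities in `tour`."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def brute_force_tsp(dist):
    """Exhaustively try all tours starting at city 0; exact but O(n!)."""
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda rest: tsp_tour_length(dist, (0,) + rest))
    return (0,) + best

# Symmetric 3-city distance matrix.
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]
best_tour = brute_force_tsp(dist)
```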


I guess I should have been more specific. I meant sorta convincingly emulating intelligence versus fully meeting some other definition of it. Is a Turing test enough?


Specialized AI - think about retina neurons, capable of detecting direction, edges, depth etc.

General AI - think about thinking machines

We can do the former, have no clue how to do the latter.


It's not such a huge jump to the latter when you nail the former.


This is what's called a bald assertion.


Can you give a little bit of details about your background in AI or at least statistics/machine learning?



