Neuromorphic mostly just means "like how the brain works". It encompasses a variety of software & hardware approaches.
The most compelling and obvious one to me is hardware purpose-built to simulate spiking neural networks (SNNs). In the happy case, SNNs are extremely efficient, consuming basically no energy. The sparsity of activations could fool you into thinking we can just simulate them on a CPU, and I think there is even a set of problems this works well for. But in the unhappy cases, SNNs are intractable to simulate on existing hardware: neuronal avalanches follow a power-law distribution, and meaningfully large ones would require very clever techniques to simulate with any reasonable fidelity.
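To make the avalanche point concrete, here is a minimal sketch, assuming a toy critical branching process as a stand-in for a real SNN (the model and all names are illustrative, not any particular neuromorphic toolkit). Event-driven simulation cost scales with the number of spikes, i.e. with avalanche size, and at criticality that size has a power-law tail:

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(branching_ratio=1.0, cap=10_000_000):
    """One avalanche in a toy critical branching process.

    Each spike triggers Poisson(branching_ratio) downstream spikes.
    At criticality (branching_ratio == 1.0) the avalanche-size
    distribution is a power law with exponent ~3/2, so the mean
    diverges: most avalanches are tiny, a few are enormous.
    """
    size = frontier = 1
    while frontier and size < cap:
        # sum of Poisson draws over the wavefront == one Poisson draw
        frontier = rng.poisson(branching_ratio * frontier)
        size += frontier
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])
print(f"median size: {np.median(sizes):.0f}")   # small
print(f"mean size:   {sizes.mean():.1f}")       # pulled up by the tail
print(f"max size:    {sizes.max():,}")          # occasionally huge
```

The median avalanche costs almost nothing to simulate, but the occasional giant one dominates the total work, which is the unhappy case above.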
> the system isn't just simulating neurons but involves a variety of methods and interactions across "agents" or sub-systems.
I think the line between "neuron" and "agent" starts to get blurry in this arena.
So we somehow want a network that is neuromorphic in structure, but we don't want it to be like the brain and take 20 years or more to train?
Secondly, how do we get to claim that a particular thing is neuromorphic when we have such a rudimentary understanding of how a biological brain works, or of how it generates things like a model of the world, a sense of self, and so on?
Something to consider is that it really could take 20+ years to train like a brain.
But once you’ve trained it, you can replicate at ~0 cost, unlike a brain.
Yes, but the underlying point is that in this case you can train the AI in parallel, and there's a decent chance this or something like it will be true for future AI architectures too. What does it matter that the AI needs to be trained on 20 years of experiences if all of those 20 years can be experienced in 6 months given the right hardware?
I think we're talking at cross-purposes here. I understand you, but what if the type of learning that leads to intelligence is inherently serial in some important way and can't just be parallelized? What if the fact that it takes a certain amount of chronological time is itself important? And so on.
What I'm trying to express is that we seem to want to cherry-pick certain features from nature and ignore others that are inconvenient. That is understandable, but because our knowledge of the biological systems is so incomplete, we really don't know which (if any) of these features gives rise to intelligence. For all we know, we could be training in a seemingly efficient way that completely precludes intelligence from actually emerging.
My take, for pragmatic reasons rather than fidelity to how the brain actually works, is that an agent-based architecture is great, because some tasks can be solved more effectively by specific algorithms or workflows than by operating at the low level of neural networks.
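A minimal sketch of what I mean, in Python (all names here are hypothetical; a real system would route to actual models and tools):

```python
from typing import Callable

# Hypothetical agent-style dispatcher: route each task to an exact
# algorithm when one exists, and fall back to a learned sub-system
# only for fuzzy tasks.

def sort_numbers(payload: list[float]) -> list[float]:
    return sorted(payload)  # exact algorithm: no NN needed

def classify_text(payload: str) -> str:
    # stand-in for a neural model call, e.g. model.predict(payload)
    return "positive" if "good" in payload else "negative"

HANDLERS: dict[str, Callable] = {
    "sort": sort_numbers,        # deterministic algorithm
    "sentiment": classify_text,  # learned sub-system
}

def dispatch(task: str, payload):
    return HANDLERS[task](payload)

print(dispatch("sort", [3.0, 1.0, 2.0]))
print(dispatch("sentiment", "this is good"))
```

The point is that the "agents" don't all have to be neural; the architecture just has to let them interact.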