There was this research where faces were almost perfectly reconstructed from a monkey's brain signals. How were they able to achieve such a near-perfect recreation from a monkey, but nothing even close from a human brain?
> How were they able to achieve such perfect recreation from monkey
Because the macaque study didn't decode faces from fMRI. They first used fMRI to locate the face patches, then used tungsten microelectrodes for single-unit electrophysiology to record spikes from individual face-selective neurons in ML/MF and AM. [0]
Single-unit recordings capture individual spike trains at a resolution that fMRI, which averages across hundreds of thousands of neurons per voxel, simply cannot provide.
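To make the voxel-averaging point concrete, here is a toy sketch with entirely made-up numbers (neuron counts, firing rates, and tuning are illustrative assumptions, not taken from the study): a single neuron's spike counts cleanly identify its preferred stimulus, while the summed "voxel" response looks nearly identical for every stimulus.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population: 100k neurons in one voxel, each tuned to one
# of 8 face stimuli. All rates below are made up for illustration.
n_neurons, n_stimuli = 100_000, 8
preferred = rng.integers(0, n_stimuli, n_neurons)  # each neuron's favorite

def responses(stim):
    # a neuron fires strongly for its preferred stimulus, weakly otherwise
    base = rng.poisson(2.0, n_neurons)
    boost = rng.poisson(20.0, n_neurons) * (preferred == stim)
    return base + boost

per_stim = np.array([responses(s) for s in range(n_stimuli)])

# Single-unit view: one neuron's responses clearly pick out its preference.
neuron = per_stim[:, 0]
print("neuron 0 prefers stimulus", int(neuron.argmax()),
      "- true preference:", int(preferred[0]))

# Voxel view: summing the whole population flattens the differences;
# every stimulus drives roughly the same total activity.
voxel = per_stim.sum(axis=1)
print("relative spread of voxel response across stimuli:",
      float((voxel.max() - voxel.min()) / voxel.mean()))
```

The per-stimulus voxel totals differ by only a couple of percent, even though each individual neuron is strongly selective, which is roughly why electrode recordings support reconstructions that fMRI cannot.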
So, the question I have is: could they reassemble the data from a person who has face blindness?
From my rather weak understanding of the subject, humans have a fast path for facial recognition that many other mammals lack. But in some people this is broken, or has been co-opted to quickly recognize something else.
Is there a case for non-dystopian applications for such a project, should it succeed?
I get that we're all driven by curiosity, and the brain is very mysterious, but at some point I really wonder when scientists will start to taboo projects like this for ethical reasons, just like they currently taboo human cloning.
Methods like MVPA (decoding among a finite set of cognitive-state classes) are actually widely used to gain insight in cognitive neuroscience.
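For a flavor of what "decoding among a finite set of classes" means, here is a minimal correlation-based MVPA sketch in the spirit of classic fMRI decoding work. Everything here is simulated with assumed numbers (voxel count, trial count, signal strength); it is not any particular study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50 voxels, 2 stimulus classes, 40 noisy trials each.
# Each class has a weak but consistent spatial activity pattern.
n_voxels, n_trials = 50, 40
patterns = {"face": rng.normal(0, 1, n_voxels),
            "house": rng.normal(0, 1, n_voxels)}

def make_trials(label):
    # signal is small relative to trial-to-trial noise, as in real fMRI
    return patterns[label] * 0.5 + rng.normal(0, 1, (n_trials, n_voxels))

# Build a per-class "template" by averaging training trials.
templates = {c: make_trials(c).mean(axis=0) for c in patterns}
test_trials = [(c, t) for c in patterns for t in make_trials(c)]

# Classify each held-out trial by which template it correlates with most.
correct = 0
for label, trial in test_trials:
    guess = max(templates, key=lambda c: np.corrcoef(trial, templates[c])[0, 1])
    correct += guess == label
accuracy = correct / len(test_trials)
print(f"decoding accuracy: {accuracy:.2f}")  # well above 0.5 chance
```

Note what this does and doesn't show: you can reliably tell *which of a known, finite set* of conditions a pattern came from, which is a long way from open-ended reconstruction of arbitrary mental content.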
Ethical concerns are discussed within the field; most papers had explicit ethics sections and discussions long before AI conferences required them for all submissions. In practice, these experiments require a participant lying motionless (head motion within ≈1–2 mm) in an MRI scanner with controlled gaze and attention for many hours, and even then zero-shot reconstruction is not really possible; the SNR requires many repetitions.
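The "SNR requires many repetitions" point is just averaging statistics: noise shrinks like sqrt(N) across N repetitions, so SNR grows like sqrt(N). A toy demonstration with made-up signal and noise levels:

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" response buried in noise; amplitudes are illustrative only.
signal = np.sin(np.linspace(0, 2 * np.pi, 100))
noise_sd = 5.0  # noise dwarfs the signal on any single repetition

def snr_after_averaging(n_reps):
    # average n_reps noisy repetitions, then compare signal to residual noise
    trials = signal + rng.normal(0, noise_sd, (n_reps, signal.size))
    avg = trials.mean(axis=0)
    residual = avg - signal
    return signal.std() / residual.std()

for n in (1, 16, 256):
    print(f"{n:4d} repetitions -> SNR ~ {snr_after_averaging(n):.2f}")
```

Each 16x increase in repetitions buys roughly a 4x improvement in SNR, which is why usable reconstructions demand hours of a cooperative subject repeating the same stimuli, not a passive scan.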
No, it's "this tool cannot be used by bad guys or good guys, but it can be used by highly funded labs that do neuroscience." It's something that freaks people out until they gradually learn what is actually involved.
^ There's a research team at Meta that studies this. You need an MEG -- that's $2-5M, plus the shielded room it lives in and the experts who can operate it.
EEG doesn't work here because of its low spatial resolution and how finicky electrode placement is to get a good signal.
The signals from neurons are just unbelievably tiny and are in an absolute sea of noisy trash. No one is ever going to read your thoughts without your consent (or by wrestling you into a big MEG, in which case you have bigger things to worry about). No one is going to be reading your dreams with any sort of accuracy either.
But in seriousness: not news and doesn’t change any of what I said. You have a class of 20 objects that they recall as they dream. Same setup (fMRI), small n, very very simplified design.
Look, the reason we can't do this is both physical AND information-theoretic. In the best case you are getting an EXTREMELY reduced-dimensionality signal. It's not as though this is an early-days-of-AI thing, where "it's not possible today, but there's nothing in principle stopping us from a Kurzweil-like world". It's just not really possible.
Anyway, the studies on this are restricted to specific neuroscience questions. The paper shows that dreams contain object-like representations in the visual cortex: this is cool! And important! But it doesn't imply anything for decoding thoughts and dreams.
https://www.bbc.co.uk/news/science-environment-40131242