As a curious individual identifying as a scientist at heart, I tend to agree. I know I do my best to adopt an understanding of reality and base things on it, but more often than not I'm forced to adopt some correlation and go with that until I can find a better foundational concept to build on.

I'd say I do better in this regard than many of my human peers, who just adopt a correlation and go with it. At some point we have to wonder if we as humans simply have a head start of millions of years of evolution: embedded correlations in our makeup and an interpretive strategy for surviving reality.

If that's the case, then at what point can machines produce similar or perhaps better correlative interpretations than humans, and what do we consider the basis for comparison: reality itself (which we often don't seem to understand) or our own ability to interact with and manipulate reality?

There's a deep, perhaps unconscious bias for us humans to think we're special and to differentiate ourselves, perhaps as a sort of survival mechanism. I am unique, important, have self-determination, etc., because I don't know how to view myself outside this framing of the world. What am I if I'm just a biological correlation machine? I like to think of myself as special because it can be depressing to think otherwise.

Personally, I adopted a more Epicurean flavor of perspective on life years ago, in that I tend to focus on my well-being (without oppressing others). If I am just a biological machine, that's fine, as long as I'm a happy biological machine and survive to continue my pursuit of happiness. Whether AI is conscious, or all that different from me, isn't that important so long as it doesn't affect my happiness in a negative way.

There are many cases in which it very well could, so overall I'm a bit of an opponent, because frankly I don't think what's going on with AI is all that different from what we do biologically. We don't really understand consciousness at all, so what's to say we can't accidentally create consciousness given the right combination of computational resources? Current correlative reasoning structures aren't really that similar to what we know is going on at a biological level in human brains (the neural models simply aren't the same, and aren't even a clean reductionist view). Some models have tried to introduce these; maybe they're sort of converging, maybe they're not. Regardless, we're seeing the correlative reasoning ability of these systems improve, approaching what I'd argue a lot of humans seem to do. So, personally, I think we should tread cautiously, especially considering who it is that "owns" and has (or will have) access to these technologies (it's not you and me).

We've had jumps in computing over the years that have forced humans to redefine ourselves by differentiating what's possible for machines from what we are. Arguably this has gone on since simple machines and tools, but with less threat to our definition of self. I always find it curious how we, or at least some of us, seem to be in a continuous pursuit to replace ourselves, not just through reproduction and natural life/death processes, but by fully replacing all aspects of ourselves. It seems to have been accelerated by modern economic systems, and I'm not sure what end this pursuit is actually seeking. As a society it doesn't seem to be helping the common man; it seems to be helping a select few instead, and we need to ask whether it's going to help us all, and how.
