I feel this, because it’s like I don’t need to know about something; I just need to know how to know about something. Like, the initial contact with a mystery subject is overcome by knowing how to describe the mystery in a way that the AI understands what I don’t understand, and sets out to fill in the understanding.
An example: I have no clue about React. I do know why I don’t like to use React and why I have avoided it over the years. I describe to some ML tool the difficulties I’ve had learning React and using it productively... and voila, it plots a course through the knowledge that, kinda, makes me want to learn React and use it.
It’s like the human ability to form an ontology in the face of mystery, even if it is inaccurate or faulty, allows the AI to take over and plot an ontological route through the mystery into understanding.
Another thing I realized lately, as ML has taken over my critical faculties, is that it’s really only useful for things that are already known by others. I can’t ask ML to give me some new, groundbreaking idea about something - everything it suggests has already been thought, somewhere, by a real human - and thus it’s not new or groundbreaking. It’s just contextually - in my own local ontological universe - filling in a mystery gap.
Pretty fun times we’re having, but I do fear for the generations that will know and understand no other way than to have ML explain things for them. I don’t think we have the ethical tools, as cultures and societies, to prevent this from becoming a catastrophe of glib, knowledge-less folks, collapsing all knowledge into a raging dumpster fire of collective reactivity, but I hope someone is training a model, somewhere, to rescue us from this, somehow...
> But when they came to writing, Theuth said: “O King, here is something that, once learned, will make the Egyptians wiser and will improve their memory; I have discovered a potion for memory and for wisdom.” Thamus, however, replied: “O most expert Theuth, one man can give birth to the elements of an art, but only another can judge how they can benefit or harm those who will use them. And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.”
That's an interesting and very fitting quote. It's basically saying that since we can now write down information, people will get lazier about remembering things - the exact same claim as the submission article.
I think there is some validity to the notion of generational knowledge loss through differing information systems. At one end of the scale, you’ve got 80,000-year-old stories, still being told - at the other end of the scale, you’ve got App Of The Day™ style social media, and kids who can’t write an email, use a dictionary, or read a book.
This is no hyperbole - humans have to constantly fight the degeneration of our knowledge systems, which is to say that knowledge has to be generated and communicated. It can’t just “exist” and be useful; it has to be applied to be useful. Knowledge technology that doesn’t get applied does not persist, or if it does (COBOL), what was once common becomes arcane.
So, if there is hope, it lies with the proles: the way everyday people use ML is probably the key to all of this. It’s one thing to know how to prompt an LLM to give you a buildable source tree; it’s another thing entirely to use it somehow to figure out what to make out of the leftover ingredients in the fridge.
Those recipes, and indeed the applications of the ingredients, are based on human input and mores.
So the question for me, still really unanswered, is: How long will it take until those fridge-ingredient recipes become bland, tasteless and grey?
I think this underscores the imperative that AI and ML must never become so pervasive that we don’t, also, write things down for ourselves. Oh, and read a lot, of course.
It seems we need to stop throwing books away. Oh, and encourage kids to cook, and create their own recipes... hopefully they’ll have time and resources for that kind of lifestyle…
No doubt this curse (which also lacks generalization, i.e. evolution/generalization/specialization) is all for the sake of self-awareness, or at least awareness of some particular thing.
As long as humans remain aware that they are engaging with an AI/ML, we might still have a chance. Computers definitely need to be identifiable as such.