I'd bet people outside the field are going to be disillusioned before long as the marketing BS falls apart. Even years ago, I was already getting questions from friends that surprised me with how detached their perceptions were from the reality of state-of-the-art ML. Eventually, they'll realize that no, we're not building superhuman AI today or soon.
However, there's a ton of cool shit happening and tons of stuff already powering industry $$$. Some of this only works because of mismatched expectations, like companies who are trying to sell magic. They will fail as their customers stop buying products that fail spectacularly 2% of the time but really can't be allowed to fail 2% of the time.
But there's tons of room in the market for stuff that fails spectacularly 2% of the time. For that reason, I don't see a "real" reckoning coming. People's expectations are too high, sure. But the reality is still Good Enough™ for a lot of problems and getting better.
>Some of this works because of mismatched expectations, like companies who are trying to sell magic.
We were asked to look into a company that had an, I kid you not, "AI-powered problem solving platform".
The suggestions it spit out in their demo for agriculture were things like "What if the earth was upside down", and I read it like "Maaaaan, what if, like, just imagine like, the earth wasn't like below, but it was like, above... Duuuuude. Just imagine.". i.e., you could get these recommendations with a few dollars worth of hashish.
The modern mechanical turk. Pay someone to be very stoned and to sit in a box made to look like a supercomputer. Have it occasionally print a prompt for the "AI" to reflect on.
Add a pitch deck and a blockchain and you have a startup with the right buzzword cloud.
> commercial successes of AI have been primarily in advertising, marketing, and investing.
Which is why people say that it has few useful applications. People don't care if those areas become more effective. Another important application is mass surveillance, which people would also argue isn't a good thing.
Amen. Translation's such a great example. I got some ingredients at a local market recently and used the google image translate function on it. I'm sure the translation was way off, and parts were entirely nonsensical, but it was enough for me to infer what the directions were. Tons of value in imperfect translation.
Yeah, also MoE models like Wu Dao 2.0 that, if combined with FB's work on hypothesis generation (causality testing) and Codex, could give some interesting applications like auto-writing programs to specification.
There was also some progress with geometric deep learning, which kinda unifies architectures AFAIK.
That’s an interesting insight. What are the long term societal implications of enabling/inducing (Jevons paradox) activities which are not “important enough” to need 100% correctness?
There was a saying before 2008: "How do you start an AI based FOREX trading fund? You pretend to be the Russian prodigy developing the algo, while I do the insider trading".
Can I propose another insight? AI sometimes works and works spectacularly and does not fail in 2% of the cases, or even in 0.02%. Just look at iPhone photos. But that AI application requires deep domain knowledge + deep AI (or, well, ML, whatever) knowledge. You can't simply throw sklearn at an arbitrary problem and expect to get great results.
iPhone photos do some weird shit a small portion of the time too. I've had mostly great experiences, as well as the rare unrealistically colored outcome or misplaced bokeh.
"AI" is not used at all in ad tech. (This is actually a problem when hiring people today. Nobody can do machine learning without some stupid big neural net framework.)
It is, but it isn't as widespread as some think it is. Old-school techniques like decision trees perform better for almost every ad tech task and are much cheaper, so if you can't do those you are basically worthless in the field. But sometimes a neural network is actually useful.
This appears to be the pattern of AI development overall: booms and busts of hype and disillusionment that leave behind useful tools that most don’t consciously consider to be remnants of a previous AI bubble.
I went through Stanford CS in the mid 1980s, just as it was becoming clear that expert systems didn't really do much. There had been Stanford faculty running around claiming "strong AI real soon now" to Congress. Big-time denial as the field collapsed. Almost all the 1980s AI startups went bust. The "AI winter" followed. Today, expert systems are almost forgotten.
This time around, there are large, successful industries using machine learning. Tens of thousands of people understand it. It's still not "strong AI", but it's useful and profitable. So work will continue.
We're still at least one big idea short of "strong AI". A place to look for it is in "common sense", narrowly defined as "not doing something really bad in the next 30 seconds". This just requires lower mammal level AI, not human level. Something with the motion planning and survival capabilities of a squirrel, for example, would be a big advance.
(I once had a conversation with Rod Brooks about this. He'd been building six-legged robot insects, and was giving a talk about how his group was making the jump to human level AI, with a project called "Cog". I asked why such a big jump? Why not try for a robot mouse, which might be within reach. He said "I don't want to go down in history as the man who created the world's best robot mouse". Cog was a flop, and Brooks goes down in history as the creator of the robot vacuum cleaner.)
> There had been Stanford faculty running around claiming "strong AI real soon now" to Congress.
That would be Ed Feigenbaum, the man who (IMHO) almost single-handedly brought on the AI winter of the 80s. Because of him we had to call all the AI research we did in the 90s something other than "AI" lest it get shut down instantly.
Um, yes. His book, "The Fifth Generation - Artificial Intelligence and Japan's Computer Challenge to the World", lays out his position. For a short version, here's his testimony before a congressional committee, asking for funding.[1] "The era of reasoning machines is inevitable. It is the "manifest destiny" of computing."
He was taken seriously at the time. Chief Scientist of the USAF at one point. Turing Award.
> The era of reasoning machines is inevitable. It is the "manifest destiny" of computing.
Isn't this pretty much the current opinion of the majority of the thousands of AI researchers and programmers today? Maybe this guy was early to the party but his vision seems in alignment with today's practitioners.
Yes; if anything the hype is even stronger today. The difference is that neural nets on modern hardware really do quite a few useful things, while expert systems on 80s hardware mostly did not.
The reason the hype of today is more dangerous is what I call the grandmother issue: In the 80s we dreamed of a program that would recognize your grandmother when she walked in the room. That was one of our holy grails.
Good news: The grandmother recognizer was finally built with deep neural nets in the 2000s and it works pretty well. It's not perfect but if you tell it to play grandma's favorite song when she walks in the room it's no big deal if it occasionally fails to recognize her, or if it plays the song when someone other than your grandma walks in the room.
Bad news: We now have people who (effectively) want to attach the grandmother recognizer to a machine gun with instructions to shoot your grandmother and only your grandmother, and never to fail to shoot her if she walks in the room. Suddenly the Type 1 and Type 2 errors are a whole lot more consequential. Modern NNs are simply not fit for that job.
We have pretty good AI for low-consequence purposes, but it cannot be used for high-consequence purposes without a lot more fundamental research. Incremental improvements to deep learning are not going to get us there.
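To put rough numbers on that base-rate point (everything below is invented purely for illustration, a minimal sketch rather than anyone's real deployment):

    # Same recognizer, same Type I / Type II error rates; only the cost of an
    # error changes between the two uses. All numbers are made up.
    fp_rate = 0.01          # Type I: fires on someone who isn't grandma
    fn_rate = 0.02          # Type II: fails to fire when grandma walks in

    grandma_entries = 100   # times grandma walks into the room per year
    other_entries = 5_000   # times anyone else walks in per year

    false_alarms = other_entries * fp_rate    # ~50 per year
    misses = grandma_entries * fn_rate        # ~2 per year

    print(f"Song player: ~{false_alarms:.0f} wrong songs, ~{misses:.0f} missed greetings a year -- a shrug.")
    print(f"Granny gun:  the same ~{false_alarms:.0f} and ~{misses:.0f} are now catastrophic failures.")

Same model, same error rates; the only thing that changed is the price of each error.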
Wellllll, um, not really.... Any NN recognizer can be fooled and your neighbor's dog may as likely be shot by the "granny gun AI". You need to read up on the validity of facial recognition software. It ain't nearly human!
It's an indication that we're missing something. A squirrel has somewhere around 75 million to 100 million neurons. That's not all that many in terms of modern hardware. We ought to be able to build squirrel-level AI with cellphone-sized hardware. What are we missing?
There is a big jump from "let's create some algorithms and data structures inspired by a simple model of the brain and see what kind of problems these algorithms are useful for" to "why doesn't our model fully and faithfully represent an actual brain".
The first of these questions has led to real progress in things like image recognition, whereas the second of these questions has not led any real progress in digitizing squirrel brains.
OpenWorm is, after years of work, up to nematode level.[1] Their WormSim is an emulation, at the neuron level, of the simplest creature whose nervous system has been fully mapped. 302 neurons, 25 muscle cells. Runs in a browser.
Don't you think, though, that each boom and bust cycle leaves us closer to real accomplishments?
We now have protein folders and superhuman Go players -- that's new.
I agree that ML ("AI") is currently at the alchemy stage. And, guess what? A neural network isn't even Turing complete! [citation needed - correct me if I am wrong.] So ML can only compute SOME functions.
AGI, when it comes, and believe me, it will, will have ML as part of its structure, but only a small part.
Of course that depends on what you mean by "real accomplishments".
It seems to me that deep nets have effectively maximized the potential of using gradient pursuit to model patterns. But if you remove gradients from your data, or shrink your data down to tens of samples, or shift the problem to logic, or need to use functions that aren't convex or differentiable, deep nets run smack into a wall.
Luckily human perception makes extensive use of gradients, as does most search, so problems in those arenas have been unsurprisingly amenable to solution using deep nets (vision, speech, game play, etc). But many of the problems that remain untouched by DL, like human cognition, are NOT driven by gradients. Will deep nets eventually fill that void? I doubt it. You can convert only so many problems with big data into gradients to pursue them efficiently with DL before that transformation trick runs out of steam.
Personally I think deep net language modeling is one of those areas, and soon we'll encounter the limit of their generalizable contextual phrase association. Then because deep nets are so difficult to selectively revise or extend the specifics that they have learned, the vanguard of ever more complex deep nets (transformers) will eventually sink beneath their own weight, taking the last best hope for DL-based general AI with them.
> But if you remove gradients from your data, or shrink your data down to tens of samples, or shift the problem to logic, or need to use functions that aren't convex or differentiable, deep nets run smack into a wall.
We're making some progress on the convexity bit (heat functions, RL, etc.), but yes, there are other areas of statistical research all involved in trying to solve those sorts of problems.
ML is not necessarily a panacea, but just because you can point to problems that it doesn't solve doesn't mean it has no "real accomplishments."
> many of the problems that remain untouched by DL, like human cognition, are NOT driven by gradients.
I mean, each cycle does leave us with real accomplishments. If the question is whether it leaves us closer to AGI, then it's an open question. Like, when AGI happens, it will certainly trace its roots back to things happening in each period, but its roots will also go back to Gauss and Newton and co, so nobody knows whether it gets us closer in the way you probably mean.
ML is Turing complete in the sense that every computable function can be approximated to arbitrary precision by a 3-layer neural network. Classic result from the 90s; the paper in question (iirc) is titled something like "neural networks are universal approximators". Turing machines also can only approximate to arbitrary precision, so the computation models are equivalent.
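For intuition, here's a minimal sketch of that approximation property in action: one hidden layer of tanh units fit to sin(x). The width, iteration count, and target function are arbitrary choices of mine, not anything from the paper.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # One hidden layer fitting sin(x) on [-pi, pi]; widen the layer or train
    # longer and the fit gets closer, which is the universal-approximation
    # idea in miniature. Hyperparameters here are arbitrary.
    X = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
    y = np.sin(X).ravel()

    net = MLPRegressor(hidden_layer_sizes=(64,), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(X, y)
    print("mean squared error:", np.mean((net.predict(X) - y) ** 2))

(The classic theorems are about approximating continuous functions on a bounded range like this one.)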
I have no idea if each boom brings us closer. Scientific discovery is not exactly a linear process; we can’t observe before the fact if we’re getting closer or are on a dead end path, and that’s even assuming we can ever get there at all.
But it's hard to deny that each boom gives us something useful. Neural Nets might not exactly make an AGI, but they do have uses.
I think there are these technological problems, many of which overlap with AI, that a lot of politically powerful people want to see solved to enable the World Economic Forum-lauded "Fourth Industrial Revolution" [1], and they are willing to invest insane amounts of money in them without expecting much of a return, thus creating really big p/e ratios. Examples include:
1. Self-driving cars. The ultimate being no steering wheel and carefully software controlled and permissioned as to where you are allowed to go.
2. AI powered Body monitoring devices (e.g Fitbit) that can be used to biologically monitor millions of consumers body states and functions.
3. Any new kind of surveillance technology. Miracle mass surveillance is something that's really interesting to this market.
4. Large centralized social media networks that aggressively moderate content with AI.
5. Various kinds of transhumanist biotech that design all sorts of biotech stuff with AI. What the heck is Calico up to anyway?
6. Blockchain stuff. AI enabled or otherwise.
They are also interested in environment tech like any kind of alternative energy, no matter how speculative or impractical, and in fake meat, but I digress.
So I would guess the bubble is the WEF crowd and friends with absurd amounts of money trying to make their future, for good or ill, a reality and investing without much regard for the economics of the project.
It is logical. Protect the elites with automation and control the masses to prevent a "Second French Revolution".
Which may be postponed but doubtfully avoided.
You will own nothing and will be happy.:)
that's the right answer. New technologies are always overestimated in the short term, but underestimated in the long term. The long-term result usually is not a generalized solution, but it is still a true solution.
speech recognition hype... now most phone trees use it.
electric car hype... now california is full of them.
self driving cars... now many new cars have driver assist.
Post must be machine-generated - it says nothing correct.
Electric cars are NOT AI!
But it is true that "california is full of them", whatever "them" may be. That may have something to do with the crowd (whores and madmen) attracted by the California Gold Rush and whose descendants now mine their wealth in VC endeavors, including deep learning.
I was making a general statement about "technologies"...
I'm saying AI is a technology that is being overestimated now. People are trying to use it for everything now. I think in the future when the hype has worn off it will not have worked for everything hyped. But it will have worked well for a couple things to the point they are taken for granted or have become invisible.
People believe that achieving X means the agent will be intelligent and can thus do Y, Z, A, B, C etc.
But then someone builds an agent that can do X, but it can't do anything else. So people don't view it as intelligent, since it doesn't do any of all the other things people associated with being able to do X.
So it has nothing to do with getting used to computers solving new problems; it is that we now realize that solving those problems doesn't really require the level of intelligence we thought it would. Of course solving new problems is great, but it isn't nearly as exciting as being able to create intelligent agents.
Edit: And it isn't just hobbyists making this mistake, you see big AI researchers often make it as well and they have for a long time. You will find tons of articles of AI researchers saying something along the lines of "Now we solved X, therefore solving A, B, C, D, E, F is just around the corner!", but they basically never deliver on that. Intelligence was harder than they thought.
So, our brains are just a little ecosystem of competing functions, and our pre-frontal cortex / prediction engine (the piece most AI researchers care about) is trained on species survival, but beholden to the "lizard brain."
I'm still on the bandwagon that AGI is already solved, and we just don't recognize it because we aren't as complex or magical as we like to pretend we are. We are just currently selected for species survival, and not even for the present environment, in which we are the dominant species of the planet. That doesn't make us "generally intelligent" any more than many other systems on the planet (including the computational ones). I'm not even convinced that "intelligence" is especially a thing.
If we're trying to replicate "human intelligence," then I think that's something else entirely, and would require removing a lot of capability from the AI that I don't think anybody would want to remove from AI, given that we have plenty of serviceable humans to do the work of human-intelligence style tasks.
No, when folks say AGI, they mean "cares about the things I care about, and is more accurate, but can speak in a way I understand." But how is a network of neurons going to care about what you care about if they see the world in a different way, or have been trained on solving a different set of problems than "living a life as a human?" It just doesn't make sense to expect, or even want, human intelligence from a synthetic intelligence.
Exactly. People make the case that something along the lines of AGI is necessary and sufficient for some task, then someone comes along and solves it with something completely unlike AGI, proving that it's not a necessary condition. Or in more straightforward but less precise terms, solving this is AI and then it isn't.
The same will happen with deep learning networks because if we're honest no serious researcher would really consider this "AI" either. It's just a big self-correcting linear algebra machine. Researchers will keep using that "AI" label for as long as it gets them research grants or selling products.
I wouldn't call it a "linear algebra machine" when the central point (of NNs) is that they are highly non-linear (the deeper, the more non-linear, roughly)
Precisely. AI outside the beaten-to-death academic problems and datasets is a vast and completely un-plowed field with enormous potential. And unlike last time, there are some sub-fields of this that do actually work this time (computer vision, signal processing, speech recognition/synthesis, some NLP tasks, etc) and beat older baselines, often by a mile. There is a lot of froth, but there's quite a bit of substance as well, and the field is moving faster than any other I have ever experienced.
I worked for a startup for a while that, late in its life, decided it needed to use AI in their product. Then that turned into needing ML in the product. Then it turned out it was just the owners trying to market the product as AI and ML powered when half of the product was the frontend used to configure the decision engine. The manually-configured decision engine was being sold as AI and ML, with the terms used interchangeably. I was actually a little surprised when it didn't work out, but not that surprised.
The main issue is that people have unrealistic expectations in domains that are personally relevant, medicine being an iconic example. So, there there might still be a 'reckoning' of sorts, even if Walmart applies AI successfully behind the scenes to power non-critical user-facing stuff.
Oh for sure. I guess I could have worded it better (not just scare quoted "real"), but it seems like a public expectations reckoning will happen, but not a reckoning that really affects practitioners who aren't 'script-kiddies' for lack of a better term.
(side note, does anyone know a less insulting way to refer to the group people call 'script-kiddies' that still gets the same point across?)
I don't believe that there is a polite way to say "These people have no idea what they're actually doing, they're applying other people's logic blindly, and if anything goes wrong they're stuck."
If that's not what you meant to communicate, you should probably explain more what you mean by 'script kiddie'.
There certainly are tasks where a 2% failure rate is fine, but even more importantly, where we see ML/DL having the biggest impact today is in regards to complex system-type problems, where there is often a less-than-discrete notion of failure.
Looking at apps we use every day, almost all of them owe some core feature to ML/DL. ETA prediction, translation, search, spam filtering, speech synthesis, autocomplete, recommendation engines, fraud detection—and that's not even touching the world of computer vision behind nearly every popular photo app.
A key understanding gap in the general public's knowledge of ML is that people think AI === Skynet, and they've therefore been lied to about the field's progress and impact, when in reality, they probably interface with a dozen pieces of technology that are built on top of recent breakthroughs in ML/DL.
Something that I'm fascinated with is the question, would other models show equivalent results if given the same amount of compute?
I looked through the list of solvers for the protein folding challenges and there were other deep learning, neural network, and classical machine learning approaches on there. Even some hybrid ones! But none of the participants had even a fraction of the compute power that AlphaFold had behind it. Some of the entries were small university teams. Others were powered by the computers some professor had in their closet (!). Most of the teams were dramatically under-powered as compared to AlphaFold. How much did this influence the final result?
What would the other results look like if they'd been on equal footing? Would they have been closer?
Probably not. It's not clear how to scale other methods to make use of so much compute. They top out sooner, with fewer parameters and less compute.
One way to look at why deep learning is having the impact it does is that unlike other ML methods, it's actually capable of making use of so much compute. It gives us modular ways to add more and more parameters and still fit them effectively.
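As a toy illustration of that knob (layer sizes below are arbitrary): the parameter count of a plain stack of dense layers grows smoothly as you widen or deepen it, so there's always a bigger model to spend the extra compute on.

    # Parameter count of a stack of fully connected layers (weights + biases).
    # The specific sizes are made up; the point is how easily the count scales.
    def dense_params(sizes):
        return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))

    print(dense_params([512, 512, 512, 10]))   # ~0.5M parameters
    print(dense_params([4096] * 8 + [10]))     # ~117M parameters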
What always astonishes me is that deep learning seems to work on human timescales. For other problems (even if they are polynomial in complexity) we get into infeasible runtimes when we increase the problem's complexity by 10x. With deep learning, the fuzzy, approximative nature seems to help grasp the gist of the 10x problem and somehow allows us to reach 95% of the solution in just e.g. 2x the runtime. Heuristics might play in the same league, but their development time kind of scales with the problem, while in deep learning I would put it in the linear or log basket.
First, having 'infinite compute' is a way for researchers to be sure that compute isn't the thing holding back their method. So, DeepMind made a protein folder using all the compute they had available; later others managed to greatly reduce the amount of compute needed to get equivalent results, by re-implementing and innovating on DeepMind's initial write-up.
Second, I think there's a lot of interesting ground to explore in hybridizing ML with more classical algorithms. The end-to-end deep learning approach gives us models that get the best scores on imagenet classification, but are extremely prone to domain shift problems. An alternative approach is to take a 'classical' algorithm and swap out certain parts of the algorithm with a deep network. Then you should often get something more explainable: The network is explicitly filling in a specific parameter estimation, and the rest of the algorithm gives you some quality guarantees based on the quality of that parameter estimator. I saw some nice work along these lines in sparse coding a couple years ago, for example...
No, some DL architectures are unquestionably superior to others, especially for some tasks. In the first few years of deep CNN development, several limits quickly became apparent (sensitivity to scale, vanishing gradients, gradient signal loss with some activation functions, and many engineering constraint tradeoffs like poor RAM use). These were addressed by making changes to a simple dense net, which begat architectures that suffer less from those limits.
Without question, limits exist in every DL architecture; we just haven't taken the time to diagnose or quantify all the limits yet. Now that attention has shifted to transformers, analysis of DNNs' inherent limits is made substantially more difficult, given transformers' huge size. That, and their multi-million dollar training cost, likely will make it infeasible to diagnose or cure each design's inherent limits.
The advantage of DeepMind from more compute resources is cumulative, you can develop better models if you have more compute. So the question really is: If academia and other research institutions had DeepMind's resources, would their output match DeepMind's?
Papers say that both are trained on the PDB dataset. And still, we see a dramatic gap between old and new AlphaFold models. Both were trained by DeepMind, probably with similar compute power. I think it's obvious that it's not just compute power; method matters a lot.
actually, companies can buy the amount of TPU time that AlphaFold used. Note that the price breaks from "here is a listed price" to "contact sales" at the largest size (TPU v3-2048). I assume that during initial training and experimentation DM kept one or two TPU v3-2048s busy 24/7 for 6 months to a year. That's entirely within the budget of well-heeled companies.
ANNs are among the few kinds of programs that perform significantly better on small problems with exaflops than with teraflops. Gurobi, LS-Dyna, or American Fuzzy Lop will produce only marginally better results as you scale from a closet full of computers to a warehouse-scale computer. GCC or linear regression won't even work marginally better.
You can't just throw more compute at an algorithm and have it spit out better results! Well, unless your algorithm uses deep learning, that is. And that's why teams with a lot of compute use DL, and why teams that use DL use a lot of compute.
Presumably the team started with existing methods on their fleet of compute and was forced to develop new ones. While development happens your fleet also grows and your ability to utilize it expands and you end up not going back to ever try methods that are generations behind on your current fleet because the overhead of making it work doesn't justify it.
It's like asking if using the google homepage from 10k iterations ago would perform better than the current version on the present user cohorts. There's just too much invested to justify testing things like that when you can use the time to improve what is showing promise.
I don't think computational resources will have a greater effect; you can see this with a simple plot of running time on the x axis and model performance on the y axis.
Imagine every program that fits within 10TB. It's a finite number of programs. Inside there is GPT-3, 2 and 1. Every GAN. Tesla FSD. It's everything. What this entire industry is really about at its core is finding programs in that set that behave in intelligent ways. That's all. Recently we've had success in letting the computer dig through programs automatically until it finds a good one. So overnight we went from testing maybe hundreds per year to billions per month or something like that. So obviously things have been getting weird.
Imagine how many we’ve dug up so far out of the total set. It’s an infinitesimal percentage. We haven’t even scratched the surface.
Imagine the set. What’s inside? Is there something inside that we will regret?
While there are those in the AI community who believe scaling laws will unearth such programs, they are deluded. There are problems which scale faster than our computational resources allow, even if hardware scaling continues unabated for millennia to come.
The space of all programs in 10TB is far too large to count, even if we could harness galactic computation. Even within a much smaller search space, there are valid programs which cannot be found by gradient descent. Let BT(n) be the number of distinct binary trees less than or equal to height n. This number scales according to the following recurrence relation:
BT(n+1)=(BT(n)+2)²−1
Consider the space of all binary trees of height 20 - there are fewer atoms in the visible universe than trees in that space. And this is just laying out bits on a hard drive. There are other functions (e.g. Busy Beaver and friends) which scale even faster. The space of valid programs in 10TB is too large to enumerate, never mind evaluate.
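You can watch the blow-up by just iterating the recurrence (taking BT(0) = 1 as an assumed base case, which the comment above doesn't pin down; any small starting value gives the same doubly-exponential picture):

    # Iterate BT(n+1) = (BT(n) + 2)^2 - 1 and track how many decimal digits
    # the count has. BT(0) = 1 is an assumed base case for illustration.
    bt = 1
    for n in range(1, 21):
        bt = (bt + 2) ** 2 - 1
        print(f"height <= {n:2d}: BT(n) has {len(str(bt))} digits")

The digit count roughly doubles each step: by height 20 you're at hundreds of thousands of digits, against the roughly 80-digit count of atoms in the visible universe.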
In case anyone here is interested in learning more about program synthesis, there is a new workshop at NeurIPS 2021 which explores some of these topics. You can check it out here: https://aiplans.github.io/
You're not wrong, but the numbers are quite different if you have a strong search heuristic. And that is where neural nets excel. See AlphaGo and derivatives. So this isn't just some random gradient descent exploring the full space; a properly designed and trained neural net will effectively cordon off vast swaths of the search space, and suddenly the problem is probably many orders of magnitude more tractable.
That's really why deep learning shines in technical applications. Solution search heuristics were until recently solely within the domain of biological neural networks; now we have created technology which is capable of extracting superior heuristics over the course of learning. And it's already paying off in industrial science, despite the cries of naysayers, though the applications are still in infancy.
There is a vast chasm of computational complexity between Chess, Go, and protein folding on one hand, and program induction on the other. Unlike problems where the configuration space grows exponentially, the space of valid programs is at least super-exponential and, depending on the language family (e.g. context free, context sensitive, recursively enumerable), can often be undecidable. Furthermore, many language induction problems do not have optimal substructure or overlapping subproblems, two important prerequisites for reinforcement learning to work. In these settings, gradient-based heuristics will only get you so far.
If you are interested in learning more about the limits of gradient descent, you should look into the TerpreT problem [1]. There are surprisingly tiny Boolean circuits which can be found using constraint solving, but which we have not yet been able to learn with gradient-based methods, despite the success of reinforcement learning in other domains. I'm not saying that program induction is impossible, but it is extremely hard even for relatively "simple" languages like source code.
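To get a feel for why discrete search shines here, here's a toy version (naive enumeration rather than the constraint solving used in the TerpreT work, but the search problem has the same shape): find a circuit computing XOR using only NAND gates.

    from itertools import product

    # Naive enumeration over tiny circuits: each gate is a NAND of two earlier
    # wires (wires 0 and 1 are the inputs). XOR needs at least four NAND gates,
    # and exhaustive search finds an exact solution almost instantly.
    def nand(x, y):
        return 1 - (x & y)

    TARGET = {(a, b): a ^ b for a, b in product((0, 1), repeat=2)}

    def run(circuit, a, b):
        wires = [a, b]
        for i, j in circuit:
            wires.append(nand(wires[i], wires[j]))
        return wires[-1]

    def search(circuit=(), max_gates=4):
        if circuit and all(run(circuit, a, b) == out for (a, b), out in TARGET.items()):
            return circuit
        if len(circuit) == max_gates:
            return None
        n_wires = 2 + len(circuit)
        for i in range(n_wires):
            for j in range(n_wires):
                found = search(circuit + ((i, j),), max_gates)
                if found:
                    return found
        return None

    print(search())  # one valid 4-gate NAND circuit for XOR, as (wire_i, wire_j) pairs

That mirrors the point above: the discrete/constraint formulation recovers the exact circuit, while the gradient-based relaxations studied in TerpreT often don't.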
There are two answers to your question. One is, humans have simply not found very good solutions in these spaces. We have designed algorithms which barely work on denumerable sets, but are unsuitable in more general spaces. There are many decision problems which are known to be undecidable, i.e., there exists no algorithm that decides first order logic propositions. There are fragments which are decidable, and essentially all useful algorithms in CS fall into this category.
The second answer is that we are exploiting statistical regularities in the data which make these problems effectively regular or context-free in practice. When someone asks you to solve a new programming problem, you are applying some heuristics that have worked on similar problems in the past. Given a truly novel problem, you can either make some assumptions to reduce it into a more tractable form (e.g. 3-SAT), or design some clever search heuristic, but without any prior examples of programs or a distribution over probable inputs, you can do no better than naïve search.
>The second answer is that we are exploiting statistical regularities in the data which make these problems effectively regular or context-free in practice. When someone asks you to solve a new programming problem, you are applying some heuristics that have worked on similar problems in the past. Given a truly novel problem, you can either make some assumptions to reduce it into a more tractable form (e.g. 3-SAT), or design some clever search heuristic, but without any prior examples of programs or a distribution over probable inputs, you can do no better than naïve search.
How is this different from training a neural network on a data set and relying on interpolation for inference? Isn't that exactly what neural nets learn, statistical relationships between inputs? After all, isn't that effectively the mathematical definition for a heuristic? You don't know exactly what the solution is, but there are similarities between past examples you've encountered, and these form priors for your shortcut through the solution space.
I'm not sure if neural nets can extrapolate (which would imply searching outside of the training space, i.e. innovating) but if they are truly universal function approximators, I don't see why not. Especially if the solution space is smooth and continuous in the extrapolation range...whatever that means in high D space.
To your first point, it seems you are saying that neither humans or neural networks can solve these intractable problems. It would not surprise me if a massive, well tuned neural net, possibly with yet to be developed components, could discover better heuristics than even an intelligent and experienced human for problems in this class. But that might be optimistic.
> How is this different from training a neural network on a data set and relying on interpolation for inference? Isn't that exactly what neural nets learn, statistical relationships between inputs?
The huge difference is that humans have a rational part of the brain which determines if examples are important to train on or not, doing extremely efficient problem-space pruning. And then it trains the neural network in real time using that rational part as guidance. We can't program without the rational part, so computers likely won't be able to program without it either. We can read sentences and detect objects in images without the rational part, so ML AI can do that.
We have no idea at all how to build that rational part, without something replacing its role the current methods will just give us extremely primitive parts of human thinking such as image recognition.
AlphaGo actually has a rational part: the code they used to play through the game and see if moves lead to a win. That means that AlphaGo has the full human capability, but for a very limited domain, just Go; the rational part of Go doesn't generalize well to more interesting problems.
Somewhat off topic, and I ask out of genuine curiosity - when did the “this computation will take sifting through more combinations than there are visible atoms in the universe” meme emerge?
I’ve never quite understood why that’s an important metric for considering processing. Does it have an actual impact on the computability of something, or is it just a visualisation to help human minds scale the grasp of something?
I prefer "sifting through more combinations than there are elementary particles in the universe times the universe's age in picoseconds"; it's a fairly reliable indicator that exhaustive search is not a viable way to solve the problem, not just today but, with classical computing, ever, because it's very unlikely that we can make a working computer out of less than a single elementary particle, try more than one guess per picosecond (per computer), or wait longer than the current age of the universe for an answer. I've seen such calculations at least since the 01990s.
It might turn out to be wrong (for example, elementary particles or black holes might have exploitable structure, a picosecond is pretty long compared to the Planck time, or closed timelike curves in spacetime might allow you to spend an unboundedly long time computing something), but at the very least it's a strong suggestion that exhaustive search will not be fruitful.
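The arithmetic behind that rule of thumb is short enough to write down (the particle count and the one-guess-per-particle-per-picosecond rate are the usual rough figures, not precise physics):

    # Rough upper bound on guesses available to a classical exhaustive search:
    # every elementary particle acting as a computer, one guess per picosecond,
    # running for the age of the universe. All figures are order-of-magnitude.
    AGE_OF_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600   # ~4.4e17 seconds
    PICOSECONDS = AGE_OF_UNIVERSE_S * 1e12            # ~4.4e29 picoseconds
    PARTICLES = 1e80                                  # commonly quoted estimate

    max_guesses = PARTICLES * PICOSECONDS
    print(f"budget: ~{max_guesses:.0e} guesses")      # ~4e109
    print(f"2^512:  ~{float(2**512):.0e}")            # ~1e154, already far beyond it

Anything whose search space is meaningfully bigger than ~10^110 states is out of reach for brute force on classical hardware, which is the point of the comparison.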
For more immediate purposes I prefer the dollar cost of carrying out the computation with currently available hardware.
Quantum computing, when it becomes practical, will give you only a quadratic speedup, as far as we know. So the relevant problem size then expands to the square of the number above.
We don't have the words to describe how quickly these nonlinear recurrence relations grow. Many program synthesis problems are essentially super-exponential. Look at the Ackermann function, which describes the complexity of many decidable algorithms as you increase problem size -- it's a different kind of scaling altogether.
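For anyone who hasn't run into it, the first few values already show the character of that growth (a standard memoized definition; nothing here is specific to the comment above):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    for m in range(4):
        print([ackermann(m, n) for n in range(5)])
    # The rows go +1, +2, roughly doubling, then powers of two; A(4, 2) already
    # has 19,729 decimal digits and is far too large to compute via this recursion.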
It’s a pretty useful concept when you consider cryptography. The idea that enumeration is mathematically infeasible for some set is the basis for many cryptographic functions.
But basically yeah it’s just an illustration of the difficulty. Obviously we’ll never have a supercomputer’s worth of computing power for every atom.
This isn't an anti-AGI argument and it doesn't disprove humans. Humans have the same problem. It's harder to write a program to do a thing than it is to just do the thing.
It's appealing to think we can just make a program that learns programs and then use that to learn to do anything computable. But this is a well studied field and it turns out that when you generalize a learning problem that way you make the learning problem a lot harder.
The space of programs that could possibly identify dogs in images is much much much larger than the space of images that contain dogs. The images are bounded by the number of pixels in the image times the color depth. What is the space of programs bounded by? 10TB? That's roughly 256^10,000,000,000,000 programs. That's just a stupidly large number.
Obviously not every 10TB string is a valid program. You can reduce that number. But what current research in program synthesis tells us is that you can't reduce it as much as you might hope.
So the point is that, just like for humans, it's easier to learn to do a thing than it is to learn to write a program to do a thing.
> This isn't an anti-AGI argument and it doesn't disprove humans.
You're saying you can't find intelligent programs this way because the search space is large. That's an anti-AGI argument, and it's fallacious because humans evolved.
Yes, you can only search an infinitesimal subset of the search space. The same is true for DNA. The argument is clearly invalid without at least reference to properties that gradient descent has, or that evolution has but it does not, which you have not done. It is wrong for the same reasons the watchmaker analogy is.
Do humans fit in 10TB? Do humans fit in our computational model at all? Neurons aren't simple callback closures or matrices, and neurons don't even form the full picture.
A lot of the brain is for unrelated things. Imagine how many neurons you eliminate by excluding the cerebellum or the brain stem. And if you carved away the brain until only the intellect remained, distilling its digital equivalent could reduce its size even more. And if it were just the algorithm, I think 10TB would be enough.
> If it were just the algorithm, I think 10TB would be enough
Just thinking about it makes me shiver. I had been assuming that AGI or strong AI needed enormous advances in hardware like quantum computing or numerous iterations of Moore's Law. Would it be correct to say that the right 10TB bit pattern might give us strong AI on today's commodity hardware -- at least in theory?
It’s such a fascinating question. Given some domestic computer, what is the most intelligent program it might be able to run in something like real-time? My intuition is that domestic computers could achieve a sparse intellect but maybe not a rich sensory relationship with the physical world.
With every year we expand our vast collective computational resources. It’s like a powder keg. It’s sitting dormant just waiting for a good program to be stumbled upon. I hope I’m not alive when that happens
Human intelligence is about more than just the brain. How intelligent would you be if you grew up in a sensory deprivation tank? If you didn't need to eat, if you didn't crave social interactions. Without family or community. If you met a human who grew up this way, they might not seem particularly intelligent either.
That's a very interesting way of thinking about things, but I think unfortunately the search techniques we are using (gradient descent with certain datasets/self-supervision tasks) limit us to exploring a very small subset of that 10TB of possible programs. A search strategy that could actually find the optimal 10TB program to accomplish some task would actually be a superintelligence beyond anything we have ever thought of creating using present day AI techniques.
> find the optimal 10TB program to accomplish some task would actually be a superintelligence
I am having a difficult time understanding what the operative meaning of "intelligence" here is. "AI" doesn't transcend the Turing machine. Intelligence doesn't mean more computer cycles per unit time either and compute cycles don't transcend the TM. What makes intelligence intelligence is what it can in principle do; speed is irrelevant. There is no essential difference between AI and non-AI.
The number of possible bit arrangements in 10TB is 2 raised to the power of 80 trillion.
This comes out to 3.1 × 10^24082399653118
To emphasize the size of that search space: measure the diameter of the observable universe in Planck lengths, the shortest possible length. We would need over 24 trillion digits just to write down how many such universes it would take for the total count of Planck lengths to equal the number of possible programs.
That 99.99% number is missing several billion additional 9s.
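Those figures check out if you take 10TB as 10^13 bytes (that byte count is the only assumption); the whole calculation is a couple of lines:

    import math

    # 10 TB read as 10^13 bytes -> 8e13 bits -> 2^(8e13) possible bit patterns.
    bits = 8 * 10**13
    exponent = bits * math.log10(2)
    print(f"~{10 ** (exponent % 1):.1f} x 10^{int(exponent)}")
    # prints roughly 3.1 x 10^24082399653118, matching the figure above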
Are you stating that you have a corpus of 10TB of software source and you are frankensteining it with interesting results? I find that hard to believe. Surely it's like monkeys and typewriters; 10TB in that context wouldn't be nearly enough.
Parent is not talking about a 10TB corpus of software but the corpus/set of all possible software weighing 10TB (or less I guess, especially if you take the lottery ticket hypothesis into consideration).
Unambiguously yes. 10TB is a lot of space. That's large enough for a program that does nothing but show 8 hours of HD footage of your kids being tortured and eaten.
It's also more than enough to define a program that would reliably precipitate a global thermonuclear war if connected to the Internet.
In order to search that space you have to have a specification language for the behavior of the target program. Things like copilot/GPT-3 currently take natural language as input, but that cannot be used for anything that needs to be verifiable or have correctness. Maybe start by having the nets generate implementations of the IETF RFCs in a variety of languages given the text of the RFC as input? Not easy.
I used to be a huge sceptic of this cycle of deep learning. But not anymore. GANs are incredible and there's a huge amount of low-hanging fruit left to pluck. I definitely don't believe AI will repeat its history (history doesn't have to cycle).
Yeah I was totally anti hype when everyone was excited about style transfer and stuff like that. I found most of the VQGAN+CLIP way overhyped (sure it's abstract and funky but you get bored of that look after 10 images - it's like that deep dream filter from 5 years ago, shouldn't we have progressed more by now) but lately CLIP guided diffusion has blown me away. I literally thought it was all deception and cherry picking until I tried it myself[1] and realized the amazing outputs from https://twitter.com/RiversHaveWings are real.
They mean "not cherry-picked or otherwise faked"; that is, that the images shown on that Twitter account are really typical of the images produced by CLIP guided diffusion, not, say, hand-painted.
The recent advances in deep learning are good enough to keep industry busy for the next 20 years. Each year has brought us substantial improvements and if you look past the popular benchmarks there's no sign of things slowing down.
I think we are soon going to hit a local maximum where modern NN architectures with better vertical integration and better hardware create lots of value, but on the other hand don't bring us anywhere near AGI.
No sane person believed that deep learning would get us to AGI within the next few decades. Anyone saying so is doing it to fool VCs out of their money.
I'd like to agree with you, but just 1 year ago every AI-related thread here was chock full of people breathlessly predicting AGI within years... and presumably HN is a well-informed community!
It's often not about expertise or sanity, but AGI captures people's imagination to a huge extent and as evolution has conditioned us to do, we ascribe certain behaviors and desires to systems which only exhibit those by random chance.
As a layman I wonder when we're going to start bearing the fruit of all this effort in DL. In the average person's day to day life I mean.
I don't rely on DL driven systems and I'm not even a skeptic. I want it to work. But I can't rely on these systems in the same way that I rely on my computer/phone, a light bulb, or a refrigerator. Is that ever going to change?
DL is already bearing fruit everywhere - the key is that the places where it works are narrow domains. In general the larger the scope of intelligent behavior the less successful it has been.
Someone has already mentioned face unlock, but also dictation is miles ahead of where it used to be. Similarly text-to-speech is absurdly better than it used to be and is approaching indistinguishability from human speech in some cases (again, narrow domains are more successful!)
Smartwatches are capable of detecting falling motion and alerting emergency responders, and are increasingly able to detect (some types of) cardiac incidents. Again here the theme is intelligent behavior in very narrow domains, rather than some kind of general omni-capable intelligence.
The list goes on, but I think there's a problem where so many companies have overpromised re: AI in more general circumstances. Voice assistants are still pretty primitive and unable to understand the vast majority of what users want to speak about. Self-driving still isn't here. To some degree I think the overpromising and underdelivering re: larger-scoped AI has poisoned the well against what is working, which is intelligent systems in narrow domains, where they are absolutely rocking it.
> the key is that the places where it works are narrow domains
I've observed that not only are the domains narrow, but the domains of domains are narrow. In other words the real-world applications are mostly limited to pattern recognition, reconstruction, and generation.
What I wonder is this. Is DL a dead end?
Are we going to reach a ceiling and only have Face ID, Snapchat filters, spam detection, and fall detection to show for it? Certainly there'll be creative people that'll come up with very clever applications of the technology. Maybe we'll even get almost-but-not-really-but-still-useful-actually vehicle autonomy.
I can't imagine a world without the transistor, the internet, ink, smart phones, satellites, etc. What I'm seeing coming out of DL is super cool but it feels like a marginal improvement on what we have now and no more. And that's fine... but a lot of very smart people that I know are heavily investing in AI because they're banking on it being the new big technological leap.
> What I'm seeing coming out of DL is super cool but it feels like a marginal improvement on what we have now and no more
"Marginal" here seems to be doing a lot of heavy lifting and IMO isn't fair. The ultimate point of technology isn't to inspire a Jetsons-like sense of wonder (though it is nice when it happens), it's to make life better for people generally. The best technology winds up disappearing into the background and is unremarked-upon.
Like better voice recognition or text-to-speech. We've become accustomed to computers being able to read things without sounding like complete robots - and the technology has become so successful that it's simply become the baseline expectation - nobody says "wow Google Assistant sounds so natural" - but if you trotted out a pre-DL voice synthesis model it would be immediately rejected.
I also wouldn't characterize "ability to automatically detect cardiac episodes and summon help" as some kind of marginal improvement!
I think there's a bit of confusion here re: a desire for DL to be the revolutionary discovery that enables a sci-fi expectation of AI (self driving cars! a virtual butler!), vs. the reality of DL being a powerful tool that enables vast improvements in various narrow domains - domains that can be highly consequential to everyday life, but ultimately isn't very sci-fi.
Does that make DL a dead-end? For those who practice it we aren't close to the limits of what we can do - and there are vast, vast use cases that remain to be tackled, so no? But for those whose expectations are predicated on a sci-fi-inspired expectation, then maybe? It's likely DL in and of itself won't lead us to a fully-conversant virtual butler, for example.
[edit] And to be fair - the sci-fi-level expectations were planted by lots of people in the industry! Lots of it was mindless hype by self-described thought leaders and various other folks wanting to suck up investment money, so it's not fair to blame folks generally for having overinflated expectations about ML. There's been a vast amount of confusion about the technology in large part because companies themselves have vastly overstated what it is.
> The best technology winds up disappearing into the background and is unremarked-upon.
Very much agree, but what I've seen is that DL based solutions do not disappear into the background.
It's so rare for them to disappear into the background that, sitting here at my computer right now, thinking real hard, I can't come up with a single consumer DL product that works reliably. I'm pretty sure there are a few things but it's soooo rare.
Face ID works most of the time but the failure rate for me is like 1 in 50. It's very very cool technology but it's also very unreliable. Also if Face ID never existed I don't think my life would be worse off in any way.
The same basic issue applies to every DL solution I can think of. The best way I can describe it is they feel... janky. Always janky.
I've had similar conversations before and, after some back and forth, the bullish-on-AI person ends up saying much of what you said. Here's where we end up in a weird stalemate...
> I also wouldn't characterize "ability to automatically detect cardiac episodes and summon help" as some kind of marginal improvement!
Maybe not a marginal improvement, but there's a lot of amazing technology in the medical, industrial, and military sectors. For example people are surprised that FLIR was actively used in the military in the early 90s!
I have no doubt that DL is going to drive a lot of the innovation in highly specialized areas.
What I'm talking about (and terrible at communicating, honestly) is general purpose consumer applications. Can DL significantly improve the lives of every day people? Right now I'm seeing a lot of toy applications, innovation in highly specialized areas, and only hopeful ambition for general use.
What I'm waiting for is that magic moment when I use a technology that a) works flawlessly and b) changes how I live my life. As soon as I see a DL based solution that does that then I'm sold. I just haven't seen it yet.
Deep learning is narrowly applicable to every domain, that's the beauty of it. It's delivering 1.1-10x efficiency improvements for a lot of common workflows, which might not seem that impressive but really adds up.
My cousin is an MMA fighter in another country, just today he got a contract from an american agent and asked me to translate it. I was able to throw it into google translate and in under 2 seconds it produced a flawless translation of 20 pages of legalese.
I have a fairly affordable Hyundai that's able to drive 80 miles on a highway without me touching the steering wheel.
I built an app that uses image recognition to automate food logging; from the surveys that we did, it cut down the time to log from 15 minutes a day to under 2.
I've worked on systems to monitor patients at risk of falling in a hospital setting.
> My cousin is an MMA fighter in another country, just today he got a contract from an american agent and asked me to translate it. I was able to throw it into google translate and in under 2 seconds it produced a flawless translation of 20 pages of legalese.
Flawless sounds like an overstatement. I would hope that you use a professional before signing contract? That's serious stuff.
> I have a fairly affordable Hyundai that's able to drive 80 miles on a highway without me touching the steering wheel.
Are you referring to lane assist or OpenPilot? In both cases you need to be focused enough on the road that (IMO at least) it doesn't make that big of a difference either way. Certainly not life changing.
> I built an app that uses image recognition to automate food logging; from the surveys that we did, it cut down the time to log from 15 minutes a day to under 2.
Can it detect hot dogs?
> I've worked on systems to monitor patients at risk of falling in a hospital setting.
See my response wrt specialized (medical, industrial, military) settings. There's a lot of other incredible technology at work in hospitals.
> My friend built Tonal, which can track your exercise form
People exercised just fine before this. I'd classify Tonal as a marginal improvement, at best. I've actually found that removing technology and falling back to simple calisthenics (done properly of course) is having a much greater impact than adding more technology, for various reasons.
> Alphafold will be a huge deal for drug discovery.
I agree, but it falls under the category of specialized use cases. It's very exciting though.
> My cousin is an MMA fighter in another country, just today he got a contract from an american agent and asked me to translate it. I was able to throw it into google translate and in under 2 seconds it produced a flawless translation of 20 pages of legalese.
You do realize that today's translation services often reverse the meaning of sentences? They are useful for reading random posts where you don't care about the results, but they should never be used when you absolutely need to know the meaning of statements.
What you did is akin to putting your sleeping friend into a Tesla, turning on the autopilot, and watching the Tesla drive off down the road, then posting "See, the Tesla drove away perfectly, AI really automated driving!". You don't even know if it arrived safely, and even if it did, the tech isn't reliable enough to safely do what you did.
I've worked on similar systems and am aware of these issues. I said that the translation was flawless because I'm bilingual and read it to make sure that there were no mistakes, which I would have expected to see. It was a fairly standard contract and it probably also helps that a lot of the machine translation datasets contain a ton of EU legal documents since they need to be translated for all member states (see https://www.statmt.org/europarl/)
Most people unlock their phones using a deep learning model based facial recognition system, they talk to their devices thanks to deep learning, they translate documents with transformers, even google maps uses GNNs for ETA estimates and routing (https://arxiv.org/abs/2108.11482).
The cameras on mobile phones got so much better thanks to deep learning, snapchat filters, zoom backgrounds, etc all use CNNs.
The truth is that artificial intelligence and machine learning today are still aspirational titles - the current tech simply does not learn or reason in any way that is similar to human or even biological reasoning.
Data mining and probabilistic pattern recognition are much more accurate descriptions, but don't sound as exciting.
It's definitely possible that true AI will one day exist, but it may be anywhere from 5 to 1000 years away. I suspect the current approaches will not resemble the final form when it comes to AI.
I think the calculus was that you didn't need "true" AI for a self-driving car.
That still might be accurate, just maybe not in the near term. It may be controversial, but I think that humanity's hubris is the biggest barrier towards developing more effective AI.
Currently, the cost of ML R&D has a floor based on what advertising companies are willing to pay people to work in the space. This actually has a huge upside, as most of the tools that have made ML so accessible (pytorch, tensorflow, all the research advances) are coming out of these companies. But it has the downside that if I want to get someone to work on my ML problem I have to compete with what google and fb can pay.
A consequence, I guess, is that there are lots of unexplored / underexplored problems waiting to be tackled, and there are tools around that can make it happen. If there is a reckoning in the advertising space, there will be lots of other applications to focus on.
Well, one issue is that Google and FB can finance said research because they make so much money off ads to begin with, and by doing so they essentially provide enough funding to keep the field going.
So this reckoning in the advertising space would only be a net positive for society if others came in to fill that funding gap and threw enough money at researchers to keep the field afloat in a similar fashion.
All of their arguments for why deep learning is reaching its limits seem to me to be arguments for why its future is bright. The fact that we have not even a halfway plausible theory of why deep models generalize well, why training them via SGD works, or even what the effect of depth is in a neural network is exciting, not discouraging. I do not believe these are impossible problems to solve (well, maybe the generalization one, but even there surely more can be said than the current state of knowledge, which is basically a shrug). And even partial solutions should yield big practical steps forward.
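To make the generalization puzzle concrete, here's a minimal sketch (assuming PyTorch; the architecture and hyperparameters are arbitrary) of the well-known observation that an over-parameterized network trained with plain SGD can fit even random labels, which is exactly why capacity arguments alone don't explain why the same networks generalize on real data:

    # Minimal sketch: an over-parameterized MLP memorizing pure noise with plain SGD.
    # Assumes PyTorch is installed; sizes and learning rate are arbitrary.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(512, 32)            # random inputs
    y = torch.randint(0, 2, (512,))     # random labels: nothing real to learn

    model = nn.Sequential(nn.Linear(32, 256), nn.ReLU(),
                          nn.Linear(256, 256), nn.ReLU(),
                          nn.Linear(256, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(2000):            # full-batch SGD for brevity
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    print(loss.item())  # training loss falls toward zero as the net memorizes noise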
There's so much BS around the use of AI/ML in the enterprise space right now.
It's totally self-perpetuating.
My sales team keeps asking me to add AI to product X. It doesn't matter if it's not required or doesn't even make sense; they ask because the competition is doing it, and our clients/prospects ask them, in an expectant tone, whether we offer some AI in our products.
There are places where we do offer it and it genuinely adds value, but this 'sprinkle AI on everything for the sake of it' needs to die.
> My sales team keeps asking me to add AI to product X. It doesn't matter if it's not required or doesn't even make sense; they ask because the competition is doing it, and our clients/prospects ask them, in an expectant tone, whether we offer some AI in our products.
It could be worse, they might be asking you to put it on a blockchain :P
Personally, I won't think of it as 'artificial intelligence' until an AI system can teach me to speak a human language as well as a native human speaker of that language could. Seems a long ways off.
All it appears to be now is a collection of very sophisticated pattern matching algorithms that build their own opaque internal sorting/matching/classification algorithms based on the data they're trained on, as I understand it. This is of course incredibly useful in some domains, but it's not really 'intelligence'.
And, they can't do math very well:
> "For example, Hendrycks and his colleagues trained an AI on hundreds of thousands of math problems with step-by-step solutions. However, when tested on 12,500 problems from high school math competitions, "it only got something like 5 percent accuracy," he says. In comparison, a three-time International Mathematical Olympiad gold medalist attained 90 percent success on such problems."
> AI system can teach me to speak a human language as well as a native human speaker of that language could. Seems a long ways off.
And when such a system appears, you'll claim that it's still not AI --- just a fancy pattern matching trick --- and say that it's not real AI until some other arbitrary benchmark is met.
"AI" is just what machine learning can't quite do yet.
> collection of very sophisticated pattern matching algorithms
What do you think human brains are? Humans are Turing machines as well --- all physical computers are. We process inputs, match them against internal state, and generate outputs. You can't criticize AI on the grounds that it's "pattern matching": everything is pattern matching.
You can't look at the output of GPT-3 and tell me that it's some kind of dumb regular expression here. You just can't.
Except the human brain changes on a second-by-second basis; it is a highly distributed and concurrent system with unimaginable redundancy. Furthermore, the brain's capability to learn and adapt from just one or a few examples/tries doesn't compare to any AI model. What aspect of GPT-3 are you comparing to the human brain, specifically?
Pattern recognition is certainly a major component of human cognition, but it is hardly the whole story. Solving complex mathematical problems is not a pattern recognition problem. Two quite similar equations may have radically different outputs for the same inputs, so that screws up the whole pattern recognition approach, doesn't it?
> Two quite similar equations may have radically different outputs for the same inputs, so that screws up the whole pattern recognition approach, doesn't it?
No, it doesn't, and that's because you have an inadequate understanding of what "pattern matching" is. The domain over which patterns are matched --- in both the brain and artificial neural networks --- isn't just the input, but a combination of the input and the previous state of the computation doing the matching. It's this recurrence, this recursive evaluation of previous state, that makes human minds Turing complete. "Pattern matching" is more powerful than you think when you combine it with attention and memory, and ML models have had both for years. Do you think ML models are dumb regex lists or something? ML models and the brain have state.
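A toy illustration of that point in plain numpy (the weights are random and the names are made up): once the matcher carries state forward, the same input no longer maps to the same output, because the previous state is part of what gets matched:

    # Toy sketch: a recurrent cell whose output depends on the input AND carried state.
    import numpy as np

    rng = np.random.default_rng(0)
    W_x = rng.standard_normal((8, 4))   # input -> hidden
    W_h = rng.standard_normal((8, 8))   # previous hidden -> hidden
    W_o = rng.standard_normal((1, 8))   # hidden -> output

    def step(x, h):
        h_new = np.tanh(W_x @ x + W_h @ h)   # recurrence: history matters
        return W_o @ h_new, h_new

    x = rng.standard_normal(4)
    _, h_a = step(rng.standard_normal(4), np.zeros(8))   # two different histories
    _, h_b = step(rng.standard_normal(4), np.zeros(8))

    y_a, _ = step(x, h_a)
    y_b, _ = step(x, h_b)
    print(y_a, y_b)   # same input x, different outputs, because the state differs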
Pattern matching is like memorizing all the questions and their answers from previous exams before going in to take yours. When you encounter one you haven't seen before you obviously fail every time. That's why only the non-intelligent students do this.
But ML models will soon have the ability to decompose solutions and recombine them. That, combined with causality and the generation of test data, might make them quite powerful.
I think the domains that will actually get solved first because of this are programs/maths, since it's usually easier to verify solutions than to devise them (toy sketch below). For other stuff you would have to simulate the universe.
That's why, for example, self-driving cars basically record everything and simulate all possible scenarios, especially those based on disengagement data.
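Here's the toy sketch of "easier to verify than to devise" mentioned above (the tiny CNF formula and helper names are made up for illustration): checking a candidate assignment is a linear scan over the clauses, while finding one naively means walking an exponential search space.

    # Toy sketch: verifying a SAT assignment is cheap, devising one naively is not.
    from itertools import product

    # (x1 or not x2) and (x2 or x3) and (not x1 or x3); variables are 1..3
    cnf = [[1, -2], [2, 3], [-1, 3]]

    def verify(assignment, cnf):
        """Linear scan: every clause must contain at least one satisfied literal."""
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in cnf)

    def devise(cnf, n_vars):
        """Naive search: up to 2^n candidates before one verifies."""
        for bits in product([False, True], repeat=n_vars):
            candidate = {i + 1: b for i, b in enumerate(bits)}
            if verify(candidate, cnf):
                return candidate
        return None

    print(verify({1: True, 2: True, 3: True}, cnf))  # cheap check: True
    print(devise(cnf, 3))                            # exponential in general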
No it doesn't. Solving math is generative pattern matching. I was once very good at math, and it just works like puzzles in your mind. If you know all the tricks that can be used to form a solution, you simply enumerate through them by brute force (or something a little more elegant). It's just super hard to generate a trick on your own, and that's why if you miss a single idea in math you are lost forever.
Also, it didn't take a single human to devise these algorithms; it took whole generations of humanity. We are just approaching a possible lower bound on the compute power of a single brain.
I think we might get program and proof synthesis within 1-30 years with this kind of progress and funding. AFAIK, by 2048 total silicon compute might exceed the combined compute of all of humanity.
I'd like to think I'm Turing complete --- well, as much as any physical object can be. I can walk through any algorithm. That makes me Turing complete, yes?
The article “An Inconvenient Truth About AI: AI won't surpass human intelligence anytime soon” never once mentions, let alone justifies, the claim in its title.
We once utilized an ad agency that relied heavily on "AI" to pre-test ad creative.
The agency presented regional branding campaign creative with our national flag flying at half mast. The AI predicted success. The ad agency stood by the AI.
Certainly would have generated clicks.
But the ad agency lost a customer. Not sure if the AI would have predicted that!
My two cents: over the last decade I've observed a hype cycle where DS-type roles become more prominent than software roles in organizations, followed by a decline.
Businesses pay for results; applying 1000 slightly different variations of the same techniques on the same dataset produces very little return. Businesses take note, stop giving raises, etc., and the DS team fades.
Eventually someone tries the new hot thing like Deep Nets and sees a large gain in a core business metric with relatively little effort. As every team is going through the same hype cycle DS salaries spike.
There are still plenty of targets to hit with unitasking AI like we have that could each have their imagenet moment. But there is a shortage of domain experts who understand both those targets and AI.
That creates opportunity for domain experts who learn AI. Less so the other way around because these domains are generally more complicated than AI and lack the unending tsunami of online courses to learn the details.
So long as researchers are willing to reify AI as a thing, the collective delusion will continue. I'm specifically looking at researchers focused on the ethics of AI; they set up AI as a thing more than other researchers. AI is not a thing. It's a field of research. Know thyself.
Seems to assume carbon emissions from electricity will stay constant, yet those are likely to fall, which reduces the impact in one dimension (the cost remains).[0][1]