I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
I do believe that product leadership is shoehorning it into every nook and cranny of the world right now, and there are reasons to be annoyed by that, but there are also countless incredible use cases that are mind-blowing, that you can use it for every day.
I could write about some absolutely life-changing scenarios, including: it got me thousands of dollars after drafting a legal letter quoting laws I knew nothing about; it saved me countless hours troubleshooting an RV electrical problem; it found bugs in code I wrote that everyone around me had missed; my wife was impressed with a seemingly custom week-long meal plan that fit her short-term no-soy/no-dairy allergy diet; it helped me solve an issue with my house that a trained professional completely missed; it designed and wrote the code for a Halloween robot decoration I had been trying to build for years; and it saves my wife, an audiobook narrator, hundreds of hours by summarizing characters so she doesn't have to read an entire book before she narrates the voices.
I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too. Today it's quite amazing to have these tools at our disposal and as we add them in smart ways to systems that exist today, things will only get better.
Call me glass half full... but maybe it's because I don't live in Seattle
It's not about the tech; the negativity is due to the mismatch between hype and reality. LLMs are incredibly useful for certain things, like the ones you have found. Others simply don't work.
Is it going to deliver on even 1% of the hype any time soon? Unlikely.
1% of what hype? AGI? Because other than AGI, I think it's delivered on most of the hype already.
I think our tooling is holding us back more than the actual models, and even if they never advance at all from here (unlikely), we'll still get years of improvement and innovation.
I'm mostly saying the hype is real on a lot of things today. Is it working perfectly for everything? Definitely not, but I'm of the opinion that, given another 10 years, it just might be. I'm among the many working to make it better, and all I see are a million possibilities of what can be done; we've only worked through a few of the issues. Did it change EVERYTHING overnight? No, it was a big breakthrough, and the rest is still catching up.
Hype along the lines of people not having work anymore, "AGI" being around the corner, etc., is real?
Yes, strong AI is always about 10 years off.
But yes, any new tech takes time to work itself out. No question that LLMs are useful, but they will wildly under-deliver by current hype standards. They have their own strengths and weaknesses like everything else, but they can be very misleading, hence the hype.
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread
Yep.
I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way.
The same way people love to think they’re cooler than the masses by hating [famous pop artist]. “But that’s not real music!” they cry.
And that’s fine. Frankly, most of my AI skeptic friends are missing out on a skill that’s helped me a fair bit in my day to day at work and casually. Their loss.
Like it or not, LLMs are here to stay. The same way social media boomed and was here to stay, the same way e-commerce boomed and was here to stay… there’s now a whole new vertical that didn’t exist before.
Of course there will be washouts over time as the hype subsides, but who cares? LLMs are still wicked cool to me.
I don’t even work in AI, I just think they’re fascinating. The same way it was fascinating to me when I made a computer say “Hello, world!” for the first time.
I think the disconnect for me is that I want AI to do a bunch of mundane stuff in my job where it is likely to be discouraged so I can focus on my work. My employer's CEO just implemented an Elon-style "top 5" bi-weekly report. Would they find it acceptable for me to submit AI-generated writing? I just had to do my annual self and peer reviews. Is AI writing valid here? A company wanted to put me, a senior engineer, through a five stage interview process, including a software-graded Leetcode style assessment. Should I be able to use AI to complete it?
These aren't meant to be gotcha rhetorical questions, just parts of my professional life where AI _isn't_ desirable by those in power, even if they're some of the only real world use cases where I'd want to use it. As someone said upthread, I want AI to do my dishes and laundry so I can focus on leisure and creative pursuits (or, in my job, writing code). I don't want AI doing creative stuff for me so I can do dishes and laundry.
> I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way
I have mostly seen people on HN criticizing the few people in tech who have attached themselves to the hype and senselessly push it everywhere, not "the masses." The masses don't particularly like AI. It seems like it's only people hyping it that think everyone but Luddites are into it.
You're both painting a narrative that anti-AI sentiment is a popular bandwagon everyone is doing to be cool, as well as not that big actually and everyone is loving AI. Which is it?
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
What I feel is people are denouncing the problems and describing them as not being worth the tradeoff, not necessarily saying it has zero use cases. On the other end of the spectrum we have claims such as:
> countless incredible use cases that are mind-blowing, that you can use it for every day.
Maybe those blow your mind, but not everyone’s mind is blown so easily.
For every one of your cases, I can give you a counter example where doing the same went horribly wrong. From cases being dismissed due to non-existent laws being quoted, to people being poisoned by following LLM instructions.
> I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too.
No, they are not! We can’t keep making climate change worse and fix it later. We can’t keep spreading misinformation at this rate and fix it later. We can’t keep increasing mass surveillance at this rate and fix it later. That “fix it later” attitude is frankly naive. You are falling for the narrative that got us into shit in the first place. Nothing will be “fixed later”, the powerful actors will just extract whatever they can and bolt.
> and as we add them in smart ways to systems that exist today, things will only get better.
No, they will not. Things are getting worse now, it’s absurd to think it’s inevitable they’ll get better.
Yeah, I do think you make a lot of valid points about the tradeoffs of these advances. I think anything we do to progress humanity technologically will have negative outcomes on everything else; as humans make things better for ourselves, it will almost always rely on destroying something in nature in return. The capitalistic world we live in will almost always drive that to the extreme, quickly.
As for the other points: are LLMs wrong sometimes? Yes. But so are humans, so it's not really a novel thing to point out. The question is, are they more correct than humans? I have seen that they can be more accurate, less biased, etc., and we are driving toward higher accuracy and other ways to make them right.
And the fix-it-later attitude isn't right for everything; I was referring to the accuracy issues people often point to as evidence that AI is just hype. The things you mention are side effects, and those should be controlled, because the cat is out of the bag. You can spend your time yelling at clouds or try to do something to make it better. I assure you, capitalism is a tough enemy. This is no different from the combustion engine, another invention that has negative consequences for the environment in different ways.
I'm not disagreeing with you... mostly just saying: the hype is warranted
> are LLMs wrong sometimes? Yes. But so are humans, so it's not really a novel thing to point out.
The thing with humans is that you can build trust. I know exactly who to ask if I have a question about music, or medicine, or a myriad of other topics. I know those people will know the answers and be able to assess their level of confidence in them. If they don’t know, they can figure it out. If they are mistaken, they’ll come back and correct themselves without me having to do anything.
Comparing LLMs to random humans is the wrong methodology.
> This is no different from the combustion engine, another invention that has negative consequences for the environment in different ways.
Combustion engines don't make it easy to spy on people, lie to them, and undermine trust in democracy.