Nah, cr and rc are different tokens and LLMs would have no issues telling them apart. An older model might have trouble explaining that cr and rc are similar and can thus get easily mixed up, but the characters are probably more different to the LLM than they are to us.
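For what it's worth, you can check this directly with a tokenizer. A minimal sketch, assuming the tiktoken library and its cl100k_base encoding (the one used by GPT-4-era models); I haven't hardcoded any ids, the point is only that the two strings come out different:

    # Show that "cr" and "rc" encode to different token id sequences.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    print(enc.encode("cr"))  # one sequence of token ids
    print(enc.encode("rc"))  # a different sequence of token ids

Whatever the exact ids are, the model sees two distinct token sequences, not two orderings of the same characters.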
This is maybe just me, but I need to remind myself of the opposite: All solutions are temporary and imperfect. So just do something, I can always go back and correct it.
For me this means that I'm allowed to store away 3 things from the moving box, even though I don't know where to put the rest yet. To invite others even though I don't know what to cook yet or to write a bad implementation quickly, instead of spending hours figuring out the best one.
I think a balance of perfectionism and can-do is important. But as people are predisposed differently, either advice might make sense in different circumstances.
> But as people are predisposed differently, either advice might make sense in different circumstances.
This is a very good point. Life is a balancing act. Different people may need the opposite advice. Or you may need the opposite advice at different times. Not just in this case, but in life in general.
I actually did not mean the post as a call to perfection/action, but I can see why it could be read as such. You can have a temporary solution that is good enough for its purpose at the time. I advocate making a conscious effort to evaluate your intention, so that you can feel a sense of agency around whatever happens afterwards; you are more likely to lose that context, and the initial intention, as time passes.
Yes! Sounds like a dream. My value isn't determined by some economic system, but rather by myself. There is so much to do when you don't have to work. Of course, this assumes we actually get to UBI first, and it doesn't create widespread poverty. But even if humanity has to go through widespread poverty, we'd probably come out with UBI on the other side (minus a few hundred million starved).
There's so much to do, explore and learn. The prospect of AI stealing my job is only scary because my income depends on this job.
Hobbies, hanging out with friends, reading, etc. That's basically it.
Probably no international travel.
It will be like a simple retirement on a low income, because in a socialist system the resources must be rationed.
This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve. No purpose. Drug use, debauchery, depression, violence, degeneracy, gangs.
It will be a true idiocracy. No Darwinian selection pressures, unless the system enforces eugenics and population control.
> Hobbies, hanging out with friends, reading, etc. That's basically it.
> It will be like a simple retirement on a low income [...].
Yes, like retirement but without the old age. Right now I'm studying, so I do live on a very low income. But still, there are so many interesting things! For example, I'm trying to design a vacuum pump that reaches 1 mbar, made mostly of 3D-printed parts. Do vacuum pumps exist and can I buy them? Absolutely. But is it still fun to go through the whole design process? You bet. And I can't even start explaining all the things I'm learning.
> This will drive a lot of young ambitious people to insanity.
I teach teenagers at the age when they have to choose their profession. The ones going insane will be the unambitious people, those who just stay on TikTok all day and go to work because what else would they do? The ambitious will always have ideas and projects. And they won't mind creating something that already exists, just because they like the process of it.
We already see this with generative AI. Even though you could generate most of the images you'd want already, people still enjoy the process of painting or photographing. Humans are made to be creative and take pleasure from it, even if it is not economically valuable.
Hell, this is Hacker News. Hacking (in its original sense) was about creativity and problem-solving. Not because it will make you money, but because it was interesting and fun.
There is nothing "introverted high IQ nerd" about being creative. Think of everyone who practices music, art, crafts, rhetoric, cooking, languages, philosophy, writing, gardening, carpentry, and whatever else you can think of. Most of them don't do it for money.
> [...] how it will affect all types of people and cultures on this planet.
Some will definitely feel without purpose. But I'd argue that just having a job so that you have a purpose is just a band-aid, not a real solution. I won't say that purposelessness isn't a problem, just that it would be great to actually address the issue.
Granted, I do hold a utopian view. I continue to be curious because of my religious belief, in which I look forward to life unconstrained by age. Regardless of whether this comes to pass, I think it is healthy to remain curious and continue learning. So on "how it will affect all types of people": I really do think that people without purpose need to engage in curiosity and creativity, for their own mental health.
Yes, a few of us will enjoy the peaceful life of contemplation like Aristotle, but not everyone is genetically wired that way.
Introverts are only 25-40% of the population, and most people are not intellectually or artistically gifted (whether introverted or not), but they still want to contribute and feel valued by society.
> I'd argue that just having a job so that you have a purpose
It's not just about having a job. It's about having an important or valuable role in society, feeling that your contributions actually matter to others - such as building or fixing things that others depend on, or providing for a family.
What would motivate a young boy to go through years of schooling, higher education, and so on, just to become a hobbyist, tinkering around on projects that no one else will ever use or really need? That may be acceptable for some niche personality types but not the majority.
Aspiring engineers or entrepreneurs are not merely motivated by having a job.
I am envisioning the AGI or ASI scenario which truly overtakes humans in all intellectual and physical capabilities, essentially making humans obsolete. That would smash the foundations and traditions of our civilization. It's an incredible gamble.
Wait, wait, wait. Our society's gonna fall apart due to a lack of Darwinian selection pressure? What do you think we're selecting for right now?
Seems to me like our culture treats both survival and reproduction as an inalienable right. Most people would go so far as to say everyone deserves love, "there's a lid for every pot".
> This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve.
Maybe, if the only flavor of ambition you're aware of is that of SV types. Plenty of people have found achievement and meaning before and alongside the digital revolution.
I mean common people will be affected just as badly as SV types. It will impact everyone.
Jobs, careers, real work, all replaced by machines which can do it all better, faster, cheaper than humans.
Young people with modest ambitions to learn and master a skill and contribute to society, and have a meaningful life. That can be blue collar stuff too.
How will children respond to the question - "What do you want to be when you grow up?"
They can join the Amish communities where humans still do the work.
This is fascinating. I mean it's not an argument against LLMs (I have only one brain, even though I'd like to have more). But I really hope that we'll learn much more about how our brains work.
> Training GPT-4 used 50 GWh of energy. Like the 20,000 households point, this number looks ridiculously large if you don’t consider how many people are using ChatGPT.
> Since GPT-4 was trained, it has answered (at minimum) about 200,000,000 prompts per day for about 700 days. Dividing 50GWh by the total prompts, this gives us 0.3Wh per prompt. This means that, at most, including the cost of training raises the energy cost per prompt by 10%, from 10 Google searches to 11. Training doesn’t add much to ChatGPT’s energy cost.
How does people using it offset the amount of energy used to train it? If I use three hundred pounds of flour learning to make pizza, the subsequent three hundred pounds of flour I use making delicious pizzas doesn't make the first 300 go away. Am I misunderstanding the numbers?
It's not offset, it's amortized.
Your effective flour / pizza is (300 + 300) / num_pizzas.
The total marginal flour expended will go up as you make more pizzas, but the effective cost will actually go down as the upfront cost is amortized over lifetime usage.
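To make the amortization concrete, here's a rough back-of-the-envelope in Python using the numbers quoted above (50 GWh, ~200M prompts/day, ~700 days); the pizza counts are just illustrative assumptions:

    # Training energy spread over every prompt answered so far.
    training_wh = 50e9                  # 50 GWh expressed in Wh
    total_prompts = 200e6 * 700         # ~200M prompts/day for ~700 days
    print(training_wh / total_prompts)  # ~0.36 Wh of training energy per prompt

    # Flour analogy: the upfront 300 lb gets spread over every pizza you make.
    learning_lb, making_lb = 300, 300
    for num_pizzas in (10, 300, 3000):  # illustrative counts
        print(num_pizzas, (learning_lb + making_lb) / num_pizzas)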
You don’t misunderstand the numbers, you misunderstand the point. If you flush your pizzas down the toilet, it’s a waste. If you feed 300 people with it, it’s not, even if you end up using the same amount of ingredients.
If you're able to serve delicious pizzas afterwards, it was worth spending those first kilograms of flour (you might call it an investment).
If you're able to bring value to millions of users, it was worth investing a few GWh into training.
You might disagree on the usefulness: I might think you shouldn't have wasted a single kilogram of flour, because I won't ever eat your pizzas anyway. But many (you, your guests, ChatGPT users) might think it was worth it.
It doesn't make it go away. Using your analogy: if you used 300 lb to learn and then only made 10 lb of pizza after that, it would be a pretty poor use of resources.
If you instead went on to produce millions of pizzas for people, using another 30,000 lb of flour, that 300 lb you spent learning looks like a pretty reasonable investment.
See my other comment here. One AI training run does not exist in a vacuum. Do you think they built billions of dollars' worth of datacenters full of computing power just to let it sit idle?
> Training GPT-4 used 50 GWh of energy. Like the 20,000 households point, this number looks ridiculously large if you don’t consider how many people are using ChatGPT.
> Since GPT-4 was trained, it has answered (at minimum) about 200,000,000 prompts per day for about 700 days. Dividing 50GWh by the total prompts, this gives us 0.3Wh per prompt. This means that, at most, including the cost of training raises the energy cost per prompt by 10%, from 10 Google searches to 11. Training doesn’t add much to ChatGPT’s energy cost.
> Just because you divide a number by a lot to get a small number doesn't make the original number smaller.
A bus emits more CO2 than a car. Yet it is more friendly to the environment because it transports more people.
> Those are 200M/d prompts that wouldn't happen without the training.
Sure, but at least a few million people are deriving value from it. We know this because they pay. So this value wouldn't have been generated without the investment. That's how economics works.
Those 200M/d prompts would be replaced with some other activities to solve the same problems. So if training did not happen, maybe instead of 200M/d prompts, you'd have 200M/d trips to the local library, using 200M cars to each drive three miles.
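For a rough sense of scale (purely assumed figures, not from the thread: ~30 mpg fuel economy and ~33.7 kWh of energy in a gallon of gasoline), one such three-mile car trip dwarfs the per-prompt cost:

    # Back-of-the-envelope; mpg and kWh-per-gallon are assumptions, not thread figures.
    miles = 3.0
    mpg = 30.0                    # assumed fuel economy
    kwh_per_gallon = 33.7         # approximate energy content of gasoline
    trip_kwh = miles / mpg * kwh_per_gallon
    print(trip_kwh)               # ~3.4 kWh per trip
    print(trip_kwh * 1000 / 0.3)  # roughly 11,000 prompts at ~0.3 Wh each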
Reminds me also of "Up Goer Five", an xkcd poster that roughly explains the Saturn V using only the 1,000 most commonly used English words[0]. Even better IMO is the collab video with MinutePhysics[1].
It gave me a much better intuition than my math course.