LLMs aren’t people, and the authors have not convinced me that they will behave like people in this context.
This was my initial reaction as well, before reading the interview in full. They admit that there are problems with the approach, but they seem to have designed the simulation in a very thoughtful way. There really doesn't seem to be a better approach, apart from enlisting vast numbers of people instead of using LLMs/agent systems. That has its own problems as well of course, even leaving cost and difficulty aside.
> There’s no option to create original content...
While this is true, I'd say the vast majority of users don't create original content either, but they still end up shaping the social media environment through the actions the researchers did model. Again, it's not perfect, but I'm more convinced that it might be useful after reading the interview.
I'm not sure the experiment can be done any other way than by trying interventions on real users of a public social media service, as Facebook did in the article I linked. Of course, the people running those services usually don't have the incentive to test harm-reduction strategies, and they certainly don't want to publicize the results.
> the vast majority of users don't create original content
That's true now, at least most of the time, but I think it's as much a result of design and algorithmic decisions by the platforms to emphasize other types of content as of anything inherent to users. Early Facebook in particular was mostly original content shared between people who knew each other. The biggest problem with that was that it wasn't very profitable.
> > There’s no option to create original content...
> While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.
Ok, but this is by design. Other forms of social media, such as Mastodon, have a far, far higher rate of people creating original content.
The fundamental problem with social media (and many other things) is humans, specifically our biological makeup and our (lack of) overriding mechanisms. One could argue that pretty much everything we call 'civilized behavior' is an instance of applying a cultural override to a biological drive. Without it, we are very close to shit-flinging murderous apes.
For so many of our problems, what goes wrong is that we fail to stop our biological drives from taking the wheel, to the point where we consciously observe ourselves doing things we rationally and culturally know we should not be doing.
Meanwhile, the production side of media/content/goods evolves very fast and has no similarly strong legacy biological drive holding it back, so it is very, very good (and ever improving) at exploiting the sitting duck that is our biological makeup (food engineering, game engineering, etc. are very similar to social media engineering in this regard).
The only reliable defense against that is training ourselves not to give in to our biological drives when they are counterproductive. For some that might be 'disconnect completely' (i.e. take away the temptations altogether), but having a healthy approach to encountering the temptations is far more robust. Labeling the social media purveyors, and producers in general, as evil abusers is not necessarily inaccurate, but it is counterproductive in that it tends to absolve individuals of their responsibility in the matter. Imagine telling a heroin addict: "you can't help it, it's those evil dealers keeping you hooked on heroin."
> U.S. adults commonly engage with popular social media platforms but are more inclined to browse content on those websites and apps than to post their own content to them. The vast majority of those who say they use these platforms have accounts with them, but less than half who have accounts -- and even smaller proportions of all U.S. adults -- post their own content.
> The analysis also reveals another familiar pattern on social media: that a relatively small share of highly active users produce the vast majority of content.
That's junk science, and it doesn't refute the specific point I made. Facebook users are far more likely to post original content than X users. It might just be some blurry, backlit vacation photos, but it is original content.
Algorithmic choices are likely a major contributor to the phenomenon. If posting vacation photos on Facebook gets interactions from friends and family, more people will do it. If it doesn't, fewer people will.