My point is that the mathematical simulation is more precise because you've stated the model exactly and run it. Doing it verbally allows for all kinds of interpretation and is therefore much harder to pin down.
I apologise but I also really sincerely don't understand your verbal reasoning in the context of this particular paper. They took two underlying distributions and managed to produce something that looks like a real world one. I'm not sure what you're trying to say that's equivalent to that?
I reread our back and forth, and I think my clearest explanation is in the original post where I quote from the author. So I think we might be at the limits of my ability to communicate in this medium.
But even in the instance you are talking about -- let's just abstract away everything else -- we know that the output of a Gaussian process summed over time will have a very wide dispersion.
If you assume stock prices follow some kind of Gaussian distribution (yes, I know they don't), and just let them run forward in time, you get this crazy wide distribution.
It's a "stylized fact". And running it as a metaphor for some other process doesn't give me more insight. But I fear this explanation is even more confusing, not less.
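To make the "wide dispersion" claim concrete, here's a minimal sketch (my own illustration, nothing from the paper): simulate many independent random walks -- cumulative sums of N(0, 1) draws -- and measure how spread out the endpoints are after t steps. Theory says the standard deviation grows like the square root of t, so quadrupling the horizon roughly doubles the spread.

```python
import random
import statistics

def endpoint_spread(n_walks, t, seed=0):
    """Standard deviation of random-walk endpoints after t steps.

    Each walk is a cumulative sum of independent N(0, 1) increments,
    so the theoretical spread after t steps is sqrt(t).
    """
    rng = random.Random(seed)
    endpoints = []
    for _ in range(n_walks):
        position = 0.0
        for _ in range(t):
            position += rng.gauss(0.0, 1.0)  # one N(0, 1) increment per step
        endpoints.append(position)
    return statistics.stdev(endpoints)
```

For example, `endpoint_spread(2000, 400)` comes out close to twice `endpoint_spread(2000, 100)`, matching the sqrt(t) scaling.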
I won't dismiss what you're saying but tbh it's very opaque - do you have a link to something that explains what you're trying to say? Something mathematical would help (for the reasons I outlined in my earlier reply!)
Let's take a function that produces normally distributed random variables. Let's call this function at every moment in time starting at t=0, and add each new value to the running sum.
You can use a function that sort of looks like that to simulate a lot of different processes (a random walk, Brownian motion, etc.).
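Here's a minimal sketch of that process in code (the function name and parameters are my own): at each time step, draw a fresh normal variable and add it to the running sum. With mu=0 and sigma=1 this is a standard Gaussian random walk; a nonzero mu gives a drifting walk, and exponentiating the path approximates geometric Brownian motion.

```python
import random

def random_walk(steps, mu=0.0, sigma=1.0, seed=None):
    """Cumulative sum of independent N(mu, sigma) draws.

    Returns the full path, starting from 0.0 at t=0, so the result
    has steps + 1 entries.
    """
    rng = random.Random(seed)
    path = [0.0]  # running sum starts at zero
    for _ in range(steps):
        path.append(path[-1] + rng.gauss(mu, sigma))
    return path
```

Calling `random_walk(100)` gives one sample path; running it many times and looking at where the paths end up is exactly the wide dispersion I was describing.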
Here is a more mathematical write-up of the properties:
Thanks. I dived into this and tbh I'd like someone to take what you've mentioned and extend the discussion in the original paper further.
IMO, the paper's simulation tells us explicitly the full extent of what they've been considering, so it's a useful tool for the reader to judge the ideas. I know you might disagree, but it gives a firm definition of the ideas to me, at least, and stops me having to over-interpret based on my own biases about what I'm reading.
Given the starting numbers, the math is fine. But because the starting numbers are not real data, they had to create them out of a simplified model - and all the decisions that went into that model can be criticized and used as reasons that the output is wrong.
I don't think that's a problem. A lot of valid work has been done on simplified models; they're there to illustrate what might be going on rather than to provide absolute proof. And yes, you can criticise them, since that is the point.