I think part of the problem is that a lot of people still seem to think there is a knowable/measurable/learnable source of absolute truth. But since any crowd-sourced learning is almost always implicitly an "appeal to the people", you end up with populism rather than truth.
Even if you work out a way to say "Professor Blah's opinion carries more weight than John Smith's", that won't always hold. Sometimes the majority is right, sometimes authority is wrong, sometimes published views change over time, and sometimes truth is hidden for "higher" reasons, so I don't really see where we end up.
That said, for things that are largely uncontroversial, like "a good way to write a prime sieve in Erlang" or something, it's probably going to give a pretty decent response.
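For what it's worth, the "uncontroversial" case really is easy to check: a prime sieve (sieve of Eratosthenes) is a textbook algorithm with a canonical answer. A minimal sketch, in Python rather than the Erlang the comment mentions, purely for illustration:

```python
def prime_sieve(limit):
    """Sieve of Eratosthenes: return all primes up to and including limit."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Mark multiples of p as composite, starting at p*p
            # (smaller multiples were already marked by smaller primes).
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(prime_sieve(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

There's nothing to upvote or downvote here; the answer is checkable on its own, which is exactly why this class of question is the easy case.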
Maybe so, but I suspect there are better ways to sort through information, even if they're opinions tied to narrative. The current model, driven by ad dollars and cheap information, seems to have led to a landscape of noise; too much low-quality information to sift through. I'd argue that even the system of expert gatekeepers that came before was more effective than this.
There ARE better and worse ideas, even in the space of opinions, I daresay. Perhaps not always right or wrong ones, but I believe they can be sorted more effectively than they currently are.
Reddit, Hacker News, and Quora all provide examples of what a better model might look like: more open discussion, upvotes and downvotes, and so on. People have the opportunity to synthesize and draw their own conclusions.