There's a huge problem in many branches of science where, using normal methodology, it's possible to come up with an endless stream of results that are unlikely to have any predictive validity outside of the extremely specific circumstances of the experiment -- and often not even there. This gives the illusion of knowledge, but everything you "know" falls apart if you actually try to apply it to the world around you.
An attempt by DARPA to evaluate the various fields of social science found that only about half of the papers they tried to replicate actually held up on a second attempt. People in those fields could predict pretty well which papers would replicate, but the publishing apparatus doesn't seem to have much quality control to filter out the papers everyone knows are junk -- nor do people cite the junk studies any less often. Here's a very readable writeup with details:
What's more, correlational studies have a further problem: a consistent-but-small correlation between two variables is usually not evidence of causation when you're looking at sufficiently complicated webs of cause and effect, e.g. the ones found in social sciences like psychology -- in a densely connected system, almost everything ends up weakly correlated with almost everything else. The math here has some profound, dire implications, and I found this article to be a huge eye-opener:
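To make the intuition concrete, here's a rough simulation sketch in Python (the numbers are made up for illustration: 50 weak background factors, loadings drawn from 0 to 0.1). Two variables that have no causal link to each other, but that both sit downstream of the same messy web of weak influences, still end up with a small but consistent correlation:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000  # sample large enough that sampling noise isn't the issue

    # 50 weak "background" factors, each nudging both traits a little.
    # Neither trait causes the other; they only share this web of influences.
    n_factors = 50
    background = rng.normal(size=(n, n_factors))
    loadings_x = rng.uniform(0.0, 0.1, size=n_factors)
    loadings_y = rng.uniform(0.0, 0.1, size=n_factors)

    trait_x = background @ loadings_x + rng.normal(size=n)
    trait_y = background @ loadings_y + rng.normal(size=n)

    r = np.corrcoef(trait_x, trait_y)[0, 1]
    # Roughly r ~ 0.1 with these settings: small, consistent, and entirely non-causal.
    print(f"r = {r:.3f}")

The point being that in a densely connected system a small correlation like that will replicate all day long, and it still tells you nothing about what happens if you actually intervene on one of the variables.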
If 80% or more of your published and "peer reviewed" predictions are non-reproducible crap (as is the case in said soft sciences), then you'd indeed be better off not saying anything about the subject at all.
Just because you're "doing science" and getting some results back doesn't mean you're producing anything accurate, or anything better than doing nothing at all.
It's more that performing these experiments with rats would be only slightly less representative of the general population than the college undergraduates they almost certainly used.