I remember a situation where we had a near miss with data loss (a replica failed and the primary had a bad disk). We didn't want to add load to the primary by taking a live backup while it was serving all production traffic, so we restored from a backup instead. But the backup was "bad". We tried the one before it, and the one before that. It turned out they had all been broken for over a month due to a config change. We restored a month-old backup and started applying binlogs (which, thankfully, we had been backing up). But that meant replaying a month of transactions into the restored database. I can't remember the details, but I think we ended up replacing the bad disk, resilvering the array, and live-cloning the primary before the binlogs got fully applied to the one we restored from the old backup.
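For anyone who hasn't done this dance: the recovery described above is classic MySQL-style point-in-time recovery, i.e. restore the last known-good full dump, then replay the archived binlogs on top of it. Here's a dry-run sketch of the shape of it. All paths, filenames, database names, and timestamps are hypothetical placeholders, and the `run` helper just prints each step instead of executing it, since you'd want to eyeball these commands before pointing them at a real server:

```shell
#!/bin/sh
# Dry-run sketch of point-in-time recovery: restore a full dump, then
# replay archived binlogs up to a cutoff. Everything here (paths, names,
# dates) is a hypothetical placeholder.

BACKUP=/backups/full-dump.sql.gz
BINLOGS="/backups/binlogs/mysql-bin.000101 /backups/binlogs/mysql-bin.000102"
STOP="2024-02-01 03:00:00"   # moment just before the incident

# Print each step instead of executing it; swap `echo "+ $*"` for `eval "$*"`
# (carefully!) when running for real.
run() { echo "+ $*"; }

# 1. Restore the month-old full dump into a fresh instance.
run "gunzip -c $BACKUP | mysql recovered_db"

# 2. Replay each archived binlog in order, stopping before the failure.
for log in $BINLOGS; do
  run "mysqlbinlog --stop-datetime='$STOP' $log | mysql recovered_db"
done
```

The `--stop-datetime` cutoff is the important part: replaying past the moment things went wrong just re-applies the damage. And as the story above shows, none of this helps unless you're also archiving the binlogs and periodically test-restoring the backups.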
Went through that once in the mid-to-late '90s. Each restore and test took hours, so 3-4 attempts took 2 days, with me sleeping, crying, and praying in a conference room.