I have some idle time and have been asked to look into producing a code review checklist. This is an interesting problem: to me, much like programming, code reviewing is a highly idiosyncratic, ineffable thing, and so I wonder whether the community here has managed to formalize it.
There are obvious things to put in such a checklist (e.g. short, legible methods; checking for potential null reference exceptions), but sometimes even glaring bugs sneak through code review. A recent example: in a PR, I noticed a method call had been deleted and replaced with nothing, and asked the developer about it. The developer assured me that the functionality the method had provided was implemented elsewhere in the PR, and that did indeed seem to be the case.
It turned out that the other implementation did not, in fact, cover all of the functionality the deleted method call had provided, and an entire feature was lost because of that PR. In light of this, one checklist item could be "when a method call is deleted, verify what the method did". A related one: deleting that call left the method with no references (dead code), yet the method itself was not deleted, so neither I nor the other reviewers saw its body in the diff; the body would have shown that a feature was being lost. So another item could be "if method calls are deleted, check for dead code".
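That second check can even be partially automated. Here's a minimal sketch of the idea: after a PR deletes a call, scan the codebase for remaining call sites of the method, and if none are left, flag the method as dead code whose body deserves a look. The function name `find_references` and the crude text-matching heuristic are mine; a real check should use the compiler's or IDE's "find usages", but this illustrates the question a reviewer should be asking.

```python
import os

def find_references(root, method_name, exts=(".py", ".cs", ".java")):
    """Crudely count call sites of method_name under root.

    Purely textual: matches "name(" on each line, skipping lines that
    look like the definition itself. A sketch, not a substitute for a
    semantic 'find usages' in an IDE or language server.
    """
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            if not fname.endswith(exts):
                continue
            path = os.path.join(dirpath, fname)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if method_name + "(" in line and "def " + method_name not in line:
                        hits.append((path, lineno, line.strip()))
    return hits
```

If this returns an empty list after applying the PR, the method is unreferenced, and the reviewer should pull up its body even though it never appeared in the diff.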
Do you have a formal method for conducting code reviews? What things do you look for?
Here's the rough, hierarchical checklist I wrote up for myself at some point: https://gist.github.com/beyang/b3b494f028996b32ab3dae237bc1c...
I don't go through this entire checklist for every code review obviously, but whenever there's a diff that's large/complex enough for me to say "hoo-boy", I revisit it to remind myself of what I actually need to check in order to obtain an adequate understanding of the change.
The other thing that matters is tooling. One of the annoying things about most CR interfaces is that they lack the navigation capabilities of an IDE that let you jump around and make sense of the code in a tractable manner. Shameless plug: I helped build a browser extension that adds these capabilities back into most of the mainstream CR systems: https://docs.sourcegraph.com/integration/browser_extension. People swear by it and I'd highly recommend trying it out.
To get back to your original scenario, it seems like you asked the right initial questions, but stopped short of building a satisfactory understanding of what was going on. You took the author's word for it, when you could've asked a few more probing questions to check your (and their) understanding of the change. At the end of the day, the threshold for "LGTM" is a function of both how well you understand the code and how much you trust your teammates. Better process and better tooling can help out a lot with the former, but the latter is a human element that can't be ignored either.