They gave me a small project to do, and told me to do it as though I were building it for a customer. I think that was warning enough that it ought to gracefully handle bad input — any real-world program needs to do the same.
To be honest, unless they explicitly discussed this with you beforehand, or went through it with you afterwards, I'm with empath75 on this one. We used to do something like that at $former_workplace and every once in a while a candidate would come up with a program that didn't validate (most of) the input or failed in similarly trivial ways.
It turned out that some of them, indeed, simply didn't care -- and didn't know, either. We'd explain what the problem was and they'd shrug, or say they'd seen <big name software> break like that, too, and that you just fix it if it turns out someone actually breaks it.
Others, however, would skip it so they could focus on stuff that was more complicated or more relevant. They'd validate one set of inputs, but not everything, just to show they knew it needed to be done and could do it. Or they'd throw in a comment like //TODO: Validate this by <insert validation methods here>. Most of the time we'd just ask them to talk us through some of the validations -- and most of the time they could write them on the spot.
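For what it's worth, a minimal sketch of what that "validate one thing, TODO the rest" pattern might look like (TypeScript here; the flag names and function are made up for illustration, not from any candidate's actual submission):

    // Hypothetical CLI flag parser: validate one input properly, leave a TODO for the rest.
    function parseMaxItems(raw: string): number {
      const n = Number(raw);
      if (!Number.isInteger(n) || n <= 0) {
        throw new Error(`--max-items must be a positive integer, got "${raw}"`);
      }
      return n;
    }

    // TODO: validate the remaining flags the same way
    // (e.g. that --input points at a readable file, that --format is a known value).

    console.log(parseMaxItems("25")); // 25

The point isn't the code itself, it's that the candidate has shown they can do it and flagged where the rest would go.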
You could argue that input validation is very relevant in real life, and that even if it weren't, what counts as relevant is the interviewer's call, not the candidate's (although tbh the latter is one of the reasons why tech interviews suck so much).
But at the end of the day it's an interview setting, not a real-life setting, no matter how much you try to make it seem otherwise. The people doing it are still young candidates trying to impress their interviewers, not company employees working on a company project under the supervision of tech leads. You don't get much useful data out of this sort of experiment unless you allow for some flexibility.