To my mind, "real understanding" would mean an ability to make non-trivial inferences and to discover new things, not present in the training set. That would be logical thinking, for instance.
Much of what LLMs currently do is not logical but deeply kabbalistic: rehashing the words, the sentence and paragraph structures, highly advanced pattern matching, working at the textual level instead of the "meaning" level.
AIs can definitely mix a couple of ideas and come up with a concept that’s not already in the training set. In fact, they are often so willing to do it that the resulting concepts often don’t make sense, but they certainly do generate ideas that aren’t in the training set. This is just the “it’s an infringement machine” argument redux - yes, they absolutely do have the ability to mash up ideas to produce something new.
Nobody ever trained a model to make up a bunch of slurs for kids with cancer. Nobody has ever trained one on poems about drug use aboard the spaceship Nostromo. Dolphin Mixtral will give it the old college try, though.