What makes a feeling different from, or unreachable by, sophisticated mimicry? GPT models with weak guardrails already generate dramatic/emotional text because they internalized human narratives and our propensity for drama from the training data (cf. the "Bing got upset" type articles from a month ago).
Unless you mean that it's not the same as the model having qualia. It's not clear to me how we would know when that emerged in the algorithm, since it wouldn't necessarily be outwardly distinguishable from mimicry.
The post I was replying to was specifically positing that they would have feelings.