In your message you say it is gibberish, but I get completely different results: very good Base64 even on very long, random strings.
I frequently use Base64 (both ways) to bypass filters in both GPT-3 and GPT-4/Bing, so I'm sure it works ;)
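For anyone who wants to try the same thing, here's a minimal sketch of the encode/decode step on your side of the conversation, using only Python's standard library (the prompt text is just an example):

```python
import base64

# Encode a prompt before pasting it into the chat.
prompt = "Translate this sentence to French: hello world"
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)

# Decode a Base64 reply the model sends back.
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)
```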
It sometimes makes very small mistakes, but overall it's amazing.
At this stage, if it can work on random data that never appeared in the training set, it's not just luck: it means it has acquired the skill and learnt to generalise it.
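You can check the random-data claim yourself with a quick harness like the one below. Note that `ask_model` is a hypothetical stand-in for however you call the model (API, browser, etc.); the rest is standard library:

```python
import base64
import secrets
import string

def ask_model(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to the LLM and return its reply."""
    raise NotImplementedError

def check_roundtrip(n_chars: int = 64) -> bool:
    # Generate a random ASCII string that almost certainly never
    # appeared verbatim in any training set.
    alphabet = string.ascii_letters + string.digits
    original = "".join(secrets.choice(alphabet) for _ in range(n_chars))

    # Ground truth from the standard library.
    expected = base64.b64encode(original.encode("ascii")).decode("ascii")

    # Compare against the model's answer.
    answer = ask_model(
        f"Encode this string as Base64, output only the result: {original}"
    ).strip()
    return answer == expected
```

If the model matches the reference encoding on fresh random strings, memorisation can't explain it.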
It could already do this when I tested it a week after the first version of ChatGPT entered private beta. It's always been able to convert Base64 in both directions.
It sometimes gets part of the conversion wrong, or converts a related word instead of the one you actually asked for. This strongly suggests the LLM itself is doing the conversion (and there's no reason to believe it wouldn't be).
This behavior will likely be replicated in open source LLMs soon.