
> Seems pretty unintuitive to me that Unicode would allow someone to serialize normal code as invisible characters

If you have a text encoding with two invisible characters, you can trivially encode anything representable on a digital computer in it, in binary, by treating one character as a zero and the other as a one. More invisible characters, plus some opinionated assumptions about what you are encoding, allow a denser representation than one bit per character.
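A minimal sketch of that two-character scheme in Python (the specific code points, U+200B ZERO WIDTH SPACE and U+200C ZERO WIDTH NON-JOINER, and the helper names are just my own choices for illustration):

    # Sketch: hide arbitrary bytes in a run of invisible characters by
    # treating one invisible code point as bit 0 and another as bit 1.
    ZERO = "\u200b"  # ZERO WIDTH SPACE      -> 0
    ONE  = "\u200c"  # ZERO WIDTH NON-JOINER -> 1

    def encode_invisible(data: bytes) -> str:
        # One invisible character per bit: eight characters per byte.
        return "".join(ONE if (byte >> i) & 1 else ZERO
                       for byte in data
                       for i in range(7, -1, -1))

    def decode_invisible(text: str) -> bytes:
        # Reverse the mapping; any visible characters mixed in are ignored.
        bits = [c == ONE for c in text if c in (ZERO, ONE)]
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = 0
            for b in bits[i:i + 8]:
                byte = (byte << 1) | b
            out.append(byte)
        return bytes(out)

    hidden = encode_invisible(b"print('hello')")
    print(len(hidden))               # 112 characters that render as nothing
    print(decode_invisible(hidden))  # b"print('hello')"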

Of course, the trick in any case is that you also have to slip in the call that decodes and executes the invisible code, and unless you have a very unusual language, that call is going to be very visible.
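To make that concrete: even if the payload itself renders as nothing, the loader does not. A hypothetical one-liner, reusing hidden and decode_invisible from the sketch above:

    # The hidden payload looks like an empty string, but this loader line
    # is plainly visible to any reviewer: a decode step feeding exec.
    exec(decode_invisible(hidden).decode())  # runs the concealed print('hello')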



I see now; those “decode” and “eval” calls are huge red flags that the author heavily downplays. Cheers for the response



