Hacker News

I think it's somewhat interesting that codex (gpt-5.3-codex xhigh), given the exact same prompt, came up with a very similar result.

https://3e.org/private/self-portrait-plotter.svg




Asked gemini the same question and it produced a similar-ish image: https://manuelmoreale.dev/hn/gemini_1.svg

When I removed the plot part and simply asked to generate an SVG it basically created a fancy version of the Gemini logo: https://manuelmoreale.dev/hn/gemini_2.svg

This is honestly all quite uninteresting to me on its own. The most interesting part is that the various tools all create a similar illustration.


Is it? They're all generalizing from a pretty similar pool of text, and especially for the idea of a "helpful, harmless, knowledgeable virtual assistant", I think you'd end up in the same latent design space. Encompassing, friendly, radiant.

Note that the (presumably human) designers at Claude, ChatGPT, Perplexity, and other LLM companies chose a similar style for their app icons: a vaguely starburst- or asterisk-shaped pop of lines.


> Is it? They're all generalizing from a pretty similar pool of text, and especially for the idea of a "helpful, harmless, knowledgeable virtual assistant", I think you'd end up in the same latent design space. Encompassing, friendly, radiant.

I'm inclined to agree, but I can't help but notice that the general motif of something like an eight-spoked wheel (always eight!) keeps emerging, across models and attempts.

Although this is admittedly a small sample size.

Edit: perhaps the models are influenced by 8-spoked versions of https://en.wikipedia.org/wiki/Dharmachakra in the training data?
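To make the motif under discussion concrete, here's a minimal sketch (mine, not from any of the linked outputs) that generates an eight-spoked wheel as an SVG, similar in spirit to what the models keep producing. The function name and parameters are my own invention for illustration:

```python
import math

def eight_spoke_wheel_svg(size=200, spokes=8):
    """Build a minimal SVG of an N-spoked wheel: two concentric rings plus radial spokes."""
    cx = cy = size / 2
    r_outer, r_inner = size * 0.45, size * 0.15
    lines = []
    for i in range(spokes):
        a = 2 * math.pi * i / spokes
        # Each spoke runs from the inner ring to the outer ring.
        x1, y1 = cx + r_inner * math.cos(a), cy + r_inner * math.sin(a)
        x2, y2 = cx + r_outer * math.cos(a), cy + r_outer * math.sin(a)
        lines.append(
            f'<line x1="{x1:.1f}" y1="{y1:.1f}" x2="{x2:.1f}" y2="{y2:.1f}" '
            'stroke="black" stroke-width="2"/>'
        )
    rings = (
        f'<circle cx="{cx}" cy="{cy}" r="{r_outer}" fill="none" stroke="black" stroke-width="2"/>'
        f'<circle cx="{cx}" cy="{cy}" r="{r_inner}" fill="none" stroke="black" stroke-width="2"/>'
    )
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
        + rings + "".join(lines) + "</svg>"
    )

svg = eight_spoke_wheel_svg()
```

Save the string to a .svg file and open it in a browser to see the wheel.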



Buddhism and Islam both feature 8-pointed star motifs, and Buddhism has the eightfold path… but even before you get into religious symbology, people had already assigned that style of symbol to LLMs, as seen in those logos. These recent models have certainly internalized that data.

The claude logo is a 12-pointed star (or a clock). Gemini is a four-pointed star, or a stylized rhombus. ChatGPT is a knot that from really far away might resemble a six-sided star. Grok is a black hole, or maybe the letter ø. If we are very charitable that's a two-pointed star.

I can absolutely see how the logos are all vaguely star-shaped if you squint hard enough, but none of them are 8 pointed.


Sure, I think it's pretty interesting that given the same(ish) unthinkably vast amount of input data and (more or less) random starting weights, you converge on similar results with different models.

The result is not interesting, of course. But I do find it a little fascinating when multiple chaotic paths converge to the same result.

These models clearly "think" and behave in different ways, and have different mechanisms under the hood. That they converge tells us something, though I'm not qualified (or interested) to speculate on what that might be.


Two things that narrow the “unthinkably vast input data”: 1) You’re already in the latent space for “AI representing itself to humans”, which has a far smaller and more self-similar dataset than the entire training corpus.

2) We’re then filtering and guiding the responses through stuff like the system prompt and RLHF to get a desirable output.

An LLM wouldn’t be useful (but might be funny) if it portrayed itself as a high school dropout or snippy Portal AI.

Instead, we say “You’re GPT/Gemini/Claude, a helpful, friendly AI assistant”, and so we end up nudging it near to these concepts of comprehensive knowledge, non-aggressiveness, etc.

It’s like an amplified, AI version of that bouba/kiki effect in psychology.


> Is it? They're all generalizing from a pretty similar pool of text, and especially for the idea of a "helpful, harmless, knowledgeable virtual assistant", I think you'd end up in the same latent design space. Encompassing, friendly, radiant.

Oh yeah, I totally agree with that. What I was referring to was the fact that even though these are different companies trying to build "different" products, the output is very similar, which suggests that they're not all that different after all.


To massively oversimplify, they are all boxes that predict the next token based on material they’ve seen before + human training for desirable responses.
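That "predict the next token based on material they've seen before" framing can be made concrete with a toy bigram model — nothing like a real transformer, just counting which token most often follows each token in a training corpus (the function names and corpus here are my own, purely for illustration):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it in the training corpus."""
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    """Greedy next-token prediction: return the most frequent follower seen in training."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

# Two models trained on similar "corpora" converge on similar predictions.
corpus = "a helpful friendly ai assistant a helpful friendly ai".split()
model = train_bigram(corpus)
```

Here `predict_next(model, "helpful")` returns `"friendly"` — the model can only echo regularities in what it was trained on, which is the commenter's point about convergence.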

You’d have to have a very poorly RLHF’d model (or a very weird system prompt) for it to draw you a Terminator, pastoral scene, or pelican riding a bicycle as its self image :)

I think that’s what made Grok’s Mechahitler glitch interesting: it showed how astray the model can run if you mess with things.


> You’d have to have a very poorly RLHF’d model (or a very weird system prompt) for it to draw you a Terminator, pastoral scene, or pelican riding a bicycle as its self image :)

How about a pastoral scene with a terminator pelican riding a bike? Jokes aside I get what you're saying, and it obviously makes total sense.


A few of us can't help but notice all the "AI" companies have gone for buttholes as logos.

AFAIK all of these models have been trained in very similar ways, on very similar corpora. They could be heavily influenced by the same literature.

I wonder if anyone recognizes it really closely. The Pale Fire quote below is similar but not really the same.


Spirals again.

Those AIs have read too much Junji Ito.


I love that these would be perfectly at home as sigils in some horror genre franchise.

It's just reality.

My least favorite horror genre franchise.

It’s a bit closer to the Flying Spaghetti Monster.

"Doesn't look like anything to me"

good stuff, thank you for sharing!

Are you crazy, or am I? I scrolled through that blog and am left scratching my head at you and your claim.


