I wonder if you would get better results if you tell the LLM there's a token limit in the prompt.
something like "You only have 1000 tokens. Generate an analog clock showing ${time}, with a CSS animated second hand. Make it responsive and use a white background. Return ONLY the HTML/CSS code with no markdown formatting"
I love clocks and I love finding the edges of what any given technology is capable of.
I've watched this for many hours and Kimi frequently gets the most accurate clock, but also has the least variation and is the most boring. Qwen is oftentimes the most insane and makes me laugh. Which one is "better?"
Clock drawing is widely used as a test for assessing dementia. Sometimes the LLMs fail in ways that are fairly predictable if you're familiar with CSS and typical shortcomings of LLMs, but sometimes they fail in ways that are less obvious from a technical perspective but are exactly the same failure modes as cognitively-impaired humans.
I think you might have stumbled upon something surprisingly profound.
In lucid dreams there's a whole category of things like this: reading a paragraph of text, looking at a clock (digital or analog), or working any kind of technology more complex than a calculator.
For me personally, even light switches have been a huge tell in the past, so basically almost anything electrical.
I've always held the utterly unscientific position that this is because the brain only has enough GPU cycles to show you an approximation of what the dream world looks like, but to actually run a whole simulation behind the scenes would require more FLOPs than it has available. After all, the brain also needs to run the "player" threads: It's already super busy.
Stretching the analogy past the point of absurdity, this is a bit like modern video game optimizations: the mountains in the distance are just a painting on a surface, and the remote on that couch is just a messy blur of pixels when you look at it up close.
So the dreaming brain is like a very clever video game developer, I guess.
Yes, that's how you enter the lucid state. You find ways to tell that you're dreaming and condition yourself to check for those while awake. Eventually you will do it inside a dream and realize that you're dreaming.
Yeah. It’s very common to notice anomalies inside of a dream. But the anomalies weave into the dream and feel normal. You don’t have much agency to enter a lucid state from a pre-lucid dream.
So the idea is to develop habits called “reality checks” when you are awake. You look for the broken clock kind of anomalies that the grandparent comment mentioned. You have to be open to the possibility of dreaming, which is hard to do.
Consider this difficulty. Are you dreaming?
…
…
How much time did it take to think "no"? Or did you even take the question seriously? Maybe because you are reading an HN comment about lucid dreams, the question is interpreted as an example instead of a genuine question worth investigating, right? That's the difficulty. Try it again.
The key is that the habit you’re developing isn’t just the check itself — it’s the thinking that you have during the check, which should lead you to investigate.
Do these checks frequently enough and you'll end up doing one in a dream. Boom.
There’s also an aspect of identifying recurring patterns during prelucidity. That’s why it helps to keep a dream journal for your non-lucid dreams.
The first time it happened to me, it was accidental. I dreamed that I was in a college classroom but I realized that I never went to college. I was not trying to and had never lucid dreamed before, and so it was very surprising.
Be careful: adding consciousness to a dream costs CPU cycles, so you wake up more tired. It's cool for kids and teens, but grown adults shouldn't explore this, to avoid bad rest.
Over time, with accumulated experience, all dreams are lucid from the start. Because of that they are very calm and pleasant; the dreamer is no longer reactive to what happens in the dream because they know nothing is at stake.
That’s a caution against getting addicted to it, not against ever doing it. I’ve had powerful experiences in lucid dreaming that I wouldn’t trade for a little more rest. I was already in a retreat where I was basically resting all the time.
I met someone once who claimed that he lucid dreams almost every night by default and it is exhausting. He smokes weed at night to avoid dreaming entirely. I didn’t dig in super deep, but it sounded pretty intense!
IMO they would benefit from skipping the weed and instead continuing to practice lucid dreaming. Over time they will develop the skill and learn to simply contemplate the dream without reacting to it. It is a calming experience.
It seems that I’ve been stuck in a lucid dream for a couple of decades: no matter how carefully I write text on a phone keyboard, it never comes out as intended.
Conceptual deficit is a great failure mode description. The inability to retrieve "meaning" about the clock -- having some understanding of its shape and function but not its intent to convey time to us -- is familiar from a lot of bad LLM output.
I would think the way humans draw clocks has more in common with image generation models (which probably do a bit better with this task overall) than a language model producing SVG markup, though.
LLMs don't do this because they have "people with dementia draw clocks that way" in their data. They do it because they're similar enough to human minds in function that they often fail in similar ways.
An amusing pattern that dates back to "1kg of steel is heavier of course" in GPT-3.5.
First: generalization. The failure modes extend to unseen tasks. That specific way to fail at "1kg of steel" sure was in the training data, but novel closed set logic puzzles couldn't have been. They display similar failures. The same "vibe-based reasoning" process of "steel has heavy vibes, feather has light vibes, thus, steel is heavier" produces other similar failures.
Second: the failures go away with capability (raw scale, reasoning training, test-time compute), on seen and unseen tasks both. Which is a strong hint that the model was truly failing, rather than being capable of doing a task but choosing to faithfully imitate a human failure instead.
I don't think the influence of human failures in the training data on the LLMs is nil, but it's not just a surface-level failure repetition behavior.
If you're keeping all the generated clocks in a database, I'd love to see a Facemash style spin-off website where users pick the best clock between two options, with a leaderboard. I want to know what the best clock Qwen ever made was!
Please make it show the last 5 (or some other number) of clocks for each model. It would be nice to see the deviation and variety for each model at a glance.
This is honestly the best thing I've seen on HN this month. It's stupid, enlightening... funny and profound at the same time. I have a strong temptation to pick some of these designs and build them in real life.
Could you please adjust the positions of the titles (like GPT 5)? On Firefox Focus on iOS, the spacing is inconsistent (it seems to shift due to the space taken by the clock). After one or two of them, I had to scroll all the way down to the bottom and come back up to understand which title is linked to which clock.
This same principle is why my favorite image generation model is the earlier models from 2019-2020 where they could only reliably generate soup. It's like Rorschach tests, it's not about what's there, it's about what you see in them. I don't want a bot to make art for me, sometimes I just want some shroom-induced inspirational smears.
Not on page load, it regenerates every minute. There's a little hovering question mark in the top right that explains things, including the prompt to the models.
They have it available on the site under the (?) button:
"Create HTML/CSS of an analog clock showing ${time}. Include numbers (or numerals) if you wish, and have a CSS animated second hand. Make it responsive and use a white background. Return ONLY the HTML/CSS code with no markdown formatting."
Unfortunately, due to the government shutdown, the BLS inflation data for September 2025 is delayed from October 15 (when it would normally be released) until October 24 [1], so please check back then to see if he is >109 Cent.
Assuming future stability, the site will automatically update on the 15th of every month.
This is a powerful visual representation. I would suggest that the impact could be even stronger if you provided side-by-side images of 50 Cent, where the second is scaled up proportionately.
that’s a good idea. in future versions, i might need to consider multiple renderings as different economists likely prefer alternative visualizations of 50’s monetary adjustments
I don't mean to contradict myself, but the Big Mac index doesn't show the true inflation either. This is because it should be easier to make a Big Mac in 2025 than in 1995 due to automation.
I think they are rounding a float for the number display and not rounding for the image, as you can see different-sized image segments for the months where the number remains at 100 cents. You could still be correct; I have no way of verifying.
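Purely as a sketch of the suspected mismatch (the function name and structure here are hypothetical, not the site's actual code): the label rounds the value, while the image segment scales by the raw float, so two months can share a label yet draw different segments.

```javascript
// Hypothetical sketch: label is rounded, segment width is not.
function renderCents(value) {
  const label = `${Math.round(value)} cents`; // what the text shows
  const segmentWidth = value;                 // what the image scales by
  return { label, segmentWidth };
}

// Both display "100 cents", but the drawn segments differ:
console.log(renderCents(100.2)); // { label: "100 cents", segmentWidth: 100.2 }
console.log(renderCents(99.6));  // { label: "100 cents", segmentWidth: 99.6 }
```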
2. Copy and paste this into your browser location bar: javascript:void(document.getElementsByTagName("video")[0].playbackRate = 50/prompt("Inflation-adjusted 50 Cent value:"))
3. Enter the inflation-adjusted 50 Cent value, which, as of today, is 109.
Et voila, inflation-adjusted 50 Cent music, and anyone finding this later can adjust it to their current inflation-adjusted value.
I believe there are limits on how slow the browsers will playback video. This code is not guaranteed to work past any possible hyperinflations or massive deflations that may occur in the future.
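To make that limit explicit, here's a small sketch that clamps the computed rate before assigning it to `playbackRate`. The exact bounds vary by browser (Chromium is reported to clamp to roughly 0.0625–16); the function name and defaults below are my own, not part of any spec.

```javascript
// Sketch: compute a playback rate from the inflation-adjusted value
// (in cents) and clamp it to a range browsers are likely to honor.
// The 0.0625–16 defaults are an assumption based on Chromium's limits.
function inflationAdjustedRate(cents, min = 0.0625, max = 16) {
  const rate = 50 / cents;
  return Math.min(max, Math.max(min, rate));
}

// At 109 cents this yields roughly 0.459x playback speed; in a page
// you would assign it like:
//   document.querySelector("video").playbackRate = inflationAdjustedRate(109);
```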
If you're curious how that may sound with a more careful job done than the browsers will do with stretching, consider Beethoven's 9th symphony stretched to 24 hours: https://www.youtube.com/watch?v=JSJ9Bkhb1Q4&list=PLMEcbs3sHQ... Some of you may well legitimately love this. Obviously the frequency profile of doing this to a 50 Cent piece will be quite different, but it at least gives the idea.
[1]: It is sheer coincidence that this video ID ends in "Ass". This is "50 Cent - In Da Club (Official Music Video)" for those wondering.