zehaeva's comments | Hacker News

I think at this point you need to consider how the human eye sees color. It's not like each wavelength gets picked up and then communicated perfectly.

(I'm going to skip over some basic stuff, and use some generalities)

Each cone in the eye responds to a range of frequencies. This means that unless a light is at the extreme low, or high, end of the frequencies the human eye can discern, you are going to have two, or all three, cone types responding. The strength of those responses is what your brain uses to interpret the color that you see.

The real problem is that out in space there is no attenuation of sunlight; it's bright. Super crazy bright. It basically overloads all of your cones, and rods, all at once. There's no way for your brain to find a signal of "oh, there's more high-frequency light here, so interpret this as bluer than normal" because all of the signals are maxed out. If you max out all of the signals, you get white. It doesn't matter that in absolute terms there's more blue; the lower and mid frequencies are also maxed out.
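
To make that concrete, here's a rough Python sketch. The sensitivity curves, spectra, and hard clipping below are made-up placeholders rather than real colorimetry, but they show the point: once every channel clips, the relative differences that would have read as "bluer" are gone.

    def cone_responses(spectrum, sensitivities, ceiling=1.0):
        # Integrate the light against each cone's sensitivity, then clip:
        # a photoreceptor can only signal so much.
        responses = {}
        for cone, curve in sensitivities.items():
            raw = sum(power * curve.get(wavelength, 0.0)
                      for wavelength, power in spectrum.items())
            responses[cone] = min(raw, ceiling)
        return responses

    # Toy S/M/L sensitivities at three wavelengths (nm) -- illustrative only.
    SENS = {
        "S": {450: 1.0, 550: 0.1, 650: 0.0},
        "M": {450: 0.2, 550: 1.0, 650: 0.3},
        "L": {450: 0.1, 550: 0.7, 650: 1.0},
    }

    dim_bluish_light = {450: 0.6, 550: 0.3, 650: 0.2}        # more short-wave power
    unfiltered_sunlight = {450: 60.0, 550: 30.0, 650: 20.0}  # same shape, ~100x brighter

    print(cone_responses(dim_bluish_light, SENS))     # S > M > L: reads as bluish
    print(cone_responses(unfiltered_sunlight, SENS))  # all clipped at 1.0: reads as white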


IIUC, saturation is a (not uncommon) distractor here, as you get the same observation when the light is brought below saturation by a neutral filter, even on the "ground" with low air mass (Sun vertical, at altitude, etc.).

Off topic, but I feel like this could be made into a Zen Koan from The Codeless Code[0]. You're almost there with it!

[0] https://thecodelesscode.com/


Off topic, but the Codeless Code isn't Zen Koans. It's formatted like Zen Koans, and it's entertaining and brings value to the world, but it isn't the same thing.


I am blown away that one of the programmers on Theme Park is now one of the leading researchers in modern AI. Wonders will never cease.


I think Demis was just so embarrassed by the AI in Black & White that he constructed his life around fixing it. I'm expecting a patch drop from him as his last contribution to humanity.


What he's talking about is that the CGI in B5 was rendered in 4:3 and not in 16:9 like the rest of the show. When they did the "high res" releases, they had to make the choice of doing everything in 16:9 or in 4:3.

In 4:3 it looks good, like the original airing of the show; in 16:9 any non-digital/composite shot looks freaking fantastic. But once you get to any digital or composite shot, it takes a nosedive in quality.


There was recent (last few years) work done to fix all of this.

https://www.modeemi.fi/~leopold/Babylon5/DVD/DVDTransfer.htm...

I have these versions, and they're worlds better than everything beforehand. It's also a good opportunity for a re-watch!


A very well-written show. It's crazy to think a lot of people don't even get to watch it, because it's so hard to find. I couldn't find it anywhere (except on DVDs), so I had to resort to torrents.


There's a 4K 4:3 remaster streaming on HBO Max. It wasn't exactly well advertised (certainly not to the extent that the Mad Men advertising has been hard to miss), but it exists and is a good way to watch the show. It feels a bit less cinematic not being in 16:9, but it looks good other than that, because the show was shot to be 4:3 safe (as that was still the most common TV aspect ratio at the time).


Also a US-only thing. Torrents are keeping the rest of us alive.


I loved B5's story, but no remaster is going to fix the cheeseball acting of the show. It felt 200 years old even when it was airing. (Sigh, I'm probably still going to rewatch it...)


I think Garibaldi was actually quite good.

Sheridan was maybe a bit cheesy at times but definitely gave off that vibe of a leader one could follow into battle.


It's streaming, for free, on Tubi.


Not available outside the US, I think. Torrents are then just more convenient.


United States, Canada, Mexico, Australia, United Kingdom. Parts of Latin America: Costa Rica, Ecuador, El Salvador, Guatemala, Panama, Puerto Rico.

If you want to pirate it: with yt-dlp and the URL, you can download items from the Tubi web site. (Don't ask me how I know.)


Isn't this basically what Jonathan Blow is trying to do with his "new" programming language, Jai?


Isn't Jai still mostly a C-like, with manual memory management and other archaic rituals?

See, SQL for example doesn't care about the hardware or the internals of the database it runs on; why couldn't we have something like that for gameplay?


When I went to Portugal I was struck by how much Portuguese there does sound like Spanish with a Russian accent!


Part of this is the "dark L" sound.


I’d guess that the sibilants, consonant clusters, and/or vowel reduction would play a big role.


I don't think this _really_ contributes to the conversation, but I think we can sum this entire post up with just one XKCD comic.

https://xkcd.com/927/


The point of Dune, or the Butlerian Jihad within Dune, isn't that humans are more capable than the Thinking Machines. It is that humans should be the authors of their own destiny, and that the Thinking Machines were enslaving humanity and were going to exterminate it. Just like how the Imperium was enslaving all of humanity and was going to lead to its extinction. This was seen, incompletely, by Paul and later, completely, by Leto II, who then spent 10,000 years working through a plan to allow humanity to escape extinction and enslavement.

Dune's a wild ride man!


I am reading Dune Messiah now and it clearly isn't as good as the first book. I consider the story more of a self-contained book than a series.

Taking the first book by itself, it doesn't speak much about the relationship between man and machine. The fundamental themes are rooted in man's relationship with ecology (both as the cause and effect).


A lot of what is great about Dune lies in the inversions of the story past the first novel.

If you take the first book alone, you're left with only one facet of a much grander story. You're also left with the idea of a white savior story that says might makes right, which really isn't what was going on at all.


>You're also left with the idea of a white savior story that says might makes right

I think the first book is more nuanced than that. It's a demonstration of the Nietzschean perspective, but it doesn't make any assertions about morality.

The story shows us how humans are products of their environment: striving for peace or "morality" is futile, because peace makes men weak, which creates a power vacuum, which ends peace. Similarly, being warlike is also futile, because even if you succeed, it guarantees that you will become complacent and weak. It's never said outright, but all of the political theory in the book is based on the idea that "hard times make strong men, strong men make good times, good times make weak men, weak men make bad times". It's like the thesis of "Guns, Germs, and Steel": Frank Herbert proposes that in the long term no cultural or racial differences matter, that everything is just a product of environmental factors. In a way it's also the most liberal perspective you can have. But at the same time it is also very illiberal, because in the short term race and culture do matter.

The "moral" of dune is that political leaders don't really have agency because they are bound by their relationships that define power in the first place, which are a product of the environment. Instead, the real power is held by the philosopher-kings outside of the throne because they have the ability to change the environment (like pardot kynes, who is the self-insert for frank herbert). The book asks us to choose individual agency and understanding over the futility of political games.

From the use of propaganda to control the city-dwellers in the beginning of the book to the change in Paul's attitudes towards the end, I think the transactional nature of the Atreides' goodwill is pretty plainly spelled out for us. I mean, we learn by the end that Paul is part Harkonnen by blood, and in the same way as the Harkonnen use of the "brutal" Rabban and the "angelic" Feyd, it's all public relations. Morality is a tool of control.

I think the reason you are uneasy about the idea of the "white savior" playing a role in the book is that you actually subscribe to this fake morality yourself, in real life. You are trying to pigeonhole the story like it's "Star Wars" or something. Dune is against the idea of "morality" itself. By bringing up the "white savior" concept, you are clearly thinking in terms of morality. Having some morality puts you at odds with the real point of the book, which is where the unease comes from. You want the dissonance to be resolved, but the real story of Dune is open-ended.


I have said much the same about Dune to others in my own life, about how the main thesis is "hard times make strong men, ...", but that still boils down to might makes right.

Saying that the first book alone doesn't make any assertions about morality is somewhat hilarious. The Baron is queer-coded, and so is Feyd, while the "good guys" are strong, manly men. Even just the idea that "hard times make strong men, ..." is a morality in and of itself.

I never said I was uneasy about the idea of a white savior; you are reading far too much into my beliefs and ideals. I would also appreciate it if you did not project onto me any of your imaginings of my beliefs. You do not know me.

That said, if you have only read the first book, you truly are getting only one small facet of the story that Herbert was trying to tell. A lot of what is laid out in the first novel is inverted and overturned by the third and fourth novels.

Finally, you have written a lot about one book out of a long series of books. I would suggest that, just as you are wont to project some sort of belief onto me, you, too, are projecting too much onto just the first entry of a much, much larger epic.


The later books deal with it more.


What if you had told it again that you don't think that's right? Would it have stuck to its guns and gone "oh, no, I am right here," or would it have backed down, said "Oh, silly me, you're right, here's the real dosage!" and given you something wrong again?

I do agree that to get the full use out of an LLM you should have some familiarity with what you're asking about. If you didn't already have a sense of what the dosage should be, why wouldn't 100mcg seem like the right one?


I replied in the same thread with "Are you sure? That sounds like a low dose." It stuck to the (correct) recommendation in the second response, but added in a few use cases for higher doses. So it seems like it stuck to its guns for the most part.

For things like this, it would definitely be better for it to act more like a search engine and direct me to trustworthy sources for the information rather than try to provide the information directly.


I noticed this recently when I saw someone post an AI-generated map of Europe which was all wrong. I tried the same and asked ChatGPT to generate a map of Ireland, and it was wrong too. So then I asked it to find me some accurate maps of Ireland, and instead of generating one it gave me images and links to proper websites.

I'll definitely remember to put "generate" vs "find" in my prompts depending on what I'm looking for. Not quite sure how you would train the model to know which answer is more suitable.


I'm more reminded of Tom Scott's talk at the Royal Institution "There is no Algorithm for Truth"[0].

A lot of what you're talking about is the ability to detect Truth, or even truth!

[0] https://www.youtube.com/watch?v=leX541Dr2rU


> I'm more reminded of Tom Scott's talk at the Royal Institution "There is no Algorithm for Truth"[0].

Isn't there?

https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_induc...


There are limits to such algorithms, as proven by Kurt Gödel.

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...


True, and in the case of Solomonoff induction, incompleteness manifests in the calculation of the Kolmogorov complexity used to order programs. But what incompleteness actually proves is that there is no single algorithm for truth; a collection of algorithms can make up for each other's weaknesses in many ways. E.g., while no single algorithm can solve the halting problem, different algorithms can cover cases for which the others fail to prove a definitive halting result.
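
As a toy sketch of that last point: the "programs" below are just Python generator functions, and both checkers are deliberately naive, but each one settles cases the other can't.

    import inspect

    def run_briefly(program, budget=10_000):
        # Can prove "halts" by actually running the program for a while;
        # can never prove "loops".
        for steps, _ in enumerate(program()):
            if steps >= budget:
                return None            # gave up: unknown
        return "halts"

    def scan_source(program):
        # Can prove "loops" for one blatant source pattern; otherwise unknown.
        src = inspect.getsource(program)
        if "while True" in src and "break" not in src and "return" not in src:
            return "loops"
        return None

    def combined(program, checkers=(run_briefly, scan_source)):
        # Neither checker decides every program, but together they cover more.
        for check in checkers:
            verdict = check(program)
            if verdict is not None:
                return verdict
        return "unknown"

    def countdown():        # halts after a few steps
        n = 3
        while n > 0:
            n -= 1
            yield

    def forever():          # obviously never halts
        while True:
            yield

    print(combined(countdown))  # run_briefly settles it: "halts"
    print(combined(forever))    # scan_source settles it: "loops"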

I'm not convinced you can't produce a pretty robust system that produces a pretty darn good approximation of truth, in the limit. Incompleteness also rears its head in type inference for programming languages, but the cases for which it fails are typically not programs of any interest, or not programs that would be understandable to humans. I think the relevance of incompleteness elsewhere is sometimes overblown in exactly this way.


If there exists some such set of algorithms that could get a "pretty darn good approximation of truth," I would be extremely happy.

Given the pushes for political truths in all of the LLMs, I am uncertain whether they would be implemented even if they existed.


You're really missing the point with LLMs and truth if you're appealing to Gödel's Incompleteness Theorem.


Why?


The limitations of “truth knowing” using an autoregressive transformer are much more pressing than anything implied by Gödel’s theorem. This is like appealing to a result from quantum physics to explain why a car with no wheels isn’t going to drive anywhere.

I hate it when this theorem comes up in these sorts of "gotchas" when discussing LLMs: "but there exist true statements without a proof! So LLMs can never be perfect! QED." You can apply identical logic to humans. It adds nothing to the discussion.


Ah understood, yes that is a bit ridiculous.


That Wikipedia article is annoyingly scant on what assumptions are needed for the philosophical conclusions of Solomonoff's method to hold. (For that matter, it's also scant on the actual mathematical statements.) As far as I can tell, it's something like "If there exists some algorithm that always generates True predictions (or perhaps some sequence of algorithms that make predictions within some epsilon of error?), then you can learn that algorithm in the limit, by listing through all algorithms by length and filtering them by which predict your current set of observations."

But as mentioned, it's uncomputable, and the relative lack of success of AIXI-based approaches suggests that it's not even as well-approximable as advertised. Also, assuming that there exists no single finite algorithm for Truth, Solomonoff's method will never get you all the way there.
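
As a cartoon of that "list programs by length and filter by agreement with observations" idea: the hypothesis class below is just "repeat this bit pattern forever" rather than a universal machine, so it sidesteps the uncomputability entirely, but it shows the enumerate-filter-weight loop.

    from itertools import product

    def hypotheses(max_len):
        # Every bit pattern up to max_len, read as "repeat this forever".
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):
                yield "".join(bits)

    def consistent(pattern, observed):
        # Does repeating `pattern` reproduce the observed prefix?
        stream = (pattern * (len(observed) // len(pattern) + 1))[:len(observed)]
        return stream == observed

    def predict_next(observed, max_len=6):
        # Keep only hypotheses that match the observations, weight them by
        # 2^-length (shorter "programs" count for more), and vote on the next bit.
        weights = {"0": 0.0, "1": 0.0}
        for h in hypotheses(max_len):
            if consistent(h, observed):
                next_bit = h[len(observed) % len(h)]
                weights[next_bit] += 2.0 ** -len(h)
        return max(weights, key=weights.get), weights

    print(predict_next("010101"))  # the short pattern "01" dominates: predicts "0"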


> "computability and completeness are mutually exclusive: any complete theory must be uncomputable."

This seems to be baked into our reality/universe. So many duals like this. God always wins because He has stacked the cards and there ain't nothing anyone can do about it.

