
I have a related anecdote.

When I worked at Amazon on the Kindle Special Offers team (ads on your eink Kindle while it was sleeping), the first implementation of auto-generated ads was by someone who didn't know that properly converting RGB to grayscale was a smidge more complicated than just averaging the RGB channels. So for ~6 months in 2015ish, you may have seen a bunch of ads that looked pretty rough. I think I just needed to add a flag to the FFmpeg call to get it to convert RGB to luminance before mapping it to the 4-bit grayscale needed.
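
(For the curious, here's a minimal sketch of the difference, not the actual Kindle code: the weights are the standard Rec. 601 luma coefficients discussed further down the thread, and the 4-bit quantization is just an illustration.)

    # Minimal sketch (assumption: not the real Kindle pipeline) of naive average vs.
    # Rec. 601 luma, quantized down to the 4 bits of gray an eink panel can show.
    import numpy as np

    def to_gray_4bit(rgb, naive=False):
        """rgb: uint8 array of shape (H, W, 3). Returns values in 0..15."""
        rgb = rgb.astype(np.float32)
        if naive:
            gray = rgb.mean(axis=-1)            # plain channel average (the bug)
        else:
            gray = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
        return np.clip(np.round(gray / 255.0 * 15.0), 0, 15).astype(np.uint8)

With FFmpeg the same effect comes from letting it do a proper RGB-to-luma conversion instead of averaging the channels yourself; the exact flag would depend on the rest of the pipeline.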


I wouldn't worry about it too much, looking at ads is always a shitty experience. Correctly grayscaled or not.

True, though in the case of the Kindle they're not really intrusive (only appearing when it's off) and the price to remove them is pretty reasonable ($10 to remove them forever IIRC).

As far as ads go, that's not bad IMO.


The price of an ad-free original Kindle experience was $409. The $10 is on top of the price the user paid for the device.

Let's not distort the past. The ads were introduced a few years later with the Kindle Keyboard, which launched with an MSRP of $140 for the base model, or $115 with ads. That was a substantial discount on a product that was already cheap when it was released.

All for ads which are only visible when you aren't using the device anyway. Don't like them? Then buy other devices, pay to have them removed, get a cover to hide them, or just store it with the screen facing down when you aren't using it.


Yes and here in Europe they were introduced even later, with kindle 4 IIRC.

I don't think Kindle ads were available in my region in 2015, because I don't remember seeing them back then, but you were the lucky one who got to fix this classic mistake :-)

I remember trying out some of the home-made methods while implementing a creative-work section for a school assignment. It's surprising how "flat" the basic average looks until you actually respect the coefficients (usually some flavor of 0.21R + 0.72G + 0.07B). I bet it's even more apparent on a 4-bit display.


I remember using some photo editing software (Aperture, I think) that would let you customize the coefficients, and there were even presets giving different names to different sets of coefficients. Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.

>Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.

I went to a photoshop conference. There was a session on converting color to black and white. Basically at the end the presenter said you try a bunch of ways and pick the one that looks best.

(people there were really looking for the “one true way”)

I shot a lot of black and white film in college for our paper. One of my obsolete skills was thinking how an image would look in black and white while shooting, though I never understood the people who could look at a scene and decide to use a red filter..


This is actually a real bother to me with digital — I can never get a digital photo to follow the same B&W sensitivity curve as I had with film so I can never digitally reproduce what I “saw” when I took the photo.

Film still exists, and the hardware is cheap now!

I am shooting a lot of 120-format Ilford HP5+ these days. It's a different pace, a different way of thinking about the craft.


> I shot a lot of black and white film in college for our paper. One of my obsolete skills was thinking how an image would look in black and white while shooting, though I never understood the people who could look at a scene and decide to use a red filter..

Dark skies and dramatic clouds!

https://i.ibb.co/0RQmbBhJ/05.jpg

(shot on Rollei Superpan with a red filter and developed at home)


If you really want that old school NTSC look: 0.3R + 0.59G + 0.11B

Those are the coefficients I use regularly.


Interesting that the "NTSC" look you describe is essentially a rounded version of the coefficients quoted in the comment mentioning ppm2pgm. I don't know the lineage of the values you used, of course, but I found it interesting nonetheless. I imagine we'll never know, but it would be cool to be able to trace the path that led to their formula, as well as the path to you arriving at yours.

The NTSC color coefficients are the grandfather of all luminance coefficients.

It had to be precisely defined because of the requirements of backwards-compatible color transmission (YIQ is the common abbreviation for the NTSC color space, I being ~reddish and Q being ~blueish). Basically they treated B&W (technically monochrome) pictures the way B&W film and video tubes treated them: great in green, average in red, and poor in blue.

A bit unrelated: pre-color transition, the makeup used was actually slightly greenish too (which renders nicely in monochrome).


Regarding "the grandfather of all luminance coefficients": see https://www.earlytelevision.org/pdf/ntsc_signal_specificatio... from 1953.

Page 5 has:

    Eq' = 0.41 (Eb' - Ey') + 0.48 (Er' - Ey')
    Ei' = -0.27(Eb' - Ey') + 0.74 (Er' - Ey')
    Ey' = 0.30Er' + 0.59Eg' + 0.11Eb'
The last equation gives exactly those coefficients.

I was actually researching why PAL YUV has the same(-ish) coefficients, while forgetting that PAL is essentially a refinement of the NTSC color standard (PAL stands for phase-alternating line, which solved many of the color drift issues NTSC had early in its life).

It is the choice of the 3 primary colors and of the white point which determines the coefficients.

PAL and SECAM use different color primaries than the original NTSC, and a different white point, which leads to different coefficients.

However, the original color primaries and white point used by NTSC became obsolete very quickly, so they no longer corresponded to what TV sets could actually reproduce.

Eventually even NTSC used a set of primary colors close to that of PAL/SECAM, which was much later standardized by SMPTE in 1987. The broadcast signal continued to use the original formula for backwards compatibility, but the equipment processed the colors according to the updated primaries.

In 1990, Rec. 709 standardized a set of primaries intermediate between those of PAL/SECAM and SMPTE, which was later also adopted by sRGB.
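
To make the "primaries plus white point determine the coefficients" point concrete, here's a small sketch (my own illustration, not taken from the standards documents) that derives the luma weights from a set of chromaticities; plugging in the Rec. 709/sRGB primaries and D65 gives the familiar 0.2126/0.7152/0.0722:

    import numpy as np

    def luma_coefficients(primaries, white):
        """primaries: [(xr, yr), (xg, yg), (xb, yb)] chromaticities; white: (xw, yw).
        Returns the luma (Y) weights implied by those primaries and that white point."""
        def xyz(x, y):
            return np.array([x / y, 1.0, (1.0 - x - y) / y])
        M = np.column_stack([xyz(*p) for p in primaries])  # XYZ of unit-luminance primaries
        S = np.linalg.solve(M, xyz(*white))                # scale so R=G=B=1 reproduces white
        return M[1] * S                                    # Y row of the RGB->XYZ matrix

    # Rec. 709 / sRGB primaries with D65 white -> ~[0.2126, 0.7152, 0.0722]
    print(luma_coefficients([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)], (0.3127, 0.3290)))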


Worse, "NTSC" is not a single standard: Japan deviated from it so much that its primaries are defined by their own ARIB standard (notably a ~9000 K white point).

... okay, technically PAL and SECAM aren't single standards either, but they only vary in audio (analogue Zweikanalton versus digital NICAM), bandwidth placement (channel plan and the relative placement of the audio and video signals), and, uhm, teletext standard (French Antiope versus Britain's Teletext and Fastext).


(this is just a rant)

Honestly, the weird 16-235 (on 8-bit) color range and 60000/1001 fps limitations stem from the original NTSC standard, which is rather frustrating nowadays considering that neither the Japanese NTSC adaptation nor the European standards have them. Both the HDVS and HD-MAC standards define these things precisely (exactly 60 fps for HDVS and a 0-255 color range for HD-MAC*), but America being America...

* I know that HD-MAC is analog(ue), but it has an explicit digital step for transmission and it uses the whole 8 bits for the conversion!


Y'all are a gold mine. Thank you. I only knew it from my forays into computer graphics and making things look right on (now older) LCD TVs.

I pulled it from some old academic papers about why you can't just take max(uv.rgb) to do greyscale, nor can you just do float val = uv.r.

This gets further funky when we have BGR vs RGB and have to swizzle the bytes beforehand.
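
(Just to illustrate the channel-order point with a hypothetical helper, not from any particular codebase: the fix is simply to swizzle before weighting.)

    import numpy as np

    REC601 = np.array([0.299, 0.587, 0.114], dtype=np.float32)

    def luma_from_bgr(bgr):
        """bgr: uint8 array of shape (H, W, 3) in BGR order (as e.g. OpenCV loads images)."""
        rgb = bgr[..., ::-1]                    # swizzle BGR -> RGB before applying the weights
        return rgb.astype(np.float32) @ REC601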

Thanks for adding clarity and history to where those weights came from, why they exist at all, and the decision tree that got us there.

People don’t realize how many man hours went into those early decisions.


> People don’t realize how many man hours went into those early decisions.

In my "trying to hunt down the earliest reference for the coefficients" I came across "Television standards and practice; selected papers from the Proceedings of the National television system committee and its panels" at https://archive.org/details/televisionstanda00natirich/mode/... which you may enjoy. The "problem" in trying to find the NTSC color values is that the collection of papers is from 1943... and color TV didn't become available until the 50s (there is some mention of color but I couldn't find it) - most of the questions of color are phrased with "should".


This is why I love graphics and game engines. It's this focal point of computer science, art, color theory, physics, practical implications for other systems around the globe, and humanities.

I kept a journal as a teenager when I started, and later digitized it when I was in my 20s. The biggest influence was SIGGRAPH papers, which are now available online, such as "Color Gamut Transform Pairs" (https://www.researchgate.net/publication/233784968_Color_Gam...).

I bought all the GPU Gems books, all the ShaderX books (shout out to Wolfgang Engel, his books helped me tremendously), and all the GPU pro books. Most of these are available online now but I had sagging bookshelves full of this stuff in my 20s.

Now in my late 40s, I live like an old Japanese man, with minimalism and very little clutter. All my reading is digital, iPad-consumable. All my work is online, cloud-based, or a VDI or SSH session away. I still enjoy learning, but since I don't have a prestigious degree in the subject, I feel it's better to let others teach it. I'm just glad I was able to build something with that knowledge and release it into the world.


Cool. I could have been clearer in my post; as I understand it, actual NTSC circuitry used different coefficients for the RGBx and RGBy values, and I didn't take time to look up the official standard. My specific pondering was based on the assumption that neither the ppm2pgm formula nor the parent's "NTSC" formula was an exact equivalent of NTSC, and my "ADHD" thoughts wondered about the provenance of how each poster came to use their respective approximations. While writing this, I realize that my actual ponderings are less interesting than the responses they generated, so thanks everyone for your insightful replies.

There are no stupid questions, only stupid answers. It's questions that help us understand, and knowledge is power.

I'm sure it has its roots in the Amiga or TV broadcasting. ppm2pgm is old school too, so we all tended to use the same defaults.

Like q3_sqrt


Yep, used in the early MacOS color picker as well when displaying greyscale from RGB values. The three weights (which of course add to 1.0) clearly show a preference for the green channel for luminosity (as was discussed in the article).

I'm not sure I understand your complaint. The "expected result" is either of the last two images (depending on your preference), and one of the main points of the post is to challenge the notion of "ground truth" in the first place.

Not a complaint, but both the final images have poor contrast, lighting, saturation and colour balance, making them a disappointing target for an explanation of how these elements are produced from raw sensor data.

But anyway, I enjoyed the article.


That’s because it requires much more sophisticated processing to produce pleasing results. The article is showing you the absolute basic steps in the processing pipeline and also that you don’t really want an image that is ‘unprocessed’ to that extent (because it looks gross).

No, the last image is the "camera" version of it, though it's not clear if he means the realtime processing before snapping the picture or the postprocessing that happens right after. Anyway, we have no way to judge how far the basic-processed raw picture is from a pleasing or normal-looking result, because a) the lighting is so bad and artificial that we have no idea how "normal" should look, and b) the subject is unpleasant and the quality "gross" in any case.

The bottle in question seems to be glass, so many of those questions aren't really relevant. Glass doesn't degrade much from UV light, or at all from biological activity, whether on land or under 7 miles of ocean. Glass is denser than water, so it sank.

Because it's an LLM spambot, it "saw" a couple of keywords and wrote a comment that's vaguely relevant to the article at hand. Do help with kicking it out by flagging its comments.

Ugh, I thought at first you might just be being mean, but a quick look at its other comments 100% confirms it. I don't understand — what's even the point of such comment slop? I mean, on Reddit it's for karma and selling accounts or whatever. But here on HN?

> AI placeholders during development as it can majorly speed up/unblock

Zero-effort placeholders have existed for decades without GenAI, and were better at the job. The ideal placeholder gives an idea of what needs to go there, while also being obvious that it needs to be replaced. This [1] is an example of an ideal placeholder, and it was made without GenAI. It's bad, and that's good!

[1] https://www.reddit.com/r/totalwar/comments/1l9j2kz/new_amazi...

A GenAI placeholder fails at both halves of what a placeholder needs to do. There's no benefit for a placeholder to be good enough to fly under the radar unless you want it to be able to sneak through.


it's not better, as they fundamentally fail to capture the atmosphere and look of a scene

this means that for some use cases (early QA, 3D design tweaks before the final graphic is available, etc.) they are fully useless

it's both viable and strongly preferable to track placeholders in some consistent way unrelated to their looks (e.g. have a bool property associated with each placeholder). Otherwise you might overlook some rarely seen corner-case textures when doing the final cleanup

so no, placeholders don't need to be obvious at all, and as mentioned, looking out of place can be an issue for some usages. Having something resembling the final design is better _iff_ it's cheap to do.

so no, they aren't failing, they are succeeding, if you have proper tooling and don't rely on a crutch like "I will surely notice them because they look bad"


Are they necessarily smoking and vaping cannabis though? My vape is visually pretty similar to a tobacco vape, and vaping doesn't usually have much odor either way (unless it's scented vape juice, but I'm not terribly worried about cognitive impairment from bubble gum).

As far as my experience goes, yes. I can tell by the scent. And actually at stoplights I can smell it even with windows rolled up.

The TV would definitely spy on you; the connected device might not. And even if it does, you can pick one from a company you mind less, or one you've already given up on trying to stop from spying on you. For me, that means a Chromecast; I haven't managed the effort to de-Google, and most of what I watch is YouTube anyway. For some that might be Apple, who is probably the least egregious offender among the big companies. Or you could use a Raspberry Pi or other small computer and have even more control, at the cost of higher effort.

Yeah, Tor Books publishes without DRM, and they seem to be one of the bigger SFF publishers these days. John Scalzi, George R.R. Martin (though not the ASoIaF books), Robert Jordan, Annalee Newitz, Charlie Jane Anders, and a bunch of other SFF authors I recognize. I'm sure there are others, but all the ones I've noticed have been from Tor.

Indeed, and I love Tor for this. Brandon Sanderson has also come out against DRM. I already loved the man's books, and now I love the man too.

They view EVs as a moral threat. Can't get cognitive dissonance about your neighbor's dope new EV with perks your new ICE doesn't have, if your neighbor can't get EVs either. Loads of examples of "this is worse, so we're going to make it worse, so we're sure that it is worse".


I wish my EV had dope perks... too bad California is dead set on making EV charging more expensive than gas lol.


Yeah, I was being a bit glib about that part.

IMO, the biggest perk is dependent on the ability to charge at home. If you can, then the price per mile is about half (if Google is right that California rates are about $0.30/kWh) or less than for an ICE. But even if the $/mile were equal, never needing to visit a gas station again is itself the biggest perk.

And sure there are people for whom an EV won't meet their range needs, but probably way fewer than think that's the case for them.


It's closer to $0.40-0.70/kWh. My lowest rate is $0.40/kWh, and that tier goes away insanely fast doing almost nothing. PG&E is criminally priced in CA. I get maybe 200 kWh before it jumps to the $0.50/kWh rate, and it keeps jumping.

I don't have AC. I don't have anything. That's just a fridge, a computer, and a little bit of cooking. I genuinely have no idea how I even hit 10 kWh/day because I have nearly nothing on in this place.


>But even if the $/mile were equal, never needing to visit a gas station again is itself the biggest perk.

I maybe fuel up once a month unless I'm doing a road trip. It isn't that big a deal.


Charge at home, that's the whole point. My F-150 Lightning costs about $14 in electricity a month for about 600 miles on average.


Home electricity in California is about 45¢/kWh. If your F-150's mileage is typical, you're getting about 2 miles per kWh. 600 miles would cost about $135 here in California. Meanwhile, a 20 mpg gas car would cost about $110/month at $3.65/gallon.

You must be paying about 4.7 cents per kWh, or about 90% less than you'd pay here.
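
(Back-of-envelope, using only the figures quoted above, which are of course rough assumptions:)

    # Rough sanity check of the numbers above (all figures are the quoted assumptions).
    miles = 600
    mi_per_kwh = 2.0        # rough efficiency for a heavy EV truck
    ca_rate = 0.45          # $/kWh, quoted California home rate
    mpg, gas = 20, 3.65     # comparison gas car and $/gallon

    kwh = miles / mi_per_kwh        # 300 kWh
    print(kwh * ca_rate)            # ~$135/month at California rates
    print(miles / mpg * gas)        # ~$110/month for the 20 mpg gas car
    print(14 / kwh)                 # ~$0.047/kWh implied by a $14 monthly bill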


That's only certain parts of California, right? I mean, a big part, but definitely not all of it. PG&E is a tire fire, I feel bad for you guys.


Everywhere I can reach with an extension cord. :)


One would think that California would be the first place to have regulations for cheap electricity.


7¢/kWh, 11¢/kWh at peak hours

Those prices are wild.


They also view the Chinese as a moral threat. They'd rather set the country on fire than cede the territory that small Chinese EVs could take (which, given current American consumer preferences, would likely be rather small).


As soon as you encode imperfect data in an immutable key, you always have to re-check it when you retrieve it. If that piece of data isn't absolutely 100% guaranteed to be perfect, then you have to query both halves of the load-balanced DB anyway.


I'm extremely bearish on AI, but I'm not sure I agree with the framing "not even Amazon could..." All of the advertising around Alexa focused on the simple narrow use cases that people now use it for, and I'm inclined to assume that advertising is part of it. I think another part is probably that voice is really just not that fantastic of an interface for any other kind of interactions. I don't find it surprising that OpenAI's whole framing around ChatGPT, of it being a text-based chat window (as are the other LLMs), is where most of the use seems to happen. I like it best when Alexa acts as a terse butler ("turn on the lights" "done"), not a chatty engaging conversationalist.

