Hacker News | thousand_nights's comments

the bayer pattern is one of those things that makes me irrationally angry, in the true sense, based on my ignorance of the subject

what's so special about green? oh so just because our eyes are more sensitive to green we should dedicate double the area to green in camera sensors? i mean, probably yes. but still. (⩺_⩹)
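For anyone wondering what the fuss is about: the Bayer pattern repeats a 2x2 RGGB tile across the sensor, so green really does get half the area. A quick sketch:

```python
# The standard Bayer mosaic repeats a 2x2 RGGB tile across the sensor.
tile = [["R", "G"],
        ["G", "B"]]

# Tile an 8x8 sensor and count how much area each color filter gets.
sensor = [[tile[r % 2][c % 2] for c in range(8)] for r in range(8)]
flat = [px for row in sensor for px in row]
for color in "RGB":
    print(color, flat.count(color) / len(flat))
# Green covers 50% of the sites; red and blue get 25% each.
```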


Green is in the center of the visible spectrum of light (notice the G in the middle of ROYGBIV), so evolution should theoretically optimize for green light absorption. An interesting article on why plants typically reflect that wavelength and absorb the others: https://en.wikipedia.org/wiki/Purple_Earth_hypothesis

Green is the highest energy light emitted by our sun, from any part of the entire light spectrum, which is why green appears in the middle of the visible spectrum. The visible spectrum basically exists because we "grew up" with a sun that blasts that frequency range more than any other part of the light spectrum.

I have to wonder what our planet would look like if the spectrum shifts over time. Would plants also shift their reflected light? Would eyes subtly change across species? Of course, there would probably be larger issues at play around having a survivable environment … but still, fun to ponder.

That comment does not make sense. Do you mean the sun emits its peak intensity at green? (I don't believe that is true either, but at least it would make a physically sensical statement.) To clarify why the statement does not make sense: the energy of light is directly proportional to its frequency, so saying that green is the highest-energy light the sun emits is saying the sun does not emit any light at a frequency higher than green, i.e. no blue light, no UV... That's obviously not true.

> Do you mean the sun emits its peak intensity at green

That's presumably what they mean. It's more or less true, except the color in question is at the green / yellow transition.

See e.g. https://s3-us-west-2.amazonaws.com/courses-images-archive-re...


> Do you mean the sun emits its peak intensity at green (I don't believe that is true either, but at least it would make a physically sensical statement).

Yes, that's what I meant, as I was sloppy with my language, and it's definitely true.

https://www.sciencedirect.com/topics/physics-and-astronomy/s...


Several reasons:

- Silicon efficiency (QE) peaks in the green.
- The green spectral response curve is close to the luminance curve humans see, like you said.
- Twice the pixels to increase the effective resolution in the green/luminance channel; the color channels in YUV contribute almost no detail.

Why are YUV and other luminance-chrominance color spaces important for an RGB input? Because many processing steps and encoders work in YUV colorspaces. This wasn't really covered in the article.
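A quick sketch of why luminance leans on green, using the BT.601 luma coefficients (BT.709 weights green even more heavily):

```python
# BT.601 luma: green carries the largest weight, which is one reason
# doubling green pixels mostly buys luminance resolution.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)  # scaled B-Y chroma difference
    v = 0.877 * (r - y)  # scaled R-Y chroma difference
    return y, u, v

y, u, v = rgb_to_yuv(0.0, 1.0, 0.0)  # pure green
print(round(y, 3))  # 0.587: over half of full luma comes from green alone
```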


Not sure why it would evoke such strong sentiments, but if you don't like the Bayer filter, know that some true monochrome cameras don't use it and make every sensor pixel available to the final image.

For instance, the Leica M series have specific monochrome versions with huge resolutions and better monochrome rendering.

You can also modify some cameras and remove the filter, but the results usually need processing. A side effect is that the now exposed sensor is more sensitive to both ends of the spectrum.


Not to mention that there are non-Bayer cameras, ranging from the Sigma Foveon and Quattro sensors, which use stacked layers to separate color entirely differently, to the Fuji EXR and X-Trans sensors.

You think that's bad? Imagine finding out that all video still encodes colour at half resolution simply because that is how analog tv worked.

I don't think that's correct. It's not "all video" - you can easily encode video without chroma subsampling - and it's not because this is how analog TV worked, but rather for the same reason why analog TV worked this way, which is the fact that it lets you encode significantly less data with barely noticeable quality loss. JPEGs do the same thing.
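A back-of-the-envelope sketch of the savings from 4:2:0, the most common subsampling scheme (luma at full resolution, each chroma plane halved in both dimensions):

```python
# 4:2:0 chroma subsampling: full-resolution luma plane plus two chroma
# planes at half resolution in each dimension.
w, h = 1920, 1080
full_rgb = 3 * w * h                      # three full-resolution planes
yuv420 = w * h + 2 * (w // 2) * (h // 2)  # Y + subsampled U and V
print(yuv420 / full_rgb)  # 0.5: half the samples before any codec work
```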

It's a very crude method; with modern codecs I would be very surprised if you didn't get a better image just by encoding the chroma at a lower bitrate.

Isn't it the other way round? We did and still do chroma subsampling _because_ we don't see that much of a difference?

If the Bayer pattern makes you angry, I imagine it would really piss you off to realize that the whole concept of encoding an experienced color by a finite number of component colors is fundamentally species-specific and tied to the details of our specific color sensors.

To truly record an appearance without reference to the sensory system of our species, you would need to encode the full electromagnetic spectrum from each point. Even then, you would still need to decide on a cutoff for the spectrum.

...and hope that nobody ever told you about coherence phenomena.


nta you're replying to, but as someone who doesn't know rust, on first glance it seems like it's littered with too many special symbols and very verbose. as i understand it this is required because of the very granular low level control rust offers

maybe unreadable is too strong of a word, but there is a valid point of it looking unapproachable to someone new


I think the main issue people who don't like the syntax have with it is that it's dense. We can imagine a much less dense syntax that preserves the same semantics, but IMO it'd be far worse.

Using matklad's first example from his article on how the issue is more the semantics[1]

    pub fn read<P: AsRef<Path>>(path: P) -> io::Result<Vec<u8>> {
      fn inner(path: &Path) -> io::Result<Vec<u8>> {
        let mut file = File::open(path)?;
        let mut bytes = Vec::new();
        file.read_to_end(&mut bytes)?;
        Ok(bytes)
      }
      inner(path.as_ref())
    }
we can imagine a much less symbol-heavy syntax inspired by POSIX shell, FORTH, & ADA:

    generic
        type P is Path containedBy AsRef
    public function read takes type Path named path returns u8 containedBy Vector containedBy Result fromModule io
      function inner takes type reference to Path named path returns u8 containedBy Vector containedBy Result fromModule io
        try
            let mutable file = path open fromModule File 
        let mutable bytes = new fromModule Vector
        try
            mutable reference to bytes file.read_to_end
        bytes Ok return
      noitcnuf
      path as_ref inner return
    noitcnuf
and I think we'll all agree that's much less readable even though the only punctuation is `=` and `.`. So "symbol heavy" isn't a root cause of the confusion, it's trivial to make worse syntax with fewer symbols. And I like RPN syntax & FORTH.

[1] https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html


That might be an interesting extension to a dev environment or Git: convert terse Rust into a semi-verbose explanation.

Sort of like training wheels, eventually you stop using it.


People often misuse unreadable when they mean unfamiliar. Rust really isn't that difficult to read when you get used to it.


Chinese isn't that difficult to read when you get used to it, too.


> littered with too many special symbols and very verbose

This seems kinda self-contradicting. Special symbols are there to make the syntax terse, not verbose. Perhaps your issue is not with how things are written, but that there's a lot of information for something that seems simpler. In other words, a lot of semantic complexity rather than an issue with syntax.


I think it's also that Rust needs you to be very explicit about things that are very incidental to the intent of your code. In a sense that's true of C, but in C worrying about those things isn't embedded in the syntax, it's in lines of code that are readable (but can also go unwritten or be written wrong). In the GCed languages Rust actually competes with (outside the kernel) — think more like C# or Kotlin, less like Python — you do not have to manage that incidental complexity, which makes Rust look 'janky'.


yes, first thing i thought of. although i'm quite confident it's still outside the scope of our lifetimes, i do worry for future generations


with gemini you have to spend 30 minutes deleting hundreds of useless comments littered in the code that just describe what the code itself does


The comments may improve code quality because they're a way for the LLM to use a scratchpad for locally specific reasoning before writing the code block that follows, which would be more difficult for the LLM to just one-shot.

You could write a postprocessing script to strip the comments so you don't have to do it manually.
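For Python sources, a minimal sketch of such a postprocessing script using the standard tokenize module (other languages would need their own tokenizer; `strip_comments` is a hypothetical name):

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Drop # comments from Python source, leaving the code intact."""
    tokens = [
        tok for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type != tokenize.COMMENT
    ]
    # untokenize pads the gaps left by removed tokens with spaces,
    # so line/column structure of the remaining code is preserved.
    return tokenize.untokenize(tokens)

print(strip_comments("x = 1  # set x\ny = x + 1\n"))
```

Trailing whitespace where comments used to be can be cleaned up with a second pass or a formatter.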


I haven't had a comment generated by 3.0 pro at all unless specified.


exactly, people have preferences, i don't get how this turned into white vs dark mode supremacy war with people seething and attacking each other over what should be a boolean config setting


> If you can capture that market for next 20 years, it's worth $200 billion.

that's like 5% of NVIDIA's current market cap. sounds like peanuts when you lay it out like that


Perhaps.

But that's just the USA's software developers in their first year after graduating. Software devs are 1% of the US job market, the first year after graduation is (66-21=45 years, 1/45 ~= 2%) of a working life, and the US is just 4% of the world's population / 25% of GDP.

For the 1% to matter, there have to be other jobs that LLMs can do as well as a fresh graduate. I don't know, are LLMs like someone the first year out of law school or medical school, or are those schools better than software? Certainly the home robotics AIs are nowhere near ready yet: no plumber, no driver (despite the news about new car AIs), would you trust an Optimus to cut your hair? etc.

For the 2% to matter, depends how seriously you take the projections of improvements. Myself, I do not. Looks like exponential improvements come at exponential costs, and you run out of money to spend for further improvements very quickly.

For the 4% to matter, depends on how fast other economies grow. 4% by population, about 25% by GDP. I believe China is still growing quite fast, and likely to continue. Them getting +160% growth, and thus 2.6x the money available to burn on AI tokens, over the next 20 years would be unsurprising.

All in all, I don't think the USA is competent enough at large-scale projects to handle the infrastructure that this kind of AI would need, so I think it's a bubble and will burst before 2030 because of that. China seems to be able to pull off this kind of infrastructure, so may pull ahead after the US does whatever it does.
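For what it's worth, the fractions used above check out; a quick sanity check of the arithmetic:

```python
# Sanity-checking the fractions in the comment above.
working_life = 66 - 21               # years from graduation to retirement
first_year_share = 1 / working_life
print(round(first_year_share, 3))    # 0.022, i.e. roughly 2% of a career

# "+160% growth" means multiplying the starting amount by 2.6
growth = 1 + 1.60
print(round(growth, 2))              # 2.6
```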


> For the 1% to matter, there have to be other jobs that LLMs can do as well as a fresh graduate. I don't know, are LLMs like someone the first year out of law school or medical school, or are those schools better than software?

Before looking to medical and law schools, I might look to middle-manager school or salesperson school or bookkeeper school.

I don’t know enough to speculate even beyond those crude guesses, but as I thought about this question, I found it interesting to skim the US’ employment-by-detailed-occupation chart:

https://www.bls.gov/cps/cpsaat11b.htm


doesn't sweden just.. list everyone's full name, address, job, birthday, age and stuff on a public website?

i always found this so weird, it's like a stalker's paradise


yeah but the pharma companies are only in the business of selling drugs so they would need to diversify into retirement homes or something to profit from actually curing people


Or some kind of organization, some kind of union of all people, could pool their money and invest in such a thing.


That sounds like theft!

/s


I mean, that's one way to look at it I suppose, and it's why you see Healthcare Insurers and Private Equity diversify into elder care for a captive audience.

In reality though, I was not-so-subtly trying to suggest that if something is necessary for the public good (curing diseases) but a bad business model, then perhaps Capitalism itself is the wrong vehicle for that segment of industry and a different option - be it an incentive structure, government-owned pharmaceutical research, or managed economy - is needed.

Society fundamentally needs things that are simply bad business - sheltering everyone (lowers long-term housing revenue), feeding everyone (lowers long-term food revenue), healing everyone (lowers long-term healthcare revenue), educating everyone (lowers the value of degrees/credentials). If our economic model prohibits or discourages achieving optimal resource usage and human outcomes, then it's our obligation to explore and identify alternatives that may improve those outcomes respectively.


> In reality though, I was not-so-subtly trying to suggest that if something is necessary for the public good (curing diseases) but a bad business model, then perhaps Capitalism itself is the wrong vehicle for that segment of industry and a different option - be it an incentive structure, government-owned pharmaceutical research, or managed economy - is needed.

I believe the great innovation of capitalism is markets, and the next era of economic and social progress will be driven by mixed capital/social good markets.

For example, what if you tied the tax rate for an industry to a combination of broad social goods (say, homelessness) and industry-specific goods (say, the incidence rate of cancer for cancer drug companies), such that if we're in the middle of a homelessness crisis and many people have cancer, the tax rate might be 50%, vs. if there is virtually no homelessness and we've cured cancer, maybe it's 10%. Obviously there are other market approaches, but eventually they would be converted to capital markets, so something like the above makes sense to me as a start.


I think we've discovered that markets are great for some things, and disastrous for others. e.g. fire services, and health.


isn't this just a way of saying that markets aren't representing externalities correctly?

a company doesn't have to pay for bad things they produce as a byproduct (e.g., pollution) and they dont get to benefit from good things they produce as a byproduct (e.g., curing a disease).


as a casual observer living in the uk, what brexit has done is stopped the influx of highly educated and economically contributing people from the EU, and instead replaced them with people who are claiming "asylum" from asian and african countries

downvotes ahoy


As a long-term Brit I kind of get that impression too, although there has been a lot of regular immigration also. I bet the Brexit voters, who tended not to be keen on immigration, have been pleased with that.

Also a lot of regular Brits have moved abroad. Dyson, who famously advocated for Brexit to help Britain, moved to Singapore; my friends have moved to France, Portugal, Spain, and Dubai.


Downvotes because, while you're right that it has reduced immigration from the EU, the vast majority of post-Brexit migration to the UK has not been asylum seekers, and most asylum seekers have not been Asian or African.


Had to look it up, but I found India and Nigeria specifically as countries of origin for work-related migration.


Economic migration is very different from asylum seekers, as the person above claimed.

The vast majority of people arriving from Nigeria and India do so on visas, and would have near zero chance of getting asylum claims approved.


hello it's me the average voter

my wealth is reduced


Wages or wealth? Don't forget to factor covid in.

