
On the other hand, there are a lot of real problems that real people actually deal with that just need a logistic regression to save a million bucks here and there. I like that space more.


This describes 95% of machine learning at FAANG+; unfortunately, nobody likes to talk about it. Context: I work at a FAANG.


God, and it's such a snooze-fest.

Writing TFX code at Google is like having your soul sucked through your rear end! Imagine TF1 with all the broken APIs, but now it's all distributed! Fun.


> ... having your soul sucked through your rear end!

I laugh-snorted reading that! I am stealing that phrase.


Probably not the best analogy. Sounds like a good time.


Probably 99%. The people doing next-level stuff are largely being humored so the FAANGs can hire more early-20s grunts who think they're going to be working on ML (they won't be), or even given the time of day by the leads on such projects (ditto). You give some famous professor a high-six or low-seven-figure salary in a position where his job is to publish papers, or not, because no one who matters in the company will ever read them, but in return you get incrementally more effort out of the youngsters who think they have a chance (they don't) of working on something more exciting than Jira tickets.

I don't think the FAANGs will ever come up with AGI and I am glad for that. If the private sector gets there first, I am going to "accidentally" die of hypothermia on a hiking trip because I will do anything not to live in their world.


This space isn't sexy to write about, there's no fame and glory in it.



Far sexier and larger than most care to admit.


If you have a few minutes, can you list a few of these problems? Just curious here!


I’m a consultant. I do not work in tech. Typical opportunities look at a decision that gets made many, many times. This includes systems where everything gets treated the same despite some 80/20 kind of situation, which is a lot of them. Lots of older businesses have these kinds of setups, where stuff is run on gut instinct or on decent-enough but risk-averse rules. Don’t think crazy neural-net image recognition or whatever. Really, just look at what the business spends a lot of money on and think: “could they do that smarter?”

A common thing I do: say company X has a fleet of Y assets that they repair every N years. A good solution is to predict which ones need repairs. Inspect those more often, inspect the healthy ones less often, and pay more attention to the ones that are valuable.

Better outcomes, millions less in spending. You probably don’t even need a live model in prod, just a semi-annual manual export to Excel for the planner who’s been keeping the schedule for two decades.
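The fleet-repair idea above really can be a plain logistic regression. Here is a minimal, self-contained sketch, with synthetic data standing in for real inspection records; the feature scales (age, usage hours) and the failure curve are assumptions for illustration, not anything from an actual engagement:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit logistic-regression weights and bias by batch gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    m = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear score
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / m for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / m
    return w, b

def predict_proba(w, b, x):
    """Predicted probability that an asset needs repair."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic fleet history: features are (age / 10 years, usage / 10k hours);
# label 1 means the asset needed repair at its last inspection.
random.seed(0)
X, y = [], []
for _ in range(400):
    age = random.uniform(0, 1)
    usage = random.uniform(0, 1)
    # assumed ground truth: older, heavily used assets fail more often
    p_fail = sigmoid(4 * age + 3 * usage - 4)
    X.append([age, usage])
    y.append(1 if random.random() < p_fail else 0)

w, b = train_logistic(X, y)
old_heavy = predict_proba(w, b, [0.9, 0.9])  # 9-year-old asset, 9k hours
new_light = predict_proba(w, b, [0.1, 0.1])  # 1-year-old asset, 1k hours
```

Ranking the fleet by `predict_proba` and inspecting from the top of the list is exactly the kind of "semi-annual export to Excel" workflow described: no serving infrastructure, just a scored spreadsheet.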


I've heard about this, in different contexts. What it mainly comes down to is that incremental improvements can have massive impacts when you can apply them at a scale available at FAANG. I first read about this outside the context of machine learning, but it certainly would apply here.

For those of us who don't work at such scale, can you (maybe with a little fuzziness to avoid telling too much about an internal project) give a few examples of the kind of projects where a fairly simple model can have a 1M+ impact?


Here's a ~5 minute talk with 5 such examples (where relatively simple ML models made a 1M+ impact at a FAANG) :) https://youtu.be/zyOEOd1HkSY?t=946 Happy to talk about more details if you message me through my profile!


Thank you for the link! They were all interesting, and yes, all the result of having a high scale. For anyone curious and thinking about watching the video (I recommend watching it), the topics were 1) should you immediately re-run a failed ad payment (getting paid vs transaction costs/flagged for repeated billing), 2) should you send an IM immediately after a login failure (cost of text message vs possibility user will give up and not reset password), 3) should you fetch data for pre-loading in a web page (higher engagement with page vs cost of unnecessary loading), 4) video upload quality, 5) taking screen real estate for less commonly used UI features.

Interesting examples, and yes, they're all the kind of thing that might not justify the effort for an ML model (and might not have enough data to train) for a small website or operation, but can easily justify the cost and effort when you have a huge number of transactions.

On another note, this is why I often like lightning talks. So many people think that what they're doing falls below the threshold for an interesting presentation, when in fact it's the most relevant thing a lot of people will see at a conference.


Wow, of those five I'd only call 3) not evil, and maybe 1), based on the video, where the twisted reasons for them are explained.


I’m a consultant, but I actually do a very different kind of thing. Yeah, big tech hyper-optimizes content so that a tiny boost in engagement across a billion users adds up to a huge net effect. I’m skeptical of it, tbh. I think it forgets emergent effects and externalities over time.

What I refer to for my work is the low-hanging fruit: old problems that businesses solve with manpower or overly broad rules. Something where just a little clarity can help them hone their efforts on the 80/20 of it all. I made a slightly more detailed post in an adjacent response.


Would you kindly tell us what types of projects turn out to be such nice successes?


See adjacent responses



