
Right, so if the productivity gains were so blindingly obvious and immediate for everyone, mandates wouldn't be needed.

These companies tried to quantify the productivity impact of work from home, so it's utterly bewildering to me that they would push these tool-use mandates without actually quantifying the impact LLM tools have on productivity. If it were just 'getting familiar' with AI tools to help define an AI-driven product mindset, I'd expect these CEOs to have more than a naive perception of the tools and their limitations.

I honestly wonder where these mandates started. Part of me feels like this is the nascent stage of a VC panic that their AI investment strategy might not work out.



I think we need to step back a little in these discussions and ask: what productivity gains are we actually hoping to find?

In any knowledge-based or specialised work, operating a tool faster does not produce better results; rather, it raises the risk of errors and quality decline.

Did Duolingo ever face an existential threat because they failed to ship a specific feature on time? Did one of their features suffer and cause user loss because it took an engineer more time to write the actual code?

Additionally, beyond formatting and obvious logical errors, all new code should in theory receive some human review, which means more automated code means longer reviews. If code is now produced at 2x the rate, review time also roughly doubles. And since reviewing code is more mentally taxing than putting one’s own thoughts into code, the risk of bugs and security holes also increases in the long term.

While software devs cost money, the job involves thinking, which means it can’t be compared to factory work, where someone at the next station simply drills a screw into spot X and the next person puts on a cover. Despite years of effort, the attempt to make the process mirror the factory floor (did anyone notice the open-plan office’s parallel to factory floors?) has failed.

While many hate to see it this way, just as a surgeon will take their time to perform a brain, heart, or tumor surgery, a SWE will do thinking, planning, coding, and reviews. Giving a surgeon an autonomous bot that can open or widen the incision faster does not mean a productivity gain; the surgeon still needs to plan where to make the incision, how far to spread it, what to remove, and what to avoid.


> […] push these tool-use mandates without actually quantifying the impact LLM tools have on productivity

The way I have seen this “measured” is by asking (demanding) people to pull some “time saved” number out of their ass. That number is then taken as fact, without question. So the measurable productivity gains are all based on coerced people making things up, omitting instances of the LLM tools slowing things down. It’s a house of cards, and it’s going to fall after a few more months of empty promises without results. “You claimed 300% more productivity, but delivered the same amount of work, and took on this massive AI bill. What the fuck are you doing?”

LLMs are cool, but they aren’t magic. This shit is exhausting. Just skip to the part where you fire a bunch of software engineers, because that’s clearly what you actually want to do :/




