
First, no matter what you do, if a human has write access to the production database, the database can be deleted.

Second, there is a legitimate reason to destroy a database in development and automation. The biggest problem I see is often treating your development data like pets, not cattle. You absolutely need safeguards so that this cannot be run in production, but if a human has access to the credentials to run it in production, the agent has access too.
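For illustration, here is a minimal sketch (in Python) of the kind of safeguard I mean; the `APP_ENV` variable and `reset_database` helper are my own hypothetical names, not from any particular framework:

```python
import os

def reset_database(drop_fn):
    """Run a destructive reset, but refuse outside dev/test environments.

    `drop_fn` is whatever actually destroys the data (hypothetical here).
    Fails closed: if the environment is unset, assume production.
    """
    env = os.environ.get("APP_ENV", "production")
    if env not in ("development", "test", "ci"):
        raise RuntimeError(f"refusing to reset database in env '{env}'")
    return drop_fn()
```

Of course, per the point above, this is only advisory: if the agent can read the production credentials, it can bypass the guard entirely.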

So, then, what do we do? In a larger organization, we can depend on the dev/ops split to maintain this. For a solo developer, or a small team, it takes a lot more discipline. Even before AI, junior and even mid-level developers didn't have the knowledge to segment. And senior devs often got complacent because they thought they knew enough.

They likely need some combination of https://www.cloudbees.com/blog/separate-aws-production-and-d..., an introduction to Terraform, an introduction to GitHub Actions, and some sort of VM where production credentials live (and AI doesn't!)

But at that point you're past vibe coding. And from what I can tell, with all these horror stories, the successful vibe coders are learning that they need to move past it pretty quickly.


You don't need the same permissions in prod and dev.

And in both cases, the humans don't need direct access to the raw CSP API. Use a local proxy that adds more safety checks. In dev, sure, delete away.

In prod, check a bunch of things first (like, has it been used recently?). Humans do not need direct access to delete production resources (you can have a break-glass setup for exceptional emergencies).
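A rough sketch of what such a proxy check might look like; `safe_delete`, the idle window, and the usage timestamp are all illustrative assumptions, not a real CSP API:

```python
from datetime import datetime, timedelta, timezone

def safe_delete(resource, delete_fn, last_used_at, idle_days=30):
    """Proxy-style safety check: only delete resources idle for a while.

    `resource`, `delete_fn`, and `last_used_at` are hypothetical stand-ins
    for whatever your cloud API and usage metrics actually provide.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=idle_days)
    if last_used_at > cutoff:
        raise PermissionError(f"{resource} was used recently; refusing delete")
    return delete_fn(resource)
```

The real version would live in the proxy, layered with whatever other checks you want (tags, backups taken, break-glass override), but the shape is the same: the raw delete call is never directly reachable.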


Most IAM policies start as "whatever made the deploy pass." Need rds:CreateDBInstance? Fine, rds:* it is. Ship it. Months later that same role can wipe the cluster and nobody remembers why it ever had that permission.

Separate accounts help, but only if someone actually goes back and cleans it up, which… yeah, doesn't really happen.
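For contrast, a hedged sketch of what the cleaned-up version might look like: a policy document granting only the RDS actions the deploy actually uses, rather than `rds:*`. The action list and ARN scope here are illustrative, not a complete deploy policy:

```python
# Builds a least-privilege IAM policy document as a plain dict. The two
# actions and the db ARN pattern are examples; a real deploy role would
# list exactly the calls its pipeline makes, and nothing more.
def least_privilege_rds_policy(account_id, region):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "rds:CreateDBInstance",
                "rds:DescribeDBInstances",
            ],
            "Resource": f"arn:aws:rds:{region}:{account_id}:db:*",
        }],
    }
```

Notably absent: `rds:DeleteDBInstance`. If the role never had it, nobody has to remember to take it away months later.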


I have all the empathy for people in the world.

A corporation is not a person. If your organization cannot handle the load, then you need to adjust your practices. The organization needs to prioritize its paying users. The organization needs to shift people from new features to keeping the lights on. And maybe the organization needs to find another strategy to manage its Azure transition.


A corporation is made of people. GitHub cannot exist but for the people who continue to work for it. And they’ve already said, multiple times, that restoring availability is their top priority.

A corporation is made of people, but its ethos is the product of decision-making. If a corporation is consistently, say, unethical, is it because they hire only unethical individuals? Or because unethical people somewhere along the chain of command make unethical decisions?

I'm not exactly sure what you're getting at with this question. It seems to still conflate corporate-level decisions with boots-on-the-ground work.

Are you suggesting that whatever decisions their upper-level management makes that you consider unethical irreversibly and irrevocably taints all the difficult and honorable work that their engineers and operations people are performing?


I’m saying their lower-level employees are probably honest, hard-working people like everyone else. But the detachment that comes from a large corporate structure makes the higher-ups decide things that aren’t as honourable.

“Corporations are made up of people” is a strange way to excuse the reality that the ‘bad’ things that corporations do are often decided by top management.


Ah. I didn’t intend to excuse the decisions of upper management when I said that. My intent was to counter the notion that a corporation and its workers can’t be analyzed independently.

A corporation is just a business formation, and businesses are made of individual people working for it. Those people’s motivations and efforts can, and often should, be evaluated separately from the decisions of management.


We agree, thank you for the clarification. Have a nice day!

There is a lot of room to reevaluate the lessons of software development pre-web in the context of the current environment.

Like, if a waterfall project can be done in 2 weeks, is it agile now?


> Like, if a waterfall project can be done in 2 weeks, is it agile now?

Sure. The thing is, the waterfall guys would tell you it's impossible to do it in 2 weeks because you need to have written everything down first. "Thousands of pages" was the term they used.

Agile guys would point you to the Agile manifesto which would lead you to "working code over documentation" and "people over process".

A 2 week period to go from initial spec to product in a user's hands, to capture feedback, and to make changes from there is much closer to agile than to waterfall. In fact it's more or less exactly some older versions of Scrum (which didn't permit deviating from the planned sprint user stories midway through the sprint; instead, changes influenced the subsequent sprint).


The DoD's 2167 standard from the late '80s mentions the following documentation that should be produced as part of the development process (section 6.2 and Appendix D):

- System/Segment Specification

- Software Development Plan

- Software Configuration Management Plan

- Software Quality Evaluation Plan

- Software Requirements Specification

- Interface Requirements Specification

- Software Standards and Procedures Manual

- Software Top Level Design Document

- Software Detailed Design Document

- Interface Design Document

- Data Base Design Document

- Software Product Specification

- Version Description Document

- Software Test Plan

- Software Test Description

- Software Test Procedure

- Software Test Report

- Computer System Operator's Manual

- Software User's Manual

- Computer System Diagnostic Manual

- Software Programmer's Manual

- Firmware Support Manual

- Operational Concept Document

- Computer Resources Integrated Support Document

- Configuration Management Plan

- Engineering Change Proposal

- Specification Change Notice


This is a particular artifact of the government system process. These are contracted pieces of work that Company A would deliver, Company B would administer, and Company C would be contracted for additional work. Further, all specifications were created ahead of time because changes would cost extra. (Anyone who has done government contracting can attest to the shenanigans involved - I have not lived in this world for a long time.)

That said, we still do ad-hoc versions of many of these. For example, a system/segment specification today is an OpenAPI document between microservices. Most larger SaaS companies have the equivalent of a Software Configuration Management Plan: who can change Terraform or a GitHub Action, and what standards they conform to (linters, peer-review standards).
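As a concrete (and purely illustrative) example of that point, a modern "system/segment specification" between microservices might be nothing more than a small OpenAPI document like this hypothetical one:

```yaml
openapi: 3.0.3
info:
  title: Orders service contract   # hypothetical service name
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Fetch one order by id
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The order
```

A few dozen lines, machine-checkable, and it serves the same contractual role the DoD documents did: both sides know exactly what the interface is.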


> This is a particular artifact of the government system process.

Yes, a government process meant to implement the waterfall approach.

If you look at Dr. Royce's paper which originated the concept, he was very explicit that it required upwards of thousands of pages of documentation to be written up front, if you were doing it "right".

By the time the required documentation had all been written, there should be essentially nothing left to do but to actually type out the punch cards as specified and turn them into a system of compiled programs.

Now, this appealed to government because it put documentation in place that was felt to be more viable for contracting processes, but ever since Dr. Brooks chaired a 1987 Defense Science Board study on the issues already facing the DoD trying to implement waterfall methods, they've been trying to restructure their software acquisition methods to pursue better outcomes rather than more concretely defined outputs.

Of course it's still a tremendous challenge for them even now, and it remains common to see defense acquisition projects that will say "Agile" to the right people even as they prescribe a full waterfall-style 'system engineering V' approach behind the scenes.

The ad-hoc approaches that the commercial space often takes are usually more appropriate, believe it or not. Process gets added when it is helpful, but not before.


I wrote about this - https://www.ebiester.com/agile/2023/04/22/what-agile-alterna... - Royce was describing what he saw as an anti-pattern: without iterations, it was risky and invited failure.

(and my link to the Royce paper isn't working anymore - I need to fix that!) - I am planning on a followup that takes the last 3 years of change in mind.


> I wrote about this - https://www.ebiester.com/agile/2023/04/22/what-agile-alterna... - Royce was describing what he saw as an anti-pattern: without iterations, it was risky and invited failure.

Yes, that's why his paper essentially said "you're going to have to build two." One to figure out the mistakes you can't predict ahead of time, and the second for the real deal. Do your best to get through the first one as fast as you can, but still deliberate enough that there won't be any bugs left behind for the second one.

But a third or subsequent iteration was definitely a failure in his mind, and even building two (or one-and-a-half, depending on your framing) was simply a concession to the reality that actual implementation would run into unpredictable issues, for much the same reason computer science had already learned the halting problem was undecidable.

I have a book with his paper and to the extent he speaks of iteration as desirable, it is only iteration between succeeding steps of the overall 'waterfall'. E.g. in an ideal world you iterate between system requirements and their decomposition into software requirements (updating the system reqs as necessary to ensure the software reqs you're writing are accounted for). Likewise for system requirements to software analysis, and so on.

As you point out, he mentions that this concept is “risky and invites failure”, and goes on to allow for re-refinement and re-implementation of the software requirements and program design phases based on experience from the testing phase. But he goes on to emphasize: “However, I believe the illustrated approach [waterfall with reimplementation post-test] to be fundamentally sound”.

The rest of his paper then goes into the detail of these phases, and he specifically notes early on that there is a natural question, of how much documentation is enough? And he gives a very clear answer: “My own view is ‘quite a lot’; certainly more than most programmers, analysts or program designers are willing to do if left to their own devices.”

It's not an accident that the DoD software acquisition requirements based on waterfall as mentioned by the other comments were so numerous or onerous. As Dr. Royce puts it:

- “The first rule of managing software development is ruthless enforcement of documentation requirements”

- When asked to review software projects the first thing he does is review the documentation. If the documentation is seriously lacking his recommendation is to replace the whole project management and shift 100% of work to fixing documentation.

- “Management of software is simply impossible without a very high degree of documentation”

- If procuring a $5M hardware device he'd expect a 30 page spec to suffice. If procuring a $5M software system, he'd “... estimate a 1,500 page specification is about right.”

I wasn't pulling "thousands of pages" from thin air. It's right in his paper and he's extremely clear about this. It's not an off-hand remark, he goes on to justify why he thinks that mass of documentation is required.

I want to emphasize that he's writing from the problems he was facing in his era. Computer systems necessarily were room-sized installations, interactive computer time was incredibly expensive, but paper was cheap. There was no Internet to speak of to share powerful and efficient open-source libraries. There was no "continuous deployment" or "continuous integration".

The system had to work well pretty quickly after the subsystems were built, installed, integrated and tested or this newfangled computer system that cost millions in 1960s dollars to run per month would be nothing more than a money sink while the nerds tried to troubleshoot.

Nowadays we don't develop under those kinds of strictures, and we've put tremendous investments into allowing real, useful systems to be developed with the simpler processes that even back then were much easier to develop around, when they could be used (Dr. Royce's paper even leads off by describing the 'nice' process as he explains why you can't use it as system size grows). The voluminous test documentation he proposed is something we pretty much do write today, but we call it test suites, and we grow them along with the program rather than writing them all months before coding.

I think there's a lot to be said for what a modern-day waterfall process might look like with the technologies and iteration speeds available to us now, the only problem is that I think it will still resemble agile more than it would resemble the process Dr. Royce described.


Indeed, I came across this not as a contractor but in my university textbook :) I wanted to collect the document list that forms the "thousands of pages" mentioned above in the waterfall model.

Yeah, and that's helpful too, because we typically talk in caricatures of both agile and waterfall, and I think people truly don't realize that waterfall isn't simply "think about what you do before you do it," nor is agile "code first; think later."

If people truly understood what waterfall is and how it's supposed to be carried out, they'd be less apt to recommend it. Nothing prevents a team from employing planning in an agile effort, but doing this doesn't turn it into a waterfall project and you shouldn't describe it as such.

If anything, teams that refuse to use agile (thinking it inherently means meetings, story points, and not looking beyond 14 days) often end up choosing something even simpler, like cooking up a simple design doc of 4-6 pages before implementing it.

But that's still not waterfall, it's just another of the infinite renditions of agile methods that are out there, just without the consultancies issuing formal training certs.


At one point or another in my career (gov contracting) I had to write, co-write, or review every one of these. And without fail, within 6-12 months they would be stale/inaccurate/obsolete/… The truth is, even on projects where sufficient time is allocated to write these, there is never (literally) time allocated to keep them up to date.

That doesn't do justice to either waterfall or agile.

Oh certainly - I'm conflating the adjective "agile" with the Agile manifesto. I've been on projects with multi-hundred-page design docs and multi-week UATs. And nobody wants to go back to PRINCE2, for example.

The point I was trying to make is we should be diving back into the older methodologies and accumulated wisdom and re-evaluate some of the older dead ends with new context.


Missing here: some organizations were rewarding high token usage as productivity, without critical evaluation. People were afraid to be at the bottom because outcomes weren't being measured.

It is a giant Goodhart's law lesson.


Give your agent perfectly working code and insist that the output is not what it should be. Go to lunch. By the time you come back, the poor thing will have evaporated a small lake trying to figure it out.

"i'm in aisle 32 of the data centre. please evaluate the previous query using exclusively servers 2438-2458. and quickly, it's f-ing freezing in here".

What!? Companies rewarding high token usage? That's inane, insane, and small-brained. Who in their right mind equates spending more money with being more productive? I'll just set up some burn jobs to kill tokens unnecessarily, and then everyone else will too, and the company will go bankrupt in 10 days. It seems inconceivable for a company to set up a "who can spend the most of our money" leaderboard in any other context.

I have friends at two different companies that are taking a stick, rather than carrot, approach to this. They've set monthly minimums for token usage. Anything less than that gets you dinged in your next performance review. Imagine hiring a carpenter and writing a bad online review for them because they didn't use their hammer enough, even though the end product was on time, on budget, and worked well.

I was at a company 20 years ago that took this approach to automated tests. Everyone had to write two a day, even if that was the only code they wrote that day. Once it was clear that this was being checked with automation, scripts went around generating and committing tests asserting that 1 + 2 == 3 (with random numbers substituted). Of course tokens are being burned this way at companies like this.


I think a better analogy would be "didn't use enough nails", since it's consumables. To which the response would be "nailgun. pop. pop. pop-pop. pop-pop-pop. pop. 'Those damn squirrels sure can move'".

Go look up "Tokenmaxxing."

Yes, it's as stupid as it sounds.


  What!? Companies rewarding high token usage? That's inane, insane, and small-brained. Who in their right mind equates spending more money with being more productive?
Given that all of AI is built around the premise that whoever sets fire to the most money wins, it's just users following the lead of the vendors.

This is essentially companies making their engineers use LLMs as much as possible, and if you don't, you go on a PIP. Many such cases.

If you think this qualifies as insane, you really haven't met many managers, have you...

there are boards… endless boards… ranking by token usage :)

Can I rephrase it slightly?

Humans have some repeatable bugs in our wetware, and it can be predictably exploited in a way that is hard to correct. It isn't "some people" - it's all of us, and the moment we think we're immune is the moment that we are most easily affected.

Yes, even the smartest of us are idiots in some very predictable ways.


Business/Enterprise accounts are billed at $20/seat plus API prices, not subscription prices. You can give them a monthly dollar quota or let them go unlimited, but they're not being subsidized like on the Team plan. And Team can't get a 20x plan, from what I can tell.


First, the authors make very little money on most textbooks. You would be shocked. The money is staying with the book publishers.

Second, they've started publishing new editions so quickly with only the problem sets changed (in general) so that students can't use previous editions. If you're learning on your own, you can get some good deals on older editions for just that reason.

And on top of that, they maintain their own platforms, so that even if you buy the books used, you have to subscribe to a service to take the tests! All of this adds up to finding as many ways as possible to extract money from students, often at interest, after it's all said and done.


Okay... you're an HTMX fellow. We live in the age of AI so if you're going to make an example, don't show us the trivial things we know HTMX can do.

You need to show a real application with pieces of the system that coordinate and complex interactions. Recreate Jira's backlog and sprint board that can have an arbitrary set of business logic for how a ticket moves through a workflow. Put it through its paces, don't give me a toy.


I'd point to that being a particularly bad example. For one thing, you are going to have to enforce that business logic on the back end or your data structures will get shredded.

If one thing drives me crazy about the current situation, it is that the techniques you found on the most advanced web sites in 1999 are effectively lost, like the techniques used to build the Egyptian pyramids. Redrawing the whole page can be amazingly fast over a fast network (LAN/localhost, which is a real situation in the enterprise) if you aren't loading 50x more CSS than you need, loading 300 trackers, or running a real-time auction with 10,000 bidders for ads -- that kind of app can feel more responsive than many desktop and mobile applications today.

I have no doubt whatsoever that you could make an issue tracker with HTMX which would embarrass JIRA.

What amazes me about React though is that I can literally walk around inside a React web page, see

https://aframe.io/

and when I look at things like Vue and Svelte I see a lot of things that "look like a good mental model for everyday web applications," but with React I can "draw anything I want." The thing is that people mostly want to make form applications, and the framework that would serve them best is something like react-hook-form with a simpler substrate than React underneath it.

Right now I am working on biosignals demos where I might have a radar that reads respiration and a heart rate monitor and a myoelectric sensor and it is really easy to snap together a few components in JSX and write a little bit of code that fetches data from the devices and uses a library of functions to process it for the components. It should be just as easy to drag and drop a few components from a visual palette and configure them on the fly but React is not good for that.

Back in 2006 I was working on Javascript systems such as decision support applications and knowledge graph editors that were that flexible and... the rest of the world just hasn't caught up.


They explicitly said they were ceding the frontier model game to others, and that they were content staying a few months behind the state of the art. In the long run, this is an interesting freeloader play that a few players are making. https://www.cnbc.com/2025/04/04/microsoft-ai-chief-sees-bene...


So, it's just moving the problem up a level.

First, is a 500 caused by using the API in an unexpected way a customer-found defect? If Claude can't find the answer, what is the expectation of support?

If an internal team makes a change that breaks your workflow (because it was an unexpected use case), is that a CFD?

Do teams slow down on new features because every internal API must now stand up to the stresses of a public API?

I'm fine with unsupported frontends but an external API will be very difficult to keep static.


The last company I worked for before going into consulting full time was a startup where I was the then-new CTO's first technical hire. Before then, the company outsourced the actual technical work to a third-party consulting firm until it found product-market fit.

His primary mandate was API- and microservice-first.

Our customers were large health care systems.

We had a customer facing website that was built on top of the same APIs that we sold our customers.

Our customers paid for the features they wanted and those features were available on our website, they were used for their website and mobile apps and the ETL process was either via a file they sent us and we ran through the same APIs or they could use our APIs directly for both online and batch processes.

This is no different from the API mandate Bezos made at Amazon back in 2000.

You don’t have to keep an API static - that’s what versioning is for.
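A toy sketch of that versioning idea: breaking changes ship under a new path prefix while v1 keeps its shape, so old clients keep working. The handler names and payloads are made up:

```python
# Path-based API versioning in miniature. v1's response shape is frozen;
# the breaking change (splitting the name field) ships only under v2.
def get_user_v1(user_id):
    return {"id": user_id, "name": "Ada Lovelace"}

def get_user_v2(user_id):
    # v2 splits the name field; v1 callers are unaffected.
    return {"id": user_id, "first_name": "Ada", "last_name": "Lovelace"}

ROUTES = {
    "/v1/users": get_user_v1,
    "/v2/users": get_user_v2,
}

def handle(path, user_id):
    """Dispatch a request to the handler for its version prefix."""
    return ROUTES[path](user_id)
```

The hard part, per the thread, isn't the dispatch; it's committing to keep `/v1/users` stable for as long as paying customers depend on it.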


I think the point is that maintaining a well-versioned, solid API as a product is way harder than shipping a few screens that can change whenever you need them to (behind those screens being a bunch of duct tape over a clusterF of internal APIs, no guarantees).

What you're saying is that you were at a company that did the hard thing of shipping APIs as a product.

