The whole idea of "Continuous Delivery" is not always one that meshes well with some shrink-wrap workflows.
I worked for hardware manufacturers, for most of my career, and software was always just a "sidekick" to The Main Show. We just got the "Supporting Actor" nods.
I'd say that 90% of HN seems to be Web/SaaS (and, these days, crypto), which is an excellent workflow; just not the one I do. Nowadays, I have some integrated stuff, but it's mostly native iOS/tvOS/watchOS/macOS deliverables.
CD is nice, but I feel that CI is better, for a team. Even that is overkill for the way I work. I'm spoiled, I tend to work alone, or very loosely-coupled. That gives me a different workflow from what many experience. I had to spend a couple of years "un-learning" a lot of stuff from my Big Team days.
The way I work results in astonishingly good stuff, very quickly, but the scope is much narrower than what a lot of folks here do.
As such, I find little utility in telling others that the way they do things is wrong, and that they should be doing it my way. I do talk about how I do things, but I'm not judgmental about it.
I do feel that software quality, in general, is fairly problematic, but don't claim to have the silver bullet. I know what works for me, and I try to stay in my lane.
I work on a very large team, and solo for my side hustle. Love working solo: no red tape to fight, no waiting 24 to 48 hours for a teammate to review my code per iteration and up to 24 more for it to deploy.
Just signed 2 contractors on for my solo project though, and it's really messing up my process. All the overhead is so high I'm not even sure it would be worth it even if they were free.
I like to imagine it as a sort of volume vs surface area problem.
The amount of service you can deliver to customers is your surface area, but the amount of work you put in only contributes to the volume.
As the number of employees grows, you will necessarily be less efficient, but there aren’t really any other great ways to gain the required surface area.
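To put rough numbers on the analogy (my arithmetic, not a claim about any particular org): a sphere's surface area scales as volume^(2/3), so

    service delivered ∝ (total effort)^(2/3)
    2x the effort  ->  roughly 2^(2/3) ≈ 1.6x the customer-facing surface

i.e. each extra person adds internal "volume" (work, coordination) faster than they add deliverable "surface".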
There are perhaps interesting options in bringing in partner-level labor instead of buying someone's time-in-seat, given the relative personal investment involved in each.
The big point glossed over is that wage labor, where the value workers produce is taken from them, isn't a great incentive for great work. That's, IMO, a far bigger issue than the fundamental challenges of collaborating, and it's why so many managers need to continuously trick their employees into productivity.
I have been thinking about _how_ to build good (and stable) software a lot over the last 10 years. I've tried many approaches and verified them one by one. In my experience, this is the ultimate fix. It may not work for big tech companies, but for most projects, it is much better to choose this approach:
Collaborative efforts to maximize good results, and predictable gains for each participant.
Thanks for sharing your experience; I feel like this is not very well-resourced territory for our industry.
Btw, you may find Graeber and Wengrow's recent The Dawn of Everything to be an inspiration for how these sorts of structures could scale up to larger groups of people than was conventionally thought historically possible.
Tbh, this is a big weakness of the left (in terms of its marketing, at least) that I'm trying to understand better. One idea is that whatever solutions we conceive of now are constrained by the understanding we've gained living under capitalism/state power etc., and that we need to dismantle the current system (including exploitation, keeping the majority of people's time/energy locked up in wage labor, the systemic lack of freedom to disobey an order or to freely relocate, etc.) before we can explore a wider range of solutions. I get how this is unsatisfying, and how it sounds like it leaves a vacuum for bad/regressive solutions to come in, but this is something I'm only starting to learn about.
That's why I appreciated the new Graeber/Wengrow: it finds a wealth of previously overlooked ways that humans have lived, post discovery of agriculture, in large societies and with greater freedoms, with prosperity defined in new ways beyond the terms of economics. This gives me optimism that capitalism/states aren't necessary or inevitable, and that we might just be stuck on this shitty plateau having convinced ourselves it's the only realistic one.
> no waiting 24 to 48 hours for a teammate to review my code per iteration and up to 24 more for it to deploy
This probably depends on the team and practices that are used within it.
Most code changes in my current team are reviewed within approximately an hour; deployments are a bit more tricky, especially if you have external clients with their own environments and deployment procedures.
Honestly, CI/CD cycles for the internal dev/test environments take anywhere from 5-20 minutes (with the heavier tests being run separately) and the technical aspects of delivering software (build, test, package, scan, generate docs, send) take around another 10-20 minutes.
It's usually when you have to deal with manual procedures and people that it all slows down.
So, it can be really good to automate as much as you can: code style and warning checks, code coverage checks, dependency date checks, dependency CVE checks, unit tests, integration tests, performance tests, packaging and storing build artifacts, redeployments to whatever environments are permissible (usually automated to dev/test, manual approval against prod).
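In script form, the gist is something like this (just a sketch; the make targets are made-up placeholders, and in practice this lives in your CI system's config rather than a hand-rolled runner):

    #!/usr/bin/env python3
    # Rough sketch of "automate everything, keep prod behind a manual gate".
    import os, subprocess, sys

    STAGES = [
        ("lint / style checks",       "make lint"),
        ("unit tests",                "make test-unit"),
        ("coverage gate",             "make coverage-check"),
        ("dependency CVE scan",       "make audit"),
        ("integration tests",         "make test-integration"),
        ("package + store artifacts", "make package"),
    ]

    def run(name, cmd):
        print(f"==> {name}")
        if subprocess.call(cmd, shell=True) != 0:
            sys.exit(f"stage failed: {name}")

    for name, cmd in STAGES:
        run(name, cmd)

    run("deploy to dev/test", "make deploy ENV=test")   # fully automated
    if os.environ.get("PROD_APPROVED") == "yes":         # manual approval flag
        run("deploy to prod", "make deploy ENV=prod")
    else:
        print("prod deploy skipped: waiting for manual approval")

The point isn't the script itself, it's that everything up to the approval step needs zero humans in the loop.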
Ya... my company does all of that. It's just the people process that makes everything slow.
Well, mostly the people process. Having millions of dependencies doesn't help with build and test times either. Usually up to 20 min to run something.
> Having millions of dependencies doesn't help with build and test times either. Usually up to 20 min to run something.
I feel that pain! Honestly, working with monoliths and older and larger projects is very demotivating. Now, I'm not saying that you need to go full-on microservices either, but working with a few smaller services instead of a single huge one has been a game changer in my experience!
I'm not sure I ever want to go back to the loop of changing some code and then having to wait minutes for the app to launch locally just so I can test something and realize that it's still wrong.
We do use microservices, but even then it's still slow AF. I'm not entirely sure why; I guess the code dependencies end up pulling in nearly everything. It's all C++, which probably doesn't help.
> I worked for hardware manufacturers, for most of my career, and software was always just a "sidekick" to The Main Show.
Which is an enormous, catastrophic, fantastic mistake that should be leaving everyone breathless with shock.
Realising that the software matters is why Tesla is worth more than most of the other car manufacturers combined.
This is why Apple is the #1 biggest company in the world.
This is why every time some "hardware" has to be deployed, every enterprise admin rolls their eyes and groans.
This is why IoT, medical, and factory automation security is a trash fire.
Smart televisions aren't, and you can waste $5000 on one just to have a substantially better experience by plugging in a $150 box from Apple.
And on, and on, and on...
I literally told the local Toyota rep that I wouldn't buy a 2019 model-year vehicle specifically because it didn't have Apple CarPlay. The built-in system is simply garbage. Maps that are 4+ years out of date!
Apple has nearly monthly updates for iOS, which means if I plug my phone into my car, effectively my car gets monthly updates. With Toyota? Maybe once in a decade they'll release an update, and then never again, slowly but surely letting its software capabilities degrade down to "worthless".
Similarly, Nikon releases updates for their existing cameras once in a blue moon. Recently they announced "Firmware 2.0" for their flagship Z9, and I was shocked. This is likely a one-time aberration, probably caused by their software division not being ready in time for the initial shipments. I guarantee that there will never be a Firmware 3.0. Never! Where I live, this camera with one lens and typical accessories will set you back $10K, and it is depreciating at an exponential rate because Nikon does not give a s%*t about software. Meanwhile, my iPhone and its camera will keep getting updates.
Well, hardware is a different world from software. I’ll bet that Tesla doesn’t do “sight unseen” updates. They probably wouldn’t be allowed to.
They likely have huge batteries of tests that the software needs to pass (CI), but the actual release build and “sign-off” involves a human.
Who will get their ass chewed off, if the update borks.
But everything before that point, is 1000% better and more agile than most hardware companies.
I’ll bet SpaceX has a lot more meatware in their process.
I always liked CI, as a basic infrastructure, for my team. I very much believe in early integration testing[0], but automated testing can be a trap. It should not be the only testing, for firmware.
If you push a bad release to a Web server, you have one point of failure, but also, one point of recovery.
If you push out a bad firmware release, you have a million $10K bricks. You may also have fires, explosions, and crashes.
Although I often had real disagreements with the hardware folks, I am entirely sympathetic to their priorities.
The main issue was that they considered software developers to be “cowboys,” and judging from the general quality level of even enterprise software, I can understand their bias.
However, I am not a “cowboy.” I am absolutely anal about Quality, and I’m regularly attacked as being “too uptight,” by software developers.
Software is a different animal from hardware, and needs to be done differently. Quality, however, should not suffer.
As a standalone developer, I’ve learned to eliminate “concrete galoshes”[1], and CI tends to be that, but only in my case. What works for me, may not work for others. Just as importantly, what works for others, may not work for me.
I’ve spent the last few years, refining a personal process for my software development. It works great. You can see for yourself. Most of my work is open-source, or source-available[2].
For the people who are downvoting me, you do all realise that I'm not comparing a pure-software website to some embedded IoT thing, right?
Most of my examples are hardware with software as "necessary evil" vs hardware with software "being taken seriously."
Apple TV is a hardware appliance that takes the software seriously.
My Samsung "flagship" TV is a hardware appliance that does not.
They both get updates. One gets frequent updates that makes the product noticeably better. The other gets infrequent updates that have made it worse.
Cars from most manufacturers are hardware with trash software.
Tesla sells the cars, but unlike their competition, their cars are regularly updated with new software. They have weekly (fortnightly?) updates rolled out to their beta testers! Not exactly daily CI/CD, but compare this to Toyota. They literally never release updates for most models, ever. And it's not like their 1.0 release is perfect! Mine has a bunch of small bugs and irritations that they should have patched... but never ever will.
It's not a question of "alternate process" or a "different workflow". They have no process! Their release strategy is "don't"!
Apple is about to release a complete car software + hardware suite. So not something you plug in, but the entire "avionics", as it were, will come as an OEM part from Apple instead of from the car manufacturer.
They're going to wipe the floor with their competition. The screens and software from GM, Ford, Audi, BMW have nowhere near the quality, commitment to updates, backporting of new features, etc, etc...
I will literally stand in line outside of the dealer to get a new car that has this style of Apple-made hardware+software instead of a lump of metal with paint on it and fabric on the inside.
Because I know it will get updates, and that those updates won't make things worse.
As a fellow embedded dev, I think that any system where you run a regular, meaningful risk of bricking with updates is a badly designed system. Other than that, no disagreement. CI is a cheap, fast first step in validation. It's not the stopping point.
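For anyone curious what "designed so it can't brick" looks like, the usual pattern is A/B slots. A rough sketch of the update-agent side, assuming a Linux-class device (the partition paths and bootloader variables are assumptions, not any particular vendor's API):

    # A/B update sketch: the new image goes into the inactive slot, is verified
    # before it is ever booted, and only becomes permanent after the device has
    # actually come up healthy on it. Paths and env variables are hypothetical.
    import hashlib, json, pathlib, subprocess

    SLOTS = {"A": "/dev/mmcblk0p2", "B": "/dev/mmcblk0p3"}   # assumed partitions

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def install(image, manifest, active_slot):
        target = "B" if active_slot == "A" else "A"
        expected = json.loads(pathlib.Path(manifest).read_text())["sha256"]
        if sha256(image) != expected:
            raise RuntimeError("checksum mismatch, refusing to install")
        # Write the verified image to the slot we are NOT running from.
        subprocess.run(["dd", f"if={image}", f"of={SLOTS[target]}", "bs=1M"], check=True)
        # Ask the bootloader to *try* the new slot once ("try_slot" is an assumed variable).
        subprocess.run(["fw_setenv", "try_slot", target], check=True)
        subprocess.run(["reboot"], check=True)

    def mark_boot_successful(slot):
        # Run by a post-boot health check; until this happens, the next reset
        # falls back to the old, known-good slot, so a bad image never bricks.
        subprocess.run(["fw_setenv", "boot_slot", slot], check=True)

If the worst a bad update can do is cost you a reboot into the old image, pushing updates stops being a white-knuckle event.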
Well, I don't do embedded anymore. I enjoyed it, but it can be nerve-wracking.
I write end-user application code, for Apple devices, in Swift. I really enjoy that.
I also do some backend stuff (in PHP). It's not my forte, and I like to avoid it, if possible, but I'm highly skeptical of a lot of backend stuff, these days, and like to know who I'm letting in the back door.
I'd like to do some Bluetooth stuff. I've written a bunch of BLE stuff (even given a class in it[0]), but I haven't found a venue that gives me an excuse (the Meshtastic stuff looks like it might be a good bet, though).
Really? I challenge you to find any items other than 7 and perhaps 8 that don't apply to the types of development you mention.
Essentially the only thing that's different with the types of development you mention is that "final deployment to production" looks different, as it usually involves more or less physically transporting artifacts to your customer.
But the rest is just the same. You should be doing trunk-based development, practise something like code review, gate on integration tests, feature flag, spin up virtual production clones, etc.
In other words, the fact that getting software to your end users is a clumsy process does not preclude you from doing every other step of the process with short feedback loops.
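Feature flags in particular cost almost nothing to adopt, even when the final artifact ships on physical hardware. A minimal sketch (all names made up):

    # Trunk always builds and ships; incomplete features stay dark until the
    # flag is turned on per build or per environment.
    import json, os

    def load_flags():
        # Defaults baked into the build; dev/test machines can override via a file.
        flags = {"new_renderer": False, "telemetry_v2": False}
        path = os.environ.get("FLAGS_FILE", "flags.json")
        if os.path.exists(path):
            with open(path) as f:
                flags.update(json.load(f))
        return flags

    FLAGS = load_flags()

    def render_v1(scene):
        return f"v1:{scene}"   # the stable path customers actually get

    def render_v2(scene):
        return f"v2:{scene}"   # half-finished work, merged to trunk but dark

    def render(scene):
        return render_v2(scene) if FLAGS["new_renderer"] else render_v1(scene)

That's the whole trick: you keep integrating on trunk every day, and the clumsy "ship it to the customer" step only ever sees the paths you've chosen to light up.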
Using distributed version control isn't applicable to most people working in games.
9) is also debatable - requiring someone to clone the entire application to make an infra change to a testing environment.
Adding tickets to commit messages isn't necessarily a requirement - some work (at least in my area) is prototype-y and maybe ill-defined (the task might be to define it).
Being able to deploy from your own machine is a double-edged sword; the situation where you need this is an absolute last resort. Enabling deployments from dev machines means credentials to environments, write access to infra, and likely skirting around normal processes.
> Using distributed version control isn't applicable to most people working in games.
Are you sure about this? I mean, code absolutely should be versioned and if you can afford the storage, you should also version all of your assets, like models and audio.
> Are you sure about this? I mean, code absolutely should be versioned and if you can afford the storage, you should also version all of your assets, like models and audio.
Yes, and I never said you shouldn't use version control, simply that distributed version control isn't necessarily applicable (which is what the claim is).
Let's be honest, LFS is duct tape on a pig. It doesn't support SSH, for one. Mirroring a repository is mired with landmines, and probably most importantly it breaks the decentralised model of git by centralising the storage of your binary data.
> Either way, not using version control for any collaborative project is just asking for issues.
Which is not what I said, at all. The article says _distributed_ version control.
> Not using distributed version control in particular might just make things more annoying, as anyone who has ever worked with SVN might attest to.
By distributed version control you mean git here, right? The advantage of git isn't its technical merit, or the advantages of DVCS (particularly if you're using something like LFS, which turns it into a centralized VCS). The advantage of git is that it's well supported in many tools (CI, merge bits, code review, deployment, package managers). Frankly, my experience is that the complexity brought on by git (and that's after quite a few years using it alongside Perforce and more recently Plastic) often outweighs the benefits over something like Perforce, particularly on large repos.
#2 is to improve quality, shorten feedback loops, and simplify merges. You get those benefits whether or not you're able to push things to your customer at will.
Both #2 and #5 really just set a quality bar high enough to reduce long-term maintenance costs. I don't see why you wouldn't want that in a non-web scenario.
#12 is not talking about deploying to production, and I don't see why you shouldn't be able to deploy to a dev environment from your machine. It's nice not to have to run everything locally. Even for non-EU companies.
So aside from #9, which is of questionable utility for any type of development, no, I'm not convinced you've pointed to valid examples.
Pretty much everyone with more than one production environment will. Imagine developing Firefox. How do you do a "self-healing deployment" automatically whenever trunk changes?