The thing folks don't mention about AWS is the inherent competitive advantage its internal micro-startups have. We focus on AWS launching managed Elasticsearch or managed Kafka, and talk about them (legally) using open source contributions to make money, but I think those are minor compared to things like this.
What AWS has is a culture and institutional knowledge of how to launch new products that take foundational AWS services (S3, Lambda, EC2, DDB, etc.) and glue (!) them together better than a competing non-AWS company can. This is a bold claim (since AWS launches some very crappy products), but imagine being able to use AWS infrastructure at cost, having internal knowledge of how best to optimize that infrastructure, and having access to the engineers who own those services while you build abstractions and better user experiences on top of them.
I don't know how companies that compete in any related space can survive. When AWS is willing to throw whatever against the wall (launching 50+ services a year) to see what sticks, sooner or later they're going to land in your space.
Become more locked into AWS's foundational services -> these abstractions on top of them start to make more sense in engineering complexity / delivery time / possible cost dimensions -> Use more of these -> Become more locked into AWS's foundational services.
Speaking as a former AWS Engineer, I disagree with the sentiment that they are able to glue AWS services together better than what a competing non-AWS company can do. Internally the use of AWS is subject to the same constraints and APIs you and I have.
Their competitive advantage is their captive customer base, which would much rather pay a premium for an AWS-managed service than use another vendor.
Which is that companies do not procure individual AWS services but rather AWS itself. Meaning that whenever AWS releases a new tool, it is instantly approved and available for use across the company (barring internal processes, e.g. security hardening).
Compare this with a startup, which has to go through a six-month-long procurement process complete with vendor bake-offs in order to sell its similar tool.
If AWS continues to move into the application space they will surely dominate the enterprise because of this.
Just yesterday I selected SNS for a project instead of a local provider because AWS put it through the security audits we care about and we don't win prizes for spending time on these decisions.
-- AWS advertises the tool via the console and preintegrates it from both directions
-- IT+Procurement already approved AWS for projects, so PMs can skip vendor/tool approval+onboarding dances and focus on the budget one
True of not just AWS but Azure + GCP too
Startups can compete, but it gets into stuff like deep tech or cross-vendor integrations, where the visibility and integration advantages don't apply as much to the cloud vendors, so they'd rather go after easier targets until they can't. (Folks here posted about UI, but for B2B I disagree in most cases, unless there's something deeply technical about it that a 20-person team can't copy.)
The exception is open source software. Free Software's biggest risk to the org is patent and license encumbrance. If we can develop an easy way to detect such things in Free Software, and we develop communities where enterprises can freely contribute to them (to make them more enterprise-y), then it has a chance to replace incumbent managed solutions.
- Historically, OSS seems to be free product dev for big cloud (... cue AWS's paid PR people to say otherwise ...). Their integration, advertising, and procurement advantages make it MUCH easier to win contracts before the OSS devs may even know their software is being used, and without a bid process. For a fraction of the effort and contribution, they are switching it to a model of monopoly channel owners vs. content producers, and driving software margins to 0 on the content side. That's why anti-big-cloud, LGPL-when-SaaS-style licenses are emerging. There are always exceptions, but it's not the axis to compete on unless you adopt such a license.
- I agree about the community aspect, indirectly. If the software, in addition to being OSS, relies somehow on a community and its steward -- not just source code -- and participation in that community is somehow what pays for the OSS dev, yes. For example, maybe the community is also a social network (Slack/Teams across orgs), or is generating threat intel -- post-scale, the software itself matters less, so forking is OK.
The ability to stop by the desk of an S3 team member, ask whatever technical questions, and get authoritative answers is enough to defeat any competitor who wants to build products on top of S3.
Not to mention access to the roadmap, strategic investment, a genuine appreciation of product strengths and weaknesses, etc.
Don't forget being able to get high priority in the backlog if you need a feature from another service in order to launch.
Former AWS engineer who launched a service here. That, access to source code, and being able to set up an hour-long meeting with any engineer are the big points. Not that I think lacking these is insurmountable, but they're very nice to have.
I did not. I am comparing that to a random guy from some random startup, who ranks even behind the poor customers who cannot get hold of any devs for their confusing issues using AWS...
Yes. Compared to those, new AWS services are more likely to work with, and integrate with, existing services. However, the further you stray from 'Compute', the less likely this is to be the case. More 'esoteric' services tend to be their own microcosm and sometimes feel like they could have come from another company entirely (QuickSight? etc.)
This is still light years ahead of Azure (and to a lesser extent GCP), where even compute services will not necessarily work with one another. You need to make sure the "SKU"s are compatible. Want to use some fancy storage? Oh no, you need to use SKUs XYZ and premium this, premium that. Whereas if AWS releases a new storage type (such as io2), you can pretty much assume you can attach it to any of your existing instances (even if some particular types might be recommended).
Not to mention surprising behavior when you try to mix and match features. GCP and AWS, you have instances working perfectly fine, but you have discovered that they provide the ability to create 'internal' load balancers? Cool! Create one, point to the instances, or point to their respective automatically managed groups (ASGs or instance groups). It will be there in case you need it, your workloads are unaffected. Do that on Azure, and now your instances have no internet connectivity whatsoever, as all traffic is now routed through it. There are footguns everywhere.
Technically, GCP tends to be the most advanced of the bunch (their automatic instance migration is brilliant; meanwhile AWS keeps sending us emails saying that some instance is degraded and it's our problem now). Their networking capabilities are impressive as well (first to have global anycast load balancers, Google's premium network, subnets spanning AZs, etc.). However, they do seem to be too opinionated. Want proxy protocol on your NLBs, even though NLBs preserve the source IP so in theory you don't need it (but with a K8s ingress you might)? AWS says: sure, we have the feature, enable it, we don't care. Google says: why do you need proxy protocol, the source IP is there; these are not the headers you are looking for. Azure says: proxy protocol wat?
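For context on that debate: "proxy protocol" here means the PROXY protocol preamble a load balancer prepends to each connection so the backend can recover the real client address. A minimal sketch of building a v2 header for TCP over IPv4, following the published HAProxy PROXY protocol spec (the constants are fixed by that spec; the function name is my own):

```python
import socket
import struct

PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"  # 12-byte magic that opens every v2 header

def proxy_protocol_v2_header(src_ip: str, src_port: int,
                             dst_ip: str, dst_port: int) -> bytes:
    """Build a PROXY protocol v2 header for a proxied TCP-over-IPv4 connection."""
    ver_cmd = 0x21    # high nibble: version 2, low nibble: command PROXY
    fam_proto = 0x11  # high nibble: AF_INET, low nibble: STREAM (TCP)
    # Address block: source addr, dest addr, source port, dest port
    addr_block = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
                  + struct.pack("!HH", src_port, dst_port))
    return (PP2_SIGNATURE
            + struct.pack("!BBH", ver_cmd, fam_proto, len(addr_block))
            + addr_block)
```

The load balancer writes these bytes at the very start of the TCP stream, which is why a backend behind an IP-rewriting proxy still needs it, while an NLB that preserves the source IP arguably doesn't (GCP's position), unless something in between, like a K8s ingress, re-terminates the connection.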
> This is still light years ahead of Azure (and to a less extent GCP), where even compute services will not necessarily work with one another.
You can't use the SQL Server Virtual Machine extension on an Azure VM to extend the disks if the VM size is one of the AMD EPYC CPU types.
During the support call, the Microsoft tech shared a screenshot of the source code for the SQL VM extension, and it had a switch statement that decides if each feature is "supported" or not.
Let that sink in: Microsoft literally hard-codes their VM-size-to-feature lookups in probably thousands and thousands of places with huge switch statements full of code like this:
case "Standard_M416ms_v2": return false;
case "Standard_M416s_v2": return false;
case "Standard_M64ls": return true;
case "Standard_M64ms": return true;
This is their standard coding practice.
So next time you try a new VM size or type, don't be surprised if things randomly don't work or "aren't supported" for mysterious reasons...
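The anti-pattern is easy to sketch. A minimal Python illustration (the VM size names come from the comment above; the function names, the unknown-size default, and the data-driven alternative are my own assumptions for illustration, not Microsoft's actual code):

```python
# Hard-coded lookup, as in the screenshot: every feature check is a
# switch over VM size names, so any size not listed silently falls
# through to a default until a developer edits the code.
def disk_extend_supported_switch(vm_size: str) -> bool:
    if vm_size == "Standard_M416ms_v2":
        return False
    if vm_size == "Standard_M416s_v2":
        return False
    if vm_size == "Standard_M64ls":
        return True
    if vm_size == "Standard_M64ms":
        return True
    return False  # assumed default: unknown sizes count as "not supported"

# A data-driven alternative: capabilities live in one table (or, better,
# are advertised by the VM size metadata itself), so adding a new size
# is one row instead of edits to thousands of scattered switch statements.
SUPPORTS_DISK_EXTEND = {
    "Standard_M416ms_v2": False,
    "Standard_M416s_v2": False,
    "Standard_M64ls": True,
    "Standard_M64ms": True,
}

def disk_extend_supported_table(vm_size: str) -> bool:
    return SUPPORTS_DISK_EXTEND.get(vm_size, False)
```

Either way an unlisted size fails closed, which is exactly the "new VM size mysteriously unsupported" experience described above.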
> I don't know how companies that compete in any related space can survive. When AWS is willing to throw whatever against the wall (launching 50+ services a year) to see what sticks, sooner or later they're going to land in your space.
This is true for a subset of products, but not uniformly. To the extent you're building an infrastructure product, you get to choose which axis to compete on. If you're going up against AWS, then trying to compete with them on things like cost and reliability is likely a poor choice. But something like user/dev experience isn't. AWS's DocumentDB has a Mongo-compatible API, and yet Mongo's Atlas hosted service is responsible for most of the company's growth over the past year. Why? Because it provides a unique offering, not just a 'good-enough' offering, which is what a lot of higher-level AWS services are.
> sooner or later they're going to land in your space.
Absolutely. I've seen this a handful of times with companies I consult for, where they suddenly find themselves competing with AWS. I call it the November surprise because it happens around re:Invent.
There are several reasons this is a tough thing to compete against, and AWS's vertical integration is just one of them. I've already written about them and also how to come out ahead if you find yourself in this situation: https://www.gkogan.co/blog/big-cloud/
We were using Alooma for ETL for years until Google bought it and started to deprecate AWS connections. It was a massive PITA, but it mostly worked. We switched over to AWS DMS and it was easy. Honestly it didn't take much effort. It has worked flawlessly - literally zero errors - from the day we started it up, and best of all, it's free. All you pay for is the instance it's using for you. That sort of thing can save startups much-needed money. Yes, you're tied to the ecosystem - and that's what they want - but it's worth it. When I talk to people and basically say the same thing you're saying, they start to look at AWS a bit differently.
I think their key advantage is sales. Imagine a product that adds a small amount of value to a company but requires a long drawn out sales process including research on available vendors, pricing, security, use cases, determining requirements, etc vs a developer going to the AWS console and clicking "create databrew". It's no competition.
And with sales taking up such a huge percentage of many of these SaaS companies' revenue, Amazon can pass the lack of a sales force on to the customer as cost savings. Skip the sales process and the sales cost. Win-win.
I don't see the same broad amount of services launched out of Azure as I do from AWS, and definitely not from GCP.
I don't know if this is a strategic difference or an execution/cultural difference (AWS ships products faster, but they're barely usable in v1).
How are you judging the "broad amount of services" launched out of Azure? They release something on the order of 10-25 updates a week, their services feed runs nonstop.
I claim no special knowledge of AWS, but Azure is moving at full pace, certainly faster than even we global SIs can keep up with in terms of providing support and capabilities.
> What AWS has is a culture and institutional knowledge on how to launch new products that take foundational AWS services (S3, Lambda, EC2, DDB, etc.) and glues (!) them together better than what a competing non-AWS company can do. [...]
> I don't know how cos that compete in any related space can survive. When AWS is willing to throw whatever against a wall (launching 50+ services a year) to see what sticks, sooner or later they're going to land in your space.
> Become more locked into AWS's foundational services -> these abstractions on top of them start to make more sense in engineering complexity / delivery time / possible cost dimensions -> Use more of these -> Become more locked into AWS's foundational services.
This feels very different from Azure or GCP.