There's a Discourse forum if you're into async communication, and a Discord server for more up-to-the-minute updates, along with the GitHub repos where you can track bugs, file feature requests, and see how the project is made.
I've been a consumer of Bazzite for almost a full year now. I built an AMD-based gaming machine and wanted an experience as pleasant as SteamOS, but for HTPCs. That was what first turned me on to the universal-blue project.
I later picked up a Framework laptop (exactly 2 weeks ago) and have been daily-driving Bluefin on it, and the experience is exactly what I'd want from a daily driver: the durability and mindlessness of a Chromebook for updates, options to install all my tools/utilities, and disposable/composable development environments built right into the base system.
So "Cloud Native" speaks to multiple aspects of how universal-blue is built and distributed, and to some of the guiding principles behind the project.
I'll start at the very basics, where we define "Cloud Native":
Cloud native is the software approach of building, deploying, and managing modern applications in cloud computing environments.
I'll get a little hand-wavy here, as our desktops/laptops aren't typically defined as a "cloud" (read: a grouping of machines, typically behind an API, that someone else manages but makes available to users or customers). However, we can look at the desktop as a target platform for deployment. How universal-blue gets there is the really interesting part. That "made by cloud native nerds" is a very compact way of describing how the project is built, tested, and rolled out.
Universal-blue images are all built in CI. In this case, it's a combination of base-layer components - some projects included in the product operating system are built in COPR, and those COPR-built artifacts are then pulled in by a Containerfile - along with all the goodness contained in Silverblue (Fedora RPM artifacts).
That Containerfile is built, tested, and signed in GitHub Actions, and a manifest is then updated (somewhere - I don't actually know where ublue looks for these manifests to identify that it has an update - it might just be the GHCR registry, but don't hold me to that).
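To make that concrete, here's a rough sketch of what a Containerfile in this style looks like. The image tag and package name are hypothetical placeholders, not the project's actual build files - the pattern is just "start from a base image, layer RPMs on top":

```dockerfile
# Hypothetical sketch of a ublue-style Containerfile: start from a
# Fedora Atomic/Silverblue base image published on GHCR and layer
# extra packages (e.g. COPR-built artifacts) on top of it.
FROM ghcr.io/ublue-os/silverblue-main:latest

# "some-copr-package" is a placeholder for whatever RPMs the image adds.
RUN rpm-ostree install some-copr-package && \
    ostree container commit
```

CI then builds this image, runs it through tests, signs it, and pushes it to a registry - the same workflow you'd use for any container image.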
Now this probably all sounds like something you see in your day-to-day if you work in infrastructure, or at a company producing software/services for the web. But what's really unique from a consumer operating system perspective is that those builds and tests effectively gatekeep the "blessed" configuration for universal-blue. Classically, you have kernel modules that get built on your machine using a technique known as DKMS (Dynamic Kernel Module Support). With every kernel update you have to rebuild some module as part of the update process, and if your particular version of a module hasn't been vetted against the kernel you just pulled, you can be left in a rather bad state - I've had this happen to me with the proprietary NVIDIA drivers, as an example.
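For contrast, this is roughly what the traditional DKMS flow looks like on your own machine (the module name and version here are placeholders):

```shell
# Traditional DKMS: the module source lives on your machine and gets
# recompiled locally against each new kernel. If the compile fails,
# *your* update is the one that breaks.
dkms status                                    # list registered modules
sudo dkms install -m nvidia -v 550.54 -k "$(uname -r)"   # rebuild for the running kernel
```

Every one of those local rebuilds is a chance for your individual machine to end up in a state nobody has tested.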
How ublue delivers these modules is part of the not-so-secret sauce that makes it wonderful. These modules are built in the cloud in that same release pipeline, and if they fail, they don't get released! You simply don't see an update for that day, and things hum along just fine. The breakage happening somewhere other than your computer is part of that reliability promise - you won't be dealing with kernel module breakage; the release maintainers will have to resolve the break (or work with upstream to find a solution) so your incoming stream of updates can be unblocked.
Finally - there are a lot of "patterns" - processes to achieve a desired outcome - that were piloted in the Cloud Native world. Someone mentioned Cloud Native makes them think of CoreOS. I'm glad you brought this up: if you keep your older versions (by pinning - ublue keeps the last known-booted image by default, and you can change this to keep, say, 10 if you wanted), you can always roll back to the version that worked before you encountered a fault. This same pattern exists in the baseline Silverblue distribution.
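In practice the rollback/pinning pattern on an rpm-ostree system looks something like this (a sketch of the standard commands, assuming an ostree-based install):

```shell
# The previous deployment is kept around automatically, so rolling back
# to the version that last worked is a single command plus a reboot:
sudo rpm-ostree rollback

# To protect a known-good deployment from ever being garbage-collected,
# pin it (index 0 is the currently booted deployment):
sudo ostree admin pin 0
```

That's the same "keep the last release, roll back on fault" pattern you'd expect from a cloud deployment pipeline, applied to your laptop.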
This is not an exhaustive analysis but I've already penned a good novel here. I hope this helps outline how universal-blue brings Cloud Native to the desktop. I encourage you to kick the tires and try it out, even if only in a virtual machine.
Having personally also fucked up a Silverblue install (featuring a DKMS kernel module I built to support my DSLR camera HDMI capture card) with proprietary NVIDIA drivers - and then let it sit on that partition long enough that my Fedora version was too out of date to pull updates - and as someone who builds CI pipelines in $DAYJOB: thank you so very, very much.
Oh, I have a rather hard time noticing AI comments if the language they're written in isn't my native one.
Could you tell me what's most suspicious about the text? IMO the structure is a bit too well-rounded, and it kind of reads like a transcript of something someone said, not like a comment.
Doesn't look like GPT-4 to me; someone should make a "guess the LLM" game.
It's a sad world when a thoughtful, well-structured, obviously experience-based and informative comment is immediately assumed to be word-guessing-machine-generated garbage.
You certainly could. You'd be incurring another service running on Azure to act as the disk broker. A lot of end users prefer to use things like the provider's persistent disks and accept the limitations therein (like the 16-disk maximum per instance, and heightened costs for managed storage).
But there's nothing stopping you from enlisting Azure PVs as a resource, Ceph-managed PVs, and other incantations of durable storage. I only ask that you really consider the cost/benefit of each, and pick what makes the most sense to you.
My thought would be to use the Azure PV disk type, and if that's not dynamic enough to meet your needs, then enlist Ceph + large volumes and carve those up into RBDs to share among your workloads.
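As a sketch of the first option, here's roughly what dynamic provisioning with the Azure disk type looks like in Kubernetes. The names and the storage size are illustrative, and this uses the classic in-tree `azure-disk` provisioner:

```yaml
# Hypothetical example: a StorageClass backed by Azure managed disks,
# plus a claim your workloads can mount. Names/sizes are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-managed
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: azure-managed
  resources:
    requests:
      storage: 100Gi
```

If you outgrow this (disk-count limits, cost), the Ceph RBD route follows the same StorageClass/PVC shape with a different provisioner underneath.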
I'm sure there are others with differing opinions, and I'm happy to help you work through them (but not in an HN comment thread).
Seek me out in Slack as @lazypower, or ping me on the Juju IRC channel (irc.freenode.net, #juju) - I'm @lazypower there as well.
And finally, our Juju user mailing list is another great resource for support questions like the above:
I don't think it's documented in any official capacity, but we (SIG Cluster Ops) did generate some visuals that might aid in grokking the topology of Kubernetes as a whole, and we modeled this after production setups.
A few things to keep in mind:
These maps are service-centric, and abstract units as vertical columns in their respective diagrams. Services must be HA to be considered “production ready”.
Additional concerns that may/may-not be represented here:
- TLS Security on all endpoints
- TLS Key Rotation in the event of compromise/upgrade/expiration
- Durable storage backed workloads
- etcd state snapshots for cluster point-in-time recovery
- User/RBAC - this still needs more info before I can outline it (time limited)
- Network policy for namespace/application isolation (this is an unspoken requirement for many business units)
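For the namespace/application isolation bullet, a minimal NetworkPolicy looks something like this (the namespace and policy names are illustrative): it default-denies ingress to every pod in a namespace except traffic from pods within that same namespace.

```yaml
# Hypothetical example: restrict ingress in namespace "team-a" to pods
# within that same namespace (everything else is denied by default
# once any Ingress policy selects the pods).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: team-a
spec:
  podSelector: {}          # selects all pods in the namespace
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector: {}  # allow only pods in this namespace
```

Note that enforcement requires a CNI plugin that implements NetworkPolicy; the API object alone does nothing otherwise.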
We left off working on a network draft diagram, and if you're interested in contributing/participating in this process, join us in the #sig-cluster-ops Slack channel. We meet Thursdays (or have been, pending the new-year schedule).
As Marco alluded to, our only supported storage mechanism to date (which is represented and managed via the charm) is Ceph backed by RBD storage.
We have plans on including other storage vendors and mechanisms, but they aren't on the roadmap for the very near term. If end users start requesting a specific storage solution, it would go in our planning doc and get added to the roadmap. We're quite active with our early adopters that give us feedback and file bugs/requests.
To date you could continue to use alternative storage providers such as NFS or Gluster, but we don't have the PV creation and enlistment captured in the charm code just yet - again, due to priorities. End users pretty much set the priorities for us, and we then circle back with some lightweight planning and execution.
Juju is brilliant for orchestrating your services. It's refreshing to see an SOA approach to configuration management that embraces it rather than having it as an afterthought.
@tmikaeld - yeah, the Vagrant story with Juju is an emerging one and great for getting started quickly on your Windows/OS X machine! But when running natively, I prefer to use the local provider. LXC is so fast; when combined with BTRFS snapshots you get machines in milliseconds.
take your pick :)
Discourse: https://universal-blue.discourse.group/
Discord: https://universal-blue.org/mission/ (link on left side, click Discord)
GitHub: https://github.com/ublue-os/