I just deleted 1,372 disks from Google Cloud and 7 project spaces.
-
Scott Williams 🐧 replied to Taggart :donor:
@mttaggart @Viss @arichtman Ripe timing, with the advent of Node.js making stateless applications more mainstream, plus a complete lack of a coherent business model, which meant others managed to productize Docker before Docker itself could figure out how to do it.
-
@Viss @mttaggart @arichtman That's also true. Much the same way all the junior devs are putting AI on their resumes today, when their core experience is sticking an OpenAI token into some code they copied and pasted off the internet to make a chatbot.
-
Taggart :donor: replied to Scott Williams 🐧
@vwbusguy @Viss @arichtman The Node point resonates, because that's a lot of how I got started using Docker. But it wasn't just hype. There were real problems of deployability and reproducibility that it solved for Linux admins and developers targeting Linux servers.
I'll cop to missing Solaris on account of still being in school and not being a BSD expert, but when I was running school IT systems, Docker arrived and immediately solved longstanding complications.
-
Taggart :donor: replied to Taggart :donor:
@vwbusguy @Viss @arichtman And I wasn't alone. I distinctly remember the conversation amongst a lot of working Linux folks at the time being one of excitement and optimism.
-
Scott Williams 🐧 replied to Taggart :donor:
@mttaggart @Viss @arichtman Indeed. In context, Red Hat had bought Qumranet and was competing with Xen, VMware, and VirtualBox, saying things like you could run 5 VMs on Red Hat for the cost of 3 on VMware, etc. Hypervisors were a huge deal, and OpenStack vs Eucalyptus was the big hype.
On top of that, proprietary PaaS offerings like Heroku were huge.
Docker came along, in the midst of all of that discussion, as a way to run VM-like workloads with only the overhead of a PaaS.
-
Scott Williams 🐧 replied to Scott Williams 🐧
@mttaggart @Viss @arichtman Docker was way less complicated to deploy than something like Eucalyptus or OpenStack, and you could run it on your existing Linux servers instead of a proprietary PaaS or something awkward like Red Hat OpenShift 2.
Now you also had a way for a developer to actually ship what "works on my laptop" to the server with more assurances than before.
-
spmatich :blobcoffee: replied to Scott Williams 🐧
@vwbusguy I have done a bit of on-prem midrange support in a past life. On-call is so bad for your health. Are you running OpenShift on prem? Who is supporting the clusters?
-
Scott Williams 🐧 replied to spmatich :blobcoffee:
@spmatich I did OpenShift at my last two employers, but we're currently using Rancher here, on prem, with paid support. We have a team that supports it in addition to other infrastructure.
-
Sass, David replied to Scott Williams 🐧
@vwbusguy I just have one question.
How is backup being done and stored, and is the storage holding the backups included in that cost?
The last time I was designing an on-prem storage system to store backups of the data, WITHOUT the infrastructure backups, we basically ended up at triple the cost of the live system.
-
@vwbusguy @Viss @arichtman @mttaggart What do you think about Docker Swarm today? I tried k8s in my homelab and I hated it. Just not a great fit for such a low scale. Now I run Docker Swarm and I hate it much less. Still not great, but I see no alternative...
-
Scott Williams 🐧 replied to DrRac27
@DrRac27 @Viss @arichtman @mttaggart If you want a small-scale, lightweight k8s, then I recommend k3s. You can run k3s on one node.
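For reference, the whole single-node setup is roughly this (a sketch, nothing environment-specific):

```
# Install k3s as a single-node server; the script sets up the systemd service for you
curl -sfL https://get.k3s.io | sh -

# The bundled kubectl talks to the local cluster
sudo k3s kubectl get nodes
```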
-
@vwbusguy @Viss @arichtman @mttaggart That's what I tried first, but I liked it even less. With k8s I at least had to learn how it works, and every upgrade has a defined path. In k3s the install is `curl | sh`, and what about upgrades? Just swapping out the binary and hoping nothing breaks? I got it up and running with Ansible, but I wasn't feeling great about it and expected it to break all the time. With Swarm I just install the Debian package and use the community.docker.docker_swarm Ansible module.
-
Scott Williams 🐧 replied to DrRac27
@DrRac27 @Viss @arichtman @mttaggart Upgrading k3s is just running that same script again; it upgrades the components for you. You can also revert versions, and you can back up the datastore in case you want to start fresh. On a single-node k3s install, that datastore is just a SQLite database rather than etcd.
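As a rough sketch of what that looks like in practice (the version pin and backup path are just examples):

```
# Re-running the installer upgrades the components in place;
# pin INSTALL_K3S_VERSION if you want to control (or roll back) the jump
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.30.4+k3s1" sh -

# On a single node the datastore is a SQLite file, so a cold copy is a usable backup
sudo systemctl stop k3s
sudo cp -a /var/lib/rancher/k3s/server/db /root/k3s-db-backup
sudo systemctl start k3s
```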
-
Scott Williams 🐧 replied to Scott Williams 🐧
@DrRac27 @Viss @arichtman @mttaggart Coincidentally, Ansible is the reason I got into using k3s. I've been running AWX on it for years in my day job, for an environment where I didn't have k8s established but just wanted to run Ansible AWX there.
-
Scott Williams 🐧 replied to Sass
@sassdawe That's a valid question. It's important context that we weren't starting from scratch on prem but have plenty of existing infrastructure. Backups are both local to cluster storage (e.g., Longhorn) and in a completely external Ceph environment (RGW and/or RBD). Longhorn volumes, etcd snapshots, Rancher Backup, etc., are backed up to Ceph RGW. Basically, if we have the node token secret and the RGW secrets for a cluster, we can recreate everything.
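For the etcd-snapshot piece specifically, the k3s/RKE2 tooling can push snapshots straight to an S3-compatible endpoint like RGW. A sketch, with placeholder endpoint, bucket, and credential names:

```
# Take an on-demand etcd snapshot and ship it to Ceph RGW over the S3 API
k3s etcd-snapshot save \
  --s3 \
  --s3-endpoint rgw.example.internal \
  --s3-bucket cluster-etcd-snapshots \
  --s3-access-key "$RGW_ACCESS_KEY" \
  --s3-secret-key "$RGW_SECRET_KEY"
```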
-
Scott Williams 🐧 replied to Scott Williams 🐧
@sassdawe Needless to say, paying for Longhorn commercial support was part of the cost factor. Even though it's 100% FOSS, running it in production without commercial support is too much of a continuity risk. Doing it this way was also a small fraction of the cost of ODF from Red Hat.
-
Scott Williams 🐧 replied to Scott Williams 🐧
@sassdawe If you mean host hardware contingency, we're using Rancher Elemental to provision hardware with SLE Micro and assign it to clusters as necessary. There are other ways to do this with k8s, such as metal3, which is what OpenShift uses under the hood.
https://elemental.docs.rancher.com/
https://metal3.io/
I definitely recommend doing reproducible, immutable #Linux for #Kubernetes hosts, whether that's Sidero, SUSE, RHEL, or Fedora.
-
DrRac27 replied to Scott Williams 🐧
@vwbusguy @Viss @arichtman @mttaggart OK, good to know. I still don't think it is right for me, but at least I learned something, thanks!
-
Scott Williams 🐧 replied to DrRac27
@DrRac27 @Viss @arichtman @mttaggart For things that I run in a container that don't need all the overhead of Kubernetes, I use podman with systemd to manage them, so they end up running more like traditional Linux services but getting updates through `podman pull` instead of `yum update`. Podman plays nicer with rootless, firewalld, cgroups v2, etc., and has a fairly straightforward migration path to k8s if you end up needing to go bigger down the road.
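The shape of it is roughly this (image and unit names are made up; newer podman would do the same thing with a Quadlet file instead):

```
# Create the container once (rootless), then let podman write a systemd user unit for it
podman run -d --name myapp -p 8080:8080 registry.example.com/myapp:latest
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name myapp > ~/.config/systemd/user/container-myapp.service

# Hand it over to systemd
podman stop myapp && podman rm myapp
systemctl --user daemon-reload
systemctl --user enable --now container-myapp.service

# Updates come from pulling a new image instead of `yum update`
podman pull registry.example.com/myapp:latest
systemctl --user restart container-myapp.service

# And the later migration path: dump the container as a kube manifest
podman generate kube myapp > myapp.yaml
```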
-
Scott Williams 🐧 replied to Scott Williams 🐧
@DrRac27 @Viss @arichtman @mttaggart My general opinion is that podman with a proxy in front (e.g., Caddy, nginx) can do most of what Swarm can with less overhead, and if you really need more than that, then you probably should be thinking about Kubernetes anyway.
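To make that concrete, a sketch with made-up names: one podman network, the app container on it, and Caddy in front routing by container name (unprivileged host ports so it also works rootless):

```
# Containers on a user-defined network can reach each other by name
podman network create webproxy
podman run -d --name myapp --network webproxy registry.example.com/myapp:latest

# Caddy terminates HTTP/TLS and proxies to the app container per its Caddyfile
podman run -d --name caddy --network webproxy \
  -p 8080:80 -p 8443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile:Z" \
  docker.io/library/caddy:latest
```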