I just deleted 1,372 disks and 7 project spaces from Google Cloud.
-
Scott Williams 🐧 replied to Scott Williams 🐧 last edited by
This is likely the last time I'll have those kinds of numbers, as workloads have largely moved to bare-metal on-premises #Kubernetes systems over the past year. We basically bought the same hardware we were renting from Google and now run things at a fraction of the spend. The CapEx to do it was less than one year of the OpEx spend not to.
-
Taggart :donor: replied to Scott Williams 🐧 last edited by
-
@mttaggart @vwbusguy oh hey guys, what's going on? Yanking @Viss' chain again? Count me in!
-
Gene Pasquet replied to Scott Williams 🐧 last edited by
@vwbusguy this is very interesting as we've been trying to optimise our platform costs too. How much of the Google Cloud cost did you end up saving? 1/3?
-
Scott Williams 🐧 replied to Gene Pasquet last edited by
@etenil I'm not going to share that publicly, but it was a significant amount that was very much worth the investment, including paid commercial support for the open source components we're using to manage it all.
-
Markus Werle replied to Scott Williams 🐧 last edited by
@vwbusguy I am curious what you use as your #Kubernetes stack, especially which load balancer you prefer for Pods that need their own unique IP address. We played around with #k3s and #MetalLB, but I would like to know if you prefer other options.
-
Scott Williams 🐧 replied to Markus Werle last edited by
@markuswerle We did MetalLB early on with a Sidero Metal prototype using BGP and it worked. We ended up going with Rancher (RKE2 and k3s) and using an external HAProxy + keepalived setup to proxy to the backend for this particular stack, but we're also using Multus with bridging and DHCP for other implementations, especially ones running VM workloads via Harvester. Generally, flannel or calico for CNI, depending on the use case.
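A minimal sketch of the kind of health check keepalived can run in a setup like that, assuming HAProxy fronts the RKE2 API servers on the local node; the host and port below are assumptions for illustration, not the actual config:

```python
#!/usr/bin/env python3
# Sketch of a keepalived track_script (assumed setup, not the poster's config):
# exit 0 if the local HAProxy frontend answers, non-zero otherwise, so
# keepalived can move the floating IP to another node when it starts failing.
import socket
import sys

HAPROXY_HOST = "127.0.0.1"   # assumption: HAProxy listens locally
HAPROXY_PORT = 6443          # assumption: frontend proxying to the apiservers

def haproxy_is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the frontend succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    sys.exit(0 if haproxy_is_up(HAPROXY_HOST, HAPROXY_PORT) else 1)
```

keepalived would reference a script like this from a vrrp_script/track_script block and fail the VIP over when it exits non-zero.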
-
Scott Williams 🐧 replied to Scott Williams 🐧 last edited by
@markuswerle This isn't the question you asked, but one thing I like about flannel, besides its relative simplicity, is that it integrates easily with wireguard, so you can encrypt k8s traffic between nodes in a kernel-native way.
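For reference, switching flannel to this mode means setting the backend type to "wireguard" in the kube-flannel net-conf. The sketch below is just a quick node-side sanity check that the tunnel actually has peers; the interface name is an assumption (recent flannel versions use flannel-wg, yours may differ):

```python
#!/usr/bin/env python3
# Sanity-check sketch: ask `wg` for the peers on flannel's wireguard interface.
# Run on a node as root; one peer per other node is roughly what you'd expect.
import subprocess
import sys

IFACE = "flannel-wg"  # assumption: default interface name for flannel's wireguard backend

def wireguard_peers(iface: str) -> list[str]:
    """Return the peer public keys that `wg show <iface> peers` reports."""
    out = subprocess.run(
        ["wg", "show", iface, "peers"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    try:
        peers = wireguard_peers(IFACE)
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        print(f"could not query {IFACE}: {exc}", file=sys.stderr)
        sys.exit(1)
    print(f"{IFACE}: {len(peers)} peer(s)")
```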
-
@vwbusguy If you can answer, I'm curious if the savings included the cost of an on-prem data center? Or was that already something you had? (Server room, racks, routers, firewalls, backup generator, redundant fiber internet, IT, facilities, and security personnel.) Also, I can't speak highly enough of OpenStack for a private cloud.
-
@Char I am fortunate to work at a place that has robust, established data centers. It was one of the original 4 nodes of the internet, for reference.
OpenStack is one of the things we retired and migrated into this new system, but that's more of a matter of what was specifically practical for our use cases than a statement about OpenStack. We're still using ceph and libvirt images, so there's a lot of familiar concepts.
-
@Viss @arichtman @mttaggart I see you have also familiarized yourself with the Kubernetes documentation.
-
@vwbusguy @arichtman @mttaggart oh totally. nothing like "doing security" by adding 50 container ships worth of attack surface
-
@Viss @vwbusguy @arichtman On the other hand, going on prem!
-
@mttaggart @vwbusguy @arichtman by the simple mechanic of putting shit behind a firewall, you are staggeringly safer than using the cloud, by sheer virtue of the fact that raw access to the stuff isn't "just public".
-
@vwbusguy @Viss @mttaggart god it's so real. One really incisive take I saw recently was that the balance is wrong in Kubernetes WRT platform development/extension. The take is that controllers are made quite simple to write, as they in principle have very defined responsibilities. But in practice you get all the controllers acting on shared mutable state, and sometimes fighting over properties of resources; see VPA vs HPA for a common example.
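A toy model of that fighting, nothing like the real HPA/VPA code, just two loops reacting to the same CPU signal and invalidating each other's last decision:

```python
# Toy illustration (not real Kubernetes code): an HPA-like loop scales replicas
# from CPU utilization, while a VPA-like loop rewrites the CPU request that
# utilization is computed from, so neither decision ever settles.
import math

TOTAL_LOAD_M = 2000      # total CPU the workload actually needs, in millicores
TARGET_UTIL = 0.60       # HPA-style target utilization

replicas = 3
cpu_request_m = 200

for tick in range(5):
    per_pod_usage = TOTAL_LOAD_M / replicas
    utilization = per_pod_usage / cpu_request_m

    # HPA-like step: scale replicas toward the target utilization.
    replicas = max(1, math.ceil(replicas * utilization / TARGET_UTIL))

    # VPA-like step: resize the request to fit observed usage (plus headroom),
    # silently changing the utilization the HPA will see next tick.
    cpu_request_m = int(per_pod_usage * 1.2)

    print(f"tick {tick}: replicas={replicas} request={cpu_request_m}m util={utilization:.2f}")
```

Run it and the replica count and request swing back and forth instead of converging, which is roughly why mixing HPA and VPA on the same metric is discouraged.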
-
Scott Williams 🐧 replied to Viss last edited by
@Viss @mttaggart @arichtman And how much of it isn't encrypted between services or at rest in many setups.
-
@arichtman @vwbusguy @mttaggart exposed api endpoints, super secret secrets hanging out in env vars, RBAC not configured or not present, public api access, shared usernames, images that are 2-5 years old with trivial kernel privesc bugs, containers built by people who don't do security and spread far and wide. it's just a risk matryoshka doll full of exploitable surfaces and configs, and all the corners and edges full of "industry best practices" written by non-security people
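A rough starting point for catching just one item on that list (literal secrets passed as plain env vars), assuming the official kubernetes Python client and a working kubeconfig; the name patterns are arbitrary examples, not a real scanner:

```python
# Audit sketch: flag containers whose env vars look like secrets set as
# literal values instead of via valueFrom/secretKeyRef.
from kubernetes import client, config

SUSPECT = ("SECRET", "PASSWORD", "TOKEN", "API_KEY", "PRIVATE_KEY")

def main() -> None:
    config.load_kube_config()           # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        for container in pod.spec.containers:
            for env in container.env or []:
                # env.value set literally (not via valueFrom) is the smell here
                if env.value and any(s in env.name.upper() for s in SUSPECT):
                    print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                          f"[{container.name}] literal env var {env.name}")

if __name__ == "__main__":
    main()
```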
-
@vwbusguy @Viss @mttaggart bruh if we have to add Istio to TLS everything I'm going to die