I just deleted 1,372 disks from Google Cloud and 7 project spaces.
-
Gene Pasquet replied to Scott Williams 🐧 last edited by
@vwbusguy this is very interesting as we've been trying to optimise our platform costs too. How much of the gcloud cost did you end up saving? 1/3?
-
Scott Williams 🐧 replied to Gene Pasquet last edited by
@etenil I'm not going to share that publicly, but it was a significant amount that was very much worth the investment, including paid commercial support for the open source components we're using to manage it all.
-
Markus Werle replied to Scott Williams 🐧 last edited by
@vwbusguy I am curious about what you use as #Kubernetes Stack, especially which load balancer you prefer for Pods that need their unique IP address. We played around with #k3s and #MetalLB, but I would like to know if you prefer other options.
-
Scott Williams 🐧 replied to Markus Werle last edited by [email protected]
@markuswerle We did MetalLB early on with a Sidero Metal prototype using BGP and it worked. We ended up going with Rancher (RKE2 and k3s) and utilizing an external HAProxy + keepalived setup to proxy to the backend for this particular stack, but we're also using multus with bridging and dhcp for other implementations, especially ones running VM workloads via Harvester. Generally, flannel or calico for CNI depending on the use case.
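For anyone wanting to picture the HAProxy + keepalived front end described above, here's a minimal sketch. All addresses, node names, and ports are made up for illustration; the actual setup will differ:

```
# /etc/haproxy/haproxy.cfg -- TCP-proxy a floating VIP to the cluster ingress nodes
# (192.0.2.10 and the 10.0.0.x backends are hypothetical)
frontend k8s_ingress_https
    bind 192.0.2.10:443
    mode tcp
    default_backend rke2_ingress

backend rke2_ingress
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:443 check
    server node2 10.0.0.12:443 check

# /etc/keepalived/keepalived.conf -- float the VIP between two proxy hosts
vrrp_instance VI_1 {
    state MASTER            # set BACKUP on the second proxy host
    interface eth0
    virtual_router_id 51
    priority 100            # use a lower priority on the BACKUP host
    virtual_ipaddress {
        192.0.2.10          # same VIP the haproxy frontend binds
    }
}
```

The idea is that keepalived moves the VIP between the two proxy hosts via VRRP, so clients always hit 192.0.2.10 regardless of which proxy is alive.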
-
Scott Williams 🐧 replied to Scott Williams 🐧 last edited by
@markuswerle This isn't the question you asked, but one thing I like about flannel, besides the relative simplicity, is that it easily integrates with wireguard so you can encrypt k8s traffic between nodes in a kernel native way.
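For context, that's flannel's wireguard backend. A minimal sketch of what it looks like in the `net-conf.json` of the kube-flannel ConfigMap (the pod CIDR here is just the common default, not necessarily theirs):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "wireguard"
      }
    }
```

With the backend type set to `wireguard`, flannel encrypts pod traffic between nodes using the kernel's WireGuard implementation instead of sending plain VXLAN.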
-
@vwbusguy
If you can answer, I'm curious if the savings included the cost of an on-prem data center? Or was that already something you had? (Server room, racks, routers, firewalls, backup generator, redundant fiber internet, IT, facilities, and security personnel.) Also, I can't speak highly enough of OpenStack for a private cloud.
-
@Char I am fortunate to work at a place that has robust, established data centers. It was one of the original 4 nodes of the internet, for reference.
OpenStack is one of the things we retired and migrated into this new system, but that's more of a matter of what was specifically practical for our use cases than a statement about OpenStack. We're still using ceph and libvirt images, so there's a lot of familiar concepts.
-
@Viss @arichtman @mttaggart I see you have also familiarized yourself with the Kubernetes documentation.
-
@vwbusguy @arichtman @mttaggart oh totally. nothing like "doing security" by adding 50 container ships worth of attack surface
-
@Viss @vwbusguy @arichtman On the other hand, going on prem!
-
@mttaggart @vwbusguy @arichtman by the simple mechanic of putting shit behind a firewall, you are staggeringly safer than using the cloud, by sheer virtue of the fact that raw access to the stuff isn't "just public".
-
@vwbusguy @Viss @mttaggart god it's so real. One really incisive take I saw recently was that the balance is wrong in Kubernetes WRT platform development/extension. The take is that controllers are made quite simple to write, as they in principle have very defined responsibilities. But in practice you get all the controllers acting on shared mutable state, and sometimes fighting over properties of resources - see VPA vs HPA perhaps for a common example.
-
Scott Williams 🐧 replied to Viss last edited by [email protected]
@Viss @mttaggart @arichtman And how much of it isn't encrypted between services nor at rest in many setups.
-
@arichtman @vwbusguy @mttaggart exposed api endpoints, super secret secrets hanging out in env vars, rbac not configured or not present, public api access, shared usernames, images that are 2-5 years old with trivial kernel privesc bugs, containers built by people who don't security and spread far and wide. it's just a risk matryoshka doll full of exploitable surfaces and configs, and all the corners and edges full of "industry best practices", written by non-security people
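On the "rbac not configured" point: least privilege isn't much YAML. A hedged sketch, with the namespace and service account names made up, scoping one workload to read-only pod access instead of a shared admin credential:

```yaml
# Role limited to reading pods in a single namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-prod        # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind it to one dedicated service account, not a shared user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app-prod
subjects:
- kind: ServiceAccount
  name: app-svc              # hypothetical service account
  namespace: app-prod
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```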
-
@vwbusguy @Viss @mttaggart bruh if we have to add Istio to TLS everything I'm going to die
-
Scott Williams 🐧 replied to Taggart :donor: last edited by
@mttaggart @Viss @arichtman I'm in favor of securing on prem stuff as if it were public. I mean, definitely do network segmentation and all, but don't not harden/encrypt things just because you are behind a NAT.
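Concretely, "segment but still harden" inside a cluster can start with a default-deny NetworkPolicy per namespace, with explicit allows layered on top. A minimal sketch (namespace name is hypothetical):

```yaml
# Deny all ingress and egress for every pod in the namespace;
# anything that needs to talk must be explicitly allowed afterwards
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app-prod        # hypothetical namespace
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```

One caveat: NetworkPolicy is only enforced if the CNI supports it - calico does, plain flannel does not - which is another factor in the flannel-vs-calico choice mentioned upthread.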
-
@vwbusguy @mttaggart @arichtman that's a good posture to maintain - but the topology as it exists today is basically "if you use oldschool networking techniques, like a hardware firewall for example, it reduces the risk of whole classes of bugs, simply because the list of possible attackers goes from 'anybody on the internet' to 'only people in the LAN'"
-
Scott Williams 🐧 replied to Scott Williams 🐧 last edited by [email protected]
@mttaggart @Viss @arichtman I think Viss has an absolutely valid point that people often don't secure their public cloud stuff as if it were public, either.
-
@Viss @vwbusguy @mttaggart yea that's (ime) generally good practice for cloud clusters too. All nodes in private subnets, use API gateways or select nodes as DMZ