• 0 Posts
  • 10 Comments
Joined 4 months ago
Cake day: June 16th, 2025


  • Currently, I have a Proxmox cluster of three 1L Dell nodes with 6 Kubernetes nodes on it (3 masters, 3 workers). It lets me do things like migrate services off a host so I can take it out, do upgrades/maintenance, and put it back without hearing about downtime from the family/friends.
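
    Node maintenance is basically drain, work, uncordon. A minimal sketch (the node name is a placeholder, not my actual host):

        # Move pods off the node before taking it down for maintenance
        kubectl drain kube-worker-1 --ignore-daemonsets --delete-emptydir-data
        # ...do the upgrade/maintenance, then let workloads schedule back
        kubectl uncordon kube-worker-1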

    For storage, I’ve got a Synology NAS with NFS set up, and the pods are configured to use it for their storage if they need it (so, Jellyfin, Immich, etc.). I do regular backups of the NAS with rsync. So if that goes down, I can restore or stand up a new NAS with NFS and it’ll be back to normal.
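
    For the curious, the NFS wiring is roughly a static PersistentVolume plus a claim that pods mount. A rough sketch; the server name, export path, and size are placeholders, not my actual setup:

        # Point a PV at the NAS export, then claim it for pods like Jellyfin/Immich
        kubectl apply -f - <<'EOF'
        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: media-nfs
        spec:
          capacity:
            storage: 500Gi
          accessModes: ["ReadWriteMany"]
          nfs:
            server: synology.lan      # NAS hostname (placeholder)
            path: /volume1/media      # NFS export (placeholder)
        ---
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: media-nfs
        spec:
          accessModes: ["ReadWriteMany"]
          storageClassName: ""
          volumeName: media-nfs
          resources:
            requests:
              storage: 500Gi
        EOF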


  • I feel like, for me at least, GitOps for containers is peace of mind. I run a small Kubernetes cluster as my home lab, and all the configs are in git. If need be, I know (because I tested it) that if something happens to the cluster and I lose it all, I can spin up a new cluster, apply the configs from git, and be back up and running. Because I do deployments directly from git, I know that everything in git is up to date and versioned, so I can roll back.

    I previously ran a set of Docker containers with Compose and then Swarm, and I always worried something wouldn’t be recoverable. Adding GitOps here reduced my “What If?” quotient tremendously.
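
    The recovery drill itself is short. A rough sketch, assuming a kustomize-style repo layout (the URL and path are made up for illustration):

        # Rebuild from nothing: clone the config repo and re-apply everything
        git clone https://git.example.com/homelab/k8s-config.git
        cd k8s-config
        kubectl apply -k clusters/home   # kustomize overlay; path is a placeholder
        # Rolling back is just reverting the bad commit in git and applying again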


  • I use an rsync job to do it. By default, rsync uses the file’s metadata (size and modification time) to determine whether it has changed and updates based on that. You can opt to use checksums instead if you’d rather. IIRC, you can do it with a Synology task, or just do it yourself on the command line. I’ve got Jenkins set up to run it so I can gather the logs and not have to remember the command all the time (I use it for other periodic jobs as well), but it’s pretty straightforward on its own.
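
    The job itself is something like this (the source path and backup host are placeholders):

        # Default mode: compares size + modification time, copies only what changed
        rsync -a --delete /volume1/ backup-host:/backups/nas/
        # Add -c to compare full checksums instead of metadata (slower but thorough)
        rsync -ac --delete /volume1/ backup-host:/backups/nas/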



  • thejml@sh.itjust.works to Selfhosted@lemmy.world · 1U mini PC for AI? · 2 months ago (edited)

    Honestly, if you’re delving into Kubernetes, just add some more of those 1L PCs. I tend to find them on eBay cheaper than Pis. Last year I snagged 4x 1L Dells with 16GB RAM for $250 shipped. I swapped some RAM around, added some new SSDs, and now have 3x Kube masters, 3x Kube workers, and a few VMs on a Proxmox cluster across three of the 1Ls with 32GB RAM and a 512GB SSD each, and it’s been great. The fourth became my wife’s new desktop.

    Big plus: there are so many more x86_64 containers out there compared to Pi-compatible ARM ones.



  • thejml@sh.itjust.works to Selfhosted@lemmy.world · The Future is NOT Self-Hosted · 3 months ago

    “Instead of building our own clouds, I want us to own the cloud. Keep all of the great parts about this feat of technical infrastructure, but put it in the hands of the people rather than corporations. I’m talking publicly funded, accessible, at-cost cloud services.”

    I worry that this will quickly follow this path:

    • Someone has to pay for it, so it becomes like an HOA of compute (a Compute Owners Association, perhaps). Everyone contributes, everyone pays their share.
    • Now there’s a group making decisions… and they can impose rules voted upon by the group. Not everyone will like that, causing schisms.
    • Economies of scale: COAs get large enough to become more mini-corps and less communal. Now you’re starting to see “subscription fees” no different from many cloud providers’, just with more “ownership and self-regulation.”
    • The people running these find that it takes a lot of work and need a salary. They also want to get hosted somewhere better than someone’s house, so they look for colocation facilities and worry about HA and DR.
    • They keep growing and draw the ire of companies for hosting copies of licensed resources. Ownership (which this article says we don’t have anyway) is hard to prove, and lawsuits start flying. The COA has to protect itself, so it starts having to police what’s stored on it. And now it’s no better than what it replaced.