I would say the more regular expiration and renewal of an LE cert is better.
It’s an ongoing check instead of an annual check.
At the homelab scale, proxmox is great.
Create a VM, install docker and use docker compose for various services.
Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
Have Proxmox take regular backups of the VMs.
Every now and then, copy those backups onto an external USB hard drive.
Take snapshots before, during and after tinkering so you have checkpoints to restore to. Copy the latest snapshot onto an external USB drive once you are happy with the tinkering.
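For the USB copy step, something like this on the Proxmox host is enough (the VM ID, mount point and paths here are just placeholders for whatever your setup uses):

```bash
# One-off backup of VM 100; scheduled backups from the GUI produce the same dump files
vzdump 100 --mode snapshot --compress zstd --dumpdir /var/lib/vz/dump

# Mount the external USB drive and sync the dump directory onto it
mount /dev/sdb1 /mnt/usb
rsync -av /var/lib/vz/dump/ /mnt/usb/proxmox-backups/
umount /mnt/usb
```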
Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little readmes describing how to get that compose file to work.
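A compose file in that repo doesn’t need to be anything fancy. Something like this is plenty to start with (the service, image and ports are just examples):

```yaml
# docker-compose.yml - one small service as a starting point
services:
  whoami:
    image: traefik/whoami:latest   # tiny demo container, swap in whatever you actually run
    ports:
      - "8080:80"                  # host port 8080 -> container port 80
    restart: unless-stopped
```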
Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.
That’s all you really need to do.
At some point, you will run into an issue or limitation. Then you have to solve for that problem, update your VMs, compose files, config files, readmes and git repo.
Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.
The need to automate any of the above will become apparent when tinkering stops being fun.
The best thing to do to learn all these services is to comb the documentation, read GitHub issues, browse the source a bit.
Bitwarden is cheap enough, and I trust them as a company enough that I have no interest in self hosting vaultwarden.
However, all these hoops you have had to jump through are excellent learning experiences that are a benefit to apply to more of your self hosted setup.
Reverse proxies are the backbone of hosting and services these days.
Learning how to inspect docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
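For the “where does this container actually keep its stuff” question, docker itself will tell you most of it (the container name is just a placeholder):

```bash
# List every volume and bind mount a container is using
docker inspect --format '{{ json .Mounts }}' my-container

# Poke around inside the container to find config and data paths
docker exec -it my-container sh
```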
Learning how to set up more useful/granular backups beyond a basic VM snapshot in proxmox can be applied to any install anywhere.
The most annoying thing about a lot of these is that tutorials are “minimal viable setup” sorta things.
Like “now you have it setup, make sure you tune it for production” and it just ends.
And other tutorials that do talk about the next step, getting things production ready, often reference outdated versions or have different core setups, so they don’t quite apply.
I understand your frustrations.
Nano is useful because it is everywhere.
There are better editors, but being familiar with nano and its shortcuts means you can edit files pretty much anywhere.
Same with knowing the basics of vim (like being able to edit, save and exit).
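The vim survival kit really is tiny:

```
i        enter insert mode so you can actually type
Esc      back to normal mode
:wq      write the file and quit
:q!      quit without saving
```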
If your windows computer makes an outbound connection to a server that is actively exploiting this, then yes: you will suffer.
But if your windows computer is just chilling behind a network firewall that only forwards established ipv6 traffic (like 99.9999% of default routers/firewalls), then you are extremely, extremely, ultra unlucky to be hit by this (or you are such a high value target that it’s likely government level exploits). Or you are an idiot visiting dodgy websites or running dodgy software.
Once a device on a local network has been successfully exploited for the RCE to actually gain useful code execution, then yes: the rest of your network is likely compromised.
Classic security in layers. Isolation/layering of risky devices (that’s why my homelab is on a different vlan than my home network).
And even if you don’t realise your windows desktop has been exploited (I really doubt that this is a clean exploit, you would probably notice a few BSODs before they figure out how to backdoor it), it then has to actually exploit your servers.
Even if they turn your desktop into a botnet node, that will very quickly be cleaned out by windows defender.
And I doubt that any attacker will have time to actually turn this into a useful and widespread exploit, except in targeting high value targets (which none of us here are. Any nation state equivalent of the US DoD isn’t lurking on Lemmy).
It comes back to: why are you running windows as a server?
ETA:
The possibility that high value targets are exposing windows servers on IPv6 via public addresses is what makes this CVE’s severity rating so high.
Sensible people and sensible companies will be using Linux.
Sensible people and sensible companies will be very closely monitoring what’s going on with windows servers exposed by ipv6.
This isn’t an “ipv6 exploit”. This is a windows exploit. Of which there have been MANY!
If the router/gateway/network firewall (IE not the local one) is blocking forwarding of unknown IPv6 traffic, then the exploit needs a compromised server that you connect to via IPv6 (IE your windows client connecting out to a compromised server that is actively exploiting this IPv6 CVE).
It’s not like having IPv6 enabled on a windows machine automatically makes it instantly exploitable by anyone out there.
Routers/firewalls will only forward IPv6 for established connections, so your windows machine has to connect out.
Unless you are specifically forwarding to a windows machine, at which point you are intending that windows machine to be a server.
Essentially the same as some exploit in some service you are exposing via NAT port forwarding.
Maybe a few more avenues of exploit.
Like I said. Why would a self-hoster or homelabber use windows for a public facing service?!
How many people are running public facing windows servers in their homelab/self-hosted environment?
And “it’s worked so far” isn’t a great reason to ignore new technology.
IPv6 is useful for public facing services. You don’t need a single proxy that covers all your http/s services.
It’s also significantly better for P2P applications, as you no longer need to rely on NAT traversal bodges or insecure UPnP-type protocols.
If you are unlucky enough to be on IPv4 CGNAT but have IPv6 available, then you are no longer sharing reputation with everyone else on the same public IPv4 address. Also, IPv6 means you can get public access instead of having to rely on some RPoVPN solution.
I thought T568B at each end was standard practice these days
The benefit of using config files is easy version management via git.
Makes it easy to rebuild from scratch and easy to rollback a change that breaks something
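A rough sketch of what that looks like in practice (the file name is just an example):

```bash
# Commit the known-good state before tinkering
git add docker-compose.yml && git commit -m "working baseline"

# ...tinker, something breaks...

# Throw away the broken changes and redeploy the last good version
git checkout -- docker-compose.yml
docker compose up -d
```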
Other services will be reflected by active DNS records.
If the only DNS record points to a “Buy this domain” webpage, I think it’s fair to argue that is misuse.
Doubly so if it turns out many unrelated domains are owned by and point to the same webpage, and it’s just doing a js hostname thing to make it seem relevant to the current address.
Transferring a domain from one registrar (IE reseller) to another can be a pain, but yes you can - it normally involves a fee and manual actions from the registrars.
As long as the new registrar supports the TLD. A few Geo-TLDs can only be resold/managed by some registrars.
The easiest thing to do is to point the domain at ClouDNS nameservers.
Make sure you are happy with ClouDNS (I’ve never had issues with them) etc before committing
Nginx Proxy Manager is probably perfect for you.
Pick a domain (like mylab.home or something), set up your home network to resolve that domain to your docker host’s IP.
NPM will do self-signed certs. So, you will get a “warning, HTTPS is insecure” kinda page when you visit it. You could import NPM’s root cert into your OS/browser so it trusts it (or set up a “don’t warn for this domain” exception or something).
If you don’t want per-client config to trust it, then you need to buy a domain, use a DNS that supports letsencrypt DNS-challenge, and grab certs that way (means you don’t need a publicly accessible well-known route exposed)
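For the “resolve that domain locally” part, if your router or Pi-hole runs dnsmasq it’s a one-liner (the domain is the example from above and the IP is a made-up host address):

```
# dnsmasq: answer anything under mylab.home with the docker host's LAN IP
address=/mylab.home/192.168.1.50
```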
Supabase is a dockerised postgres with user auth, rest API and some other goodies. It’s maybe too complicated as a starter.
Appwrite might also work for ya. Much easier to get into, but also less feature complete.
Pocketbase might also work. Haven’t used it tho
VMix’s popularity exploded during the pandemic. A lot of conferences became a blend of teams/zoom/Google and VMix.
Might be hardware based like a multi-m/e video mixer (blackmagic make cheap ones), or maybe more of a screen manager (like barco e2, analog way livecore). But, unless there are production requirements, vmix is much more likely. It’s (now) proven, and much cheaper!
OBS can absolutely do it. There are other open source softwares that can do it.
I’ve seen people bastardise Resolume into something that looks decent.
There are some online studio systems so everything you do is virtualized. Streamyard used to be like this, till it was bought by hopin (I think it was hopin)
You can do reverse proxy on the VPS and use SNI routing (because the requested domain is in clear text over HTTPS), then use Proxy Protocol to attach the real source IP to the TCP packets.
This way, you don’t have to terminate HTTPS on the VPS, and you can load balance between a couple wireguard peers so you have redundancy (or direct them to different reverse proxies or whatever).
On your home servers, you will need an additional frontend(s) that accepts Proxy Protocol from the VPS (as Proxy Protocol packets aren’t standard HTTP/S packets, so standard HTTPS reverse proxies will drop them as unknown/broken/etc).
This way, your home reverse proxy knows the original IP and can attach it to the decrypted http requests as X-Forwarded-For. Or you can do ACLs based on original client IP. Or whatever.
I haven’t found a way to get a firewall that pays attention to Proxy Protocol TCP headers, but I haven’t found that to really be an issue. I don’t really have a use case
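As a rough sketch of the VPS side with nginx’s stream module (the domains and wireguard peer IPs are placeholders):

```nginx
# nginx.conf on the VPS - routes on SNI at the TCP level, never terminates TLS
stream {
    # Pick a backend based on the server name in the TLS ClientHello
    map $ssl_preread_server_name $backend {
        app.example.com    10.0.0.2:443;   # wireguard peer 1
        cloud.example.com  10.0.0.3:443;   # wireguard peer 2
        default            10.0.0.2:443;
    }

    server {
        listen 443;
        ssl_preread    on;    # read the SNI without decrypting
        proxy_protocol on;    # prepend the real client IP for the home proxy
        proxy_pass     $backend;
    }
}
```

On the home side, nginx then wants `listen 443 ssl proxy_protocol;` plus `set_real_ip_from`/`real_ip_header proxy_protocol;` so the decrypted requests carry the real client IP.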
Sure, but what you are describing is the problem that k8s solves.
I’ve run plenty of production things from docker compose. Auto scaling hasn’t been a requirement, and HA was built into the application (so 2 separate VMs running the compose stack). Docker was perfect for it, and k8s would’ve been a sledgehammer.
It’s not a workaround.
In the old days, if you had 2 services that were hard coded to use the same network port, you would need virtualization or a different server and make sure the networking for those is correct.
Network ports allow multiple services to use the same network adapter as a port is like a “sub” address.
Docker being able to remap host network ports to containers ports is a huge feature.
If a container doesn’t need to be accessed outside of the docker network, you don’t need to expose the port.
The only way to have multiple services on the same port is to use either a load balancer (for multiple instances of the same service) or an application-aware reverse proxy (like nginx, haproxy, caddy etc for web things, I’m sure there are other application-aware reverse proxies).
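A concrete sketch of the remapping (images and ports are arbitrary):

```yaml
# Three containers; two listen on port 80 internally but get different host ports
services:
  blog:
    image: nginx:alpine
    ports:
      - "8081:80"    # http://host:8081 -> blog container's port 80
  wiki:
    image: nginx:alpine
    ports:
      - "8082:80"    # http://host:8082 -> wiki container's port 80
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder
    # no ports: entry - only reachable by other containers on the docker network
```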
Surely you want to enable 802.1q? Like, that is vlan aware switching and routing. Or is that on the nas?
Edit:
Some troubleshooting:
Connect a laptop to the same subnet as your NAS (so same vlan and IP range/subnet) and connect to the NAS. That either eliminates the NAS or the router from the equation.
That whole “shortest path” has caught me out before (tho in a different way)!
And firewall logs of “state violation” aren’t always helpful when that’s pretty much the default log message
If you want remote access to your home services behind a cgnat, the best way is with a VPS. This gives you a static public IP that your services connect to, and that you can connect to when out and about.
If you don’t want the traffic decrypted on the VPS, then tunnel the VPN back to your homelab.
As the VPN traffic is already encrypted, there is no point in re-encrypting it between the VPS and homelab.
Rathole https://github.com/rapiz1/rathole is one of the easiest I have found for this.
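Roughly what the rathole config pair looks like (check the rathole README for the current format; the addresses, ports and token here are placeholders):

```toml
# server.toml - runs on the VPS with its public IP
[server]
bind_addr = "0.0.0.0:2333"            # control channel the home box dials into

[server.services.web]
token = "some-long-random-secret"
bind_addr = "0.0.0.0:443"             # public port exposed on the VPS
```

```toml
# client.toml - runs at home, makes the outbound connection so CGNAT doesn't matter
[client]
remote_addr = "vps.example.com:2333"

[client.services.web]
token = "some-long-random-secret"
local_addr = "127.0.0.1:443"          # your local reverse proxy
```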
Or you can do things with ssh tunnels.
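The ssh version is a one-liner (hostnames and ports are placeholders, and the VPS sshd needs `GatewayPorts yes` for it to be reachable publicly):

```bash
# Reverse tunnel: port 8443 on the VPS forwards back to port 443 on the home box
ssh -N -R 0.0.0.0:8443:localhost:443 user@vps.example.com
```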
For VPN, wireguard is very good
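A minimal pair of wireguard configs for the VPS-to-homelab tunnel looks roughly like this (keys, IPs and ports are placeholders):

```ini
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
PublicKey  = <homelab-public-key>
AllowedIPs = 10.0.0.2/32
```

```ini
# /etc/wireguard/wg0.conf on the homelab box - it dials out, so CGNAT doesn't matter
[Interface]
Address    = 10.0.0.2/24
PrivateKey = <homelab-private-key>

[Peer]
PublicKey           = <vps-public-key>
Endpoint            = vps.example.com:51820
AllowedIPs          = 10.0.0.0/24
PersistentKeepalive = 25
```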