If you mean accessing them from within your LAN while your internet is down, then no, it won’t work.
What you should be doing is either split-horizon DNS (LAN resolves to local IPs, public resolves to public IPs) or different DNS hostnames internally, for example media.local.yourdomain.com.
You then set up a reverse proxy in your LAN and point everything at that, and use a Let’s Encrypt wildcard cert via the DNS challenge method so you can get *.yourdomain.com protected with a single cert. Since you use Cloudflare you can use the Cloudflare API plugin with certbot; it automates everything for the DNS challenge, so there’s no need to keep opening ports or re-running HTTP/HTTPS challenges every couple of months.
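For reference, the Cloudflare DNS challenge looks roughly like this (a sketch only; the credentials path, token, and domain are placeholders, and it assumes the certbot-dns-cloudflare plugin is installed):

```shell
# Store a Cloudflare API token with DNS edit permission for the zone in
#   ~/.secrets/cloudflare.ini (chmod 600), containing one line:
#     dns_cloudflare_api_token = <your-token>

# Request a cert covering the apex and all first-level subdomains:
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d 'yourdomain.com' -d '*.yourdomain.com'

# Renewals reuse the same credentials, so no ports ever need opening:
certbot renew --dry-run
```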
Last I checked, a wildcard cert for *.yourdomain.com is NOT valid for test.local.yourdomain.com, but IS valid for test.yourdomain.com. Wildcard certs are not recursive, as far as I know.
You’re right, but you can get a wildcard for that level as well.
Totally, you can easily do *.test.yourdomain.com and that works just fine with certbot. I’ve never used Cloudflare, but I’d assume the same setup should work.
Last I checked, which was honestly two or more years ago, Cloudflare doesn’t handle second-level subdomains (i.e. a.b.domain.ext) properly… when I tried it, I could create the DNS records and they did resolve, but the certificates didn’t work. I don’t know if that has since changed. You likely wouldn’t be using Cloudflare at that level anyway; since you want it to work when you’re offline, you’d bypass them entirely with a local DNS server plus a local reverse proxy and certs. You’d use something like certbot with Let’s Encrypt, which works fine. https://certbot.eff.org/
Yeah, cloudflared won’t be able to access your exposed services if your internet suddenly goes down.
If you are worried about availability maybe you should consider moving some stuff to a VPS.
Yes. You can also keep LAN access separate from HTTPS access.
Yes. Depending on your network configuration you could consider using cellular data as a backup form of connectivity.
If you’re remote-accessing these resources and can’t always be home to manage the cutover to cellular, I recommend splurging on a Unifi Dream Machine and LTE Backup module. A Verizon gateway or similar device won’t properly communicate with your network; it has to use its own judgement, via traffic monitoring, to know when to take over DHCP/routing. We (local MSP) just resolved an issue where these devices couldn’t tell the difference between an ISP blip and a real outage, so one would wrestle DHCP from the firewall while they still had internet, killing their wireless printers. Unifi LTE communicates with the UDM over a very verbose and reliable protocol. The LTE doesn’t kick on until the firewall realizes there’s a problem on the primary WAN.
Not that I don’t love Ubiquiti, but OPNsense and pfSense will also handle failover:
https://docs.opnsense.org/manual/multiwan.html
This is also possible within Linux, Windows and *BSD by just adding both possible routes and weighting them accordingly:
https://serverfault.com/questions/226530/get-linux-to-change-default-route-if-one-path-goes-down
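On Linux, the weighted-route approach from that answer looks roughly like this (a sketch; the gateway addresses and interface names are made up):

```shell
# Replace the default route with a multipath route: the primary WAN
# (weight 10) is preferred, the LTE gateway (weight 1) takes the rest.
ip route replace default scope global \
    nexthop via 192.168.1.1 dev eth0  weight 10 \
    nexthop via 192.168.8.1 dev wwan0 weight 1

# Inspect the result:
ip route show default
```

Note that plain multipath routing only shifts traffic when the kernel sees the link as down; detecting a dead-but-up WAN still needs something like a ping-based watchdog script.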
I’m not totally sure what you are trying to accomplish.
To access your LAN services over HTTPS you need a certificate, DNS, and a reverse proxy (at least that’s how I do it).
I know Cloudflare does reverse proxy stuff, but I’m not too deep into that.
So if you mean you expose your services through Cloudflare and access them from the web: yes, they’re going to be down. If you have another way of accessing them on the LAN (e.g. ip:port) then you should be able to at least reach them, but HTTPS is not going to work.
For that you can use a local certificate. It’s a bit of work, but if you have a domain and Nginx Proxy Manager you’ll be good. Let me know if you need help.
I have a reverse proxy (Traefik) on my LAN to handle subdomain service routing. I want HTTPS but don’t want to have to install certs on all the clients using the services. I want the “s” but don’t want my services to be unavailable if my internet goes down.
You only need a Let’s Encrypt cert on the reverse proxy; the services themselves don’t need them.
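In plain nginx terms that looks something like this (a sketch; the hostname, cert paths, and backend address are placeholders) — the proxy terminates TLS and talks plain HTTP to the service:

```nginx
server {
    listen 443 ssl;
    server_name media.yourdomain.com;

    # One wildcard cert on the proxy covers every subdomain it serves:
    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.20:8096;  # backend needs no cert
        proxy_set_header Host $host;
    }
}
```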
Thanks for elaborating. That’s exactly my use case. You can use Let’s Encrypt certs with a DNS challenge, I believe. You wildcard a subdomain like *.abc.def.com.
Then you set your services to e.g. homeassistant.abc.def.com both on the proxy and in local DNS.
I believe you have to expose your ports once to get the cert and close them immediately. You set the domain to point to the public IP of your router, get the challenge done, close the ports, and the public domain goes nowhere.
Install the cert, point the local DNS to your server IP, done.
If you need more info, let me know.
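The “point the local DNS to your server IP” step, if your LAN resolver happens to be dnsmasq, can be a single override line (a sketch; the domain and proxy IP are placeholders):

```shell
# Answer *.abc.def.com (and abc.def.com itself) with the LAN address of
# the reverse proxy instead of forwarding the query upstream:
cat > /etc/dnsmasq.d/local-overrides.conf <<'EOF'
address=/abc.def.com/192.168.1.10
EOF
systemctl restart dnsmasq
```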
Replace traefik with this https://nginxproxymanager.com/
I don’t know if Cloudflare can do this, but I have a different DDNS + Let’s Encrypt setup, and I configure my router to serve the same domain locally as the public domain (in OpenWrt it’s “local server” + “local domain”, although I’m not aware of the distinction between the two). So when requests are sent over the LAN (or over a VPN) the router points me to the LAN device directly, rather than needing to go through external DNS. HTTPS still works since, to the client, it’s the same domain the certificate is linked to. Hope that makes sense, as I haven’t fully got my head around it. I just know it works (indeed I just disabled my internet to test, and the services are still accessible over HTTPS).
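On OpenWrt the same kind of override can be set through UCI (a sketch; the hostname and IP are placeholders):

```shell
# Make the router's dnsmasq answer for the public name with the LAN IP:
uci add_list dhcp.@dnsmasq[0].address='/media.yourdomain.com/192.168.1.10'
uci commit dhcp
/etc/init.d/dnsmasq restart
```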
Then no, you won’t be able to access your service via HTTPS when your internet is down, because TLS is terminated at Cloudflare’s server. You can still access your service directly without HTTPS, or with HTTPS but with a self-signed certificate.
If your only goal is working https then as the other comment correctly suggests you can do DNS-01 authentication with Let’s Encrypt + Certbot + Some brand of dyndns
However, the other comment is incorrect in stating that you need to expose an HTTP server. This method means you don’t need to expose anything. For instance, if you do it with HA:
https://github.com/home-assistant/addons/blob/master/letsencrypt/DOCS.md
Certbot uses the API of your DDNS provider to authenticate the cert request by adding a TXT record, and then pulls the cert. No proxies, no exposed servers, and no fuss. Point the A record at your RFC 1918 IP.
You can then configure your DNS to keep serving cached responses. I think, though, that SSL will still be broken while your connection is down, but you will be able to access your services.
Edit to add: I don’t understand why so many of the HTTPS tutorials are so complicated and so focused on adding a proxy into the mix even when remote access isn’t the target.
Certbot is a simple command-line tool. It asks the Let’s Encrypt API for a secret key. It adds the key as a TXT record on a subdomain of the domain you want a certificate for. Let’s Encrypt confirms the key is there and spits out a cert. You add the cert to whatever server it belongs to, or ideally Certbot does that for you. That’s it: working HTTPS. And all you have to expose is the RFC 1918 address. This, to me at least, is preferable to proxies and exposed servers.
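The whole flow above, done by hand, is roughly this (a sketch; the domain is a placeholder):

```shell
# Certbot prints a token to publish as a TXT record at
# _acme-challenge.yourdomain.com, waits for you to confirm, then issues:
certbot certonly --manual --preferred-challenges dns \
  -d '*.yourdomain.com'

# The cert and key end up under:
#   /etc/letsencrypt/live/yourdomain.com/
```

With a DDNS provider that has an API, the matching certbot DNS plugin automates the TXT-record step, so renewals can run unattended.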