I played with a couple and went with searxng, because I was happiest with the results I was getting back from it compared to the other ones (or, for that matter, a normal Google or Bing search).
I’ve accomplished this with the Atom Echo, and it works… fine?
The speaker is essentially inaudible, but the mic works well enough for me to just yell at HomeAssistant to do things.
And hey, can’t beat the size/price/power footprint, and deployment with ESPHome takes, like, 30 seconds.
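For reference, the whole deployment is roughly this (the config filename is a placeholder):

```bash
# install the ESPHome CLI, then compile, flash, and tail logs in one shot
pip install esphome
esphome run atom-echo.yaml
```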
I have watchtower configured to update most, but not all containers.
It runs after the nightly backup of everything, so if something explodes, I’ve got a backup that’s recent and revertible. I also don’t auto-update certain types of containers (databases, critical infrastructure, etc.), so the blast radius of a bad update that lands while I’m not watching stays limited.
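Roughly what that looks like in compose terms (a sketch, not my exact config; the schedule and the excluded service are placeholders):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # 6-field cron: run daily at 05:00, after the nightly backup finishes
      - WATCHTOWER_SCHEDULE=0 0 5 * * *

  # anything you don't want auto-updated just opts out via label:
  postgres:
    image: postgres:16
    labels:
      - com.centurylinklabs.watchtower.enable=false
```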
In the last ~3 years I’ve had exactly zero instances of ‘oops shit’s fucked!’, but I also don’t run anything that’s in a massive state of flux and constantly having breaking changes (see: immich).
Yeah, exactly: if you know how it works, then you know how to fix it. I don’t think you need comprehensive knowledge of how everything you run works, but you should at least have good enough notes somewhere to explain HOW you deployed it the first time, any changes you had to make, and anything you ran into that required you to go figure out what the blocking issue was.
And then you should make sure that documentation is visible in a form that doesn’t require ANYTHING to actually be working, which is why I just put pages of notes in the compose file: docker doesn’t care, and darn near any computer on earth made in the last 40 years can read a plain text file.
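For example, something like this at the top of a compose file (a made-up app, just to show the idea):

```yaml
# ---- deployment notes (docker ignores all of this) ----
# first deploy: `docker compose up -d`, then create the admin
# account through the web UI
# gotcha: wouldn't start until ./data was chown'd to the uid
# the container runs as (1000:1000)
# --------------------------------------------------------
services:
  someapp:                        # hypothetical app, for illustration
    image: example/someapp:latest
    volumes:
      - ./data:/data
    restart: unless-stopped
```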
I don’t really think there’s any better/worse reverse proxy for simple configurations, but I’m most familiar with nginx, which means I’ve spent too long fixing busted shit on it, so it’s the choice primarily because, well, when I break it, I probably already know how to fix what’s wrong.
I’m a grumpy linux greybeard type, so I went with… plain text files.
Everything is deployed via docker, so I’ve got a docker-compose.yml for each stack, and any notes or configuration details specific to that app are comments in the compose file. Those are all backed up in a couple of places, since all I need to do is drop them on a filesystem, and bam, complete restoration.
Reverse proxy is nginx, because it’s reliable, tested, proven, and works, and while it might not have all those fancy auto-config options other things have, it also doesn’t automatically configure itself in ways I’d prefer it didn’t.
I don’t use any tools like portainer or dockge or nginx proxy manager at this point, because dealing with what’s just a couple of config files on the filesystem is faster (for me) and less complicated (again, for me) than adding another layer of software on top (and it keeps your attack surface small).
My one concession to gui shit for docker is an install of dozzle, because it certainly makes dealing with docker logs simple, and it simplifies managing the ~40 stacks and ~85 containers I’ve got set up at the moment.
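The dozzle deployment itself is about as small as compose files get (a sketch; the host port is arbitrary):

```yaml
services:
  dozzle:
    image: amir20/dozzle:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # read-only is enough for logs
    ports:
      - "8080:8080"
    restart: unless-stopped
```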
Oh sorry; my goal here was for individual metering. I’ve got an Enphase solar system, so the Envoy is already doing whole-house monitoring.
I’d like to be able to identify and ultimately be able to lower my load to stay under what the solar panels are generating, but that needs data I mostly don’t have, and specific equipment to actually turn things on and off.
Yeah, the plan was for the in-wall relays. I’m in the US, and if I read the specs properly they’ll do 16a at 120v, which is also about where my breakers would trip anyways, so it probably shouldn’t matter.
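(For the math: 16a × 120v = 1,920w of switching capacity, versus the 15a × 120v = 1,800w a standard US 15a breaker allows, so assuming these are 15a circuits, the breaker trips before the relay is ever out of spec.)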
Honestly it feels like they’re trying to get away from being just a file sync platform, and are pushing for more corpo feature sets to compete with gsuite or O365.
Which, I mean, is great: that’s exactly what I needed and why I use it - it let me ditch almost all of my Google services and move everything to selfhosted alternatives.
But I bet it also creates incentives to prioritize fixes and features focused on that, and pushes stuff like ‘make the android sync app work like every other file sync app in history’ to the bottom of the list.
Nope, that curl command says ‘connect to the public IP of the server, ask for this specific site by name, and ignore SSL errors’.
So it’ll make a request to the public IP for any site configured with that server name even if the DNS resolution for that name isn’t a public IP, and ignore the SSL error that happens when you try to do that.
If there’s a private site configured with that name on nginx and it’s configured without any ACLs, nginx will happily return the content of whatever is at the server name requested.
Like I said, it’s certainly an edge case that requires you to have knowledge of your target, but at the same time, how many people will just name their, as an example, vaultwarden install as vaultwarden.private.domain.com?
You could write a script that’ll recon through various permutations of high-value targets and have it make a couple hundred curl attempts to come up with a nice clean list of reconned and possibly vulnerable targets.
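Something like this, roughly (hypothetical names and a placeholder IP; in practice you’d compare responses against whatever the default server returns, not just the status code):

```bash
#!/usr/bin/env bash
# probe common selfhosted service names via the Host header
target="your.public.ip.here"
for name in vaultwarden grafana jellyfin immich nextcloud homeassistant; do
  host="${name}.private.domain.com"
  code=$(curl -k -s -o /dev/null -w '%{http_code}' \
    --header "Host: ${host}" "https://${target}/")
  echo "${host} -> HTTP ${code}"
done
```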
Just tested that and uh, yeah, what the hell? Not something my workflows need, but that’s a shocking oversight considering damn near everything else 100% does that.
That’s the gotcha that can bite you: if you’re sharing internal and external sites via a split-horizon nginx config, and it’s accessible over the public internet, then the IP defined in DNS doesn’t actually matter.
If the attacker can determine that secret.local.mydomain.com is a valid server name, they can request it from nginx even if it’s got internal-only dns, by including a Host header for that domain in their request; for example, with curl:
```bash
curl --header 'Host: secret.local.mydomain.com' https://your.public.ip.here -k
```
Admittedly this requires some recon, which means 99.999% of attackers are never even going to get remotely close to doing this, but it’s an edge case that’s easy to defend against with ACLs, and you probably should when doing split-horizon configurations.
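In nginx terms, that means something like this in the private site’s server block (a sketch; the network range and upstream are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name secret.local.mydomain.com;

    allow 192.168.1.0/24;  # your internal range
    deny  all;             # everyone else gets a 403

    location / {
        proxy_pass http://10.0.0.5:8080;  # placeholder upstream
    }
}
```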
Ugh, not the best marketing for Nextcloud to have a public share not work, lol. It seems like 25% of people just can’t see them but they work for everyone else so who knows.
Anyway, have a pastebin instead: https://pastebin.com/zPyvgxYX
Not saying you’re wrong, but what doesn’t work right? I haven’t noticed any behavior that seems wrong to me. I usually interact with nextcloud via the nextcloud section the client adds to the file picker/file manager on the OnePlus Nord I’m using.
Happy to share the docker-compose.yml I’m using for my setup. It includes OnlyOffice so that I can edit files internally, Google Docs style. You can skip that section and remove the oonet network definition if you don’t need/want it. You’ll want to change the volume mount paths (or define volumes if you’d rather not use bind mounts) and change the ‘supersecretpasswordhere’ to something actually, uh, secure.
Anyway, file is at https://thecloud.home.uncomfortable.business/s/32HoxHajW33PRbf
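If that link ever dies, the rough shape of the stack is this (a trimmed sketch, not the exact file; image tags, paths, and passwords are placeholders):

```yaml
services:
  db:
    image: mariadb:10.11
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    environment:
      - MYSQL_ROOT_PASSWORD=supersecretpasswordhere
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=supersecretpasswordhere
    volumes:
      - ./db:/var/lib/mysql

  nextcloud:
    image: nextcloud:latest
    depends_on:
      - db
    environment:
      - MYSQL_HOST=db
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=supersecretpasswordhere
    volumes:
      - ./nextcloud:/var/www/html
    ports:
      - "8080:80"
    networks:
      - default
      - oonet

  # OnlyOffice document server; drop this service and the oonet
  # network below if you don't want in-browser editing
  onlyoffice:
    image: onlyoffice/documentserver:latest
    environment:
      - JWT_SECRET=supersecretpasswordhere
    networks:
      - oonet

networks:
  oonet:
```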
One thing to be careful of that I don’t see mentioned: you need to set up ACLs for any local-only services that are accessible via a web server that’s public.
If you’re using standard name-based hosting in, say, nginx, and set up two domains, publicsite.mydomain.com and secret.local.mydomain.com, anyone who figures out the name of your private site can simply use curl with a Host: header and request the internal one, if you haven’t put up some ACLs to prevent it from being accessed.
You’d want to use an allow/deny configuration to limit the blowback, something like
```nginx
allow internal.ip.block.here/24;
deny all;
```
in your server block so that local clients can request it, but everyone else gets told to fuck off.
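One more cheap layer on top of that: a catch-all default server, so a request for any name you haven’t explicitly configured never falls through to a real site (a sketch; needs nginx 1.19.4+ for ssl_reject_handshake):

```nginx
server {
    listen 443 ssl default_server;
    server_name _;
    # reject the TLS handshake outright for unknown names; no cert needed
    ssl_reject_handshake on;
}
```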
I’ll be the contrary one: I tried a lot of things and ended up, eventually, going back to Nextcloud, simply because it’s extendable and you can bolt on more shit to do things as you need it.
File sync and images may be all you need now, but let’s say in the future you want to dump Google Docs, or add calendar and contact syncing, or notes, or to do lists, or hosting your own bookmark sync app, or integrating webmail, or…
It’s got a lot of flaws, to be sure, but the ability to make it do essentially every task you might want cloud syncing for, at least at a level of ‘good enough’, has pretty much kept me on it.
Yeah, they’re all a single level deep. Multi-disc albums are the same: artist - album/1-1, 1-2, 2-1, 2-2, etc.
Interesting, I haven’t had any of those issues with tagged media. I use beets for the tagging and sorting, and it’s been otherwise fine? I do \music\artist - album for the directory paths, though, so it’s already happily sorted and grouped correctly on the filesystem in a way that jellyfin seems to like.
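The relevant bit of the beets config looks roughly like this (a sketch, not my exact config; the directory is a placeholder):

```yaml
directory: /music
paths:
  default: $albumartist - $album/$disc-$track $title
```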
So, I posted this on a similar thread a few days ago, but plex and/or jellyfin do an amazing job of user/library separation, music streaming, AND have apps for every relevant platform you’d remotely care about: phones, computers, browsers, widgets plugged into your tv, etc.
It’s a little odd nobody has bothered to make a really good multi-user/multi-library audio-only app, but plex+plexamp or jellyfin+finamp is a pretty great solution as it is.
If all you need is for it to go ‘I turned on the light’, they’re fine. I wouldn’t expect to use them for anything more detailed or music-oriented.