

UFW works well and is easy to configure. It’s a great option if you don’t need the flexibility (and insane complexity) that manually managing iptables rules offers.
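For reference, a typical UFW setup is only a handful of commands (the allowed ports here are just examples, adjust them to whatever you actually host):

```sh
sudo ufw default deny incoming    # drop everything that isn't explicitly allowed
sudo ufw default allow outgoing
sudo ufw allow OpenSSH            # don't lock yourself out of SSH
sudo ufw allow 80,443/tcp         # web traffic for a reverse proxy
sudo ufw enable
```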


The job of a reverse proxy like nginx is exactly this: take traffic coming from one source (usually port 443, HTTPS) and forward it somewhere else based on things like the (sub)domain. An HTTPS reverse proxy often also forwards the traffic as plain HTTP on the local machine, so the software running the service doesn’t have to worry about TLS.
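As a rough sketch of what that looks like in nginx (the domain, certificate paths and backend port are placeholders for whatever your setup uses):

```nginx
server {
    listen 443 ssl;
    server_name service.example.com;

    ssl_certificate     /etc/letsencrypt/live/service.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/service.example.com/privkey.pem;

    location / {
        # forward as plain HTTP to the service on the local machine
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```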
Be sure to get yourself a firewall on that machine. VPSes are usually connected directly to the internet without NAT in between. If you don’t have a firewall, all internal services will be reachable from the outside: databases, the internal ports of the services you host, and so on.


The documentation you were looking at might’ve been the Matrix specification.
There is documentation on how to host a Matrix server; I’d honestly recommend using containers (maybe docker compose) for this one. It can definitely be confusing setting up a service like a Matrix homeserver for the first time.
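For example, a compose file for Synapse can start out as small as this (just a sketch, assuming the official matrixdotorg/synapse image; the paths and ports are examples):

```yaml
services:
  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    volumes:
      - ./synapse-data:/data        # homeserver.yaml and signing keys live here
    ports:
      - "127.0.0.1:8008:8008"       # only listen locally, put a reverse proxy in front
```

Add a Postgres container next to it once you outgrow the default SQLite database.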
As for other people finding it, you can (and should) make your homeserver invite-only. It’s also possible to disable federation, which makes the server self-contained. It will not accept incoming connections from other servers, nor make outgoing connections to other servers.
This does mean everyone you want to talk with has to be on your homeserver. There are probably better options available if you want to avoid Matrix’s federation issues, like Spacebar.
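With Synapse, for instance, both of those are single options in homeserver.yaml (assuming Synapse; other homeserver implementations have their own equivalents):

```yaml
enable_registration: false        # no open signups, accounts are created by an admin
federation_domain_whitelist: []   # an empty whitelist should block federation entirely
```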


Web push for notifications. Sure, there are privacy implications, but it’s already near universal. There are other options like ntfy.sh if you’re not limited to existing infrastructure. UnifiedPush also works well as a protocol for push notifications.
Everything else can be handled in-app. Password resets will have to be done by an admin, though that’s completely doable for a small selfhosted service.
Some of the downsides OP listed may or may not always apply, but there are always downsides. Either you have to set up your own email server (with the extra maintenance burden), or your “selfhosted” app suddenly relies on third-party infrastructure, like your email provider (or those of the other users on your instance).


That’s just Ubuntu, including their marketing strategy towards enterprise clients for the desktop. (Without the pun in the name, of course.)


Movies like Terminator have “AGI”, or artificial general intelligence. We had to come up with a new term for it after LLM companies kept claiming they had “AI”. Technically speaking, large language models fall under machine learning, but they are limited to predicting language and text, and will never be able to “think” in concepts or adapt in real time to new situations.
Take chess, for example. We have Stockfish (and other engines) that far outperform any human. Can these chess engines “think”? Can they reason? Adapt to new situations? Clearly not: adding a new piece with different rules, for example, would require Stockfish to be retrained from scratch. Humans can take their existing knowledge and adapt it to the new situation. Also look at LLMs attempting to play chess. They can “predict the next token” as they were designed to, but nothing more. They have been trained on enough chess notation that the output is likely valid notation, but they have no concept of what chess even is, so they spit out nearly random moves, often without following the rules.
LLMs are effectively the same concept as chess engines. We just put googly eyes on the software, and now tons of people are worried about AI taking over the world. Current LLMs and generative AI do pose risks (overwhelming amounts of slop and misinformation that could affect human cultural development, and humans deciding to give an LLM external influence over anything, which could have major impact), but they’re nowhere near Terminator-style AGI. For that to happen, humans would have to figure out a new way of thinking about machine learning, and there would have to be several orders of magnitude more computing resources available for it.
Since the classification for “AI” will probably include “AGI”, there will (hopefully) be legal barriers in place by the time anyone develops actual AGI. The computing resources problem is also gradual: in the real world, an AGI does not simply “transfer itself onto a smartphone” (or an airplane, a car, you name it). It will exist in a massive datacenter and can have its power shut off. If AGI does get created and causes a massive incident, it will likely be during this time, which would force whatever real-world entity created it to realize there should be safeguards.
So to answer your question: no, the movies did not “get it right”. They are exaggerated fantasies of what someone thinks could happen if you change some rules of our current reality. Artwork like that can pose interesting questions, but when it tries to “predict the future”, it often gets things wrong in ways that change the answers to any questions asked about the future it predicts.



Corporate social media requires making a profit to keep running. No matter how good it looks at the start, the main goal of a corporate social media platform is never to provide the best possible service to end users. What you get to see and how you interact are driven not by your interests and real friends, but by whatever gets the platform the most profit.
Obligatory “AI bad”. You should post what you spent effort writing, instead of letting a large language model subtly change its meaning.


Personally I have seen the opposite with many services. Take Jitsi Meet, for example. Without containers, it’s something like 4 different services, with logs and configuration all over the system. It is a pain to get running, as none of the services work without everything else being up. In containers, Jitsi Meet is managed in one place, and one place only. When using docker compose, all logs are available with docker compose logs, and all config is contained in one directory.
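For example, with the official docker-jitsi-meet compose setup (the service names below are the ones that project uses, assuming it hasn’t changed them):

```sh
docker compose logs -f            # all services at once: web, prosody, jicofo, jvb
docker compose logs -f prosody    # or follow just one of them
```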
It is more a case-by-case thing whether an application is easier to set up and maintain with or without docker.


“jellyfin isn’t immune to security incidents”
Well, no software is. The difference is that Plex just leaked data on all their users, whereas Jellyfin can’t, because they don’t have that data.


IRC does not have any federation, and XMPP does it in a completely different way from Matrix, with its own pros and cons.
IRC is designed for you to connect to a specific server, with an account on that server, to talk to other people on that server. There is no federation: you cannot talk to OFTC from libera.chat. On top of that, with mobile devices being so common, you’d need to get people to host their own bouncer, or host one for nearly everyone on your network.
XMPP federation conceptually has one major difference compared to Matrix: XMPP rooms are owned by the server that created them, whereas Matrix rooms are equally “owned” by everyone participating in them, with the only deciding factor being which users have administrator permissions.
This makes for better (and easier) scaling on XMPP, so a room with 50k people isn’t that big of an issue for any user in it. However, if the server owning the room goes down, the whole room is down and nobody can chat. See Google Talk dropping XMPP federation after making a mess of most client and server implementations.
On Matrix, scaling is a much bigger issue, as everyone connects with everyone else. Your single-person homeserver has to talk with every other homeserver you interact with. If you join a lot of big rooms, this adds up, and takes a lot of resources. However, when a homeserver goes down, only the people on that homeserver are affected, not the rooms. Just recently, matrix.org had some trouble with their database going down. Although it was a bit quieter than usual, I only properly noticed when it was explicitly mentioned in chat by someone else. My service was not interrupted, as I host my own homeserver.
The Matrix method of federation definitely comes with issues, some conceptual and some from the implementation. However, a single entity cannot take down the federated Matrix network, even by taking down the most used homeservers, whereas doing the same to XMPP effectively killed it off.


Being able to choose the OS and kernel is also important. I would not want my hypervisor machine to load GPU kernel modules, especially not on an older LTS kernel (which often doesn’t support the latest hardware). Passing the GPU to a VM keeps the host machine stable, with the flexibility to choose whatever kernel I need for specific hardware. That, alongside being able to run entirely different OSes (*BSD, Windows :(, etc.), is pretty useful for some services.
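If you do this with libvirt, for example, the passthrough itself is just a hostdev entry in the domain XML (the PCI address below is made up, find the real one with lspci):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- hypothetical address for a GPU at 01:00.0 -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```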


Same here, though more out of a lack of control over the host. Libvirt works on basically any distro, and you can easily configure whatever Linux distro you like best to run it. I can’t configure my boot process the way I want on Proxmox (at least not without learning and sidestepping its “convenience” tooling and setup).


You don’t get control of the incoming port that way. For Let’s Encrypt to issue a certificate (which is primarily intended for HTTPS), they check that the HTTP server on that IP is controlled by the requesting party. That check has to happen on port 80, which you can’t forward behind CGNAT.
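Concretely, that check (the HTTP-01 challenge) is Let’s Encrypt fetching a token over plain HTTP on port 80; example.com and the token are placeholders here:

```sh
# Let's Encrypt's validation servers request something like this,
# and it has to be reachable from the outside on port 80:
curl http://example.com/.well-known/acme-challenge/<token>
```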
Reminder that the license was changed to a “custom” non-free license.


Keyguard, which works on Bitwarden-compatible servers like Vaultwarden
None of that’s true. Free speech laws try to prevent the government from arresting you for opinions or criticism. Social media platforms, parents, etc. are still able to take action against statements, for any reason or none. The government can also pin the blame on something else: someone who is critical of the government has likely broken some law they don’t agree with, and that can be used against them.


Current LLMs are just that, large language models. They’re incredible at predicting the next word, but literally cannot perform tasks outside of that, like fact checking, playing chess, etc. The theoretical AI that “could take over the world” like Skynet is called artificial general intelligence (AGI). We’re nowhere close yet; do not believe OpenAI when they claim otherwise. This means the highest risk right now is a human deciding to put an LLM “in charge” of an important task, which could cost lives if a mistake is made.


Easy to set up, and easy to attach to other things. Simple notifications about whatever is needed: service health, updates, new posts on public platforms, etc. A simple curl is plenty to send and receive notifications, and it works on Android without requiring FCM (Google infrastructure).
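A quick sketch against the public ntfy.sh instance (“mytopic” is a made-up topic name, pick your own or point it at a selfhosted instance):

```sh
curl -d "Backup finished" ntfy.sh/mytopic   # publish a notification
curl -s ntfy.sh/mytopic/json                # subscribe: messages arrive as a stream of JSON lines
```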


I use mautrix/discord; it can work in both puppeting mode (signing into your account) and relay mode (a bot account with webhooks).
I’ve seen many default docker-compose configurations provided by server software that expose the ports of things like databases by default (which publishes them on all host interfaces). Even outside docker, a lot of software has a default configuration of “listen on all interfaces”.
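As a sketch of the difference (the postgres service here is just a stand-in):

```yaml
services:
  db:
    image: postgres:16
    ports:
      - "5432:5432"              # binds to all host interfaces: reachable from the internet on an unfirewalled VPS
      # - "127.0.0.1:5432:5432"  # better: only reachable from the host itself
      # or drop "ports:" entirely and let other containers reach it over the compose network
```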
I’m also not saying “evil haxxors will take you over”. It’s not the end of the world to have a service requiring authentication exposed to the internet, but it’s much better to only expose what should be public.