• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 6th, 2023



  • With respect to pricing, I’ve been using SES for maybe 10 years, possibly more, and this month is the first time I think I’ve ever been charged. The free tier used to include a very large allowance - I think it was 30,000 or more emails a day - that I never exceeded. Now it’s 0.10 USD per thousand messages, i.e. a dollar for every 10,000 emails. That’s a pretty big change from free, even though the overall costs are small - and it’s still a bargain. As with everything in “the cloud” though, the big players will squeeze the competition out and then increase prices. Now that Amazon has figured out it can extract a few extra dollars from users, and how cheap SES is relative to the other overpriced crap, I fully expect SES prices to keep increasing. It won’t surprise me if they jack this up significantly in the coming years.

    Referencing sending quotas - Amazon is very lenient - I was talking about the big providers like gmail. It might be different now that my accounts have a long reputation as trustworthy senders, but when I first started using SES way back when, gmail and yahoo would start rejecting mail if more than something like 200 or so messages were submitted in a single batch, so I had to check the recipient domains and limit the numbers for each hourly iteration to stop them rejecting anything (sketched below). I keep the email batches pretty small since I’m only sending out about 5-10K at a time, and I stagger the send over several hours.
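    A rough sketch of that per-domain throttling, assuming a plain recipients.txt and a 200-address cap for gmail - the cap, the file names, and the stubbed send call are all placeholders, so wire in however you actually submit to SES:

    ```bash
    #!/bin/sh
    # Cap how many gmail.com recipients go out in one hourly run; anything over
    # the cap is deferred to the next run. Names and numbers are illustrative.
    GMAIL_CAP=200
    gmail_sent=0
    : > deferred.txt

    send_one() {
        # stub - replace with your real SES submission (SMTP interface, SDK, CLI)
        echo "would send to $1"
    }

    while IFS= read -r addr; do
        case "$addr" in
            *@gmail.com)
                if [ "$gmail_sent" -ge "$GMAIL_CAP" ]; then
                    echo "$addr" >> deferred.txt   # picked up next hour
                    continue
                fi
                gmail_sent=$((gmail_sent + 1))
                ;;
        esac
        send_one "$addr"
    done < recipients.txt
    ```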

    It’s a bit of a minefield, but overall I’m pretty happy with SES, mainly because the mail gets delivered. You don’t need to originate sending from an EC2 host (the pricing is the same, even though they make a distinction in the price list):

    | Source | Price | Attachments |
    |---|---|---|
    | Outbound email from EC2 | $0.10/1000 emails | $0.12 per GB of attachments sent* |
    | Outbound email from non-EC2 | $0.10/1000 emails | $0.12 per GB of attachments sent |

    *You might incur additional data transfer charges for using EC2 (it seems very likely they will increase the non-EC2 price to drive you to a place where they are getting your compute and storage $ as well).

    https://aws.amazon.com/ses/pricing/


  • SES is indeed the best option if you want reliable delivery for a reasonable cost. The pricing changed just last month, so it’s no longer effectively free for small users, but it’s relatively cheap (for now). I looked at the prices you quoted for other services and they seem ridiculously high, but it’s fair to say that sending legitimate (non-spam) bulk email is not so easy if you do everything yourself - getting your mail accepted is very challenging. For example, even using SES, if you attempt to originate too many emails to one provider in a single call, they may start rejecting everything - I had to put counters into the code to limit how many gmail addresses would be sent with each iteration. SES also rate limits, so you need to manage that somehow (see the sketch below). It sounds like you’re planning to send a LOT of email. You’ll also need to be mindful of the bounce rate and complaints (spam / abuse reports from recipients), because SES will shut you down if they go over a certain threshold, which you can see in the dashboard. It sounds like you’ve already figured a lot of this stuff out though - it’s not rocket science, but working with bulk email delivery can be frustrating for a number of reasons.
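    On the SES rate limit itself, the account’s ceiling is queryable - a minimal sketch, assuming the aws CLI is configured; the pacing arithmetic is my own addition, not anything SES mandates:

    ```bash
    #!/bin/sh
    # Read the account's maximum send rate and derive a per-message sleep
    # so a sending loop stays under it.
    RATE=$(aws ses get-send-quota --query MaxSendRate --output text)
    INTERVAL=$(awk -v r="$RATE" 'BEGIN { printf "%.2f", 1 / r }')
    echo "SES allows $RATE msgs/sec -> sleep ${INTERVAL}s between sends"
    ```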


  • Yes, you understand the suggested approach. I don’t know about the mariadb tool - if it looks good, by all means use it - but I would offer that the fastest, simplest way to restore a reasonably small database is with a sql dump (sketched below). Any additional complexity just seems like it’s adding potential failure points. You don’t want to be messing around with borg or any other tools to replay transactions when all you want to do is get your database rebuilt. Also, if you have an encrypted local copy of the dump, then restoring from borg is the last resort, because most of the time you’ll just need the latest backup. I would bring the data local and back it up there if feasible. Then you only need a remote connection to grab the encrypted file, and you’ll always have a recent local copy if your server goes kaput. Borg will back it up incrementally.
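    Restoring from an encrypted dump is essentially a one-liner - a minimal sketch, with made-up file and database names:

    ```bash
    # decrypt and replay the latest encrypted dump (paths are placeholders)
    gpg --decrypt /backups/mydb-2024-06-01.sql.gpg | mysql mydb
    ```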


  • for the database, consider a script that does a “mysqldump” of the entire database, scheduled to run on the system daily/weekly. Also consider using gpg to encrypt the plain text file and delete the original in the same script, so you don’t leave an unencrypted copy of the data anywhere outside the database (see the sketch below). You can then either copy the encrypted file to a local folder that you’re backing up, or, if you’ve set this up to back up directly on the remote, that’s fine too - bringing it local gives you a staged copy outside the archive and off the original host, in case you need an immediately available backup of your database.
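    A minimal sketch of the dump-and-encrypt step, with placeholder database name, key id and paths (--single-transaction assumes InnoDB tables). Piping mysqldump straight into gpg goes one step further than encrypt-then-delete: the plaintext never touches disk at all:

    ```bash
    #!/bin/sh
    set -e
    STAMP=$(date +%F)
    # dump straight into gpg so no unencrypted copy is ever written out
    mysqldump --single-transaction mydb |
        gpg --encrypt --recipient backup@example.com \
            --output "/backups/mydb-$STAMP.sql.gpg"
    ```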

    With respect to the 3 separate repos, I would say keep them separate unless you have a large amount of duplicated data - Borg does not deduplicate across different repos as far as I’m aware. The downside of using a single repo is that the repo is locked during backups. If you’re running different scripts from each host, the lock files borg creates can become stale when a script doesn’t complete, and one day (probably the day you’re trying to restore) you’ll find that borg hasn’t been backing your stuff up because a lock file has been holding the archive open ever since a backup was terminated by an untimely reboot months ago. I don’t recall now why this occurs and doesn’t self-correct, but I do remember concluding that if deduplication isn’t a major factor, it’s easier and safer to keep the borg repos separate by host. Deduplication is the only reason to combine them as far as I can tell. (If you do hit a stale lock, there’s a built-in command to clear it - see below.)
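    Borg ships a command for exactly the stale-lock situation described above - only safe to run when you’re certain no other borg process is using the repository:

    ```bash
    # clear a lock left behind by a crashed or interrupted backup run
    borg break-lock /path/to/repo
    ```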

    When it comes to backup scripts, try to keep everything foolproof and add checks where you can to make sure the script is seeing the expected data, completes successfully, and so on (a sketch of the kind of checks I mean follows). Setting up automatic backups isn’t a trivial task, although maybe tools like rclone and borgmatic simplify it - I haven’t used those, just the borg command line and scp/gpg in shell scripts. Have fun!
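    For illustration, the sort of checks that fail loudly instead of silently producing empty backups - the paths and alert address are all made up:

    ```bash
    #!/bin/sh
    set -eu
    FILE="/backups/mydb-$(date +%F).sql.gpg"
    # refuse to continue if today's encrypted dump is missing or zero bytes
    if [ ! -s "$FILE" ]; then
        echo "backup missing or empty: $FILE" | mail -s "backup FAILED" admin@example.com
        exit 1
    fi
    borg list /path/to/repo | tail -n 3   # eyeball that recent archives exist
    ```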


  • fuser@quex.cc to Selfhosted@lemmy.world - Email server hosting

    You have the main problem in hand. You’ll still need to do all the DKIM / rDNS stuff to be certain your mail is accepted, but using SES as the source gives you a significant leg up vs originating locally. I don’t see why you can’t run dovecot and postfix on separate systems, but a single VM isn’t bad if it’s properly secured. Hosting SMTP/IMAP is not that difficult, but you need to make sure you don’t accidentally misconfigure things and become an open relay (a quick check for that below) - as with all internet-facing systems, mail services are targeted constantly, so you should use fail2ban to deter brute-force attempts.
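    If postfix is the MTA, a quick sanity check on the open-relay point - postfix 2.10+ ships a safe default here, but it’s worth verifying after any config change (the fail2ban jail name below is the stock one; adjust to whatever your distro uses):

    ```bash
    # confirm relaying is restricted to local/authenticated clients
    postconf smtpd_relay_restrictions
    # expected: permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

    # confirm fail2ban is actually watching the mail service
    fail2ban-client status postfix-sasl
    ```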






  • Well, I just learned something, but what does “control” the IP mean? If they are only validating a single address via http, then presumably you could just use an Amazon elastic IP as long as it resolves. I doubt that letsencrypt will support that, but I would be interested to know. If they do, then yeah, you could presumably set up the instance using the IP as the name, but I don’t know why you would want to: it would be hard to remember, and it could change at some point and screw things up - but it might work. I suggest OP try it and report back.




  • Right, but Lemmy.ml is really just one of a thousand-plus instances. We need something instance-independent, or a way to propagate information that doesn’t rely on a single point of failure, or on Lemmy as the communication channel. What happens when lemmy.ml is down, or no instances are able to post due to a concerted DoS?

    It’s impossible to stop anyone randomly posting stuff on Lemmy, and attackers can post misinformation as well, especially if they compromise admin accounts. Who are we gonna trust in the midst of the next incident? The account posting most prolifically about the UI exploit in progress was a burner that had just been created to post about it. I’m sure there were good reasons for wanting to be anonymous when discussing the work of unknown malicious actors, but it made me think twice about what was being posted at the time.


  • Whilst I differ somewhat on sharing information about the exploit - knowing something about what was going on allowed some instance admins to take evasive steps - I agree with you completely that there could be a better channel for coordinating communication. I imagine a lot of the discussion went on via Matrix. Under the circumstances the response wasn’t so bad, given the complete lack of formal organization, but yes, it definitely could be improved. You sound quite well-versed in handling security/critical incidents - maybe consider contacting the devs and offering them some help in this area?





  • thanks - open source search - what a wonderful idea! Although duckduckgo is tolerable, I used google without an ad blocker a couple of days ago while setting up a new system - wow - the search results are so full of clutter and garbage that it’s practically unusable. Google search was useful once - not now.

    The main reason ChatGPT is popular is simply that it provides information quickly, without a gazillion ads and SEO-driven click-chasing nonsense making the internet unusable. There’s no “intelligence” beyond a much better, more intuitive information-presentation algorithm. OpenAI is just a search engine reinvented. We need to open source LLMs next.