

Does anyone sell ‘Yes, Do As I Say!’ stickers?
You could possibly recover from that on the console, just install a few metapackages. And have backups.
I meant that the technology itself is reliable. And you can do self-hosting just fine too, I’ve been doing it since 2010 or so, but running a local smarthost which sends messages via a reputable SMTP provider works just fine as well. Or even interacting directly with the SMTP provider from all the applications you’re running.
Advertising spam.
The Commodore 64 is a home computer released in 1982. Modern expansions allow the thing to actually have a TCP/IP stack, and it can run things like telnet, but your single Mastodon server, compared to what was available in the 1980s, pretty much equals the entire bandwidth and storage of the internet (or ARPANET, depending on how you want to time things).
A Mastodon server requires (roughly) at least 2 gigabytes of memory and 20 gigabytes of storage. And on top of that it needs at least a dual-core 2 GHz CPU to run.
The Commodore 64 ran at 1 MHz. A million hertz sounds like a big number, but we’re talking about (at minimum) two processor cores running at 2,000 million hertz each. Also, the C64 had 64 kilobytes (65,536 bytes) of memory, while the absolute minimum to run a Mastodon instance is 2,000,000,000 bytes.
And then there’s the storage. Your minimum Mastodon instance should have at least 20 GB. The 1541 drive used 5.25" floppy disks which could store up to 170 kilobytes each. So you’d need someone to change disks as needed on an over-400-meter-tall tower of floppies.
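Just to put numbers on that, here’s the whole comparison as a back-of-the-envelope sketch (the ~3.5 mm per jacketed disk is my own rough assumption, and it’s where the 400-meter figure comes from):

```python
# Back-of-the-envelope math for the C64 vs. minimum-Mastodon comparison.
c64_ram = 64 * 1024                  # bytes of RAM in a C64
mastodon_ram = 2_000_000_000         # rough minimum for a Mastodon instance
print(mastodon_ram // c64_ram)       # ~30517x the memory

floppy = 170 * 1000                  # bytes per 1541-formatted 5.25" disk
storage = 20 * 1_000_000_000         # minimum Mastodon storage
disks = storage / floppy
print(round(disks))                  # ~117647 floppies

disk_mm = 3.5                        # assumed thickness per jacketed disk
print(disks * disk_mm / 1000, "m")   # ~412 meter tower
```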
So, please tell me again where to get the disk images to run a Mastodon server on a C64, and how you just know that plain old email is garbage and that old people don’t know what they’re talking about.
you very much cannot run a Mastodon server on a Commodore 64.
You absolutely can.
Ok. Send me a link to a disk image of that. I have a C64 lying around with a 1541 disk drive. I’ll set up a public Mastodon instance running on the C64, with a webcam stream of the setup.
I have no idea what any of that means…
That checks out. You conveniently skipped the part where I asked for a single messaging solution which works both with modern Android/iOS devices and with anything you’ll find in your dad’s (or granddad’s, I guess) drawer, can manage multiple recipients, can escalate to SMS/home automation bells, works reliably even if the uplink goes down for a few hours, and so on.
And no, you very much cannot run a Mastodon server on a Commodore 64.
But you seem like a young and enthusiastic individual. I was one “a few” years ago. Keep it going, but that arrogant attitude won’t get you anywhere. Email has been a thing since the 1970s, and there’s a reason why it’s still going strong. Things like XMPP have been around for a good while, and there’s a reason why they’re not even close to overtaking email as the primary communication technology.
You’ll live and learn. My guess is that when you reach my age, email will still be working just fine, and the majority of the hot stuff around right now will have faded into history.
aren’t reliant on any particular company or service, and are easier to run and manage without requiring approval from your ISP
What other than email provides that? Browser notifications generally don’t work on mobile. Most of the common instant messengers rely on a single instance running the thing, unless you’re suggesting sending messages via IRC or XMPP (or Matrix or…), which have their own problems. App notifications require that whatever the app runs on is available and online, and more often than not they demand some specific device. And even if there were a Linux desktop “app”, it requires that the software is actually running.
Also, I have yet to meet an ISP which would block sending email via Gmail/Amazon/Protonmail/whoever. Sure, my current ISP blocks TCP/25 to the world by default, but you can ask to have that opened if you really want to, and ports 587 and 465 are open, so you can work around it even if you don’t want a smarthost for some reason.
With other options you wouldn’t need to because they already provide the features you’re looking for in those apps.
Which other protocol allows notifications at the same time on all your mobile devices and all your workstations, and gives you an easy way to send the very same message to an arbitrary number of recipients on all of their devices? I had email on a Palm Pilot around 2001, over mobile data via IrDA, and you can read email even on a Commodore 64 if you really want to (well, to be more specific, use the C64 as a terminal to a *nix server to access email; I don’t think there’s an actual IMAP/POP client for it). There’s just no way for any other modern service to even try to compete with the versatility of email.
And then there are the more sophisticated approaches, like pushing email through however complex procmail/perl/python/whatever scripting you like, where you can build quite literally whatever you can imagine. Set up an old fire alarm bell, hook it up to your home automation, process incoming emails, and if one is severe enough, turn the bell on. Sure, at least some of that is possible via instant messengers too, but with email I can be pretty sure that if I write a script for it today, it’ll still run quite happily for the next 10-15 years.
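As a minimal sketch of that kind of pipeline (the webhook URL and the severity rule are made up for illustration), procmail can pipe each incoming message into a script along these lines, hooked in with a plain `:0` recipe:

```python
#!/usr/bin/env python3
# Sketch: procmail pipes each incoming message to this script, which
# rings a (hypothetical) home automation fire bell on severe alerts.
import sys
import urllib.request
from email import message_from_binary_file

# Hypothetical home automation webhook - replace with your own endpoint.
BELL_WEBHOOK = "http://homeassistant.local:8123/api/webhook/fire_bell"

msg = message_from_binary_file(sys.stdin.buffer)
subject = msg.get("Subject", "").upper()

# Crude severity rule, purely illustrative.
if "CRITICAL" in subject or "DISK FAILURE" in subject:
    urllib.request.urlopen(BELL_WEBHOOK, data=b"")  # POST triggers the bell
```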
Please do tell me which of the modern messaging alternatives offer all of that.
It just boggles my mind that we haven’t moved away from this archaic technology.
None of the alternatives are as standardized as plain old email. You can use whatever client you like to read it, you don’t have to rely on a single company like Meta with WhatsApp for communication, it’s easy to use, pretty damn reliable and fault tolerant, and it just ticks all the boxes you’ll ever need for simple message delivery.
Personally, I would absolutely hate it if software started offering notifications only via Slack or Signal or whatever. Just let me have my email; I can then read it with a browser in a library, on my cellphone, on my desktop and laptop, and on pretty much every other internet-connected device on the planet. And if I want, I can pass it through to Teams, SMS, all the messaging platforms and even straight to my printer should I need to. With other message delivery options that’s often either pretty difficult or straight up impossible.
You only need an SMTP server, so inbox size doesn’t matter (assuming you have another email address where you want to receive those notifications). And even if you have a separate inbox for alerts, it’s quite unlikely that you get hundreds of megabytes worth of alerts every day, and they’re pretty much useless after a day or two, so there’s no need to keep them around.
Around here ISPs commonly include an SMTP service with their subscriptions, so that’s worth checking. Beyond that, any at least somewhat reputable provider will do, as long as they offer traditional SMTP service. One option is to run a relay host on the local network which sends mail out through a smarthost; that way you can point everything you run at the local unauthenticated SMTP server, and that one service pushes the messages to the internet.
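The sending side in each application then stays trivial; a rough sketch (hostnames and addresses are placeholders) of talking to that local relay:

```python
#!/usr/bin/env python3
# Sketch: push a notification to the local unauthenticated relay,
# which forwards it to the internet through the smarthost.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alerts@example.lan"     # placeholder addresses
msg["To"] = "me@example.com"
msg["Subject"] = "backup job failed"
msg.set_content("Nightly backup exited with a non-zero status.")

# Plain port-25 SMTP, no auth - fine inside your own LAN only.
with smtplib.SMTP("smtp.lan", 25) as relay:
    relay.send_message(msg)
```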
We’ve all been there. If you do this stuff for a living, you’ve done that way more than once.
You’d think you’d learn from your mistakes
Yes, that’s what you’d think. And then you’ll sit staring at a blank terminal once again after making some trivial mistake yet again.
A friend of mine developed a habit (working at a decent-sized ISP 20+ years ago) of scheduling a reboot 30 minutes out for everything, no matter what he was about to do. The hardware back then (I think it was mostly Cisco) had a ‘running config’ and a ‘stored config’ which were two separate things. Log in, set up the scheduled reboot, do whatever you’re planning to do, and if you mess up and lock yourself out, the device reverts to the previous config in a while and you can retry while avoiding the previous mistake. Rinse and repeat.
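From what I remember of Cisco IOS, the idiom looks roughly like this (a sketch from memory, so double-check the commands on your platform):

```
router# reload in 30                         ! schedule a reboot 30 min out
router# configure terminal                   ! risky changes hit running config only
  ... do the actual work ...
router# reload cancel                        ! still have access? call off the reboot
router# copy running-config startup-config   ! and only now persist the changes
```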
And, personally, I think that’s one of the best ways to tell actual professionals from the ‘move fast and break things’ crowd. Once you’ve locked yourself out of a system literally halfway across the globe one time too many, you eventually learn to think about the next step and the failovers. I’m not that much of a network guy, but I have shot myself in the foot often enough that whenever there’s dd, mkfs or something similar on a root shell I automatically pause for a second to confirm the command before hitting enter.
And while you gain experience you also learn how to avoid the pitfalls, but the more important part (at least for me) is thinking ahead. The constant mindset of thinking about processes, connectivity, what you can actually do if you fuck up, and so on, becomes part of your workflow. Accidents will happen, no matter how much experience you have. The really good admins just know that something will go wrong at some point in the process, and build things so that when you fuck up you still have the access to fix it, instead of calling someone 6 timezones away in the middle of the night to clean up your mess.
Yep. Even if the data I’m backing up doesn’t really change that often. Perhaps I should start backing up the files from my laptop and workstation too. Nothing too important is stored only on those devices, but reinstalling and reconfiguring everything is a bit of a chore.
Does your storage include any kind of RAID? If not, that’s something I’d personally add to the mix to avoid interruptions to the service. Also, 32 gigs of RAM is not much, so don’t use ZFS on Proxmox; it eats up your memory, and if you run out, everything gets stupidly slow (personal experience speaking here, my Proxmox server has 32 gigs as well).
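If you stick with ZFS anyway, you can at least cap the ARC so it can’t eat everything; on Proxmox that’s a module option (the 4 GiB below is just an example value):

```
# /etc/modprobe.d/zfs.conf - cap the ZFS ARC at 4 GiB (example value)
options zfs zfs_arc_max=4294967296
```

You can also write the same number into /sys/module/zfs/parameters/zfs_arc_max to apply it without a reboot.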
Also, that’s quite a lot of stuff to maintain, but you do you. Personally I wouldn’t want a stack that big for my everyday needs, but I have a wife, kids, kids’ hobbies and a ton of other stuff going on, so I have barely enough personal capacity to run my own Proxmox, Pi-hole, Immich and Home Assistant, and none of those are in perfect condition. Especially the HA setup badly needs some TLC.
And then there’s the obvious. A personal mail server on a home-grade uplink is a beast of its own to manage, and if you don’t really know what you’re getting into, I’d recommend against it, even though I’m rooting for every mail server that is not owned by Alphabet/Microsoft/Apple/etc. It’s just a complicated thing to do right, and email is quite an essential part of everyday life today, so be aware. If you know what’s coming (or are willing to eat the mistakes and learn from them), then by all means, go for it. If not, I’d suggest paying someone to make it happen.
And then the backups. A few times I’ve made the mistake of thinking it’d be fine to set up backups at some point in the future. That has bitten me in the rear. You either have backups in the pipeline coming Very Soon™ or you lose your data. And even if they’re coming Very Soon, you still risk losing your data.
Plus, if you don’t test recovery from your backups, you don’t have backups. Although for a home gamer a full blank-slate recovery test is often a bit much to ask, so I’ve personally settled for a state where I know for sure I can recover from any disaster in the home lab without actually running the test, as I don’t have enough spare hardware to do it fully.
Beyond that, just have fun. Recently I ran into an issue where my Proxmox server needed some hardware maintenance/changes, which took my Pi-hole server down, so the whole LAN was without DNS. Not the end of the world for me, but a problem anyway, and while I’ve been planning a remedy for it, I haven’t done anything concrete yet.
The one thing I always forget, no matter how many DNAT setups or whatever I write with iptables.
I changed my Proxmox server from a ZFS RAID pool to software RAID with mdadm. Saved me a ton of RAM, and cheap SSDs don’t really like ZFS, so it’s a win-win. And while messing around with the drive setup I also changed the system around a bit. Previously it had a single SSD with LVM and 7x4TB drives with ZFS, but as I don’t really need that much storage, it’s now running 3x1TB SSD + 4x4TB HDD, both as software RAID5, so 2TB of fast(ish, they’re still SATA drives) storage and 12TB (or about 10.9 in the real world, TB vs TiB) of spinning rust.
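For reference, the array creation itself is a one-liner per set; a sketch with made-up device names (verify yours with lsblk before running anything like this):

```
# Sketch only - device names are examples, check lsblk first!
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]1   # SSD array
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[defg]1  # HDD array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf                     # persist across boots
```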
Well enough for my needs, and I finally have enough fast storage for my Immich server to hold all the photos and videos from 20+ years. It took “a while” to copy ~5TB over 1-gigabit LAN to another system and back, but it’s done now, and the copying didn’t need babysitting in the first place, so not too big of a deal. The biggest unexpected issue was that my 3.5" HDD hotswap cradles had no way to mount 2.5" drives, so I had to shut down the server and open the case to mount the SSDs.
And while doing that my Pi-hole was down, so the whole network had no DNS server. I need to either set up a second Pi-hole instance or add some scripting to the router that changes the DNS servers offered to DHCP clients while the Pi-hole is down, and shortens the lease time to a few minutes.
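Assuming the router runs dnsmasq (an assumption, routers vary), the static half of that could be as little as handing out a fallback resolver and a short lease; the addresses here are made up:

```
# /etc/dnsmasq.d/dhcp-dns.conf - Pi-hole first, public resolver as fallback,
# short leases so clients pick up changes quickly.
dhcp-option=option:dns-server,192.168.1.10,1.1.1.1
dhcp-range=192.168.1.100,192.168.1.200,5m
```

The trade-off is that some clients will happily use the fallback even while the Pi-hole is up, which bypasses the ad blocking; hence the idea of only swapping the option in via a script when the Pi-hole actually goes down.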
I personally prefer printed books of our photos. We are missing quite a few years due to life getting in the way, but the end goal is to have actual books of photos with titles like ‘Our family in 2018’ and ‘Sports of our firstborn in 2022’. In Europe we have a company called ‘ifolor’ where you can design and order printouts of your photos. They’re not exactly cheap, but the quality is pretty damn good, and their offerings go up to pretty decent-sized photo albums, up to A3 size and 180 pages (which runs over 200€). So, not cheap, but at least so far the quality has been worth the money.
And they have cheaper options too, but personally I think it’s worth the money to get the best print quality you can. And even the smallest and cheapest option is far superior to having nothing at all after a hardware failure or whatever.
After reading the previous discussion, I think you should get more than a single drive for storing cold backups. That way you at least spread out the risk of a single drive failing. 2TB spinning drives are pretty cheap today, and if you have, for example, 4 of them, you can buy one now and write your backups to it, buy another in 6 months and write the data to that, and so on.
This way your drives have a year or two of difference in purchase date, so it’s pretty unlikely that all of them fail at once, and each drive gets powered on and checked every other year or so. My personal experience is that spinning drives are pretty stable on the shelf, but I wouldn’t rely on them for decades. And of course, even with multiple drives, you’ll still want to replace each of them every 3-5 years. Plus, with multiple drives, if I were to build a setup like that, I’d write some sort of script so I could just plug the thing in and double-click an icon on the desktop to refresh the data, and maybe automatically get a notification when the drive in use should be replaced.
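A minimal sketch of that refresh script (the paths, the mount point and the age limit are all made-up placeholders):

```python
#!/usr/bin/env python3
# Sketch: refresh a cold-backup drive and warn when it's getting old.
# Paths and the age threshold are illustrative placeholders.
import datetime
import pathlib
import subprocess

MOUNT = pathlib.Path("/mnt/coldbackup")
SOURCE = "/srv/photos/"
MAX_AGE_YEARS = 4

# Mirror the source onto the drive.
subprocess.run(["rsync", "-a", "--delete", SOURCE, str(MOUNT / "photos")],
               check=True)

# A 'born-on' marker file written once when the drive was commissioned.
born = datetime.date.fromisoformat((MOUNT / "PURCHASED.txt").read_text().strip())
age = (datetime.date.today() - born).days / 365
if age > MAX_AGE_YEARS:
    print(f"Drive is {age:.1f} years old - time to buy a replacement.")
```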
And for actual long-term storage, printouts are the way to go. At least around here you can get books made out of photo paper with your pictures on them. That’s one medium which is actually stable over a long period, and using it requires neither much technical knowledge nor any hardware. But I’d still keep digital copies around, as printouts aren’t resistant to things like house fires or water damage.
Personally I’m running a postfix+dovecot+amavis setup managed by ISPConfig 3 on a Debian VPS from Hetzner. I think having a clean domain name attached to the IP is more important than the ‘clean’ IP itself, but your mileage may vary.
This seems to be a common point of view for email self hosting.
However, my own experience is a whole other story. Sure, my hosts have been on every spam list imaginable, mostly Microsoft’s, but just a week ago I migrated the whole setup to a new VPS, and while there’s still a thing or two to iron out, email is running just fine. The biggest issue was that I forgot to add the IPv6 DNS records for the VPS and thus got blocked by Gmail, but they gave a clear error explaining why, and once I fixed the problem it’s been smooth sailing.
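For anyone hitting the same wall: Gmail wants the forward and reverse IPv6 records to agree. In zone-file terms it’s roughly this, with example names and addresses:

```
; forward zone: the host the mail server HELOs as
mail.example.com.  IN  AAAA  2001:db8::25

; matching reverse (ip6.arpa) record, nibbles of the address reversed
5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.  IN  PTR  mail.example.com.
```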
I’ve been running things on the current domains since 2016 or 2018, even commercially. It’s mostly problem-free and things just work, Microsoft being the biggest ass to work with. For example, last October/November they decided to reject everything from one of my servers while both their JMRP portal and their support claimed there was nothing wrong with it. It took a couple of days to clear, without any definitive explanation. But beyond that, across various environments since 2009 (I think), it’s been mostly problem-free hosting.
Sure, hosting email for others requires at least some understanding of how things should work (both technically and ethically/legally), and the skillset needed is a bit more complex than for hosting a website on the public internet, but it’s still something practically anyone can do if they really want to.
And sure, there’s a ton of stuff you need to get right. And then there are the cases where you miss something, your ‘Contact me’ web form becomes a spammer’s heaven, your servers send a few million viagra ads around the net, and your IP/domain ends up on every shitlist there is. It takes some persistence and time to clean that up and learn from the experience, but it’s not the end of the world.
Self-hosting your email is perfectly viable, it can be done regardless of Google/Microsoft, and I highly recommend doing it. Email is one of the last “old” corners of the net where everything is not centralized to one or a few actors. But you really need to know what you’re doing. Copy’n’pasting commands to spin up whatever the latest hot stuff is in docker containers just isn’t enough.
Dammit, my organic memory failed yet again. It’s been a while since I’ve seen that prompt (and I have agreed to it at least a few times as well).