"In 2024, it cost $N to run a Mastodon instance with ~1000 active users for a year.
-
Emelia replied to mekka okereke :verified: last edited by
@mekkaokereke current estimates of yearly cost per account are $0.30 to $0.80, based on infrastructure, storage, etc., from what I've seen.
I'm pretty sure @esk or @dma worked out the numbers for Hachyderm too.
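Applying that per-account range to the ~1000 active users in the opening question gives a rough yearly figure. A minimal sketch of the arithmetic, assuming only the $0.30 to $0.80 per-account estimate quoted above:

```python
# Rough yearly infrastructure cost for a ~1000-active-user instance,
# using the $0.30-$0.80 per-account-per-year estimate quoted above.
active_users = 1000
cost_per_account_low, cost_per_account_high = 0.30, 0.80

low = active_users * cost_per_account_low    # $300/year
high = active_users * cost_per_account_high  # $800/year
print(f"~${low:,.0f} to ${high:,.0f} per year, excluding volunteer labor")
```

That lands at roughly $300 to $800 per year in infrastructure, before any paid moderation or admin time.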
-
Samir Al-Battran replied to mekka okereke :verified: last edited by
@mekkaokereke
X = Fediverse discovery project
Y = Mastodon implementation of it
Z = Getting rid of relays and the local copy of remote data
The Fediverse discovery project will actually be a huge advantage. I don't know if it will be complete in 2025, but if it is, it's a game changer (also assuming Mastodon's architecture takes advantage of it). The existing design is not efficient, and you also end up missing most of the conversations (especially on smaller instances with 1K users).
-
Brandon Jones replied to mekka okereke :verified: last edited by
@mekkaokereke I've heard ActivityPub is pretty chatty as far as protocols go. Wonder how much headroom there is for reducing server costs purely through protocol improvements?
(That does nothing to reduce the human costs for moderation, but I think we all know tech folks are more likely to go for the easily quantifiable tech solutions first.)
-
@samir @mekkaokereke we are doing this under a grant from @ngisearch and the agreed plan is to have it done by next June. The spec work is mostly done (unfortunately we did not get much feedback) and we hope to have a first implementation for trends (our first capability) in 2 months, both the « provider » side and the Mastodon implementation
-
Renaud Chaput replied to mekka okereke :verified: last edited by
@mekkaokereke if we focus on cost, then shared moderation and shared storage. Those 2 things build on our FASP idea that we are currently actively working on.
I see a lot of people pointing at technical things like switching away from Rails or some other component, but those are really not the issues, at least from my experience running instances with many hundreds of thousands of users.
-
gkrnours replied to mekka okereke :verified: last edited by
@mekkaokereke I wonder if a shared moderation team could help. Like a moderation panel that could handle reports for multiple instances, with a few small instances helping to moderate each other's instances using such a tool. That way, a dozen instances that each have one mod available 2 hours a day, instead of each being unmoderated 22 hours a day, could be moderated around the clock. A sketch of that coverage math follows below.
In the past, spam filters have been used to classify text content. Maybe the same could be done for triage in moderation.
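A minimal sketch of that pooled-coverage arithmetic, assuming a dozen instances that each contribute one volunteer moderator for a 2-hour shift (numbers taken from the post above):

```python
# Shared-moderation coverage math, using the assumed numbers above:
# 12 instances, one volunteer moderator each, 2-hour shifts.
instances = 12
shift_hours_per_mod = 2

solo_coverage = shift_hours_per_mod                  # hours/day an instance covers on its own
pooled_coverage = instances * shift_hours_per_mod    # hours/day the shared panel can cover

print(f"alone:  {solo_coverage}h/day moderated, {24 - solo_coverage}h/day unmoderated")
print(f"pooled: {min(pooled_coverage, 24)}h/day moderated across all {instances} instances")
```

With staggered shifts, the pooled panel covers the full 24 hours.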
-
Joby :gts: (he/him) replied to Brandon Jones last edited by
@tojiro @mekkaokereke I've been kinda blown away by how much traffic AP generates. I'm running a GoToSocial instance that's just me. I only have like 200 followers, and it gets almost two million requests per month and uses like 500MB of RAM 24/7 (and this is a fairly efficient AP implementation!).
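For scale, converting that traffic into an average rate (a rough sketch, assuming about 2 million requests over a 30-day month and ~200 followers, per the figures above):

```python
# Back-of-the-envelope rate for the single-user GoToSocial instance described above
# (assumed: ~2 million requests per 30-day month, ~200 followers).
requests_per_month = 2_000_000
seconds_per_month = 30 * 24 * 3600
followers = 200

print(f"~{requests_per_month / seconds_per_month:.2f} requests/second on average, around the clock")
print(f"~{requests_per_month / followers:,.0f} requests per follower per month")
```

That works out to roughly 0.8 requests/second sustained, or about 10,000 requests per follower per month, for a single-user instance.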
-
{Insert Pasta Pun} replied to Emelia last edited by
This post is deleted!
-
{Insert Pasta Pun} replied to Joby :gts: (he/him) last edited by
@joby @tojiro @mekkaokereke what's a good comparison point for the amount of traffic generated?
Like if you're publishing from one to many, are there any comparison points for lighter traffic?
I know one trivial inefficiency: if you have 50 accounts each followed by 50 accounts on each of 50 other servers, and each one publishes one post, the sender could trivially deliver one message per follower (2,500 per destination server, 125,000 total), or optimally send only 50 to each server, or reduce that further if deliveries are bundled, gossiped, or shared (though some gossip schemes increase traffic rather than reduce it). The numbers are sketched below.
Like if your criterion is that N hosts must sync M feeds...
There still needs to be that delta of changes sent over the wire (unless they're sending the complete object again rather than just the update), and it's mostly haggling over how fast or how batched the sends are?
Unless there's some part that's duplicating work somewhere
Or if it's just encoding overhead?
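A minimal sketch of that fan-out arithmetic, using the assumed numbers from the post above (50 senders, 50 destination servers, 50 followers per sender on each server, one post each):

```python
# Delivery fan-out for the hypothetical example above (all numbers are assumptions from the post).
senders = 50
remote_servers = 50
followers_per_sender_per_server = 50

# Naive: one delivery per (post, follower).
naive_total = senders * remote_servers * followers_per_sender_per_server   # 125,000
naive_per_server = senders * followers_per_sender_per_server               # 2,500

# Deduplicated: one delivery per (post, destination server), e.g. via a shared inbox.
dedup_total = senders * remote_servers                                      # 2,500
dedup_per_server = senders                                                  # 50

print(f"naive: {naive_total:,} deliveries total, {naive_per_server:,} per destination server")
print(f"dedup: {dedup_total:,} deliveries total, {dedup_per_server} per destination server")
```

ActivityPub's sharedInbox endpoint is what gets implementations to the deduplicated case when they use it; the gap between the two totals is one concrete piece of the "chattiness" being discussed.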
-
{Insert Pasta Pun} replied to {Insert Pasta Pun} last edited by
@joby @tojiro @mekkaokereke or maybe it's caching problems?
-
Emelia replied to {Insert Pasta Pun} last edited by
@risottobias @mekkaokereke @esk @dma that range is based on expense information from a half dozen large instances
-
This post is deleted!
-
@puppygirlhornypost2 @risottobias @mekkaokereke @esk @dma hachyderm uses digitalocean spaces, but has a custom CDN on top
-
This post is deleted!
-
Amber replied to Emelia last edited by
This post is deleted!
-
This post is deleted!
-
@puppygirlhornypost2 @risottobias @mekkaokereke @esk @dma yeah, reads and deletes can be expensive
-
{Insert Pasta Pun} replied to Emelia last edited by
@thisismissem @puppygirlhornypost2 @mekkaokereke @esk @dma Wasabi charges for objects deleted before 90 days (a minimum storage duration), so there's also that.
-
Esk replied to Emelia last edited by
@thisismissem @mekkaokereke @dma yup, we could calc the raw infra costs, will do that tonight at the latest, but your range sounds about right.
i'm not aware of anything that would have magically dropped the infra costs (not X, Y, nor Z). maybe the libvips support reduced cpu somewhat.
mekka has a super key point - the people costs are all $0 in that figure, though, because it's volunteer work. personally, i'm happy to do it as a way of giving back, but the reality is, mastodon is hard to make viable if you actually pay people.
-
@esk @mekkaokereke libvips reduces the cpu required for conversion; however, it does allow for some larger attachments