I'm sorry, it took *how* many servers to post a single long message from Ghost to 5k fediverse accounts and handle some replies?
-
@kissane Aha! That was the post from when we acquired the plugin and brought @pfefferle on board. This is when we launched it for everyone: https://wordpress.com/blog/2023/10/11/activitypub/
We should probably add a follow-up note to the older post
-
Jenniferplusplus replied to Erin Kissane
@kissane @fediversereport @thisismissem
This is armchair engineering, but I suspect there's an architecture issue here. Ghost seems to be organized around the assumption that secondary work is fast and easy. Sending emails, for example, is mostly an API call to Mailgun for them. But there's no Mailgun for ActivityPub, so they're doing the delivery themselves, and it happens in a blocking way.
-
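A minimal sketch of the blocking pattern hypothesized above, with invented function names standing in for whatever Ghost actually does; this is illustration only, not Ghost's code:

```ts
// Hypothetical illustration only -- not Ghost's code. Fan-out done inline in the
// publish path blocks the request until every remote inbox has been contacted.
async function savePost(post: { id: string; html: string }): Promise<void> {
  // placeholder: persist the post locally (the fast, "primary" work)
}

function buildCreateActivity(post: { id: string; html: string }): object {
  // placeholder: wrap the post in an ActivityStreams Create activity
  return { type: "Create", object: { id: post.id, content: post.html } };
}

async function publishPost(post: { id: string; html: string }, followerInboxes: string[]) {
  await savePost(post);

  // The "secondary" work: one HTTP POST per remote inbox, awaited inline.
  // With thousands of inboxes this dominates the request and ties up the process.
  for (const inbox of followerInboxes) {
    await fetch(inbox, {
      method: "POST",
      headers: { "Content-Type": "application/activity+json" },
      body: JSON.stringify(buildCreateActivity(post)),
    });
  }
}
```
-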
@mattwiebe @pfefferle Honestly this largely is a search results problem, but that's a problem we all live with forever somehow. I keep meaning to check out your implementation!
-
@bengo @kissane I'm telling everyone, push model was a mistake!!! (https://icosahedron.website/@greg/113222459291481648)
-
Jenniferplusplus replied to Jenniferplusplus
@kissane @fediversereport @thisismissem also worth noting Ghost is built in Node.js, so it's more or less single-threaded. 10 servers might very well have been 10 CPU cores in a different stack.
Anyway, this is to say that ActivityPub is very resource-intensive, but it seems like there are complicating factors here that can be worked through over time.
-
Eugen Rochko replied to Marco Rogers
@polotek @poswald @kissane @fediversereport No, that's not true. A post is only delivered once per domain. And we use keep-alive connections to shave off request setup time for repeat deliveries. If you have 5k followers from 2 domains, 2 requests will be made. Only if you have one follower per domain does it become 5k requests.
-
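To make the arithmetic above concrete, here's a toy sketch; it assumes every remote server exposes a shared inbox (the best case), and the follower list is invented:

```ts
// Toy illustration of shared-inbox delivery counts: deliveries scale with the
// number of distinct domains, not the number of followers.
function countDeliveries(followerInboxes: string[]): number {
  const domains = new Set(followerInboxes.map((inbox) => new URL(inbox).host));
  return domains.size; // one request per domain, to its shared inbox
}

const followers = [
  "https://mastodon.social/users/alice/inbox",
  "https://mastodon.social/users/bob/inbox",
  "https://hachyderm.io/users/carol/inbox",
];
console.log(countDeliveries(followers)); // 2
// 5k followers on 2 domains -> 2 requests; one follower per domain -> 5k requests.
```
-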
Emelia replied to Jenniferplusplus
@jenniferplusplus @kissane @fediversereport
True, Node.js is single-threaded; however, due to async I/O you can usually process a fair number of requests simultaneously, because a single request doesn't have to finish before another is processed. You only get into trouble with synchronous APIs and long-running processing (e.g., iterating over a lot of data).
I suspect besides queuing, there's something non-obvious here.
-
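A toy contrast of the two cases described above; nothing here is Ghost's code:

```ts
// Async I/O: while the POST is in flight, the Node event loop can service other
// requests, so many inbox deliveries can overlap on a single thread.
async function deliverActivity(inboxUrl: string, activity: object): Promise<void> {
  await fetch(inboxUrl, {
    method: "POST",
    headers: { "Content-Type": "application/activity+json" },
    body: JSON.stringify(activity),
  });
}

// Synchronous, CPU-bound work: this loop never yields, so every other request
// queued on the event loop waits until it finishes.
function summarizeEverything(items: number[]): number {
  let total = 0;
  for (let i = 0; i < 1_000_000_000; i++) {
    total += items[i % items.length] ?? 0;
  }
  return total;
}
```
-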
Emelia replied to Emelia
@kissane @fediversereport So, further on this: by not using Fedify's queue option, they're also not using a queue to perform sends of Activities.
This means delivery failures would also mess up Ghost rather good, because it'd result in one send failure cancelling others:
Sending activities | Fedify (fedify.dev): Fedify provides a way to send activities to other actors' inboxes.
-
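For reference, the option being discussed looks roughly like this per Fedify's documentation at the time of writing; treat the exact names as an assumption and check fedify.dev before relying on them:

```ts
import { createFederation, MemoryKvStore, InProcessMessageQueue } from "@fedify/fedify";

// Passing a message queue makes Fedify defer inbox processing and outgoing
// deliveries to background tasks instead of doing them inline in the HTTP request.
// MemoryKvStore/InProcessMessageQueue are the single-process options; a production
// setup would swap in persistent backends.
const federation = createFederation<void>({
  kv: new MemoryKvStore(),
  queue: new InProcessMessageQueue(),
});
```
-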
Emelia replied to Emelia
@jenniferplusplus @kissane @fediversereport
Turns out they're not using a queue for receiving activities nor for sending them, which I'd not recommend in a production environment where you want to use resources & processes optimally.
-
Noah Kennedy replied to Emelia
@thisismissem @jenniferplusplus @kissane @fediversereport lol that would do it
-
@sashin @polotek @kissane @fediversereport In a nutshell, whenever someone posts a reply to a message, it goes to the server which sourced that message; the server will then relay the reply to everybody engaged with that discussion: followers of the account, other contributors to the discussion, and anyone hashtagged in the conversation.
-
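A loose sketch of that fan-out, with simplified types and invented field names:

```ts
// The origin server collects everyone with a stake in the thread and delivers the
// reply to each of their servers; de-duplication keeps it to one delivery per actor
// (and, with shared inboxes, per server).
interface Reply {
  authorFollowers: string[];    // actors following the reply's author
  threadParticipants: string[]; // actors who already posted in the discussion
  tagged: string[];             // actors tagged in the conversation
}

function fanOutTargets(reply: Reply): Set<string> {
  return new Set([
    ...reply.authorFollowers,
    ...reply.threadParticipants,
    ...reply.tagged,
  ]);
}
```
-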
Marco Rogers replied to Eugen Rochko
@Gargron @poswald @kissane @fediversereport feel free to explain the actual reason this is such a persistent problem. I don't mind being corrected. But please don't let that be the only reason you pop in.
-
Hrefna (DHC) replied to Marco Rogers
One of my deep longstanding frustrations in this space:
There is a problem that _can_ be addressed within the scope of the protocol, and so people will assert that it _is_ addressed within the scope of the protocol.
Or they will point to some implementation that has solved it one way or another (usually by limiting how they use it) but not address that there's still a core, fundamental problem in the protocol itself.
-
Marco Rogers replied to Hrefna (DHC)
@hrefna @Gargron @poswald @kissane @fediversereport right. I mean I made the classic mistake of offering my understanding of the issue and being mistaken about it. Now everybody gets to focus on that instead of talking about the fact that some form of this is a problem everywhere.
-
Matthias Pfefferle replied to Erin Kissane
@kissane @mattwiebe Maybe we really have to update the blog post a bit to make that clear. I had quite a few comments from people who also thought they had to upgrade!
-
John O'Nolan replied to Erin Kissane
@kissane @fediversereport @thisismissem @bengo Yup! We've got to figure some things out here for sure.
You're absolutely right in your assessment that some of the work is on the side of our (fledgling) implementation (e.g., queues) and some may also be needed at the protocol level.
-
Emelia replied to John O'Nolan
@johnonolan @kissane @fediversereport @bengo @evanprodromou
Yeah, protocol-wise, it'd be neat to be able to send multiple activities per request to the inbox, but this requires object signatures instead of HTTP signatures, IIRC.
But I think adopting queue-based processing & sending would likely fix your scaling problems (though it does change how you do error handling, e.g., needing Reject activities instead of relying on status codes).
-
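One possible shape for queue-based sending, sketched here with BullMQ purely as an example library (not a claim about what Ghost or Fedify actually uses): each recipient gets its own job, so one failing inbox is retried on its own instead of cancelling the rest.

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };
const deliveries = new Queue("ap-deliveries", { connection });

// At publish time, enqueue one job per target inbox and return immediately.
export async function enqueueDeliveries(activity: object, inboxes: string[]) {
  for (const inbox of inboxes) {
    await deliveries.add(
      "deliver",
      { inbox, activity },
      { attempts: 8, backoff: { type: "exponential", delay: 5_000 } },
    );
  }
}

// A separate worker drains the queue; a failed delivery throws and is retried
// with backoff, without affecting the other jobs.
new Worker(
  "ap-deliveries",
  async (job) => {
    const { inbox, activity } = job.data as { inbox: string; activity: object };
    const res = await fetch(inbox, {
      method: "POST",
      headers: { "Content-Type": "application/activity+json" },
      body: JSON.stringify(activity),
    });
    if (!res.ok) throw new Error(`Delivery to ${inbox} failed: ${res.status}`);
  },
  { connection },
);
```
-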
Emelia replied to Emelia
@johnonolan @kissane @fediversereport @bengo @evanprodromou
You might also want to use memory/CPU load instead of open requests as your autoscaling metric, since I'm guessing the high volume of requests was the source of the autoscaler growing so large, rather than the server's resources actually being maxed out.
-
Elena Rossini replied to Erin Kissane
@kissane a point to add: paying for Fediverse followers is scary because there's no real way to gauge if these users are active or not.
I LOVE how in Ghost I can look up my subscribers and immediately identify who received 10 of my newsletters but NEVER opened or interacted with them. I can remove these inactive users as followers so I'm not spending money to send them newsletters they never open. How on earth can one do this with AP followers?