I'm sorry, it took *how* many servers to post a single long message from Ghost to 5k fediverse accounts and handle some replies?
-
Hrefna (DHC) replied to Marco Rogers
One of my deep longstanding frustrations in this space:
There is a problem that _can_ be addressed within the scope of the protocol, and so people will assert that it _is_ addressed within the scope of the protocol.
Or they will point to some implementation that has solved it one way or another—usually by limiting how they use it—but not address that there's still a core, fundamental problem in the protocol itself.
-
Marco Rogers replied to Hrefna (DHC)
@hrefna @Gargron @poswald @kissane @fediversereport right. I mean I made the classic mistake of offering my understanding of the issue and being mistaken about it. Now everybody gets to focus on that instead of talking about the fact that some form of this is a problem everywhere.
-
Matthias Pfefferle replied to Erin Kissane
@kissane @mattwiebe Maybe we really have to update the blog post a bit to make that clear. I had quite a few comments from people who also thought they had to upgrade!
-
John O'Nolan replied to Erin Kissane
@kissane @fediversereport @thisismissem @bengo Yup! We've got to figure some things out here for sure.
You're absolutely right in your assessment that some of the work is on the side of our (fledgling) implementation (e.g., queues) and some may also be needed at the protocol level.
-
Emelia 👸🏻 replied to John O'Nolan
@johnonolan @kissane @fediversereport @bengo @evanprodromou
Yeah, protocol wise, it'd be neat to be able to send multiple activities per request to the inbox, but this requires object signatures instead of http signatures iirc.
But I think adopting queue based processing & sending would likely fix your scaling problems (but does change how you do error handling, e.g., needing Reject activities instead of relying on status codes)
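For illustration, here is a rough sketch of what queue-based inbox processing and the error-handling change it forces might look like: once the server returns 202 and processes later, failures have to go back to the sender as Reject activities rather than HTTP status codes. The in-memory queue, the helper functions, and the actor URL are assumptions for the sketch, not Ghost's actual code.

```typescript
// Sketch: queue-based inbox processing. The in-memory array stands in for a
// real job queue; verifySignature, handleActivity, and deliverTo are
// hypothetical helpers.
import express from "express";

type Activity = { id: string; type: string; actor: string; object?: unknown };

const inboxQueue: Activity[] = [];
const app = express();
app.use(express.json({ type: ["application/activity+json", "application/ld+json"] }));

app.post("/inbox", async (req, res) => {
  if (!(await verifySignature(req))) return res.sendStatus(401);
  inboxQueue.push(req.body as Activity);
  // With synchronous processing the status code reports the outcome;
  // here we can only say "accepted for later processing".
  return res.sendStatus(202);
});

// Worker loop: failures can no longer surface as HTTP status codes, so they
// are reported back to the sender as Reject activities.
async function workInbox(): Promise<void> {
  for (;;) {
    const activity = inboxQueue.shift();
    if (!activity) { await new Promise((r) => setTimeout(r, 250)); continue; }
    try {
      await handleActivity(activity);
    } catch {
      await deliverTo(activity.actor, {
        "@context": "https://www.w3.org/ns/activitystreams",
        type: "Reject",
        actor: "https://blog.example/actor", // hypothetical local actor
        object: activity.id,
      });
    }
  }
}

// Stubs so the sketch is self-contained.
async function verifySignature(_req: express.Request): Promise<boolean> { return true; }
async function handleActivity(_a: Activity): Promise<void> { /* apply side effects */ }
async function deliverTo(_recipient: string, _activity: object): Promise<void> { /* signed POST */ }

workInbox();
app.listen(3000);
```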
-
Emelia 👸🏻 replied to Emelia 👸🏻
@johnonolan @kissane @fediversereport @bengo @evanprodromou
You might also want to use mem/CPU load instead of open requests as your autoscaling metric, since I'm guessing the high volume of requests was what drove the autoscaler to grow so large, rather than the servers' resources actually being maxed out?
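If the stack exposes Prometheus-style metrics, switching the scaler to resource usage can be as simple as publishing the process defaults and pointing the autoscaler at CPU/memory rather than request counts. A minimal sketch with prom-client (assuming that's the metrics setup in play, which it may well not be):

```typescript
// Sketch: expose process CPU/memory so an autoscaler can key off resource
// usage rather than open-request counts. prom-client's default metrics include
// process_cpu_seconds_total and process_resident_memory_bytes.
import express from "express";
import client from "prom-client";

client.collectDefaultMetrics();

const app = express();
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(9091);
```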
-
Elena Rossini ⁂ replied to Erin Kissane
@kissane a point to add: paying for Fediverse followers is scary because there's no real way to gauge if these users are active or not.
I LOVE how in Ghost I can look up my subscribers and immediately identify who received 10 of my newsletters but NEVER opened them or interacted with them. I can remove these inactive users as followers so I'm not spending money to send them newsletters they never open. How on earth can one do this with AP followers?
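There's no AP equivalent of open tracking, but one rough (and much weaker) proxy is to check when each follower's account last published anything, assuming their outboxes are publicly readable, which not every server allows. A sketch, with the collection handling simplified:

```typescript
// Sketch: flag followers whose most recent public activity is older than a
// cutoff. Assumes outboxes are public and reverse-chronological; treats
// unreachable accounts as inactive. A weak proxy, not open tracking.
const CUTOFF_DAYS = 90;

async function fetchJson(url: string): Promise<any> {
  const res = await fetch(url, { headers: { Accept: "application/activity+json" } });
  if (!res.ok) throw new Error(`GET ${url} -> ${res.status}`);
  return res.json();
}

async function lastActivityDate(actorUrl: string): Promise<Date | null> {
  const actor = await fetchJson(actorUrl);
  const outbox = await fetchJson(actor.outbox);
  const page = typeof outbox.first === "string" ? await fetchJson(outbox.first) : outbox.first ?? outbox;
  const newest = (page.orderedItems ?? [])[0];
  return newest?.published ? new Date(newest.published) : null;
}

export async function findInactiveFollowers(followerUrls: string[]): Promise<string[]> {
  const cutoff = Date.now() - CUTOFF_DAYS * 24 * 60 * 60 * 1000;
  const inactive: string[] = [];
  for (const url of followerUrls) {
    try {
      const last = await lastActivityDate(url);
      if (!last || last.getTime() < cutoff) inactive.push(url);
    } catch {
      inactive.push(url);
    }
  }
  return inactive;
}
```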
-
Moose Jolly Holcomb replied to Erin Kissane
@kissane there is definitely a cycle of "compute is cheap, so software gets inefficient" followed by "compute becomes expensive, so folks need to make software efficient again." I knew we were on the cusp of needing to code efficiently again, but this is egregious.
-
Not using a queue is… certainly a choice one can make.
-
Risotto Voted replied to Hrefna (DHC)
@hrefna @thisismissem @jenniferplusplus @kissane @fediversereport
okay...
what's the trade-off for using RSS and webmention instead?
e.g., they pull updates when they get around to it,
you pull logs for webmentions when you get around to it,
otherwise... serve files when asked for 'em.
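For contrast, the pull model sketched above costs the publisher almost nothing at post time: readers fetch the feed on their own schedule, and replies arrive as webmentions that can be verified in a batch whenever it's convenient. A toy sketch of that receiver side (the storage shape and interval are made up for illustration):

```typescript
// Toy sketch of the pull side: the feed is just a static file readers poll,
// and received webmentions are stored and verified later, on our schedule.
type Webmention = { source: string; target: string; at: Date };
const received: Webmention[] = [];

// A webmention receiver only needs to record (source, target) pairs quickly;
// verification and display can happen out of band.
export function receiveWebmention(source: string, target: string): void {
  received.push({ source, target, at: new Date() });
}

// "Pull logs when you get around to it": e.g. hourly, confirm each source
// actually links to the target before surfacing it as a reply.
setInterval(async () => {
  for (const wm of received.splice(0)) {
    const html = await (await fetch(wm.source)).text();
    if (html.includes(wm.target)) {
      console.log(`verified mention of ${wm.target} from ${wm.source}`);
    }
  }
}, 60 * 60 * 1000);
```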
-
Jenniferplusplus replied to Hrefna (DHC)
@hrefna @thisismissem @kissane @fediversereport
I'm really inclined to be generous in my read in this case. Ghost's dev team is like 4 people, and they're doing this development and learning in public. I think there's just no concept of out-of-band work in Ghost. They're probably going to need it, either in the app or as a standalone service they can farm it out to. But I'm sure they'll figure that out. Hopefully without adding a lot of complexity to the hosting (selfishly, because I host one).
-
Hrefna (DHC) replied to Jenniferplusplus
Absolutely. It also isn't particularly complicated to add a queue (albeit significantly easier earlier in the dev process), but the logistics of distributing a queue can be much more nuanced.
This does, however, raise questions for me about what their goals are here.
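One common shape for that out-of-band work, whether kept in the app or farmed out to a standalone worker, is a Redis-backed job queue shared between the web process and one or more workers. A sketch using BullMQ; the library choice, job payload, and retry settings are assumptions, not what Ghost is doing:

```typescript
// Sketch: producer (web process) and a separate worker sharing a Redis-backed
// queue, so delivery fan-out happens out of band. BullMQ is just one option.
import { Queue, Worker } from "bullmq";

const connection = { host: "127.0.0.1", port: 6379 };

// Web process: enqueue one delivery job per follower inbox and return immediately.
const deliveries = new Queue("ap-deliveries", { connection });

export async function enqueueFanout(activity: object, inboxes: string[]): Promise<void> {
  await deliveries.addBulk(
    inboxes.map((inbox) => ({
      name: "deliver",
      data: { inbox, activity },
      opts: { attempts: 5, backoff: { type: "exponential", delay: 30_000 } },
    }))
  );
}

// Worker process (could be the same app or a standalone service):
new Worker(
  "ap-deliveries",
  async (job) => {
    const { inbox, activity } = job.data as { inbox: string; activity: object };
    const res = await fetch(inbox, {
      method: "POST",
      headers: { "Content-Type": "application/activity+json" },
      body: JSON.stringify(activity), // real deliveries also need HTTP signatures
    });
    if (!res.ok) throw new Error(`delivery to ${inbox} failed: ${res.status}`);
  },
  { connection, concurrency: 10 }
);
```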
-
Jenniferplusplus replied to Hrefna (DHC)
@hrefna @thisismissem @kissane @fediversereport
That's a fair question. I haven't been able to tell what their goal or strategy is in adding AP integration. It feels kind of like they're going to build it first, and then decide how and why to use it after the fact. And THAT is a choice one can make.
-
Emelia 👸🏻 replied to Jenniferplusplus
@jenniferplusplus @hrefna @kissane @fediversereport I should note that @hongminhee is going to do (I think) a benchmark of Fedify with and without a queue, just to have more data.
-
Erin Kissane replied to Elena Rossini ⁂
@_elena It's a bit weird, yeah! I have nothing but sympathy for their implementation struggles, and I appreciate them being so transparent; it's just that the business implications were a bit startling.
-
Erin Kissane replied to John O'Nolan
Tbh I appreciate the transparency, even when the implications are startling, in part because it draws out other people's similar struggles. It seems like a whole lot of factors conspiring to be a problem—but also some great people working on those factors.
-
Melroy van den Berg replied to Erin Kissane
@kissane @fediversereport @thisismissem well I'm a contributor to Mbin. So try to rewrite Ghost in PHP. Problem solved.
-
Emelia 👸🏻 replied to Melroy van den Berg
@melroy @kissane @fediversereport Mbin uses a queue iirc, which Ghost currently doesn't. That's why it's not as performant as it should be.
-
I'm not sure that's actually a useful thing to analyze for these sorts of questions? Like I'd be interested in it regardless, but the reason you generally have a queue is not performance.
(Unless by "benchmark" it means "throughput benchmarking" in which case it is useful but it is really just benchmarking the queue's performance and is highly sensitive to it)
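One way to see the distinction: a naive with/without-queue benchmark mostly shows response time collapsing to the cost of enqueueing while the total delivery work stays the same, so you end up measuring the queue itself. A toy illustration (the fake deliver call and the numbers are made up):

```typescript
// Toy sketch of why a with/without-queue benchmark mostly measures the queue:
// total delivery time is unchanged; only the time-to-respond moves.
async function deliver(): Promise<void> {
  await new Promise((r) => setTimeout(r, 50)); // pretend each delivery takes ~50ms
}

const JOBS = 100;

// Without a queue: the handler pays for every delivery inline, so responding
// and finishing deliveries take the same amount of time.
async function withoutQueue(): Promise<void> {
  const start = Date.now();
  for (let i = 0; i < JOBS; i++) await deliver();
  console.log(`inline: responded and finished in ${Date.now() - start}ms`);
}

// With a queue: responding collapses to the cost of enqueueing; the deliveries
// still take just as long, they just happen out of band.
async function withQueue(): Promise<void> {
  const queue: (() => Promise<void>)[] = [];
  const start = Date.now();
  for (let i = 0; i < JOBS; i++) queue.push(deliver);
  console.log(`queued: responded in ${Date.now() - start}ms`);
  for (const job of queue) await job();
  console.log(`queued: deliveries finished in ${Date.now() - start}ms`);
}

withoutQueue().then(withQueue);
```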