kitsunes.club is playing that old game... 99,999 inbound jobs on the queue, 99,999 inbound jobs. Take one down and process it, 101,203 inbound jobs on the queue. #shitpost #techpost
-
@[email protected] wtf even are inbound jobs? Incoming AP events? But why would you need a queue for processing them? Or am I just too spoiled by Go's concurrency model?
-
@[email protected] Incoming posts, and other things that need to be added to the database (cached and accounted for).
-
@[email protected] doesn't change my confusion about why you would need a queue for this. I just don't understand. Probably just really spoiled by Go's fire-and-forget model
-
@[email protected] Handling db stuff isn't instant; we have a pool of database connections. The queue is stored in Redis, and the load is spread across our database connection pool (well, realistically it's only a pool thanks to middleware, cough pgbouncer cough, because Misskey just opens and closes db connections like it's nothing). In this case a queue makes sense. A lot of sense: all these jobs should in theory (if Postgres stopped shitting itself) clear over time. It essentially prevents the server from being overwhelmed.
-
@[email protected] How I'd approach it in Go is similar (use channels to communicate jobs for goroutines to process). Redis holds the shit that needs to be worked on so that we don't have other instances buffering (as in, growing their outbound job queues due to HTTP errors). It's very stupid how there's no rate limit for outbound jobs: you essentially just send bursts until the server 429s you or tells you to fuck off. I have a proposal for X-RateLimit headers, and for actually honoring them with a preemptive rate limiter (based on a leaky bucket) to maximize throughput without this spammy all-at-once pattern.
-
@puppygirlhornypost2 @mstar (oh gods please don't use channels; you want somewhere persistent to spool these so you don't just drop them on the floor if the process restarts)
-
@[email protected] @[email protected] If the jobs were backed by something like Redis, would that really matter? Jobs sent over channels, signal COMPLETE, and then remove from the queue? Idk. I don't care to program in Go lol
-
@puppygirlhornypost2 @mstar you don’t need (or really want) channels if you’re storing them in Redis or another queue
-
@[email protected] @[email protected] o true i forgot yea literally redis exists and the goroutines can grab from the queue... 🥴
-
@[email protected] @[email protected] like what's the point of storing the job in memory... when it's already in memory right there... just remove it when it's done being processed.
-
@puppygirlhornypost2 @mstar it's funny, everyone has been memed into thinking you need to use Redis or similar for your job queue by Rails & Sidekiq
Akkoma uses Oban, which stores the queue in Postgres. And it absolutely screams.
(Akkoma is far more limited by how complex the timeline query is…)