So does anyone have a good guide on what I am actually looking at in the Sharkey dashboard?
-
@[email protected] Yeah I'm not worried about any source for you lmao. You could tell me the delay is because I don't have enough blahaj in proximity to the server and I would probably try it.
-
@[email protected] oh also, the API itself can gridlock the instance, but you don’t see that until you’re this size. If we ran sharkey in the unified configuration (the default: db workers and frontend+API in the same process) it’d have imploded in on itself.
-
ash nova :neocat_flag_genderfluid: replied to Aurora 🏳️🌈
@[email protected] @[email protected] hehe that sounds like good advice actually, make sure the server is comfy
-
Aurora 🏳️🌈 replied to ash nova :neocat_flag_genderfluid:
@[email protected] @[email protected] Server is very comfy don't worry, it has its own blanket keeping it nice and warm. I even activated the RGB for extra 10% ram just in case.
-
Amber 🌸 replied to Amber 🌸, last edited by [email protected]
@[email protected] and that gridlocking isn’t bypassable by throwing more CPUs at it; I already tried that. You have to put on a big girl face and set the variables MK_SERVER_ONLY (iirc? I’d have to double check) and MK_WORKER_ONLY.
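For anyone wanting to try the split layout being described, here's a minimal docker-compose sketch. The env var names are copied from this thread (the poster wasn't sure of the exact spelling, so verify them against your Sharkey version's docs), and the image name and ports are placeholders:

```yaml
services:
  sharkey-web:
    # frontend + API only
    image: sharkey:latest            # placeholder image name
    environment:
      - MK_SERVER_ONLY=1             # name per this thread; double-check spelling
    ports:
      - "127.0.0.1:3000:3000"

  sharkey-worker:
    # queue/db workers only; no port exposed, it only talks to db/redis
    image: sharkey:latest
    environment:
      - MK_WORKER_ONLY=1             # name per this thread; double-check spelling
```

The point of the split is that a flood of queue work can no longer starve the process serving API requests, and vice versa.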
-
ash nova :neocat_flag_genderfluid: replied to Amber 🌸
@[email protected] @[email protected] incidentally mine is also split and a bit overscaled but that's mostly because I can, not because I need it strictly
-
Amber 🌸 replied to ash nova :neocat_flag_genderfluid:
@[email protected] @[email protected] there’s another level where you use haproxy to send websocket traffic to its own MK_SERVER_ONLY node, matching on /streaming and the HTTP/1.1 -> websocket upgrade negotiation. This isn’t possible with sharkey config alone, but you can do it with middleware. There’s also doing the same thing but matching on the "Accept: application/ld+json" header (and other content types like activity+json) to route federation traffic to its own node…
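A rough haproxy sketch of that routing might look like the following. Backend names and ports are made up; the ACLs follow the thread's description, matching /streaming for websockets and ActivityPub content types for federation:

```haproxy
frontend fe_sharkey
    bind :443 ssl crt /etc/haproxy/certs/   # placeholder cert path
    # websocket traffic: the streaming endpoint lives under /streaming
    acl is_streaming path_beg /streaming
    # federation traffic: AP clients negotiate these content types
    acl is_ap hdr_sub(accept) application/activity+json
    acl is_ap hdr_sub(accept) application/ld+json
    use_backend bk_streaming  if is_streaming
    use_backend bk_federation if is_ap
    default_backend bk_web

backend bk_streaming
    server ws1  127.0.0.1:3001 check
backend bk_federation
    server ap1  127.0.0.1:3002 check
backend bk_web
    server web1 127.0.0.1:3000 check
```

One caveat: inbox deliveries typically carry the AP type in Content-Type rather than Accept, so a production config would likely match both headers.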
-
Gwen, the kween fops :neofox_flag_trans: :sheher: replied to Aurora 🏳️🌈
@[email protected] @[email protected] @[email protected] I can't get over putting a blanket on your server to keep it warm lmaaaaao
-
@[email protected] @[email protected] @[email protected] I once tossed a comforter on my 25u server rack, suffocating it so the temps would spike, just to heat my room up a bit more when I took the comforter off, because I knew it’d take a while to go back to its normal temperature range. This is because I am a completely normal individual
-
ash nova :neocat_flag_genderfluid: replied to Amber 🌸
@[email protected] @[email protected] I'm running dedicated workers and MK_SERVER_ONLY nodes, but I haven't split those up into different API routes or anything like that, if that's what you mean; they just do all the web traffic, pretty much. Been meaning to separate out AP things from client web, but I can't quite be bothered to write that much nginx conf rn, and changing my own HTTP setup over to HAProxy is more of a longer-term project xD
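The AP-vs-client split mentioned here doesn't actually need that much nginx conf. A hypothetical sketch (upstream names and ports invented, and the header matching mirrors the haproxy approach discussed earlier in the thread):

```nginx
# Two hypothetical upstreams: one node for client web, one for ActivityPub.
upstream sharkey_web { server 127.0.0.1:3000; }
upstream sharkey_ap  { server 127.0.0.1:3001; }

# Pick an upstream based on the Accept header of the request.
map $http_accept $sharkey_upstream {
    default                        sharkey_web;
    ~application/activity\+json    sharkey_ap;
    ~application/ld\+json          sharkey_ap;
}

server {
    listen 443 ssl;
    server_name example.social;    # placeholder hostname

    # inbox deliveries are POSTs and may not set Accept; route by path instead
    location /inbox { proxy_pass http://sharkey_ap; }
    location /      { proxy_pass http://$sharkey_upstream; }
}
```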
-
@[email protected] @[email protected] @[email protected] it’s so funny to see idrac reporting 130°F exhaust temperatures. I didn’t believe it, so I put my hand behind my server and wew, you’ll never guess this - it wasn’t kidding.
-
Amber 🌸 replied to ash nova :neocat_flag_genderfluid:
@[email protected] @[email protected] you can go even further: you can separate the URL preview by running an instance of summaly yourself and giving misskey the url. Same goes for the media proxy (which I still need to do; I was going to run it in a docker container with cpu limits so at worst it can only max out a single core instead of taking everything else down), on top of using the split instance config and prefix+protocol routing within haproxy to send api, websocket and federation traffic to their own nodes
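The cpu-limited media proxy container described here could be sketched in docker-compose like this. The image name and port mapping are placeholders; `cpus` is the Compose way to cap a service at a fraction of the host's cores:

```yaml
services:
  media-proxy:
    # Hypothetical: run the media proxy as its own container, capped at one
    # core so a heavy image transform can't starve the rest of the box.
    image: media-proxy:latest        # placeholder; use your media proxy build
    cpus: "1.0"                      # at most one core's worth of CPU
    ports:
      - "127.0.0.1:12766:3000"       # placeholder port mapping
```

You'd then point Sharkey at http://127.0.0.1:12766 via its media proxy setting; the exact config key varies by version, so check your instance's .config/default.yml for it rather than trusting this sketch.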