jkap (@jkap):

@atomicthumbs asked:

What are the Computer Stats like for Cohost, if it's information y'all feel appropriate to share? How much bandwidth, how much storage, and so on? I'm curious what sort of [cloud-delocalized] hardware footprint the site has.

  • DB size is under 50gb still (we had to overprovision on DB b/c of cpu issues that have since been resolved but we can't scale back down b/c of how digitalocean links this stuff)
  • the render server is really CPU bound so unfortunately we have to replicate that part up higher than we'd want (it can currently only reliably handle 1rps per-replica; this is an area of active research)
  • the API server, on the other hand, is exclusively DB bound and can do 25rps per replica no problem, although we have our scaling target set lower than that for safety (rough replica math sketched after this list).
  • we're running on a kubernetes cluster of digitalocean s-8vcpu-16gb nodes. that node size has been a good sweet spot for us thus far.
  • total image storage is about 500gb
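(a rough sketch of that replica math in typescript: the 1 rps and 25 rps per-replica figures come from the bullets above, but the peak load and safety factor are made-up numbers for illustration, not cohost's actual autoscaling config.)

```typescript
// hypothetical helper: how many replicas you need if each one is only
// targeted at a fraction of its measured ceiling, so bursts don't push
// it past what it can reliably handle
function replicasNeeded(peakRps: number, perReplicaRps: number, safetyFactor = 0.8): number {
  return Math.ceil(peakRps / (perReplicaRps * safetyFactor));
}

// assuming a made-up peak of 40 rps:
console.log(replicasNeeded(40, 1));  // render server: 50 replicas
console.log(replicasNeeded(40, 25)); // API server: 2 replicas
```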

ok. now for big numbers, all of these are over the last month.

  • we're transferring an average of 13gb per hour for uploaded images with a cache hit rate of 51% (we wanna improve this second number). fastly also gives us "cache coverage", which is the percentage of requests that are theoretically cacheable. our cache coverage is 99.6%. i don't know what that extra 0.4% consists of but it's going to haunt me.
  • the main app is 7.9gb per hour with a cache hit rate of 95.30% (static assets like the client-side js have really aggressive cache rules for this exact purpose). our cache coverage here is only 13.63%, since dynamic pages (aka Everything That Isn't Javascript) aren't really cacheable.
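(the "really aggressive cache rules" pattern, sketched with express — the framework, paths, and header values here are assumptions for illustration, not cohost's actual config:)

```typescript
import express from "express";

const app = express();

// fingerprinted static assets (like the client-side js) never change under
// the same URL, so the CDN can cache them essentially forever; this is what
// drives the ~95% hit rate
app.use(
  "/static",
  express.static("dist", {
    immutable: true,
    maxAge: "1y", // Cache-Control: public, max-age=31536000, immutable
  })
);

// dynamic pages are per-viewer, so the CDN must not cache them; they're what
// drags "cache coverage" down to ~14%
app.get("*", (req, res) => {
  res.set("Cache-Control", "private, no-store");
  res.send(renderPage(req)); // stand-in for the SSR entry point
});

// stand-in renderer so this sketch is self-contained
function renderPage(req: express.Request): string {
  return `<html><body>rendered ${req.path}</body></html>`;
}

app.listen(3000);
```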

that said, over the last few days we have seen a fairly substantial increase in overall traffic. our hourly average for images prior to that was 10gb; if we count just since then it's up to 33gb per hour. we see a similar increase with main app traffic.

big weekend for us


hootOS (@hootOS):

yes i am proud of myself. yes it only contributed to ~0.004% of the hourly image data upload rate because i exported it at super low crispycrunch quality. yes i am Normal.



in reply to @jkap's post:

> (it can currently only reliably handle 1rps per-replica; this is an area of active research)

ah, async SSR with a fullstack javascript framework. hoo boy. i too do a lot of digging and coding in that area and oh god oh fuck. it's very cool technology, but it turns out that async is hard, especially with a billion lines of code in dependencies that are Not built for async *looks at like 98% of NPM*

the number of race conditions triggered by libraries run in SSR (and even in frameworks themselves!!!!!!) is incredible. server side rendering is magic, but it is black magic
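(a hypothetical example of that kind of race, not pointing at any specific library: module-level state is harmless in a browser, where one JS context serves one user, but it leaks across concurrently rendered requests on a server.)

```typescript
// module-level singleton: fine client-side, a race condition under SSR
let currentUser: string | null = null;

function setUser(name: string) {
  currentUser = name;
}

function greeting(): string {
  return `hello, ${currentUser}`;
}

// simulate some async work mid-render (db call, data fetch, etc.)
function fetchSomeData(): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, Math.random() * 10));
}

async function render(user: string): Promise<string> {
  setUser(user);
  // any await between setUser and greeting lets another request run
  await fetchSomeData();
  return greeting(); // may now greet the *other* request's user
}

// two overlapping server-side renders: both will likely say "hello, hootOS"
Promise.all([render("eggbug"), render("hootOS")]).then(console.log);
```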

yeah. i am not a fan of SSR. we were originally writing a fully frameworkless frontend but it made doing anything A Huge Fucking Nightmare so i gave up on that. but CSR-only also sucks, and we kept writing really bad and undiagnosable bugs when we didn't have any sort of type-safety in our view layer, which i guess we did get with react SSR. just like. man. i wish we had better options here that wouldn't be hell to implement. it's hard to justify spending a bunch of time when we don't have the resources for that
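(the type-safety point, concretely: with hand-rolled templating nothing checks what the view receives, while typed JSX props catch the same mistake at compile time. a hypothetical example — none of these names are from cohost's codebase:)

```tsx
import React from "react";

// hypothetical view component
type PostProps = {
  author: string;
  publishedAt: Date;
};

function Post({ author, publishedAt }: PostProps) {
  return (
    <article>
      <b>{author}</b> posted at {publishedAt.toISOString()}
    </article>
  );
}

const data = { author: "hootOS", publishedAt: new Date() };

// frameworkless string templating: a typo'd key silently renders "undefined"
console.log(`<b>${(data as any).autor}</b>`);

// the typed version: the same typo is a compile error, not a runtime mystery
// <Post autor={data.author} publishedAt={data.publishedAt} />
//   error: property 'autor' does not exist on type 'PostProps'
```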

✨webdev✨

the worst thing is you have Bad Framework with Libraries and Good Framework with No Libraries and nothing in between. computers are a mistake

Is there any part of your stack that would make sense to colocate instead of using a cloud service, or is that too much labor overhead? Something like a small Ceph cluster for robust S3-compatible image serving instead of DO Spaces, for instance.

(there are enough people on here who have unused rack space and/or autonomous systems that i bet there could be room for Interesting Arrangements)

edit: oxide should mail you an oxide rack as an act of goodwill. new kind of computer for new kind of website
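(worth noting why the "S3-compatible" part matters: the application code barely changes, since the same S3 client pointed at a different endpoint talks to DO Spaces and Ceph RGW alike. a minimal sketch with the AWS SDK v3 — endpoints, bucket, and credentials are placeholders:)

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// DO Spaces today...
const spaces = new S3Client({
  region: "us-east-1", // Spaces ignores this, but the SDK requires a region
  endpoint: "https://nyc3.digitaloceanspaces.com",
  credentials: { accessKeyId: "PLACEHOLDER", secretAccessKey: "PLACEHOLDER" },
});

// ...or a self-hosted Ceph RGW cluster tomorrow: same client, new endpoint
const ceph = new S3Client({
  region: "us-east-1",
  endpoint: "https://rgw.example.internal", // placeholder hostname
  forcePathStyle: true, // self-hosted setups often use path-style addressing
  credentials: { accessKeyId: "PLACEHOLDER", secretAccessKey: "PLACEHOLDER" },
});

// the upload call is identical against either backend
await spaces.send(
  new PutObjectCommand({
    Bucket: "example-images", // placeholder bucket
    Key: "uploads/owl.png",
    Body: Buffer.from("not actually an image"),
  })
);
```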

from my vague Lookin' at this in the past, the parts that would be feasible to colocate wouldn't save us money, and the parts that would (maybe) save us money would be Really Hard to operate with our staffing. this calculus might change at some point.

but also yeah oxide should mail us a rack. completely agree.

in reply to @hootOS's post:

it's part of the reason why i also like exporting PNGs in 4-bit and similar super-crushed quality, it looks cool and it keeps the file sizes down.

like my goal for JPGs and PNGs is to be under 2MB at the most, but usually i shoot for under 1MB. i'll even downscale the image if i have to when i really wanna drop the file size.

it's also just personally better for me because A) it saves a shitload of space on my hard drives and SSDs, and B) i only save in high quality when i really need to, like for graphic design stuff. so ye
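(for anyone who wants this kind of crush in a script: a sketch using the sharp library, which the post doesn't name — the target width and filenames are made up. palette: true with 16 colours gives the 4-bit look, and the resize handles the downscale.)

```typescript
import sharp from "sharp";
import { stat } from "node:fs/promises";

// hypothetical crush pipeline: downscale, then quantize to a 16-colour
// (4-bit) palette, which is where most of the size savings come from
async function crush(input: string, output: string): Promise<void> {
  await sharp(input)
    .resize({ width: 1280, withoutEnlargement: true }) // shrink big sources
    .png({ palette: true, colours: 16 })
    .toFile(output);

  const { size } = await stat(output);
  console.log(`${output}: ${Math.round(size / 1024)} KiB`); // aiming well under 1MB
}

crush("owl-fullres.png", "owl-crunchy.png").catch(console.error);
```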