jkap

CEO of posting

butch jewish dyke
part of @staff, cohost user #1
married to @kadybat

This user can say it
🐘 mastodon
xoxo.zone/@jkap
🖼️ icon credit
twitter.com/osmoru
🐦 twitter
not anymore lol

in reply to @jkap's post:

I have nothing useful to contribute to the question other than my production database normally hovers around 1.5% CPU utilization, spiking up to 98-100% (on a fairly beefy AWS m4.2xlarge instance) when I put it under duress by sending it lots of requests during peak traffic times. I also have no idea if this is good or bad.

The load average is probably more insightful

but with a database, the answer is usually that sustained periods of 98-100% are the problem, because there's no headroom left for other tasks

honestly, you do kinda want to max out the cpu, as it's usually network or disk latency that causes slowdowns. at the same time, 100% cpu might mean there's some errant query up to no good

personally, i'd find more useful answers in pg_stat_activity: seeing if there are long-running queries, or queries waiting on locks
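for the curious, a pg_stat_activity check along those lines might look something like this (a sketch, assuming Postgres 10+ where the wait_event columns exist; the 5-minute threshold is an arbitrary example):

```sql
-- non-idle sessions that have been running for a while,
-- plus what (if anything) they're waiting on
SELECT pid,
       now() - query_start AS duration,
       state,
       wait_event_type,   -- e.g. 'Lock', 'IO'; NULL means running on CPU
       wait_event,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND query_start < now() - interval '5 minutes'
ORDER BY duration DESC;
```

a row with wait_event_type = 'Lock' is a query stuck behind another transaction rather than burning CPU, which is exactly the distinction raw utilization numbers can't show you.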

This covers most of what I'd reply.

Another thing I encountered at my job is that if only one core goes to 100% on a long query, you could check if you can parallelize it. The planner should do it automagically, but we found that it can go wrong.
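Checking whether a query actually went parallel might look like this sketch (big_table is a hypothetical table name; the settings shown are real Postgres GUCs that commonly prevent parallel plans):

```sql
-- run the query under EXPLAIN and look for "Gather" nodes
-- and "Workers Launched" in the output
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM big_table;

-- settings that commonly keep the planner from parallelizing
SHOW max_parallel_workers_per_gather;  -- 0 disables parallel query entirely
SHOW min_parallel_table_scan_size;     -- small tables won't be parallelized

-- for testing, nudge the planner in the current session only
SET max_parallel_workers_per_gather = 4;
```

Note that "Workers Planned" and "Workers Launched" can differ: if the worker pool (max_parallel_workers) is exhausted, the query silently runs with fewer cores than planned, which looks a lot like "one core at 100%".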