what is considered "high CPU usage" for postgresql? asking for a friend (the friend is eggbug)
that can't be right, eggbug wrote like 80% of postgresql
I have nothing useful to contribute to the question other than that my production database normally hovers around 1.5% CPU utilization, spiking up to 98-100% (for a fairly beefy AWS m4.2xlarge instance) when I put it under duress by sending it lots of requests during peak traffic times. I also have no idea if this is good or bad.
The load average is probably more insightful
but well, with a database, the answer is usually "sustained periods of 98-100%" because there's no headroom left for other tasks
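if you want a rough database-side analogue of load average, counting backends in the 'active' state is a sketch of one (this assumes you can still get a connection while things are busy):

  SELECT count(*) AS active_backends
  FROM pg_stat_activity
  WHERE state = 'active';

compare that against your vCPU count (8 on an m4.2xlarge); it counts backends doing work rather than strictly on-CPU time, so it over-reads when queries are stuck on locks or disk, but sustained counts well above the core count mean work is queueing.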
honestly, you do kinda want to max out on cpu, as it's usually network or disk latency that causes slowdowns. at the same time, 100% cpu might mean that there's some errant query up to no good
personally, i'd find more useful answers in pg_stat_activity: seeing if there are long-running queries, or queries waiting on locks
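something like these two, for instance; a sketch assuming postgres 9.6+ (that's when wait_event and pg_blocking_pids showed up), with left() just truncating query text for readability:

  -- longest-running non-idle queries first
  SELECT pid, now() - query_start AS runtime,
         state, wait_event_type, wait_event,
         left(query, 60) AS query
  FROM pg_stat_activity
  WHERE state <> 'idle'
  ORDER BY query_start;

  -- who is blocked, and by which pids
  SELECT pid, pg_blocking_pids(pid) AS blocked_by,
         left(query, 60) AS query
  FROM pg_stat_activity
  WHERE cardinality(pg_blocking_pids(pid)) > 0;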
This was going to be my reply.
Also, unless I am out to lunch, check memory use too, as that has a huge impact on CPU time; combine it with disk/network latency for hilarity.
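One cheap way to see the memory/disk interaction: sorts and hashes that don't fit in work_mem spill to temp files, and pg_stat_database keeps counters for that (cumulative since the last stats reset, so watch the deltas):

  -- temp file spill per database; big numbers suggest work_mem is too small
  SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS spilled
  FROM pg_stat_database
  WHERE datname IS NOT NULL
  ORDER BY temp_bytes DESC;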
yep! tracking P99 / P90 stats for CPU / mem / etc is also super helpful in debugging issues
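postgres itself won't give you true percentiles (those usually come from external monitoring), but the pg_stat_statements extension gets you per-query mean / max / stddev, which is the next best thing; it has to be loaded via shared_preload_libraries, and the column names below assume postgres 13+ (older versions call them total_time etc.):

  -- the queries eating the most total execution time, in milliseconds
  SELECT calls, round(mean_exec_time::numeric, 1) AS mean_ms,
         round(max_exec_time::numeric, 1) AS max_ms,
         left(query, 60) AS query
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 10;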
This covers most of what I'd reply.
Another thing I encountered at my job is that if only one core goes to 100% on a long query, you could check if you can parallelize it. The planner should do it automagically, but we found that it can go wrong.
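If you want to check, EXPLAIN (ANALYZE) on the slow query shows whether a Gather node appears and how many workers were planned vs actually launched; these are the settings we usually look at when it refuses to parallelize (big_table is just a stand-in for your own query):

  EXPLAIN (ANALYZE)
  SELECT count(*) FROM big_table;

  -- common reasons the planner skips parallelism
  SHOW max_parallel_workers_per_gather;  -- 0 disables it outright
  SHOW min_parallel_table_scan_size;     -- small tables aren't worth workers
  SHOW parallel_setup_cost;              -- high cost estimates deter the planner

Worth knowing too that some things force a serial plan regardless, e.g. queries that write data or call PARALLEL UNSAFE functions.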