vogon

the evil "Website Boy"

member of @staff, lapsed linguist and drummer, electronics hobbyist

zip's bf

no supervisor but ludd means the threads any good


twitter (inactive): twitter.com/vogon
bluesky: if bluesky has a million haters I am one of them, if bluesky has one hater that's me, if bluesky has no haters then I am no more on the earth (more details: https://cohost.org/vogon/post/1845751-bonus-pure-speculati)
irl: seattle, WA

vogon
@vogon

if you turn off the hardware power limits, it happily turns itself up to 6ghz, burns 400 watts, and thermal throttles, even with a 360mm x 60mm radiator

https://www.youtube.com/watch?v=rKE__VyrPII

this is a computer for nobody
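
(sidebar: if you want to watch your own chip do this, here's a minimal sketch for sampling package power through linux's intel-rapl powercap interface -- the sysfs paths below are the stock layout, and energy_uj usually needs root)

```python
# minimal sketch: read the firmware power limits and sample average
# package draw via linux's intel-rapl powercap interface.
# assumes the stock sysfs layout; energy_uj usually needs root.
import time

RAPL = "/sys/class/powercap/intel-rapl:0"  # CPU package 0

def read_int(path: str) -> int:
    with open(path) as f:
        return int(f.read())

# long-term (PL1) and short-term (PL2) limits, reported in microwatts
pl1 = read_int(f"{RAPL}/constraint_0_power_limit_uw") / 1e6
pl2 = read_int(f"{RAPL}/constraint_1_power_limit_uw") / 1e6
print(f"PL1 = {pl1:.0f} W, PL2 = {pl2:.0f} W")

# energy_uj is a cumulative counter in microjoules; two samples taken
# a second apart give average watts (ignoring counter wraparound)
e0 = read_int(f"{RAPL}/energy_uj")
time.sleep(1.0)
e1 = read_int(f"{RAPL}/energy_uj")
print(f"average package draw: {(e1 - e0) / 1e6:.1f} W")
```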



in reply to @vogon's post:

yeah I helped @ror build a new computer literally last night, and on the way home I was reading reviews for the 7700X talking about "but just wait for raptor lake, which could blow it away!" and man, not like this lmao

does this happen in cycles, where they push a technology to its hottest, most power-hungry limits until the only way to 'innovate' is to make a more efficient chip, and then that tech takes over and the cycle continues?

but also im sure they are not concerned with power draw, so they can sell power-efficient chips separately for people who do care.

I think the reverse is possible, too:

They start on a new process node, and don't understand it fully. In order to get good performance, they push the power budget as high as it will go, and overcome inefficient use of the new tech with raw power.

Then, in the next generation on the same underlying process node, they've understood the node and optimized for it enough that the product is both powerful and very efficient.

But since they've gotten everything out of the node they can, the next generation has to push to the next process node, which they don't understand enough to make efficient yet, resulting in another inefficient product. Etc.

I think what you've described happens too, but it's not every case. I'd find it interesting to make a graphic or something covering all the CPU/GPU generations, with power use, power efficiency, and underlying process node, to see how this lines up in general. There's also definitely a deliberate choice of default power budget depending on how the product is marketed (like, if you can't be the most powerful GPU, lowering the power budget and being the most efficient one instead could be a good ploy).
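
a rough sketch of what that graphic could start as -- every number in it is a placeholder, to be replaced with real figures from reviews:

```python
# rough sketch of that graphic: rated power and efficiency per
# generation, annotated with process node. every number below is a
# placeholder -- swap in real review data before reading anything
# into the shape of the bars.
import matplotlib.pyplot as plt

# (generation label, process node, rated power in W, relative perf/W)
# -- all placeholders
generations = [
    ("gen 1", "node A", 125, 1.0),
    ("gen 2", "node A", 105, 1.4),
    ("gen 3", "node B", 170, 1.2),
    ("gen 4", "node B", 140, 1.7),
]

labels = [f"{g[0]}\n{g[1]}" for g in generations]
power = [g[2] for g in generations]
eff = [g[3] for g in generations]

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.bar(labels, power)
ax1.set_ylabel("rated power (W)")
ax2.bar(labels, eff)
ax2.set_ylabel("perf per watt (relative)")
fig.suptitle("placeholder data: power vs. efficiency across nodes")
plt.show()
```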

in reply to @vogon's post:

frankly ridiculous that upgrading your CPU from an i5-12500k to an i9-13900k will mean you have to buy a new PSU, a new motherboard, probably a new CPU cooler and a new, more airflow-ey case to fit it in. and it will be so, so much louder.

even funnier: this awful thing has 16 "efficiency" cores and only 8 big cores
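
(aside: on linux you can read that split straight off a hybrid part -- these sysfs nodes come from the per-core-type PMUs and only exist on hybrid CPUs)

```python
# quick sketch (linux only): on a hybrid intel part the kernel
# registers separate PMUs for each core type, and their sysfs nodes
# list which logical CPUs belong to each. absent on non-hybrid CPUs.
for name, path in [("P-cores", "/sys/devices/cpu_core/cpus"),
                   ("E-cores", "/sys/devices/cpu_atom/cpus")]:
    try:
        print(name, "->", open(path).read().strip())
    except FileNotFoundError:
        print(name, "-> (not a hybrid CPU)")
```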

the first skus they released with the big-little design were a disaster: the big cores had AVX512 execution units and the small cores didn't, so you could either disable the small cores or disable AVX512 support. if you tried to have both, your threads would get cross-scheduled spuriously and crash on unsupported instructions, because intel never implemented microcode to emulate AVX512 on the small cores.
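
(the blunt userspace workaround for that kind of cross-scheduling is cpu affinity: nail the thread to cores you know have the instructions. a minimal sketch -- the core ids here are an assumption, check lscpu for your machine's actual layout)

```python
# minimal sketch (linux only): pin the current process to a fixed set
# of cores so the scheduler can't migrate its threads onto a core
# that's missing an instruction set. the ids are an assumption -- on
# alder lake the P-cores' logical CPUs typically come first, but
# verify with lscpu on your machine.
import os

P_CORE_CPUS = set(range(0, 16))  # assumed: 8 P-cores x 2 threads each

os.sched_setaffinity(0, P_CORE_CPUS)  # pid 0 = the calling process
print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```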

intel's solution to this was to send out an """update""" that permanently disabled AVX512 on every performance core.

the 13900K is listed as not even having AVX512 support at all, lmao
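
(easy to verify from userspace on linux -- a quick sketch of which logical CPUs still advertise avx512f in /proc/cpuinfo; after that microcode change the answer should be none, E-cores off or not)

```python
# quick sketch (linux only): which logical CPUs advertise avx512f in
# /proc/cpuinfo? on post-update alder lake / raptor lake parts this
# should print "none" whether or not the E-cores are disabled.
flags_by_cpu = {}
cpu = None
for line in open("/proc/cpuinfo"):
    key = line.split(":")[0].strip()
    if key == "processor":
        cpu = int(line.split(":")[1])
    elif key == "flags" and cpu is not None:
        flags_by_cpu[cpu] = set(line.split(":", 1)[1].split())

avx512_cpus = sorted(c for c, f in flags_by_cpu.items() if "avx512f" in f)
print("logical CPUs advertising avx512f:", avx512_cpus or "none")
```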

i wonder if the silicon is still there

intel is getting thrashed by AMD so hard that they're basically doing the equivalent of running the 100-meter dash while doused in flaming kerosene just to put up good benchmarks. it would be sad if they didn't suck so much