I'll wait for a boffin here, but my hunch is that the actual low-level architecture of the chip is different, such that a GPU is more "rugged" and can stand more heat. After all, there are similar variations between Intel and AMD CPUs, albeit less marked, where an AMD will be able to run hotter before it starts to fry, I believe (or is it just that they are cranked up and run hotter anyway?)
Anyway, I await an educated opinion to (maybe) bear me out.
Running any piece of high-density silicon above 90 °C is an invitation to an early death. Failure rate is an exponential function of temperature, and 90 °C is roughly the knee of the curve.
If the junction temperatures truly are running at 120 °C, don't expect the GPU to last much past the warranty expiration date. But then early failures of Nvidia GPUs are the reason I switched back to ATI products.
On the other hand, I really doubt they are running the die that hot. Water boils at 100 °C. The solder used in electronics starts getting soft at 145 °C. Nobody in the business could be that stupid... Or could they?
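To put a rough number on that "exponential function of temperature" point, here is a minimal sketch using an Arrhenius-style acceleration model. The 0.7 eV activation energy is a commonly assumed figure for silicon wear-out mechanisms, not something from this thread, so treat the result as illustrative only.

```python
# Rough illustration of why failure rate rises exponentially with temperature.
# Assumes an Arrhenius-style acceleration model; the 0.7 eV activation energy
# is a typical assumption for silicon wear-out, not a measured value.
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c: float, t_stress_c: float, ea_ev: float = 0.7) -> float:
    """How many times faster failures accumulate at t_stress_c than at t_use_c."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Example: a die held at 120 °C vs. one held at 90 °C
print(acceleration_factor(90, 120))  # roughly 5.5x faster wear-out under these assumptions
```

So even a 30 °C bump in junction temperature can plausibly cut the expected lifetime by a factor of several, which is why running near 120 °C looks so risky.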
1: The GPU is running less crucial information than a CPU. The CPU obviously has more going on at once than the GPU: whatever the GPU needs from it, plus the OS and other things running in the background all at once.
2: Maybe the architecture of the chip on the GPU is not as well designed as on a CPU, so it reaches higher temps, which forces them to use a different process to build the chips so that they can last longer.
I wouldn't necessarily trust that. It's best to attach a separate temperature probe to the GPU yourself and monitor the temperature with that instead; a lot of onboard monitoring readings can be very inaccurate.
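For what it's worth, if you do log both the onboard sensor and an external probe under the same load, you can estimate how far off the onboard reading is. The sketch below is plain Python over hypothetical paired readings, not tied to any particular monitoring tool.

```python
# Estimate the offset between an onboard GPU sensor and an external probe.
# The readings below are hypothetical; log your own pairs under the same load.
onboard_c = [78.0, 81.0, 85.0, 88.0]   # values reported by the onboard sensor
probe_c   = [84.5, 87.0, 91.5, 94.0]   # values measured by the external probe

offsets = [p - o for o, p in zip(onboard_c, probe_c)]
avg_offset = sum(offsets) / len(offsets)

print(f"Onboard sensor reads about {avg_offset:.1f} °C low on average")
```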
Ep, glad to see you come back and tidy up... did want to ask a one-day favor. I want to enhance my resume, and was hoping you could make me administrator for a day; if so, take me right off, since I won't be here to do anything and don't know the slightest about the board, but it would be nice putting "served administrator OSNN" on it. If you can do it, THANKS