Originally posted by StormFront
Damn... You guys are really down on the FX5900 aren't you!
It's not all bad! Seriously! I know that the ATI offers certain things that NVidia doesn't and that it conforms more rigorously to the DX9 API than NVidia does, but there are good reasons for that. The ATI renders everything in high detail, all the time. The problem with this is that you are wasting GPU power all over the show. In real-world rendered scenes, there is very little around you that you can actually see in high detail. Most things are too far away, obscured in shadow or hidden behind something else. The NV30 paths try to take advantage of this. Sure they get it wrong, like with those dodgy 'cheat' drivers they had, but they are getting there.
Don't take my word for it, go look up some interviews with people like John Carmack. He is absolutely convinced that the FX is the better card and that it has far more potential. While talking about the FX5800 and the Radeon 9700 rendering Doom III he said (and this was 6 months ago) that the ATI was rendering it better but was maxing out its power to do so. The NVidia was slightly behind in performance but was not even breaking a sweat. The point being that once we get a correctly written driver (ATI definitely kick arse on that front) the FX 5900 will fly!
And it's pretty damned nippy now!
again... that is b$..
the nv3x cards are excellent for dx8 and below... but when it comes to dx9, or to following the standard ARB2 path in OpenGL, the nv3x cards' weaknesses are exposed...
it is not a question of being down on whatever... it is a fact... the architecture is fubar... every single dx9 test has shown this... even a TWIMTBP ("The Way It's Meant To Be Played", nvidia's own partner program) game like Tomb Raider shows this...
carmack has said no such thing about the nv3x being the better card... what he said is that, using lower precision and lower image quality, the nv3x cards are slightly faster in doom3... and that is when the nv3x cards run their own vendor-specific path... when running on the standard ARB2 path their performance is about half that of the radeons...
remember this... even though ati renders everything @ fp24 precision... it is running FASTER than the nvidia cards that render things with fp16 precision... dx9 calls for a minimum of fp24... and nvidia is dropping back down, in all cards except the nv35, to fx12, which is fixed point... not floating point...
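the precision gap is easy to see numerically. here is a quick sketch (helper names are mine; python's struct module can round-trip real IEEE half precision with the 'e' format, and the fx12 quantizer below is modelled on how that format is commonly described, signed s1.10 fixed point with 1/1024 steps -- an assumption for illustration, not a spec):

```python
import struct

def to_fp16(x):
    # round-trip a value through IEEE 754 half precision (struct 'e' format)
    return struct.unpack('e', struct.pack('e', x))[0]

def to_fx12(x):
    # fx12 as commonly described: signed 12-bit fixed point, s1.10 layout,
    # range roughly [-2, 2) in 1/1024 steps (assumption for illustration)
    step = 1.0 / 1024.0
    q = max(-2048, min(2047, round(x / step)))
    return q * step

x = 1.0 / 3.0
print(f"exact : {x:.10f}")
print(f"fp16  : {to_fp16(x):.10f}  error {abs(to_fp16(x) - x):.2e}")
print(f"fx12  : {to_fx12(x):.10f}  error {abs(to_fx12(x) - x):.2e}")
```

even fp16 already rounds a simple value like 1/3, and the fixed-point format is coarser still -- and fp24 sits above both.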
the architecture with 4 pipelines and 2 tmus on the nv30/35 is all well and good in most situations, but ati's 8x1 pipeline setup on its high-end cards (and 4x1 on the midrange) is faster, as has been shown...
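the pipeline-layout argument comes down to back-of-the-envelope fill-rate arithmetic (clock speeds below are approximate retail figures, used for illustration only):

```python
# theoretical fill rates from pipeline layout: one pixel per pipe per clock,
# one texel per TMU per clock (a simplification -- real throughput varies)
def fill_rates(pipes, tmus_per_pipe, clock_mhz):
    pixel_mps = pipes * clock_mhz                   # Mpixels/s
    texel_mts = pipes * tmus_per_pipe * clock_mhz   # Mtexels/s
    return pixel_mps, texel_mts

r350 = fill_rates(8, 1, 380)  # radeon 9800 pro: 8x1 @ ~380 MHz
nv35 = fill_rates(4, 2, 450)  # geforce fx 5900 ultra: 4x2 @ ~450 MHz
print("9800 pro (Mpixel/s, Mtexel/s):", r350)
print("fx 5900  (Mpixel/s, Mtexel/s):", nv35)
```

the 4x2 layout holds its own on raw texel rate, which is why it does fine on dx8-style multitexturing... but a shader-heavy dx9 frame is closer to one result per pipe per clock, and that is where 8x1 pulls ahead.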
heck, the 9600pro is faster than the nv35 in pure dx9 situations... this has nothing to do with drivers or whatever else... it is broken hardware, as has been stated for a long time already by everyone except nvidia's pr department...
global shader replacement may sound well and good to you storm, but consider this... people are spending $400-500 on the high-end cards... do you REALLY expect people who spend that kind of money to settle for anything less than the best?
like I said... if you want a dx8 card... get nvidia... for dx9, unless nvidia pulls a rabbit out of the hat in 2004 with the nv40, their dx9 and above performance is still going to be fubar... even the nv38 is no more than a refresh of the nv35 (as is the r360, though it will include better shadowing technology for games such as doom3)
btw... occlusion culling already occurs on the high-end cards... both nvidia and ati already have algorithms that detect obscured objects and limit rendering work on them... this has been happening for a while... and is done by other makers as well... no power is wasted...
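the "no power is wasted" point can be shown with a toy early depth test (a conceptual sketch in plain python, not actual driver or hardware logic; the scene and numbers are made up): fragments that fail the depth test never reach the expensive shading step.

```python
# toy early depth test: occluded fragments are rejected before shading,
# so no shader work is spent on them (conceptual sketch, not driver code)
W = 4
depth = [[1.0] * W for _ in range(W)]  # depth buffer, 1.0 = far plane
shaded = 0                             # fragments that actually ran the shader

def draw_fullscreen_quad(z):
    global shaded
    for y in range(W):
        for x in range(W):
            if z < depth[y][x]:   # depth test: is this fragment visible?
                depth[y][x] = z
                shaded += 1       # only visible fragments pay for shading

draw_fullscreen_quad(0.5)  # near quad: all 16 fragments shaded
draw_fullscreen_quad(0.9)  # far quad, fully occluded: 0 fragments shaded
print("shaded", shaded, "of", 2 * W * W, "submitted fragments")
```

the second quad submits 16 fragments but shades none of them... which is exactly the "high detail everywhere" work that storm claims ati is wasting.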
I am going to mention one last thing... concerning carmack... the 9700pro was never running out of power... anyone who knows carmack knows he likes pushing the envelope... he was seeing how far he could go with instructions... and he hit the limit on what the 9700pro could do while testing... it did not run out of power... it simply ran out of instruction slots... the nv30, on the other hand, can handle... in principle... more instructions...
here is the problem with that though... it doesn't MATTER how many instructions the card can run... it is how FAST it can run them... eg... the arb2 path...
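the capacity-vs-speed point in one trivial model (every number below is hypothetical and purely illustrative -- instruction limits, rates and fragment counts are not measured specs of either card):

```python
# a card that accepts longer shaders can still be slower if it executes
# fewer shader instructions per clock (hypothetical numbers throughout)
def frame_shader_clocks(instr_per_shader, instr_per_clock, fragments):
    return fragments * instr_per_shader / instr_per_clock

frags = 1_000_000                                   # fragments per frame
card_a = frame_shader_clocks(60, 8, frags)  # shorter limit, full-rate path
card_b = frame_shader_clocks(60, 4, frags)  # longer limit, half-rate path
print("card A clocks per frame:", card_a)
print("card B clocks per frame:", card_b)
```

card B can accept a longer shader than card A ever could... and still takes twice the clocks to push the same 60-instruction shader through a frame.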
why is it that all these game developers are spending 5-10 times more time developing special code paths for nvidia cards, lowering precision and replacing shaders with lower-level ones? it is because the cards CAN'T HANDLE the workload...
I will reiterate... there is nothing that nvidia can do short of a new architecture that will bring them back up... and that is not going to happen this year...
I personally want members of this forum to get the best experience possible for the money they spent... if you feel so inclined to blow your money on an nv3x card @ this time.. feel free... I will not be recommending any nvidia product other than gf4 ti's to anyone anytime soon...