NVIDIA box clever with Intel and AMD
Issue dual press releases that highlight the development process for the GeForce 3 drivers
It seems that NVIDIA's dominance of the market is pretty much complete now. With ATI issuing profit warnings and already facing significant competition in key areas like OEMs, consumer graphics retail and mobility chipsets, things are looking rosy for Santa Clara-based NVIDIA. Now the company is charging along, forging working relationships with both Intel and AMD, and yesterday it used the Intel Developer Forum as a springboard to launch dual press releases highlighting steps taken to optimise the GeForce 3 driver set individually for the Pentium 4 and Athlon processors respectively.

Generally speaking, GPU driver sets are geared toward a single goal: improving the performance of every feature on a graphics card under a given operating system, such as Windows Millennium or Windows 2000. Optimisations on a per-processor basis are somewhat unusual, and when they do occur they are frequently lop-sided, favouring one platform over the other. This was particularly true of graphics cards released when the Athlon was starting to make its presence known just over a year ago.

The two press releases are fairly similar, pairing identical trumpet-blowing NVIDIA boilerplate with processor-specific spiel about features and support. David Vivoli at Intel, for instance, pointed out that "The Pentium 4 processor introduced many exciting capabilities for increasing system throughput and the GeForce3 is ideally suited to take advantage of them," while Ned Finkle at AMD explained that "The NVIDIA GeForce3 provides an excellent, high-performance video option for AMD's processor customers."

According to the Intel release, the GeForce 3 GPU takes advantage of the new Intel SSE2 instruction set, "including 144 new instructions for 128-bit Single Instruction Multiple Data (SIMD) integer arithmetic and 128-bit SIMD double-precision floating point". The system also takes advantage of the "Advanced Transfer Cache for higher data throughput, a 400MHz system bus, and a Rapid Execution Engine for higher execution throughput." From the AMD corner, Ned Finkle continued that he felt "NVIDIA's use of DDR memory in the GeForce3 perfectly complements the AMD Athlon processor's 266MHz front-side bus and DDR memory speeds".

The Athlon, being a more conventional x86 processor born of the same line as the Pentium III, requires no real explanation, but Intel's rather over-complicated technobabble probably does. "SSE2", for those who missed our Pentium 4 review, stands for Streaming SIMD Extensions 2. SIMD, in turn, is short for Single Instruction Multiple Data, a way of applying a single instruction to multiple pieces of data simultaneously. With so much repetitive data manipulation involved in gaming, it's not hard to see how the GeForce 3 drivers could be tuned to take advantage of this (a minimal code sketch below illustrates the idea). The Advanced Transfer Cache is simply lower-latency, higher-bandwidth cache on the processor, and the 400MHz system bus will join it in helping memory-bandwidth-heavy tasks like games running in high resolution, 32-bit display modes.

It's actually more interesting to see the Intel optimisation than the AMD one, because based on this, one would suspect that DirectX 8 games running on a GeForce 3 and Pentium 4 combination would vastly outscore the same games running on a comparable Athlon system. Memory bandwidth limitation may even become something of a moot point; after all, the Pentium 4 isn't exactly low on bandwidth.
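For the curious, here is a minimal sketch in C of what SIMD means in practice, using Intel's SSE2 intrinsics. It is a generic illustration of one instruction operating on several packed values at once, and emphatically not anything lifted from NVIDIA's actual driver code:

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdio.h>

int main(void)
{
    /* Four pairs of 32-bit integers packed into 128-bit registers. */
    __m128i a = _mm_set_epi32(4, 3, 2, 1);
    __m128i b = _mm_set_epi32(40, 30, 20, 10);

    /* One SSE2 instruction performs all four additions at once. */
    __m128i sum = _mm_add_epi32(a, b);

    int out[4];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* prints 11 22 33 44 */
    return 0;
}

Scale that up from four additions to the millions of vertex and pixel calculations a game pushes through the driver every second, and the appeal for NVIDIA becomes obvious.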
In our upcoming review of the GeForce 3, we hope to test this supposition for ourselves, but in the meantime we'll have to rely on guesstimates.

Related Feature - GeForce 3 Preview