The Old Xbox vs GameCube Graphics War

Wow, it’s amazing how much eight years can change, and yet how much stays exactly the same…

For those not in the know, this very thread got me many more contacts in the industry, and several forum members from dozens of other websites have come to know me based on what you see below.

Of course, time has shown this thread’s age, as I confused pixel shaders and vertex shaders constantly, though the underlying point remains valid.

People tend to use performance- or, better yet, perceived performance- to justify their purchase of a product. The onslaught of negativity towards Wii as a whole, in and out of the industry today, is proof that this type of bias will never truly end.

After a good-old dive into the world of archived web images, I came across the infamous post itself that garnered several hundred posts and scores of fanboy hate on both ends of the spectrum.

The point of this message board post? To disrupt and expose the “safe haven” that is popular opinion, and show that it may not be as safe as people would think.

In the end, the only thing that mattered was the games- and it’s clear that Resident Evil 4 alone proved that GameCube was no slouch when it came to top-of-the-line graphics, as well as mature content. Neither GameCube nor Xbox was ever completely realized last generation, because PS2 was the market leader and almost always the lead platform on any major project that spanned more than one console. Direct-to-Xbox or direct-to-GCN titles stayed few and far between, and it took a complete teardown and the release of official whitepapers for each console several years later to prove that my theory was really close- the two consoles were nearly identical in prowess, with each besting the other on minor graphics features in too many places to count.

Below is the original post in its archived glory- feel free to discuss…

NOTE: images were re-sourced, and original statement(s) changed to reflect recent news on the subject at hand…

Hello all, this is your not-so-friendly neighborhood Shadow Fox.

In my time in these forums, I can’t help but notice this general observation that Xbox is the most powerful console of the next-gen systems, and some even say it’s three times more powerful (which I most certainly have yet to see in a game).

My big gripe (yes, this is a rant) is that almost everyone thinks this, or “knows” this, yet they haven’t a clue how they got this “information”. Who told you Xbox was most powerful? Did they prove it? How? The reason I say this is because every person I’ve personally met or chatted with on the boards believes Xbox is more powerful for one of two reasons:

1). The numbers in the specsheets appear higher for Xbox than GameCube, so that must mean it’s better.

2). Microsoft, or [insert magazine or website here] said so.

NOT ONCE have I actually talked to someone who believed this propaganda and had actually found out the Xbox was more powerful through a proper benchmark test, or by matching up the individual components of the machines to see how they fare against each other in their respective operations. Usually I end up talking to some guy who works at EB or something and asking what he thinks, and he says the same thing- he heard it from somewhere else, or saw it on a website that knows next to nothing about the tech of these consoles.

Best-looking console game to-date? Damn skippy. 15 million polygons per second, anyone?

So who’s to say what is most powerful?

Personally, I’m quite sure Xbox and GameCube are VERY close in terms of polygon performance and effects, after looking at the facts on each system’s abilities, though I’m led to believe that Xbox might not be as powerful as everyone thinks graphics-wise, especially since Microsoft avoided posting REALWORLD PERFORMANCE NUMBERS (the polygon performance you get in an actual game, not a demo test). Nintendo posted a very conservative realworld number of 6-12 million polys/sec, which was surpassed by one of its own launch games at 15mps (Star Wars Rogue Leader, which is still currently the most polygons displayed in a game to-date).

So, Microsoft states Xbox can push 120+ million polys/sec of RAW polygons with no effects, and Nintendo eventually posted that GameCube’s theoretical maximum was 90 million polys/sec with effects (1 texture, 1 infinite hardware light). Microsoft’s number appears a cool 30 million polys/sec higher than Nintendo’s, but why do current games barely push over 10mps on this “all powerful” Xbox, while 5 games have already matched 15mps on GameCube (starting with the Rogue Leader launch game)?
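The gap between theoretical and realworld numbers is easier to appreciate per frame. Here’s a quick back-of-the-envelope sketch using only the figures quoted above- none of these are official benchmarks:

```python
# Convert polygons-per-second claims into per-frame budgets.
# All figures are the ones quoted in the post, not official measurements.

def polys_per_frame(polys_per_sec: int, fps: int) -> int:
    """Polygon budget available for a single frame at a given framerate."""
    return polys_per_sec // fps

xbox_raw_claim = 120_000_000   # Microsoft's raw, no-effects number
gcn_max_claim  = 90_000_000    # Nintendo's theoretical max (1 texture, 1 light)
rogue_leader   = 15_000_000    # realworld figure cited for Rogue Leader

# At 60 fps, 15 million polys/sec is a 250,000-poly frame -- a small
# fraction of either console's claimed theoretical per-second ceiling.
print(polys_per_frame(rogue_leader, 60))   # 250000
print(polys_per_frame(xbox_raw_claim, 60)) # 2000000
```

The point of the arithmetic: even the best realworld figure cited here sits at roughly an eighth of either console’s paper maximum.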

For one, Microsoft’s numbers are indeed inflated. The Xbox’s fillrate is nowhere NEAR 4 Gtexels/sec (more like 250-750 Mtexels, according to developers). Xbox’s system bandwidth isn’t a true 6.4GB/sec, considering any info from the CPU to the GPU and vice-versa is bottlenecked at 1.02GB/sec- one-third of GCN’s overall system bandwidth in realtime. Xbox’s GPU also requires 16MB of the 64MB DDR just to hold a Z-buffer (which is embedded on the GCN GPU at no cost to system memory), and GCN’s internal GPU bandwidth is more than twice that of Xbox’s (25GB/sec compared to 10GB/sec). Also, Xbox claims to have more effects than GameCube, and better texturing ability in its GPU, when the XGPU can only do 4 texture layers per pass, and only 4 infinite hardware lights per pass (8 local lights can be done, also). GCN, on the other hand, boasts 8 texture layers per pass, and 8 infinite hardware lights and local lights per pass, all realtime.

What this means is that while Xbox relies on vertex shaders and pixel shaders (which, BTW, are absent from GCN hardware) to do realtime bumpmapping, the same effect is done in hardware on GameCube via its texture layers. Xbox must also spend texture layers per bumpmapped surface per scene, though.

Also, this whole processor thing is quite twisted, considering Xbox and GameCube are two TOTALLY DIFFERENT architectures (a 32/64-bit hybrid, PowerPC-native design compared to 32-bit Wintel). GameCube, having this architecture, has a significantly shorter data pipeline than Xbox’s PIII setup (4-7 stages versus up to 14), meaning it can process information more than twice as fast per clock cycle. In fact, this GCN CPU (a PowerPC 750e IBM chip) is often said to be as fast at 400mhz as a 700mhz machine. So GCN could effectively be an 849mhz machine compared to Xbox’s 733mhz, performance-wise.
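The “effective clock” arithmetic can be made explicit. A small sketch, taking the 400mhz ≈ 700mhz equivalence claimed above as the assumption (485 MHz being Gekko’s actual clock speed):

```python
# Effective-clock arithmetic from the post's own claim: a PowerPC 750 at
# 400 MHz performs like a 700 MHz machine, i.e. a 1.75x per-clock advantage.
# The equivalence is the post's claim, not a benchmark result.

def effective_clock(actual_mhz: float, per_clock_advantage: float) -> float:
    """Scale an actual clock speed by a claimed per-clock advantage."""
    return actual_mhz * per_clock_advantage

advantage = 700 / 400    # claimed per-clock advantage of the 750 design
gekko_mhz = 485          # GameCube's Gekko clock speed

print(round(effective_clock(gekko_mhz, advantage)))  # 849, vs Xbox's 733 MHz
```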

Not ONCE do you hear this stated by Microsoft’s PR, nor do you see anything Xbox can be “beat in” listed on their official specs (no realworld poly count, no realworld fillrate, no listing of simultaneous texture layers/hardware lights per pass, no mention that pixel/vertex shaders mostly do bumpmapping and skinning, commonly done in all games now)…

One of GameCube's best water displays with water refraction/reflection maps
One of Xbox's best water displays without water refraction/reflection maps

Now, don’t get me wrong; I love my Xbox, but there’s no way we’re EVER going to see games above 30 million polys/sec in this console’s lifespan, and neither will GameCube. Dead or Alive 3, a game Tecmo said “was impossible on any system other than Xbox” due to the number of polygons onscreen, is a 9-10mps game, tops. The character models (which were also claimed to be an impossibility elsewhere) consisted of 9,000 polygons each- the same number of polygons as characters in StarFox Adventures, Eternal Darkness, and even Luigi’s Mansion (end boss). Resident Evil 0, however, boasts the highest-polygon “low-end” model to-date- a whopping 25,000-poly character. Now why is this possible (even against prerendered backgrounds) on a “less technical” console? Why isn’t Xbox smothering GCN to death with games that are impossible on any other console?

Height map bumpmapping as well as DOT3 bumpmapping on every object
Bumpmapping Microsoft didn't want you to know about that's done just as well on Nintendo's system without "vertex shaders".

I’ve constantly emailed Microsoft about this, and I’ve received no response other than “thank you for your interest in our product” with a link back to that wretched specsheet. Nintendo only commented that its listed specs are realworld figures, and have been reconfirmed.

EDIT: Rare was contacted, and confirmed that StarFox Adventures does indeed display massive amounts of bumpmaps, and realtime reflection/refraction effects by directly manipulating GCN hardware. When asked about one of the largest areas in the game (Krazoa Palace) regarding fillrate and polygonal display, Rare actually stated this was one of the easier levels to get running on the GCN.

EDIT 2: Nintendo of America was contacted, and they simply replied, “Maybe, maybe not…but isn’t it the GAMES that matter?”

People say water in games like Blood Wake and Morrowind can’t be done elsewhere. I point to StarFox Adventures, and even Super Mario Sunshine. People say games like Halo have loads of bumpmapping. I point to Rogue Leader, Eternal Darkness, and Resident Evil’s character models and doors. I’ve even heard the gripe about individual blades of grass rendered in Xbox games. I once again point to StarFox Adventures, Mario Sunshine, and even the recent Legend of Zelda: The Wind Waker. Some Xbox fanboys I’ve run across have even been sore enough to say Xbox has faster loadtimes. I then point to Luigi’s Mansion and Metroid Prime, which are impossible on Xbox because they HAVE NO LOADTIMES (the game is constantly streamed from the GameCube disc in burst packets). Simply put, there’s not one effect Xbox can do that GCN can’t, while the same can’t be said in reverse, since Xbox lacks half of GCN’s hardware lights and texture layers onboard.

While I’m sure Xbox is technically capable of more on paper (if only because it has an HDD for potentially larger games, which I’ve yet to see), I’ll have to give the nod to GameCube looking at the facts- which are the games, where Xbox has only matched GameCube with RalliSport Challenge so far.

Either way, neither console can be proven more powerful than the other unless benchmarked properly, since the machines are so totally different from each other.

A word to the wise: no matter how large those numbers look on specsheets, if you don’t know what the hell they mean they should be taken with a grain of salt.

End Rant.

9 Replies to “The Old Xbox vs GameCube Graphics War”

  1. Interesting, at least. I bought both when they launched and, from casual analysis (oxymoron), didn’t see a great gap. At times I suspected that the GameCube was more powerful, or at least better balanced, when looking at the games between the two. The PS2, on the other hand, was clearly a less advanced machine — when the Cube or Box were the lead console, they shined in ways that shamed the PS2. I always felt it was unfortunate that, from a hardware perspective, the PS2 dominated the market; I continue to suspect that dev results were held back by its market domination over these two far more capable pieces of kit.

  2. I don’t believe I saw Smilebit put out a console-exclusive Panzer Dragoon Orta on the Nintendo GameCube- oh right, my mistake- that’s because it was on the original Xbox, was absolutely gorgeous, and to this day still looks better than some of the best-looking GameCube exclusives.

  3. A few points.

    The XBox is designed around the Nvidia standard for texture passes — it’s a system of 4×2 texture passes — which means the XBox outputs two sets of four textures in parallel, yielding the same result as 8 textures per pass. Further, the XBox completes a 4×2 pass in two clock cycles, while it takes the GameCube 8 cycles to complete a single pass.

    As far as graphics suites/GPUs go, it should be remembered just how powerful the NV2A was: at the time of launch there were no desktop GPUs that matched its performance. It was a hybrid of three generations of Nvidia graphics hardware (NV20, NV25 and NV30; internally the NV2A was referred to as NV27.5), with its featureset still being implemented as late as 2003 for the PC market. It was cutting-edge as a GPU with programmable shaders (allowing the XBox to do games like Chaos Theory and Riddick), while the GameCube was a fixed-function console. The XBox was effectively a DirectX 8.1 machine, while the GameCube’s capabilities were closer to DX7. These design strictures carried over to the Wii, and the lack of shader flexibility shows in games like Far Cry — the XBox version arguably appears a generation ahead of its Wii counterpart. Anecdotal, but the result would seem to match up with the differences between GPUs.

    The Gekko is also likely weaker than the PIII/Celeron Coppermine hybrid in the Xbox. A P3 at 733mhz would almost certainly outperform a PowerPC 750CXe variant in floating-point performance, especially considering that the Coppermine in the Xbox had SIMD (SSE), whereas the Gekko did not; SSE gives the Xbox a big leap in physics-intensive games, which is a big reason why the other machines never gave us anything comparable in scale and complexity to Halo or Half-Life 2. As a pure CPU solution, the Xbox again likely had the most powerful silicon of the generation.

    The GameCube also loses 16MB of its 40MB of RAM to sound and disc caching; the GameCube was a split-RAM setup, whereas the XBox was a more forward-looking unified design. The GC was thus left with a relatively paltry 24MB of RAM to work with for graphics. Though the extra 3MB of embedded 1T-SRAM helped to an extent, the overall RAM pool in the GC is significantly smaller and, yes, slower than the XBox’s DDR setup.

    You also fail to mention that the XBox output 32-bit color as a baseline for all its games, whereas the GC fluctuated between 24- and 18-bit. Further, the XBox has nearly 50 games that output at 720p, whereas the GameCube is locked at 480p.
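In cycle terms, the texture-pass claim in this comment works out as follows- a sketch using the commenter’s figures, which are disputed rather than verified hardware specs:

```python
# Texture throughput per clock cycle under the commenter's claimed figures:
# Xbox's NV2A finishes a 4x2 pass (8 textures) in 2 cycles, while the
# GameCube is claimed to need 8 cycles for its single 8-texture pass.

def textures_per_cycle(textures: int, cycles: int) -> float:
    """Average number of texture layers produced per clock cycle."""
    return textures / cycles

xbox_rate = textures_per_cycle(8, 2)  # 4.0 textures per cycle
gcn_rate  = textures_per_cycle(8, 8)  # 1.0 texture per cycle

print(xbox_rate / gcn_rate)  # a claimed 4x per-cycle advantage for Xbox
```

Note that this says nothing about absolute throughput- that would also depend on each GPU’s clock speed, which the comparison above leaves out.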

  4. Problem- what information exists to suggest that GameCube required 8 clock cycles per render pass? All of ATI’s clock cycles are variable with regard to render passes, depending on which non-free effects are applied (normal maps- trilinear and bilinear filtering are free, unlike on the XBOX, which would need 300% more bits per clock cycle on its 256-bit interface to fit the 2048 bits required).

    Programmable shaders weren’t needed for Riddick or Chaos Theory, as DOT3 bumpmapping done in games like Rogue Leader and F-Zero GX provides the same effect with the TEV pipeline in place. There’s also the fact that Chaos Theory was ported from PS2 to GameCube, so the latter system’s capabilities, with its additional 4 infinite hardware lights per render pass, were never realized.

    A lot of what you mentioned was covered in the original thread much later on, but I’m too lazy to bring up the WayBack Machine and search for it page by page- the fact that I was able to stumble upon the OP in question was miracle enough 8 years after the fact.

    Thank you for the response, theoanunnaki.

  5. forget the tech babble- go to YT and watch GC Dolphin vs HD Xboxed games. ARGUMENT OVER. GAMECUBE WAS FAR FAR BETTER THAN XBOX. SIMPLE LOGIC. RE4 in HD utterly decimates anything on Xbox, as does Metroid Prime, as do the Star Wars games, etc etc etc

    gc hit 20 million polygons at 60 fps

    xbox hit 12 million polygons at sub 30 fps

    gamecube’s CPU was far ahead of the Celeron in Xbox, and its GPU was way better- it had so much hardwired, and far far more bandwidth; the system had more hardwired compression trickery in both the GPU and the CPU, data and graphics

    even the remastered PC versions of Halo and Halo 2 with added fluff STILL DON’T COMPARE TO BOG STANDARD GC GAMES ON DOLPHIN. FACT

  6. It’s true that Xbox had programmable shaders and GameCube had fixed function, but using programmable shaders is not all pros: they give more flexibility to achieve graphical effects, but fixed function gives more speed and performance, since the functions are executed at the hardware level while programmable shaders run at the software level.

    The Xbox may have had more memory, but it lacked speed and bandwidth, which are also important factors in gaming machines, and GameCube could offset its smaller memory with S3TC compression, which not many games used. 1T-SRAM had only 5 nanoseconds of latency, while Xbox used very slow RAM shared between the CPU and GPU; GameCube had 24MB of main RAM plus 3MB embedded directly in the GPU, which in memory and bandwidth was enough for a 480p framebuffer- which raises the question of how much of those 64MB the Xbox needed for 480p, taking into account its small bandwidth.

    As for the CPU, the GameCube was using a G3, which could match P3 CPUs at lower clock speeds, not to mention that the GameCube CPU had more cache than the Xbox CPU.
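The framebuffer question in this comment can at least be bounded. A rough sketch, assuming 32-bit color and a combined 24-bit depth / 8-bit stencil buffer, double-buffered- an assumed layout for illustration, not a measured figure for either console:

```python
# Rough 480p framebuffer budget: 32-bit color, double-buffered, plus one
# 24-bit depth / 8-bit stencil buffer. Assumed layout, not a measured figure.

WIDTH, HEIGHT = 640, 480
COLOR_BYTES = 4      # 32-bit color per pixel
DEPTH_BYTES = 4      # 24-bit depth + 8-bit stencil per pixel

color_bytes = WIDTH * HEIGHT * COLOR_BYTES * 2   # front + back buffer
depth_bytes = WIDTH * HEIGHT * DEPTH_BYTES
total_mb = (color_bytes + depth_bytes) / (1024 * 1024)

print(round(total_mb, 2))  # 3.52 MB -- small next to a 64 MB unified pool
```

Under those assumptions, the 480p framebuffer itself is only a few megabytes; the bulk of either console’s RAM goes to textures, geometry, and game data, where bandwidth matters as much as capacity.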

Leave a Reply