Wow, it’s amazing how much 8 years can change, yet stay exactly the same…
For those not in the know, this very thread got me many more contacts in the industry, and several forum members from dozens of other websites have come to know me based on what you see below.
Of course, time has shown this thread’s age- I repeatedly confused pixel shaders and vertex shaders throughout- though the underlying point remains valid.
People tend to use performance, or better yet, perceived performance, to justify their purchase of a product. The onslaught of negativity towards Wii as a whole, in and out of the industry today, is proof that this type of bias will never truly end.
After a good-old dive into the world of archived web images, I came across the infamous post itself that garnered several hundred posts and scores of fanboy hate on both ends of the spectrum.
The point of this message board post? To disrupt and expose the “safe-haven” that is popular opinion, and shed light that it may not be as safe as people would think.
In the end, the only thing that mattered was the games- and it’s clear that Resident Evil 4 alone proved that GameCube was no slouch when it came to top-of-the-line graphics, as well as mature content. Neither GameCube nor Xbox was ever completely realized last generation because PS2 was the market leader, and almost always the lead platform on any major project that spanned more than one console. Titles built directly for Xbox or GCN stayed few and far between, and it took a complete teardown and the release of official whitepapers for each console several years later to prove my theory was close- the two consoles were nearly identical in prowess, with each holding minor graphics-side advantages over the other in too many places to count.
Below is the original post in its archived glory- feel free to discuss…
NOTE: images were resourced, and original statement(s) changed to reflect recent news on the subject at hand…
Hello all, this is your not-so-friendly neighborhood Shadow Fox.
In my time in these forums, I can’t help but notice this general observation that Xbox is the most powerful console of the next-gen systems, and some even say it’s 3-times more powerful (which I most certainly have yet to see in a game).
My big gripe is (yes, this is a rant), that almost everyone thinks this, or “knows” this, yet they haven’t a clue how they got this “information”. Who told you Xbox was most powerful? Did they prove it? How? The reason why I say this is because every person I’ve personally met or chatted with on the boards believes Xbox is more powerful because of one of two reasons:
1). The numbers in the specsheets appear higher for Xbox than GameCube, so that must mean it’s better.
2). Microsoft, or [insert magazine or website here] said so.
NOT ONCE have I actually talked to someone believing this propaganda who actually found out the Xbox was more powerful through a proper benchmark test, or by matching up individual components of the machines to see how they fare against each other in their respective operations. Usually I end up talking to some guy who works at EB or something, ask him what he thinks, and he says the same thing- he heard it from somewhere else, or saw it on a website that knows next to nothing about the tech of these consoles.
So who’s to say what is most powerful?
Personally I’m quite sure Xbox and GameCube are very close in terms of polygon performance and effects, after looking at the facts on each system’s abilities, though I’m led to believe that Xbox might not be as powerful as everyone thinks graphics-wise, especially since Microsoft avoided posting REAL-WORLD PERFORMANCE NUMBERS (the polygon performance you get in an actual game, not a demo test). Nintendo posted a very conservative real-world number of 6-12 million polys/sec, which was surpassed by one of its own launch games at 15mps (Star Wars Rogue Leader, which still holds the record for most polygons displayed in a game to date).
So, Microsoft states Xbox can push 120+ million polys/sec with no effects as RAW polygons, and Nintendo eventually posted that GameCube’s theoretical maximum was 90 million polys/sec with effects (1 texture, 1 infinite hardware light). Microsoft’s numbers appear a cool 30 million polys/sec higher than Nintendo’s, but why do current games barely push over 10mps on this “all powerful” Xbox, while 5 games have already matched 15mps on GameCube (starting with the Rogue Leader launch game)?
For one, Microsoft’s numbers are indeed inflated. The Xbox’s fillrate is nowhere NEAR 4 Gtexels/sec (more like 250-750 Mtexels, according to developers). Xbox’s system bandwidth isn’t a true 6.4GB/sec either, considering any info passing between the CPU and GPU is bottlenecked at 1.02GB/sec- roughly one-third of GCN’s overall system bandwidth in realtime. Xbox’s GPU also requires 16MB of the 64MB DDR just to hold its Z-buffer (which is embedded on the GCN GPU at no cost to system memory), and GCN’s internal GPU bandwidth is more than twice Xbox’s (25GB/sec compared to 10GB/sec). Xbox also claims to have more effects than GameCube and better texturing ability in its GPU, when the XGPU can only do 4 texture layers per pass and only 4 infinite hardware lights per pass (8 local lights can be done, also). GCN, on the other hand, boasts 8 texture layers per pass, and 8 infinite hardware lights and local lights per pass, all realtime.
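To make the arithmetic behind those claims explicit, here is a quick tally using only the figures quoted above- these are the post’s own claims, not measured benchmarks:

```python
# Tally of the memory and bandwidth figures claimed above.
# All numbers are the post's own claims, not measurements.

xbox_total_ram_mb = 64        # Xbox unified DDR pool
xbox_zbuffer_mb = 16          # claimed Z-buffer cost in main memory
xbox_usable_mb = xbox_total_ram_mb - xbox_zbuffer_mb

xbox_cpu_gpu_gbps = 1.02      # claimed CPU<->GPU bottleneck
gcn_system_gbps = 3 * xbox_cpu_gpu_gbps   # "one-third of GCN's" implies ~3x

xbox_gpu_internal_gbps = 10   # claimed Xbox internal GPU bandwidth
gcn_gpu_internal_gbps = 25    # claimed GCN internal GPU bandwidth

print(f"Xbox RAM left after Z-buffer: {xbox_usable_mb} MB")            # 48 MB
print(f"Implied GCN system bandwidth: {gcn_system_gbps:.2f} GB/sec")   # 3.06
print(f"Internal GPU bandwidth ratio: {gcn_gpu_internal_gbps / xbox_gpu_internal_gbps}x")  # 2.5x
```

Whether those input figures are accurate is exactly the kind of thing a proper benchmark would settle; the tally only shows the claims are internally consistent.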
What this means is that while Xbox relies on vertex shaders and pixel shaders (which, BTW, are absent from GCN hardware) to do realtime bumpmapping, the same effect is done in hardware on GameCube via its texture layers. Xbox must also burn texture layers on each bumpmapped surface per scene, though.
Also, this whole processor comparison is quite twisted considering Xbox and GameCube are two TOTALLY DIFFERENT architectures (a 32/64-bit hybrid, native PowerPC, compared to 32-bit Wintel). GameCube’s CPU has a significantly shorter data pipeline than Xbox’s PIII setup (4-7 stages versus up to 14), meaning it can get noticeably more work done per clock cycle. In fact, the GCN CPU (a PowerPC 750-derived IBM chip) is often said to perform like a 700mhz machine while clocked at only 400mhz; scale that ratio up to Gekko’s actual 485mhz clock and GCN performs like an 849mhz machine against Xbox’s 733mhz, performance-wise.
Not ONCE do you hear this fact stated by Microsoft’s PR, nor do you see anything listed that Xbox can be “beat in” on their official specs (no realworld poly count, no realworld fillrate, no listing of simultaneous texture layers/hardware lights per pass, no mention that pixel/vertex shaders mostly do the bumpmapping and skinning commonly done in all games now)…
Now, don’t get me wrong; I love my Xbox, but there’s no way we’re EVER going to see more than 30 million poly/sec games in this console’s lifespan, and neither will GameCube. Dead or Alive 3, a game Tecmo said “was impossible on any system other than Xbox” due to the amount of polygons onscreen, is a 9-10mps game, tops. The character models (which were also claimed to be an impossibility elsewhere) consisted of 9,000 polygons each- the same amount of polygons in characters in StarFox Adventures, Eternal Darkness, and even in Luigi’s Mansion (end boss). Resident Evil 0, however, boasts the highest polygonal “low-end” model to-date- a whopping 25,000-poly character. Now why is this possible (even against prerendered backgrounds) on a “less technical” console? Why isn’t Xbox smothering GCN to death with games that are impossible to do on any other console?
I’ve repeatedly emailed Microsoft about this, and I’ve received no response other than “thank you for your interest in our product” with a link back to that wretched xbox.com. Nintendo only commented that its listed specs are realworld figures, and reconfirmed them.
EDIT: Rare was contacted, and confirmed that StarFox Adventures does indeed display massive amounts of bumpmaps, and realtime reflection/refraction effects by directly manipulating GCN hardware. When asked about one of the largest areas in the game (Krazoa Palace) regarding fillrate and polygonal display, Rare actually stated this was one of the easier levels to get running on the GCN.
EDIT 2: Nintendo of America was contacted, and they simply replied, “Maybe, maybe not…but isn’t it the GAMES that matter?”
People say water in games like Bloodwake and Morrowind can’t be done elsewhere. I point to StarFox Adventures, and even Super Mario Sunshine. People say games like Halo have loads of bumpmapping. I point to Rogue Leader, Eternal Darkness, and Resident Evil’s character models and doors. I’ve even heard the gripe about individual blades of grass rendered on Xbox games. I once again point to StarFox Adventures, Mario Sunshine, and even the recent Legend of Zelda: The Wind Waker. Some Xbox fanboys I’ve run across have even been sore enough to say Xbox has faster loadtimes. I then point to Luigi’s Mansion and Metroid Prime, which are impossible on Xbox because they HAVE NO LOADTIMES (the game is constantly streamed from the GameCube disc in burst packets). Simply put, there’s not one effect Xbox can do that GCN can’t, while this can go the other way since Xbox lacks half of GCN’s hardware lights and texture layers onboard.
While I’m sure Xbox is technically capable of more on paper (simply because it has an HDD for potentially larger games, which I’ve yet to see used), I’ll have to give the nod to GameCube looking at the facts- namely the games, which Xbox has so far matched only with Rallisport Challenge.
Either way, neither console can be proven more powerful than the other unless benchmarked properly, since the machines are so totally different from each other.
A word to the wise: no matter how large those numbers look on specsheets, if you don’t know what the hell they mean they should be taken with a grain of salt.