Showing posts with label AMD. Show all posts

Monday, August 30, 2010

AMD decides it will retire ATI brand by 2011



It’s been four years since AMD bought ATI, and almost everyone who had apprehensions about the deal then has by now resigned themselves to it, and for the most part they are happy with the way AMD has maintained ATI’s gamer appeal to this date, and how it shielded the brand from Nvidia’s ruthless market tactics. While many industry experts felt $5.4 billion was simply too much to invest in a graphics company when, for a similar amount, AMD could have built its own, the popularity of the ATI brand was perhaps the number one factor in favour of an acquisition over internal development. Things are different now: ATI is at the top of its game, having even edged out Nvidia as the market leader in discrete GPU shipments, and AMD is doing really well with its chipsets and integrated graphics. Things will apparently change some more soon, as AMD has revealed it will be doing away with the ATI brand.
Surprised? Well, not everyone is, as AMD has certainly been busy this past year, wresting the crown from Nvidia and simplifying its brand structure in the form of AMD Vision, a logo a PC gets if both its CPU and GPU are manufactured by AMD. Vision, Vision Premium, Vision Ultimate, and Vision Black are the price tiers, from entry-level to top-end.
AMD says it has some solid market research (at least, as solid as market research can ever be) to show why it has decided to phase out the ATI brand by Q1 2011 and retain only the Radeon and FirePro brands (thus making them AMD Radeon and AMD FirePro).
Here is a look at AMD’s internal research:
These points all certainly make sense from a marketing point of view, but whether the change will really connect with gamers’ hearts is the real question. One thing is for sure, though: AMD certainly benefited from ATI’s expertise and brand name, and for gamers it was a win-win situation, with AMD-ATI and Nvidia competing head to head to produce some seriously killer architectures. ATI will live on in the mass consciousness, leaving a tangible vacuum as it goes out at its very zenith.
Let us know what you think about AMD’s move in the comments section below.
Also check out some more research from AMD below that apparently justifies the move, as well as the new logos of AMD’s graphics products:


Saturday, August 28, 2010

AMD reveals upcoming ATI Southern Islands GPU codenames in ATI Catalyst 10.8 drivers



AMD released its ATI Catalyst 10.8 drivers recently, and now some codenames of upcoming GPUs have been found listed in them. What's perplexing is that even though everyone knows the Southern Islands GPUs are the next slated release, these codenames have NI, or Northern Islands, attached to them, even on names that are obviously Caribbean islands, such as Antilles, Barts, Caicos, Cayman, and Turks. Whistler, Blackcomb, and Seymour, the remaining names on the list, are mountains in British Columbia rather than islands. The consensus so far is that the new cards listed in the atiicdxx.da_ file are the codenames of what's coming up next, i.e., 40nm parts, and we can expect the temporarily shelved 32nm Northern Islands GPUs much later.
While a lot of correlation with the naming schemes of the current generation of ATI GPUs has revealed some interesting facts about the revealed graphics cards, no specifications are known yet. Ten Cayman cards, with their GL prefix, are thought to be part of ATI's professional FirePro line. Of the remaining, XT would be the top end of the series (like the HD 5870), PRO the mid-range (like the HD 5850), and LE the low end (like the HD 5830). LP is thought to stand for low power, or it could simply be a refresh of the LE tag. Finally, the Gemini prefix is thought to indicate dual GPUs. The GPUs are probably not limited to just desktop, workstation, and notebook graphics; some may refer to embedded and FireStream products as well. Check out the entire list below:
223,CAYMAN GL XT (6700),NI CAYMAN
224,CAYMAN GL XT (6701),NI CAYMAN
225,CAYMAN GL XT (6702),NI CAYMAN
226,CAYMAN GL XT (6703),NI CAYMAN
227,CAYMAN GL PRO (6704),NI CAYMAN
228,CAYMAN GL PRO (6705),NI CAYMAN
229,CAYMAN GL (6706),NI CAYMAN
230,CAYMAN GL LE (6707),NI CAYMAN
231,CAYMAN GL (6708),NI CAYMAN
232,CAYMAN GL (6709),NI CAYMAN
233,CAYMAN XT (6718),NI CAYMAN
234,CAYMAN PRO (6719),NI CAYMAN
235,ANTILLES PRO (671C),NI CAYMAN
236,ANTILLES XT (671D),NI CAYMAN
237,BLACKCOMB XT/PRO (6720),NI BLACKCOMB
238,BLACKCOMB LP (6721),NI BLACKCOMB
239,BLACKCOMB XT/PRO Gemini (6724),NI BLACKCOMB
240,BLACKCOMB LP Gemini (6725),NI BLACKCOMB
241,BARTS GL XT (6728),NI BARTS
242,BARTS GL PRO (6729),NI BARTS
243,BARTS XT (6738),NI BARTS
244,BARTS PRO (6739),NI BARTS
245,WHISTLER XT (6740),NI WHISTLER
246,WHISTLER PRO/LP (6741),NI WHISTLER
247,WHISTLER XT/PRO Gemini (6744),NI WHISTLER
248,WHISTLER LP Gemini (6745),NI WHISTLER
249,ONEGA (6750),NI TURKS
250,TURKS XT (6758),NI TURKS
251,TURKS PRO (6759),NI TURKS
252,SEYMOUR XT/PRO (6760),NI SEYMOUR
253,SEYMOUR LP (6761),NI SEYMOUR
254,SEYMOUR XT/PRO Gemini (6764),NI SEYMOUR
255,SEYMOUR LP Gemini (6765),NI SEYMOUR
256,CAICOS GL PRO (6768),NI CAICOS
257,CASPIAN PRO (6770),NI CAICOS
258,CAICOS PRO (6779),NI CAICOS
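For the curious, entries in the driver file follow a regular comma-separated pattern, so the codename, variant, and PCI device ID can be pulled apart with a few lines of scripting. This is just a sketch based on the excerpt above; the field layout is inferred from the listed lines, not from any AMD documentation:

```python
import re

# Each entry looks like: "233,CAYMAN XT (6718),NI CAYMAN"
ENTRY = re.compile(r"^(\d+),([A-Z]+)\s+(.*?)\s*\(([0-9A-F]{4})\),NI\s+(\w+)$")

def parse_entry(line):
    """Split one driver-file entry into index, codename, variant, device ID, and family."""
    m = ENTRY.match(line.strip())
    if not m:
        return None
    idx, codename, variant, dev_id, family = m.groups()
    return {
        "index": int(idx),
        "codename": codename,
        "variant": variant or None,   # e.g. "XT", "GL PRO", "LP Gemini"; None if absent
        "device_id": dev_id,          # hexadecimal PCI device ID
        "family": family,             # the "NI" family tag, e.g. CAYMAN
    }

entry = parse_entry("233,CAYMAN XT (6718),NI CAYMAN")
```

Grouping the parsed entries by family or by variant prefix is then trivial, which is essentially the correlation exercise described above.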
A Turkish website called Donanimhaber has published an expected release schedule for some of the above-named ATI Radeon HD 6000 series cards, which starts off with the Radeon HD 6700 series, codenamed Barts, releasing this October. November should bring the ATI Radeon HD 6800 series, codenamed Cayman. The flagship chip, replacing the HD 5970, is expected to be Antilles, a dual-GPU version of the Cayman 6870 (without the lowered clock speeds of the HD 5970), branded as the HD 6970 and supposedly due out in December. A 6950 card, the Antilles PRO, is also expected at some point.
Turks and Caicos are thought to be the HD 6600 and 6500 cards, and will apparently release in Q1 2011. 

Tuesday, August 24, 2010

AMD reveals more details on Bobcat, Bulldozer cores

Fabless chip company AMD has disclosed more details of its upcoming cores, codenamed Bulldozer and Bobcat.

Bobcat will be the core AMD uses when it releases its first Fusion chip, codenamed Ontario, in early 2011.

The other Fusion chip, Llano, will use a K8 related core.

In a briefing for journalists, AMD said that Bulldozer is the "heavy lifter" aimed at servers and high-end desktops, while Bobcat is aimed at the netbook and notebook market.
Bulldozer pairs two integer execution cores with components that can be shared, and adds new instruction set extensions. It includes improved power management and will be built on 32 nanometre process technology, making it AMD's first processor to use high-k metal gate transistors. Simultaneous multithreading (SMT), the approach behind Intel's Hyper-Threading, forces two threads onto one core, with the threads competing for resources; chip multiprocessing (CMP) instead dedicates a core to each thread.

Bulldozer's two separate integer cores act as two "strong threads". Resource sharing dynamically switches between shared and dedicated components, and the floating-point unit is shared between the two integer cores. On an eight-core Bulldozer chip, the L3 cache is shared, with its divisions transparent to hardware, the OS, and applications.

Bobcat is an efficient, low-power x86 core aimed at the netbook and notebook market. It is a sub-one-watt-capable core with an out-of-order execution engine, and it supports SSE1 through SSE3 as well as virtualisation. AMD claims it will deliver 90 per cent of mainstream performance in less than half the silicon area.

Bobcat will be the CPU element of Ontario Fusion, using a high speed bus architecture and a shared low latency memory model, and will appear early next year, ahead of schedule.

Friday, August 20, 2010

Bad news for AMD as Intel gains server share



AMD just can't seem to catch a break. After two profitable quarters (amid a multi-year string of losses), a product transition has caused it to miss out on the big first-quarter server market rebound that propelled Intel to record profits.
According to a new market share report from IDC, Intel managed to take critical server market share from AMD, with the former company seeing a year-over-year jump from 89.9 percent last year to 93.5 percent this quarter. Meanwhile, AMD's market share dropped from 10.1 percent to 6.5 percent.
AMD's slow transition to the Opteron 6000 series, and the subsequent market share losses, are practically the mirror image of Intel's success in getting its 32nm Westmere and 45nm Nehalem EX server parts into the waiting hands of server makers who finally were ready to open their wallets and start purchasing again.
What makes this situation especially ugly for AMD is the fact that the server market is the company's bread-and-butter. During the worst of the downturn, AMD notoriously jettisoned every part of the company that didn't involve designing x86 processors and GPUs, and it focused in particular on its server business because that was one place where it was still fairly healthy. Server is AMD's absolute core vertical, which means that the company can't really afford too many missteps of this type.
AMD's fortunes could still turn this year, though. The traditional IT upgrade cycle usually happens in the fall, so September will be a big month for the company—at least, it will be if the normal seasonal buying patterns have really returned to the market.
Most PC buying on both the consumer side and corporate side happens in the second half of the year, particularly in the last quarter, as students go back to school and businesses upgrade their machines. This seasonal cyclicality actually halted altogether in 2008, but most of the component suppliers claim to have seen signs that it's returning. Still, we won't know until the fall how much of that normal seasonal surge in buying we'll see this year—the only thing that's certain is that AMD needs that surge to happen, and the company needs to participate in it.

Friday, August 6, 2010

CORE Or Boost? AMD's And Intel's Turbo Features Dissected

Intel arms its Core i5 and Core i7 CPUs with Turbo Boost. AMD's hexa-core Phenom II X6 chips sport Turbo CORE. Both technologies dynamically increase performance based on perceived workloads and available thermal headroom. Which one does the better job?
Automotive turbochargers compress the intake charge, increasing the air-fuel mixture per combustion cycle and thereby boosting torque and power output. AMD’s and Intel’s performance-improving technologies don't actually require an additional piece of hardware bolted on like a turbo would be, but they both invoke the gas compressor namesake anyway.

Instead, both companies' latest six-core models dynamically increase their clock rates to deliver better performance under workload conditions that allow for faster frequencies. We wanted to see whether Intel's Turbo Boost or AMD's Turbo CORE is the better implementation.

Intel was first to offer this performance-enhancing feature. Its Nehalem architecture and the Core i7-900 family first introduced Turbo Boost in late 2008. The technology is capable of accelerating all cores by one clock speed bin (133 MHz) and one or two cores by two speed increments (depending on the particular model). In 2009, the Lynnfield Core i5/i7 quad-core processors for LGA 1156 enabled a more advanced implementation able to accelerate one or two cores by four clock speed increments. The 800-series even bumps clock speed up by five clock speed bins for a single core. One speed bin equals 133 MHz at stock speed, so we’re effectively talking about a 133 to 665 MHz dynamic increase. Turbo Boost is also an available feature on the Clarkdale-based Core i5 dual-core chips.
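The bin arithmetic above is easy to sketch. The helper below is purely illustrative, using the 133 MHz bin size from the text; the base clock in the example is a hypothetical Lynnfield part:

```python
BIN_MHZ = 133  # one Nehalem/Lynnfield clock speed bin, per the figures above

def turbo_clock(base_mhz, bins):
    """Resulting core clock after applying a given number of turbo bins."""
    return base_mhz + bins * BIN_MHZ

# A hypothetical 2,933 MHz part boosting a single core by five bins:
peak = turbo_clock(2933, 5)  # 3,598 MHz, roughly a 3.6 GHz single-core peak
```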

AMD introduced Turbo CORE with its six-core Phenom II X6 and will keep adding the feature to new models. While Intel's implementation allows the CPU to specifically accelerate one or more cores, AMD’s approach only accelerates three cores in the case of a six-core CPU and only two with quad-core processors.

We grabbed the latest AMD Phenom II X6 and Core i7-980X six-core processors to find out which implementation works best across our benchmark suite in terms of performance and power efficiency. Since the performance level of these two chips is rather different—Intel has more punch—we decided to compare benchmark results with and without the Turbo feature and normalize these to 100% for the non-Turbo results. This way we can compare the relative impact on the respective configurations despite the absolute performance difference. In short, which Turbo implementation gives you more bang for the buck?
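The normalisation described above is straightforward: each Turbo-on score is expressed as a percentage of its own Turbo-off baseline, so the two platforms can be compared despite their different absolute performance. A minimal sketch, with made-up scores rather than our actual benchmark data:

```python
def normalize(turbo_on, turbo_off):
    """Express each Turbo-on score as a percentage of its Turbo-off baseline (100%)."""
    return {test: round(100.0 * turbo_on[test] / turbo_off[test], 1)
            for test in turbo_off}

# Hypothetical scores (higher is better):
off = {"3DMark": 17000, "PCMark": 7200}
on  = {"3DMark": 17500, "PCMark": 7600}
relative = normalize(on, off)  # e.g. {"3DMark": 102.9, "PCMark": 105.6}
```

The same division is applied independently to the AMD and Intel result sets, which is what lets us talk about the relative impact of each Turbo feature.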

Turbo CORE is available on all AMD Phenom II X4 and X6 processors based on the recent 45 nm designs, namely the Thuban six-core and seen-in-the-wild but not-yet-available-at-retail Zosma quad-core models. Should it ever see retail availability, the Phenom II X4 960T at 3.0 GHz nominal speed could accelerate two cores up to 3.4 GHz (+400 MHz) with the thermal headroom available, and if the application load demands the increase. The Phenom II X6 processors increase their clock speeds by 500 MHz, with the exception of the 1090T flagship, which adds 400 MHz to reach from 3.2 to 3.6 GHz.

This implementation can be considered an addition to the Cool’n’Quiet feature, which reduces clock speeds and voltages if there is little work for the processor to do. Once half of the cores are idle, the system reduces their clock speed to the Cool’n’Quiet minimum of 800 MHz. The next step is a voltage increase for the remaining active cores paired with a speed lift of up to 500 MHz, as explained above.

Unfortunately, few workloads would tax exactly three cores by 100%—the conditions needed for AMD’s solution to run at 3.6 GHz. We found that a two-core load scenario is more realistic. This is why the feature works better on a CPU with an even core count, such as the Phenom II X4 960T.

AMD’s Turbo CORE control allows Black Edition processor users to adjust their number of accelerated cores. This makes analysis more complex, but also gives enthusiasts a more powerful tool for fine tuning their systems.

Intel's implementation works best on processors with a lot of scalability inherent to their design, as Turbo Boost covers much broader clock speed ranges. For example, the new six-core "Gulftown," Core i7-980X, is already running close to its thermal ceiling under load. Thus, it's limited to a 266 MHz boost with a single core active, and a modest 133 MHz bump when two or more cores are active. Knowing that Intel’s overclocking headroom is sizable, this is really a pity for enthusiasts. After all, the Phenom II X6 can speed up three cores by up to 400 MHz using a 45 nm process.

Intel’s power gate transistors facilitate cutting power to individual cores. This allows the processor to actually disengage those cores from the overall power envelope, consequently "buying" the overhead needed to increase the remaining cores’ clock speed. The premise here is that fewer cores can run at higher clock speeds before they reach the same thermal output.

While AMD basically reduces clock speed and voltage for inactive cores, Intel can physically shut them down. In theory, this should result in lower power consumption and, paired with the ability to dynamically scale one or more cores up or down, a better overall performance result.

Intel has another advantage that should be mentioned. While AMD's six-core processors access 6 MB of shared L3 cache, Intel's architecture currently offers a massive 12 MB repository. If you switch off individual cores, the remaining active processing units can still access the full 12 MB L3. This should provide advantages for applications that work with limited data and use few threads.

3DMark, a synthetic benchmark, realizes a slight advantage from Intel's architecture and Turbo Boost.

PCMark Vantage clearly shows that Intel’s approach delivers performance gains while AMD’s Turbo Core doesn’t seem to help as much.

iTunes is single-threaded, and is better-accelerated on the Phenom II X6 with Turbo CORE enabled.

The same applies to Lame.

MainConcept is optimized to take advantage of multiple cores, so it benefits more from Turbo Boost, which can kick in even if many cores are taxed.

Once again, we see the multi-threaded advantage in HandBrake, where AMD's processor easily hits its limits on all six cores, preventing Turbo CORE from kicking in.

As expected, switching the Turbo features on or off doesn’t change idle power.

However, peak power increases under Turbo Boost and Turbo CORE. The differences are small, though.

The runtime for our full efficiency suite decreases a bit more on the Intel platform, as there are more applications taking advantage of Intel’s Turbo Boost implementation than AMD’s Turbo CORE.

Average power consumption is much higher on the AMD system with Turbo CORE enabled.

The total power used is exactly the same on the Intel system. This is interesting because the Core i7-980X with Turbo Boost is still faster. AMD’s Turbo CORE-enabled Phenom II X6 delivers more performance, but it requires more power to deliver it.

In the end, the Intel chip's efficiency stays constant. The total power used is exactly the same, but the average power is higher during the workload. As a result, the efficiency is identical. This is like reaching your destination faster in a car without changing your mileage per gallon. AMD’s Turbo implementation sacrifices power efficiency. Runtime decreases, but average power and total power used increase at a higher proportion.

We can only recommend that AMD and Intel continue implementing and developing their Turbo-oriented features. Both do their job in increasing performance. Since the two approaches are different, though, we found that their outcomes in real life are different, as well.

Let’s start with Intel. The six-core, 3.2 GHz Core i7-980X speeds up a single core by 266 MHz if a single-threaded application wants maximum performance, and it can accelerate all six cores by 133 MHz if thermal headroom allows. This is the main difference compared to AMD’s solution: Intel's Gulftown design can accelerate single-threaded apps as well as heavily threaded ones. From a multi-core processing standpoint, Turbo Boost makes more sense than Turbo CORE, since all types of workload benefit when compared to nominal clock speed.

AMD’s Turbo CORE only knows one acceleration mode. It increases clock speed for three cores by up to 400 MHz in the case of the Phenom II X6 1090T 3.2 GHz six-core. This means that all applications that utilize no more than three cores experience immediate acceleration. In this case, we found that AMD's performance improvement is higher, as a 400 MHz upgrade is much more noticeable than Intel’s 133/266 MHz speed bump. The downside is nonexistent acceleration if four to six cores are being taxed.

Neither solution is a clear winner. Intel is better for extremely performance-hungry, multi-threaded environments, while AMD's approach provides more benefits for less-threaded environments. The best Turbo technology would be a more granular one, and a perfect Turbo mode would accelerate a single core by even more than AMD’s 400 MHz, two cores by around 400 MHz, three and four cores by less, and all cores by as much as the remaining thermal envelope allows.

Monday, June 28, 2010

AMD launches triple- and quad-core notebook CPUs

Playing catch-up with Intel, AMD has announced its first triple-core and quad-core laptop processors as part of a major brand overhaul. The lineup includes two triple-core and three quad-core Phenom II chips running at between 1.6GHz and 2.3GHz and drawing between 25 and 45W.

There are also new single-core and dual-core low-voltage Athlon II Neo and Turion II Neo processors designed for ultrathin laptops; running at speeds between 1.3GHz and 1.7GHz, they draw between 12 and 15W. These are aimed at laptops with screens between 11 and 13 inches and measuring less than 25mm thick.
The chips are expected to appear in over 100 different products from vendors including HP, Dell and Lenovo, starting from the end of this month.
Emulating Intel's Core range, the processors support DDR3 memory, HyperTransport 3, and DirectX 11, and promise improved battery life. They don't, however, include AMD's Turbo CORE technology.
The company is also changing its marketing. It's extending its Vision branding - which segments products into four levels by performance - to include desktop systems.
"With Vision Technology from AMD, we are finally connecting how people use their PCs with the way people purchase them," said senior vice president and chief marketing officer Nigel Dessau.
"Today, after little more than 200 days in market, our partners are introducing more Vision-based PCs than ever before; a testament to both the competitiveness of AMD platform technology and the simplified marketing approach."
AMD has failed to make ground against Intel's dominance, with just 18.8 percent of the X86 market in the first quarter of this year, compared with Intel's 81 percent, according to IDC.


Friday, June 18, 2010

AMD and Intel Mobile Rematch: Gateway NV5933u vs. Acer 5542

It's been ten months since our last comparison between the latest AMD and Intel mobile platforms. Since then AMD has updated their mobile chips to 45nm process technology with a K10-derived architecture.
Intel hasn't been sitting idle either, with plenty of 32nm Arrandale laptops readily available. The last time we looked at the two platforms, Intel came out with a clear lead in battery life and CPU performance, but AMD provided a more affordable platform with a substantially better IGP. Now we're ready to compare the latest Intel and AMD offerings, but there are a few caveats.

The biggest point right now is that AMD has released details of their new Vision mobile platform, using Champlain CPUs on the Danube platform with DDR3 support. The new processors remain 45nm parts based on the K10 (K10.5) architecture, but they use a new socket. The changes are supposed to improve mobility, which is certainly an area where current and previous AMD laptops have been lacking. We're working on getting Danube (and Nile, the low power version) laptops, but given the number of older AMD laptops currently available we feel this comparison is still valid. If you're looking for improved processor and graphics performance from AMD, and perhaps better battery life, Danube laptops with the Turion II P520 are starting to ship and should improve on the Acer 5542 we're looking at today.

Here are the detailed specs for our two laptops. Both have been out for a few months, and similar laptops are available from most manufacturers. Outside of aesthetics and a few other features, performance should be nearly identical to what we're reviewing. Battery sizes may also be larger/smaller, but relative battery life should be similar. The two laptops we're looking at use similar components in all the important areas: 15.6" LCDs, 500GB 5400RPM hard drives, 4GB RAM, and 48Wh batteries. The 5542 uses DDR2 memory, since it uses the older Caspian/Tigris core/platform, while the NV59 uses DDR3 memory. CPU, chipset, and graphics are naturally different, but otherwise we have done our best to make this an apples-to-apples match up.
Acer Aspire 5542 Test System
Processor AMD Athlon II M300
(2x2.0GHz, 45nm, 2x512KB L2, 35W)
Chipset AMD RS880M + SB710
Memory 2x2GB DDR2-800 (Max 2x4GB)
Graphics ATI Radeon HD 4200
(40 Stream Processors, 500MHz Core/shared memory)
Display 15.6" LED Glossy 16:9 768p (1366x768)
Hard Drive(s) 500GB 5400RPM (Western Digital Blue WD5000BEVT-22ZAT0)
Optical Drive 8x DVD±RW (Optiarc AD-7580S)
Battery 6-Cell, 10.8V, 4400mAh, 47.5Wh battery
Operating System Windows 7 Home Premium 64-bit
Dimensions 15.1" x 9.8" x 1.0-1.5" (WxDxH)
Weight 6.2 lbs (with 6-cell battery)
Warranty 1-year basic warranty
Pricing $499 from Amazon
Note: 320GB HDD on that model


Gateway NV5933u Test System
Processor Intel Core i3-330M
(2x2.13GHz + HTT, 32nm, 3MB L3, 35W)
Chipset Intel HM55
Memory 2x2GB DDR3-1066 (Max 2x4GB)
Graphics Intel HD Graphics
(12 Shaders, 500MHz base, 667MHz max Core/shared memory)
Display 15.6" LED Glossy 16:9 768p (1366x768)
Hard Drive(s) 500GB 5400 RPM (Hitachi HTS545032B9A300)
Optical Drive 4x Blu-ray Combo (Optiarc BC-5500H)
Battery 6-Cell, 10.8V, 4400mAh, 48Wh battery
Operating System Windows 7 Home Premium 64-bit
Dimensions 14.66" x 10.19" x 1.02-1.46" (WxDxH)
Weight 5.84 lbs (with 6-cell battery)
Warranty 1-year basic warranty
Pricing $549 from Best Buy
Note: 320GB HDD on that model

The specs are nothing to write home about, but pricing is obviously the driving factor. The Gateway NV5933u manages to pack in an impressive set of features for a list price of $550, including a Blu-ray combo drive (a $75 value). The Acer 5542 isn't the best example of an inexpensive AMD Athlon II M300 laptop, with a current price of $500 online. That makes the Gateway a clear value winner if you want Blu-ray support, but it's worth noting that you can often find similar M300 + 4GB laptops on sale for as little as $400. However, right now going off the retail pricing, we've essentially got a tie for pricing. That said, you won't find a non-Blu-ray i3-330M laptop for less than $550, and we wouldn't be surprised to see the NV5933u supply dry up shortly; the replacement NV59c looks to bump up the price to $749. Let's take a closer look at the two combatants before we get to the benchmarks.

Tuesday, June 15, 2010

Gamers: Do You Need More Than An Athlon II X3?



AMD's Athlon II X3 440 is such a capable little chip, and it costs so little. Is there any real point in spending more money on your gaming machine’s CPU? We explore this question with a head-to-head challenge against Intel's venerable Core i7-920.

Every month, we publish our Best Gaming CPUs For The Money column. This is where we share our picks for the processors that we feel provide the best gaming value for your hard-earned dollar. Our recommendations are based on a lot of testing, and that testing has shown that games respond best to high clock speeds.

However, our benchmarks also show that the number of processor cores is a secondary consideration. There is a large performance jump from single- to dual-core CPUs, but most games only show a slight performance increase when a third core is added. In fact, it is rare to find a game that will take advantage of more than three processor cores and demonstrate a notable performance increase.

Since its release, the Athlon II X3 440 has had a strong impact on our recommended gaming CPU list. When you combine its high 3 GHz clock speed, trio of processor cores, and sub-$90 price tag, you end up with a real force in the gaming arena. On top of that, the third processing core allows the Athlon II X3 to be an especially great processor compared to dual-core models because that extra core can smooth out desktop performance when multitasking.

When it comes to gaming, though, the CPU can only do so much; the graphics subsystem is key. We've received some feedback on the forums suggesting that our recommendation of any processor more expensive than the Athlon II X3 440 is frivolous. The argument is that, while game performance may increase with a costlier CPU, the money is wasted because the Athlon II X3 440 is supplying all the performance that games require to achieve smooth frame rates, and that upgrading the graphics cards is the only way to remove a meaningful game performance bottleneck.

We decided to run a series of tests to really explore whether or not there's any point in investing in a CPU more powerful than the Athlon II X3 440 for gaming duty. First, we need to examine how we measure game performance and get a better understanding of how meaningful the numbers are.

We measure game performance by the speed at which frames of video are fed to our eyes. The preferred unit of measure is frames per second (FPS).

There is a common misconception that 24 or 30 FPS is enough for perfectly smooth video, or that the human eye can only perceive up to 30 FPS. This stems from the movie and television industries. Movie theaters show film at 24 FPS, and that appears perfectly smooth, doesn't it? The fact is that our eyes are tricked into experiencing smooth video from 24 FPS source material because of motion blur. Film and video cameras capture moving objects by blurring their edges and the brain interprets this as smooth movement.

If you've ever had the chance to see a demonstration of movie playback at your local home theater electronics outlet, you might have noticed that movies seem a lot smoother than they do in theaters on some of the displays. This is because many modern televisions can modify the video, smoothing it out with anti-judder technology, and play it back at 120 Hz (or 120 FPS). Most folks easily notice the visual difference when movies are played back at 120 FPS with anti-judder enabled, which goes to show the human eye can perceive a lot more than 24 FPS. In fact, research suggests that human beings can perceive more than 200 FPS.

The point is that when it comes to PC gaming, more than 30 FPS is noticeable. In addition, the PC is an interactive device and the camera view often responds to user input from the mouse. The frame rate has to be quick enough to respond instantly to this user input. Otherwise, the user can feel the lag. This is especially noticeable in twitch games like first-person shooters that require precise aim.

Most PC monitors today cap out at 60 Hz, which means that the screen can refresh 60 times a second (there are a few 120 Hz monitors available for 3D use, but these are far from mainstream). Now, the question becomes: what if your PC is rendering more than 60 FPS? If your machine is fast enough to deliver 100 FPS to a 60 Hz monitor, what happens?

Unfortunately, more performance doesn't always equal better visuals. If your PC is sending out more frames than your monitor can display, what's likely going to happen is that the screen will refresh before the previous frame has finished drawing. This visual artifact is called tearing, and it's not pleasant. This is why vertical synchronization (v-sync) was developed.

Without going into details, v-sync limits your frame rate so that it doesn't exceed the monitor's refresh rate, thereby eliminating tearing. When we benchmark games, we're usually looking for the performance cap, so we turn v-sync off; for actual gameplay, you're probably better off enabling triple-buffered v-sync if your title supports the option.

Knowing all of this, we will conclude that a PC user with a 60 Hz monitor can certainly perceive up to 60 FPS. This is a widely accepted performance target for PC games, and now it's a little clearer why.

Finally, we need to consider how we measure game performance. Often, for a quick indicator, we record the average FPS. The problem is that average FPS is an aggregate number that doesn't tell us how low the frame rate can go. You can experience an average of 60 FPS that dips down to 10 FPS during demanding parts of the game, and 10 FPS is choppy by everyone's standards.

Because of this, you should pay attention to minimum FPS. Ideally, the minimum FPS value is 60, but a minimum FPS of 30 or even 20 can be acceptable if it happens for very short stints in demanding parts of a game.
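Given a log of per-frame render times, both figures fall out directly: the average comes from total frames over total time, while the minimum comes from the single slowest frame. A minimal sketch, with hypothetical frame times rather than data from any of our benchmark runs:

```python
def fps_stats(frame_times_ms):
    """Compute average and minimum FPS from per-frame render times in milliseconds."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    min_fps = 1000.0 / max(frame_times_ms)  # the slowest frame sets the minimum
    return avg_fps, min_fps

# Three fast 10 ms frames and one 100 ms stutter:
avg, low = fps_stats([10.0, 10.0, 10.0, 100.0])
# avg is roughly 30.8 FPS, yet the minimum is only 10 FPS
```

This is exactly why a healthy-looking average can hide a choppy experience: one 100 ms frame drags the minimum down to 10 FPS even though the average stays above 30.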

Even the type of game you're playing can determine whether a certain minimum FPS is acceptable. In a first-person shooter, that lag might be enough to mess with your aim during a heated battle. But in a top-view, real-time strategy game, the drop in frame rate probably won't have much of an effect on your view or your click-and-drag inputs.

This is a lot of information to assimilate, but it is critical for the purposes of our review. Remember, we're trying to find out whether or not there is a point in purchasing a CPU that is more expensive than an Athlon II X3 440 for gaming purposes. Now, we know any frame rate advantage over 60 FPS is somewhat useless, but that any minimum frame rate advantage up to 60 FPS can be critical.

We wanted to keep the comparisons crystal clear, so we're going to make it simple. We're pitting AMD's Athlon II X3 440 against a Core i7-920. Yes, the Core i7-920 costs more than three times as much as the Athlon II X3 440, but remember that game performance will be bottlenecked to a large extent by the graphics subsystem.
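The bottleneck argument can be sketched with a simple pipeline model: each frame costs some CPU time (game logic, draw-call submission) and some GPU time (rendering), and with the two stages overlapped, throughput is set by whichever stage is slower. The millisecond figures here are invented for illustration, not measurements from our test systems.

```python
def frame_rate(cpu_ms, gpu_ms):
    """Idealized frame rate when CPU and GPU work is fully overlapped:
    the slower stage per frame dictates overall throughput."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical: the GPU needs 20 ms per frame at this resolution.
slow_cpu = frame_rate(cpu_ms=15.0, gpu_ms=20.0)   # GPU-bound: 50 FPS
fast_cpu = frame_rate(cpu_ms=8.0, gpu_ms=20.0)    # still GPU-bound: 50 FPS
# Add a second card (GPU time roughly halves) and the CPU becomes
# the bottleneck, so only now does a faster CPU pay off:
crossfire = frame_rate(cpu_ms=15.0, gpu_ms=10.0)  # CPU-bound: ~67 FPS
print(slow_cpu, fast_cpu, round(crossfire))
```

In this toy model, upgrading the CPU buys nothing while the GPU is the slower stage, which is the intuition behind pairing a budget CPU with a single card but expecting it to choke in CrossFire.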

As far as graphics cards go, we test two configurations: one with a single Radeon HD 5850 and the other with two Radeon HD 5870s in CrossFire. We test the games across 1280x1024, 1680x1050, 1920x1080, and 2560x1600 resolutions.

Because the Core i7-920 utilizes triple-channel memory, we use three 1GB sticks for a total of 3GB of RAM. The dual-channel AMD platform will use two 2GB sticks for a total of 4GB. From our experience, the single gigabyte of RAM difference should have no effect on gaming performance, but if we see any disparity, we will make note of memory usage. The RAM timings and speed are identical between both systems.

We're using the Gigabyte GA-MA790XT-UD4P motherboard with the Athlon II X3 440 and the ASRock X58 SuperComputer with Intel's Core i7-920. Note that the 790X chipset on the Athlon board can't supply full 16x PCI Express (PCIe) bandwidth to each graphics card. To keep things comparable, we put the second Radeon HD 5870 in a PCIe slot with 8x bandwidth when using CrossFire on the ASRock X58 board.

AMD Test System
CPU: AMD Athlon II X3 440 (Rana), 3.0 GHz, 200 MHz reference clock
Motherboard: Gigabyte GA-MA790XT-UD4P, AMD 790X, BIOS F7
Memory: Mushkin PC3-10700, 4GB dual-channel (2 x 2,048MB), DDR3-1333, CL 9-9-9-24-1T

Intel Test System
CPU: Intel Core i7-920 (Nehalem), 2.67 GHz, QPI-4200, 8MB shared L3 cache
Motherboard: ASRock X58 SuperComputer, Intel X58, BIOS P1.90
Memory: Kingston PC3-10700, 3GB triple-channel (3 x 1,024MB), DDR3-1066, CL 8-8-8-19-1T

Shared Components
Networking: Onboard Gigabit LAN controller
Graphics: Sapphire Radeon HD 5850, 725 MHz GPU, 1GB GDDR5 RAM at 1,000 MHz
Hard Drive: Western Digital Caviar WD5000AAJS-00YFA, 500GB, 7,200 RPM, 8MB cache, SATA 3.0 Gb/s
Power: Thermaltake Toughpower 1,200W, ATX 12V 2.2, EPS 12V 2.91

Software and Drivers
Operating System: Microsoft Windows 7 64-bit
DirectX Version: DirectX 11
Graphics Drivers: ATI Catalyst 10.3

Benchmark Configuration (3D Games)
Crysis: Patch 1.2.1, DirectX 10, 32-bit executable, benchmark tool, High settings, no AA, no AF
Far Cry 2: DirectX 10, in-game benchmark, Ultra High settings, no AA, no AF
S.T.A.L.K.E.R.: Call of Pripyat: Ultra High preset, DirectX 11, EFDL, no MSAA, Sunshafts benchmark
World in Conflict: Soviet Assault: DirectX 10, timedemo, Very High details, 4x AA / 4x AF

There are some people who might get the impression that we're being unfairly hard on the Athlon II X3 440 by pitting it against the Core i7-920. In fact, the opposite is true. We have tremendous confidence in the gaming abilities of AMD's Athlon II X3 440, and that's why we think it's up to this kind of challenge.

It's all too easy to look at benchmark graphs and get caught up in the trends, but let me point something out to you: in every single game we benchmarked at 1920x1080, the Athlon II X3 440 was capable of a playable average frame rate in excess of 40 FPS. All of the games we tested were benchmarked at attractive and demanding visual settings, and all of them have a reputation for higher-end hardware requirements.

But to those suggesting that there is never a need for a better gaming CPU than the Athlon II X3 440, the facts show that this is simply not true. It is very clear that the Core i7-920 sports notable gaming advantages in a number of scenarios.

Breaking It Down

The first scenario is minimum frame rates. As we've discussed, minimum frame rates are far more important than average frame rates, and any advantage here is noticeable. When we tested World in Conflict, one of the more CPU-dependent games we've tested, it became apparent that the Athlon II X3 440 does not have the same capabilities as the Core i7-920. The Core i7-920 doubles the Athlon II X3 440's minimum of 10 FPS in this title. Granted, the Athlon II X3 440 also drops to a 10 FPS minimum in S.T.A.L.K.E.R.: Call of Pripyat, but the Core i7-920 manages to double that figure there, too. In general, the Athlon II X3 440 might not be the best choice for CPU-intensive games like World in Conflict, and perhaps not for CPU-intensive, real-time strategy games in general.

The second scenario in which the Athlon II X3 440 might not be ideal is when multiple graphics cards are employed. When we use two Radeon HD 5870 cards in CrossFire, the Core i7-920 system stretches its legs, while the Athlon II-based system doesn't seem to gain much additional performance at all. In fact, when we compare the Athlon II X3 440 results in CrossFire mode with the single-card Core i7-920 results, we are surprised to see that the Core i7-920 manages to beat the Athlon-based system more often than not (at least up to 1920x1080). At 2560x1600, the graphics subsystem is always the bottleneck. But realistically, who pairs an extremely expensive 30" monitor with one of the cheapest CPUs available?

To summarize, the Athlon II X3 440 is an excellent budget gaming processor for single graphics card applications, and probably represents the best price/performance value we've seen to date. But for folks with more cash who are looking for greater performance out of their gaming system (particularly when using multi-card graphics configurations or CPU-intensive game titles), higher-end CPUs are definitely a viable option. Remember that the name of the game here is balance. As you scale up graphics muscle, adding the processing horsepower to match will yield an optimal balance between the two subsystems.