Intel Arc B580 review

I dread to think how bad things would have been had Intel released this Arc B580 graphics card back in the summer. All year long we've been expecting the new Battlemage GPU architecture to arrive in a new suite of discrete Intel graphics cards, but we've had to wait until the very last minute of 2024 for the company to be able to legitimately say it hit its target of a launch this year.

And where the previous generation Alchemist GPUs were notoriously unreliable in their levels of gaming performance, we were given to believe such things wouldn't be an issue with Battlemage. Indeed, from looking at the Xe2 GPU's first outing inside Lunar Lake, things did look pretty positive.

In its first discrete graphics card of this Battlemage generation—strangely, the card whose Alchemist equivalent launched last—the $250 Arc B580 offers a hefty 12 GB frame buffer on a healthy 192-bit bus, and a redesigned graphics core aimed both at being more familiar to game engines and at delivering on the twin modern demands of AI and ray tracing workloads. Good job, considering it's launching at a higher price than the $190 A580 did.

Intel is also offering up XeSS-FG. This is its own take on Frame Generation, the interpolation technique introduced by Nvidia to magic up game frames seemingly from nowhere. And in principle it's far more like Nvidia's take on it than AMD's, using a single-model AI technique to achieve its own effects.

There's more cache, more memory, improved core components, and the promise of solid, reliable drivers. All very positive. That positivity, however, has largely evaporated upon first contact with the Arc B580, its BMG-G21 GPU, and the PC Gamer graphics card test gauntlet.

It's not all bad, not by a long shot, but given the state of the review drivers right now, I don't want to imagine what this launch would have looked like just a few months ago.

Intel Arc B580 verdict

Buy if...

You're willing to wait: There may well come a time soon when these driver inconsistencies are toast, and down the line the Arc B580 could be the go-to budget GPU. But I'd wait to see if that actually happens before spending the money.

Don't buy if...

You want solid, reliable performance: Isn't that what we all want, deep down? Sadly, in its current state of driver support, the Arc B580 is not the reliable GPU we crave.

The Intel Arc B580 was touted by its makers as a key 1440p budget GPU, and I wish I had the confidence to agree. The numbers I have managed to get do kinda bear that out for the most part: where it's behind the key RTX 4060 in terms of frame rates it's only by a small margin at this higher resolution, and where it's ahead it's sometimes ahead by a fair margin, especially when you add in upscaling and frame generation.

But for the new Intel graphics card to be able to get any kind of recommendation from me it had to do one thing: run consistently. You had one job, Battlemage… Just run the games we throw at you, and not flake out.

Sadly, while things looked super positive from Intel's carefully picked pre-launch benchmarks, that hasn't translated into independent testing. I've heard of other reviewers having driver issues in their testing, so I know I'm not alone with the failure rate in our new GPU benchmarking suite.

Consistency was always going to be key, and it's just not there yet with the Arc B580. It's going to be a problem for Intel to turn this initial impression around, even with the sorts of performance bumps we saw in the Alchemist drivers over the past couple of years.


If the issues are ironed out, however, then this $250 GPU will be a hugely tempting budget graphics card in a market where it's been tough to make a positive recommendation before. The $300 RTX 4060 is the obvious alternative, but its 8 GB VRAM has always been a stumbling block for its own 1440p frame rates. On the AMD side, the similarly priced RX 7600 XT, with its 16 GB VRAM, is a far less tempting proposition: that hefty frame buffer is still hobbled by a 128-bit memory bus, and RDNA 3 still struggles with ray tracing.

So, it feels like there is an opportunity here if Intel can make the Arc B580's handling of PC games far more consistent. That window of opportunity might be closing rapidly if Nvidia gets anything like a budget GPU out of its Blackwell generation in the first half of 2025, though.

Right now, however, it feels like too much of a lottery when for a small amount more you can buy a boring Nvidia card that will just work.

Intel Arc B580 architecture and specs


The Arc B580 is the first discrete graphics card Intel has released to sport the new Battlemage, or Xe2, GPU architecture, and its BMG-G21 chip represents a necessary change from the Alchemist GPUs launched at the tail end of 2022. We've covered the architectural changes in our Lunar Lake coverage, but it bears repeating here what Intel has switched around and why.

In raw terms, the BMG-G21 looks like a lesser chip than the A580 or the A750 of the Alchemist generation, but in performance terms the B580 is actually somewhere between the $190 A580 and the $290 Arc A750. That's because Intel has made sweeping changes to its architecture to both improve efficiency and performance. And maybe to improve compatibility along the way, too.

Arguably the biggest change is the switch from SIMD8 to native SIMD16 execution; the narrower arrangement was seen as a bit of a misstep in the original Arc GPU design. Previously there were essentially eight lanes of processing for each instruction fed to the GPU; that has been swapped out for a native 16-lane design, giving a wider execution model which helps efficiency and might even give it a little fillip in terms of broader compatibility.
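
As a rough sketch of why that wider issue width matters (the numbers below are my own simplified illustration, not Intel's scheduling model), a 16-wide SIMD unit covers the same group of threads in fewer, fuller dispatches:

```python
# Simplified illustration: how many issue slots a 32-thread group needs
# on an 8-wide vs a 16-wide SIMD unit. Real GPU scheduling is far more
# involved; this only shows the basic batching arithmetic.
import math

def issue_slots(threads: int, simd_width: int) -> int:
    """Number of SIMD issue slots needed to cover `threads` lanes of work."""
    return math.ceil(threads / simd_width)

for width in (8, 16):
    print(f"SIMD{width}: {issue_slots(32, width)} issue slots for a 32-thread group")
# SIMD8 needs 4 slots, SIMD16 needs 2 -> fewer, fuller dispatches per instruction
```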

Another mistake Intel has fessed up to is the decision to run some DirectX 12 commands entirely in software-emulated states, rather than having them supported by specific units within the GPU hardware itself. Intel states that its architects ran deep-dives on various workloads to see what its graphics acceleration hardware was doing, and this has led to the Xe2 GPU being far better optimised for DX12 and making better use of the hardware at its disposal.

The biggest jump is the Execute Indirect command, which was one of those previously emulated in software, but is used widely by modern game engines such as Unreal Engine 5. Now that it has support baked into the Xe2 hardware, Intel is claiming that part of a frame's draw time is anywhere from 1.8x to a massive 66x faster than on Alchemist, though apparently it averages out to around 12.5x in general.
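
To put that command-level speedup in context, here's a quick back-of-the-envelope calculation. The 16.6 ms frame and the 2 ms spent on Execute Indirect are purely illustrative assumptions, not Intel-quoted figures; the point is that even a 12.5x improvement only accelerates the slice of the frame that was spent on that command.

```python
# Back-of-the-envelope: the whole-frame effect of speeding up one slice of
# the draw time by 12.5x. The frame time and Execute Indirect cost below
# are assumed for illustration, not measured data.
frame_time_ms = 16.6          # a ~60 fps frame
execute_indirect_ms = 2.0     # assumed time spent on Execute Indirect work
speedup = 12.5                # Intel's quoted average improvement

new_frame_time = frame_time_ms - execute_indirect_ms + execute_indirect_ms / speedup
print(f"{frame_time_ms:.1f} ms -> {new_frame_time:.2f} ms per frame")
print(f"Whole-frame gain: {frame_time_ms / new_frame_time:.2f}x")
# Roughly 16.6 ms -> 14.76 ms, a ~1.12x overall improvement from this change alone
```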


But there are a host of other small improvements in that graphics acceleration area which should all end up delivering the extra final frame rate performance that Intel is promising with Battlemage. And that is effectively a significant increase in performance per core, which is why the 20 Xe cores of the B580 are so much more effective than the 24 or 28 cores of the Arc A580 or Arc A750.

Those second-gen cores will be familiar to anyone who checked out the Alchemist GPUs in any depth, but Intel has kinda smooshed things together rather than separating them out. Instead of having 16 256-bit Vector Engines in each of the cores, there are now just eight 512-bit Vector Engines. And instead of 16 1024-bit XMX Engines, there are now eight 2048-bit XMX Engines. But that native SIMD16 compute capability means there's no loss of parallelism from the new structure: the GPU can throw the full force of a Vector Engine's floating point units at a task if it needs to, rather than having to pull them together from separate blocks.
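
The arithmetic below is my own sanity check on those figures rather than anything from Intel's documentation, but it shows why the regrouping doesn't cost any raw width per Xe core: the totals are identical, they're just split into fewer, wider engines.

```python
# Per-Xe-core execution width, Alchemist vs Battlemage, using the engine
# counts and widths described above. Totals match; only the grouping changes.
alchemist  = {"vector_bits": 16 * 256, "xmx_bits": 16 * 1024}  # 16 narrow engines
battlemage = {"vector_bits": 8 * 512,  "xmx_bits": 8 * 2048}   # 8 wider engines

for key in ("vector_bits", "xmx_bits"):
    print(f"{key}: Alchemist {alchemist[key]} vs Battlemage {battlemage[key]}")
# vector_bits: 4096 vs 4096
# xmx_bits: 16384 vs 16384
```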

Another reason for the improved Xe2 performance is that Intel has also bumped up the cache levels, with the first level cache being boosted by 33% to 256 KB in total, and a total of 18 MB of L2 cache, up from 16 MB in the previous generation.

Ray tracing was one of the more positive parts of the Alchemist picture, and Intel has improved its RT units again for this second generation. Implementing it originally with Alchemist was a major headache for Intel, but being able to build upon that work for Battlemage means Intel now has a better ray tracing GPU than AMD at this point in time.


Another positive has been Intel's own upscaling solution, XeSS, which is also seeing a version two with Battlemage. Though, arguably, the actual upscaling part isn't really changing, just picking up a new name for clarity: it's now known as XeSS Super Resolution, to differentiate it from the new features of XeSS2.

The key one is XeSS Frame Generation, or XeSS-FG. Nvidia kicked off the interpolation race, and true to form Intel has followed its lead rather than matching AMD's less full-force simulacrum. That's because Intel, like Nvidia, has specific matrix engines inside its GPUs rather than doing all its extra work in shaders as AMD does.

The new XMX AI engines can hit both FP16 and INT8 operations, which makes them well suited to cope with the rigours of modern generative AI fun times. But they also have a part to play in the new frame generation feature of XeSS, as that has AI elements of its own.


XeSS-FG is a single AI-model implementation that looks at the previous frame as well as the new frame in flight, using optical flow and motion vector reprojection, and blends the two together to create interpolated frames.
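
In very rough terms, the recipe Intel describes looks something like the sketch below. To be clear, this is my own illustrative pseudocode with placeholder maths, not XeSS-FG's actual algorithm or API; the real blending is done by a trained AI model running on the XMX units.

```python
# Illustrative sketch of interpolation-style frame generation: warp the
# previous frame using optical flow, reproject it using the game's motion
# vectors, then blend the candidates into an in-between frame. The simple
# numpy stand-ins below are placeholders, not Intel's implementation.
import numpy as np

def warp(frame: np.ndarray, displacement: np.ndarray, t: float) -> np.ndarray:
    """Placeholder warp: shift the whole frame by t * displacement pixels."""
    dy, dx = np.round(t * displacement).astype(int)
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

def interpolate(prev_frame, next_frame, optical_flow, motion_vectors):
    flow_candidate = warp(prev_frame, optical_flow, t=0.5)    # flow-based guess
    mv_candidate = warp(prev_frame, motion_vectors, t=0.5)    # reprojection guess
    # Stand-in for the AI blend, which would weigh the candidates against the
    # next frame far more cleverly; here it's a crude average of all three.
    return (flow_candidate + mv_candidate + next_frame) / 3.0

prev_f = np.arange(16, dtype=float).reshape(4, 4)
next_f = prev_f + 1.0
print(interpolate(prev_f, next_f, np.array([2.0, 0.0]), np.array([2.0, 0.0])).shape)  # (4, 4)
```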

At the moment that's only available in one game, F1 24, but it's certainly impressive from what I've seen. It will function on any Battlemage GPU, including the integrated Xe2 graphics of Lunar Lake.

Alongside that, and often linked for what will become obvious reasons, is XeSS Low Latency, or XeSS-LL. This is akin to Nvidia's Reflex feature and is designed to cut the PC or display latency (depending on what you want to call it) of a game. That is fundamental for fast-paced competitive games, but also vital to making the most of Intel's XeSS-FG. Frame interpolation always adds latency, as there's another step in the process before a rendered frame is displayed, but having XeSS-LL in play will cut that back down to a more standard level.
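
To picture the trade-off, here's some crude latency arithmetic; every figure is an assumption for the sake of the example rather than a measured XeSS-LL number.

```python
# Illustrative latency accounting for frame generation plus a low-latency
# mode. All values are assumptions for the example, not measured figures.
render_ms = 16.6       # time to render one real frame (~60 fps)
queue_ms = 20.0        # assumed buffering/render-queue latency
interp_ms = 8.0        # assumed extra delay from holding a frame to interpolate
ll_queue_ms = 5.0      # assumed queue latency once a low-latency mode trims it

baseline = render_ms + queue_ms
with_framegen = render_ms + queue_ms + interp_ms
with_framegen_and_ll = render_ms + ll_queue_ms + interp_ms

print(f"Native rendering:    {baseline:.1f} ms")
print(f"+ frame generation:  {with_framegen:.1f} ms")
print(f"+ low-latency mode:  {with_framegen_and_ll:.1f} ms")
# Interpolation adds latency; the low-latency path claws most of it back.
```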

Combine all the XeSS2 features together and you get far greater performance, with both higher frame rates and lower latency. A win-win. So, how does it all actually perform when it comes down to the numbers?

Intel Arc B580 performance



Intel's claims of beating out the immensely popular—but certainly due a toppling—RTX 4060 graphics card definitely do have some merit from our own testing, but it's absolutely not a cut-and-dried case of Intel dominance of the budget market. Though, were you to just take the 3DMark performance of the B580 as gospel, you would have a markedly different view of the situation than you get once you actually look at it on a game-by-game basis.

Intel has shown, from Alchemist onwards, how well optimised its drivers and hardware are when it comes to UL's benchmark standards. The 3DMark Time Spy Extreme performance of the Arc B580 is some 45% higher than the competing Nvidia GPU, which looks like a stellar achievement. In Port Royal, too, there's a huge gap in the ray tracing benchmark, with the Intel card having a 31% lead over the Nvidia RTX 4060.

But as soon as you start talking about games, that's where things look different. In raw native resolution terms—testing the hardware itself rather than upscaling algorithms—it is the dictionary definition of 'a mixed bag'. In one benchmark it will sit slightly behind the Nvidia card, in another it will be slightly ahead, in another still it's well behind, and then it's well ahead again in further tests.

This up and down performance was a feature of the Alchemist range of cards, and I was hoping it wouldn't be the case again with Battlemage. The twist here, however, is that where the A-series cards would just perform poorly, with this first B-series card I'm coming up against games that simply will not work.

Cyberpunk 2077 is a particularly frustrating example, because without upscaling enabled it completely froze our test system while trying to load into the game world. Not just a crash-to-desktop, but a complete lock which required a hard reset. And it's particularly frustrating because if you look at the pseudo real-world performance—our 1440p testing with upscaling and frame generation enabled where possible—the Cyberpunk 2077 performance is unbelievably good. Like, almost a two-fold performance hike over the RTX 4060 with the RT Ultra preset enabled at 1440p. It's a battering of, well, more than 3DMark proportions.

Then I had Homeworld 3 refusing to run in DX12 mode, and then performing really poorly in the DX11 mode I had to enable just to get some B580 numbers for the game.

There are glimpses, however, of where the Battlemage hardware, with all its architectural improvements, is making a big difference in terms of how it performs in games. And when you take into account the difference XeSS-FG makes in F1 24 it does lead me to feel more positive about how this card could end up in the future. The fact that, even with the RTX 4060 using its own Frame Gen feature, the B580 gets over 50% higher frame rates, and looks damned good while doing it, is testament to what Intel has done with the feature.

It's also notable that even when there are performance disparities on the negative side for the Arc B580, that chonky 12 GB frame buffer really helps shrink the gap when you start looking at higher resolutions.

When it comes to the system-level performance of the card, it's a positive story, too. The card itself looks great, stays impressively cool, and even without messing around with extra power profiling in the BIOS and Windows it's a relatively efficient GPU. Its performance per watt is up there with Nvidia's efficient Ada architecture, despite the B580 being a much bigger chip.


PC Gamer test rig
CPU: AMD Ryzen 7 9800X3D | Motherboard: Gigabyte X870E Aorus Master | RAM: G.Skill 32 GB DDR5-6000 CAS 30 | Cooler: Corsair H170i Elite Capellix | SSD: 2 TB Crucial T700 | PSU: Seasonic Prime TX 1600W | Case: DimasTech Mini V2

Intel Arc B580 analysis


There are few things as disappointing as wasted potential. And maybe it's too harsh to slap that tag onto the new Intel Arc B580 graphics card only a day out from its eventual public release, but in my time testing the new Battlemage discrete GPU that's the overriding feeling I'm left with: disappointment.

Though, the sucker that I am, it is still tinged with hope for the future.

By the way, I get it. I shouldn't be talking about feelings as someone who purports to be a serious hardware journalist, especially not about how they pertain to a fresh lump of silicon and circuitry.

But I so wanted this new generation of Intel graphics architecture to be a tangible improvement from Alchemist, and there are glimpses of the true promise of the new Battlemage architecture here, shrouded as they are in the now-familiar veil of consistently inconsistent software drivers. The fact we're still talking in those terms about Intel's second generation of discrete graphics cards is so disappointing when we were promised its drivers suffered from "no known issues of any kind."

For what it's worth, even just looking at the release notes for the first Arc B580 driver I installed when I started my testing, it's pretty clear there absolutely are "known issues."

But it's not the total train wreck I was preparing for after my first few benchmarks in our new GPU testing suite. It was maybe unlucky that where I started my testing also just happened to be where there are serious points of failure with the new GPU. Performance picked up with later tests, and I've been impressed with XeSS Frame Generation for the little I've seen of it in the only game supporting the new feature, but the early going was rough.

I was initially interested in seeing what the 12 GB of VRAM would mean for creator applications, and it slapped in the Procyon GenAI benchmark compared with the RTX 4060, which will be its major competition at this level. But shifting on to the PugetBench for DaVinci Resolve tests brought my first failure… not at any particular point, but every time I've tried the benchmark (even after subsequent updated driver releases just days before launch) it falls over somewhere in the process.

Then I started at the beginning of the alphabet in our gaming test suite. Black Myth: Wukong works, but delivers gaming performance behind the RTX 4060—the card it's supposed to be topping by an average 10%. Then there's Cyberpunk 2077, where the game cannot even load into the world unless you enable some form of upscaling.

Cyberpunk 2077, by the way, does deliver a significant performance boost over the RTX 4060 when you're pitting DLSS and Frame Generation against the Intel card utilising AMD's GPU-agnostic equivalent features. So, at least there's that.

A subsequent successful game bench is then followed by a total catastrophic collapse with Homeworld 3, which crashes before the first splash screen. After a back and forth with Intel's PR I could then bench the game in DX11 mode by using a command line argument on boot, but that just delivered performance well off the budget competition.



You'll be relieved to discover I'm going to abandon the benchmark blow-by-blow now, but I just want to show how inconsistent the testing process has been. Maybe I've just been unlucky and have somehow devised the perfect suite of gaming benchmarks to hit the only problems the new Arc B580 GPU has, and you might also suggest that if only three out of 11 specific tests have failed it's not that bad.

But those are infinitely more problems than you'll have if you spend another $40 on the mature Nvidia GPU. And therein lies the rub: how do you recommend someone who's not a total contrarian spend their cash on an inconsistent graphics card?

If it were merely a case of the card not performing as well in some games and easily out-pacing the competition in others, that wouldn't be such an issue; I'd take the cost savings and enjoy the hell out of a great new budget GPU. But it's not: it's a case of not knowing whether the card will even boot a given game. In the short time I've been given to test the card ahead of launch and the holiday season, I've not had the chance to just chuck a ton of different games at the Arc B580 to see how widespread these failures are. But my anecdotal evidence isn't painting a particularly positive picture.

This isn't the end for the Battlemage graphics card, however, as bad as it might seem on day one. Intel has shown with Alchemist that it is capable of shoring things up down the line with subsequent driver releases, and I'm told fixes for both my Cyberpunk 2077 and Homeworld 3 issues are in hand as I type. So Intel could again be borrowing some of AMD's classic 'fine wine' ethos, where cards that struggle at launch are slowly transformed into functioning members of gaming society by fresh software.

Certainly the outstanding performance in Cyberpunk 2077 with upscaling enabled gives me hope, as do the exceptional F1 24 results.

And, if that does happen across the board, my own testing figures do show a card that has the potential to be a real budget champion, if it can nail consistent performance across a wide range of titles. Right now, though, it's a struggle to make it a confident recommendation as a buy-it-now GPU.
