Old 24.06.2015, 18:32   #30
Manuel Calavera
Premium
Registration date: Jul 2012
Location: vk+
Posts: 14,575
Reviews differ quite a bit. I figure that's because a lot of the results fall within the margin of error (although in some places the difference isn't really small enough to swing that much).
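
Just to illustrate the margin-of-error point with some made-up numbers (nothing below is from any real review): when run-to-run noise is a couple of fps, two cards whose averages sit that close together can trade places from one test to the next.

Code:
# Rough sketch with hypothetical FPS numbers (not from any real review):
# two cards whose averages sit within run-to-run noise can swap places.
from statistics import mean, stdev

fury_x_runs = [62.1, 63.4, 61.8, 63.0, 62.5]    # hypothetical repeated runs
gtx_980ti_runs = [62.9, 61.7, 63.2, 62.3, 63.1]

def summarize(name, runs):
    m, s = mean(runs), stdev(runs)
    print(f"{name}: {m:.1f} fps average, +/- {s:.1f} fps run-to-run")
    return m, s

m1, s1 = summarize("Fury X", fury_x_runs)
m2, s2 = summarize("980 Ti", gtx_980ti_runs)

# If the gap between averages is smaller than the combined noise,
# either card can "win" depending on which run you happen to publish.
gap = abs(m1 - m2)
noise = s1 + s2
print(f"gap {gap:.1f} fps vs noise {noise:.1f} fps ->",
      "within margin of error" if gap < noise else "real difference")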

For example, at TechPowerUp the Fury X is faster than the Titan X and the 980 Ti in BioShock Infinite from 1080p all the way up to 4K. According to Hexus, the 980 Ti and Titan X beat the Fury X at every resolution in BioShock...

In Thief, the 980 Ti is faster at Guru3D, while at PCE the Fury X comes out ahead... You could find plenty more examples like that, depending on the game and the review.

But what I actually wanted to point out is how interesting this HBM is. In a lot of the games at TechPowerUp where the 980 Ti and Titan X come out on top, at 4K (no AA) the Fury X often ends up faster than the Titan X with 12 GB of GDDR5 and the 980 Ti with 6 GB of GDDR5...

Here's a take on that topic from Forbes (two quick sketches follow below the quote: one on the bandwidth arithmetic it mentions and one on logging VRAM usage):

Quote:
Is 4GB High-Bandwidth Memory Really Enough For 4K?

It’s the elephant in the room, and it represents a huge uphill battle for AMD. How do you educate consumers that 4GB of HBM is better than 8GB of GDDR5? And is it actually better, or at least equivalent?

Now, that 4GB is currently a limitation of the first-generation HBM manufacturing process. And remember that there’s 60% more bandwidth to work with here. But what is AMD doing to prevent us from hitting that frame buffer ceiling? “We found that over 70% of the memory is wasted. It’s allocated but not used,” an AMD engineer explained to me during a recent press briefing. He alluded to per-game driver enhancements as a way of improving VRAM usage, and that with DirectX 12, those bandwidth requirements decrease dramatically. He’s right, but widespread adoption of DX12 in our games is going to take a couple years. Should we be worried right now?

Sometimes you have to go to extremes to test a theory. And my theory is that 4GB of HBM is enough right now, but in the very near future (let’s say 6 months to a year) it may not be. So I fired up Grand Theft Auto V in 4K, switched all quality settings to High, turned off tessellation and ambient occlusion, and cranked anti-aliasing up to 8xMSAA. Let me stress that 8xMSAA is utterly ridiculous at 4K resolutions. But the idea was to test its limits.

This particular GTA V benchmark test (note that my standard one has MSAA turned off) brought the Fury X to its knees, clocking average framerates of 12fps to 20fps. I noticed artifacting, and gameplay frequently came to a screeching halt. In 3 of the 5 scenes, the minimum framerate dropped all the way to zero.

Meanwhile, Nvidia’s 980 Ti chugged along at 25fps to 30fps, never once stuttering or displaying artifacts.

But there’s an interesting flip side to this test. I also fired up Shadow of Mordor at 4K, with all settings completely maxed out, and recorded maximum memory usage during the game’s benchmark. The 980 Ti ate up 5.7GB of GDDR5 and clocked an average 39fps. The Fury X? 3.9GB of HBM, with an average of 45fps. Does that emphasize HBM’s efficiency? Yes. Could it have performed even better with more than 4GB of it? Probably.

Of the 10 games in my benchmarking suite, Shadow of Mordor and GTA were the only ones to really punctuate the issue. The rest handled everything I could throw at them without complaining. And while I consider these edge cases (again, this level of anti-aliasing is unnecessary at 4K) both for and against the Fury X’s 4GB ceiling, both tests prove that AMD may have their work cut out for them in the messaging department.

HBM is the way forward but it’s a risky path, and AMD is clearly playing the long game.
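
On the "60% more bandwidth" line from the quote: peak memory bandwidth is just bus width times effective per-pin data rate, so a rough sketch with the commonly cited specs (treat them as assumptions) shows where the number comes from. The 60% figure only lines up against the 290X; against the 980 Ti it's closer to 52%.

Code:
# Peak memory bandwidth = bus width (bits) * effective data rate (Gbps per pin) / 8
# Specs below are the commonly cited ones; treat them as assumptions, not gospel.
cards = {
    "Fury X (HBM1)":   (4096, 1.0),  # 4 stacks x 1024-bit, 500 MHz double data rate
    "980 Ti (GDDR5)":  (384,  7.0),
    "Titan X (GDDR5)": (384,  7.0),
    "R9 290X (GDDR5)": (512,  5.0),
}

for name, (bus_bits, gbps_per_pin) in cards.items():
    bandwidth_gb_s = bus_bits * gbps_per_pin / 8
    print(f"{name:>16}: {bandwidth_gb_s:.0f} GB/s")

# 512 / 320 = 1.60 -> the "60% more" claim presumably compares against the 290X;
# against the 980 Ti's 336 GB/s it's roughly 52% more.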
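
And on the "recorded maximum memory usage during the game's benchmark" part: on an NVIDIA card the simplest DIY version is to poll nvidia-smi while the benchmark runs. A minimal sketch; the query flags are standard nvidia-smi, the rest is my own scaffolding, and on an AMD card you'd need a different tool (e.g. GPU-Z logging).

Code:
# Minimal VRAM-usage logger: polls nvidia-smi once a second and keeps the peak.
# NVIDIA cards only; run it in the background while the benchmark runs, Ctrl+C to stop.
import subprocess
import time

def vram_used_mib():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.strip().splitlines()[0])  # first GPU only

peak = 0
try:
    while True:
        peak = max(peak, vram_used_mib())
        time.sleep(1)
except KeyboardInterrupt:
    print(f"peak VRAM usage: {peak} MiB (~{peak / 1024:.1f} GiB)")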