Details on the upcoming AMD K11 (Bulldozer, Fusion and Bobcat) architecture
It is never boring in the world of CPUs. Regardless of who's on top, plans for next generations tend to excite everybody in the ecosystem… if you deliver, that is. The ex-Alpha engineering teams led by Dirk Meyer that created the K7 and K8 architectures messed everything up with Barcelona/Agena and the infamous TLB (Translation Lookaside Buffer) bug. Shanghai/Deneb cleaned a lot of things up and AMD is back to being competitive again, but Intel is pushing hard: Intel is operating in tick-tock architectural mode, and so far AMD hasn't been able to answer back. K10 and K10.5 were nothing more than improvements over the K8 architecture.
AMD's Kryptonite versus Intel's Tick-Tock, or are things really as they seem?

Looking at public and leaked roadmaps, it looks like AMD's K11, or Bulldozer core, is shaping up to be what the Core architecture was for Intel. AMD went to the drawing board back in 2005 and started work on the "K11" architecture. To date, AMD has delivered only three "tick" and seven "tock" architectures, with the latest one breaking the tradition: K5 was launched in 1996, and K7 followed three years later. K8 was an evolution planned to debut in late 2001, but numerous (manufacturing-related) delays postponed the part until April 2003. If the current schedule sticks, we'll have had to wait eight years between K8 and K11. At the same time, even though Intel likes to tout its "Tick-Tock" cadence, the reality is that even the Nehalem architecture is remotely based on the Pentium Pro core, and if we look past the "P6" architecture, we see that Intel has delivered five genuinely new architectures as ticks, and a gazillion tocks.
M-SPACE, or how Fusion came to be…

The Bulldozer architecture is actually a consequence of the failed tie-up between AMD and nVidia. Back in 2005, AMD felt that it had Intel by a certain part of the male body (direct quote from an unnamed exec) and wanted to merge with nVidia. That fell through because Jen-Hsun Huang wanted the CEO position, and the rest is history: AMD had already borrowed money to buy nVidia and had no choice but to seal the deal with ATi Technologies. The key driver behind the birth of the Bulldozer architecture is the M-SPACE design (Modular, Scalable, Portable, Accessible, Compatible, Efficient), a GPU-like "LEGO block" architectural concept that became a mantra in AMD's halls. Under the M-SPACE design guidelines, the Bulldozer (10-100W TDP) and Bobcat (1-10W) cores were supposed to address different market segments, but the way of creating a processor was exactly the same. The goal was to have Bobcat address the OLPC/netbook/MID market, then considered a crazy vision of Nicholas Negroponte's - can anyone today say "Nicholas was crazy"? Bulldozer was the "big daddy" core, going head to head against the Pentiums and Xeons of the day. Unfortunately for AMD, Intel got there first (Core 2, Atom).
In order to understand M-SPACE, we need to take a look at graphics chips: a GPU manufacturer will release a high-end part and then decrease the number of logic units depending on the targeted die size (cost). AMD saw M-SPACE as the way to offset its biggest disadvantage: lack of available die space. A lot has changed since then: AMD spun off its foundry operations to GlobalFoundries, and ATI's upcoming 32nm GPUs will come from the ex-Fab38 (Fab 1, Module 2) GlobalFoundries facility in Dresden. One of the key components of M-SPACE is the future CPU sockets - servers will get G34 as part of the Maranello platform (LGA) - but consumer platforms won't stay on Socket AM3 either. AMD plans to introduce G sockets across the board, since they will be a necessity for a new memory controller, display connectors, PCI Express 3.0 and so on. Socket AM3 and its 940 pins just won't cut the mustard, but 2000+ lands on a Land Grid Array might. This also means that pins are waving goodbye to mainstream consumer platforms: AMD will introduce LGA on desktops and start to push BGA (Ball Grid Array) on notebook platforms.
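To make the "LEGO block" idea behind M-SPACE more concrete, here is a minimal sketch of composing processor designs from reusable building blocks, the way the article describes Bulldozer-class and Bobcat-class parts sharing one construction method. All block names, areas and TDP figures below are invented for illustration; they are not AMD specifications.

```python
# Illustrative sketch of a modular "LEGO block" chip-design concept.
# All names and numbers are hypothetical, not real AMD M-SPACE data.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    area_mm2: float   # die area this block occupies (invented)
    tdp_w: float      # power budget this block consumes (invented)

# A small library of reusable blocks.
CPU_CORE = Block("cpu_core", 15.0, 10.0)
GPU_SIMD = Block("gpu_simd", 8.0, 6.0)
MEM_CTRL = Block("mem_ctrl", 12.0, 4.0)

def compose(blocks):
    """Sum area and TDP of a composed design - snapping blocks together."""
    return (sum(b.area_mm2 for b in blocks), sum(b.tdp_w for b in blocks))

# "Bulldozer-class" design: four CPU cores plus a memory controller.
big = [CPU_CORE] * 4 + [MEM_CTRL]
# "Bobcat-class" design: one CPU core, one GPU block, same building blocks.
small = [CPU_CORE, GPU_SIMD, MEM_CTRL]

area, tdp = compose(big)
print(f"big:   {area} mm^2, {tdp} W")   # 72.0 mm^2, 44.0 W
area, tdp = compose(small)
print(f"small: {area} mm^2, {tdp} W")   # 35.0 mm^2, 20.0 W
```

The point of the sketch is the one the article makes: both the large and the small design come out of the same parts bin, with only the block count changing to hit a die-size and TDP target.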
CPU becomes APU

The Bulldozer core will be implemented across the range - server, desktop, notebook - launching on servers first, followed by desktop and notebook. Server-wise, AMD plans to introduce three parts: a single-die quad-core and octal-core for the launch, with a dual-die 16-core part to follow later. The quad-core and octal-core succeed Sao Paolo (Istanbul on Socket G34), while Magny-Cours (a 12-core dual-die part on Socket G34) will be succeeded by Montreal, a 16-core dual-die part. We first heard the "Montreal" codename back in 2006, so it might have changed by now. All of these parts sit on the Maranello platform, which will be introduced early next year.
When it comes to the world of desktops, Bulldozer arrives as two parts: Orochi and Llano. Orochi is the first M-SPACE design to feature as both an Opteron and a Phenom, with four Bulldozer cores and 8MB of cache. Llano is the new key processor for AMD's commercial desktop and notebook efforts. Dubbed the Accelerated Processing Unit (APU), it combines a quad-core processor with ATi's DX11 core (both manufactured at 32nm - the CPU die is SOI, the GPU die is bulk). Looking at Ontario's specs, it is clear that this dead ringer for Falcon (Kuma + ATI core) is everything Falcon was supposed to be: a dual-core CPU packed with a DirectX 11 based core in BGA packaging, targeting the ultra-portable and netbook markets.
If we look at the specs, it is beyond any doubt that this architecture is another "hammer" - but a hammer for Intel's line-up of today. Intel will launch 32nm Westmere in 2010 and hold a roughly 11-month advantage over AMD in terms of manufacturing process. To make matters worse, Sandy Bridge is Intel's new architecture en route for 2011, and there is a big question looming over heads at AMD: what will the state of the market be once Bulldozer finally launches? 2011 is not too late for a Fusion "APU", though. Even though Intel will launch its 32nm Arrandale processor in Q1'10, the performance and compatibility of its integrated graphics are a far cry from usable. Intel's integrated graphics currently does little more than output a picture to the display, and a DX11-compliant, OpenCL-compliant part that is a decent low-resolution gaming performer will cause serious headaches for Intel. Once Intel integrates those features into Larrabee and Sandy Bridge, then we will be able to speak about problems for AMD. To us, it looks like AMD is on a path of innovation. But when will AMD stop being "late to the party"?
Source: Bright Side of News
Granted, Theo Valich is the author of the text, but if nothing else he has put everything in one place.