McG
Registration date: Feb 2014
Location: Varaždin
Posts: 8,122
Quote:
There were cases of the Intel Core Ultra 7 155H outperforming the AMD Ryzen 7 7840U on Ubuntu Linux with all these CPU/processor focused tests, but more often than not it was the existing AMD Zen 4 processor leading.... In fact, out of 370 benchmarks run on both the Ryzen 7 7840U and Core Ultra 7 155H focused strictly on the processor performance, the Ryzen 7 7840U was the best performer 80% of the time! Ryzen 7 7840U enjoyed a 28% lead over the Intel Core Ultra 7 155H in these Linux CPU performance benchmarks. This was all the while the Ryzen 7 7840U was delivering similar or lower power consumption than the Core Ultra 7 155H with these tests on Ubuntu 23.10 with the Linux 6.7 kernel at each system's defaults. The Core Ultra 7 155H also had a tendency to have significantly higher power spikes than the Ryzen 7 7840U.
Source: Phoronix
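For context, the headline percentages Phoronix quotes are usually a geometric mean of the per-test performance ratios rather than a simple average. A rough sketch of that kind of calculation, with made-up ratios purely for illustration (not actual Phoronix data):
Code:
# Illustration only: an overall lead like "28%" is typically the geometric
# mean of per-test performance ratios (Ryzen result / Core Ultra result,
# normalized so higher is better), not a simple average of percentages.
import math

# Made-up ratios for the sake of the example; not actual Phoronix data.
ratios = [1.35, 1.10, 0.95, 1.42, 1.28]

geomean = math.prod(ratios) ** (1 / len(ratios))
wins = sum(r > 1.0 for r in ratios)

print(f"Geometric mean lead: {(geomean - 1) * 100:.1f}%")
print(f"Wins: {wins}/{len(ratios)} ({wins / len(ratios):.0%})")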
Quote:
The results so far indicate only a small improvement in efficiency under load, for both single-core and multi-core scenarios. However, Intel is still a long way from AMD and Apple, even if the values should improve somewhat. Things look better with the new integrated Arc GPU, which is reserved for the faster H-series processors. At least in the synthetic tests, the performance was increased massively and Intel can overtake AMD's Radeon 780M. When it comes to the actual gaming benchmarks, however, things look a little different.
Source: NotebookCheck
Quote:
On average, upgrading to the 8592+ increased single-socket server performance by around 23.5% compared to the previous-generation Sapphire Rapids config, the Xeon Platinum 8490H. The dual-socket configuration records a 17% boost in performance. However, Intel is not in the data center market by itself. AMD's 64-core offering that the Xeon Platinum 8592+ is competing with is the AMD EPYC 9554. The Emerald Rapids chip is faster by about 2.3%. However, AMD's lineup doesn't stop at only 64 cores. AMD's Genoa and Genoa-X with 3D V-Cache top out at 96 cores, while Bergamo goes up to 128 cores. On the power consumption front, the Xeon Platinum 8592+ was pulling about 289 Watts compared to the Xeon Platinum 8490H average of 306 Watts. At peak, the Xeon Platinum 8592+ CPU managed to hit 434 Watts compared to the Xeon Platinum 8490H peak of 469 Watts. This aligns with Intel's claims of enhanced efficiency. However, compared to the 64-core counterpart from AMD, the EPYC 9554 had an average power consumption of 227 Watts and a recorded peak of 369 Watts.
Source: Phoronix and TechPowerUp
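A quick back-of-the-envelope calculation from the averages quoted above puts the perf-per-watt picture in perspective. This is only a sketch, since it assumes the quoted performance gains and the average wattages come from the same workload mix, which the article does not state:
Code:
# Back-of-the-envelope perf-per-watt from the averages quoted above.
# Sketch only: it assumes the quoted performance gains and the average
# wattages come from the same workload mix, which the article does not state.

def perf_per_watt_ratio(perf_ratio: float, watts_a: float, watts_b: float) -> float:
    """Relative perf/W of chip A vs chip B, given A's performance ratio and both average wattages."""
    return perf_ratio * (watts_b / watts_a)

# Xeon Platinum 8592+ vs Xeon Platinum 8490H: ~23.5% faster, 289 W vs 306 W average
print(f"8592+ vs 8490H: {perf_per_watt_ratio(1.235, 289, 306):.2f}x perf/W")       # ~1.31x

# Xeon Platinum 8592+ vs EPYC 9554: ~2.3% faster, 289 W vs 227 W average
print(f"8592+ vs EPYC 9554: {perf_per_watt_ratio(1.023, 289, 227):.2f}x perf/W")   # ~0.80x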
Intel's executives in full swing, with their always interesting attitude toward the competition while simultaneously executing the P.L.A.N._i_P.R.O.G.R.A.M.
EDIT:
Quote:
nVidia CFO Colette Kress spoke about the possibility of having Intel as a foundry partner during an interview at the UBS Global Technology Conference. At the moment nVidia uses both Samsung and TSMC for their chip fabrication needs, with TSMC handling the majority of the duty. TSMC is currently the strongest foundry partner, and works on making gaming GPUs for the RTX 40 series alongside the powerful chips used in data centers that are making nVidia an AI powerhouse. Meanwhile, Samsung was the supplier of the chips used in nVidia’s prior generation RTX 30 series gaming GPUs.
Intel would be the third foundry partner if nVidia decides to go this route. Getting Intel's foundry services would be a smart strategic move for nVidia. Tensions between the United States and China don't look to be easing any time soon, with the possibility of losing access to TSMC's services getting higher than anyone would like it to be. This would also be welcome news for the current administration, which is working on developing and strengthening the domestic chip-making capabilities available in the United States.
Source: HotHardware
Quote:
Intel CEO Pat Gelsinger came out swinging at nVidia's CUDA technology, claiming that inference will be more important than training for AI, as he launched the Intel Core Ultra and 5th Gen Xeon data center chips at an event in New York City. Taking questions at the NASDAQ, Gelsinger suggested that nVidia's CUDA dominance in training wouldn't last forever. "We think of the CUDA moat as shallow and small," Gelsinger went on. "Because the industry is motivated to bring a broader set of technologies for broad training, innovation, data science, et cetera." He suggested that with Gaudi 3, shown on stage for the first time, Intel will be up to the challenge, and will be able to do it with Xeon and edge PCs as well.
Source: Tom's Hardware
Quote:
AMD strikes back at nVidia with new MI300X benchmarks: the MI300X shows 30% higher performance than the H100, even against an optimized software stack. AMD also went a step further by recreating the test scenarios nVidia used with TensorRT-LLM, while additionally accounting for latency, which is typical of server workloads. Neither AMD nor nVidia intends to back out of this argument over the performance difference between the Instinct MI300X and the H100 (Hopper) GPUs. But AMD does make some strong points in comparing FP16 using vLLM, the more popular choice, against FP8, which works only with TensorRT-LLM.
The red team announced the MI300X accelerator earlier this December, claiming up to a 1.6x lead over nVidia's H100. Two days ago, nVidia fired back, saying AMD did not use its optimizations when comparing the H100 with TensorRT-LLM. The reply pitted a single H100 against eight-way H100 GPUs running the Llama 2 70B chat model. In this latest response, AMD said that nVidia used a selective set of inferencing workloads. It further pointed out that nVidia benchmarked these using its in-house TensorRT-LLM on the H100 rather than vLLM, an open-source and widely used alternative. Furthermore, nVidia used the vLLM FP16 datatype for AMD's results while comparing them against a DGX-H100 running TensorRT-LLM with the FP8 datatype, which is how AMD says these misconstrued results were produced. AMD stressed that it used vLLM with FP16 in its tests because of its widespread use, and because vLLM does not support FP8.
There's also the point that servers will have latency, but instead of accounting for that, nVidia showed only throughput performance, which does not reflect the real-world situation, according to AMD. So AMD used the same selective testing scenario nVidia did for its own second and third test scenarios, showing higher performance and reduced latency. With further optimizations, and running vLLM on both accelerators, AMD claims a 2.1x boost in performance over the H100.
Source: Tom's Hardware
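For reference, the vLLM FP16 setup at the center of this dispute looks roughly like the sketch below. It assumes vLLM is installed and the Llama 2 70B chat weights are available; the model name and sampling parameters are illustrative placeholders, not AMD's actual test configuration:
Code:
# Minimal sketch of the kind of vLLM FP16 setup the AMD tests describe.
# Assumes vLLM is installed and the Llama 2 70B chat weights are available;
# the model name and sampling parameters are illustrative, not AMD's config.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-70b-chat-hf",
    dtype="float16",          # FP16, as in AMD's vLLM runs (vLLM had no FP8 path)
    tensor_parallel_size=8,   # split the 70B model across eight accelerators
)

params = SamplingParams(temperature=0.8, max_tokens=256)
outputs = llm.generate(["Explain the difference between throughput and latency."], params)
print(outputs[0].outputs[0].text)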
__________________
AMD Ryzen 9 9950X | Noctua NH-U12A chromax.black | MSI MAG B650 Tomahawk Wi-Fi | 128GB Kingston FURY Beast DDR5-5200 | 256GB AData SX8200 Pro NVMe | 2x4TB WD Red Plus | Fractal Define 7 Compact | Seasonic GX-750
AMD Ryzen 5 7600 | Noctua NH-U12A chromax.black | MSI MAG B650 Tomahawk Wi-Fi | 128GB Kingston FURY Beast DDR5-5200 | 256GB AData SX8200 Pro NVMe | 2x12TB WD Red Plus | Fractal Define 7 Compact | eVGA 650 B5
Last edited by The Exiled; 19.12.2023 at 22:27.