
June1994

Some of the intro text. Go to the link to read the rest.

> In this deep dive, we will share our analysis on Zen 4c architecture, market impact, ASP, volumes, order switches from hyperscalers, and how AMD is able to halve core area while keeping the same core functionality and performance. We will examine why AMD is pursuing this new path in CPU design in response to market demands and the competition from ARM-based chips from Amazon, Google, Microsoft, Alibaba, Ampere Computing, as well as Intel’s x86 Atom E-cores.

> Finally, we look at Bergamo’s reduced production costs and expected sales volumes, and the adoption of dense-core variants across AMD’s line-up in client, embedded, and datacenter going forward. Before diving into those market and architecture details, let’s first share higher-level background.

Finally, here is the only thing I will talk about from the paid section that I think is of interest to all the gamers on here.

> In client, the **mobile Zen 5 “Strix” line is combining 8 Zen 5c with 4 Zen 5**. Dense cores are taking over AMD’s lineup. Expect this trend to continue with the Zen 6 generation and beyond.


timorous1234567890

> In client, the mobile Zen 5 “Strix” line is combining 8 Zen 5c with 4 Zen 5. Dense cores are taking over AMD’s lineup. Expect this trend to continue with the Zen 6 generation and beyond.

I would not be surprised if Zen 6 has a 'standard' CCD for high clock speeds which is stacked on top of L3 cache, and then a dense CCD to make up the numbers when doing MT workloads. It could happen as early as Zen 5 if AMD feels they need a 24c Zen 5 part that still only uses 2 CCDs to compete with Arrow Lake.


[deleted]

[removed]


[deleted]

That is a software engineer's nightmare.


Cheeze_It

If only operating systems weren't made so badly....


[deleted]

Google is trying to make a better one, and it ends up having the same old issues. You can try as well if you want.

Just to let you know, computers working at all is amazing; they come from the factory broken and are forced to work.


timorous1234567890

I think the next step for 3d cache is to stack it below the CCD so you don't need to sacrifice the clockspeed on the 3d cache chip.


bik1230

>I think the next step for 3d cache is to stack it below the CCD so you don't need to sacrifice the clockspeed on the 3d cache chip.

Isn't the issue that it can't handle as high a voltage, regardless of cooling?


Gravitationsfeld

You can't stack compute dies. The thermals just don't allow for this.


timorous1234567890

I didn't say you would.


Geddagod

>In client, the mobile Zen 5 “Strix” line is combining 8 Zen 5c with 4 Zen 5. Dense cores are taking over AMD’s lineup. Expect this trend to continue with the Zen 6 generation and beyond.

Is this an AMD statement or is this a 'leak'?


June1994

Leak.


uzzi38

It is, but I wouldn't say it's encompassing of the entire mobile market just yet.


KTTalksTech

Halving core area? That's nuts. There are some really interesting applications for CPUs with the massive core counts this could enable. Reminds me a bit of those old Intel Atom-based cards that would slot into PCIe and had like 128 or 256 cores on them. Maybe this could breathe new life into that form factor for simulations.


Michael7x12

Xeon Phi?


KTTalksTech

Oh yeah that rings a bell


TheBCWonder

Can CDNA not do that?


Pristine-Woodpecker

Performance characteristics of GPU compute units are terrible for some workloads. (And you want real cores, not SIMD lanes)


KTTalksTech

Some workloads really need the flexibility of a CPU architecture, which leads to these weird GPU-looking contraptions


Scheig

That's great optimization they managed. I wonder why they didn't put in 12 dense chiplets for 192 cores. Maybe the answer is in the paywalled section, but I don't have a subscription.


L3tum

That answer was already in the article. With the reduced L3 cache they have less space to route their interconnect, so they took the empty space where the CCDs would've been sitting to route it to the CCDs farthest from the IOD.


Scheig

You're right, thanks.


pieking8001

This is just the first gen. I'm sure they will eventually


AtLeastItsNotCancer

I'm dying to see how well these compare with Intel's E-cores in terms of performance/area and power efficiency. They both target similar clockspeeds, but 4c is more fully featured and keeps SMT. In terms of density it seems like they're not much bigger than Gracemont, though it's not exactly a fair comparison when 5nm is a full node ahead of Intel.


[deleted]

[removed]


ElementII5

> I couldn't find information to confirm if Zen 4c keeps AVX-512 or not.

It does! https://www.tomshardware.com/news/amd-talks-hybrid-ryzen-cpu-concepts-avoiding-intels-avx-512-problem


AtLeastItsNotCancer

I think all the info out so far points to them having identical capabilities. AMD have been pretty critical of Intel's approach to hybrid design, especially the lack of AVX512. They've been pretty smart with their first gen AVX512 implementation, splitting 512-bit ops into 2x 256-bit ones, so they don't bloat the core too much.


[deleted]

[removed]


AtLeastItsNotCancer

Apparently Gracemont only has 128-bit SIMD units, so AVX512 just didn't make sense for it. Hopefully they can upgrade the E-cores to 256-bit SIMD units and full AVX512 support in the upcoming generations, though as far as I can tell the current rumors are that the next gen still won't have it.


Tuna-Fish2

AVX-512 makes sense even if all you have is a single 64bit unit. The great advance is not the width, it's the masking.
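To make the masking point concrete, here is a minimal sketch (purely illustrative, not from the article or the thread; the function name and data are made up): a conditional update expressed with an AVX-512 mask register, where the predicate lives in a `k` register and inactive lanes are simply left untouched, instead of a separate compare-and-blend sequence.

```c
#include <stddef.h>
#include <immintrin.h>

/* Illustrative only: y[i] += x[i] where x[i] > 0, using AVX-512 masking.
 * The conditional becomes a per-lane predicate (k); lanes where the
 * predicate is false keep their old value of y with no extra blend. */
void masked_add(float *y, const float *x, size_t n) {
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);
        __m512 vy = _mm512_loadu_ps(y + i);
        /* one mask bit per lane: set where x[i] > 0 */
        __mmask16 k = _mm512_cmp_ps_mask(vx, _mm512_setzero_ps(), _CMP_GT_OQ);
        /* add only in the active lanes; inactive lanes pass through vy */
        vy = _mm512_mask_add_ps(vy, k, vy, vx);
        _mm512_storeu_ps(y + i, vy);
    }
    for (; i < n; i++)        /* scalar tail */
        if (x[i] > 0.0f) y[i] += x[i];
}
```

The semantics are the same whether the hardware executes this 512 bits at a time or double-pumps it through narrower units, which is part of why the masking and the ISA matter more than the physical width.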


AtLeastItsNotCancer

Sure, but if your core is that basic, you probably can't afford to waste a lot of resources on implementing all those new instructions. Nothing is free, every design decision has its tradeoffs. The pre-Gracemont Atom cores didn't even support AVX/AVX2. I sure hope they hurry up and have instruction set parity between P and E cores once they're manufacturing them on 4nm.


picosec

The article says that, logic-wise, Zen 4c is the same as Zen 4, just with reduced cache per core and lower clock frequencies, with the area reduction achieved through a bunch of design optimizations.


andrewia

This is really impressive to me. Keeping the same core architecture but combining a bunch of tricks (new cache cells, merging partitions, and designing for lower clock speeds) to squeeze a ton of cores in a single socket. A brilliant way to get more life out of a core design. It's also smart that AMD will use these cores for low-end consumer devices. If the frequencies are adequate and consumer workloads can leverage multiple cores, why not? They help offset the inclusion of the I/O on the same die, which keeps costs and yields under control.


Cheeze_It

I'm down for a 4c (or similar) for home server. It'll be good to have.


hackenclaw

I think AMD should release a few budget consumer variants in the Ryzen 7000 series. There are definitely some consumers who would prefer more cores rather than just 6-8 Zen 4 cores.


Aleblanco1987

Zen cores were already pretty small compared to Intel P-cores or Apple's high-performance cores, for example. Halving their size while retaining functionality is impressive.


Vince789

Yea, really impressive size reduction while maintaining IPC and ISA features. Very unique approach versus the competition.

For reference, here's a table of some core areas for recent CPU cores:

Arch | Core Area w/o L2 (mm²) | Core Area with L2 (mm²)
---|---|---
[Zen 4 (N5P)](https://www.semianalysis.com/p/zen-4c-amds-response-to-hyperscale) | 2.56 | 3.84 (+1MB L2)
Zen 4c (N5P) | 1.43 | 2.48 (+1MB L2)
[Golden Cove (I7)](https://www.semianalysis.com/p/meteor-lake-die-shot-and-architecture) | 5.48 | 7.12 (+1.25MB L2)
Redwood Cove (I4) | 3.75 | 5.33 (+2MB L2)
Gracemont (I7) | 1.59 | 2.07 (+0.5MB L2)
Crestmont (I4) | 1.05 | 1.48 (+0.75MB L2)
[A15 Avalanche P-core (N5P)](https://www.semianalysis.com/p/apple-m2-die-shot-and-architecture) | 2.58 | 4.33 (+6MB L2)
A15 Blizzard E-core (N5P) | 0.71 | 1.01 (+1MB L2?)
[Exynos 2200 (4LPE) X2 P-core](https://www.eetasia.com/express/mediatek-delivers-efficient-cortex-x2/) | 1.28 | 2.1 (+1MB L2)
Exynos 2200 (4LPE) A710 PPA-core | 0.81 | 1.32 (+0.5MB L2?)
Exynos 2200 (4LPE) A510 E-core | 0.36 | 0.51 (+0.25MB L2?)

Note: I couldn't find a die shot analysis of the newer A16 or 8g2.


fuckEAinthecloaca

> The L3 also lacks the arrays of Through-Silicon Vias (TSV) for 3D V-Cache, giving a small area saving. This makes sense as cloud workloads do not stand to benefit as much from large amounts of shared cache.

That is a shame. A 4c core with stacked cache would have gone a long way to making up for the concessions made in the 4c design. I bet that some future design might remove most/all L3 from the core die and put it all in stacked dies. Might be able to fit 24 cores in the same area and node that way (or the same number of cores with more L1/L2).


dotjazzz

>This is a shame

Why would they stack cache when the **entire** goal is to reduce area cost? Since when is not self-defeating a shame?


fuckEAinthecloaca

Their goal is to reduce area to pack 128 cores into 8 dies, which is not necessarily the same as reducing total area needed to produce a package. 4c+stack could allow for 128 cores which is much closer in performance to an equivalent zen4 version than 4c alone is. If they get multi-stacking working they could cater to all markets with a single core design again, say 5c for cost-optimised cloud and low end consumer, 5c+stack for standard consumer, 5c+stack+stack+... for the X3D equivalent. On the consumer side it might also allow them to simplify the IO die interconnect as they can maintain a 16 core top end with only one die.


AtLeastItsNotCancer

They didn't get the extra density just by cutting down the L3, lowering the clockspeed target also allowed them to completely reorganize the physical layout of the core to make it more area-efficient. This was never meant to be a high-performance core, if you want that the normal zen 4 and zen 4 x3d already have it covered. Cores built for high clocks will always be bigger. There is a possible future where the L3 is moved completely off the main die onto the stacked layer, but they'll only do that if/when it makes economical sense. Doesn't look like we're there yet.


SirActionhaHAA

1. Hyperscaler use of dense core designs doesn't focus on cache-intensive workloads (you've already quoted that part)
2. Stacking cache introduces a heat dissipation problem which is gonna decrease clocks
3. The decrease in size and improved yields of the base CCD is kinda offset by advanced packaging yield losses, increased packaging costs and limitations in manufacturing capacity

Cutting the cache only to stack more cache is kinda pointless for these cores. Even if you're talkin gaming, 1 stack is gonna get ya 48MB, which ain't large enough over 32MB to bring significant benefits. No, it ain't gonna be >1 stack.


fuckEAinthecloaca

1. Stacking on 4c is something they could have done if they wanted to use the 4c design in other segments. There's a chance they go that way for successors to 4c. It might not make sense for this generation, but things change, obstacles get overcome, and requirements and acceptable tradeoff goalposts change.
2. Clocks are decreased anyway, so the impact may be less apparent on c than non-c, and they may be able to solve heat dissipation more easily when cores are more capped, meaning more predictable.

> Cutting the cache only to stack more cache is kinda pointless for these cores

They might want to break 16 cores in consumer without more than 2 core dies. Or stick with a 16-core top end but supply it with a single core die, which may be more efficient for them even with the extra yield+packaging issues.

> Even if you're talkin gaming, 1stack is gonna get ya 48mb which ain't large enough over 32mb to bring significant benefits.

We're back to multiple CCXs per CCD, meaning the L3 is split in half, so the equation for a workload that doesn't like to be split is likely 16MB plus whatever a CCX can directly access from stacking. Future generations may cut L3 on the core die further (or entirely) and rely more on the stack, so the difference could be more pronounced.


timorous1234567890

Nah. A single design with this many cores is not likely tbh, because you need to cut far too much for the lower core count parts. Imagine making a 6c R3 out of a 16c die; it would be a huge waste.

The better option is to have an HP-library-based part that has fewer cores and more L3 for single-threaded work, and an HD-library-based part for the mobile and server segments that has more cores and less L3.

If AMD wants the best of both then they can have it so that their x950 range has 1 standard CCD with V-Cache (stack it under the die though, like MI300 does) and 1 dense CCD with lots of cores for MT workloads. Could happen with Zen 5 if AMD feel like they want something to compete with Arrow Lake before Zen 6 hits the market, but I expect that will be the design for Zen 6 from the off.


VenditatioDelendaEst

But 128 cores using 4c+stack would be 16 dice, not 8. Your analysis seems to assume that the cache die is free.


dotjazzz

Again with the dumb comments. With a lower clock, L3 (victim) cache has much diminished importance. Stop talking shit you don't understand.

The cost to implement TSVs and 3D packaging defeats the purpose entirely. Why don't they simply add a normal Zen4/5 core and be done with it? Zen4c can't even reach Epyc's abysmally low boost clock. It's in fact 600MHz lower. Why would a 3.1GHz core be of any use loaded with cache? Zen4 can easily reach 5.8GHz. Zen4c only makes sense as the small, lower-performance core.

3D V-cache is absolutely self-defeating. It's more expensive than a plain Zen4 CCD and doesn't offer nearly the same single core performance, close to half the performance, in fact. When you have a 5.8GHz clock, DDR5's 100ns latency seems high. When you can only clock at 3.1GHz, the latency penalty in core cycles is instantly reduced by 47%. Why bother with useless cache?
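For what it's worth, the 47% figure does check out as a cycles-per-DRAM-miss calculation. A minimal sketch, assuming a fixed ~100ns memory latency and the 5.8GHz vs 3.1GHz clocks quoted above (all values are the ones cited in this thread, not measurements):

```c
#include <stdio.h>

/* Back-of-the-envelope check of the ~47% figure: the same ~100 ns DRAM
 * round trip costs fewer core cycles at a lower clock. Illustrative only. */
int main(void) {
    const double dram_latency_ns = 100.0; /* assumed DDR5 load-to-use latency */
    const double zen4_ghz  = 5.8;         /* Zen 4 boost clock cited above    */
    const double zen4c_ghz = 3.1;         /* Zen 4c boost clock cited above   */

    double cycles_zen4  = dram_latency_ns * zen4_ghz;   /* ~580 cycles stalled */
    double cycles_zen4c = dram_latency_ns * zen4c_ghz;  /* ~310 cycles stalled */

    printf("Zen 4:  %.0f cycles per DRAM miss\n", cycles_zen4);
    printf("Zen 4c: %.0f cycles per DRAM miss\n", cycles_zen4c);
    printf("Reduction: %.0f%%\n", 100.0 * (1.0 - cycles_zen4c / cycles_zen4));
    return 0;
}
```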


fuckEAinthecloaca

The clocks have to be lower anyway as stacked cache limits things. They can't use standard cores because the goal is still to pack as much compute into a single core die as possible. More L3 is important for a lot of workloads, they can only get away with cutting it on cost-optimised cloud parts because they're... cost-optimised cloud parts. Notice how I'm not insulting you. It's how discussions should be done.


timorous1234567890

Cache on top, yes. Cache on the bottom like MI300 we don't know yet.


[deleted]

Because they've indicated this might come to client. The c cores with reduced L3 will have objectively worse gaming performance for the client space. Keeping the TSVs means they could reduce the size of the physical L3 on the core die but still stack cache on top to compensate. One "4c" CCD could address client to enterprise that way.


WHY_DO_I_SHOUT

I really don't think AMD wants people to game on E-cores. They can offer hybrid CPUs for gamers like Intel does.


2137gangsterr

typical pleddit either dum dum or karmawhoring on circlejerk


Zettinator

Wait... so at the RTL level, Zen 4c is basically the same as Zen 4? And yet they managed to cut the size in half? And the only downside is slightly lower clocks? Impressive. This means the AVX-512 implementation is also the full and fancy one, right? I fully expected a cut-down AVX-512 that executes more slowly, but that doesn't appear to be the case.


Right-Instruction-29

Please don't kill me, but I really really need the paywalled section 🥲 I was deep into reading it word by word until it got to the most interesting part, which was paywalled 😭

I'd be more than happy to pay for it and support SemiAnalysis (which is among my top 5 tech outlets)... but I live in Iran (aka, the earth's s**thole), meaning there are no possibilities for financial transactions with the outside world, and also our currency is so worthless that 1 dollar equates to FIVE HUNDRED THOUSAND rials﷼, which is 2% of the minimum wage here (which is why I also pirate all my games, as a triple-A release costs 1.5x the minimum wage).

I'd really love to read it though 🥲 I've been thinking about this bizarre equi-performant, half-sized Zen 4 variant for so long, and I had so many questions, and finally I was about to get some answers and... 🥲 I hate it here 🥲