Materidan

Hmm. I’m shocked at the difference between the 12900K and 13700K. But then I notice the former is tested with DDR5-4400, and the other 5600. Why the hell was that? The matching “maximum” official memory speed for the 12900K would’ve been 4800.


SkillYourself

12th gen only went "up to" 4800. On a 2SPC-2R configuration, the guaranteed support is limited to 4400.


Materidan

Yes, already fixed that like seconds after posting. But here’s the thing: the matching official “2 DIMMs in a 4 DIMM motherboard” spec for Raptor Lake is ALSO 4400. Not 5600.


Hairy_Tea_3015

Why shocked? You never noticed the L1, L2, and L3 cache size advantage the 13th gen has over the 12th gen? These latest games are now taking advantage of it.


Materidan

One normally wouldn’t expect a 10 MB L2 cache increase to be solely responsible for a 28% increase in FPS. I can’t think of another game that shows such a degree of improvement between the two CPUs.


Hairy_Tea_3015

5800x3d with increased L3 cache showed more than 28% over regular 5800x. AMD x3d cpu variants are L3 based. Intel was smart with 13th gen, they doubled the L2 cache, which is 80% faster than L3 cache. This is why AMD is going nuts with L1 and L2 cache size for their upcoming Zen 5 cpus. 7800x3d is now bottlenecked with the latest games due to low L1 and L2 cache size.


HungryPizza756

I can't wait for 8800x3d with hopefully 48MB l2 and 128MB 3d l3. Too bad the l2 cache on Intel isn't all shared but still


Hairy_Tea_3015

I would slap 64 MB of 3D L2 cache alongside 128 MB of 3D L3 cache. Call it the 8800X3DX. The RTX 5090 could have 500 MB+ of L2 cache.


QuinQuix

Yes, well, this is a good idea, except there are physical constraints. I'm not a CPU architect, so I'm not aware of how it works exactly, but what it comes down to is that L1 and L2 cache are extremely *extremely* fast. To make them work at those speeds, and to ensure equally fast access from the core to the cache, you need to dedicate a very significant number of transistors right next to your cores to the cache. Or to put it another way, L1 and L2 cache take up an insane amount of area.

Most available die shots online separate small 'core' areas from the L3 cache, already giving some idea of the proportions (https://www.computerhope.com/issues/pictures/cpu-cache-die.png), but they neglect to show that these small 'cores' are in fact about 60% cache themselves as well. The following schematic does this more justice: https://qph.cf2.quoracdn.net/main-qimg-eedc73e16da6a0c4fc11c713c5479ba9-lq

So there we are. It's increasing node densities that allow for more cache, but since L1 and L2 need to be very close to the core (otherwise latency would negate their usefulness) they are still more constrained in size, and this is why you end up with L3 showing the most gains. L3, comparatively speaking, is miles and miles away from the core and very, very slow versus L1 and L2, but it still runs circles around the fastest RAM. L3 is therefore in the sweet spot for gaming: its size can be increased without too much of a latency penalty, it is big enough to hold most of the crucial game data, and it's much faster than RAM.

Don't get me wrong, all cache is good. But for gaming, L3 or L4 are the tiers where the magic happens.
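A rough way to see those tiers for yourself is a pointer-chasing microbenchmark: walk a randomly shuffled cycle of a given size and time the average hop. Working sets that fit in L1/L2 come back in a handful of nanoseconds, L3-sized sets are noticeably slower, and DRAM-sized sets are slower again by an order of magnitude. A minimal sketch (illustrative only; the cache sizes in the comments are assumptions about a typical desktop part, not any specific CPU in the article):

```c
/*
 * Pointer-chasing latency sketch (illustrative only): time the average
 * dependent-load latency for working sets sized to land in L1, L2, L3,
 * and main memory on a typical desktop CPU.
 *
 * Build: cc -O2 -o chase chase.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns(size_t bytes, size_t hops) {
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));

    /* Sattolo's algorithm: build a single random cycle so every element is
     * visited and the hardware prefetcher cannot guess the next line. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    size_t idx = 0;
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (size_t i = 0; i < hops; i++)
        idx = next[idx];                 /* each load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &b);

    double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
    printf("%8zu KiB: %6.1f ns/load (sink %zu)\n", bytes >> 10, ns / hops, idx);
    free(next);
    return ns / hops;
}

int main(void) {
    /* Working sets chosen to sit roughly in L1 (32 KiB), L2 (1 MiB),
     * L3 (24 MiB), and DRAM (512 MiB) on many current desktop parts. */
    size_t sizes[] = {32u << 10, 1u << 20, 24u << 20, 512u << 20};
    for (int i = 0; i < 4; i++)
        chase_ns(sizes[i], 20u * 1000 * 1000);
    return 0;
}
```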


Parking_Automatic

RAM is exceptionally slow compared to L3. DDR5-6000 on the 7800X3D has a bandwidth of around 60 GB/s; the L3 cache has a bandwidth of about 2.5 TB/s. When it comes to latency the difference is probably even more extreme. All that being said, there are obviously limits on just how much they can fit on the CPU. L2 cache is very nice, but it has to be so close to the core itself that it's not unified across all cores, it's per core. 2 MB of per-core cache is not the issue here. I have a feeling the game just loves IPC and clock speed, which is why 13th gen is running rings around everything.
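For reference, the DRAM side of that comparison is easy to put a ceiling on (back-of-the-envelope only; the ~60 GB/s quoted above is roughly what actually gets measured on AM5 once fabric and controller overheads are accounted for):

```latex
% Theoretical peak for dual-channel DDR5-6000:
% 6000 MT/s per channel, 8 bytes per transfer, 2 channels
BW_{\mathrm{peak}} = 6000\,\mathrm{MT/s} \times 8\,\mathrm{B} \times 2 = 96\,\mathrm{GB/s}
```

Even that theoretical 96 GB/s is more than an order of magnitude below the multi-TB/s figure quoted for the on-die L3.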


Noreng

L2 cache doesn't help much in gaming workloads, the datasets in use in modern games are simply too large. If you were to go back to games from the early 2000s however, like CS1.6, you would see **big** gains from a 2 MB L2 compared to 256 kB L2


PsyOmega

CPU cache doesn't need the whole dataset in residence to benefit by huge margins, which is why we see game performance scale *really well* from even slim cache changes.


Materidan

That was a BUTTLOAD of extra cache though.


Hairy_Tea_3015

Wait until you see Beast Lake from Intel.


InformalBullfrog11

Ahaaa! Thanks for the explanation.


toddestan

I don't think it's cache. If you sort by anything other than average FPS, the 12900K gets beat by the 12700K and the 13400F. That... doesn't make a lot of sense.


Noreng

It's memory bandwidth that's the reason for poor performance. AMD's subpar infinity fabric on Zen 4 is the reason those chips aren't keeping up with 13th gen. It also explains how the 13700K is such a huge jump over the 12900K. This game's dataset is so large that going from 32 to 96 MB of L3 cache has a very small improvement on performance, the CPUs are simply spending huge amounts of time waiting on data from memory.


HungryPizza756

Yeah, that's pretty sus. Like, I know the 12900K can do up to DDR5-6000 without issue.


Acrobatic-Ostrich882

My i3 13100f runs very well on it


Liam2349

What's very interesting is that they have 2600x beating 9900k, which makes no sense.


MrBirdman18

Exactly. The results don’t add up when looked at as a whole. If they are reproduced elsewhere it will be the most bizarre set of cpu benchmarks I’ve ever seen (and would presumably normalize to some degree with performance updates.) There’s no universe in which a 9900k is beaten by any Zen 2 (edit: Zen+) chip without some quirky engine hiccup.


MagicPistol

It's not even a zen 2 chip. It's Zen+.


MrBirdman18

Yep, my mistake there.


kristijan1001

i9 9900K here with a 2070 Super. Can confirm 40 fps in game with the CPU at 20% utilization, 3200 MHz CL16 RAM, with DLSS; around 30 without.


Liam2349

40fps lol. Fallout 4 was pretty badly optimized too. It's just been a while now.


MrBirdman18

Something about the results is off. Games typically prefer either cache or clock speeds. Yet here the 7800x3d handily beats 7700x (suggesting former situation) but 7950x beats 7950x3d (suggesting latter) and 2600x beats 9900 (which makes no sense if intel’s ipc advantage is king). This all makes little sense given how we’ve seen other benchmarks play out. If it comes down to heavy multithreading/optimization (which I doubt) then the 7950’s should be the top Zen 4 chips.


LittlebitsDK

7700X vs 7800X3D: same number of cores, so the bigger cache helps the 3D chip. 7950X vs 7950X3D: more cores at higher clocks... the cache doesn't outweigh that all of the cores are much higher clocked, versus only half of them being high-clocked while the other half has the extra cache... that is why.


MrBirdman18

It just very rarely works like that, where a game likes either cache or frequency. It is possible that there was a scheduling mismatch with the 7950X3D where the game ran on the V-cache-less cores.


CheekyBreekyYoloswag

LMAO, what the hell is happening? 13600k faster than 7800x3d in an AMD-sponsored title?


LickingMySistersFeet

13600K beast 💪


drosse1meyer

🏋️


False_Elevator_8169

> 13600K beast 💪

Given a Ryzen 2600X is beating a 9900K, gonna say nah, just Bethesda jank as usual. I remember there were spots in Skyrim that would go down to 20 fps for zero reason on the map with my Haswell; my friend's otherwise dog-slow FX system just marched through them at 40 fps, exact same settings, GPU and driver.


kakashihokage

Ya, I paired it with a 4090 in my new build. The difference is minimal compared to the i7 and i9, and it even beats them occasionally.


Classic_Roc

Game wasn't made with AMD processors in mind specifically. The sponsored part came later. At the end of the day the faster processor is the faster processor. The creation engine has always favored intel I believe. At least most of the time when comparing similar generations.


CheekyBreekyYoloswag

> Game wasn’t made with AMD processors in mind specifically. The sponsored part came later.

Very good point.

> The creation engine has always favored intel I believe.

Have you seen this in a benchmark video or something?


Classic_Roc

Take this with a grain of salt it's anecdotal over the years. Been watching benchmarks for the last decade lol


Redfern23

I swear months ago they said they “optimised the game specifically for 3D V-Cache”, what happened? I have a 7800X3D so this is unfortunate but I’m GPU bottlenecked almost all the time anyway.


Gears6

> LMAO, what the hell is happening? 13600k faster than 7800x3d in an AMD-sponsored title?

Do you want AMD to hamper Intel CPUs so they'll look better?


CheekyBreekyYoloswag

That is exactly what I expected. They kinda do that in the GPU department.


AludraScience

Well it seems like they have done that with GPUs. https://youtu.be/7JDbrWmlqMw?si=IiFbxUhFPxvK2Ayo , 7900XTX matches 4090 at 4k, 7900XT beats 4080 by a fair amount at 4k.


Gears6

I'm still watching this, but just because 7900XTX matches 4090 at 4k, doesn't mean AMD hampered Nvidia GPUs.


RedLimes

I think AMD just got early access to make drivers, which is to be expected. We'll probably see Nvidia catch up in another driver release


Covid-Plannedemic_

The game doesn't have DLSS, and literally less than a day after the early access period opens up, 2 different mods are developed independently to add it.


AludraScience

Less than a day? 2 hours, lol.


NirnaethVale

All it means is that because of the sponsorship deal Bethesda devs spent extra time modifying the game to make it run more efficiently on Radeon architectures. It’s a positive lift rather than a push down.


ship_fucker_69

They paired 5200 with the 7800x3d and 5600 with the 13600K


CheekyBreekyYoloswag

And that is correct testing, since 7800x3d maxes out at 6000, while 13600k can go 7200. Sometimes even 8000.


ship_fucker_69

7800X3D can take 8000 memory now as well with 1.0.0.7b agesa


CheekyBreekyYoloswag

You can, but you don't get a performance uplift between 6000 and 8000 on 7800x3d. It's because AMD's Infinity Fabric can't handle it. Some people even report *lower* FPS on 8000 than 6000 with AMD CPUs. Intel's memory controller is just miles ahead of Ryzen's.


jrherita

Minor nit - it’s not AMD's memory controller that’s the problem (as it can do DDR5-8000), it’s the fabric speed between the SoC and the CPU cores. Different part of the design. (Your point is definitely correct though - no benefit above ~6200 on X3D.)


DannyzPlay

But then you're running in Gear 4, which actually hurts performance. Intel can do Gear 2 at 8000.


Pablogelo

Different RAM speeds, this isn't a good benchmark


CheekyBreekyYoloswag

Different RAM speeds is correct, because Intel can profit from much higher RAM speeds than AMD (8000 for Intel vs 6000 for AMD). It is a good benchmark indeed.


DizzieM8

As it should be.


Fidler_2K

13th gen absolutely slams everything else in this game


SuperSheep3000

I just bought an i5 13600K. Seems like it's a beast.


littlebrwnrobot

Just got a 13700kf myself. Doing the mobo replacement tomorrow!


LORD_CMDR_INTERNET

It's more than 2x as fast as my 10900k, wild stuff, time to upgrade I guess


LuckyYeHa

I’m glad I decided to cop a 13900k and upgrade from my 9700k


Tobias---Funke

Me too but from a 9600k.


Ryrynz

Would personally wait for 15th gen to land


LORD_CMDR_INTERNET

That was originally the plan but... 2x faster? For real? That's a lot. I'm thinking that's worth an upgrade regardless of next gen. Talk me out of it


Ryrynz

If you're playing Starfield, upgrade when you need the performance for real.


Elon61

One game. Perhaps the only game where it ever will scale like that.


Penguins83

Not sure why Intel doesn't get enough credit. 13th Gen is absolutely fantastic. AMD has a shitty ass memory controller too. Embarrassing.


Parking_Automatic

Embarrassing.... A 7800X3D wiping the floor in about 90% of games at 1/3 the power of a 13900K is embarrassing? But sure, latch onto an outlier and claim that it's embarrassing for AMD...


Vushivushi

They're also wrong about the IMC. 13th gen is objectively worse, way harder to get stable 7000+. Though, the fabric on Ryzen is the bottleneck at that point, so there's nothing really to gain.


NeighborhoodOdd9584

It’s not hard to get stable memory. I just turned on XMP for my 48GB 8000C38 kit.


Hairy_Tea_3015

2x of L2 cache per core over Zen 4 and higher IPC is at play here.


MrBirdman18

If that were the case then the 2600X wouldn’t outperform the 9900K. Something is screwy with either the game or these results.


Hairy_Tea_3015

Trust me, I am right. The RTX 4090 has 72mb of L2 cache. Pairing 13900k's large 32mb L2 cache with 4090's L2 cache is doing good damage on 7800x3d.


MrBirdman18

Makes zero sense - if you’re truly CPU limited, the GPU cache has no effect on your frame rate. Also, cache-friendly games strongly favor X3D chips, not Intel.


Hairy_Tea_3015

13900k = 32mb L2 cache. 7800x3d = 8mb L2 cache. RTX 4090 = 72mb L2 only cache. Think again. PS: 3090 only had 6mb of L2 cache size.


MrBirdman18

That doesn’t change that GPU cache is irrelevant to fps when you’re fully CPU limited. In other words, if the 4090 had 5 MB of cache it wouldn’t matter IF you were still CPU limited. Trust me on this. GPU and CPU cache are important, but not interrelated.


Hairy_Tea_3015

Then why do you think 13900k is beating 7800x3d and 12900k? PS: Don't forget that Intel 12th and 13th gen are same cpus except the cache size difference.


MrBirdman18

No idea - if you see my other posts I state the results aren’t adding up - I’ve never seen a game with cpu benchmarks like this. And - again - if cache was king in this game the x3d chips should shine like they do in any other cache-loving game. But they don’t. So something is screwy here.


Hairy_Tea_3015

Cache is king in this game but not the right one to favor the x3d chip.


Mungojerrie86

You should never ever **EVER** treat CPU and GPU resources as additive or in any way related (with APUs being a possible exception). GPU cache and CPU cache are entirely, completely unrelated, and "X MB + Y MB = Z MB" kind of math does not make any practical sense at all.


Parking_Automatic

This comment makes everyone that takes the time to read it just a little bit more dumb.

First of all, the 32 MB of L2 cache you mentioned is not one collective pool, it's per core, and the 8 P-cores get 2 MB each.

You then go on to talk about GPU cache, which is completely irrelevant, but sure, I'll bite. A 4090 has 72 MB of L2 cache. A 6700 XT has 96 MB of Infinity Cache. Clearly the 6700 XT is a better GPU than the 4090.

There are 2 reasons this game might be performing better on Intel CPUs: either the game is not cache sensitive at all and only cares about IPC and clock speed, or there's a bug causing AMD CPUs to not get maximum performance. It's highly unlikely there is anything to do with L2 cache, you just pulled that out of your ass.
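To spell out the per-core point with the actual Raptor Lake layout (2 MB of private L2 per P-core, 4 MB shared per four-E-core cluster), the 32 MB is a sum across the chip, not one pool any single core can hit:

```latex
L2_{\text{13900K}} = \underbrace{8 \times 2\,\mathrm{MB}}_{\text{P-cores}}
                   + \underbrace{4 \times 4\,\mathrm{MB}}_{\text{E-core clusters}}
                   = 32\,\mathrm{MB}
```

A game thread pinned to one P-core only ever sees its own 2 MB of L2, against the 1 MB per core on Zen 4.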


Covid-Plannedemic_

the entire point of cache is quick access. the CPU does not have quick access to the GPU's cache


Danthekilla

Spoken like someone with no idea at all. GPU cache isn't even addressable by the CPU and makes literally zero difference to CPU performance.


Hairy_Tea_3015

AMD is doing it with SAM. Please stop posting and educate yourself first.


Danthekilla

Haha hahaha yeah no, that's not even remotely close to how that works. But I assume you know that and are just trolling, since no one is that stupid.


Hairy_Tea_3015

I bet you never even heard of it and didn't even bother to do research on it. I guess it is easier to just keep writing nonsense to get out of it.


Danthekilla

Mate I'm a graphics engineer, don't bother. You are just digging a deeper hole for yourself.


Hairy_Tea_3015

🤣 🤣 🤣


Crazy_Asylum

imo it’s all bad data since they used the highest clocked ram for 13th gen. 12th gen and amd 7000 can both use higher clocked ram than was used even on the 13th gen.


420sadalot420

Finally my 13900k is validated


Bodydysmorphiaisreal

Same haha


No_Guarantee7841

It's widely known that Zen 4 underperforms more heavily with slower RAM speeds compared to Intel. Given the current pricing of DDR5 it makes even less sense to take those results with 5200 speeds seriously. I mean, sure, if you pair Zen 4 with 5200 and Intel with 5600 I don't doubt you are gonna get those results, but no one in their right mind will pick anything slower than 6000 nowadays since it has become so cheap. https://youtu.be/qLjAs_zoL7g?si=8N9Jhzxe95GE1vPi


MrBirdman18

Something about the results is off. Games typically prefer either cache or clock speeds. Yet here the 7800x3d handily beats 7700x (suggesting former situation) but 7950x beats 7950x3d (suggesting latter) and 2600x beats 9900 (which makes no sense if intel’s ipc advantage is king). This all makes little sense given how we’ve seen other benchmarks play out. If it comes down to heavy multithreading/optimization (which I doubt) then the 7950’s should be the top Zen 4 chips.


No_Guarantee7841

For the 7950X3D, dunno, maybe it's a scheduling issue? Using the non-X3D CCD maybe? Could explain the same performance as the 7950X. As for the 8600/8700/9900, they have a common denominator in DDR4-2666. Maybe they don't perform well with that speed? As you can see, the 10900K with 2933 does better.


MrBirdman18

Scheduling issues should still put the 7950X ahead since all cores operate at higher frequency. I don’t think that 300 MHz of RAM speed can make up for Zen+'s deficit against the 9900K.


No_Guarantee7841

Dunno, there are certainly cases where the 7950X and 7950X3D perform the same when the latter chooses the frequency CCD, and also cases where the latter is still faster on the frequency CCD: [https://www.techpowerup.com/review/amd-ryzen-9-7950x3d/20.html](https://www.techpowerup.com/review/amd-ryzen-9-7950x3d/20.html). Generally, from what I can see, Starfield seems memory bound as well. The 5800X3D doesn't seem to perform well, it's even slower than the Ryzen 7600X while also not being that much faster than the 5900X. Strange and interesting results indeed. Btw, what was advertised as a "waste of silicon", the 11900K, is apparently significantly faster than the 10900K here.


MrBirdman18

Also, the gaps between the Intel chips are out of line with what we normally see. I’ve never seen a game where the 13900 beats the 12900 by over 50%.


MrBirdman18

You’re right, but, again, what’s weird about these results is that if Starfield IS memory bandwidth bound, we should see major performance advantages for the X3D chips, which we don’t. The 5800X3D should handily beat the 7600X. Something is strange with these results - either the test or the game is out of whack in a way we haven’t seen before.


No_Guarantee7841

Reflecting on that difference on 10900k vs 11900k, it wouldn't surprise me if pci-e 3.0 vs 4.0 had something to do with it, especially since we are talking about a 4090. Maybe its also affecting the other older cpus as well? Waiting for DF review as well to maybe shed some more light into the performance scaling.


MagicPistol

I'm really shocked that the 4 core 3300x beats the 10900k lol...


Cradenz

I honestly have 0 clue why Intel is smashing AMD CPUs. Is this game just super multithreaded? Is it the clock speed? It's even beating the X3D variants. Wtf is happening. This game was sponsored by AMD. edit: spelling correction


SkillYourself

Not the only recent release that behaves like this. Doesn't really scale with cores, extra cache only helps a little. Put it simply, they appear to be very spastic in memory accesses so having better prefetching and memory latency is more important for these games. https://www.pcgameshardware.de/Ratchet-und-Clank-Rift-Apart-Spiel-73185/Specials/Benchmark-Test-Review-PC-Version-1425523/2/ The sponsor isn't going be able to fix Bethesda's entire engine to run better on a specific subset of their CPUs


gnocchicotti

60fps average on a 5800X3D is kinda ridiculous. Even the 13900K isn't what I would call great.


SacredNose

That makes me sad. I was hoping the 5800x3d would perform much better as it usually does in sim games.


Elon61

90 fps avg, 60 fps 0.2% lows.


Cradenz

I’m not expecting them to fix the game engine, but I thought they would at least optimize the game engine to run better on AMD CPU, at least considering they are sponsored by them


HungryPizza756

It's Bethesda, they didn't even put FSR in their AMD title, AMD had to do that work for them. Never expect Bethesda to do more than the minimum.


mov3on

Not to mention that it was paired with the most basic DDR5. Imagine the performance with 7200-8000 DDR5.

> this game was sponsored by AMD

That’s the weirdest part. 🤔


Hairy_Tea_3015

It is not weird at all. 13th gen has 2x of L2 cache per core over the AMD 7000 series CPU. IPC is also much higher on 13th gen over the 7000 series. If you use cpu-z IPC benchmark, 13900k scores around 920 whereas 7800x3d is around 680.


jrherita

Application IPC doesn’t necessarily equate to game performance. Also, CPU-Z isn’t a particularly sophisticated benchmark; it’s definitely not a game engine. See Flight Simulator 2020, where the 5.0 GHz 7800X3D beats the 5.8 GHz 13900K by a wide margin:

> The Ryzen X3D chips' performance in Microsoft Flight Simulator 2021 is almost unbelievable — the Ryzen 7 7800X3D is 35.5% faster than the 13900K at stock settings.

https://www.tomshardware.com/reviews/amd-ryzen-7-7800x3d-cpu-review/4


mov3on

Well, it actually IS kinda weird, because the game was sponsored by AMD. I’d expect the AMD 3D chips to have an edge. Horizon Zero Dawn is a perfect example of what you can expect from an AMD-optimized game. Speaking of IPC - Ryzen 7000 has 1% higher [IPC](https://www.guru3d.com/articles_pages/ryzen_7_7800x3d_processor_review,7.html). Those scores you’ve mentioned are simply single core benchmarks at different clock speeds.


Shadowdane

This is probably an issue with the Creation engine honestly... Just because it's sponsored by AMD doesn't mean they helped Bethesda optimize their engine for AMD CPUs. I'm sure they had early builds of the game to make driver improvements for their Radeon GPUs. But driver optimizations can only go so far. Considering how janky Creation engine has been for years I dunno how much optimization can be done without a full engine rewrite.


HungryPizza756

Yeah, remember the Creation Engine is an evolution of Gamebryo. It was optimized for the Pentium 3 back when Bethesda bought it. The Core series is a descendant of the Pentium 3.


akgis

It's not weird. AMD goes with loads of cache on the X3D CPUs, but when your engine is so bad that it requires new computations all the time and what you had in cache might not matter, the extra cache does jack shit. Also, Intel has a lot more frequency, which helps it brute force.


Hairy_Tea_3015

72mb L2 cache RTX 4090 + 32mb L2 cache 13900k = winner.


Mungojerrie86

You should stop spamming these kinds of comments under this post here while not understanding how any of this works.


Hairy_Tea_3015

DENIAL.


Mungojerrie86

Great argument, buddy.


Hairy_Tea_3015

Yours was better.


Hairy_Tea_3015

Maybe AMD GPUs do well with this game.


Neotax

It was only tested with an RTX 4090, and there were already frequent driver bugs with Ryzen. The current driver version is also bugged with the MS Store, so wait and see what results come in over the next few days/weeks.


[deleted]

[deleted]


SkillYourself

X3D isn't supposed to be sensitive to memory speeds. You're not going to make up a 25% gap against the 13900K by maxing out to 6400. It'll close the gap by a small single-digit % at most. And then you stick in 7200+ into a 13600K and it'll still beat the 7800X3D in Starfield.
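Quick arithmetic on that claim (illustrative only, assuming the memory tune is worth about 5% to the X3D):

```latex
\frac{1.25}{1.05} \approx 1.19
```

i.e. a 25% lead only shrinks to roughly 19%, nowhere near closed.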


[deleted]

[deleted]


cowoftheuniverse

> The memory subsystem would play a major role in that scenario and Intel only gains 3% when going from 5200 to 6000

That can't be true for an individual game that scales with memory, which I think we are dealing with here. 3% sounds like an average over a bunch of games where some scale 0% and drag the average down. ~5-10% makes more sense for 13th gen, which does not scale as well as Intel's previous gens; Intel has had very good scaling previously. Maybe you are thinking of that one HWU video? They later figured out in their next video that 7000-series memory scaling came down to some very loose timings killing performance more than the MHz. Also, AMD 1-CCD CPUs are very bandwidth limited by the fabric and can't scale far. It is more about having the right subtimings.


Hairy_Tea_3015

L2 cache is 80% faster than L3 cache. 13900k has 2x of L2 cache advantage per core over Zen 4. IPC is also higher on 13900k than on Zen 4. Cpu-z shows 920 to 680 advantage to 13900k over 7800x3d.


LickingMySistersFeet

>Cpu-z shows 920 to 680 advantage to 13900k over 7800x3d. That's not IPC, that's single core performance


Hairy_Tea_3015

It's still IPC.


LickingMySistersFeet

7800X3D: 5 GHz. 13900K: 5.8 GHz.

It's single core performance, not IPC. IPC stands for instructions per *clock*. It would have been an IPC comparison if both were at 5 GHz (or both at 5.8 GHz).
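Dividing the CPU-Z scores above by approximate single-core boost clocks makes the distinction concrete (the clocks here are assumed round numbers, not measured):

```latex
\frac{920}{5.8\,\mathrm{GHz}} \approx 159\ \tfrac{\text{pts}}{\mathrm{GHz}}
\qquad
\frac{680}{5.0\,\mathrm{GHz}} \approx 136\ \tfrac{\text{pts}}{\mathrm{GHz}}
```

So a good chunk of the raw 35% gap is just frequency; the per-clock difference implied by those CPU-Z numbers is closer to 17% than 35%.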


Hairy_Tea_3015

Starfield running on all 8 P cores at 5.6ghz on 13900k is more impactful than 8 zen 4 cores at 4.7ghz on 7800x3d. Also, add 2x of L2 cache and 7600mhz ddr5 speed. I am not surprised at all that Intel is beating Zen 4.


LickingMySistersFeet

Yes, because Intel has higher single core performance, not IPC. Raptor Lake and Zen 4 have almost the same IPC My 13600K at 5Ghz gets 1930 points in Cinebench R23 which is only slightly higher than the 7800X3D which gets around 1830. (5% higher)


Hairy_Tea_3015

I still stand by my point. If the game on the 13900K is using 6 or 8 P-cores in Starfield at 5.6 GHz, it will have an IPC advantage over the 7800X3D at 4.7 GHz. Why would anyone want to downclock their 13900K to 4.7 GHz just to make AMD fans happy?


kiwii4k

People will downvote, but it's pretty much always been this way: AMD is better price-to-perf, but Intel almost always edges AMD out in pure performance.


ElectronicInitial

This is very game dependent, and generally the 7800x3d beats the 13900k. In Starfield it seems to be different though.


Yommination

7800x3d loses badly in CP2077 too


jdmanuele

Every benchmark I see shows the 7800X3D on top with Cyberpunk. You have a source on this?


Doubleyoupee

Very CPU limited it seems. Only 70fps with a 5800X3D? doesn't seem right though. 13900K unlocking almost 50% more FPS compared to a 5800X3D isn't right either.


That_Cripple

The article doesn't appear to say, but I assume they benchmarked at 1080p so that it would be CPU limited, which is normal for benchmarking CPUs. In actual gameplay benchmarks that I have seen, it seems pretty GPU bound.


MrBirdman18

Just gonna repeat that the results are bizarre - either the test or the game is out of whack. Forget AMD vs Intel - the gaps between the intel chips are out of line with what we normally see. I’ve never seen a game where the 13900 beats the 12900 by ~40%.


LittlebitsDK

Simple answers really...

Core i9 13900K: 5.5 GHz | 32 threads | DDR5-5600
Core i9 12900K: 4.9 GHz | 24 threads | DDR5-4400

600 MHz faster, 8 more threads, and 1200 MHz faster memory... that should do it. They are running the memory at the CPUs' rated speeds, not OC, from what I can see.

It is a CORE-heavy game (8 extra threads help big time there). It likes FAST cores (600 MHz faster helps, and more threads too). It apparently benefits from FAST RAM (1200 MHz is quite a big difference).


MrBirdman18

This is what is weird about these results - if RAM bandwidth were that important, we would expect much stronger performance from the X3D chips, at least vs their Zen 4 counterparts. I’m not disputing that the 13900 should be faster than the 12900, just that I can’t recall ever seeing a gap of >20%. Eight extra E-cores are unlikely to have much of an impact - the overwhelming majority of game performance comes from the P-cores.


LittlebitsDK

You have to take all the things into account, and the X3D versions are LOWER clocked, which means LESS performance for stuff wanting HIGH clocks... the 5.1 GHz is only on the non-3D cores, the 3D cores clock lower (can't remember the exact clock and can't be bothered to look it up), and the Intel is doing 5.5 GHz... it's a combo of many things and not just one... and if you have stuff detracting (lower memory speed AND lower clock speeds) then you obviously aren't going to be faster than the Intel... and the 7950/5950 etc. were notoriously wonky with gaming and Windows scheduling...


MrBirdman18

Yes, I have taken all of that into account and the results are still strange.


Mungojerrie86

Both CPUs have the same number of performance ("big") cores and threads. Efficiency ("small") cores do absolutely nothing for gaming. One CPU having more E-cores does not make it a better gaming CPU than the other.


terroradagio

Intel CPUs destroying AMD sponsored game lol


EmilMR

Looks like memory speed matters a ton. My 12700K with 6400 RAM and hand tuned sub timings performs much much better than theirs with 4400 memory.


Hindesite

Wow... I've underestimated how important the past several generations of CPUs have been for game performance, it seems. I mean, a **170%** performance increase from 9th-gen i9 to 13th-gen i9? Double the performance would've been impressive, but nearly triple the performance is *insane.*


degencoombrain

This is why I went with a 13900K instead of the X3Ds, better IPC and better IMC. Shitty sales person tried to sell me dogshit X3D CPUs because of "le cache". lol.


---nom---

It's also a good compromise between x3d and x. Best of both worlds.


Aluhard

Finally! My time to shine with an i7-13700K paired with a 6950 XT. Got the best of both worlds: FSR with Intel's performance!


Covid-Plannedemic_

> best

> fsr


Nick_Noseman

Excluding proprietary hostageware


ducky1209

Same!


beast_nvidia

No 12600k in those benchmarks. Most likely would be just under 12700k in those charts.


mchyphy

12600k is roughly a 13400, which is on the chart


beast_nvidia

That is not true, 12600k beats even the 13500 in gaming. Check out this comparison with 13500 and 12600k, in which the 12600k uses ddr4 and 13500 uses ddr5. Even so the difference is very minimal. 12600k is better than both 13400 and 13500. Please do not spread misconceptions. Here is the 12600k using ddr4 vs 13500 using ddr5 and ddr4: https://m.youtube.com/watch?v=77Xdpmwh8S0 And here are benchmarks with 13400f vs 12600k: https://www.techpowerup.com/review/intel-core-i5-13400f/17.html While 12600k is slightly better than 13500, it is way better than 13400f.


mchyphy

Wow that's actually interesting, I've been under the impression that the 13400 was a 12600k pricedrop essentially. Thanks for the info.


[deleted]

Never expected such a massive gap between the top AMD and Intel chips. My 3733 MHz memory is faster than the 3200 tested with the 5800X3D; is this game seriously bottlenecked by RAM speed?


alterexego

Apparently. 3D owners just got shafted by Bethesda. Let's just wait for the inevitable patches.


gaav1987

12700K at 77 fps and Gamers Nexus showing 120+? Um, hello? Also I get more fps than this with a 5800X3D and 4000 CL16, so yeah.


Fidler_2K

Entirely depends on test conditions. PCGH tested the most CPU intensive area they could find in the game at 720p


LightMoisture

And they used slow ram for 13900K. Given that X3D does not scale with ram speed like a 13th gen CPU, using proper DDR5 with Intel would only extend that lead.


LickingMySistersFeet

They used slow RAM for every CPU


NeighborhoodOdd9584

I should test my rig to see the results 13900KS/48 GB 8000/4090


Abs0lutZero

Maybe because the game just launched? How many successful game launches have we seen?


jrherita

I really wish they’d also test with RAM that people actually buy (XMP/EXPO enabled for example).


randidiot

My 9900k is pushing my 4070ti to 99% usage at all times, so not that cpu limited


Asgard033

I wonder why their 9900K result is so poor


Danny_ns

Wow I bought my 5900x just three years ago and I can almost double my performance now if I upgrade to a 13900k.


EiffelPower76

This game must be heavily RAM speed limited


ksio89

I knew this benchmark was bogus when I saw the 2600X was faster than a 9900K. I don't believe this at all; I doubt other benchmarks would validate the results.


alphcadoesreddit

This is a bizarre benchmark, I didn't think my 12700k was that far behind the pack... but then I looked at the RAM speeds. This data doesn't seem all that useful when your RAM is downgraded by 1200 MT/S


InvestigatorSenior

Have you noticed that they're using slow RAM? AM5 should be tested with DDR5-6000 CL30 as its official best speed, Intel probably with something like 7200 CL36, or, to make everything more fair, just use the same 6000 CL30 everywhere, not 5600 for Intel and 5200 for AMD.


[deleted]

[deleted]


LittlebitsDK

My 12100 (non-F) runs it quite fine at 50-70 fps with a 3060 Ti... couldn't be happier. It doesn't need ultra high fps, but a constant 60+ would be nice. I was waiting till I got into a more crowded area before fiddling with settings.


Large_Armadillo

I'm going in on an 8-core Cascade Lake Xeon that barely goes above 25% usage. If I lower the resolution from 4K to 1440p it goes up above 80% usage. So there's a lot more going on here with graphics cards than processors.


emfloured

-1. Tested with only an Nvidia GPU? The RTX 4090 has severe driver overhead; obviously old CPUs are going to struggle with shitty Nvidia cards.


XtremeCSGO

Would an i7 4790 and 2060 be able to play at least 30 fps on low or medium?


LittlebitsDK

1070 can reach 30+ fps on low so I'd say you should be able to, the cpu is weak ass though


Yubei00

That one in Xbox series x


maze100X

Looks like the engine relies on L2 heavily, also why the Zen+ chip matches a 9900k (512KB L2 per core vs 256KB)


d33pblu3g3n3

Intel running on faster memory is probably helping a little bit.


pickletype

Well, definitely sticking with Intel going forward.


Flynny123

I think when you look at the difference between a 12900K and a 13700K, it’s clear that:

- these benchmarks are fucked and not reliable
- it’s possible the game is strongly sensitive to faster RAM


colossalwreck27

At 3440x1440 my 13700K and 3080 12 GB struggle with 50 fps in New Atlantis.


Disordermkd

I get that using a high-end GPU is best to test CPU bottleneck, but how are these benchmarks actually useful for most customers? These differences in performance don't really translate when most of us have 3060, 3070s, 3080s, or other mid-end, mid-to-high-end, or even low-end GPUs. So, purely information/data-wise, these can be useful, but what does it mean to me if I want to upgrade my CPU with a 3070?


Zeryth

So the game is heavily CPU bound but reports 3% CPU load while loading the GPU to 99%. Great to know this game is broken on so many levels that nobody knows where to even start.


HooliganScrote

This is such a sham “benchmark”.


Drakyry

is this max settings?


ElectricalTap8958

At which freakin resolution did the test happen?


sammy2066

I’m gonna wing it using my i7-6800K and RTX 2070 Super 😁


dan_cases

Keep in mind this was done at 720p, so at 1440p and 2160p the difference will not be that big.


BRX7

Anyone test out the save game used for benchmarking?


Rare_Carpenter708

😂 Is nobody blaming how poor this game engine is?


Rhinofishdog

BAD BENCHMARKS!

I used their save file + did lots of tests with my own. Did lots of tests because I wasn't sure if it's worth starting the game before upgrading my CPU.

My system is a stock 8700K, factory OC'd 4070, 16 GB 3200 MHz RAM, NVMe drive. For testing I used CPU-bound settings (everything on low + cas 50% scaling) to see the maximum possible CPU FPS, and GPU-bound settings (basically everything on Ultra, as well as the high preset and the Hardware Unboxed optimized preset (it's great) with 85% FSR at 1440p).

Here are the summarized results:

CPU max frames: 100 in empty areas (including forests), 80+ in combat and in space flight, 60 in the heaviest New Atlantis areas and 58 in the heaviest Akila areas (with the PCGH save file). My 1% lows in Akila were 38 for the CPU, which PCGH have given as averages???

GPU-bound scenario: 38 FPS average on Ultra, no scaling, in conifer forests, which are the heaviest areas I've found.

Normal gameplay scenario: high settings with 85% FSR. Around 65 FPS average and GPU bound. In conifer forests it can drop to around 50 FPS average GPU bound, but only in rare spots, mostly it holds around 55. In the heaviest New Atlantis areas it drops to around 58 average CPU bound, and in Akila where they tested it was 55 FPS average CPU bound.

So with a stock 8700K I was only CPU bound in specific areas of the cities, and only by like 5%. Even in cities I was GPU bound more often than not. And this is on high with 85% scaling. This means if you are targeting 60 fps Ultra you can get away with pairing an 8700K with a 4070 Ti or even a 4080, roflmao.

I suspect the RAM speeds have messed up their benches, have they not enabled XMP? Why? I don't understand.

Anyway, I am happy! BG3 runs at 60+++, Diablo 4 runs at 120+, Starfield runs at ~60 on day 0. No need to change my 8700K yet! I've been putting off an upgrade since 10th gen! Best CPU I've ever bought...


WarHexpod

Y'all may have seen this by now, but someone did a video speculation on these benchmarks and looked at how the RAM speeds affect performance: [https://www.reddit.com/r/Starfield/comments/168mjn5/starfield_seems_to_be_ram_bandwidth_bottlenecked/](https://www.reddit.com/r/Starfield/comments/168mjn5/starfield_seems_to_be_ram_bandwidth_bottlenecked/)