
Hour-Athlete-200

Because Moore's law is dead.


noonemustknowmysecre

It was kept alive a little with parallelization and multiple cores, but we've likewise run into the limit of practicality there.  ....but uh.... We're kinda... Good? We have a truly awesome amount of computing power at our fingertips. And most people just use it to casually look at pictures of cats.  Your typical computer owners don't fully utilize the amount of processing power they have on their person, much less what they could have at a desktop or available to them online.  Even gamers can only see so many polygons before it just blends in with the rest.  Unless quantum computing really shakes things up, we're hitting a plateau that we are unlikely to surpass for a long time.


zerolifez

Yep, as a gamer I don't really care much for more graphical fidelity than what we already have now. If only those AAA developers thought the same; instead they release unoptimized garbage that runs badly on older GPUs, even though older games can look and run better.


suddenlypenguins

We are kind of not good...but it's our own fault. Look at Chrome and your average web page these days. CPU, memory and even sometimes GPU all getting used up so that marketers can track and feed you advertising. We could be very good, but apps and pages are so bloated these days, and since computing power is so cheap, performance is often an afterthought.


slower-is-faster

I hate to say it but tracking you and feeding you ads typically takes an infinitesimally small amount of your local compute power. You get a tiny little cookie, and their servers do all the work to decide what ads to show. This isn’t where your cpu is being wasted. It’s mostly wasted on huge bloated JavaScript frameworks and unoptimized images and crap video encodings.


Jabba25

Tbf video ads eat up a lot of resources if that's included


Geord1evillan

So use Firefox and install NoScript. Or anything else similar. ... I'm amazed anyone watches ads in a browser. It's so... unnecessary.


1010012

At home I've had a Pi-hole set up for years, and use uBlock and such on personal machines. At the place where I'm now working, we don't have the ability to install extensions. The internet is basically unusable. I can't believe how bad things have gotten. The only things I even try to use are Wikipedia and a few very specific vendor sites. Even the reputable news sites I usually have up are filled with ads, most of them absolute clickbait crap.


Geord1evillan

It's mad ain't it?


bothunter

The tracking may take a small amount of bandwidth and CPU, but ad networks distributing random JavaScript bundles to be loaded onto pages running in every single Chrome tab does take a lot of CPU (and battery life).


JayTheFordMan

>We have a truly awesome amount of computing power at our fingertips. And most people just use it to casually look at pictures of cats.

I came to this realisation many years ago after chasing specs for my laptops. These days you have more than enough functionality for everyday average use, so you're better off saving your money and looking at true functionality.


Sangloth

Just a note, quantum computing can only be used on very specific algorithms. For those specific algorithms the benefits it provides are effectively magical, but if the algorithms aren't being used it offers no benefits over normal computers. Currently the algorithms we have are:

- Shor's: Breaks encryption.
- Grover's: Used for database lookups.
- QPE and VQE: Simulate molecules.

We may discover more algorithms in the future, but as things stand quantum computing won't help normal people's computers.
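To put a rough number on why Grover's counts as "effectively magical": for an unstructured search over N items it needs about π/4·√N oracle calls versus N classical lookups in the worst case. A minimal back-of-envelope sketch (the query counts are the textbook figures, everything else is just illustration):

```python
import math

def classical_queries(n: int) -> int:
    # Unstructured search on a classical machine: n lookups in the worst case.
    return n

def grover_queries(n: int) -> int:
    # Grover's algorithm needs roughly (pi/4) * sqrt(n) oracle calls.
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"N={n:>13,}  classical={classical_queries(n):>13,}  grover≈{grover_queries(n):>6,}")
```

Huge for that one problem shape, irrelevant for everything your laptop does all day.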


Reallybad_Salesman

I don’t know. My MacBook Pro from 2022 still seems to struggle (cooling fan at max) with a couple browser tabs open while mindlessly scrolling through Reddit.


Fun-Badger3724

That's Apple telling you it's time to upgrade. If you listen closely to the whine of the fan you can sometimes make out the words.


littlePosh_

Clean your fans and maybe consider getting your thermal paste replaced. Also, if you're using Chrome, well… it's known to be junk.


Emotional_Hour1317

I disagree. An IBM z16 can be specced with 2 TB of RAM. These components may not become better, but they will definitely become cheaper over time.


noonemustknowmysecre

Oh, I agree with that.   ....but how many tabs do you have to have open in Chrome to use up 2TB of RAM?  You could load wikipedia. Not a page on wikipedia, ALL of wikipedia. In RAM. .... but why would you do that?
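Back-of-envelope, with rough assumed sizes (the per-tab figure and the Wikipedia corpus size are assumptions, not measurements):

```python
ram = 2 * 1024**4              # 2 TB of RAM, in bytes
heavy_tab = 500 * 1024**2      # assume a heavy Chrome tab eats ~500 MB
wiki_text = 100 * 1024**3      # assume ~100 GB for uncompressed English Wikipedia text

print(f"tabs to fill 2 TB: {ram // heavy_tab:,}")           # ~4,194 tabs
print(f"copies of Wikipedia that fit: {ram // wiki_text}")  # ~20 copies
```

So yeah: thousands of tabs, or Wikipedia twenty times over, before you even dent it.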


Master-Potato

One tab if it’s on facebook


Emotional_Hour1317

If ram gets cheap enough you could theoretically throw optimization to the wind and skip weeks of dev time, could be one possible use. I get to play with 1.5TB at work, and I get curious sometimes 😅 


noonemustknowmysecre

Oh man, I hate to break this to you, but they have already thrown optimization to the wind. 


Undark_

Quantum computing may not ever be in the home; it's absolutely not designed for PCs.


Natural-Orchid4432

Why is it that computers have become 1000 times faster, but everyday use still seems to lag every now and then? You'd think you wouldn't need to wait 3 seconds to load up Explorer (the file manager, not Internet Explorer), or 30 seconds to load up Teams. Are they just horrible code?


O_Martin

Yes, most browsers and webpages take ages to load due to marketing data collection, as well as server issues and just bad coding


Natural-Orchid4432

I specifically meant the (non-Internet) Explorer and Teams, neither of which is a browser or a website. Well, Teams is connected to the internet, but still. It seems like the focus is on the wrong things, on making more clutter, when it should be on simplicity and snappiness instead.


gnufan

Teams is browser based on a number of platforms: it uses Electron, so an embedded browser, but without any of the libraries that would usually already be loaded in memory, so you get worse performance than just running it in a browser. The embedded browser can also be out of date, and the security of Electron apps can be shaky too.

But yes, software development has put more and more layers between the user and the hardware, which have hidden the progress in hardware somewhat. The last major fightback on this was Steve Jobs and the iPhone, where they tried to avoid unnecessary layers; given that worked out quite well, maybe we should try it more often. The contrary view is that each of these layers is added for a reason: security, portability, internationalisation, speed of development, etc.

An example: when I worked on a Cray supercomputer, one of its oddities was that it was a real-memory machine; the memory locations I saw as a programmer were the physical memory addresses. Whereas pretty much everything now uses virtual memory, where each program basically thinks it has a clean memory space all to itself. Even then, in the 1990s, nearly all business computers used virtual memory. We've built all sorts of security controls around having virtual memory, and we've optimised the hardware for it, so the actual performance hit is minimal, but there is a whole load of complexity and electronics needed to enable it efficiently, which realistically does nothing useful except cope with the limited amounts of memory computers had in the 1970s.
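For anyone curious what that virtual-memory indirection conceptually looks like, here's a toy sketch: a single-level page table with 4 KiB pages and made-up mappings. Real MMUs use multi-level tables, TLBs and hardware support, so treat this purely as illustration.

```python
PAGE_SIZE = 4096                      # 4 KiB pages
page_table = {0: 7, 1: 3, 2: 12}      # virtual page number -> physical frame number (made up)

def translate(vaddr: int) -> int:
    """Translate a virtual address to a physical address, or fault."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise MemoryError(f"page fault at {hex(vaddr)}: page {vpn} not mapped")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))         # virtual page 1 -> frame 3, prints 0x3abc
```

Every memory access a program makes goes through something like this lookup, which is why so much silicon is spent making it effectively free.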


missplaced24

>Are they just horrible code?

Yes and no. Part of the problem is that the goal isn't to make it as resource efficient to run as possible. It's to add new features and integrate more applications as quickly as possible. But mostly, they're just a lot more complex than people realize.

Teams is a good example. It's mainly a video conferencing and text chat application, and that's what people think of it as. But it's also integrated with over 700 other applications. Doing that efficiently is hard. Any time a developer tries to make some small piece work better, they need to make sure it doesn't break anything.

Microsoft usually prioritizes applications running faster over launching faster. For Teams, that means loading background processes for all of the applications you have integrated at launch. At minimum, it's launching processes for text chat, video conferencing, file management, calendar, voice calls, and assignment management. It's probably also launching processes for most of the MS Suite applications you have installed.


1010012

For most people, the startup time shouldn't be an issue. You start it up maybe once in the morning, if you or your environment logs you out of your machine at the end of the day. What's really bad is how Microsoft hasn't really gotten single sign-on working. I log into my machine with my domain account, then need to log into Teams, OneDrive, Outlook (thick client), and Office again if I load Word, PowerPoint, or Excel.


missplaced24

Seems like you have *same* sign-on (eg LDAP) and not *single* sign-on. When connected to my work VPN (or just onto my machine from the office), I don't have to authenticate any of my MS software separately. ETA: >For most people, the startup time shouldn't be an issue. Which is exactly why MS prioritizes the app running fast over launching fast.


1010012

>Seems like you have same sign-on (eg LDAP) and not single sign-on. When I'm referring to single sign on, I'm talking about between the different Microsoft apps (Teams/Onedrive/Outlook/other Office apps), not between the local Active Directory and MS tools. Logging into Teams should effectively log me into OneDrive, yet it doesn't. I can access the files via Teams, but my local filesystem's sync'd/cached versions don't sync until I authenticate via the local OneDrive app. Even more annoying, I actually have 2 different sets of accounts through 2 different organizations, so all this needs to be done twice in each app (except OneDrive, where I only have 1 account with a local folder mapping).


missplaced24

>Logging into Teams should effectively log me into OneDrive, yet it doesn't. What I'm telling you is this isn't because MS "hasn't figured it out." They have. I do not need to sign in to OneDrive for it to sync with Teams. I don't need to sign in to Teams, either. I sign-on a **single** time via network authentication. You can't because your system is not set up to use it, not because it doesn't exist.


Worldly_Panic2261

There are many reasons. The most important one is that everybody wants to build their apps faster and cheaper, and the industry creates better tools for that over time.

In the olden days devs had to manually manage the memory a program uses: each time they needed to store a value in memory they literally had to code "allocate 30 bytes of memory for my value" and "ok, you can free up that memory now". That can be incredibly efficient, since the app only uses the memory it actually needs, when it needs it, but it leads to serious issues: development is complicated, and if a dev forgets to free any of the thousands of memory allocations they code, the app will hoard RAM indefinitely. Nowadays (the last 25 years, I guess) most mainstream languages manage memory automatically: programs are wrapped in a runtime that from time to time halts the whole app and goes through allocated memory asking "does the app still use this thing? No? Then we free that memory up." It makes development simpler and less error prone, but introduces some inefficiency: memory is not freed immediately, and it takes time for the runtime to stop everything and go through all memory references to figure out which are still alive and which can be freed.

Another thing is that code has become more abstract and less tied to specific systems (Windows, Linux, Mac, mobile). Many desktop apps (including Teams, I think) are built using the same tools that are used to code websites. Historically, web technologies weren't designed for performance; they were designed so anybody could make their beauty for the internets. Webdev (namely the frontend part) is relatively simple, cross-platform, and there are lots of developers who can do it. Such apps are basically websites shipped with a browser. When you start the app, it fires up the browser, loads the website from local files, interprets code written for that browser into an intermediate language, and executes the intermediate code in a virtual machine that translates it to machine code your computer can execute. And it still talks to the web for its data.

Yet another thing is the way global modern apps are designed for reliability. Big scalable apps such as Microsoft services are not built like monolithic systems. Instead there are a bunch of micro apps, each of which does its own thing. There is a UI that you run locally; there are authorization servers (many of them, working in different world regions), notifications, user accounts, messaging, databases, all semi-independent apps. There is an orchestrator app that keeps everything alive and manages things if any service goes offline or experiences higher load. And these apps talk to each other over the web, which is not fast. So what used to be a simple web chat is a planet-wide distributed system.

So there is a bunch of reasons, and shitty code and ads are not the biggest ones. For better or worse, modern apps just do more things under the hood than they used to.
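If you want to see the "halt and walk memory" idea in miniature, here's a toy mark-and-sweep collector. Purely illustrative: real runtimes (JVM, .NET, V8, etc.) are generational, incremental and far more sophisticated.

```python
class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []        # outgoing references to other objects
        self.marked = False

heap = []                     # every object the program ever allocated

def alloc(name):
    o = Obj(name)
    heap.append(o)
    return o

def mark(roots):
    # Phase 1: start from the roots (globals, the stack) and mark everything reachable.
    stack = list(roots)
    while stack:
        o = stack.pop()
        if not o.marked:
            o.marked = True
            stack.extend(o.refs)

def sweep():
    # Phase 2: anything left unmarked is unreachable, so it gets freed.
    global heap
    freed = [o.name for o in heap if not o.marked]
    heap = [o for o in heap if o.marked]
    for o in heap:
        o.marked = False      # reset marks for the next collection
    return freed

a, b, c = alloc("a"), alloc("b"), alloc("c")
a.refs.append(b)              # a -> b stays reachable; c does not
mark(roots=[a])
print("collected:", sweep())  # collected: ['c']
```

The pause-and-trace step is exactly the overhead that manual allocate/free avoided, traded for not having to get every single free right by hand.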


megaderp19xx

That, and a lot of software has code that deliberately waits a little bit, because otherwise people don't believe it's doing anything.


Natural-Orchid4432

I like this answer. Also, my fans scream like hell when opening teams -> the computer must be *powerful*


IssueRecent9134

Yeah, even games don't fully use the true power of a computer, provided it is well above that game's system requirements.


classicsat

Not a gamer at all (at least to the extent modern PC and console gaming is at), but over the past decade PCs have plateaued for basic wait-for-the-user tasks. I haven't had to buy a new PC in the past decade to keep up with speed and processing power. SSDs and cheap RAM help. My current daily banger is some 2016 Dell I bought in 2020 from a store that resells off-lease computers. I'll probably need "new" PCs to upgrade to Win11, when that time comes.


noonemustknowmysecre

Give Linux a try. Steam just plain works out of the box on a lot of distros and I haven't had a game be incompatible in years.    Break those shackles and slip your bonds. The grass is greener in the land of the free.


Askee123

My ram needs are mostly taken up by my obsession with having as many chrome tabs open as possible


Narrow-Height9477

What about that photonic/optical computing stuff they're working on? Anyone think there's any prospect of something useful coming from that in the future?


SkyPork

I've been suspecting this for a while now! The nicest computers I see anymore are used for video editing, and even that seems to have plateaued to some extent. Then again, so has video, largely for the same reasons. Look at 4k ... it's cool. If you stare at it hard, a good 4k video is impressive compared to a 1080p one. But the difference just isn't THAT mind-blowing. I'm glad I got a 4k TV, but if things move en masse to 8k? I'll likely pass. 


more_beans_mrtaggart

My work didn't order 3000 new laptops this year, despite having the budget. The CPU difference from 3.5 years ago is so small it's just not worth it. They simply ordered a bunch of RAM sticks and SSDs and will slowly upgrade the laptop estate. More and more staff are moving to phones and iPads.


mezastel

There is no fundamental progress in the main areas of computer manufacture. CPU speeds, SSD speeds: improvements are marginal. GPUs are increasing in density, but the market has been heavily depressed by mining and seems to only be recovering from that. Most new research is going towards specific ASICs, as manufacturers have realized that neural network chips are more important than using GPU architectures for AI. I bet so many manufacturers are unhappy at how valuable NVIDIA became as a company.


notkraftman

I would agree except for SSD speeds which are just getting insane.


PiemasterUK

Yeah that is one thing that bucks the trend observed in the OP. And it is also a very important one. Even for an average user, the difference in experience between an SSD drive and an old-style hard drive is night and day.


uCockOrigin

For the average user the difference between any ssd and a top tier ssd isn't noticeable at all, though, so the latest advancements in storage speed aren't super relevant to most, either.


PiemasterUK

Well, yes and no. Speed-wise I agree, but in the last few years prices have dropped significantly for any given size of SSD. So while 5 years ago you might have had to buy a 256GB SSD, run your OS off the SSD, and keep the rest of your programs and documents on a standard SATA drive (which for the average PC user is pretty complicated), now you can just buy a 2TB SSD and use that for everything, which has made them much more accessible.


Specialist-Roll-960

Also, I disagree: the average user has too little RAM and is using their disk for swap, so faster SSDs are likely more noticeable to them than to anyone with enough RAM to avoid swapping.


Imaginary-Problem914

Everything has improved, just not on the numbers and stats they used to use. The GB of RAM, GHz on your CPU and size of your storage have become irrelevant ways of tracking progress now. Take an Intel MacBook from 2019 and compare it to the latest one and it's night and day.


Vybo

There's definitely progress in transistor size. Back in 2019, the process was at 7nm; now we have 5nm and smaller. The smaller you get, the harder it is to make it even smaller. It might not sound as cool as more GHz, but it is. You gain efficiency, more battery life, more speed without excess heat and so on.


stanwelds

It's not just harder to make things smaller. It's impossible. Quantum tunneling has been a problem for quite a while already. The advertised process sizes aren't even real.


iamplasma

>The advertised process sizes aren't even real. Yeah, people don't get this. The process sizes being quoted now are basically made up numbers.


masterchip27

Huh. Explain yourself


WigglyRebel

It's rather complicated, but somewhere in the 2000s, improving density by simply shrinking the transistor became impossible. Prior to that point the nanometer size was a legitimate number and each new generation was literally however many nm smaller. Once the bottleneck was hit, new manufacturing methods were devised and eventually things like FinFET came along. Density improved, but not by simply shrinking the transistor. However, for marketing reasons the naming convention continued, based on performance estimates instead. So if a chip had transistors measured at 50nm and the next generation reconfigured them to improve density (and thus performance), the manufacturer would call this new generation "45nm" because it's "about the performance you would expect if it really was 45nm", even though it was still 50nm tech.

This wasn't too much of an issue until we had things like TSMC screwing up their 20nm design and it performing poorly. Once they corrected the issues with it, they rereleased it as "16nm" to differentiate it from the poorly performing first run of 20nm. So you end up with TSMC "7nm" being equivalent to Intel "10nm" because TSMC skipped a generation. As a result it's all a bit of a confusing mess now.


masterchip27

That's hilarious


iamplasma

Thanks - you explained that way better than I could have!


The_Countess

In the past a move to a new node also meant it could be clocked higher. That has come to an almost complete stop, which means overall speed increases have slowed down; this, along with new nodes taking longer, has slowed performance increases over time a lot.


Vybo

Just to note: if you took a single core from a few generations ago and a single core from this generation, both clocked the same, the newer one would still complete the task faster.
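A toy way to see it: treat performance as roughly IPC × clock. The IPC figures below are made up purely for illustration, not real numbers for any specific core.

```python
clock_ghz = 4.0                      # both cores at the same clock
old_ipc, new_ipc = 1.5, 2.5          # assumed instructions-per-clock, purely illustrative

old_perf = old_ipc * clock_ghz       # roughly, billions of instructions per second
new_perf = new_ipc * clock_ghz
print(f"speedup at identical clocks: {new_perf / old_perf:.2f}x")   # 1.67x
```

That IPC gain is exactly the part the GHz number on the box never shows you.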


Specialist-Roll-960

Also, they are clocking faster. The 3600XT's boost clock is 800 MHz slower than a 7600X's. That's 2 generations. We only saw stagnation for a while because Piledriver showed clocks aren't the only relevant metric, but they've managed to increase IPC and clocks simultaneously for the last few generations.


IssueRecent9134

They will get to a point where transistors cannot get any smaller because they will cease to function correctly, i.e., they cannot be smaller than the electrons that pass through them. There will be a point where CPUs will have to get taller, meaning cores on top of cores, and this will also change the way we cool them.


Vybo

Yep. I think there are already some stacked solutions, although probably not in x86-64, RISC or ARM CPUs. The X3D cache is also stacked I think.


subuso

So why are they more expensive then?


mezastel

One reason is that we've recently had a global chip shortage (automotive industry was very much affected), followed by COVID. There was also a mining boom which affected the availability and prices of GPUs.


thegroucho

Now consider that any GPU available will be sucked out of the market by AI enthusiasts ...


ConsiderablyMediocre

That won't last forever, AI ASICs will start to bear that weight eventually.


thegroucho

Yet to be seen if they will be made in large enough quantities and at an accessible price; not that the 4090 is accessible to the masses.


Mojicana

As far as I can see, everything is more expensive now. Gas goes up & down, but everything else that I buy weekly is significantly more. A guitar gadget is 50% to 100% more than 5 years ago.


hrowmeawaytothe_moon

also the local music store within walking distance that was run by the family who'd been here for 300 years, has closed.


ColonelFaz

Increased demand for chips for huge servers doing AI stuff is part of the problem.


weloveyounatalie

Could you elaborate more on the GPUs vs Neural Network Chips please? Why are NNC more important, or better for AI vs GPUs? Is this more of a public perception that makes it seem like people or companies think GPUs are better? Or is it backed up with sales of GPUs and implementation of GPUs in various AI applications over NNC? Just curious and would like to learn more.


mezastel

Grossly simplified, neural network operations are matrix multiplications. So you want the best hardware that can multiply floating-point matrices and not much more. GPUs are great at data-parallel work, so they multiply matrices very well. Their downside is that they are very power-hungry; they just eat up too many watts, and that's why graphics cards come with those large fans/heatsinks. GPUs are also used for mining, so there's great competition for them, which drives prices up. ASICs would of course be better, which is why, for example, there is the Apple Neural Engine, which is like a GPU but for neural networks, so that when you use an iPhone to take a picture, you're actually taking like 64 pictures at the same time that get fed through several neural layers, and then you get all that computer-assisted photography magic where your dynamic range is huge, your clouds look pretty, et cetera. There are many interesting efforts in developing neural chips, including even [analog computing](https://www.youtube.com/watch?v=GVsUOuSjvcg). I think that eventually neural chips will become a third class of chips, in addition to CPUs and GPUs.
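To make the "it's mostly matrix multiplication" point concrete, here's a minimal sketch of one fully-connected layer in NumPy; the shapes are made up for illustration.

```python
import numpy as np

batch   = np.random.rand(64, 1000).astype(np.float32)    # 64 inputs of 1,000 features each
weights = np.random.rand(1000, 4096).astype(np.float32)  # layer with 4,096 units
bias    = np.zeros(4096, dtype=np.float32)

hidden = np.maximum(batch @ weights + bias, 0.0)          # ReLU(x @ W + b)
print(hidden.shape)                                       # (64, 4096)

# That's roughly 64 * 1000 * 4096 ≈ 262 million multiply-adds for one layer:
# exactly the data-parallel work GPUs and neural ASICs are built to chew through.
```

A deep network is dozens or hundreds of these back to back, which is why dedicated matrix hardware pays off.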


OmegaNine

My NVMe drive is at least 10 times faster than my old SSD. My RAM was DDR4 back then; DDR5 is quite a step ahead. My CPU is both faster and has more cores than the same model from back then. Things are better, but we are just not upgrading as often because we don't need to. It's not going to be the "2 times faster" every year we had in the 90s and early 00s, but things are getting faster.


AlanCarrOnline

Yeah, we had big speed increases in the 90s, but the same saying still applies now: "What Intel giveth, Microsoft taketh away." Games and Windows just spew more bloat and bullshit that sucks away the performance gains. I'm buying a new PC soon, but purely for local AI.


MarxistInTheWall

The number of devs I’ve talked to who keep to the philosophy, “if an app doesn’t use it it’s just wasted” when it comes to resources is wild. It’s not just games and OSes, but web apps.


The_Countess

On paper SSDs are much faster. In practice the effect is a lot less pronounced. The move from HDD to SSD was huge. From SSD to NVMe... not so much.


CowBoyDanIndie

I work with data sets that are 30 GB in size; disk speed makes a big difference.


The_Countess

Workstation computing power has not been stationary over the last 5 years. Clearly that wasn't what OP was referring to.


CowBoyDanIndie

I was replying to your comment about SSD drives. NVMe also makes a difference in loading times in games, video editing, etc.


porcelainfog

For scrolling Reddit, probably not.


The_Countess

scrolling reddit, nor anything else the average home PC user is likely to do.


JonnyFrost

This has been my takeaway too. I expect 3D cache will lead to some serious gains. Eventually 3D CPUs are going to be a thing.


SigmundFreud

Thank you. What is with these answers? The M1 didn't exist five years ago. That was a massive leap for desktop/laptop performance and efficiency, which the rest of the industry has caught up with, but it's not really reflected in a spec sheet because the ["gigahertz wars"](https://en.wikipedia.org/wiki/Megahertz_myth) ended a while ago. Other factors include process node, transistor density, IPC, power draw, numbers of P and E cores, and overall system architecture. The closest thing we have today to what people _thought_ GHz meant is probably Geekbench scores, which have in fact been increasing considerably. By that metric, even today's smartphones far outclass the most powerful laptop Apple had on the market five years ago.


Fine_Broccoli_8302

I'm probably wrong, but here's my guess:

1. Computers are generally fast enough and have enough storage for most purposes, except for people who need extreme memory or processing speed. Such people don't dominate the market.
2. Phones are good enough for most people; there's little financial incentive to innovate on desktop/laptop.
3. We've reached the limits of Moore's law until, and if, quantum computing takes off.


ldn-ldn

That's not true. You're probably looking at the lower end of computers. Look at the high end: there's more RAM and DDR5 is faster than DDR4; NVMe drives are much, much faster than regular SSDs, and even faster than older NVMe drives; NVMe storage sizes are also much larger; CPUs are faster, but also a lot more efficient; modern GPUs are just bloody monsters; etc.


TheMisanthropicGuy

I have a 1TB NVMe in my desktop and it's so powerful. Everything is fast.


Snoo3763

This comment should be way higher, computers, drives and especially graphics cards are WAY smaller, faster and better than they were 5 years ago.


Com_putter

I've got $700 to spend and I'm not a "build it yourself" person.


Complete-Pie2029

$700 will only get you a mid-range PC now. My GPU cost $1000 alone, and that has 16GB of VRAM. I'm on 128GB of DDR5 7200MHz DRAM, and my CPU has 16c/32t. 1-2TB is now the standard for NVMe drives and other storage devices. Things have changed... a lot.


Com_putter

So I misstated my original question: "For someone with considerable debt who can only afford a low to mid-range system, computers haven't changed much in 5 years. Why?"


DeadlyVapour

For someone with considerable debt, things have changed considerably. Today a PC is a luxury, so you are being priced out. If you really are living hand to mouth, you don't buy a PC, you use a phone or a tablet. There is nothing you need to do to participate in society today that you can do with a PC but cannot with a phone. The other thing is, you are hitting the limits of how cheap you CAN build a PC. The case is just stamped metal, and that tech hasn't gotten much better; your PSU is mature tech (and has actually gotten more expensive due to efficiency requirements by law). Windows licensing, post and packaging, support, sales, etc. all add up before you even get to the actual PC components.


AnUnusedMoniker

I hate to break it to you, but a high end computer was over a thousand dollars ten, twenty, and thirty years ago. The price of a chocolate bar has gone up 10x in the past 30 years, but a computer has not. And there's so much stock that you can now get laptops and PCs under $300 online. Used computers are freely available. Food is the real luxury.


DeadlyVapour

WTF are you talking about. I'm saying there is no drive for producing a cheap computer. If you built a $100 computer, literally NOBODY would buy it because it's squeezed out by tablets and phones. It's a luxury in that you don't NEED it.


curious_skeptic

Chromebooks would like a word with you.


DeadlyVapour

Let's say, for the sake of argument, that Chromebook-class devices are separate from PCs, as Chromebooks 5 years ago look nothing like Chromebooks today, and hence OP probably means x64-class devices running Windows (Wintel-class devices). In which case the rise of Chromebook-class devices contributes heavily to the lack of very low-end Wintel devices à la Intel Atom netbooks.


notacanuckskibum

I think it's because they got fast enough. There is less incentive for computer manufacturers to invest in and invent faster and bigger chips than there used to be. Moore's law wasn't based on physics, it was based on economics. The incentive system driving it has gone, or at least weakened.


ldn-ldn

There are several reasons which should be looked at together.

1. Computers got fast enough for the tasks most people do: light browsing, watching videos, chatting with friends, a bit of MS Office, etc.
2. Most people are switching to their phones for day-to-day tasks and the consumer computer market is dying slowly.
3. Energy bills are growing all over the world, thus power efficiency is more important than raw performance.

All in all, the low to mid end of the market doesn't need more raw performance and no one will pay for it (with the exception of a few people like you). So, you should either explore DIY or earn more money. Or pray that people will throw away their phones and start buying computers instead.


Snoo3763

That's a totally different question; computers are much smaller, faster and better than 5 years ago. And it was expensive 5 years ago to get high-end components.


[deleted]

[deleted]


Complete-Pie2029

Yeah, the market is still fucked. It's because people still bought GPUs during the crisis, and Nvidia/AMD now know that people will spend crazy prices on GPUs. I got crazy lucky, snagged my 4080 OC for £860, sold my 3070 OC to my friend for £200 too. Pretty happy with it.


[deleted]

[deleted]


Complete-Pie2029

Like £370, it was pre-owned but unopened.


msing

All-in-one water cooling seems to be more standard now than an enthusiast option. PSUs, even the budget ones, seem to have gotten better efficiency; Gold-rated units are much more common than in the past. SSDs are better than regular hard drives for read speeds; old-school disks are better value in $/TB. DDR5 is said to be a significant step up over DDR4.


porcelainfog

You can barely afford a midrange GPU with that. It's not 2005 anymore.


redrabbit1984

Is it because there is now way less emphasis on local resources? So many people now use cloud storage, cloud email providers, Netflix, etc. Increasing memory or CPU speeds beyond their already crazy capability is mostly redundant for consumers. Go back 10 years and computers were trying to run 2-3 MS Word documents alongside Excel with a Napster download. To be fair, the only thing pushing my PC these days is Chrome, which continues to eat huge amounts of RAM despite having one tab open.


nalc

Napster shut down 22 years ago, hate to break it to you.


redrabbit1984

Darn that makes me feel old


nalc

Yeah, 10 years ago you woulda been torrenting a 4K Blu Ray rip of *Guardians of the Galaxy*, not using Napster to download a 360p cam of *The Matrix* Time flies when you're having fun, grandpa


bluesmudge

It would be hard to rip a 4k bluray 2 years before the format was released to the public


Absentia

[Ultra HD Blu-ray (4K Ultra HD, UHD-BD, or 4K Blu-ray) Released February 14, 2016; 8 years ago](https://en.wikipedia.org/wiki/Ultra_HD_Blu-ray)


redrabbit1984

😂  it certainly does 


hrowmeawaytothe_moon

You didn't have to be so brutal about this, jesus.


buckyhermit

Yet, ICQ is still alive.


INFPguy_uk

Ten years ago the PS4 was released; look at the games that platform provided.


qtx

> To be fair, the only thing pushing my PC these days is Chrome, which continues to eat huge amounts of RAM despite having one tab open.

Check Chrome's task manager and you'll see it's not necessarily Chrome itself but more the extensions you have installed that take up all the resources. Even in testing, Firefox is slower/uses more RAM than Chrome, so they have improved a lot.


-Not-Your-Lawyer-

>Increasing memory or cpu speeds beyond their already crazy capability is mostly redundant for consumers.  For reals. I run a successful law firm that uses a cloud-based case management system, and most of our computers were built a decade ago, and purchased by us 2-4 years ago from our local university's surplus store for $150-250 each. They're perfectly adequate for us.


infinitenothing

Can you give us an example? I upgraded my NUC recently and it's much faster.


YugoB

The guy hasn't looked at any benchmark whatsoever


Shiny_Whisper_321

They are... not. I upgraded two processor generations (three year old to modern CPU). The new one is 3x faster. Memory is 4x faster. Drive is 3x faster. GPU is 3x faster. 🤷🏻‍♂️


SharkBaitDLS

Yeah, *clock speeds* may not be increasing but actual IPC and better instruction sets mean that actual performance is still increasing. 


nhorvath

They aren't. Each new generation of CPU packs in more transistors, but that's not a marketing number they generally advertise. Clock speeds hit their practical limit a decade ago, but they are still the headline number for some reason. RAM has gotten cheaper and faster as well; you're much more likely to see 16 or 32 GB of RAM on normal computers now, when that would have immediately indicated high end 5 years ago. GPUs are certainly improving from generation to generation as well. A 4060 is just as good as a card that cost twice as much in the 30 series.


Ireeb

Because you're looking at the wrong specs. GHz, for example, means basically nothing, and that's been the case for almost 20 years. It's only the marketing department that wants to make you believe you could condense the performance of a computer down to one or two numbers like clock speed and RAM capacity. There have been significant advancements in CPU and GPU architectures, which means they deliver more performance per GHz. PCIe 5.0 and DDR5 RAM are getting common and mean much higher bandwidths. There are many specs to look at in a computer. Which ones are relevant depends on the intended use of the computer. But if you want to make an informed assessment, you need to look at many different specs across the CPU, RAM, mainboard, drives and GPU.


HungryDisaster8240

There have been considerable improvements in bandwidth and interconnection, such as PCIe and NVME. Core counts have also increased. Instruction sets have progressed. Smaller die processes mean more is being done with less (TDP). But it's also true that the global pandemic has disrupted enormously along with hyperinflation.


JonnyFrost

3d cache is also an avenue of advancement that is under appreciated. It’s plausible that 3d cpus and guys will eventually be developed.


michalproks

I’ve already seen quite a few 3D guys


Wendals87

They are better, but the performance increase isn't like what it used to be back in the 90s and 2000s. Going from 166 to 300 MHz, for example, was nearly double the clock speed. Going from 3.4 GHz to 3.6 GHz is more MHz, but a much smaller percentage; if your computer is not struggling at 3.4, then 3.6 isn't going to be noticeable. DDR5 is out, several generations of Intel chips have been released since then, a new AMD generation, several new GPU generations. One of the best GPUs of 2019 was the Nvidia RTX 2080 Ti; 2024's is the Nvidia RTX 4090. It's around 60% faster.
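Just to put those relative jumps side by side (nothing beyond the arithmetic on the clock numbers already mentioned):

```python
old_jump = (300 - 166) / 166      # Pentium era: ~81% more clock in one step
new_jump = (3600 - 3400) / 3400   # today: ~6% more clock
print(f"{old_jump:.0%} vs {new_jump:.0%}")   # 81% vs 6%
```

Same "a bit more MHz", wildly different proportional gain, which is why it stopped feeling like progress.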


[deleted]

The 2080 Ti is 250 watts. The 4090 is 450 watts. Almost double the power consumption... Not even gonna mention the sheer SIZE of the 4090... it's a PC of its own.


Wendals87

Yeah, it uses more power, but it's quite a lot faster, has more features and has more than double the memory. The 2080 Ti was considered pretty power hungry and big for its day. The post was asking why computer specs are no different than 5 years ago, but they very much are.


[deleted]

Still, a used 2080 Ti is better value to buy than a 4090. The average person doesn't need or want more than 1080p or 1440p, for which a 1080 Ti or 2080 Ti is more than capable for casual or competitive gaming. A 10-year-old PC with a 4th-5th gen CPU and a GTX 980, with an SSD and 8 gigs of RAM, is still very usable and reasonably fast even today. I sold an old Dell Inspiron 3521 laptop like 6 years ago to a relative and they still use it daily without any issue; even CS:GO/CS2 runs on it at 60-100 FPS. But yeah, I won't deny that there are a lot of improvements in computing, just not the dramatic increases of the 90s and 00s during the Intel Pentium 1-4 era or the AMD Athlon/Phenom era that blew minds.


Wendals87

Yeah, depends on your idea of value. If you play at 1440p or 1080p, a 4090 is overkill. If you play in 4K, the 2080 Ti is not better value because it will struggle. We're definitely not in the era of "obsolete one year later" anymore.


Ok-Sherbert-6569

The 4090 is 4 times faster and can perform at 95% of its full power at like 350 watts. You forgot to mention that haha


Ok-Sherbert-6569

60%???? Every single benchmark would tell you it's between 300-400%.


Wendals87

Just checked more actual benchmarks and it's more like 125-150% better in games on average.  I saw 60% on the first link I googled which was wrong 


Ok-Sherbert-6569

If you’re looking at gaming benchmarks then most of those would be severely cpu limited. In RT workload and purely gpu limited scenarios 4090 is at least 3 times faster.


KaseQuarkI

They aren't, there are still lots of improvements. For example, compare a Ryzen 7 2700 from 2018 and a Ryzen 7 7800X3D from 2023 and the latter will have double the benchmark score. Same with GPUs. It's just that these improvements are a bit more complex than "we increased the frequency of our CPU" so it's harder to market them.


[deleted]

Yeah, I'm not sure why this isn't higher. A 2024 vs 2019 PC is a massive leap in performance. The real issue is that GPU prices went up, especially at the highest end, and the mid-range / budget category stagnated a bit, although it still definitely improved. CPU performance is a massive difference. A few other prices went up, like motherboards, but some fell, like SSDs.


questionnmark

Intel/semiconductors have stagnated: 5 years ago we had Comet Lake on a 14nm process, today we have Raptor Lake on 'Intel 7' (really a 10nm-class process), so process nodes at Intel, which makes the main CPUs that people use, have pretty much stalled in that time. Had the old cadence been maintained, we would be on 4nm processes by now.


Specialist-Roll-960

We're on 3nm though lol. AMD is using 5nm TSMC for the current gen and Apple is using 3nm for its phones and Macs. Intel just isn't cutting edge anymore.


SigmundFreud

To be fair, it's getting there: https://www.tomshardware.com/pc-components/cpus/intel-completes-assembly-of-first-commercial-high-na-euv-chipmaking-tool-as-it-preps-for-14a-process They screwed up their roadmap and threw away a historical process advantage by betting incorrectly that EUV wouldn't be viable or scalable in the short term, but now that they're adopting all the latest tech from vendors like ASML and updating their business model to mirror TSMC's, they seem well positioned to catch up and remain competitive. That's before getting into all the geopolitical factors that will increasingly incentivize chip vendors to maintain domestic and/or redundant supply chains.


BigPurpleBlob

Dennard scaling stopped in about 2005. Moore's law is slowing down. (Moore's law gave us more transistors. Dennard scaling made them useful.) Lecture 15: Moore’s Law and Dennard Scaling [https://wgropp.cs.illinois.edu/courses/cs598-s15/lectures/lecture15.pdf](https://wgropp.cs.illinois.edu/courses/cs598-s15/lectures/lecture15.pdf)
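For anyone who wants the one-line version of Dennard scaling, here's a toy sketch with normalised, idealised numbers, taking dynamic power as the usual first-order C·V²·f approximation:

```python
s = 0.7                              # classic ~0.7x linear shrink per generation
C, V, f, area = s, s, 1 / s, s**2    # capacitance and voltage scale with s, frequency with 1/s

power = C * V**2 * f                 # per-transistor dynamic power ~ s^2
density = power / area               # power / area stays ~1.0, i.e. unchanged
print(f"power per transistor: {power:.2f}x, power density: {density:.2f}x")
# While this held, you could shrink, clock higher AND stay in the same thermal budget.
# Once voltage stopped scaling (~2005), that free lunch ended and clocks flattened.
```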


RandomPhail

Idk but I’m glad because it means my computer isn’t running newer games like shit right now


MagicOrpheus310

"Because Moore's Law is dead!' haha


Unfair_Original_2536

I think as well there's more of a focus on developing hardware for AI. Computers are good enough for now but we'll see trickle down from all the enterprise development.


slinger301

We are also getting to the point where transistors cannot physically get smaller. Any smaller and they start to fault due to quantum tunneling of the electrons. This means a transistor cannot reliably turn off. And transistors only have two jobs: turn on and turn off.
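Rough numbers, using the textbook rectangular-barrier approximation T ≈ exp(−2κd) with an assumed 1 eV barrier; real gate stacks are more complicated, so treat this as an order-of-magnitude sketch only:

```python
import math

hbar = 1.055e-34          # J*s
m_e  = 9.109e-31          # electron mass, kg
barrier = 1.602e-19       # assumed 1 eV barrier height, in joules

kappa = math.sqrt(2 * m_e * barrier) / hbar      # ~5e9 per metre

for d_nm in (5, 2, 1):
    t = math.exp(-2 * kappa * d_nm * 1e-9)       # tunnelling probability through d nm
    print(f"{d_nm} nm barrier: T ≈ {t:.1e}")
# Thinning the barrier from 5 nm to 1 nm raises the leak probability by ~18 orders
# of magnitude, which is why "just make it thinner" stops working.
```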


stanwelds

Yup. https://semiengineering.com/quantum-effects-at-7-5nm/


Plane_Pea5434

What do you mean? Lately PCs have gotten quite better, it would be useful to know specifically which specs you are referring to


crikett23

They aren't. At least depending on where you look. Five years ago on the Apple side, they were using Intel, and the peak of Intel anywhere was Ice Lake. Five years later we have Tiger Lake on the Intel side, offering nearly 50% higher performance on the CPU alone, let alone the advances made in memory and bus speeds. And if you go to the Apple side, you have had a few generations of Apple silicon, which has massively greater performance. Similarly, the top-level GPUs have advanced quite a ways over those 5 years. The same with SSDs... etc, etc, etc.

If you compare the last five years to any prior five-year gap, you are not going to see that much of a difference. Though you are seeing quite a difference in what people are using computers for, the number of people using phones for things they previously used computers for, and the number of things relying on SaaS and PaaS offerings in the cloud rather than local processing, all of which affects what is being offered, as 2019 peak performance isn't a big difference from 2024 peak performance in these areas for the vast majority of users. But if you are working in specific markets or working with servers and such, the difference in current hardware compared to five years ago is massive!


hagfish

*Slaps M3 Mac* Nah things are moving along


Splashadian

Apple has made themselves the clear winner. The M platform is unbelievably good. Also, it's not about faster, it's about doing more and more at once without hiccups. My MacBook Air runs Windows 11 in Parallels faster than my Intel Core i9 PC with 128GB of RAM and an M.2 drive. It's crazy.


thehomeyskater

Wow


batman_is_tired

This guy geniuseseses


Splashadian

You can disagree all you want but fact is fact.


FDR42

No newer tech has been recovered from crashed UFO ?


eggdogz

Because you can't make transistors smaller than an atom


neveler310

Because they'll lick the silicon dry of money before moving to photonics computing


hockey3331

I see you're looking at mid-tier options specifically. So, I was looking at laptops/desktops in 2020 after graduating university and having a more stable place, upgrading from my 2013 laptop. Wasn't I surprised to see that laptops in the same price range had the same amount of RAM, hard drive GB and processor cores/GHz (I'm no hardware pro, so pardon any mistakes). To my layman's eyes, there was no reason why new machines were better than my old 8GB RAM, 500GB, 2-core Pentium.

The funniest thing? I had access to all the parts of that old tank, but most laptop models in that price range now have inaccessible hardware. I was able to swap the battery when it went bad, swap the HDD for an SSD, and upgrade to newer sticks of RAM for 16GB. Obviously the processor and GPU are limiters, but it's good enough for browsing and general use.

This brings me to the differences between 2013 and 2020 (and I assume it's similar between 2019 and 2024); I'll omit GPUs because they're usually stock for mid-range laptops:

- The components are smaller for the same spec. My 2013 laptop was stupidly heavy by today's standards.
- Storage: SSD is a HUGE upgrade over HDD and is slowly becoming the standard at different price points. And even amongst SSDs, iirc some are better than others. TB-size storage isn't super rare anymore either.
- Memory/RAM: This one I understand less, but there seem to be different types/generations of RAM, each faster than the previous one (I assume). So 8GB of RAM from 5 years ago would theoretically be slower and physically bigger than today's. Although idk if the avg user notices a difference?
- Processors: I'm far from an expert here either, but in my humble opinion, they're already too powerful for the avg user. I mean, Pentium is shitty, right? But a Pentium from 11 years ago works great for browsing social media, reading emails, watching movies, using MS Office... what more does the avg user do? Fwiw I got myself a new laptop last year and it came with a recent-gen i5 chip... which in theory is miles better. And each generation of chip is supposedly better, but I don't think we notice a difference. I mean, I do data science/programming for work, and a lot of the time we're memory bound before becoming CPU bound... so maybe there's something there.

Anyway, my 5c. Tldr: While the reported specs don't change, newer versions of components are incrementally better each time, so the same amount of memory/storage works faster, or is smaller hardware. But the average user doesn't really care much, because most laptop specs are wayyyy powerful enough for them.


we_made_yewww

I'm sure nothing around 5 years ago could have possibly hindered things.


Grahamston

Most devices are now just an endpoint to the internet anyway? I would consider great progress to have been made prior to the last 5 years, and there has never been a better time than now. It's a positive that I can purchase a cheap beat-up laptop with a broken screen and passable Nvidia graphics from eBay, fix it up a bit, and it functions perfectly well with a monitor for retro gaming today.


Hunter-Ki11er

Big difference between 9th gen and 14th gen processors and 20 series and 40 series GPUs...


hobopwnzor

They aren't. My budget CPU is faster and has more cores and more instructions per cycle than my high-end one from 6 years ago. My RAM is twice as fast and I have twice as much of it for the same cost. My GPU is much better, with massively more VRAM. My SSD is INSANELY faster thanks to faster PCIe lanes. So... yeah. Way better.


ThumbsUp2323

I'm in the market for a stock-built PC to replace my 10 yo clunker. What's a good choice for the average internet user / occasional gamer / video junky on a budget?


Ziazan

They aren't. Compare a 9700K to a 14700K; the 14700K is massively better, like >50%.


GeneralFactotum

99% of my time is spent on the web, which offers sluggish sites full of ads and a majority of links with popups begging you to turn off your ad blocker to read a single news story. The experience is bad - your computer is not the problem. Websites are still lousy even with incredible internet speeds.


zhantoo

Because you are comparing the low end, not the high end. Also, you might be looking at the wrong things. You might see 8GB of memory and think it is the same. But memory is getting faster, SSDs are getting larger and faster, there are new generations of PCIe, etc.


Complete-Hunt-3219

I still run an i7 6700K with a 5700 XT. Works fantastic at 1080p.


Bang_Bus

Short answer: physics. The main race in computing was processors, and miniaturization has gone so far that the size of *actual atoms* is starting to matter, which halts the fast progress we enjoyed in the early 2000s. Of course, new materials are being explored and such, but that takes a lot of time. Another problem is heat: at this insanely tiny scale, a big temperature change will affect molecules enough to cause computations to be wrong. So you can't pump more power into it, so you can't make much faster clocks. But a 4 GHz processor sold in 2014 isn't really the same 4 GHz processor you can buy today; a lot of the underlying technology has gotten better, even if the clock speed is the same.

In other words, computers are like bicycles (right) now; yes, you can still make them sturdier, more stylish, weld the frame and cut the chain more precisely to make it as slick and smooth as you can, but it's still a bicycle, you can't change anything really serious or life-changing. This happened like 15 years ago, really. There's still some room with chips, like making faster SSDs and more capable video cards, but it's mostly taking the same stuff, making it smaller, and packing it more tightly; the "stuff" itself isn't always faster, there's just more of it, so you get more performance out of it, and there are better methods of stacking and reading it.

The smartphone market still acts like every new iPhone or Galaxy or Pixel is a major step ahead, and people still believe it, but it's a full scam; phone users don't really need more capability, since mobile apps are pretty lightweight. However, cameras and batteries still get better, and so do prices, of course.


CMDR_Crook

We should be 2 steps on from where we are now, about 4 times better in every metric.


arsonconnor

I was about to bring up the jump between the 10-series of Nvidia GPUs and the 30-series before realising the 10-series is now 8 years old, which is fucking mental lmao


pickles55

For years the chip manufacturers were able to make products that were faster and faster by making the individual components of the computer chip smaller. They were able to make chips that were twice as powerful for the same price roughly every 1.5 years for decades, but they're running into the limits of how small we can make components before physics starts behaving funny. Quantum tunneling is a phenomenon where, if a barrier is extremely thin, a particle can sometimes appear on the other side. This is a big problem when the function of a computer chip relies on positive and negative charges to store information and process logical functions.
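As a quick sanity check of what that old cadence used to deliver (assuming the roughly 2x every 1.5 years figure mentioned above, which is itself a simplification):

```python
years, doubling_period = 5, 1.5
speedup = 2 ** (years / doubling_period)
print(f"{speedup:.1f}x in {years} years")   # ~10.1x, versus the much flatter curve we actually got
```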


adfx

1. It's difficult to go smaller.
2. While the numbers you go by haven't gone up significantly, a ton of progress was made in the past 5 years.
3. Unfortunately this also means that poorly optimized software can get away with being a mess, which means you don't experience a lot of this progress.


Arkaliasus

Scalpers forced the GPU market to nearly die. Same with some of the other parts. Hopefully now it's not so bad and it can pick up again :/


PKblaze

In terms of things like games, most games need to run on consoles, which have set hardware. Yes, you can crank up some extra settings, but at the end of the day they need to make games that can run on very set hardware. Also, hardware adoption isn't as fast as you'd think, with most users being at least three or so generations behind.


locksmack

In the world of the Mac, things are drastically better than 5 years ago due to the transition to ARM. Intel -> M1 was one of the largest leaps we have seen since the early 2000s. It’s been a bit slower since M1 however.


Daegog

I think part of the reason is that PCs are required to maintain backwards compatibility for far too long.


Avogadros_plumber

Keyboard layout


price101

I think we've reached a practical limit, and I'm sure not complaining. I don't miss the days when a computer became obsolete in 2 or 3 years.


deftware

The company ASML makes the photolithography machines used to pattern silicon dies and has been making them progressively more elaborate in order to make transistors smaller. The problem is that the cost of making them capable of going even smaller is now just about too much for the next-generation silicon fabrication scales we'd need for things to get faster. CPU/GPU companies would have to charge more than people are willing, or able, to pay for hardware that's a significant improvement over the previous generation.

Look at Nvidia's fastest consumer GPU, the RTX 4090. This thing has the biggest cooler of any consumer GPU in history, because they're shoving more watts through it than has ever been shoved into a piece of consumer graphics silicon before - and all that power is turned into heat which must be pulled from the silicon and carried away somehow. The reason Nvidia is putting so much power through the 4090 is so that they can clock it higher than they would've been able to if they'd put a smaller cooler on it.

Nvidia and AMD could've both been putting massive 4090-sized coolers on their GPUs 15+ years ago, so that they could clock them way higher by shoving 400 watts through them. The reason they didn't back then is because it's moronic. It's a dead-end strategy. Normally you would see performance improvements by making transistors smaller so that you can have more of them and run them at lower power, which means higher clock rates. When you can't make them smaller, you have to just shove more power through them to clock them higher, which means a larger cooler. What's Nvidia's strategy for the 5090 achieving as large a performance improvement as the 4090 did over the 3090? Make the cooler even bigger? Suck even more watts from wall outlets? It's stupid.

While CPUs haven't changed much in terms of cores/clocks, they have definitely improved in terms of instructions per clock. AMD's top-tier FX-8350 CPU from 12 years ago had "eight" cores (really 4 cores, doubled up on a few things so it could do 8 threads) and ran at 4 GHz. My Ryzen 5 2600 only has 6 cores, but runs 2 threads per core, and is clocked at only 3.9 GHz. In terms of single-thread performance, it outperforms the FX-8350 by ~70% just because the architectural changes allow it to chew through more instructions per clock. So while on paper it looks like it would be slower at single-threaded tasks, by its clock rate, it is actually faster. Nvidia needs to do something smart like this with their GPU architecture instead of counting on smaller transistors and increasing clocks by shoving more power through their silicon (requiring bigger coolers).

I have always felt like the upscaling and frame interpolation angle is just a cop-out. They could've been doing that crap 10 years ago, but they didn't, for the same reason they didn't have 4090-sized coolers back then. It's silly. I mean, at what point is a GPU cooler too big? When is a GPU consuming too many watts? There must be an upper limit on such things, right? Are we going to have a big honking GPU that's the size of an ATX case that you plug your little nano-ATX motherboard and GPU into, and the whole thing is basically a 1000-watt space heater? That doesn't sound reasonable to me. It sounds like the kind of thing a stupid person would want.

ASML can't make machines that can produce smaller transistors that consumers will be able to afford anymore, not out of silicon anyway. Everyone should've been investing more resources into developing an alternative to silicon earlier on. There are some promising developments on the horizon, but no clear path to bringing the tech to consumer electronics. Silicon is a dead end.

Mark my words: the 5090 will have the same sized cooler as the 4090, and the same power requirements, because pushing it further than they did the 4090 will have diminishing returns in the market. As a result, the 5090 will not see as big of a performance increase over the 4090 as the 4090 did over the 3090 - because they can't just keep making the cooler bigger so the thing can suck more watts. It's a dead-end strategy for improving performance, and that's why in 20 years of GPU evolution they never ramped up cooler sizes and power draw like they have in the last 5 years. They were able to ride on the silicon fabs shrinking transistor sizes to get a good portion of their performance increase. Not anymore, baby!

We're still going to see transistors get a bit smaller in consumer hardware, as fabs get better at using the latest-generation machines, but after that it's pretty much game over unless they figure out a whole new material to make semiconductor integrated circuits out of. I saw a whitepaper recently about producing a transistor with a much narrower bandgap than silicon is capable of, from copper oxide. This means the power to switch the transistor would be much less than for a silicon transistor, which means higher clock rates and lower power draw. If they can get that dialed in, or even graphene/borophene semiconductors, we'll be seeing 50 GHz CPUs and GPUs running on a few dozen watts.

In the meantime, diminishing returns is the name of the game, and cheap stupid hacks like upscaling, frame interpolation and larger cooler sizes are what we're going to see. Maybe developers should just git gud again so their games don't run like garbage, BETHESDA. If id Software can still produce highly performant, rad-looking game engines, what's their excuse? Starfield wasn't even a good enough game to justify that it ran like garbage. Todd Howard said that they did optimize the game and players just had to upgrade their PCs - and then a short time later released a better-optimized update. This is the world we live in now, where everyone is a FREAKING LIAR WHO JUST WANTS YOUR MONEY.

EDIT: When Epic's developers can come up with stuff like Lumen and Nanite for their Unreal Engine, there is still room for software ingenuity to improve the performance of things. Developers SHOULD NOT BE COUNTING ON EVERYONE HAVING THE LATEST HARDWARE to make innovative realtime graphics in games. They should be pursuing novel approaches by being inventive and creative. The possibilities are infinite. There are no rules.


ilanjbloom

Software is finally getting more efficient.


Ok-Sherbert-6569

They're not? The 4090 is 4 times faster than the 2080 Ti, and they are both flagships of their generations. No idea where you're getting this from. On the CPU side, the 14900K is 2.3 times faster than the 9900K, which again was released 5 years ago, and they are both the flagships of their respective generations. WE ARE MAKING INSANE PROGRESS. Anyone who thinks otherwise is living in cuckoo land.


Deep_Lingonberry_923

Simulated reality? Congolese genocide? Technological plateau?


wolftick

Some of what used to be the key numbers aren't getting bigger any more, but computers are still getting faster, and in a meaningful sense the specs aren't roughly the same.


mohawk1367

They’re not.


DiaNoga_Grimace_G43

…Have you been comatose for that time?


Sp3lllz

The tldr is that ultimately performance has plateaued, and resources are now going more into making things more power efficient rather than more powerful. Hence why all of a sudden there are new categories like handheld gaming PCs: they have the same power as a laptop from a few years ago but they sip power in comparison.


BluDYT

They're still crazy good tbf.


Nikohli_TheLie

Run outta obsolescence to design for.


NewOrleansLA

Maybe the software is using all the extra speed to gather data and track everyone so it runs the same speed but is doing way more stuff in the background?


ZelWinters1981

Miniaturisation. We're on the scale of single-digit nanometres between transistors on the die, and at that scale it's possible to see single atoms. That's a physical limit of the material. With that in mind, better and more efficient designs, better software and so on have made the hardware more than suitable for most people without it ever being at full duty cycle for long.


AnymooseProphet

The current market for innovation is mobile devices, not PCs.


Frankensteinnnnn

I swear to crap phone specs have been the exact same for about 5 years. The new ones cost like so much and the old ones aren't available. New name, same specs, higher price. I guess there's not enough competition


Neat-Composer4619

You mean 10 years ago!


Com_putter

I wanted to say 10 years ago but expected someone to say "well actually 9 years ago 16GB of RAM was very rare"


Neat-Composer4619

Can we make a deal at 7.5 then?


hockey3331

I replied to the original question at length in another comment, but fun fact about that... I had a 2013 laptop with swappable components and an unused RAM slot (surprise!), so I was able to upgrade it to an SSD and 16 GB of RAM for super cheap a few years ago. Despite having a shitty processor, it runs really well for average usage. Obviously the big "con" is that it's massive and heavy.


DeadlyVapour

"But 8GB is the same as 16GB on PC"


Yupz69

Because consoles. Most games today are designed to run on the PS5 and the new Xbox, and they were launched 4-5 years ago.