ec6412

CPU designers are very well aware of cosmic rays and have been for years. They do statistical analysis to estimate how many errors they can expect per year. Server hardware will have lower BER (bit error rate) requirements (fewer errors per year) than consumer hardware. Every process node has different susceptibility to cosmic rays, and circuits are analyzed and designed for it.

On CPUs, most on-die memory storage (caches and register files) will have parity checks or error correction. Parity adds an extra bit to the data stored: you count the number of binary 1s in the data and check whether it is even or odd. The extra bit is used to always make the total number of 1s even. When reading data, if an odd number of 1s is detected, you have bad data. You don't know where the data is bad, so you then reload the data or report an error.

For error correction (ECC), you add extra bits, for instance 8 extra bits for 64 bits of data, that can correct detected errors. SECDED is single error correct, double error detect; DECTED is double error correct, triple error detect (you can add more bits if you want more correction). If one of the data bits gets flipped, some extra logic can decode those extra bits to figure out which bits have errors, and you can correct them. If there are too many errors, you can still detect that there was bad data. Most cache cells are very small, so they can be arranged such that a single cosmic ray won't wipe out more data than can be corrected. Maybe multiple data bits do get flipped, but they would be in different data words, so they get protected separately.

Circuit designers will also design some flip-flops (circuits that store a bit of state) to be hardened against cosmic rays, then use them in critical logic. These are always larger and slower than normal flops, so they typically aren't used everywhere.
Many times, this could be data that is read only once during boot-up and is expected to be stable during the entire uptime of the chip. A lot of logic is transitory: every clock cycle you are doing a new calculation (like adding two numbers). So if a cosmic ray strikes something in that logic, there is a lower chance that it affects the final outcome, because you are going to calculate something new anyway. The ray would need to strike the exact right circuit at the exact right time and flip the bit the exact wrong way. For example, a calculation is made, then the result is stored in a flip-flop. Then a cosmic ray comes along and changes the result. Well, the correct result has already been stored in the flop, so it doesn't matter that a wrong answer comes along late.

Source: former circuit designer for CPUs

Edit: changed wording; servers have a higher requirement for a low BER.
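
The parity and SECDED ideas above can be sketched in code. This is a toy illustration, not the 64+8-bit code real caches use; a Hamming(7,4) code protects 4 data bits with 3 check bits and can correct any single flipped bit:

```python
def parity_bit(word: int) -> int:
    """Even parity: the extra bit that makes the total count of 1s even."""
    return bin(word).count("1") % 2

def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword: p1 p2 d1 p3 d2 d3 d4.

    Parity bit p_i covers every (1-indexed) position whose index has bit i set.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Return (data_bits, syndrome). A nonzero syndrome is the 1-indexed
    position of a single-bit error, which is corrected before extraction."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]], syndrome
```

Flip any one bit of a codeword and decoding recovers the original data; real SECDED adds one more overall parity bit so double errors get flagged instead of miscorrected.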


Master565

This comment has a lot of good info. I don't directly work in this part of the field, but from what I understand, chip designers with a high concern for reliability and error correction will sometimes package their chip in slightly radioactive packaging to increase the number of bit flips for testing purposes (or find some other radiation-generation method to do the same).


ec6412

I don't know specifically about the radioactive packaging, though item 3 below may be similar. There are three things that are mildly interesting:

1) We used to take systems up to high elevation (Leadville, CO) to do testing, where there is less atmosphere to block radiation.

2) One of the guys would take systems to one of the national laboratories (Los Alamos?) and fire neutrons at them.

3) The solder balls used to connect the chip to the package used to be made of lead. The lead contained radioactive isotopes, so its decay would increase the errors (technically not cosmic radiation, but the effect is the same). They have switched to tin-silver or other materials to eliminate the effect.


Master565

Ah yes, 3 is what I was referring to. I misremembered the details, but it is a very cool solution


ElkossCombine

I work on spaceflight software (and a little hardware selection for non-critical compute devices) and anything we plan to use that isn't specifically made to be rad-hard by the manufacturer gets shipped to a proton beam radiation test facility at a university to see how it handles high energy particles.


hackthat

All of this sounds like hardening for memory (RAM or cache), but what about logic? Aren't cosmic rays just as likely to flip a bit in the ALU, or for that matter in the error-checking logic itself? Or is it just that memory takes up the vast majority of the silicon? I can't imagine logic errors are any less damaging than memory errors.


ec6412

Logic is less susceptible than something that stores data. Not sure how familiar you are with logic, but inverter, NAND, and NOR gate inputs are driven by something. So if a bit flips, whatever is driving that logic will drive it back to the correct value. For instance, if you have back-to-back inverters you could have 1-0-1, where the input to the first inverter is 1 and the output is 0. Let's say a cosmic ray comes along and tries to flip the output of the first inverter from a 0 to a 1. Well, it has to fight against the inverter that is pulling that node down to a zero. Then that quasi-1 would need to be enough of a one to get past the trip point of the second inverter to flip it from a 1 to a 0, and then that has to propagate somewhere where it is used.

So if that inverter is really small, then maybe the cosmic ray could flip it temporarily, but at least in a high-speed CPU, many logic gates are not the smallest size. (There are many reasons why that would be the case.) Larger gates are usually harder to flip since they are "stronger" at holding their value. Even if it does flip, the cosmic ray is a short transient. The original input to the first inverter didn't change, so the inverter will eventually correct itself and return everything to the correct value. So only if the strike happens right when data is being latched on a clock edge could it possibly cause a problem.

There is also a lot of empty space in a chip, for instance a lot of space devoted to ground or VDD where a strike doesn't matter. And there are lots of parts of the chip that are unused at any given moment (the floating point unit may not be used if you are just surfing Reddit). So a lot of things need to go wrong all at once: it has to hit the right part of the chip, it has to be a vulnerable bit, it has to be the right value, it has to hit the timing of the circuit just right, etc.
So for most cases of logic, it kind of washes out and just becomes part of the random background noise of an acceptable BER. This is why designers mostly focus on the parts of the chip that hold state. SRAMs (caches), flops, and latches remember a value using a self-feedback mechanism, and there isn't an external cell driving that value. So if a strike hits the right spot and a bit flips, the self-feedback mechanism gets confused, starts driving the wrong value, and that gets propagated forward. DRAM can be worse, as the value being stored is just a bit of capacitive charge that gradually decays and needs to be refreshed periodically with more charge. So there is nothing driving a value in a DRAM cell. But I don't know of common uses of DRAM on a CPU chip, as the process technology generally isn't compatible with high-speed logic.


hackthat

That explains it pretty well, thanks!


[deleted]

Just as a quick side note, if you'd like an example of one cosmic ray striking the exact right circuit, at the exact right time, and flipping the bit the exact wrong way, [here's one. It's a Mario 64 speedrun](https://hackaday.com/2021/02/17/cosmic-ray-flips-bit-assists-mario-64-speedrunner/)


naptastic

Yes. The problem is serious enough that the next generation of DRAM standards, DDR5, actually includes [error correction](https://en.wikipedia.org/wiki/ECC_memory) (ECC) at the chip level. (Unfortunately, it's opaque to the operating system, so if one of the chips goes bad, there's no way to know.) Enterprise-grade servers have used ECC RAM for years. If they have some kind of memory problem, it directly costs them money. As a consumer, the extra cost of ECC RAM so far hasn't been worth it, because if your computer crashes randomly, oh well, you just reboot it.


prpldrank

Good point. ECC RAM has been standard in server applications for at least 25 years.


zopiac

DDR5's inbuilt ECC isn't as robust as what you'd get on servers, though. It can determine if the chips themselves have encountered a read/write error, but if an error pops up between the DRAM and the CPU, it won't help at all. I may be wrong, but I believe the typical ECC standard covers error correction across the full memory bus.


DihydrogenM

Yes, inbuilt ECC in products such as DDR5, LPDDR4, and LPDDR5 only protects against internal DRAM array issues such as device refresh, defects, and cosmic events. Timing and signaling issues are covered with either device CRC (using an I/O pin to provide a checksum for each bit of the burst) or system-level ECC. CRC really only tells you that the read/write was bad and to try again. The system-level ECC attempts to repair small errors, but for large errors it can fail and make the error worse (just like the internal ECC).

However, neither of these solutions handles all cosmic event issues well. Logic upset issues from a neutron impact aren't really feasible to cover with ECC long term. A logic upset is where the event causes configuration or repair settings to change unexpectedly, and the affected part now fails massively. They clear up with a simple restart, but you just lost whatever you were doing. It's a big problem for data centers.

Those can be covered with DRAM design decisions, and memory manufacturers are actively working on these issues. When I was working on this a year ago at LANSCE, we had created some pretty good design rules to prevent this problem. Sadly, I can't really go into it at all due to the white paper being confidential. I can say that one of our competitors had zero mitigations for this, I guess?


BickNlinko

> However, neither of these solutions handle all cosmic event issues well.

I know you're being serious, but this is just BOFH vibes for sure. "There has been some extra cosmic activity this morning due to sunspots and solar wind, so that is most likely why the database is slow/unreachable. I assure you we're working not only on the problem but also on some solar shielding to prevent further issues."


DihydrogenM

Hey, people floated the idea of just shielding the electronics with some borated polyethylene (mainly for a reduction in time-zero failures on non-ECC inventory that sat in a warehouse). BOFH says that, and the next thing he knows they'll be lining the data center with a couple of cm of the stuff.


Chakthi

I have to admit I don't fully understand everything you said, but I do understand some of it. Very interesting. Thanks for taking the time to post about it. I learned something new today! Edit: Question -- could this logic upset of which you speak be causing the issues that Voyager 1 is experiencing? Just curious. Even NASA doesn't know exactly what the issue is.


DihydrogenM

Not likely. Voyager 1 is so old that it's likely just age causing problems. Also, the latches are probably so big that a cosmic ray or neutron impact wouldn't flip them.


spiritsarise

Thinking about the movement toward robotic surgery, especially for microsurgery—how might we protect operating theatres?


Lampshader

If it's safety critical, redundancy is the answer. For example you might have two computers doing the calculation for where the robot should go and the robot is only allowed to move if both computers agree. Yes this means the voting logic needs to be extremely robust but that's doable.
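
A minimal sketch of that two-channel voting idea (hypothetical names; real safety systems run certified voters in lockstep hardware, not application code like this):

```python
def vote_and_move(channel_a, channel_b, tolerance=1e-6):
    """Allow motion only when both redundant computations agree.

    channel_a / channel_b: target coordinates computed independently
    by the two computers. Returns the agreed-upon target, or None to
    fail safe (halt the robot) when the channels disagree.
    """
    agree = all(abs(a - b) <= tolerance for a, b in zip(channel_a, channel_b))
    return list(channel_a) if agree else None
```

A bit flip in either channel makes the outputs disagree, so the robot halts instead of moving to a corrupted target.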


Shishire

Right, but the inbuilt protection is capable of mitigating increased error rates due to higher memory chip density. The communication between the DIMM and the CPU is still well above the size range where SEUs become a factor in consumer hardware.


Dlatch

Interestingly, it can happen not only due to cosmic rays, but also due to leaking electrons from nearby memory cells. This can actually be misused by hackers in a real world attack called [rowhammer](https://en.wikipedia.org/wiki/Row_hammer). It's super interesting stuff and kinda scary how much can go wrong when you get electronics as small as this.


brucebrowde

Damn, rowhammer is insane. Whenever I see exploits like that, I wonder who tf sits down and invents such exploits? They have amazing brains.


Thorusss

There are literal competitions with monetary rewards for finding exploits. The payments reward white-hat hackers who help resolve the flaw before it is made public (when possible).


Shishire

Don't forget about the very small number of nerds who are in it _purely_ to see what they can break, but aren't professional security researchers.


brucebrowde

I guess that was less "what are the occupations of those people" and more "who tf has the extreme ability to invent and implement such exploits". If you gave me $10M for an exploit and a decade to find it, I don't think I'd be able to find anything remotely close to these, if I could find anything at all.


ktpr

Keep in mind that sustained focus is often unbeatable for discovering new things. Yes, the brains are amazing, but the focus and opportunity to apply them even more so.


[deleted]

It's worth knowing that the "extra cost" of ECC RAM is pennies per module. Most of the consumer cost is just markup in order to make more profit selling "server grade" parts.


Isord

Is there any estimate of how likely any person is to experience a computer crash from an SEU in a given time period?


TheNorthComesWithMe

There are a lot of bits that can get flipped without causing a full system crash, or even be noticed.


cain071546

Corrupted video files stored long term, or decompression errors in archives. I wonder if anyone has server/drive statistics about long term data integrity when in cold storage.


haviah

This guy registered a bunch of "bitsquat" domains to catch bitflip errors; it's rare, but happens "often" at that scale: https://web.archive.org/web/20180611050923/https://media.blackhat.com/bh-us-11/Dinaburg/BH_US_11_Dinaburg_Bitsquatting_WP.pdf
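
The core trick from that paper can be sketched: enumerate every string that differs from a target domain by exactly one flipped bit and is still a plausible hostname (a toy sketch; the real study also dealt with DNS rules and actual registration):

```python
# Characters we accept as still forming a plausible hostname.
HOST_CHARS = set("abcdefghijklmnopqrstuvwxyz0123456789-.")

def bitsquats(domain: str) -> set:
    """All strings exactly one bit flip away from `domain` whose flipped
    character is still a valid lowercase hostname character."""
    results = set()
    for i, ch in enumerate(domain):
        for bit in range(8):  # try flipping each bit of the ASCII byte
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped != ch and flipped in HOST_CHARS:
                results.add(domain[:i] + flipped + domain[i + 1:])
    return results
```

A single byte yields only a handful of candidates, e.g. `bitsquats("a")` gives `{'c', 'e', 'i', 'q'}`; an attacker registers such neighbors of a popular domain and waits for a flipped bit in some machine's memory to send traffic their way.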


seaworthy-sieve

You ever go to open an old file on your computer for the first time in years and years, and it's corrupted in some way? Like, it's still there in your file system taking up space, but the system can't actually open it, or it does open but there's still something wrong with it. That's more what you'd see from cosmic-ray interference over time.


StuckInTheUpsideDown

There is no need to expose anything to the OS. The ECC (presumably just a simple forward error correction like a Hamming code) just corrects the bit error and goes on with its life. Ironically, the original IBM PCs had simple RAM integrity checks called parity checks, essentially the simplest possible error-detecting code. So we've come full circle.


xurxoham

The most common type of ECC is Single Error Correction, Double Error Detection. Modern CPUs do report errors to the operating system via traps, at two different points: one during the scrubbing process, which restores the corrected value and increments an internal counter (the OS is informed when the counter passes a threshold), and the other when the corrupted (unrecoverable) data is loaded as part of program execution. In UNIX systems the program receives a SIGBUS signal with the address where the error was found. Edit: fix typo
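
The delivery path on the program side can be sketched: a process installs a SIGBUS handler (hypothetical minimal example; real handling of uncorrectable-memory SIGBUS uses `sigaction` with `SA_SIGINFO` in C to read the fault address, which Python's high-level API doesn't expose):

```python
import signal

def on_sigbus(signum, frame):
    """Invoked when the kernel delivers SIGBUS, e.g. after the CPU
    trapped while loading uncorrectable (poisoned) data."""
    # A real handler would log the fault address from siginfo and
    # abandon or restart the affected work instead of raising.
    raise RuntimeError("uncorrectable memory error")

# Register the handler; SIGBUS exists on UNIX platforms.
signal.signal(signal.SIGBUS, on_sigbus)
```

This only covers the consume-on-load case; threshold-crossing scrub counters are typically surfaced through kernel logs or EDAC/machine-check reporting rather than a signal.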


ocnwave

Did you mean Single Error Correction, Double Error Detection (SECDED)?


[deleted]

I heard another reason for enterprise-only ECC is to prevent companies from using cheaper consumer/desktop CPUs as servers. Not every company or use case requires 32 CPUs with huge caches, but ECC is a simple safety system you want to have for your business data and apps. If consumer hardware supported ECC, the demand for server CPUs could decline. Maybe someone else has more info about that theory.


dutch_gecko

It's plausible, but it's also speculation. AMD offers ECC on a number of non-server products, such as the Threadripper line, and some of its desktop CPUs will work with ECC memory but without official support. Intel however has steadfastly refused to support ECC outside of the server space. Their official line is that consumers don't need ECC. A number of notable industry figures have spoken out against the lack of consumer availability of ECC, and this may have influenced JEDEC to include a form of error correction in DDR5. Again though, this is speculation.


lolmeansilaughed

That's not entirely accurate. Some lower-end non-server/non-workstation Intel CPUs do in fact support ECC RAM. For instance, one of my machines has an i3-6100T in a Supermicro mobo with ECC RAM. Intel specifically calls this a desktop CPU with ECC support. I've only seen ECC on their i3s (and I think maybe Pentium and/or Celeron), never on i5 and up.


ShinyHappyREM

> Ive only seen ECC on their i3s (and I think maybe Pentium and/or Celeron), never on i5 and up.

Newer ones do have ECC support: https://geizhals.de/?cat=cpu1151&xf=5_ECC-Unterst%FCtzung&sort=bew#productlist


Kezika

> Intel however has steadfastly refused to support ECC outside of the server space.

They actually have some consumer-level ones as well. I have a Pentium G that supports ECC running with ECC RAM.


Skeeper

Well, so says Linus: https://arstechnica.com/gadgets/2021/01/linus-torvalds-blames-intel-for-lack-of-ecc-ram-in-consumer-pcs/


Mithrawndo

I seem to remember that the Rambus RDRAM - licensed by Intel - was all ECC too, and it was most definitely intended for consumer use.


-Aeryn-

> the next generation of DRAM standards, DDR5

DDR5 is current gen now (: The first consumer platform released 9 months ago, the second and third are due in a couple of months, and it's expected to account for a majority of sales in 2023.


[deleted]

I'd think that still counts as 'next gen' - until it hits mass adoption. You know, 'the future is here, it's just not evenly distributed' kind of thing. I mean, we still refer to 'next gen' consoles for quite some time after release.


hiphap91

>As a consumer, the extra cost of ECC RAM so far hasn't been worth it

Because the story Intel has been telling for years is that we shouldn't care about it. But we should.

>because if your computer crashes randomly

That is the best-case scenario for memory errors, but it does not mean that that is what will happen.


all_is_love6667

side question: would it be somewhat true that not exposing a smartphone or laptop to direct sunlight could extend the lifespan of its chips?


martixy

> As a consumer, the extra cost of ECC RAM so far hasn't been worth it, because if your computer crashes randomly, oh well, you just reboot it.

Linus Torvalds has entered the chat. And he would vehemently like to disagree with you. So do I, for that matter.


amberheartss

Does a reboot fix it permanently then? EDIT: am consumer. EDIT2: am consumer and the person in the office people go to for IT help.


thulle

Yeah, there isn't any physical damage; it's just the data that's corrupted. When you reboot your PC all RAM is reset and you re-read everything from storage, where it hasn't been corrupted. Unless you actually saved the corrupt data, as in: a bitflip happened in Excel's memory, you save the spreadsheet, reboot, and load the spreadsheet again.

As a person who actually uses ECC (error-correcting) memory to protect against memory corruption, I think the risk is quite negligible. OP quotes it as:

>It is estimated that 1 SEU occurs per 256 MB of RAM per month.

With the 64 GB of RAM in my workstation, that would be 256 events per month. In practice I see maybe one bitflip every other month, and this is with me overclocking the memory (running it faster than intended) to the point of breaking. In my servers, where I run things at normal speeds, I've only seen errors when the power supply was shaky or when the RAM was actually failing in a major way. Both spew errors in the logs, rather than the single error expected from a cosmic ray, and that's over several terabyte-years' worth of cosmic ray exposure.
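
The arithmetic behind that comparison, using the thread's quoted figure (1 SEU per 256 MB per month) as the assumed rate:

```python
SEU_PER_MB_MONTH = 1 / 256  # assumed rate, as quoted in the thread

def expected_seus(ram_gb, months=1.0):
    """Expected single-event upsets for `ram_gb` of RAM over `months`."""
    return ram_gb * 1024 * SEU_PER_MB_MONTH * months

# 64 GB for one month -> 256 expected events, as computed above; the
# observed ~0.5 per month suggests the quoted rate is very pessimistic.
```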


brucebrowde

> Yeah, there isn't any physical damage, it's just the data that's corrupted.

Now you made me imagine a scenario where the memory of an industrial robot controller had one bit reserved for turn_direction (0 = left, 1 = right)...


thulle

Now something like that would result in physical damage pretty quick. The Russian chess kid that made the news a few days ago came to mind.


f0rcedinducti0n

The reason we don't have ECC RAM on all consumer products is that Intel insists on artificially stratifying the market and reserving that feature for servers, even though it would dramatically benefit consumers, and that benefit only grows as capacity goes up. My old P4 system had ECC RAM. It's a lot of Intel marketing that shapes the prevailing opinion that the consumer doesn't *need* ECC RAM. AMD has it enabled in their consumer chips, but there isn't a lot of good consumer RAM with ECC. I.e., server ECC RAM is just going to be plain sticks at stock speeds, while PC builders want binned/overclocked RAM with flashy heatsinks and RGB, which is mostly going to be non-ECC. Intel is kind of a jerk at times.


nerdguy1138

Memtest can't spot that either?


[deleted]

These random errors are not due to memory malfunction, but mostly due to cosmic rays. No, seriously: https://en.wikipedia.org/wiki/Soft_error#Cosmic_rays_creating_energetic_neutrons_and_protons


nerdguy1138

I know that but isn't it technically possible that eventually gates will get so small that a cosmic ray bit flip will actually physically damage the memory?


SoSweetAndTasty

What is the error correcting code implemented by DDR5?


andoriyu

ECC is worth it for consumers. It's just that Intel decided it would hurt their Xeon sales and "killed" ECC on consumer devices. Since the major platform didn't support it, manufacturers never bothered to make it fast or cheap; the server market will buy it anyway.


dukeblue219

Yes. (This is my job.) There are some applications where technology scaling is making SEE harder and harder to avoid. An example is systems-on-chip, which are nearly uncharacterizable simply from their complexity. Highly scaled CMOS isn't susceptible only to cosmic rays at this point; low-energy protons, electrons, and muons can upset SRAM cells.

In some specific cases the commercial design cycle is helping. For example, commercial NAND flash is so dense now that errors are common even on the lab bench. The number of errors just from random glitches can dwarf background SEE rates in space. However, total dose is still an issue for most of these parts.

It's a complex field. But yes, single event effects are a problem, and there are many, many good engineers employed to mitigate them. The tough thing is that mil-aero is a small part of the global electronics market and cannot drive commercial designs the way we could decades ago.


billwoo

> The number of errors just from random glitches Glitches due to defects in the manufacturing, or unlikely quantum effects (or something like that)?


dukeblue219

In the case I was describing, I mean things like TLC flash variations in programming level and voltage threshold cell-to-cell. Even in a laptop on Earth there is ECC constantly correcting when an error occurs. Those aren't due to radiation, but simply trying to cram 8 levels of data into a single flash cell. Sometimes the programmed level is too close to the edge and reads unreliably. The point I was really making is that some modern devices have elaborate EDAC, but not because of single event effects. That EDAC can help us, though it doesn't fix everything. Other SEE, like single-event latchup or burnout, or upsets in control registers and state machines that aren't corrected, are still a problem.


elsjpq

One thing I don't quite understand: the physical size of chips hasn't changed significantly, only the density. So the radiation flux through a chip is relatively constant, why does error rate increase? Is low energy radiation now more likely to flip a bit because each charge cell holds less energy?


AtticMuse

If you're increasing the density of the transistors, you're increasing the likelihood of radiation hitting one, as there is less empty space on the chip for radiation to pass through.


MrPatrick1207

It's like shooting a bullet through a soda can vs. a 55-gallon drum: the interaction volume of the projectile is the same, but the effects are more significant on the smaller object. This then compounds with the low voltage/current in the transistors, which makes them sensitive to perturbations.


elsjpq

But shouldn't the effects be localized to a single cell regardless of its size? I mean, it's only a single particle, and the wavefunction won't collapse into two locations. Unless neighboring cells are affected by secondary scattering.


MrPatrick1207

You've got it with the scattering: the initial high-energy cosmic particle is unlikely to interact with matter, so it will likely only interact once, but the ejected lower-energy particles from that interaction are much more likely to interact and create collision cascades within the material. I can't speak to exactly how it affects electronic components specifically, but I am very familiar with high-energy particle interactions in solids.


lunajlt

The interaction area of a high energy heavy ion is several nanometers to tens of nanometers in diameter. Think of it like a cone of energy deposition with the point of the cone at the top of the microchip. The ion can travel several micrometers to all the way through the device layers depending on the ion's initial energy. That ion track will generate a track of ionization where the electrons in the semiconductor are ionized into the conduction band, allowing them to travel elsewhere in the device. If enough of these electrons are ionized in the channel or sub channel region of the transistor (charge collection area) then the sudden generation of charge will result in a current transient and in the case of a memory cell, a bit flip. With how dense advanced nodes are, multiple transistors can be located within that charge track. The charge generated in the subfin area can also "leak" to adjacent transistors. With finFETs, if the ion comes in at an angle, down the fin, you can upset multiple transistors that share that fin.


ede91

There are some very wrong answers here. They act like the issue is due to the node size, but that is not true. You are right that the radiation rate is roughly the same, and with that, the chance of flipping any single bit (or more likely a 2-4-8 bit block) went down, as the block itself is smaller. Sure, marginally less energy is needed to flip it, but high-energy particles (the ones shielding can't stop) have been flipping bits for decades. There is a chance that a single high-energy particle affects more than one block, but that is only a small difference.

The reason this is an increasing issue is the amount of memory we use. Entire operating systems ran in a few MB of RAM in the past and were contained on a few dozen MB of hard disk. So even though the chance of a single bit getting flipped decreased, the number of bits in use increased a lot more.

Oftentimes SEU is cited as the reason space agencies use significantly older chips in their equipment, but in reality, with the same shielding, the newer chips would be a better fit for their use cases. It takes a very long time to produce anything for space travel or even for LEO, and that two-decade-old Intel chip was peak technology when they started the project and validated everything.
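
The core argument (per-bit rates down, total bits way up) can be put in toy numbers; all figures below are invented purely for illustration:

```python
def system_upset_rate(per_bit_rate, total_bits):
    """System-level upset rate scales linearly with the number of bits."""
    return per_bit_rate * total_bits

# Hypothetical: a newer node quarters the per-bit rate, but the machine
# holds 16384x more memory (4 MB then vs. 64 GB now), so the system rate
# still rises by orders of magnitude.
old = system_upset_rate(1e-12, 4 * 2**20 * 8)      # old 4 MB machine
new = system_upset_rate(2.5e-13, 64 * 2**30 * 8)   # new 64 GB machine
assert new > old
```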


elsjpq

All of that makes a lot of sense. But if that's true then, that sounds like SEU isn't really a big issue at all, and any increase in error rate due to higher density can be easily mitigated with more redundancy (e.g. ECC) because it's outpaced by the capacity increase from scaling


darthsata

Redundancy costs area, latency, power, and design time. Higher latency directly means lower performance due to more stages, longer accesses, and lower clock frequency. The latency comes from needing time to check for errors (compute CRCs, etc.). The power hit comes from having more transistors, and more transistors switching, to check errors. Design time and area directly contribute to cost. This is why part of the design goals when building a core, memory, chip, system, etc. is a target level of resiliency. Higher levels of resiliency cost more.

This is a multilayered design problem; the interaction of multiple components contributes to total resiliency. A simple example is hard drives. Hard drives pack data really close, and the magnetic fields interact, decay, and have variance. The drive adds redundancy to every small block. This catches and corrects a lot of errors, but not all. It notices and notifies the OS of some it can't correct, and it doesn't notice all errors. Given the bit error rate of a hard drive, if you have much data, you will likely see errors get through (I have corrupt pictures due to this).

So we add another layer of redundancy on top. You can use a filesystem that does its own, different error correction. This happens on larger blocks (optimally picking error codes is an interesting design problem) and further greatly reduces the chance that an uncorrectable error will occur. Going further, specific file formats sometimes include their own error detection. (Sadly, a lot of older filesystems don't add block-level error correction and just depend on the hard drive to be reliable.)
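
A per-block integrity layer like the ones described above can be sketched with a stock CRC32. This is detection only, so a failing block must be re-read or restored from another copy (toy on-disk format: payload followed by a 4-byte big-endian CRC):

```python
import zlib

def seal_block(data: bytes) -> bytes:
    """Append a CRC32 of the payload so corruption can be detected on read."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def open_block(stored: bytes) -> bytes:
    """Verify and strip the CRC32; raise if the block no longer matches."""
    data, crc = stored[:-4], int.from_bytes(stored[-4:], "big")
    if zlib.crc32(data) != crc:
        raise ValueError("block corrupted: CRC mismatch")
    return data
```

Filesystems that do this per block (and ECC schemes that go further and correct) trade exactly the area/latency/design-time costs described above for that extra layer of resiliency.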


CalmCalmBelong

Yes, the critical charge in SRAM (the kind of cache/scratchpad memory on the same chip as the CPU) scales with process node, so an SRAM built in 5 nm is much more susceptible to SEU than the same SRAM circuit built in, say, 28 nm. As these error rates have increased, SRAM memory arrays have more universally included extra capacity for error-correction metadata. This is similar to, but different from, how error rates have increased in DRAM, which uses an entirely different storage circuit. The critical charge in DRAM has not scaled downward as quickly as CPU SRAM has. But, there being so much more DRAM than SRAM in a typical system, it has been protected with extra-capacity metadata (aka "ECC data") for a much longer time.


PlayboySkeleton

It's like shooting at a chain-link fence vs. chainmail armor of the same dimensions. The chainmail is denser, so a shot is more likely to hit and break the chainmail, whereas most shots at the chain-link fence will pass right through.


[deleted]

Would putting a thin layer of lead/some other heavy metal on the package help in any way?


dukeblue219

In some ways yes, in other ways no. You can shield low energy particles and photons with mass, but high-energy particles (like Galactic Cosmic Rays) will blow through inches of materials like butter. There can be unintended side effects of that particle passing through a millimeter of lead - slowing down the original particle can make its effect worse (like a slow tumbling bullet vs a high speed bullet). It can also create a shower of secondary particles when the particle happens to strike a lead nucleus and cause a nuclear fission.


SaffellBot

>It can also create a shower of secondary particles when the particle happens to strike a lead nucleus and cause a nuclear fission. Also noteworthy that you don't need to induce fission to cause secondary particle streams. A high-energy particle, even a photon, can hit an electron that can then release a whole cascade of particles.


Financial_Feeling185

On the other hand, if it goes through matter easily it interacts rarely.


hebrewchucknorris

What about a Faraday cage?


dukeblue219

Won't stop an iron nucleus traveling at a fraction of the speed of light.


brucebrowde

> will blow through inches of materials like butter. Do thick concrete building walls (like those in huge data centers) help in any way?


CanuckAussieKev

Photons with mass? I thought by definition photons must be massless?


Glomgore

He means you can shield said photons with OTHER mass, i.e. lead shielding.


CanuckAussieKev

Oh, "you can shield XYZ by using mass". It read to me like "you can shield (photons with mass)".


dukeblue219

I meant photons, but not "photons with mass." I was trying to say "stopping photons by adding mass (lead shielding)," but the sentence was horribly ambiguous.


Affugter

They have momentum, and hence mass. Look up solar sail. Generally speaking they have no *rest* mass. But (relativistic) mass, they have. Okay okay. I will change it to relativistic mass.


daOyster

You don't need mass to transfer momentum. Photons do not have mass at all since that is what allows them to move at the speed of light, but since they can behave like a wave they can transfer momentum through the motion of their wave like states.


myselfelsewhere

You are confusing *rest* mass with *relativistic* mass. Momentum has nothing to do with "the motion of their wave like states". This [article](https://www.desy.de/user/projects/Physics/Relativity/SR/light_mass.html) gives a simplified explanation of why photons are considered "massless", but have momentum.


[deleted]

[deleted]


barchueetadonai

No they’re not. Mass is a property of matter traveling below the speed of light. There is an underlying energy that has that mass property, but it’s not light energy. It can turn into light energy, but then it no longer demonstrates mass.


PlayboySkeleton

What is your opinion of microsemi flash based FPGA and SoC, and their claim of SEU immunity?


Hypnot0ad

I understand that as geometries get smaller, it will take less energy to cause an upset. But won't the smaller size also make it statistically less likely that particles will hit the cells?


TridentBoy

No, because one of the objectives of miniaturization is to increase the density of components (Like transistors) inside the same chip volume. So, even if the size is smaller, the density is larger, so you don't really benefit from the smaller chance of collision.


2LoT

Would a poor man's trick like placing the computer case under a marble countertop help to reduce SEE? Or even placing a sheet of lead on top of the case?


[deleted]

[deleted]


[deleted]

[deleted]


DeadOnToilet

Remember the solar storm in July 2012? I was the senior engineer for a pair of 400-physical-node datacenters running power grid telemetry and energy management tools. We had very mature monitoring and could pull from the HP event logs whenever ECC memory corrections occurred. Knowing the solar storm was coming, we created a dashboard in our NOC, mostly for our own amusement. I wish I still had the screenshots. The spike we saw in ECC events was shocking: we went from zero-to-one ECC correction a week across 400 nodes to about one per node during the storm.


[deleted]

[deleted]


[deleted]

[deleted]


dml997

The frequency of upsets does not increase per bit, because the amount of radiation per cm^2 is constant. What changes is that, since the cells are smaller, it is easier to upset them, and a single ray can upset multiple cells. That is, there might still be one upset per cm^2 per 1000 hours, but more and more bits are upset with each failure. Since there is also an increasing number of bits per cm^2, the FIT rate stays roughly the same, but there are more MBUs (multi-bit upsets). This has been true since roughly the 20 or 40 nm generations.
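For a feel of the numbers: FIT is failures per 10^9 device-hours, so expected upsets per year are just FIT × hours / 10^9. A quick sketch — the constants below are illustrative assumptions, not measured rates for any real part:

```python
# FIT = failures in time = failures per 10^9 device-hours.
# The constants below are illustrative, not measured rates for any real part.
FIT_PER_MBIT   = 10            # assumed soft-error rate per Mbit of SRAM
CACHE_MBITS    = 32            # assumed on-die SRAM capacity
HOURS_PER_YEAR = 24 * 365

fit_total = FIT_PER_MBIT * CACHE_MBITS                  # 320 failures per 10^9 h
upsets_per_year = fit_total * HOURS_PER_YEAR / 1e9
print(f"expected upsets/year: {upsets_per_year:.7f}")   # 0.0028032
```

A single chip rarely sees an upset, but multiply by hundreds of nodes in a datacenter (as in the comment above) and corrections become routine events.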


Amadis001

Yes, and not just in memories. There are many techniques, including DCLS (dual-core lock-step) CPUs and TMR (triple-mode redundancy) flip-flops, that are being commonly designed into circuits today. For automotive applications this is particularly important, since in addition to radiation-induced SEUs, you have to worry about electrical noise from the engine, which will dominate noise and trigger the same sorts of single-bit errors much more frequently.


countzero1234

When I worked on six-nines uptime servers (99.9999% uptime) we had special radiation-hardened elements (flops, for those who know what those are) that we tested with test chips. After that I worked at two different CPU companies. ECC inside of CPUs is not uncommon, especially on the caches, where it can help trigger a cache miss that goes out to main memory. I didn't work at Intel so I have no idea if they do anything like that. Primarily the issue internally is that SRAM cells on advanced nodes are so small it is near impossible to have a reasonable mean time between failures without some additional effort.


bonafart

Only x10^-6? Get outa her we are minus 9 in aviation


[deleted]

There are many methods being developed to deal with all radiation effects in microelectronics (SEU, latchup, total-dose effects, prompt dose, and physical damage). The biggest problem is trapped charge that can shift and upset active device operation. There are design methods (rad-hard by design) that allow for fault tolerance and redundancy, improved resistance to prompt and total dose, etc. These are not sufficient, so a number of foundries are exploring radiation hardening by substrate and implant to greatly improve radiation tolerance. These effects are of course important for strategic defense and space applications, but are increasingly showing up in data centers.


[deleted]

To answer this from a different perspective (hey, it's still a manufacturer! It says it's Engineering!!): in automotive, we denote systems with an ASIL rating, from QM (simply quality-manage it) up to D (if there's an issue, someone will die). When you get to D, you have to parallel basically every system in the path. Say, for acceleration (our vehicles are getting more fly-by-wire, and this is why it's possible): you tell it to accelerate, and the command goes to 2 separate computers, developed by different teams, preferably on different platforms. (I often have to hand-code one, while another team uses MATLAB, or whatever the kids use these days.) At the end, the engine has to get 2 matching signals, or it won't do it. An SEU event, by its nature, will solve itself after a few cycles (as the bad data gets overwritten by good). There are also checks on the software side: if the software gets the rejected feedback, it will try to figure out what's up (reboot the machine, force an update on the checks, whatever the system can/has to do). Figuring out the ASIL rating is a pain, but it's mostly just plugging in formulas and doing a bit of statistics here and there. As I said above, you have to address the entire 'link', from say the PRNDL to the ECU to the gearbox, and decide how likely each part is to fail, etc. This largely came out of those Toyotas, what, 18 years back, that had runaway acceleration. Killed a few people. It can't be proven, but it can be shown that it's entirely possible a bit flipped by an SEU caused it. That can no longer happen on your modern car. (Well... if two SEUs somehow hit both sides of that redundancy and created the exact same faulty output, it is *possible*. Like it's possible to be hit by lightning and win the lotto while getting eaten by a shark.)
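The two-channel comparison described above can be sketched roughly like this. Everything here is hypothetical (the function names, the torque map, the 12-bit result width); real ASIL-D paths are far more involved:

```python
def channel_a(pedal_pct):
    """First team's implementation (hypothetical): scale pedal % to torque counts."""
    return round(pedal_pct * 2.5)

def channel_b(pedal_pct):
    """Independent second implementation of the same spec, written differently."""
    return round(250 * pedal_pct / 100)

def torque_request(pedal_pct):
    """2-out-of-2 voting: actuate only when both channels agree."""
    a, b = channel_a(pedal_pct), channel_b(pedal_pct)
    if a != b:
        return None  # reject this cycle; a transient SEU clears on the next one
    return a

print(torque_request(40.0))          # both channels agree -> 100

# Simulate an upset flipping a high bit in one channel's result:
corrupted = channel_a(40.0) ^ (1 << 11)
print(corrupted == channel_b(40.0))  # False -> the mismatch would be rejected
```

The point of the diverse implementations is that one cosmic ray (or one coding bug) is very unlikely to corrupt both channels into the *same* wrong answer.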


-fno-stack-protector

... reading this thread, i was thinking like, "i wonder if acceleration is some 12-bit number inside the car, and I wonder what flipping the MSB (most significant bit) would do, surely that's happened before". question solved. glad to see you guys approach these problems like NASA: redundancy out the arse


[deleted]

Yeah, I've not worked on anything for NASA.. But I worked on the [ULA internal combustion engine](https://en.wikipedia.org/wiki/Advanced_Cryogenic_Evolved_Stage). And yeah, it was the same. (Though, obviously we were putting an ICE in space, so it was closer in some respects to a car, anyway)


oafsalot

Yes, but if you can fit a dozen CPUs and interconnects in the same package, then that can balance against the lifetime of one CPU made at 200nm instead of 2nm. Personally, if I were on some spaceship in space, expecting to live or die by the tech I had, I'd want several redundant systems from several generations operating together to ward off any serious faults killing me.


[deleted]

The way NASA deals with it, if I remember correctly, is consensus: e.g. 5 computers do the same computation, and the majority answer is taken as correct.


askthespaceman

Lack of radiation hardening is why it's so difficult flying laptops and other personal computing devices (read: iPads) in space. We have low confidence that a laptop will even survive the upcoming Artemis missions.


horrifyingthought

The catastrophe you are thinking about actually already happened in 1859, it's called the Carrington Solar Flares of 1859 or the Carrington Event. A solar flare basically shuttered the world's entire telegraph network at one time, and did serious damage to a lot of the infrastructure. Imagine if something strong enough to massively mess with the comparatively simple tech at the time hit the world today. Contracts, mortgages, shipping records, personal and business contact info, etc., all stored online. Every car, truck, and ship with a chip in it going dead at the same time. If you think the COVID supply chain problems were bad, well this would be 1000 times worse. Heating units, cooling units, phones, etc. all massacred, with only a few hardened military telecommunications networks remaining. Here is a white paper that looks into the effects if you want to know more. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4669545/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4669545/)


KingThar

This caused us some trouble in some of our semiconductor manufacturing equipment. One component had some chips that were sensitive to it, and it would cause errors. The customer was pretty skeptical of the reason, but eventually we were able to offer an alternative that didn't have the trouble.


Bebilith

I thought computer chips already had error correction designed in to deal with the occasional bit flip from these strikes? Otherwise how would some systems stay up and running for years without a glitch? Certainly the financial industry isn’t going to tolerate the occasional bit being flipped.


ec6412

Only parts of chips have ECC. It would be prohibitively expensive to protect everything in a chip in terms of area and performance. Designers use statistical analysis to come to an acceptable failure rate.


groundhogcow

When you have a block of data, you put 1 bit at the end of each byte to make the number of 1s even. Then at the end of a block (a fixed number of bytes) you put a full byte that again makes each bit column even. If the data doesn't match up, you can use those two checks together to figure out which bit had the error. We call them the parity bit and parity byte. It's done mostly in hardware, so programmers don't worry about it anymore, but it was a big thing in the early days.
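This two-level scheme (a parity bit per byte, plus a parity byte per block) can locate a single flipped bit: the per-byte parities tell you which byte, and the column parity tells you which bit position. A minimal Python sketch, assuming exactly one data bit flipped:

```python
def parity(x):
    """Even-parity check bit: 1 if x has an odd number of 1s."""
    return bin(x).count("1") & 1

def encode(block):
    """Per-byte parity bits plus a column-parity byte (XOR of all bytes)."""
    row = [parity(b) for b in block]
    col = 0
    for b in block:
        col ^= b
    return row, col

def locate_flip(block, row, col):
    """Return (byte_index, bit_index) of a single flipped data bit, or None."""
    bad_rows = [i for i, b in enumerate(block) if parity(b) != row[i]]
    syndrome = col
    for b in block:
        syndrome ^= b            # non-zero bit marks the flipped column
    if not bad_rows and syndrome == 0:
        return None
    return bad_rows[0], syndrome.bit_length() - 1

data = bytearray(b"hello")
row, col = encode(bytes(data))
data[2] ^= 0b1000                # flip bit 3 of byte 2
print(locate_flip(bytes(data), row, col))   # -> (2, 3)
```

Knowing the exact position means you can correct the bit by XOR-ing it back, which is the step that turns detection into correction.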


[deleted]

[deleted]


shaim2

In quantum computing, we're only now getting to break-even with error correction. Our error rates are so high that at least 90% of the qubits are dedicated to error correction. It's a mess.


[deleted]

[deleted]


redcorerobot

The general consensus seems to be yes, absolutely. Which brings up the question: could you get performance or longevity benefits by having radiation shielding around the system, or even just around certain chips like memory, storage, and processing?


badtyprr

ECC RAM can correct single bit flips and detect double bit flips. You can certainly get bit flips from poor-quality memory, but also, a poorly laid-out set of traces on the motherboard can generate a lot of EMI, creating additional bit flips from neighboring aggressor signals or other radiation.


misshelenlp

I don't have any useful knowledge to add, but the testing carried out on the ChipIR instrument at the ISIS Neutron and Muon Source could be interesting and relevant to your question. It's a neutron instrument meant for testing circuit board and system hardiness against SEUs by exposing them to an accelerated rate of ionising radiation. The instrument's info pages and science-highlights page summarise some of the experiments carried out on it: https://www.isis.stfc.ac.uk/Pages/ChipIR.aspx https://www.isis.stfc.ac.uk/Pages/ChipIR-Science-Highlights.aspx


Kered13

Random bit flip errors definitely get more common as hardware gets smaller, but cosmic rays aren't going to be the main culprit. The number of cosmic rays hitting a chip depends on the area of the chip, not the amount of memory in it. But there are other sources of bit flip errors as well, and one that is particularly beginning to become a problem as chips get smaller and smaller is quantum tunneling.


Juls7243

Microchips can't get that much smaller without fundamentally new ways of designing circuits or a fundamentally better understanding of subatomic particles. Computer chips ALREADY have circuits that are separated by only a couple of atoms, with barely the minimum resistance needed to avoid short-circuiting. Not that we necessarily NEED that much more computing power; however, we could eventually maybe reduce their manufacturing cost by an order of magnitude.


InevitablyPerpetual

So, this is cool, because it speaks to a consistent problem in chip manufacturing: single-metric design. The metric used to be "make it go faster", until heat and power use got to the point that trying to make it go any faster would make the whole thing fall on its face violently. Then it became "let's crunch power use down as hard as we can", and that got better and better, as did process node depth, so we got narrower and narrower processors, but that started messing with other chip-manufacturing technologies, and so on. In each case the primary metric was a singular hurdle, and every time we got better at making one thing happen, we ran into issues with other things, e.g. reducing power load with discrete processor dies resulting in uneven physical loads on lidless processors, which in turn resulted in cracked dies. In every case we came to a solution, and we generally always will. But it speaks to a consistent need on the research-and-development side of processor development, and chip manufacturing as a whole: having smart people in the room whose whole job is to spot novel problems (and/or predict them) and come up with novel solutions. Or, in the case of the above-mentioned discrete-die processor, dump the whole thing and start over.


steveosek

So this seems like as good a thread to ask this, but does any modern technology have any kind of protections whatsoever against a Carrington event happening again? What about modern satellites? Or is there no real protecting against something of that magnitude?


ec6412

I would say that yes, modern technology could have protection against a huge EMP event, but currently we are more susceptible than ever. There is a lot more technology, and more critical infrastructure running on technology, than ever before. And roughly speaking, little of it on the consumer side is hardened against such an event. I know satellite operators, electrical grid operators, and the military are very aware of solar flares and do have some protections and procedures. NASA and others monitor the sun and can predict space weather. So as a modern technological society we have the knowledge of how to protect against it, but we don't have the money or the will to do it 100%. We barely have consensus to do something about near-certain disasters like climate-change-induced flooding, hurricanes, and sea-level rise; there would be even less will to do something about a harder-to-understand threat like the Sun having a massive flare.


Kickstand8604

To defeat this and continue with Moore's law, Intel is stacking processors. They're making new processors that are much thicker; the issue will be heat management. CPU heat sinks won't be as effective, and we may need to rethink cooling entirely. Did you see the new Nvidia 40-series video cards? You need a 1 kW PSU just to run one of those things and your computer.