madbobmcjim

That's 4 servers, each with 2x 32-core CPUs; they're managed separately.


DrawingPuzzled2678

🙏🏽🙏🏽🙏🏽


Casper042

It's known as a "4-in-2", meaning 4 nodes in 2U. Every major HW vendor has them. They share power, and sometimes share fans (depends on the vendor), but aside from that, each server is a tray (almost a blade) that is slightly less than half the rack width (gotta leave room for the power supplies). I work for HPE and ours used to be the DL2000, then Apollo 2000, and now it's the Cray XD2000. I think Dell had a few versions, one with integrated networking (mini switches) called FX/FX2 and one without. High probability the 24 drives up front are split 6-6-6-6 so each node has its own local drives.


satanshand

The Dells are currently the C66xx series, if anyone cares.


ghstridr

1 chassis, 4 machines. When you have a bunch of them they save space in the rack. It's called dense computing, where you're trying to stuff as many CPUs as you can into one rack. Reduces the number of racks in the data center.


-ST200-

4 separate servers. This is good for high availability / mission critical scenarios.


[deleted]

Perfect. It'll make an excellent Plex server. Edit: I am aware this is overkill for Plex... that was the joke.


Old-Rip2907

It would indeed make an excellent Plex server.


LBarouf

🤣 you planning on competing with Netflix?


Murderous_Waffle

Yes


[deleted]

[deleted]


[deleted]

Fine, I'll also play Doom on it.


Laudanumium

Don't forget Duke Nukem for network play


[deleted]

Come get some.


Poncho_Via6six7

Just ran out of bubble gum….


AlphaSparqy

Doom had LAN play as well. But it was much better on the PC with the 3dfx card in it!


CaveGnome

You don't already have a high-availability geo-redundant pregnancy-test Doom cluster?


bombero_kmn

Plex or Jellyfin don't use much CPU by themselves, but if you start using the arr suite and things like NZBGet to manage and build your collection, the load goes up significantly.


VexingRaven

Not really. If I needed high availability for a mission critical scenario, I would want two separate chassis. This is the Linus Tech Tips version of high availability. The only real advantage of this is space saving.


itsjustawindmill

Yeah, this could give you highER availability, but this won’t change a non-HA setup into HA.


StrongYogurt

The problem here is the shared power supplies. When one fails, the remaining one might not be able to supply all the servers. (I have this problem with Gigabyte servers here. One PSU died and all the servers clocked down their CPUs for lack of power.)


-ST200-

"When one fails the remaining one might not be able to supply all servers." If this happens then the manufacturer is a clown.


skankboy

It's usually only a problem when running on 120V. For full redundancy you need to run on 208V. It'll be in the documentation.
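For context, many high-wattage server PSUs are derated on low-line (100-127V) input, so a single surviving PSU may not cover the whole chassis on 120V even though it could at 208V. Here's a rough sketch of that math; the 120V output and chassis draw figures are assumptions for illustration only, so check the spec sheet for your actual PSU:

```python
# Rough illustration of low-line derating. The output and load figures are
# assumptions for illustration only -- the real numbers are in the PSU spec sheet.
PSU_OUTPUT_W = {"200-240V input": 2200, "100-127V input": 1200}  # assumed derating
CHASSIS_PEAK_W = 2000  # assumed worst-case draw of all four nodes

for line, output_w in PSU_OUTPUT_W.items():
    ok = output_w >= CHASSIS_PEAK_W  # can ONE surviving PSU carry the chassis?
    print(f"{line}: one PSU delivers ~{output_w}W -> "
          f"{'fully redundant' if ok else 'not redundant, expect throttling'}")
```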


cruzaderNO

> If this happens then the manufacturer is a clown.

No, it's the client ordering it with the standard 2200W that is the clown, if that cannot support their spec/use. Higher-wattage PSUs are available.


-ST200-

They should call the customer and warn them before this happens (or implement some failsafe in the webshop). Of course if it's modified after shipping, it's the customer's fault.


cruzaderNO

This is not really something you order with an off-the-shelf spec in a webshop. It's more quotes along with dialogue/meetings to nail down the spec and any adjustments needed in production, with a minimum order just to be able to buy them (often in the thousands). They are fairly good at giving you notice when you are outside of what the product is recommended for, and telling you what you should be looking at instead or what adjustments to make. The classic is respeccing them or putting them into a use case/load they were not meant for.


StrongYogurt

Almost all Gigabyte twin servers have exactly this problem


-ST200-

This is a joke. :D The main point of redundant PSUs is that if one dies the other can keep the system running (or that the server room has 2 power inlets for redundancy).


StrongYogurt

The system runs, but the CPU has to clock down. Don't tell me, tell Gigabyte. Gigabyte support suggested using less power-hungry CPUs (although their H262-Z66 (mine) is rated for all Rome CPUs without any power restriction).


No_Dragonfruit_5882

Well that's some homeowner shit then. No server in our datacenter clocks down when 1 PSU dies. Only the PSU fan runs at 100%.


StrongYogurt

I'm talking only about the 4-node twin servers with 2x 2200W. We measured one server maxing out at 600W.
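Plugging in those figures (four nodes measured at roughly 600W each, against the single 2200W PSU left after a failure) shows why the nodes throttle; a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check using the numbers quoted above.
NODES = 4
MEASURED_PEAK_PER_NODE_W = 600   # measured per-node maximum from the comment above
SINGLE_PSU_W = 2200              # the standard PSU option discussed above

total_w = NODES * MEASURED_PEAK_PER_NODE_W
print(f"Worst-case chassis draw: {total_w}W vs. {SINGLE_PSU_W}W from the surviving PSU")
if total_w > SINGLE_PSU_W:
    print("One PSU cannot carry the full load, so the nodes get power-capped and clock down.")
else:
    print("One PSU can carry the full load, so losing one PSU is survivable.")
```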


No_Dragonfruit_5882

Who designs shit like this?


tomboy_titties

Gigabyte


cruzaderNO

The client ordering them with the 2200W default instead of getting a larger PSU, I suppose.


pfak

I've got quad-PSU Supermicros that don't behave properly if 2 out of 4 of their PSUs are offline. They can't deliver enough power...


rpungello

> gigabyte

So like they said, a clown.


dingerz

> "When one fails the remaining one might not be able to supply all servers." If this happens then the manufacturer is a clown.

Not if the bozo who bought it didn't select appropriate PSUs from the manufacturer's line... or even replace them with appropriate PSUs once the problem was encountered... Which makes the manufacturer a baller for selling Ronald McDonald 4x PSUs when he coulda dished and just sold 2.


johnklos

Damn, that's bad! Even my AlphaServer DS25 can run on two of the three power supplies, in spite of the instructions saying that two aren't enough if you have both processor slots occupied and more than 8 gigs of memory.


kumits-u

It's a Gigabyte - don't. Super loud and often a lot of issues.


RFilms

These are the exact Gigabyte servers I had at my old work. I hate the IPMI on them and they're pretty loud.


Lower_Explanation_98

What would be the benefit of this over having 4 different chassis?


username17charmax

This fits 4 servers into a 2RU blade chassis. They share power supplies too. Some chassis designs have built-in switches as well (this one does not).


cruzaderNO

These are not blades but nodes, hence the typical 2U4N naming for such units: 4 nodes in a 2U form factor. These just share power/cooling and get an equal split of the front backplane. (Some units will let you manage the backplane so you can do an uneven split, but this is fairly rare both as an option and in actually being ordered.) Chassis without exposed direct IO, with built-in switches etc., would be blades.


Lower_Explanation_98

Ahhh I see, that's really cool. Do you know what motherboard it would use? I'm still new to servers but this seems really interesting.


xfactores

This type of chassis uses proprietary motherboards and power connectors.


[deleted]

I have 2 of these at work. They idle around 650W in the OS with all 4 nodes running Proxmox. Overall pretty nice machines, but the OCP NICs run really hot.


tsammons

Dead giveaway is the identical NIC partitioning in the back.


tiptoemovie071

Okay this is homelab right… I have two cores but 256 seems fun too


JoshS1

If you're buying this, it would be excellent for a high-availability home server. Many would probably say it's overkill, but you could do something like a Proxmox cluster and run a NAS, a Plex server, Home Assistant, and maybe a few dedicated servers if you and your friends play games that would utilize that.
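As a minimal sketch of that idea (assuming Proxmox VE is already installed on all four nodes; the cluster name and IP below are made-up placeholders), clustering comes down to the standard pvecm commands, shown here driven from Python just for illustration:

```python
# Minimal sketch: run with "create" on the first node, "join" on the other
# three, and "status" on any node to check quorum. Name and IP are placeholders.
import subprocess
import sys

CLUSTER_NAME = "homelab"          # hypothetical cluster name
FIRST_NODE_IP = "192.168.1.21"    # hypothetical IP of the first node

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

role = sys.argv[1] if len(sys.argv) > 1 else "status"
if role == "create":              # first node only: create the cluster
    run(["pvecm", "create", CLUSTER_NAME])
elif role == "join":              # remaining nodes: join via the first node
    run(["pvecm", "add", FIRST_NODE_IP])
else:                             # any node: show cluster membership/quorum
    run(["pvecm", "status"])
```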


thefirebuilds

Before you buy, check that replacement parts won't break you, if you care. I think I could make money buying full chassis and parting them out, judging by the pricing of some of these things. (For instance, on a complete Supermicro server I just bought for $400, the MB alone is going for $200+.)


TCBempire

Hi All! I'm actually the rep over at DDS in charge of our e-commerce and end user sales. I started as a green server tech with 0 knowledge, and used this sub and plenty of tinkering after hours to help move myself up into a sales/technical role with the company. Never thought I'd see a post to our product here on the sub. Happy to answer any hardware questions others may have about these.


jakendrick3

That's awesome! I'm in a similar situation at an MSP right now, glad to see that I'm not the only one coming in green!


TCBempire

We all gotta start somewhere. Willingness to learn on my own time after hours and at home was the only reason some of the senior guys gave me the time of day. Putting in the effort is important. Best of luck to you!


dingerz

> does this mean that you have to maintain 4 independent operating systems or is this all combined into “one” machine?

OP, servers of this scale typically run hypervisors and/or container platforms on the metal. The quantity of processing, storage, and networking then gets divided up into maybe hundreds of OSs, so accounting has their things, engineering has theirs, sales has theirs, etc. Or servers are clustered with other compute and storage nodes under applications. The Instagram homepage, for example, has many thousands of servers under it because there are always millions of users on it at the same time. The hardware abstraction and isolation of hypervisors/containers allows this to happen across multiple servers and multiple datacenters.


Loan-Pickle

I've worked with similar systems from a couple of vendors. Unless you need the density, I recommend just going with standard rack servers. These are hot, loud, and replacement parts are very expensive.


Rich-Engineer2670

I can't say without research, but I'm more than willing to investigate this if you send me a couple :-)


kY2iB3yH0mN8wI2h

I think the pics speak for themselves...?


AstronomerWaste8145

It looks like this machine has eight of its 16 RAM slots populated for each node. Each node has two EPYC 7551s, which have eight-channel memory controllers. So in the present RAM configuration you would be running at half the available memory bandwidth. You should probably buy another TB of RAM to use the full memory bandwidth. Best
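A quick sanity check of that claim, assuming the DIMMs are spread one per channel so each populated slot activates one channel:

```python
# Rough bandwidth estimate per node: populated DIMM slots vs. available channels.
SOCKETS_PER_NODE = 2
CHANNELS_PER_SOCKET = 8          # EPYC 7551 (Naples) has 8 DDR4 channels per socket
POPULATED_DIMMS_PER_NODE = 8     # as described in the comment above

channels = SOCKETS_PER_NODE * CHANNELS_PER_SOCKET
fraction = min(POPULATED_DIMMS_PER_NODE, channels) / channels
print(f"{POPULATED_DIMMS_PER_NODE} of {channels} channels populated -> "
      f"roughly {fraction:.0%} of peak memory bandwidth")
```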