• stoebichB · 11 months ago

    Probably something compact with a punch. Like a 3-node HCI cluster of 1U servers with a terabyte or two of RAM and a couple of terabytes of NVMe storage. Then some networking capable of NVMe speeds, and some nice supporting infra like dedicated firewall, backup and monitoring appliances. All on a UPS, with a generator for emergencies. Also a climate-controlled environment to put them in.

  • belly_hole_fireB · 11 months ago

    About 4 or 5 mini PCs. I don’t need much: a couple to tinker with and a few to run Docker, Proxmox, etc.

  • SCP_radiantpoisonB · 11 months ago

    Absolute pipe dream:

    A single rack, not even the big fridge-sized ones (I think they’re 48U, I’m not sure, but I’ve seen them on this sub) but one of the crash-cart-sized ones (I think it’s 12U). A couple of current-generation 1U servers, a hardware firewall running OPNsense, wired mesh APs and VoIP phones around the house, symmetrical gigabit internet, a static IP, a GPU server, a UPS and a rack-mounted console just for aesthetics, a little desk for tinkering. A motorized microscope. (Bear in mind this is “winning the lottery” level wild; just the microscope and GPU server are probably more expensive than an SUV.)

    Something realistic:

    A hardware firewall running OPNsense, 2 APs, a mini computer with an i9 processor and a GPU for virtualization, 300 Mb/s internet, maybe a Jetson Nano and a little IKEA desk for tinkering, a static IP. My current microscopes.

  • steviefauxB · 11 months ago

    Possibly Dave Plummer’s setup that he’s shown on his house tour.

  • SirLagzB · 11 months ago

    I would fill my current 24RU rack like this.

    24 - patch panel

    23 - 48-port PoE switch with 10Gb, maybe even 40/100Gb, with stacking; have a monkey create custom-length cables in the correct colour for each port I need

    22 - 48-port PoE switch with 10Gb, maybe even 40/100Gb, with stacking for redundancy for my servers

    21 - patch panel

    20 - VM Server 1 - as many cores and as much RAM as can fit into a 1RU server, 2x 10Gb/40/100Gb connections to both switches, as many 2.5" 8TB NVMe SSDs as possible

    19 - VM Server 2 - as many cores and as much RAM as can fit into a 1RU server, 2x 10Gb/40/100Gb connections to both switches, as many 2.5" 8TB NVMe SSDs as possible

    18 - VM Server 3 - as many cores and as much RAM as can fit into a 1RU server, 2x 10Gb/40/100Gb connections to both switches, as many 2.5" 8TB NVMe SSDs as possible

    16-17 - Storage array 1 - 12 bays full of 100TB SSDs (ExaDrive EDDCT100/EDDCS100), 2x 10Gb/40/100Gb connections to both switches (a rough capacity tally is sketched after this list)

    14-15 - Storage array 2 - 12 bays full of 100TB SSDs, 2x 10Gb/40/100Gb connections to both switches

    12-13 - Storage array 3 - 12 bays full of 100TB SSDs, 2x 10Gb/40/100Gb connections to both switches

    1-11 - UPS with external batteries.

    I would also have multiple 4G and multi-gig internet connections.
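
    For scale, a quick back-of-the-envelope tally of the storage in that layout, sketched in Python. The raw figure follows directly from the list above; the usable figure assumes a hypothetical two-drives-per-shelf redundancy overhead, which is not part of the plan.

        # Back-of-the-envelope tally of the rack's SSD capacity.
        BAYS_PER_ARRAY = 12   # each storage array above is a 12-bay shelf
        DRIVE_TB = 100        # ExaDrive 100TB SSDs
        ARRAYS = 3            # storage arrays 1-3

        raw_tb = ARRAYS * BAYS_PER_ARRAY * DRIVE_TB
        # Assumption (not from the post): two drives per shelf go to parity/spares.
        usable_tb = ARRAYS * (BAYS_PER_ARRAY - 2) * DRIVE_TB

        print(f"raw:    {raw_tb} TB (~{raw_tb / 1000:.1f} PB)")   # 3600 TB, ~3.6 PB
        print(f"usable: {usable_tb} TB (with the assumed overhead)")  # 3000 TB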

  • hodak2B · 11 months ago

    A server with one of those massive EPYC Bergamo CPUs with like 128 cores and hyperthreading. I run a lot of VMs, and for the cost I had to get 4x Xeon 8890 v4 chips. They just use a lot of power, and the Bergamo would cut power usage by a decent amount. Along with 22x 4TB SSDs for storage. Right now my storage is all over the place: 4x 2TB NVMes on an Asus Hyper card, 4x 4TB NVMes on another Asus Hyper card, 9x 2TB Crucial SSDs in the bays, 2x SAS drives for the Proxmox operating system, and 4x 8TB drives in an actual NAS, although I would love to get rid of that as well.
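
    To put a rough number on that power argument, here is a minimal sketch comparing published CPU TDPs only (wall draw will differ, and the 128-core part used here, the EPYC 9754, is my guess at the Bergamo SKU, not something named above):

        # Rough CPU TDP comparison: 4x Xeon E7-8890 v4 vs. one 128-core EPYC Bergamo.
        XEON_8890_V4_TDP_W = 165  # published TDP per socket
        EPYC_9754_TDP_W = 360     # published default TDP (assumed Bergamo SKU)

        current_w = 4 * XEON_8890_V4_TDP_W
        print(f"4x Xeon 8890 v4:  {current_w} W CPU TDP")            # 660 W
        print(f"1x EPYC Bergamo:  {EPYC_9754_TDP_W} W CPU TDP")      # 360 W
        print(f"rough CPU saving: {current_w - EPYC_9754_TDP_W} W")  # ~300 W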

    And an entirely unrelated Dell R620 server that I would also like to remove to cut down on power at some point. I am in the process of moving everything over to the Dell R930 as we speak.

  • HTTP_404_NotFoundB · 11 months ago

    I mean… if money wasn’t a problem, I’d have a loaded blade chassis.

    With a Pure FlashBlade.

  • cjchicoB · 11 months ago

    A full rack with at least 200GbE, NVMe in everything - probably HCI with at least 8 R660s in vSAN.