I see people with a small 8 GB, 4-core system trying to split it with something like Proxmox into multiple VMs. I don’t think that’s the best way to utilise the resources.

Most services sit idle most of the time, so when something actually is running it should be able to use the full power of the machine.

In my opinion, for a smaller homelab it’s better to use Docker and Compose to manage everything directly on the hardware.

Only split off a VM for something critical, and even then decide whether it’s really required.
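To illustrate the idea, here’s a minimal Compose sketch with two hypothetical services (the images and paths are just examples, not a recommendation). Because neither service sets any `deploy.resources` limits, each container can burst to the full CPU and RAM of the host whenever the other is idle — which is the whole point of running on bare metal instead of carving the box into fixed-size VMs:

```yaml
# docker-compose.yml — example services, names and volumes are placeholders
services:
  jellyfin:
    image: jellyfin/jellyfin
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - ./jellyfin/config:/config
      - ./media:/media

  nextcloud:
    image: nextcloud
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./nextcloud:/var/www/html
```

Start everything with `docker compose up -d`; if one service does need a cap later, you can add limits per container instead of pre-partitioning the whole machine.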

Do you agree?

  • ttkciarB
    1 year ago

    On one hand, I think VMs are overused, introduce undue complexity, and reduce visibility.

    On the other hand, the problem you’re citing doesn’t actually exist (at least not on Linux, dunno about Windows). A VM can use all of the host’s memory and processing power if the other VMs on the system aren’t using them. An operating system will balance resource utilization across multiple VMs the same as it does across processes to maximize the performance of each.

  • AnApexBreadB
    1 year ago

    It all comes down to “what are you trying to do.”

Not everyone runs applications, so Docker is not the answer to everything.

But if you only have 8 GB of RAM and are trying to run VMs, then I’d advise you to go buy more RAM.