I’m using an SFF Lenovo M700 (i5-6400T, 8GB RAM, 256GB SSD, 16TB USB HDD, Debian 11) as my server.

At the moment, I’m just copying all the files from the drives on my Windows PC to the server using WinSCP to make sure that I have a backup. Speeds are around 40MB/s with large files, which is probably as good as I can expect transferring from/to a spinning disk, but sshd on the M700 is using 35-50% CPU and sftp-server is using 15-20%, so up to about 70% in total. That only seems to happen with large files; when transferring lots of small files, they’re using about 10% and 5% respectively, although it varies and can be double or half that.

If it’s going to use this much CPU whenever someone (or my sync or backup software) is transferring large files, I’m concerned that it won’t have the capacity to run the other services that I need (AdGuard, Home Assistant (probably as a VM in Proxmox), Jellyfin, Tailscale, CrowdSec, etc.). The 16TB USB HDD is encrypted with VeraCrypt, but I don’t think that’s the issue, as I see separate processes in top for kcryptd, and they generally add up to less than 10%.

Is there anything I can do to reduce the CPU usage when transferring files between other PCs on the LAN and the server? Once it is deployed, the users won’t be using WinSCP to transfer files; they’ll probably use Filebrowser or SFTPGo, and I’ll set up automated syncs and pull backups, so will transfers use less CPU with those methods than with WinSCP?

  • @qjkxbmwvz@lemmy.sdf.org
    17 months ago

    As another commenter mentioned, check compression.
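
    A quick way to confirm (assuming OpenSSH on the Debian side; the hostname and paths below are placeholders): compression is off by default in OpenSSH, but WinSCP has its own compression checkbox, and compressing already-compressed media just burns CPU.

        # Check what the session actually negotiates (look for "compression: none"):
        ssh -v user@m700 true 2>&1 | grep -i compression

        # Force it off for a one-off bulk copy from another Linux box:
        scp -o Compression=no bigfile.bin user@m700:/mnt/storage/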

    Transferring over ssh will use more CPU than something simpler/unencrypted. If you want fast, I would try NFS or even Samba. If you want the fastest possible, netcat will be hard to beat, but that’s getting silly.
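
    A rough sketch of the netcat route, just to show how little overhead it has (no encryption, no auth, so LAN only; the hostname, paths and port are placeholders, and flag syntax differs slightly between netcat variants):

        # On the M700 (receiver): listen and write straight to the USB drive
        nc -l 9000 > /mnt/storage/bigfile.bin

        # On the sending PC
        nc m700.lan 9000 < bigfile.bin

    For NFS, a minimal LAN-only export on the Debian side would be a single line in /etc/exports, something like /mnt/storage 192.168.1.0/24(rw,async,no_subtree_check), then mount that from the clients.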