TLDR: I am running some Docker containers on a homelab server, and the containers’ volumes are mapped to NFS shares on my NAS. Is that bad for performance?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but has only a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course it’s slower to run off HDDs than off an SSD, but I do not have a large SSD. The question is: (why) would it be “bad practice” to separate compute and storage this way? Isn’t that pretty much what a data center does, too?
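
For context, the mapping looks roughly like this in my Compose files (the NAS address and share path below are placeholders, not my real setup):

    services:
      app:
        image: nginx:alpine                      # stand-in for one of my containers
        volumes:
          - appdata:/usr/share/nginx/html

    volumes:
      appdata:
        driver: local                            # Docker’s built-in driver can mount NFS
        driver_opts:
          type: nfs
          o: "addr=192.168.1.20,rw,nfsvers=4"    # placeholder NAS address
          device: ":/volume1/docker/appdata"     # placeholder Synology export path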

  • Norah - She/They@lemmy.blahaj.zone · 1 year ago

    There are also situations where you can do it safely, if the software already has remote communication built in. I have some MariaDB containers on a different machine from the one that uses and serves the data. I could have put them in the same Compose file on one machine, communicating over an internal Docker network; instead I just pointed the app at an external port.
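
    Roughly, the change looks like this (the image name, address, and port here are all made up):

        # Compose file on the machine that uses and serves the data:
        services:
          app:
            image: myapp:latest            # placeholder image
            environment:
              # Was "DB_HOST: mariadb" (a service name resolved over an
              # internal Docker network); now it points at the other machine.
              DB_HOST: 192.168.1.5
              DB_PORT: "3306"

        # Compose file on the database machine:
        services:
          mariadb:
            image: mariadb:11
            ports:
              - "3306:3306"                # published so other hosts can reach it
            volumes:
              - dbdata:/var/lib/mysql
        volumes:
          dbdata: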

    • Molecular0079@lemmy.world · 1 year ago

      Agreed! If the application can handle these files (or other resources) disappearing for a while during network issues, then sure, they can be separate. However, if an application depends on a file for its core functionality, I do not think it is a good idea.
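
      The NFS mount options also shape what “disappearing for a while” means: a “hard” mount (the default) makes reads and writes block until the server comes back, while “soft” gives up and returns I/O errors once the retries run out. A sketch with Docker’s local driver (address and path are placeholders):

          volumes:
            appdata:
              driver: local
              driver_opts:
                type: nfs
                # "soft" fails I/O after the retries below instead of
                # blocking forever; the default "hard" blocks until the
                # NAS is reachable again.
                o: "addr=192.168.1.20,rw,soft,timeo=100,retrans=3"
                device: ":/volume1/docker/appdata"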