I’ve been writing code professionally for 24 years, 15 of which have been in Python and 9 of those with Docker. I got tired of running into the same complications every time I started a new job, so I wrote this. Maybe you’ll find it useful, or maybe it’ll start a conversation; either way, this post has been a long time coming.

Update: I had a few requests for a demo repo as a companion to this post, so I wrote one today. It includes a very small Django demo using Docker, Compose, and GitLab CI.

  • Daniel Quinn@lemmy.caOP · 3 months ago

    I don’t mean to be snarky, but I feel like you didn’t actually read the post 'cause pretty much everything you’ve suggested is the opposite of what I was trying to say.

    • A CLI to make things simple sounds nice, but given that the whole idea is to harmonise the develop/test/deploy process, writing a whole program to hide the differences is counterproductive.
    • Config settings should be hard-coded into your docker-compose file and absolutely not stored in .json or .env files. The litmus test here is: “How many steps does it take to get this project running?” If it’s more than 1 (docker compose up), it’s too many. (There’s a sketch of what I mean just after this list.)
    • Suggesting that one package Django into a single Lambda seems like an odd take on a post about Docker.
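
    Here’s a minimal sketch of that idea (the service names and values are invented for illustration): a compose file where every dev setting is inline, so docker compose up really is the only step:

    ```yaml
    services:
      web:
        build: .
        ports:
          - "8000:8000"
        environment:
          DATABASE_URL: postgres://django:django@db/django  # dev-only value, not a secret
        depends_on:
          - db

      db:
        image: postgres:16
        environment:
          POSTGRES_USER: django
          POSTGRES_PASSWORD: django  # throwaway dev credential
          POSTGRES_DB: django
    ```
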
    • fubarx@lemmy.ml · 3 months ago

      OK, you wanted a conversation… :-)

      I did read the post, but I assumed it was the starting point of a system or mechanism, not the end-point. Wanting to just run “docker compose up” is fine, but there is more to developing and deploying to production (and continuing post-launch).

      That’s why I mentioned the CLI. It lets you go from a simple local app (Django on sqlite) to a Docker one (postgres, celery, redis, etc.), all the way out to the cloud (ECS/EKS/serverless lambda/RDS), without having to remember which commands do what or manage lots of separate docker-compose files.

      I can see we are VERY far apart on how docker should be used in moving toward a production-ready system.

      For one thing, recommending putting secrets inside docker-compose is an instantly disqualifying piece of advice. Docker Compose has a whole ‘secrets’ mechanism precisely to prevent people from inadvertently committing credentials in cleartext and baking them into images: https://docs.docker.com/compose/how-tos/use-secrets/.
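
      The mechanics look roughly like this (file names are illustrative); the secret lives in a file outside version control and gets mounted at /run/secrets/<name>, so it never appears in the YAML or the image:

      ```yaml
      secrets:
        db_password:
          file: ./db_password.txt   # kept out of the repo, e.g. via .gitignore

      services:
        db:
          image: postgres:16
          environment:
            POSTGRES_PASSWORD_FILE: /run/secrets/db_password  # the official postgres image reads the password from this file
          secrets:
            - db_password
      ```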

      GitHub itself has a secret-scanning mechanism to prevent leakage: https://docs.github.com/en/code-security/secret-scanning/introduction/about-secret-scanning. For GitLab, there’s also Blackbox or HashiCorp Vault. Putting an AWS key/secret pair inside a repo can be VERY expensive and open one up to legal liability if the account is misused. Repeated infractions could lead to AWS banning one’s account.

      I really recommend you take down that part of your post, instead of proliferating bad practices.

      As for the rest, to each their own.

      • Daniel Quinn@lemmy.caOP · 3 months ago

        I feel like you must have read an entirely different post, which must be a failing in my writing.

        I would never condone baking secrets into a compose file, which is why the values in compose.yaml aren’t secrets. The idea is that your compose file is used exclusively for testing and development, where the data isn’t real, and the priority is easing development. When you deploy, you don’t use that compose file because your environment is populated by whatever you use in production (typically Kubernetes these days).

        You shouldn’t store your development database password in a .env file either, because it isn’t a secret. The AWS keys listed in the compose file are meant to be exactly as they appear there: XXX, because LocalStack doesn’t care what those values are, only that they exist. Something like the sketch below is what I have in mind.
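
        Here’s a rough sketch of that setup (the service names are invented, and the AWS_ENDPOINT_URL variable assumes a reasonably recent AWS SDK that honours it):

        ```yaml
        services:
          localstack:
            image: localstack/localstack
            ports:
              - "4566:4566"

          web:
            build: .
            environment:
              AWS_ACCESS_KEY_ID: XXX         # LocalStack accepts any non-empty value
              AWS_SECRET_ACCESS_KEY: XXX
              AWS_ENDPOINT_URL: http://localstack:4566  # point the SDK at LocalStack instead of AWS
            depends_on:
              - localstack
        ```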

        As for the CLI thing, again I think you’ve missed the point. The idea is to start from a position of “I’m building images” and therefore never have a “local app (Django, sqlite)”, because sqlite shouldn’t be used unless that’s what’s used in production. There should be little to no difference between development and production, so scripting a bridge between them doesn’t make a lot of sense to me.