The last time I tried Docker Desktop on my MBP (x86), the reason not to use it was that it was excruciatingly slow for the test suite I had to run in it, compared with Docker on Linux.
I abandoned Docker on the Mac because of that, and haven't touched it since. That was early 2021; maybe it's faster now.
It’s the cross-OS filesystem stuff that always killed it on Docker for Mac (and to a lesser extent Docker for Windows).
I believe improvements have been made, but a lot of people these days check their code out directly in the container rather than using bind mounts, taking advantage of the fact that many editors/IDEs will now interface directly with containers.
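For VS Code that workflow is driven by a devcontainer.json; here's a minimal sketch, with the image and extension as placeholders (the "Clone Repository in Container Volume" command then keeps the checkout in a named volume instead of a bind mount):

    // .devcontainer/devcontainer.json (JSONC, so comments are allowed)
    {
      // Base image for the dev environment (placeholder)
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      // Source lives inside the container's filesystem, not a host mount
      "workspaceFolder": "/workspaces/myapp",
      "customizations": {
        "vscode": { "extensions": ["ms-azuretools.vscode-docker"] }
      }
    }

Since the files never cross the macOS/Linux boundary, the cross-OS filesystem penalty mostly disappears.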
Docker Mac m1 is my daily driver. Painless performance.
But... watch out for software compatibility: m1/m2 requires recompiling from source, which is painless... when it works. I recently needed syslinux and had to move to an x86 cloud instance. Fortunately, Docker made that easy.
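If you just need an existing x86-only image occasionally, Docker on Apple silicon can also emulate amd64 via QEMU; it's slow, but sometimes it saves the hop to a cloud box. A quick check (the image and command are just examples):

    # Force an amd64 image on an arm64 host; runs under emulation
    docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m
    # should print: x86_64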
It takes more time and effort than just running the damn script! Is that not good enough?
I mean, I want to use containers, but on top of setting up the host they require composing containers (even ready-made ones need customizing), networking, logging, and fighting for memory when you run memory-hungry stuff together, like Elastic or another DB (see the compose sketch below).
If my main job were devops, I suppose I would make myself more valuable by doing everything in containers. But when I deploy an app, it's because I have to, on top of many other duties, so being able not only to set things up but also to troubleshoot and fix outages quickly matters most. I hope a full-time devops person, if I ever get one, will help me migrate all of that some day so it looks nice and neat.
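To make the memory fight concrete, here's a minimal docker-compose sketch that caps a hungry service; the service names, image tags, and limits are made up for illustration:

    # docker-compose.yml
    services:
      elasticsearch:
        image: elasticsearch:8.11.1
        environment:
          - discovery.type=single-node
          - xpack.security.enabled=false
          # Cap the JVM heap so Elastic doesn't grab everything
          - ES_JAVA_OPTS=-Xms512m -Xmx512m
        # Hard memory ceiling for the container
        mem_limit: 1g
      db:
        image: postgres:15
        mem_limit: 512m

Note that mem_limit only bounds the containers; on a Mac, the Desktop VM's own allocation is a separate knob in the Docker Desktop settings.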
I do have a particular reason: memory use. I'm running postgres and redis locally for dev work, and I would love to use Docker so that I can standardize it for my team, but it just takes up too much RAM on m1.
I don't mean to sound flippant, but that sounds like you're doing business work on a computer that can't handle it. If 32/64GB isn't enough memory, then yeah, I guess you need something else, but if your machine has less than that, it sounds like you need to buy the right computer for the job.
I dunno about the m1 aspect of this but postgres and redis for typical dev work shouldn't require more than a gigabyte or two of RAM when running together. 32-64GB is more like heavy traffic enterprise DB territory...
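If you want to see where the RAM actually goes, docker stats gives a per-container snapshot (the container names here are hypothetical):

    # One-shot view of CPU/memory per container
    docker stats --no-stream postgres-dev redis-dev

Keep in mind that on a Mac this reports usage inside the VM; the VM's own fixed allocation is what shows up in Activity Monitor.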
I guess the parent is annoyed at spending 2-3GiB of RAM on the VM itself.
Even if the containers themselves only use 128MiB total, the VM will cordon off however much it's told to, which most people are unlikely to change until there's an issue, and which is configurable down to a minimum of 1GiB.
FWIW Docker Desktop on my machine (M2 Macbook Air; 24GiB Ram) defaulted to using 8GiB of RAM.
Also, stop using a Mac if it isn't suitable for your use case. There are just so many layers of emulation involved; it's an absolute waste of resources.
Your team might want to use asdf (https://github.com/asdf-vm/asdf) to run multiple native versions of PostgreSQL and Redis in parallel. Even within one project you might have multiple versions of those tools across different releases. You standardize with a .tool-versions file. I've been using that with a team targeting Linux and developing on Ubuntu, Mac, and WSL (or was that an Ubuntu VM in Windows?)
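For illustration, a .tool-versions at the repo root might look like this; the plugin names and versions are assumptions and depend on which asdf plugins you install:

    # .tool-versions
    postgres 15.4
    redis 7.2.0

Everyone on the team runs `asdf install` and gets the same native binaries, no VM in between.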
Many companies run dev work in a dedicated cloud VM, including well-known companies like Google or Amazon.
You can run a persistent VM with 2/4/8/128GB of RAM or whatever you need. I've used one at work for years; I think mine has 16GB of RAM and it's way over-provisioned most of the time. Contrary to what you might expect, treat the cloud VM like a work laptop, not a production service. Let people write scripts that stay there, let people keep it on 24/7, keep it available on demand, etc. It's a cloud laptop, not a production VM.
Depending on your workloads that sounds like a very expensive way to do development compared to just having a dedicated but efficiently set up work laptop.
Even giving people a headless Intel NUC to connect their laptops to would be way, way cheaper (assuming you're just doing development).
Where I work, they used to give you desktop computers (pre-Covid) for the workstation purpose, but post-Covid they just provision a VM for you. Honestly, it's probably cheaper in the short term and only mildly more expensive long-term. No real IT work needed (since <cloud provider> handles the hardware), and automatic upgrades if more RAM/GPU/etc. is needed. Even a really big VM ($80/mo) wouldn't be crazy compared to the logistics of managing/storing/powering/networking a bunch of desktops across an office.
This is what I wonder. A powerful local laptop is so useful - instant feedback; works on the train or any other offline/low-connectivity setting; the only engineer you need looking after it really is the one who has it.
Having said that, cloud development means you don't need big capex orders for laptops all the time, and it's probably easier to secure.
I use a 16GB m1 Air. I'm running Docker Desktop with MySQL, Redis, two containers doing Python, a Node container, and an nginx container. I'm not noticing any impact on performance; MS Teams hurts more to run. Though I have adjusted the resources Docker uses.