
I think you'd need a particular reason to not use Docker.


Last time I tried using Docker Desktop on my MBP (x86), the reason to not use Docker was that it was excruciatingly slow for the test suite I had to run in it, compared with Docker on Linux.

I abandoned Docker on the Mac because of that, and haven't touched it since. That was early 2021; maybe it's faster now.


It’s the cross-OS filesystem stuff that always killed it on Docker for Mac (and to a lesser extent Docker for Windows).

I believe improvements have been made, but a lot of people these days check out their code directly in the container rather than using bind mounts, leveraging the fact that a lot of editors/IDEs now will interface directly with the containers.
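For what it's worth, a minimal sketch of that pattern, with a hypothetical repo URL and placeholder images/names: keep the source in a named volume, clone into it from inside a container, and never bind-mount the host filesystem:

    # Sketch: keep the source tree in a named volume instead of a host bind mount.
    # The repo URL, volume name and images are placeholders.
    docker volume create app-src

    # Clone the code into the volume from inside a throwaway container.
    docker run --rm -v app-src:/src alpine/git clone https://example.com/acme/app.git /src/app

    # Run the dev container against the same volume; an editor/IDE can then
    # attach to this running container instead of touching host files.
    docker run -d --name app-dev -v app-src:/src -w /src/app node:20 sleep infinity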


That’s been massively improved with VirtioFS. I work with mounted code and I agree it wouldn’t be possible without it.
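For context, the usage itself doesn't change; a rough sketch (paths and image are placeholders), assuming VirtioFS is selected as the file sharing implementation in Docker Desktop's settings:

    # Sketch: an ordinary bind mount of the working copy; with VirtioFS enabled
    # in Docker Desktop this is much faster than the old gRPC-FUSE/osxfs path.
    docker run --rm -it -v "$PWD":/workspace -w /workspace node:20 npm test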


Docker Mac m1 is my daily driver. Painless performance.

But... watch out for software compatibility: M1/M2 requires recompiling from source, which is painless... when it works. I recently needed syslinux and had to move to an x86 cloud instance. Fortunately, Docker made that easy.
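One guess at what the easy path looks like, sketched with a placeholder hostname: point a Docker context at a remote amd64 machine over SSH and keep running the same commands locally:

    # Sketch: drive a remote x86_64 Docker daemon from the Mac over SSH.
    # The hostname and user are placeholders.
    docker context create amd64-cloud --docker "host=ssh://ubuntu@x86-build-host"
    docker context use amd64-cloud

    # Builds and runs now happen natively on the remote amd64 host.
    docker build -t myimage .
    docker run --rm myimage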


It takes more time and effort than just running the damn script! Is that not good enough?

I mean, I want to use containers, but on top of setting up the host they require composing containers (even ready-made ones need customization), networking, logging, and they fight for memory when run alongside memory-hungry stuff (like Elastic or another DB).

If my main job were devops, I suppose I would make myself more valuable by doing everything in containers. But when I deploy an app it's because I have to, on top of many other duties, so being able not only to set things up but to troubleshoot and fix outages quickly matters most (and I hope a full-time devops person, if I ever get one, will help me migrate all that some day so it looks nice and neat).


If you don't want to use containers at all then that's a different question than the one I was responding to.


Fair enough.


Docker Desktop is:

- spyware (transmits private data off your machine without consent when it crashes, which it does a lot)

- nonfree software

- has a git repo so you don't notice it's nonfree software


Does docker support true x86_64 emulation on arm64 hosts?


No, and that’s a pretty good reason to not use it if you really need that.

What can you use to do that?


Currently using straight up lima and software emulation. It’s slow but it just about works.

I think there’s stuff that uses it under the hood like colima and rancher.

Basically the issue is that quite a lot of software is amd64-only and they haven't cross-compiled binaries for arm64.

So for local development you have to emulate.
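Roughly what that looks like with colima (flags from memory, worth double-checking against its docs):

    # Sketch: start an emulated x86_64 VM via colima, then run amd64-only images.
    colima start --arch x86_64 --cpu 4 --memory 8

    # Everything inside that VM is amd64; --platform just makes the intent explicit.
    docker run --rm --platform linux/amd64 amd64/ubuntu uname -m   # prints x86_64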

It’s a shame there’s not a way to leverage rosetta2 really.


There is, it works wonderfully well but you have to do a lot of things manually for now.
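The manual route I'm aware of goes roughly like this, using lima's Virtualization.framework backend with Rosetta enabled; flags are from memory and assume a recent lima on macOS 13+ with Apple silicon, so verify against limactl's help:

    # Sketch: an arm64 Linux VM that runs amd64 binaries through Rosetta 2 instead of QEMU.
    limactl start --vm-type=vz --rosetta --name=rosetta-vm template://docker

    # amd64 images should then run via Rosetta inside the VM:
    limactl shell rosetta-vm docker run --rm --platform linux/amd64 amd64/ubuntu uname -m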


I thought that was a limitation that Apple put in place.

You can either run an alternative architecture or an alternative OS. Running x86 macOS software already goes through a translation layer.

If you want to run x86 Linux software, then macOS on ARM is just one too many steps away.


I do have a particular reason: memory use. I am running Postgres and Redis locally for dev work, and I would love to use Docker so that I can standardize it for my team, but it just takes up too much RAM on M1.


I don't mean to sound flippant, but that sounds like you're using a computer for business that can't handle the work you are trying to do. If 32/64GB isn't enough memory then yeah, I guess you need something else, but if your machine has less than that then it sounds like you need to buy the right computer for the job.

Also, are you using amd64 or arm64 images for those?


I dunno about the m1 aspect of this but postgres and redis for typical dev work shouldn't require more than a gigabyte or two of RAM when running together. 32-64GB is more like heavy traffic enterprise DB territory...


I guess the parent is annoyed at spending 2-3 GiB of RAM on the VM itself.

Even if the containers themselves only use 128 MiB total, the VM will cordon off everything it's told to, which most people are unlikely to change until there's an issue; it's configurable down to a minimum of 1 GiB.

FWIW, Docker Desktop on my machine (M2 MacBook Air; 24 GiB RAM) defaulted to using 8 GiB of RAM.


They're also trivial to install without virtualization. Postgres has been one of the first things I install on a new laptop for over a decade now.


Also, stop using a Mac if it isn't suitable for your use case. There are just so many layers of emulation for this, it's an absolute waste of resources.


This is the right answer.

Scratching my head as to why devs are buying up these new macs knowing up front that they will not work well for the job they were bought for.

Why not make it easier for yourself and have your dev machine match up with prod?


You can set how much RAM Docker is allowed to use. Generally software will use as much as you allow (especially DBs).
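There are two knobs, roughly: the VM-level allocation (Docker Desktop's Resources settings) and per-container limits. A sketch of the per-container side, with placeholder names and example versions:

    # Sketch: cap individual containers so a DB can't balloon to the VM's full allocation.
    docker run -d --name pg --memory 1g -e POSTGRES_PASSWORD=dev postgres:16
    docker run -d --name cache --memory 256m redis:7

    # Or tighten a container that's already running:
    docker update --memory 512m --memory-swap 512m cache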


Your team might want to use asdf https://github.com/asdf-vm/asdf to run multiple native versions of PostgreSQL and Redis in parallel. Even with one project you might have multiple versions of those tools in different releases of the project. You standardize by using a .tool-versions file. I've been using that for a team targeting Linux and developing on Ubuntu, Mac and WSL (or was that an Ubuntu VM in Windows?)
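A small sketch of that workflow using classic asdf commands; the plugin names are the community ones as I remember them and the versions are just examples, so check asdf's plugin index:

    # Sketch: pin native Postgres/Redis versions per project with asdf.
    asdf plugin add postgres
    asdf plugin add redis
    asdf install postgres 15.6
    asdf install redis 7.2.4

    # Writing the versions into .tool-versions at the repo root is what
    # standardizes them for the team:
    asdf local postgres 15.6
    asdf local redis 7.2.4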


Many companies run dev work in a dedicated cloud VM, including well-known companies like Google and Amazon.

You can run a persistent VM with 2/4/8/128 GB of RAM or whatever you need. I've used one at work for years; I think mine has 16 GB of RAM and it's way over-provisioned most of the time. Contrary to what you might expect, treat the cloud VM like a work laptop, not a production service. Let people write scripts that stay there, let people keep it on 24/7, available on demand, etc. It's a cloud laptop, not a production VM.


Depending on your workloads that sounds like a very expensive way to do development compared to just having a dedicated but efficiently set up work laptop.

Even giving people a headless Intel NUC to connect their laptops to would be way, way cheaper (assuming you're just doing development).


Where I work, they used to give you desktop computers (pre-Covid) for workstation purposes, but post-Covid they just provision a VM for you. Honestly, it's probably cheaper in the short term and only mildly more expensive long term. No real IT work is needed (since <cloud provider> handles the hardware), and upgrades are automatic if more RAM/GPU/etc. is needed. Even a really big VM ($80/month) wouldn't be crazy compared to the logistics of managing/storing/powering/networking a bunch of desktops across an office.


This is what I wonder. A powerful local laptop is so useful: instant feedback; works on the train or in any other offline/low-connectivity setting; and the only engineer you need looking after it really is the one who has it.

Having said that, cloud development means you don't need big capex orders for laptops all the time, and it's probably easier to secure.


I use a 16gb m1 air. I'm running docker desktop with mysql, redis, 2 containers doing python, a node container and an nginx container. I'm not noticing any impact on performance. MS Teams hurts more to run. Though I have adjusted the resources docker uses.


I never understand the "docker takes up too much space/ram" objection. Isn't that configurable/manageable even from the GUI?



