Based on your architecture diagram it looks like you're spinning up an instance per user? As you're probably finding, you'll hit AWS account limits quickly.
You might instead want to have a smaller pool of (larger) servers that you run co-resident VMs on with https://firecracker-microvm.github.io/. That will avoid account limits and also keep your AWS costs more predictable.
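To make the pooling idea concrete, here's a rough sketch of generating a Firecracker `--config-file` for one microVM in a shared pool. The paths, sizing, and the per-user rootfs naming scheme are all placeholders, not anything from the project in question:

```python
import json

def firecracker_config(vm_id: int, vcpus: int = 1, mem_mib: int = 256) -> dict:
    """Build a minimal Firecracker VM config (the --config-file JSON shape)."""
    return {
        "boot-source": {
            "kernel_image_path": "/srv/images/vmlinux.bin",  # placeholder path
            "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
        },
        "drives": [{
            "drive_id": "rootfs",
            # Hypothetical per-user rootfs image; in practice you'd likely
            # clone a base image (e.g. via reflink or overlay) per session.
            "path_on_host": f"/srv/images/user-{vm_id}.ext4",
            "is_root_device": True,
            "is_read_only": False,
        }],
        "machine-config": {"vcpu_count": vcpus, "mem_size_mib": mem_mib},
    }

if __name__ == "__main__":
    print(json.dumps(firecracker_config(42), indent=2))
```

The point is that each guest is just a small JSON config plus a rootfs file, so one large host can carry many of them instead of one EC2 instance each.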
That's a kinda nice use case for the WASM machine/Linux emulators: then you just need to provide an image and the user can run it in the browser.
> You might instead want to have a smaller pool of (larger) servers that you run co-resident VMs on with https://firecracker-microvm.github.io/. That will avoid account limits and also keep your AWS costs more predictable.
I'd imagine (still waiting for it to load lmao) most of it could be containers too.
I like making jokes with coworkers about implementing this or that bit of infra with WASM-based tools mostly to get a rise out of them but each time I make the joke I look into some of the tools or projects and the balance of joke to "I'm actually serious" shifts a little bit to the right.
So then the user experience will be poor due to the slowness and non-standard implementation. A better solution IMO would be to provide a container with SSH access.
Just run them in Linux VMs with WASM, on the users' browsers. Make them all pay for it with higher utility bills and greater wear & tear on their hardware.
I thought Cloudflare only ensures high usage of the free tier for "web"-ish responses, which doesn't even include .txt files. But I suppose this use case is several orders of magnitude away from that of EasyList, at least in request rate.
I don't think the DNS exercise would behave the same, although that probably depends on how the container was set up. Docker usually controls /etc/resolv.conf. Another exercise is "try to figure out if you're in a container or VM", so that'd definitely be different.
The question is not if the exercises would behave identically, but if you can test the objective in a container. For example, you can totally test, screw up, and fix DNS in a container. I would think that "try to figure out if you're in a container or VM" would be exactly the same as it is right now.
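For what it's worth, the "am I in a container?" exercise usually comes down to heuristics like these. This is just an illustrative sketch; none of these signals are authoritative (`/.dockerenv` is Docker-specific, and cgroup paths vary by runtime and cgroup version):

```python
import os

def looks_like_container() -> bool:
    """Heuristic check for running inside a container (best-effort only)."""
    # Docker drops a marker file at the filesystem root.
    if os.path.exists("/.dockerenv"):
        return True
    # Under cgroup v1, PID 1's cgroup path often names the runtime.
    try:
        with open("/proc/1/cgroup") as f:
            data = f.read()
    except OSError:
        return False
    return any(tok in data for tok in ("docker", "kubepods", "containerd", "lxc"))
```

So the exercise still "works" in a container in the sense that there's something real to detect; it just detects the container rather than a VM.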
I haven’t gotten any of the challenges to load, but if you’re going to simulate a sysadmin it would make sense to give you high privileges (or even root) on the box. The more privileged you are inside a container, the more attack surface you expose.
Which is why you create a "dummy" host VM that hosts containers. Nobody's saying "host containers on your prod webserver." On the other hand, spinning up a VM for every user seems insane to me.
User namespaces have resulted in multiple new container breakout CVEs in the last year. Some guides actually recommend disabling user namespaces because they are still somewhat new and perilous.
You're talking about creating new user namespaces inside a container, not running a container in a user namespace. Running a container in a user namespace is strictly a security improvement over running it in the host user namespace.
Also, all container runtimes already block unshare(CLONE_NEWUSER) with seccomp by default (unless seccomp has been disabled, which I'm not sure whether Kubernetes still does).
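You can probe this directly. A small sketch (Linux-only, and the outcome naturally depends on the kernel, sysctls, and whatever seccomp profile is in effect, so treat the result as informational):

```python
import ctypes
import errno
import sys

CLONE_NEWUSER = 0x10000000  # from <linux/sched.h>

def userns_unshare_status() -> str:
    """Try unshare(CLONE_NEWUSER) and report whether it was permitted.

    Under a default container seccomp profile this typically fails with
    EPERM; on a plain Linux host it usually succeeds for unprivileged users.
    """
    if not sys.platform.startswith("linux"):
        return "unsupported"
    try:
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
    except OSError:
        return "unsupported"
    if libc.unshare(CLONE_NEWUSER) == 0:
        return "allowed"
    err = ctypes.get_errno()
    # EINVAL can also occur (e.g. in a multithreaded process).
    return "blocked" if err in (errno.EPERM, errno.EACCES) else "unsupported"
```

Note that a successful call actually moves the process into a new user namespace, so you'd run a probe like this in a throwaway child process rather than your main one.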
What are the ones from the last year? They provide security benefits as well. I mean, you could say the Linux kernel is also dangerous, and the Windows kernel, and pretty much anything that has ever had a CVE. You can also limit it to specific users if that is a major concern.
I haven't fully grokked this yet, but one trick I've used in the past to get around limits is AWS Organizations, creating a sub-account per property. A bit more setup, but it can keep things cleaner administratively.
At least in the tests I've done at a small startup recently, they've also implemented some automatic quota increases for EC2. I ran commands that would have eclipsed (or did eclipse) my quota, and got an email that my quotas were bumped a few minutes later.