No, the 'static' CPU manager policy provides the ability to allocate CPUs exclusively to a container's cgroup. But since the Go runtime doesn't read cgroup information anyway, it still sees all available CPUs.
That is only true if the pod is in the Guaranteed QoS class (requests == limits). For pods where requests != limits, a shared set of CPUs is used by all burstable pods; otherwise bursting past requests would not work.
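For context, a pod only qualifies for exclusive CPUs under the static policy when it is Guaranteed QoS with an integer CPU request, roughly like this sketch (the names and image are placeholders, not from any real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app            # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest # hypothetical image
    resources:
      requests:
        cpu: "2"              # integer CPU count
        memory: 1Gi
      limits:
        cpu: "2"              # equal to requests -> Guaranteed QoS
        memory: 1Gi
```

Anything with requests != limits falls into the burstable pool described above and runs on the shared cpuset.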
This still allows the worst case: burstable pods on a node with 100 CPUs will see huge overheads in the Go runtime scheduler.
To my knowledge (I have done a lot of research into not only runc but also gVisor), there is currently no way to have the Go runtime and cgroups interact sanely by default.
If the Go runtime were cgroup-aware I do believe sane defaults would be possible, especially since the JVM and CLR have done so.
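As a minimal sketch of what that cgroup awareness could look like: read the cgroup v2 cpu.max quota and clamp GOMAXPROCS to it (uber-go/automaxprocs does essentially this, and also handles cgroup v1). The file path and the round-up choice here are assumptions for illustration, not the runtime's actual behavior:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// parseCPUMax parses the contents of a cgroup v2 cpu.max file,
// which holds "<quota> <period>" in microseconds, or "max <period>"
// when no limit is set. It returns the effective whole-CPU count,
// or 0 when there is no limit (or the contents are unparsable).
func parseCPUMax(s string) int {
	fields := strings.Fields(strings.TrimSpace(s))
	if len(fields) != 2 || fields[0] == "max" {
		return 0
	}
	quota, err1 := strconv.Atoi(fields[0])
	period, err2 := strconv.Atoi(fields[1])
	if err1 != nil || err2 != nil || quota <= 0 || period <= 0 {
		return 0
	}
	// Round partial CPUs up so a 1.5-CPU quota gets 2 Ps rather than 1.
	return (quota + period - 1) / period
}

func main() {
	// Assumed cgroup v2 unified-hierarchy path; absent outside containers,
	// in which case we silently keep the NumCPU-based default.
	if data, err := os.ReadFile("/sys/fs/cgroup/cpu.max"); err == nil {
		if n := parseCPUMax(string(data)); n > 0 {
			runtime.GOMAXPROCS(n)
		}
	}
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```

With this in place, a burstable container with a 2-CPU quota on a 100-CPU node would run 2 Ps instead of 100, avoiding the scheduler overhead described above.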
Ad 3: a lot of people don't know about it, but there's an open-source, free, DynamoDB-compatible database called ScyllaDB; its DynamoDB-compatible API is called Alternator, to be specific.