The only 32B distill I'm aware of is `DeepSeek-R1-Distill-Qwen-32B`, which is the `Qwen2.5-32B` base model distilled (i.e. further fine-tuned) on outputs from the full R1 model.
DeepSeek R1 has 671 billion parameters. Even if you could quantize every parameter down to 1 bit (from its native 8 bits), you'd still need roughly 84GB of RAM for the weights alone. There is no 32B-parameter version of the V3/R1 model architecture.
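As a back-of-the-envelope check of that ~84GB figure (a quick sketch, nothing more):

```python
# Rough RAM needed to hold R1's weights at various quantization levels.
# 671e9 is the published parameter count; sizes are decimal gigabytes
# and cover the weights only (no KV cache, activations, or overhead).
PARAMS = 671e9

def weight_gb(bits_per_param: float) -> float:
    """Gigabytes needed to store the weights at a given bit width."""
    return PARAMS * bits_per_param / 8 / 1e9

for bits in (1, 4, 8):
    print(f"{bits}-bit: {weight_gb(bits):.1f} GB")
# 1-bit comes out to ~83.9 GB, matching the ~84GB above;
# the native 8-bit weights are ~671 GB.
```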