Hacker News

"the MPS backend" - that's the thing that lets Torch run accelerated on M1/M2 Macs!


Based on George Hotz's testing it is very broken. It's possible it has improved since then, I guess, but he streamed this a few weeks ago.


We tested inference for all spaCy transformer models and they work:

https://explosion.ai/blog/metal-performance-shaders

It depends very much on the ops that your model is using.


It supports a subset of the operators (as mentioned in the release notes). I don’t think it’s broken for the ones that it does support though.


That's been my experience. However, when fallback to the CPU happens, it sometimes ends up making a specific graph execution slower. But that's explicitly mentioned in the warning and pretty much expected.


Yes, this is my experience. Many off the shelf models still don't work, but several of my own models work great as long as they don't use unsupported operators.


Where can I find a list of the supported operators?


This is the master tracking list for MPS operator support: https://github.com/pytorch/pytorch/issues/77764


You'd think it would fall back to the CPU for unsupported operations instead of failing, but I guess that's easier said than done.


In fact there is an environment variable that enables exactly that. There are obviously performance issues associated with that, but it does work.
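For reference, the variable in question is PYTORCH_ENABLE_MPS_FALLBACK, and it has to be set before the Python process starts. A hypothetical session (`train.py` stands in for your own script):

```shell
# Enable CPU fallback for any op the MPS backend doesn't implement yet.
# Unsupported ops then run on the CPU with a warning instead of raising an error.
PYTORCH_ENABLE_MPS_FALLBACK=1 python train.py
```

Setting it inside the script via `os.environ` after `import torch` is too late; it's read at startup.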


I couldn't fine-tune a Detectron2 model a few months ago using PyTorch and the MPS backend [1]. I'd be interested to know if it's working yet.

https://github.com/facebookresearch/detectron2/issues/4342


[flagged]


so, you would bet he was holding it wrong?


More likely, no bet would be placed.


MPS is like an artificial flavor: you get the general idea but not the nutrition.

Nvidia execs should light a candle and pray to all the gods that most "AI" stuff really does just work with CUDA, since it was coded with CUDA in mind.

That's why I reluctantly shell out quite a few bucks for a ridiculously overpriced Nvidia card.


Yes, I'm not sure to what extent MPS is a viable alternative to CUDA. You seem to write a lot about ML models. Do you have a detailed write-up on this subject?


Great! Any up-to-date guide to getting the latest PyTorch running on Macs? Do we still have to use conda, for example?
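Not a full guide, but on the conda question: for recent releases a plain pip install is enough on Apple Silicon (assuming a reasonably current Python; the official install selector has the exact supported versions):

```shell
# Install the current stable PyTorch wheel; the macOS arm64 build includes MPS support.
python -m pip install --upgrade torch

# Quick sanity check: prints True on a machine where the MPS backend is usable.
python -c "import torch; print(torch.backends.mps.is_available())"
```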



