Will CUDA remain a monopoly for GPU software through 2027?
56% chance

NVIDIA's CUDA is a proprietary, closed-source parallel computing platform and application programming interface (API) that allows software to use certain types of GPUs for general-purpose processing.
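For a sense of what "general-purpose processing" means here, this is a minimal sketch of launching a CUDA kernel from Python via Numba (assumes an NVIDIA GPU and the CUDA toolkit; the kernel and launch sizes are illustrative):

```python
from numba import cuda
import numpy as np

@cuda.jit
def add_one(arr):
    # Absolute index of this thread across the whole grid.
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1.0

data = np.zeros(1024, dtype=np.float32)
d_data = cuda.to_device(data)        # copy to GPU memory
add_one[4, 256](d_data)              # launch 4 blocks of 256 threads
print(d_data.copy_to_host()[:5])     # -> [1. 1. 1. 1. 1.]
```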

All the major deep learning frameworks, including TensorFlow, PyTorch, Caffe, Theano, and MXNet, added native support for CUDA GPU acceleration early on.
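In PyTorch, for example, that native support surfaces as a one-line device choice (a minimal sketch):

```python
import torch

# The framework hides CUDA behind a device abstraction: the same code
# runs on an NVIDIA GPU when one is present and falls back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = x @ x  # dispatched to cuBLAS kernels when on a CUDA device
print(y.device)
```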

This created a self-reinforcing cycle: CUDA became the standard way of accessing GPU acceleration because of its popularity and support across frameworks, and frameworks aligned with it because of strong demand from users. Over time, the CUDA programming paradigm and stack became deeply embedded in all aspects of the AI ecosystem.

End date: Jan 1, 2027


Does Apple's Metal Performance Shaders (MPS) framework count?

Hmm, how does this resolve if you can already export your models to ONNX today and run them on all sorts of devices? You just have to be careful which APIs you call, and you need a portable variant for any custom CUDA code.

But, like, I did my Stable Diffusion and LLM training on my Mac because PyTorch just runs everywhere. And all the hardware startups are getting LLMs ported using ONNX.
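Roughly, the export looks like this (a minimal sketch; the toy model, input shape, filename, and opset are all made up):

```python
import torch

# Hypothetical toy model standing in for a real network.
model = torch.nn.Linear(16, 4).eval()
dummy = torch.randn(1, 16)  # example input used to trace the graph
torch.onnx.export(model, dummy, "linear.onnx", opset_version=12)
```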

@Mira Doesn't ONNX also use CUDA to interface with GPUs? Can you run ONNX inference on an AMD GPU?

@Shump Yes, that's the whole point of ONNX. Microsoft and Facebook made it as a portability layer, and the PyTorch team did a lot of the work.

The hardware vendor declares support for an "ONNX version", and when you export a model it also carries an ONNX version. So not everything necessarily runs, but that's a hardware/driver/library limitation; the framework itself is supposed to allow it.

So if, e.g., Groq declares that their chips support ONNX 12, then any model exported to ONNX 12 will run on a Groq card.
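To make the AMD question concrete, here's a minimal sketch of running the exported model through ONNX Runtime, picking an AMD (ROCm), NVIDIA (CUDA), or CPU backend depending on what's installed (the filename comes from the export sketch above; what the model file declares is technically its opset version):

```python
import numpy as np
import onnx
import onnxruntime as ort

# Inspect the opset(s) the exported model declares.
model = onnx.load("linear.onnx")
print([(imp.domain, imp.version) for imp in model.opset_import])

# ONNX Runtime uses the first execution provider in the list that is
# actually installed; ROCMExecutionProvider targets AMD GPUs, so the
# same model file can run with no CUDA dependency at all.
wanted = ["ROCMExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
sess = ort.InferenceSession(
    "linear.onnx",
    providers=[p for p in wanted if p in available],
)
print(sess.get_providers())

inp = sess.get_inputs()[0]
out = sess.run(None, {inp.name: np.random.randn(1, 16).astype(np.float32)})
print(out[0].shape)
```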
