Hacker News

If you need CUDA, don't care about AMD.


>If you need CUDA

The 4090 is a gaming card, and thus optimized for games, not compute.

Buying a card that costs that much and draws that much power to do CUDA is not a choice somebody would make within the realm of sanity.

>don't care about AMD.

AMD has HIP, which is supposedly an open CUDA.

I hear that it's enough to replace "cuda"-prefixed calls with the same function names prefixed "hip" and you're set. HIP also has swappable backends, working with both AMD and NVIDIA hardware.

Note that I can't vouch for it, as I haven't tried it, but with the swappable backends I'd switch immediately even if I kept using NVIDIA hardware, just to have assurance of not being under vendor lock-in.
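The rename described above can be illustrated with a toy sketch. This is not AMD's tooling (real ports use the hipify tools shipped with ROCm, which do far more than a prefix swap); it just shows, under the commenter's simplifying assumption, what the cuda→hip renaming looks like on a line of runtime-API calls:

```python
import re

def cuda_to_hip(source: str) -> str:
    """Toy CUDA->HIP rename: swap the 'cuda' prefix for 'hip' on
    runtime-API identifiers (cudaMalloc -> hipMalloc, etc.).
    A real port uses AMD's hipify tools, not a bare regex."""
    return re.sub(r"\bcuda([A-Z]\w*)", r"hip\1", source)

print(cuda_to_hip("cudaMalloc(&ptr, n); cudaMemcpy(d, s, n, cudaMemcpyHostToDevice);"))
# hipMalloc(&ptr, n); hipMemcpy(d, s, n, hipMemcpyHostToDevice);
```

The same mirrored naming is what lets HIP target both vendors: the hip* calls dispatch to ROCm on AMD hardware and to the CUDA runtime on NVIDIA hardware.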


HIP (unfortunately) has very limited support on AMD cards, unlike CUDA, which runs on ALL NVIDIA hardware.


That sucks, but the situation is improving. RDNA2 support was added early this year.

Now that AMD isn't constrained by financial dire straits anymore, they seem to be pushing the HIP ecosystem very hard, adding HIP support to relevant software and frameworks.

They were focused on CDNA cards first (aimed at datacenter/compute), but expanding support to cover RDNA2 (the gaming cards) is a key step forward; those are the cards potential developers already own for gaming, and thus important for HIP adoption.

Otherwise, the widely deployed Vega cards have worked with HIP for a long time. I agree the RDNA1 hiccup was ugly and hurt adoption.


While it's nice that AMD is becoming more competitive in this space, many small teams still go for NVIDIA to maximize compatibility.

The 4090 is fine as a budget option for many compute tasks, including deep learning, as long as 24GB per card is enough for your use cases.

Single cards in a PC/Workstation can be used to develop the code for a network, even if you run the main training on servers.

But as the 4090 costs a fraction of what an A100 does, a setup with four 4090s will be preferable to A100s for some workloads.

See for instance Bizon Z5000: https://bizon-tech.com/bizon-z5000.html


This.

Even assuming AMD had something as mature and battle-tested as CUDA, a CUDA engineer may still consider the price of NVIDIA's cards a no-brainer.



