KinocciB to AMD@hardware.watch · English · 2 years ago

Ditching CUDA for AMD ROCm for more accessible LLM training and inference.

medium.com · 1 point · 11 comments

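As a rough illustration of the linked article's premise (ROCm standing in for CUDA without code changes), the hedged sketch below shows that ROCm builds of PyTorch expose AMD GPUs through the usual "cuda" device name, so typical Hugging Face inference code runs unchanged. The transformers dependency and the facebook/opt-125m checkpoint are illustrative assumptions, not taken from the article itself.

    # Hedged sketch: the same PyTorch code path on a ROCm build of PyTorch.
    # On ROCm wheels, torch.cuda.* is backed by HIP, so "cuda" selects the AMD GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer  # assumed dependency

    device = "cuda" if torch.cuda.is_available() else "cpu"  # True on ROCm builds as well
    print(torch.version.hip)  # set on ROCm builds, None on CUDA builds

    model_name = "facebook/opt-125m"  # small placeholder model for a quick test
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

    inputs = tok("ROCm hello", return_tensors="pt").to(device)
    out = model.generate(**inputs, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))
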
  • Yaris_FanB · English · 1 point · 2 years ago

    AMD should stop wasting money and put their support behind OpenVINO. It’s already open source, and it’s optimized for both CPU and GPU in most large AI software.

    https://www.phoronix.com/news/Intel-OpenVINO-2023.0
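
For context on the OpenVINO suggestion above, here is a minimal, hedged sketch of its runtime API (assuming the openvino Python package, 2023.x): the same compiled model targets either CPU or GPU just by changing the device string, and "model.xml" is a placeholder for an OpenVINO IR file, not a file referenced in the thread.

    # Hedged sketch of OpenVINO runtime inference; "model.xml" is a placeholder IR file.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    print(core.available_devices)            # e.g. ['CPU', 'GPU'] depending on the machine

    model = core.read_model("model.xml")     # IR (or ONNX) model, placeholder path
    compiled = core.compile_model(model, device_name="CPU")  # swap "CPU" for "GPU"

    # Dummy input matching the first input's shape (assumes a static shape).
    x = np.random.rand(*compiled.input(0).shape).astype(np.float32)
    result = compiled([x])[compiled.output(0)]
    print(result.shape)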

AMD@hardware.watch

You are not logged in. However, you can subscribe from another Fediverse account, for example Lemmy or Mastodon. To do this, paste the following into the search field of your instance: !amd@hardware.watch

For all things AMD; come talk about Ryzen, Radeon, Threadripper, EPYC, rumors, reviews, news and more.

Visibility: Public

This community can be federated to other instances and be posted/commented in by their users.

  • 2 users / day
  • 2 users / week
  • 2 users / month
  • 2 users / 6 months
  • 3 local subscribers
  • 24 subscribers
  • 263 Posts
  • 3.72K Comments
  • mods: communick@hardware.watch