AMD’s Radeon RX 7900 XTX has landed in the spotlight by outpacing NVIDIA’s GeForce RTX 4090 in inference benchmarks for DeepSeek’s R1 AI models.
AMD Swiftly Enhances Support for DeepSeek’s R1 Models, Delivering Outstanding Performance
DeepSeek’s latest AI model has shaken up the tech world, and while questions remain about the resources used in its development, AMD’s “RDNA 3” Radeon RX 7900 XTX GPU stands out as a powerhouse for the average user. AMD has published inference benchmark comparisons showing its flagship RX 7000 series GPU outperforming NVIDIA’s offerings.
As noted by David McAfee on Twitter, DeepSeek is performing remarkably on AMD’s hardware, offering insights into running AI workloads on Radeon and Ryzen platforms.
Using consumer GPUs for AI has proven effective for many due to the cost-performance balance they provide compared to traditional AI accelerators. Running models locally ensures your privacy, a significant concern with DeepSeek’s models. To this end, AMD has released a comprehensive guide on utilizing DeepSeek R1 on their GPUs. Here’s a simplified path to get you started:
Step 1: Ensure you have the 25.1.1 Optional or higher Adrenalin driver installed.
Step 2: Obtain LM Studio 0.3.8 or above from lmstudio.ai/ryzenai.
Step 3: Install LM Studio and skip the onboarding screen.
Step 4: Navigate to the discover tab.
Step 5: Select your preferred DeepSeek R1 Distill. For quick performance, the Qwen 1.5B is recommended. Larger distills promise better reasoning capabilities.
Step 6: Choose the “Q4_K_M” quantization and click “Download.”
Step 7: Once done, return to the chat tab, choose the DeepSeek R1 distill from the menu, and ensure “manually select parameters” is checked.
Step 8: Set the GPU offload layers slider to the maximum.
Step 9: Click on model load.
Step 10: Interact with your reasoning model running entirely on local AMD hardware!
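Beyond the chat tab, LM Studio can also expose a loaded model through a local, OpenAI-compatible server (by default at http://localhost:1234). As a minimal sketch, assuming that server is running and that the model identifier below matches the distill you downloaded (copy the exact name LM Studio shows), a request could be assembled like this:

```python
import json
import urllib.request

# Chat-completion payload for LM Studio's OpenAI-compatible endpoint.
# The model name is an assumption -- use the identifier LM Studio
# displays for your downloaded DeepSeek R1 distill.
payload = {
    "model": "deepseek-r1-distill-qwen-1.5b",
    "messages": [
        {"role": "user", "content": "Explain GPU offload in one sentence."}
    ],
    "temperature": 0.6,
}

body = json.dumps(payload).encode("utf-8")
request = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",  # LM Studio's default local address
    data=body,
    headers={"Content-Type": "application/json"},
)

print(body.decode("utf-8"))

# With the local server running, the reply would be read like this:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Because everything goes to localhost, the prompt and the model’s response never leave your machine, which is the point of running the distill locally in the first place.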
If you run into trouble with these steps, AMD also offers a video tutorial on YouTube that walks through each one in detail. Running DeepSeek’s LLMs locally keeps your data off third-party servers. With new GPUs from both NVIDIA and AMD on the horizon, we anticipate significant leaps in inferencing performance as dedicated AI engines become standard, enhancing workload efficiency.