How to Self Host DeepSeek R1 Models on Your Windows & Mac System

If you want to run a powerful LLM on your own computer, DeepSeek self-hosting is the route to take. This way, you can tap into advanced AI capabilities without relying on cloud services. Whether you're a developer looking for privacy, a researcher needing offline access, or simply someone who likes full control over their workflows, this local deployment opens up exciting possibilities without requiring an internet connection.


Part 1. DeepSeek Self Host On Windows: Create Front End Chatbox

Here are the easy and quick step-by-step instructions to self host DeepSeek front end on your Windows computer:

Step 1. Download Docker Desktop

The first step is to go to the Docker website and download the Docker Desktop app. When you hover over Download Docker Desktop, you'll see four options. Select "Download for Windows - AMD64," which works in most cases.

downloading-docker-for-windows

Step 2. Download and Install Ollama for Windows

Next, head to the Ollama website, click the Download option, and select "Download for Windows."

downloading-ollama-for-windows

Now, run the Ollama .exe file to install it on your Windows PC; installation usually takes 3-5 minutes.

After installation, you'll see an Ollama pop-up at the bottom-right corner of your screen; click it to open Ollama in PowerShell. If the pop-up doesn't appear for some reason, just open Windows PowerShell manually and follow along.

ollama-running-on-windows-powershell
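To confirm the install worked, you can run a couple of quick checks in PowerShell (this assumes the installer added the `ollama` CLI to your PATH, which it normally does):

```shell
# Print the installed Ollama version; an error here means the install or PATH is broken
ollama --version

# List locally downloaded models; this is empty on a fresh install
ollama list
```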

Step 3. Download DeepSeek R1 Model and Run on PowerShell

In the next part, go to Ollama's library on their website to download the appropriate model of DeepSeek R1 for your Windows computer.

deepseek-r1-models

In most cases, it is better to stick with the 7B or 8B parameter models unless you have really high-end hardware; otherwise, you'll get an error message as soon as you type something into PowerShell during configuration. If your system has high specs, you can try 14B to 70B, or maybe even 671B.
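For reference, the pull commands for these sizes look roughly like this (the `deepseek-r1` tags are taken from Ollama's library page; double-check the exact tag on the site before running):

```shell
# Smaller distilled models: fine for mid-range hardware
ollama pull deepseek-r1:7b
ollama pull deepseek-r1:8b

# Larger models: only attempt these with plenty of RAM and a capable GPU
# ollama pull deepseek-r1:14b
# ollama pull deepseek-r1:70b
```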

As soon as you select a parameter size, the install command in the side panel adjusts accordingly. Click the copy icon (the two squares) to copy it.

copying-deepseek-model-install-command

Now, right-click Windows PowerShell and select "Run as administrator." Paste the copied DeepSeek R1 command, hit the Enter key, and give it some time to pull the manifests. The waiting time depends on how fast your internet connection and computer are.

running-deepseek-model-install-command-in-powershell

Once PowerShell confirms that the DeepSeek R1 model has been pulled successfully, you can start chatting with it straight from the PowerShell prompt.
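A typical PowerShell chat session looks something like this (the `7b` tag is just an example; use whichever size you pulled):

```shell
# Start an interactive chat with the downloaded model
ollama run deepseek-r1:7b

# Ollama then shows a >>> prompt; type a question, e.g.:
# >>> Explain recursion in one sentence.
# Type /bye to exit the chat session
```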

Step 4. Install Docker for Front End DeepSeek R1 Chatbox

If you are not fond of using the command prompt, you can use a front end with Docker (you already downloaded it in Step 1). Simply click the Docker installer and install it on your Windows PC using the default configuration.

installing-docker-desktop-on-windows

After the installation, your PC will reboot. Next, launch the Docker program and skip the surveys to access the main Docker Engine.

Step 5. Add a Container In the Docker

Here, you notice that there are no containers. If you have an NVIDIA GPU, copy and paste the following command in PowerShell:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

If you have an alternative GPU, run this command:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Hit the Enter key, and once the command completes successfully, it will create a container in Docker and start it; the whole process is automated, so you don't have to lift a finger. On the container entry, you can now see the localhost link.

deepseek-self-host-port-url-in-docker
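You can also verify the container from PowerShell (the container name `open-webui` comes from the command above):

```shell
# Confirm the Open WebUI container is running and check its port mapping
docker ps --filter "name=open-webui"

# The -p 3000:8080 flag maps the UI to port 3000 on your machine,
# so the front end is reachable at http://localhost:3000
```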

Step 6. Access DeepSeek Front End Chatbox

When you click the localhost link, it will take you to the DeepSeek R1 front-end chatbox. Click "Get Started" and create an admin account with your name, email, and password. It's a local database, so you don't need to verify anything via that email.

creating-account-on-deepseek-local-host

Click "Let's Go" in the next pop-up, and the DeepSeek R1 front end will be executed.

self-host-deepseek-interface

Step 7. Do Final Testing

In the final step, it's time to test your DeepSeek self-host front end. Ask it any question you have in mind, and it will display its reasoning (thinking) process before giving the answer.

However, the locally hosted DeepSeek may not give you responses as accurate as the online model's, which is expected; it can be more hit-and-miss than the cloud version.

To add more DeepSeek R1 models to your local front end, just pull them from the Ollama website (see Step 3 above), create separate containers in Docker, and click the localhost link next to each to access its front-end chatbox.
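Pulling an additional model is the same one-liner as before (the `14b` tag is an example from Ollama's library page):

```shell
# Pull another DeepSeek R1 size alongside the first
ollama pull deepseek-r1:14b

# Verify that all downloaded models are now available locally
ollama list
```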

That's about it!

Part 2. How to Install DeepSeek R1 Locally on Mac - Front End

The steps involved in DeepSeek self host on Mac are almost the same as those for Windows, with very few exceptions. Let's get to know the complete process below:

  1. First, go to the Ollama website, download the Mac version, and install it on your system.

    downloading-ollama-for-mac
  2. Now, go to the Docker website, download the Mac version of the Docker Desktop app, and install it with the default settings.

  3. Launch the Docker Desktop app. Open your Mac's Terminal and run the following command to create a container in Docker Desktop; keep the Terminal open for the next step.

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

  4. Next, go to the Ollama website to download the DeepSeek R1 model you want to run locally on your Mac. Copy the install command by clicking the copy icon (the two squares).

    copying-deepseek-model-install-command-for-mac
  5. Paste the install command into the Terminal, hit Enter, and it will start pulling the manifests right away.

    running-deepseek-model-install-command-on-mac-terminal
  6. Finally, go back to Docker, and you will see that the container has already been created. Click the localhost link to access the DeepSeek R1 front-end chatbox on your Mac.

    deepseek-local-host-port-in-docker
  7. Create an account with your name, email, and password to access the locally hosted DeepSeek R1 front-end chatbox, and start managing your workflows.
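Assuming Ollama and Docker Desktop are already installed, the Mac steps above condense to a short Terminal session (the `7b` tag is an example; use the size that fits your Mac):

```shell
# Create and start the Open WebUI container (same command as on Windows)
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

# Pull a DeepSeek R1 model through Ollama
ollama pull deepseek-r1:7b

# The front end is then available at http://localhost:3000
```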

Part 3. FAQs of DeepSeek Self Host

Q1. What are the minimum system requirements to run DeepSeek locally?

A1. To run DeepSeek locally on your Windows or Mac system, you need to make sure it meets the necessary requirements, especially if you are planning to run larger R1 models. Here are the requirements:

  • CPU: Multi-core processor (12+ cores recommended) to handle multiple requests.
  • GPU: NVIDIA GPU with CUDA support to ensure accelerated performance. The three recommended NVIDIA GPUs are the NVIDIA RTX 3060, NVIDIA RTX 4090, and the NVIDIA A100. AMD GPUs can work, too, but they are less popular in this case.
  • RAM: Minimum 16 GB, preferably 32 GB or more for larger models.
  • Storage: Minimum 20GB free space (50GB is recommended).
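As a quick sanity check before downloading a large model, you can compare your free disk space against the ~20 GB minimum from a macOS or Linux shell (on Windows, `Get-PSDrive` in PowerShell shows the same information):

```shell
# Free space (in KB) on the current volume, read from df's "Available" column
free_kb=$(df -k . | awk 'NR==2 {print $4}')

# The 20 GB minimum expressed in KB
min_kb=$((20 * 1024 * 1024))

if [ "$free_kb" -ge "$min_kb" ]; then
  echo "Disk space OK for a DeepSeek R1 download"
else
  echo "Warning: less than 20 GB free"
fi
```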

Q2. Why should you run DeepSeek locally?

A2. Running DeepSeek locally offers several key advantages: complete privacy as your data never leaves your machine, no internet dependency for continuous operation, lower latency since processing happens on your hardware, and no usage costs or API fees. Local deployment also allows customization of model parameters and fine-tuning for specific use cases. Additionally, you have full control over model versioning and updates, ensuring consistent performance for your applications. Local deployment is particularly valuable for sensitive data processing or when working in air-gapped environments.

Q3. Can I run DeepSeek offline completely?

A3. Yes, DeepSeek can run fully offline once the model weights are downloaded. First, download your chosen model weights (e.g., 7B or 8B) through Ollama and run them through the Docker Desktop app. After the initial download, no internet connection is required for model inference or operation.
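Once a model is pulled, everything runs against Ollama's local API on port 11434, so a request like the following works even with networking disabled (the model tag is an example; use the one you downloaded):

```shell
# Query the locally running model through Ollama's HTTP API; no internet needed
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```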

Q4. What are distilled DeepSeek models?

A4. Distilled DeepSeek models are compressed versions of the original models available on Ollama and GitHub and created through knowledge distillation techniques. These smaller models maintain much of the original model's capabilities while requiring less computational power and memory. They're optimized for efficiency, making them suitable for devices with limited resources or when faster inference is needed. The trade-off is slightly reduced performance compared to full models.

Conclusion on DeepSeek Self Host

As we wrap up this DeepSeek Self Host guide, remember that the future of AI isn't just about cloud services - it's about bringing powerful tools directly to your fingertips. By taking control of your AI environment, you're not just running a model; you're joining a community of innovators who believe in the power of local, accessible, and customizable AI solutions.

Hopefully, this guide has helped you run DeepSeek R1 models locally, and you can now manage your workflows offline.
