In recent years, large language models (LLMs) like DeepSeek have gained significant attention for their powerful natural language processing capabilities. However, relying solely on cloud-based services can lead to issues such as server overloads or privacy concerns. This guide provides a step-by-step tutorial for deploying the DeepSeek model locally using Ollama and integrating it with AnythingLLM to create a personalized knowledge base—all without requiring advanced technical expertise.
System Requirements
Before starting, ensure your computer meets these specifications:
- Operating System: Windows 10 or later
- Storage: At least 10GB of free space
- Hardware:
  - CPU: Intel Core i5 / AMD Ryzen 5 or higher
  - GPU: Optional but recommended (1GB+ VRAM improves performance)
Step 1: Installing Ollama
Ollama is an open-source tool for running LLMs locally. Follow these steps to install it:
1.1 Download Ollama
- Visit the Ollama official website or use the Quark Netdisk mirror:
  - URL: https://pan.quark.cn/s/a8e63450cbf7
  - Extraction code: jrPZ
- Download the OllamaSetup.exe file.
1.2 Install Ollama
- Double-click the downloaded installer.
- Click Install and wait for the process to complete.
- The installer will close automatically once finished.
1.3 Verify Installation
- Press Win + R, type cmd, and press Enter to open the Command Prompt.
- Enter the following command:

  ```bat
  ollama --version
  ```

  If the installation was successful, the terminal displays the installed version (e.g., ollama version 0.1.20).
Step 2: Deploying the DeepSeek Model
Ollama supports multiple LLMs, including DeepSeek. Here’s how to install and validate the model:
2.1 Install DeepSeek
- Open the Command Prompt.
- Run the following command to download and launch the deepseek-r1:1.5b model:

  ```bat
  ollama run deepseek-r1:1.5b
  ```

  Note: Replace 1.5b with another model size (e.g., 7b, 14b) based on your GPU capabilities.
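If you would rather fetch a model without immediately opening a chat session, Ollama's pull and list subcommands cover that. A minimal sketch, using the 7b tag mentioned in the note above:

```bat
rem Download a model without starting an interactive chat session
ollama pull deepseek-r1:7b

rem List locally installed models and their sizes
ollama list
```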
2.2 Validate the Model
After installation, test the model by entering a prompt:
>>> who are you
The model should respond with a self-introduction, confirming it’s operational.
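The same check can be run non-interactively against Ollama's local REST API, which is also what AnythingLLM will talk to later. A quick sketch with curl (bundled with Windows 10 and later), assuming Ollama's default port 11434:

```bat
rem With "stream": false, the reply is a single JSON object whose "response" field holds the answer
curl http://127.0.0.1:11434/api/generate -d "{\"model\": \"deepseek-r1:1.5b\", \"prompt\": \"who are you\", \"stream\": false}"
```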
Step 3: Installing and Configuring AnythingLLM
AnythingLLM is a user-friendly interface for managing local LLMs and knowledge bases.
3.1 Download AnythingLLM
- Official website: https://anythingllm.com/
- Alternative Quark Netdisk mirror (same as for Ollama):
  - URL: https://pan.quark.cn/s/a8e63450cbf7
  - Extraction code: jrPZ
- Download AnythingLLMDesktop.exe.
3.2 Install AnythingLLM
- Run the installer and follow the prompts.
- If blocked by Microsoft Defender SmartScreen, click More Info > Run Anyway.
- Choose an installation directory and complete the setup.
3.3 Configure Ollama Integration
- Launch AnythingLLM and skip the initial setup steps by clicking the right arrow.
- Create a workspace (e.g., "My Knowledge Base").
- Navigate to Settings (wrench icon) > LLM Preferences.
- Select Ollama as the provider. The settings auto-populate if Ollama is running.
- Click Save Changes.
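If the fields do not auto-populate, confirm that Ollama's API is reachable before retrying. A quick check from the Command Prompt, assuming the default endpoint:

```bat
rem Should return a JSON list of installed models; a connection error means Ollama is not running
curl http://127.0.0.1:11434/api/tags
```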
Step 4: Building a Local Knowledge Base
AnythingLLM allows you to upload documents (PDF, TXT, etc.) to create a customized knowledge base.
4.1 Upload Documents
- In your workspace, click Upload a Document.
- Drag and drop files or browse your local storage. A sample document is available here: https://pan.baidu.com/s/1uMfqUIM6LpGXGDk52HuQXQ?pwd=ziyu
- Click Move to Workspace and then Save and Embed to process the files.
4.2 Query the Knowledge Base
- Start a New Thread in the workspace.
- Ask questions related to your uploaded documents.
- The AI will generate answers based on the knowledge base.
- Click Show Citations to view the source passages from your documents.
Troubleshooting Common Issues
Issue 1: Limited Citations in Responses
If Show Citations references too few documents:
- Adjust search preferences: in the AnythingLLM settings, set the vector database search preference to Accuracy Optimized.
- Adjust the similarity threshold: lower the document similarity threshold so that more document chunks qualify for retrieval (a higher threshold filters out loosely related chunks).
Issue 2: Installation Path Conflicts
To avoid filling up the C: drive:
- Use a custom installation script for Ollama (save it as a .bat file next to OllamaSetup.exe):

  ```bat
  @echo off
  chcp 65001 > nul

  rem Folder containing this script (and OllamaSetup.exe)
  set "SCRIPT_DIR=%~dp0"

  rem Target installation directory (a dedicated variable, so the system PATH is left untouched)
  set "OLLAMA_DIR=E:\Ollama"

  rem Persist Ollama's environment variables for future sessions
  setx OLLAMA_MODELS "%OLLAMA_DIR%\models" > nul
  setx OLLAMA_HOST "0.0.0.0" > nul
  setx OLLAMA_ORIGINS "*" > nul

  rem Run the installer into the custom directory
  "%SCRIPT_DIR%OllamaSetup.exe" /DIR="%OLLAMA_DIR%"
  pause
  ```

- Replace E:\Ollama with your preferred directory. Note that OLLAMA_HOST=0.0.0.0 and OLLAMA_ORIGINS=* expose the API to other machines on your network; keep Ollama's defaults if you only need local access.
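Because setx only affects newly started processes, open a fresh Command Prompt after running the script and confirm the variables took effect:

```bat
rem Run in a NEW Command Prompt window; setx does not update the current session
echo %OLLAMA_MODELS%
echo %OLLAMA_HOST%
```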
Enhancing Usability with Visual Tools
Option 1: Chatbox
- Download Chatbox for a streamlined chat interface.
- Configure its settings:
  - API Endpoint: http://127.0.0.1:11434
  - Model: Select your installed DeepSeek version.
Option 2: Web UI (Next.js-Ollama-LLM-UI)
- Deploy a browser-based interface using the Next.js-Ollama-LLM-UI GitHub repository.
Option 3: Chrome Extension (Page Assist)
- Install the Page Assist extension for direct browser integration.
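All three front ends talk to the same local Ollama API used earlier. As a sanity check (or for scripting), you can issue the kind of chat request such a client sends, assuming the deepseek-r1:1.5b model from Step 2:

```bat
rem Single non-streamed chat turn against the local Ollama chat endpoint
curl http://127.0.0.1:11434/api/chat -d "{\"model\": \"deepseek-r1:1.5b\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}], \"stream\": false}"
```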
Key Directory Locations
All AnythingLLM data is stored in:
C:\Users\<YourUsername>\AppData\Roaming\anythingllm-desktop\storage
- lancedb: Vector database files.
- documents: Uploaded and parsed documents.
- vector-cache: Cached embedding representations of your files.
- models: Locally stored GGUF model files.
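To inspect these folders from the Command Prompt, you can use the %APPDATA% variable, which expands to the Roaming path above:

```bat
rem Lists the AnythingLLM storage folders for the current user
dir "%APPDATA%\anythingllm-desktop\storage"
```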
Conclusion
By following this guide, you can bypass cloud service limitations and build a fully offline, customizable AI knowledge base. Whether for academic research, enterprise use, or personal projects, this setup ensures data privacy, reduces latency, and offers flexibility in model selection. For optimal performance, choose a DeepSeek model size that matches your hardware capabilities, and explore third-party tools like Chatbox to enhance usability.
Note: Regularly check Ollama’s model library for updates to DeepSeek and other supported models.