
Building a DeepSeek Large Model Local Knowledge Base Using AnythingLLM on a Local Computer

In recent years, large language models (LLMs) like DeepSeek have gained significant attention for their powerful natural language processing capabilities. Relying solely on cloud-based services, however, can mean server overloads and privacy concerns. This step-by-step guide shows how to deploy the DeepSeek model locally using Ollama and integrate it with AnythingLLM to create a personalized knowledge base, all without advanced technical expertise.


System Requirements

Before starting, ensure your computer meets these specifications:

  • Operating System: Windows 10 or later
  • Storage: At least 10GB of free space
  • Hardware:
    • CPU: Intel Core i5 / AMD Ryzen 5 or higher
    • GPU: Optional but recommended (1GB+ VRAM improves performance)

Step 1: Installing Ollama

Ollama is an open-source tool for running LLMs locally. Follow these steps to install it:

1.1 Download Ollama

Visit the official Ollama website (https://ollama.com/download) and download the Windows installer (OllamaSetup.exe).

1.2 Install Ollama

  1. Double-click the downloaded installer.
  2. Click Install and wait for the process to complete.
  3. The installer will close automatically once finished.

1.3 Verify Installation

  1. Press Win + R, type cmd, and press Enter to open the Command Prompt.
  2. Enter the following command:
    ollama --version  
    

    If the installation is successful, the terminal will display the installed version (e.g., ollama version 0.1.20).
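
You can also confirm that Ollama's background service is running by listing the locally installed models; at this point the table will be empty, since nothing has been downloaded yet:

    ollama list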


Step 2: Deploying the DeepSeek Model

Ollama supports multiple LLMs, including DeepSeek. Here’s how to install and validate the model:

2.1 Install DeepSeek

  1. Open the Command Prompt.
  2. Run the following command to download and launch the deepseek-r1:1.5b model:
    ollama run deepseek-r1:1.5b  
    

    Note: Replace 1.5b with another model size (e.g., 7b, 14b) to match your hardware; larger variants need more VRAM and RAM (see the example below).
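
For example, to fetch a larger variant without starting a chat session, then confirm what is installed locally (a quick sketch; the 7b tag assumes your hardware can handle it):

    ollama pull deepseek-r1:7b
    ollama list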

2.2 Validate the Model

After installation, test the model by entering a prompt:

>>> who are you  

The model should respond with a self-introduction, confirming it’s operational.
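
Ollama also exposes a local HTTP API on port 11434, which is what AnythingLLM will talk to in the next step. As an optional sanity check from the Command Prompt (curl ships with Windows 10 and later; stream is set to false so the reply arrives as a single JSON object):

    curl http://127.0.0.1:11434/api/generate -d "{\"model\": \"deepseek-r1:1.5b\", \"prompt\": \"who are you\", \"stream\": false}"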


Step 3: Installing and Configuring AnythingLLM

AnythingLLM is a user-friendly interface for managing local LLMs and knowledge bases.

3.1 Download AnythingLLM

Download the AnythingLLM Desktop installer for Windows from the official website (https://anythingllm.com/download).

3.2 Install AnythingLLM

  1. Run the installer and follow the prompts.
  2. If blocked by Windows Defender, click More Info > Run Anyway.
  3. Choose an installation directory and complete the setup.

3.3 Configure Ollama Integration

  1. Launch AnythingLLM and skip initial setup steps by clicking the right arrow.
  2. Create a workspace name (e.g., “My Knowledge Base”).
  3. Navigate to Settings (wrench icon) > LLM Preferences.
  4. Select Ollama as the provider. If Ollama is running, the settings will auto-populate (you can verify connectivity with the check below).
  5. Click Save Changes.
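
If the fields do not auto-populate, confirm that Ollama is reachable at its default base URL (http://127.0.0.1:11434). Requesting the root endpoint from the Command Prompt should return a short status message:

    curl http://127.0.0.1:11434

A healthy instance replies with "Ollama is running".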

Step 4: Building a Local Knowledge Base

AnythingLLM allows you to upload documents (PDF, TXT, etc.) to create a customized knowledge base.

4.1 Upload Documents

  1. In your workspace, click Upload a Document.
  2. Drag-and-drop files or browse your local storage.

    A sample document is available here: https://pan.baidu.com/s/1uMfqUIM6LpGXGDk52HuQXQ?pwd=ziyu.

  3. Click Move to Workspace, then click Save and Embed to process the files.

4.2 Query the Knowledge Base

  1. Start a New Thread in the workspace.
  2. Ask questions related to your uploaded documents.
  3. The AI will generate answers based on the knowledge base.
  4. Click Show Citations to view sources from your documents.

Troubleshooting Common Issues

Issue 1: Limited Citations in Responses

If Show Citations references too few documents:

  1. Adjust Search Preferences:
    • In AnythingLLM settings, set the vector database search preference to Accuracy Optimized.
  2. Modify the Similarity Threshold:
    • Lower the document similarity threshold so that more chunks pass the relevance filter and can be cited (a higher threshold excludes more chunks).

Issue 2: Installation Path Conflicts

To avoid filling up the C: drive:

  1. Use a custom installation script for Ollama:
    @echo off
    chcp 65001 > nul
    rem Folder containing this script (and OllamaSetup.exe)
    set "SCRIPT_DIR=%~dp0"
    rem Target install directory. Do not name this variable "path":
    rem that would overwrite the PATH variable for this session.
    set "OLLAMA_DIR=E:\Ollama"
    rem Persist Ollama settings for future sessions
    setx OLLAMA_MODELS "%OLLAMA_DIR%\models" > nul
    setx OLLAMA_HOST "0.0.0.0" > nul
    setx OLLAMA_ORIGINS "*" > nul
    rem Launch the installer with a custom install directory
    "%SCRIPT_DIR%OllamaSetup.exe" /DIR="%OLLAMA_DIR%"
    pause
    
  2. Replace E:\Ollama with your preferred directory.
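
Save the script (e.g., as install_ollama.bat; the name is arbitrary) in the same folder as the downloaded OllamaSetup.exe and run it. Because setx only affects new sessions, open a fresh Command Prompt afterwards to confirm the variables took effect:

    echo %OLLAMA_MODELS%
    echo %OLLAMA_HOST%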

Enhancing Usability with Visual Tools

Option 1: Chatbox

  • Download Chatbox for a streamlined interface.
  • Configure settings:
    • API Endpoint: http://127.0.0.1:11434
    • Model: Select your installed DeepSeek version (see the check below).
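
If you are unsure which model name to pick, you can query Ollama's local API for the installed model tags (assuming the default port):

    curl http://127.0.0.1:11434/api/tags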

Option 2: Web UI (Next.js-Ollama-LLM-UI)

A self-hosted web chat interface for Ollama built with Next.js; it runs locally under Node.js and connects to the same http://127.0.0.1:11434 endpoint.

Option 3: Chrome Extension (Page Assist)

A browser extension that provides a sidebar and web UI for your locally running Ollama models directly inside Chrome.


Key Directory Locations

All AnythingLLM data is stored in:

C:\Users\<YourUsername>\AppData\Roaming\anythingllm-desktop\storage  
  • lancedb: Vector database files.
  • documents: Uploaded and parsed documents.
  • vector-cache: Embedding representations of files.
  • models: Locally stored GGUF model files.
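
To inspect or back up these files, you can open the storage folder straight from the Command Prompt (%APPDATA% expands to C:\Users\<YourUsername>\AppData\Roaming):

    explorer %APPDATA%\anythingllm-desktop\storage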

Conclusion

By following this guide, you can bypass cloud service limitations and build a fully offline, customizable AI knowledge base. Whether for academic research, enterprise use, or personal projects, this setup ensures data privacy, reduces latency, and offers flexibility in model selection. For optimal performance, choose a DeepSeek model size that matches your hardware capabilities, and explore third-party tools like Chatbox to enhance usability.

Note: Regularly check Ollama’s model library for updates to DeepSeek and other supported models.
