Initial setup: Knowledge base RAG system with LlamaIndex and ChromaDB
- Add Python project with uv package manager
- Implement LlamaIndex + ChromaDB RAG pipeline
- Add sentence-transformers for local embeddings (all-MiniLM-L6-v2)
- Create MCP server with semantic search, indexing, and stats tools
- Add Markdown chunker with heading/wikilink/frontmatter support
- Add Dockerfile and docker-compose.yaml for self-hosted deployment
- Include sample Obsidian vault files for testing
- Add .gitignore and .env.example
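The heading/wikilink/frontmatter-aware Markdown chunker mentioned above might look roughly like this minimal sketch (the `chunk_markdown` name and its output shape are hypothetical illustrations, not the committed implementation):

```python
import re

def chunk_markdown(text: str) -> list[dict]:
    """Split a Markdown note into heading-delimited chunks.

    Strips YAML frontmatter and records any [[wikilinks]] per chunk.
    """
    # Strip YAML frontmatter delimited by leading '---' lines.
    if text.startswith("---"):
        end = text.find("\n---", 3)
        if end != -1:
            text = text[end + len("\n---"):].lstrip("\n")

    chunks: list[dict] = []
    heading: str | None = None
    lines: list[str] = []

    def flush() -> None:
        body = "\n".join(lines).strip()
        if body or heading:
            chunks.append({
                "heading": heading,
                "text": body,
                # Capture the target of [[Target]], [[Target|alias]], [[Target#section]]
                "wikilinks": re.findall(r"\[\[([^\]|#]+)", body),
            })

    for line in text.splitlines():
        if line.startswith("#"):
            flush()                          # close out the previous section
            heading = line.lstrip("#").strip()
            lines = []
        else:
            lines.append(line)
    flush()                                  # final section
    return chunks
```

A real chunker would also enforce a maximum chunk size and carry frontmatter fields through as metadata; this sketch only shows the splitting logic.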
.env.example (new file, 15 lines)
@@ -0,0 +1,15 @@
+# Knowledge RAG Configuration
+
+# Path to your Obsidian vault (must contain markdown files)
+# This should be an absolute path or relative to where you run docker-compose
+VAULT_PATH=./knowledge
+
+# Embedding model to use
+# Default: all-MiniLM-L6-v2 (fast, good quality, ~90MB)
+# Other options:
+# - all-mpnet-base-v2 (higher quality, slower, ~420MB)
+# - BAAI/bge-small-en-v1.5 (good quality, ~130MB)
+EMBEDDING_MODEL=all-MiniLM-L6-v2
+
+# Optional: Log level (DEBUG, INFO, WARNING, ERROR)
+LOG_LEVEL=INFO
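A `KEY=VALUE` dotenv file like the one above can be read with a few lines of stdlib Python; this `load_env_file` helper is a hypothetical sketch (docker-compose reads `.env` files natively, and python-dotenv is the usual library choice):

```python
def load_env_file(path: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping blank lines and # comments."""
    values: dict[str, str] = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue  # ignore comments and blank lines
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values
```

Note this sketch does not handle quoted values or `export` prefixes, which real dotenv parsers support.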