The Local LLM Helper plugin integrates local Large Language Models (LLMs) with Obsidian, providing tools for text processing, interactive AI chat, and personalized responses. Users can summarize text, generate action items, adjust tone, and chat directly with their indexed notes. Because the plugin works with offline LLM servers such as Ollama and LM Studio, data stays on the user's machine, and prompts and response formats are fully customizable. A built-in chat interface, along with ribbon and status bar integration, provides quick access to these features.
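For orientation, here is a minimal sketch of how a plugin like this can talk to a local Ollama server through its documented `/api/generate` endpoint. The model name and prompt are placeholders, not the plugin's actual values:

```typescript
// Minimal sketch: query a local Ollama server via its /api/generate endpoint.
// The model name ("llama3") is a placeholder, not a plugin default.
async function generateWithOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // any model already pulled with `ollama pull`
      prompt,
      stream: false,   // single JSON response instead of a token stream
    }),
  });
  if (!res.ok) {
    throw new Error(`Ollama request failed: ${res.status} ${res.statusText}`);
  }
  const data = await res.json();
  return data.response; // Ollama returns the generated text in `response`
}
```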
🔧 Critical Bug Fix
- Fixed embeddings re-generating on every Obsidian restart
- Proper persistent storage prevents data conflicts between settings and embeddings (a sketch of the split-storage pattern follows this list)
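The following is a plausible sketch of that split-storage pattern, not the plugin's verbatim code: settings keep using the plugin's default `data.json` (via `loadData`/`saveData`), while embeddings are written to a separate file (the name `embeddings.json` is an assumption) through Obsidian's vault adapter:

```typescript
import { Plugin } from "obsidian";

export class StorageSketch extends Plugin {
  // Hypothetical path inside the plugin's folder (manifest.dir), e.g.
  // .obsidian/plugins/local-llm-helper/embeddings.json
  private embeddingsPath(): string {
    return `${this.manifest.dir}/embeddings.json`;
  }

  // Settings continue to use the default data.json via loadData()/saveData();
  // embeddings live in their own file, so saving settings can no longer
  // clobber the embedding index.
  async saveEmbeddings(embeddings: Record<string, number[]>): Promise<void> {
    await this.app.vault.adapter.write(
      this.embeddingsPath(),
      JSON.stringify(embeddings)
    );
  }

  async loadEmbeddings(): Promise<Record<string, number[]>> {
    const path = this.embeddingsPath();
    if (await this.app.vault.adapter.exists(path)) {
      return JSON.parse(await this.app.vault.adapter.read(path));
    }
    return {}; // no index on disk yet; caller can trigger a re-index
  }
}
```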
🚀 New Features
- Storage diagnostics command (Ctrl+P → "RAG Storage Diagnostics"); see the sketch after this list
- Startup notification showing the loaded embedding count
- Enhanced console logging for debugging
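A minimal sketch of how the command and startup notice could be wired up with Obsidian's plugin API. The command id, notice text, and `loadEmbeddings` helper are illustrative assumptions:

```typescript
import { Notice, Plugin } from "obsidian";

export default class DiagnosticsSketch extends Plugin {
  async onload() {
    // Hypothetical helper; see the storage sketch above.
    const embeddings = await this.loadEmbeddings();
    const count = Object.keys(embeddings).length;

    // Startup notification with the number of embeddings loaded from disk.
    new Notice(`Local LLM Helper: loaded ${count} note embeddings`);

    // Command palette entry (Ctrl+P → "RAG Storage Diagnostics").
    this.addCommand({
      id: "rag-storage-diagnostics",
      name: "RAG Storage Diagnostics",
      callback: () => {
        console.log("[Local LLM Helper] indexed entries:", count);
        new Notice(`RAG storage: ${count} entries indexed`);
      },
    });
  }

  private async loadEmbeddings(): Promise<Record<string, number[]>> {
    return {}; // placeholder for the real loader
  }
}
```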
⚡ Improvements
- Better error handling for Ollama API calls (see the sketch after this list)
- Settings panel now shows the correct indexed file count
- Separate storage files for settings and embeddings
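As one illustration of what "better error handling" can look like around an Ollama request (an assumed pattern, not the plugin's actual code), this sketch separates "server unreachable" from HTTP-level failures and aborts hung requests:

```typescript
// Assumed error-handling pattern: distinguish "Ollama not running" from
// HTTP failures, and cap request time with an AbortController.
async function callOllamaSafely(prompt: string): Promise<string | null> {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 30_000); // 30 s cap
  try {
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: "llama3", prompt, stream: false }),
      signal: controller.signal,
    });
    if (!res.ok) {
      // Server reachable but the request was rejected (e.g., unknown model).
      console.error(`[Local LLM Helper] Ollama returned HTTP ${res.status}`);
      return null;
    }
    return (await res.json()).response;
  } catch (err) {
    // Network error: most often the Ollama server isn't running.
    console.error("[Local LLM Helper] could not reach Ollama at localhost:11434", err);
    return null;
  } finally {
    clearTimeout(timeout);
  }
}
```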
Upgrade Notes: Existing users may need to re-index their notes once after this update for optimal performance.