#local-ai
  • 1. Mini-RAG
    a month ago by John Wheatley
    Score: 49/100
    Category: Learning & Knowledge Management
    The Mini-RAG plugin enables local retrieval-augmented generation by connecting your notes to a locally running LLM through Ollama. You can start a chat in the context of a specific note or folder, so the model references only the relevant content when generating responses. It supports any model installed in Ollama and provides controls for model selection, temperature adjustment, and even context-free chatting when you want unconstrained responses. Interactions can be started directly from right-click menus in the editor or sidebar, and conversations can be saved for later reference. A rough sketch of this kind of Ollama-backed chat call appears after this list.
  • 2. Private AI
    a month ago by GB
    Score: 48/100
    Category: Learning & Knowledge Management
    The Private AI Chat plugin enables local, privacy-first AI conversations with your notes. It connects to a locally running LM Studio server, so all processing happens on your own device and no data is sent to external services. You can query your vault in natural language, and the plugin automatically searches relevant notes and cites them in its responses. It can narrow the AI's focus to specific open notes for targeted insights, and it supports performance tuning through adjustable models, search parameters, and token limits. Cross-platform support for Mac and Windows makes it widely accessible, while easy setup and model swapping simplify day-to-day use. The second sketch after this list shows a comparable query against a local LM Studio server.
  • 3. LLM Tagger
    7 months ago by David Jayatillake
    Score: 39/100
    The LLM Tagger plugin improves note organization in Obsidian by using locally running large language models via Ollama to automatically generate relevant tags. It processes notes efficiently, avoiding re-tagging unchanged files, and can create brief summaries alongside the generated tags. Users can customize the tag list for more focused categorization and select different LLM models for processing. The plugin also supports an auto-tagging feature that applies tags to newly created or modified notes. With local processing, it preserves privacy and speed while maintaining a seamless tagging workflow. The final sketch after this list outlines a comparable tagging call.
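
As a rough illustration of the flow the Mini-RAG entry describes (chatting with a note's content through a local Ollama model), the TypeScript sketch below posts a note and a question to Ollama's local /api/chat endpoint. The model name, temperature, and sample note text are assumptions for the example, not details taken from the plugin itself.

```typescript
// Minimal sketch: chat with a local Ollama model using a note as context.
// Assumes Ollama is running on its default port (11434) and that a model
// such as "llama3.1" has already been pulled; adjust both to your setup.

interface OllamaChatResponse {
  message: { role: string; content: string };
}

async function chatWithNote(noteText: string, question: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",             // any locally installed Ollama model
      stream: false,                 // return one JSON response instead of a token stream
      options: { temperature: 0.2 }, // lower temperature for more grounded answers
      messages: [
        {
          role: "system",
          content: `Answer using only the following note:\n\n${noteText}`,
        },
        { role: "user", content: question },
      ],
    }),
  });
  const data = (await res.json()) as OllamaChatResponse;
  return data.message.content;
}

// Example usage with a hypothetical note:
chatWithNote("RAG retrieves relevant text before generation.", "Summarise this note.")
  .then(console.log)
  .catch(console.error);
```

Setting stream to false keeps the example simple; a chat UI like the plugin's would more likely stream tokens as they arrive.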
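
For the Private AI entry, a comparable sketch against LM Studio's OpenAI-compatible local server might look like the following. The port (LM Studio's default, 1234), the model identifier, the token limit, and the example notes are assumptions; the plugin's own retrieval and citation logic is not shown.

```typescript
// Minimal sketch: query a local LM Studio server (OpenAI-compatible API)
// with a few note excerpts as context and ask the model to cite them.

interface ChatCompletion {
  choices: { message: { content: string } }[];
}

async function askVault(notes: Record<string, string>, question: string): Promise<string> {
  // Flatten the selected notes into one context block, keyed by title,
  // so the model can cite them by name.
  const context = Object.entries(notes)
    .map(([title, body]) => `## ${title}\n${body}`)
    .join("\n\n");

  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // LM Studio serves whichever model is currently loaded
      max_tokens: 512,      // token limit, comparable to the plugin's tuning knobs
      messages: [
        {
          role: "system",
          content: `Answer from these notes and cite the titles you used:\n\n${context}`,
        },
        { role: "user", content: question },
      ],
    }),
  });
  const data = (await res.json()) as ChatCompletion;
  return data.choices[0].message.content;
}

// Example usage with two hypothetical notes:
askVault(
  { "Meeting notes": "Ship the beta on Friday.", "Roadmap": "Beta precedes the 1.0 release." },
  "When does the beta ship?"
).then(console.log).catch(console.error);
```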
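
Finally, the LLM Tagger entry describes local tag generation that skips unchanged notes. The sketch below approximates that idea with a content-hash check and a call to Ollama's /api/generate endpoint; the model name, the candidate tag list, and the in-memory cache are assumptions for illustration.

```typescript
import { createHash } from "node:crypto";

// Minimal sketch: ask a local Ollama model for tags, skipping notes whose
// content hash has not changed since the last run.

const seenHashes = new Map<string, string>(); // note path -> content hash

async function tagNote(notePath: string, content: string): Promise<string[] | null> {
  const hash = createHash("sha256").update(content).digest("hex");
  if (seenHashes.get(notePath) === hash) return null; // unchanged, skip re-tagging

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      stream: false,
      prompt:
        "Choose up to 3 tags from [project, reference, journal, idea] " +
        `for this note and reply with a comma-separated list only:\n\n${content}`,
    }),
  });
  const data = (await res.json()) as { response: string };
  seenHashes.set(notePath, hash);
  return data.response.split(",").map((t) => `#${t.trim()}`).filter((t) => t.length > 1);
}

// Example usage with a hypothetical note:
tagNote("Ideas/plugin.md", "Sketch for a local-first tagging workflow.")
  .then((tags) => console.log(tags ?? "unchanged"))
  .catch(console.error);
```

A real plugin would persist the hashes (for example in its plugin data file) rather than keeping them in memory, so unchanged notes stay skipped across restarts.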