Description
Category: Learning & Knowledge Management
The Mini-RAG plugin enables local retrieval augmented generation by connecting your notes to a locally running LLM through Ollama. You can start a chat in the context of a specific note or folder, allowing the model to reference only relevant content when generating responses. It supports any Ollama-installed model and provides controls for model selection, temperature adjustment, and even context-free chatting when you want unconstrained responses. Interactions can be initiated directly from right-click menus in the editor or sidebar, and conversations can be saved for later reference.
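The context-constrained chat described above can be sketched against Ollama's local REST API. This is an illustrative sketch, not the plugin's actual code: the function names, prompt wording, and default model name are assumptions; only the `http://localhost:11434/api/generate` endpoint and its `model`, `prompt`, `stream`, and `options.temperature` fields come from Ollama's documented API.

```python
import json
import urllib.request

def build_prompt(note_text: str, question: str) -> str:
    # Constrain the model to the note's content, mirroring the plugin's
    # context-sensitive chat mode (exact prompt wording is an assumption).
    return (
        "Answer the question using only the following note as context.\n\n"
        f"Note:\n{note_text}\n\nQuestion: {question}"
    )

def ask_ollama(note_text: str, question: str,
               model: str = "llama3", temperature: float = 0.2) -> str:
    # POST to Ollama's default local endpoint; "options.temperature"
    # corresponds to the plugin's temperature control.
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(note_text, question),
        "stream": False,
        "options": {"temperature": temperature},
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A context-free chat, as the plugin also supports, would simply send the question as the prompt without the note text.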
Stats
6 stargazers
318 downloads
1 fork
54 days since creation
80 days since last commit
111 days since last release
0 total pull requests (0 open, 0 closed, 0 merged)
0 total issues (0 open, 0 closed)
0 commits in the last year
Requirements (Experimental)
Ollama installed and running locally
Latest Version
4 months ago
Changelog
Description
First release of the Mini-RAG plugin for Obsidian.
Features
- Context-Sensitive Chats with Local LLMs
- Context-Free Chats with Local LLMs
Similar Plugins
Similar plugins are suggested based on tags shared between the plugins.
Ollama
2 years ago by hinterdupfinger
Vector Search
7 months ago by Ashwin A Murali
Obsidian plugin for Vector Search
LLM Tagger
7 months ago by David Jayatillake
Pure Chat LLM
4 months ago by Justice Vellacott
Private AI
2 months ago by GB
Effortlessly chat with your Obsidian notes using a privacy-first LLM. Private by design: your notes never leave your device, and all processing happens locally.