Mini-RAG

by John Wheatley
Score: 50/100
Description
Category: Learning & Knowledge Management

The Mini-RAG plugin enables local retrieval-augmented generation by connecting your notes to a locally running LLM through Ollama. You can start a chat in the context of a specific note or folder, so the model references only the relevant content when generating responses. It supports any model installed in Ollama and provides controls for model selection, temperature adjustment, and even context-free chatting when you want unconstrained responses. Chats can be started directly from right-click menus in the editor or sidebar, and conversations can be saved for later reference.
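The plugin's own source isn't shown on this page, but the flow it describes (take a note's text as context, send it with a question to a local Ollama model, with a temperature option) can be sketched as follows. This is a minimal illustration against Ollama's standard `/api/generate` endpoint on its default port 11434; the function names, prompt wording, and the `llama3` default model are assumptions, not the plugin's actual code.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes Ollama is installed and running).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_prompt(note_text: str, question: str) -> str:
    """Combine note content and a user question into one context-grounded prompt."""
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{note_text}\n\n"
        f"Question: {question}"
    )


def ask(note_text: str, question: str,
        model: str = "llama3", temperature: float = 0.2) -> str:
    """Send the contextualized prompt to a locally running Ollama instance."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(note_text, question),
        "stream": False,                      # return one complete response
        "options": {"temperature": temperature},
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A context-free chat, which the plugin also offers, would simply skip `build_prompt` and send the question alone.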

Stats
  • 1 star
  • 72 downloads
  • 0 forks
  • 8 days
  • 34 days
  • 65 days
  • 0 total PRs (0 open, 0 closed, 0 merged)
  • 0 total issues (0 open, 0 closed)
  • 0 commits
Requirements (Experimental)
  • Ollama installed and running locally

Latest Version
2 months ago
Changelog

Description

First release of the Mini-RAG plugin for Obsidian.

Features

  • Context-Sensitive Chats with Local LLMs
  • Context-Free Chats with Local LLMs