Horme

by ducktapekiller

Description

AI-powered assistant with chat, right-click actions, vault-wide RAG, and an extensible skill system. Supports Ollama, Claude, Gemini, OpenAI, Groq, OpenRouter, and LM Studio.


▵ Horme

In Greek mythology, Horme (/ˈhɔːrmiː/; Ancient Greek: Ὁρμή) is the Greek spirit personifying energetic activity, impulse or effort (to do a thing), eagerness, setting oneself in motion, and starting an action, and particularly onrush in battle.


Quick start

TLDR — For Non-Technical Users

To use this plugin's full capabilities, you need two local models:

  1. An Indexing Model: This model allows the plugin to interact with and index your notes.
  2. An Interaction Model: This is the model you will actually chat or “speak” with.

Prerequisites: Setting up Ollama

We recommend using Ollama to manage your local models. You can download it here: Download Ollama.

1. Download the Indexing Model: This model is only for indexing your vault in a compressed format; you cannot chat with it.

  • Recommended Model: nomic-embed-text:latest (274 MB).
  • Command (in your terminal):
    ollama pull nomic-embed-text:latest
    

2. Download the Interaction Model: This is the model you will use for asking questions.

  • Strong Recommendation: gemma4:e4b (9.6 GB)

  • Command (in your terminal):

    ollama pull gemma4:e4b
    

Manual Installation Steps

Once both models are downloaded, follow these steps to install Horme:

  1. Create Plugin Folder:
    • Navigate to your hidden Obsidian folder: .obsidian/plugins
    • Create a new folder named horme.
  2. Download Plugin Files:
    • Go to the repository releases page: Horme Releases.
    • Download the three files from the most recent release:
      • main.js
      • manifest.json
      • styles.css
  3. Activate in Obsidian:
    • Go to Settings ➔ Community Plugins and enable Horme.
    • Open Horme's settings and toggle on “Enable Local Vault Memory”.
    • Select the indexing model you just downloaded: nomic-embed-text:latest.
    • Wait for the indexing counter in the status bar to finish.

Ready to Use

Once the indexing is complete, go to the Horme chat box and ask any question about your notes.

Example Query:

“I want to write an essay on modern art, help me find which of my notes can help me.”

No data leaves your machine. No API keys. No cloud. Just your models, your notes, your rules. But if you still want to use cloud models, they are available in settings.


▾ Installation

  1. Download main.js, styles.css, and manifest.json.
  2. Create a folder named horme inside your vault's .obsidian/plugins/ directory.
  3. Place the three files inside that folder.
  4. Open Obsidian ➔ Settings ➔ Community Plugins ➔ enable Horme.

◆ Requirements

| Dependency | Details |
| --- | --- |
| Ollama | Must be running locally at http://127.0.0.1:11434 (configurable). |
| Embedding Model | Specialized model for RAG (e.g. ollama pull nomic-embed-text or mxbai-embed-large). |
| Chat Model | Pull a model with ollama pull <model> (e.g. gemma3, llama3). |
| Obsidian | v1.0.0 or later. |

★ Features

▷ Vault Brain (Local RAG)

The Vault Brain gives the AI long-term memory of your entire knowledge base. It uses a high-performance, private Retrieval-Augmented Generation (RAG) engine.

  • Lean Indexing: Horme does not store your text in the index. It stores character offsets and mathematical "fingerprints" (embeddings), reducing index size by 90% and keeping startup instant.
  • Auto-Pilot Indexing: The system automatically detects when you create or modify a note and updates the index in the background (with a 2-second debounce to save resources).
  • Heading-Aware Chunking: Notes are split into semantically meaningful chunks that preserve heading context, so the model knows which section a passage belongs to.
  • Model-Aware Prefixes: The indexer automatically applies the correct asymmetric prefix convention for your embedding model (nomic-embed-text, mxbai-embed-large, or symmetric models), ensuring high-fidelity retrieval.
  • Multi-Query Fusion: Search runs dual-embedding (full query + keyword distillation) for improved recall across your vault.
  • Model-Locked Integrity: The index is versioned. If you change your embedding model in settings, the plugin detects the mismatch and prompts for a rebuild to prevent corrupted results.
  • Session Toggle: A "Use Vault Brain" checkbox in the chat header lets you disable vault search per-session for faster responses when you don't need it.
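
The retrieval side of this design can be sketched in a few lines of TypeScript. This is only an illustration of the offsets-plus-embeddings idea described above; the interface and function names are assumptions, not the plugin's actual code.

```typescript
// Hypothetical "lean index" entry: only offsets and an embedding are
// stored, never the note text itself.
interface LeanChunk {
  path: string;      // vault-relative note path
  start: number;     // character offset where the chunk begins
  end: number;       // character offset where the chunk ends
  vector: number[];  // embedding "fingerprint"
}

function cosineSim(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank chunks against a query embedding; the caller re-reads the text
// from disk using (path, start, end), which keeps the index small.
function topK(query: number[], index: LeanChunk[], k: number): LeanChunk[] {
  return [...index]
    .sort((x, y) => cosineSim(query, y.vector) - cosineSim(query, x.vector))
    .slice(0, k);
}
```

Because the index holds only coordinates and vectors, rebuilding the displayed snippet is a cheap file read at answer time.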

▷ Live Connections

Horme surfaces notes semantically related to what you're currently reading or writing, in real time. This feature runs entirely locally on top of the Vault Brain.

  • Real-time Discovery: As you switch notes, a sidebar panel updates to show you related content across your vault.
  • Granular Control: Adjust the similarity threshold, limit the maximum number of results, and exclude specific folders (like Templates or Daily Notes) directly from settings.
  • Privacy First: Connections are generated locally using your indexed vector embeddings. No data is sent to the cloud.
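
The controls above amount to a filter over locally computed similarity scores. A minimal sketch, with invented names (the plugin's real settings keys may differ):

```typescript
// One candidate note surfaced by the local vector index.
interface Related { path: string; similarity: number; }

// Apply the user's threshold, folder exclusions, and result cap.
function filterConnections(
  candidates: Related[],
  threshold: number,
  excludedFolders: string[],
  maxResults: number,
): Related[] {
  return candidates
    .filter((c) => c.similarity >= threshold)
    .filter((c) => !excludedFolders.some((f) => c.path.startsWith(f + "/")))
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, maxResults);
}
```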

▷ Semantic Tagging

Manage large tag collections (3,000+ tags) with ease using the Hybrid Tag Suggester.

  • Keyword + Semantic: Combines traditional word-matching with mathematical topic-matching. It finds specific names (like "Hernán Cortés") AND broad themes (like "Spanish History") simultaneously.
  • Intelligent Candidates: From a collection of thousands, it selects the most relevant candidates and lets your local LLM make the final, precise selection.
  • Shadow Tagging (Bilingual): Translate your tags automatically during indexing. Keep your vault in one language (e.g. Spanish #pájaros) and retrieve them in another (e.g. English "birds"). Translation is fully decoupled from the chat model, so you can use a dedicated local model just for it. Your real tags are never modified; the translations exist only in the index.
  • Tag Index: Dedicated tag brain that maps your entire hierarchy for instant retrieval. Use Rebuild Tag Index in settings to refresh.
  • Tag Button: Quick access via the "Tags" button in the chat header.
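
One way to picture the hybrid score is as a weighted blend of word overlap and embedding similarity. The weighting and function names below are assumptions for illustration, not the plugin's actual formula:

```typescript
// Fraction of the tag's words that literally appear in the text
// (catches specific names like "Hernán Cortés").
function keywordScore(text: string, tag: string): number {
  const words = tag.toLowerCase().replace(/^#/, "").split(/[-_/\s]+/);
  const haystack = text.toLowerCase();
  const hits = words.filter((w) => w.length > 0 && haystack.includes(w));
  return words.length ? hits.length / words.length : 0;
}

// Blend with a semantic similarity from the tag index (catches broad
// themes like "Spanish History"). alpha = 0.5 is an invented weight.
function hybridScore(
  text: string,
  tag: string,
  semanticSim: number, // cosine similarity from the tag index, 0..1
  alpha = 0.5,
): number {
  return alpha * keywordScore(text, tag) + (1 - alpha) * semanticSim;
}
```

The top-scoring candidates would then be handed to the local LLM for the final selection, as described above.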

▷ Grammar Proofreading Engine

Feed the AI your own grammar manuals and style guides. Horme indexes them locally and consults them during proofreading.

  • Local Grammar Index: Point the plugin to a folder containing your grammar reference notes. Horme chunks and indexes them for semantic retrieval.
  • Language-Aware Activation: Set your grammar language in settings (e.g. "Español"). The grammar skill is only triggered when proofreading text in that language — English text won't invoke Spanish grammar rules.
  • Academic Precision: When proofreading, the AI is explicitly instructed to consult your grammar manuals for non-obvious errors like false cognates, prepositional regimes, and orthotypography.

▷ Frontmatter Summary Generation

Automatically generate concise summaries and write them directly into your notes' YAML frontmatter.

  • Configurable Field: Choose the frontmatter key (e.g. summary, resumen, abstract) in settings.
  • Configurable Language: Summaries are generated in your chosen language.
  • Two Access Points: Use the "Summary" button in the chat header or the command palette (Horme: Generate frontmatter summary).
  • Overwrite Protection: If a summary already exists, a confirmation dialog shows old vs. new before replacing.

▷ AI Skills

Horme extends the LLM with modular skills that it can invoke autonomously during conversations and actions. Skills are tool calls the model emits when it needs external information.

| Skill | Type | Description |
| --- | --- | --- |
| Wikipedia Search | 🌐 Web | Searches Wikipedia for factual verification. Supports multiple languages (en, es, fr, etc.). Returns summaries and relevant article sections with source URLs. |
| Wiktionary Lookup | 🌐 Web | Looks up word definitions, etymology, and usage notes. Useful for distinguishing false friends and verifying word existence. Multi-language. |
| DuckDuckGo Instant Answer | 🌐 Web | Quick facts and topic summaries for recent events, technical specs, and niche topics not covered by Wikipedia. No API key required. |
| Date Calculator | 💻 Local | Computes time differences between dates, verifies day-of-week for historical dates, and checks chronological consistency. Pure computation, zero latency. |
| Vault Linker | 📚 Index | Finds semantically related notes within your vault. Privacy-guarded; only available to local providers (or with explicit cloud opt-in). |
| Taxonomy Scholar | 📚 Index | Retrieves the full list of existing tags to ensure consistent tagging. |
| Grammar Scholar | 📚 Index | Consults your local grammar and orthography manuals for precision checks on syntax, false friends, and orthotypography. |
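
The Date Calculator's checks are plain local computation. A toy sketch (function names are illustrative, not the plugin's API):

```typescript
// Whole-day difference between two ISO dates (date-only strings are
// parsed as UTC midnight, so the division is exact).
function daysBetween(a: string, b: string): number {
  const ms = Date.parse(b) - Date.parse(a);
  return Math.round(ms / 86_400_000);
}

// Day-of-week verification for a historical date.
function dayOfWeek(iso: string): string {
  const names = ["Sunday", "Monday", "Tuesday", "Wednesday",
                 "Thursday", "Friday", "Saturday"];
  return names[new Date(iso + "T00:00:00Z").getUTCDay()];
}
```

Because this never leaves the machine and involves no model call, it adds essentially zero latency, as the table notes.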
Custom HTTP Skills

Beyond the built-in skills, you can create your own HTTP-based skills to connect Horme to any REST API (local or public). You just configure the URL, method, headers, and a response path.

Example: Open Library Book Search

  • Method: GET
  • URL: https://openlibrary.org/search.json?q={{query}}&limit=3
  • Response Path: docs

When the skill is armed, your query (e.g., "Don Quixote") replaces the {{query}} placeholder; Horme makes the request, extracts the docs array, and injects it into the AI's context so it can answer your question using real-time data.
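
The two moving parts of a custom HTTP skill (placeholder substitution and response-path extraction) can be sketched like this. The helper names are assumptions; only the {{query}} placeholder and the dotted response path come from the description above:

```typescript
// Substitute the user's query into the URL template, URL-encoded.
function buildUrl(template: string, query: string): string {
  return template.replace(/\{\{query\}\}/g, encodeURIComponent(query));
}

// Walk a dotted "response path" (e.g. "docs") into the parsed JSON.
function extractPath(body: unknown, path: string): unknown {
  return path.split(".").reduce<any>((acc, key) => acc?.[key], body);
}
```

With the Open Library example, buildUrl produces the final request URL and extractPath pulls out the docs array that gets injected into the model's context.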


▷ Privacy Firewall

Horme is built with a "Privacy-First" architecture with four layers of protection on vault data.

  • Cloud Lock: If you switch to a cloud provider (Claude, Gemini, etc.), the Vault Brain, background indexer, and Vault Linker skill are immediately disabled. No private note content is processed by external servers.
  • Skill Suppression: When vault search is locked, the Vault Linker skill is hidden from the model's instructions entirely — the model never even knows it exists.
  • Defence in Depth: Even if a prompt-injected model somehow attempts to call the vault skill, the skill itself refuses to execute when access is locked.
  • Context Warning: A one-time confirmation dialog is required before sending the current note context to a cloud provider.
  • Explicit Opt-In: An "Allow Cloud Provider Access" toggle (with a confirmation prompt) is required before any vault content can be sent to cloud providers.
  • Tag & Grammar indexes are available to all providers — they contain only tag names and grammar manual excerpts, not private vault content.

▷ Chat Panel

Open the chat panel from the ribbon icon (▵) or the command palette (Horme: Open chat panel).

  • Streaming UI: Responses rendered as live Markdown with code blocks, lists, and full text selection.
  • Connection Indicator: Live coloured dot showing Ollama status.
  • Model Selector: Switch between available models directly from the chat header.
  • Micro-Batching: Optimized for Apple Silicon; handles large context windows by processing embeddings in small, stable groups.
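
Micro-batching itself is a simple pattern: split a large workload into small, fixed-size groups and process them sequentially so memory use stays flat. A generic sketch (the batch size of 8 is an assumption):

```typescript
// Split items into groups of at most `size`, preserving order.
function batches<T>(items: T[], size = 8): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```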

▷ Multi-Note Context

Send multiple notes as context to the AI in a single conversation.

  • Note Picker: Click "+ Add notes" in the chat header to open a fuzzy search modal. Select up to 5 notes.
  • Selected Notes Label: A compact label shows which notes are currently included as context.
  • Clear All: One-click button to remove all selected notes.
  • Per-Session: Selections persist across messages within the same chat session and are cleared on new conversations.

▷ Right-Click Context Menu

Select text in any note to access professional editing tools via right-click ➔ Horme:

| Action | Description |
| --- | --- |
| Proofread | Fixes grammar, spelling, and punctuation. Consults your grammar manuals for the configured language. |
| Rewrite | Opens a tone picker: Formal, Friendly, Academic, Sarcastic, Aggressive, or Humanise. |
| Expand | Adds detail while preserving meaning. |
| Summarize | Condenses text to key points. |
| Beautify Format | Fixes heading hierarchy, normalizes lists and spacing. |
| Fact Check | Verifies each claim against Wikipedia. Returns structured verdicts with source citations. |
| Translate | Opens a language input modal. Translates to any language. |

▷ Inline Diff Confirmation

Before any text is changed, Horme shows a side-by-side Original vs. Replacement modal. You review the changes and explicitly click Accept or Cancel. All changes are fully undoable with Ctrl+Z.


▷ Status Bar Progress

A professional progress indicator appears in the Obsidian status bar during background tasks:

  • ● Indexing 47 / 3210

The indicator is color-coded and disappears automatically when the task is finished.


▷ Token Awareness

Horme estimates the total token count of the conversation before sending. If the context (system prompt + note context + documents + history) exceeds ~6,000 tokens, a warning notice is displayed to prevent silent truncation.
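
A rough estimator in the spirit of that warning might look like the following. The four-characters-per-token ratio is a common rule of thumb for English text, and the 6,000-token budget mirrors the figure above; both are assumptions here, since real tokenizers vary by model.

```typescript
const TOKEN_BUDGET = 6000;
const CHARS_PER_TOKEN = 4; // heuristic, not a real tokenizer

// Estimate tokens across all context parts before sending.
function estimateTokens(...parts: string[]): number {
  const chars = parts.reduce((n, p) => n + p.length, 0);
  return Math.ceil(chars / CHARS_PER_TOKEN);
}

function overBudget(system: string, note: string, history: string): boolean {
  return estimateTokens(system, note, history) > TOKEN_BUDGET;
}
```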


▷ Note Context

Toggle "Use current note as context" to inject the active note's content. The plugin tracks the last focused markdown editor live, so switching tabs updates the context automatically.


▷ Document Upload

Upload PDF and DOCX files directly into the chat. Horme extracts the text content (including structural metadata for PDFs) and injects it as context for the model.


▷ Chat History

Manage your past conversations via the History panel (🕑):

  • Debounced Saving: History is saved every 2 seconds during active chat to minimize disk I/O.
  • Capped Storage: Retains up to 200 conversations; oldest entries are automatically trimmed.
  • Flush on Close: In-progress conversations are guaranteed to save when the chat panel is closed.
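
Debounced saving is the classic "reset a timer on every event" pattern; the 2,000 ms quiet period matches the description above, while everything else in this sketch is illustrative:

```typescript
// Return a wrapped function that only fires after `delayMs` of quiet.
// Each new call cancels the pending one, so a burst of chat activity
// results in a single disk write.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  delayMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

The "Flush on Close" guarantee would then be a direct, non-debounced save triggered when the panel closes.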

▷ Export Conversation

Export any conversation as a formatted Markdown note (⬇). The file is saved to the configured export folder with a timestamped filename, preserving the distinction between User and Assistant messages.


▷ System Prompt Presets

Create reusable system prompts (e.g. "Constitutional Law Professor", "Code Auditor", "Spanish Tutor") in settings. Switch between them from the preset dropdown in the chat header — no need to retype.


▷ Per-Note Frontmatter Prompts

Override the global system prompt for specific notes by adding a horme-prompt key to the YAML frontmatter. This allows for note-specific personas that activate automatically when the note is in context.

---
horme-prompt: "You are an expert in constitutional law. Always cite legal precedent."
---
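
Reading such an override boils down to finding the frontmatter block and pulling out the key. A real implementation would use Obsidian's metadata cache rather than a regex; this sketch is only for intuition:

```typescript
// Extract the horme-prompt value from a note's YAML frontmatter,
// or return null if the note has no frontmatter or no such key.
function getNotePrompt(markdown: string): string | null {
  const fm = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!fm) return null;
  const line = fm[1].match(/^horme-prompt:\s*"?(.*?)"?\s*$/m);
  return line ? line[1] : null;
}
```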

☍ Providers

Horme supports multiple AI providers. Local providers are recommended for privacy.

| Provider | Type | Notes |
| --- | --- | --- |
| Ollama | 🏠 Local | Default. Full feature access including Vault Brain. |
| LM Studio | 🏠 Local | Full feature access including Vault Brain. |
| Claude | ☁ Cloud | Vault Brain requires explicit opt-in. |
| Gemini | ☁ Cloud | Vault Brain requires explicit opt-in. |
| OpenAI | ☁ Cloud | Vault Brain requires explicit opt-in. |
| Groq | ☁ Cloud | Vault Brain requires explicit opt-in. |
| OpenRouter | ☁ Cloud | Vault Brain requires explicit opt-in. |

⚙ Settings

| Setting | Default | Description |
| --- | --- | --- |
| Ollama Base URL | http://127.0.0.1:11434 | Endpoint for the Ollama API. |
| Embedding Model | nomic-embed-text | Model used for indexing (e.g. nomic-embed-text, mxbai-embed-large). |
| Vault Brain | Off | Toggle for the semantic RAG engine and background indexer. |
| Allow Cloud RAG | Off | Explicitly allow vault content to be sent to cloud providers. |
| Grammar Manual Folder | Gramática | Folder containing your grammar reference notes. |
| Grammar Language | Español | Language your grammar manuals cover. Proofreading only consults manuals for this language. |
| Summary Field | summary | Frontmatter key where generated summaries are stored. |
| Summary Language | Español | Language summaries are written in. |
| Max Tag Candidates | 250 | Number of existing tags considered for semantic suggestions. |
| Export Folder | HORME | Vault-relative path for saved notes and exports. |

☐ Release Files

To install Horme, you need exactly three files:

| File | Purpose |
| --- | --- |
| main.js | Bundled plugin logic (includes pdfjs-dist). |
| styles.css | Chat panel and modal styling. |
| manifest.json | Plugin metadata for Obsidian. |

✎ Author

DuckTapeKiller


⚖ License

MIT