Parallel Reader

Inspired by this reading workflow demo.
Features
- Adaptive segmentation — the LLM decides natural topic boundaries. Short sections merge, long ones split. No dependency on markdown headings.
- Scroll-sync — scrolling the editor auto-highlights the matching card on the right.
- Streaming — real-time SSE streaming for OpenAI Chat and Anthropic APIs, so you see progress as it generates.
- 20+ providers — Anthropic, OpenAI, Gemini, OpenRouter, Groq, DeepSeek, Moonshot, Ollama, LM Studio, and more. Plus Claude Code CLI and Codex CLI backends.
- Persistent cache — summaries are cached by content SHA-1 + settings fingerprint. Reopen a note and cards appear instantly. Edits or config changes show a stale banner.
- Rich rendering — cards render through Obsidian's MarkdownRenderer, so tables, bold, code, and wikilinks all work natively.
- Card editing — right-click any card to copy, edit, delete, or jump to source.
- Export — save cards as a Markdown note in your vault, or copy to clipboard.
- Bilingual UI — full Chinese and English support for commands, settings, and notices.
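The cache key described above can be sketched roughly as follows. This is an illustrative sketch only, not the plugin's actual code: the function name, the exact settings fields, and the separator are hypothetical; the README only specifies "content SHA-1 + settings fingerprint".

```typescript
import { createHash } from "crypto";

// Hypothetical sketch: derive a cache key from the note's content plus a
// fingerprint of the settings that affect output. Changing either the
// content or any fingerprinted setting produces a new key, which is what
// triggers the "stale" banner described above.
function cacheKey(
  content: string,
  settings: { provider: string; model: string; lang: string },
): string {
  const fingerprint = JSON.stringify(settings);
  return createHash("sha1")
    .update(content)
    .update("\0") // separator so content/fingerprint boundaries can't collide
    .update(fingerprint)
    .digest("hex");
}
```

Because the key is derived purely from inputs, reopening an unchanged note with unchanged settings hits the cache instantly.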
Quick Start
Step 1: Install the Plugin
- Go to the Releases page and download three files from the latest release: main.js, manifest.json, styles.css
- Open your vault folder, navigate to .obsidian/plugins/ (create it if it doesn't exist), and create a new folder called parallel-reader
- Put the three downloaded files into that folder
- Open Obsidian → Settings → Community plugins → find Parallel Reader → toggle it on
Tip: Can't see the .obsidian folder? On macOS press Cmd+Shift+. in Finder; on Windows enable "Show hidden files" in File Explorer.
Step 2: Set Up Your AI Provider
- In Obsidian, go to Settings → Parallel Reader
- Choose a Provider preset (e.g. Anthropic, OpenAI, DeepSeek, etc.)
- Paste your API Key
- (Optional) Change the Model if you prefer a different one
- Click Test to verify the connection
That's it! Open any note and run the command "Parallel Reader: Generate" from the command palette (Cmd/Ctrl+P).
| Provider | Notes |
|---|---|
| Anthropic | Default, recommended |
| OpenAI | GPT models |
| Google Gemini | Gemini models |
| OpenRouter / Groq / DeepSeek / Moonshot / ... | OpenAI-compatible |
| Ollama / LM Studio | Local models, no API key needed |
| Custom endpoint | Any OpenAI or Anthropic compatible API |
If you have Claude Code or Codex installed locally, you can use them as backends instead of API keys. Just switch the backend in settings — the plugin automatically detects common install locations. If auto-detection fails, you can manually enter the path in settings.
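Auto-detection of a local CLI backend might look something like the sketch below. This is an assumption-laden illustration, not the plugin's implementation: the function name, the candidate paths, and the manual-override behavior are all hypothetical.

```typescript
import { existsSync } from "fs";
import { join } from "path";
import { homedir } from "os";

// Hypothetical sketch: prefer a manually configured path, then probe a
// few common install locations, returning null if nothing is found.
function findCliBackend(binary: string, manualPath?: string): string | null {
  if (manualPath && existsSync(manualPath)) return manualPath;
  const candidates = [
    join(homedir(), ".local", "bin", binary),
    join("/usr/local/bin", binary),
    join("/opt/homebrew/bin", binary),
  ];
  return candidates.find((p) => existsSync(p)) ?? null;
}
```

The manual-path override mirrors the behavior described above: if auto-detection fails, the path entered in settings wins.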
Usage
| Action | Effect |
|---|---|
| Click a card | Jump editor to that section |
| Right-click a card | Context menu: copy, edit, delete, jump to source |
| Scroll the editor | Right-side card auto-highlights |
| Alt+↑ / Alt+↓ | Navigate between cards |
| Enter in summary pane | Jump to active card's source line |
| Ribbon icon | Open the parallel reader pane |
| File context menu | Generate / regenerate / clear cache |
How It Works
The LLM returns structured JSON:
{
"cards": [
{
"title": "Short heading",
"anchor": "Verbatim quote from source for positioning",
"gist": "One-sentence lead-in",
"bullets": ["Supporting detail 1", "Supporting detail 2"]
}
]
}
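In TypeScript terms, that response could be typed roughly as below. The field names come from the JSON above; the interface names themselves are illustrative, not taken from the plugin's source.

```typescript
// Types mirroring the JSON shape above; interface names are illustrative.
interface Card {
  title: string; // short heading
  anchor: string; // verbatim quote from the source, used for positioning
  gist: string; // one-sentence lead-in
  bullets: string[]; // supporting details
}

interface SummaryResponse {
  cards: Card[];
}
```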
Anchor is the key mechanism — a verbatim quote that the plugin locates via indexOf with multi-level fallbacks, keeping scroll-sync working without relying on headings.
Gist + bullets gives both overview and scannable detail — pure prose felt like a wall of text, pure bullets felt fragmented.
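The indexOf-with-fallbacks idea can be sketched as below. This is a minimal illustration of the technique, assuming the fallback levels; the actual plugin's fallback chain may differ.

```typescript
// Hypothetical sketch of anchor resolution: try a verbatim match first,
// then progressively looser fallbacks. Returns a character offset into
// the document, or -1 if every level fails.
function locateAnchor(doc: string, anchor: string): number {
  // Level 1: verbatim match of the quoted anchor.
  let pos = doc.indexOf(anchor);
  if (pos >= 0) return pos;

  // Level 2: whitespace-trimmed match, tolerating stray padding.
  const trimmed = anchor.trim();
  pos = doc.indexOf(trimmed);
  if (pos >= 0) return pos;

  // Level 3: match on the first few words only, tolerating a
  // paraphrased or truncated tail in the model's quote.
  const prefix = trimmed.split(/\s+/).slice(0, 5).join(" ");
  return doc.indexOf(prefix);
}
```

The appeal of this approach is that it degrades gracefully: even when the model's quote isn't perfectly verbatim, a prefix match usually still lands scroll-sync in the right place.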
Development
npm install
npm run dev # watch mode
npm run build # production build
npm run typecheck # TypeScript strict mode
npm run lint # Biome
npm test # build + typecheck + tests
npm run test:unit
npm run test:component
npm run test:contract
npm run test:e2e # packaged plugin + disposable Vault smoke
For CI / release evidence, run the contract gate:
bash .e2e/gate.sh --json # writes .e2e/artifact.json (gitignored)
Add TEST_LIVE=1 to opt into real local Vault / provider checks.