Building a TUI to index and search my coding agent sessions


This is the story of how fast-resume came to life and evolved, as I was trying to search and resume my coding agent sessions more easily across different local CLI agents.

The problem with resuming sessions #

I use many coding agents these days: Claude Code, Codex, OpenCode, Copilot, and more. Sometimes I remember that I, or the agent, mentioned something specific in a previous session, and I want to go back to it.

Most coding agents have a /resume feature now, which allows a session to be reopened with all the state back. While the resume feature works great, finding which session to resume is harder.

The usual limitations:

  • The list of sessions is scoped to the current directory. I often do cross-directory work, so it’s not obvious which directory I was in when working on a given project.
  • You can’t search sessions, or can only search by title, not the full session content (the messages, for example).

That means, for example, that if I remember the agent mentioning a specific subject later in the conversation, it won’t be in the title, so I can’t find it.

Let’s say I have a few sessions about building a TUI program. I remember that in one of the sessions, the agent mentioned textual. I can’t search for textual in the resume view! Also, if I don’t remember the folder and which agent I used, I’m screwed. And some agents don’t have that feature at all.

So I started ripgrep’ing my home folder to find the string I was searching for, then using clues from the session file (directory, timestamp, context) to navigate to the correct directory, /resume, and find the session in question. 😅

Since most coding agents store sessions locally, I started thinking: what if I could automate this grep’ing, wrap it in a nice TUI, and resume in one keypress?

How sessions are stored #

First, to see if this was feasible, I had to understand how sessions are actually stored. Most agents use JSON files, but there are some interesting differences.

Claude Code: one JSONL file per session #

Most agents follow the same pattern as Claude Code: Codex and Copilot CLI use JSONL with similar structures.

Files are stored in ~/.claude/projects/{project_id}/{session_id}.jsonl. JSONL is a format where each JSON object is stored on its own line.

User messages, Claude’s responses, and tool calls are each stored this way. Here is an example of an assistant message:

➜  ~ jq -s '[.[] | select(.type == "assistant")] | last' ~/.claude/projects/-Users-stanislas-lab-LilyGo-AMOLED-Series/15380ff3-2312-430d-94cf-b3ad97d008be.jsonl
{
  "parentUuid": "f670067e-cd90-4515-b5ca-f0f049ddba9b",
  "cwd": "/Users/stanislas/lab/LilyGo-AMOLED-Series",
  "sessionId": "15380ff3-2312-430d-94cf-b3ad97d008be",
  "version": "2.0.75",
  "slug": "fluffy-meandering-anchor",
  "message": {
    "model": "claude-opus-4-5-20251101",
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "Done! There's now a retained message. Your ESP32 will get `5.8ยฐC` immediately on boot.\n\nReboot the device to test it - temp should show instantly now."
      }
    ],
    ...
  },
  "type": "assistant",
  "timestamp": "2026-01-01T13:11:32.832Z"
}

OpenCode: many small JSON files #

OpenCode doesn’t use JSONL but instead independent JSON files. Message content is sharded by session id, message id, and message parts in ~/.local/share/opencode/storage/:

| Directory | Pattern    | Content                                                 |
|-----------|------------|---------------------------------------------------------|
| session/  | ses_*.json | Session metadata: id, title, directory, time.created    |
| message/  | msg_*.json | Message metadata: id, role, session_id                  |
| part/     | *.json     | Message parts: text, reasoning, tool calls, files, etc. |
~/.local/share/opencode/storage/
├── session/
│   └── {project-hash}/
│       └── ses_{session_id}.json
├── message/
│   └── {session_id}/
│       ├── msg_001_{msg_id}.json
│       └── msg_002_{msg_id}.json
└── part/
    └── {msg_id}/
        ├── 001_{part_id}.json
        └── 002_{part_id}.json

This design conceptually makes sense: not having to rewrite or append to a single file might be simpler. But for indexing, it means a lot more filesystem operations. To give you an idea: I used Claude Code possibly 100x more than OpenCode, yet OpenCode has 10x more files (9,847 vs 827). See Stats for more details.
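
To make the cost concrete, here’s roughly what stitching one session’s text back together looks like: one read per message file, plus one per part file. This is only a sketch; the field names ("id", "type", "text") and glob patterns are my assumptions based on the layout above.

import json
from pathlib import Path

STORAGE = Path.home() / ".local/share/opencode/storage"

def load_session_text(session_id: str) -> str:
    """Reassemble a session's text from its message and part shards (sketch)."""
    texts = []
    msg_dir = STORAGE / "message" / session_id
    for msg_file in sorted(msg_dir.glob("msg_*.json")):
        msg = json.loads(msg_file.read_text())          # one read per message
        for part_file in sorted((STORAGE / "part" / msg["id"]).glob("*.json")):
            part = json.loads(part_file.read_text())    # one read per part
            if part.get("type") == "text":
                texts.append(part["text"])
    return "\n".join(texts)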

As I was writing this, it looked like the OpenCode devs were considering changing the storage format, possibly to SQLite.

Vibe: single JSON, rewritten each message #

Vibe stores one JSON file per session in ~/.vibe/logs/session/session_*.json. It is not JSONL. The file contains metadata and the full messages array.

One detail that surprised me: Vibe rewrites the entire file after each user turn. That means the file grows and gets fully serialized on every message, which is simple but doesn’t seem very efficient for long sessions.

Crush: SQLite #

Crush is the only agent that uses SQLite instead of JSON files. Projects are listed in ~/.local/share/crush/projects.json, and each project has its own .crush/crush.db database.

The schema has a sessions table with metadata like title, message count, and cost, and a messages table with role and parts (stored as JSON).

➜  sqlite3 ~/lab/project/.crush/crush.db "SELECT id, role, substr(parts,1,80) FROM messages LIMIT 3"
0a374879...|user|[{"type":"text","data":{"text":"how does this work?"}},{"type":"finish"...
b4bfb280...|assistant|[{"type":"reasoning","data":{"thinking":"The user wants me to...
98d80ffe...|user|[{"type":"text","data":{"text":"Search for all Textual usage"}}...

I’m surprised it’s the only agent using SQLite!

First attempt: fuzzy finding with RapidFuzz #

To search sessions, I started with a naive approach. I defined a common Session type and an adapter protocol to abstract each agent’s storage format:

from dataclasses import dataclass
from datetime import datetime
from typing import Protocol


@dataclass
class Session:
    """Represents a coding agent session."""
    id: str
    agent: str  # "claude", "codex", "opencode", "vibe", etc
    title: str
    directory: str
    timestamp: datetime
    content: str  # Full searchable content


class AgentAdapter(Protocol):
    """Protocol for agent-specific session adapters."""
    name: str

    def find_sessions(self) -> list[Session]: ...
    def get_resume_command(self, session: Session) -> list[str]: ...
    def is_available(self) -> bool: ...

Each adapter implements three methods: find_sessions parses all session files and returns Session objects, get_resume_command returns the shell command to resume a session (claude --resume {id} for Claude, codex resume {id} for Codex), and is_available checks if the agent’s data directory exists.

For example, here’s the core of Claude’s adapter:

class ClaudeAdapter:
    def find_sessions(self):
        sessions = []
        for project_dir in self._sessions_dir.iterdir():
            for session_file in project_dir.glob("*.jsonl"):
                session = self._parse_jsonl(session_file)
                if session:
                    sessions.append(session)
        return sessions

    def get_resume_command(self, session):
        return ["claude", "--resume", session.id]

Adding a new agent means writing one adapter file that implements scanning, parsing, and the resume command; the search engine, TUI, and CLI then all work automatically.

On startup, each adapter would parse its session files and return a list of Session objects. I cached the results in a sessions.json file and used file mtimes to know when to reindex.

For search, I used RapidFuzz because the experience I had in mind was the familiar fuzzy finding of fzf. For each session, I built a searchable string by concatenating the title, directory, and full content:

searchable = f"{session.title} {session.directory} {session.content}"

RapidFuzz’s Weighted Ratio scorer compared the query against every searchable string. This scorer has an interesting backstory, but it basically dispatches to other scorers based on the lengths of the strings.

The problem was that WRatio alone didn’t rank exact matches high enough. Searching for “fix auth bug” might rank “authentication fixes” higher than a session literally titled “fix auth bug”. I added bonuses on top of the fuzzy score: +25 if the query appears as a substring, +15 if all query words are present, and +30 if they appear consecutively. This helped with ranking quality, but performance was still not good enough: every search scanned every session on every keystroke, and the TUI would visibly lag while typing. I want a very reactive TUI.
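
Roughly, the scoring looked like this (a minimal sketch: the bonus logic follows the description above, but the function name and exact checks are illustrative):

from rapidfuzz import fuzz

def score_session(query: str, searchable: str) -> float:
    """Sketch of the WRatio-plus-bonuses scheme described above."""
    q, s = query.lower(), searchable.lower()
    score = fuzz.WRatio(q, s)                    # base fuzzy score, 0-100
    words = q.split()
    if q in s:
        score += 25                              # query appears as a substring
    if all(w in s for w in words):
        score += 15                              # all query words present
    if " ".join(words) in " ".join(s.split()):
        score += 30                              # words appear consecutively
    return score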

Switching to Tantivy #

I needed a proper search engine. I first considered SQLite FTS5, which has a trigram tokenizer for similarity matching, but it works by comparing 3-character substring overlap rather than edit distance, and edit distance is what I’m looking for. I’m a very imprecise typist 😄

I opted for Tantivy, a full-text search library written in Rust, and the one powering Quickwit. Instead of comparing the query against every document at search time, we can use it to build an inverted index upfront: a mapping from terms to the sessions that contain them.

Tantivy’s FuzzyTermQuery uses Levenshtein distance, which is better for actual typos: “teh” matches “the” (distance=1), but wouldn’t match with trigrams since they share no 3-character chunks.

When a session gets indexed, Tantivy tokenizes its content into terms and stores which document IDs contain each term. Searching for “auth bug” means finding documents containing “auth”, finding documents containing “bug”, intersecting the sets, then scoring the matches using BM25.

Luckily, the only “official” bindings for Tantivy are for Python! So I was able to use it directly and very easily in my project.

The schema defines what gets indexed:

schema_builder = tantivy.SchemaBuilder()
schema_builder.add_text_field("id", stored=True, tokenizer_name="raw")
schema_builder.add_text_field("title", stored=True)
schema_builder.add_text_field("content", stored=True)
schema_builder.add_text_field("agent", stored=True, tokenizer_name="raw")
schema_builder.add_float_field("timestamp", stored=True)
# etc

Text fields get tokenized and indexed for search. The raw tokenizer keeps the value as-is without splitting, which is useful for IDs and agent names where “copilot-cli” should stay as one token, not become “copilot” and “cli”. (cf Keyword query syntax)

When the schema changes (adding a field, changing tokenizers), the index needs to be rebuilt. I track a schema version in a file alongside the index and clear everything if it doesn’t match:

SCHEMA_VERSION = 5  # Bump this when schema changes

def _ensure_index(self):
    if self.index_path.exists() and not self._check_version():
        shutil.rmtree(self.index_path)  # Clear stale index
    # ... create index with new schema

It’s not very robust (two concurrent PRs could bump the version to the same number), but it’s good enough for now.

For fuzzy matching, Tantivy supports custom distance for queries. A fuzzy term query with distance 1 matches terms that are one character insertion, deletion, or substitution away from the query term. “atuh” matches “auth”, “bugg” matches “bug”.

fuzzy_title = tantivy.Query.fuzzy_term_query(
    schema, "title", term, distance=1, prefix=True
)

The prefix=True flag also matches terms that start with the query, so “au” matches “auth” and “authentication”.

I ran into the same ranking problem as with RapidFuzz: fuzzy matches sometimes outranked exact matches. The fix was a hybrid query that boosts exact matches:

def _build_hybrid_query(self, query, index, schema):
    # Exact match with 5x boost
    exact_query = index.parse_query(query, ["title", "content"])
    boosted_exact = tantivy.Query.boost_query(exact_query, 5.0)

    # Fuzzy matches for typo tolerance
    fuzzy_parts = []
    for term in query.split():
        fuzzy_title = tantivy.Query.fuzzy_term_query(schema, "title", term, distance=1, prefix=True)
        fuzzy_content = tantivy.Query.fuzzy_term_query(schema, "content", term, distance=1, prefix=True)
        fuzzy_parts.append((tantivy.Occur.Should, fuzzy_title))
        fuzzy_parts.append((tantivy.Occur.Should, fuzzy_content))

    # Either exact OR fuzzy can match, but exact scores 5x higher
    return tantivy.Query.boolean_query([
        (tantivy.Occur.Should, boosted_exact),
        (tantivy.Occur.Should, tantivy.Query.boolean_query(fuzzy_parts)),
    ])

The performance has been quite good with Tantivy. My use case is pretty basic and the dataset is very small in FTS terms, so I haven’t looked into performance optimization too much. But queries complete in a handful of milliseconds, which is perfect!

Incremental indexing #

The first version of fast-resume rebuilt the entire index when any source directory changed. Adding one new Claude session meant re-parsing hundreds of Codex sessions that hadn’t changed.

The fix was tracking modification times per session. Tantivy stores each session’s mtime alongside its content:

schema_builder.add_float_field("mtime", stored=True)

On startup, fast-resume asks the index for all known sessions and their mtimes. Each adapter compares file mtimes against what’s known and only re-parses what changed or is new:

class BaseSessionAdapter:
    def find_sessions_incremental(self, known):
        current_files = self._scan_session_files()  # Subclass implements

        new_or_modified = []
        for session_id, (path, mtime) in current_files.items():
            if self._needs_reparse(session_id, mtime, known):
                session = self._parse_session_file(path)  # Subclass implements
                if session:
                    new_or_modified.append(session)

        deleted_ids = [sid for sid in known if sid not in current_files]
        return new_or_modified, deleted_ids

If a session’s mtime is newer than what’s in the index, re-parse it. If a session exists in the index but not on disk, mark it deleted. Everything else stays untouched.
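
The check itself is simple; something like this sketch (assuming known maps session IDs to the mtimes recorded at index time):

def needs_reparse(session_id: str, mtime: float, known: dict[str, float]) -> bool:
    """New session (not in the index), or file modified since last indexing."""
    indexed_mtime = known.get(session_id)
    return indexed_mtime is None or mtime > indexed_mtime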

Updates are atomic: delete the old documents and add the new ones in a single transaction before committing. This avoids a window where the session is missing from the index:

def update_sessions(self, sessions):
    writer = index.writer()
    for session in sessions:
        writer.delete_documents_by_term("id", session.id)
    for session in sessions:
        writer.add_document(tantivy.Document(...))
    writer.commit()

The adapters run in parallel:

with ThreadPoolExecutor(max_workers=len(self.adapters)) as executor:
    results = executor.map(get_incremental, self.adapters)
    for new_or_modified, deleted_ids in results:
        all_new_or_modified.extend(new_or_modified)
        all_deleted_ids.extend(deleted_ids)

If nothing changed (the common case) the whole process is just reading mtimes and comparing numbers. In any case, this happens in the background while the TUI starts instantly (see streaming updates).

Fast JSON parsing with orjson #

Most adapters spend their time parsing JSON. Claude sessions are JSONL files with hundreds of lines. OpenCode has thousands of small JSON files spread across directories. Even with incremental indexing, the initial index build parses everything.

To try to gain a bit of performance, I swapped the standard json module for orjson, a JSON library written in Rust that’s supposed to be a lot faster.

orjson’s loads accepts both strings and bytes, and it’s faster with bytes, so we can read the file in binary mode and pass the raw bytes in directly, without decoding to a string first.
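
In practice that looks like this (session_file is a placeholder path for the example):

import orjson
from pathlib import Path

session_file = Path("session.jsonl")  # placeholder

# Open in binary mode: orjson.loads accepts bytes directly,
# so each line is parsed without decoding to str first.
with open(session_file, "rb") as f:
    records = [orjson.loads(line) for line in f if line.strip()]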

The TUI #

The TUI is built with Textual, a Python framework for terminal interfaces. I discovered it with Mistral’s vibe coding agent. This and uv are the reason I wanted this project to be Python, even though I usually pick Go for CLIs.

Textual provides a layout system, widgets, reactive state, and async workers: everything needed for a fully featured, snappy TUI.

Reactive state #

The main screen has three parts: a search input at the top, a results table in the middle, and a preview pane at the bottom. Everything is reactive: changing state automatically updates the UI, a pattern I like and am used to from web frameworks.

class FastResumeApp(App):
    show_preview: reactive[bool] = reactive(True)
    selected_session: reactive[Session | None] = reactive(None)
    active_filter: reactive[str | None] = reactive(None)
    is_loading: reactive[bool] = reactive(True)
    search_query: reactive[str] = reactive("")

Progressive updates #

On startup, results load instantly from the existing index. In parallel, all adapters compare file mtimes to find new or modified sessions and index them in the background. Each time an adapter finishes, the results table refreshes to include the newly indexed sessions. On first run or after a schema version bump, the index is empty so results populate progressively as adapters complete.

with ThreadPoolExecutor(max_workers=len(self.adapters)) as executor:
    futures = {executor.submit(get_incremental, a): a for a in self.adapters}
    for future in as_completed(futures):
        new_or_modified, deleted_ids = future.result()
        self._index.update_sessions(new_or_modified)
        on_progress()  # Notify TUI

The TUI runs this off the main thread using Textual’s @work decorator. Each time an adapter finishes indexing, on_progress re-runs the current search query against the updated index, so newly indexed sessions that match appear immediately:

@work(exclusive=True, thread=True)
def _do_streaming_load(self):
    def on_progress():
        sessions = self.search_engine.search(self.search_query, ...)
        self.call_from_thread(self._update_results_streaming, sessions)

    self.search_engine.index_sessions_parallel(on_progress, on_error=on_error)

call_from_thread marshals updates back to the main thread for UI changes.

Search is debounced to improve responsiveness (when holding delete, for example); otherwise the TUI doesn’t have enough time to re-render between searches and it feels laggy.

@on(Input.Changed, "#search-input")
def on_search_changed(self, event: Input.Changed):
    value = event.value
    if self._search_timer:
        self._search_timer.stop()

    self._search_timer = self.set_timer(0.05, lambda: setattr(self, "search_query", value))

def watch_search_query(self, query: str):
    self._do_search(query)

The watch_search_query method is a Textual watcher: it gets called automatically when search_query changes. Setting the reactive variable triggers the search.

Search also runs in a background thread so the UI stays responsive while Tantivy works:

@work(exclusive=True, thread=True)
def _do_search(self, query: str):
    start_time = time.perf_counter()
    sessions = self.search_engine.search(query, agent_filter=self.active_filter, limit=100)
    elapsed_ms = (time.perf_counter() - start_time) * 1000
    self.call_from_thread(self._update_results, sessions, elapsed_ms)

The query time gets displayed next to the search box; it’s surprisingly variable, from ~0.5ms to ~50ms on my laptop. But it feels pretty snappy!

Navigation works with up/down, but also j and k. shift+tab moves from search to preview, / focuses the search bar again, and return resumes the selected session. Scrolling also works with the mouse. You can resize the preview with + and - or hide it entirely with Ctrl+backtick.

Search terms are highlighted fzf-style in the results table (title, directory) and in the preview pane, using Rich’s Text.stylize(). One limitation: Tantivy returns matching documents but doesn’t expose which terms actually matched. So if you search “atuh” and it fuzzy-matches “auth”, only “atuh” gets highlighted, not “auth”. I couldn’t find a way to get the expanded terms from Tantivy.
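
The highlighting itself is just literal matching (a simplified sketch; the real styling and helper name differ):

from rich.text import Text

def highlight_terms(line: str, query: str) -> Text:
    """Stylize every literal occurrence of each query term."""
    text = Text(line)
    lower = line.lower()
    for term in query.lower().split():
        start = 0
        while (idx := lower.find(term, start)) != -1:
            text.stylize("bold yellow", idx, idx + len(term))
            start = idx + len(term)
    return text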

Agent logos #

Since modern terminals support inline images through protocols like Sixel, I thought we could include coding agent logos to make it look nicer. The textual-image library handles terminal detection and rendering. Unfortunately, it doesn’t work with vhs, so I have to record demos manually!

Age color gradient #

Session timestamps are colored based on age: green for recent, fading through yellow and orange to gray for old. Exponential decay maps time to a 0-1 value, which compresses older sessions together:

decay_rate = -math.log(1 - 0.3) / 24  # 24 hours hits t=0.3
t = 1 - math.exp(-decay_rate * age_hours)

Then t interpolates through color stops (green → yellow → orange → gray). A session from an hour ago looks noticeably different from one from yesterday, but three months and six months both just look “old”.
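
The interpolation itself is plain linear blending between RGB stops (a sketch: these stop positions and colors are made up, not the actual palette):

# Illustrative stops (t, RGB); the real palette differs
STOPS = [(0.0, (0, 200, 0)), (0.4, (220, 200, 0)), (0.7, (230, 140, 0)), (1.0, (128, 128, 128))]

def color_for(t: float) -> tuple[int, int, int]:
    """Linearly blend between the two stops surrounding t."""
    for (t0, c0), (t1, c1) in zip(STOPS, STOPS[1:]):
        if t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(round(a + (b - a) * f) for a, b in zip(c0, c1))
    return STOPS[-1][1]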

Keyword query syntax #

Plain text search works fine for most queries, but sometimes you want to narrow results by agent or time. Rather than building a separate filter UI, I added keyword syntax directly in the search box. Type agent:claude to filter to Claude sessions, date:today for today’s sessions, dir:my-project to match directory paths.

Textual’s Suggester provides autocomplete as you type: agent:cl suggests claude, date:to suggests today. It also handles negation, so agent:!co suggests !codex.

The parser extracts keywords from the query using a regex, handling keyword:value pairs, quoted values with spaces like dir:"my project", and negation with - or !. Whatever doesn’t match a keyword becomes free-text that goes to Tantivy.
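
A simplified version of that extraction might look like this (a sketch only; the real grammar handles more keywords and edge cases):

import re

# Optional '-' negation, a known keyword, then a quoted or bare value
KEYWORD_RE = re.compile(r'(-?)\b(agent|dir|date):("([^"]*)"|\S+)')

def parse_query(query: str) -> tuple[dict[str, list[str]], str]:
    filters: dict[str, list[str]] = {}

    def grab(m: re.Match) -> str:
        value = m.group(4) if m.group(4) is not None else m.group(3)
        prefix = "!" if m.group(1) else ""      # normalize '-' negation to '!'
        filters.setdefault(m.group(2), []).append(prefix + value)
        return ""                                # strip keyword from free text

    free_text = KEYWORD_RE.sub(grab, query).strip()
    return filters, free_text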

Agent and directory filters support multiple values: agent:claude,codex matches either agent, agent:claude,!codex means Claude but not Codex. Date filters have their own mini-language: date:today, date:yesterday, date:<1h (within the last hour), date:>2d (older than two days).

These parsed filters translate to Tantivy queries:

| Keyword syntax     | Tantivy query                                 |
|--------------------|-----------------------------------------------|
| agent:claude       | term_query("agent", "claude")                 |
| agent:claude,codex | term_set_query("agent", ["claude", "codex"])  |
| -agent:vibe        | MustNot(term_query("agent", "vibe"))          |
| dir:myproject      | regex_query("directory", "(?i).*myproject.*") |
| date:<1h           | range_query("timestamp", lower=cutoff)        |

[Screenshot: keyword filters in the search bar, with agent:claude highlighted]
Keywords in action: they’re highlighted when used, and crossed out and ignored when invalid.

Resuming a session #

When you press Enter on a session, the TUI doesn’t directly exec the resume command. Instead it stores the command and directory, exits cleanly, and returns them to the CLI wrapper:

def _do_resume(self, yolo: bool):
    self._resume_command = self.search_engine.get_resume_command(self.selected_session, yolo=yolo)
    self._resume_directory = self.selected_session.directory
    self.exit()

The CLI then uses os.execvp to replace itself with the agent’s resume command:

if resume_cmd:
    if resume_dir:
        os.chdir(resume_dir)
    os.execvp(resume_cmd[0], resume_cmd)

execvp replaces the current process entirely: same PID, same terminal, but now running Claude or Codex instead of fast-resume. This is cleaner than spawning a child process because the resumed agent owns the terminal directly. Ctrl+C goes to the agent, not to a wrapper script.

The directory change happens first because most agents expect to be run from the project directory.

Yolo mode #

Some agents support “yolo mode” to automatically approve edits and tool calls; Claude has --dangerously-skip-permissions, for example. But the flag applies to the current claude instance, not to the session: if you start claude without it, you can’t resume a past session in yolo mode, even if that session originally ran with the flag. When you resume a session, fast-resume can detect whether it was originally started in yolo mode and offer to resume it the same way.

Adapters that parse session files look for yolo indicators. Codex stores approval policy in a turn_context record:

if payload.get("approval_policy") == "never":
    yolo = True
if payload.get("sandbox_policy", {}).get("mode") == "danger-full-access":
    yolo = True

Vibe stores it directly in session metadata:

yolo = metadata.get("auto_approve", False)

The yolo flag gets indexed alongside each session. When you resume, the TUI checks in order: fast-resume’s own --yolo flag overrides everything; otherwise the stored session yolo state is used; and if the adapter supports yolo but the session’s state is unknown, a modal asks the user.
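
In pseudocode, the precedence looks like this (a sketch; the parameter names are illustrative):

from typing import Callable

def resolve_yolo(
    cli_yolo: bool | None,
    session_yolo: bool | None,
    adapter_supports_yolo: bool,
    ask_user: Callable[[], bool],
) -> bool:
    """Sketch of the precedence described above."""
    if cli_yolo is not None:
        return cli_yolo               # fast-resume's --yolo flag wins
    if session_yolo is not None:
        return session_yolo           # yolo state detected at indexing time
    if adapter_supports_yolo:
        return ask_user()             # supported but unknown: show a modal
    return False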

Stats #

Since we’re indexing all sessions across agents, we get analytics as a bonus. fr --stats gives you a breakdown of your session history:

➜ fr --stats

Index Statistics

  Total sessions          1152
  Total messages          29,676
  Avg messages/session    25.8
  Index size              18.8 MB
  Index location          /Users/stanislas/.cache/fast-resume/tantivy_index
  Date range              2023-11-15 to 2026-01-16

Data by Agent

┏━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Agent          ┃ Files ┃     Disk ┃ Sessions ┃ Messages ┃  Content ┃ Data Directory                                                                ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ claude         │   832 │ 663.3 MB │      688 │   23,066 │   6.4 MB │ ~/.claude/projects                                                            │
│ copilot-vscode │   196 │ 146.1 MB │      192 │      960 │   1.4 MB │ ~/Library/Application Support/Code/User/globalStorage/emptyWindowChatSessions │
│ opencode       │ 25588 │ 182.4 MB │      141 │    4,983 │   1.3 MB │ ~/.local/share/opencode/storage                                               │
│ codex          │   129 │  31.7 MB │      106 │      468 │   1.1 MB │ ~/.codex/sessions                                                             │
│ vibe           │    12 │ 858.2 kB │       12 │      138 │ 380.0 kB │ ~/.vibe/logs/session                                                          │
│ crush          │     3 │   1.0 MB │        7 │       44 │  15.2 kB │ ~/.local/share/crush                                                          │
│ copilot-cli    │     6 │ 422.1 kB │        6 │       17 │   7.1 kB │ ~/.copilot/session-state                                                      │
└────────────────┴───────┴──────────┴──────────┴──────────┴──────────┴───────────────────────────────────────────────────────────────────────────────┘

Activity by Day

 Mon   ██████████████         140
 Tue   ██████████████████     175
 Wed   █████████████          126
 Thu   ███████████████████    182
 Fri   ███████████████████    189
 Sat   ████████████████████   190
 Sun   ███████████████        150

Activity by Hour

  0h ▅▁        ▄▆▅▃▂▃▃▃▄▄▃▃▆█ 23h
  Peak hours: 23:00 (130), 22:00 (104), 11:00 (98)

Top Directories
[...]

Give it a try #

This was a fun project! It was a good occasion to try a new framework for TUIs and use an in-process search engine to keep things snappy. I’m pretty happy with the result!

I published it to PyPI, so you can try or install it with uv:

# Run directly
uvx --from fast-resume fr

# Or install permanently
uv tool install fast-resume
fr

And the code is on GitHub.