Understand your codebase — ranked, related, summarized, and kept up to date automatically.
A TypeScript-based MCP server and standalone daemon that ranks files by importance, tracks bidirectional dependencies, detects circular dependency chains, autonomously maintains AI-generated summaries, concepts, and change impact assessments — and keeps all of that metadata fresh in the background as your codebase changes.
FileScopeMCP is a fully autonomous file intelligence platform. Once pointed at a project it:
- Scans the codebase via a streaming async directory walker and builds a dependency graph with 0–10 importance scores for every file.
- Watches the filesystem. When files change, it incrementally updates dependency lists and importance scores, then detects semantic changes via tree-sitter AST diffing (TS/JS) or LLM-powered diff analysis (all other languages), and propagates staleness through the dependency graph via the cascade engine.
- Runs a background LLM pipeline that auto-generates summaries, key concepts, and change impact assessments for stale files — keeping structured metadata current without any manual work.
- On-demand mtime-based freshness checks detect files that changed while the server was down, so metadata is never silently stale.
All of this information is exposed to your AI assistant through the Model Context Protocol so it always has accurate, up-to-date context about your codebase structure.
### File Importance Ranking
- Rank every file on a 0–10 scale based on its role in the dependency graph.
- Weighted formula considers incoming dependents, outgoing dependencies, file type, location, and name significance.
- Instantly surface the most critical files in any project.
### Dependency Tracking
- Bidirectional dependency relationships: which files import a given file (dependents) and which files it imports (dependencies).
- Distinguishes local file dependencies from package dependencies.
- Multi-language support: Python, JavaScript, TypeScript, C/C++, Rust, Lua, Zig, PHP, C#, Java, Go, Ruby.
### Circular Dependency Detection
- Detects all strongly connected components (circular dependency groups) using iterative Tarjan's SCC algorithm.
- Project-wide scan via `detect_cycles` or per-file query via `get_cycles_for_file`.
- Identifies exactly which files participate in each cycle, helping untangle tight coupling.
### Autonomous Background Updates
- Filesystem watcher detects `add`, `change`, and `unlink` events in real time.
- Incremental updates: re-parses only the affected file, diffs old vs. new dependency lists, patches the reverse-dependency map, and recalculates importance — no full rescan.
- Startup integrity sweep detects files added, deleted, or modified while the server was offline and heals the database before accepting requests.
- Per-file mtime-based lazy validation on read — see Freshness Validation.
- All mutations are serialized through an async mutex to prevent concurrent corruption.
- Per-event-type enable/disable and `autoRebuildTree` master switch.
- Semantic change detection classifies what changed before triggering cascade — avoids unnecessary LLM calls.
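The mutex-based serialization mentioned above can be sketched as a promise-chaining async mutex. This is an illustrative model, not the project's actual `AsyncMutex` implementation — the class and method names here are assumptions:

```typescript
// Minimal promise-chaining mutex: each caller waits for the previous
// holder's release before entering the critical section.
class AsyncMutex {
  private tail: Promise<void> = Promise.resolve();

  // Run fn exclusively; callers are queued in arrival order.
  async runExclusive<T>(fn: () => Promise<T> | T): Promise<T> {
    const prev = this.tail;
    let release!: () => void;
    this.tail = new Promise<void>((res) => (release = res));
    await prev; // wait for all earlier holders to release
    try {
      return await fn(); // critical section
    } finally {
      release(); // let the next queued caller in
    }
  }
}
```

In this model, the watcher callback and the startup sweep would both wrap their tree mutations in `runExclusive`, so their database writes can never interleave.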
### File Summaries
- Background LLM auto-generates summaries for files after they change.
- Manual override via `set_file_summary` — your summary is preserved until the file changes again.
- Summaries persist across server restarts in SQLite.
### SQLite Storage
- All data stored in `.filescope.db` in the project root using SQLite with WAL mode.
- Type-safe schema via drizzle-orm: `files`, `file_dependencies`, `llm_jobs`, `schema_version`, and `llm_runtime_state` tables.
- Transparent auto-migration: existing JSON tree files are automatically imported on first run — no manual migration step.
### Semantic Change Detection
- tree-sitter AST diffing for TypeScript and JavaScript files — fast, accurate, and token-free.
- Classifies changes as: `body-only` (function internals only), `exports-changed` (public API changed), `types-changed` (type signatures changed), or `unknown`.
- LLM-powered diff fallback for all other languages (Python, Rust, C/C++, Go, Ruby, etc.).
- Change classification drives the cascade engine — body-only changes skip dependent propagation entirely.
### Cascade Engine
- BFS staleness propagation through the dependency graph when exports or types change.
- Per-field granularity: marks `summary`, `concepts`, and `change_impact` fields stale independently.
- Circular dependency protection via visited set — no infinite loops.
- Depth cap of 10 levels prevents runaway propagation on deeply nested graphs.
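The propagation just described — BFS over the reverse-dependency graph with a visited set and a depth cap — can be sketched as follows. This is a simplified model, not the project's actual cascade engine:

```typescript
// file -> files that import it (its dependents)
type Graph = Map<string, string[]>;

// Breadth-first staleness propagation with cycle protection and a depth cap.
function cascade(dependents: Graph, changed: string, maxDepth = 10): Set<string> {
  const stale = new Set<string>([changed]);
  let frontier = [changed];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const file of frontier) {
      for (const dep of dependents.get(file) ?? []) {
        if (!stale.has(dep)) { // visited set: cycles can't loop forever
          stale.add(dep);
          next.push(dep);
        }
      }
    }
    frontier = next;
  }
  return stale;
}
```

A body-only change would skip this entirely and mark just the changed file stale.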
### Background LLM Pipeline
- Auto-generates summaries, concepts (functions, classes, interfaces, exports, purpose), and change impact (risk level, affected areas, breaking changes) for stale files.
- Priority-ordered job queue: interactive (tier 1) > cascade (tier 2) > background (tier 3).
- Token budget limits and per-minute rate limiting prevent runaway API costs.
- Recovers orphaned `in_progress` jobs on restart — no stuck jobs after crashes.
- Toggle on/off at runtime via the `toggle_llm` MCP tool or config file.
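The tier ordering above can be modeled as a simple comparator — lower tier wins, and ties break on creation time so each tier stays FIFO. The field names here are illustrative; the real queue lives in the `llm_jobs` table:

```typescript
interface LlmJob {
  id: number;
  tier: 1 | 2 | 3;   // 1 = interactive, 2 = cascade, 3 = background
  createdAt: number; // epoch ms
}

// Pick the next job: lowest tier first, then oldest within the tier.
function nextJob(jobs: LlmJob[]): LlmJob | undefined {
  return [...jobs].sort(
    (a, b) => a.tier - b.tier || a.createdAt - b.createdAt,
  )[0];
}
```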
### Multi-Provider LLM Support
- Anthropic (Claude) via `@ai-sdk/anthropic` — uses the `ANTHROPIC_API_KEY` environment variable.
- OpenAI-compatible via `@ai-sdk/openai-compatible` — works with Ollama, vLLM, and any OpenAI-compatible API.
- Configurable model, baseURL, and apiKey per project in `config.json`.
- Local-first default: Ollama on `localhost:11434` with `qwen2.5-coder:14b`.
- Structured output with JSON repair fallback for local models that don't follow schemas perfectly.
### Custom Exclusion Patterns
- `.filescopeignore` file in the project root — uses gitignore syntax (via the `ignore` package) to exclude files from scanning and watching.
- `exclude_and_remove` MCP tool — adds glob patterns at runtime; patterns are persisted to `config.json` so they survive restarts.
- Default exclusions for `node_modules`, `.git`, `dist`, `build`, `coverage`, and `.filescope.*` runtime artifacts.
### Daemon Mode
- Runs as a standalone daemon (`--daemon --base-dir=<path>`) for 24/7 operation without an MCP client connected.
- PID file guard (`.filescope.pid` in the project root) prevents concurrent daemons on the same project.
- Graceful shutdown on SIGTERM/SIGINT — flushes pending jobs before exit.
- File-only logging to `.filescope-daemon.log` in the project root — no stdout pollution.
- Node.js 22+ — required. Earlier versions may work but are untested. Download from nodejs.org.
- npm — comes with Node.js.
- Native build tools (usually optional) — `better-sqlite3` and `tree-sitter` ship prebuilt binaries for most platforms. If prebuilds aren't available for your OS/arch, `npm install` will fall back to compiling from source, which requires:
  - Linux: `python3`, `make`, `gcc` (e.g., `sudo apt install build-essential python3`)
  - macOS: Xcode Command Line Tools (`xcode-select --install`)
  - Windows: Visual Studio Build Tools with C++ workload
1. **Clone this repository**
2. **Build and register:**
Linux / macOS / WSL:

```sh
./build.sh
```

Windows:

```bat
build.bat
```
Both scripts will:
- Install npm dependencies
- Compile TypeScript to `dist/`
- Generate `mcp.json` for Cursor AI
- Register the server with Claude Code (`~/.claude.json`)
FileScopeMCP includes an automated setup script for Ollama:
```sh
./setup-llm.sh
```

This script will:
- Install Ollama if not present (supports Linux, macOS, WSL)
- Detect GPU hardware (NVIDIA, AMD, Metal) and configure acceleration
- Pull the default model (`qwen2.5-coder:14b`)
- Verify the installation
To check status or use a different model:
```sh
./setup-llm.sh --status           # Check Ollama and model status
./setup-llm.sh --model codellama  # Pull a different model
```

The build script registers FileScopeMCP automatically. To register (or re-register) without rebuilding:

```sh
./install-mcp-claude.sh
```

The server is registered globally — no `--base-dir` is needed. When you start a session, tell Claude to run `set_project_path` pointing at your project. This builds the initial file tree, starts the file watcher, and runs the startup integrity sweep:
```
set_project_path(path: "/path/to/your/project")
```
After that you can optionally enable the background LLM pipeline:
```
toggle_llm(enabled: true)
```
Build inside WSL, then copy `mcp.json` to your project's `.cursor/` directory:

```json
{
  "mcpServers": {
    "FileScopeMCP": {
      "command": "wsl",
      "args": ["-d", "Ubuntu-24.04", "/home/yourname/FileScopeMCP/run.sh", "--base-dir=${projectRoot}"],
      "transport": "stdio",
      "disabled": false,
      "alwaysAllow": []
    }
  }
}
```

On native Windows:

```json
{
  "mcpServers": {
    "FileScopeMCP": {
      "command": "node",
      "args": ["C:\\FileScopeMCP\\dist\\mcp-server.js", "--base-dir=${projectRoot}"],
      "transport": "stdio",
      "disabled": false,
      "alwaysAllow": []
    }
  }
}
```

On Linux / macOS:

```json
{
  "mcpServers": {
    "FileScopeMCP": {
      "command": "node",
      "args": ["/path/to/FileScopeMCP/dist/mcp-server.js", "--base-dir=${projectRoot}"],
      "transport": "stdio"
    }
  }
}
```

To run FileScopeMCP as a standalone background process (no MCP client required):

```sh
node dist/mcp-server.js --daemon --base-dir=/path/to/project
```

The daemon watches the project, runs the startup integrity sweep, and keeps the LLM pipeline active 24/7. Logs are written to `.filescope-daemon.log` in the project root.
After installation, walk through these steps to verify everything is working:
1. Verify the build succeeded:

   ```sh
   ls dist/mcp-server.js   # Should exist after build
   ```

2. Verify Claude Code registration:

   ```sh
   claude mcp list   # Should show FileScopeMCP in the list
   ```

3. Start a Claude Code session and initialize:
```
set_project_path(path: "/path/to/your/project")
```

You should see: `Project path set to /path/to/your/project. File tree built and saved to SQLite.`
4. Confirm files are tracked:
```
find_important_files(limit: 5)
```
You should see a list of your most important files with importance scores.
5. (Optional) Enable the LLM pipeline:
```
toggle_llm(enabled: true)
```
Requires Ollama running locally (default) or a configured LLM provider — see Configuration.
6. (Optional) Check LLM status:
```
get_llm_status()
```

Should show `running: true` and token counters.
7. Add generated files to your `.gitignore`:

```gitignore
# FileScopeMCP
.filescope.db
.filescope.db-wal
.filescope.db-shm
.filescope.pid
.filescope-daemon.log
mcp-debug.log
```

The tool scans source code for import statements and other language-specific patterns:
- Python: `import` and `from ... import` statements
- JavaScript/TypeScript: `import` statements, `require()` calls, and dynamic `import()` expressions
- C/C++/Header: `#include` directives
- Rust: `use` and `mod` statements
- Lua: `require` statements
- Zig: `@import` directives
- PHP: `require`, `require_once`, `include`, `include_once`, and `use` statements
- C#: `using` directives
- Java: `import` statements
- Go: `import` statements with `go.mod` module resolution
- Ruby: `require` and `require_relative` statements with `.rb` extension probing
Files are assigned importance scores (0–10) based on a weighted formula that considers:
- Number of files that import this file (dependents) — up to +3
- Number of files this file imports (dependencies) — up to +2
- Number of package dependencies imported — up to +1
- File type and extension — TypeScript/JavaScript get higher base scores; PHP +2; JSON config files (`package.json`, `tsconfig.json`) +3
- Location in the project structure — files in `src/` or `app/` are weighted higher
- File naming — `index`, `main`, `server`, `app`, `config`, `types`, etc. receive additional points
The formula is evaluated from scratch on every calculation, so calling `recalculate_importance` is always idempotent. Manual overrides set via `set_file_importance` will be overwritten when importance is recalculated.
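A toy version of such a weighted formula can make the capped-contribution idea concrete. The exact weights, regexes, and field names here are illustrative assumptions, not the project's real coefficients:

```typescript
interface FileStats {
  dependents: number;   // files that import this file
  dependencies: number; // local files this file imports
  packageDeps: number;  // package imports
  path: string;
}

// Illustrative weighting: each factor contributes up to a small cap,
// and the total is clamped to the 0-10 scale.
function importanceScore(f: FileStats): number {
  let score = 0;
  score += Math.min(f.dependents, 3);   // up to +3 for dependents
  score += Math.min(f.dependencies, 2); // up to +2 for dependencies
  score += Math.min(f.packageDeps, 1);  // up to +1 for package deps
  if (/\.(ts|js)$/.test(f.path)) score += 1; // TS/JS base boost
  if (/\/(src|app)\//.test(f.path)) score += 1; // location boost
  if (/\/(index|main|server|app|config|types)\./.test(f.path)) score += 1; // name boost
  return Math.min(score, 10);
}
```

Because the score is recomputed from these inputs alone, running it twice on the same graph always yields the same result — which is why recalculation is idempotent.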
When a file event fires, the update pipeline is:
1. Debounce — events are coalesced per `filePath:eventType` key (default 2 s) to avoid thrashing on rapid saves.
2. Acquire mutex — all tree mutations are serialized through `AsyncMutex` so the watcher and the startup sweep can never corrupt the database simultaneously.
3. Semantic change detection — tree-sitter AST diffing for TS/JS files classifies the change (body-only, exports-changed, types-changed, unknown). LLM-powered diff analysis handles all other languages.
4. Incremental update — re-parses the file, diffs old vs. new dependency lists, patches `dependents[]` on affected nodes, and recalculates importance.
5. Cascade engine — if exports or types changed, BFS propagates staleness to all transitive dependents; if body-only, only the changed file is marked stale.
6. LLM pipeline — picks up stale files and regenerates summaries, concepts, and change impact assessments in priority order.
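The per-key event coalescing at the start of this pipeline can be sketched as a map of pending timers, one per `filePath:eventType` key. A simplified model with illustrative names, not the project's actual debouncer:

```typescript
// Coalesce rapid events per `${filePath}:${eventType}` key: only the
// last event within the window actually triggers the handler.
function makeDebouncer(
  handler: (filePath: string, eventType: string) => void,
  windowMs = 2000,
) {
  const timers = new Map<string, ReturnType<typeof setTimeout>>();
  return (filePath: string, eventType: string) => {
    const key = `${filePath}:${eventType}`;
    const pending = timers.get(key);
    if (pending) clearTimeout(pending); // drop the superseded earlier event
    timers.set(key, setTimeout(() => {
      timers.delete(key);
      handler(filePath, eventType);
    }, windowMs));
  };
}
```

Keying on both path and event type means a `change` burst on one file collapses to a single update, while events for other files (or a `unlink` on the same file) are unaffected.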
FileScopeMCP uses two complementary strategies to keep metadata current:
- Startup sweep — runs once when the server initializes. Compares every tracked file against the filesystem to detect adds, deletes, and modifications that occurred while the server was offline. Heals the database before accepting any MCP requests.
- Per-file mtime check — when you query a file through MCP tools (`get_file_importance`, `get_file_summary`, `read_file_content`), the system compares the file's current mtime against the last recorded value. If the file changed, it's immediately flagged stale and queued for LLM re-analysis. This catches changes missed by the watcher without the overhead of periodic full-tree scans.
Circular dependencies are detected using an iterative implementation of Tarjan's strongly connected components algorithm:
- Loads all local import edges from SQLite in a single batch query.
- Runs Tarjan's SCC on the directed dependency graph.
- Filters out trivial SCCs (single files with no self-loop) to return only actual cycles.
- Each cycle group lists the participating files, making it easy to identify and break circular imports.
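The iterative formulation replaces recursion with an explicit work stack, so deep dependency chains cannot overflow the call stack. The sketch below follows the steps above — it is an illustrative implementation of Tarjan's SCC, not the project's actual code:

```typescript
type DepGraph = Map<string, string[]>; // file -> local files it imports

// Iterative Tarjan SCC. Returns only non-trivial groups: SCCs with more
// than one file, or a single file that imports itself.
function findCycles(graph: DepGraph): string[][] {
  const index = new Map<string, number>();
  const lowlink = new Map<string, number>();
  const onStack = new Set<string>();
  const stack: string[] = [];
  const cycles: string[][] = [];
  let counter = 0;

  for (const start of graph.keys()) {
    if (index.has(start)) continue;
    // Each work frame: [node, index of the next child to visit]
    const work: [string, number][] = [[start, 0]];
    while (work.length > 0) {
      const frame = work[work.length - 1];
      const [node, childIdx] = frame;
      if (childIdx === 0) { // first visit to this node
        index.set(node, counter);
        lowlink.set(node, counter);
        counter++;
        stack.push(node);
        onStack.add(node);
      }
      const children = graph.get(node) ?? [];
      if (childIdx < children.length) {
        frame[1]++; // advance the child pointer before "recursing"
        const child = children[childIdx];
        if (!index.has(child)) {
          work.push([child, 0]); // simulated recursive call
        } else if (onStack.has(child)) {
          lowlink.set(node, Math.min(lowlink.get(node)!, index.get(child)!));
        }
      } else {
        work.pop(); // simulated return: fold lowlink into the parent
        if (work.length > 0) {
          const parent = work[work.length - 1][0];
          lowlink.set(parent, Math.min(lowlink.get(parent)!, lowlink.get(node)!));
        }
        if (lowlink.get(node) === index.get(node)) {
          const scc: string[] = []; // node is an SCC root: pop its members
          let member: string;
          do {
            member = stack.pop()!;
            onStack.delete(member);
            scc.push(member);
          } while (member !== node);
          // Keep real cycles; drop trivial single-file SCCs without a self-loop.
          if (scc.length > 1 || (graph.get(scc[0]) ?? []).includes(scc[0])) {
            cycles.push(scc);
          }
        }
      }
    }
  }
  return cycles;
}
```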
The system handles various path formats to ensure consistent file identification:
- Windows and Unix path formats
- Absolute and relative paths
- URL-encoded paths
- Cross-platform compatibility
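One way to collapse those formats into a single canonical key is to decode, flip separators, and resolve to an absolute path. A minimal sketch of the idea — the real normalization logic may differ:

```typescript
import { resolve } from "node:path";

// Normalize Windows/Unix, relative/absolute, and URL-encoded paths
// to one canonical forward-slash absolute form.
function normalizePath(raw: string): string {
  let p = raw;
  try {
    p = decodeURIComponent(p); // handle URL-encoded paths ("My%20File.ts")
  } catch {
    /* not URL-encoded; keep as-is */
  }
  p = p.replace(/\\/g, "/"); // Windows backslashes -> forward slashes
  // resolve() makes the path absolute; re-flip separators for Windows hosts.
  return resolve(p).replace(/\\/g, "/");
}
```

Normalizing before every lookup means the same file queried as `src\main.ts`, `./src/main.ts`, or `src/main.ts` hits the same database row.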
All file tree data is stored in `.filescope.db` (SQLite, WAL mode) in the project root.
- Schema — drizzle-orm manages: `files` (metadata, staleness, concepts, change_impact), `file_dependencies` (bidirectional relationships), `llm_jobs` (background job queue), `schema_version` (migration versioning), and `llm_runtime_state` (token budget persistence).
- Auto-migration — on first run, any legacy JSON tree files are automatically detected and imported into SQLite. The original JSON files are left in place but are no longer used.
Persistent exclusions: when you call `exclude_and_remove`, the pattern is saved to the `excludePatterns` array in `config.json`. Patterns take effect immediately and persist across server restarts.
FileScopeMCP uses `config.json` in the project root for all settings. This file is optional — sensible defaults are used when it doesn't exist, and it's created automatically when you change settings via MCP tools.
```json
{
  "baseDirectory": "/path/to/your/project",
  "excludePatterns": [
    "**/node_modules",
    "**/.git",
    "**/dist",
    "**/build",
    "**/coverage"
  ],
  "fileWatching": {
    "enabled": true,
    "ignoreDotFiles": true,
    "autoRebuildTree": true,
    "maxWatchedDirectories": 1000,
    "watchForNewFiles": true,
    "watchForDeleted": true,
    "watchForChanged": true
  },
  "llm": {
    "enabled": true,
    "provider": "openai-compatible",
    "model": "qwen2.5-coder:14b",
    "baseURL": "http://localhost:11434/v1",
    "maxTokensPerMinute": 40000,
    "tokenBudget": 1000000
  },
  "version": "1.0.0"
}
```

Create a `.filescopeignore` file in your project root to exclude files from scanning and watching. Uses gitignore syntax:

```gitignore
# Ignore generated documentation
docs/api/

# Ignore large data files
*.csv
*.parquet

# Ignore specific directories
tmp/
vendor/
```

This file is loaded once at startup and applied alongside the `excludePatterns` from `config.json`. Changes to `.filescopeignore` require a server restart (or re-calling `set_project_path`) to take effect. Both systems work together — use `config.json` for programmatic patterns (set via MCP tools) and `.filescopeignore` for patterns you want to commit to your repo.
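To see how gitignore-style patterns like the ones above translate into matching logic, here is a deliberately tiny matcher covering just the two shapes shown (directory suffix `dir/` and extension glob `*.ext`). The real server uses the full `ignore` package; this sketch only illustrates the idea:

```typescript
// Compile a few gitignore-style lines into a path predicate.
// Handles only "dir/" and "*.ext" shapes; comments and blanks are skipped.
function compilePatterns(lines: string[]): (path: string) => boolean {
  const regexes = lines
    .map((l) => l.trim())
    .filter((l) => l !== "" && !l.startsWith("#")) // skip blanks and comments
    .map((pat) => {
      const escaped = pat
        .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
        .replace(/\*/g, "[^/]*"); // "*" matches within one path segment
      return pat.endsWith("/")
        ? new RegExp(`(^|/)${escaped}`) // directory prefix match
        : new RegExp(`(^|/)${escaped}$`); // whole-filename match
    });
  return (path) => regexes.some((r) => r.test(path));
}
```

The production `ignore` package additionally handles negation (`!pattern`), `**` globs, anchoring, and the other corners of the gitignore spec that this sketch deliberately omits.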
| Provider | `provider` value | Auth | Use case |
|---|---|---|---|
| Ollama | `"openai-compatible"` | None needed | Local inference, free |
| vLLM | `"openai-compatible"` | Optional `apiKey` | Self-hosted GPU server |
| OpenAI-compatible API | `"openai-compatible"` | `apiKey` or env var | Any compatible endpoint |
| Anthropic (Claude) | `"anthropic"` | `ANTHROPIC_API_KEY` env var or `apiKey` field | Cloud API |
Default behavior: when `toggle_llm(enabled: true)` is called with no existing LLM config, the system auto-creates a config targeting Ollama at `localhost:11434` with `qwen2.5-coder:14b`.
| Field | Default | Description |
|---|---|---|
| `enabled` | `false` | Whether the LLM pipeline runs |
| `provider` | `"anthropic"` | Provider adapter to use |
| `model` | `"claude-3-haiku-20240307"` | Model identifier |
| `baseURL` | — | API endpoint (required for openai-compatible) |
| `apiKey` | — | API key override (otherwise uses env vars) |
| `maxTokensPerCall` | `1024` | Maximum tokens per LLM call |
| `maxTokensPerMinute` | `40000` | Sliding-window rate limit |
| `tokenBudget` | unlimited | Lifetime token cap; pipeline stops when reached |
- TypeScript 5.8 / Node.js 22 — ESM modules throughout
- Model Context Protocol — `@modelcontextprotocol/sdk` for the MCP server interface
- chokidar — cross-platform filesystem watcher for real-time change detection
- esbuild — fast TypeScript compilation to ESM
- better-sqlite3 — SQLite storage with WAL mode (loaded via `createRequire` for ESM compatibility)
- drizzle-orm — type-safe SQL schema and queries
- tree-sitter — AST parsing for semantic change detection (loaded via `createRequire`)
- Vercel AI SDK (`ai`, `@ai-sdk/anthropic`, `@ai-sdk/openai-compatible`) — multi-provider LLM abstraction
- zod — runtime validation and structured output schemas
- AsyncMutex — serializes concurrent tree mutations from the watcher and startup sweep
The MCP server exposes 22 tools organized by category:
- set_project_path: Point the server at a project directory and initialize or reload its file tree
- create_file_tree: Re-scan a directory and rebuild the file tree in SQLite
- select_file_tree: Get the current active file tree configuration
- list_saved_trees: Show the current SQLite database status (file count, dependency count)
- delete_file_tree: Clear all data from the SQLite database (requires `confirm: true` safety guard)
- list_files: List all files in the project with their importance rankings
- get_file_importance: Get detailed information about a specific file — includes importance, dependencies, dependents, summary, concepts, changeImpact, and staleness fields
- find_important_files: Find the most important files in the project — includes staleness fields
- set_file_importance: Manually override the importance score for a specific file
- recalculate_importance: Recalculate importance values for all files based on dependencies
- read_file_content: Read the content of a specific file
- get_file_summary: Get the stored summary of a specific file — includes concepts, changeImpact, and staleness fields
- set_file_summary: Set or update the summary of a specific file
- detect_cycles: Detect all circular dependency groups in the project using Tarjan's SCC algorithm
- get_cycles_for_file: Get circular dependency groups that include a specific file
- toggle_llm: Enable or disable background LLM processing. When enabled with no prior config, defaults to Ollama (`openai-compatible`, `qwen2.5-coder:14b`, `localhost:11434`)
- get_llm_status: Get pipeline status — running state, budget exhaustion flag, lifetime tokens used, token budget, and max tokens per minute
- toggle_file_watching: Toggle file watching on/off
- get_file_watching_status: Get the current status of file watching
- update_file_watching_config: Update file watching configuration (per-event-type toggles, `autoRebuildTree`, `ignoreDotFiles`, etc.)
- exclude_and_remove: Exclude a file or glob pattern from the tree and remove matching nodes. Patterns are saved to `config.json` and persist across restarts
- debug_list_all_files: List every file path currently tracked in the active tree (useful for debugging)
The easiest way to get started is to enable this MCP in your AI client and let the AI figure it out. As soon as the MCP starts, it builds an initial file tree. Ask your AI to read important files and use `set_file_summary` to store summaries on them.
1. Point the server at your project (builds the tree, starts file watching and the startup sweep):

   ```
   set_project_path(path: "/path/to/project")
   ```

2. Find the most important files:

   ```
   find_important_files(limit: 5, minImportance: 5)
   ```

3. Get detailed information about a specific file:

   ```
   get_file_importance(filepath: "/path/to/project/src/main.ts")
   ```

4. Read a file's content to understand it:

   ```
   read_file_content(filepath: "/path/to/project/src/main.ts")
   ```

5. Add a summary to the file:

   ```
   set_file_summary(filepath: "/path/to/project/src/main.ts", summary: "Main entry point that initializes the application, sets up routing, and starts the server.")
   ```

6. Retrieve the summary later:

   ```
   get_file_summary(filepath: "/path/to/project/src/main.ts")
   ```
1. Enable background LLM processing (uses Ollama by default):

   ```
   toggle_llm(enabled: true)
   ```

2. Check LLM pipeline status:

   ```
   get_llm_status()
   ```

   Returns:

   ```json
   { "enabled": true, "running": true, "budgetExhausted": false, "lifetimeTokensUsed": 42350, "tokenBudget": 1000000, "maxTokensPerMinute": 40000 }
   ```

3. View auto-generated metadata for a file:

   ```
   get_file_importance(filepath: "/path/to/project/src/main.ts")
   ```
Sample response (after LLM pipeline has processed the file):

```json
{
  "path": "/path/to/project/src/main.ts",
  "importance": 8,
  "dependencies": ["./config.ts", "./router.ts", "./db.ts"],
  "dependents": ["./test/main.test.ts"],
  "packageDependencies": ["express", "dotenv"],
  "summary": "Main entry point that initializes Express server, loads configuration, sets up routes, and starts listening on the configured port.",
  "concepts": {
    "functions": ["startServer", "gracefulShutdown"],
    "classes": [],
    "interfaces": ["ServerOptions"],
    "exports": ["startServer", "app"],
    "purpose": "Application entry point that wires together configuration, routing, and server lifecycle"
  },
  "changeImpact": {
    "riskLevel": "high",
    "affectedAreas": ["server startup", "route registration", "error handling"],
    "breakingChanges": [],
    "summary": "Central orchestration file — changes here affect all downstream request handling"
  },
  "summaryStale": null,
  "conceptsStale": null,
  "changeImpactStale": null
}
```

When staleness fields are `null`, the metadata is current. A non-null value (epoch timestamp) means the file or a dependency changed and the LLM pipeline will regenerate that field.
1. Detect all cycles in the project:

   ```
   detect_cycles()
   ```

   Returns groups of files that form circular import chains.

2. Check if a specific file is part of a cycle:

   ```
   get_cycles_for_file(filepath: "/path/to/project/src/moduleA.ts")
   ```
1. Check the current file watching status:

   ```
   get_file_watching_status()
   ```

2. Update file watching configuration:

   ```
   update_file_watching_config(config: { autoRebuildTree: true, watchForNewFiles: true, watchForDeleted: true, watchForChanged: true })
   ```

3. Disable watching entirely:

   ```
   toggle_file_watching()
   ```
```sh
npm test
npm run coverage
```

| File | When | Location |
|---|---|---|
| `mcp-debug.log` | MCP server mode (disabled by default) | Working directory |
| `.filescope-daemon.log` | Daemon mode (always on) | Project root |
MCP mode: file logging is disabled by default. To enable it, edit `src/mcp-server.ts` and change `enableFileLogging(false, ...)` to `true`, then rebuild. MCP log messages also go to stderr, which Claude Code captures in its own logs.
Daemon mode: File logging is always on. Logs auto-rotate at 10 MB (file is truncated and restarted). View logs in real time:
```sh
tail -f /path/to/project/.filescope-daemon.log
```

From your AI assistant, you can query the system state at any time:
```
# Is the file watcher running? What events is it tracking?
get_file_watching_status()

# Is the LLM pipeline running? How many tokens have been used?
get_llm_status()
# Returns: { running, budgetExhausted, lifetimeTokensUsed, tokenBudget, maxTokensPerMinute }

# How many files are tracked?
debug_list_all_files()

# Check if a specific file has stale metadata
get_file_importance(filepath: "/path/to/file.ts")
# Staleness fields: summaryStale, conceptsStale, changeImpactStale (epoch timestamps, null = not stale)

# Are there any circular dependency chains?
detect_cycles()
```
The SQLite database is a standard file you can query with any SQLite client:

```sh
sqlite3 /path/to/project/.filescope.db
```

```sql
-- How many files are tracked?
SELECT COUNT(*) FROM files WHERE is_directory = 0;

-- Which files have LLM-generated summaries?
SELECT path, LENGTH(summary) AS summary_len FROM files WHERE summary IS NOT NULL AND is_directory = 0;

-- What LLM jobs are pending?
SELECT * FROM llm_jobs WHERE status = 'pending' ORDER BY priority, created_at;

-- Check lifetime token usage
SELECT * FROM llm_runtime_state;

-- Which files have stale metadata?
SELECT path, summary_stale, concepts_stale, change_impact_stale FROM files
WHERE summary_stale IS NOT NULL OR concepts_stale IS NOT NULL OR change_impact_stale IS NOT NULL;
```

```sh
# Check if a daemon is running for a project
cat /path/to/project/.filescope.pid

# Check if that PID is alive
kill -0 $(cat /path/to/project/.filescope.pid) 2>/dev/null && echo "Running" || echo "Not running"

# Graceful shutdown
kill $(cat /path/to/project/.filescope.pid)

# Start daemon
node /path/to/FileScopeMCP/dist/mcp-server.js --daemon --base-dir=/path/to/project
```

Every tool except `set_project_path` requires initialization first. Call `set_project_path(path: "/your/project")` or ensure you're using `--base-dir` when starting the server.
- Run `claude mcp list` to check registration.
- If missing, run `./install-mcp-claude.sh` to register.
- Check `~/.claude.json` — it should have a `FileScopeMCP` entry under `mcpServers`.
- Restart Claude Code after registration.
`better-sqlite3` and `tree-sitter` include native addons. If prebuilt binaries aren't available:
- Linux: `sudo apt install build-essential python3`
- macOS: `xcode-select --install`
- Windows: Install Visual Studio Build Tools with C++ workload
- Check `get_llm_status()` — is `running` true?
- If `budgetExhausted` is true, the lifetime token budget has been reached. Increase `tokenBudget` in `config.json` or set it to `0` for unlimited.
- If using Ollama, confirm it's running: `curl http://localhost:11434/v1/models`
- Check for errors in the log file (daemon mode) or stderr (MCP mode).
- Run `./setup-llm.sh --status` to verify Ollama and model installation.
A PID file exists for this project. Either another daemon is running, or a previous one crashed without cleanup:
```sh
# Check if the PID is actually alive
cat /path/to/project/.filescope.pid
kill -0 <PID> 2>/dev/null && echo "Still running" || echo "Stale PID file"

# If stale, remove it
rm /path/to/project/.filescope.pid
```

The SQLite database uses WAL mode for crash safety, but if something goes wrong:
```sh
# Delete the database — it will be rebuilt on next startup
rm /path/to/project/.filescope.db
rm -f /path/to/project/.filescope.db-wal
rm -f /path/to/project/.filescope.db-shm
```

On the next `set_project_path` call, the system rescans the project and rebuilds the database from scratch. If legacy JSON tree files exist, they'll be auto-imported.
- Check `get_llm_status()` for `lifetimeTokensUsed`.
- Set a `tokenBudget` in `config.json` to cap total usage.
- Reduce `maxTokensPerMinute` to slow the pipeline down.
- The cascade engine's depth cap (10 levels) and body-only-change optimization already prevent most unnecessary calls.
Files created by FileScopeMCP in your project directory:
| File | Purpose | Gitignore? |
|---|---|---|
| `.filescope.db` | SQLite database (all metadata, jobs, state) | Yes |
| `.filescope.db-wal` | SQLite write-ahead log | Yes |
| `.filescope.db-shm` | SQLite shared memory file | Yes |
| `.filescope.pid` | Daemon PID lock file | Yes |
| `.filescope-daemon.log` | Daemon log output | Yes |
| `mcp-debug.log` | MCP server debug log (when enabled) | Yes |
| `config.json` | Server configuration (exclude patterns, file watching, LLM settings) | Optional |
This project is licensed under the GNU General Public License v3 (GPL-3.0). See the LICENSE file for the full license text.