HermitStash


Stash it quietly. Share it instantly.
Post-quantum encrypted, self-hosted file upload server.

HermitStash Sync -- companion desktop sync client


A note before you dive in.

HermitStash is my first public repo. It started as a weekend project to solve my own problem -- sharing files with clients without trusting third-party cloud storage -- and grew from there. I use it daily and it works for me, but I'm sharing it publicly knowing that "works for me" and "is fit for your use case" are different things.

A few things I want to be honest about:

  • I'm not a cryptographer. I've used well-reviewed primitives and tried to assemble them carefully, but I haven't had this audited, and there are almost certainly things I don't know that I don't know. I've written docs/THREAT_MODEL.md as a detailed design document specifically so that review is possible -- it describes every protocol, lists known limitations honestly, and includes specific questions for cryptographers. If that's you: thank you, and please open an issue.
  • This is a personal project. I maintain it solo, in my spare time, and I can't promise fast response times or backwards compatibility.
  • I'm not currently accepting code contributions (more on that below), but bug reports, security findings, and feedback are genuinely welcome -- they're how I learn.

If HermitStash is useful to you, that's wonderful. If you're considering it for anything where the consequences of a security flaw matter, please weigh that against the fact that no professional has reviewed this code.

-- .CooCoo (@dotCooCoo)

Status: Personal project · Not audited · API may change · Use at your own risk


Quick Start

git clone https://github.com/dotCooCoo/hermitstash.git
cd hermitstash
node server.js

No config files. No build step. No npm install -- all dependencies are vendored in the repo with zero npm runtime packages. First run generates the vault keypair and creates default accounts. Configure everything from the admin panel.

Default admin: admin@hermitstash.com with a randomly-generated password printed to stdout on first boot. The same password is also written to data/initial-admin-password.txt (mode 0600) so it survives container restart. Read it from your docker logs or the file, log in, and complete the setup wizard -- the wizard walks you through changing the admin email and password, setting your site name, configuring the passkey relying party (rpOrigin/rpId), and generating a session secret. The plaintext password file is deleted automatically when setup completes.

Why HermitStash?

  • Post-quantum encryption -- your files are protected against both today's computers and tomorrow's quantum computers
  • Zero plaintext -- every database field, every file, every audit log entry is encrypted or hashed before touching disk
  • Self-hosted -- your server, your keys, your data. No third-party cloud
  • Zero dependencies at runtime -- node server.js is the entire setup. All crypto libraries are vendored and committed
  • One-command deploy -- Docker or bare metal, no build step, no config files needed

Crypto Suite

For the detailed design -- protocols, security goals, non-goals, adversary model, key hierarchy, known limitations, and open questions for reviewers -- see docs/THREAT_MODEL.md. The table below is a summary.

All cryptographic operations use NIST-standardized post-quantum algorithms:

| Layer | Algorithm | Standard | Purpose |
| --- | --- | --- | --- |
| KEM | ML-KEM-1024 + P-384 ECDH hybrid | FIPS 203 + NIST P-384 | Key encapsulation (PQC + classical) |
| Symmetric | XChaCha20-Poly1305 | RFC 8439 extended | Data encryption (192-bit nonce, constant-time) |
| KDF | SHAKE256 | FIPS 202 (XOF) | Key derivation from KEM shared secrets |
| Hash | SHA3-512 | FIPS 202 | Integrity, email/IP hashing, checksums |
| HMAC | HMAC-SHA3-512 | FIPS 202 | Webhook signing, token verification |
| Password | Argon2id | RFC 9106 | Memory-hard password hashing |
| Signatures | SLH-DSA-SHAKE-256f (default) / ML-DSA-87 (legacy) | FIPS 205 / 204 | Digital signatures. New keys default to SLH-DSA-SHAKE-256f; existing ML-DSA-87 keys continue to verify (algorithm auto-detected from key PEM) |
| Random | SHA3-512(entropy) | FIPS 202 | All random generation via centralized KDF |

Envelope versioning

Every encrypted blob starts with a 4-byte header encoding the algorithms used:

byte 0: 0xE1   (envelope magic)
byte 1: KEM    (0x02 ML-KEM-1024 / 0x03 ML-KEM-1024+P-384)
byte 2: Cipher (0x02 XChaCha20-Poly1305)
byte 3: KDF    (0x02 SHAKE256)

Any component can be swapped independently without re-encrypting existing data. When HQC or future algorithms are standardized, assign a new ID and existing blobs remain readable.
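As a minimal sketch, reading and writing that header takes a few lines of Node. The constant and function names here are illustrative, not the project's actual source -- only the byte layout comes from the description above.

```javascript
// Illustrative sketch of the 4-byte envelope header described above.
const ENVELOPE_MAGIC = 0xE1;

function writeHeader(kemId, cipherId, kdfId) {
  return Buffer.from([ENVELOPE_MAGIC, kemId, cipherId, kdfId]);
}

function parseHeader(blob) {
  if (blob.length < 4 || blob[0] !== ENVELOPE_MAGIC) {
    throw new Error('not an envelope');
  }
  return { kemId: blob[1], cipherId: blob[2], kdfId: blob[3] };
}

// 0x03 = ML-KEM-1024 + P-384, 0x02 = XChaCha20-Poly1305, 0x02 = SHAKE256
const header = writeHeader(0x03, 0x02, 0x02);
console.log(parseHeader(header)); // { kemId: 3, cipherId: 2, kdfId: 2 }
```

Because the header is parsed before any decryption, each component ID can dispatch to a different primitive without re-encrypting old blobs.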

API payload encryption runs through two coexisting protocols, both PQC end-to-end:

  • blamejs apiEncrypt (per-session) -- /drop/init, /drop/finalize/:bundleId, /sync/rename. Clients fetch the server keypair from GET /.well-known/blamejs-pubkey (plain JSON {publicKey, ecPublicKey, kemId, cipherId, kdfId}), generate a session key, wrap it to the server keypair via the framework envelope (ML-KEM-1024 + P-384 ECDH hybrid → SHAKE256 → XChaCha20-Poly1305), and send _ek on the first request. Subsequent requests carry _sid + a monotonically increasing _ctr checked against an in-memory session store.
  • Legacy api-encrypt -- every other JSON route for cookie-authenticated browser clients. A per-session XChaCha20-Poly1305 key is vault-sealed in the session table and embedded into HTML templates via res._apiKey so browser-side JS can encrypt subsequent requests. The legacy _ek field has its own version byte (0x01 = ML-KEM-1024 + P-384 + HKDF-SHA3-512 + XChaCha20-Poly1305). Bearer-authenticated callers (sync clients, API key holders) skip the legacy layer -- TLS + mTLS + Bearer is the transport guarantee for those, and JSON-bodied operations route through blamejs apiEncrypt instead.

Future KEMs get a new version byte / new envelope ID -- old blobs remain readable, new wires use the new primitive. The two protocols can be migrated independently.

Hybrid KEM

ML-KEM-1024 encapsulate  -->  shared_secret_1 (32 bytes)
P-384 ephemeral ECDH     -->  shared_secret_2 (48 bytes)
                               |
                     SHAKE256(ss1 || ss2, 32)
                               |
                     XChaCha20-Poly1305(key, nonce=24)  -->  ciphertext

Protects against both quantum (ML-KEM) and classical (P-384) attacks. If either is broken, the other still holds.

Encryption Architecture

Zero plaintext anywhere. Every piece of data is encrypted or hashed before touching disk:

ML-KEM-1024 + P-384 (vault.key)
  |
  +-- vault.seal() = hybrid KEM --> SHAKE256 KDF --> XChaCha20-Poly1305
  |
  +-- Wraps per-file XChaCha20-Poly1305 keys (file encryption at rest)
  +-- Wraps per-session XChaCha20-Poly1305 keys (API payload encryption)
  +-- API payload encryption: blamejs apiEncrypt envelope (ML-KEM-1024 + P-384 ECDH + SHAKE256) for sync writes
  +--   and legacy ECIES (ML-KEM-1024 + P-384 + HKDF-SHA3-512) for cookie-authed browsers
  +-- Wraps database file XChaCha20-Poly1305 key (DB encryption at rest)
  +-- Directly seals ALL database fields (not just PII)
  +-- Directly seals session cookie values

Automatic field-level encryption

Routes never touch vault.seal() directly. A centralized field-crypto middleware (lib/field-crypto.js) intercepts all database operations:

Routes pass PLAINTEXT
       |
  Collection.insert() / update() / find()
       |
  field-crypto.js (automatic middleware)
       |
  +-- sealDoc() on write ---> vault.seal() per field ---> DB stores ciphertext
  +-- unsealDoc() on read ---> vault.unseal() per field ---> routes get plaintext
  +-- derived hashes --------> emailHash, shareIdHash auto-computed
  +-- _translateQuery() -----> { email: "x" } becomes { emailHash: sha3("hs-email:x") }

Every field in every table is classified as seal (encrypted), hash (one-way lookup), derived (auto-computed from another field), or raw (IDs, timestamps, counters). The schema is defined once in FIELD_SCHEMA and enforced on every database operation.

What gets encrypted and how

| Data | Encryption | Key Protection |
| --- | --- | --- |
| File contents | XChaCha20-Poly1305 (random key per file) | Key sealed with hybrid ML-KEM-1024 + P-384 vault |
| Vault files | ML-KEM-1024 + SHAKE256 + XChaCha20-Poly1305 (client-side) | Key derived from passkey (never leaves browser) |
| API request/response bodies | XChaCha20-Poly1305 (random key per session) | Key sealed with hybrid vault |
| Database file on disk | XChaCha20-Poly1305 (random key) | Key sealed with hybrid vault |
| Session cookies | Hybrid KEM + XChaCha20-Poly1305 | Direct vault.seal() per cookie |
| All user fields (email, name, avatar, googleId) | Hybrid KEM + XChaCha20-Poly1305 | Auto vault.seal() per field |
| All file metadata (names, paths, MIME, storage) | Hybrid KEM + XChaCha20-Poly1305 | Auto vault.seal() per field |
| Audit log fields (action, emails, details) | Hybrid KEM + XChaCha20-Poly1305 | Auto vault.seal() per field |
| Audit log IPs | SHA3-512 hash then vault-sealed | One-way hashed, then auto-sealed |
| Passwords | Argon2id | One-way hash (no key needed) |
| Email/IP lookups | SHA3-512 | One-way hash for indexed queries |

Anti-attack protections

| Attack | Protection |
| --- | --- |
| Quantum computer key recovery | Hybrid ML-KEM-1024 + P-384 ECDH (dual protection) |
| Harvest-now-decrypt-later | ML-KEM-1024 post-quantum KEM + envelope versioning for algorithm agility |
| Classical-only TLS downgrade | ClientHello PQC gate rejects connections without hybrid key exchange groups |
| Brute-force passwords | Argon2id (64MB memory, 3 iterations) |
| Brute-force login | Rate limiting (5 attempts / 15 min per IP) |
| Brute-force share IDs | 256-bit SHA3-derived IDs (2^256 search space) |
| Session hijacking | Hybrid KEM encrypted cookies, per-session keys |
| API replay attacks | Timestamp validation (30-second window) |
| API payload tampering | XChaCha20-Poly1305 authentication (Poly1305 MAC) |
| Database file theft | XChaCha20-Poly1305 encrypted at rest, key requires vault.key |
| PII exposure from DB dump | Every field vault-sealed, IPs one-way hashed |
| Nonce collision | XChaCha20 192-bit nonce eliminates birthday-bound risk |
| AES-NI side channels | XChaCha20 is constant-time in software, no hardware dependency |
| Brute-force bundle passwords | Exponential backoff lockout after 5 failed attempts |
| Email enumeration on bundles | Identical response regardless of whether email is in allow list |
| Brute-force access codes | 5-attempt limit per code, rate limiting, 10-minute expiry |
| CSRF on API endpoints | Per-session XChaCha20-Poly1305 key binds JSON requests to session; form POSTs validated with constant-time CSRF token |
| Logout CSRF | Logout is POST-only with CSRF token validation -- cross-site <img> or <a> tags cannot force logout |
| WebSocket credential leakage | API keys accepted only via Authorization header -- query string tokens rejected to prevent proxy/log/Referer leaks |
| Session key interception | Hybrid ECIES key exchange -- session key encrypted via ML-KEM-1024 + ECDH P-384, never plaintext in HTTP |
| CSV formula injection | Export values sanitized to prevent spreadsheet code execution |
| DNS rebinding via webhooks | Pre-validated IP pinned to outbound connection |
| SSRF via webhooks | Blocks localhost, RFC 1918, RFC 6598 CGNAT, link-local, IPv6 private ranges |
| Disguised file uploads | Magic byte validation rejects files whose content doesn't match extension |
| Malicious filenames | Backend sanitization strips control chars, path traversal, dot attacks, HTML injection |
| ZIP path traversal (Zip Slip) | Entry names sanitized to remove .. segments; paths normalized on both upload and archive |
| Anonymous storage abuse | Per-IP upload quota with 24-hour rolling window |
| Stored XSS via uploads | User-controlled names auto-escaped in templates; raw output reserved for admin-set values only |
| Weak bundle/stash passwords | Minimum 4-character requirement enforced server-side |
| Automated scanners and bots | Request fingerprinting (accept-language, sec-fetch-dest, sec-fetch-mode) blocks non-browser clients on public routes -- survives PQC TLS adoption |
| NPM supply chain | All dependencies vendored as committed bundles -- zero npm runtime packages |
| Admin settings injection | Type-safe settings schema (lib/settings-schema.js) sanitizes on save (strip control chars, trim, type-specific normalization) and validates (format, range, enum) -- bad data rejected at the gate with clear error messages |
| Stale config after admin change | Config reset registry (config.onReset) invalidates cached clients (S3, upload paths, etc.) when dependent settings change at runtime |
| Timing attack on access codes | SHA3-512 hash comparison uses constant-time timingSafeEqual on all security-sensitive comparisons (access codes, CSRF, TOTP) |
| Crash during backup restore | Pre-restore snapshots of vault.key, db.key.enc, hermitstash.db.enc enable rollback if restore is interrupted |

Built on Node.js 24.8+ (LTS) with ML-KEM-1024, SLH-DSA-SHAKE-256f (default signature) and ML-DSA-87 (legacy) via OpenSSL 3.5, XChaCha20-Poly1305 and SHAKE256 via vendored blamejs (which bundles @noble/ciphers and @noble/post-quantum), Argon2id via Node 24+'s built-in crypto.argon2 (no native binding required), WebAuthn via vendored blamejs (which bundles @simplewebauthn/server), and built-in SQLite via node:sqlite. Zero npm runtime dependencies.

Features

Authentication

  • Argon2id local auth, Google OAuth, WebAuthn passkeys -- all simultaneous
  • TOTP 2FA with single-use backup codes -- HMAC-SHA-512, 128-byte secret, 8-digit codes (legacy SHA-1 enrollments are forced through a one-time re-pair on next login)
  • Email verification with SHA3-hashed tokens
  • Hybrid KEM encrypted session cookies
  • Per-session XChaCha20-Poly1305 encrypted API payloads with anti-replay and anti-tamper
  • Hybrid PQC payload encryption for API clients -- ML-KEM-1024 + ECDH P-384 hybrid envelope (SHAKE256 KDF, XChaCha20-Poly1305 wrap) via blamejs apiEncrypt for sync write paths and /drop/init / /drop/finalize/:bundleId; legacy ECIES with HKDF-SHA3-512 retained for cookie-authenticated browsers. Server keypair is published as plain JSON at /.well-known/blamejs-pubkey and vault-sealed at rest
  • Rate limiting on login (5/15min), registration (10/15min), 2FA verify (5/5min), passkey login (10/min)
  • Account lockout after 10 consecutive failed password attempts (30-minute cooldown)
  • Password reset flow with single-use, 1-hour-expiry tokens and anti-enumeration (always returns success)
  • User invitation system -- admin invites by email with role assignment, 48-hour expiry
  • Configurable session idle timeout (default 30 minutes, server-side enforcement)
  • OAuth CSRF state validation on Google callback
  • Password change automatically revokes all other sessions

File Management

  • Public folder drops -- drag entire trees, no login required
  • Per-file XChaCha20-Poly1305 encryption, keys sealed with hybrid ML-KEM-1024 + P-384
  • Chunked uploads for large files (>10MB auto-split, server reassembly)
  • Pause/resume/cancel uploads, per-file progress bars
  • Password-protected share links with exponential backoff lockout (2^n × 30s after 5 failed attempts), persisted per-share in the database so counters survive restart
  • Email-gated access -- restrict bundles to specific recipient emails, verified by one-time code (anti-enumeration, rate limited, SHA3-hashed codes)
  • Dual protection mode -- require both email verification and password for maximum security
  • Custom expiry per bundle (1d, 7d, 30d, 90d, never)
  • Bundle messages, multiple recipient emails
  • Bundle naming -- name bundles during upload, rename inline from dashboard
  • Inline rename for files and bundles with backend-enforced sanitization (dot attack protection, path traversal prevention, extension preservation)
  • Magic byte content validation -- uploaded files verified against claimed extension (15 format signatures)
  • File preview with SVG sanitization, HTML/JS forced download
  • Shareable links -- browse folders or download as ZIP
  • Subfolder ZIP download -- download individual subdirectories from a bundle
  • Safe Content-Disposition headers with RFC 5987 encoding for non-ASCII filenames
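An RFC 5987-style header of the kind the last bullet describes can be sketched as follows (a hypothetical helper, not the project's implementation):

```javascript
// Build a Content-Disposition with an ASCII fallback plus an RFC 5987
// filename* parameter so non-ASCII names survive intact.
function contentDisposition(filename) {
  const fallback = filename.replace(/[^\x20-\x7e]/g, '_').replace(/"/g, "'");
  const extended = encodeURIComponent(filename)
    // RFC 5987 also requires percent-encoding these characters:
    .replace(/['()*]/g, c => '%' + c.charCodeAt(0).toString(16).toUpperCase());
  return `attachment; filename="${fallback}"; filename*=UTF-8''${extended}`;
}

console.log(contentDisposition('Übersicht.pdf'));
// attachment; filename="_bersicht.pdf"; filename*=UTF-8''%C3%9Cbersicht.pdf
```

Clients that understand filename* use the UTF-8 form; older ones fall back to the sanitized ASCII name.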

Zero-Knowledge Vault

  • Client-side ML-KEM-1024 + SHAKE256 KDF + XChaCha20-Poly1305 encryption in the browser
  • Passkey-gated access (Touch ID, Face ID, YubiKey, FIDO2)
  • PRF mode for true zero-knowledge (no seed touches the server)
  • Stealth mode hides vault operations from audit logs
  • Self-access links for direct vault file download with passkey auth
  • Vault key rotation with atomic re-encryption of all files
  • Batch upload and batch delete with client-generated batch IDs
  • Folder structure preserved in vault uploads and batch ZIP downloads
  • Inline rename for vault batches and individual vault files
  • Force-reset recovery mode for vault lockout (deletes all vault files, clears vault state)
  • ML-KEM-1024 only (ML-KEM-768 fully removed -- server rejects 768 keys at startup)

Customer Stash -- Branded Upload Portals

  • Create custom-branded upload pages at /stash/:slug for clients and partners
  • Per-page branding -- custom title, instructions, accent color, and logo
  • Per-page upload constraints -- max file size, max files, default expiry, allowed extensions
  • Password-protected stash pages with Argon2 hashing and rate-limited unlock
  • Email/domain-gated stash access -- restrict by specific emails or entire domains (@acme.com), verified by one-time code
  • Dual protection mode -- require both email verification and password on stash pages
  • Simplified upload form -- message and files only (no name/email fields)
  • Bundle naming during stash upload
  • Dynamic slug validation with automatic reserved-word detection from registered routes
  • Upload stats tracked per stash page (bundle count, total bytes)
  • Custom logo upload per stash page with magic-byte validation
  • Dedicated admin page with bundle drill-down -- view bundles, browse files, inline rename, delete, purge all
  • Admin management -- create, edit, toggle, copy link, delete stash pages

Teams

  • Create teams, add/remove members with role-based access
  • Team-scoped file visibility -- cross-team isolation enforced
  • Team admin and member roles

Profile

  • Self-service email change with password re-authentication
  • Self-service account deletion (files reassigned, sessions revoked, last admin protected)

Admin Dashboard

  • Stats with computed totals (size, downloads), activity feed
  • Row-based bundle lists with file drill-down (My Stash + Personal Vault)
  • Paginated file/bundle browser with search
  • User management -- create, suspend, delete, role toggle
  • Audit log -- searchable, filterable, date range
  • Settings panel -- 9 tabs (Branding, General, Auth, Uploads, Storage, Theme, Email, Environment, Backup)
  • API keys with scoped permissions (upload, read, admin, webhook) validated against canonical enum
  • Webhooks with HMAC-SHA3-512 signed payloads, per-hook delivery log, enable/disable toggle
  • IP blocklist
  • Database backup (serves encrypted-at-rest copy), CSV exports (with formula injection protection)
  • Automated off-site backup to S3-compatible storage (AWS, R2, MinIO, B2, DO Spaces) with passphrase-encrypted vault key, incremental file manifests, configurable retention, and manual trigger from admin UI. Full-scope backups include all storage objects (bundles and vault files)
  • Backup restore with pre-restore snapshots -- critical files (vault.key, db.key.enc, hermitstash.db.enc) are snapshotted before overwrite for crash recovery
  • Scheduled tasks with watchdog timeouts -- file expiry, audit retention, stale upload cleanup, token cleanup, invite cleanup, daily SQLite vacuum, automated backup. Hung jobs auto-reset after 10 minutes
  • Danger Zone -- factory reset, purge all sessions, purge all users, purge all files (typed confirmation required)
  • Custom logo upload with magic-byte validation and SVG sanitization
  • Reverse proxy auto-detection with config snippet generator (nginx, Caddy, Apache)
  • Per-user storage quotas (separate from global quota) and per-IP public upload quota (24h rolling window)
  • Configurable upload concurrency, retry count, timeout, and file extension allowlist
  • Admin email list for auto-promoting OAuth users to admin role
  • Maintenance mode -- blocks non-admin access with 503 page
  • Announcement banner -- site-wide text displayed on all pages

Email

  • SMTP or Resend API backend (switchable from admin)
  • Dual-mode failover -- SMTP-primary/Resend-fallback or Resend-primary/SMTP-fallback
  • Resend quota enforcement (daily/monthly limits per plan tier)
  • Email template customization -- subject, header, footer with named placeholders ({siteName}, {uploaderName}, {fileCount}, {totalSize})
  • Upload confirmations, admin notifications, verification emails
  • All email send/fail/quota events audit-logged

Sync and API

  • Mutable sync bundles -- bundleType: "sync" creates persistent, mutable bundles that accept file additions, replacements, and deletions after creation
  • File replace -- uploading to a sync bundle with an existing relativePath overwrites the file with a new encryption key (old key and blob fully removed)
  • File rename/move -- POST /sync/rename updates relativePath without re-uploading the file (metadata-only, emits file_renamed WebSocket event). Sync client detects local renames by checksum matching within the debounce window
  • File delete -- individual files can be removed from sync bundles with tombstone-based soft delete (30-day cleanup)
  • Per-file change tracking -- seq monotonic counter and updatedAt timestamp on files and bundles for sync change feeds
  • JSON content negotiation on bundle view -- Accept: application/json returns file list with checksums and metadata
  • Structured audit log events for file mutations (JSON details with action, bundleId, checksum, size)
  • Shared access control middleware (require-access.js) -- centralized lock checks for bundles and stash
  • JSON-aware auth -- API/sync clients receive 401 JSON, browsers get login redirect
  • WebSocket sync channel -- GET /sync/ws with auth during upgrade handshake, scoped to single bundle
  • Real-time file change events over WebSocket (file_added, file_replaced, file_removed, heartbeat -- sent immediately on connect, then every 30s)
  • Catch-up on reconnect via seq cursor (?since=N on WebSocket upgrade)
  • PQC TLS enforcement -- ClientHello inspection rejects connections without PQC hybrid key exchange groups
  • PQC gate architecture -- TCP proxy inspects supported_groups extension before TLS handshake completes
  • Localhost bypass for Docker health probes (127.0.0.1/::1 skip PQC check)
  • PQC_ENFORCE=false disables gate for transition periods (PQC preferred but not required)
  • PQC TLS -- conditional HTTPS with SecP384r1MLKEM1024 + X25519MLKEM768 + SecP256r1MLKEM768 hybrid key exchange (TLS 1.3 only, Level 5 preferred)
  • Certificate auto-reload on Let's Encrypt renewal (hourly file poll)
  • PQC outbound HTTPS agent -- all S3, SMTP, Resend, webhook, OAuth calls use PQC hybrid TLS groups
  • PQC_OUTBOUND_ENFORCE=false allows classical fallback for outbound connections
  • mTLS for sync clients -- server acts as its own Certificate Authority (ECDSA P-384)
  • Client certificate generation on sync token creation with one-click PEM bundle download
  • Certificate revocation table with SHA3-512 hashed fingerprint lookups
  • WebSocket upgrade validates mTLS cert + API key (dual auth, neither alone sufficient)
  • When a CA exists, WebSocket mTLS is required by default. Set MTLS_REQUIRED=false as an explicit bring-up escape to permit API-key-only upgrades; per-key cert binding is still enforced when api_keys.certFingerprint is set, so a cert-bound key cannot be downgraded.
  • New sync API key scope for WebSocket connections and sync bundle operations
  • Resource-scoped API keys -- boundStashId and boundBundleId columns restrict keys to specific resources
  • Stash-scoped sync tokens -- admin generates tokens that grant sync access to a single stash only
  • One-time enrollment codes -- admin generates a short code (e.g. HSTASH-A4K9-XMWP-7RB2), client redeems it to get API key + mTLS certs automatically (no file transfer needed, 1-hour expiry)
  • Stash sync mode -- persistent mutable bundle per stash for desktop sync clients
  • Admin UI: sync toggle per stash, one-click sync token generation with copy button
  • Desktop sync client: hermitstash-sync -- watches a local folder and syncs via WebSocket + PQC TLS
  • Enforce mTLS mode -- restricts the web UI to clients that present a valid CA-signed certificate. Sync clients, Bearer-authenticated API calls, /sync/*, and /health always pass through.
    • Soft (Admin → Auth → Enforce mTLS): instant toggle, no restart. Non-mTLS connections are dropped at the app layer via socket.destroy() -- no HTTP response rendered, no information leakage.
    • Hard (env ENFORCE_MTLS_STRICT=true, boot-time): TLS handshake itself rejects non-mTLS clients. Requires restart to change.
    • Escape hatch (env ENFORCE_MTLS_STRICT=false): forces all enforcement off at boot regardless of DB setting. Use when locked out.
  • Browser certificates -- admin panel on /admin (Browser Certificates section) issues a PKCS#12 for install in OS / browser cert stores. AES-256-CBC + SHA-512 MAC + 2M PBKDF2 iterations. See "Installing a browser certificate" below.
  • mTLS CA regeneration -- Admin → General → Danger Zone → Regenerate mTLS CA rolls the CA to the current algorithm envelope (SHA-384 cert signatures, 2M iterations, SHA-512 PRF). Every CA is version-tagged (OU=CAv{N} in the subject DN); boot-time and UI banners surface when the on-disk CA is a legacy generation. Active sync clients receive new certificates via a WebSocket ca:rotation message and ack back before the server auto-restarts -- offline sync clients must re-enroll, browser certs must be re-downloaded. Operators who want a preview without committing can POST { confirm: "REGEN", skipRestart: true } to /admin/api/mtls-ca/regenerate.

Security Hardening

  • Security headers on all responses (CSP, X-Frame-Options, nosniff, Referrer-Policy, Permissions-Policy, COOP, CORP)
  • HSTS with preload auto-enabled when rpOrigin uses HTTPS
  • Content Security Policy with no external domains -- fonts vendored locally, object-src 'none', base-uri 'none', frame-ancestors 'none'
  • 256-bit SHA3-derived share IDs (no brute-force, no collisions)
  • CSRF protection: JSON requests bound by per-session encryption key; form POSTs validated with constant-time CSRF token; non-JSON/non-exempt POSTs rejected
  • Logout is POST-only with CSRF token validation (no GET logout CSRF)
  • Bot guard middleware -- request fingerprinting (accept-language, sec-fetch-dest, sec-fetch-mode) blocks automated scanners on public routes without relying on user-agent strings
  • WebSocket API keys accepted only via Authorization header -- query string tokens rejected to prevent proxy/log/Referer leaks
  • CSV formula injection protection on all exports
  • CORS configurable via admin (wildcard disallowed with credentials)
  • Health endpoint CORS configurable from admin for PQC gateway status checks
  • Canonical origin policy -- all URLs generated from rpOrigin, never from Host header
  • Webhook DNS pinning -- resolved IP reused for outbound connection, preventing TOCTOU rebinding
  • Input length limits on all free-text fields
  • Pagination capped at 200 results
  • X-Forwarded-For only trusted from configured proxies
  • Safe redirects (relative paths only)
  • SSRF protection covers all RFC 1918, RFC 6598 CGNAT, link-local, metadata, and IPv6 ranges
  • All crypto and font dependencies vendored from npm -- zero external CDN requests, zero runtime packages
  • Restrictive CSP on user-uploaded logo directory (defense-in-depth against SVG XSS)
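The "safe redirects (relative paths only)" rule boils down to a small check. A hypothetical version (not the project's code) that rejects the usual open-redirect tricks:

```javascript
// Accept only same-site, path-style redirect targets; anything else falls
// back to a safe default.
function safeRedirect(target, fallback = '/') {
  if (typeof target !== 'string') return fallback;
  if (!target.startsWith('/')) return fallback;  // must be site-relative
  if (target.startsWith('//')) return fallback;  // protocol-relative -> other host
  if (target.includes('\\')) return fallback;    // browsers treat \ like /
  return target;
}

console.log(safeRedirect('/dashboard'));           // '/dashboard'
console.log(safeRedirect('https://evil.example')); // '/'
console.log(safeRedirect('//evil.example'));       // '/'
```

Checking `startsWith('/')` alone is not enough; the `//host` and backslash cases are what turn a "relative" redirect into an open one.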

Storage

  • Local disk, NAS mount, or any S3-compatible bucket (MinIO, Cloudflare R2, DigitalOcean Spaces, Backblaze B2)
  • S3 direct downloads with pre-signed URLs (configurable expiry, AWS Signature V4)
  • All file operations (uploads, vault, backups) go through a unified storage abstraction -- local and S3 backends are transparent to the application
  • Per-file XChaCha20-Poly1305 encryption at rest, keys sealed with hybrid vault

SEO and Legal

  • Open Graph and Twitter card meta tags with dynamic site name and origin
  • Canonical URL tag derived from rpOrigin
  • robots.txt blocks admin, dashboard, vault, and auth pages from search engines
  • Dynamic sitemap.xml (GET /sitemap.xml) with public pages
  • noindex/nofollow meta tag on all authenticated pages
  • Configurable Privacy Policy, Terms of Service, and Cookie Policy pages
  • Default legal page templates with sensible content for self-hosted deployments
  • Footer links to all legal pages
  • Configurable analytics script injection -- paste any provider's <script> tag (Plausible, Umami, Matomo, Fathom, PostHog, Google Analytics)
  • Analytics injected on public pages only (admin/dashboard excluded)
  • API encryption scoped to same-origin -- external analytics and third-party fetches pass through unmodified
  • Auto-detected CSP domains from analytics script with manual override

Accessibility

  • Skip-to-content link for keyboard navigation
  • ARIA labels on interactive controls (theme toggle, icon buttons)
  • Alt text on all logo and avatar images
  • Semantic HTML with <main> landmark on all pages

Zero Configuration

  • No .env file -- settings stored in encrypted database
  • No build step -- vanilla Node.js
  • node server.js is the entire setup -- no npm install needed
  • process.env overrides available for Docker/containers
  • Health check endpoint (GET /health) for load balancers, container probes, and PQC gateway status checks (CORS configurable)
  • Zero external CDN dependencies -- fonts vendored locally, no requests to Google, Cloudflare, or any third-party on page load
  • PWA web app manifest with dynamic site name and theme colors
  • Automatic database schema migrations on startup
  • Startup invariant checks -- validates vault key, warns on default credentials/secrets, checks directory permissions

Installing a browser certificate

When Enforce mTLS is on, every browser session needs a client certificate signed by HermitStash's internal CA. Generate one from Admin → Browser Certificates → Issue + Download. The server returns a .p12 file (AES-256-CBC + SHA-512 MAC + 2M PBKDF2 iterations). Install it per your OS:

macOS -- double-click the .p12, Keychain Access opens, enter the password. The cert is installed to the login keychain. When you next visit HermitStash, Safari / Chrome / Firefox will offer it in the cert picker.

Windows -- double-click the .p12, Certificate Import Wizard opens. Choose Current User → Personal store, enter the password. Edge / Chrome pick it up automatically; Firefox uses its own store (see below).

Linux (Chrome/Chromium) -- use the NSS command-line tools:

pk12util -i hermitstash-browser-<cn>.p12 -d sql:$HOME/.pki/nssdb

Firefox (any OS) -- Preferences → Privacy & Security → Certificates → View Certificates → Your Certificates tab → Import → pick the .p12 → enter password.

After install, visit your HermitStash URL. The browser will prompt you to select a certificate (pick "HermitStash: <cn>"), then you'll land on the login page as normal.

Docker Deployment

Quick start (pre-built image)

docker pull ghcr.io/dotcoocoo/hermitstash:1
docker run -d --name hermitstash \
  -p 3000:3000 \
  -v ./data:/app/data \
  -v ./uploads:/app/uploads \
  --shm-size=256m \
  ghcr.io/dotcoocoo/hermitstash:1

Image tag scheme

Tags published per release:

| Tag | Kind | Behaviour |
|-----|------|-----------|
| :1 | major-version pin | Gets bug fixes and features within the major version (no breaking changes) |
| :1.7 | minor-version pin | Gets only patch updates within the minor series |
| :1.7.x | exact pin | Pin to a specific patch (:1.7.12, :1.7.13, etc.) -- never updates |
| :latest | rolling | Always the newest published image -- follows the default branch |
| :sha-<commit> | per-commit | Reproducible pin to the exact commit |

Pick the level of stability you want -- :1 is the recommended default for production deployments.
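To audit exactly which build a moving tag resolved to, you can record the image digest after pulling (a sketch using standard Docker commands; requires the Docker daemon and registry access):

```shell
# Resolve the moving :1 tag to an immutable digest you can pin or audit later
docker pull ghcr.io/dotcoocoo/hermitstash:1
docker inspect --format '{{index .RepoDigests 0}}' ghcr.io/dotcoocoo/hermitstash:1

# The printed ghcr.io/dotcoocoo/hermitstash@sha256:<digest> reference can then
# be used in place of the tag for a fully reproducible deployment
```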

Or with docker compose (using pre-built image):

services:
  hermitstash:
    image: ghcr.io/dotcoocoo/hermitstash:1
    init: true
    ports: ["3000:3000"]
    volumes:
      - ./data:/app/data
      - ./uploads:/app/uploads
    shm_size: 256m
    security_opt:
      - no-new-privileges:true
    environment:
      PUID: "99"                   # default 99 (Unraid). Set 1000 for standard Linux.
      PGID: "100"                  # default 100 (Unraid). Set 1000 for standard Linux.
      TZ: "Etc/UTC"                # e.g. America/New_York
      TRUST_PROXY: "true"
      RP_ORIGIN: ""              # https://your-domain.com
    restart: unless-stopped

Quick start (build from source)

git clone https://github.com/dotCooCoo/hermitstash.git
cd hermitstash
docker compose up -d

Uses cgr.dev/chainguard/node:latest-dev -- a Wolfi-based, glibc-dynamic Node image rebuilt continuously by Chainguard as upstream CVE fixes land, so the image's CVE count at any given digest is typically near zero. Node 24.8+ is still required for PQC (OpenSSL 3.5). No config files needed -- all dependencies are vendored, no npm install. Starts with defaults and generates the vault keypair on first run. Configure everything from the admin panel at /admin once running.

Image details

| Item | Details |
|------|---------|
| Base image | cgr.dev/chainguard/node:latest-dev (Wolfi, glibc -- continuously rebuilt for CVE fixes) |
| Node.js | 24.8+ (required for ML-KEM-1024, SLH-DSA-SHAKE-256f, ML-DSA-87 via OpenSSL 3.5) |
| User | Runs as hermit (non-root) via su-exec (installed at build time) -- PUID/PGID env vars remap UID/GID at runtime (default 99:100, standard Linux 1000:1000) |
| Tmpfs | HERMITSTASH_TMPDIR=/dev/shm -- plaintext DB held in memory, never on disk. Set shm_size: 256m in compose. Also consider CHUNK_SCRATCH_DIR=/dev/shm/hermitstash-chunks for RAM-backed chunked-upload staging. |
| Volumes | /app/data (encrypted DB, vault keys, TLS certs), /app/uploads (files if using local storage) |
| Port | 3000 (configurable via PORT env var) |
| Health check | Built-in: GET /health every 30s, 5s timeout, 3 retries, 30s start period |
| Security | init: true (tini as PID 1), no-new-privileges, cap_drop: ALL + minimal cap_add |
| Entrypoint | docker-entrypoint.sh -- remaps PUID/PGID, sets TZ/UMASK, chowns volumes, drops to hermit via su-exec |

docker-compose.yml

The included docker-compose.yml provides a production-ready starting point:

services:
  hermitstash:
    build: .
    init: true
    ports:
      - "3000:3000"
    volumes:
      - ./data:/app/data       # encrypted DB, vault keys, TLS certs
      - ./uploads:/app/uploads  # files (local storage only)
    shm_size: 256m              # /dev/shm for plaintext DB in memory
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETUID
      - SETGID
      - DAC_OVERRIDE
    environment:
      PUID: "99"                # default 99 (Unraid). Set 1000 for standard Linux.
      PGID: "100"               # default 100 (Unraid). Set 1000 for standard Linux.
      TZ: "Etc/UTC"             # e.g. America/New_York
      UMASK: "022"              # 755 dirs, 644 files. Use 000 for Unraid nobody:users sharing.
      NODE_ENV: production
      HERMITSTASH_TMPDIR: /dev/shm
      PORT: 3000
      TRUST_PROXY: "true"       # set if behind nginx/Cloudflare/Coolify
      RP_ORIGIN: ""             # https://your-domain.com (required for passkeys + HSTS)
    restart: unless-stopped

All other settings (auth, email, S3, branding) are best configured via the admin panel at /admin so credentials are vault-sealed in the encrypted database. Environment variables override DB settings and are visible in Admin > Settings > Environment tab.

Hardened / rootless variant: docker-compose.rootless.yml runs the container as UID 1000 with read_only: true and zero Linux capabilities. You pre-chown ./data and ./uploads on the host; the container never needs CHOWN/SETUID/SETGID/DAC_OVERRIDE. Use this if you're comfortable managing host UIDs.
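The host-side preparation for the rootless variant looks roughly like this (a sketch; run from the repo checkout, and adjust the UID if your compose file expects a different one):

```shell
# Pre-create the volumes and hand them to UID 1000 before first start;
# the rootless container has no CHOWN capability to fix ownership itself
mkdir -p data uploads
sudo chown -R 1000:1000 data uploads
docker compose -f docker-compose.rootless.yml up -d
```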

Coolify / managed Docker hosts

Works out of the box with Coolify, Portainer, CapRover, and similar platforms:

  1. Point the platform at the git repo (or Dockerfile)
  2. Set the RP_ORIGIN env var to your domain's full URL (e.g., https://app.hermitstash.com)
  3. Mount persistent volumes for /app/data and /app/uploads
  4. Set shm_size: 256m (or equivalent in the platform's container config)
  5. The built-in health check works with any orchestrator that supports HEALTHCHECK

TLS / HTTPS

The server can terminate TLS itself (for PQC enforcement) or sit behind a reverse proxy:

  • Behind Cloudflare/nginx (recommended): Set TRUST_PROXY=true. The proxy terminates TLS; the server runs HTTP internally. PQC TLS between browser and Cloudflare is handled by Cloudflare's edge. Set PQC_ENFORCE=false if the proxyβ†’server leg is plain HTTP.
  • Direct TLS (PQC enforced): Mount TLS certs at /app/data/tls/fullchain.pem and /app/data/tls/privkey.pem (or set TLS_CERT and TLS_KEY env vars). The PQC gate inspects ClientHello and rejects non-PQC connections. The server negotiates SecP384r1MLKEM1024 > X25519MLKEM768 > SecP256r1MLKEM768 (strongest available hybrid group). Certificate auto-reload on Let's Encrypt renewal (hourly file poll via fs.watchFile).

Persistent data

| Path | Contents | Backup? |
|------|----------|---------|
| /app/data/hermitstash.db.enc | Vault-encrypted SQLite database (users, files, settings, audit log) | Yes -- automated S3 backup available |
| /app/data/vault.key | ML-KEM-1024 + P-384 hybrid keypair (encrypts all DB fields). Plaintext JSON, mode 0600. Not present when passphrase protection is enabled -- see below. | Critical -- lose this and all sealed data is unrecoverable |
| /app/data/vault.key.sealed | Passphrase-wrapped vault key (only when VAULT_PASSPHRASE_MODE=required). | Critical -- needs both the file AND the passphrase to recover |
| /app/data/tls/ | TLS certificates (if using direct TLS) | Regenerated by Let's Encrypt |
| /app/uploads/ | Uploaded files (if using local storage; not needed with S3) | Optional -- files are re-uploadable |
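A minimal cold-backup sketch based on those paths (illustrative: assumes the default container name and compose-relative host paths, and that you archive vault.key.sealed instead when passphrase protection is enabled):

```shell
# Stop, archive the critical files, restart. Stopping first guarantees
# a consistent snapshot of the encrypted DB.
docker stop hermitstash
tar czf "hermitstash-backup-$(date +%F).tgz" \
  data/hermitstash.db.enc data/vault.key uploads/
docker start hermitstash
```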

Passphrase protection (opt-in)

By default, /app/data/vault.key is plaintext JSON protected only by filesystem permissions (0600). An attacker with a disk snapshot -- a stolen backup, a leaked volume dump, an errant rsync -- can recover the vault key and decrypt everything HermitStash has stored. This is the single largest limitation of the default configuration and is documented as L2 in docs/THREAT_MODEL.md.

v1.9+ adds an opt-in layer that wraps the vault key with XChaCha20-Poly1305 under an Argon2id-derived key. When enabled, the on-disk file is a ciphertext blob that requires the passphrase to unwrap.

Default behavior is unchanged -- if you don't enable this feature, your existing plaintext vault.key keeps working identically.

Enabling

  1. Store the passphrase somewhere safe first. Loss of the passphrase = loss of all encrypted data. HermitStash has no recovery mechanism. Use a password manager.

  2. Stop the server.

  3. Run the migration tool:

    # Interactive prompt (simplest for manual setup):
    docker exec -it hermitstash node scripts/vault-passphrase-setup.js
    
    # Or with an env-var passphrase:
    VAULT_PASSPHRASE='your-strong-passphrase' \
    node scripts/vault-passphrase-setup.js
    
    # Or from a file (Docker/K8s secrets idiom -- recommended for production):
    VAULT_PASSPHRASE_FILE=/run/secrets/vault-passphrase \
    node scripts/vault-passphrase-setup.js

    The tool wraps vault.key into vault.key.sealed, verifies round-trip, then atomically deletes the plaintext. Pass --keep-plaintext if you want to preserve the plaintext file as a manual rollback backup.

  4. Set VAULT_PASSPHRASE_MODE=required in the server's environment, plus one of:

    • VAULT_PASSPHRASE=<same-passphrase> (env var), or
    • VAULT_PASSPHRASE_FILE=/path/to/secret (file β€” preferred for orchestration).
  5. Restart the server. Expected startup lines:

    [vault] Unsealing vault.key.sealed...
    [vault] Unsealed successfully.
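The VAULT_PASSPHRASE_FILE route maps naturally onto Docker secrets. An illustrative compose fragment (the vault-passphrase secret name and source file are assumptions, not something HermitStash mandates):

```yaml
# compose fragment: deliver the passphrase as a Docker secret, which
# appears inside the container at /run/secrets/vault-passphrase
services:
  hermitstash:
    environment:
      VAULT_PASSPHRASE_MODE: "required"
      VAULT_PASSPHRASE_FILE: /run/secrets/vault-passphrase
    secrets:
      - vault-passphrase
secrets:
  vault-passphrase:
    file: ./vault-passphrase.txt   # keep this file out of your ./data backups
```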
    

Rotating the passphrase

Once protection is enabled, rotate the passphrase periodically or whenever exposure is suspected:

# Interactive (recommended for local ops):
docker exec -it hermitstash node scripts/vault-passphrase-rotate.js

# Scripted (secrets-manager / CI friendly):
VAULT_PASSPHRASE_OLD='current' VAULT_PASSPHRASE_NEW='new-value' \
  docker exec hermitstash node scripts/vault-passphrase-rotate.js

Rotation reads the current sealed file, unwraps with the OLD passphrase, re-wraps with the NEW one using a fresh Argon2id salt and XChaCha20 nonce, verifies round-trip in-process, and atomically replaces the sealed file. After success, update the server's VAULT_PASSPHRASE (or _FILE) to the new value and restart.

Important caveat -- what passphrase rotation does and does NOT protect:

Passphrase rotation protects the future, not the past. If an attacker already captured both the sealed file AND the old passphrase, they already have the vault key. Changing the passphrase after the fact doesn't undo that. For suspected vault-key compromise (not just passphrase compromise), use full vault key rotation (v1.9.3+) -- see below.

Full vault key rotation (v1.9.3+)

When you suspect the vault keypair itself has been compromised -- not just the passphrase -- run the offline full rotation tool. It generates a brand new ML-KEM-1024 + P-384 hybrid keypair and re-encrypts every vault-sealed value in the data directory: every sealed DB column, every per-file XChaCha20 key index, the SQLite file's wrapping key, and the vault key itself. File blobs in the upload directory are NOT re-encrypted -- their per-file keys are, so rotation completes in seconds even for multi-terabyte upload directories.

# stop the server first, then:
docker exec -it hermitstash node scripts/vault-key-rotate.js

# scripted (wrapped mode):
VAULT_PASSPHRASE_OLD='current' VAULT_PASSPHRASE_NEW='new' \
  docker exec hermitstash node scripts/vault-key-rotate.js

# dry-run -- exercise everything except the final swap:
docker exec hermitstash node scripts/vault-key-rotate.js --dry-run

The tool builds a complete rotated copy of data/ at data.rotating/, verifies it round-trips, then atomically swaps data/ → data.old.<ISO timestamp>/ and data.rotating/ → data/. A crash at any point leaves data/ either fully pre-rotation or fully post-rotation, never partial -- server boot recovery handles every interruption point.

After success:

  1. data.old.<ISO timestamp>/ is retained (delete with rm -rf once you've verified the rotated state)
  2. If the passphrase changed, update VAULT_PASSPHRASE / VAULT_PASSPHRASE_FILE to the new value
  3. Restart the server
  4. Verify access: docker exec hermitstash node scripts/vault-key-verify.js

Sessions are invalidated by the required server restart (sessions live in tmpfs by design).

Performance: ~500 sealed-column-values rotated per second per CPU core. A typical 100k-row DB with ~5 sealed columns per row (~500k values) takes ~15 minutes; 1M rows takes ~90 minutes. Bottleneck is per-value PQC envelope crypto, not I/O.

⚠ When to use:

  • Suspected vault-key compromise (sealed file + passphrase both leaked)
  • Annual key rotation per compliance policy
  • Investigating? Run --dry-run first; it does everything except the final swap

For just changing the passphrase (e.g. the old passphrase leaked but the sealed file did NOT), keep using vault-passphrase-rotate.js -- it's a faster operation that only re-wraps the same vault key.

PEM at-rest sealing for CA + TLS keys (v1.9.4+, opt-in)

v1.9.0 closed the disk-snapshot threat for data/vault.key. v1.9.4 extends the same protection to two other long-lived plaintext PEM files:

  • data/ca.key β€” the mTLS root of trust. Whoever reads it can mint trusted client certs forever; rotation never undoes that. This is the most important PEM to seal.
  • data/tls/privkey.pem β€” the TLS server private key. Lower long-term risk because it rotates via Let's Encrypt every 60-90 days, but a snapshot during the renewal window still enables MITM until cert expiry.

Both are independently opt-in via env vars:

| Env var | Default | When =required |
|---------|---------|----------------|
| CA_KEY_SEALED | auto (load whichever exists) | Refuse to operate on plaintext ca.key; require ca.key.sealed |
| TLS_KEY_SEALED | auto | Refuse to boot on plaintext tls/privkey.pem; require tls/privkey.pem.sealed |

CA key

# stop the server (or leave running -- the CA is loaded lazily on cert ops)
docker exec hermitstash node scripts/ca-key-seal.js
# then set CA_KEY_SEALED=required and restart

To revert: docker exec hermitstash node scripts/ca-key-unseal.js (do this BEFORE downgrading to v1.9.3 or earlier; older versions don't understand .sealed files).

TLS server key -- ACME-friendly

docker exec hermitstash node scripts/tls-key-seal.js
# set TLS_KEY_SEALED=required and restart

After enabling sealed mode, certbot / acme.sh hooks need no changes. The running server's cert watcher polls every minute and auto-seals plaintext privkey.pem files that ACME tools drop into tls/. Renewal flow: ACME writes plaintext → watcher detects within ~1 min → vault-seals → deletes plaintext → reloads setSecureContext.

For ACME hooks that need immediate effect (no 1-minute wait), call scripts/tls-key-seal.js --reload from your hook -- it sends SIGHUP to the running server via its PID file (data/hermitstash.pid), which triggers an immediate reload.
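Wired into certbot, that immediate-reload path looks roughly like this (a sketch; assumes a Dockerized server with the default hermitstash container name and certbot running on the host):

```shell
# Renew with an immediate seal + reload once new certs land.
# --deploy-hook runs only when a certificate was actually renewed.
certbot renew \
  --deploy-hook 'docker exec hermitstash node scripts/tls-key-seal.js --reload'
```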

To revert: scripts/tls-key-unseal.js.

What sealing does and does NOT protect

  • βœ… Closes the disk-snapshot gap for the CA + TLS keys (no longer recoverable from a stolen volume snapshot without the vault keypair)
  • βœ… ACME workflows continue without modification
  • ❌ Does NOT protect a running server (key is in process memory once unsealed, recoverable by any code-execution attacker β€” same N1 caveat as the vault key itself)
  • ❌ Does NOT survive vault key loss β€” these PEMs are now downstream of the vault. If the vault is unrecoverable, the CA is too (every existing client cert becomes invalid; users must re-enroll). Trade-off: same risk profile as every other vault-sealed value.

Security overview at a glance (v1.9.5+)

The admin Settings → Security tab shows the live status of every security-related setting in one view, with Enable/Disable buttons (v1.9.9+) for the three sealable layers.

  • Vault key passphrase wrapping (status of VAULT_PASSPHRASE_MODE + whether vault.key.sealed exists)
  • mTLS CA private key sealing (status of CA_KEY_SEALED + whether ca.key.sealed exists)
  • TLS server private key sealing (status of TLS_KEY_SEALED + whether privkey.pem.sealed exists)
  • mTLS enforcement strictness (ENFORCE_MTLS_STRICT mode and whether mTLS is currently active at TLS or app layer)
  • TLS / HTTPS configuration

Each row shows a βœ“ / Β· / ! indicator, the current effective value (masked for sensitive bits), a short explanation of what the setting does, and operator guidance for the right way to configure it.

Boot-time secrets must come from environment variables (or *_FILE variants for Docker secrets), never the admin UI. The Action buttons in v1.9.9+ create the sealed file artifacts but they CANNOT write your .env / Docker secret / Kubernetes Secret for you β€” those live on the host, outside the container's mount. The vault-passphrase-enable wizard surfaces a copyable env-var snippet tailored to your deployment style (Docker Compose with Secrets, Compose with .env, Kubernetes, or systemd) so you can paste it into your config before sealing.

How the Enable wizards work (v1.9.9+)

For vault key passphrase wrapping -- a 4-step wizard:

  1. Pick deployment style → wizard renders the right env-var snippet
  2. Copy the snippet, paste it into your deployment config, save it
  3. Three-checkbox confirmation: env vars added, passphrase stored safely, you understand loss-of-passphrase = loss-of-data
  4. Enter the passphrase + confirm → server seals vault.key → success message

After the wizard completes, the next server restart will use the configured env var to unwrap. If you skip the env-var setup step, the next restart will fail with "passphrase rejected" -- an explicit failure mode, not silent breakage. Run scripts/vault-passphrase-remove.js BEFORE restarting to revert if you're not ready.

For CA key sealing and TLS server key sealing -- single confirmation modals (no operator-side env config needed; the vault key is already in memory, so the dispatch picks up the sealed file automatically). TLS sealing also triggers an immediate cert reload via SIGHUP.

Two env-var conventions in use

  1. Tristate *_MODE / *_SEALED -- auto (default; load whichever exists) / required (refuse to operate on plaintext) / disabled (refuse to operate on sealed). Used by VAULT_PASSPHRASE_MODE, CA_KEY_SEALED, TLS_KEY_SEALED. Newer convention, introduced in v1.9.x.
  2. Binary ENFORCE_MTLS_STRICT -- true (hard enforcement at the TLS handshake) / false (escape hatch -- disables ALL mTLS; use only for locked-out recovery) / unset (soft enforcement at the app layer, default). Predates the tristate convention; kept for backwards compatibility. The Security tab labels both styles clearly so operators don't need to remember which is which.

Recommended secure defaults

For any deployment beyond a personal homelab where the host is fully trusted:

# .env or compose environment
VAULT_PASSPHRASE_MODE=required
VAULT_PASSPHRASE_FILE=/run/secrets/vault-passphrase   # Docker secrets
CA_KEY_SEALED=required
TLS_KEY_SEALED=required
ENFORCE_MTLS_STRICT=true                              # if mTLS is configured

Run the corresponding seal scripts once to migrate existing plaintext keys: vault-passphrase-setup.js, ca-key-seal.js, tls-key-seal.js.

Backup configuration -- v1.9.4 recovery for v1.9.3-affected deployments

A bug in v1.9.0-v1.9.3 silently blanked the saved backup passphrase whenever the operator edited any other backup setting (schedule, timezone, retention, S3 endpoint) without re-typing the passphrase: the form pre-populated the passphrase field as empty, submitted that empty value on save, and the backend overwrote the stored passphrase with it. Once blanked, the scheduled backup job silently skipped on every tick with only a stderr log line -- no audit log entry, no admin UI surface.

If you upgraded from v1.9.3 or earlier and your BACKUP_PASSPHRASE may have been silently cleared:

  1. Open the admin Backup section
  2. Re-enter your passphrase in the Backup Passphrase field
  3. Click Save Backup

After v1.9.4, the form pre-populates with bullets when a passphrase is saved, and the submission round-trip preserves the saved value when the bullets aren't replaced. The fix also adds a diagnostic block to the Backup History section that surfaces "Backups are silently skipping because: $reason" when scheduled backup is misconfigured -- instead of a bare "No backups found" with no clue why.

Reverting

Stop the server, then run:

VAULT_PASSPHRASE='your-passphrase' node scripts/vault-passphrase-remove.js

Unset VAULT_PASSPHRASE_MODE (or set to disabled), restart. The plaintext vault.key is restored byte-for-byte.

What this protects against

  • Stolen disk snapshots, leaked backups, accidental rsync exposure, cloud-provider storage compromise.

What this does NOT protect against

  • Live host compromise. Once the server has unsealed the key into memory, any attacker with code execution on the host recovers it. This is unavoidable for any at-rest encryption on a running service.
  • Leakage of the passphrase itself. If VAULT_PASSPHRASE sits in an .env file alongside the sealed vault, both can be stolen together. The VAULT_PASSPHRASE_FILE path is preferred because it lets you put the passphrase on a different filesystem or managed secret store.
  • Downgrade to v1.8.x β€” earlier versions don't understand the wrapped format. Run the removal tool first if you need to downgrade.

See docs/THREAT_MODEL.md Β§5.2 and Β§9 L2 for the full threat analysis.

Health check

GET /health returns { status, uptime, timestamp } -- works with Docker HEALTHCHECK, Kubernetes liveness probes, load balancers, and the PQC gateway status check.
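As a concrete wiring example, a Kubernetes probe stanza mirroring the built-in Docker healthcheck timings (30s period, 5s timeout, 3 retries, 30s start period) might look like this (illustrative fragment; deploy/kubernetes.yml already ships probes):

```yaml
# container spec fragment: liveness probe against the built-in endpoint
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30   # matches the 30s start period
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
```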

Reverse proxy

Drop-in configs for the three common proxies live in deploy/reverse-proxy/ -- Caddyfile, nginx.conf, and apache.conf. All three terminate TLS, forward /sync/ws WebSocket upgrades, match the 100MB upload limit, and pass X-Forwarded-* headers through for TRUST_PROXY=true.

The admin panel (Settings > Uploads) auto-detects your proxy and generates a ready-to-paste snippet reflecting your current MAX_FILE_SIZE if you'd rather tune body limits from the UI.

If you use the sync client's mTLS mode, see the reverse-proxy README -- TLS-terminating proxies strip the client cert, so you need TCP passthrough or a dedicated bypass port.
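The requirements the shipped configs satisfy can be sketched as an nginx fragment (illustrative only; the real deploy/reverse-proxy/nginx.conf is the reference, and server_name / upstream address are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name your-domain.com;            # placeholder
    client_max_body_size 100m;              # match the 100MB upload limit

    # WebSocket upgrade for the sync client
    location /sync/ws {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        # consumed by the server when TRUST_PROXY=true
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```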

S3 storage

Configure S3-compatible storage (AWS, MinIO, Cloudflare R2, DigitalOcean Spaces, Backblaze B2) from Admin > Settings > Storage tab. All credentials are vault-sealed and validated by the settings schema on save. For R2, set the endpoint to https://<account-id>.r2.cloudflarestorage.com and region to auto.
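Before pasting R2 credentials into the admin panel, you can sanity-check them from a shell with the AWS CLI (a sketch; the bucket name and account id are placeholders, and the endpoint/region values are the same ones the Storage tab expects):

```shell
# List the bucket through the R2 S3-compatible endpoint; success means the
# endpoint, region, and credentials are all usable as-is in the Storage tab
AWS_ACCESS_KEY_ID=your-key AWS_SECRET_ACCESS_KEY=your-secret \
aws s3 ls s3://your-bucket \
  --endpoint-url "https://<account-id>.r2.cloudflarestorage.com" \
  --region auto
```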

Other platforms

Coolify / Portainer: Paste ghcr.io/dotcoocoo/hermitstash:1 as the image. Set port 3000, mount /app/data and /app/uploads as persistent volumes, set shared memory to 256MB, add TRUST_PROXY=true and RP_ORIGIN=https://your-domain.com.

Unraid: Docker → Add Container → paste this template URL:

https://raw.githubusercontent.com/dotCooCoo/hermitstash/main/unraid-template.xml

Pre-fills icon, ports, volumes, and --shm-size=256m automatically.

Synology / QNAP: Container Manager → Registry → add ghcr.io, search dotcoocoo/hermitstash, download latest. Create the container with port 3000 and folder mappings for /app/data and /app/uploads. For --shm-size use SSH: docker run -d --shm-size=256m ...

Kubernetes:

curl -O https://raw.githubusercontent.com/dotCooCoo/hermitstash/main/deploy/kubernetes.yml
# Edit RP_ORIGIN, PVC sizes, and optional Ingress
kubectl apply -f kubernetes.yml
kubectl port-forward -n hermitstash svc/hermitstash 3000:3000

Includes: Namespace, PVCs, Deployment (liveness/readiness probes, resource limits, memory-backed /dev/shm), Service, and commented-out Ingress template. See deploy/kubernetes.yml.

TrueNAS SCALE: Apps → Custom App → image ghcr.io/dotcoocoo/hermitstash, tag latest. Add host path datasets for /app/data and /app/uploads. Add a shared memory volume: type emptyDir, medium Memory, size 256Mi, mounted at /dev/shm.

Ubuntu / Debian (native install):

curl -fsSL https://raw.githubusercontent.com/dotCooCoo/hermitstash/main/deploy/install.sh | sudo bash
# Or with auto-update enabled from the start:
curl -fsSL https://raw.githubusercontent.com/dotCooCoo/hermitstash/main/deploy/install.sh | sudo HERMITSTASH_AUTO_UPDATE=yes bash

Installs Node.js 24, creates a hermit system user, sets up tmpfs (256MB) for the in-memory database, and registers a systemd service using the checked-in deploy/hermitstash.service unit. Re-running the script git-pulls the latest code and restarts the service. See deploy/install.sh. Uninstall with deploy/uninstall.sh -- data is preserved unless you pass --purge.

Terraform (DigitalOcean):

cd deploy/terraform
cp terraform.tfvars.example terraform.tfvars  # edit with your API token + SSH key
terraform init && terraform apply

Provisions a droplet, configures a firewall (SSH + HTTP/S + 3000), and optional DNS. See deploy/terraform/.

Ansible:

ansible-playbook -i "your-server," deploy/ansible-playbook.yml

Supports both Docker (-e hermitstash_deploy=docker) and native (-e hermitstash_deploy=native) deployment modes. See deploy/ansible-playbook.yml.

Proxmox LXC:

# Run on the Proxmox host
bash deploy/proxmox-lxc.sh

Creates an unprivileged Debian 12 LXC container with Docker and HermitStash. Configurable via environment variables (CTID, MEMORY, DISK, IP). See deploy/proxmox-lxc.sh.

LXD / Incus:

# Run on the host (auto-detects LXD or Incus)
bash deploy/lxd-incus.sh

Creates a Debian 12 system container with Docker nested inside. Forwards port 3000 from host to container via proxy device. Configurable via environment variables (CONTAINER_NAME, MEMORY, DISK, CPU, PORT). See deploy/lxd-incus.sh.

Podman (RHEL / Fedora / Rocky / Alma):

bash deploy/podman.sh

Drop-in Docker alternative -- works rootless or rootful. Automatically generates a systemd unit via podman generate systemd (a user unit for rootless, a system unit for rootful). Volumes use :Z for SELinux relabeling. Pass AUTO_UPDATE=true to opt into podman-auto-update.timer. See deploy/podman.sh.

Systemd (manual): If you already have Node.js 24+ installed, copy deploy/hermitstash.service to /etc/systemd/system/ and adjust paths. The unit includes NoNewPrivileges, ProtectSystem=strict, PrivateTmp, and scoped ReadWritePaths.

Upgrading

# Back up the vault key (critical -- loss = all data unrecoverable)
cp data/vault.key data/vault.key.bak

# Pull new image and restart
docker pull ghcr.io/dotcoocoo/hermitstash:1
docker compose up -d

Database migrations run automatically on startup -- no manual steps needed. The server logs applied migrations at startup. If something goes wrong, restore vault.key and hermitstash.db.enc from your backup and restart with the previous image version.

Envelope migration (v1.9.16 → v1.9.18+) -- automatic at boot

Starting in v1.9.18, HermitStash auto-migrates its on-disk sealed-value envelope from 0xE1 to 0xE2 (NIST SP 800-56C r2 / RFC 9180 FixedInfo binding) at the first boot after upgrading from v1.9.17 or earlier. The vendored framework (blamejs) bumped the envelope magic in its 0.8.41 release and refuses legacy 0xE1 blobs on decrypt; the auto-migrate hook runs in-process during server startup and converts every vault:-prefixed sealed value before any other module reads it.

docker compose pull && docker compose up -d
# First boot logs:
#   [envelope-migrate] detected 0xE1 sealed data -- converting to 0xE2 ...
#   [envelope-migrate] [ok] api-encrypt-keypair.sealed
#   [envelope-migrate] [ok] users -- 1 rows migrated
#   [envelope-migrate] [ok] audit_log -- 91 rows migrated
#   ... (remaining tables) ...
#   [envelope-migrate] [ok] db.key.enc
#   [envelope-migrate] complete -- 2 sealed files + 130 DB rows migrated to 0xE2

Subsequent boots probe data/db.key.enc's envelope magic byte and skip the migration entirely. Migration scope: data/ca.key.sealed, data/tls/privkey.pem.sealed, data/api-encrypt-keypair.sealed, data/db.key.enc, and every vault:-prefixed cell in the encrypted DB. Cross-version-compatible formats (DB-file encryptPacked, per-file storage blobs, backup blobs) are not touched -- they read identically across versions.

For operators who prefer manual control or want to dry-run before committing, the standalone CLI at scripts/envelope-migrate-0xE1-to-0xE2.js is still shipped:

docker compose down
node scripts/envelope-migrate-0xE1-to-0xE2.js                # dry-run, no writes
node scripts/envelope-migrate-0xE1-to-0xE2.js --apply        # actual migration
docker compose up -d                                         # boot -- auto-migrate detects 0xE2, no-ops

Crash safety: marker file at data/envelope-migration.marker tracks completed steps. A killed migration resumes from the last completed step on next boot. The migration refuses to re-run on already-migrated data (which would otherwise trip lib/db's auto-regenerate fallback). Restore from backup before re-running if needed.

Container orchestrators with aggressive startup health-check timeouts: raise the startup probe timeout to 5 minutes for the v1.9.17 → v1.9.18 jump. Worst-case migration time is ~3 minutes for ~100k sealed cells; typical small deployments measure in seconds.
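In Kubernetes terms, that recommendation translates to something like (illustrative fragment; adjust to your existing probe conventions):

```yaml
# container spec fragment: give the one-time envelope migration up to 5 minutes
startupProbe:
  httpGet:
    path: /health
    port: 3000
  periodSeconds: 10
  failureThreshold: 30   # 10s x 30 = 5 minutes before the pod is restarted
```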

Auto-update (opt-in)

Auto-update is off by default on every deployment method. Turn it on when you want it.

Docker / Compose: a 3-line root cron is enough -- no extra container, no Docker socket to mount:

# /etc/cron.d/hermitstash-update  (root)
17 4 * * *  cd /opt/hermitstash && docker compose pull && docker compose up -d --remove-orphans

The ghcr.io/dotcoocoo/hermitstash:1 tag is a moving major-version pointer that every v1.* release updates, so this stays on v1.x forever. If you're on Coolify, Portainer, or CapRover, use the platform's built-in auto-deploy instead -- they already watch the registry and handle rollout without needing a cron.

Podman: pass AUTO_UPDATE=true to deploy/podman.sh. The script adds the io.containers.autoupdate=registry label and enables podman-auto-update.timer. Preview with podman auto-update --dry-run.

Native install (systemd): pass HERMITSTASH_AUTO_UPDATE=yes to the installer, or enable it later:

sudo systemctl enable --now hermitstash-update.timer

The timer fires deploy/update.sh daily with a randomized 4-hour delay. The script reads the installed version from package.json, fetches the latest matching release tag from the GitHub API, and performs a git fetch && git checkout vX.Y.Z followed by systemctl restart hermitstash. If /health doesn't come back within 60 seconds, it rolls back to the prior commit and restarts again. Auto-update stays on your current major version -- going from v1.x to v2.x is an operator-initiated action. Dry-run with sudo DRY_RUN=1 /opt/hermitstash/deploy/update.sh.

Kubernetes: there's no HermitStash-provided controller. Use your cluster's standard tooling -- ArgoCD, Flux, or Keel -- to watch the :1 image tag for updates.

Signed releases: the native updater has pluggable strategies -- UPDATE_STRATEGY=git today, with stubs for release-tarball (checksum-verified) and signed-tarball (P-384 ECDSA, signed against keys in /etc/hermitstash/trusted-keys.d/) so the transport can be hardened later without rewriting the timer, rollback, or health-check paths.

Maintenance mode

Toggle from Admin > Settings > Branding. Blocks all non-admin access and serves a 503 page. Admin routes, auth routes, and API keys with admin scope still work during maintenance.

API Keys

API keys enable programmatic access. Manage them in the admin panel under the API Keys collapsible section.

Creating a key

Generate a key from the admin panel or via the API:

curl -X POST https://your-domain/admin/apikeys/create \
  -H "Authorization: Bearer <admin-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"name": "CI Pipeline", "permissions": "upload"}'

Response (key shown once, then SHA3-hashed -- never retrievable):

{ "success": true, "key": "hs_a1b2c3d4e5f6...", "prefix": "hs_a1b2" }

Authentication

Include the key as a Bearer token:

Authorization: Bearer hs_a1b2c3d4e5f6...

Permission scopes

| Scope | Access |
|-------|--------|
| upload | Create bundles, upload files via /drop endpoints |
| read | List and download files, view bundles |
| admin | Full admin access (settings, users, webhooks, keys) |
| webhook | Manage webhooks |

Upload endpoints

Public upload endpoints accept API key authentication. When authenticated, uploads are assigned to the key owner's account.

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /.well-known/blamejs-pubkey | Server keypair for the blamejs apiEncrypt envelope. Plain JSON {publicKey, ecPublicKey, kemId, cipherId, kdfId}. No auth, no encryption. Cache at the client; re-fetch only when the server keypair rotates |
| POST | /drop/init | Initialize a bundle. Blamejs-encrypted. Decrypted body: { uploaderName, uploaderEmail, password, message, bundleName, expiryDays, fileCount, ... }. Decrypted response: { bundleId, shareId, finalizeToken } |
| POST | /drop/file/:bundleId | Upload a file (multipart/form-data, field: file). Body bypasses encryption (multipart, not JSON). Response is plaintext JSON for Bearer clients |
| POST | /drop/chunk/:bundleId | Upload a chunk for large files (multipart, fields: chunk, filename, chunkIndex, totalChunks). Same encryption shape as /drop/file/:bundleId |
| POST | /drop/finalize/:bundleId | Finalize the bundle. Blamejs-encrypted. Decrypted body: { finalizeToken }. Decrypted response: { success, shareId, shareUrl, emailSent } |
| GET | /b/:shareId | Bundle metadata (with Accept: application/json). Plaintext for Bearer clients |
| POST | /sync/rename | Sync file rename. Blamejs-encrypted. Decrypted body: { bundleId, oldRelativePath, newRelativePath } |
| DELETE | /files/:fileId | Sync file delete. Plaintext request and response. Sync-guards enforce scope + cert binding + bundle ownership |

Example: programmatic upload

POST /drop/init and POST /drop/finalize/:bundleId require the blamejs apiEncrypt envelope (_ek / _ct / _ts / _nonce); plain-JSON callers receive 400 encrypted-payload-required. Use a blamejs-aware HTTP client -- the easiest is b.httpClient.encrypted({ pubkey, baseUrl, keying: "per-session" }) from a blamejs-bundled app, fetching the pubkey once from GET /.well-known/blamejs-pubkey. Multipart uploads (/drop/file/:bundleId, /drop/chunk/:bundleId) bypass the encryption layer for body content; the file bytes are application-encrypted at rest after upload.

// minimal Node example using vendored blamejs
// (await is not valid at top level in CommonJS, so the flow lives in an async main)
const b = require("./vendor/blamejs");

async function main() {
  const pubkey = await fetch("https://your-domain/.well-known/blamejs-pubkey")
    .then(r => r.json());
  const enc = b.httpClient.encrypted({
    pubkey, baseUrl: "https://your-domain", keying: "per-session",
    headers: { Authorization: "Bearer " + process.env.API_KEY },
  });

  const init = await enc.request({ method: "POST", path: "/drop/init",
    body: { password: "", message: "Automated upload", expiryDays: 7 } });
  const { bundleId, finalizeToken, shareId } = init.body;

  // Multipart upload -- plain HTTPS, response is plaintext for Bearer clients
  // (omitted for brevity; field name is "file")

  await enc.request({ method: "POST", path: "/drop/finalize/" + bundleId,
    body: { finalizeToken } });

  console.log("Share link: https://your-domain/b/" + shareId);
}

main().catch(console.error);

Admin endpoints

All require admin scope:

| Endpoint | Method | Description |
| --- | --- | --- |
| /admin/apikeys/api | GET | List all API keys (hashes hidden) |
| /admin/apikeys/create | POST | Generate a new key. Body: { "name": "...", "permissions": "upload" } |
| /admin/apikeys/:id/revoke | POST | Revoke a key permanently |
| /admin/settings | GET | Get all settings (sensitive values masked) |
| /admin/settings | POST | Update settings. Body: { "siteName": "...", ... } |
| /admin/environment | GET | Runtime info (Node.js, OpenSSL, Docker, env overrides) |

Webhooks

Webhooks send signed HTTP POST requests when events occur. Manage them in the admin panel under the Webhooks collapsible section.

Creating a webhook

curl -X POST https://your-domain/admin/webhooks/create \
  -H "Authorization: Bearer <admin-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/hook", "events": "*"}'

Response (secret shown once):

{ "success": true, "secret": "a1b2c3d4..." }

Events

| Event | Trigger | Payload |
| --- | --- | --- |
| bundle_finalized | Bundle upload completed and finalized | { shareId, uploaderName, files, size } |

Event filter: set to * for all events, or a specific event name. Additional events may be added in future releases.

Payload format

{
  "event": "bundle_finalized",
  "data": {
    "shareId": "a1b2c3d4e5f6...",
    "uploaderName": "Anonymous",
    "files": 3,
    "size": 1048576
  },
  "timestamp": "2026-04-09T12:00:00.000Z"
}

Signature verification

Every webhook request includes an X-Webhook-Signature header containing an HMAC-SHA3-512 hex digest of the raw JSON body, signed with the webhook secret:

X-Webhook-Signature: a1b2c3d4e5f6...

Verify in your handler:

const crypto = require("crypto");

function verifyWebhook(body, signature, secret) {
  const expected = crypto
    .createHmac("sha3-512", secret)
    .update(body)
    .digest("hex");
  const sigBuf = Buffer.from(signature, "hex");
  const expBuf = Buffer.from(expected, "hex");
  // timingSafeEqual throws on length mismatch, so compare lengths first
  return sigBuf.length === expBuf.length && crypto.timingSafeEqual(sigBuf, expBuf);
}

Or in Python:

import hmac, hashlib

def verify_webhook(body: bytes, signature: str, secret: str) -> bool:
    expected = hmac.new(secret.encode(), body, hashlib.sha3_512).hexdigest()
    return hmac.compare_digest(signature, expected)

SSRF protection

Webhook URLs are validated to block:

  • Private IP ranges (127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)
  • Link-local addresses (169.254.0.0/16, fe80::/10)
  • IPv6 private ranges (fc00::/7, ::1)
  • Cloud metadata endpoints (169.254.169.254)

In addition, non-HTTPS schemes are rejected in production.

Critical Files

All in the data/ directory (gitignored):

| File | What it is | Lose it? |
| --- | --- | --- |
| data/vault.key | ML-KEM-1024 + P-384 hybrid keypair | All encrypted data permanently unrecoverable |
| data/db.key.enc | DB file encryption key (vault-sealed) | Database file unreadable |
| data/hermitstash.db.enc | Encrypted database at rest | All settings, users, audit logs lost |

Back up data/vault.key. This is the root of the entire encryption chain. Every sealed value, every encrypted file, every protected key traces back to this keypair. It cannot be regenerated.

Vendored Dependencies

All runtime dependencies are committed to the repo -- no npm install needed. As of v1.9.12, every server-side crypto / identity dependency is vendored as a single framework, blamejs, at lib/vendor/blamejs/. Browser-side bundles continue to ship individually until blamejs grows browser builds.

Managed via scripts/vendor-update.sh:

./scripts/vendor-update.sh blamejs                # refresh the framework bundle
./scripts/vendor-update.sh --check                # see what's outdated (browser bundles)
./scripts/vendor-update.sh --diff @noble/ciphers  # see changelog (browser bundles)

| Vendored | Version | Author | Purpose |
| --- | --- | --- | --- |
| blamejs | 0.8.0 | blamejs contributors (Apache-2.0) | Server-side framework: XChaCha20-Poly1305, ML-KEM-1024, ML-DSA-87, SLH-DSA-SHAKE-256f, Argon2id (Node 24+ built-in), WebAuthn, mTLS CA, envelope versioning, audit chain, etc. Bundles every server-side crypto/identity dep transitively (see lib/vendor/MANIFEST.json packages.blamejs.components) |
| @noble/ciphers (browser only) | 2.1.1 | Paul Miller (MIT) | XChaCha20-Poly1305 in the browser vault + outbox flows |
| @noble/hashes (browser only) | 2.0.1 | Paul Miller (MIT) | SHAKE256 KDF in the browser |
| @noble/post-quantum (browser only) | 0.6.0 | Paul Miller (MIT) | ML-KEM-1024 in the browser vault flow |

blamejs internally vendors @noble/ciphers, @noble/post-quantum, @simplewebauthn/server, @peculiar/x509 + pkijs (peculiar-pki bundle), and the SecLists top-10000 password list -- each tracked under packages.blamejs.components in lib/vendor/MANIFEST.json so Trivy / Grype can flag CVEs against any nested dep.

Argon2id derivation runs through Node 24+'s built-in crypto.argon2 API via blamejs's lib/argon2-builtin.js wrapper -- the @ranisalt/argon2 native binding (and its 8-platform prebuilds) is no longer vendored.

These libraries are exceptional work. HermitStash wouldn't exist without them.

Architecture

~180 JS files, 25 HTML templates, 21 database tables. Small files, one job each.

server.js             Bootstrap, middleware, scheduled tasks, default accounts
lib/
  crypto.js           PQC crypto: ML-KEM-1024+P-384, XChaCha20, SHAKE256,
                      SLH-DSA-SHAKE-256f (default sig), ML-DSA-87 (legacy),
                      envelope versioning
  vault.js            Hybrid keypair management, seal/unseal, auto key upgrade
  field-crypto.js     FIELD_SCHEMA: auto seal/unseal/hash for all DB fields
  db.js               SQLite + auto field crypto + DB file encryption
  api-crypto.js       API payload XChaCha20-Poly1305 encrypt/decrypt
  session.js          Hybrid KEM encrypted cookies, LRU eviction
  storage.js          Local/S3 + XChaCha20-Poly1305 file encryption + pre-signed URLs
                      saveRaw/getRawBuffer for pre-encrypted data (vault files)
  cert-utils.js       Certificate fingerprint hashing + indexed revocation checks
  config.js           Settings from encrypted DB, env fallback, onReset registry
  settings-schema.js  Type-safe settings sanitization + validation (77 settings)
  audit.js            Audit logging with auto-sealed entries
  rate-limit.js       Per-IP rate limiting with proxy validation
  ip-quota.js         Per-IP storage quota for anonymous uploads
  email.js            SMTP + Resend API with dual failover + quota tracking
  router.js           HTTP server, routing, pre-compiled patterns
  multipart.js        Multipart + JSON body parser (shared accumulator)
  template.js         Custom template engine with caching
  sanitize.js         Filename sanitization + HTML escaping
  sanitize-svg.js     SVG sanitizer (strips scripts, events, dangerous tags)
  totp.js             TOTP generation/verification (HMAC-SHA-512 default,
                      legacy HMAC-SHA-1 verification retained), backup codes
  google-auth.js      Google OAuth2 (OpenID Connect, CSRF state)
  constants.js        Paths, versions, theme, hash prefixes, time constants
  zip.js              ZIP writer with Deflate compression
  expiry.js           File expiry cleanup
  scheduler.js        Task scheduler with watchdog timeouts
  webhook.js          Webhook dispatch queue
  pqc-gate.js         ClientHello PQC group inspection at TCP level
  pqc-agent.js        PQC-only outbound HTTPS agent
  vendor/blamejs/     Vendored framework (server-side crypto + identity primitives)

app/
  bootstrap/          Startup invariant checks
  data/               Repositories + migration runner
  domain/             Services (auth, uploads, teams, admin, webhooks, email)
  http/               Request validators (upload magic bytes, auth, admin)
  security/           CSRF, CORS, SSRF, scope, origin policies
  domain/uploads/     Shared upload handler, bundle service, chunk service
  jobs/               Background jobs (expiry, audit retention, webhook dispatch)
  shared/             Errors, logger, validation helpers, filename sanitization

scripts/              vendor-update.sh, vendor-font.js, sync-to-public.sh
routes/               19 route files (includes stash.js for Customer Stash)
middleware/           15 files (auth, CORS, CSRF, API encryption, security headers, bot guard, require-access, require-admin, require-auth)
views/                25 templates
public/               CSS, JS, logos, icons, vendored fonts

Contributing

I want to be straightforward about this: I'm not currently accepting code contributions, and I want to explain why rather than just saying no.

HermitStash is a security-focused project maintained by one person. Reviewing external code contributions to a cryptographic system is something I don't feel I can do responsibly right now -- I'm still learning, and I'd rather not merge code I can't fully evaluate myself. Accepting PRs would mean either rubber-stamping changes I don't understand (bad) or asking contributors to wait indefinitely while I figure it out (also bad). The honest answer is that I'm not set up for it yet.

That said, there are a lot of ways to help that I genuinely welcome:

  • Bug reports. If something doesn't work, or works in a way that surprises you, please open an issue. Steps to reproduce help a lot.
  • Security findings. If you spot a cryptographic issue, a misuse of a primitive, or anything that contradicts a security claim in the README, please report it privately -- see SECURITY.md for how.
  • Feature requests. Open an issue describing the use case. I can't promise I'll build it, but I want to hear what people would find useful.
  • Documentation feedback. If something in the README is unclear, wrong, or missing, an issue is great. Documentation issues are some of the most useful kinds of feedback I get.
  • Questions. If you're trying to use HermitStash and something isn't clear, asking is welcome.

If you've built something on top of HermitStash, or you're running it somewhere interesting, I'd love to hear about that too -- feel free to open an issue just to say hi.

This may change in the future. If HermitStash grows to a point where I can responsibly review external code, I'll update this section. Until then: thank you for understanding, and thank you for being interested enough to consider contributing in the first place.

License

AGPL-3.0-or-later

A final note

If you've read this far -- thank you. Building and sharing HermitStash has been one of the most rewarding things I've worked on, and the fact that you took the time to look at it means a lot.

If HermitStash has been useful to you and you'd like to buy me a coffee, you can do so at ko-fi.com/dotcoocoo. It's never expected, always appreciated.
