Releases: ipfs/kubo

v0.41.0

Released 23 Apr 23:06 (commit d719fb8)

Note

This release was brought to you by the Shipyard team.

Overview

🔦 Highlights

🗑️ Faster Provide Queue Disk Reclamation

Nodes with a significant amount of data and DHT provide sweep enabled (Provide.DHT.SweepEnabled, the default since Kubo 0.39) could see their datastore/ directory grow continuously. Each reprovide cycle rewrote the provider keystore inside the shared repo datastore, generating tombstones faster than the storage engine could compact them, and in the default configuration Kubo was slow to reclaim this space.

The provider keystore now lives in a dedicated datastore under $IPFS_PATH/provider-keystore/. After each reprovide cycle the old datastore is removed from disk entirely, so space is reclaimed immediately regardless of storage backend.

On first start after upgrading, stale keystore data is cleaned up from the shared datastore automatically.

To learn more, see kubo#11096, kubo#11198, and go-libp2p-kad-dht#1233.
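To see what the new layout uses on disk, you can inspect the directory directly (a quick sketch; assumes IPFS_PATH points at your repo, defaulting to ~/.ipfs):

$ du -sh "${IPFS_PATH:-$HOME/.ipfs}/provider-keystore/"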

✨ New ipfs cid inspect command

A new subcommand that breaks a CID down into its components. It works offline and supports --enc=json.

$ ipfs cid inspect bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi
CID:        bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi
Version:    1
Multibase:  base32 (b)
Multicodec: dag-pb (0x70)
Multihash:  sha2-256 (0x12)
  Length:   32 bytes
  Digest:   c3c4733ec8affd06cf9e9ff50ffc6bcd2ec85a6170004bb709669c31de94391a
CIDv0:      QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
CIDv1:      bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi

See ipfs cid --help for all CID-related commands.

🔤 --cid-base fixes across all commands

--cid-base is now respected by every command that outputs CIDs. Previously block stat, block put, block rm, dag stat, refs local, pin remote, and files chcid ignored the flag.

CIDv0 values are now auto-upgraded to CIDv1 when a non-base58btc base is requested, because CIDv0 can only be represented in base58btc.
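For example, the previously affected commands now honor the flag like the rest of the CLI (CID reused from the inspect example above; base choices illustrative):

$ ipfs block stat --cid-base=base36 bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi
$ ipfs refs local --cid-base=base32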

🔄 Built-in ipfs update command

Kubo now ships with a built-in ipfs update command that downloads release binaries from GitHub and swaps the current one in place. It supersedes the external ipfs-update tool, deprecated since v0.37.

$ ipfs update check
Update available: 0.40.0 -> 0.41.0
Run 'ipfs update install' to install the latest version.

See ipfs update --help for the available subcommands (check, versions, install, revert, clean).
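A typical flow with the new subcommands (a sketch; exact behaviors per ipfs update --help):

$ ipfs update check     # compare the running version against the latest release
$ ipfs update install   # download and swap in the latest binary
$ ipfs update revert    # roll back to the previously installed binary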

🖥️ WebUI Improvements

IPFS Web UI has been updated to v4.12.0.

IPv6 peer geolocation and Peers screen optimizations

The Peers screen now resolves IPv6 addresses to geographic locations, and the geolocation database has been updated to GeoLite2-City-CSV_20260220. (ipfs-geoip v9.3.0)

Peer locations load faster thanks to UX optimizations in the underlying ipfs-geoip library.

🔧 Correct provider addresses for custom HTTP routing

Nodes using custom routing (Routing.Type=custom) with IPIP-526 could end up publishing unresolved 0.0.0.0 addresses in provider records. Addresses are now resolved at provide-time, and when AutoNAT V2 has confirmed publicly reachable addresses, those are preferred automatically. See #11213.

🔀 Provide.Strategy modifiers: +unique and +entities

Experimental opt-in optimizations for content providers with large repositories where multiple recursive pins share most of their DAG structure (e.g. append-only datasets, versioned archives like dist.ipfs.tech).

  • +unique: bloom filter dedup across recursive pins. Shared subtrees are traversed only once per reprovide cycle instead of once per pin, cutting I/O from O(pins * blocks) to O(unique blocks) at ~4 bytes/CID.
  • +entities: announces only entity roots (files, directories, HAMT shards), skipping internal file chunks. Far fewer DHT provider records while keeping all content discoverable by file/directory CID. Implies +unique.

Example: Provide.Strategy = "pinned+mfs+entities"

The default Provide.Strategy=all is unchanged. See Provide.Strategy for configuration details and caveats.

The bloom filter precision is tunable via Provide.BloomFPRate (default ~1 false positive per 4.75M lookups, ~4 bytes per CID).
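A minimal opt-in sketch, assuming the standard ipfs config syntax (restart the daemon afterwards):

$ ipfs config Provide.Strategy "pinned+mfs+entities"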

📌 pin add and pin update now fast-provide root CID

ipfs pin add and ipfs pin update announce the pinned root CID to the routing system immediately after pinning, same as ipfs add and ipfs dag import. This matters for selective strategies like pinned+mfs, where previously the root CID was not announced until the next reprovide cycle (see Provide.DHT.Interval). With the default Provide.Strategy=all, the blockstore already provides every block on write, so this is a no-op.

Both commands now accept --fast-provide-root, --fast-provide-dag, and --fast-provide-wait flags, matching ipfs add and ipfs dag import. See Import for defaults and configuration.
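For example (CID placeholder illustrative; flags per ipfs pin add --help):

$ ipfs pin add --fast-provide-wait <cid>   # block until the root provide completes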

🌳 New --fast-provide-dag flag for fine-tuned provide control

Users with a custom Provide.Strategy (e.g. pinned, pinned+mfs+entities) now have finer control over which CIDs are announced immediately on ipfs add, ipfs dag import, ipfs pin add, and ipfs pin update.

By default, only the root CID is provided right away (--fast-provide-root=true). Child blocks are deferred until the next reprovide cycle. This keeps bulk imports fast and avoids overwhelming online nodes with provide traffic.

Pass --fast-provide-dag=true (or set Import.FastProvideDAG) to provide the full DAG immediately during add, using the active Provide.Strategy to determine scope.

Provide.Strategy=all (default) is unaffected. It provides every block at the blockstore level regardless of this flag.
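A sketch of opting in per-command or persistently (directory name illustrative; assumes standard ipfs config syntax):

$ ipfs add -r --fast-provide-dag=true dataset/    # provide the full DAG during this add
$ ipfs config --json Import.FastProvideDAG true   # make it the default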

Note

Faster default imports for Provide.Strategy=pinned and pinned+mfs users. Previously, ipfs add --pin eagerly announced every block of newly added content as it was written, through an internal DAG service wrapper. This release routes add-time providing through the new --fast-provide-dag code path, which defaults to false. The result is faster bulk imports and less provide traffic during add: only the root CID is announced immediately (via Import.FastProvideRoot), and child blocks are picked up by the next reprovide cycle (see Provide.DHT.Interval, default 22...

v0.41.0-rc2 (Pre-release)

Released 22 Apr 00:56

Note

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.41.md
Release status: #11082

v0.41.0-rc1 (Pre-release)

Released 14 Apr 15:20

Note

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.41.md
Release status: #11082

v0.40.1

Released 27 Feb 17:58 (commit 39f8a65)

Note

This patch release was brought to you by the Shipyard team.

This is a Windows bugfix release. If you use Linux or macOS, v0.40.0 should be fine.

🚒 Bugfix for Windows

If you run Kubo on Windows, v0.40.0 can crash after running for a while. The daemon starts fine and works normally at first, but eventually hits memory corruption in Go's network I/O layer and dies. This is likely caused by a known upstream Go 1.26 regression in overlapped I/O handling (go#77142, #11214).

This patch release downgrades the Go toolchain from 1.26 to 1.25, which does not have this bug. If you are running Kubo on Windows, upgrade to v0.40.1. We will switch back to Go 1.26.x once the upstream fix lands.

📝 Changelog

Full Changelog: v0.40.1

See v0.40.0 for full list of changes since v0.39.x.

v0.40.0

Released 25 Feb 23:16 (commit 882b7d2)

Note

This release was brought to you by the Shipyard team.

🔦 Highlights

This release brings reproducible file imports (CID Profiles), cleanup of interrupted flatfs operations, better connectivity diagnostics, and improved gateway behavior. It also ships with Go 1.26, lowering memory usage and GC overhead across the board.

🔢 IPIP-499: UnixFS CID Profiles

CID Profiles are presets that pin down how files get split into blocks and organized into directories, so you get the same CID for the same data across different software or versions. Defined in IPIP-499.

New configuration profiles

  • unixfs-v1-2025: modern CIDv1 profile with improved defaults
  • unixfs-v0-2015 (alias legacy-cid-v0): best-effort legacy CIDv0 behavior

Apply with: ipfs config profile apply unixfs-v1-2025

The test-cid-v1 and test-cid-v1-wide profiles have been removed. Use unixfs-v1-2025 or manually set specific Import.* settings instead.

New Import.* options

  • Import.UnixFSHAMTDirectorySizeEstimation: estimation mode (links, block, or disabled)
  • Import.UnixFSDAGLayout: DAG layout (balanced or trickle)
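A configuration sketch using the values listed above (assumes standard ipfs config syntax):

$ ipfs config Import.UnixFSDAGLayout trickle
$ ipfs config Import.UnixFSHAMTDirectorySizeEstimation links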

New ipfs add CLI flags

  • --dereference-symlinks resolves all symlinks to their target content, replacing the deprecated --dereference-args which only resolved CLI argument symlinks
  • --empty-dirs / -E controls inclusion of empty directories (default: true)
  • --hidden / -H includes hidden files (default: false)
  • the implicit default for --trickle can now be adjusted via Import.UnixFSDAGLayout
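Combining the new flags in one invocation (a sketch; directory path illustrative):

$ ipfs add -r --hidden --empty-dirs=false --dereference-symlinks ./site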

ipfs files write fix for CIDv1 directories

When writing to MFS directories that use CIDv1 (via --cid-version=1 or ipfs files chcid), single-block files now produce raw block CIDs (like bafkrei...), matching the behavior of ipfs add --raw-leaves. Previously, MFS would wrap single-block files in dag-pb even when raw leaves were enabled. CIDv0 directories continue to use dag-pb.
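A quick way to observe the fix (a sketch; MFS path and content illustrative):

$ ipfs files mkdir --cid-version 1 /example
$ echo hello | ipfs files write --create /example/hello.txt
$ ipfs files stat --hash /example/hello.txt   # now a raw-leaf CID (bafkrei...)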

Block size limit raised to 2MiB

ipfs block put, ipfs dag put, and ipfs dag import now accept blocks up to 2MiB without --allow-big-block, matching the bitswap spec. The previous 1MiB limit was too restrictive and broke ipfs dag import of 1MiB-chunked non-raw-leaf data (protobuf wrapping pushes blocks slightly over 1MiB). The max --chunker value for ipfs add is 2MiB - 256 bytes to leave room for protobuf framing. IPIP-499 profiles use lower chunk sizes (256KiB and 1MiB) and are not affected.
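For example, the new chunker maximum is 2MiB minus 256 bytes, i.e. 2097152 - 256 = 2096896 bytes (file name illustrative):

$ ipfs add --chunker=size-2096896 big-file.bin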

HAMT Threshold Fix

HAMT directory sharding threshold changed from >= to > to match the Go docs and JS implementation (ipfs/boxo@6707376). A directory exactly at 256 KiB now stays as a basic directory instead of converting to HAMT. This is a theoretical breaking change, but unlikely to impact real-world users as it requires a directory to be exactly at the threshold boundary. If you depend on the old behavior, adjust Import.UnixFSHAMTShardingSize to be 1 byte lower.
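If you depend on the old behavior, a sketch of the suggested adjustment (assumes the 256 KiB default, 262144 bytes; check your config for the accepted value format):

$ ipfs config --json Import.UnixFSHAMTShardingSize 262143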

🧹 Automatic cleanup of interrupted imports

If you cancel ipfs add or ipfs dag import mid-operation, Kubo now automatically cleans up incomplete data on the next daemon start. Previously, interrupted imports left orphan blocks in your repository that were difficult to identify and could only be removed by running explicit garbage collection.

Batch operations also use less memory now. Block data is written to disk immediately rather than held in RAM until the batch commits.

Under the hood, the block storage layer (flatfs) was rewritten to use atomic batch operations via a temporary staging directory. See go-ds-flatfs#142 for details.

🌍 Light clients can now use your node for delegated routing

The Routing V1 HTTP API is now exposed by default at http://127.0.0.1:8080/routing/v1. This allows light clients in browsers to use Kubo Gateway as a delegated routing backend instead of running a full DHT client. Support for IPIP-476: Delegated Routing DHT Closest Peers API is included. Can be disabled via Gateway.ExposeRoutingAPI.
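A sketch of querying the endpoint locally (CID reused from elsewhere in these notes; path shape per the Routing V1 specification):

$ curl "http://127.0.0.1:8080/routing/v1/providers/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"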

📊 See total size when pinning

ipfs pin add --progress now shows the total size of the pinned DAG as it fetches blocks.

Example output:

Fetched/Processed 336 nodes (83 MB)

🔀 IPIP-523: ?format= takes precedence over Accept header

The ?format= URL query parameter now always wins over the Accept header (IPIP-523), giving you deterministic HTTP caching and protecting against CDN cache-key collisions. Browsers can also use ?format= reliably even when they send Accept headers with specific content types.

The only breaking change is for edge cases where a client sends both a specific Accept header and a different ?format= value for an explicitly supported format (tar, raw, car, dag-json, dag-cbor, etc.). Previously Accept would win. Now ?format= always wins.
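For example, when both are present the query parameter now determines the response format (CID placeholder illustrative):

$ curl -H "Accept: application/vnd.ipld.dag-json" "http://127.0.0.1:8080/ipfs/<cid>?format=car"   # returns CAR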

🚫 IPIP-524: Gateway codec conversion disabled by default

Gateways no longer convert between codecs by default (IPIP-524). This removes gateways from a gatekeeping role: clients can adopt new codecs immediately without waiting for gateway operator updates. Requests for a format that differs from the block's codec now return 406 Not Acceptable.

Migration: Clients should fetch raw blocks (?format=raw or Accept: application/vnd.ipld.raw) and convert client-side using libraries like @helia/verified-fetch.

Set Gateway.AllowCodecConversion to true to restore the previous behavior.
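Sketches of both paths (CID placeholder illustrative; assumes standard ipfs config syntax):

$ curl "http://127.0.0.1:8080/ipfs/<cid>?format=raw" -o block.bin   # preferred: fetch raw, convert client-side
$ ipfs config --json Gateway.AllowCodecConversion true              # opt back into server-side conversion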

✅ More reliable IPNS over PubSub

The IPNS over PubSub implementation in Kubo is now more reliable. Duplicate messages are rejected even in large networks where messages may cycle back after the in-memory cache expires.

Kubo now persists the maximum seen sequence number per peer to the datastore (go-libp2p-pubsub#BasicSeqnoValidator), providing stronger duplicate detection that survives node restarts. This addresses message flooding issues reported in #9665.

IPNS over PubSub is opt-in via Ipns.UsePubsub. Kubo's pubsub is optimized for the IPNS use case. For custom pubsub applications requiring different va...

v0.40.0-rc2 (Pre-release)

Released 20 Feb 19:15

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.40.md
Release status: #11008

v0.40.0-rc1 (Pre-release)

Released 12 Feb 22:13

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.40.md
Release status: #11008

v0.39.0

Released 27 Nov 03:47 (commit 2896aed)

Note

This release was brought to you by the Shipyard team.

Overview

This release is an important step toward solving the DHT bottleneck for self-hosting IPFS on consumer hardware and home networks. The DHT sweep provider (now default) announces your content to the network without traffic spikes that overwhelm residential connections. Automatic UPnP recovery means your node stays reachable after router restarts without manual intervention.

New content becomes findable immediately after ipfs add. The provider system persists state across restarts, alerts you when falling behind, and exposes detailed stats for monitoring. This release also finalizes the deprecation of the legacy go-ipfs name.

🔦 Highlights

🎯 DHT Sweep provider is now the default

The Amino DHT Sweep provider system, introduced as experimental in v0.38, is now enabled by default (Provide.DHT.SweepEnabled=true).

What this means: All nodes now benefit from efficient keyspace-sweeping content announcements that reduce memory overhead and create predictable network patterns, especially for nodes providing large content collections.

Migration: The transition is automatic on upgrade. Your existing configuration is preserved:

  • If you explicitly set Provide.DHT.SweepEnabled=false in v0.38, you'll continue using the legacy provider
  • If you were using the default settings, you'll automatically get the sweep provider
  • To opt out and return to legacy behavior: ipfs config --json Provide.DHT.SweepEnabled false
  • Providers with medium to large datasets may need to adjust defaults; see Capacity Planning
  • When Routing.AcceleratedDHTClient is enabled, full sweep efficiency may not be available yet; consider disabling the accelerated client as sweep is sufficient for most workloads. See caveat 4.

New features available with sweep mode:

  • Detailed statistics via ipfs provide stat (see below)
  • Automatic resume after restarts with persistent state (see below)
  • Proactive alerts when reproviding falls behind (see below)
  • Better metrics for monitoring (provider_provides_total) (see below)
  • Fast optimistic provide of new root CIDs (see below)

For background on the sweep provider design and motivations, see Provide.DHT.SweepEnabled and Shipyard's blogpost Provide Sweep: Solving the DHT Provide Bottleneck.

⚡ Fast root CID providing for immediate content discovery

When you add content to IPFS, the sweep provider queues it for efficient DHT provides over time. While this is resource-efficient, other peers won't find your content immediately after ipfs add or ipfs dag import completes.

To make sharing faster, ipfs add and ipfs dag import now do an immediate provide of root CIDs to the DHT in addition to the regular queue (controlled by the new --fast-provide-root flag, enabled by default). This complements the sweep provider system: fast-provide handles the urgent case (root CIDs that users share and reference), while the sweep provider efficiently provides all blocks according to Provide.Strategy over time.

This closes the gap between command completion and content shareability: root CIDs typically become discoverable on the network in under a second (compared to 30+ seconds previously). The feature uses optimistic DHT operations, which are significantly faster with the sweep provider (now enabled by default).

By default, this immediate provide runs in the background without blocking the command. For use cases requiring guaranteed discoverability before the command returns (e.g., sharing a link immediately), use --fast-provide-wait to block until the provide completes.

Simple examples:

ipfs add file.txt                     # Root provided immediately, blocks queued for sweep provider
ipfs add file.txt --fast-provide-wait # Wait for root provide to complete
ipfs dag import file.car              # Same for CAR imports

Configuration: Set defaults via Import.FastProvideRoot (default: true) and Import.FastProvideWait (default: false). See ipfs add --help and ipfs dag import --help for more details and examples.

Fast root CID provide is automatically skipped when DHT routing is unavailable (e.g., Routing.Type=none or delegated-only configurations).

⏯️ Provider state persists across restarts

The Sweep provider now persists the reprovide cycle state and automatically resumes where it left off after a restart. This brings several improvements:

  • Persistent progress: The provider saves its position in the reprovide cycle to the datastore. On restart, it continues from where it stopped instead of starting from scratch.
  • Catch-up reproviding: If the node was offline for an extended period, all CIDs that haven't been reprovided within the configured reprovide interval are immediately queued for reproviding when the node starts up. This ensures content availability is maintained even after downtime.
  • Persistent provide queue: The provide queue is persisted to the datastore on shutdown. When the node restarts, queued CIDs are restored and provided as expected, preventing loss of pending provide operations.
  • Resume control: The resume behavior is controlled via Provide.DHT.ResumeEnabled (default: true). Set to false if you don't want to keep the persisted provider state from a previous run.

This feature improves reliability for nodes that experience intermittent connectivity or restarts.
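A sketch of turning resume off, per the option above (assumes standard ipfs config syntax):

$ ipfs config --json Provide.DHT.ResumeEnabled false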

📊 Detailed statistics with ipfs provide stat

The Sweep provider system now exposes detailed statistics through ipfs provide stat, helping you monitor provider health and troubleshoot issues.

Run ipfs provide stat for a quick summary, or use --all to see complete metrics including connectivity status, queue sizes, reprovide schedules, network statistics, operation rates, and worker utilization. For real-time monitoring, use watch ipfs provide stat --all --compact to observe changes in a 2-column layout. Individual sections can be displayed with flags like --network, --operations, or --workers.

For Dual DHT configurations, use --lan to view LAN DHT statistics instead of the default WAN DHT stats.

For more information, run ipfs provide stat --help or see the Provide Stats documentation, including Capacity Planning.
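The invocations mentioned above, collected in one place:

$ ipfs provide stat                          # quick summary
$ ipfs provide stat --all                    # complete metrics
$ ipfs provide stat --lan                    # LAN DHT stats (Dual DHT setups)
$ watch ipfs provide stat --all --compact    # real-time 2-column monitoring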

Note

The legacy provider (when Provide.DHT.SweepEnabled=false) shows basic statistics without flag support.

🔔 Slow reprovide warnings

Kubo now monitors DHT reprovide operations when Provide.DHT.SweepEnabled=true and alerts you if your node is falling behind on reprovides.

When the reprovide queue consistently grows and all periodic workers are busy, a warning is displayed with:

  • Queue size and worker utilization details
  • Recommended solutions: increase Provide.DHT.MaxWorkers or Provide.DHT.DedicatedPeriodicWorkers
  • Command to monitor real-time progress: watch ipfs provide stat --all --compact

The alert polls every 15 minutes (to avoid alert fatigue while catching persistent issues) and only triggers after sustained growth across multiple intervals. The legacy provider is unaffected by this change.

📊 Metric rename: provider_provides_total

The Amino DHT Sweep provider metric has been renamed from total_provide_count_total to provider_provides_total to follow OpenTelemetry naming conventions and maintain consistency with other kad-dht metrics (which use dot notation like rpc.inbound.messages, rpc.outbound.requests, etc.).

Migration: If y...

v0.39.0-rc1 (Pre-release)

Released 17 Nov 22:01

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.39.md
Release status: #10946

v0.38.2

Released 30 Oct 02:44 (commit 9fd105a)

Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38.2 is a quick patch release that improves retrieval, tracing, and memory usage.

🔦 Highlights

  • Updates boxo v0.35.1 with bitswap and HTTP retrieval fixes:
    • Fixed bitswap trace context not being passed to sessions, restoring observability for monitoring tools
    • Kubo now fetches from HTTP gateways that return errors in legacy IPLD format, improving compatibility with older providers
    • Better handling of rate-limited HTTP endpoints and clearer timeout error messages
  • Updates go-libp2p-kad-dht v0.35.1 with memory optimizations for nodes using Provide.DHT.SweepEnabled=true
  • Updates quic-go v0.55.0 to fix memory pooling where stream frames weren't returned to the pool on cancellation

For full release notes of 0.38, see 0.38.1.

📝 Changelog

Full Changelog

👨‍👩‍👧‍👦 Contributors

Contributor       Commits  Lines ±    Files Changed
rvagg             1        +537/-481  3
Carlos Hernandez  9        +556/-218  11
Guillaume Michel  3        +139/-105  6
gammazero         8        +101/-97   14
Hector Sanjuan    1        +87/-28    5
Marcin Rataj      4        +57/-9     7
Marco Munizaga    2        +42/-14    7
Dennis Trautwein  2        +19/-7     7
Andrew Gillis     3        +3/-19     3
Rod Vagg          4        +12/-3     4
web3-bot          1        +2/-1      1
galargh           1        +1/-1      1