Releases: ipfs/kubo
v0.41.0
Note
This release was brought to you by the Shipyard team.
- Overview
- 🔦 Highlights
- 🗑️ Faster Provide Queue Disk Reclamation
- ✨ New ipfs cid inspect command
- 🔤 --cid-base fixes across all commands
- 🔄 Built-in ipfs update command
- 🖥️ WebUI Improvements
- 🔧 Correct provider addresses for custom HTTP routing
- 🔀 Provide.Strategy modifiers: +unique and +entities
- 📌 pin add and pin update now fast-provide root CID
- 🌳 New --fast-provide-dag flag for fine-tuned provide control
- 🛡️ Hardened Provide.Strategy parsing
- 🔧 Filestore now respects Provide.Strategy
- 🛡️ ipfs object patch validates UnixFS node types
- 🔗 MFS: fixed CidBuilder preservation
- 📂 FUSE Mount Improvements
- 📦 CARv2 import over HTTP API
- 🌐 HTTPS proxy support
- 🛡️ server profile no longer announces loopback and non-public IPv6 addresses
- 🐹 Go 1.26, Once More with Feeling
- 🐛 Fixed long-standing random daemon crashes during DHT lookups
- 📦️ Dependency updates
- 📝 Changelog
- 👨‍👩‍👧‍👦 Contributors
Overview
🔦 Highlights
🗑️ Faster Provide Queue Disk Reclamation
Nodes with a significant amount of data and DHT provide sweep enabled (Provide.DHT.SweepEnabled, the default since Kubo 0.39) could see their datastore/ directory grow continuously. Each reprovide cycle rewrote the provider keystore inside the shared repo datastore, generating tombstones faster than the storage engine could compact them, and in the default configuration Kubo was slow to reclaim this space.
The provider keystore now lives in a dedicated datastore under $IPFS_PATH/provider-keystore/. After each reprovide cycle the old datastore is removed from disk entirely, so space is reclaimed immediately regardless
of storage backend.
On first start after upgrading, stale keystore data is cleaned up from the shared datastore automatically.
To learn more, see kubo#11096, kubo#11198, and go-libp2p-kad-dht#1233.
✨ New ipfs cid inspect command
New subcommand for breaking down a CID into its components. Works offline, supports --enc=json.
$ ipfs cid inspect bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi
CID: bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi
Version: 1
Multibase: base32 (b)
Multicodec: dag-pb (0x70)
Multihash: sha2-256 (0x12)
Length: 32 bytes
Digest: c3c4733ec8affd06cf9e9ff50ffc6bcd2ec85a6170004bb709669c31de94391a
CIDv0: QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
CIDv1: bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi

See ipfs cid --help for all CID-related commands.
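The breakdown above can be reproduced by hand: a base32 CIDv1 is just the multibase prefix 'b' followed by base32(version varint, codec varint, multihash). A minimal Python sketch using only the standard library (single-byte varints are assumed, which holds for these common codes):

```python
import base64

cid = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"
assert cid[0] == "b"  # multibase prefix: base32 (RFC 4648, lowercase, no padding)

body = cid[1:].upper()
body += "=" * (-len(body) % 8)  # restore the RFC 4648 padding b32decode expects
raw = base64.b32decode(body)

# These codes are all < 0x80, so each varint fits in a single byte.
version, codec, hash_fn, digest_len = raw[0], raw[1], raw[2], raw[3]
digest = raw[4:4 + digest_len]

print("Version:   ", version)       # 1
print("Multicodec:", hex(codec))    # 0x70 (dag-pb)
print("Multihash: ", hex(hash_fn))  # 0x12 (sha2-256)
print("Length:    ", digest_len)    # 32
print("Digest:    ", digest.hex())
```

This recovers exactly the fields the command prints, including the sha2-256 digest shown above.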
🔤 --cid-base fixes across all commands
--cid-base is now respected by every command that outputs CIDs. Previously block stat, block put, block rm, dag stat, refs local, pin remote, and files chroot ignored the flag.
CIDv0 values are now auto-upgraded to CIDv1 when a non-base58btc base is requested, because CIDv0 can only be represented in base58btc.
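The CIDv0-to-CIDv1 upgrade is mechanical, which is why Kubo can do it automatically: decode the base58btc multihash, prepend the version and dag-pb codec bytes, and re-encode in the requested base. A rough standard-library sketch (hand-rolled base58 decoder, error handling omitted):

```python
import base64

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    """Decode base58btc; a CIDv0 is always a 34-byte sha2-256 multihash."""
    n = 0
    for ch in s:
        n = n * 58 + B58_ALPHABET.index(ch)
    return n.to_bytes(34, "big")

cidv0 = "QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR"
multihash = b58decode(cidv0)
assert multihash[:2] == bytes([0x12, 0x20])  # sha2-256 code, 32-byte digest length

# CIDv1 = 0x01 (version) + 0x70 (dag-pb codec) + the unchanged multihash,
# re-encoded as lowercase unpadded base32 with the 'b' multibase prefix.
cidv1_bytes = bytes([0x01, 0x70]) + multihash
cidv1 = "b" + base64.b32encode(cidv1_bytes).decode().lower().rstrip("=")
print(cidv1)
```

Running this on the CIDv0 above yields the base32 CIDv1 shown in the ipfs cid inspect example.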
🔄 Built-in ipfs update command
Kubo now ships with a built-in ipfs update command that downloads release binaries from GitHub and swaps the current one in place. It supersedes the external ipfs-update tool, deprecated since v0.37.
$ ipfs update check
Update available: 0.40.0 -> 0.41.0
Run 'ipfs update install' to install the latest version.

See ipfs update --help for the available subcommands (check, versions, install, revert, clean).
🖥️ WebUI Improvements
IPFS Web UI has been updated to v4.12.0.
IPv6 peer geolocation and Peers screen optimizations
The Peers screen now resolves IPv6 addresses to geographic locations, and the geolocation database has been updated to GeoLite2-City-CSV_20260220. (ipfs-geoip v9.3.0)
Peer locations load faster thanks to UX optimizations in the underlying ipfs-geoip library.
🔧 Correct provider addresses for custom HTTP routing
Nodes using custom routing (Routing.Type=custom) with IPIP-526 could end up publishing unresolved 0.0.0.0 addresses in provider records. Addresses are now resolved at provide-time, and when AutoNAT V2 has confirmed publicly reachable addresses, those are preferred automatically. See #11213.
🔀 Provide.Strategy modifiers: +unique and +entities
Experimental opt-in optimizations for content providers with large repositories where multiple recursive pins share most of their DAG structure (e.g. append-only datasets, versioned archives like dist.ipfs.tech).
- +unique: bloom filter dedup across recursive pins. Shared subtrees are traversed only once per reprovide cycle instead of once per pin, cutting I/O from O(pins * blocks) to O(unique blocks) at ~4 bytes/CID.
- +entities: announces only entity roots (files, directories, HAMT shards), skipping internal file chunks. Far fewer DHT provider records while keeping all content discoverable by file/directory CID. Implies +unique.
Example: Provide.Strategy = "pinned+mfs+entities"
The default Provide.Strategy=all is unchanged. See Provide.Strategy for configuration details and caveats.
The bloom filter precision is tunable via Provide.BloomFPRate (default ~1 false positive per 4.75M lookups, ~4 bytes per CID).
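Those defaults line up with standard Bloom filter math: with m/n bits per element and an optimal number of hash functions, the false-positive rate is p = exp(-(m/n) * ln(2)^2). A quick sanity check for ~4 bytes (32 bits) per CID:

```python
import math

bits_per_cid = 32  # ~4 bytes per CID, as documented
# False-positive rate of an optimally-hashed Bloom filter: p = exp(-(m/n) * ln(2)^2)
p = math.exp(-bits_per_cid * math.log(2) ** 2)
print(f"false-positive rate: {p:.3g}")
print(f"roughly 1 false positive per {1 / p:,.0f} lookups")  # ~4.75M
```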
📌 pin add and pin update now fast-provide root CID
ipfs pin add and ipfs pin update announce the pinned root CID to the routing system immediately after pinning, same as ipfs add and ipfs dag import. This matters for selective strategies like pinned+mfs, where previously the root CID was not announced until the next reprovide cycle (see Provide.DHT.Interval). With the default Provide.Strategy=all, the blockstore already provides every block on write, so this is a no-op.
Both commands now accept --fast-provide-root, --fast-provide-dag, and --fast-provide-wait flags, matching ipfs add and ipfs dag import. See Import for defaults and configuration.
🌳 New --fast-provide-dag flag for fine-tuned provide control
Users with a custom Provide.Strategy (e.g. pinned, pinned+mfs+entities) now have finer control over which CIDs are announced immediately on ipfs add, ipfs dag import, ipfs pin add, and ipfs pin update.
By default, only the root CID is provided right away (--fast-provide-root=true). Child blocks are deferred until the next reprovide cycle. This keeps bulk imports fast and avoids overwhelming online nodes with provide traffic.
Pass --fast-provide-dag=true (or set Import.FastProvideDAG) to provide the full DAG immediately during add, using the active Provide.Strategy to determine scope.
Provide.Strategy=all (default) is unaffected. It provides every block at the blockstore level regardless of this flag.
Note
Faster default imports for Provide.Strategy=pinned and pinned+mfs users. Previously, ipfs add --pin eagerly announced every block of newly added content as it was written, through an internal DAG service wrapper. This release routes add-time providing through the new --fast-provide-dag code path, which defaults to false. The result is faster bulk imports and less provide traffic during add: only the root CID is announced immediately (via Import.FastProvideRoot), and child blocks are picked up by the next reprovide cycle (see Provide.DHT.Interval, default 22...
v0.41.0-rc2
Note
This Release Preview was brought to you by the Shipyard team.
Draft release notes: docs/changelogs/v0.41.md
Release status: #11082
v0.41.0-rc1
Note
This Release Preview was brought to you by the Shipyard team.
Draft release notes: docs/changelogs/v0.41.md
Release status: #11082
v0.40.1
Note
This patch release was brought to you by the Shipyard team.
This is a Windows bugfix release. If you use Linux or macOS v0.40.0 should be fine.
🚒 Bugfix for Windows
If you run Kubo on Windows, v0.40.0 can crash after running for a while. The daemon starts fine and works normally at first, but eventually hits a memory corruption in Go's network I/O layer and dies. This is likely caused by an upstream Go 1.26 regression in overlapped I/O handling that has known issues (go#77142, #11214).
This patch release downgrades the Go toolchain from 1.26 to 1.25, which does not have this bug. If you are running Kubo on Windows, upgrade to v0.40.1. We will switch back to Go 1.26.x once the upstream fix lands.
📝 Changelog
Full Changelog v0.40.1
- github.com/ipfs/kubo:
- chore: downgrade to Go 1.25 to fix Windows crash (ipfs/kubo#11215)
See v0.40.0 for full list of changes since v0.39.x.
v0.40.0
Note
This release was brought to you by the Shipyard team.
- 🔦 Highlights
- 🔢 IPIP-499: UnixFS CID Profiles
- 🧹 Automatic cleanup of interrupted imports
- 🌍 Light clients can now use your node for delegated routing
- 📊 See total size when pinning
- 🔀 IPIP-523: ?format= takes precedence over Accept header
- 🚫 IPIP-524: Gateway codec conversion disabled by default
- ✅ More reliable IPNS over PubSub
- 🗄️ New ipfs diag datastore commands
- 🔍 New ipfs swarm addrs autonat command
- 🚇 Improved ipfs p2p tunnels with foreground mode
- 📊 Friendlier ipfs dag stat output
- 🔑 ipfs key improvements
- 🤝 More reliable content providing after startup
- 🌐 No unnecessary DNS lookups for AutoTLS addresses
- ⏱️ Configurable gateway request duration limit
- 🔧 Recovery from corrupted MFS root
- 📡 RPC Content-Type headers for binary responses
- 🔖 New ipfs name get|put commands
- 📋 Long listing format for ipfs ls
- 🖥️ WebUI Improvements
- 📉 Fixed Prometheus metrics bloat on popular subdomain gateways
- 📢 libp2p announces all interface addresses
- 🗑️ Badger v1 datastore slated for removal this year
- 🐹 Go 1.26
- 📦️ Dependency updates
- 📝 Changelog
- 👨‍👩‍👧‍👦 Contributors
🔦 Highlights
This release brings reproducible file imports (CID Profiles), cleanup of interrupted flatfs operations, better connectivity diagnostics, and improved gateway behavior. It also ships with Go 1.26, lowering memory usage and GC overhead across the board.
🔢 IPIP-499: UnixFS CID Profiles
CID Profiles are presets that pin down how files get split into blocks and organized into directories, so you get the same CID for the same data across different software or versions. Defined in IPIP-499.
New configuration profiles
- unixfs-v1-2025: modern CIDv1 profile with improved defaults
- unixfs-v0-2015 (alias legacy-cid-v0): best-effort legacy CIDv0 behavior
Apply with: ipfs config profile apply unixfs-v1-2025
The test-cid-v1 and test-cid-v1-wide profiles have been removed. Use unixfs-v1-2025 or manually set specific Import.* settings instead.
New Import.* options
- Import.UnixFSHAMTDirectorySizeEstimation: estimation mode (links, block, or disabled)
- Import.UnixFSDAGLayout: DAG layout (balanced or trickle)
New ipfs add CLI flags
- --dereference-symlinks resolves all symlinks to their target content, replacing the deprecated --dereference-args which only resolved CLI argument symlinks
- --empty-dirs/-E controls inclusion of empty directories (default: true)
- --hidden/-H includes hidden files (default: false)
- --trickle implicit default can be adjusted via Import.UnixFSDAGLayout
ipfs files write fix for CIDv1 directories
When writing to MFS directories that use CIDv1 (via --cid-version=1 or ipfs files chcid), single-block files now produce raw block CIDs (like bafkrei...), matching the behavior of ipfs add --raw-leaves. Previously, MFS would wrap single-block files in dag-pb even when raw leaves were enabled. CIDv0 directories continue to use dag-pb.
Block size limit raised to 2MiB
ipfs block put, ipfs dag put, and ipfs dag import now accept blocks up to 2MiB without --allow-big-block, matching the bitswap spec. The previous 1MiB limit was too restrictive and broke ipfs dag import of 1MiB-chunked non-raw-leaf data (protobuf wrapping pushes blocks slightly over 1MiB). The max --chunker value for ipfs add is 2MiB - 256 bytes to leave room for protobuf framing. IPIP-499 profiles use lower chunk sizes (256KiB and 1MiB) and are not affected.
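The size arithmetic is worth spelling out (the exact dag-pb overhead varies per block, but stays well under the 256-byte reserve):

```python
MiB = 1024 * 1024
max_block = 2 * MiB          # new limit accepted without --allow-big-block
max_chunk = max_block - 256  # max --chunker value, leaving room for protobuf framing
print(max_block, max_chunk)  # 2097152 2096896
```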
HAMT Threshold Fix
HAMT directory sharding threshold changed from >= to > to match the Go docs and JS implementation (ipfs/boxo@6707376). A directory exactly at 256 KiB now stays as a basic directory instead of converting to HAMT. This is a theoretical breaking change, but unlikely to impact real-world users as it requires a directory to be exactly at the threshold boundary. If you depend on the old behavior, adjust Import.UnixFSHAMTShardingSize to be 1 byte lower.
🧹 Automatic cleanup of interrupted imports
If you cancel ipfs add or ipfs dag import mid-operation, Kubo now automatically cleans up incomplete data on the next daemon start. Previously, interrupted imports would leave orphan, unpinned blocks in your repository that were difficult to identify and could only be removed by running explicit garbage collection.
Batch operations also use less memory now. Block data is written to disk immediately rather than held in RAM until the batch commits.
Under the hood, the block storage layer (flatfs) was rewritten to use atomic batch operations via a temporary staging directory. See go-ds-flatfs#142 for details.
🌍 Light clients can now use your node for delegated routing
The Routing V1 HTTP API is now exposed by default at http://127.0.0.1:8080/routing/v1. This allows light clients in browsers to use Kubo Gateway as a delegated routing backend instead of running a full DHT client. Support for IPIP-476: Delegated Routing DHT Closest Peers API is included. Can be disabled via Gateway.ExposeRoutingAPI.
📊 See total size when pinning
ipfs pin add --progress now shows the total size of the pinned DAG as it fetches blocks.
Example output:
Fetched/Processed 336 nodes (83 MB)
🔀 IPIP-523: ?format= takes precedence over Accept header
The ?format= URL query parameter now always wins over the Accept header (IPIP-523), giving you deterministic HTTP caching and protecting against CDN cache-key collisions. Browsers can also use ?format= reliably even when they send Accept headers with specific content types.
The only breaking change is for edge cases where a client sends both a specific Accept header and a different ?format= value for an explicitly supported format (tar, raw, car, dag-json, dag-cbor, etc.). Previously Accept would win. Now ?format= always wins.
🚫 IPIP-524: Gateway codec conversion disabled by default
Gateways no longer convert between codecs by default (IPIP-524). This removes gateways from a gatekeeping role: clients can adopt new codecs immediately without waiting for gateway operator updates. Requests for a format that differs from the block's codec now return 406 Not Acceptable.
Migration: Clients should fetch raw blocks (?format=raw or Accept: application/vnd.ipld.raw) and convert client-side using libraries like @helia/verified-fetch. Set Gateway.AllowCodecConversion to true to restore the previous behavior.
✅ More reliable IPNS over PubSub
The IPNS over PubSub implementation in Kubo is now more reliable: duplicate messages are rejected even in large networks where messages may cycle back after the in-memory cache expires.
Kubo now persists the maximum seen sequence number per peer to the datastore (go-libp2p-pubsub#BasicSeqnoValidator), providing stronger duplicate detection that survives node restarts. This addresses message flooding issues reported in #9665.
IPNS over PubSub is opt-in via Ipns.UsePubsub. Kubo's pubsub is optimized for the IPNS use case. For custom pubsub applications requiring different va...
v0.40.0-rc2
This Release Preview was brought to you by the Shipyard team.
Draft release notes: docs/changelogs/v0.40.md
Release status: #11008
v0.40.0-rc1
This Release Preview was brought to you by the Shipyard team.
Draft release notes: docs/changelogs/v0.40.md
Release status: #11008
v0.39.0
Note
This release was brought to you by the Shipyard team.
- Overview
- 🔦 Highlights
- 🎯 DHT Sweep provider is now the default
- ⚡ Fast root CID providing for immediate content discovery
- ⏯️ Provider state persists across restarts
- 📊 Detailed statistics with ipfs provide stat
- 🔔 Slow reprovide warnings
- 📊 Metric rename: provider_provides_total
- 🔧 Automatic UPnP recovery after router restarts
- 🪦 Deprecated go-ipfs name no longer published
- 🚦 Gateway range request limits for CDN compatibility
- 🖥️ RISC-V support with prebuilt binaries
- 📦️ Important dependency updates
- 📝 Changelog
- 👨‍👩‍👧‍👦 Contributors
Overview
This release is an important step toward solving the DHT bottleneck for self-hosting IPFS on consumer hardware and home networks. The DHT sweep provider (now default) announces your content to the network without traffic spikes that overwhelm residential connections. Automatic UPnP recovery means your node stays reachable after router restarts without manual intervention.
New content becomes findable immediately after ipfs add. The provider system persists state across restarts, alerts you when falling behind, and exposes detailed stats for monitoring. This release also finalizes the deprecation of the legacy go-ipfs name.
🔦 Highlights
🎯 DHT Sweep provider is now the default
The Amino DHT Sweep provider system, introduced as experimental in v0.38, is now enabled by default (Provide.DHT.SweepEnabled=true).
What this means: All nodes now benefit from efficient keyspace-sweeping content announcements that reduce memory overhead and create predictable network patterns, especially for nodes providing large content collections.
Migration: The transition is automatic on upgrade. Your existing configuration is preserved:
- If you explicitly set Provide.DHT.SweepEnabled=false in v0.38, you'll continue using the legacy provider
- If you were using the default settings, you'll automatically get the sweep provider
- To opt out and return to legacy behavior: ipfs config --json Provide.DHT.SweepEnabled false
- Providers with medium to large datasets may need to adjust defaults; see Capacity Planning
- When Routing.AcceleratedDHTClient is enabled, full sweep efficiency may not be available yet; consider disabling the accelerated client as sweep is sufficient for most workloads. See caveat 4.
New features available with sweep mode:
- Detailed statistics via ipfs provide stat (see below)
- Automatic resume after restarts with persistent state (see below)
- Proactive alerts when reproviding falls behind (see below)
- Better metrics for monitoring (provider_provides_total) (see below)
- Fast optimistic provide of new root CIDs (see below)
For background on the sweep provider design and motivations, see Provide.DHT.SweepEnabled and Shipyard's blogpost Provide Sweep: Solving the DHT Provide Bottleneck.
⚡ Fast root CID providing for immediate content discovery
When you add content to IPFS, the sweep provider queues it for efficient DHT provides over time. While this is resource-efficient, other peers won't find your content immediately after ipfs add or ipfs dag import completes.
To make sharing faster, ipfs add and ipfs dag import now do an immediate provide of root CIDs to the DHT in addition to the regular queue (controlled by the new --fast-provide-root flag, enabled by default). This complements the sweep provider system: fast-provide handles the urgent case (root CIDs that users share and reference), while the sweep provider efficiently provides all blocks according to Provide.Strategy over time.
This closes the gap between command completion and content shareability: root CIDs typically become discoverable on the network in under a second (compared to 30+ seconds previously). The feature uses optimistic DHT operations, which are significantly faster with the sweep provider (now enabled by default).
By default, this immediate provide runs in the background without blocking the command. For use cases requiring guaranteed discoverability before the command returns (e.g., sharing a link immediately), use --fast-provide-wait to block until the provide completes.
Simple examples:
ipfs add file.txt # Root provided immediately, blocks queued for sweep provider
ipfs add file.txt --fast-provide-wait # Wait for root provide to complete
ipfs dag import file.car # Same for CAR imports

Configuration: Set defaults via Import.FastProvideRoot (default: true) and Import.FastProvideWait (default: false). See ipfs add --help and ipfs dag import --help for more details and examples.
Fast root CID provide is automatically skipped when DHT routing is unavailable (e.g., Routing.Type=none or delegated-only configurations).
⏯️ Provider state persists across restarts
The Sweep provider now persists the reprovide cycle state and automatically resumes where it left off after a restart. This brings several improvements:
- Persistent progress: The provider saves its position in the reprovide cycle to the datastore. On restart, it continues from where it stopped instead of starting from scratch.
- Catch-up reproviding: If the node was offline for an extended period, all CIDs that haven't been reprovided within the configured reprovide interval are immediately queued for reproviding when the node starts up. This ensures content availability is maintained even after downtime.
- Persistent provide queue: The provide queue is persisted to the datastore on shutdown. When the node restarts, queued CIDs are restored and provided as expected, preventing loss of pending provide operations.
- Resume control: The resume behavior is controlled via Provide.DHT.ResumeEnabled (default: true). Set to false if you don't want to keep the persisted provider state from a previous run.
This feature improves reliability for nodes that experience intermittent connectivity or restarts.
📊 Detailed statistics with ipfs provide stat
The Sweep provider system now exposes detailed statistics through ipfs provide stat, helping you monitor provider health and troubleshoot issues.
Run ipfs provide stat for a quick summary, or use --all to see complete metrics including connectivity status, queue sizes, reprovide schedules, network statistics, operation rates, and worker utilization. For real-time monitoring, use watch ipfs provide stat --all --compact to observe changes in a 2-column layout. Individual sections can be displayed with flags like --network, --operations, or --workers.
For Dual DHT configurations, use --lan to view LAN DHT statistics instead of the default WAN DHT stats.
For more information, run ipfs provide stat --help or see the Provide Stats documentation, including Capacity Planning.
Note
The legacy provider (when Provide.DHT.SweepEnabled=false) shows basic statistics without flag support.
🔔 Slow reprovide warnings
Kubo now monitors DHT reprovide operations when Provide.DHT.SweepEnabled=true and alerts you if your node is falling behind on reprovides. When the reprovide queue consistently grows and all periodic workers are busy, a warning is displayed with:
- Queue size and worker utilization details
- Recommended solutions: increase Provide.DHT.MaxWorkers or Provide.DHT.DedicatedPeriodicWorkers
- Command to monitor real-time progress: watch ipfs provide stat --all --compact
The alert polls every 15 minutes (to avoid alert fatigue while catching
persistent issues) and only triggers after sustained growth across multiple
intervals. The legacy provider is unaffected by this change.
📊 Metric rename: provider_provides_total
The Amino DHT Sweep provider metric has been renamed from total_provide_count_total to provider_provides_total to follow OpenTelemetry naming conventions and maintain consistency with other kad-dht metrics (which use dot notation like rpc.inbound.messages, rpc.outbound.requests, etc.).
Migration: If y...
v0.39.0-rc1
This Release Preview was brought to you by the Shipyard team.
Draft release notes: docs/changelogs/v0.39.md
Release status: #10946
v0.38.2
Note
This release was brought to you by the Shipyard team.
Overview
Kubo 0.38.2 is a quick patch release that improves retrieval, traces and memory usage.
🔦 Highlights
- Updates boxo v0.35.1 with bitswap and HTTP retrieval fixes:
- Fixed bitswap trace context not being passed to sessions, restoring observability for monitoring tools
- Kubo now fetches from HTTP gateways that return errors in legacy IPLD format, improving compatibility with older providers
- Better handling of rate-limited HTTP endpoints and clearer timeout error messages
- Updates go-libp2p-kad-dht v0.35.1 with memory optimizations for nodes using Provide.DHT.SweepEnabled=true
- Updates quic-go v0.55.0 to fix memory pooling where stream frames weren't returned to the pool on cancellation
For full release notes of 0.38, see 0.38.1.
📝 Changelog
Full Changelog
- github.com/ipfs/kubo:
- chore: boxo and kad-dht updates
- fix: update quic-go to v0.55.0
- github.com/ipfs/boxo (v0.35.0 -> v0.35.1):
- Release v0.35.1 (ipfs/boxo#1063)
- bitswap/httpnet: improve "Connect"/testCid check (#1057) (ipfs/boxo#1057)
- fix: revert go-libp2p to v0.43.0 (#1061) (ipfs/boxo#1061)
- bitswap/client: propagate trace state when calling GetBlocks (ipfs/boxo#1060)
- bitswap: link traces (ipfs/boxo#1053)
- fix(gateway): deduplicate peer IDs in retrieval diagnostics (#1058) (ipfs/boxo#1058)
- update go-dsqueue to v0.1.0 (ipfs/boxo#1049)
- Update go-libp2p to v0.44 (ipfs/boxo#1048)
- github.com/ipfs/go-dsqueue (v0.0.5 -> v0.1.0):
- new version (#24) (ipfs/go-dsqueue#24)
- Do not reuse datastore Batch (#23) (ipfs/go-dsqueue#23)
- github.com/ipfs/go-log/v2 (v2.8.1 -> v2.8.2):
- new version (#175) (ipfs/go-log#175)
- fix: revert removal of LevelFromString to avoid breaking change (#174) (ipfs/go-log#174)
- github.com/ipld/go-car/v2 (v2.15.0 -> v2.16.0):
- v2.16.0 bump (#625) (ipld/go-car#625)
- github.com/ipld/go-ipld-prime/storage/bsadapter (v0.0.0-20230102063945-1a409dc236dd -> v0.0.0-20250821084354-a425e60cd714):
- github.com/libp2p/go-libp2p-kad-dht (v0.35.0 -> v0.35.1):
- chore: release v0.35.1 (#1165) (libp2p/go-libp2p-kad-dht#1165)
- feat(provider): use Trie.AddMany (#1164) (libp2p/go-libp2p-kad-dht#1164)
- fix(provider): memory usage (#1163) (libp2p/go-libp2p-kad-dht#1163)
- github.com/libp2p/go-netroute (v0.2.2 -> v0.3.0):
- release v0.3.0
- remove google/gopacket dependency
- Query routes via routesocket (libp2p/go-netroute#57)
- ci: uci/update-go (#52) (libp2p/go-netroute#52)
- github.com/multiformats/go-multicodec (v0.9.2 -> v0.10.0):
- chore: v0.10.0 bump
- chore: update submodules and go generate
- chore(deps): update stringer to v0.38.0
- ci: uci/update-go (multiformats/go-multicodec#104)
👨‍👩‍👧‍👦 Contributors
| Contributor | Commits | Lines ± | Files Changed |
|---|---|---|---|
| rvagg | 1 | +537/-481 | 3 |
| Carlos Hernandez | 9 | +556/-218 | 11 |
| Guillaume Michel | 3 | +139/-105 | 6 |
| gammazero | 8 | +101/-97 | 14 |
| Hector Sanjuan | 1 | +87/-28 | 5 |
| Marcin Rataj | 4 | +57/-9 | 7 |
| Marco Munizaga | 2 | +42/-14 | 7 |
| Dennis Trautwein | 2 | +19/-7 | 7 |
| Andrew Gillis | 3 | +3/-19 | 3 |
| Rod Vagg | 4 | +12/-3 | 4 |
| web3-bot | 1 | +2/-1 | 1 |
| galargh | 1 | +1/-1 | 1 |