MPP (Multipath Payments)
A Lightning payment split across multiple routes that recombines atomically at the destination for improved reliability.
Key Takeaways
- MPP splits a single Lightning payment into smaller parts routed independently: the receiver holds all parts and only settles once every piece arrives, providing an all-or-nothing atomicity guarantee enforced by HTLCs.
- Base MPP differs from Atomic Multi-Path Payments (AMP) in that all parts share a single payment hash: atomicity relies on receiver cooperation rather than cryptographic secret sharing.
- By aggregating channel capacity across multiple routes, MPP enables payments far larger than any individual channel can support and significantly improves success rates for everyday transactions.
What Is MPP?
MPP (Multipath Payments) is a Lightning Network protocol feature that allows a sender to break a payment into multiple smaller parts, route each part through a different path, and have the receiver reassemble them into a single atomic payment. The receiver only settles once the sum of all arriving parts equals the invoiced amount: if any part fails to arrive, the entire payment fails and the sender recovers all funds.
Before MPP, every Lightning payment had to fit through a single chain of channels from sender to receiver. This created a hard ceiling on payment size dictated by the smallest channel along the route. A network full of 500,000 sat channels could not reliably route a 1,000,000 sat payment, regardless of how much total liquidity existed. MPP removes this constraint by letting the sender spread the payment across however many paths are needed.
The term "MPP" most commonly refers to Base MPP, the variant standardized in the BOLT specifications (specifically BOLT 4 and BOLT 11 amendments). It is sometimes called Base AMP to distinguish it from the original AMP proposal. For a broader overview of the concept, see our glossary entry on multi-path payments.
How It Works
MPP builds on the standard Lightning payment flow but introduces a critical change: the sender transmits multiple HTLCs that all reference the same payment hash, and the receiver aggregates them before releasing the preimage.
- The sender decodes a Lightning invoice to extract the payment hash and total amount
- The sender's pathfinding algorithm determines how to split the total across available routes (for example, 500,000 sats might become three parts: 200,000 + 180,000 + 120,000)
- Each part is dispatched as a separate HTLC through a different route, all locked to the same payment hash
- Each HTLC's onion payload includes a total_msat field telling the receiver the expected total payment amount
- The receiver accumulates arriving parts, matching them by payment hash
- Once the sum of received parts equals total_msat, the receiver releases the preimage
- The preimage propagates back along each route, settling every HTLC at every hop
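The receiver-side half of this flow can be sketched in a few lines. This is an illustrative model, not code from any real Lightning implementation; the class and method names are hypothetical, and amounts are in millisatoshis.

```python
# Hypothetical sketch of receiver-side MPP aggregation: hold parts that
# share a payment hash, settle only once their sum reaches total_msat.
from collections import defaultdict

class MppAggregator:
    def __init__(self):
        # payment_hash -> amounts (msat) of the parts held so far
        self.pending = defaultdict(list)

    def on_htlc(self, payment_hash, amount_msat, total_msat):
        """Hold an incoming part; release the preimage only when complete."""
        self.pending[payment_hash].append(amount_msat)
        if sum(self.pending[payment_hash]) >= total_msat:
            del self.pending[payment_hash]
            return "release_preimage"  # settles every held HTLC at once
        return "hold"                  # wait for the remaining parts
```

For the 500,000 sat example above, the first two parts return "hold" and the third triggers "release_preimage", which is the atomicity guarantee in miniature.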
The Atomicity Guarantee
MPP's atomicity is enforced by the receiver's behavior: they refuse to release the preimage until the full amount has arrived. This creates three possible outcomes for any MPP attempt:
- Full success: all parts arrive, the receiver releases the preimage, and every HTLC settles. The sender gets a proof-of-payment preimage and the receiver claims the full amount.
- Partial arrival with retry: some parts arrive while others fail en route. The receiver holds the successful parts without settling. The sender detects the failures and retries the missing amount through alternative routes. Once the total is met, settlement proceeds normally.
- Full failure: if the sender cannot complete the remaining parts before HTLC timelocks expire, all parts time out. The receiver never received enough to settle, so they release nothing. Every hop reclaims its locked funds through the timeout path.
This atomicity depends on receiver cooperation, not cryptographic enforcement. A well-behaved receiver will always wait for the full amount. A malicious receiver who somehow learned the preimage through other means could theoretically settle early with a partial amount, but in practice the receiver generates the preimage and has no incentive to settle for less than the full invoice amount.
The total_msat Field
The total_msat field in the onion payload is what makes MPP work at the protocol level. Without it, the receiver would have no way to know whether a 200,000 sat HTLC is a complete 200,000 sat payment or one part of a larger payment. The field is included in the encrypted onion data visible only to the receiver, so intermediate routing nodes cannot see the total payment size.
# Onion payload for an MPP part (simplified TLV fields)
amt_to_forward: 200000000    # This part's amount in msat (200,000 sats)
outgoing_cltv: 800150        # Timelock for this hop
payment_data:
  payment_secret: 0x7a3f...  # Prevents probing attacks
  total_msat: 500000000      # Full payment amount in msat (500,000 sats)
The payment_secret (also called payment address) prevents third parties from probing the receiver to discover whether an invoice exists. It was introduced alongside MPP support and is now required for all BOLT 11 invoices.
Pathfinding and Split Strategies
MPP fundamentally changes the sender's pathfinding problem. Instead of finding one route with sufficient capacity end-to-end, the sender must determine how many parts to create, how large each part should be, and which route each part should take.
Sequential vs. Proactive Splitting
Lightning implementations use two broad strategies for deciding when and how to split:
- Sequential splitting: the sender first attempts the full amount along the best single route. If that fails at a specific channel (revealing a capacity bottleneck), the sender splits at that bottleneck and retries the parts through separate routes. This approach is simple but slower because it learns about network capacity through trial and error.
- Proactive splitting: the sender analyzes its local view of channel capacities and pre-computes an optimal split before sending any HTLCs. This uses techniques like minimum-cost flow algorithms to distribute the payment across routes that minimize total fees while maximizing success probability. LND's mission control and CLN's renepay plugin use variants of this approach.
Optimal Number of Parts
Choosing how many parts to create involves a fundamental trade-off. Fewer parts means lower total routing fees and fewer HTLCs consuming network resources, but each part is larger and harder to route. More parts are individually easier to route but increase cumulative fees and the probability that at least one part fails.
Most implementations target between 2 and 6 parts depending on the payment amount and the sender's knowledge of network topology. Research and experimentation suggest that for typical payments, 3 to 4 parts often strikes the best balance between reliability and cost.
# Simplified split decision logic (pseudocode)
total = 500_000  # sats to send
max_single_path = estimate_max_route_capacity(destination)

if total <= max_single_path:
    # Single path likely sufficient
    routes = [find_route(destination, total)]
else:
    # Compute splits using min-cost flow
    graph = build_capacity_graph(local_channels, gossip_data)
    routes = min_cost_flow(graph, destination, total)
    # Result: [(route_a, 200_000), (route_b, 180_000), (route_c, 120_000)]

for route, amount in routes:
    send_htlc(route, amount, payment_hash, total_msat=total)

Probing for Better Splits
Senders can use payment probes to discover available capacity on routes before committing to a split. A probe sends an HTLC with an invalid payment hash: routing nodes forward it normally, but the receiver rejects it because the hash is wrong. The probe either reaches the receiver (proving the route has sufficient capacity) or fails at a bottleneck (revealing the capacity limit).
Probing multiple routes in parallel lets the sender build an accurate capacity map and compute a split strategy with high confidence of success. The cost is additional latency before the real payment begins.
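The probe logic described above can be sketched as follows. This is a hedged illustration: the send_htlc callback and its return shape (failing hop index plus error code) are assumptions for the sketch, not a real node API.

```python
import os

def probe_route(route, amount_msat, send_htlc):
    """Probe a route's capacity with an unfulfillable HTLC.

    send_htlc is a hypothetical callback returning (failing_hop_index,
    error_code) for the attempt.
    """
    fake_hash = os.urandom(32)  # no known preimage, so it can never settle
    failed_hop, error = send_htlc(route, amount_msat, fake_hash)
    # Rejected by the final node: the route carried the full amount.
    if failed_hop == len(route) - 1 and error == "incorrect_payment_details":
        return True
    # Failed mid-route: a bottleneck sits before the receiver.
    return False
```

A sender would run this across several candidate routes (and amounts) to build the capacity map before committing the real HTLCs.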
Comparison with AMP
MPP and AMP both split payments across multiple routes, but they differ in how they achieve atomicity and what trade-offs they make:
| Property | Base MPP | AMP |
|---|---|---|
| Payment hash | Same hash for all parts | Unique hash per part |
| Atomicity mechanism | Receiver withholds preimage | Cryptographic secret sharing |
| Invoice required | Yes (BOLT 11 / BOLT 12) | No (spontaneous payments possible) |
| Privacy | Parts linkable by shared hash | Parts unlinkable on the network |
| Proof of payment | Single preimage from receiver | Sender-generated proofs only |
| Adoption | Widely deployed across implementations | Primarily LND |
Base MPP's key advantage is simplicity and interoperability. Because all parts use the standard payment hash from the invoice, any BOLT-compliant routing node can forward them without special support. AMP provides stronger atomicity (cryptographic rather than behavioral) and better privacy (unlinkable parts), but requires specific support from both sender and receiver nodes.
Use Cases
Enabling Larger Payments
MPP's most direct impact is raising the effective payment ceiling on Lightning. Without MPP, the maximum reliable payment size is limited by the smallest channel along the best single route. With MPP, the effective limit becomes the sum of available capacity across all routes between sender and receiver. A sender with five 200,000 sat channels pointing toward different parts of the network can potentially send a 1,000,000 sat payment by using all five simultaneously.
This is particularly important for real-world Lightning liquidity where channel sizes vary widely and individual channel capacity is often insufficient for commerce-scale payments.
Improving Everyday Reliability
Even payments well within single-channel capacity benefit from MPP. Channel liquidity shifts constantly as payments flow through the network. A route that had 100,000 sats of capacity moments ago might only have 70,000 now. MPP lets the sender hedge against this uncertainty by spreading the payment across multiple routes, reducing dependence on any single path's exact liquidity state.
Integration with Channel Management
MPP works synergistically with channel management tools like Loop and Autoloop. Rebalancing operations often involve moving significant liquidity through the network, and these operations benefit from MPP's ability to split across routes. Similarly, well-balanced channels created by these tools provide better routes for MPP parts to traverse.
Compatibility with Advanced Routing
MPP combines naturally with other routing improvements. Trampoline routing lets resource-constrained mobile wallets delegate pathfinding to intermediate nodes, and those trampoline nodes can use MPP internally to route each leg of the payment. Blinded paths can be used for the final hops, preserving receiver privacy while still allowing the sender to split across multiple blinded routes.
Fee Implications
Splitting a payment across multiple routes changes the fee calculation in non-obvious ways. Each part incurs independent routing fees at every hop along its route, so the total fee for an MPP is the sum of fees across all parts and all hops.
When MPP Costs More
In the simplest case, MPP increases total fees compared to a successful single-path payment. Routing fees have both a base component (charged per HTLC regardless of amount) and a proportional component (percentage of the forwarded amount). With multiple parts, you pay the base fee multiple times. For a payment split into 4 parts across 3-hop routes, you pay 12 base fees instead of 3.
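The arithmetic is easy to verify with a toy fee model. Assuming (for illustration) that every hop charges the same policy of a 1,000 msat base fee plus 100 ppm proportional fee, a 4-way split over 3-hop routes pays exactly 9 extra base fees compared to a single-path payment:

```python
def route_fee_msat(amount_msat, hops):
    """Fee for one part: each hop charges base_msat plus a ppm rate."""
    total = 0
    for base_msat, fee_ppm in hops:
        total += base_msat + amount_msat * fee_ppm // 1_000_000
    return total

hops = [(1_000, 100)] * 3  # 3 hops, each: 1 sat base fee, 100 ppm

single = route_fee_msat(500_000_000, hops)                       # 3 base fees
split = sum(route_fee_msat(125_000_000, hops) for _ in range(4)) # 12 base fees
# With identical hop policies the proportional fees cancel out, so the
# difference is purely the 9 extra base fees.
```

With identical policies the split always costs more; the savings described next only appear when the split unlocks routes with cheaper policies.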
When MPP Saves Money
Counter-intuitively, MPP can sometimes reduce total fees. If the only viable single-path route goes through high-fee nodes (because those nodes have the largest channels), splitting the payment might allow parts to traverse cheaper, lower-capacity routes that the full amount could never use. The aggregate fee across multiple cheap routes can undercut a single expensive one.
Sophisticated pathfinding algorithms explicitly optimize for this by evaluating total fees across all possible split configurations, not just finding the cheapest individual routes.
Risks and Considerations
Partial Failure Complexity
The most challenging failure mode is partial delivery: some parts reach the receiver while others fail mid-route. The receiver holds successful parts without settling, tying up liquidity along those routes. The sender must detect which parts failed, calculate the remaining amount, find new routes for just the shortfall, and retry. If retries also partially fail, the process repeats. Each round adds latency and locks more liquidity network-wide.
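The retry loop described above can be sketched as follows. The find_route and send_part helpers are hypothetical stand-ins for an implementation's pathfinding and HTLC dispatch; real nodes also track per-attempt deadlines against the HTLC timelocks.

```python
def send_mpp(total_msat, find_route, send_part, max_rounds=5):
    """Retry only the failed shortfall each round (illustrative sketch).

    find_route(remaining) -> list of (route, amount_msat) parts
    send_part(route, amount_msat) -> True if the part reached the receiver
    """
    delivered = 0
    for _ in range(max_rounds):
        remaining = total_msat - delivered
        parts = find_route(remaining)  # split only the shortfall
        results = [send_part(route, amt) for route, amt in parts]
        delivered += sum(amt for (_, amt), ok in zip(parts, results) if ok)
        if delivered >= total_msat:
            return True   # receiver now holds the full amount and settles
    return False  # gave up; held parts are released when timelocks expire
```

Each iteration narrows the problem to the undelivered remainder, which is why partial failures cost latency rather than funds.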
HTLC Slot Consumption
Lightning channels have a protocol-level maximum of 483 concurrent pending HTLCs per direction. Each MPP part consumes one HTLC slot at every channel along its route. Heavy MPP usage across the network can exhaust HTLC slots on popular routing nodes, causing unrelated payments to fail. Implementations mitigate this by capping the number of parts per payment, and routing node operators can tune their advertised HTLC limits to manage exposure.
Privacy Limitations
Because all parts of a Base MPP share the same payment hash, a routing node that forwards two parts of the same payment on different routes can correlate them. This reveals that the total payment is larger than either individual part and links the sender's channels. The onion routing layer protects sender and receiver identity, but the shared hash weakens amount privacy at the routing level.
Timelock Coordination
Each part carries its own HTLC timelock, and these must be set carefully. If early-arriving parts have timelocks that expire before late-arriving parts reach the receiver, the receiver faces a conflict: the early parts may time out while the late parts are still in flight. Implementations avoid this by ensuring all parts use compatible timelock windows, but the coordination adds complexity to the sender's routing logic.
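One common way to coordinate timelocks is to give every part the same final expiry at the receiver, then add each route's per-hop deltas on top. The sketch below illustrates that idea under assumed inputs (block height, final CLTV delta, per-hop cltv_delta values); it is not any implementation's actual routing code.

```python
def final_cltv_for_parts(current_height, min_final_cltv_delta, routes):
    """Give every part the same final expiry so no part can time out at
    the receiver while sibling parts are still in flight (illustrative)."""
    final_expiry = current_height + min_final_cltv_delta
    sender_expiries = []
    for route in routes:
        # Working backward from the shared final expiry, each hop adds
        # its own cltv_expiry_delta to the expiry it receives.
        expiry = final_expiry + sum(hop["cltv_delta"] for hop in route)
        sender_expiries.append(expiry)
    return final_expiry, sender_expiries
```

Because the expiry at the receiver is identical for all parts, the receiver can safely hold early arrivals until the slowest part lands or the shared deadline passes.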
This glossary entry is for informational purposes only and does not constitute financial or investment advice. Always do your own research before using any protocol or technology.