
Piecewise linear constraints: follow-up improvements #602

Merged
FabianHofmann merged 12 commits into master from piecewise-followup on Mar 9, 2026
Conversation

Collaborator

FabianHofmann commented Mar 6, 2026

Changes proposed in this Pull Request

Much-needed follow-up improvements to the piecewise linear constraints feature:

  • Introduce a piecewise function that creates a PiecewiseExpression, which can be used in comparisons like y == piecewise(x, x_pts, y_pts)
  • Make the use cases of breakpoints and segments (new) clearer.
  • Apply convexity checks only where needed (LP formulation).
  • Documentation: expanded doc/piecewise-linear-constraints.rst and updated the notebook.
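For intuition, a feasible pair under y == piecewise(x, x_pts, y_pts) lies on the linear interpolant through the breakpoints. A minimal numpy sketch (breakpoint values assumed for illustration, not from this PR):

```python
import numpy as np

x_pts = [0.0, 30.0, 60.0, 100.0]   # assumed breakpoints
y_pts = [0.0, 36.0, 84.0, 170.0]

# a feasible (x, y) pair satisfies y == interpolant(x)
x = 45.0
y = np.interp(x, x_pts, y_pts)     # midpoint of the second segment -> 60.0
assert np.isclose(y, 60.0)
```

Inside the model, the MIP/SOS2/LP formulations below are what enforce this relation on decision variables.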

Checklist

  • Code changes are sufficiently documented; i.e. new functions contain docstrings and further explanations may be given in doc.
  • Unit tests for new features were added (if applicable).
  • A note for the release notes doc/release_notes.rst of the upcoming release is included.
  • I consent to the release of this PR's code under the MIT license.

…ts API, LP formulation for convex/concave cases, and simplify tests
…l formulation, add domain bounds to LP formulation

- Incremental method now uses binary indicator variables with link/order constraints to enforce proper segment filling order (Markowitz & Manne)
- LP method now adds x ∈ [min(xᵢ), max(xᵢ)] domain bound constraints to prevent extrapolation beyond breakpoints
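The segment-filling arithmetic behind the incremental method can be checked outside a solver. A sketch with assumed breakpoint values: each δᵢ fills segment i in order, and the binary indicators (omitted here) are what force that filling order in the MIP:

```python
import numpy as np

x_pts = np.array([0.0, 30.0, 60.0, 100.0])   # assumed breakpoints
y_pts = np.array([0.0, 36.0, 84.0, 170.0])
widths = np.diff(x_pts)                      # segment widths
slopes = np.diff(y_pts) / widths             # per-segment slopes

def incremental_eval(x):
    # delta_i in [0, width_i], earlier segments filled first
    delta = np.clip(x - x_pts[:-1], 0.0, widths)
    return y_pts[0] + float(slopes @ delta)

# in-order filling reproduces the interpolant at any x in the domain
assert np.isclose(incremental_eval(45.0), np.interp(45.0, x_pts, y_pts))
```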
Validate trailing-NaN-only for SOS2 and disjunctive methods to prevent
corrupted adjacency. Fail fast when skip_nan_check=True but breakpoints
actually contain NaN.
Support reversed syntax (y == piecewise(...)) via __le__/__ge__/__eq__
dispatch in BaseExpression and ScalarLinearExpression. Fix LP example
to use power == demand for more illustrative results.
- Add @overload to comparison operators (__le__, __ge__, __eq__) in
  BaseExpression and Variable to distinguish PiecewiseExpression from
  SideLike return types
- Update ConstraintLike type alias to include PiecewiseConstraintDescriptor
- Fix PiecewiseConstraintDescriptor.lhs type from object to LinExprLike
- Fix dict/sequence type mismatches in _dict_to_array, _dict_segments_to_array,
  _segments_list_to_array
- Remove unused type: ignore comments
- Narrow ScalarLinearExpression/ScalarVariable return types to not include
  PiecewiseConstraintDescriptor (impossible at runtime)
FabianHofmann requested a review from coroa March 6, 2026 12:18
FBumann and others added 2 commits March 9, 2026 13:19
* feat: add `active` parameter to piecewise linear constraints

Add an `active` parameter to the `piecewise()` function that accepts a
binary variable to gate piecewise linear functions on/off. This enables
unit commitment formulations where a commitment binary controls the
operating range.

The parameter modifies each formulation method as follows:
- Incremental: δ_i ≤ active (tightened bounds) + base terms × active
- SOS2: Σλ_i = active (instead of 1)
- Disjunctive: Σz_k = active (instead of 1)

When active=0, all auxiliary variables are forced to zero, collapsing
x and y to zero. When active=1, the normal PWL domain is active.
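The gating can be sanity-checked numerically. A sketch of the SOS2 variant (Σλᵢ = active) with assumed breakpoint values; the solver-side binaries and SOS2 adjacency handling are omitted:

```python
import numpy as np

x_pts = np.array([20.0, 50.0, 80.0])    # assumed breakpoints
y_pts = np.array([30.0, 70.0, 125.0])

def sos2_point(lam, active):
    # gated convexity constraint: lambda >= 0 and sum(lambda) == active
    assert (lam >= 0).all() and np.isclose(lam.sum(), active)
    return lam @ x_pts, lam @ y_pts

# active = 0 forces every lambda, and hence x and y, to zero
assert sos2_point(np.zeros(3), 0) == (0.0, 0.0)

# active = 1 recovers the normal PWL domain
x, y = sos2_point(np.array([0.0, 0.5, 0.5]), 1)
assert np.isclose(y, np.interp(x, x_pts, y_pts))
```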

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: tighten active parameter docstrings

Clarify that zero-forcing is the only linear formulation possible —
relaxing the constraint would require big-M or indicator constraints.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: add active parameter to release notes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve mypy type errors for x_base/y_base assignment

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: add unit commitment example to piecewise notebook

Example 6 demonstrates the active parameter with a gas unit that
stays off at t=1 (low demand) and commits at t=2,3 (high demand),
showing power=0 and fuel=0 when the commitment binary is off.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update notebook

* test: comprehensive active parameter test coverage

Add tests for gaps identified in review:
- Inequality + active (incremental and SOS2, on and off)
- auto method selection + active (equality and auto-LP rejection)
- active with LinearExpression (not just Variable)
- active with NaN-masked breakpoints
- LP file output comparison (active vs plain)
- Multi-dimensional solver test (per-entity on/off)
- SOS2 non-zero base + active off
- SOS2 inequality + active off
- Disjunctive active on (solver)
- Fix: reject active when auto resolves to LP

159 tests pass (was 122).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: extract PWL_ACTIVE_BOUND_SUFFIX constant

Move the active bound constraint name suffix to constants.py,
consistent with all other PWL suffix constants.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test: remove redundant active parameter tests

Keep only tests that exercise unique code paths or verify distinct
mathematical properties.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
FabianHofmann merged commit 982b573 into master Mar 9, 2026
2 of 4 checks passed
FabianHofmann deleted the piecewise-followup branch March 9, 2026 12:19
Member

coroa commented Mar 9, 2026

I'd have liked to review this: I still have questions about the use of breakpoints for x and y, or why they are used at all. I would prefer we remove add_piecewise_constraints and merge its code into add_constraints.

I don't understand the docs in regard to inequalities. They are slightly misleading; it's probably easiest to discuss this once more.

Collaborator Author

FabianHofmann commented Mar 10, 2026

Hey @coroa, please do the review anyway. This will likely need another follow-up. I mostly wanted to avoid the misleading state in the master branch and resolve that asap. And as it is a bit more sorted now, there is no need to pressure you about the review (I know that you are at a workshop this week).

FabianHofmann added a commit to CharlieFModo/linopy that referenced this pull request Mar 12, 2026
* Refactor piecewise constraints: add piecewise/segments/slopes_to_points API, LP formulation for convex/concave cases, and simplify tests

* piecewise: replace bp_dim/seg_dim params with constants, remove dead code, improve errors

* Fix piecewise linear constraints: add binary indicators to incremental formulation, add domain bounds to LP formulation


* update signatures of breakpoints and segments, apply convexity check only where needed

* update doc

* Reject interior NaN and skip_nan_check+NaN in piecewise formulations


* Allow piecewise() on either side of comparison operators


* Fix mypy type errors for piecewise constraint types


* rename header of jupyter notebook

* doc: rename notebook again

* feat: add active parameter to piecewise linear constraints (PyPSA#604)


---------

Co-authored-by: FBumann <117816358+FBumann@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Collaborator

FBumann commented Mar 30, 2026

@FabianHofmann This PR improved the code quality of the feature a lot, I think.
But it also made some simplifications that are a real deal breaker for me. I think your intent with this follow-up was to make the case of linking 2 variables via piecewise simpler, but this came at the cost of less universal usage and other potential issues.

  1. The formulation now only allows linking exactly 2 variables. This is a fundamental limitation and will prevent users (including me) from using the API. I often need to link 1, 2, 3 or more variables (like a CHP plant with fuel in, heat and power out = 3 variables with nonlinear efficiencies).
  2. This API hides the complexity of piecewise constraints. The constraint is now captured as a new class/state and abstraction layer. This complicates inspection and interoperability with regular expressions and constraints. Also, this goes against the principle of exposing only variables and constraints. I would rather have a construction layer which converts everything into regular expressions/constraints instead of a state/object, like we had in #558 (Feat/add piecewise linear).

A bit more insight, with examples:

What changed

PR #602 replaced the old dict[str, LinExprLike] + link_dim API with a simpler piecewise(expr, x_points, y_points) function that only supports 2-variable relationships (y = f(x)).

What was lost

The old API could link 3+ variables through shared SOS2 lambda weights:

# OLD API
m.add_piecewise_constraints(
    expr={"x": x_var, "y": y_var, "z": z_var},
    breakpoints=bp,  # link_dim matched dict keys
)

This is no longer possible. The new API is strictly x → y:

# NEW API
m.add_piecewise_constraints(piecewise(x, x_pts, y_pts) == y)

Workarounds

1. Chain piecewise constraints

Works when the relationship decomposes (e.g. z = g(x) + h(y)). Use two separate piecewise() calls and sum the results. Only applicable for separable functions.
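A numpy sketch of what "separable" means here (breakpoint values assumed): z = g(x) + h(y) decomposes into two independent piecewise relations whose results are summed:

```python
import numpy as np

# assumed breakpoints for g and h
x_pts, gx_pts = np.array([0.0, 10.0, 20.0]), np.array([0.0, 5.0, 15.0])
y_pts, hy_pts = np.array([0.0, 4.0, 8.0]), np.array([0.0, 2.0, 10.0])

def z_of(x, y):
    # two separate piecewise functions, summed afterwards
    return np.interp(x, x_pts, gx_pts) + np.interp(y, y_pts, hy_pts)

assert np.isclose(z_of(15.0, 6.0), 10.0 + 6.0)
```

A genuinely joint function z = f(x, y), e.g. an efficiency depending on both inputs, cannot be written this way, which is why workaround 2 is needed.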

2. Manual lambda formulation (essentially constructing the piecewise function yourself)

For true multi-variable linking, reproduce the SOS2 lambda formulation manually. All required operations (Variable * DataArray, .sum(), add_sos_constraints) are supported:

# assumes an existing linopy model `m` with variables x, y, z and
# breakpoint arrays x_pts, y_pts, z_pts of length n_breakpoints
import numpy as np
import pandas as pd

bp_idx = pd.Index(np.arange(n_breakpoints), name="breakpoint")

lam = m.add_variables(lower=0, upper=1, coords=[bp_idx], name="lambda")
m.add_constraints(lam.sum("breakpoint") == 1)            # convexity
m.add_constraints(x == (lam * x_pts).sum("breakpoint"))  # link x
m.add_constraints(y == (lam * y_pts).sum("breakpoint"))  # link y
m.add_constraints(z == (lam * z_pts).sum("breakpoint"))  # link z
m.add_sos_constraints(lam, sos_type=2, sos_dim="breakpoint")  # adjacency
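Why the shared λ weights link every variable consistently: any convex combination of breakpoints places x, y and z at the same interpolation point. A numpy check with assumed breakpoint values (SOS2 adjacency ensures only two neighbouring λs are nonzero):

```python
import numpy as np

x_pts = np.array([0.0, 50.0, 100.0])    # assumed breakpoints
y_pts = np.array([0.0, 60.0, 150.0])    # e.g. fuel
z_pts = np.array([0.0, 20.0, 45.0])     # e.g. heat

lam = np.array([0.0, 0.6, 0.4])         # SOS2-feasible: adjacent, sums to 1
x, y, z = lam @ x_pts, lam @ y_pts, lam @ z_pts

# all three variables sit on their curves at the same x
assert np.isclose(y, np.interp(x, x_pts, y_pts))
assert np.isclose(z, np.interp(x, x_pts, z_pts))
```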

Summary

The current state is not usable for me. I would really like to bring back the more generic #558 approach.
I would keep the distinction between segments/piecewise and think about adding a simple constructor method for your x → y case.

What do you think about this?
@FabianHofmann @coroa

Collaborator Author

@FBumann this makes total sense, and this use case was not on my radar at all, sorry for that. There were so many new aspects to think about. So let's bring it back. Can we find a compromise that allows the piecewise function to be applied to 2+ variables? Or do you think we should go directly for the dict approach?

Collaborator

FBumann commented Mar 30, 2026

@FabianHofmann I'm not entirely sure we need to choose between the two approaches. If we go for a construction layer (no dedicated Piecewise class/state), there's no issue with supporting different use cases through method overloading or shared internal helpers.

The key architectural decision is: should piecewise constraints introduce new state (PiecewiseConstraintDescriptor) or should they be a pure construction layer that produces regular Variables, Constraints, and Expressions?

I'd argue strongly for the construction layer, for a few reasons:

  1. API surface stays minimal. linopy's strength is that the entire API is just Model, Variables, Constraints, Expression. Adding PiecewiseExpression and PiecewiseConstraintDescriptor as user-facing types breaks this. Users now need to understand a new concept that doesn't exist in the optimization domain — it's an implementation artifact.

  2. No loss of functionality. Everything the descriptor does can be done by a single add_piecewise_constraints() call that directly creates the auxiliary variables, SOS constraints, and linking constraints. The descriptor is a detour, not a necessity.

  3. Multi-variable linking becomes natural. The current descriptor pattern is structurally tied to 2-variable x → y relationships. A construction function with a dict-based API (like the pre-#602 one) supports N-variable linking without any contortion: CHP plants with fuel/power/heat become a first-class use case, not a workaround.

  4. Interoperability. When piecewise constraints produce regular linopy objects, they compose with everything else — .dual, .to_matrix(), IO, inspection. A descriptor is opaque until it's expanded.

A single overloaded method could cover both the simple and the complex case:

# Simple 2-variable case
m.add_piecewise_constraints(power, fuel, x_pts, y_pts, sign="==")

# N-variable case (shared lambdas)
m.add_piecewise_constraints(
    exprs={"power": power, "fuel": fuel, "heat": heat},
    breakpoints=bp,
    sign="==",
)

Proposed first step: Agree that piecewise constraints should not introduce stored state (PiecewiseConstraintDescriptor, PiecewiseExpression). Once we align on that, we can list the use cases to cover and design the API signature together.

FabianHofmann added a commit that referenced this pull request May 7, 2026
… mode (#673)

* feat(piecewise): add Slopes class for deferred breakpoint specs

Introduces ``linopy.Slopes`` — a frozen dataclass that carries
per-piece slopes + initial y-value, deferred until an x grid is known.
Used as the second element of a tuple in ``add_piecewise_formulation``
where another tuple in the same call provides the x grid::

    m.add_piecewise_formulation(
        (power, [0, 30, 60, 100]),
        (fuel,  Slopes([1.2, 1.4, 1.7], y0=0)),
    )
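Resolving per-piece slopes against an x grid is cumulative-sum arithmetic; a plain-numpy sketch of what a call like ``Slopes([1.2, 1.4, 1.7], y0=0).to_breakpoints(x_pts)`` computes (the numpy rendering is my sketch, not the implementation):

```python
import numpy as np

x_pts = np.array([0.0, 30.0, 60.0, 100.0])
slopes = np.array([1.2, 1.4, 1.7])   # one slope per piece
y0 = 0.0

# y breakpoints: start at y0, then accumulate slope * piece width
y_pts = y0 + np.concatenate(([0.0], np.cumsum(slopes * np.diff(x_pts))))
assert np.allclose(y_pts, [0.0, 36.0, 78.0, 146.0])
```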

* Constructor: ``Slopes(values, y0=0.0, align="pieces", dim=None)``
* Standalone resolution: ``Slopes(...).to_breakpoints(x_points)`` returns
  the resolved breakpoint ``DataArray`` — useful for inspection or
  building breakpoints outside the formulation pipeline.
* Dispatch: ``add_piecewise_formulation`` adds a one-pass resolution that
  borrows the x grid from the first non-Slopes tuple (deterministic).
  All-Slopes calls raise with a pointer to the standalone resolution.
* Supports the same shape variations as ``breakpoints(slopes=...)``
  (1D, dict, DataFrame, DataArray) and the ``align`` modes from #672.

This commit is purely additive: ``breakpoints(slopes=..., x_points=...,
y0=...)`` and ``slopes_to_points`` keep working unchanged.  A follow-up
commit removes them in favour of ``Slopes``.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(piecewise): remove slopes-mode of breakpoints() and slopes_to_points

Now that ``Slopes`` covers the deferred-and-standalone slopes use case
with a clearer type story, drop the duplicated paths:

* ``breakpoints(slopes=, x_points=, y0=, slopes_align=)`` removed.
  ``breakpoints`` is now points-only: ``breakpoints(values, *, dim=None)``.
* ``slopes_to_points`` made private (``_slopes_to_points``) — it's a
  list-level primitive used only by ``Slopes.to_breakpoints``.  Public
  callers should use ``Slopes(...)``; users who need list output can
  call ``Slopes(...).to_breakpoints([...]).values.tolist()``.

Both surfaces shipped earlier in this development cycle (``Slopes``
mode of ``breakpoints`` from #602 and #672, ``slopes_to_points`` from
#602) and have not been released, so the breakage window is the same
as the rest of the v0.7.0 piecewise work.

Tests migrated:

* The slopes-mode tests on ``TestBreakpointsFactory`` and the entire
  ``TestSlopesAlignLeading`` class are removed; the same shapes are
  exercised in expanded ``TestSlopesClass`` tests (Series / DataArray
  / DataFrame / shared x grid / shared y0 / leading-align ragged /
  bad-y0 validation).
* ``TestSlopesToPoints`` becomes ``TestSlopesToPointsPrivate``, importing
  the helper under its private name.
* Inline ``breakpoints(slopes=...)`` callers in feasibility/envelope
  tests migrated to ``Slopes(...)`` (or
  ``Slopes(...).to_breakpoints(x_pts)`` for the standalone path).

Docs:

* ``doc/api.rst``: drop ``slopes_to_points``, add ``Slopes``.
* ``doc/release_notes.rst``: replace the ``breakpoints`` slopes-mode
  bullet with one describing ``Slopes``.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(piecewise): migrate slopes examples to Slopes class

* ``doc/piecewise-linear-constraints.rst``:
  - Replace the ``breakpoints(slopes=, x_points=, y0=)`` quick-reference
    line with ``Slopes(values, y0=)`` (deferred form).
  - Rewrite the "From slopes" section to use ``Slopes`` inside
    ``add_piecewise_formulation``, plus a note on standalone resolution
    via ``Slopes.to_breakpoints(x_pts)``.
* ``examples/piecewise-linear-constraints.ipynb``: add section 8
  "Specifying with slopes — ``Slopes``" that reproduces the section-1
  gas-turbine fit using slopes [1.2, 1.6, 2.15] over the same x grid,
  and demonstrates standalone ``Slopes.to_breakpoints(...)``.

The inequality-bounds notebook doesn't reference the removed slopes
APIs and stays focussed on curvature/LP dispatch — no changes there.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(piecewise): custom Slopes repr that hides defaults and summarises bulky values

Default ``@dataclass`` repr was noisy:

    Slopes(values=[1.2, 1.6, 2.15], y0=0, align='pieces', dim=None)

and would dump the full DataArray/DataFrame for non-list inputs.  New repr:

    Slopes([1.2, 1.6, 2.15], y0=0)
    Slopes([nan, 1, 2], y0=0, align='leading')
    Slopes(<DataArray gen: 2, _breakpoint: 4>, y0=0, dim='gen')
    Slopes(<DataFrame shape=(2, 3)>, y0=..., dim='gen')

* The primary ``values`` arg renders without a keyword (positional like the
  constructor call) and inline only for plain lists/tuples; complex types
  (DataArray/DataFrame/Series/dict) get a one-line shape summary.
* ``align`` and ``dim`` are omitted when at their defaults.
* New ``_summarise_breakslike`` helper handles the value rendering.

Notebook section 8 gains a "what does Slopes look like" peek cell that
renders the repr before the in-formulation usage, so users see the
value-type semantics directly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(piecewise): consolidate Slopes tests into focused, parametrised classes

The flat list of ``test_to_breakpoints_*`` methods had drifted into one
case per (input shape × input type) combination — duplicated bodies,
hard to scan, easy to miss a type.  Restructure into five classes,
each pinning one aspect of the contract:

* ``TestSlopesValueType`` — immutability + repr.  Repr behaviour
  parametrised over (1d-defaults-hidden, non-default-align,
  non-default-dim) for the format check, and over
  (DataFrame, DataArray, Series, dict) for the bulky-value summary.
* ``TestSlopesToBreakpoints1D`` — same arithmetic anchor (slopes [1, 2]
  over x [0, 1, 2] → y [0, 1, 3]) under every accepted 1D input type
  pairing (list, tuple, ndarray, Series, DataArray, mixed).  Plus a
  separate parametrised "arithmetic anchors" set covering negative
  slopes, non-zero y0, and uneven x spacing.
* ``TestSlopesToBreakpointsPerEntity`` — same per-entity anchor
  (gen=a → [0, 10, 30]; gen=b → [10, 50, 110]) under every accepted
  multi-entity container type (dict, DataFrame, DataArray).  Plus
  shared-x-grid broadcast and ``y0`` shape coverage (scalar, dict,
  Series, DataArray) under one parametrised test.
* ``TestSlopesToBreakpointsAlignment`` — ``align="pieces"`` and
  ``align="leading"`` must produce equal output for matching inputs;
  parametrised over 1D and per-entity-dict shapes.  Ragged
  per-entity case kept as a dedicated test.
* ``TestSlopesValidationErrors`` — three rejection paths
  (leading-first-not-NaN, 1D + dict y0, bad y0 type) parametrised in
  one test.

Net: 17 individual tests collapse into 32 parametrised cases under 5
classes, with each behaviour-of-interest in exactly one place.

Also adds the missing ``BreaksLike`` import in the test-only
``TYPE_CHECKING`` block (used in the new parametrised signatures).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: hoist _slopes_to_points test import + strip notebook execution metadata

* ``test/test_piecewise_constraints.py``: hoist the
  ``from linopy.piecewise import _slopes_to_points`` to module scope —
  was repeated inside each of the three ``TestSlopesToPointsPrivate``
  methods.
* ``examples/piecewise-linear-constraints.ipynb``: strip
  ``cell.metadata.execution`` (iopub timestamps) from all cells.  The
  ``jupyter-notebook-cleanup`` pre-commit hook clears outputs but
  doesn't touch this field, so it accumulated noise in the diff every
  time the notebook was re-executed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(notebook): restore em-dashes from \u2014 escapes to UTF-8

The previous metadata-strip pass round-tripped the notebook through
``json.dump(..., indent=1)`` which defaults ``ensure_ascii=True`` and
escaped all em-dashes (and any other non-ASCII chars) across the whole
file — pure encoding churn.

Surgical fix: byte-level replace ``\u2014`` → ``—`` rather than another
JSON round-trip, so nothing else changes.  Future re-encodes should use
``ensure_ascii=False``.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(notebook): restore unrelated unicode chars and Python version metadata

Two more accidental edits from the json round-trip caught by reviewing
the master diff:

* ``≤`` and ``≥`` in section 4 (existing master content) had been
  escaped to ``\u2264`` / ``\u2265``.  Restored to UTF-8.
* Notebook ``language_info.version`` metadata had drifted from
  ``"3.13.2"`` (master) to ``"3.11.11"`` (whatever kernel I happened to
  run).  Reverted.

Net: the notebook diff vs master is now 63 insertions / 0 deletions —
only the four new section-8 cells, no incidental churn.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* review fixes: emit Slopes warning, bound seq repr, harden dispatch test

Addresses review of #673:

* **Slopes now actually emits the EvolvingAPIWarning** it advertises in
  its docstring.  The warning fires from ``__post_init__`` so the
  standalone ``Slopes(...).to_breakpoints(...)`` migration path doesn't
  silently bypass the evolving-API signal that the previous
  ``breakpoints(slopes=...)`` form indirectly inherited.
  ``_EvolvingApiKey`` extended to include ``"Slopes"``; per-key dedup
  keeps construction cheap on repeated use.
* **``_summarise_breakslike`` truncates long sequences** instead of
  dumping them verbatim.  Sequences over 8 entries render as
  ``[0, 1, 2, ..., 48, 49] (50 items)`` — the previous "small size"
  comment promised this without enforcing it.
* **``test_two_non_slopes_picks_first_x_grid``** previously asserted only
  that the formulation was registered.  Now uses distinguishable x grids
  (10× scale difference), pins the model onto piece 1, and verifies
  ``z == 10`` (the value implied by the *first* tuple's grid) rather
  than ``z == 100`` (the second tuple's).
* **New ``test_multiple_slopes_share_x_grid``** covers the
  ``(non-Slopes, Slopes, Slopes)`` shape — both Slopes resolve against
  the same borrowed grid.  Reviewer-flagged coverage gap.
* **New ``test_slopes_construction_warns_and_dedups``** in
  ``TestEvolvingAPIWarning`` pins the new warning behaviour.
* **New ``test_repr_truncates_long_sequences``** in
  ``TestSlopesValueType`` pins the truncation.
* Hoisted ``set(slopes_idx)`` out of the ``non_slopes_idx`` comprehension
  in the dispatch (cosmetic; N is small).
* Added a module-level ``TOL = 1e-6`` constant in
  ``test_piecewise_constraints.py`` matching the convention in
  ``test_piecewise_feasibility.py``; the new dispatch test uses it.
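The truncation rule can be sketched as a standalone helper (hypothetical, not linopy's actual ``_summarise_breakslike``):

```python
def summarise_seq(seq, max_items=8):
    # render short sequences verbatim, long ones as head + tail + count
    vals = list(seq)
    if len(vals) <= max_items:
        return repr(vals)
    head = ", ".join(repr(v) for v in vals[:3])
    tail = ", ".join(repr(v) for v in vals[-2:])
    return f"[{head}, ..., {tail}] ({len(vals)} items)"

assert summarise_seq(range(50)) == "[0, 1, 2, ..., 48, 49] (50 items)"
```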

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(piecewise): three robustness issues in Slopes

1. **Stacklevel was off by one** for warnings emitted from
   ``Slopes.__post_init__``.  The dataclass-generated ``__init__``
   adds an extra frame (helper → ``_warn_evolving_api`` →
   ``__post_init__`` → synthetic ``__init__`` → user code), so
   ``stacklevel=3`` landed inside the synthetic init instead of the
   user's call site.  Made ``_warn_evolving_api`` accept ``stacklevel``
   as a parameter (default 3, matching the function-call entry points)
   and pass ``stacklevel=4`` from ``Slopes``.

2. **Equality crashed with array values.**  Frozen dataclasses default
   to elementwise ``__eq__``, so ``Slopes(np.array([1, 2])) ==
   Slopes(np.array([1, 2]))`` raised ``ValueError: truth value of an
   array with more than one element is ambiguous``.  Added ``eq=False``
   to opt out and fall back to identity equality.  ``Slopes`` is now
   safely usable as a set member or dict key.

3. **Numpy scalar repr noise.**  ``_summarise_breakslike`` previously
   called ``list(v)`` which preserved numpy scalar types; their
   reprs differ from Python scalars (and across numpy versions).
   Switched to ``np.asarray(v).tolist()`` which normalises numpy
   types to Python types up front, so ``Slopes(np.array([1, 2, 3],
   dtype=np.int64), y0=0)`` renders as ``Slopes([1, 2, 3], y0=0)``
   uniformly.  Added a 0-D guard for the edge case.
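Item 2 is a general frozen-dataclass pitfall worth pinning down. A minimal reproduction (class names hypothetical): the generated ``__eq__`` compares fields with ``==``, so an ndarray field yields an ambiguous array truth value, while ``eq=False`` falls back to identity:

```python
from dataclasses import dataclass

import numpy as np

@dataclass(frozen=True)
class EqSpec:            # default: field-wise __eq__
    values: np.ndarray

@dataclass(frozen=True, eq=False)
class IdSpec:            # identity equality: safe with array fields
    values: np.ndarray

try:
    EqSpec(np.array([1, 2])) == EqSpec(np.array([1, 2]))
    raised = False
except ValueError:       # "truth value of an array ... is ambiguous"
    raised = True

assert raised
s = IdSpec(np.array([1, 2]))
assert s == s and s != IdSpec(np.array([1, 2]))
```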

Each fix is pinned by a new test in ``TestSlopesValueType``
(``test_repr_normalises_numpy_scalars``,
``test_equality_with_array_values_does_not_raise``) and
``TestEvolvingAPIWarning``
(``test_slopes_warning_stacklevel_points_to_user_call``).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(piecewise): value-equality on Slopes via type-dispatched __eq__

Earlier ``eq=False`` (identity equality) was a footgun for tests:
``assert pwf_spec == expected_slopes`` would silently return ``False``
even when the two specs described the same curve.

Replace with a custom ``__eq__`` that compares each field by value:

* ``align`` / ``dim`` — plain ``==``.
* ``y0`` / ``values`` — dispatched on type via ``_values_equal``:
  - ``ndarray`` → ``np.array_equal(equal_nan=True)``
  - ``DataFrame`` / ``Series`` → ``.equals(...)``
  - ``DataArray`` → ``.equals(...)``
  - ``dict`` → recurse on matching keys
  - scalar ``float`` → NaN-safe ``==`` (treats nan==nan as ``True`` to
    match the array path's ``equal_nan=True``)
  - everything else → strict ``type(a) is type(b)`` then ``==``.

``__hash__`` set to ``None`` (unhashable) since ``values`` may be a
mutable container.  Documented edges:

* List vs ndarray of the same numeric content compare unequal — strict
  type matching, same as Python's general ``[1,2] != np.array([1,2])``
  behaviour.

Tests: parametrised ``TestSlopesValueType.test_equality`` covers nine
shapes (lists, ndarrays, dicts, NaN scalars, NaN in arrays, mismatched
y0, mismatched values, mismatched types, dict inner-value mismatch).
Plus ``test_eq_against_non_slopes_returns_notimplemented`` for the
non-Slopes branch and ``test_unhashable`` pinning the hash opt-out.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(piecewise): summarise multi-dim ndarray Slopes values by shape

Previously a multi-dim ndarray fell through to the seq path,
``np.asarray(v).tolist()`` returned nested lists, and the repr dumped
them in full.  Even a moderate ``np.zeros((5, 20))`` produced a
2-line wall of ``0.0`` entries; an earlier ``np.zeros((20, 5, 30))``
case would have been worse.

Treat 2-D+ ndarrays the same way ``DataArray`` / ``DataFrame`` /
``Series`` are treated: a one-line shape summary
(``<ndarray shape=(20, 5, 30)>``).  1-D ndarrays still render inline
with the existing head + tail truncation, so user-facing slope
specifications stay readable.

The ``np.asarray(v)`` call is hoisted so we don't double-normalise on
the 1-D path.

New parametrised case ``multi_dim_ndarray`` in
``TestSlopesValueType.test_repr_summarises_bulky_values`` pins the new
behaviour.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(piecewise): broaden Slopes equality, trim release-notes entry

Equality (``Slopes.__eq__`` via ``_values_equal``) was strict-type to a
fault.  Four edge cases produced surprising ``False`` results despite
the operands describing the same curve:

1. ``Slopes(y0=0) != Slopes(y0=0.0)`` — ``int`` and ``float`` are
   semantically the same y-coordinate (``_breakpoints_from_slopes``
   calls ``float(y0)`` downstream), but the strict ``type(a) is
   type(b)`` gate rejected them.
2. ``Slopes(y0=np.float64(0)) != Slopes(y0=0.0)`` — same root cause
   for numpy scalars.
3. ``Slopes([float('nan'), 1.0], align='leading')`` was unequal to
   itself — Python's list equality uses ``is`` before ``==`` per
   element, so it only worked accidentally when the user happened to
   write ``np.nan`` (a CPython singleton) instead of ``float('nan')``.
4. ``np.array_equal(..., equal_nan=True)`` raises ``TypeError`` on
   object/string ndarrays.
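Edge cases 3 and 4 are reproducible directly in the interpreter, independent of any ``Slopes`` code:

```python
import numpy as np

# Case 3: list equality checks identity (`is`) before `==` per element.
# Two distinct nan objects make otherwise-identical lists unequal...
assert [float("nan"), 1.0] != [float("nan"), 1.0]
# ...while the np.nan singleton passes the identity check:
assert [np.nan, 1.0] == [np.nan, 1.0]

# Case 4: equal_nan=True raises on non-numeric dtypes, because numpy
# cannot apply isnan to object/string elements:
a = np.array(["a", "b"], dtype=object)
try:
    np.array_equal(a, a, equal_nan=True)
except TypeError:
    pass  # raises as described above
```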

Rewrite ``_values_equal`` to:

* Treat any two ``numbers.Real`` (excluding ``bool``) as numerically
  comparable with a NaN-safe float fallback.
* Promote ``list`` / ``tuple`` to ndarray before the array branch so
  in-place ``float('nan')`` content compares element-wise NaN-safe.
* Fall back to ``np.array_equal`` without ``equal_nan`` when the
  array has a non-numeric dtype.

Document the new semantics on ``__eq__`` and explicitly note that
``.equals`` for pandas / xarray containers is order-sensitive.

Tests:

* Flip ``different_value_types`` (now ``list_and_ndarray_same_content``)
  to expect ``True``.
* Rename ``nan_in_list_via_array_path`` → ``np_nan_in_list``; add
  parallel ``float_nan_in_list`` case.
* Add ``int_and_float_y0`` and ``numpy_scalar_and_float_y0`` cases.
* Add ``test_eq_dataframe_is_order_sensitive`` pinning the documented
  ``.equals`` caveat.
* Add ``test_eq_object_dtype_ndarray_does_not_raise`` covering the
  non-numeric ndarray fallback path.

Release notes: trim the ``Slopes`` entry to the user-facing purpose
(specify a curve by marginal costs / per-piece slopes) and the
canonical call form.  Drop the dev-cycle "**replaces** the slopes mode
of ``breakpoints()``..." sentence — those API surfaces never shipped,
so v0.7.0 readers have no context for the removal note.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(piecewise): trim notebook section 8 to match the surrounding shape

Section 8 was 6 cells where 2 do the same job — the surrounding
sections (1, 7) all use the 1-markdown-intro + 1-code-cell pattern.

Drops:

* The repr-explanation markdown + a standalone ``Slopes(...)`` cell
  showing the repr.  The repr is incidental; users will see it
  whenever they instantiate a ``Slopes``.
* The ``to_breakpoints`` intro markdown and demo cell.  Standalone
  resolution is documented in the ``.rst`` page; the notebook should
  show the canonical ``add_piecewise_formulation`` use only.
* The ``# Same curve as section 1 — slopes 1.2, 1.6, 2.15 …`` inline
  comment, now that the markdown intro says the same thing.

Also tighten the markdown intro: drop the bold emphasis on "borrowed
from the sibling tuple" and the trailing transition sentence.

Net result: section-8 diff vs master drops from 63 lines to 30
(roughly halved), and the section now mirrors the visual rhythm of
the rest of the tutorial.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(piecewise): require exactly one non-Slopes tuple in add_piecewise_formulation

The previous "borrow x grid from the first non-Slopes tuple" rule was
silently order-dependent when more than one non-Slopes tuple was
present.  Each non-Slopes tuple is a y-vector for its own variable,
so there is no canonical x axis — picking the *first* meant tuple
order changed the resolved breakpoints, and therefore the optimisation
problem itself.

Reject the ambiguous case at the dispatch boundary instead.  The new
ValueError points users at ``Slopes(...).to_breakpoints(x_pts)`` so
they can opt into a specific x grid explicitly when their setup has
multiple breakpoint vectors in play.
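The dispatch-boundary check can be sketched as below. Names are illustrative (`Slopes` is a dummy stand-in for the real class, `check_tuples` is not the actual dispatch code); only the "exactly one non-Slopes tuple" rule is taken from the commit:

```python
class Slopes:
    """Dummy stand-in for the real Slopes class."""
    def __init__(self, *values):
        self.values = values

def check_tuples(tuples):
    """Hypothetical sketch: with any Slopes present, exactly one
    non-Slopes tuple may supply the x grid. More than one is ambiguous,
    since each non-Slopes tuple is a y-vector for its own variable."""
    has_slopes = any(isinstance(t, Slopes) for t in tuples)
    non_slopes = [t for t in tuples if not isinstance(t, Slopes)]
    if has_slopes and len(non_slopes) != 1:
        raise ValueError(
            "Exactly one non-Slopes tuple must supply the x grid; use "
            "Slopes(...).to_breakpoints(x_pts) to choose one explicitly."
        )
```

A `(power, fuel, Slopes)` shape now raises, while `(power, Slopes, Slopes)` still resolves against the single `power` grid.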

* ``Slopes`` docstring updated: states the "exactly one non-Slopes"
  rule and the ``to_breakpoints`` escape hatch up front.
* ``test_three_tuple_deferred`` removed — its (power, fuel, Slopes)
  shape is now invalid and the equivalent (power, Slopes, Slopes) is
  already covered by ``test_multiple_slopes_share_x_grid``.
* ``test_two_non_slopes_picks_first_x_grid`` →
  ``test_multiple_non_slopes_with_slopes_raises``: the test that
  previously pinned the order-dependent behaviour now pins the
  ValueError.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(piecewise): pin Slopes dispatch via assert_model_equal; widen ndarray/Real annotations

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refactor(piecewise): trim _values_equal and _summarise_breakslike

* fix(piecewise): TypeGuard on _is_numeric_scalar for mypy

* fix(piecewise): revert _values_equal equals-loop to explicit branches for mypy

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Fabian <fab.hof@gmx.de>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>