feat: add slopes_align to breakpoints() #672
Conversation
6476b2b to f8d7634
Add `slopes_align` keyword to `linopy.breakpoints()` accepting "pieces" (default) or "leading". With "leading", `slopes` has the same length as `x_points` and `slopes[0]` is a NaN sentinel that is dropped — matches the convention of tabulating a marginal value at each breakpoint with the first row's marginal undefined.
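For intuition, the "leading" normalisation can be sketched in a few lines of plain Python (hypothetical helper `leading_to_pieces`; linopy's actual implementation normalises by slicing along the breakpoint dim instead):

```python
import math

def leading_to_pieces(slopes):
    # "leading" alignment: one slope per breakpoint, with slopes[0] a NaN
    # sentinel. Validate the sentinel, then drop position 0; the remainder
    # is the default "pieces" layout (one slope per piece).
    if not math.isnan(slopes[0]):
        raise ValueError('with slopes_align="leading", slopes[0] must be NaN')
    return slopes[1:]
```

For example, `leading_to_pieces([float("nan"), 1.2, 1.6, 2.15])` yields `[1.2, 1.6, 2.15]`, the "pieces" form for a four-point x grid.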
6a00693 to 51b054f
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
@FabianHofmann I would probably split breakpoints and slopes into two paths and introduce a Slopes object that has an init, alternative constructors via class methods, and a to_breakpoints() method. One could use it instead of breakpoints directly. This would make for better UX. I planned on doing that in a follow-up.
good idea!
Would you still do this PR, and then remove the breakpoints() path in favour of Slopes?
let's merge this one, then replace the functionality entirely. fine with that?
… mode (#673)

* feat(piecewise): add Slopes class for deferred breakpoint specs

  Introduces ``linopy.Slopes`` — a frozen dataclass that carries per-piece slopes + initial y-value, deferred until an x grid is known. Used as the second element of a tuple in ``add_piecewise_formulation`` where another tuple in the same call provides the x grid::

      m.add_piecewise_formulation(
          (power, [0, 30, 60, 100]),
          (fuel, Slopes([1.2, 1.4, 1.7], y0=0)),
      )

  * Constructor: ``Slopes(values, y0=0.0, align="pieces", dim=None)``
  * Standalone resolution: ``Slopes(...).to_breakpoints(x_points)`` returns the resolved breakpoint ``DataArray`` — useful for inspection or building breakpoints outside the formulation pipeline.
  * Dispatch: ``add_piecewise_formulation`` adds a one-pass resolution that borrows the x grid from the first non-Slopes tuple (deterministic). All-Slopes calls raise with a pointer to the standalone resolution.
  * Supports the same shape variations as ``breakpoints(slopes=...)`` (1D, dict, DataFrame, DataArray) and the ``align`` modes from #672.

  This commit is purely additive: ``breakpoints(slopes=..., x_points=..., y0=...)`` and ``slopes_to_points`` keep working unchanged. A follow-up commit removes them in favour of ``Slopes``.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(piecewise): remove slopes-mode of breakpoints() and slopes_to_points

  Now that ``Slopes`` covers the deferred-and-standalone slopes use case with a clearer type story, drop the duplicated paths:

  * ``breakpoints(slopes=, x_points=, y0=, slopes_align=)`` removed. ``breakpoints`` is now points-only: ``breakpoints(values, *, dim=None)``.
  * ``slopes_to_points`` made private (``_slopes_to_points``) — it's a list-level primitive used only by ``Slopes.to_breakpoints``. Public callers should use ``Slopes(...)``; users who need list output can call ``Slopes(...).to_breakpoints([...]).values.tolist()``.
  Both surfaces shipped earlier in this development cycle (``Slopes`` mode of ``breakpoints`` from #602 and #672, ``slopes_to_points`` from #602) and have not been released, so the breakage window is the same as the rest of the v0.7.0 piecewise work.

  Tests migrated:

  * The slopes-mode tests on ``TestBreakpointsFactory`` and the entire ``TestSlopesAlignLeading`` class are removed; the same shapes are exercised in expanded ``TestSlopesClass`` tests (Series / DataArray / DataFrame / shared x grid / shared y0 / leading-align ragged / bad-y0 validation).
  * ``TestSlopesToPoints`` becomes ``TestSlopesToPointsPrivate``, importing the helper under its private name.
  * Inline ``breakpoints(slopes=...)`` callers in feasibility/envelope tests migrated to ``Slopes(...)`` (or ``Slopes(...).to_breakpoints(x_pts)`` for the standalone path).

  Docs:

  * ``doc/api.rst``: drop ``slopes_to_points``, add ``Slopes``.
  * ``doc/release_notes.rst``: replace the ``breakpoints`` slopes-mode bullet with one describing ``Slopes``.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(piecewise): migrate slopes examples to Slopes class

  * ``doc/piecewise-linear-constraints.rst``:
    - Replace the ``breakpoints(slopes=, x_points=, y0=)`` quick-reference line with ``Slopes(values, y0=)`` (deferred form).
    - Rewrite the "From slopes" section to use ``Slopes`` inside ``add_piecewise_formulation``, plus a note on standalone resolution via ``Slopes.to_breakpoints(x_pts)``.
  * ``examples/piecewise-linear-constraints.ipynb``: add section 8 "Specifying with slopes — ``Slopes``" that reproduces the section-1 gas-turbine fit using slopes [1.2, 1.6, 2.15] over the same x grid, and demonstrates standalone ``Slopes.to_breakpoints(...)``.

  The inequality-bounds notebook doesn't reference the removed slopes APIs and stays focussed on curvature/LP dispatch — no changes there.
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(piecewise): custom Slopes repr that hides defaults and summarises bulky values

  Default ``@dataclass`` repr was noisy:

      Slopes(values=[1.2, 1.6, 2.15], y0=0, align='pieces', dim=None)

  and would dump the full DataArray/DataFrame for non-list inputs. New repr:

      Slopes([1.2, 1.6, 2.15], y0=0)
      Slopes([nan, 1, 2], y0=0, align='leading')
      Slopes(<DataArray gen: 2, _breakpoint: 4>, y0=0, dim='gen')
      Slopes(<DataFrame shape=(2, 3)>, y0=..., dim='gen')

  * The primary ``values`` arg renders without a keyword (positional like the constructor call) and inline only for plain lists/tuples; complex types (DataArray/DataFrame/Series/dict) get a one-line shape summary.
  * ``align`` and ``dim`` are omitted when at their defaults.
  * New ``_summarise_breakslike`` helper handles the value rendering.

  Notebook section 8 gains a "what does Slopes look like" peek cell that renders the repr before the in-formulation usage, so users see the value-type semantics directly.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(piecewise): consolidate Slopes tests into focused, parametrised classes

  The flat list of ``test_to_breakpoints_*`` methods had drifted into one case per (input shape × input type) combination — duplicated bodies, hard to scan, easy to miss a type. Restructure into five classes, each pinning one aspect of the contract:

  * ``TestSlopesValueType`` — immutability + repr. Repr behaviour parametrised over (1d-defaults-hidden, non-default-align, non-default-dim) for the format check, and over (DataFrame, DataArray, Series, dict) for the bulky-value summary.
  * ``TestSlopesToBreakpoints1D`` — same arithmetic anchor (slopes [1, 2] over x [0, 1, 2] → y [0, 1, 3]) under every accepted 1D input type pairing (list, tuple, ndarray, Series, DataArray, mixed). Plus a separate parametrised "arithmetic anchors" set covering negative slopes, non-zero y0, and uneven x spacing.
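The 1D arithmetic anchor (slopes [1, 2] over x [0, 1, 2] → y [0, 1, 3]) follows from the resolution recurrence y[i+1] = y[i] + slopes[i] * (x[i+1] - x[i]). A minimal sketch of that recurrence (hypothetical `slopes_to_points` helper, written in plain Python; the library's list-level primitive is private and may differ):

```python
def slopes_to_points(slopes, x_points, y0=0.0):
    """Resolve per-piece slopes against an x grid into breakpoint y-values.

    slopes[i] is the slope between x_points[i] and x_points[i + 1], so
    y-values follow y[i + 1] = y[i] + slopes[i] * (x[i + 1] - x[i]).
    """
    if len(slopes) != len(x_points) - 1:
        raise ValueError("need exactly one slope per piece")
    ys = [float(y0)]
    for slope, x_lo, x_hi in zip(slopes, x_points, x_points[1:]):
        ys.append(ys[-1] + slope * (x_hi - x_lo))
    return ys

# The arithmetic anchor from the test suite: slopes [1, 2] over x [0, 1, 2].
assert slopes_to_points([1, 2], [0, 1, 2]) == [0.0, 1.0, 3.0]
```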
  * ``TestSlopesToBreakpointsPerEntity`` — same per-entity anchor (gen=a → [0, 10, 30]; gen=b → [10, 50, 110]) under every accepted multi-entity container type (dict, DataFrame, DataArray). Plus shared-x-grid broadcast and ``y0`` shape coverage (scalar, dict, Series, DataArray) under one parametrised test.
  * ``TestSlopesToBreakpointsAlignment`` — ``align="pieces"`` and ``align="leading"`` must produce equal output for matching inputs; parametrised over 1D and per-entity-dict shapes. Ragged per-entity case kept as a dedicated test.
  * ``TestSlopesValidationErrors`` — three rejection paths (leading-first-not-NaN, 1D + dict y0, bad y0 type) parametrised in one test.

  Net: 17 individual tests collapse into 32 parametrised cases under 5 classes, with each behaviour-of-interest in exactly one place.

  Also adds the missing ``BreaksLike`` import in the test-only ``TYPE_CHECKING`` block (used in the new parametrised signatures).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: hoist _slopes_to_points test import + strip notebook execution metadata

  * ``test/test_piecewise_constraints.py``: hoist the ``from linopy.piecewise import _slopes_to_points`` to module scope — was repeated inside each of the three ``TestSlopesToPointsPrivate`` methods.
  * ``examples/piecewise-linear-constraints.ipynb``: strip ``cell.metadata.execution`` (iopub timestamps) from all cells. The ``jupyter-notebook-cleanup`` pre-commit hook clears outputs but doesn't touch this field, so it accumulated noise in the diff every time the notebook was re-executed.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(notebook): restore em-dashes from ``\u2014`` escapes to UTF-8

  The previous metadata-strip pass round-tripped the notebook through ``json.dump(..., indent=1)`` which defaults ``ensure_ascii=True`` and escaped all em-dashes (and any other non-ASCII chars) across the whole file — pure encoding churn.
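The per-entity anchor can likewise be reproduced with a small sketch. The helper name and the concrete inputs below (shared x grid [0, 1, 2], per-entity slopes and y0) are assumptions chosen to land on the stated anchor values, not the actual test fixtures:

```python
def resolve_per_entity(slopes_by_entity, x_points, y0=0.0):
    """Resolve a dict of per-entity slope lists against one shared x grid.

    y0 may be a scalar (broadcast to every entity) or a dict keyed like
    slopes_by_entity.
    """
    out = {}
    for entity, slopes in slopes_by_entity.items():
        start = float(y0[entity] if isinstance(y0, dict) else y0)
        ys = [start]
        for slope, x_lo, x_hi in zip(slopes, x_points, x_points[1:]):
            ys.append(ys[-1] + slope * (x_hi - x_lo))
        out[entity] = ys
    return out
```

With the assumed inputs, `resolve_per_entity({"a": [10, 20], "b": [40, 60]}, [0, 1, 2], y0={"a": 0, "b": 10})` gives `{"a": [0.0, 10.0, 30.0], "b": [10.0, 50.0, 110.0]}`, matching gen=a → [0, 10, 30] and gen=b → [10, 50, 110].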
  Surgical fix: byte-level replace ``\u2014`` → ``—`` rather than another JSON round-trip, so nothing else changes. Future re-encodes should use ``ensure_ascii=False``.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(notebook): restore unrelated unicode chars and Python version metadata

  Two more accidental edits from the json round-trip caught by reviewing the master diff:

  * ``≤`` and ``≥`` in section 4 (existing master content) had been escaped to ``\u2264`` / ``\u2265``. Restored to UTF-8.
  * Notebook ``language_info.version`` metadata had drifted from ``"3.13.2"`` (master) to ``"3.11.11"`` (whatever kernel I happened to run). Reverted.

  Net: the notebook diff vs master is now 63 insertions / 0 deletions — only the four new section-8 cells, no incidental churn.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* review fixes: emit Slopes warning, bound seq repr, harden dispatch test

  Addresses review of #673:

  * **Slopes now actually emits the EvolvingAPIWarning** it advertises in its docstring. The warning fires from ``__post_init__`` so the standalone ``Slopes(...).to_breakpoints(...)`` migration path doesn't silently bypass the evolving-API signal that the previous ``breakpoints(slopes=...)`` form indirectly inherited. ``_EvolvingApiKey`` extended to include ``"Slopes"``; per-key dedup keeps construction cheap on repeated use.
  * **``_summarise_breakslike`` truncates long sequences** instead of dumping them verbatim. Sequences over 8 entries render as ``[0, 1, 2, ..., 48, 49] (50 items)`` — the previous "small size" comment promised this without enforcing it.
  * **``test_two_non_slopes_picks_first_x_grid``** previously asserted only that the formulation was registered. Now uses distinguishable x grids (10× scale difference), pins the model onto piece 1, and verifies ``z == 10`` (the value implied by the *first* tuple's grid) rather than ``z == 100`` (the second tuple's).
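The truncation rule described above can be sketched as follows (hypothetical `summarise_values` stand-in for the private `_summarise_breakslike`; only the list/ndarray cases are shown, and the head-3/tail-2 split is inferred from the quoted example output):

```python
import numpy as np

def summarise_values(v, max_items=8):
    """Render a slopes-like value compactly for a repr."""
    arr = np.asarray(v)
    if arr.ndim >= 2:
        # Multi-dim arrays get a one-line shape summary instead of a dump.
        return f"<ndarray shape={arr.shape}>"
    items = arr.tolist()  # normalises numpy scalars to Python scalars
    if not isinstance(items, list):  # 0-D guard
        items = [items]
    if len(items) <= max_items:
        return repr(items)
    # Head + tail truncation with an item count, e.g.
    # [0, 1, 2, ..., 48, 49] (50 items)
    head = ", ".join(repr(i) for i in items[:3])
    tail = ", ".join(repr(i) for i in items[-2:])
    return f"[{head}, ..., {tail}] ({len(items)} items)"
```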
  * **New ``test_multiple_slopes_share_x_grid``** covers the ``(non-Slopes, Slopes, Slopes)`` shape — both Slopes resolve against the same borrowed grid. Reviewer-flagged coverage gap.
  * **New ``test_slopes_construction_warns_and_dedups``** in ``TestEvolvingAPIWarning`` pins the new warning behaviour.
  * **New ``test_repr_truncates_long_sequences``** in ``TestSlopesValueType`` pins the truncation.
  * Hoisted ``set(slopes_idx)`` out of the ``non_slopes_idx`` comprehension in the dispatch (cosmetic; N is small).
  * Added a module-level ``TOL = 1e-6`` constant in ``test_piecewise_constraints.py`` matching the convention in ``test_piecewise_feasibility.py``; the new dispatch test uses it.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(piecewise): three robustness issues in Slopes

  1. **Stacklevel was off by one** for warnings emitted from ``Slopes.__post_init__``. The dataclass-generated ``__init__`` adds an extra frame (helper → ``_warn_evolving_api`` → ``__post_init__`` → synthetic ``__init__`` → user code), so ``stacklevel=3`` landed inside the synthetic init instead of the user's call site. Made ``_warn_evolving_api`` accept ``stacklevel`` as a parameter (default 3, matching the function-call entry points) and pass ``stacklevel=4`` from ``Slopes``.
  2. **Equality crashed with array values.** Frozen dataclasses default to elementwise ``__eq__``, so ``Slopes(np.array([1, 2])) == Slopes(np.array([1, 2]))`` raised ``ValueError: truth value of an array with more than one element is ambiguous``. Added ``eq=False`` to opt out and fall back to identity equality. ``Slopes`` is now safely usable as a set member or dict key.
  3. **Numpy scalar repr noise.** ``_summarise_breakslike`` previously called ``list(v)`` which preserved numpy scalar types; their reprs differ from Python scalars (and across numpy versions).
     Switched to ``np.asarray(v).tolist()`` which normalises numpy types to Python types up front, so ``Slopes(np.array([1, 2, 3], dtype=np.int64), y0=0)`` renders as ``Slopes([1, 2, 3], y0=0)`` uniformly. Added a 0-D guard for the edge case.

  Each fix is pinned by a new test in ``TestSlopesValueType`` (``test_repr_normalises_numpy_scalars``, ``test_equality_with_array_values_does_not_raise``) and ``TestEvolvingAPIWarning`` (``test_slopes_warning_stacklevel_points_to_user_call``).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(piecewise): value-equality on Slopes via type-dispatched __eq__

  Earlier ``eq=False`` (identity equality) was a footgun for tests: ``assert pwf_spec == expected_slopes`` would silently return ``False`` even when the two specs described the same curve. Replace with a custom ``__eq__`` that compares each field by value:

  * ``align`` / ``dim`` — plain ``==``.
  * ``y0`` / ``values`` — dispatched on type via ``_values_equal``:
    - ``ndarray`` → ``np.array_equal(equal_nan=True)``
    - ``DataFrame`` / ``Series`` → ``.equals(...)``
    - ``DataArray`` → ``.equals(...)``
    - ``dict`` → recurse on matching keys
    - scalar ``float`` → NaN-safe ``==`` (treats nan==nan as ``True`` to match the array path's ``equal_nan=True``)
    - everything else → strict ``type(a) is type(b)`` then ``==``.

  ``__hash__`` set to ``None`` (unhashable) since ``values`` may be a mutable container.

  Documented edges:

  * List vs ndarray of the same numeric content compare unequal — strict type matching, same as Python's general ``[1,2] != np.array([1,2])`` behaviour.

  Tests: parametrised ``TestSlopesValueType.test_equality`` covers nine shapes (lists, ndarrays, dicts, NaN scalars, NaN in arrays, mismatched y0, mismatched values, mismatched types, dict inner-value mismatch). Plus ``test_eq_against_non_slopes_returns_notimplemented`` for the non-Slopes branch and ``test_unhashable`` pinning the hash opt-out.
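The array-equality crash described above is easy to reproduce with a generic frozen dataclass; the `Spec` class below is a stand-in for illustration, not linopy's `Slopes`:

```python
from dataclasses import dataclass

import numpy as np

@dataclass(frozen=True)
class Spec:
    values: np.ndarray

a = Spec(np.array([1, 2]))
b = Spec(np.array([1, 2]))

# The generated __eq__ compares field tuples with ==; ndarray.__eq__ is
# elementwise and returns an array, and the tuple comparison then needs
# its truthiness, which is ambiguous for a multi-element array.
try:
    a == b
    crashed = False
except ValueError:
    crashed = True
assert crashed
```

Opting out via `@dataclass(frozen=True, eq=False)` falls back to identity-based `__eq__`/`__hash__`, which is what this commit chose before the later value-equality rework.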
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(piecewise): summarise multi-dim ndarray Slopes values by shape

  Previously a multi-dim ndarray fell through to the seq path, ``np.asarray(v).tolist()`` returned nested lists, and the repr dumped them in full. Even a moderate ``np.zeros((5, 20))`` produced a 2-line wall of ``0.0`` entries; an earlier ``np.zeros((20, 5, 30))`` case would have been worse.

  Treat 2-D+ ndarrays the same way ``DataArray`` / ``DataFrame`` / ``Series`` are treated: a one-line shape summary (``<ndarray shape=(20, 5, 30)>``). 1-D ndarrays still render inline with the existing head + tail truncation, so user-facing slope specifications stay readable. The ``np.asarray(v)`` call is hoisted so we don't double-normalise on the 1-D path.

  New parametrised case ``multi_dim_ndarray`` in ``TestSlopesValueType.test_repr_summarises_bulky_values`` pins the new behaviour.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(piecewise): broaden Slopes equality, trim release-notes entry

  Equality (``Slopes.__eq__`` via ``_values_equal``) was strict-type to a fault. Four edge cases produced surprising ``False`` results despite the operands describing the same curve:

  1. ``Slopes(y0=0) != Slopes(y0=0.0)`` — ``int`` and ``float`` are semantically the same y-coordinate (``_breakpoints_from_slopes`` calls ``float(y0)`` downstream), but the strict ``type(a) is type(b)`` gate rejected them.
  2. ``Slopes(y0=np.float64(0)) != Slopes(y0=0.0)`` — same root cause for numpy scalars.
  3. ``Slopes([float('nan'), 1.0], align='leading')`` was unequal to itself — Python's list equality uses ``is`` before ``==`` per element, so it only worked accidentally when the user happened to write ``np.nan`` (a CPython singleton) instead of ``float('nan')``.
  4. ``np.array_equal(..., equal_nan=True)`` raises ``TypeError`` on object/string ndarrays.
  Rewrite ``_values_equal`` to:

  * Treat any two ``numbers.Real`` (excluding ``bool``) as numerically comparable with a NaN-safe float fallback.
  * Promote ``list`` / ``tuple`` to ndarray before the array branch so in-place ``float('nan')`` content compares element-wise NaN-safe.
  * Fall back to ``np.array_equal`` without ``equal_nan`` when the array has a non-numeric dtype.

  Document the new semantics on ``__eq__`` and explicitly note that ``.equals`` for pandas / xarray containers is order-sensitive.

  Tests:

  * Flip ``different_value_types`` (now ``list_and_ndarray_same_content``) to expect ``True``.
  * Rename ``nan_in_list_via_array_path`` → ``np_nan_in_list``; add parallel ``float_nan_in_list`` case.
  * Add ``int_and_float_y0`` and ``numpy_scalar_and_float_y0`` cases.
  * Add ``test_eq_dataframe_is_order_sensitive`` pinning the documented ``.equals`` caveat.
  * Add ``test_eq_object_dtype_ndarray_does_not_raise`` covering the non-numeric ndarray fallback path.

  Release notes: trim the ``Slopes`` entry to the user-facing purpose (specify a curve by marginal costs / per-piece slopes) and the canonical call form. Drop the dev-cycle "**replaces** the slopes mode of ``breakpoints()``..." sentence — those API surfaces never shipped, so v0.7.0 readers have no context for the removal note.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(piecewise): trim notebook section 8 to match the surrounding shape

  Section 8 was 6 cells where 2 do the same job — the surrounding sections (1, 7) all use the 1-markdown-intro + 1-code-cell pattern. Drops:

  * The repr-explanation markdown + a standalone ``Slopes(...)`` cell showing the repr. The repr is incidental; users will see it whenever they instantiate a ``Slopes``.
  * The ``to_breakpoints`` intro markdown and demo cell. Standalone resolution is documented in the ``.rst`` page; the notebook should show the canonical ``add_piecewise_formulation`` use only.
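Putting the broadened rules together, the comparison semantics might be sketched like this (hypothetical `values_equal`, approximating the described behaviour of the private `_values_equal`; the xarray `DataArray` branch is omitted for brevity):

```python
import numbers

import numpy as np
import pandas as pd

def _is_real(x):
    # Real scalars (int / float / np.float64), excluding bool.
    return isinstance(x, numbers.Real) and not isinstance(x, bool)

def values_equal(a, b):
    """NaN-safe, type-tolerant value comparison."""
    if _is_real(a) and _is_real(b):
        fa, fb = float(a), float(b)
        return fa == fb or (fa != fa and fb != fb)  # nan == nan -> True
    if isinstance(a, (list, tuple, np.ndarray)) and isinstance(b, (list, tuple, np.ndarray)):
        # Promote lists/tuples to ndarray so float('nan') content compares
        # element-wise NaN-safe.
        a_arr, b_arr = np.asarray(a), np.asarray(b)
        try:
            return bool(np.array_equal(a_arr, b_arr, equal_nan=True))
        except TypeError:  # non-numeric dtype rejects equal_nan
            return bool(np.array_equal(a_arr, b_arr))
    if isinstance(a, dict) and isinstance(b, dict):
        return a.keys() == b.keys() and all(values_equal(a[k], b[k]) for k in a)
    if type(a) is type(b) and isinstance(a, (pd.Series, pd.DataFrame)):
        return a.equals(b)  # order-sensitive, per the documented caveat
    return type(a) is type(b) and bool(a == b)
```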
  * The ``# Same curve as section 1 — slopes 1.2, 1.6, 2.15 …`` inline comment, now that the markdown intro says the same thing.

  Also tighten the markdown intro: drop the bold emphasis on "borrowed from the sibling tuple" and the trailing transition sentence.

  Net result: section-8 diff vs master drops from 63 lines to 30 (roughly halved), and the section now mirrors the visual rhythm of the rest of the tutorial.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(piecewise): require exactly one non-Slopes tuple in add_piecewise_formulation

  The previous "borrow x grid from the first non-Slopes tuple" rule was silently order-dependent when more than one non-Slopes tuple was present. Each non-Slopes tuple is a y-vector for its own variable, so there is no canonical x axis — picking the *first* meant tuple order changed the resolved breakpoints, and therefore the optimisation problem itself.

  Reject the ambiguous case at the dispatch boundary instead. The new ValueError points users at ``Slopes(...).to_breakpoints(x_pts)`` so they can opt into a specific x grid explicitly when their setup has multiple breakpoint vectors in play.

  * ``Slopes`` docstring updated: states the "exactly one non-Slopes" rule and the ``to_breakpoints`` escape hatch up front.
  * ``test_three_tuple_deferred`` removed — its (power, fuel, Slopes) shape is now invalid and the equivalent (power, Slopes, Slopes) is already covered by ``test_multiple_slopes_share_x_grid``.
  * ``test_two_non_slopes_picks_first_x_grid`` → ``test_multiple_non_slopes_with_slopes_raises``: the test that previously pinned the order-dependent behaviour now pins the ValueError.
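The dispatch rule can be sketched as a small validation step (hypothetical names throughout; the real dispatch also resolves each `Slopes` against the chosen grid):

```python
class Slopes:
    """Minimal stand-in for the real class, for illustration only."""

    def __init__(self, values, y0=0.0):
        self.values, self.y0 = values, y0

def pick_x_grid(specs):
    # Exactly one (variable, breaks) tuple may carry a plain breakpoint
    # vector; every Slopes spec borrows its x grid from it. Zero plain
    # tuples leave the grid undefined; several make the borrow ambiguous.
    non_slopes = [spec for spec in specs if not isinstance(spec[1], Slopes)]
    if len(non_slopes) != 1:
        raise ValueError(
            "expected exactly one non-Slopes tuple to provide the x grid; "
            "use Slopes(...).to_breakpoints(x_pts) to pick a grid explicitly"
        )
    return non_slopes[0][1]
```

With this rule, `pick_x_grid([("power", [0, 30, 60, 100]), ("fuel", Slopes([1.2, 1.4, 1.7]))])` returns `[0, 30, 60, 100]`, while two plain tuples plus a `Slopes` (or all-`Slopes`) raise.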
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(piecewise): pin Slopes dispatch via assert_model_equal; widen ndarray/Real annotations

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

* refactor(piecewise): trim _values_equal and _summarise_breakslike

* fix(piecewise): TypeGuard on _is_numeric_scalar for mypy

* fix(piecewise): revert _values_equal equals-loop to explicit branches for mypy

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Fabian <fab.hof@gmx.de>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* docs: restructure upcoming release notes and fold in missing PRs

  Group the upcoming version block into Features / Performance / Bug Fixes / Breaking Changes / Documentation sections so the headline (piecewise) leads, and add the entries for #589, #595, #601, #614, #619, #635, #656, #671, #672, #674. Tighten the piecewise block to its final state.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: tighten upcoming changelog and drop internal-only entries

  Trim verbose phrasing in the piecewise / variables / model / solvers sections, fold subset-superset sub-bullets into one paragraph, and drop two entries that aren't user-facing for a release notes audience: sphinx-copybutton (doc tooling) and Model.__weakref__ (only relevant to extension authors).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: move align convention from breakpoints() to Slopes in changelog

  #673 removed the slopes-mode (and slopes_align kwarg) from breakpoints(); the align kwarg now lives on the Slopes class.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: move SOS reformulation bullet from Variables to Model

  SOS reformulation is a model-rewrite/solve-pipeline concern, not a variable attribute.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: split coord alignment into Expressions, move CPLEX to Bug Fixes

  - New *Expressions* subsection holds the subset/superset coord harmonization, which was misfiled under *Model*.
  - CPLEX quality-attribute handling is a fix for crashes on missing attributes, not a new feature — moved to **Bug Fixes**.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: fold as_dataarray MultiIndex fix into add_variables bullet

  #659 fixes a regression introduced by #614 in the same release cycle — no end user ever saw the broken state, so a standalone bullet overstates the change.
  Net behavior is captured by extending the add_variables bullet to mention MultiIndex coords.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: tighter pass on upcoming changelog

  Drop implementation details that belong in API docs (numpy-vs-pandas note, JSON encoding for netCDF, "with no auxiliary variables" piecewise detail), merge the two OETC bullets, and trim "Add X. Supports Y." wrappers across most lines.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: rephrase active gating bullet to avoid output-zeroing implication

  Previous wording ("zeros all auxiliaries when off") was true at the auxiliary level but glossed over the bounded-tuple case where the output is not automatically pinned to 0. Drop the implication and defer the detail to the docstring.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: drop option-name detail from upcoming changelog

  Trim references to specific kwargs/attributes the reader doesn't need in the high-level summary: method="auto" parens, align="pieces|leading", deep / include_solution, reformulate_sos="auto", solver_name / **solver_options, max_dual_infeasibility example, and the operator-by-operator coord-alignment breakdown.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Changes proposed in this Pull Request
Adds a new `slopes_align` keyword to `linopy.breakpoints()` to support an alternative slope-to-breakpoint alignment convention.

* `slopes_align="pieces"` (default, unchanged): `len(slopes) == len(x_points) - 1`; `slopes[i]` is the slope between `x[i]` and `x[i+1]`.
* `slopes_align="leading"`: `len(slopes) == len(x_points)`; `slopes[0]` must be NaN and is dropped, `slopes[i]` for `i >= 1` is the slope between `x[i-1]` and `x[i]`.

The "leading" mode matches the common convention where a marginal value is tabulated alongside each breakpoint with the first row's marginal undefined. It removes the manual `shift(-1)` callers (e.g. PyPSA) currently apply before passing slopes to `breakpoints`.

Validation:

* `slopes_align` is rejected outside slopes mode.
* With `slopes_align="leading"`, `slopes[0]` must be NaN; otherwise a `ValueError` is raised.

Per-entity NaN-stripping logic and ragged-input handling are unchanged — the alignment is normalized up front by slicing position 0 along the breakpoint dim.
Checklist
… doc. A note in `doc/release_notes.rst` of the upcoming release is included.