performLayout method

@override
void performLayout()

Do the work of computing the layout for this render object.

Do not call this function directly: call layout instead. This function is called by layout when there is actually work to be done by this render object during layout. The layout constraints provided by your parent are available via the constraints getter.

If sizedByParent is true, then this function should not actually change the dimensions of this render object. Instead, that work should be done by performResize. If sizedByParent is false, then this function should both change the dimensions of this render object and instruct its children to layout.
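For contrast, here is a minimal sketch of the sizedByParent contract (the class name RenderFillBox is hypothetical): the size comes from the constraints alone via computeDryLayout (which the default performResize uses), and performLayout only lays out the child.

```dart
import 'package:flutter/rendering.dart';

/// Hypothetical example: a box that always fills its constraints.
class RenderFillBox extends RenderProxyBox {
  @override
  bool get sizedByParent => true;

  // The default performResize sizes the box from this, using only the
  // incoming constraints; no child information is involved.
  @override
  Size computeDryLayout(BoxConstraints constraints) => constraints.biggest;

  @override
  void performLayout() {
    // Must NOT assign `size` here: sizedByParent render objects keep the
    // dimensions performResize chose. Only lay out the child.
    child?.layout(BoxConstraints.tight(size));
  }
}
```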

In implementing this function, you must call layout on each of your children, passing true for parentUsesSize if your layout information is dependent on your child's layout information. Passing true for parentUsesSize ensures that this render object will undergo layout if the child undergoes layout. Otherwise, the child can change its layout information without informing this render object.
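A minimal sketch of that contract, assuming a padding-like box whose own size depends on its child's size (the class name RenderSimplePadding is hypothetical):

```dart
import 'package:flutter/rendering.dart';

/// Hypothetical example: pads its child and sizes itself around it.
class RenderSimplePadding extends RenderShiftedBox {
  RenderSimplePadding({required this.padding, RenderBox? child})
      : super(child);
  final double padding;

  @override
  void performLayout() {
    final BoxConstraints constraints = this.constraints;
    if (child == null) {
      size = constraints.constrain(Size(padding * 2, padding * 2));
      return;
    }
    // parentUsesSize: true, because the sizing below reads child.size.
    // This registers this render object for relayout whenever the
    // child's layout changes.
    child!.layout(
      constraints.deflate(EdgeInsets.all(padding)),
      parentUsesSize: true,
    );
    (child!.parentData! as BoxParentData).offset = Offset(padding, padding);
    size = constraints.constrain(Size(
      child!.size.width + padding * 2,
      child!.size.height + padding * 2,
    ));
  }
}
```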

Some special RenderObject subclasses (such as the one used by OverlayPortal.overlayChildLayoutBuilder) call applyPaintTransform in their performLayout implementation. To ensure such RenderObjects get the up-to-date paint transform, RenderObject subclasses should typically update the paint transform (as reported by applyPaintTransform) in this method instead of paint.
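One way to follow that guidance, sketched with a hypothetical render box that shifts its child: the offset is computed and stored during performLayout, and both paint and applyPaintTransform merely read it.

```dart
import 'package:flutter/rendering.dart';

/// Hypothetical example: stores the child's paint offset at layout time
/// so applyPaintTransform is already up to date before the first paint.
class RenderNudgeChild extends RenderProxyBox {
  Offset _childOffset = Offset.zero;

  @override
  void performLayout() {
    if (child != null) {
      child!.layout(constraints, parentUsesSize: true);
      size = constraints.constrain(child!.size);
    } else {
      size = constraints.smallest;
    }
    // Update the transform's source of truth here, NOT in paint, so
    // callers of applyPaintTransform see fresh values right after layout.
    _childOffset = const Offset(4.0, 4.0);
  }

  @override
  void paint(PaintingContext context, Offset offset) {
    if (child != null) {
      context.paintChild(child!, offset + _childOffset);
    }
  }

  @override
  void applyPaintTransform(RenderObject child, Matrix4 transform) {
    transform.translate(_childOffset.dx, _childOffset.dy);
  }
}
```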

Implementation

@override
void performLayout() {
  debugLastParentDataRefreshIterationCount = 0;
  final constraints = this.constraints;
  // This sliver's layout and paint code assume a vertical-forward axis.
  // Child constraints, offset math, sticky pinning and hit-testing all use
  // plain (x = indent, y = layoutOffset) coordinates with no axis mapping.
  // Running in any other axis/growth/reverse configuration silently renders
  // incorrectly, so fail loudly in debug builds.
  assert(
    constraints.axis == Axis.vertical &&
        constraints.axisDirection == AxisDirection.down &&
        constraints.growthDirection == GrowthDirection.forward,
    "SliverTree currently supports only vertical, forward-growing axes "
    "(Axis.vertical, AxisDirection.down, GrowthDirection.forward). Got "
    "axis=${constraints.axis}, axisDirection=${constraints.axisDirection}, "
    "growthDirection=${constraints.growthDirection}.",
  );
  childManager?.didStartLayout();

  final visibleNodes = controller.visibleNodes;
  if (visibleNodes.isEmpty) {
    _structureChanged = true;
    _lastVisibleNodeCount = 0;
    geometry = SliverGeometry.zero;
    childManager?.didFinishLayout();
    return;
  }

  _ensureLayoutCapacity();

  // Detect structure changes
  if (controller.structureGeneration != _lastStructureGeneration) {
    _structureChanged = true;
    _sticky.dirty = true;
    _lastStructureGeneration = controller.structureGeneration;
  }
  if (visibleNodes.length != _lastVisibleNodeCount) {
    _structureChanged = true;
    _sticky.dirty = true;
  }

  final scrollOffset = constraints.scrollOffset;
  final remainingPaintExtent = constraints.remainingPaintExtent;
  final remainingCacheExtent = constraints.remainingCacheExtent;
  final crossAxisExtent = constraints.crossAxisExtent;

  // Cache region bounds — per sliver protocol, remainingCacheExtent starts
  // at scrollOffset + cacheOrigin (cacheOrigin is typically ≤ 0).
  final cacheOrigin = constraints.cacheOrigin;
  final cacheStart = scrollOffset + cacheOrigin;
  final cacheEnd = cacheStart + remainingCacheExtent;

  // Slide-pipeline ordering (per plan §5):
  //
  //   1. Build the current viewport snapshot.
  //   2. If scroll changed since last layout and edge ghosts exist,
  //      normalize the ghost map for the current viewport — handle
  //      re-promotions, direction flips, and stays-same. When a
  //      pending mutation baseline exists, normalization runs WITHOUT
  //      installing standalone slides so the upcoming consume owns
  //      the single animation batch for this layout (avoids two
  //      independent `animateSlideFromOffsets` calls in the same
  //      frame). When no pending baseline exists, normalization
  //      installs slides directly.
  //   3. Consume the pending mutation baseline. After step 2 the
  //      ghost map already reflects the current viewport, so consume
  //      composes against fresh state instead of the stale frozen
  //      `edgeY` the previous implementation carried.
  //   4. Update `_lastObservedScrollOffset` to the new scroll.
  //
  // `snapshotVisibleOffsets()` walks `visibleNodes` with
  // `getCurrentExtent`, which is independent of Pass 1's per-nid
  // offset array, so all three steps are safe before Pass 1.
  final currentViewport = _currentViewportSnapshot();
  final currentScroll = currentViewport.scrollOffset;
  final scrollChanged = !_lastObservedScrollOffset.isNaN
      && currentScroll != _lastObservedScrollOffset;
  final hasEdgeGhosts =
      _phantomEdgeExits != null && _phantomEdgeExits!.isNotEmpty;
  if (scrollChanged && hasEdgeGhosts) {
    _normalizeEdgeGhostsForViewport(
      viewport: currentViewport,
      installStandaloneSlides: _pendingSlideBaseline == null,
    );
  }
  _consumeSlideBaselineIfAny(currentViewport: currentViewport);
  _lastObservedScrollOffset = currentScroll;

  // FLIP-slide overreach (Option A): during a slide, a row's painted y
  // can differ from its structural y by up to `slideOverreach` px in
  // either direction. Widen the effective cache region by that amount
  // so rows whose painted y lies in the viewport — but whose structural
  // y is outside the normal cache region — still get built. Without
  // this, a swap of two large subtrees leaves a visible gap at the slot
  // where a sliding row should appear (no child created for it), and
  // the gap does NOT resolve on scroll because the build decision still
  // only considers structural offsets. Overreach shrinks to 0 as the
  // slide progresses (see [TreeController.maxActiveSlideAbsDelta]), so
  // the transient overbuild contracts with the animation.
  //
  // Future optimization (Option B): replace this blanket clamp with a
  // per-entry precise union. For each active slide, compute the
  // structural index range whose painted y (structural + currentDelta)
  // intersects the cache region, then union those ranges with the
  // normal cache-region index range. This eliminates the transient
  // overbuild for the common case of a few small slides, at the cost
  // of a per-entry scan every frame. Worth doing only when the
  // overbuild measurably hurts — large-subtree swaps are rare and
  // short-lived, so the blanket clamp is usually fine.
  final slideOverreach = controller.maxActiveSlideAbsDelta;
  final effectiveCacheStart = cacheStart - slideOverreach;
  final effectiveCacheEnd = cacheEnd + slideOverreach;

  // ────────────────────────────────────────────────────────────────────────
  // PASS 1: Calculate offsets and extents
  // ────────────────────────────────────────────────────────────────────────
  double totalScrollExtent;
  final bool hasAnimations = controller.hasActiveAnimations;
  // Fetch the bulk-animation snapshot once so downstream branches share a
  // single read of value/generation/membership. Per-key membership
  // queries inside _rebuildBulkCumulatives go through this snapshot too.
  final BulkAnimationData<TKey> bulkData = controller.bulkAnimationData();
  final bool bulkOnly = bulkData.isValid && !controller.hasOpGroupAnimations;

  if (bulkOnly) {
    // Fast path: bulk animation only. Every node's offset is a scalar
    // function of position via the precomputed cumulatives. Avoid touching
    // _nodeOffsetsByNid for nodes outside the cache region — that write
    // is what the per-frame O(N) cost was buying.
    final bulkGen = bulkData.generation;
    final n = visibleNodes.length;
    if (!_bulkCumulativesValid ||
        _bulkCumulativesCount != n ||
        bulkGen != _lastBulkAnimationGeneration ||
        _structureChanged) {
      _rebuildBulkCumulatives(visibleNodes, bulkData);
      _lastBulkAnimationGeneration = bulkGen;
      _structureChanged = false;
    }
    _bulkValueCached = bulkData.value;
    totalScrollExtent = _offsetAtVisibleIndex(n);
    _lastFrameUsedBulkCumulatives = true;
  } else if (_structureChanged || _lastFrameUsedBulkCumulatives) {
    // Either the visible order changed OR we just exited the bulk-only
    // fast path — in both cases the per-nid offset/extent arrays are
    // not guaranteed fresh for every visible node, so do a full walk.
    _bulkCumulativesValid = false;
    _lastFrameUsedBulkCumulatives = false;
    totalScrollExtent = 0.0;

    final orderNids = controller.orderNidsView;
    final n = visibleNodes.length;
    for (int i = 0; i < n; i++) {
      final nid = orderNids[i];
      _nodeOffsetsByNid[nid] = totalScrollExtent;
      final extent = controller.getCurrentExtentNid(nid);
      _nodeExtentsByNid[nid] = extent;
      totalScrollExtent += extent;
    }

    _structureChanged = false;
  } else if (!hasAnimations && !_animationsWereActive) {
    // Pure scrolling: no animations active now or last frame.
    // Offsets and extents are unchanged — reuse cached total.
    totalScrollExtent = _lastTotalScrollExtent;
  } else if (hasAnimations) {
    // Active animation frame: only indices at or beyond the first
    // animating node can have changed offsets/extents. Everything
    // before them has stable cached values from the prior frame.
    final firstAnimIdx = controller.computeFirstAnimatingVisibleIndex();
    if (firstAnimIdx >= visibleNodes.length) {
      // Animating nodes exist but none are in the visible order
      // (e.g. an animation on a subtree that was moved out of view).
      // Nothing to recompute here.
      totalScrollExtent = _lastTotalScrollExtent;
    } else {
      final orderNids = controller.orderNidsView;
      if (firstAnimIdx == 0) {
        totalScrollExtent = 0.0;
      } else {
        final prevNid = orderNids[firstAnimIdx - 1];
        totalScrollExtent =
            _nodeOffsetsByNid[prevNid] + _nodeExtentsByNid[prevNid];
      }
      for (int i = firstAnimIdx; i < visibleNodes.length; i++) {
        final nid = orderNids[i];
        final newExtent = controller.getCurrentExtentNid(nid);
        _nodeOffsetsByNid[nid] = totalScrollExtent;
        _nodeExtentsByNid[nid] = newExtent;
        totalScrollExtent += newExtent;
      }
    }
  } else {
    // Transitional frame: no active animations this frame, but there
    // were last frame. Some just-settled nodes may have their cached
    // extent stuck at an intermediate interpolated value if the
    // settling frame fired before our last layout. Walk the list with
    // the extent-equality short-circuit so stable-prefix nodes stay
    // cheap and only changed nodes get rewritten.
    totalScrollExtent = 0.0;
    bool foundAnimating = false;

    final orderNids = controller.orderNidsView;
    for (int i = 0; i < visibleNodes.length; i++) {
      final nid = orderNids[i];
      final newExtent = controller.getCurrentExtentNid(nid);
      final oldExtent = _nodeExtentsByNid[nid];

      if (!foundAnimating && oldExtent == newExtent) {
        // Structure is stable in this branch, so the prior-layout slot
        // value is valid; no null-vs-zero ambiguity to guard against.
        totalScrollExtent = _nodeOffsetsByNid[nid] + newExtent;
      } else {
        foundAnimating = true;
        _nodeOffsetsByNid[nid] = totalScrollExtent;
        _nodeExtentsByNid[nid] = newExtent;
        totalScrollExtent += newExtent;
      }
    }
  }

  // ────────────────────────────────────────────────────────────────────────
  // PASS 2: Create children for nodes in cache region
  // ────────────────────────────────────────────────────────────────────────

  // Clear last frame's cache-region flags with a sparse pass: iterate the
  // nids we wrote last frame instead of memset'ing the whole nid-indexed
  // array. The array's length tracks nidCapacity, which grows monotonically
  // and dwarfs the actual cache-region size on a long-lived tree. Admission
  // below re-marks this frame's members.
  for (int i = 0; i < _writtenCacheRegionNidsLen; i++) {
    final nid = _writtenCacheRegionNids[i];
    if (nid < _inCacheRegionByNid.length) {
      _inCacheRegionByNid[nid] = 0;
    }
  }
  _writtenCacheRegionNidsLen = 0;
  final cacheStartIndex = _findFirstVisibleIndex(effectiveCacheStart);

  // In bulk-only mode, break on the row's *steady-state* (full-space)
  // position rather than its animated position. At low bulkValue, animated
  // rows have sub-pixel extents — using animated offsets would admit
  // thousands of invisible rows into the cache region on frame 1 of
  // expandAll, causing a mass-mount hitch. Anchoring the band to full-space
  // caps admission at the count we'd mount at bulkValue=1.
  final double fullCacheEnd;
  if (_bulkCumulativesValid && cacheStartIndex < visibleNodes.length) {
    final fullStart =
        _stableCumulative[cacheStartIndex] +
        _bulkFullCumulative[cacheStartIndex];
    fullCacheEnd =
        fullStart + remainingCacheExtent + slideOverreach * 2.0;
  } else {
    fullCacheEnd = 0.0;
  }

  // Dispatch the per-iteration `if (_bulkCumulativesValid)` branch out of
  // the loop body — it's invariant across one loop run, so a single
  // up-front decision replaces N per-iteration branches.
  //
  // Bulk fast path: scalar offset = _stableCumulative[i] + value *
  // _bulkFullCumulative[i]. Inline because the loop writes per-nid
  // arrays the render object owns and reads cumulative arrays the
  // render object owns.
  //
  // Op-group path: dual-view (live/post) admission cap that pre-mounts
  // post-animation visible rows during a collapse and caps mass-mounting
  // during an expand. Lives in [LayoutAdmissionPolicy.admit].
  final int cacheEndIndex;
  if (_bulkCumulativesValid) {
    cacheEndIndex = _admitBulkFastPath(
      cacheStartIndex: cacheStartIndex,
      visibleNodes: visibleNodes,
      fullCacheEnd: fullCacheEnd,
    );
  } else {
    cacheEndIndex = _admission.admit(
      cacheStartIndex: cacheStartIndex,
      visibleNodes: visibleNodes,
      nodeOffsetsByNid: _nodeOffsetsByNid,
      nodeExtentsByNid: _nodeExtentsByNid,
      inCacheRegionByNid: _inCacheRegionByNid,
      onCacheRegionAdmit: _writeCacheRegionNid,
      effectiveCacheEnd: effectiveCacheEnd,
      slideOverreach: slideOverreach,
      remainingCacheExtent: remainingCacheExtent,
    );
  }

  // Create children for nodes in the cache region.
  //
  // The range `[cacheStartIndex, cacheEndIndex)` may contain rows that
  // were iterated but not admitted (e.g. off-screen exits during a
  // collapse — iterated past to reach the post-animation-visible
  // following rows, but not admitted themselves). Gate on
  // `_inCacheRegionByNid[nid]` so skipped rows do not trigger a build.
  if (cacheEndIndex > cacheStartIndex) {
    invokeLayoutCallback<SliverConstraints>((SliverConstraints constraints) {
      for (int i = cacheStartIndex; i < cacheEndIndex; i++) {
        final nodeId = visibleNodes[i];
        final nid = _controller.nidOf(nodeId);
        if (_inCacheRegionByNid[nid] == 0) continue;
        childManager?.createChild(nodeId);
      }
    });
  }

  // Layout the children — track whether any extent changed to skip
  // the O(N) _recomputeOffsets when sizes are stable (cache hit path).
  // Also track the smallest index whose extent changed so we can walk
  // only from there when recomputing offsets.
  bool extentsChanged = false;
  int firstChangedIdx = visibleNodes.length;

  for (int i = cacheStartIndex; i < cacheEndIndex; i++) {
    final nodeId = visibleNodes[i];
    final actualAnimatedExtent = _layoutNodeChild(nodeId, crossAxisExtent);
    if (actualAnimatedExtent == null) continue;

    final nid = _controller.nidOf(nodeId);
    final estimatedExtent = _nodeExtentsByNid[nid];
    if (actualAnimatedExtent != estimatedExtent) {
      _nodeExtentsByNid[nid] = actualAnimatedExtent;
      totalScrollExtent += actualAnimatedExtent - estimatedExtent;
      extentsChanged = true;
      if (i < firstChangedIdx) firstChangedIdx = i;
    }

    final child = getChildForNode(nodeId)!;
    final parentData = child.parentData! as SliverTreeParentData;
    parentData.layoutOffset = _nodeOffsetsByNid[nid];
  }

  // Only recompute offsets if actual extents differed from estimates.
  // During steady-state animation (constraint cache hit → same sizes),
  // this skips the full O(N) recomputation. When extents did change,
  // only walk from the first changed index forward — offsets before
  // that point are unaffected by later-index extent changes.
  if (extentsChanged) {
    _sticky.dirty = true;

    if (_bulkCumulativesValid) {
      // A child's measured size perturbed _fullExtents mid-bulk; the
      // cumulatives are now inconsistent with truth for positions beyond
      // firstChangedIdx. Materialize per-nid extents for the affected
      // tail so _recomputeOffsetsFrom can walk it, then fall back off
      // the fast path for this frame. The next frame will rebuild cumulatives
      // fresh via _rebuildBulkCumulatives.
      final orderNids = controller.orderNidsView;
      for (int i = firstChangedIdx; i < visibleNodes.length; i++) {
        if (i >= cacheStartIndex && i < cacheEndIndex) continue;
        final nid = orderNids[i];
        _nodeExtentsByNid[nid] = controller.getCurrentExtentNid(nid);
      }
      _bulkCumulativesValid = false;
    }

    totalScrollExtent = _recomputeOffsetsFrom(firstChangedIdx);

    // Only rewrite parentData.layoutOffset for cache-region nodes at or
    // after firstChangedIdx. Earlier cache-region nodes already had the
    // correct value written in the measurement loop above.
    final updateStart = math.max(cacheStartIndex, firstChangedIdx);
    final orderNids = controller.orderNidsView;
    for (int i = updateStart; i < cacheEndIndex; i++) {
      final nodeId = visibleNodes[i];
      final child = getChildForNode(nodeId);
      if (child == null) continue;
      final parentData = child.parentData! as SliverTreeParentData;
      parentData.layoutOffset = _nodeOffsetsByNid[orderNids[i]];
    }
  }

  // Precompute subtree bottoms BEFORE sticky identification so that
  // candidate probing can use O(1) lookups instead of O(n)-per-candidate
  // subtree scans. Skip during animation: candidate probing bails on
  // animating nodes anyway, so the O(3N) precomputation is wasted. The
  // fallback per-candidate scan inside the computer is trivially cheap
  // since it also bails immediately. Also skip when nothing changed
  // since last precomputation (pure scrolling).
  if (_animationsWereActive && !hasAnimations) {
    _sticky.dirty = true; // animation just settled — one final pass
  }
  if (_maxStickyDepth > 0 && !hasAnimations && _sticky.dirty) {
    _sticky.precomputeStableSubtreeBottoms(
      visibleNodes: visibleNodes,
      nodeOffsetsByNid: _nodeOffsetsByNid,
      nodeExtentsByNid: _nodeExtentsByNid,
    );
    _sticky.dirty = false;
  } else if (hasAnimations || _maxStickyDepth == 0) {
    _sticky.invalidatePrecompute();
  }

  // Throttle sticky header recomputation during animation: only recompute
  // every 3rd frame. The candidate probe bails on animating candidates
  // anyway, so results are approximate and largely unchanged frame-to-
  // frame. Exception: scrolling since the last sticky computation forces
  // a recompute — pinnedY is relative to scrollOffset, and stale values
  // produce visible header jitter plus wrong hit-test coordinates.
  final bool skipStickyRecompute = !_sticky.shouldRecomputeThisFrame(
    hasActiveAnimations: controller.hasActiveAnimations,
    scrollOffset: scrollOffset,
  );

  if (skipStickyRecompute) {
    // Even when throttling, purge entries for nodes that just started
    // exiting so a stale pinned row doesn't keep painting / inflate
    // paintExtent for another 1–2 frames.
    _sticky.purgeExitingDuringThrottle();
  } else {
    // Identify sticky candidates now that offsets and precomputed data are ready.
    final potentialStickyNodes = _sticky.identifyPotentialStickyNodes(
      scrollOffset: scrollOffset,
      overlap: constraints.overlap,
      visibleNodes: visibleNodes,
      nodeOffsetsByNid: _nodeOffsetsByNid,
      nodeExtentsByNid: _nodeExtentsByNid,
      findFirstVisibleIndex: _findFirstVisibleIndex,
    );

    // Force-create and layout any sticky nodes not already in cache region.
    // Filter by the cache-region flag rather than allocating a diff set.
    final newStickyNodes = <TKey>{};
    for (final id in potentialStickyNodes) {
      final nid = _controller.nidOf(id);
      if (nid < 0 || _inCacheRegionByNid[nid] == 0) {
        newStickyNodes.add(id);
      }
    }
    if (newStickyNodes.isNotEmpty) {
      invokeLayoutCallback<SliverConstraints>((
        SliverConstraints constraints,
      ) {
        for (final nodeId in newStickyNodes) {
          childManager?.createChild(nodeId);
        }
      });
      // Track whether any measured sticky extent actually differs from the
      // prior stored (estimated) extent. When all match, Pass 1's offsets
      // and subtree-bottom precompute are still valid, so both the O(N)
      // offset recompute and the O(3N) subtree-bottom precompute can be
      // skipped entirely.
      bool stickyExtentsChanged = false;
      for (final nodeId in newStickyNodes) {
        final nid = _controller.nidOf(nodeId);
        final priorExtent = _nodeExtentsByNid[nid];
        final extent = _layoutNodeChild(nodeId, crossAxisExtent);
        if (extent != null) {
          _nodeExtentsByNid[nid] = extent;
          if (extent != priorExtent) stickyExtentsChanged = true;
        }
      }
      if (stickyExtentsChanged) {
        totalScrollExtent = _recomputeOffsets();
        if (_maxStickyDepth > 0 && !hasAnimations) {
          _sticky.precomputeStableSubtreeBottoms(
            visibleNodes: visibleNodes,
            nodeOffsetsByNid: _nodeOffsetsByNid,
            nodeExtentsByNid: _nodeExtentsByNid,
          );
          _sticky.dirty = false;
        }
      }
      // Always write the newly-created sticky children's layoutOffset —
      // they were outside the cache region during Pass 1 and never had it set.
      for (final nodeId in newStickyNodes) {
        final child = getChildForNode(nodeId);
        if (child == null) continue;
        final parentData = child.parentData! as SliverTreeParentData;
        parentData.layoutOffset =
            _nodeOffsetsByNid[_controller.nidOf(nodeId)];
      }
    }

    _sticky.computeStickyHeaders(
      scrollOffset: scrollOffset,
      overlap: constraints.overlap,
      visibleNodes: visibleNodes,
      nodeOffsetsByNid: _nodeOffsetsByNid,
      nodeExtentsByNid: _nodeExtentsByNid,
      findFirstVisibleIndex: _findFirstVisibleIndex,
    );
  }

  // ────────────────────────────────────────────────────────────────────────
  // Calculate paint extent
  // ────────────────────────────────────────────────────────────────────────
  double paintExtent = 0.0;

  final startIndex = _findFirstVisibleIndex(scrollOffset);
  final orderNids = controller.orderNidsView;
  for (int i = startIndex; i < visibleNodes.length; i++) {
    final nid = orderNids[i];
    final offset = _nodeOffsetsByNid[nid];
    final extent = _nodeExtentsByNid[nid];
    final endOfNode = offset + extent;

    if (offset >= scrollOffset + remainingPaintExtent) break;

    final visibleStart = math.max(offset, scrollOffset);
    final visibleEnd = math.min(
      endOfNode,
      scrollOffset + remainingPaintExtent,
    );
    // Only add positive contributions (can be negative when scrolled past content)
    if (visibleEnd > visibleStart) {
      paintExtent += visibleEnd - visibleStart;
    }
  }

  // Bug 1 fix: Ensure paintExtent covers sticky headers. Sticky headers
  // paint at pinnedY (near viewport top) but content may have scrolled far
  // enough that the natural paint extent doesn't cover them, causing clipping.
  bool stickyInflationClamped = false;
  for (final sticky in _sticky.headers) {
    final stickyBottom = sticky.pinnedY + sticky.extent;
    if (stickyBottom > remainingPaintExtent) {
      // This header would extend past our paint budget and overlap the
      // next sliver. We cannot relocate it here (pinnedY is final), but
      // flagging visual overflow ensures the viewport clips us to
      // paintExtent so it doesn't bleed through.
      stickyInflationClamped = true;
    }
    if (stickyBottom > paintExtent) paintExtent = stickyBottom;
  }

  // Ensure paintExtent is non-negative and within bounds
  paintExtent = paintExtent.clamp(0.0, remainingPaintExtent);

  geometry = SliverGeometry(
    scrollExtent: totalScrollExtent,
    paintExtent: paintExtent,
    maxPaintExtent: totalScrollExtent,
    cacheExtent: math.min(remainingCacheExtent, totalScrollExtent),
    // Overflow means: our painted region would exceed the portion of the
    // scroll extent visible within our own paintExtent. Comparing against
    // remainingPaintExtent (which includes space occupied by later slivers)
    // gave false negatives and missed clipping. Also flag when a sticky
    // header's inflated bottom was clamped against remainingPaintExtent.
    // The `scrollOffset > 0` clause mirrors RenderSliverMultiBoxAdaptor:
    // when the first visible row starts before scrollOffset, it paints at
    // a negative y relative to the sliver's paint origin. Without this
    // flag, the viewport skips its clip layer and the partial top row
    // spills above the sliver — visible at max scroll extent, where the
    // "content extends below" clause is false.
    hasVisualOverflow:
        stickyInflationClamped ||
        scrollOffset + paintExtent < totalScrollExtent ||
        scrollOffset > 0.0,
  );

  // Refresh parentData.layoutOffset for children mounted in a prior
  // frame that now fall outside [cacheStartIndex, cacheEndIndex). The
  // admission cap (steadyAccum / fullCacheEnd) deliberately limits *new*
  // mounts to prevent mass-mounting during large expansions, but
  // siblings below the expanding subtree that were already mounted keep
  // their pre-expand parentData.layoutOffset — causing them to paint at
  // stale positions through the whole animation and only snap into place
  // on the settle frame, when Pass 1's "Transitional frame" branch
  // finally rewrites extents.
  //
  // Placed after the sticky pass so any _recomputeOffsets triggered by
  // stickyExtentsChanged has already landed in _nodeOffsetsByNid /
  // cumulatives before we write to parentData.
  //
  // Iterate `_children.keys` directly: the loop body acts only on
  // mounted boxes, so the previous `for (int i = 0; i < visibleNodes.length; i++)`
  // walk did O(visibleNodes) work to update O(_children) entries. On a
  // dense expandAll with 10⁵ visible nodes and a 50-row viewport this
  // walked 10⁵ entries per frame for ~50 writes — now O(_children).
  //
  // Runs unconditionally when there are mounted children. Cost is
  // O(_children) — bounded by cache-region size plus any retained
  // off-cache rows (edge ghosts, exit phantoms, slide-active rows).
  // Always running closes a subtle staleness window:
  //
  //   * The past gate (now removed) was `hasAnimations || hasActiveSlides`,
  //     under the assumption that pure scrolling can't mutate offsets so
  //     cached parentData is already correct. That assumption breaks for
  //     a sequence: STRUCTURAL MUTATION (no slides installed — e.g. rapid
  //     cascaded toggle whose composedY/X both round to 0 → engine clears
  //     the entry), then PURE SCROLL. The mutation's layout updates
  //     parentData only for in-cache rows; off-cache rows whose structural
  //     offset just shifted keep their pre-mutation layoutOffset. The follow-up
  //     scroll's layout sees `!hasAnimations && !hasActiveSlides` so the
  //     gated refresh skips them. The off-cache row paints at its OLD
  //     structural Y until something else triggers a layout that DOES
  //     re-admit it to cache (typically a further scroll into its new
  //     structural Y). User-perceived symptom: "row stuck at old position
  //     until I scroll again."
  //
  //   * Stale `indent` for depth-changing reparents: same path. Painted
  //     X = `parentData.indent + slideDeltaX` resolves to `oldIndent +
  //     (oldIndent - newIndent)` at slide t=0 if `parentData.indent` is
  //     stale.
  //
  //   * Stale `visibleExtent`: the per-row clip-and-translate in
  //     `_paintRow` slices the wrong portion of the child box.
  //
  // For non-bulk mode, `_nodeOffsetsByNid` is stale for off-cache rows.
  // Compute a fresh structural cumulative on-demand by walking
  // `visibleNodes` once into a local Float64List, then index into it.
  if (_children.isNotEmpty) {
    Float64List? freshCumulative;
    if (!_bulkCumulativesValid) {
      // O(N_visible) one-time accumulation for the loop below.
      final vlen = visibleNodes.length;
      freshCumulative = Float64List(vlen + 1);
      double acc = 0.0;
      for (int i = 0; i < vlen; i++) {
        freshCumulative[i] = acc;
        acc += controller.getCurrentExtentNid(orderNids[i]);
      }
      freshCumulative[vlen] = acc;
    }
    for (final nodeId in _children.keys) {
      debugLastParentDataRefreshIterationCount++;
      final child = _children[nodeId]!;
      final nid = _controller.nidOf(nodeId);
      if (nid < 0) {
        // Dead key — purge handled by stale-node eviction.
        continue;
      }
      // Cache-region children already had their parentData written by
      // the measurement loop. Skip them. Non-admitted-but-mounted
      // children inside [cacheStartIndex, cacheEndIndex) have
      // `_inCacheRegionByNid[nid] == 0` here and would also have been
      // touched by the measurement loop via `_layoutNodeChild` — letting
      // them through is a redundant (but correctness-safe) re-write of
      // the same offset. Cost is one field assignment per such row;
      // the case is rare (off-screen exits during a collapse).
      if (nid < _inCacheRegionByNid.length && _inCacheRegionByNid[nid] != 0) {
        continue;
      }
      final visIdx = _controller.visibleIndexOfNid(nid);
      if (visIdx < 0) {
        // Mounted but no longer in visible order — happens transiently
        // during structure changes. Eviction sweeps it next.
        continue;
      }
      final double offset;
      if (_bulkCumulativesValid) {
        // Bulk-only fast path: per-nid offset slots are not kept fresh
        // for out-of-cache-region nids — derive from cumulatives.
        offset = _offsetAtVisibleIndex(visIdx);
      } else {
        // Non-bulk: `_nodeOffsetsByNid` is stale for off-cache rows.
        // Use the freshly-computed cumulative.
        offset = freshCumulative![visIdx];
      }
      final parentData = child.parentData! as SliverTreeParentData;
      parentData.layoutOffset = offset;
      // Refresh indent + visibleExtent against the controller's live
      // values. Both are read directly from the controller (no per-
      // child layout call required), matching the assignments
      // `_layoutNodeChild` would have performed on a cache-region row.
      // `controller.getIndent` reads `getDepth(key) * indentWidth`,
      // and `controller.getCurrentExtentNid` resolves the
      // bulk → operation-group → standalone → fullExtent chain — the
      // same chain `_layoutNodeChild` consumes via `getAnimatedExtent`.
      parentData.indent = controller.getIndent(nodeId);
      parentData.visibleExtent = controller.getCurrentExtentNid(nid);
    }
  }

  _lastVisibleNodeCount = visibleNodes.length;
  _lastTotalScrollExtent = totalScrollExtent;
  _animationsWereActive = hasAnimations;

  childManager?.didFinishLayout();
}
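The implementation above repeatedly calls `_findFirstVisibleIndex(offset)` to map a scroll offset to the first row whose extent reaches past it. A minimal sketch of such a helper, assuming a monotone non-decreasing cumulative-offsets array (the function and parameter names are hypothetical, not the actual private helper):

```dart
/// Hypothetical sketch: index of the first row whose bottom edge extends
/// past [target], where offsets[i] is row i's top and offsets[i + 1] is
/// row i's bottom. Returns the row count when target is past all content.
int findFirstVisibleIndex(List<double> offsets, double target) {
  final int rowCount = offsets.length - 1;
  if (rowCount <= 0 || offsets[rowCount] <= target) {
    return rowCount; // past the end: no row intersects target
  }
  int lo = 0, hi = rowCount - 1;
  while (lo < hi) {
    final int mid = lo + ((hi - lo) >> 1);
    if (offsets[mid + 1] <= target) {
      lo = mid + 1; // row mid ends at or before target: look right
    } else {
      hi = mid;
    }
  }
  return lo;
}
```

With `offsets = [0, 10, 20, 30]` and `target = 15`, the search narrows to index 1, the row spanning 10–20. This is the O(log N) lookup that lets the paint-extent and cache-admission passes start mid-list instead of scanning from row 0.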