
Fix using a non-tuple sequence for multidimensional indexing that will result in error in pytorch 2.9 #8732

Open

aarpon wants to merge 7 commits into Project-MONAI:dev from aitoo:fix/non_tuple_indexing

Conversation

@aarpon commented on Feb 5, 2026

Fix using a non-tuple sequence for multidimensional indexing that will result in error in pytorch 2.9.

Fixes # .

Description

The SlidingWindowInferer warns that, starting with PyTorch 2.9, using non-tuple sequences to slice into tensors will result in an error (or, worse, silent misbehavior). This patch targets all places in monai/inferers/utils.py where indexing may happen with lists or other sequence types.
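The underlying mechanics can be shown without any framework: Python's indexing syntax determines whether `__getitem__` receives a tuple or a list, and newer PyTorch accepts only the tuple form for multidimensional indexing. A minimal sketch (`IndexProbe` is a hypothetical helper for illustration, not part of MONAI or PyTorch):

```python
class IndexProbe:
    """Report the type Python passes to __getitem__ for each index syntax."""

    def __getitem__(self, idx):
        return type(idx).__name__


p = IndexProbe()
# obj[a, b] packs the indices into a tuple automatically:
assert p[0, 1] == "tuple"
# obj[[a, b]] passes the list itself -- the form newer PyTorch rejects
# for multidimensional tensor indexing:
assert p[[0, 1]] == "list"
# Converting the sequence to a tuple restores the supported form:
assert p[tuple([0, 1])] == "tuple"
```

This is why converting list-based slice containers to tuples before indexing avoids the warning.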

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

@coderabbitai bot (Contributor) commented Feb 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Changes standardize internal indexing in sliding-window inference by converting list- or slice-based indices to tuples (using ensure_tuple) across buffered and non-buffered write paths, per-slice handling for sw_batch_size variants, and in _compute_coords. Introduced o_slice_idx for buffered writes. No public API signature changes. Added tests: parity check between buffered and non-buffered conditioned sliding_window_inference flows, and a test verifying _compute_coords accepts list-based indices.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks (✅ 2 passed, ❌ 1 failed)

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: coverage is 42.86%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the title accurately describes the main change, fixing non-tuple sequence indexing for PyTorch 2.9+ compatibility.
  • Description check ✅ Passed: the description covers the problem, the solution approach with ensure_tuple, and test validation; template sections are mostly completed.


I, Aaron Ponti <aaron@aaronponti.ch>, hereby add my Signed-off-by to this commit: f6673fa
I, Aaron Ponti <aaron@aaronponti.ch>, hereby add my Signed-off-by to this commit: 922dfff

Signed-off-by: Aaron Ponti <aaron@aaronponti.ch>
@ericspod (Member) commented

Hi @aarpon, thanks for these fixes; I know tuple indices are now required more strictly than in the past. Would it make sense to use ensure_tuple in these places, however? I don't think we should repeat the same `tuple(p) if isinstance(p, list) else p` everywhere; this should be condensed into a function, but I think ensure_tuple does this already. It also covers cases where p isn't a list but some other iterable that should be converted to a tuple. If you agree, please update and we can re-review.
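For readers unfamiliar with the helper, ensure_tuple behaves roughly like the simplified stand-in below. This is a sketch only; the real monai.utils.ensure_tuple has additional handling (e.g. for arrays) not reproduced here.

```python
from collections.abc import Iterable


def ensure_tuple_sketch(vals):
    """Simplified stand-in for monai.utils.ensure_tuple (illustration only)."""
    if isinstance(vals, tuple):
        return vals  # already a tuple: pass through unchanged
    if isinstance(vals, Iterable) and not isinstance(vals, (str, bytes)):
        return tuple(vals)  # lists, generators, and other iterables become tuples
    return (vals,)  # non-iterables are wrapped in a 1-tuple


# A list of slices (the problematic index container) becomes a tuple:
assert ensure_tuple_sketch([slice(0, 2), slice(1, 3)]) == (slice(0, 2), slice(1, 3))
assert ensure_tuple_sketch((1, 2)) == (1, 2)
assert ensure_tuple_sketch(5) == (5,)
```

The point of the suggestion is exactly this: one well-tested conversion function instead of ad-hoc isinstance checks at every indexing site.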

@coderabbitai bot (Contributor) left a comment


🧹 Nitpick comments (1)
monai/inferers/utils.py (1)

226-376: Consider adding a regression test for PyTorch 2.9 tuple indexing.

Changes are correct, but per coding guidelines, test coverage for modified code paths would help prevent regressions. A minimal test exercising sliding_window_inference with sw_batch_size=1 and sw_batch_size>1 under PyTorch 2.9+ would suffice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/inferers/utils.py` around lines 226 - 376, Add a regression test that
exercises sliding_window_inference with both sw_batch_size=1 and sw_batch_size>1
on PyTorch 2.9+ to catch tuple-indexing regressions: write a minimal unit test
that calls sliding_window_inference (the public function that triggers
_create_buffered_slices and _compute_coords) on a small synthetic input and
verifies outputs are identical between sw_batch_size values and match a
ground-truth full-window inference; ensure the test covers buffered path
(buffer_dim/buffer_steps) and multi-resolution behavior so tuple indexing in
_create_buffered_slices and _compute_coords is exercised under torch>=2.9.0.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 922dfff and 939a5bb.

📒 Files selected for processing (1)
  • monai/inferers/utils.py

@aarpon (Author) commented Mar 2, 2026

Hi, I replaced my explicit casts with calls to monai.utils.ensure_tuple, and I re-ran the inferer tests with PyTorch 2.10 with warnings promoted to errors.
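Promoting warnings to errors, as described above, can be done with `python -W error` on the command line or with the warnings module in code. A small self-contained sketch (`noisy` is a hypothetical stand-in for code that still emits the indexing warning):

```python
import warnings


def noisy():
    """Hypothetical stand-in for code that still emits a deprecation-style warning."""
    warnings.warn("non-tuple sequence indexing", UserWarning)
    return 42


# Promote warnings to errors so any remaining warning fails the test run:
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        noisy()
        raised = False
    except UserWarning:
        raised = True

assert raised  # the warning surfaced as an exception
```

Running a test suite this way ensures no non-tuple-indexing warning slips through silently.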

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/inferers/utils.py`:
- Around line 246-259: The changed indexing branches in monai.inferers.utils
(the code paths that build win_data and win_condition using unravel_slice,
ensure_tuple, sw_device, kwargs["condition"] and the else branch using
s0/s0_idx) lack unit tests; add regression tests that exercise both
sw_batch_size > 1 and sw_batch_size == 1, include condition is not None to
validate kwargs["condition"] assignment, cover buffered and non-buffered write
flows used by the inferer, and create tests for _compute_coords when passed
non-tuple index containers (e.g., lists) to ensure unravel_slice and
ensure_tuple handling is correct; target functions/classes to assert correct
tensor slicing, device placement, and that buffered writes behave identically to
non-buffered writes.


📥 Commits

Reviewing files that changed from the base of the PR and between 939a5bb and 54f8246.

📒 Files selected for processing (1)
  • monai/inferers/utils.py

Comment on lines +246 to 259
```diff
+            win_data = torch.cat([inputs[ensure_tuple(win_slice)] for win_slice in unravel_slice]).to(sw_device)
             if condition is not None:
-                win_condition = torch.cat([condition[win_slice] for win_slice in unravel_slice]).to(sw_device)
+                win_condition = torch.cat([condition[ensure_tuple(win_slice)] for win_slice in unravel_slice]).to(
+                    sw_device
+                )
                 kwargs["condition"] = win_condition
         else:
-            win_data = inputs[unravel_slice[0]].to(sw_device)
+            s0 = unravel_slice[0]
+            s0_idx = ensure_tuple(s0)
+
+            win_data = inputs[s0_idx].to(sw_device)
             if condition is not None:
-                win_condition = condition[unravel_slice[0]].to(sw_device)
+                win_condition = condition[s0_idx].to(sw_device)
                 kwargs["condition"] = win_condition
```

⚠️ Potential issue | 🟠 Major

Add regression tests for the modified indexing paths.

The changed paths (lines 246-259) are not backed by new tests in this PR. Please add coverage for:

  • sw_batch_size > 1 and == 1
  • condition is not None
  • buffered and non-buffered writes
  • _compute_coords with non-tuple index containers

As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."

Also applies to: lines 285, 316-320, 396
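The buffered/non-buffered parity idea behind these requested tests can be shown with a toy analogue (pure Python, no torch; `write_windows` and the window list are illustrative, not MONAI code): accumulating windows one at a time must give the same result as accumulating them in batches.

```python
def write_windows(out, windows, batch):
    """Accumulate (index, value) windows into out, batch at a time."""
    for i in range(0, len(windows), batch):
        for idx, val in windows[i:i + batch]:
            out[idx] += val
    return out


windows = [(0, 1.0), (1, 2.0), (1, 3.0), (2, 4.0)]
one_at_a_time = write_windows([0.0] * 3, windows, batch=1)  # sw_batch_size == 1 analogue
batched = write_windows([0.0] * 3, windows, batch=2)        # sw_batch_size > 1 analogue
assert one_at_a_time == batched == [1.0, 5.0, 4.0]
```

A real regression test would assert the same equality between sliding_window_inference outputs for both sw_batch_size values and for buffered vs. non-buffered writes.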


…monai.inferers.sliding_window_inference.

Signed-off-by: Aaron Ponti <aaron@aaronponti.ch>
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/inferers/test_sliding_window_inference.py`:
- Line 709: Add Google-style docstrings for the newly introduced definitions:
the test function test_conditioned_branches_and_buffered_parity, the nested
compute function inside it, the TestSlidingWindowUtils class, and the
test_compute_coords_accepts_list_indices test; each docstring should briefly
describe the purpose, parameters (including their types, e.g., sw_batch_size or
any function args), return value (if any), and any exceptions raised. Place the
docstrings immediately below each definition signature and follow the project’s
Google-style formatting (Args:, Returns:, Raises:) so tests and linters pick
them up.


📥 Commits

Reviewing files that changed from the base of the PR and between 54f8246 and 95dac4b.

📒 Files selected for processing (1)
  • tests/inferers/test_sliding_window_inference.py

…ity, (nested) compute, TestSlidingWindowUtils, test_compute_coords_accepts_list_indices.

Signed-off-by: Aaron Ponti <aaron@aaronponti.ch>
