Fix using a non-tuple sequence for multidimensional indexing that will result in error in pytorch 2.9 #8732
aarpon wants to merge 7 commits into Project-MONAI:dev from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable in the review settings.
📝 Walkthrough

Changes standardize internal indexing in sliding-window inference by converting list- or slice-based indices to tuples (using ensure_tuple) across buffered and non-buffered write paths, per-slice handling for sw_batch_size variants, and in _compute_coords. Introduced o_slice_idx for buffered writes. No public API signature changes. Added tests: a parity check between buffered and non-buffered conditioned sliding_window_inference flows, and a test verifying _compute_coords accepts list-based indices.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
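The core change in the walkthrough — indexing tensors with tuples rather than lists — can be illustrated with a short, self-contained sketch. This uses NumPy, which shares these multidimensional-indexing semantics with PyTorch; it is an illustration, not MONAI code:

```python
import numpy as np

a = np.arange(16).reshape(4, 4)

# A window index built as a list of slices, like the entries of
# `unravel_slice` in monai/inferers/utils.py:
idx = [slice(0, 2), slice(0, 2)]

# Indexing with the raw list (a[idx]) is deprecated/removed in recent
# NumPy, and PyTorch 2.9 follows suit; converting to a tuple is the
# supported multidimensional form:
sub = a[tuple(idx)]
print(sub.shape)  # (2, 2)
```

MONAI's `ensure_tuple` performs this kind of conversion generically, which is why the PR routes indices through it.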
Hi @aarpon, thanks for these fixes. I know tuple indices are required more strictly in places than in the past. Would it make sense to use `monai.utils.ensure_tuple` here?
Signed-off-by: Aaron Ponti <aaron@aaronponti.ch>
🧹 Nitpick comments (1)

monai/inferers/utils.py (1)

226-376: Consider adding a regression test for PyTorch 2.9 tuple indexing. The changes are correct, but per coding guidelines, test coverage for the modified code paths would help prevent regressions. A minimal test exercising `sliding_window_inference` with `sw_batch_size=1` and `sw_batch_size>1` under PyTorch 2.9+ would suffice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@monai/inferers/utils.py` around lines 226 - 376, Add a regression test that exercises sliding_window_inference with both sw_batch_size=1 and sw_batch_size>1 on PyTorch 2.9+ to catch tuple-indexing regressions: write a minimal unit test that calls sliding_window_inference (the public function that triggers _create_buffered_slices and _compute_coords) on a small synthetic input and verifies outputs are identical between sw_batch_size values and match a ground-truth full-window inference; ensure the test covers buffered path (buffer_dim/buffer_steps) and multi-resolution behavior so tuple indexing in _create_buffered_slices and _compute_coords is exercised under torch>=2.9.0.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
monai/inferers/utils.py
Hi, I replaced my explicit casts with calls to `monai.utils.ensure_tuple`, and I re-ran the inferer tests with PyTorch 2.10 while forcing warnings to be treated as errors.
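One way to "force errors for warnings", as described above, is Python's `-W error` filter. A minimal demonstration follows; this is a sketch only — the actual MONAI test runs would go through `./runtests.sh` or a pytest warning filter:

```shell
# Escalate warnings to errors so a deprecated indexing path fails loudly
# instead of printing a FutureWarning and continuing.
python3 -W error -c "import warnings; warnings.warn('non-tuple index', FutureWarning)" 2>&1 | tail -n 1
# Last line of the resulting traceback names the escalated warning class.
```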
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@monai/inferers/utils.py`:
- Around line 246-259: The changed indexing branches in monai.inferers.utils
(the code paths that build win_data and win_condition using unravel_slice,
ensure_tuple, sw_device, kwargs["condition"] and the else branch using
s0/s0_idx) lack unit tests; add regression tests that exercise both
sw_batch_size > 1 and sw_batch_size == 1, include condition is not None to
validate kwargs["condition"] assignment, cover buffered and non-buffered write
flows used by the inferer, and create tests for _compute_coords when passed
non-tuple index containers (e.g., lists) to ensure unravel_slice and
ensure_tuple handling is correct; target functions/classes to assert correct
tensor slicing, device placement, and that buffered writes behave identically to
non-buffered writes.
```diff
-                win_data = torch.cat([inputs[win_slice] for win_slice in unravel_slice]).to(sw_device)
+                win_data = torch.cat([inputs[ensure_tuple(win_slice)] for win_slice in unravel_slice]).to(sw_device)
                 if condition is not None:
-                    win_condition = torch.cat([condition[win_slice] for win_slice in unravel_slice]).to(sw_device)
+                    win_condition = torch.cat([condition[ensure_tuple(win_slice)] for win_slice in unravel_slice]).to(
+                        sw_device
+                    )
                     kwargs["condition"] = win_condition
             else:
-                win_data = inputs[unravel_slice[0]].to(sw_device)
+                s0 = unravel_slice[0]
+                s0_idx = ensure_tuple(s0)
+                win_data = inputs[s0_idx].to(sw_device)
                 if condition is not None:
-                    win_condition = condition[unravel_slice[0]].to(sw_device)
+                    win_condition = condition[s0_idx].to(sw_device)
                     kwargs["condition"] = win_condition
```
Add regression tests for the modified indexing paths.

The changed paths (lines 246-259, 285, 316-320, 396) are not backed by new tests in this PR. Please add coverage for:

- `sw_batch_size > 1` and `== 1`
- `condition is not None`
- buffered and non-buffered writes
- `_compute_coords` with non-tuple index containers

As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."

Also applies to: 285-285, 316-320, 396-396
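The index normalization the review wants exercised can be sketched with a stand-alone helper. Note that `to_tuple_index` below is a hypothetical stand-in; the PR itself uses `monai.utils.ensure_tuple`:

```python
def to_tuple_index(idx):
    """Return idx as a tuple so it is valid as a multidimensional index."""
    if isinstance(idx, tuple):
        return idx
    if isinstance(idx, list):
        return tuple(idx)
    return (idx,)

# Entries of `unravel_slice` are per-window lists of slices; each must be
# converted to a tuple before it is used to index a tensor.
unravel_slice = [[slice(0, 1), slice(0, 2)], [slice(1, 2), slice(0, 2)]]
normalized = [to_tuple_index(s) for s in unravel_slice]
assert all(isinstance(s, tuple) for s in normalized)
print(normalized[0])  # (slice(0, 1, None), slice(0, 2, None))
```

A regression test along these lines would assert that indexing with the normalized tuple and with the original list-based index (where still supported) selects the same window.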
…monai.inferers.sliding_window_inference. Signed-off-by: Aaron Ponti <aaron@aaronponti.ch>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/inferers/test_sliding_window_inference.py`:
- Line 709: Add Google-style docstrings for the newly introduced definitions:
the test function test_conditioned_branches_and_buffered_parity, the nested
compute function inside it, the TestSlidingWindowUtils class, and the
test_compute_coords_accepts_list_indices test; each docstring should briefly
describe the purpose, parameters (including their types, e.g., sw_batch_size or
any function args), return value (if any), and any exceptions raised. Place the
docstrings immediately below each definition signature and follow the project’s
Google-style formatting (Args:, Returns:, Raises:) so tests and linters pick
them up.
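A sketch of the Google-style docstring format the prompt asks for, attached to a stand-in version of the new test (the name mirrors the PR; the body is illustrative only, not the PR's actual test logic):

```python
def test_compute_coords_accepts_list_indices():
    """Verify coordinate indices supplied as lists are handled like tuples.

    Builds a list-based index, normalizes it to a tuple, and asserts both
    forms are equivalent (stand-in logic for illustration).

    Returns:
        None.

    Raises:
        AssertionError: If the normalized index differs from the tuple form.
    """
    idx = [slice(0, 2), slice(0, 2)]
    assert tuple(idx) == (slice(0, 2), slice(0, 2))

test_compute_coords_accepts_list_indices()
```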
📒 Files selected for processing (1)
tests/inferers/test_sliding_window_inference.py
…ity, (nested) compute, TestSlidingWindowUtils, test_compute_coords_accepts_list_indices. Signed-off-by: Aaron Ponti <aaron@aaronponti.ch>
Fix using a non-tuple sequence for multidimensional indexing that will result in error in pytorch 2.9.
Fixes # .
Description
The `SlidingWindowInferer` complains that, from PyTorch 2.9, using non-tuple sequences to slice into Tensors will result in an error (or, worse, misbehave). This patch targets all places in `monai/inferers/utils.py` where indexing may happen with lists or other types of sequences.

Types of changes
- Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- Documentation updated; tested with the `make html` command in the `docs/` folder.