Follow-up for Fixing SSIM Test Method Names#8750

Open
benediktjohannes wants to merge 1 commit into Project-MONAI:dev from benediktjohannes:patch-7

Conversation

@benediktjohannes (Contributor)

This fixes some other issues that were described in #8746.

Description

This is a follow-up for #8746, thanks and credits to @ericspod! 👍

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

Signed-off-by: Benedikt Johannes <benedikt.johannes.hofer@gmail.com>
@coderabbitai (Contributor)

coderabbitai bot commented Feb 22, 2026

📝 Walkthrough

Three test method names in TestSSIMMetric class were renamed to comply with snake_case naming conventions: test2d_gaussian to test_2d_gaussian, test2d_uniform to test_2d_uniform, and test3d_gaussian to test_3d_gaussian. No logic, behavior, or test content was modified.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Title check — ✅ Passed: Title clearly summarizes the main change: renaming SSIM test methods to follow snake_case conventions.
  • Description check — ✅ Passed: Description includes all required template sections: link to related issue, clear explanation as follow-up work, and properly completed types of changes checklist.



@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/metrics/test_ssim_metric.py (1)

63-74: ⚠️ Potential issue | 🟠 Major

input_ill_input_shape and mismatch_y_pred_and_y are never executed.

Both methods lack the test_ prefix, so the test runner silently ignores them. Given this PR is specifically fixing test method names, these should be fixed here too.

🐛 Proposed fix
-    def input_ill_input_shape(self):
+    def test_input_ill_input_shape(self):
         with self.assertRaises(ValueError):
             metric = SSIMMetric(spatial_dims=3)
             metric(torch.randn(1, 1, 16, 16), torch.randn(1, 1, 16, 16))

         with self.assertRaises(ValueError):
             metric = SSIMMetric(spatial_dims=2)
             metric(torch.randn(1, 1, 16, 16, 16), torch.randn(1, 1, 16, 16, 16))

-    def mismatch_y_pred_and_y(self):
+    def test_mismatch_y_pred_and_y(self):
         with self.assertRaises(ValueError):
             compute_ssim_and_cs(y_pred=torch.randn(1, 1, 16, 8), y=torch.randn(1, 1, 16, 16), spatial_dims=2)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/metrics/test_ssim_metric.py` around lines 63-74, rename the two test
methods so the test runner executes them: change input_ill_input_shape to
test_input_ill_input_shape and mismatch_y_pred_and_y to
test_mismatch_y_pred_and_y; these methods reference SSIMMetric and
compute_ssim_and_cs, so update only the method names (no other logic) to ensure
the assertions raising ValueError run during test collection.
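The discovery behavior behind this finding can be demonstrated with a minimal standalone sketch (the `DemoCase` class below is hypothetical, not part of MONAI's suite): unittest's default `TestLoader` only collects methods whose names start with the `test` prefix, so a misnamed method is never run and its assertions never fire.

```python
import unittest

class DemoCase(unittest.TestCase):
    def test_runs(self):  # "test" prefix: collected by the loader
        self.assertTrue(True)

    def never_runs(self):  # no "test" prefix: silently skipped
        raise RuntimeError("would fail loudly if it were ever collected")

# The default loader reports only the test_-prefixed method.
collected = unittest.TestLoader().getTestCaseNames(DemoCase)
print(collected)
```

This is why the rename is the whole fix: no logic change is needed, only names the loader will match.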
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/metrics/test_ssim_metric.py`:
- Line 35: The assertion uses a one-sided check (expected_value - result.item()
< 0.000001) which allows results greater than expected_value to pass; replace it
with a symmetric tolerance check by using abs(expected_value - result.item()) <
1e-6 or unittest's self.assertAlmostEqual(result.item(), expected_value,
places=6) for the assertions that reference expected_value and result.item()
(apply the same change to the other two occurrences mentioned).

---

Outside diff comments:
In `@tests/metrics/test_ssim_metric.py`:
- Around line 63-74: Rename the two test methods so the test runner executes
them: change input_ill_input_shape to test_input_ill_input_shape and
mismatch_y_pred_and_y to test_mismatch_y_pred_and_y; these methods reference
SSIMMetric and compute_ssim_and_cs, so update only the method names (no other
logic) to ensure the assertions raising ValueError run during test collection.

@@ -34,7 +34,7 @@ def test2d_gaussian(self):
expected_value = 0.045415
self.assertTrue(expected_value - result.item() < 0.000001)

⚠️ Potential issue | 🟡 Minor

One-sided tolerance check — upper-bound errors pass silently.

expected_value - result.item() < 0.000001 only fails when the result is too low; any result above expected_value trivially satisfies it. Use abs() or assertAlmostEqual.

🐛 Proposed fix (shown for line 35; apply identically to lines 48 and 61)
-        self.assertTrue(expected_value - result.item() < 0.000001)
+        self.assertAlmostEqual(result.item(), expected_value, places=5)

Also applies to: 48-48, 61-61

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/metrics/test_ssim_metric.py` at line 35, the assertion uses a one-sided
check (expected_value - result.item() < 0.000001) which allows results greater
than expected_value to pass; replace it with a symmetric tolerance check by
using abs(expected_value - result.item()) < 1e-6 or unittest's
self.assertAlmostEqual(result.item(), expected_value, places=6) for the
assertions that reference expected_value and result.item() (apply the same
change to the other two occurrences mentioned).
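As a quick illustration of the one-sided failure mode (the `result` value below is made up for the demo; only `expected_value` comes from the test file):

```python
expected_value = 0.045415

# A grossly wrong result that is HIGHER than expected:
result = 5.0

# One-sided check from the original test: the difference is a large
# negative number, so it is trivially < 1e-6 and the check passes.
one_sided_passes = (expected_value - result) < 0.000001
print(one_sided_passes)  # True — the bug slips through

# Symmetric check: |difference| is large, so the check correctly fails.
symmetric_passes = abs(expected_value - result) < 1e-6
print(symmetric_passes)  # False
```

Either `abs()` or `assertAlmostEqual` closes the gap; `assertAlmostEqual` additionally produces a readable failure message with both values.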
