Follow-up for Fixing SSIM Test Method Names #8750
benediktjohannes wants to merge 1 commit into Project-MONAI:dev from …
Conversation
Signed-off-by: Benedikt Johannes <benedikt.johannes.hofer@gmail.com>
📝 Walkthrough: Three test method names in … Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes. Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning).
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/metrics/test_ssim_metric.py (1)
63-74:⚠️ Potential issue | 🟠 Major
`input_ill_input_shape` and `mismatch_y_pred_and_y` are never executed. Both methods lack the `test_` prefix, so the test runner silently ignores them. Given this PR is specifically fixing test method names, these should be fixed here too.

🐛 Proposed fix
```diff
-    def input_ill_input_shape(self):
+    def test_input_ill_input_shape(self):
         with self.assertRaises(ValueError):
             metric = SSIMMetric(spatial_dims=3)
             metric(torch.randn(1, 1, 16, 16), torch.randn(1, 1, 16, 16))
         with self.assertRaises(ValueError):
             metric = SSIMMetric(spatial_dims=2)
             metric(torch.randn(1, 1, 16, 16, 16), torch.randn(1, 1, 16, 16, 16))

-    def mismatch_y_pred_and_y(self):
+    def test_mismatch_y_pred_and_y(self):
         with self.assertRaises(ValueError):
             compute_ssim_and_cs(y_pred=torch.randn(1, 1, 16, 8), y=torch.randn(1, 1, 16, 16), spatial_dims=2)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/metrics/test_ssim_metric.py` around lines 63-74, rename the two test methods so the test runner executes them: change `input_ill_input_shape` to `test_input_ill_input_shape` and `mismatch_y_pred_and_y` to `test_mismatch_y_pred_and_y`; these methods reference `SSIMMetric` and `compute_ssim_and_cs`, so update only the method names (no other logic) to ensure the assertions raising `ValueError` are actually executed.
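As a quick illustration of why the missing prefix matters, here is a minimal, self-contained sketch (the class and method names below are hypothetical, not from the MONAI test suite) showing that unittest's default loader only collects methods whose names start with `test`:

```python
import unittest


class ExampleCase(unittest.TestCase):
    def test_runs(self):
        # Discovered: the name starts with the default "test" prefix.
        self.assertTrue(True)

    def never_runs(self):
        # NOT discovered: missing the "test" prefix, so this assertion
        # never fires even though it would always fail.
        raise AssertionError("unittest never calls this method")


loader = unittest.TestLoader()
names = loader.getTestCaseNames(ExampleCase)
print(names)  # only the "test"-prefixed method is collected
```

Renaming `never_runs` to `test_never_runs` would make it appear in the collected names (and fail loudly), which is exactly the effect of adding the `test_` prefix in the proposed fix.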
```diff
@@ -34,7 +34,7 @@ def test2d_gaussian(self):
     expected_value = 0.045415
     self.assertTrue(expected_value - result.item() < 0.000001)
```
One-sided tolerance check: upper-bound errors pass silently.
`expected_value - result.item() < 0.000001` only fails when the result is too low; any result above `expected_value` trivially satisfies it. Use `abs()` or `assertAlmostEqual`.
🐛 Proposed fix (shown for line 35; apply identically to lines 48 and 61)

```diff
-        self.assertTrue(expected_value - result.item() < 0.000001)
+        self.assertAlmostEqual(result.item(), expected_value, places=5)
```

Also applies to: 48-48, 61-61
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/metrics/test_ssim_metric.py` at line 35, the assertion uses a one-sided
check (expected_value - result.item() < 0.000001) which allows results greater
than expected_value to pass; replace it with a symmetric tolerance check by
using abs(expected_value - result.item()) < 1e-6 or unittest's
self.assertAlmostEqual(result.item(), expected_value, places=6) for the
assertions that reference expected_value and result.item() (apply the same
change to the other two occurrences mentioned).
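To make the one-sided failure mode concrete, here is a small standalone sketch (illustrative values, not taken from the actual test file) comparing the original check with a symmetric one for a result that is far too high:

```python
expected = 0.045415   # illustrative expected SSIM value
too_high = 0.9        # a wildly wrong result, well ABOVE expected

# Original one-sided check: expected - result is negative here,
# so the comparison is trivially True and the bug is masked.
one_sided_ok = (expected - too_high) < 0.000001

# Symmetric check: the absolute difference is large, so this fails.
symmetric_ok = abs(expected - too_high) < 0.000001

print(one_sided_ok, symmetric_ok)
```

Note that `assertAlmostEqual(..., places=5)` rounds the difference to 5 decimal places before comparing it to zero, i.e. a tolerance of roughly 5e-6; if the exact 1e-6 bound matters, `self.assertTrue(abs(expected_value - result.item()) < 1e-6)` preserves it.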
This fixes some other issues described in #8746.
Description
This is a follow-up for #8746, thanks and credits to @ericspod! 👍
Types of changes
- `./runtests.sh -f -u --net --coverage`
- `./runtests.sh --quick --unittests --disttests`
- `make html` command in the `docs/` folder.