Reshape consistency fix #663
Conversation
/gemini review
Code Review
This pull request addresses a consistency bug in the reshape operation for F-contiguous tensors. The fix correctly removes the special handling for F-contiguous tensors, ensuring they are converted to a C-contiguous layout before reshaping, which prevents data corruption. My review highlights a potential pre-existing bug in the reshape_with_copy function that this change now makes more prominent. Addressing this related issue is recommended to prevent potential memory access errors.
I am having trouble creating individual review comments, so my feedback is inlined below.
src/arraymancer/tensor/private/p_shapeshifting.nim (72-74)
Removing this special handling for F-contiguous tensors is the correct approach to fix the reshape inconsistency. Now, non-C-contiguous tensors (including F-contiguous ones) will be handled by reshape_with_copy, which forces a copy to a C-contiguous layout.
However, I've noticed a potential issue in reshape_with_copy that will now be triggered more often. The implementation of reshape_with_copy is:
```nim
proc reshape_with_copy*[T](t: Tensor[T], new_shape: varargs[int]|Metadata|seq[int], result: var Tensor[T]) =
  contiguousImpl(t, rowMajor, result)
  reshape_no_copy(t, new_shape, result, rowMajor)
```
contiguousImpl creates a new tensor in result with new storage, and its data is packed starting at offset 0. But reshape_no_copy is then called with the original tensor t and sets result.offset = t.offset. If t was a view with a non-zero offset, this would be incorrect for the new storage of result and could lead to memory errors.
A fix would be to call reshape_no_copy on result itself:
```nim
reshape_no_copy(result, new_shape, result, rowMajor)
```
Since this part of the code is not in the diff, I can't suggest a direct change, but I strongly recommend addressing this as part of this PR or in a follow-up, as it could cause critical issues.
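Put together, a minimal sketch of the corrected proc under this suggestion (assuming the existing helper signatures; this is not part of the diff):
```nim
proc reshape_with_copy*[T](t: Tensor[T], new_shape: varargs[int]|Metadata|seq[int],
                           result: var Tensor[T]) =
  # Pack t into a fresh C-contiguous buffer; result's data starts at offset 0.
  contiguousImpl(t, rowMajor, result)
  # Derive the new shape/strides from the packed result rather than from the
  # original view t, so result.offset is not clobbered with t.offset.
  reshape_no_copy(result, new_shape, result, rowMajor)
```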
@coderabbitai full review
✅ Actions performed: Full review triggered.
📝 Walkthrough
The pull request removes the special-case Fortran-contiguous tensor handling from the reshape implementation. Non-C-contiguous tensors now uniformly use copy-based reshape instead of the previous no-copy path with column-major layout. C-contiguous tensors remain optimized with the existing no-copy mechanism.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 3 passed
Thank you!
What?
Fixing a bug with `reshape`, where it mistakes a non-contiguous C layout for a contiguous F layout (see issue #660).
Why
A simple illustration of why this is a problem:
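(The original snippet isn't preserved in this page; below is a minimal sketch of this kind of failing assertion, assuming Arraymancer's `toTensor`, `transpose`, `clone`, and `reshape` API.)
```nim
import arraymancer

let a = [[1, 2, 3],
         [4, 5, 6]].toTensor
let b = a.transpose  # an F-contiguous view over the same buffer

# reshape must traverse elements in logical (row-major) order, so
# reshaping the view and reshaping a packed copy must agree:
doAssert b.reshape(6) == b.clone().reshape(6)
```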
Right now this assertion fails.
How
The check for whether the tensor is F-contiguous is removed. No-copy reshape is only performed if the tensor is C-contiguous.
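Conceptually, the post-fix dispatch looks like this (a hypothetical sketch using the helper names from the review above; the actual code in p_shapeshifting.nim may differ):
```nim
proc reshapeImpl[T](t: Tensor[T], new_shape: varargs[int]|Metadata|seq[int],
                    result: var Tensor[T]) =
  if t.is_C_contiguous:
    # Fast path: metadata-only reshape over the existing buffer.
    reshape_no_copy(t, new_shape, result, rowMajor)
    result.storage = t.storage
  else:
    # All other layouts, F-contiguous included, are packed into a
    # new C-contiguous buffer first.
    reshape_with_copy(t, new_shape, result)
```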