feat: introduce MemWAL regional writer and MemTable reader #5709
Conversation
Thanks for the fast turnaround! I will take a look tonight. Meanwhile, I think this code path deserves some benchmarks; can you add those?
All review comments have been addressed in commit cda38c0.
ACTION NEEDED: The PR title and description are used as the merge commit message. Please update your PR title and description to match the specification. For details on the error, please inspect the "PR Title Check" action.
jackye1995 left a comment
Thanks for bearing with all my comments and direct edits! I think this looks good now. We still need the replay feature during initialization and better alignment with the Lance scanner, but this is a great foundation. Pending CI to merge.
…form compatibility

Store only the generation folder name (e.g., "77ea7152_gen_1") in the manifest instead of the full object store path. This fixes Windows test failures where the full path was being incorrectly manipulated. Tests now construct the full path as: base_path/_mem_wal/region_id/folder_name

Co-Authored-By: Claude Opus 4.5 <[email protected]>
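The path-construction scheme described in the commit message above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: the function name `generation_path` is hypothetical, while the `_mem_wal` prefix, the `base_path/_mem_wal/region_id/folder_name` layout, and the example folder name come from the commit message. Using `posixpath` reflects that object-store keys use `/` separators on every platform, which sidesteps the Windows-specific path mangling the commit fixes.

```python
import posixpath

def generation_path(base_path: str, region_id: str, folder_name: str) -> str:
    """Reconstruct the full object-store path from the generation folder
    name stored in the manifest (hypothetical helper for illustration)."""
    # Object-store keys are '/'-separated regardless of OS, so posixpath
    # gives the same result on Windows and Unix.
    return posixpath.join(base_path, "_mem_wal", region_id, folder_name)

print(generation_path("s3://bucket/table", "region-0", "77ea7152_gen_1"))
# s3://bucket/table/_mem_wal/region-0/77ea7152_gen_1
```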
Known CI failure, merging
Based on a draft shared by @jackye1995; cleaned up and published for review.
Based on the benchmark, the regional writer currently reaches about 300 MB/s on Amazon S3, and around 50 MB/s under high backpressure.
MemTable read performance is in line with a Lance table holding a single in-memory fragment, and much faster than using a Lance table and appending one fragment per batch.
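For context on the throughput figures above, a sustained MB/s number is typically computed as total bytes written divided by wall-clock time. The sketch below is purely illustrative and is not the benchmark from this PR: `write_batch` stands in for the MemWAL regional writer (not shown here), and the in-memory sink in the usage example only demonstrates the measurement shape, not S3 performance.

```python
import time

def measure_throughput(write_batch, batch: bytes, num_batches: int) -> float:
    """Return sustained write throughput in MB/s over num_batches writes.

    `write_batch` is a stand-in for the real writer's write call."""
    start = time.perf_counter()
    for _ in range(num_batches):
        write_batch(batch)
    elapsed = time.perf_counter() - start
    total_mb = len(batch) * num_batches / (1024 * 1024)
    return total_mb / elapsed

# Usage with an in-memory sink; real numbers require the actual writer and S3.
sink = bytearray()
mbps = measure_throughput(sink.extend, b"x" * (1024 * 1024), 8)
print(f"{mbps:.0f} MB/s")
```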