
Conversation

@BrianLusina
Owner

@BrianLusina BrianLusina commented Jan 23, 2026

Describe your change:

Sum two linked lists to form a new one
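
For context, this is the classic add-two-numbers problem: digits are stored in reverse order, so 342 + 465 is represented as 2→4→3 plus 5→6→4 and the sum 807 comes back as 7→0→8. A minimal sketch of the digit-by-digit approach, using the Node type this PR introduces (the snippet is illustrative, not the exact code under review):

from datastructures.linked_lists import Node  # Node type introduced in this PR (illustrative import)

def sum_of_linked_lists(head_one, head_two):
    """Sum two numbers whose digits are stored in reverse order, returning a new list."""
    if head_one is None and head_two is None:
        return None
    dummy_head = Node(0)  # placeholder so there is always a node to append after
    current = dummy_head
    carry = 0
    pointer_one, pointer_two = head_one, head_two
    while pointer_one or pointer_two:
        x = pointer_one.data if pointer_one else 0
        y = pointer_two.data if pointer_two else 0
        current_sum = carry + x + y
        carry = current_sum // 10
        current.next = Node(current_sum % 10)
        current = current.next
        if pointer_one:
            pointer_one = pointer_one.next
        if pointer_two:
            pointer_two = pointer_two.next
    if carry > 0:  # a leftover carry becomes one final digit
        current.next = Node(carry)
    return dummy_head.next

# Usage: 2→4→3 (342) + 5→6→4 (465) = 7→0→8 (807)
a = Node(2, Node(4, Node(3)))
b = Node(5, Node(6, Node(4)))
result = sum_of_linked_lists(a, b)  # result.data, result.next.data, result.next.next.data == 7, 0, 8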

  • Add an algorithm?
  • Fix a bug or typo in an existing algorithm?
  • Documentation change?

Checklist:

  • I have read CONTRIBUTING.md.
  • This pull request is all my own work -- I have not plagiarized.
  • I know that pull requests will not be merged if they fail the automated tests.
  • This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
  • All new Python files are placed inside an existing directory.
  • All filenames are in all lowercase characters with no spaces or dashes.
  • All functions and variable names follow Python naming conventions.
  • All function parameters and return values are annotated with Python type hints.
  • All functions have doctests that pass the automated testing.
  • All new algorithms have a URL in their comments that points to Wikipedia or another similar explanation.
  • If this pull request resolves one or more open issues then the commit message contains Fixes: #{$ISSUE_NO}.

Summary by CodeRabbit

  • New Features

    • Added a generic LinkedList base class and a generic Node type with many common list operations.
    • Added a utility to sum two numbers represented by reversed linked lists.
    • Improved circular list split logic using a middle-node helper.
  • Tests

    • Added unit tests for linked-list utilities and circular-list splitting.
  • Documentation

    • Updated directory links to include new list-related entries.

✏️ Tip: You can customize this high-level summary in your review settings.

@BrianLusina BrianLusina self-assigned this Jan 23, 2026
@BrianLusina BrianLusina added the enhancement, Algorithm, Datastructures, Documentation, and LinkedList labels Jan 23, 2026
@coderabbitai
Contributor

coderabbitai bot commented Jan 23, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Adds a generic LinkedList[T] ABC, extracts Node[T] and T types, restricts module exports, adds linked-list utilities (including sum_of_linked_lists), refactors circular list split logic, adjusts imports, updates DIRECTORY links, and adds tests for utilities and circular split.

Changes

Cohort / File(s) Summary
Core abstractions
datastructures/linked_lists/node.py, datastructures/linked_lists/types.py, datastructures/linked_lists/linked_list.py
Add generic Node[T] and T; introduce LinkedList[T] abstract base with many abstract list operations and several concrete helpers (__iter__, __len__, search, delete_head, get_position, middle_node, reverse_list, swap_nodes, plus cycle wrapper methods).
Public surface / exports
datastructures/linked_lists/__init__.py
Replace large in-file implementations with a minimal export surface: __all__ = ["LinkedList", "Node", "T"].
Utilities & tests
datastructures/linked_lists/linked_list_utils.py, datastructures/linked_lists/test_linked_list_utils.py
Add sum_of_linked_lists(head_one, head_two) and new parameterized tests covering sum and remove-nth-from-end scenarios (including None/empty inputs).
Circular list behavior & tests
datastructures/linked_lists/circular/__init__.py, datastructures/linked_lists/circular/test_circular_linked_list_split.py, datastructures/linked_lists/circular/test_circular_linked_list.py
Add middle_node() and refactor split_list() to use it; add dedicated split_list tests and remove prior split test.
Integration changes
datastructures/linked_lists/singly_linked_list/node.py
Change the relative import to an absolute one (from datastructures.linked_lists import Node); minor docstring wording update.
Docs
DIRECTORY.md
Add new link entries under Circular, Mergeklinkedlists, Doubly Linked List, and Singly Linked List sections.

Sequence Diagram(s)

(omitted — changes are internal library API additions and tests, not multi-component runtime flows)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

Utilities

Poem

🐰 I hopped through nodes and types so spry,

I split a circle and learned to try.
I summed two lists with carrot cheer,
Exports trimmed — the path is clear.
Little rabbit, code and sky, hooray! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 76.12%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The PR title 'feat(data structures, linked lists): sum of linked lists' clearly summarizes the main change: adding a sum-of-linked-lists algorithm to the linked lists data structure module.
  • Description check: ✅ Passed. The PR description follows the required template structure and includes all major sections: change description, algorithm addition confirmation, and a substantially completed checklist with only one optional item unchecked (the 'Fixes' issue reference, which is conditional).

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@github-actions

github-actions bot commented Jan 23, 2026

Warnings

⚠️ ❗ Big PR: Pull Request size seems relatively large. If the Pull Request contains multiple changes, splitting each into a separate PR will make review faster and easier.

Generated by 🚫 dangerJS against 1420817

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 10

🤖 Fix all issues with AI agents
In `@datastructures/linked_lists/linked_list_utils.py`:
- Around line 204-209: The current early-return branches return the original
lists via head_one/head_two which contradicts the docstring promise of returning
a new list; remove those early returns and let the main merge/build logic handle
the case when one input is None, or if keeping early exits is desired update the
docstring to state the original list may be returned. Specifically, in the
function that uses head_one and head_two, eliminate the three if-block returns
and instead, when one head is None, iterate the remaining nodes of the other
list and append new node copies to the result list (or call the existing
node-copying helper used elsewhere) so the returned list is a fresh list;
alternatively, change the docstring to explicitly allow returning
head_one/head_two.

In `@datastructures/linked_lists/linked_list.py`:
- Around line 274-276: The ValueError message uses a literal dollar sign instead
of interpolating the variable (f"Position ${position} out of bounds"), so update
the error string to interpolate position (e.g., use f"Position {position} out of
bounds") in the method that performs the bounds check (the block using "if not 0
<= position <= len(self) - 1" and raising ValueError) so the actual position
value is shown in the error message.
- Around line 138-142: get_position's docstring states positions are 1-indexed
but the implementation currently treats/accepts 0-indexed values; either update
the docstring to state 0-based indexing or enforce 1-based indexing in
get_position (validate position >= 1, treat 1 as head) and adjust the traversal
loop accordingly, returning None or raising a clear error for invalid positions;
reference the get_position method and Node type when making the change so
callers and tests remain consistent.
- Around line 25-39: The iterator in __iter__ omits valid falsy values because
it tests node.data truthiness; change those checks to explicit None checks
(e.g., node.data is not None) and simplify the traversal to start at self.head
and loop while node is not None, yielding node.data when node.data is not None;
update conditions in both the initial head handling and the subsequent while
loop inside __iter__ so values like 0, "", and False are yielded but None is
still skipped.
- Around line 489-504: The method uses k to advance fast_pointer from self.head
but currently allows k == 0 which leads to None dereference; validate k >= 1
(e.g., change the check from k < 0 to k < 1 or explicitly raise ValueError when
k <= 0) and keep the existing upper-bound check (k > len(self)) so the for-loop
"for _ in range(k - 1):" and subsequent fast_pointer/slow_pointer usage are
safe; reference the variables fast_pointer, slow_pointer, self.head and the
parameter k when making the change and update the raised error message to
reflect the k must be >= 1 constraint.

In `@datastructures/linked_lists/node.py`:
- Around line 13-22: The constructor for Node (__init__) currently uses "key or
hash(data)" which overwrites valid falsy keys (0, "", False); change the
assignment to only use hash(data) when key is None (e.g., set self.key = key if
key is not None else hash(data) or use an if key is None branch) so explicit
falsy keys are preserved.
- Around line 30-31: The __eq__ method on Node currently assumes other has a key
and will raise for non-Node objects; update Node.__eq__ to first check the
operand type (use isinstance(other, Node)) and return NotImplemented for
non-Node types, otherwise compare self.key to other.key; reference the
Node.__eq__ method and the key attribute when making the change.

In `@datastructures/linked_lists/test_linked_list_utils.py`:
- Around line 21-30: The empty-data branch in test_linked_list_utils.py
currently calls remove_nth_from_end(head=None, n=n) but then execution falls
through and uses data[0], causing a crash for empty inputs; modify the if not
data block (around the test that builds the list) to guard against fall-through
by returning or using an else so the list-construction code that references
data[0] only runs when data is non-empty (affecting variables/data flow around
remove_nth_from_end, head, and SingleNode).
- Around line 71-84: The empty-input test branches are incorrect: fix the third
branch to call sum_of_linked_lists(head_one=head, head_two=None) instead of
passing head as head_two, and for the both-empty branch return early asserting
None; for the single-list branches convert the returned linked-list nodes into a
Python list by traversing nodes (use the existing
construct_linked_list/iteration pattern or add a small helper) and compare node
values to expected rather than comparing list object to node objects. Ensure you
reference construct_linked_list and sum_of_linked_lists and perform
value-by-value traversal for assertions.

In `@DIRECTORY.md`:
- Around line 420-424: The markdown list has inconsistent indentation causing
MD007 failures: adjust the nested bullets under the linked lists section so each
nested level uses two spaces per indent (e.g., make "Mergeklinkedlists" and its
nested "* [Test Merge]..." use two-space indentation relative to the parent
bullets) and apply the same two-space nesting fix to the other affected entries
referenced (the entries around "Linked List", "Linked List Utils",
"Mergeklinkedlists", "Test Merge", and "Node") so all list levels conform to the
MD007 rule.
🧹 Nitpick comments (2)
datastructures/linked_lists/linked_list_utils.py (1)

216-231: Append carry after the loop to avoid redundant nodes.
Inside-loop carry nodes are immediately overwritten on subsequent iterations. Moving this after the loop is clearer and avoids extra allocations.

♻️ Suggested refactor
     while pointer_one or pointer_two:
         x = pointer_one.data if pointer_one else 0
         y = pointer_two.data if pointer_two else 0
         current_sum = carry + x + y
         carry = current_sum // 10
         current.next = Node(current_sum % 10)
         current = current.next
@@
-        if carry > 0:
-            current.next = Node(carry)
+    if carry > 0:
+        current.next = Node(carry)
datastructures/linked_lists/linked_list.py (1)

49-56: Avoid tuple materialization in __len__.
This currently allocates O(n) memory and depends on __iter__ semantics.

♻️ Proposed refactor
-        return len(tuple(iter(self)))
+        count = 0
+        current = self.head
+        while current:
+            count += 1
+            current = current.next
+        return count

Comment on lines +25 to +39
    def __iter__(self):
        current = self.head
        if not current:
            return

        if current:
            if current.data:
                yield current.data

        if current.next:
            node = current.next
            while node:
                if node.data:
                    yield node.data
                node = node.next
Contributor

⚠️ Potential issue | 🟠 Major

Iterator skips valid falsy data.
Nodes with data like 0, "", or False are silently omitted.

🛠️ Proposed fix
     def __iter__(self):
-        current = self.head
-        if not current:
-            return
-
-        if current:
-            if current.data:
-                yield current.data
-
-        if current.next:
-            node = current.next
-            while node:
-                if node.data:
-                    yield node.data
-                node = node.next
+        current = self.head
+        while current:
+            yield current.data
+            current = current.next
🤖 Prompt for AI Agents
In `@datastructures/linked_lists/linked_list.py` around lines 25 - 39, The
iterator in __iter__ omits valid falsy values because it tests node.data
truthiness; change those checks to explicit None checks (e.g., node.data is not
None) and simplify the traversal to start at self.head and loop while node is
not None, yielding node.data when node.data is not None; update conditions in
both the initial head handling and the subsequent while loop inside __iter__ so
values like 0, "", and False are yielded but None is still skipped.

Comment on lines +138 to +142
    def get_position(self, position: int) -> Union[Node, None]:
        """
        Returns the current node in the linked list if the current position of the node is equal to the position. Assume
        counting starts from 1
        :param position: Used to get the node at the given position
Contributor

⚠️ Potential issue | 🟡 Minor

Docstring says 1‑indexed but method accepts 0.
Please align the documentation or enforce 1‑based indexing.

🛠️ Doc alignment option
-        counting starts from 1
+        counting starts from 0
🤖 Prompt for AI Agents
In `@datastructures/linked_lists/linked_list.py` around lines 138 - 142,
get_position's docstring states positions are 1-indexed but the implementation
currently treats/accepts 0-indexed values; either update the docstring to state
0-based indexing or enforce 1-based indexing in get_position (validate position
>= 1, treat 1 as head) and adjust the traversal loop accordingly, returning None
or raising a clear error for invalid positions; reference the get_position
method and Node type when making the change so callers and tests remain
consistent.

Comment on lines +13 to +22
    def __init__(
        self,
        data: Optional[T] = None,
        next_: Optional["Node[Generic[T]]"] = None,
        key: Any = None,
    ):
        self.data = data
        self.next = next_
        # if no key is provided, the hash of the data becomes the key
        self.key = key or hash(data)
Contributor

⚠️ Potential issue | 🟡 Minor

Preserve explicit falsy keys when defaulting key.
key or hash(data) overwrites valid falsy keys (e.g., 0/""/False). Use a None check instead.

🛠️ Proposed fix
-        # if no key is provided, the hash of the data becomes the key
-        self.key = key or hash(data)
+        # if no key is provided, the hash of the data becomes the key
+        self.key = hash(data) if key is None else key
🤖 Prompt for AI Agents
In `@datastructures/linked_lists/node.py` around lines 13 - 22, The constructor
for Node (__init__) currently uses "key or hash(data)" which overwrites valid
falsy keys (0, "", False); change the assignment to only use hash(data) when key
is None (e.g., set self.key = key if key is not None else hash(data) or use an
if key is None branch) so explicit falsy keys are preserved.

Comment on lines +30 to +31
    def __eq__(self, other: "Node") -> bool:
        return self.key == other.key
Contributor

⚠️ Potential issue | 🟡 Minor

Guard __eq__ against non-Node comparisons.
other may not have a key, which would raise at runtime.

🛠️ Proposed fix
-    def __eq__(self, other: "Node") -> bool:
-        return self.key == other.key
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, Node):
+            return NotImplemented
+        return self.key == other.key
🤖 Prompt for AI Agents
In `@datastructures/linked_lists/node.py` around lines 30 - 31, The __eq__ method
on Node currently assumes other has a key and will raise for non-Node objects;
update Node.__eq__ to first check the operand type (use isinstance(other, Node))
and return NotImplemented for non-Node types, otherwise compare self.key to
other.key; reference the Node.__eq__ method and the key attribute when making
the change.

Comment on lines +21 to +30
        if not data:
            actual = remove_nth_from_end(head=None, n=n)
            self.assertIsNone(actual)

        head = SingleNode(data[0])
        current = head
        for d in data[1:]:
            current.next = SingleNode(d)
            current = current.next

Contributor

⚠️ Potential issue | 🟡 Minor

Guard the empty-data branch to avoid fall-through failures.
If an empty case is added, the test will continue and crash on data[0].

✅ Suggested fix
         if not data:
             actual = remove_nth_from_end(head=None, n=n)
             self.assertIsNone(actual)
+            return
🤖 Prompt for AI Agents
In `@datastructures/linked_lists/test_linked_list_utils.py` around lines 21 - 30,
The empty-data branch in test_linked_list_utils.py currently calls
remove_nth_from_end(head=None, n=n) but then execution falls through and uses
data[0], causing a crash for empty inputs; modify the if not data block (around
the test that builds the list) to guard against fall-through by returning or
using an else so the list-construction code that references data[0] only runs
when data is non-empty (affecting variables/data flow around
remove_nth_from_end, head, and SingleNode).
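
As a point of reference for these tests, remove_nth_from_end itself is outside this diff; a minimal two-pointer sketch of what such a utility commonly looks like, reusing the SingleNode type the test builds with (illustrative only, not the code under review):

def remove_nth_from_end(head, n):
    # Two-pointer pattern: advance `fast` n nodes ahead of `slow`; when `fast`
    # falls off the end, `slow` sits just before the node to remove.
    if head is None:
        return None
    dummy = SingleNode(None)
    dummy.next = head
    slow = fast = dummy
    for _ in range(n):
        if fast.next is None:
            return head  # n exceeds the list length; nothing to remove
        fast = fast.next
    while fast.next is not None:
        slow = slow.next
        fast = fast.next
    slow.next = slow.next.next  # unlink the nth node from the end
    return dummy.next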

Comment on lines +71 to +84
        if not data_one and not data_two:
            actual = sum_of_linked_lists(head_one=None, head_two=None)
            self.assertIsNone(actual)

        if not data_one and data_two:
            head = self.construct_linked_list(data_two)
            actual = sum_of_linked_lists(head_one=None, head_two=head)
            self.assertEqual(expected, actual)

        if data_one and not data_two:
            head = self.construct_linked_list(data_one)
            actual = sum_of_linked_lists(head_one=None, head_two=head)
            self.assertEqual(expected, actual)

Contributor

⚠️ Potential issue | 🟡 Minor

Fix empty-input branches and compare via list traversal.
These branches should return early and compare node contents rather than list-to-node equality.

✅ Suggested fix
         if not data_one and not data_two:
             actual = sum_of_linked_lists(head_one=None, head_two=None)
             self.assertIsNone(actual)
+            return
 
         if not data_one and data_two:
             head = self.construct_linked_list(data_two)
             actual = sum_of_linked_lists(head_one=None, head_two=head)
-            self.assertEqual(expected, actual)
+            self.assert_data(actual, expected)
+            return
 
         if data_one and not data_two:
             head = self.construct_linked_list(data_one)
             actual = sum_of_linked_lists(head_one=None, head_two=head)
-            self.assertEqual(expected, actual)
+            self.assert_data(actual, expected)
+            return
🤖 Prompt for AI Agents
In `@datastructures/linked_lists/test_linked_list_utils.py` around lines 71 - 84,
The empty-input test branches are incorrect: fix the third branch to call
sum_of_linked_lists(head_one=head, head_two=None) instead of passing head as
head_two, and for the both-empty branch return early asserting None; for the
single-list branches convert the returned linked-list nodes into a Python list
by traversing nodes (use the existing construct_linked_list/iteration pattern or
add a small helper) and compare node values to expected rather than comparing
list object to node objects. Ensure you reference construct_linked_list and
sum_of_linked_lists and perform value-by-value traversal for assertions.

BrianLusina and others added 5 commits January 23, 2026 13:46
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
DIRECTORY.md (1)

415-445: List indentation inconsistent with markdownlint expectations.

Static analysis flags MD007 (unordered list indentation) on these lines. The file appears to use 4-space indentation while markdownlint expects 2-space nesting. Since this file is likely auto-generated, you may want to either:

  1. Configure markdownlint to accept 4-space indentation, or
  2. Regenerate the file with 2-space indentation

This is a low-priority documentation formatting issue.

datastructures/linked_lists/circular/__init__.py (1)

201-232: Single-node list handling doesn't match docstring.

The docstring states: "If the size is 1, then the first portion of the tuple, at index 0 will be the head of this circular linked list, while the second portion will be None."

However, for a single-node list, middle_node() returns self.head, and head_two = middle_node.next also equals self.head (since a single circular node points to itself). Both halves will reference the same node rather than returning (head, None).

🐛 Proposed fix
     def split_list(self) -> Optional[Tuple[CircularNode, Optional[CircularNode]]]:
         # If there is no head node, there is nothing more to do here
         if not self.head:
             return None

+        # If there's only one node, return it as the first half with None as second
+        if self.head.next == self.head:
+            return self.head, None
+
         # Get the middle of the linked list
         middle_node = self.middle_node()
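
For context, the middle_node() helper that split_list() now delegates to is typically a slow/fast-pointer walk; a minimal sketch for a circular list (illustrative, and not necessarily the exact helper added in this PR):

def middle_node(self):
    # Slow/fast pointers over a circular list: `fast` advances two nodes per
    # step and stops once it is about to wrap back to the head, leaving
    # `slow` at the last node of the first half.
    if not self.head:
        return None
    slow = fast = self.head
    while fast.next is not self.head and fast.next.next is not self.head:
        slow = slow.next
        fast = fast.next.next
    return slow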
🤖 Fix all issues with AI agents
In `@datastructures/linked_lists/circular/test_circular_linked_list_split.py`:
- Around line 8-16: Add a test case for the single-element list to
CIRCULAR_LINKED_LIST_SPLIT_LIST_TEST_CASES (e.g., ([7], ([7], None))) and update
the test assertion logic that compares split_list results so it accepts None as
the expected second half; specifically adjust the test helper/compare code to
treat an expected None as meaning the returned second head must be None (or
convert actual halves to lists only when they are not None) and keep using
split_list as the target function to validate.

In `@datastructures/linked_lists/linked_list_utils.py`:
- Line 202: Fix the typo in the docstring in linked_list_utils.py by replacing
"crewted" with "created" in the description for the Node (the "Node: head node
of newly crewted linked list" docstring) so it reads "Node: head node of newly
created linked list".
♻️ Duplicate comments (2)
datastructures/linked_lists/linked_list.py (2)

25-39: Iterator skips valid falsy data.

Nodes with data like 0, "", or False are silently omitted because the code checks if current.data: and if node.data: for truthiness rather than checking for None.

🛠️ Proposed fix
     def __iter__(self):
         current = self.head
-        if not current:
-            return
-
-        if current:
-            if current.data:
-                yield current.data
-
-        if current.next:
-            node = current.next
-            while node:
-                if node.data:
-                    yield node.data
-                node = node.next
+        while current:
+            yield current.data
+            current = current.next

138-162: Docstring and implementation mismatch for position indexing.

The docstring states "counting starts from 1" but the code handles position == 0 specially (lines 153-154) and returns the head, suggesting 0-based indexing is also supported. This creates ambiguity for callers.

Either update the docstring to clarify that both 0 and 1-based access are supported, or enforce consistent 1-based indexing by rejecting position < 1.

📝 Option 1: Update docstring for clarity
-        counting starts from 1
+        counting starts from 0 (position 0 returns the head)
🧹 Nitpick comments (2)
datastructures/linked_lists/linked_list_utils.py (1)

225-226: Move carry handling outside the loop for clarity and efficiency.

The carry check inside the loop creates intermediate nodes that may be overwritten on the next iteration. While functionally correct, this is wasteful and less readable than the standard pattern.

♻️ Suggested refactor
     while pointer_one or pointer_two:
         x = pointer_one.data if pointer_one else 0
         y = pointer_two.data if pointer_two else 0
         current_sum = carry + x + y
         carry = current_sum // 10
         current.next = Node(current_sum % 10)
         current = current.next

         if pointer_one:
             pointer_one = pointer_one.next
         if pointer_two:
             pointer_two = pointer_two.next

-        if carry > 0:
-            current.next = Node(carry)
+    if carry > 0:
+        current.next = Node(carry)

     return dummy_head.next
datastructures/linked_lists/linked_list.py (1)

20-20: Remove Python 2 metaclass syntax.

__metaclass__ is Python 2 syntax and is ignored in Python 3, where metaclasses are specified via the metaclass keyword argument in the class definition (or by inheriting from abc.ABC). As written, ABCMeta is therefore not actually applied and the @abstractmethod decorators are not enforced, so remove this line and have the class inherit from ABC (or pass metaclass=ABCMeta) instead.

-    __metaclass__ = ABCMeta
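
A minimal sketch of the Python 3 form, with append standing in as an example abstract operation (illustrative, not part of the review diff):

from abc import ABC, abstractmethod
from typing import Generic, TypeVar

T = TypeVar("T")

class LinkedList(Generic[T], ABC):  # ABC supplies ABCMeta, so @abstractmethod is enforced
    @abstractmethod
    def append(self, data: T) -> None:
        ...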

Comment on lines +8 to +16
CIRCULAR_LINKED_LIST_SPLIT_LIST_TEST_CASES = [
    ([1, 2, 3, 4, 5, 6], ([1, 2, 3], [4, 5, 6])),
    ([], ([], [])),
    ([1, 5, 7], ([1, 5], [7])),
    ([2, 6, 1, 5], ([2, 6], [1, 5])),
    ([3, 1, 4, 2, 5], ([3, 1, 4], [2, 5])),
    ([8, 10, 12, 14, 16, 18], ([8, 10, 12], [14, 16, 18])),
    ([9, 10], ([9], [10])),
]
Contributor

⚠️ Potential issue | 🟡 Minor

Add test case for single-element list.

The current test cases don't cover a single-element list, which would expose the issue where split_list returns both halves pointing to the same node instead of (head, None) as documented.

 CIRCULAR_LINKED_LIST_SPLIT_LIST_TEST_CASES = [
     ([1, 2, 3, 4, 5, 6], ([1, 2, 3], [4, 5, 6])),
     ([], ([], [])),
+    ([1], ([1], None)),  # Single element should return (head, None)
     ([1, 5, 7], ([1, 5], [7])),
     ([2, 6, 1, 5], ([2, 6], [1, 5])),
     ([3, 1, 4, 2, 5], ([3, 1, 4], [2, 5])),
     ([8, 10, 12, 14, 16, 18], ([8, 10, 12], [14, 16, 18])),
     ([9, 10], ([9], [10])),
 ]

Note: The test logic would need to be updated to handle None as the expected second half.

🤖 Prompt for AI Agents
In `@datastructures/linked_lists/circular/test_circular_linked_list_split.py`
around lines 8 - 16, Add a test case for the single-element list to
CIRCULAR_LINKED_LIST_SPLIT_LIST_TEST_CASES (e.g., ([7], ([7], None))) and update
the test assertion logic that compares split_list results so it accepts None as
the expected second half; specifically adjust the test helper/compare code to
treat an expected None as meaning the returned second head must be None (or
convert actual halves to lists only when they are not None) and keep using
split_list as the target function to validate.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@BrianLusina BrianLusina merged commit 039a999 into main Jan 23, 2026
4 of 7 checks passed
