[do not merge] feat: Span streaming & new span API #1087
Triggered via pull request
February 26, 2026 09:05
sentrivana
synchronize
#5317
Status
Success
Total duration
19s
Artifacts
–
changelog-preview.yml
on: pull_request_target
changelog-preview
/
preview
15s
Annotations
18 errors and 15 warnings
|
get_start_span_function returns function with incompatible signature in streaming mode:
sentry_sdk/ai/utils.py#L542
When streaming mode is enabled, `get_start_span_function()` returns `sentry_sdk.traces.start_span`, which only accepts `name`, `attributes`, and `parent_span` parameters. However, all callers pass keyword arguments such as `op=`, `name=`, and `origin=` (e.g., `get_start_span_function()(op=OP.GEN_AI_CHAT, name="chat", origin=...)`), so streaming mode will raise `TypeError: got an unexpected keyword argument 'op'` at runtime.
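A minimal sketch of the mismatch, using stand-in functions (the signatures below are assumptions beyond what the annotation describes; a real fix would need to match the SDK's actual API):

```python
# Streaming-mode signature as described: no `op` or `origin` keyword arguments.
def streamed_start_span(*, name=None, attributes=None, parent_span=None):
    return {"name": name, "attributes": dict(attributes or {})}

def get_start_span_function(streaming_enabled):
    return streamed_start_span  # simplified: always the streaming variant

# Callers still pass legacy keywords, which raises at call time:
mismatch = False
try:
    get_start_span_function(True)(op="gen_ai.chat", name="chat")
except TypeError:
    mismatch = True
assert mismatch

# A hypothetical compatibility shim could fold legacy kwargs into attributes:
def start_span_compat(*, op=None, origin=None, attributes=None, **kwargs):
    attributes = dict(attributes or {})
    if op is not None:
        attributes["sentry.op"] = op          # attribute keys are assumptions
    if origin is not None:
        attributes["sentry.origin"] = origin
    return streamed_start_span(attributes=attributes, **kwargs)

span = start_span_compat(op="gen_ai.chat", name="chat")
assert span["attributes"]["sentry.op"] == "gen_ai.chat"
```

A shim like this would let call sites keep their legacy keywords until they are migrated.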
|
|
[ME3-Y3M] get_start_span_function returns function with incompatible signature in streaming mode (additional location):
sentry_sdk/integrations/celery/__init__.py#L104
See the primary annotation above for the full description; the same incompatible call signature occurs at this call site.
|
|
Spans leak if Redis command raises an exception:
sentry_sdk/integrations/redis/_async_common.py#L135
In `_sentry_execute_command`, the spans are manually entered via `__enter__()` but `__exit__()` is only called after a successful `await old_execute_command()`. If the Redis command raises an exception, `db_span.__exit__()` (line 142) and `cache_span.__exit__()` (line 146) are never reached, leaving spans unclosed. The sync version in `_sync_common.py` correctly uses `try/finally` to ensure spans are always closed.
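A sketch of the leak and the `try/finally` fix, using a stand-in span object rather than the actual Redis integration code:

```python
class FakeSpan:
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.closed = True

def execute_leaky(span, command):
    span.__enter__()
    result = command()          # if this raises, __exit__ is never reached
    span.__exit__(None, None, None)
    return result

def execute_safe(span, command):
    span.__enter__()
    try:
        return command()
    finally:                    # always closes the span, success or failure
        span.__exit__(None, None, None)

leaky = FakeSpan()
try:
    execute_leaky(leaky, lambda: 1 / 0)
except ZeroDivisionError:
    pass
assert not leaky.closed         # span leaked

safe = FakeSpan()
try:
    execute_safe(safe, lambda: 1 / 0)
except ZeroDivisionError:
    pass
assert safe.closed              # span closed despite the exception
```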
|
|
[UQ5-MK9] Spans leak if Redis command raises an exception (additional location):
sentry_sdk/integrations/redis/_sync_common.py#L148
See the primary annotation above for the full description of the unclosed-span pattern at this location.
|
|
StreamedSpan created without calling start() will fail on finish():
sentry_sdk/integrations/strawberry.py#L191
In span streaming mode, `sentry_sdk.traces.start_span()` creates a `StreamedSpan` but does not automatically start it. The code creates the span at lines 192-194 but never calls `.start()` or uses a `with` statement. When `self.graphql_span.finish()` is called at line 234, it will attempt to access `_context_manager_state`, which was never set, causing an `AttributeError` (silently caught by `capture_internal_exceptions()`), and the span will not be properly ended or sent.
|
|
[WF7-US3] StreamedSpan created without calling start() will fail on finish() (additional location):
sentry_sdk/integrations/strawberry.py#L238
See the primary annotation above for the full description; the same never-started span is finished at this location.
|
|
[WF7-US3] StreamedSpan created without calling start() will fail on finish() (additional location):
sentry_sdk/integrations/strawberry.py#L260
See the primary annotation above for the full description; the same never-started span is finished at this location.
|
|
UnboundLocalError when Redis command raises exception:
sentry_sdk/integrations/redis/_sync_common.py#L148
In the `finally` block of `sentry_patched_execute_command`, if `old_execute_command` raises an exception, the variable `value` is never assigned. The code then tries to pass `value` to `_set_cache_data()` on line 148, which will raise `UnboundLocalError: local variable 'value' referenced before assignment`. This secondary exception will either mask the original Redis error or cause unexpected behavior, breaking proper error propagation.
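The failure mode and one possible fix can be sketched with stand-in code (`cache_data` here plays the role of `_set_cache_data()`):

```python
def execute_broken(command):
    try:
        value = command()
    finally:
        cache_data.append(value)   # UnboundLocalError when command() raised
    return value

def execute_fixed(command):
    value = None                   # pre-bind so the finally block is safe
    try:
        value = command()
        return value
    finally:
        if value is not None:
            cache_data.append(value)

cache_data = []

try:
    execute_broken(lambda: 1 / 0)
except ZeroDivisionError:
    masked = False                 # the real Redis error propagated
except UnboundLocalError:
    masked = True                  # ...but it is masked instead
assert masked

try:
    execute_fixed(lambda: 1 / 0)
except ZeroDivisionError:
    propagated = True              # original error reaches the caller
assert propagated
assert execute_fixed(lambda: "OK") == "OK"
assert cache_data == ["OK"]
```

The exception raised inside the `finally` block replaces the in-flight exception, which is why the original Redis error disappears.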
|
|
[U5T-7MR] UnboundLocalError when Redis command raises exception (additional location):
sentry_sdk/integrations/redis/_async_common.py#L135
See the primary annotation above for the full description; the same unbound `value` problem applies at this location.
|
|
StreamedSpan never started before finish() in on_operation:
sentry_sdk/integrations/strawberry.py#L192
In span streaming mode, `sentry_sdk.traces.start_span()` creates a span but doesn't activate it - the span must be started either via context manager (`with span:`) or explicit `.start()` call. The code creates `self.graphql_span` without starting it, then calls `.finish()`. This causes an AttributeError (caught internally) because `_context_manager_state` is never set, resulting in spans never being sent to Sentry.
|
|
[FGT-WAB] StreamedSpan never started before finish() in on_operation (additional location):
sentry_sdk/integrations/strawberry.py#L239
See the primary annotation above for the full description; the same never-started span is finished at this location.
|
|
[FGT-WAB] StreamedSpan never started before finish() in on_operation (additional location):
sentry_sdk/integrations/strawberry.py#L261
See the primary annotation above for the full description; the same never-started span is finished at this location.
|
|
UnboundLocalError when Redis command raises an exception:
sentry_sdk/integrations/redis/_sync_common.py#L148
When `old_execute_command` raises an exception, the variable `value` is never assigned, but the `finally` block on line 148 attempts to use it in `_set_cache_data(cache_span, self, cache_properties, value)`. This will raise an `UnboundLocalError: local variable 'value' referenced before assignment`, masking the original Redis exception and breaking error handling.
|
|
[3YJ-L2U] UnboundLocalError when Redis command raises an exception (additional location):
sentry_sdk/integrations/redis/_async_common.py#L120
See the primary annotation above for the full description; the same unbound `value` problem applies at this location.
|
|
StreamedSpan created but never started - spans will be silently discarded:
sentry_sdk/integrations/strawberry.py#L192
When span streaming is enabled, `sentry_sdk.traces.start_span()` returns a `StreamedSpan` that requires `.start()` or a `with` block to properly initialize. In `on_operation()`, the span is created (lines 192-194) but `.start()` is never called. When `self.graphql_span.finish()` is later called (line 234), `StreamedSpan.__exit__()` tries to access `_context_manager_state`, which was never set by `__enter__()`. This raises an `AttributeError` that is silently caught by `capture_internal_exceptions()`, preventing the span from being sent to Sentry. Compare with `graphene.py`, which correctly calls `.start()` after creating the streaming span.
|
|
[DXZ-RF5] StreamedSpan created but never started - spans will be silently discarded (additional location):
sentry_sdk/integrations/strawberry.py#L239
See the primary annotation above for the full description; the same never-started span is finished at this location.
|
|
[DXZ-RF5] StreamedSpan created but never started - spans will be silently discarded (additional location):
sentry_sdk/integrations/strawberry.py#L261
See the primary annotation above for the full description; the same never-started span is finished at this location.
|
|
StreamedSpan never started/activated in on_operation, on_validate, and on_parse methods:
sentry_sdk/integrations/strawberry.py#L191
When span streaming is enabled, `sentry_sdk.traces.start_span()` creates a `StreamedSpan` but does NOT automatically activate it. The span must be explicitly started via `.start()` or used as a context manager (`with span:`). In `on_operation()`, `on_validate()`, and `on_parse()`, the created `StreamedSpan` objects are never started/entered, causing: (1) spans to never be set as the current scope's span, (2) sampling decisions to never be made for segment spans, (3) child spans to not properly inherit from parent spans. Compare with `graphene.py` line 158 which correctly calls `_graphql_span.start()` after creating the span.
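The described failure mode can be reproduced with a stand-in class (a sketch, not the real `StreamedSpan` implementation): `__exit__` depends on state that only `__enter__` sets, so finishing a never-started span raises.

```python
class StreamedSpanSketch:
    def __enter__(self):
        self._context_manager_state = object()   # only set on start
        return self

    def start(self):
        return self.__enter__()

    def __exit__(self, exc_type, exc, tb):
        # Mirrors the failure: __exit__ needs state that __enter__ set.
        self._context_manager_state
        del self._context_manager_state
        return False

    def finish(self):
        self.__exit__(None, None, None)

never_started = StreamedSpanSketch()             # created, never started
try:
    never_started.finish()
    finish_ok = True
except AttributeError:
    finish_ok = False
assert finish_ok is False       # span would be silently lost in the integration

started = StreamedSpanSketch()
started.start()                  # explicit start, as graphene.py does
started.finish()                 # now succeeds
```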
|
|
Missing test coverage for span streaming mode in Starlette middleware spans:
sentry_sdk/integrations/starlette.py#L175
The middleware span creation code now branches on `span_streaming` mode, using `set_attribute()` for StreamedSpan vs `set_tag()` for legacy Span. However, existing tests (e.g., `test_middleware_spans`) only verify the legacy mode by checking `span["tags"]["starlette.middleware_name"]`. There are no tests covering the span streaming path where attributes are set via `set_attribute()`. This gap could allow regressions in the new streaming functionality to go undetected.
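A hypothetical test sketch for the missing path (the branch shape and the attribute key mirror the annotation above; the `RecordingSpan` helper and `instrument_middleware` function are stand-ins, not the SDK's actual test fixtures):

```python
class RecordingSpan:
    def __init__(self):
        self.attributes = {}
        self.tags = {}
    def set_attribute(self, key, value):
        self.attributes[key] = value
    def set_tag(self, key, value):
        self.tags[key] = value

def instrument_middleware(span, middleware_name, span_streaming):
    # Mirrors the branch described above: attributes in streaming mode,
    # tags in legacy mode.
    if span_streaming:
        span.set_attribute("starlette.middleware_name", middleware_name)
    else:
        span.set_tag("starlette.middleware_name", middleware_name)

def test_middleware_span_streaming():
    span = RecordingSpan()
    instrument_middleware(span, "ServerErrorMiddleware", span_streaming=True)
    assert span.attributes["starlette.middleware_name"] == "ServerErrorMiddleware"
    assert span.tags == {}

test_middleware_span_streaming()
```

A real regression test would drive the Starlette app as `test_middleware_spans` does, asserting on the streamed span's attributes instead of `span["tags"]`.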
|
|
Setting MAX_BEFORE_DROP equal to MAX_BEFORE_FLUSH causes premature span dropping under load:
sentry_sdk/_span_batcher.py#L19
When MAX_BEFORE_DROP (1000) equals MAX_BEFORE_FLUSH (1000), there is no buffer headroom between signaling a flush and dropping spans. Since flush runs asynchronously in a separate thread, spans arriving after flush is signaled but before the buffer is cleared will be dropped. The previous value of MAX_BEFORE_DROP was 5000, which provided adequate headroom.
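A toy model of the thresholds: with `MAX_BEFORE_DROP == MAX_BEFORE_FLUSH` there is zero headroom, so any span arriving before the asynchronous flush drains the buffer is dropped immediately.

```python
MAX_BEFORE_FLUSH = 1000
DROP_TIGHT = 1000    # current value: no headroom
DROP_ROOMY = 5000    # previous value

def accepts_new_span(buffered, max_before_drop):
    # Drop check as described: spans are refused once the buffer hits the cap.
    return buffered < max_before_drop

# Buffer has just hit the flush threshold; the flush thread has not run yet:
assert accepts_new_span(MAX_BEFORE_FLUSH, DROP_TIGHT) is False   # dropped
assert accepts_new_span(MAX_BEFORE_FLUSH, DROP_ROOMY) is True    # buffered
```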
|
|
StreamedSpan leaks when Anthropic API call raises an exception in streaming mode:
sentry_sdk/integrations/anthropic.py#L572
The code change `isinstance(span, Span)` excludes `StreamedSpan` from the error cleanup path. When streaming mode is enabled and an Anthropic API call raises an exception, the span is opened via `__enter__()` but never closed via `__exit__()`. The generator is not completed due to the exception, and the finally block's `isinstance(span, Span)` check fails for `StreamedSpan`. This causes the span to leak without being properly finalized.
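The guard problem in miniature, with stand-in classes (the actual fix would depend on the SDK's real class hierarchy, e.g. widening the check or introducing a shared base class):

```python
class Span:
    closed = False

class StreamedSpan:          # deliberately NOT a subclass of Span
    closed = False

def cleanup_narrow(span):
    if isinstance(span, Span):                 # silently skips StreamedSpan
        span.closed = True

def cleanup_broad(span):
    if isinstance(span, (Span, StreamedSpan)): # covers both span kinds
        span.closed = True

leaked = StreamedSpan()
cleanup_narrow(leaked)
assert leaked.closed is False                  # the leak described above

fixed = StreamedSpan()
cleanup_broad(fixed)
assert fixed.closed is True
```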
|
|
[YHZ-QAB] StreamedSpan leaks when Anthropic API call raises an exception in streaming mode (additional location):
sentry_sdk/integrations/anthropic.py#L610
See the primary annotation above for the full description; the same `isinstance(span, Span)` guard skips `StreamedSpan` cleanup at this location.
|
|
Control flow exceptions incorrectly marked as ERROR for StreamedSpan:
sentry_sdk/integrations/celery/__init__.py#L104
The `_set_status` function always sets `SpanStatus.ERROR` for StreamedSpan regardless of the actual status parameter. When called with 'aborted' (for Celery control flow exceptions like Retry, Ignore, Reject), the span is incorrectly marked as an error. These exceptions represent intentional control flow, not actual failures, and should likely be marked as OK or handled differently to avoid polluting error metrics.
|
|
Early return in set_transaction_name skips setting _transaction_info['source']:
sentry_sdk/scope.py#L829
In `set_transaction_name`, when `self._span` is a `NoOpStreamedSpan`, the function returns early at line 829. This causes the code at lines 841-842 (`if source: self._transaction_info["source"] = source`) to be skipped. In the original code, `_transaction_info["source"]` was always set when `source` was provided, regardless of the span state. This inconsistency means transaction source information is lost when using NoOpStreamedSpan, which could affect downstream processing of events.
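One way to preserve the old behavior is to record the source before any early return; sketched here with stand-in classes (not the real `Scope` implementation):

```python
class NoOpStreamedSpan:
    pass

class SpanSketch:
    name = None

class ScopeSketch:
    def __init__(self, span):
        self._span = span
        self._transaction_info = {}

    def set_transaction_name(self, name, source=None):
        # Record the source unconditionally, matching the original code.
        if source:
            self._transaction_info["source"] = source
        if isinstance(self._span, NoOpStreamedSpan):
            return                     # early return is now safe
        self._span.name = name

noop_scope = ScopeSketch(NoOpStreamedSpan())
noop_scope.set_transaction_name("GET /users", source="route")
assert noop_scope._transaction_info["source"] == "route"   # no longer lost

real_scope = ScopeSketch(SpanSketch())
real_scope.set_transaction_name("GET /users", source="route")
assert real_scope._span.name == "GET /users"
```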
|
|
start_streamed_span crashes with AttributeError when legacy Span is the active span:
sentry_sdk/scope.py#L1282
In `start_streamed_span`, after reassigning `parent_span` at line 1247 from `self.span or self.get_current_scope().span`, the code at lines 1279-1282 accesses `parent_span.segment`. However, if the active span is a legacy `Span` (from `sentry_sdk.tracing.Span`), this will raise an `AttributeError` because the legacy `Span` class doesn't have a `segment` attribute. This can occur in mixed-mode scenarios where integrations create legacy spans but the new streaming API is used.
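A defensive option for the mixed-mode case is a `getattr` fallback; sketched with stand-in classes (whether `None` is the right fallback for the real code is an assumption):

```python
class LegacySpan:          # no `segment` attribute, like sentry_sdk.tracing.Span
    pass

class StreamedSpanSketch:
    def __init__(self, segment=None):
        self.segment = segment

def resolve_segment(parent_span):
    # getattr with a default avoids the mixed-mode AttributeError described above
    return getattr(parent_span, "segment", None)

assert resolve_segment(LegacySpan()) is None
seg = object()
assert resolve_segment(StreamedSpanSketch(segment=seg)) is seg
```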
|
|
Spans are serialized twice: once for size estimation and once during flush:
sentry_sdk/_span_batcher.py#L77
The `_estimate_size()` method calls `_to_transport_format()` on every span added (unless count-based flush triggers first), and then `_flush()` calls `_to_transport_format()` again for all buffered spans. This means each span's attributes are serialized twice via `serialize_attribute()`, which iterates through all attribute values and creates dictionary structures. For spans with many attributes, this duplication adds unnecessary CPU overhead in a performance-sensitive code path.
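One approach is to serialize once on add and reuse the cached form at flush time; a sketch with a stand-in batcher (the counter exists only to demonstrate the saving):

```python
class BatcherSketch:
    def __init__(self):
        self._buffer = []              # already-serialized spans
        self.serialize_calls = 0

    def _to_transport_format(self, span):
        self.serialize_calls += 1      # instrumented to show call count
        return {"name": span}

    def add(self, span):
        serialized = self._to_transport_format(span)   # serialize once
        self._buffer.append(serialized)
        return len(str(serialized))                    # size estimate reuses it

    def flush(self):
        out = self._buffer                             # no re-serialization
        self._buffer = []
        return out

b = BatcherSketch()
b.add("span-a")
b.add("span-b")
payload = b.flush()
assert b.serialize_calls == 2          # one serialization per span, not two
assert payload == [{"name": "span-a"}, {"name": "span-b"}]
```

The trade-off is holding the serialized form in memory between add and flush, which is usually cheaper than re-walking every attribute.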
|
|
StreamedSpan status always set to ERROR, ignoring aborted status for control flow exceptions:
sentry_sdk/integrations/celery/__init__.py#L104
The modified `_set_status` function always sets `SpanStatus.ERROR` for `StreamedSpan` regardless of the `status` argument. This causes Celery control flow exceptions (Retry, Ignore, Reject) that should be marked as "aborted" to be incorrectly marked as errors. While the new `SpanStatus` enum only has OK and ERROR values, hardcoding ERROR loses the semantic distinction between intentional control flow and actual errors, potentially causing misleading trace data.
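A sketch of honoring the `status` argument instead of hardcoding `ERROR` (the enum values and the mapping of "aborted" to OK are assumptions for illustration):

```python
import enum

class SpanStatus(enum.Enum):
    OK = "ok"
    ERROR = "error"

def set_status_hardcoded(span, status):
    span["status"] = SpanStatus.ERROR          # ignores `status` entirely

def set_status_mapped(span, status):
    # Control-flow outcomes like "aborted" map to OK rather than ERROR,
    # so Retry/Ignore/Reject do not pollute error metrics.
    control_flow = {"aborted", "ok"}
    span["status"] = SpanStatus.OK if status in control_flow else SpanStatus.ERROR

span = {}
set_status_hardcoded(span, "aborted")
assert span["status"] is SpanStatus.ERROR      # current, misleading behavior

set_status_mapped(span, "aborted")
assert span["status"] is SpanStatus.OK

set_status_mapped(span, "internal_error")
assert span["status"] is SpanStatus.ERROR
```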
|
|
NoOpStreamedSpan missing scope parameter prevents span context manager from working correctly:
sentry_sdk/scope.py#L1273
At line 1273, `NoOpStreamedSpan()` is created without the `scope` parameter, unlike lines 1237 and 1255 which pass `scope=self`. When `_scope` is None in NoOpStreamedSpan, the `__enter__` method returns early without setting `scope.span = self`, and `__exit__` won't restore the old span. This breaks context manager behavior for ignored child spans - the ignored span won't be tracked in the scope, causing inconsistent span hierarchy when nested spans are used.
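Why the missing `scope` argument matters, sketched with stand-in classes: without a scope, `__enter__` cannot register the span on it, and `__exit__` cannot restore the previous one.

```python
class ScopeSketch:
    span = None

class NoOpStreamedSpanSketch:
    def __init__(self, scope=None):
        self._scope = scope

    def __enter__(self):
        if self._scope is None:
            return self                # early return: span never tracked
        self._old_span = self._scope.span
        self._scope.span = self
        return self

    def __exit__(self, exc_type, exc, tb):
        if self._scope is not None:
            self._scope.span = self._old_span

scope = ScopeSketch()
with NoOpStreamedSpanSketch() as orphan:      # scope omitted, as described above
    assert scope.span is None                 # span not visible to the scope

with NoOpStreamedSpanSketch(scope=scope) as tracked:
    assert scope.span is tracked              # correct context-manager behavior
assert scope.span is None                     # old span restored on exit
```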
|
|
Docstring references non-existent `op` parameter causing potential runtime errors:
sentry_sdk/traces.py#L807
The docstring example references `@trace(op="custom")` but the function signature only accepts `name` and `attributes` parameters. Users following this documentation will get a `TypeError: trace() got an unexpected keyword argument 'op'` at runtime. This is a backwards compatibility concern as users migrating from the old API may expect the `op` parameter to work.
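The documented-vs-actual signature, sketched with a stand-in decorator (this is not the real `trace` implementation; it only reproduces the `name`/`attributes` signature the annotation describes):

```python
import functools

def trace(name=None, attributes=None):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper._span_name = name
        wrapper._span_attributes = attributes or {}
        return wrapper
    return decorator

op_supported = None
try:
    @trace(op="custom")          # the docstring's example
    def traced_op():
        pass
    op_supported = True
except TypeError:
    op_supported = False         # unexpected keyword argument 'op'
assert op_supported is False

# The supported channel is `attributes`; a migration note in the docstring
# could point old `op=` users here.
@trace(name="custom-work", attributes={"custom.op": "custom"})
def traced_attrs():
    return 42

assert traced_attrs() == 42
assert traced_attrs._span_attributes["custom.op"] == "custom"
```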
|
|
StreamedSpan instances with INTERNAL_ERROR status are never cleaned up:
sentry_sdk/integrations/anthropic.py#L572
The code checks `isinstance(span, Span)` before calling `span.__exit__()` for error cleanup. However, when streaming mode is enabled (`_experiments={"trace_lifecycle": "stream"}`), `get_start_span_function()` returns a function that creates `StreamedSpan` instances instead of `Span`. Since `StreamedSpan` is not a subclass of `Span`, this check will always be False for streamed spans, causing the error cleanup code to be silently skipped. This may lead to spans with `INTERNAL_ERROR` status not being properly closed when using streaming mode.
|
|
[JJB-QWM] StreamedSpan instances with INTERNAL_ERROR status are never cleaned up (additional location):
sentry_sdk/integrations/anthropic.py#L610
See the primary annotation above for the full description; the same `isinstance(span, Span)` guard skips `StreamedSpan` cleanup at this location.
|
|
Spans not closed on exception in async Redis execute_command:
sentry_sdk/integrations/redis/_async_common.py#L135
The `_sentry_execute_command` function in the async Redis client uses manual `__enter__()` and `__exit__()` calls for spans, but unlike the sync version in `_sync_common.py`, it lacks a `try/finally` block around `await old_execute_command()`. If the Redis command raises an exception, `db_span.__exit__()` and `cache_span.__exit__()` are never called, leaving spans open. This causes span leakage and incorrect tracing data. The sync version correctly uses `try/finally` to ensure spans are always closed.
|
|
[485-8WU] Spans not closed on exception in async Redis execute_command (additional location):
sentry_sdk/integrations/redis/_sync_common.py#L143
See the primary annotation above for the full description of the unclosed-span pattern at this location.
|