
switch /wave/stream-file to use new modern streams (w/ flow control) and fix big ttfb streaming bug #3084

Merged
sawka merged 6 commits into main from sawka/streamfile
Mar 19, 2026

Conversation

@sawka
Member

@sawka sawka commented Mar 19, 2026

No description provided.

@coderabbitai
Contributor

coderabbitai bot commented Mar 19, 2026

Walkthrough

Adds end-to-end file streaming with optional byte-range support. New RPC methods FileStreamCommand and RemoteFileStreamCommand were introduced across RPC types, client, server, and remote implementations. New payload types CommandFileStreamData and CommandRemoteFileStreamData were added. A byte-range type and parser (ByteRangeType, ParseByteRange) were introduced, along with a fix making the end offset inclusive. Web server streaming was refactored to a reader-based flow with HTTP Range header handling and routing/ordering changes. Remote streaming handlers gained panic guards and were updated to use the shared byte-range type. Frontend streaming preview tags were simplified, and a minor const change was made in the terminal write path.
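The inclusive-end semantics called out above (HTTP "bytes=0-999" covers 1000 bytes) can be sketched as follows. The struct mirrors the ByteRangeType described in the walkthrough, but the field names here are assumptions, not the actual definition in pkg/util/fileutil:

```go
package main

import "fmt"

// Hypothetical mirror of the ByteRangeType from the walkthrough; the real
// struct may differ.
type ByteRangeType struct {
	All     bool
	Start   int64
	End     int64 // inclusive, per HTTP Range semantics
	OpenEnd bool
}

// rangeLength returns the byte count of a closed inclusive range.
// "bytes=0-999" means 1000 bytes, hence the +1.
func rangeLength(br ByteRangeType) int64 {
	return br.End - br.Start + 1
}

func main() {
	fmt.Println(rangeLength(ByteRangeType{Start: 0, End: 999}))   // 1000
	fmt.Println(rangeLength(ByteRangeType{Start: 100, End: 100})) // 1 (single byte)
}
```

Forgetting the +1 is exactly the kind of off-by-one the "adjusted inclusive end offset fix" addresses.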

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 36.84%, below the required 80.00% threshold. | Write docstrings for the functions that are missing them. |
| Description check | ❓ Inconclusive | No description was provided by the author, so this check cannot assess whether the content relates to the changeset. | Add a pull request description explaining the motivation, implementation approach, and testing performed. |
✅ Passed checks (1 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately describes the primary change: refactoring the stream-file endpoint to use modern streams with flow control and fixing a TTFB streaming bug. |


@cloudflare-workers-and-pages

Deploying waveterm with Cloudflare Pages

Latest commit: 7162bc0
Status: ✅  Deploy successful!
Preview URL: https://ee93590d.waveterm.pages.dev
Branch Preview URL: https://sawka-streamfile.waveterm.pages.dev

View logs

@kilo-code-bot
Contributor

kilo-code-bot bot commented Mar 19, 2026

Code Review Summary

Status: 2 Issues Found | Recommendation: Address before merge

Overview

| Severity | Count |
| --- | --- |
| CRITICAL | 0 |
| WARNING | 2 |
| SUGGESTION | 0 |
Issue Details

WARNING

| File | Line | Issue |
| --- | --- | --- |
| pkg/util/fileutil/fileutil.go | ~179, ~186 | fmt.Sscanf accepts trailing garbage; should reject invalid byte range formats |
| pkg/web/web.go | ~374-381 | Unsatisfiable byte ranges not clamped; could produce invalid Content-Length headers |
Other Observations (not in diff)

Issues found in unchanged code that cannot receive inline comments:

| File | Line | Issue |
| --- | --- | --- |
| pkg/web/web.go | 56, 494 | WriteTimeout (21s) still applies to streaming endpoints |
Files Reviewed (11 files)
  • frontend/app/store/wshclientapi.ts - New RPC methods for file streaming
  • frontend/app/view/preview/preview-streaming.tsx - Simplified video/audio elements
  • frontend/app/view/term/termwrap.ts - Minor improvement
  • frontend/types/gotypes.d.ts - New TypeScript types
  • pkg/remote/fileshare/wshfs/wshfs.go - New FileStream function
  • pkg/util/fileutil/fileutil.go - New ByteRangeType with OpenEnd support
  • pkg/web/web.go - Route reorganization, new streaming handler
  • pkg/wshrpc/wshclient/wshclient.go - New client RPC methods
  • pkg/wshrpc/wshremote/wshremote_file.go - New RemoteFileStreamCommand, bug fixes
  • pkg/wshrpc/wshrpctypes_file.go - New RPC types
  • pkg/wshrpc/wshserver/wshserver.go - New server RPC method

Summary

This PR implements a modern streaming interface with flow control and addresses the TTFB streaming bug by moving streaming routes outside of TimeoutHandler. The following issues remain:

  1. fmt.Sscanf trailing garbage: The ParseByteRange function still accepts inputs with trailing characters (e.g., "0-99,200-299" or "0-99foo") because fmt.Sscanf ignores leftover characters.

  2. Unsatisfiable ranges not clamped: When a client requests a byte range beyond the file size (e.g., bytes=0-999999 on a 100-byte file), the code uses br.End directly without clamping to fileInfo.Size, which could produce invalid HTTP responses.

  3. WriteTimeout applies to streaming (not in diff): The server's 21-second WriteTimeout still applies to streaming endpoints, which could interrupt long-lived streams. This is in unchanged code so cannot receive an inline comment.

The PR does fix 404 handling: file-not-found errors that previously surfaced as 500s are now returned as proper 404 responses.
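The first finding above is easy to reproduce: fmt.Sscanf returns nil once the leading format matches, silently ignoring leftover input. A minimal sketch, with illustrative parser names rather than the actual ParseByteRange code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// looseParse shows the flagged behavior: fmt.Sscanf matches the leading
// "%d-%d" and returns nil even when characters like "foo" or ",200-299"
// remain unread.
func looseParse(s string) error {
	var start, end int64
	_, err := fmt.Sscanf(s, "%d-%d", &start, &end)
	return err
}

// strictParse rejects trailing garbage by splitting on the dash and parsing
// each side in full with strconv.ParseInt. Closed ranges only; a real fix
// would also handle the open-ended "N-" form.
func strictParse(s string) error {
	startStr, endStr, ok := strings.Cut(s, "-")
	if !ok || startStr == "" || endStr == "" {
		return fmt.Errorf("invalid byte range %q", s)
	}
	if _, err := strconv.ParseInt(startStr, 10, 64); err != nil {
		return fmt.Errorf("invalid byte range %q", s)
	}
	if _, err := strconv.ParseInt(endStr, 10, 64); err != nil {
		return fmt.Errorf("invalid byte range %q", s)
	}
	return nil
}

func main() {
	fmt.Println(looseParse("0-99foo") == nil)  // true: Sscanf accepts the garbage
	fmt.Println(strictParse("0-99foo") != nil) // true: strict parsing rejects it
	fmt.Println(strictParse("0-99") == nil)    // true: valid ranges still pass
}
```

strconv.ParseInt fails on any non-digit character, which is why splitting first and parsing each side whole closes the hole that Sscanf leaves open.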


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (2)
pkg/wshrpc/wshremote/wshremote_file.go (2)

675-756: Well-structured streaming implementation with proper resource management.

The function correctly:

  • Validates stream broker availability
  • Handles errors with CloseWithError before returning
  • Uses LimitReader with correct inclusive range length (End-Start+1)
  • Defers cleanup in the goroutine with panic recovery

One minor observation on error handling consistency:

Consider logging write errors for observability

At line 742, write errors cause a silent return. While the stream will be closed, adding a log could help with debugging:

 			if n > 0 {
 				if _, writeErr := writer.Write(buf[:n]); writeErr != nil {
+					log.Printf("error writing to stream for %q: %v", data.Path, writeErr)
 					return
 				}
 			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/wshrpc/wshremote/wshremote_file.go` around lines 675 - 756, In
RemoteFileStreamCommand, the goroutine currently returns silently on
writer.Write errors; update the write error handling (the block checking
writeErr after writer.Write in the loop) to record the failure by calling
writer.CloseWithError with a wrapped error (include context like the file path
and the writeErr) and then return, so write failures are logged/propagated for
observability instead of being dropped.

718-722: Redundant close after CloseWithError.

When os.Open fails, CloseWithError is called (line 720) and then return exits the goroutine, which triggers defer writer.Close() (line 716). The Close() call after CloseWithError is safe but redundant since CloseWithError should already handle the error state.

This is a minor point and the current code is functionally correct - the implementation handles the edge case safely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/wshrpc/wshremote/wshremote_file.go` around lines 718 - 722, The defer
writer.Close() is redundant because CloseWithError is used on the error path;
remove the deferred Close and instead explicitly close the writer on the success
path (call writer.Close() when the file handling completes successfully) while
keeping writer.CloseWithError(...) in the os.Open error branch (the symbols to
change are the deferred writer.Close() and the error-handling call
writer.CloseWithError in the same function).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/util/fileutil/fileutil.go`:
- Around line 29-51: ParseByteRange currently accepts inputs with trailing
garbage because fmt.Sscanf ignores leftover characters; update parsing to detect
leftover input by using the %n verb and validating that the number of bytes
consumed equals len(rangeStr). For the open-ended case use fmt.Sscanf(rangeStr,
"%d-%n", &start, &n) and require n == len(rangeStr), and for the closed range
use fmt.Sscanf(rangeStr, "%d-%d%n", &start, &end, &n) and require n ==
len(rangeStr); keep the existing validation for negative values and start > end
and return errors when n != len(rangeStr) or Sscanf fails. Ensure these changes
are applied inside ParseByteRange and still return ByteRangeType{Start:...,
OpenEnd:true} or ByteRangeType{Start:..., End:...} as before.

In `@pkg/web/web.go`:
- Around line 302-312: The current range handling uses the requested br.End
directly and can produce invalid lengths; update the ParseByteRange handling to
first detect unsatisfiable ranges (if br.Start >= fileInfo.Size respond with
http.StatusRequestedRangeNotSatisfiable and set an appropriate Content-Range
like "bytes */%d"), otherwise compute a satisfiable end by clamping: if
br.OpenEnd use fileInfo.Size-1 else set rangeEnd = min(br.End, fileInfo.Size-1);
then set Content-Length to rangeEnd-br.Start+1, set Content-Range using br.Start
and the clamped rangeEnd, and write http.StatusPartialContent; reference
ParseByteRange/br.OpenEnd/br.Start/br.End, ContentLengthHeaderKey, Content-Range
header and the status codes 206/416.
- Around line 281-295: The helper currently returns a generic error when
fileInfo.NotFound is true which causes handleStreamFile to produce a 500;
instead return a 404-specific error so the caller can translate it to HTTP 404.
Update the branch that checks fileInfo.NotFound (after calling wshfs.FileStream)
to, when no404 is false, return a dedicated sentinel or typed error (e.g.,
define ErrNotFound or an HTTPError with StatusCode 404) instead of
fmt.Errorf("file not found: %q", path); keep the existing serveTransparentGIF
behavior when no404 is true. Ensure handleStreamFile recognizes that
sentinel/type (ErrNotFound or HTTPError) and maps it to HTTP 404.
- Around line 445-450: The server-level WriteTimeout at initialization
(WriteTimeout = 21s) still applies to all handlers and will cut off long-lived
streams; update the routing so streaming endpoints (handleStreamLocalFile,
handleStreamFile, aiusechat.WaveAIPostMessageHandler) are served by a dedicated
http.Server instance that has no WriteTimeout (or a much larger one) or move
them behind a listener with per-handler timeout middleware instead of the global
WriteTimeout; ensure WebFnWrap usage remains but register these routes on the
new server (or attach middleware) so WriteTimeout no longer aborts SSE/streaming
responses.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: d46fb216-408a-4da8-a648-4c06d7102034

📥 Commits

Reviewing files that changed from the base of the PR and between c126306 and 7162bc0.

📒 Files selected for processing (11)
  • frontend/app/store/wshclientapi.ts
  • frontend/app/view/preview/preview-streaming.tsx
  • frontend/app/view/term/termwrap.ts
  • frontend/types/gotypes.d.ts
  • pkg/remote/fileshare/wshfs/wshfs.go
  • pkg/util/fileutil/fileutil.go
  • pkg/web/web.go
  • pkg/wshrpc/wshclient/wshclient.go
  • pkg/wshrpc/wshremote/wshremote_file.go
  • pkg/wshrpc/wshrpctypes_file.go
  • pkg/wshrpc/wshserver/wshserver.go

Comment on lines +29 to +51
func ParseByteRange(rangeStr string) (ByteRangeType, error) {
	if rangeStr == "" {
		return ByteRangeType{All: true}, nil
	}
	// handle open-ended range "N-"
	if len(rangeStr) > 0 && rangeStr[len(rangeStr)-1] == '-' {
		var start int64
		_, err := fmt.Sscanf(rangeStr, "%d-", &start)
		if err != nil || start < 0 {
			return ByteRangeType{}, errors.New("invalid byte range")
		}
		return ByteRangeType{Start: start, OpenEnd: true}, nil
	}
	var start, end int64
	_, err := fmt.Sscanf(rangeStr, "%d-%d", &start, &end)
	if err != nil {
		return ByteRangeType{}, errors.New("invalid byte range")
	}
	if start < 0 || end < 0 || start > end {
		return ByteRangeType{}, errors.New("invalid byte range")
	}
	// End is inclusive (HTTP byte range semantics: bytes=0-999 means 1000 bytes)
	return ByteRangeType{Start: start, End: end}, nil


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Does Go's fmt.Sscanf return success when the format matches but trailing input remains unread?

💡 Result:

Yes, Go's fmt.Sscanf returns success (n > 0 and err == nil) when the format matches and parses the expected values, even if trailing input remains unread. It does not require consuming the entire input string and only signals an error if the format fails to match or parsing fails.



Reject trailing garbage in ParseByteRange.

fmt.Sscanf succeeds when the format matches and parses the expected values, even if trailing input remains unread. This means inputs like 0-99,200-299 or 0-99foo are silently accepted as 0-99 instead of being rejected, allowing unsupported or malformed byte range headers to pass validation.

Suggested fix
+import "strconv"
+
 func ParseByteRange(rangeStr string) (ByteRangeType, error) {
 	if rangeStr == "" {
 		return ByteRangeType{All: true}, nil
 	}
-	// handle open-ended range "N-"
-	if len(rangeStr) > 0 && rangeStr[len(rangeStr)-1] == '-' {
-		var start int64
-		_, err := fmt.Sscanf(rangeStr, "%d-", &start)
+	if strings.HasSuffix(rangeStr, "-") {
+		start, err := strconv.ParseInt(strings.TrimSuffix(rangeStr, "-"), 10, 64)
 		if err != nil || start < 0 {
 			return ByteRangeType{}, errors.New("invalid byte range")
 		}
 		return ByteRangeType{Start: start, OpenEnd: true}, nil
 	}
-	var start, end int64
-	_, err := fmt.Sscanf(rangeStr, "%d-%d", &start, &end)
-	if err != nil {
+	startStr, endStr, ok := strings.Cut(rangeStr, "-")
+	if !ok || startStr == "" || endStr == "" {
 		return ByteRangeType{}, errors.New("invalid byte range")
 	}
+	start, err := strconv.ParseInt(startStr, 10, 64)
+	if err != nil {
+		return ByteRangeType{}, errors.New("invalid byte range")
+	}
+	end, err := strconv.ParseInt(endStr, 10, 64)
+	if err != nil {
+		return ByteRangeType{}, errors.New("invalid byte range")
+	}

Comment on lines +302 to +312
if byteRange != "" {
br, _ := fileutil.ParseByteRange(byteRange)
var rangeEnd int64
if br.OpenEnd {
rangeEnd = fileInfo.Size - 1
} else {
rangeEnd = br.End
}
w.Header().Set(ContentLengthHeaderKey, fmt.Sprintf("%d", rangeEnd-br.Start+1))
w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", br.Start, rangeEnd, fileInfo.Size))
w.WriteHeader(http.StatusPartialContent)


⚠️ Potential issue | 🟠 Major

Clamp or reject unsatisfiable ranges before emitting 206 headers.

Lines 303-311 derive Content-Length and Content-Range from the requested end, not the satisfiable end. bytes=5000- on a smaller file can produce an invalid length, and bytes=0-999999 overstates the body size.

Suggested fix
 if byteRange != "" {
 	br, _ := fileutil.ParseByteRange(byteRange)
-	var rangeEnd int64
-	if br.OpenEnd {
-		rangeEnd = fileInfo.Size - 1
-	} else {
-		rangeEnd = br.End
-	}
+	if fileInfo.Size == 0 || br.Start >= fileInfo.Size {
+		w.Header().Set("Content-Range", fmt.Sprintf("bytes */%d", fileInfo.Size))
+		w.WriteHeader(http.StatusRequestedRangeNotSatisfiable)
+		return nil
+	}
+	rangeEnd := fileInfo.Size - 1
+	if !br.OpenEnd && br.End < rangeEnd {
+		rangeEnd = br.End
+	}
 	w.Header().Set(ContentLengthHeaderKey, fmt.Sprintf("%d", rangeEnd-br.Start+1))
 	w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", br.Start, rangeEnd, fileInfo.Size))
 	w.WriteHeader(http.StatusPartialContent)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
pkg/web/web.go (2)

305-315: ⚠️ Potential issue | 🟠 Major

Clamp or reject unsatisfiable ranges before emitting 206.

Lines 307-314 still derive Content-Length / Content-Range from the requested end. bytes=0-999999, bytes=5000- on a smaller file, or any range on an empty file can advertise a longer body than the reader will produce.

Suggested fix
  if byteRange != "" {
      br, _ := fileutil.ParseByteRange(byteRange)
-     var rangeEnd int64
-     if br.OpenEnd {
-         rangeEnd = fileInfo.Size - 1
-     } else {
-         rangeEnd = br.End
-     }
+     if fileInfo.Size == 0 || br.Start >= fileInfo.Size {
+         w.Header().Set("Content-Range", fmt.Sprintf("bytes */%d", fileInfo.Size))
+         w.WriteHeader(http.StatusRequestedRangeNotSatisfiable)
+         return nil
+     }
+     rangeEnd := fileInfo.Size - 1
+     if !br.OpenEnd && br.End < rangeEnd {
+         rangeEnd = br.End
+     }
      w.Header().Set(ContentLengthHeaderKey, fmt.Sprintf("%d", rangeEnd-br.Start+1))
      w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", br.Start, rangeEnd, fileInfo.Size))
      w.WriteHeader(http.StatusPartialContent)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/web/web.go` around lines 305 - 315, The current range handling uses the
requested end directly and can advertise more bytes than available; update the
logic in the handler that calls fileutil.ParseByteRange and uses br to (1) if
br.Start >= fileInfo.Size respond with http.StatusRequestedRangeNotSatisfiable
and set a proper Content-Range like "bytes */<size>", (2) clamp rangeEnd to
fileInfo.Size-1 when br.End is past EOF or when br.OpenEnd is true, (3) handle
empty files by returning 416, and then compute Content-Length as
rangeEnd-br.Start+1 and send http.StatusPartialContent using the clamped values;
change references to ContentLengthHeaderKey and the Content-Range header to use
these validated/clamped values.

282-287: ⚠️ Potential issue | 🟠 Major

Keep no404 scoped to actual not-found results.

Line 284 still turns any wshfs.FileStream failure into a 200 transparent GIF. That masks transport/auth/backend faults as a successful thumbnail response, and with caching enabled those failures can be stored as success. Let real errors bubble and only suppress the dedicated not-found signal.

Based on learnings: the CopyRemote function in WaveTerm's file operations has proper error handling that prevents partial writes by failing fast and using context cancellation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/web/web.go` around lines 282 - 287, The current code treats any error
from wshfs.FileStream as a "not found" and serves a transparent GIF; change it
to only suppress errors when the error is the explicit not-found sentinel (e.g.,
compare against wshfs.ErrNotFound or use os.IsNotExist) and otherwise return or
propagate the real error (do not call serveTransparentGIF). Update the
FileStream error handling in pkg/web/web.go around the wshfs.FileStream call so
that only the genuine not-found condition triggers serveTransparentGIF/no404
behavior and all other errors bubble up (or are converted to a 5xx) to avoid
masking transport/auth/backend faults.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/web/web.go`:
- Around line 248-250: The current logs call log.Printf with the full file URI
via the path variable (log.Printf("stream-file path=%q range=%q\n", path,
rangeHeader)), exposing sensitive filenames; update the logging in
pkg/web/web.go to stop printing raw path and instead log a non-sensitive
identifier (e.g., a request ID or connection/route id) and the rangeHeader;
obtain or generate the identifier from the incoming request context
(r.Context()) or create one at the start of the handler (e.g., reqID :=
r.Header.Get("X-Request-ID") or generate a UUID) and replace path with that
identifier in both occurrences (the log.Printf at startTime/rangeHeader and the
later log at the other occurrence around line 302) so logs retain correlation
without leaking file URIs.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: ada8ab0d-e0b4-425a-9e69-96beea3805f5

📥 Commits

Reviewing files that changed from the base of the PR and between 7162bc0 and 84037ad.

📒 Files selected for processing (1)
  • pkg/web/web.go

Comment on lines +248 to +250
startTime := time.Now()
rangeHeader := r.Header.Get("Range")
log.Printf("stream-file path=%q range=%q\n", path, rangeHeader)


⚠️ Potential issue | 🟠 Major

Avoid logging raw file URIs on the happy path.

Lines 250 and 302 log the full path for every stream request. These values can contain local usernames and document names, so this adds sensitive data to routine logs and will get noisy under heavy preview traffic. Prefer a request id or connection route id here if you still need correlation.

Also applies to: 302-302
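One way to keep log correlation without printing the path, as suggested, is an opaque per-request identifier. A hedged sketch; newReqID is a hypothetical helper, not part of the codebase:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newReqID generates a short random identifier that can be logged in place
// of the raw file URI, preserving correlation without leaking filenames.
func newReqID() string {
	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		return "unknown"
	}
	return hex.EncodeToString(b)
}

func main() {
	reqID := newReqID()
	// log the opaque id and the Range header, never the path itself
	fmt.Printf("stream-file req=%s range=%q\n", reqID, "bytes=0-")
}
```

Logging the same id at both the start and end of the handler preserves the timing correlation the two log sites currently provide.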


sawka merged commit 3028c2a into main on Mar 19, 2026
7 checks passed
sawka deleted the sawka/streamfile branch on March 19, 2026 at 22:17