diff --git a/AGENTS.md b/AGENTS.md index ada6eca7..0a0aff0f 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -659,6 +659,8 @@ mock.module("./some-module", () => ({ * **whoami should be separate from auth status command**: The \`sentry auth whoami\` command should be a dedicated command separate from \`sentry auth status\`. They serve different purposes: \`status\` shows everything about auth state (token, expiry, defaults, org verification), while \`whoami\` just shows user identity (name, email, username, ID) by fetching live from \`/auth/\` endpoint. \`sentry whoami\` should be a top-level alias (like \`sentry issues\` → \`sentry issue list\`). \`whoami\` should support \`--json\` for machine consumption and be lightweight — no credential verification, no defaults listing. * **Raw markdown output for non-interactive terminals, rendered for TTY**: Output raw CommonMark when stdout is not a TTY; render through marked-terminal only for TTY. Detection: \`process.stdout.isTTY\`. Override precedence: \`SENTRY\_PLAIN\_OUTPUT\` > \`NO\_COLOR\` > auto-detect. \`--json\` always outputs JSON. Streaming formatters (log/trace) use ANSI-colored text for TTY, markdown table rows for non-TTY. + +* **Use -t (not -p) as shortcut alias for --period flag**: The --period flag on issue list uses -t (for 'time period') as its short alias, not -p. The rationale: -p could be confused with --platform from other CLI tools/contexts. -t maps naturally to 'time period' and avoids collision. This was a deliberate choice after initial implementation used -p. ### Gotcha @@ -696,6 +698,18 @@ mock.module("./some-module", () => ({ * **Esbuild banner template literal double-escape for newlines**: When using esbuild's \`banner\` option with a TypeScript template literal containing string literals that need \`\n\` escape sequences: use \`\\\\\\\n\` in the TS source. The chain is: TS template literal \`\\\\\\\n\` → esbuild banner output \`\\\n\` → JS runtime interprets as newline. 
Using only \`\\\n\` in the TS source produces a literal newline character inside a JS double-quoted string, which is a SyntaxError. This applies to any esbuild banner/footer that injects JS strings containing escape sequences. Discovered in script/bundle.ts for the Node.js version guard error message. * **GitHub immutable releases prevent rolling nightly tag pattern**: getsentry/cli has immutable releases enabled. Once published, assets can't be modified and the tag can NEVER be reused. Nightly uses per-version tags (e.g., \`0.13.0-dev.1772062077\`) with API-based latest discovery. The publish-nightly job deletes all existing assets before uploading (not \`--clobber\`) to prevent stale assets. + +* **sudo scripts: $HOME resolves to /root, use SUDO\_USER for original user paths**: When a script runs under sudo, $HOME resolves to /root, not the invoking user's home. SSH keys, config files, etc. referenced via $HOME will fail. Fix: use \`REAL\_HOME=$(eval echo ~${SUDO\_USER:-$USER})\` at the top of the script and reference that instead of $HOME for user-owned paths like ~/.ssh/id\_ed25519. + +* **getsentry/codecov-action enables JUnit XML test reporting by default**: The \`getsentry/codecov-action@main\` has \`enable-tests: true\` by default, which searches for JUnit XML files matching \`\*\*/\*.junit.xml\`. If the test framework doesn't produce JUnit XML, the action emits 3 warnings on every CI run: "No files found matching pattern", "No JUnit XML files found", and "Please ensure your test framework is generating JUnit XML output". Fix: either set \`enable-tests: false\` in the action inputs, or configure the test runner to output JUnit XML. For Bun, add \`\[test.reporter] junit = "test-results.junit.xml"\` to \`bunfig.toml\` and add \`\*.junit.xml\` to \`.gitignore\`. 
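The esbuild banner escape chain described above is easier to follow with the layers written out. A minimal sketch (plain strings only, not the real \`script/bundle.ts\`; \`JSON.parse\` stands in for the JS lexer, and the exact backslash count in the real source depends on how many string layers the sequence passes through):

```typescript
// Stage 1: the build script's template literal. Writing \\n here yields a
// two-character sequence (backslash + "n") in the banner string at build time.
const banner = `console.error("sentry requires Node.js >= 20.\\nSee the install docs.");`;

// Stage 2: esbuild emits the banner verbatim, so the bundled file contains a
// double-quoted string holding the literal characters backslash and "n".
const emittedBundle = banner;

// Stage 3: when the bundle runs, the JS lexer turns \n inside the double-quoted
// string into a newline. JSON.parse models that interpretation here.
const runtimeMessage: string = JSON.parse(
  emittedBundle.slice(emittedBundle.indexOf('"'), emittedBundle.lastIndexOf('"') + 1)
);

console.log(emittedBundle.includes("\\n")); // still two characters in the emitted file
console.log(runtimeMessage.includes("\n")); // a real newline once the string is lexed
```

Writing a bare newline escape at stage 1 would put a literal newline into the emitted double-quoted string, which is exactly the SyntaxError the note describes.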
+ +* **OpenCode worktrees accumulate massive .test-tmp dirs — clean periodically**: OpenCode worktrees at \`~/.local/share/opencode/worktree/\` accumulate massive \`.test-tmp/\` dirs (14+ GB in getsentry/cli). Worktrees are ephemeral — safe to delete entirely between sessions. Targeted cleanup: \`rm -rf ~/.local/share/opencode/worktree/\*/.test-tmp\`. Also: \`git checkout main\` fails because main is used by the worktree. Workaround: always use \`origin/main\` — \`git checkout -b <branch> origin/main\` or rebase onto \`origin/main\`, never the local \`main\` branch. + +* **Bugbot flags defensive null-checks as dead code — keep them with JSDoc justification**: Cursor Bugbot and Sentry Seer repeatedly flag two false positives: (1) defensive null-checks as "dead code" — keep them with JSDoc explaining why the guard exists for future safety, especially when removing would require \`!\` assertions banned by \`noNonNullAssertion\`. (2) stderr spinner output during \`--json\` mode — always a false positive since progress goes to stderr, JSON to stdout. Reply explaining the rationale and resolve. + +* **Biome lint: Response.redirect() required, nested ternaries forbidden**: Biome lint rules that frequently trip up this codebase: (1) \`useResponseRedirect\`: use \`Response.redirect(url, status)\` not \`new Response\`. (2) \`noNestedTernary\`: use \`if/else\`. (3) \`noComputedPropertyAccess\`: use \`obj.property\` not \`obj\["property"]\`. (4) Max cognitive complexity 15 per function — extract helpers to stay under. + +* **Cherry-picking GHCR tests requires updating mocks from version.json to GHCR manifest flow**: Nightly test mocks must use the 3-step GHCR flow: (1) token exchange at \`ghcr.io/token\`, (2) manifest fetch at \`/manifests/nightly\` returning JSON with \`annotations.version\` and \`layers\[].annotations\["org.opencontainers.image.title"]\`, (3) blob download returning \`Response.redirect()\` to Azure.
The \`mockNightlyVersion()\` and \`mockGhcrNightlyVersion()\` helpers must handle all three URLs. Platform-specific filenames in manifest layers must use \`if/else\` blocks (Biome forbids nested ternaries). ### Pattern @@ -715,6 +729,18 @@ mock.module("./some-module", () => ({ * **Markdown table structure for marked-terminal: blank header row + separator + data rows**: Markdown tables for marked-terminal: use blank header row (\`| | |\`), separator (\`|---|---|\`), then data rows (\`| \*\*Label\*\* | value |\`). Data rows before the separator produce malformed output. When escaping user content in table cells, escape backslashes first then pipes: \`str.replace(/\\\\/g, '\\\\\\\\').replace(/\\|/g, '\\\\|')\`. This is centralized as \`escapeMarkdownCell()\` in \`src/lib/formatters/markdown.ts\` — all formatters must use it (CodeQL flags incomplete escaping as high severity). Formatters return \`string\`; test with \`result.includes(...)\` since rendered markdown uses Unicode box-drawing characters. * **resolveAllTargets/resolveOrgAndProject should not call fetchProjectId — callers enrich**: The shared helpers \`resolveAllTargets\` and \`resolveOrgAndProject\` must NOT call \`fetchProjectId\`/\`getProject\` — many commands don't need projectId. Commands that need it (like \`issue list\`) should enrich targets themselves with silent fallback. \`toNumericId()\` validates with \`Number.isInteger(n) && n > 0\` (not truthiness) and accepts \`null\` since API responses can return null IDs. + +* **Make Bun.which testable by accepting optional PATH parameter**: When wrapping \`Bun.which()\` in a helper function, accept an optional \`pathEnv?: string\` parameter and pass it as \`{ PATH: pathEnv }\` to \`Bun.which\`. This makes the function deterministically testable without mocking — tests can pass a controlled PATH (e.g., \`/nonexistent\` for false, \`dirname(Bun.which('bash'))\` for true). Pattern: \`const opts = pathEnv !== undefined ? 
{ PATH: pathEnv } : undefined; return Bun.which(name, opts) !== null;\` + +* **PR review workflow: reply, resolve, separate commit per round**: (1) Read unresolved threads via GraphQL, (2) make code changes, (3) run lint+typecheck+tests, (4) create a SEPARATE commit per review round (not amend) for incremental review, (5) push normally, (6) reply to comments via REST API, (7) resolve threads via GraphQL \`resolveReviewThread\`. Only amend+force-push when the user explicitly asks or a pre-commit hook modified files. + +* **Re-run prior art search when task scope pivots**: When a conversation pivots from one approach to another, always run a fresh prior-art search with keywords specific to the new scope. The failure mode: anchoring on the original framing's keywords and never re-searching. Simple searches like \`gh pr list --search "embed"\` would have surfaced a near-identical open PR. Treat each material scope change as a new research task. + +* **Multi-target concurrent progress needs per-target delta tracking**: When multiple targets fetch concurrently and each reports cumulative progress, maintain a \`prevFetched\` array and a \`totalFetched\` running sum. Each callback computes \`delta = fetched - prevFetched\[i]\` and adds it to the total. This prevents display jumps and double-counting. Use \`totalFetched += delta\` (O(1)), not \`reduce()\` on every callback. + +* **Pagination contextKey must include all query-varying parameters with escaping**: Pagination \`contextKey\` must encode every query-varying parameter (sort, query, period) with \`escapeContextKeyValue()\` (replaces \`|\` with \`%7C\`). Always provide a fallback before escaping since \`flags.period\` may be \`undefined\` in tests despite having a default: \`flags.period ? escapeContextKeyValue(flags.period) : "90d"\`. + +* **Branch naming and commit message conventions for Sentry CLI**: Branch naming: \`feat/<description>\` or \`fix/<issue>-<description>\` (e.g., \`feat/ghcr-nightly-distribution\`, \`fix/268-limit-auto-pagination\`).
Commit message format: \`type(scope): description (#issue)\` (e.g., \`fix(issue-list): auto-paginate --limit beyond 100 (#268)\`, \`feat(nightly): distribute via GHCR instead of GitHub Releases\`). Types seen: fix, refactor, meta, release, feat. PRs are created as drafts via \`gh pr create --draft\`. Implementation plans are attached to commits via \`git notes add\` rather than in PR body or commit message. ### Preference @@ -722,4 +748,6 @@ mock.module("./some-module", () => ({ * **Progress message format: 'N and counting (up to M)...' pattern**: User prefers progress messages that frame the limit as a ceiling rather than an expected total. Format: \`Fetching issues, 30 and counting (up to 50)...\` — not \`Fetching issues... 30/50\`. The 'up to' framing makes it clear the denominator is a max, not an expected count, avoiding confusion when fewer items exist than the limit. For multi-target fetches, include target count: \`Fetching issues from 10 projects, 30 and counting (up to 50)...\`. Initial message before any results: \`Fetching issues (up to 50)...\` or \`Fetching issues from 10 projects (up to 50)...\`. * **CI scripts: prefer jq/sed over node -e for JSON manipulation**: Reviewer (BYK) prefers using standard Unix tools (\`jq\`, \`sed\`, \`awk\`) over \`node -e\` for simple JSON manipulation in CI workflow scripts. For example, reading/modifying package.json version: \`jq -r .version package.json\` to read, \`jq --arg v "$NEW" '.version = $v' package.json > tmp && mv tmp package.json\` to write. This avoids requiring Node.js to be installed in CI steps that only need basic JSON operations, and is more readable for shell-centric workflows. + +* **Use captureException (not captureMessage) for unexpected states, or Sentry logs**: When reporting unexpected/defensive-guard situations to Sentry (e.g., non-numeric input where a number was expected), the reviewer prefers \`Sentry.captureException(new Error(...))\` over \`Sentry.captureMessage(...)\`. 
\`captureMessage\` with 'warning' level was rejected in PR review. Alternatively, use the Sentry structured logger (\`Sentry.logger.warn(...)\`) for less severe diagnostic cases — this was accepted in the abbreviateCount NaN handler. diff --git a/plugins/sentry-cli/skills/sentry-cli/SKILL.md b/plugins/sentry-cli/skills/sentry-cli/SKILL.md index fa1767d0..b6ea0da4 100644 --- a/plugins/sentry-cli/skills/sentry-cli/SKILL.md +++ b/plugins/sentry-cli/skills/sentry-cli/SKILL.md @@ -587,6 +587,17 @@ View details of a specific trace - `-w, --web - Open in browser` - `--spans - Span tree depth limit (number, "all" for unlimited, "no" to disable) - (default: "3")` +#### `sentry trace logs <traceId>` + +View logs associated with a trace + +**Flags:** +- `--json - Output as JSON` +- `-w, --web - Open trace in browser` +- `-t, --period - Time period to search (e.g., "14d", "7d", "24h"). Default: 14d - (default: "14d")` +- `-n, --limit - Number of log entries (1-1000) - (default: "100")` +- `-q, --query - Additional filter query (Sentry search syntax)` + ### Issues List issues in a project diff --git a/src/commands/trace/index.ts b/src/commands/trace/index.ts index e22162bb..1c04f9bf 100644 --- a/src/commands/trace/index.ts +++ b/src/commands/trace/index.ts @@ -6,12 +6,14 @@ import { buildRouteMap } from "@stricli/core"; import { listCommand } from "./list.js"; +import { logsCommand } from "./logs.js"; import { viewCommand } from "./view.js"; export const traceRoute = buildRouteMap({ routes: { list: listCommand, view: viewCommand, + logs: logsCommand, }, docs: { brief: "View distributed traces", @@ -19,7 +21,8 @@ export const traceRoute = buildRouteMap({ "View and explore distributed traces from your Sentry projects.\n\n" + "Commands:\n" + " list List recent traces in a project\n" + - " view View details of a specific trace", + " view View details of a specific trace\n" + + " logs View logs associated with a trace", hideRoute: {}, }, }); diff --git a/src/commands/trace/logs.ts
b/src/commands/trace/logs.ts new file mode 100644 index 00000000..c28e43cd --- /dev/null +++ b/src/commands/trace/logs.ts @@ -0,0 +1,247 @@ +/** + * sentry trace logs + * + * View logs associated with a distributed trace. + */ + +import type { SentryContext } from "../../context.js"; +import { listTraceLogs } from "../../lib/api-client.js"; +import { validateLimit } from "../../lib/arg-parsing.js"; +import { openInBrowser } from "../../lib/browser.js"; +import { buildCommand } from "../../lib/command.js"; +import { ContextError, ValidationError } from "../../lib/errors.js"; +import { + formatLogTable, + writeFooter, + writeJson, +} from "../../lib/formatters/index.js"; +import { resolveOrg } from "../../lib/resolve-target.js"; +import { buildTraceUrl } from "../../lib/sentry-urls.js"; + +type LogsFlags = { + readonly json: boolean; + readonly web: boolean; + readonly period: string; + readonly limit: number; + readonly query?: string; +}; + +/** Maximum allowed value for --limit flag */ +const MAX_LIMIT = 1000; + +/** Minimum allowed value for --limit flag */ +const MIN_LIMIT = 1; + +/** Default number of log entries to show */ +const DEFAULT_LIMIT = 100; + +/** + * Default time period for the trace-logs API. + * The API requires statsPeriod — without it the response may be empty even + * when logs exist for the trace. + */ +const DEFAULT_PERIOD = "14d"; + +/** Usage hint shown in error messages */ +const USAGE_HINT = "sentry trace logs [<org>] <traceId>"; + +/** Regex for a valid 32-character hexadecimal trace ID */ +const TRACE_ID_RE = /^[0-9a-f]{32}$/i; + +/** + * Parse --limit flag, delegating range validation to shared utility. + */ +function parseLimit(value: string): number { + return validateLimit(value, MIN_LIMIT, MAX_LIMIT); +} + +/** + * Validate that a string looks like a 32-character hex trace ID.
+ * + * @throws {ValidationError} If the trace ID format is invalid + */ +function validateTraceId(traceId: string): void { + if (!TRACE_ID_RE.test(traceId)) { + throw new ValidationError( + `Invalid trace ID "${traceId}". Expected a 32-character hexadecimal string.\n\n` + + "Example: sentry trace logs abc123def456abc123def456abc123de" + ); + } +} + +/** + * Parse positional arguments for trace logs. + * + * Accepted forms: + * - `<traceId>` → auto-detect org + * - `<org> <traceId>` → explicit org (space-separated) + * - `<org>/<traceId>` → explicit org (slash-separated, one arg) + * + * @param args - Positional arguments from CLI + * @returns Parsed trace ID and optional explicit org slug + * @throws {ContextError} If no arguments are provided + * @throws {ValidationError} If trace ID format is invalid + */ +export function parsePositionalArgs(args: string[]): { + traceId: string; + orgArg: string | undefined; +} { + if (args.length === 0) { + throw new ContextError("Trace ID", USAGE_HINT); + } + + if (args.length === 1) { + const first = args[0]; + if (first === undefined) { + throw new ContextError("Trace ID", USAGE_HINT); + } + + // Check for "org/traceId" slash-separated form + const slashIdx = first.indexOf("/"); + if (slashIdx !== -1) { + const orgArg = first.slice(0, slashIdx); + const traceId = first.slice(slashIdx + 1); + + if (!orgArg) { + throw new ContextError("Organization", USAGE_HINT); + } + if (!traceId) { + throw new ContextError("Trace ID", USAGE_HINT); + } + + validateTraceId(traceId); + return { traceId, orgArg }; + } + + // Plain trace ID — org will be auto-detected + validateTraceId(first); + return { traceId: first, orgArg: undefined }; + } + + // Two or more args — first is org, second is trace ID + const orgArg = args[0]; + const traceId = args[1]; + + if (orgArg === undefined || traceId === undefined) { + throw new ContextError("Trace ID", USAGE_HINT); + } + + validateTraceId(traceId); + return { traceId, orgArg }; +} + +export const logsCommand = buildCommand({ + docs: { + 
brief: "View logs associated with a trace", + fullDescription: + "View logs associated with a specific distributed trace.\n\n" + + "Uses the dedicated trace-logs endpoint, which is org-scoped and\n" + + "automatically queries all projects — no project flag needed.\n\n" + + "Target specification:\n" + + " sentry trace logs <traceId> # auto-detect org\n" + + " sentry trace logs <org> <traceId> # explicit org\n" + + " sentry trace logs <org>/<traceId> # slash-separated\n\n" + + "The trace ID is the 32-character hexadecimal identifier.\n\n" + + "Examples:\n" + + " sentry trace logs abc123def456abc123def456abc123de\n" + + " sentry trace logs myorg abc123def456abc123def456abc123de\n" + + " sentry trace logs --period 7d abc123def456abc123def456abc123de\n" + + " sentry trace logs --json abc123def456abc123def456abc123de", + }, + parameters: { + positional: { + kind: "array", + parameter: { + placeholder: "args", + brief: "[<org>] <traceId> - Optional org and required trace ID", + parse: String, + }, + }, + flags: { + json: { + kind: "boolean", + brief: "Output as JSON", + default: false, + }, + web: { + kind: "boolean", + brief: "Open trace in browser", + default: false, + }, + period: { + kind: "parsed", + parse: String, + brief: `Time period to search (e.g., "14d", "7d", "24h"). 
Default: ${DEFAULT_PERIOD}`, + default: DEFAULT_PERIOD, + }, + limit: { + kind: "parsed", + parse: parseLimit, + brief: `Number of log entries (${MIN_LIMIT}-${MAX_LIMIT})`, + default: String(DEFAULT_LIMIT), + }, + query: { + kind: "parsed", + parse: String, + brief: "Additional filter query (Sentry search syntax)", + optional: true, + }, + }, + aliases: { w: "web", t: "period", n: "limit", q: "query" }, + }, + async func( + this: SentryContext, + flags: LogsFlags, + ...args: string[] + ): Promise<void> { + const { stdout, cwd, setContext } = this; + + const { traceId, orgArg } = parsePositionalArgs(args); + + // Resolve org — trace-logs is org-scoped, no project needed + const resolved = await resolveOrg({ org: orgArg, cwd }); + if (!resolved) { + throw new ContextError("Organization", USAGE_HINT, [ + "Set a default org with 'sentry org list', or specify one explicitly", + `Example: sentry trace logs myorg ${traceId}`, + ]); + } + + const { org } = resolved; + setContext([org], []); + + if (flags.web) { + await openInBrowser(stdout, buildTraceUrl(org, traceId), "trace"); + return; + } + + const logs = await listTraceLogs(org, traceId, { + statsPeriod: flags.period, + limit: flags.limit, + query: flags.query, + }); + + if (flags.json) { + writeJson(stdout, logs); + return; + } + + if (logs.length === 0) { + stdout.write( + `No logs found for trace ${traceId} in the last ${flags.period}.\n\n` + + `Try a longer period: sentry trace logs --period 30d ${traceId}\n` + ); + return; + } + + // API returns newest-first; reverse for chronological display + const chronological = [...logs].reverse(); + + stdout.write(formatLogTable(chronological, false)); + + const hasMore = logs.length >= flags.limit; + const countText = `Showing ${logs.length} log${logs.length === 1 ? "" : "s"} for trace ${traceId}.`; + const tip = hasMore ? " Use --limit to show more." 
: ""; + writeFooter(stdout, `${countText}${tip}`); + }, +}); diff --git a/src/lib/api-client.ts b/src/lib/api-client.ts index ad2d21e0..b20afbe2 100644 --- a/src/lib/api-client.ts +++ b/src/lib/api-client.ts @@ -45,6 +45,8 @@ import { type SentryTeam, type SentryUser, SentryUserSchema, + type TraceLog, + TraceLogsResponseSchema, type TraceSpan, type TransactionListItem, type TransactionsResponse, @@ -1682,3 +1684,59 @@ export async function getLog( const logsResponse = DetailedLogsResponseSchema.parse(data); return logsResponse.data[0] ?? null; } + +// Trace-log functions + +type ListTraceLogsOptions = { + /** Additional search query to filter results (Sentry query syntax) */ + query?: string; + /** Maximum number of log entries to return (max 9999) */ + limit?: number; + /** + * Time period to search in (e.g., "14d", "7d", "24h"). + * Required by the API — without it the response may be empty even when + * logs exist for the trace. Defaults to "14d". + */ + statsPeriod?: string; +}; + +/** + * List logs associated with a specific trace. + * + * Uses the dedicated `/organizations/{org}/trace-logs/` endpoint, which is + * org-scoped and automatically queries all projects in the org. This is + * distinct from the Explore/Events logs endpoint (`/events/?dataset=logs`) + * which does not support filtering by trace ID in query syntax. + * + * `statsPeriod` defaults to `"14d"`. Without a stats period the API may + * return empty results even when logs exist for the trace.
+ * + * @param orgSlug - Organization slug + * @param traceId - The 32-character hex trace ID + * @param options - Optional query/limit/statsPeriod overrides + * @returns Array of trace log entries, ordered newest-first + */ +export async function listTraceLogs( + orgSlug: string, + traceId: string, + options: ListTraceLogsOptions = {} +): Promise<TraceLog[]> { + const regionUrl = await resolveOrgRegion(orgSlug); + + const { data: response } = await apiRequestToRegion<{ data: TraceLog[] }>( + regionUrl, + `/organizations/${orgSlug}/trace-logs/`, + { + params: { + traceId, + statsPeriod: options.statsPeriod ?? "14d", + per_page: options.limit ?? API_MAX_PER_PAGE, + query: options.query, + sort: "-timestamp", + }, + schema: TraceLogsResponseSchema, + } + ); + + return response.data; +} diff --git a/src/lib/formatters/log.ts b/src/lib/formatters/log.ts index c68e9e87..0fa2478d 100644 --- a/src/lib/formatters/log.ts +++ b/src/lib/formatters/log.ts @@ -38,6 +38,19 @@ const SEVERITY_TAGS: Record[0]> = { /** Column headers for the streaming log table */ const LOG_TABLE_COLS = ["Timestamp", "Level", "Message"] as const; +/** + * Minimal log-row shape shared by {@link SentryLog} (Explore/Events) and + * trace-log entries (`TraceLog` from the trace-logs endpoint). + * Both types carry these three fields with the same semantics. + */ +type LogLike = { + timestamp: string; + severity?: string | null; + message?: string | null; + /** Present on Explore/Events logs; absent on trace-logs (all rows share one trace). */ + trace?: string | null; +}; + /** * Format severity level with appropriate color tag. * Pads to 7 characters for alignment (longest: "warning"). @@ -74,20 +87,28 @@ function formatTimestamp(timestamp: string): string { /** * Extract cell values for a log row (shared by streaming and batch paths).
* - * @param log - The log entry + * When `includeTrace` is true (the default), a short trace-ID suffix is + * appended to the message cell — useful in Explore/Events lists where rows + * may span many traces. Pass `false` when all rows already share the same + * trace (e.g., `sentry trace logs`) so the redundant suffix is omitted. + * + * @param log - The log entry (any {@link LogLike} shape) * @param padSeverity - Whether to pad severity to 7 chars for alignment - * @returns `[timestamp, severity, message]` strings + * @param includeTrace - Whether to append a short trace-ID suffix to the message + * @returns `[timestamp, severity, message]` markdown-safe cell strings */ export function buildLogRowCells( - log: SentryLog, - padSeverity = true + log: LogLike, + padSeverity = true, + includeTrace = true ): [string, string, string] { const timestamp = formatTimestamp(log.timestamp); const level = padSeverity ? formatSeverity(log.severity) : formatSeverityLabel(log.severity); const message = escapeMarkdownCell(log.message ?? ""); - const trace = log.trace ? ` \`[${log.trace.slice(0, 8)}]\`` : ""; + const trace = + includeTrace && log.trace ? ` \`[${log.trace.slice(0, 8)}]\`` : ""; return [timestamp, level, `${message}${trace}`]; } @@ -95,11 +116,12 @@ export function buildLogRowCells( * Format a single log entry as a plain markdown table row. * Used for non-TTY / piped output where StreamingTable isn't appropriate. * - * @param log - The log entry to format + * @param log - The log entry (any {@link LogLike} shape) + * @param includeTrace - Whether to append a short trace-ID suffix (default: true) * @returns Formatted log line with newline */ -export function formatLogRow(log: SentryLog): string { - return mdRow(buildLogRowCells(log)); +export function formatLogRow(log: LogLike, includeTrace = true): string { + return mdRow(buildLogRowCells(log, true, includeTrace)); } /** Hint rows for column width estimation in streaming mode. 
*/ @@ -141,21 +163,28 @@ export function formatLogsHeader(): string { /** * Build a markdown table for a list of log entries. * + * Accepts any {@link LogLike} shape — both {@link SentryLog} (Explore/Events) + * and trace-log entries. Pass `includeTrace: false` when all rows already share + * the same trace (e.g., `sentry trace logs`) to omit the redundant trace suffix. + * * Pre-rendered ANSI codes in cell values (e.g. colored severity) are preserved. * * @param logs - Log entries to display + * @param includeTrace - Whether to append a short trace-ID suffix (default: true) * @returns Rendered terminal string with Unicode-bordered table */ -export function formatLogTable(logs: SentryLog[]): string { +export function formatLogTable(logs: LogLike[], includeTrace = true): string { if (isPlainOutput()) { const rows = logs - .map((log) => mdRow(buildLogRowCells(log, false)).trimEnd()) + .map((log) => mdRow(buildLogRowCells(log, false, includeTrace)).trimEnd()) .join("\n"); return `${stripColorTags(mdTableHeader(LOG_TABLE_COLS))}\n${rows}\n`; } const headers = [...LOG_TABLE_COLS]; const rows = logs.map((log) => - buildLogRowCells(log, false).map((c) => renderInlineMarkdown(c)) + buildLogRowCells(log, false, includeTrace).map((c) => + renderInlineMarkdown(c) + ) ); return renderTextTable(headers, rows); } diff --git a/src/types/index.ts b/src/types/index.ts index a9bdd291..ee7ac738 100644 --- a/src/types/index.ts +++ b/src/types/index.ts @@ -74,6 +74,8 @@ export type { StackFrame, Stacktrace, TraceContext, + TraceLog, + TraceLogsResponse, TraceSpan, TransactionListItem, TransactionsResponse, @@ -92,6 +94,8 @@ export { SentryRepositorySchema, SentryTeamSchema, SentryUserSchema, + TraceLogSchema, + TraceLogsResponseSchema, TransactionListItemSchema, TransactionsResponseSchema, UserRegionsResponseSchema, diff --git a/src/types/sentry.ts b/src/types/sentry.ts index 2ab097c5..c3133b02 100644 --- a/src/types/sentry.ts +++ b/src/types/sentry.ts @@ -566,6 +566,59 @@ export const 
DetailedLogsResponseSchema = z.object({ export type DetailedLogsResponse = z.infer<typeof DetailedLogsResponseSchema>; +// Trace-log types (from /organizations/{org}/trace-logs/ endpoint) + +/** + * Individual log entry from the trace-logs endpoint. + * + * Fields returned by `GET /api/0/organizations/{org}/trace-logs/`. This endpoint + * is org-scoped and always queries all projects — it returns a fixed set of 8 + * columns, unlike the flexible Explore/Events logs endpoint. + * + * Key differences from {@link SentryLog} (Explore/Events): + * - `id` instead of `sentry.item_id` + * - Includes `project.id` (integer) and `severity_number` + * - `timestamp_precise` is an ISO string (not a nanosecond integer) + */ +export const TraceLogSchema = z + .object({ + /** Unique identifier for this log entry */ + id: z.string(), + /** Numeric ID of the project this log belongs to */ + "project.id": z.number(), + /** The 32-character hex trace ID this log is associated with */ + trace: z.string(), + /** Numeric OTel severity level (e.g., 9 = INFO, 13 = WARN, 17 = ERROR) */ + severity_number: z.number(), + /** Severity label (e.g., "info", "warn", "error") */ + severity: z.string(), + /** ISO 8601 timestamp */ + timestamp: z.string(), + /** High-precision ISO 8601 timestamp */ + timestamp_precise: z.string(), + /** Log message content */ + message: z.string().nullable().optional(), + }) + .passthrough(); + +export type TraceLog = z.infer<typeof TraceLogSchema>; + +/** Response from the trace-logs endpoint */ +export const TraceLogsResponseSchema = z + .object({ + data: z.array(TraceLogSchema), + meta: z + .object({ + fields: z.record(z.string()).optional(), + units: z.record(z.string()).optional(), + }) + .passthrough() + .optional(), + }) + .passthrough(); + +export type TraceLogsResponse = z.infer<typeof TraceLogsResponseSchema>; + // Transaction (for trace listing) /** diff --git a/test/commands/trace/logs.test.ts b/test/commands/trace/logs.test.ts new file mode 100644 index 00000000..47ef70fd --- /dev/null +++ b/test/commands/trace/logs.test.ts @@ -0,0 +1,535 @@ 
+/** + * Trace Logs Command Tests + * + * Tests for positional argument parsing and the command func() body + * in src/commands/trace/logs.ts. + * + * Uses spyOn mocking to avoid real HTTP calls or database access. + */ + +import { + afterEach, + beforeEach, + describe, + expect, + mock, + spyOn, + test, +} from "bun:test"; +import { + logsCommand, + parsePositionalArgs, +} from "../../../src/commands/trace/logs.js"; +// biome-ignore lint/performance/noNamespaceImport: needed for spyOn mocking +import * as apiClient from "../../../src/lib/api-client.js"; +import { ContextError, ValidationError } from "../../../src/lib/errors.js"; +// biome-ignore lint/performance/noNamespaceImport: needed for spyOn mocking +import * as resolveTarget from "../../../src/lib/resolve-target.js"; +import type { TraceLog } from "../../../src/types/sentry.js"; + +// ============================================================================ +// parsePositionalArgs +// ============================================================================ + +const VALID_TRACE_ID = "aaaa1111bbbb2222cccc3333dddd4444"; + +describe("parsePositionalArgs", () => { + describe("no arguments", () => { + test("throws ContextError when called with empty args", () => { + expect(() => parsePositionalArgs([])).toThrow(ContextError); + }); + + test("error mentions 'Trace ID'", () => { + try { + parsePositionalArgs([]); + expect.unreachable("Should have thrown"); + } catch (error) { + expect(error).toBeInstanceOf(ContextError); + expect((error as ContextError).message).toContain("Trace ID"); + } + }); + }); + + describe("single argument — plain trace ID", () => { + test("parses 32-char hex trace ID with no org", () => { + const result = parsePositionalArgs([VALID_TRACE_ID]); + expect(result.traceId).toBe(VALID_TRACE_ID); + expect(result.orgArg).toBeUndefined(); + }); + + test("accepts mixed-case hex trace ID", () => { + const mixedCase = "AAAA1111bbbb2222CCCC3333dddd4444"; + const result = 
parsePositionalArgs([mixedCase]);
+      expect(result.traceId).toBe(mixedCase);
+      expect(result.orgArg).toBeUndefined();
+    });
+  });
+
+  describe("single argument — slash-separated org/traceId", () => {
+    test("parses 'org/traceId' as org + trace ID", () => {
+      const result = parsePositionalArgs([`my-org/${VALID_TRACE_ID}`]);
+      expect(result.orgArg).toBe("my-org");
+      expect(result.traceId).toBe(VALID_TRACE_ID);
+    });
+
+    test("throws ContextError when trace ID is missing after slash", () => {
+      expect(() => parsePositionalArgs(["my-org/"])).toThrow(ContextError);
+    });
+
+    test("throws ContextError when org is missing before slash", () => {
+      expect(() => parsePositionalArgs([`/${VALID_TRACE_ID}`])).toThrow(
+        ContextError
+      );
+    });
+  });
+
+  describe("two arguments — space-separated org and trace ID", () => {
+    test("parses org and trace ID from two args", () => {
+      const result = parsePositionalArgs(["my-org", VALID_TRACE_ID]);
+      expect(result.orgArg).toBe("my-org");
+      expect(result.traceId).toBe(VALID_TRACE_ID);
+    });
+
+    test("uses first arg as org and second as trace ID", () => {
+      const result = parsePositionalArgs(["acme-corp", VALID_TRACE_ID]);
+      expect(result.orgArg).toBe("acme-corp");
+      expect(result.traceId).toBe(VALID_TRACE_ID);
+    });
+  });
+
+  describe("trace ID validation", () => {
+    test("throws ValidationError for non-hex trace ID", () => {
+      expect(() => parsePositionalArgs(["not-a-valid-trace-id"])).toThrow(
+        ValidationError
+      );
+    });
+
+    test("throws ValidationError for trace ID shorter than 32 chars", () => {
+      expect(() => parsePositionalArgs(["abc123"])).toThrow(ValidationError);
+    });
+
+    test("throws ValidationError for trace ID longer than 32 chars", () => {
+      expect(() => parsePositionalArgs([`${VALID_TRACE_ID}extra`])).toThrow(
+        ValidationError
+      );
+    });
+
+    test("throws ValidationError for trace ID with non-hex chars", () => {
+      expect(() =>
+        parsePositionalArgs(["aaaa1111bbbb2222cccc3333ddddgggg"])
+      ).toThrow(ValidationError);
+    });
+
+    test("ValidationError mentions the invalid trace ID", () => {
+      try {
+        parsePositionalArgs(["short-id"]);
+        expect.unreachable("Should have thrown");
+      } catch (error) {
+        expect(error).toBeInstanceOf(ValidationError);
+        expect((error as ValidationError).message).toContain("short-id");
+      }
+    });
+
+    test("validates trace ID in two-arg form as well", () => {
+      expect(() => parsePositionalArgs(["my-org", "not-valid-trace"])).toThrow(
+        ValidationError
+      );
+    });
+
+    test("validates trace ID in slash-separated form", () => {
+      expect(() => parsePositionalArgs(["my-org/short-id"])).toThrow(
+        ValidationError
+      );
+    });
+  });
+});
+
+// ============================================================================
+// logsCommand.func()
+// ============================================================================
+
+const TRACE_ID = "aaaa1111bbbb2222cccc3333dddd4444";
+const ORG = "test-org";
+
+const sampleLogs: TraceLog[] = [
+  {
+    id: "log001",
+    "project.id": 1,
+    trace: TRACE_ID,
+    severity_number: 9,
+    severity: "info",
+    timestamp: "2025-01-30T14:32:15+00:00",
+    timestamp_precise: "2025-01-30T14:32:15.123456+00:00",
+    message: "Request received",
+  },
+  {
+    id: "log002",
+    "project.id": 1,
+    trace: TRACE_ID,
+    severity_number: 13,
+    severity: "warn",
+    timestamp: "2025-01-30T14:32:16+00:00",
+    timestamp_precise: "2025-01-30T14:32:16.456789+00:00",
+    message: "Slow query detected",
+  },
+  {
+    id: "log003",
+    "project.id": 1,
+    trace: TRACE_ID,
+    severity_number: 17,
+    severity: "error",
+    timestamp: "2025-01-30T14:32:17+00:00",
+    timestamp_precise: "2025-01-30T14:32:17.789012+00:00",
+    message: "Database connection failed",
+  },
+];
+
+function createMockContext() {
+  const stdoutWrite = mock(() => true);
+  return {
+    context: {
+      stdout: { write: stdoutWrite },
+      stderr: { write: mock(() => true) },
+      cwd: "/tmp",
+      setContext: mock(() => {
+        // no-op for test
+      }),
+    },
+    stdoutWrite,
+  };
+}
+
+describe("logsCommand.func", () => {
+  let listTraceLogsSpy: ReturnType<typeof spyOn>;
+  let resolveOrgSpy: ReturnType<typeof spyOn>;
+
+  beforeEach(() => {
+    listTraceLogsSpy = spyOn(apiClient, "listTraceLogs");
+    resolveOrgSpy = spyOn(resolveTarget, "resolveOrg");
+  });
+
+  afterEach(() => {
+    listTraceLogsSpy.mockRestore();
+    resolveOrgSpy.mockRestore();
+  });
+
+  describe("JSON output mode", () => {
+    test("outputs JSON array when --json flag is set", async () => {
+      listTraceLogsSpy.mockResolvedValue(sampleLogs);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: true, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      const parsed = JSON.parse(output);
+      expect(Array.isArray(parsed)).toBe(true);
+      expect(parsed).toHaveLength(3);
+      expect(parsed[0].id).toBe("log001");
+    });
+
+    test("outputs empty JSON array when no logs found with --json", async () => {
+      listTraceLogsSpy.mockResolvedValue([]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: true, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      expect(JSON.parse(output)).toEqual([]);
+    });
+  });
+
+  describe("human-readable output mode", () => {
+    test("shows message about no logs when empty", async () => {
+      listTraceLogsSpy.mockResolvedValue([]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: false, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      expect(output).toContain("No logs found");
+      expect(output).toContain(TRACE_ID);
+    });
+
+    test("includes period in empty result message", async () => {
+      listTraceLogsSpy.mockResolvedValue([]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: false, web: false, period: "30d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      expect(output).toContain("30d");
+    });
+
+    test("renders log messages in output", async () => {
+      listTraceLogsSpy.mockResolvedValue(sampleLogs);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: false, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      expect(output).toContain("Request received");
+      expect(output).toContain("Slow query detected");
+      expect(output).toContain("Database connection failed");
+    });
+
+    test("shows count footer", async () => {
+      listTraceLogsSpy.mockResolvedValue(sampleLogs);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: false, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      expect(output).toContain("Showing 3 logs");
+      expect(output).toContain(TRACE_ID);
+    });
+
+    test("uses singular 'log' for exactly one result", async () => {
+      listTraceLogsSpy.mockResolvedValue([sampleLogs[0]]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: false, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      expect(output).toContain("Showing 1 log for trace");
+      expect(output).not.toContain("Showing 1 logs");
+    });
+
+    test("shows --limit tip when results match limit", async () => {
+      listTraceLogsSpy.mockResolvedValue(sampleLogs);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        // limit equals number of returned logs → hasMore = true
+        { json: false, web: false, period: "14d", limit: 3 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      expect(output).toContain("Use --limit to show more.");
+    });
+
+    test("does not show --limit tip when fewer results than limit", async () => {
+      listTraceLogsSpy.mockResolvedValue(sampleLogs);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: false, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      expect(output).not.toContain("Use --limit to show more.");
+    });
+  });
+
+  describe("org resolution", () => {
+    test("uses explicit org from first positional arg", async () => {
+      listTraceLogsSpy.mockResolvedValue([]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: true, web: false, period: "14d", limit: 100 },
+        ORG,
+        TRACE_ID
+      );
+
+      expect(resolveOrgSpy).toHaveBeenCalledWith({
+        org: ORG,
+        cwd: "/tmp",
+      });
+    });
+
+    test("passes undefined org when only trace ID given", async () => {
+      listTraceLogsSpy.mockResolvedValue([]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: true, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      expect(resolveOrgSpy).toHaveBeenCalledWith({
+        org: undefined,
+        cwd: "/tmp",
+      });
+    });
+
+    test("throws ContextError when org cannot be resolved", async () => {
+      resolveOrgSpy.mockResolvedValue(null);
+
+      const { context } = createMockContext();
+      const func = await logsCommand.loader();
+
+      await expect(
+        func.call(
+          context,
+          { json: false, web: false, period: "14d", limit: 100 },
+          TRACE_ID
+        )
+      ).rejects.toThrow(ContextError);
+    });
+
+    test("calls setContext with resolved org and empty project array", async () => {
+      listTraceLogsSpy.mockResolvedValue([]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: true, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      expect(context.setContext).toHaveBeenCalledWith([ORG], []);
+    });
+  });
+
+  describe("flag forwarding to API", () => {
+    test("passes traceId, period, limit, and query to listTraceLogs", async () => {
+      listTraceLogsSpy.mockResolvedValue([]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        {
+          json: false,
+          web: false,
+          period: "7d",
+          limit: 50,
+          query: "level:error",
+        },
+        TRACE_ID
+      );
+
+      expect(listTraceLogsSpy).toHaveBeenCalledWith(ORG, TRACE_ID, {
+        statsPeriod: "7d",
+        limit: 50,
+        query: "level:error",
+      });
+    });
+
+    test("passes undefined query when --query not provided", async () => {
+      listTraceLogsSpy.mockResolvedValue([]);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: false, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      expect(listTraceLogsSpy).toHaveBeenCalledWith(ORG, TRACE_ID, {
+        statsPeriod: "14d",
+        limit: 100,
+        query: undefined,
+      });
+    });
+  });
+
+  describe("--web flag", () => {
+    test("does not call listTraceLogs when --web is set", async () => {
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context } = createMockContext();
+      const func = await logsCommand.loader();
+      // --web would call openInBrowser which needs a real browser; catch any error
+      try {
+        await func.call(
+          context,
+          { json: false, web: true, period: "14d", limit: 100 },
+          TRACE_ID
+        );
+      } catch {
+        // openInBrowser may throw in test environment — that's OK
+      }
+
+      expect(listTraceLogsSpy).not.toHaveBeenCalled();
+    });
+  });
+
+  describe("chronological ordering", () => {
+    test("reverses API order (API returns newest-first, display is oldest-first)", async () => {
+      // API returns newest first: log003, log002, log001
+      const newestFirst = [...sampleLogs].reverse();
+      listTraceLogsSpy.mockResolvedValue(newestFirst);
+      resolveOrgSpy.mockResolvedValue({ org: ORG });
+
+      const { context, stdoutWrite } = createMockContext();
+      const func = await logsCommand.loader();
+      await func.call(
+        context,
+        { json: false, web: false, period: "14d", limit: 100 },
+        TRACE_ID
+      );
+
+      const output = stdoutWrite.mock.calls.map((c) => c[0]).join("");
+      // All three messages should appear in the output
+      const reqIdx = output.indexOf("Request received");
+      const slowIdx = output.indexOf("Slow query detected");
+      const dbIdx = output.indexOf("Database connection failed");
+
+      // After reversal, oldest (Request received) should appear before newest
+      expect(reqIdx).toBeLessThan(dbIdx);
+      expect(slowIdx).toBeLessThan(dbIdx);
+    });
+  });
+});
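The suite above pins down the contract of `parsePositionalArgs` without showing its implementation. The following is a minimal sketch consistent with the asserted behaviors only — the error classes, the 32-hex-char rule, and the slash-splitting logic are inferred from the tests, not taken from the actual source:

```typescript
// Hypothetical reconstruction of parsePositionalArgs from its tests.
// ContextError / ValidationError stand in for the CLI's real error types.
class ContextError extends Error {}
class ValidationError extends Error {}

// Tests require exactly 32 hex chars; mixed case is accepted and preserved.
const TRACE_ID_RE = /^[0-9a-f]{32}$/i;

function parsePositionalArgs(args: string[]): {
  orgArg?: string;
  traceId: string;
} {
  let orgArg: string | undefined;
  let traceId: string;

  if (args.length === 2) {
    // Two args: "<org> <traceId>"
    [orgArg, traceId] = args as [string, string];
  } else if (args[0]?.includes("/")) {
    // One arg: "<org>/<traceId>" — both halves must be non-empty
    const [org, id] = args[0].split("/");
    if (!org || !id) {
      throw new ContextError("Expected <org>/<trace-id>");
    }
    orgArg = org;
    traceId = id;
  } else {
    // One arg: bare trace ID
    traceId = args[0] ?? "";
  }

  if (!TRACE_ID_RE.test(traceId)) {
    // Message must mention the offending value (see the "short-id" test)
    throw new ValidationError(`Invalid trace ID: ${traceId}`);
  }
  return { orgArg, traceId };
}
```

Note the split between the two error types: malformed *shape* (empty half around the slash) is a `ContextError`, while a well-shaped but invalid trace ID is a `ValidationError` in every form — bare, slash-separated, and two-arg.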