Conversation
📝 Walkthrough
This PR introduces a Model Context Protocol (MCP) server for AI assistants like Claude, refactors the API with OpenAPI/Zod schema-driven routes, expands database service functions for analytics queries, adds comprehensive Vitest testing infrastructure with fixtures, updates API reference documentation, and configures CI with PostgreSQL and ClickHouse services for integration testing.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Claude
  participant MCP Server
  participant Auth
  participant SessionMgr
  participant Database
  participant Analytics
  Claude->>MCP Server: POST /mcp with token or Bearer auth
  MCP Server->>Auth: authenticateToken(token)
  Auth->>Database: Lookup client, verify password
  Database-->>Auth: Client context
  Auth-->>MCP Server: McpAuthContext (projectId, orgId, clientType)
  MCP Server->>SessionMgr: Create/store session (Redis)
  SessionMgr-->>MCP Server: Session ID
  MCP Server-->>Claude: Return session ID in response
  Claude->>MCP Server: Subsequent request with Mcp-Session-Id header
  MCP Server->>SessionMgr: Retrieve session context from Redis
  SessionMgr-->>MCP Server: McpAuthContext
  MCP Server->>Analytics: Execute tool (e.g., queryEvents)
  Analytics->>Database: Query with projectId filtering
  Database-->>Analytics: Results
  Analytics-->>MCP Server: Formatted tool result
  MCP Server-->>Claude: JSON-RPC response or SSE stream
```
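The session lifecycle in the flow above can be sketched with an in-memory stand-in for the Redis-backed session manager (all names below are illustrative, not the actual mcpSessionManager API):

```typescript
// Illustrative sketch only: an in-memory stand-in for the Redis-backed
// session manager in the diagram above. Field names mirror the diagram;
// the real implementation lives behind mcpSessionManager.
interface McpAuthContext {
  projectId: string;
  organizationId: string;
  clientType: 'read' | 'write' | 'root';
}

class InMemorySessionManager {
  private sessions = new Map<string, McpAuthContext>();

  // Create a session after authentication succeeds and return its ID.
  create(context: McpAuthContext): string {
    const sessionId = `mcp_${Math.random().toString(36).slice(2)}`;
    this.sessions.set(sessionId, context);
    return sessionId;
  }

  // Look up the context for a subsequent Mcp-Session-Id header.
  getContext(sessionId: string): McpAuthContext | undefined {
    return this.sessions.get(sessionId);
  }

  destroy(sessionId: string): boolean {
    return this.sessions.delete(sessionId);
  }
}
```

The real server stores the context in Redis so any API instance can serve the follow-up requests; the Map here only models the contract.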
```mermaid
sequenceDiagram
  participant Client
  participant Fastify
  participant Route Handler
  participant Controller
  participant Database
  Client->>Fastify: HTTP request with query/body
  Fastify->>Fastify: Validate with Zod schema (preValidation)
  alt Validation fails
    Fastify-->>Client: 400 Bad Request
  else Validation passes
    Fastify->>Route Handler: Call handler with validated req/reply
    Route Handler->>Controller: controller.methodName(req)
    Controller->>Database: Query with req.query/req.body fields
    Database-->>Controller: Results
    Controller-->>Route Handler: Response data
    Route Handler-->>Client: 200 JSON response with OpenAPI schema
  end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Actionable comments posted: 1
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/db/src/engine/index.ts (1)
135-140: ⚠️ Potential issue | 🟠 Major
Use query-builder.ts instead of sql-builder.ts for ClickHouse queries.
getAggregateChartSql currently uses createSqlBuilder from sql-builder.ts, but per coding guidelines, ClickHouse queries should use the custom query builder from ./packages/db/src/clickhouse/query-builder.ts. Migrate this function to use the clix pattern (e.g., clix().select(...).from(...).where(...)) like other services do (session, pages, profile, overview, event, etc.).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/engine/index.ts` around lines 135 - 140, getAggregateChartSql currently builds SQL with createSqlBuilder from sql-builder.ts; replace its internals to use the ClickHouse query builder (import clix from ./clickhouse/query-builder.ts) and migrate the SQL construction to the clix pattern (e.g., clix().select(...).from(...).where(...).groupBy(...).orderBy(...)) so the string passed to chQuery<ISerieDataItem>(getAggregateChartSql(...)) is produced by the ClickHouse-specific builder; keep the existing function name getAggregateChartSql and preserve the same query clauses/parameters but convert each builder call from createSqlBuilder/... to the equivalent clix chain used across session/pages/profile/overview/event services.
apps/api/src/controllers/manage.controller.ts (1)
44-55: ⚠️ Potential issue | 🟠 Major
Use z.string().datetime() for ISO 8601 datetime validation.
z.string() accepts any text, allowing invalid datetime strings to reach new Date(...) and potentially fail with Invalid Date errors. The codebase already uses z.string().datetime() in track validation (see packages/validation/src/track.validation.ts), which properly validates ISO 8601 datetime format at the schema boundary.
🛠️ Tighten the schema
```diff
 export const zCreateReference = z.object({
   projectId: z.string(),
   title: z.string().min(1),
   description: z.string().optional(),
-  datetime: z.string(),
+  datetime: z.string().datetime(),
 });

 export const zUpdateReference = z.object({
   title: z.string().min(1).optional(),
   description: z.string().optional(),
-  datetime: z.string().optional(),
+  datetime: z.string().datetime().optional(),
 });
```
Also applies to: 476, 509
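As a rough illustration of what z.string().datetime() guards against (this stdlib check is an approximation, not Zod's actual grammar):

```typescript
// Illustrative approximation of ISO 8601 datetime validation; Zod's
// z.string().datetime() is the real mechanism the review recommends,
// and its grammar is stricter than this sketch.
const ISO_DATETIME =
  /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})?$/;

function isIsoDatetime(value: string): boolean {
  // Shape check first, then reject strings Date cannot parse
  // (the "Invalid Date" failure mode the review describes).
  return ISO_DATETIME.test(value) && !Number.isNaN(new Date(value).getTime());
}
```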
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/controllers/manage.controller.ts` around lines 44 - 55, The datetime fields in the validation schemas zCreateReference and zUpdateReference currently use z.string() which accepts any text; change those fields to use z.string().datetime() to enforce ISO 8601 datetime validation (replace the datetime property in zCreateReference and the optional datetime in zUpdateReference with z.string().datetime()). Ensure you follow the same pattern used in the project's track validation so invalid datetimes are rejected at the schema boundary.
🟠 Major comments (21)
packages/logger/index.ts-67-70 (1)
67-70: ⚠️ Potential issue | 🟠 Major
Broaden query-param key matching to avoid leaking token variants.
At line 68, the regex only redacts exact key names (e.g., token=), but misses keys that contain those terms (e.g., access_token=, idToken=). This can leave secrets in logs.
🔐 Proposed fix
```diff
+const escapedSensitiveKeys = sensitiveKeys.map((key) =>
+  key.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'),
+);
 const sensitiveUrlParamPattern = new RegExp(
-  `([?&])(${sensitiveKeys.join('|')})=([^&]*)`,
+  `([?&])([^=&]*(?:${escapedSensitiveKeys.join('|')})[^=&]*)=([^&#]*)`,
   'gi',
 );
```
Also applies to: 72-73
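A runnable version of the broadened pattern, with an illustrative hard-coded key list standing in for the logger's sensitiveKeys:

```typescript
// Demonstrates the broadened matching: keys that merely contain a
// sensitive term (access_token, idToken, ...) are redacted too.
// The key list here is illustrative, not the logger's actual list.
const sensitiveKeys = ['token', 'password', 'secret'];

const escapedSensitiveKeys = sensitiveKeys.map((key) =>
  key.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'),
);

const sensitiveUrlParamPattern = new RegExp(
  `([?&])([^=&]*(?:${escapedSensitiveKeys.join('|')})[^=&]*)=([^&#]*)`,
  'gi',
);

function redactUrl(url: string): string {
  // Keep the key so logs stay debuggable, redact only the value.
  return url.replace(sensitiveUrlParamPattern, '$1$2=[REDACTED]');
}
```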
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/logger/index.ts` around lines 67 - 70, The current sensitiveUrlParamPattern only matches exact query parameter names and misses keys that contain sensitive substrings (e.g., access_token), so update the regex construction that uses sensitiveKeys to match parameter keys that contain any sensitive term anywhere in the key name; specifically, change the pattern building for sensitiveUrlParamPattern to capture the param key and allow the sensitive token to appear inside that key (for example using something like `([?&])([^=]*?(?:${sensitiveKeys.join('|')})[^=]*)=([^&]*)` with 'gi'), and apply the same broadened matching where the other URL/key-redaction regexes are built (the places referencing sensitiveUrlParamPattern and sensitiveKeys on lines 72-73) so keys containing the sensitive substrings are redacted.
biome.json-63-63 (1)
63-63: ⚠️ Potential issue | 🟠 Major
Re-enable unreachable-code detection to prevent silent logic bugs.
Line 63 disables noUnreachable, removing an important correctness safeguard. The codebase uses scoped biome-ignore suppressions elsewhere (e.g., for useTopLevelRegex), yet this rule has no such suppressions, meaning violations are silently hidden. Enable the rule and add scoped suppressions only where necessary.
Suggested config change
```diff
 "correctness": {
   "useExhaustiveDependencies": "off",
-  "noUnreachable": "off",
+  "noUnreachable": "error",
   "noGlobalDirnameFilename": "off"
 },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@biome.json` at line 63, Re-enable the unreachable-code rule by changing the "noUnreachable" setting from "off" to a strict level (e.g., "error") in the config so unreachable-code detection runs; then audit locations that currently rely on this being off and replace broad suppression with scoped biome-ignore comments only around the specific statements that need it (for example, patterns similar to existing scoped ignores for useTopLevelRegex), fixing any revealed logic issues or refining control flow in the functions/methods flagged by the rule.
apps/public/src/app/docs/api-reference/[[...slug]]/page.tsx-66-71 (1)
66-71: 🛠️ Refactor suggestion | 🟠 Major
generateStaticParams is ineffective with force-dynamic.
The export const dynamic = 'force-dynamic' directive causes Next.js to skip static generation entirely, making generateStaticParams() unused. If dynamic rendering is intentional (e.g., to always fetch fresh OpenAPI specs), consider removing generateStaticParams to avoid confusion. If static generation is desired, remove force-dynamic.
♻️ Option A: Remove generateStaticParams if dynamic is intentional
```diff
-export async function generateStaticParams() {
-  const source = await getApiReferenceSource();
-  return source.generateParams();
-}
-
 export const dynamic = 'force-dynamic';
```
♻️ Option B: Remove force-dynamic if static generation is desired
```diff
 export async function generateStaticParams() {
   const source = await getApiReferenceSource();
   return source.generateParams();
 }
-
-export const dynamic = 'force-dynamic';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public/src/app/docs/api-reference/`[[...slug]]/page.tsx around lines 66 - 71, The file exports both generateStaticParams() and export const dynamic = 'force-dynamic', which makes generateStaticParams ineffective; either remove generateStaticParams if you intend always-dynamic rendering, or remove the dynamic = 'force-dynamic' export to enable static generation; update the module by deleting the unused symbol (generateStaticParams) when keeping force-dynamic, or delete the dynamic export when keeping generateStaticParams (ensure getApiReferenceSource() and source.generateParams() remain intact if you keep static).
apps/worker/src/jobs/import.ts-36-39 (1)
36-39: ⚠️ Potential issue | 🟠 Major
Import job reads require primary database to ensure retry logic correctness.
The removal of $primary() from line 36 is unsafe without a read-consistency mechanism. The import record read is critical for determining retry state (currentStep, currentBatch). If the application uses read replicas, replication lag could cause the read to fail for newly created imports or return stale state that breaks retry/resume logic.
The sessionConsistency extension in the codebase is not applied to the db client and doesn't protect non-session contexts anyway. Either restore $primary() for this read, or confirm that:
- Read replicas are not in use in production, or
- The db client has been reconfigured with read-consistency protection
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/worker/src/jobs/import.ts` around lines 36 - 39, The read of the import record via db.import.findUniqueOrThrow (using importId) must use the primary DB to avoid replica lag; restore $primary() on that query (e.g., db.import.findUniqueOrThrow({ where: { id: importId }, include: { project: true } }).$primary()) or otherwise ensure the db client is configured for session/read-consistency; update the code path that reads currentStep/currentBatch to call db.import.findUniqueOrThrow with $primary() (or replace with a primary-bound helper) so retry/resume logic always sees up-to-date state.
docker/clickhouse/clickhouse-user-config.xml-11-14 (1)
11-14: ⚠️ Potential issue | 🟠 Major
Do not leave the ClickHouse default user password empty.
An empty password combined with wide network access allows unauthenticated access to exposed ClickHouse ports. Set a non-empty password (preferably hashed) and/or restrict allowed networks.
🔒 Suggested hardening
```diff
-    <password></password>
+    <password>change-me-in-local-env</password>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/clickhouse/clickhouse-user-config.xml` around lines 11 - 14, The ClickHouse user config currently leaves the <password> element empty and allows unrestricted access via <networks><ip>::/0</ip>; update the <password> element to a non-empty, securely hashed password (or reference a secret manager) and tighten the <networks><ip> entry to specific CIDR(s) or remove the open ::/0 entry to limit allowed source IPs so the default user cannot be accessed anonymously from the network.
packages/db/src/services/dashboard.service.ts-42-50 (1)
42-50: ⚠️ Potential issue | 🟠 Major
organizationId is ignored in a tenant-scoped function.
The function accepts organizationId but does not use it in the query, which weakens scoping guarantees and creates a misleading API contract.
✅ Suggested fix
```diff
 export async function listDashboardsCore(input: {
   projectId: string;
   organizationId: string;
 }) {
   return db.dashboard.findMany({
-    where: { projectId: input.projectId },
+    where: {
+      projectId: input.projectId,
+      project: { organizationId: input.organizationId },
+    },
     orderBy: { createdAt: 'desc' },
     select: { id: true, name: true, projectId: true },
   });
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/dashboard.service.ts` around lines 42 - 50, The function listDashboardsCore accepts organizationId but ignores it in the query; update the db.dashboard.findMany call to include organizationId in the where clause (e.g., where: { projectId: input.projectId, organizationId: input.organizationId }) so the tenant scope is enforced; locate the listDashboardsCore function and modify the where filter passed to db.dashboard.findMany to include input.organizationId while preserving the existing orderBy and select fields.
apps/api/src/routes/manage.router.ts-54-63 (1)
54-63: ⚠️ Potential issue | 🟠 Major
Don't collapse every resolveClientProjectId failure into 403.
The bare catch on lines 61-63 will also turn transient database or Prisma failures from line 55 into "Project does not belong to your organization". That hides real outages and makes valid requests fail as authorization errors. Please only map the expected access-denied/not-found case to 403 and let unexpected exceptions bubble as 500.
🤖 Prompt for AI Agents
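The narrowed catch described above can be sketched like this; ProjectAccessError is a hypothetical error class standing in for whatever resolveClientProjectId actually throws on access denial:

```typescript
// Sketch of narrowing the catch: only the expected access-denied case
// becomes a 403; anything else (DB outage, bug) is rethrown to surface
// as a 500. ProjectAccessError is an illustrative name.
class ProjectAccessError extends Error {}

function resolveProjectIdOr403(
  resolve: () => string,
): { status: 200; projectId: string } | { status: 403; error: string } {
  try {
    return { status: 200, projectId: resolve() };
  } catch (error) {
    if (error instanceof ProjectAccessError) {
      return { status: 403, error: 'Project does not belong to your organization' };
    }
    throw error; // transient DB/Prisma failures keep bubbling as 500
  }
}
```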
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/manage.router.ts` around lines 54 - 63, The catch block around resolveClientProjectId is too broad and turns any failure (DB/Prisma/outage) into a 403; instead only translate the specific "not found / access denied" condition into reply.status(403) and rethrow or let other errors bubble. Update the code that calls resolveClientProjectId to inspect the thrown error (e.g. error.name, error.code or a custom Error subclass from resolveClientProjectId) and only return reply.status(403).send({ error: 'Forbidden', message: 'Project does not belong to your organization' }) when the error indicates the project is not in the organization (e.g. ProjectNotFoundError / AccessDeniedError / code === 'PROJECT_NOT_IN_ORG'); for any other error rethrow or don't catch so it surfaces as a 500. Ensure you reference resolveClientProjectId and the existing reply.status(403) handling when making this change.
apps/api/src/utils/graceful-shutdown.ts-48-59 (1)
48-59: ⚠️ Potential issue | 🟠 Major
Start MCP teardown in parallel with or before fastify.close() to prevent shutdown deadlock.
By default, fastify.close() blocks until all in-flight HTTP requests complete. If MCP sessions maintain long-lived HTTP connections (e.g., streams, SSE, long polling), fastify.close() will wait indefinitely for them to end, meaning the mcpSessionManager.destroy() call at line 58 is never reached. Start session destruction in parallel or first to allow MCP teardown to cancel these connections while Fastify drains, preventing shutdown from hanging on active MCP connections.
🤖 Prompt for AI Agents
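The parallel teardown can be sketched with Promise.allSettled, where close and destroy stand in for fastify.close() and mcpSessionManager.destroy():

```typescript
// Sketch: start MCP session teardown and the HTTP server close together,
// so a hung drain cannot prevent session destruction. close/destroy are
// stand-ins for fastify.close() and mcpSessionManager.destroy().
async function shutdown(
  close: () => Promise<void>,
  destroy: () => Promise<void>,
): Promise<{ closed: boolean; destroyed: boolean }> {
  const [closeResult, destroyResult] = await Promise.allSettled([
    close(),   // drains in-flight HTTP requests
    destroy(), // cancels long-lived MCP connections in parallel
  ]);
  return {
    closed: closeResult.status === 'fulfilled',
    destroyed: destroyResult.status === 'fulfilled',
  };
}
```

Because allSettled never short-circuits, a failure (or hang guarded by a timeout) in one step still lets the other finish and be logged.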
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/utils/graceful-shutdown.ts` around lines 48 - 59, Start MCP teardown in parallel with Fastify shutdown to avoid deadlock: initiate mcpSessionManager.destroy() before awaiting fastify.close() (or kick off both without awaiting and then await Promise.allSettled) so MCP can cancel long-lived connections while Fastify drains; update the shutdown sequence around fastify.close() and mcpSessionManager.destroy() to run concurrently and still log successes/errors via logger.info/logger.error for both operations.
packages/mcp/src/tools/analytics/events.ts-59-70 (1)
59-70: ⚠️ Potential issue | 🟠 Major
Remove .optional() from the limit schema; the default won't be applied by Zod.
Line 64 chains .optional() after .default(20), which in Zod 4 causes the outer optional to short-circuit and return undefined when the input is omitted, bypassing the inner default. Although queryEventsCore has a fallback (input.limit ?? 20), the schema should express the actual contract: limit is optional to the caller, but Zod should inject the default before passing to the handler.
Suggested fix
```diff
 limit: z
   .number()
   .min(1)
   .max(100)
   .default(20)
-  .optional()
   .describe('Maximum number of events to return (1-100, default 20)'),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/mcp/src/tools/analytics/events.ts` around lines 59 - 70, The limit schema currently chains .optional() after .default(20) which prevents Zod from applying the default; remove the .optional() call on the limit schema so Zod will inject the default value (20) before the handler runs. Update the schema definition around the symbol "limit" in this file and keep the handler flow that calls resolveProjectId and queryEventsCore unchanged (queryEventsCore can still use its fallback but will now receive the default from Zod).
packages/db/src/services/reports.service.ts-132-151 (1)
132-151: ⚠️ Potential issue | 🟠 Major
Enforce project/org ownership before listing dashboard reports.
listReportsCore() currently calls getReportsByDashboardId(input.dashboardId) and ignores both projectId and organizationId, so anyone who can supply a dashboard ID can enumerate another project's report metadata through this core path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/reports.service.ts` around lines 132 - 151, listReportsCore ignores input.projectId and input.organizationId and calls getReportsByDashboardId directly, allowing unauthorized enumeration; update listReportsCore to first load the dashboard (e.g., via getDashboardById or a repository method) and verify dashboard.projectId === input.projectId and dashboard.organizationId === input.organizationId (or call a shared authorization helper), throwing an authorization/error if they don't match, before calling getReportsByDashboardId and returning the mapped reports; reference listReportsCore, getReportsByDashboardId, input.projectId, and input.organizationId when making the change.
apps/api/src/routes/mcp.router.ts-67-74 (1)
67-74: ⚠️ Potential issue | 🟠 Major
Don't widen DELETE authorization from project scope to org scope.
The delete guard only compares organizationId. For project-scoped read clients, that lets one project close another project's session inside the same org if the session ID leaks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/mcp.router.ts` around lines 67 - 74, The current deletion guard only compares context.organizationId and thus allows project-scoped clients to delete sessions from other projects in the same org; update the guard in the delete route (where mcpSessionManager.getContext is used) to enforce project-level authorization for project-scoped callers by also checking context.projectId === callerContext.projectId when the callerContext indicates a project-scoped client (e.g., callerContext.projectId or a scope flag). If callerContext has no projectId (org-scoped), keep the existing orgId check; otherwise deny with 403. Ensure this logic is applied before the reply.status(...).send calls.
packages/db/src/services/sankey.service.ts-798-825 (1)
798-825: ⚠️ Potential issue | 🟠 Major
Keep the exported core wrapper behind the existing Sankey input bounds.
getUserFlowCore() forwards raw steps/filters straight into getSankey(), bypassing zGetSankeyInput. That means this new entrypoint can accept steps < 2 or very large values even though the original service explicitly capped the query shape at 2–10.
🤖 Prompt for AI Agents
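The invariant the wrapper must restore can be sketched as a clamp (illustrative only; the preferred fix is to reuse zGetSankeyInput itself):

```typescript
// Illustrative clamp mirroring the 2-10 bound zGetSankeyInput enforces
// on steps. The preferred fix is to run the raw input through
// zGetSankeyInput; this only shows the invariant the wrapper needs
// before calling getSankey(). clampSteps is a hypothetical helper.
function clampSteps(steps: number | undefined, fallback = 5): number {
  const value = Number.isFinite(steps as number) ? (steps as number) : fallback;
  return Math.min(10, Math.max(2, Math.trunc(value)));
}
```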
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/sankey.service.ts` around lines 798 - 825, getUserFlowCore is forwarding raw steps and filters into sankeyService.getSankey, bypassing zGetSankeyInput bounds; restore the same input validation/sanitization by feeding the input through the existing zGetSankeyInput (or replicate its logic) before calling sankeyService.getSankey so steps is clamped to the allowed range (e.g., 2–10) and include/exclude are normalized. Specifically, in getUserFlowCore call zGetSankeyInput or apply the same normalization used by zGetSankeyInput to compute steps, include and exclude, then pass those sanitized values (not raw input.steps/include/exclude) into sankeyService.getSankey to enforce the original query shape constraints.
packages/mcp/src/tools/dashboard-links.ts-1-1 (1)
1-1: 🛠️ Refactor suggestion | 🟠 Major
Remove empty import statement.
The empty import import { } from '@openpanel/db'; has no effect and should be removed.
🔧 Remove empty import
```diff
-import { } from '@openpanel/db';
 import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/mcp/src/tools/dashboard-links.ts` at line 1, Remove the no-op empty import statement `import { } from '@openpanel/db';` from the top of the file; simply delete that line so there are no unused/empty imports left in packages/mcp/src/tools/dashboard-links.ts and ensure any required symbols from '@openpanel/db' are imported explicitly if actually needed elsewhere in this module.
apps/api/src/controllers/insights.controller.ts-224-234 (1)
224-234: ⚠️ Potential issue | 🟠 Major
cursor and limit are ignored here.
This handler still accepts pagination fields, but it no longer forwards them to overviewService.getTopPages. Clients will always get the service default window, so paging and explicit caps stop working.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/controllers/insights.controller.ts` around lines 224 - 234, The getPages handler ignores pagination by not forwarding cursor/limit from the request to overviewService.getTopPages; update getPages to extract cursor and limit from req.query (e.g., req.query.cursor and req.query.limit) and pass them into the call to overviewService.getTopPages (add them to the options object alongside projectId, filters, startDate, endDate, timezone) so client-provided pagination parameters are respected.
packages/db/src/services/pages.service.ts-182-195 (1)
182-195: ⚠️ Potential issue | 🟠 Major
limit is accepted but never applied.
getTopPagesCore advertises limit?: number, yet the wrapper drops it before calling OverviewService.getTopPages. Callers will always get the downstream default, which changes both response size and expected semantics.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/pages.service.ts` around lines 182 - 195, getTopPagesCore accepts a limit parameter but never forwards it to the downstream call; update getTopPagesCore to pass the input.limit into _overviewServiceForPages.getTopPages (or provide a sensible default before forwarding) so callers get the requested page count rather than the downstream default. Locate the getTopPagesCore function and add the limit field to the object passed to _overviewServiceForPages.getTopPages, ensuring the property name matches the downstream API (limit) and preserves optional behavior when input.limit is undefined.
packages/db/src/services/pages.service.ts-225-253 (1)
225-253: ⚠️ Potential issue | 🟠 Major
This ranking is computed from a truncated candidate set.
pagesService.getTopPages already orders by sessions DESC, so fetching only the first 1,000 rows and then re-sorting in memory cannot return the true top pages for pageviews, bounce_rate, or avg_duration once a project has more than 1,000 pages. total_pages is also capped at that same truncated subset.
🤖 Prompt for AI Agents
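A tiny synthetic example shows why re-sorting a truncated candidate set is wrong: a page outside the sessions-ordered top-K can still lead on another metric, so it never reaches the in-memory sort. Data and names below are made up:

```typescript
// Demonstrates the truncation bug: taking the top-K by sessions and then
// re-sorting that subset by pageviews misses the true pageviews leader.
// All data here is synthetic.
interface PageStat { path: string; sessions: number; pageviews: number }

const allPages: PageStat[] = [
  { path: '/a', sessions: 100, pageviews: 120 },
  { path: '/b', sessions: 90, pageviews: 110 },
  { path: '/c', sessions: 10, pageviews: 900 }, // true pageviews leader
];

function topByPageviews(pages: PageStat[], candidateLimit: number): string {
  const candidates = [...pages]
    .sort((a, b) => b.sessions - a.sessions) // what the DB query orders by
    .slice(0, candidateLimit);               // the 1,000-row cap, scaled down
  return [...candidates].sort((a, b) => b.pageviews - a.pageviews)[0].path;
}
```

With a candidate cap of 2, '/c' is cut before the pageviews sort ever sees it, which is exactly why the ORDER BY has to happen in the database.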
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/pages.service.ts` around lines 225 - 253, The current code truncates to 1000 rows after calling pagesService.getTopPages (then re-sorts in memory), which produces incorrect top results and an incorrect total when projects exceed 1000 pages; fix by pushing sorting and limiting into the data layer: update the call to pagesService.getTopPages to accept and honor input.sortBy and input.sortOrder (and use a sensible DB-side limit or no limit when computing totals), request either the full candidate set or let the DB perform ORDER BY (e.g., pass input.limit or null) so the returned pages are already correctly ordered, and use the total count returned by the service (instead of pages.length) for total_pages; refer to pagesService.getTopPages, input.sortBy, input.sortOrder, pages, and total_pages when making these changes.
apps/api/src/app.ts-159-169 (1)
159-169: ⚠️ Potential issue | 🟠 Major
Don't persist raw tRPC input and session objects in error logs.
ctx.input and ctx.ctx?.session can contain emails, identifiers, and other customer data. Logging them on every procedure failure creates an avoidable PII trail; log a small redacted set of fields instead.
🤖 Prompt for AI Agents
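The redaction can be sketched as a whitelist projection (field names are illustrative; choosing the safe set is a product decision):

```typescript
// Sketch: instead of logging raw ctx.input / session objects, project
// the payload down to a small whitelist of non-PII fields before it
// reaches the logger. SAFE_LOG_FIELDS is illustrative, not the app's list.
const SAFE_LOG_FIELDS = ['path', 'type', 'code'] as const;

function sanitizeForLog(payload: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const key of SAFE_LOG_FIELDS) {
    if (key in payload) out[key] = payload[key];
  }
  return out; // anything not whitelisted (input, session, ...) is dropped
}
```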
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/app.ts` around lines 159 - 169, The onError handler currently logs raw ctx.input and ctx.ctx?.session (in the ctx.req.log.error call), which can contain PII; change the handler to stop persisting those raw objects and instead log a minimal, redacted payload (e.g., ctx.path, ctx.type, error metadata and a sanitized/small whitelist of non-PII fields). Implement or call a small sanitizer (e.g., sanitizeTrpcInput/sanitizeSession) before logging and replace ctx.input and ctx.ctx?.session in the log payload with the sanitized versions or omitted fields only.
packages/db/src/services/pages.service.ts-274-315 (1)
274-315: 🛠️ Refactor suggestion | 🟠 Major
Please rewrite this with the shared ClickHouse builder.
This new raw-SQL path bypasses the repository abstraction and reintroduces manual escaping, hard-coded table names, and direct date conversion in a file that otherwise uses clix.
As per coding guidelines, "When writing ClickHouse queries, always use the custom query builder from ./packages/db/src/clickhouse/query-builder.ts and ./packages/db/src/clickhouse/query-functions.ts."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/pages.service.ts` around lines 274 - 315, The function currently builds raw SQL using manual escaping and hard-coded table/SQL functions (see chQuery, IPageConversionRow and variables projectId, conversionEvent, startDate, endDate, windowHours, limit) which bypasses the shared ClickHouse abstraction; rewrite this query to use the shared ClickHouse query builder and query-functions (from clickhouse/query-builder.ts and clickhouse/query-functions.ts) instead of string interpolation — replace the WITH CTEs (conversion_events, views_before_conversions, total_visitors) with builder subqueries, use parameterized builder methods for project_id, event name and date ranges (use the query-functions for toDateTime/interval/uniq/round), build the LEFT JOIN, GROUP BY, ORDER BY and LIMIT via builder APIs, and return the built query via chQuery<IPageConversionRow> so no manual sqlstring.escape or direct SQL literals remain.
apps/api/src/controllers/insights.controller.ts-432-454 (1)
432-454: ⚠️ Potential issue | 🟠 Major
range is silently dropped on the events endpoint.
zEventsQuery exposes range, but queryEventsCore only resolves explicit startDate/endDate. Passing req.query straight through means requests like ?range=30d fall back to the core default window instead of the requested one.
Possible fix
```diff
 export async function queryEvents(
   req: FastifyRequest<{
     Params: { projectId?: string };
     Querystring: z.infer<typeof zEventsQuery>;
   }>,
   reply: FastifyReply
 ) {
   const projectId = await getProjectId(req as RequestWithProjectParam);
-  return reply.send(await queryEventsCore({ projectId, ...req.query }));
+  const { startDate, endDate } = await resolveDates(projectId, req.query);
+  const { range: _ignoredRange, ...rest } = req.query;
+  return reply.send(
+    await queryEventsCore({ projectId, ...rest, startDate, endDate })
+  );
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/controllers/insights.controller.ts` around lines 432 - 454, zEventsQuery includes a shorthand "range" field but queryEvents forwards req.query directly to queryEventsCore which only honors explicit startDate/endDate, so range values like "30d" are ignored; update the queryEvents handler to detect and convert the incoming range (from zEventsQuery) into concrete startDate and endDate before calling queryEventsCore (or map it to the core's expected params), e.g. parse req.query.range into startDate/endDate and pass { projectId, ...req.query, startDate, endDate } to queryEventsCore so the requested window is respected; adjust use of symbols zEventsQuery, queryEvents, queryEventsCore, req.query, range, startDate, endDate accordingly.
packages/db/src/services/event.service.ts-1284-1291 (1)
1284-1291: ⚠️ Potential issue | 🟠 Major
Add a deterministic sort before LIMIT.
Without an ORDER BY, ClickHouse can return any matching 20 rows. That makes queryEventsCore nondeterministic and it won't reliably surface the latest matching events.
Possible fix
```diff
-  return builder.limit(input.limit ?? 20).execute();
+  return builder
+    .orderBy('created_at', 'DESC')
+    .orderBy('id', 'DESC')
+    .limit(input.limit ?? 20)
+    .execute();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/event.service.ts` around lines 1284 - 1291, queryEventsCore (the builder usage around resolveDateRange and builder.where('created_at', 'BETWEEN', ...)) is nondeterministic because it applies LIMIT without an ORDER BY; add a deterministic ORDER BY before calling builder.limit(...). Specifically, order by created_at descending to surface latest events (and include a tiebreaker unique key/PK if available, e.g., id or event_id) by invoking the builder's ordering method prior to builder.limit to ensure consistent results.
packages/db/src/services/event.service.ts-1190-1204 (1)
1190-1204: ⚠️ Potential issue | 🟠 Major
Deduplicate property_value before applying the 200-row cap.
This query limits raw rows, not unique values. For popular properties, repeated property_values will consume the window and callers will miss other valid options.
Possible fix
```diff
-const rows = await clix(ch)
-  .select<{ value: string }>(['property_value as value'])
+const rows = await clix(ch)
+  .select<{ value: string; last_seen: string }>([
+    'property_value as value',
+    'max(created_at) as last_seen',
+  ])
   .from(TABLE_NAMES.event_property_values_mv)
   .where('project_id', '=', input.projectId)
   .where('name', '=', input.eventName)
   .where('property_key', '=', input.propertyKey)
-  .orderBy('created_at', 'DESC')
+  .groupBy(['property_value'])
+  .orderBy('last_seen', 'DESC')
   .limit(200)
   .execute();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/event.service.ts` around lines 1190 - 1204, The query currently limits raw rows so duplicate property_value entries can crowd out unique options; change the query against TABLE_NAMES.event_property_values_mv to deduplicate by property_value before applying the 200-row limit (e.g., groupBy('property_value') and select property_value as value with max('created_at') as max_created_at, then orderBy('max_created_at', 'DESC').limit(200)) so the returned rows variable contains up to 200 unique values; keep the returned shape (event, property, values: rows.map(r => r.value)).
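The grouping in the suggested SQL can be mirrored in plain TypeScript: collapse to unique values keeping the latest timestamp, then order by recency and cap (the data and the cap below are synthetic; the real cap is 200):

```typescript
// Sketch of dedupe-before-limit: one entry per property value, keeping
// the most recent timestamp, then order by recency and apply the cap.
// Mirrors GROUP BY property_value + max(created_at) in the suggested SQL.
interface ValueRow { value: string; createdAt: number }

function uniqueRecentValues(rows: ValueRow[], limit: number): string[] {
  const latest = new Map<string, number>();
  for (const row of rows) {
    const seen = latest.get(row.value);
    if (seen === undefined || row.createdAt > seen) {
      latest.set(row.value, row.createdAt); // keep the newest sighting
    }
  }
  return [...latest.entries()]
    .sort((a, b) => b[1] - a[1]) // most recently seen first
    .slice(0, limit)
    .map(([value]) => value);
}
```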
🟡 Minor comments (10)
apps/public/content/docs/api/manage/clients.mdx-34-34 (1)

34: ⚠️ Potential issue | 🟡 Minor

Use the US-English variant (afterward) for consistency.

Line 34 uses "afterwards"; switch to "afterward" to match the American English style used elsewhere.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/public/content/docs/api/manage/clients.mdx` at line 34, Replace the US/UK inconsistent word in the sentence "Client secrets are only returned once at creation time and are never retrievable afterwards." by changing "afterwards" to the US-English variant "afterward" so the sentence reads "Client secrets are only returned once at creation time and are never retrievable afterward."

packages/db/src/prisma-client.ts-91-118 (1)
91-118: ⚠️ Potential issue | 🟡 Minor

Missing `SELF_HOSTED` check in `isActive` and `isTrial` computed fields.

All other subscription-related computed fields (`isCanceled`, `isWillBeCanceled`, `isExpired`, `isExceeded`, etc.) return a safe default when `process.env.SELF_HOSTED === 'true'`, but `isActive` and `isTrial` do not. This inconsistency could cause unexpected behavior in self-hosted deployments.

Proposed fix:

```diff
 isActive: {
   needs: {
     subscriptionStatus: true,
     subscriptionEndsAt: true,
     subscriptionCanceledAt: true,
   },
   compute(org) {
+    if (process.env.SELF_HOSTED === 'true') {
+      return true;
+    }
+
     return (
       org.subscriptionStatus === 'active' &&
       org.subscriptionEndsAt &&
       org.subscriptionEndsAt > new Date() &&
       !isCanceled(org) &&
       !isWillBeCanceled(org)
     );
   },
 },
 isTrial: {
   needs: { subscriptionStatus: true, subscriptionEndsAt: true },
   compute(org) {
+    if (process.env.SELF_HOSTED === 'true') {
+      return false;
+    }
+
     const isSubscriptionInFuture =
       org.subscriptionEndsAt && org.subscriptionEndsAt > new Date();

     return (
       (org.subscriptionStatus === 'trialing' ||
         org.subscriptionStatus === null) &&
       isSubscriptionInFuture
     );
   },
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/prisma-client.ts` around lines 91 - 118, The computed fields isActive and isTrial in prisma-client.ts lack the SELF_HOSTED guard that other subscription helpers (isCanceled, isWillBeCanceled, isExpired, isExceeded) use; update the compute functions for isActive and isTrial to first check process.env.SELF_HOSTED === 'true' and return the same safe default used elsewhere (e.g., false) when self-hosted, then proceed with the existing logic (keep references to org.subscriptionStatus, org.subscriptionEndsAt and calls to isCanceled/isWillBeCanceled in isActive). Ensure both isActive and isTrial mirror the defensive pattern used by the other computed fields.

packages/db/src/services/gsc.service.ts-40-41 (1)
40-41: ⚠️ Potential issue | 🟡 Minor

Duplicate filtering: the `minImpressions` threshold is applied twice with different values.

The `computeOpportunities` function has a hardcoded filter `q.impressions >= 50` at line 41, but the caller `gscGetQueryOpportunitiesCore` (lines 159-161) already filters by `minImpressions` (defaulting to 50). If `minImpressions` is set below 50 (e.g., 10), the hardcoded filter here still drops queries with 10–49 impressions, making the parameter ineffective for this condition.

Proposed fix: remove the hardcoded impressions filter from `computeOpportunities`:

```diff
 return queries
-  .filter((q) => q.position >= 4 && q.position <= 20 && q.impressions >= 50)
+  .filter((q) => q.position >= 4 && q.position <= 20)
   .map((q) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/gsc.service.ts` around lines 40 - 41, computeOpportunities currently hardcodes q.impressions >= 50 which duplicates and overrides the caller's minImpressions; remove the hardcoded filter (or replace it to use a minImpressions parameter) so computeOpportunities respects the caller-provided minImpressions from gscGetQueryOpportunitiesCore, and ensure any references to the impressions threshold reference the passed-in minImpressions variable rather than the literal 50 (locate computeOpportunities and gscGetQueryOpportunitiesCore to apply the change).

packages/mcp/src/tools/analytics/engagement.ts-30-47 (1)
30-47: ⚠️ Potential issue | 🟡 Minor

The 60-day churn bucket is off by one.

Line 34 still counts `days === 60` into `inactive_31_to_60_days`, but line 47 exposes the last bucket as `churned_60_plus_days`. Please align the threshold or rename the bucket so the summary is unambiguous.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/mcp/src/tools/analytics/engagement.ts` around lines 30 - 47, The 60-day boundary is inconsistent: change the loop condition "else if (row.days <= 60) active_31_60 += row.users;" to "else if (row.days <= 59) active_31_60 += row.users;" so that days === 60 falls into the churned bucket, and update the summary key name "active_31_to_60_days" to "active_31_to_59_days" (while keeping "churned_60_plus_days" and the variables active_31_60 and churned_60_plus) so the labels and thresholds align unambiguously.

packages/mcp/src/tools/gsc/queries.ts-42-43 (1)
42-43: ⚠️ Potential issue | 🟡 Minor

`minImpressions` below 50 is silently ignored.

The handler already filters by the caller-supplied threshold, but `computeOpportunities()` applies a second hard-coded `impressions >= 50` filter. Requests like `minImpressions: 10` will still drop all 10–49 impression queries.

🛠️ Suggested fix:

```diff
 function computeOpportunities(
   queries: Array<{
     query: string;
     clicks: number;
     impressions: number;
     ctr: number;
     position: number;
   }>,
 ): GscQueryOpportunity[] {
   return queries
-    .filter((q) => q.position >= 4 && q.position <= 20 && q.impressions >= 50)
+    .filter((q) => q.position >= 4 && q.position <= 20)
     .map((q) => {
       const benchmark = getBenchmark(q.position);
       const ctrGap = Math.max(0, benchmark - q.ctr);
```

Also applies to: 115-123
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/mcp/src/tools/gsc/queries.ts` around lines 42 - 43, The code in computeOpportunities currently applies a hard-coded impressions >= 50 filter (the queries.filter(...) call) which overrides caller-supplied minImpressions; update computeOpportunities to accept and use the minImpressions parameter (or passed-in threshold) instead of 50 wherever queries are filtered by impressions (including the other similar filter further down), replacing the literal 50 with the minImpressions variable and ensuring the function signature/locals pass that value through to both filter calls (preserve the existing position bounds logic).

packages/mcp/src/tools/analytics/reports.ts-99-105 (1)
99-105: ⚠️ Potential issue | 🟡 Minor

Inconsistent error handling: returns error object instead of throwing.

`get_report_data` returns `{ error: '...' }` objects for "not found" and "wrong project" cases, while other tools throw errors that are caught by `withErrorHandling`. This makes error handling inconsistent for consumers.

🔧 Proposed fix for consistent error handling:

```diff
 if (!report) {
-  return { error: 'Report not found', reportId };
+  throw new Error(`Report not found: ${reportId}`);
 }

 if (report.projectId !== projectId) {
-  return { error: 'Report does not belong to this project', reportId };
+  throw new Error('Report does not belong to this project');
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/mcp/src/tools/analytics/reports.ts` around lines 99 - 105, The get_report_data function currently returns error objects ({ error: '...' }) for "Report not found" and "Report does not belong to this project" cases, causing inconsistent handling compared to other tools that throw and are wrapped by withErrorHandling; change those return statements inside get_report_data to throw Error instances (or the same custom error type used elsewhere) with the same messages so callers and withErrorHandling receive exceptions consistently (update the two checks referencing report and report.projectId to throw new Error('Report not found') and throw new Error('Report does not belong to this project') respectively).

packages/mcp/src/tools/analytics/page-performance.ts-64-67 (1)
64-67: ⚠️ Potential issue | 🟡 Minor

Sorting comparison doesn't handle equality, causing unstable sort.

The comparison `(a[col] ?? 0) < (b[col] ?? 0) ? -1 : 1` returns `1` when values are equal instead of `0`. This causes unstable sorting behavior where equal items may be reordered arbitrarily.

🔧 Proposed fix for stable sorting:

```diff
 const col = sortBy ?? 'sessions';
 const dir = sortOrder === 'asc' ? 1 : -1;
-const sorted = [...pages].sort((a, b) => dir * ((a[col] ?? 0) < (b[col] ?? 0) ? -1 : 1));
+const sorted = [...pages].sort((a, b) => {
+  const aVal = a[col] ?? 0;
+  const bVal = b[col] ?? 0;
+  return dir * (aVal < bVal ? -1 : aVal > bVal ? 1 : 0);
+});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/mcp/src/tools/analytics/page-performance.ts` around lines 64 - 67, The sort comparator used when building sorted from pages is unstable because it returns 1 for equal values; update the comparator in the sorted assignment (the sort callback that uses col, dir, pages, sortBy, sortOrder) to compute aVal = (a[col] ?? 0) and bVal = (b[col] ?? 0) and return dir * (aVal === bVal ? 0 : aVal < bVal ? -1 : 1) so equal items return 0 and maintain stability; then slice into results as before with limit.

packages/db/src/services/date.service.ts-173-182 (1)
173-182: ⚠️ Potential issue | 🟡 Minor

Inconsistent boundary handling for the `lastYear` range.

Other ranges like `7d`, `6m`, `12m`, `monthToDate`, and `lastMonth` add `plus({ millisecond: 1 })` to the `endDate`, but `lastYear` doesn't. This inconsistency could cause subtle off-by-one behavior when comparing year-over-year data.

🔧 Suggested fix:

```diff
 if (range === 'lastYear') {
   const year = DateTime.now().minus({ year: 1 }).setZone(timezone);
   const startDate = year.startOf('year').toFormat('yyyy-MM-dd HH:mm:ss');
-  const endDate = year.endOf('year').toFormat('yyyy-MM-dd HH:mm:ss');
+  const endDate = year
+    .endOf('year')
+    .plus({ millisecond: 1 })
+    .toFormat('yyyy-MM-dd HH:mm:ss');

   return {
     startDate,
     endDate,
   };
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/date.service.ts` around lines 173 - 182, The lastYear branch in date.service.ts returns endDate without the millisecond bump unlike other branches (e.g., the handlers for '7d', '6m', '12m', 'monthToDate', 'lastMonth'); update the lastYear computation (the block using DateTime.now().minus({ year: 1 }).setZone(timezone) and the endDate variable) to add plus({ millisecond: 1 }) before formatting so endDate uses the same inclusive boundary handling as the other ranges.

packages/db/src/services/profile.service.ts-472-496 (1)
472-496: ⚠️ Potential issue | 🟡 Minor

Potential false positive on null check.

`getProfileMetrics` (line 101) uses `data[0]!` with a non-null assertion. If ClickHouse returns an empty result set, `data[0]` is `undefined` and the assertion lies. The null check at line 477 would never catch this case because `raw` would be the asserted `undefined` (runtime error before reaching the check).

Consider updating `getProfileMetrics` to handle empty results gracefully:

```diff
 // In getProfileMetrics (around line 101)
-  .then((data) => data[0]!)
+  .then((data) => data[0] ?? null)
```

Then the check in `getProfileMetricsCore` would work as intended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/profile.service.ts` around lines 472 - 496, getProfileMetricsCore's null check can be bypassed because getProfileMetrics uses a non-null assertion on data[0] (data[0]!) which will throw before returning undefined when ClickHouse returns no rows; update getProfileMetrics to handle empty result sets gracefully (remove the non-null assertion, explicitly check if data.length === 0 and return null/undefined or a safe empty object) so callers like getProfileMetricsCore can rely on its returned value, and ensure getProfileMetricsCore still checks raw and handles the null/undefined case appropriately.

test/fixtures.ts-449-451 (1)
449-451: ⚠️ Potential issue | 🟡 Minor

Schema statements may require sequential execution.

`Promise.all` executes all statements in parallel, but DDL statements often have ordering dependencies (e.g., `CREATE DATABASE` must complete before `CREATE TABLE`). If `clickhouse-schema.sql` has such dependencies, this could cause intermittent failures.

🔧 Suggested fix for sequential execution:

```diff
-  await Promise.all(
-    statements.map((statement) => client.command({ query: statement }))
-  );
+  for (const statement of statements) {
+    await client.command({ query: statement });
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/fixtures.ts` around lines 449 - 451, The schema statements are run in parallel with Promise.all which can break ordering-dependent DDL; change the logic that calls client.command for each entry in statements (the map over statements and client.command invocation) to execute sequentially by awaiting each client.command in order (e.g., a for..of loop or an await-reduce) so each statement completes before the next starts, ensuring CREATE DATABASE/TABLE ordering is preserved.
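The ordering guarantee of the for..of fix can be demonstrated with a small runnable sketch. The fake `command` function below stands in for the ClickHouse client (it is not the real client API) and simply records when each statement starts and finishes:

```typescript
// A fake client records execution order; the for..of loop guarantees each
// DDL statement finishes before the next starts, unlike Promise.all,
// which fires all commands concurrently.
const executed: string[] = [];

async function command(query: string): Promise<void> {
  executed.push(`start:${query}`);
  await Promise.resolve(); // stand-in for network latency
  executed.push(`done:${query}`);
}

async function runSequentially(statements: string[]): Promise<void> {
  for (const statement of statements) {
    await command(statement); // the next statement waits for this one
  }
}

await runSequentially(['CREATE DATABASE test', 'CREATE TABLE test.events']);
// executed is now strictly interleaved as start/done pairs, proving the
// CREATE DATABASE completed before CREATE TABLE began.
```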
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1940706f-b0a4-4438-8f96-799d3f07710a
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (127)
.github/workflows/docker-build.yml
apps/api/package.json
apps/api/src/app.ts
apps/api/src/controllers/export.controller.ts
apps/api/src/controllers/insights.controller.ts
apps/api/src/controllers/manage.controller.ts
apps/api/src/controllers/track.controller.ts
apps/api/src/index.ts
apps/api/src/routes/event.router.ts
apps/api/src/routes/export.router.ts
apps/api/src/routes/import.router.ts
apps/api/src/routes/insights.router.test.ts
apps/api/src/routes/insights.router.ts
apps/api/src/routes/manage.router.ts
apps/api/src/routes/mcp.router.ts
apps/api/src/routes/profile.router.ts
apps/api/src/routes/track.router.ts
apps/api/src/utils/graceful-shutdown.ts
apps/api/src/utils/parse-zod-query-string.ts
apps/api/src/utils/rate-limiter.ts
apps/api/tsconfig.json
apps/api/vitest.config.ts
apps/public/content/docs/api-reference/authentication.mdx
apps/public/content/docs/api-reference/rate-limits.mdx
apps/public/content/docs/api/export.mdx
apps/public/content/docs/api/insights.mdx
apps/public/content/docs/api/manage/clients.mdx
apps/public/content/docs/api/manage/index.mdx
apps/public/content/docs/api/manage/projects.mdx
apps/public/content/docs/api/manage/references.mdx
apps/public/content/docs/api/meta.json
apps/public/content/docs/api/track.mdx
apps/public/content/docs/mcp/index.mdx
apps/public/content/docs/meta.json
apps/public/next.config.mjs
apps/public/package.json
apps/public/source.config.ts
apps/public/src/app/docs/(docs)/[[...slug]]/page.tsx
apps/public/src/app/docs/(docs)/layout.tsx
apps/public/src/app/docs/api-reference/[[...slug]]/page.tsx
apps/public/src/app/docs/api-reference/layout.tsx
apps/public/src/app/docs/layout.tsx
apps/public/src/app/global.css
apps/public/src/components/navbar.tsx
apps/public/src/lib/openapi.ts
apps/public/src/mdx-components.tsx
apps/worker/src/jobs/import.ts
biome.json
docker-compose.yml
docker/clickhouse/clickhouse-user-config.xml
packages/auth/src/session.ts
packages/db/index.ts
packages/db/src/engine/index.ts
packages/db/src/engine/normalize.ts
packages/db/src/prisma-client.ts
packages/db/src/services/access.service.ts
packages/db/src/services/chart.service.ts
packages/db/src/services/dashboard.service.ts
packages/db/src/services/date.service.ts
packages/db/src/services/event.service.ts
packages/db/src/services/funnel.service.ts
packages/db/src/services/group.service.ts
packages/db/src/services/gsc.service.ts
packages/db/src/services/organization.service.ts
packages/db/src/services/overview.service.ts
packages/db/src/services/pages.service.ts
packages/db/src/services/profile.service.ts
packages/db/src/services/project.service.ts
packages/db/src/services/reports.service.ts
packages/db/src/services/retention.service.ts
packages/db/src/services/sankey.service.ts
packages/db/src/services/session.service.ts
packages/importer/src/providers/mixpanel.ts
packages/logger/index.ts
packages/mcp/index.ts
packages/mcp/package.json
packages/mcp/src/auth.test.ts
packages/mcp/src/auth.ts
packages/mcp/src/handler.ts
packages/mcp/src/integration/tools.test.ts
packages/mcp/src/server.ts
packages/mcp/src/session-manager.test.ts
packages/mcp/src/session-manager.ts
packages/mcp/src/tools/analytics/active-users.ts
packages/mcp/src/tools/analytics/engagement.test.ts
packages/mcp/src/tools/analytics/engagement.ts
packages/mcp/src/tools/analytics/event-names.ts
packages/mcp/src/tools/analytics/events.ts
packages/mcp/src/tools/analytics/funnel.ts
packages/mcp/src/tools/analytics/groups.ts
packages/mcp/src/tools/analytics/overview.ts
packages/mcp/src/tools/analytics/page-conversions.test.ts
packages/mcp/src/tools/analytics/page-conversions.ts
packages/mcp/src/tools/analytics/page-performance.test.ts
packages/mcp/src/tools/analytics/page-performance.ts
packages/mcp/src/tools/analytics/pages.ts
packages/mcp/src/tools/analytics/profile-metrics.ts
packages/mcp/src/tools/analytics/profiles.test.ts
packages/mcp/src/tools/analytics/profiles.ts
packages/mcp/src/tools/analytics/property-values.ts
packages/mcp/src/tools/analytics/reports.ts
packages/mcp/src/tools/analytics/retention.ts
packages/mcp/src/tools/analytics/sessions.ts
packages/mcp/src/tools/analytics/traffic.ts
packages/mcp/src/tools/analytics/user-flow.ts
packages/mcp/src/tools/dashboard-links.ts
packages/mcp/src/tools/gsc/cannibalization.ts
packages/mcp/src/tools/gsc/overview.ts
packages/mcp/src/tools/gsc/pages.ts
packages/mcp/src/tools/gsc/queries.ts
packages/mcp/src/tools/index.ts
packages/mcp/src/tools/projects.ts
packages/mcp/src/tools/shared.test.ts
packages/mcp/src/tools/shared.ts
packages/mcp/tsconfig.json
packages/mcp/vitest.config.ts
packages/trpc/src/routers/widget.ts
packages/trpc/src/trpc.ts
packages/validation/src/index.ts
packages/validation/src/track.validation.ts
pnpm-workspace.yaml
test/clickhouse-schema.sql
test/fixtures.ts
test/global-setup.ts
test/test-setup.ts
vitest.config.ts
vitest.shared.ts
💤 Files with no reviewable changes (2)
- apps/public/src/mdx-components.tsx
- apps/public/src/app/docs/layout.tsx
```ts
fastify.route({
  method: 'GET',
  url: '/projects/:projectId/dashboards/:dashboardId/reports',
  schema: {
    params: z.object({ projectId: z.string(), dashboardId: z.string() }),
    tags: ['Manage'],
    description: 'List all reports in a dashboard.',
  },
  handler: listReports,
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
controller=$(fd 'insights.controller.ts' apps | head -n1)
manage_router=$(fd 'manage.router.ts' apps | head -n1)
insights_router=$(fd 'insights.router.ts' apps | head -n1)
echo "Inspecting listReports implementation:"
rg -n -C4 '\blistReports\b|\bdashboardId\b|\bprojectId\b' "$controller" || true
echo
echo "Comparing route wiring:"
rg -n -C3 '\blistReports\b|\bdashboardId\b|\bprojectId\b' "$manage_router" "$insights_router" || true

Repository: Openpanel-dev/openpanel
Length of output: 44268
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Find where listReportsCore is defined
rg -n 'listReportsCore' --type ts --type tsx -A 15 | head -n 50

Repository: Openpanel-dev/openpanel
Length of output: 94
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Find where listReportsCore is defined
rg -n 'listReportsCore' -A 15 | head -n 100

Repository: Openpanel-dev/openpanel
Length of output: 4209
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check getReportsByDashboardId implementation and schema
rg -n 'getReportsByDashboardId|function.*dashboardId|export.*dashboardId' -B 2 -A 10 packages/db/src | head -n 80

Repository: Openpanel-dev/openpanel
Length of output: 2806
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check dashboard schema to see if it has projectId field
find packages/db -name "*.prisma" -exec cat {} \; | grep -A 10 "model Dashboard"

Repository: Openpanel-dev/openpanel
Length of output: 540
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check if dashboard is fetched/validated anywhere before getReportsByDashboardId is called
rg -n 'listReportsCore|getDashboard|dashboard' packages/db/src/services/reports.service.ts -B 5 -A 20 | head -n 80

Repository: Openpanel-dev/openpanel
Length of output: 1817
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check if Report schema has a dashboard relation and if we can join/validate through it
cat packages/db/src/schema.prisma 2>/dev/null | grep -A 20 "model Report" || \
find packages/db -name "*.prisma" -exec cat {} \; | grep -A 20 "model Report"

Repository: Openpanel-dev/openpanel
Length of output: 1278
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Get the full Report model definition
find packages/db -name "*.prisma" -exec cat {} \; | grep -A 50 "model Report" | grep -B 50 "@@"

Repository: Openpanel-dev/openpanel
Length of output: 2731
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify the issue: check if listReportsCore validates the dashboard-project relationship
# by looking at the full implementation and how it's called from the controller
echo "=== listReportsCore implementation ==="
sed -n '132,152p' packages/db/src/services/reports.service.ts
echo -e "\n=== listReports controller call ==="
sed -n '615,621p' apps/api/src/controllers/insights.controller.ts
echo -e "\n=== Checking if dashboard is ever fetched/validated ==="
rg -n 'dashboard|Dashboard' packages/db/src/services/reports.service.ts

Repository: Openpanel-dev/openpanel
Length of output: 1373
🏁 Script executed:
#!/bin/bash
set -euo progefail
# Verify the vulnerability: check what a malicious query could retrieve
# by examining if there's any projectId filtering when fetching reports
echo "=== Full getReportsByDashboardId function ==="
sed -n '97,108p' packages/db/src/services/reports.service.ts
echo -e "\n=== Checking if any other function validates dashboard-project relationship ==="
rg -n 'findMany.*dashboardId|where.*dashboardId' packages/db/src/services/ -A 5

Repository: Openpanel-dev/openpanel
Length of output: 121
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify the vulnerability: check what a malicious query could retrieve
# by examining if there's any projectId filtering when fetching reports
echo "=== Full getReportsByDashboardId function ==="
sed -n '97,108p' packages/db/src/services/reports.service.ts
echo -e "\n=== Checking if any other function validates dashboard-project relationship ==="
rg -n 'findMany.*dashboardId|where.*dashboardId' packages/db/src/services/ -A 5

Repository: Openpanel-dev/openpanel
Length of output: 447
The projectId parameter in listReportsCore is unused—add validation that the dashboard belongs to the requested project.
listReportsCore accepts projectId but never uses it. Instead, it only filters reports by dashboardId via getReportsByDashboardId(), which does not verify that the dashboard belongs to that project. A caller could pair their authorized projectId with another project's dashboardId to enumerate its reports.
The fix requires validating that dashboard.projectId === input.projectId before returning reports—either by fetching the dashboard first or filtering reports with both dashboardId and projectId conditions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/routes/manage.router.ts` around lines 187 - 195, listReportsCore
currently ignores its projectId parameter and relies solely on
getReportsByDashboardId(dashboardId), allowing cross-project access; update
listReportsCore to validate ownership by ensuring the dashboard belongs to the
requested project before returning reports. Fetch the dashboard (e.g., via
getDashboardById or equivalent using dashboardId) and assert dashboard.projectId
=== input.projectId, or alter the report query to include both dashboardId and
projectId, and return a 403/empty result when the check fails; reference
listReportsCore, getReportsByDashboardId, dashboard.projectId, and
input.projectId when making the change.
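For reference, the ownership-scoping pattern the prompt describes can be sketched with an in-memory stand-in for the Prisma lookup (the data and helper bodies here are hypothetical; the names mirror the discussion):

```typescript
// Scoping the lookup by BOTH id and projectId makes a cross-project
// dashboardId indistinguishable from a missing one, which is the
// behavior the prompt asks listReportsCore to guarantee.
type Dashboard = { id: string; projectId: string };

const dashboards: Dashboard[] = [
  { id: 'dash-1', projectId: 'proj-a' },
  { id: 'dash-2', projectId: 'proj-b' },
];

function getDashboardById(id: string, projectId: string): Dashboard | null {
  // Equivalent of: db.dashboard.findUnique({ where: { id, projectId } })
  return (
    dashboards.find((d) => d.id === id && d.projectId === projectId) ?? null
  );
}

function listReports(dashboardId: string, projectId: string): string[] {
  const dashboard = getDashboardById(dashboardId, projectId);
  if (!dashboard) return []; // wrong project looks identical to not-found
  return [`reports for ${dashboard.id}`];
}
```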
The projectId is already used -- getDashboardById(input.dashboardId, input.projectId) scopes the Prisma query with where: { id, projectId }, so only dashboards belonging to the requested project are returned. No change needed here.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
rg -n 'getDashboardById' packages/db/src/services/ -A 10 | head -n 40
Length of output: 2442
@lindesvard, you're absolutely right — I apologize for the incorrect comment. Looking at getDashboardById in packages/db/src/services/dashboard.service.ts:
export async function getDashboardById(id: string, projectId: string) {
const dashboard = await db.dashboard.findUnique({
where: {
id,
projectId, // ← scopes the lookup to the given project
},
...
});

And in listReportsCore (line 138):
const dashboard = await getDashboardById(input.dashboardId, input.projectId);
if (!dashboard) {
return []; // ← returns nothing if the dashboard doesn't belong to this project
}

The ownership check is already enforced at the Prisma query level. If a caller passes a dashboardId that doesn't belong to the authenticated projectId, the query returns null and the function returns an empty array. No change needed — my original review comment was mistaken.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/db/src/services/reports.service.ts`:
- Around line 174-176: The report's explicit startDate/endDate are lost because
transformReport (used by getReportById) doesn't include those fields, so
getChartStartEndDate ends up recomputing an absolute window from range; fix by
preserving those fields: either update transformReport to copy startDate and
endDate from the DB row into the returned report object, or in getReportById
(before calling transformReport) extract the raw DB row's startDate/endDate and
include them when building chartInput so chartInput = { ...report, startDate:
rawStartDate, endDate: rawEndDate, timezone } is passed to getChartStartEndDate;
ensure the symbols mentioned (transformReport, getReportById,
getChartStartEndDate, chartInput, startDate, endDate, getSettingsForProject) are
updated accordingly.
- Around line 164-171: The code calls the unscoped helper
getReportById(input.reportId) then separately checks report.projectId, which
leaks existence of reports across projects; update the lookup to fetch the
report scoped by both id and projectId (e.g., replace getReportById call with or
add a new helper that queries by { id: input.reportId, projectId:
input.projectId } or use an existing getReportByIdAndProjectId), and collapse
the two error paths into a single "Report not found" error when the scoped query
returns no row so ownership is enforced at query time.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 21b6f380-bc02-4cf1-a8b8-cf787fbc38d5
📒 Files selected for processing (2)
.github/workflows/docker-build.yml
packages/db/src/services/reports.service.ts
```ts
const { timezone } = await getSettingsForProject(input.projectId);
const { startDate, endDate } = getChartStartEndDate(report, timezone);
const chartInput = { ...report, startDate, endDate, timezone };
```
Preserve explicit startDate/endDate before recomputing the window.
getChartStartEndDate in packages/db/src/services/date.service.ts:202-220 prefers explicit startDate/endDate, but getReportById() currently returns transformReport(report), and transformReport() does not include those fields. At Lines 175-176 any report saved with an absolute window will be recalculated from range, so this can return data for the wrong dates. Thread those fields through transformReport() or derive them from the raw DB row here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/db/src/services/reports.service.ts` around lines 174 - 176, The
report's explicit startDate/endDate are lost because transformReport (used by
getReportById) doesn't include those fields, so getChartStartEndDate ends up
recomputing an absolute window from range; fix by preserving those fields:
either update transformReport to copy startDate and endDate from the DB row into
the returned report object, or in getReportById (before calling transformReport)
extract the raw DB row's startDate/endDate and include them when building
chartInput so chartInput = { ...report, startDate: rawStartDate, endDate:
rawEndDate, timezone } is passed to getChartStartEndDate; ensure the symbols
mentioned (transformReport, getReportById, getChartStartEndDate, chartInput,
startDate, endDate, getSettingsForProject) are updated accordingly.
The Report model has no startDate or endDate columns in the database schema, so there are no explicit dates to preserve through transformReport. getChartStartEndDate correctly falls back to computing from range. No change needed.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check the Report model in Prisma schema for startDate/endDate fields
rg -n "startDate|endDate|model Report" --type prisma -A 30 -B 2 || \
rg -rn "model Report" --include="*.prisma" -A 30
Length of output: 472
🏁 Script executed:
#!/bin/bash
# Find Prisma schema files
fd "schema.prisma"
Length of output: 32
🏁 Script executed:
#!/bin/bash
# Check the Report model in Prisma schema
rg -n "model Report" packages/db/prisma/schema.prisma -A 30
Length of output: 2138
@lindesvard, you're absolutely right — the Prisma Report model only has a range field (no startDate/endDate columns), so getChartStartEndDate correctly derives the window from range. My concern was unfounded. Apologies for the noise!
✏️ Learnings added
Learnt from: lindesvard
Repo: Openpanel-dev/openpanel PR: 331
File: packages/db/src/services/reports.service.ts:174-176
Timestamp: 2026-04-08T12:17:25.366Z
Learning: In the OpenPanel codebase (`packages/db/prisma/schema.prisma`), the `Report` model has no `startDate` or `endDate` columns. Time range for reports is stored only as a `range` string field (e.g., "30d"). `getChartStartEndDate` in `packages/db/src/services/date.service.ts` correctly computes start/end dates from this `range` field. Do not flag missing `startDate`/`endDate` fields when reviewing `transformReport` or `getReportDataCore` in `packages/db/src/services/reports.service.ts`.
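The learning above (reports store only a `range` string and absolute dates are derived from it) can be illustrated with a simplified stand-in. The real `getChartStartEndDate` uses Luxon, timezones, and more range formats; the sketch below handles only "Nd" strings:

```typescript
// Simplified stand-in for deriving an absolute window from a stored
// range string like "30d". Illustrative only; not the service's code.
function rangeToWindow(range: string, now: Date): { start: Date; end: Date } {
  const match = /^(\d+)d$/.exec(range);
  if (!match) throw new Error(`Unsupported range: ${range}`);
  const days = Number(match[1]);
  // Walk back N days from "now" to get the window start.
  const start = new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
  return { start, end: now };
}

const now = new Date(Date.UTC(2026, 0, 31)); // 2026-01-31T00:00:00Z
const { start } = rangeToWindow('30d', now); // 2026-01-01T00:00:00Z
```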
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/db/src/services/reports.service.ts (1)
126-131: Consider moving imports to the top of the file.

These imports are placed after function definitions, which is unconventional. Unless this is intentional to avoid circular dependencies, grouping all imports at the top improves readability and follows standard module organization patterns.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/services/reports.service.ts` around lines 126 - 131, The listed imports (AggregateChartEngine, ChartEngine, getDashboardById, getChartStartEndDate, funnelService, getSettingsForProject) are located after function definitions; move these import statements to the top of reports.service.ts above all function/class declarations to restore conventional module ordering, unless they were intentionally delayed to avoid a circular dependency—if circularity exists, instead resolve it by refactoring the referenced functions (e.g., the functions that call getDashboardById or funnelService) or use lazy imports only at the call site and document why; otherwise place the six import lines at the top of the file so all imports are grouped together for readability.
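If a circular dependency is the reason for the out-of-place imports, the lazy-import escape hatch mentioned in the prompt looks like this. A Node built-in (`node:path`) stands in for the service module (e.g., `./dashboard.service`) so the sketch is runnable:

```typescript
// Call-site (lazy) import: the module is loaded on first use instead of
// at evaluation time, which sidesteps import cycles between modules.
async function joinLazily(a: string, b: string): Promise<string> {
  const path = await import('node:path'); // deferred until first call
  return path.posix.join(a, b);
}

const joined = await joinLazily('packages', 'db'); // 'packages/db'
```

The trade-off is that the first call pays the module-load cost and the dependency is no longer visible at the top of the file, which is why a comment explaining the cycle is worth adding if this route is taken.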
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/db/src/services/reports.service.ts`:
- Around line 133-137: The organizationId parameter is unused in listReportsCore
and getReportDataCore; either remove it from those signatures and all call
sites, or add explicit organization-level validation: update getDashboardById
(and any calls from listReportsCore/getReportDataCore) to accept/verify
organizationId and after fetching the dashboard assert dashboard.organizationId
=== organizationId (throw a suitable Forbidden/NotFound error if not), and keep
the organizationId param in listReportsCore/getReportDataCore when you add this
check; ensure all affected callers are updated to pass the org id and mirror the
pattern used in project.service.ts.
---
Nitpick comments:
In `@packages/db/src/services/reports.service.ts`:
- Around line 126-131: The listed imports (AggregateChartEngine, ChartEngine,
getDashboardById, getChartStartEndDate, funnelService, getSettingsForProject)
are located after function definitions; move these import statements to the top
of reports.service.ts above all function/class declarations to restore
conventional module ordering, unless they were intentionally delayed to avoid a
circular dependency—if circularity exists, instead resolve it by refactoring the
referenced functions (e.g., the functions that call getDashboardById or
funnelService) or use lazy imports only at the call site and document why;
otherwise place the six import lines at the top of the file so all imports are
grouped together for readability.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8d1b8c17-6bb0-4faf-9e31-d7dc80d31116
📒 Files selected for processing (3)
- packages/db/src/services/reports.service.ts
- test/clickhouse-schema.sql
- test/fixtures.ts
✅ Files skipped from review due to trivial changes (1)
- test/fixtures.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- test/clickhouse-schema.sql
```ts
export async function listReportsCore(input: {
  projectId: string;
  dashboardId: string;
  organizationId: string;
}) {
```
🧩 Analysis chain
🏁 Script executed:

#!/bin/bash
# Check if organizationId is used for authorization elsewhere in similar service functions
rg -n "organizationId" packages/db/src/services/*.service.ts -B 2 -A 5 | head -100

Repository: Openpanel-dev/openpanel
Length of output: 6525

🏁 Script executed:

sed -n '133,180p' packages/db/src/services/reports.service.ts

Length of output: 1470

🏁 Script executed:

rg -n "getDashboardById|getReportsByDashboardId" packages/db/src/services/ -A 10 | head -80

Length of output: 3622

🏁 Script executed:

rg -n "listReportsCore|getReportDataCore" packages/ -B 3 -A 3 --type ts --type tsx

Length of output: 94

🏁 Script executed:

rg -n "listReportsCore|getReportDataCore" packages/ -B 3 -A 3

Length of output: 1099

🏁 Script executed:

rg -n "from.*reports\.service|import.*listReportsCore|import.*getReportDataCore" packages/ -B 2 -A 5

Length of output: 1911

🏁 Script executed:

find packages -type f -name "*.ts" | xargs grep -l "listReportsCore\|getReportDataCore" | head -10

Length of output: 110

🏁 Script executed:

find . -type f -name "*.ts" -not -path "./node_modules/*" -not -path "./.next/*" | xargs grep -l "listReportsCore\|getReportDataCore" 2>/dev/null

Length of output: 162

🏁 Script executed:

rg -n "listReportsCore|getReportDataCore" ./apps/api/src/controllers/insights.controller.ts -B 5 -A 10

Length of output: 2133

🏁 Script executed:

rg -n "function getProjectId|const getProjectId|export.*getProjectId" ./apps/api/src/ -A 8 | head -50

Length of output: 1554

🏁 Script executed:

rg -n "resolveClientProjectId" packages/db/src/services/project.service.ts -B 2 -A 15 | head -40

Length of output: 664

🏁 Script executed:

rg -n "where.*organizationId.*reportId|where.*reportId.*organizationId" packages/db/src/ -B 2 -A 2

Length of output: 49

🏁 Script executed:

find packages/db/src -name "*.prisma" -o -name "schema.prisma" | head -5

Length of output: 49

🏁 Script executed:

find . -name "*.prisma" 2>/dev/null

Length of output: 101

🏁 Script executed:

rg -n "model Report|model Dashboard" ./packages/db/prisma/schema.prisma -A 15 | head -60

Length of output: 1818

🏁 Script executed:

sed -n '12,25p' packages/db/src/services/dashboard.service.ts

Length of output: 328
Remove unused organizationId parameter or add explicit organization-level validation.
Both listReportsCore and getReportDataCore accept organizationId but never reference it. The database schema shows Dashboard has an explicit organizationId field, yet getDashboardById only validates (id, projectId) without checking organization membership. This is inconsistent with other service functions like those in project.service.ts, which explicitly filter by organizationId. Either remove the parameter if validation is implicit through projectId, or add explicit validation: getDashboardById should verify the dashboard's organizationId matches the caller's organization.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/db/src/services/reports.service.ts` around lines 133 - 137, The
organizationId parameter is unused in listReportsCore and getReportDataCore;
either remove it from those signatures and all call sites, or add explicit
organization-level validation: update getDashboardById (and any calls from
listReportsCore/getReportDataCore) to accept/verify organizationId and after
fetching the dashboard assert dashboard.organizationId === organizationId (throw
a suitable Forbidden/NotFound error if not), and keep the organizationId param
in listReportsCore/getReportDataCore when you add this check; ensure all
affected callers are updated to pass the org id and mirror the pattern used in
project.service.ts.
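The second option above (explicit validation) could look roughly like the following. This is a hedged sketch against a stand-in in-memory store, not the real Prisma-backed `getDashboardById`; the field names follow the schema excerpt, and the NotFound-over-Forbidden choice is an assumption:

```typescript
// Stand-in type; the real Dashboard model lives in the Prisma schema.
type Dashboard = { id: string; projectId: string; organizationId: string };

// In-memory stand-in for the database, for illustration only.
const dashboards: Dashboard[] = [
  { id: 'dash_1', projectId: 'proj_1', organizationId: 'org_1' },
];

// Sketch of getDashboardById extended with an organization check, as the
// review suggests.
function getDashboardById(
  id: string,
  projectId: string,
  organizationId: string,
): Dashboard {
  const dashboard = dashboards.find(
    (d) => d.id === id && d.projectId === projectId,
  );
  // Answering NotFound for cross-organization access avoids leaking
  // whether the dashboard exists at all.
  if (!dashboard || dashboard.organizationId !== organizationId) {
    throw new Error(`Dashboard ${id} not found`);
  }
  return dashboard;
}
```

With this in place, `listReportsCore` and `getReportDataCore` would keep their `organizationId` parameter and simply thread it through to the lookup.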
Actionable comments posted: 2
🧹 Nitpick comments (1)
test/fixtures.ts (1)
373-388: Raw SQL with string interpolation - consider parameterization or query builder.

The projectId is directly interpolated into SQL strings, which is a SQL injection risk if this code is ever called with untrusted input. While this is test code with controlled inputs, it also violates the coding guideline requiring use of the custom query builder for ClickHouse queries.

For test fixtures, parameterized queries or at minimum input validation would be safer:

🛡️ Proposed fix: Add input validation

```diff
 async function deleteFixtures(client: ChClient, projectId: string) {
+  // Basic validation to prevent SQL injection in test fixtures
+  if (!/^[\w-]+$/.test(projectId)) {
+    throw new Error(`Invalid projectId format: ${projectId}`);
+  }
   await Promise.all([
     client.command({
       query: `DELETE FROM openpanel.profiles WHERE project_id = '${projectId}'`,
```

As per coding guidelines: "When writing ClickHouse queries, always use the custom query builder from `./packages/db/src/clickhouse/query-builder.ts` and `./packages/db/src/clickhouse/query-functions.ts`"
./packages/db/src/clickhouse/query-builder.tsand./packages/db/src/clickhouse/query-functions.ts"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/fixtures.ts` around lines 373 - 388, The deleteFixtures function builds raw SQL with projectId interpolation (in client.command calls and ALTER TABLE DELETE) which risks injection and violates the ClickHouse query-builder guideline; update deleteFixtures to use the project's ClickHouse query builder/query-functions to compose parameterized DELETE statements (or pass projectId as a parameter via the builder), replace the four client.command raw strings with builder-generated queries or parameterized calls, and add a simple validation check on projectId (e.g., ensure it's a non-empty alphanumeric string) before executing to guard against accidental misuse; refer to deleteFixtures, client.command, and the query-builder/query-functions modules when making the change.
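Beyond input validation, the ClickHouse JS client can bind values server-side via `query_params` and `{name:Type}` placeholders, so the value never touches the SQL text. A hedged sketch — the `ChClient` shape mirrors only the slice of the client API used here, and the table/column names are taken from the review excerpt:

```typescript
// Structural stand-in for the slice of the ClickHouse client API used here.
type ChClient = {
  command(args: {
    query: string;
    query_params?: Record<string, string>;
  }): Promise<void>;
};

export async function deleteProfiles(
  client: ChClient,
  projectId: string,
): Promise<void> {
  // Defense in depth: reject anything that is not a simple identifier.
  if (!/^[\w-]+$/.test(projectId)) {
    throw new Error(`Invalid projectId format: ${projectId}`);
  }
  await client.command({
    // {projectId:String} is bound by the server, so the value is never
    // spliced into the SQL string in application code.
    query:
      'DELETE FROM openpanel.profiles WHERE project_id = {projectId:String}',
    query_params: { projectId },
  });
}
```

The project's own query builder remains the preferred route per the guidelines; this shows the parameter-binding fallback when raw statements are unavoidable.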
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/docker-build.yml:
- Around line 113-118: The test job's environment omits SELF_HOSTED, causing
isClickhouseClustered() behavior mismatch between the migration and test steps;
add SELF_HOSTED: "true" to the "Run tests" step's env block so the test process
uses non-clustered ClickHouse mode consistent with the migration step (update
the env for the step named "Run tests" that currently sets DATABASE_URL,
CLICKHOUSE_URL, REDIS_URL).
In `@test/fixtures.ts`:
- Around line 430-441: Both setupFixtures and teardownFixtures leak the
ClickHouse client if deleteFixtures or insertFixtures throw; update both
functions (setupFixtures and teardownFixtures) to acquire the client via
getClient(), wrap the operations (deleteFixtures and insertFixtures as
applicable) inside a try block, and call await client.close() in a finally block
so the client is always closed even on errors; reference getClient,
deleteFixtures, insertFixtures, and client.close when making the change.
---
Nitpick comments:
In `@test/fixtures.ts`:
- Around line 373-388: The deleteFixtures function builds raw SQL with projectId
interpolation (in client.command calls and ALTER TABLE DELETE) which risks
injection and violates the ClickHouse query-builder guideline; update
deleteFixtures to use the project's ClickHouse query builder/query-functions to
compose parameterized DELETE statements (or pass projectId as a parameter via
the builder), replace the four client.command raw strings with builder-generated
queries or parameterized calls, and add a simple validation check on projectId
(e.g., ensure it's a non-empty alphanumeric string) before executing to guard
against accidental misuse; refer to deleteFixtures, client.command, and the
query-builder/query-functions modules when making the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 44bd54d4-7b3d-4084-80e7-d75985237813
📒 Files selected for processing (3)
- .github/workflows/docker-build.yml
- test/fixtures.ts
- test/global-setup.ts
✅ Files skipped from review due to trivial changes (1)
- test/global-setup.ts
```yaml
      - name: Run tests
        run: pnpm test
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres?schema=public
          CLICKHOUSE_URL: http://localhost:8123/openpanel
          REDIS_URL: redis://localhost:6379
```
Missing SELF_HOSTED environment variable may cause test failures.
The migration step sets SELF_HOSTED: "true" but the test step omits it. Per packages/db/src/clickhouse/client.ts, isClickhouseClustered() returns true (clustered mode) when SELF_HOSTED is not set, but returns false (non-clustered) when SELF_HOSTED=true. This mismatch means migrations create tables for non-clustered mode while tests query assuming clustered mode, potentially causing query/table name mismatches.
🐛 Proposed fix to add SELF_HOSTED

```diff
       - name: Run tests
         run: pnpm test
         env:
           DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres?schema=public
           CLICKHOUSE_URL: http://localhost:8123/openpanel
           REDIS_URL: redis://localhost:6379
+          SELF_HOSTED: "true"
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
      - name: Run tests
        run: pnpm test
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres?schema=public
          CLICKHOUSE_URL: http://localhost:8123/openpanel
          REDIS_URL: redis://localhost:6379
          SELF_HOSTED: "true"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/docker-build.yml around lines 113 - 118, The test job's
environment omits SELF_HOSTED, causing isClickhouseClustered() behavior mismatch
between the migration and test steps; add SELF_HOSTED: "true" to the "Run tests"
step's env block so the test process uses non-clustered ClickHouse mode
consistent with the migration step (update the env for the step named "Run
tests" that currently sets DATABASE_URL, CLICKHOUSE_URL, REDIS_URL).
```ts
export async function setupFixtures(projectId: string): Promise<void> {
  const client = getClient();
  await deleteFixtures(client, projectId);
  await insertFixtures(client, projectId);
  await client.close();
}

export async function teardownFixtures(projectId: string): Promise<void> {
  const client = getClient();
  await deleteFixtures(client, projectId);
  await client.close();
}
```
Missing try/finally for ClickHouse client cleanup.
Unlike the Postgres functions (lines 394-426), setupFixtures and teardownFixtures don't wrap operations in try/finally. If deleteFixtures or insertFixtures throws, the client won't be closed, causing a resource leak.
🔧 Proposed fix: Add try/finally blocks

```diff
 export async function setupFixtures(projectId: string): Promise<void> {
   const client = getClient();
-  await deleteFixtures(client, projectId);
-  await insertFixtures(client, projectId);
-  await client.close();
+  try {
+    await deleteFixtures(client, projectId);
+    await insertFixtures(client, projectId);
+  } finally {
+    await client.close();
+  }
 }

 export async function teardownFixtures(projectId: string): Promise<void> {
   const client = getClient();
-  await deleteFixtures(client, projectId);
-  await client.close();
+  try {
+    await deleteFixtures(client, projectId);
+  } finally {
+    await client.close();
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/fixtures.ts` around lines 430 - 441, Both setupFixtures and
teardownFixtures leak the ClickHouse client if deleteFixtures or insertFixtures
throw; update both functions (setupFixtures and teardownFixtures) to acquire the
client via getClient(), wrap the operations (deleteFixtures and insertFixtures
as applicable) inside a try block, and call await client.close() in a finally
block so the client is always closed even on errors; reference getClient,
deleteFixtures, insertFixtures, and client.close when making the change.
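Rather than repeating try/finally in every fixture helper, the cleanup could also be centralized in a small scope function. A sketch under the same assumed client shape (the `Closable` type and `withClient` name are illustrative, not part of the codebase):

```typescript
// Illustrative sketch: a scope helper that guarantees close() runs even
// when the callback throws, so the client can never leak.
type Closable = { close(): Promise<void> };

export async function withClient<C extends Closable, T>(
  create: () => C,
  fn: (client: C) => Promise<T>,
): Promise<T> {
  const client = create();
  try {
    return await fn(client);
  } finally {
    // Executes on both success and failure paths.
    await client.close();
  }
}
```

With such a helper, `teardownFixtures` would reduce to something like `withClient(getClient, (client) => deleteFixtures(client, projectId))`, keeping the cleanup logic in one place.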
Summary by CodeRabbit
Release Notes
New Features
Documentation
Testing