Feature/mcp #331

Merged
lindesvard merged 18 commits into main from feature/mcp on Apr 9, 2026

Conversation

@lindesvard lindesvard commented Apr 8, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added interactive API documentation with Swagger/OpenAPI UI
    • Introduced MCP server endpoint for AI assistant integration (Claude, etc.)
    • Expanded analytics API with new endpoints for funnels, retention, profiles, sessions, and aggregations
    • Added Google Search Console integration tools
    • Added dashboard and report management endpoints
  • Documentation

    • New API reference section covering authentication and rate limits
    • Added comprehensive MCP server documentation
  • Testing

    • Implemented integration test infrastructure with automated database setup

coderabbitai bot commented Apr 8, 2026

📝 Walkthrough

This PR introduces a Model Context Protocol (MCP) server for AI assistants like Claude, refactors the API with OpenAPI/Zod schema-driven routes, expands database service functions for analytics queries, adds comprehensive Vitest testing infrastructure with fixtures, updates API reference documentation, and configures CI with PostgreSQL and ClickHouse services for integration testing.

Changes

MCP Server Package
Files: packages/mcp/package.json, packages/mcp/tsconfig.json, packages/mcp/vitest.config.ts, packages/mcp/index.ts, packages/mcp/src/server.ts, packages/mcp/src/auth.ts, packages/mcp/src/session-manager.ts, packages/mcp/src/handler.ts, packages/mcp/src/tools/*.ts
Summary: New MCP server implementation with authentication, session management, and 30+ tools for analytics/dashboards/GSC queries. Includes auth token validation, Redis-backed session persistence, HTTP handlers (SSE/JSON-RPC), and tool registrations for discovery, profiles, events, funnels, retention, traffic, reports, and Google Search Console integration. Comprehensive test coverage for auth, session manager, and tool functionality.

API App Factory & Route Updates
Files: apps/api/src/app.ts, apps/api/src/index.ts, apps/api/src/routes/track.router.ts, apps/api/src/routes/export.router.ts, apps/api/src/routes/insights.router.ts, apps/api/src/routes/manage.router.ts, apps/api/src/routes/mcp.router.ts, apps/api/src/routes/event.router.ts, apps/api/src/routes/profile.router.ts, apps/api/src/routes/import.router.ts
Summary: Extracted Fastify app configuration into a new buildApp() factory handling CORS, session management, logging, compression, and tRPC setup. Converted routes to FastifyPluginAsyncZodOpenApi for OpenAPI documentation. Added request validation via Zod schemas in route definitions. New MCP router with session management and three endpoints (POST/GET/DELETE). Removed inline validation from the track controller.

API Controllers Refactoring
Files: apps/api/src/controllers/track.controller.ts, apps/api/src/controllers/export.controller.ts, apps/api/src/controllers/manage.controller.ts, apps/api/src/controllers/insights.controller.ts
Summary: Removed inline Zod validation and error handling; validation is now delegated to Fastify route schemas. Exported previously private Zod schemas (eventsScheme, chartSchemeFull, zCreate*/zUpdate* schemas). Updated controllers to use direct request/body fields instead of parsed data. Insights controller significantly expanded with 40+ new exported functions for analytics endpoints.

Database Service Layer Expansion
Files: packages/db/src/services/date.service.ts, packages/db/src/services/chart.service.ts, packages/db/src/services/event.service.ts, packages/db/src/services/funnel.service.ts, packages/db/src/services/group.service.ts, packages/db/src/services/gsc.service.ts, packages/db/src/services/overview.service.ts, packages/db/src/services/pages.service.ts, packages/db/src/services/profile.service.ts, packages/db/src/services/project.service.ts, packages/db/src/services/reports.service.ts, packages/db/src/services/retention.service.ts, packages/db/src/services/session.service.ts, packages/db/src/services/sankey.service.ts, packages/db/src/services/dashboard.service.ts
Summary: Created new date.service.ts with date range utilities and moved functions from the chart service. Added 70+ new "Core" query functions for analytics: active users, retention cohorts, traffic breakdown variants, page performance, events/properties/values querying, profiles/sessions, groups, funnel analysis, GSC data, user flow/engagement. Introduced resolveClientProjectId and listProjectsCore for project resolution. Updated Prisma calls from db.$primary() to db (removed read-replica routing).

Prisma Client & Primary DB Access
Files: packages/db/src/prisma-client.ts, packages/db/src/engine/index.ts, packages/db/src/engine/normalize.ts, packages/auth/src/session.ts, packages/db/src/services/access.service.ts, packages/worker/src/jobs/import.ts
Summary: Removed query interception logging and session consistency extensions. Updated organization computed fields from separate .$extends chains to a single result extension block. Removed $primary() calls throughout the codebase; standard db is now used for all database access (consolidates routing logic).

Rate Limiting & Shutdown
Files: apps/api/src/utils/rate-limiter.ts, apps/api/src/utils/graceful-shutdown.ts, apps/api/src/utils/parse-zod-query-string.ts
Summary: Conditional Redis usage in the rate limiter (disabled during tests). Added MCP session manager cleanup to the graceful shutdown sequence. Enhanced the query string parser with a parseScalar helper for centralized value-type coercion (arrays, objects, numbers, booleans, JSON).

Testing Infrastructure
Files: test/fixtures.ts, test/global-setup.ts, test/test-setup.ts, vitest.config.ts, vitest.shared.ts, apps/api/vitest.config.ts, packages/mcp/vitest.config.ts
Summary: Added deterministic ClickHouse/Postgres fixtures with stable IDs and timestamped events. Global setup initializes test databases and sets environment defaults. Shared Vitest config with Docker-backed service URLs (PostgreSQL, ClickHouse, Redis). Per-package Vitest configs inherit the shared setup. Test teardown closes database clients.

API Documentation & References
Files: apps/public/content/docs/api-reference/authentication.mdx, apps/public/content/docs/api-reference/rate-limits.mdx, apps/public/content/docs/mcp/index.mdx, apps/public/content/docs/api/..., apps/public/src/lib/openapi.ts, apps/public/source.config.ts, apps/public/src/app/docs/(docs)/layout.tsx, apps/public/src/app/docs/api-reference/[[...slug]]/page.tsx, apps/public/src/app/docs/api-reference/layout.tsx, apps/public/content/docs/meta.json, apps/public/content/docs/api/meta.json
Summary: Condensed API guides (export, insights, manage endpoints) to reference-style overviews with endpoint tables. Added authentication and rate-limits reference docs. New MCP documentation explaining token auth, client permissions, and 30+ tools. Created OpenAPI-backed API reference documentation generation (getApiReferenceSource) and new Next.js routes for /api-reference/* pages. Split the docs layout into Documentation and API Reference tabs.

Frontend & Config Updates
Files: apps/public/package.json, apps/public/next.config.mjs, apps/public/src/components/navbar.tsx, apps/public/src/mdx-components.tsx, apps/public/src/app/global.css
Summary: Bumped Fumadocs dependencies (core, mdx, ui); added fumadocs-openapi and shiki packages. Added shiki to Next.js server external packages. Updated the navbar to use a static siteName constant. Removed MDX type augmentation. Added OpenAPI CSS imports.

Validation & Schemas
Files: packages/validation/src/index.ts, packages/validation/src/track.validation.ts, packages/importer/src/providers/mixpanel.ts, pnpm-workspace.yaml
Summary: Tightened record schemas to require explicit z.string() keys (e.g., z.record(z.string(), z.unknown()) for properties). Updated Zod integration schemas from .merge() to .extend(). Changed the zTrackHandlerPayload discriminator from z.literal() to z.enum() per branch with metadata. Upgraded Zod from ^3.24.2 to ^4.0.0.

CI & Docker Configuration
Files: .github/workflows/docker-build.yml, docker-compose.yml, docker/clickhouse/clickhouse-user-config.xml, biome.json
Summary: Added PostgreSQL and ClickHouse service containers to the GitHub Actions lint-and-test job with health checks and database URLs. Added a database migration step (pnpm migrate:deploy). Added ClickHouse environment variables and removed the default password. Updated the Biome linter to disable the noGlobalDirnameFilename and useTopLevelRegex rules.

Package Configuration
Files: apps/api/package.json, apps/api/tsconfig.json, packages/mcp/package.json
Summary: Added Vitest test scripts to the API. Extended Swagger/OpenAPI packages (@fastify/swagger, @fastify/swagger-ui, fastify-zod-openapi). Added the MCP package to API dependencies. Updated tsconfig.json to include the ../../test directory.

Sequence Diagram(s)

sequenceDiagram
    participant Claude
    participant MCP Server
    participant Auth
    participant SessionMgr
    participant Database
    participant Analytics

    Claude->>MCP Server: POST /mcp with token or Bearer auth
    MCP Server->>Auth: authenticateToken(token)
    Auth->>Database: Lookup client, verify password
    Database-->>Auth: Client context
    Auth-->>MCP Server: McpAuthContext (projectId, orgId, clientType)
    
    MCP Server->>SessionMgr: Create/store session (Redis)
    SessionMgr-->>MCP Server: Session ID
    
    MCP Server-->>Claude: Return session ID in response
    
    Claude->>MCP Server: Subsequent request with Mcp-Session-Id header
    MCP Server->>SessionMgr: Retrieve session context from Redis
    SessionMgr-->>MCP Server: McpAuthContext
    
    MCP Server->>Analytics: Execute tool (e.g., queryEvents)
    Analytics->>Database: Query with projectId filtering
    Database-->>Analytics: Results
    Analytics-->>MCP Server: Formatted tool result
    
    MCP Server-->>Claude: JSON-RPC response or SSE stream
sequenceDiagram
    participant Client
    participant Fastify
    participant Route Handler
    participant Controller
    participant Database
    
    Client->>Fastify: HTTP request with query/body
    Fastify->>Fastify: Validate with Zod schema (preValidation)
    alt Validation fails
        Fastify-->>Client: 400 Bad Request
    else Validation passes
        Fastify->>Route Handler: Call handler with validated req/reply
        Route Handler->>Controller: controller.methodName(req)
        Controller->>Database: Query with req.query/req.body fields
        Database-->>Controller: Results
        Controller-->>Route Handler: Response data
        Route Handler-->>Client: 200 JSON response with OpenAPI schema
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • PR #206: Modifies apps/api/src/controllers/track.controller.ts with changes to payload validation and request handling that complement this PR's Fastify validation refactoring.
  • PR #312: Implements Google Search Console feature with DB services (gsc.service.ts), TRPC routers, worker jobs, and frontend components—overlaps directly with GSC tools and services introduced here.
  • PR #310: Adds group-related analytics and buffer APIs in the same packages (packages/db, packages/trpc), complementing the new group query functions and MCP tools.

Poem

🐇 A rabbit's ode to Protocol and Claude:

With MCP hops, the AI can now query,
Sessions persist in Redis, no hurry,
Routes wear OpenAPI's finest attire,
Database services climb ever higher,
Tests in fixtures keep all in delight! 🌟



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/db/src/engine/index.ts (1)

135-140: ⚠️ Potential issue | 🟠 Major

Use query-builder.ts instead of sql-builder.ts for ClickHouse queries.

getAggregateChartSql currently uses createSqlBuilder from sql-builder.ts, but per coding guidelines, ClickHouse queries should use the custom query builder from ./packages/db/src/clickhouse/query-builder.ts. Migrate this function to use the clix pattern (e.g., clix().select(...).from(...).where(...)) like other services do (session, pages, profile, overview, event, etc.).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/engine/index.ts` around lines 135 - 140, getAggregateChartSql
currently builds SQL with createSqlBuilder from sql-builder.ts; replace its
internals to use the ClickHouse query builder (import clix from
./clickhouse/query-builder.ts) and migrate the SQL construction to the clix
pattern (e.g.,
clix().select(...).from(...).where(...).groupBy(...).orderBy(...)) so the string
passed to chQuery<ISerieDataItem>(getAggregateChartSql(...)) is produced by the
ClickHouse-specific builder; keep the existing function name
getAggregateChartSql and preserve the same query clauses/parameters but convert
each builder call from createSqlBuilder/... to the equivalent clix chain used
across session/pages/profile/overview/event services.
apps/api/src/controllers/manage.controller.ts (1)

44-55: ⚠️ Potential issue | 🟠 Major

Use z.string().datetime() for ISO 8601 datetime validation.

z.string() accepts any text, allowing invalid datetime strings to reach new Date(...) and potentially fail with Invalid Date errors. The codebase already uses z.string().datetime() in track validation (see packages/validation/src/track.validation.ts), which properly validates ISO 8601 datetime format at the schema boundary.

🛠️ Tighten the schema
 export const zCreateReference = z.object({
   projectId: z.string(),
   title: z.string().min(1),
   description: z.string().optional(),
-  datetime: z.string(),
+  datetime: z.string().datetime(),
 });
 
 export const zUpdateReference = z.object({
   title: z.string().min(1).optional(),
   description: z.string().optional(),
-  datetime: z.string().optional(),
+  datetime: z.string().datetime().optional(),
 });

Also applies to: 476, 509

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/controllers/manage.controller.ts` around lines 44 - 55, The
datetime fields in the validation schemas zCreateReference and zUpdateReference
currently use z.string() which accepts any text; change those fields to use
z.string().datetime() to enforce ISO 8601 datetime validation (replace the
datetime property in zCreateReference and the optional datetime in
zUpdateReference with z.string().datetime()). Ensure you follow the same pattern
used in the project's track validation so invalid datetimes are rejected at the
schema boundary.
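
The failure mode is easy to reproduce in isolation. This minimal, self-contained sketch (not code from the PR) shows how an arbitrary string slips through `new Date(...)` as an Invalid Date instead of failing fast:

```typescript
// Minimal illustration (not the PR's code): without schema-level datetime
// validation, an arbitrary string reaches new Date(...) and yields an
// Invalid Date silently rather than throwing.
const valid = new Date('2026-04-08T00:00:00Z');
const invalid = new Date('not-a-datetime');

console.log(Number.isNaN(valid.getTime()));   // false (parsed correctly)
console.log(Number.isNaN(invalid.getTime())); // true (silently invalid)
```

Validating at the schema boundary with z.string().datetime() rejects such inputs before they ever reach date construction.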
🟠 Major comments (21)
packages/logger/index.ts-67-70 (1)

67-70: ⚠️ Potential issue | 🟠 Major

Broaden query-param key matching to avoid leaking token variants.

At Line 68, the regex only redacts exact key names (e.g., token=), but misses keys that contain those terms (e.g., access_token=, idToken=). This can leave secrets in logs.

🔐 Proposed fix
+  const escapedSensitiveKeys = sensitiveKeys.map((key) =>
+    key.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'),
+  );
+
   const sensitiveUrlParamPattern = new RegExp(
-    `([?&])(${sensitiveKeys.join('|')})=([^&]*)`,
+    `([?&])([^=&]*(?:${escapedSensitiveKeys.join('|')})[^=&]*)=([^&#]*)`,
     'gi',
   );

Also applies to: 72-73

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/index.ts` around lines 67 - 70, The current
sensitiveUrlParamPattern only matches exact query parameter names and misses
keys that contain sensitive substrings (e.g., access_token), so update the regex
construction that uses sensitiveKeys to match parameter keys that contain any
sensitive term anywhere in the key name; specifically, change the pattern
building for sensitiveUrlParamPattern to capture the param key and allow the
sensitive token to appear inside that key (for example using something like
`([?&])([^=]*?(?:${sensitiveKeys.join('|')})[^=]*)=([^&]*)` with 'gi'), and
apply the same broadened matching where the other URL/key-redaction regexes are
built (the places referencing sensitiveUrlParamPattern and sensitiveKeys on
lines 72-73) so keys containing the sensitive substrings are redacted.
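
The broadened pattern can be exercised standalone. In this sketch the key list and URL are made up for illustration and are not the logger's actual configuration:

```typescript
// Standalone sketch of the broadened redaction pattern; the key list and
// URL below are illustrative, not the logger's real configuration.
const sensitiveKeys = ['token', 'password', 'secret'];
const escaped = sensitiveKeys.map((key) =>
  key.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'),
);
// Match keys that *contain* a sensitive term, not only exact key names.
const pattern = new RegExp(
  `([?&])([^=&]*(?:${escaped.join('|')})[^=&]*)=([^&#]*)`,
  'gi',
);

const url = 'https://app.example.test/cb?access_token=abc123&idToken=xyz&page=2';
console.log(url.replace(pattern, '$1$2=[Redacted]'));
// → https://app.example.test/cb?access_token=[Redacted]&idToken=[Redacted]&page=2
```

Note that `access_token` and `idToken` are both caught even though neither is an exact match for `token`, while `page` is left untouched.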
biome.json-63-63 (1)

63-63: ⚠️ Potential issue | 🟠 Major

Re-enable unreachable-code detection to prevent silent logic bugs

Line 63 disables noUnreachable, removing an important correctness safeguard. The codebase uses scoped biome-ignore suppressions elsewhere (e.g., for useTopLevelRegex), yet this rule has no such suppressions—meaning violations are silently hidden. Enable the rule and add scoped suppressions only where necessary.

Suggested config change
       "correctness": {
         "useExhaustiveDependencies": "off",
-        "noUnreachable": "off",
+        "noUnreachable": "error",
         "noGlobalDirnameFilename": "off"
       },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@biome.json` at line 63, Re-enable the unreachable-code rule by changing the
"noUnreachable" setting from "off" to a strict level (e.g., "error") in the
config so unreachable-code detection runs; then audit locations that currently
rely on this being off and replace broad suppression with scoped biome-ignore
comments only around the specific statements that need it (for example, patterns
similar to existing scoped ignores for useTopLevelRegex), fixing any revealed
logic issues or refining control flow in the functions/methods flagged by the
rule.
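
For context, this is the class of bug the rule guards against (a contrived example, not code from this repo):

```typescript
// Contrived example (not from the repo) of what noUnreachable flags:
// statements after an unconditional return are dead code and never run.
function loadSettings(): string {
  return 'defaults';
  // A linter with noUnreachable enabled would flag the next line:
  console.log('cache refreshed'); // never executes
}

console.log(loadSettings()); // 'defaults'
```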
apps/public/src/app/docs/api-reference/[[...slug]]/page.tsx-66-71 (1)

66-71: 🛠️ Refactor suggestion | 🟠 Major

generateStaticParams is ineffective with force-dynamic.

The export const dynamic = 'force-dynamic' directive causes Next.js to skip static generation entirely, making generateStaticParams() unused. If dynamic rendering is intentional (e.g., to always fetch fresh OpenAPI specs), consider removing generateStaticParams to avoid confusion. If static generation is desired, remove force-dynamic.

♻️ Option A: Remove generateStaticParams if dynamic is intentional
-export async function generateStaticParams() {
-  const source = await getApiReferenceSource();
-  return source.generateParams();
-}
-
 export const dynamic = 'force-dynamic';
♻️ Option B: Remove force-dynamic if static generation is desired
 export async function generateStaticParams() {
   const source = await getApiReferenceSource();
   return source.generateParams();
 }
-
-export const dynamic = 'force-dynamic';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public/src/app/docs/api-reference/[[...slug]]/page.tsx` around lines
66 - 71, The file exports both generateStaticParams() and export const dynamic =
'force-dynamic', which makes generateStaticParams ineffective; either remove
generateStaticParams if you intend always-dynamic rendering, or remove the
dynamic = 'force-dynamic' export to enable static generation; update the module
by deleting the unused symbol (generateStaticParams) when keeping force-dynamic,
or delete the dynamic export when keeping generateStaticParams (ensure
getApiReferenceSource() and source.generateParams() remain intact if you keep
static).
apps/worker/src/jobs/import.ts-36-39 (1)

36-39: ⚠️ Potential issue | 🟠 Major

Import job reads require primary database to ensure retry logic correctness.

The removal of $primary() from line 36 is unsafe without a read-consistency mechanism. The import record read is critical for determining retry state (currentStep, currentBatch). If the application uses read replicas, replication lag could cause the read to fail for newly created imports or return stale state that breaks retry/resume logic.

The sessionConsistency extension in the codebase is not applied to the db client and doesn't protect non-session contexts anyway. Either restore $primary() for this read, or confirm that:

  • Read replicas are not in use in production, or
  • The db client has been reconfigured with read-consistency protection
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/worker/src/jobs/import.ts` around lines 36 - 39, The read of the import
record via db.import.findUniqueOrThrow (using importId) must use the primary DB
to avoid replica lag; restore $primary() on that query (e.g.,
db.import.findUniqueOrThrow({ where: { id: importId }, include: { project: true
} }).$primary()) or otherwise ensure the db client is configured for
session/read-consistency; update the code path that reads
currentStep/currentBatch to call db.import.findUniqueOrThrow with $primary() (or
replace with a primary-bound helper) so retry/resume logic always sees
up-to-date state.
docker/clickhouse/clickhouse-user-config.xml-11-14 (1)

11-14: ⚠️ Potential issue | 🟠 Major

Do not leave the ClickHouse default user password empty.

An empty password combined with wide network access allows unauthenticated access to exposed ClickHouse ports. Set a non-empty password (preferably hashed) and/or restrict allowed networks.

🔒 Suggested hardening
-            <password></password>
+            <password>change-me-in-local-env</password>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/clickhouse/clickhouse-user-config.xml` around lines 11 - 14, The
ClickHouse user config currently leaves the <password> element empty and allows
unrestricted access via <networks><ip>::/0</ip>; update the <password> element
to a non-empty, securely hashed password (or reference a secret manager) and
tighten the <networks><ip> entry to specific CIDR(s) or remove the open ::/0
entry to limit allowed source IPs so the default user cannot be accessed
anonymously from the network.
packages/db/src/services/dashboard.service.ts-42-50 (1)

42-50: ⚠️ Potential issue | 🟠 Major

organizationId is ignored in a tenant-scoped function.

The function accepts organizationId but does not use it in the query, which weakens scoping guarantees and creates a misleading API contract.

✅ Suggested fix
 export async function listDashboardsCore(input: {
   projectId: string;
   organizationId: string;
 }) {
   return db.dashboard.findMany({
-    where: { projectId: input.projectId },
+    where: {
+      projectId: input.projectId,
+      project: { organizationId: input.organizationId },
+    },
     orderBy: { createdAt: 'desc' },
     select: { id: true, name: true, projectId: true },
   });
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/dashboard.service.ts` around lines 42 - 50, The
function listDashboardsCore accepts organizationId but ignores it in the query;
update the db.dashboard.findMany call to include organizationId in the where
clause (e.g., where: { projectId: input.projectId, organizationId:
input.organizationId }) so the tenant scope is enforced; locate the
listDashboardsCore function and modify the where filter passed to
db.dashboard.findMany to include input.organizationId while preserving the
existing orderBy and select fields.
apps/api/src/routes/manage.router.ts-54-63 (1)

54-63: ⚠️ Potential issue | 🟠 Major

Don't collapse every resolveClientProjectId failure into 403.

The bare catch on Lines 61-63 will also turn transient database or Prisma failures from Line 55 into "Project does not belong to your organization". That hides real outages and makes valid requests fail as authorization errors. Please only map the expected access-denied/not-found case to 403 and let unexpected exceptions bubble as 500.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/routes/manage.router.ts` around lines 54 - 63, The catch block
around resolveClientProjectId is too broad and turns any failure
(DB/Prisma/outage) into a 403; instead only translate the specific "not found /
access denied" condition into reply.status(403) and rethrow or let other errors
bubble. Update the code that calls resolveClientProjectId to inspect the thrown
error (e.g. error.name, error.code or a custom Error subclass from
resolveClientProjectId) and only return reply.status(403).send({ error:
'Forbidden', message: 'Project does not belong to your organization' }) when the
error indicates the project is not in the organization (e.g.
ProjectNotFoundError / AccessDeniedError / code === 'PROJECT_NOT_IN_ORG'); for
any other error rethrow or don't catch so it surfaces as a 500. Ensure you
reference resolveClientProjectId and the existing reply.status(403) handling
when making this change.
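
One way to sketch the narrowed mapping; the error class name here is an assumption for illustration, not the project's actual type:

```typescript
// Hedged sketch: a dedicated error type lets the route translate only the
// expected access-denied case to 403 and let everything else surface as 500.
// ProjectAccessError is an assumed name, not the project's actual class.
class ProjectAccessError extends Error {}

function toHttpStatus(err: unknown): number {
  return err instanceof ProjectAccessError ? 403 : 500;
}

console.log(toHttpStatus(new ProjectAccessError('not in org'))); // 403
console.log(toHttpStatus(new Error('connection refused')));      // 500
```

The route handler would apply this mapping inside its catch block and rethrow (or simply not catch) anything that maps to 500.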
apps/api/src/utils/graceful-shutdown.ts-48-59 (1)

48-59: ⚠️ Potential issue | 🟠 Major

Start MCP teardown in parallel with or before fastify.close() to prevent shutdown deadlock.

By default, fastify.close() blocks until all in-flight HTTP requests complete. If MCP sessions maintain long-lived HTTP connections (e.g., streams, SSE, long polling), fastify.close() will wait indefinitely for them to end, meaning the mcpSessionManager.destroy() call at line 58 is never reached. Start session destruction in parallel or first to allow MCP teardown to cancel these connections while Fastify drains, preventing shutdown from hanging on active MCP connections.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/utils/graceful-shutdown.ts` around lines 48 - 59, Start MCP
teardown in parallel with Fastify shutdown to avoid deadlock: initiate
mcpSessionManager.destroy() before awaiting fastify.close() (or kick off both
without awaiting and then await Promise.allSettled) so MCP can cancel long-lived
connections while Fastify drains; update the shutdown sequence around
fastify.close() and mcpSessionManager.destroy() to run concurrently and still
log successes/errors via logger.info/logger.error for both operations.
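
The concurrent teardown can be sketched as follows; the function names are stand-ins, not the repo's API:

```typescript
// Hedged sketch (names are stand-ins, not the repo's API): run both
// teardowns concurrently so long-lived MCP connections cannot stall
// fastify.close() indefinitely.
async function shutdown(
  closeHttp: () => Promise<void>,       // stands in for fastify.close()
  destroySessions: () => Promise<void>, // stands in for mcpSessionManager.destroy()
): Promise<void> {
  const [sessions, http] = await Promise.allSettled([
    destroySessions(), // kicked off first so it can cancel open streams
    closeHttp(),
  ]);
  if (sessions.status === 'rejected') console.error('MCP teardown failed:', sessions.reason);
  if (http.status === 'rejected') console.error('HTTP close failed:', http.reason);
}

// Usage with stubs:
shutdown(async () => {}, async () => {}).then(() => console.log('shutdown complete'));
```

Promise.allSettled ensures a failure in one teardown never prevents the other from completing, and both outcomes get logged.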
packages/mcp/src/tools/analytics/events.ts-59-70 (1)

59-70: ⚠️ Potential issue | 🟠 Major

Remove .optional() from the limit schema—the default won't be applied by Zod.

Line 64 chains .optional() after .default(20), which in Zod 4 causes the outer optional to short-circuit and return undefined when the input is omitted, bypassing the inner default. Although queryEventsCore has a fallback (input.limit ?? 20), the schema should express the actual contract: limit is optional to the caller, but Zod should inject the default before passing to the handler.

Suggested fix
       limit: z
         .number()
         .min(1)
         .max(100)
         .default(20)
-        .optional()
         .describe('Maximum number of events to return (1-100, default 20)'),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/mcp/src/tools/analytics/events.ts` around lines 59 - 70, The limit
schema currently chains .optional() after .default(20) which prevents Zod from
applying the default; remove the .optional() call on the limit schema so Zod
will inject the default value (20) before the handler runs. Update the schema
definition around the symbol "limit" in this file and keep the handler flow that
calls resolveProjectId and queryEventsCore unchanged (queryEventsCore can still
use its fallback but will now receive the default from Zod).
packages/db/src/services/reports.service.ts-132-151 (1)

132-151: ⚠️ Potential issue | 🟠 Major

Enforce project/org ownership before listing dashboard reports.

listReportsCore() currently calls getReportsByDashboardId(input.dashboardId) and ignores both projectId and organizationId, so anyone who can supply a dashboard ID can enumerate another project's report metadata through this core path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/reports.service.ts` around lines 132 - 151,
listReportsCore ignores input.projectId and input.organizationId and calls
getReportsByDashboardId directly, allowing unauthorized enumeration; update
listReportsCore to first load the dashboard (e.g., via getDashboardById or a
repository method) and verify dashboard.projectId === input.projectId and
dashboard.organizationId === input.organizationId (or call a shared
authorization helper), throwing an authorization/error if they don’t match,
before calling getReportsByDashboardId and returning the mapped reports;
reference listReportsCore, getReportsByDashboardId, input.projectId, and
input.organizationId when making the change.
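
The ownership guard described above can be sketched like this; the Dashboard shape and the assumption that the row carries both IDs are illustrative, not the actual schema:

```typescript
// Hedged sketch of the ownership check; the Dashboard shape and field
// names are assumptions for illustration, not the actual schema.
type Dashboard = { id: string; projectId: string; organizationId: string };

function assertDashboardOwnership(
  dashboard: Dashboard | null,
  input: { projectId: string; organizationId: string },
): Dashboard {
  if (
    !dashboard ||
    dashboard.projectId !== input.projectId ||
    dashboard.organizationId !== input.organizationId
  ) {
    throw new Error('Forbidden: dashboard is outside the caller scope');
  }
  return dashboard;
}

const dash = { id: 'd1', projectId: 'p1', organizationId: 'o1' };
console.log(assertDashboardOwnership(dash, { projectId: 'p1', organizationId: 'o1' }).id); // 'd1'
```

listReportsCore would run this check on the loaded dashboard before calling getReportsByDashboardId.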
apps/api/src/routes/mcp.router.ts-67-74 (1)

67-74: ⚠️ Potential issue | 🟠 Major

Don't widen DELETE authorization from project scope to org scope.

The delete guard only compares organizationId. For project-scoped read clients, that lets one project close another project's session inside the same org if the session ID leaks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/routes/mcp.router.ts` around lines 67 - 74, The current deletion
guard only compares context.organizationId and thus allows project-scoped
clients to delete sessions from other projects in the same org; update the guard
in the delete route (where mcpSessionManager.getContext is used) to enforce
project-level authorization for project-scoped callers by also checking
context.projectId === callerContext.projectId when the callerContext indicates a
project-scoped client (e.g., callerContext.projectId or a scope flag). If
callerContext has no projectId (org-scoped), keep the existing orgId check;
otherwise deny with 403. Ensure this logic is applied before the
reply.status(...).send calls.
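
The tightened guard can be sketched as a pure predicate; the context shape is assumed for illustration:

```typescript
// Hedged sketch of the tightened guard; the context shape is an assumption.
// An org-scoped caller (no projectId) keeps the org-level check, while a
// project-scoped caller must also match the session's projectId.
type McpContext = { organizationId: string; projectId?: string };

function canDeleteSession(caller: McpContext, session: McpContext): boolean {
  if (caller.organizationId !== session.organizationId) return false;
  return caller.projectId === undefined || caller.projectId === session.projectId;
}

console.log(canDeleteSession(
  { organizationId: 'o1', projectId: 'p1' },
  { organizationId: 'o1', projectId: 'p2' },
)); // false: same org, different project
console.log(canDeleteSession(
  { organizationId: 'o1' },
  { organizationId: 'o1', projectId: 'p2' },
)); // true: org-scoped caller
```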
packages/db/src/services/sankey.service.ts-798-825 (1)

798-825: ⚠️ Potential issue | 🟠 Major

Keep the exported core wrapper behind the existing Sankey input bounds.

getUserFlowCore() forwards raw steps/filters straight into getSankey(), bypassing zGetSankeyInput. That means this new entrypoint can accept steps < 2 or very large values even though the original service explicitly capped the query shape at 2–10.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/sankey.service.ts` around lines 798 - 825,
getUserFlowCore is forwarding raw steps and filters into
sankeyService.getSankey, bypassing zGetSankeyInput bounds; restore the same
input validation/sanitization by feeding the input through the existing
zGetSankeyInput (or replicate its logic) before calling sankeyService.getSankey
so steps is clamped to the allowed range (e.g., 2–10) and include/exclude are
normalized. Specifically, in getUserFlowCore call zGetSankeyInput or apply the
same normalization used by zGetSankeyInput to compute steps, include and
exclude, then pass those sanitized values (not raw input.steps/include/exclude)
into sankeyService.getSankey to enforce the original query shape constraints.
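
Re-applying the bound is a one-line clamp; the 2–10 range comes from the review text above, while the helper name is made up for illustration:

```typescript
// Hedged sketch of re-applying the 2-10 bound that zGetSankeyInput enforces;
// the range comes from the review text, the helper name is made up.
function clampSteps(steps: number): number {
  return Math.min(10, Math.max(2, Math.trunc(steps)));
}

console.log(clampSteps(1));  // 2 (below the floor)
console.log(clampSteps(5));  // 5 (in range, untouched)
console.log(clampSteps(50)); // 10 (above the ceiling)
```

Running the raw input through zGetSankeyInput itself is preferable where possible, since it also normalizes the include/exclude fields.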
packages/mcp/src/tools/dashboard-links.ts-1-1 (1)

1-1: 🛠️ Refactor suggestion | 🟠 Major

Remove empty import statement.

The empty import import { } from '@openpanel/db'; has no effect and should be removed.

🔧 Remove empty import
-import { } from '@openpanel/db';
 import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/mcp/src/tools/dashboard-links.ts` at line 1, Remove the no-op empty
import statement `import { } from '@openpanel/db';` from the top of the file;
simply delete that line so there are no unused/empty imports left in
packages/mcp/src/tools/dashboard-links.ts and ensure any required symbols from
'@openpanel/db' are imported explicitly if actually needed elsewhere in this
module.
apps/api/src/controllers/insights.controller.ts-224-234 (1)

224-234: ⚠️ Potential issue | 🟠 Major

cursor and limit are ignored here.

This handler still accepts pagination fields, but it no longer forwards them to overviewService.getTopPages. Clients will always get the service default window, so paging and explicit caps stop working.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/controllers/insights.controller.ts` around lines 224 - 234, The
getPages handler ignores pagination by not forwarding cursor/limit from the
request to overviewService.getTopPages; update getPages to extract cursor and
limit from req.query (e.g., req.query.cursor and req.query.limit) and pass them
into the call to overviewService.getTopPages (add them to the options object
alongside projectId, filters, startDate, endDate, timezone) so client-provided
pagination parameters are respected.
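The shape of the fix can be sketched with a hypothetical helper (field names mirror the query schema described above, but are assumptions):

```typescript
type PagesQuery = {
  cursor?: string;
  limit?: number;
  startDate?: string;
  endDate?: string;
};

// Lift cursor and limit out of the parsed query and forward them
// explicitly in the service options instead of dropping them.
function buildTopPagesOptions(projectId: string, query: PagesQuery) {
  const { cursor, limit, ...rest } = query;
  return { projectId, ...rest, cursor, limit };
}
```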
packages/db/src/services/pages.service.ts-182-195 (1)

182-195: ⚠️ Potential issue | 🟠 Major

limit is accepted but never applied.

getTopPagesCore advertises limit?: number, yet the wrapper drops it before calling OverviewService.getTopPages. Callers will always get the downstream default, which changes both response size and expected semantics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/pages.service.ts` around lines 182 - 195,
getTopPagesCore accepts a limit parameter but never forwards it to the
downstream call; update getTopPagesCore to pass the input.limit into
_overviewServiceForPages.getTopPages (or provide a sensible default before
forwarding) so callers get the requested page count rather than the downstream
default. Locate the getTopPagesCore function and add the limit field to the
object passed to _overviewServiceForPages.getTopPages, ensuring the property
name matches the downstream API (limit) and preserves optional behavior when
input.limit is undefined.
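A minimal stand-in for the wrapper/downstream pair shows the intended pass-through (the stub and the default of 50 are assumptions, not the real service API):

```typescript
type TopPagesInput = { projectId: string; limit?: number };

// Stub that just records what it received, standing in for the
// downstream getTopPages call.
function getTopPagesStub(opts: TopPagesInput & { limit: number }) {
  return { received: opts.limit };
}

function getTopPagesCore(input: TopPagesInput) {
  // Forward input.limit instead of dropping it; fall back to a
  // default only when the caller omitted it.
  return getTopPagesStub({ ...input, limit: input.limit ?? 50 });
}
```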
packages/db/src/services/pages.service.ts-225-253 (1)

225-253: ⚠️ Potential issue | 🟠 Major

This ranking is computed from a truncated candidate set.

pagesService.getTopPages already orders by sessions DESC, so fetching only the first 1,000 rows and then re-sorting in memory cannot return the true top pages for pageviews, bounce_rate, or avg_duration once a project has more than 1,000 pages. total_pages is also capped at that same truncated subset.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/pages.service.ts` around lines 225 - 253, The
current code truncates to 1000 rows after calling pagesService.getTopPages (then
re-sorts in memory), which produces incorrect top results and an incorrect total
when projects exceed 1000 pages; fix by pushing sorting and limiting into the
data layer: update the call to pagesService.getTopPages to accept and honor
input.sortBy and input.sortOrder (and use a sensible DB-side limit or no limit
when computing totals), request either the full candidate set or let the DB
perform ORDER BY (e.g., pass input.limit or null) so the returned pages are
already correctly ordered, and use the total count returned by the service
(instead of pages.length) for total_pages; refer to pagesService.getTopPages,
input.sortBy, input.sortOrder, pages, and total_pages when making these changes.
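A tiny illustration of why re-sorting a truncated candidate set goes wrong: the true top row by `pageviews` can sit outside the top-N by `sessions`. The data here is fabricated purely for the demonstration.

```typescript
type Page = { path: string; sessions: number; pageviews: number };

const candidates: Page[] = [
  { path: '/a', sessions: 100, pageviews: 120 },
  { path: '/b', sessions: 90, pageviews: 500 },
];

// Simulate the DB returning only the top 1 by sessions, then
// re-sorting that truncated set by pageviews in memory.
const truncated = [...candidates]
  .sort((a, b) => b.sessions - a.sessions)
  .slice(0, 1);
const wrongTop = [...truncated].sort((a, b) => b.pageviews - a.pageviews)[0];

// Sorting the full set (what an ORDER BY in the data layer would do)
// finds the real winner.
const trueTop = [...candidates].sort((a, b) => b.pageviews - a.pageviews)[0];
```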
apps/api/src/app.ts-159-169 (1)

159-169: ⚠️ Potential issue | 🟠 Major

Don't persist raw tRPC input and session objects in error logs.

ctx.input and ctx.ctx?.session can contain emails, identifiers, and other customer data. Logging them on every procedure failure creates an avoidable PII trail; log a small redacted set of fields instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/app.ts` around lines 159 - 169, The onError handler currently
logs raw ctx.input and ctx.ctx?.session (in the ctx.req.log.error call), which
can contain PII; change the handler to stop persisting those raw objects and
instead log a minimal, redacted payload (e.g., ctx.path, ctx.type, error
metadata and a sanitized/small whitelist of non-PII fields). Implement or call a
small sanitizer (e.g., sanitizeTrpcInput/sanitizeSession) before logging and
replace ctx.input and ctx.ctx?.session in the log payload with the sanitized
versions or omitted fields only.
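A sketch of such a sanitizer, assuming a simple whitelist approach (the function name and allowed fields are hypothetical):

```typescript
// Keep only an explicit whitelist of non-PII fields before anything
// reaches the error log.
function sanitizeForLog(
  payload: Record<string, unknown> | undefined,
  allow: readonly string[],
): Record<string, unknown> {
  if (!payload) return {};
  return Object.fromEntries(
    Object.entries(payload).filter(([key]) => allow.includes(key)),
  );
}
```

The onError handler would then log, for example, `sanitizeForLog(ctx.input, ['projectId', 'range'])` in place of the raw input and session objects.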
packages/db/src/services/pages.service.ts-274-315 (1)

274-315: 🛠️ Refactor suggestion | 🟠 Major

Please rewrite this with the shared ClickHouse builder.

This new raw-SQL path bypasses the repository abstraction and reintroduces manual escaping, hard-coded table names, and direct date conversion in a file that otherwise uses clix.

As per coding guidelines, "When writing ClickHouse queries, always use the custom query builder from ./packages/db/src/clickhouse/query-builder.ts and ./packages/db/src/clickhouse/query-functions.ts."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/pages.service.ts` around lines 274 - 315, The
function currently builds raw SQL using manual escaping and hard-coded table/SQL
functions (see chQuery, IPageConversionRow and variables projectId,
conversionEvent, startDate, endDate, windowHours, limit) which bypasses the
shared ClickHouse abstraction; rewrite this query to use the shared ClickHouse
query builder and query-functions (from clickhouse/query-builder.ts and
clickhouse/query-functions.ts) instead of string interpolation — replace the
WITH CTEs (conversion_events, views_before_conversions, total_visitors) with
builder subqueries, use parameterized builder methods for project_id, event name
and date ranges (use the query-functions for toDateTime/interval/uniq/round),
build the LEFT JOIN, GROUP BY, ORDER BY and LIMIT via builder APIs, and return
the built query via chQuery<IPageConversionRow> so no manual sqlstring.escape or
direct SQL literals remain.
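As a hedged illustration of what the guideline buys (values travel as bound parameters rather than being spliced into the SQL string), here is a miniature stand-in for a fluent builder; the real clix API in packages/db/src/clickhouse/query-builder.ts differs and should be used instead:

```typescript
// Miniature stand-in: conditions become placeholders, values travel
// separately, so no manual escaping is ever needed.
class MiniBuilder {
  private conds: string[] = [];
  private params: Record<string, unknown> = {};

  constructor(
    private table: string,
    private cols: string[],
  ) {}

  where(col: string, op: string, value: unknown): this {
    const key = `p${Object.keys(this.params).length}`;
    this.conds.push(`${col} ${op} {${key}:String}`);
    this.params[key] = value;
    return this;
  }

  build() {
    const where = this.conds.length ? ` WHERE ${this.conds.join(' AND ')}` : '';
    return {
      sql: `SELECT ${this.cols.join(', ')} FROM ${this.table}${where}`,
      params: this.params,
    };
  }
}
```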
apps/api/src/controllers/insights.controller.ts-432-454 (1)

432-454: ⚠️ Potential issue | 🟠 Major

range is silently dropped on the events endpoint.

zEventsQuery exposes range, but queryEventsCore only resolves explicit startDate/endDate. Passing req.query straight through means requests like ?range=30d fall back to the core default window instead of the requested one.

Possible fix
 export async function queryEvents(
   req: FastifyRequest<{ Params: { projectId?: string }; Querystring: z.infer<typeof zEventsQuery> }>,
   reply: FastifyReply
 ) {
   const projectId = await getProjectId(req as RequestWithProjectParam);
-  return reply.send(await queryEventsCore({ projectId, ...req.query }));
+  const { startDate, endDate } = await resolveDates(projectId, req.query);
+  const { range: _ignoredRange, ...rest } = req.query;
+  return reply.send(
+    await queryEventsCore({ projectId, ...rest, startDate, endDate })
+  );
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/controllers/insights.controller.ts` around lines 432 - 454,
zEventsQuery includes a shorthand "range" field but queryEvents forwards
req.query directly to queryEventsCore which only honors explicit
startDate/endDate, so range values like "30d" are ignored; update the
queryEvents handler to detect and convert the incoming range (from zEventsQuery)
into concrete startDate and endDate before calling queryEventsCore (or map it to
the core's expected params), e.g. parse req.query.range into startDate/endDate
and pass { projectId, ...req.query, startDate, endDate } to queryEventsCore so
the requested window is respected; adjust use of symbols zEventsQuery,
queryEvents, queryEventsCore, req.query, range, startDate, endDate accordingly.
packages/db/src/services/event.service.ts-1284-1291 (1)

1284-1291: ⚠️ Potential issue | 🟠 Major

Add a deterministic sort before LIMIT.

Without an ORDER BY, ClickHouse can return any matching 20 rows. That makes queryEventsCore nondeterministic and it won't reliably surface the latest matching events.

Possible fix
-  return builder.limit(input.limit ?? 20).execute();
+  return builder
+    .orderBy('created_at', 'DESC')
+    .orderBy('id', 'DESC')
+    .limit(input.limit ?? 20)
+    .execute();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/event.service.ts` around lines 1284 - 1291,
queryEventsCore (the builder usage around resolveDateRange and
builder.where('created_at', 'BETWEEN', ...)) is nondeterministic because it
applies LIMIT without an ORDER BY; add a deterministic ORDER BY before calling
builder.limit(...). Specifically, order by created_at descending to surface
latest events (and include a tiebreaker unique key/PK if available, e.g., id or
event_id) by invoking the builder's ordering method prior to builder.limit to
ensure consistent results.
packages/db/src/services/event.service.ts-1190-1204 (1)

1190-1204: ⚠️ Potential issue | 🟠 Major

Deduplicate property_value before applying the 200-row cap.

This query limits raw rows, not unique values. For popular properties, repeated property_values will consume the window and callers will miss other valid options.

Possible fix
-  const rows = await clix(ch)
-    .select<{ value: string }>(['property_value as value'])
+  const rows = await clix(ch)
+    .select<{ value: string; last_seen: string }>([
+      'property_value as value',
+      'max(created_at) as last_seen',
+    ])
     .from(TABLE_NAMES.event_property_values_mv)
     .where('project_id', '=', input.projectId)
     .where('name', '=', input.eventName)
     .where('property_key', '=', input.propertyKey)
-    .orderBy('created_at', 'DESC')
+    .groupBy(['property_value'])
+    .orderBy('last_seen', 'DESC')
     .limit(200)
     .execute();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/event.service.ts` around lines 1190 - 1204, The
query currently limits raw rows so duplicate property_value entries can crowd
out unique options; change the query against
TABLE_NAMES.event_property_values_mv to deduplicate by property_value before
applying the 200-row limit (e.g., groupBy('property_value') and select
property_value as value with max('created_at') as max_created_at, then
orderBy('max_created_at', 'DESC').limit(200)) so the returned rows variable
contains up to 200 unique values; keep the returned shape (event, property,
values: rows.map(r => r.value)).
🟡 Minor comments (10)
apps/public/content/docs/api/manage/clients.mdx-34-34 (1)

34-34: ⚠️ Potential issue | 🟡 Minor

Use US-English variant for consistency (afterward).

Line 34 uses “afterwards”; switch to “afterward” for consistency with American English style.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/public/content/docs/api/manage/clients.mdx` at line 34, Replace the
US/UK inconsistent word in the sentence "Client secrets are only returned once
at creation time and are never retrievable afterwards." by changing "afterwards"
to the US-English variant "afterward" so the sentence reads "Client secrets are
only returned once at creation time and are never retrievable afterward."
packages/db/src/prisma-client.ts-91-118 (1)

91-118: ⚠️ Potential issue | 🟡 Minor

Missing SELF_HOSTED check in isActive and isTrial computed fields.

All other subscription-related computed fields (isCanceled, isWillBeCanceled, isExpired, isExceeded, etc.) return a safe default when process.env.SELF_HOSTED === 'true', but isActive and isTrial do not. This inconsistency could cause unexpected behavior in self-hosted deployments.

Proposed fix
         isActive: {
           needs: {
             subscriptionStatus: true,
             subscriptionEndsAt: true,
             subscriptionCanceledAt: true,
           },
           compute(org) {
+            if (process.env.SELF_HOSTED === 'true') {
+              return true;
+            }
+
             return (
               org.subscriptionStatus === 'active' &&
               org.subscriptionEndsAt &&
               org.subscriptionEndsAt > new Date() &&
               !isCanceled(org) &&
               !isWillBeCanceled(org)
             );
           },
         },
         isTrial: {
           needs: { subscriptionStatus: true, subscriptionEndsAt: true },
           compute(org) {
+            if (process.env.SELF_HOSTED === 'true') {
+              return false;
+            }
+
             const isSubscriptionInFuture =
               org.subscriptionEndsAt && org.subscriptionEndsAt > new Date();
             return (
               (org.subscriptionStatus === 'trialing' ||
                 org.subscriptionStatus === null) &&
               isSubscriptionInFuture
             );
           },
         },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/prisma-client.ts` around lines 91 - 118, The computed fields
isActive and isTrial in prisma-client.ts lack the SELF_HOSTED guard that other
subscription helpers (isCanceled, isWillBeCanceled, isExpired, isExceeded) use;
update the compute functions for isActive and isTrial to first check
process.env.SELF_HOSTED === 'true' and return the same safe default used
elsewhere (e.g., false) when self-hosted, then proceed with the existing logic
(keep references to org.subscriptionStatus, org.subscriptionEndsAt and calls to
isCanceled/isWillBeCanceled in isActive). Ensure both isActive and isTrial
mirror the defensive pattern used by the other computed fields.
packages/db/src/services/gsc.service.ts-40-41 (1)

40-41: ⚠️ Potential issue | 🟡 Minor

Duplicate filtering: minImpressions filter applied twice with different values.

The computeOpportunities function has a hardcoded filter q.impressions >= 50 at line 41, but the caller gscGetQueryOpportunitiesCore (lines 159-161) already filters by minImpressions (defaulting to 50). If minImpressions is set below 50 (e.g., 10), the hardcoded filter still drops queries with 10–49 impressions, making the parameter ineffective for this condition.

Proposed fix: Remove hardcoded impressions filter from computeOpportunities
   return queries
-    .filter((q) => q.position >= 4 && q.position <= 20 && q.impressions >= 50)
+    .filter((q) => q.position >= 4 && q.position <= 20)
     .map((q) => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/gsc.service.ts` around lines 40 - 41,
computeOpportunities currently hardcodes q.impressions >= 50 which duplicates
and overrides the caller's minImpressions; remove the hardcoded filter (or
replace it to use a minImpressions parameter) so computeOpportunities respects
the caller-provided minImpressions from gscGetQueryOpportunitiesCore, and ensure
any references to the impressions threshold reference the passed-in
minImpressions variable rather than the literal 50 (locate computeOpportunities
and gscGetQueryOpportunitiesCore to apply the change).
packages/mcp/src/tools/analytics/engagement.ts-30-47 (1)

30-47: ⚠️ Potential issue | 🟡 Minor

The 60-day churn bucket is off by one.

Line 34 still counts days === 60 into inactive_31_to_60_days, but Line 47 exposes the last bucket as churned_60_plus_days. Please align the threshold or rename the bucket so the summary is unambiguous.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/mcp/src/tools/analytics/engagement.ts` around lines 30 - 47, The
60-day boundary is inconsistent: change the loop condition "else if (row.days <=
60) active_31_60 += row.users;" to "else if (row.days <= 59) active_31_60 +=
row.users;" so that days === 60 falls into the churned bucket, and update the
summary key name "active_31_to_60_days" to "active_31_to_59_days" (while keeping
"churned_60_plus_days" and the variables active_31_60 and churned_60_plus) so
the labels and thresholds align unambiguously.
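One way to align the boundary can be sketched as follows (this picks the "move days === 60 into the churned bucket" option; renaming the bucket instead is equally valid, and the label strings here are illustrative):

```typescript
// days === 60 now falls into the churned bucket, matching the
// churned_60_plus_days label.
function bucketFor(days: number): string {
  if (days <= 30) return 'active_0_to_30_days';
  if (days < 60) return 'inactive_31_to_59_days';
  return 'churned_60_plus_days';
}
```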
packages/mcp/src/tools/gsc/queries.ts-42-43 (1)

42-43: ⚠️ Potential issue | 🟡 Minor

minImpressions below 50 is silently ignored.

The handler already filters by the caller-supplied threshold, but computeOpportunities() applies a second hard-coded impressions >= 50 filter. Requests like minImpressions: 10 will still drop all 10–49 impression queries.

🛠️ Suggested fix
 function computeOpportunities(
   queries: Array<{
     query: string;
     clicks: number;
     impressions: number;
     ctr: number;
     position: number;
   }>,
 ): GscQueryOpportunity[] {
   return queries
-    .filter((q) => q.position >= 4 && q.position <= 20 && q.impressions >= 50)
+    .filter((q) => q.position >= 4 && q.position <= 20)
     .map((q) => {
       const benchmark = getBenchmark(q.position);
       const ctrGap = Math.max(0, benchmark - q.ctr);

Also applies to: 115-123

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/mcp/src/tools/gsc/queries.ts` around lines 42 - 43, The code in
computeOpportunities currently applies a hard-coded impressions >= 50 filter
(the queries.filter(...) call) which overrides caller-supplied minImpressions;
update computeOpportunities to accept and use the minImpressions parameter (or
passed-in threshold) instead of 50 wherever queries are filtered by impressions
(including the other similar filter further down), replacing the literal 50 with
the minImpressions variable and ensuring the function signature/locals pass that
value through to both filter calls (preserve the existing position bounds
logic).
packages/mcp/src/tools/analytics/reports.ts-99-105 (1)

99-105: ⚠️ Potential issue | 🟡 Minor

Inconsistent error handling - returns error object instead of throwing.

get_report_data returns { error: '...' } objects for "not found" and "wrong project" cases, while other tools throw errors that are caught by withErrorHandling. This makes error handling inconsistent for consumers.

🔧 Proposed fix for consistent error handling
       if (!report) {
-        return { error: 'Report not found', reportId };
+        throw new Error(`Report not found: ${reportId}`);
       }

       if (report.projectId !== projectId) {
-        return { error: 'Report does not belong to this project', reportId };
+        throw new Error('Report does not belong to this project');
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/mcp/src/tools/analytics/reports.ts` around lines 99 - 105, The
get_report_data function currently returns error objects ({ error: '...' }) for
"Report not found" and "Report does not belong to this project" cases, causing
inconsistent handling compared to other tools that throw and are wrapped by
withErrorHandling; change those return statements inside get_report_data to
throw Error instances (or the same custom error type used elsewhere) with the
same messages so callers and withErrorHandling receive exceptions consistently
(update the two checks referencing report and report.projectId to throw new
Error('Report not found') and throw new Error('Report does not belong to this
project') respectively).
packages/mcp/src/tools/analytics/page-performance.ts-64-67 (1)

64-67: ⚠️ Potential issue | 🟡 Minor

Sorting comparison doesn't handle equality, causing unstable sort.

The comparison (a[col] ?? 0) < (b[col] ?? 0) ? -1 : 1 returns 1 when values are equal instead of 0. This causes unstable sorting behavior where equal items may be reordered arbitrarily.

🔧 Proposed fix for stable sorting
 const col = sortBy ?? 'sessions';
 const dir = sortOrder === 'asc' ? 1 : -1;
-const sorted = [...pages].sort((a, b) => dir * ((a[col] ?? 0) < (b[col] ?? 0) ? -1 : 1));
+const sorted = [...pages].sort((a, b) => {
+  const aVal = a[col] ?? 0;
+  const bVal = b[col] ?? 0;
+  return dir * (aVal < bVal ? -1 : aVal > bVal ? 1 : 0);
+});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/mcp/src/tools/analytics/page-performance.ts` around lines 64 - 67,
The sort comparator used when building sorted from pages is unstable because it
returns 1 for equal values; update the comparator in the sorted assignment (the
sort callback that uses col, dir, pages, sortBy, sortOrder) to compute aVal =
(a[col] ?? 0) and bVal = (b[col] ?? 0) and return dir * (aVal === bVal ? 0 :
aVal < bVal ? -1 : 1) so equal items return 0 and maintain stability; then slice
into results as before with limit.
packages/db/src/services/date.service.ts-173-182 (1)

173-182: ⚠️ Potential issue | 🟡 Minor

Inconsistent boundary handling for lastYear range.

Other ranges like 7d, 6m, 12m, monthToDate, and lastMonth add plus({ millisecond: 1 }) to the endDate, but lastYear doesn't. This inconsistency could cause subtle off-by-one behavior when comparing year-over-year data.

🔧 Suggested fix
   if (range === 'lastYear') {
     const year = DateTime.now().minus({ year: 1 }).setZone(timezone);
     const startDate = year.startOf('year').toFormat('yyyy-MM-dd HH:mm:ss');
-    const endDate = year.endOf('year').toFormat('yyyy-MM-dd HH:mm:ss');
+    const endDate = year
+      .endOf('year')
+      .plus({ millisecond: 1 })
+      .toFormat('yyyy-MM-dd HH:mm:ss');

     return {
       startDate,
       endDate,
     };
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/date.service.ts` around lines 173 - 182, The
lastYear branch in date.service.ts returns endDate without the millisecond bump
unlike other branches (e.g., the handlers for '7d', '6m', '12m', 'monthToDate',
'lastMonth'); update the lastYear computation (the block using
DateTime.now().minus({ year: 1 }).setZone(timezone) and the endDate variable) to
add plus({ millisecond: 1 }) before formatting so endDate uses the same
inclusive boundary handling as the other ranges.
packages/db/src/services/profile.service.ts-472-496 (1)

472-496: ⚠️ Potential issue | 🟡 Minor

Potential false positive on null check.

getProfileMetrics (line 101) uses data[0]! with a non-null assertion. If ClickHouse returns an empty result set, data[0] is undefined at runtime; the non-null assertion only silences the compiler. The null check at line 477 would never catch this case because raw would be undefined rather than null.

Consider updating getProfileMetrics to handle empty results gracefully:

// In getProfileMetrics (around line 101)
-    .then((data) => data[0]!)
+    .then((data) => data[0] ?? null)

Then the check in getProfileMetricsCore would work as intended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/profile.service.ts` around lines 472 - 496,
getProfileMetricsCore's null check can be bypassed because getProfileMetrics
uses a non-null assertion on data[0] (data[0]!) which will throw before
returning undefined when ClickHouse returns no rows; update getProfileMetrics to
handle empty result sets gracefully (remove the non-null assertion, explicitly
check if data.length === 0 and return null/undefined or a safe empty object) so
callers like getProfileMetricsCore can rely on its returned value, and ensure
getProfileMetricsCore still checks raw and handles the null/undefined case
appropriately.
test/fixtures.ts-449-451 (1)

449-451: ⚠️ Potential issue | 🟡 Minor

Schema statements may require sequential execution.

Promise.all executes all statements in parallel, but DDL statements often have ordering dependencies (e.g., CREATE DATABASE must complete before CREATE TABLE). If clickhouse-schema.sql has such dependencies, this could cause intermittent failures.

🔧 Suggested fix for sequential execution
-  await Promise.all(
-    statements.map((statement) => client.command({ query: statement }))
-  );
+  for (const statement of statements) {
+    await client.command({ query: statement });
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/fixtures.ts` around lines 449 - 451, The schema statements are run in
parallel with Promise.all which can break ordering-dependent DDL; change the
logic that calls client.command for each entry in statements (the map over
statements and client.command invocation) to execute sequentially by awaiting
each client.command in order (e.g., a for..of loop or an await-reduce) so each
statement completes before the next starts, ensuring CREATE DATABASE/TABLE
ordering is preserved.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1940706f-b0a4-4438-8f96-799d3f07710a

📥 Commits

Reviewing files that changed from the base of the PR and between a1ce71f and b2e2dba.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (127)
  • .github/workflows/docker-build.yml
  • apps/api/package.json
  • apps/api/src/app.ts
  • apps/api/src/controllers/export.controller.ts
  • apps/api/src/controllers/insights.controller.ts
  • apps/api/src/controllers/manage.controller.ts
  • apps/api/src/controllers/track.controller.ts
  • apps/api/src/index.ts
  • apps/api/src/routes/event.router.ts
  • apps/api/src/routes/export.router.ts
  • apps/api/src/routes/import.router.ts
  • apps/api/src/routes/insights.router.test.ts
  • apps/api/src/routes/insights.router.ts
  • apps/api/src/routes/manage.router.ts
  • apps/api/src/routes/mcp.router.ts
  • apps/api/src/routes/profile.router.ts
  • apps/api/src/routes/track.router.ts
  • apps/api/src/utils/graceful-shutdown.ts
  • apps/api/src/utils/parse-zod-query-string.ts
  • apps/api/src/utils/rate-limiter.ts
  • apps/api/tsconfig.json
  • apps/api/vitest.config.ts
  • apps/public/content/docs/api-reference/authentication.mdx
  • apps/public/content/docs/api-reference/rate-limits.mdx
  • apps/public/content/docs/api/export.mdx
  • apps/public/content/docs/api/insights.mdx
  • apps/public/content/docs/api/manage/clients.mdx
  • apps/public/content/docs/api/manage/index.mdx
  • apps/public/content/docs/api/manage/projects.mdx
  • apps/public/content/docs/api/manage/references.mdx
  • apps/public/content/docs/api/meta.json
  • apps/public/content/docs/api/track.mdx
  • apps/public/content/docs/mcp/index.mdx
  • apps/public/content/docs/meta.json
  • apps/public/next.config.mjs
  • apps/public/package.json
  • apps/public/source.config.ts
  • apps/public/src/app/docs/(docs)/[[...slug]]/page.tsx
  • apps/public/src/app/docs/(docs)/layout.tsx
  • apps/public/src/app/docs/api-reference/[[...slug]]/page.tsx
  • apps/public/src/app/docs/api-reference/layout.tsx
  • apps/public/src/app/docs/layout.tsx
  • apps/public/src/app/global.css
  • apps/public/src/components/navbar.tsx
  • apps/public/src/lib/openapi.ts
  • apps/public/src/mdx-components.tsx
  • apps/worker/src/jobs/import.ts
  • biome.json
  • docker-compose.yml
  • docker/clickhouse/clickhouse-user-config.xml
  • packages/auth/src/session.ts
  • packages/db/index.ts
  • packages/db/src/engine/index.ts
  • packages/db/src/engine/normalize.ts
  • packages/db/src/prisma-client.ts
  • packages/db/src/services/access.service.ts
  • packages/db/src/services/chart.service.ts
  • packages/db/src/services/dashboard.service.ts
  • packages/db/src/services/date.service.ts
  • packages/db/src/services/event.service.ts
  • packages/db/src/services/funnel.service.ts
  • packages/db/src/services/group.service.ts
  • packages/db/src/services/gsc.service.ts
  • packages/db/src/services/organization.service.ts
  • packages/db/src/services/overview.service.ts
  • packages/db/src/services/pages.service.ts
  • packages/db/src/services/profile.service.ts
  • packages/db/src/services/project.service.ts
  • packages/db/src/services/reports.service.ts
  • packages/db/src/services/retention.service.ts
  • packages/db/src/services/sankey.service.ts
  • packages/db/src/services/session.service.ts
  • packages/importer/src/providers/mixpanel.ts
  • packages/logger/index.ts
  • packages/mcp/index.ts
  • packages/mcp/package.json
  • packages/mcp/src/auth.test.ts
  • packages/mcp/src/auth.ts
  • packages/mcp/src/handler.ts
  • packages/mcp/src/integration/tools.test.ts
  • packages/mcp/src/server.ts
  • packages/mcp/src/session-manager.test.ts
  • packages/mcp/src/session-manager.ts
  • packages/mcp/src/tools/analytics/active-users.ts
  • packages/mcp/src/tools/analytics/engagement.test.ts
  • packages/mcp/src/tools/analytics/engagement.ts
  • packages/mcp/src/tools/analytics/event-names.ts
  • packages/mcp/src/tools/analytics/events.ts
  • packages/mcp/src/tools/analytics/funnel.ts
  • packages/mcp/src/tools/analytics/groups.ts
  • packages/mcp/src/tools/analytics/overview.ts
  • packages/mcp/src/tools/analytics/page-conversions.test.ts
  • packages/mcp/src/tools/analytics/page-conversions.ts
  • packages/mcp/src/tools/analytics/page-performance.test.ts
  • packages/mcp/src/tools/analytics/page-performance.ts
  • packages/mcp/src/tools/analytics/pages.ts
  • packages/mcp/src/tools/analytics/profile-metrics.ts
  • packages/mcp/src/tools/analytics/profiles.test.ts
  • packages/mcp/src/tools/analytics/profiles.ts
  • packages/mcp/src/tools/analytics/property-values.ts
  • packages/mcp/src/tools/analytics/reports.ts
  • packages/mcp/src/tools/analytics/retention.ts
  • packages/mcp/src/tools/analytics/sessions.ts
  • packages/mcp/src/tools/analytics/traffic.ts
  • packages/mcp/src/tools/analytics/user-flow.ts
  • packages/mcp/src/tools/dashboard-links.ts
  • packages/mcp/src/tools/gsc/cannibalization.ts
  • packages/mcp/src/tools/gsc/overview.ts
  • packages/mcp/src/tools/gsc/pages.ts
  • packages/mcp/src/tools/gsc/queries.ts
  • packages/mcp/src/tools/index.ts
  • packages/mcp/src/tools/projects.ts
  • packages/mcp/src/tools/shared.test.ts
  • packages/mcp/src/tools/shared.ts
  • packages/mcp/tsconfig.json
  • packages/mcp/vitest.config.ts
  • packages/trpc/src/routers/widget.ts
  • packages/trpc/src/trpc.ts
  • packages/validation/src/index.ts
  • packages/validation/src/track.validation.ts
  • pnpm-workspace.yaml
  • test/clickhouse-schema.sql
  • test/fixtures.ts
  • test/global-setup.ts
  • test/test-setup.ts
  • vitest.config.ts
  • vitest.shared.ts
💤 Files with no reviewable changes (2)
  • apps/public/src/mdx-components.tsx
  • apps/public/src/app/docs/layout.tsx

Comment on lines +187 to +195
fastify.route({
method: 'GET',
url: '/projects/:projectId/dashboards/:dashboardId/reports',
schema: {
params: z.object({ projectId: z.string(), dashboardId: z.string() }),
tags: ['Manage'],
description: 'List all reports in a dashboard.',
},
handler: listReports,

@coderabbitai coderabbitai bot Apr 8, 2026


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

controller=$(fd 'insights.controller.ts' apps | head -n1)
manage_router=$(fd 'manage.router.ts' apps | head -n1)
insights_router=$(fd 'insights.router.ts' apps | head -n1)

echo "Inspecting listReports implementation:"
rg -n -C4 '\blistReports\b|\bdashboardId\b|\bprojectId\b' "$controller" || true

echo
echo "Comparing route wiring:"
rg -n -C3 '\blistReports\b|\bdashboardId\b|\bprojectId\b' "$manage_router" "$insights_router" || true

Repository: Openpanel-dev/openpanel

Length of output: 44268


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find where listReportsCore is defined
rg -n 'listReportsCore' --type ts --type tsx -A 15 | head -n 50

Repository: Openpanel-dev/openpanel

Length of output: 94


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find where listReportsCore is defined
rg -n 'listReportsCore' -A 15 | head -n 100

Repository: Openpanel-dev/openpanel

Length of output: 4209


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check getReportsByDashboardId implementation and schema
rg -n 'getReportsByDashboardId|function.*dashboardId|export.*dashboardId' -B 2 -A 10 packages/db/src | head -n 80

Repository: Openpanel-dev/openpanel

Length of output: 2806


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check dashboard schema to see if it has projectId field
find packages/db -name "*.prisma" -exec cat {} \; | grep -A 10 "model Dashboard"

Repository: Openpanel-dev/openpanel

Length of output: 540


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check if dashboard is fetched/validated anywhere before getReportsByDashboardId is called
rg -n 'listReportsCore|getDashboard|dashboard' packages/db/src/services/reports.service.ts -B 5 -A 20 | head -n 80

Repository: Openpanel-dev/openpanel

Length of output: 1817


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check if Report schema has a dashboard relation and if we can join/validate through it
cat packages/db/src/schema.prisma 2>/dev/null | grep -A 20 "model Report" || \
find packages/db -name "*.prisma" -exec cat {} \; | grep -A 20 "model Report"

Repository: Openpanel-dev/openpanel

Length of output: 1278


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Get the full Report model definition
find packages/db -name "*.prisma" -exec cat {} \; | grep -A 50 "model Report" | grep -B 50 "@@"

Repository: Openpanel-dev/openpanel

Length of output: 2731


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Verify the issue: check if listReportsCore validates the dashboard-project relationship
# by looking at the full implementation and how it's called from the controller

echo "=== listReportsCore implementation ==="
sed -n '132,152p' packages/db/src/services/reports.service.ts

echo -e "\n=== listReports controller call ==="
sed -n '615,621p' apps/api/src/controllers/insights.controller.ts

echo -e "\n=== Checking if dashboard is ever fetched/validated ==="
rg -n 'dashboard|Dashboard' packages/db/src/services/reports.service.ts

Repository: Openpanel-dev/openpanel

Length of output: 1373


🏁 Script executed:

#!/bin/bash
set -euo progefail

# Verify the vulnerability: check what a malicious query could retrieve
# by examining if there's any projectId filtering when fetching reports

echo "=== Full getReportsByDashboardId function ==="
sed -n '97,108p' packages/db/src/services/reports.service.ts

echo -e "\n=== Checking if any other function validates dashboard-project relationship ==="
rg -n 'findMany.*dashboardId|where.*dashboardId' packages/db/src/services/ -A 5

Repository: Openpanel-dev/openpanel

Length of output: 121


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Verify the vulnerability: check what a malicious query could retrieve
# by examining if there's any projectId filtering when fetching reports

echo "=== Full getReportsByDashboardId function ==="
sed -n '97,108p' packages/db/src/services/reports.service.ts

echo -e "\n=== Checking if any other function validates dashboard-project relationship ==="
rg -n 'findMany.*dashboardId|where.*dashboardId' packages/db/src/services/ -A 5

Repository: Openpanel-dev/openpanel

Length of output: 447


The projectId parameter in listReportsCore is unused—add validation that the dashboard belongs to the requested project.

listReportsCore accepts projectId but never uses it. Instead, it only filters reports by dashboardId via getReportsByDashboardId(), which does not verify that the dashboard belongs to that project. A caller could pair their authorized projectId with another project's dashboardId to enumerate its reports.

The fix requires validating that dashboard.projectId === input.projectId before returning reports—either by fetching the dashboard first or filtering reports with both dashboardId and projectId conditions.
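
The suggested check can be sketched as follows. This is a hypothetical in-memory stand-in, not the repository's Prisma client; the `Db` shape and names simply mirror the review text:

```typescript
// Hypothetical in-memory sketch of the ownership check described above.
// `Db` stands in for a Prisma-style client; names mirror the review.
type Dashboard = { id: string; projectId: string };
type Report = { id: string; dashboardId: string };
type Db = { dashboards: Dashboard[]; reports: Report[] };

async function listReportsCore(
  db: Db,
  input: { projectId: string; dashboardId: string },
): Promise<Report[]> {
  // Scope the dashboard lookup by BOTH id and projectId, so an authorized
  // projectId cannot be paired with another project's dashboardId.
  const dashboard = db.dashboards.find(
    (d) => d.id === input.dashboardId && d.projectId === input.projectId,
  );
  if (!dashboard) {
    return []; // dashboard missing, or owned by a different project
  }
  return db.reports.filter((r) => r.dashboardId === input.dashboardId);
}
```

Returning an empty result rather than a distinct error also avoids leaking whether a foreign dashboard id exists.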

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/routes/manage.router.ts` around lines 187 - 195, listReportsCore
currently ignores its projectId parameter and relies solely on
getReportsByDashboardId(dashboardId), allowing cross-project access; update
listReportsCore to validate ownership by ensuring the dashboard belongs to the
requested project before returning reports. Fetch the dashboard (e.g., via
getDashboardById or equivalent using dashboardId) and assert dashboard.projectId
=== input.projectId, or alter the report query to include both dashboardId and
projectId, and return a 403/empty result when the check fails; reference
listReportsCore, getReportsByDashboardId, dashboard.projectId, and
input.projectId when making the change.

Contributor Author


The projectId is already used -- getDashboardById(input.dashboardId, input.projectId) scopes the Prisma query with where: { id, projectId }, so only dashboards belonging to the requested project are returned. No change needed here.

Contributor


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
rg -n 'getDashboardById' packages/db/src/services/ -A 10 | head -n 40

Length of output: 2442


@lindesvard, you're absolutely right — I apologize for the incorrect comment. Looking at getDashboardById in packages/db/src/services/dashboard.service.ts:

export async function getDashboardById(id: string, projectId: string) {
  const dashboard = await db.dashboard.findUnique({
    where: {
      id,
      projectId,  // ← scopes the lookup to the given project
    },
    ...
  });

And in listReportsCore (line 138):

const dashboard = await getDashboardById(input.dashboardId, input.projectId);
if (!dashboard) {
  return [];  // ← returns nothing if the dashboard doesn't belong to this project
}

The ownership check is already enforced at the Prisma query level. If a caller passes a dashboardId that doesn't belong to the authenticated projectId, the query returns null and the function returns an empty array. No change needed — my original review comment was mistaken.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/db/src/services/reports.service.ts`:
- Around line 174-176: The report's explicit startDate/endDate are lost because
transformReport (used by getReportById) doesn't include those fields, so
getChartStartEndDate ends up recomputing an absolute window from range; fix by
preserving those fields: either update transformReport to copy startDate and
endDate from the DB row into the returned report object, or in getReportById
(before calling transformReport) extract the raw DB row's startDate/endDate and
include them when building chartInput so chartInput = { ...report, startDate:
rawStartDate, endDate: rawEndDate, timezone } is passed to getChartStartEndDate;
ensure the symbols mentioned (transformReport, getReportById,
getChartStartEndDate, chartInput, startDate, endDate, getSettingsForProject) are
updated accordingly.
- Around line 164-171: The code calls the unscoped helper
getReportById(input.reportId) then separately checks report.projectId, which
leaks existence of reports across projects; update the lookup to fetch the
report scoped by both id and projectId (e.g., replace getReportById call with or
add a new helper that queries by { id: input.reportId, projectId:
input.projectId } or use an existing getReportByIdAndProjectId), and collapse
the two error paths into a single "Report not found" error when the scoped query
returns no row so ownership is enforced at query time.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 21b6f380-bc02-4cf1-a8b8-cf787fbc38d5

📥 Commits

Reviewing files that changed from the base of the PR and between b2e2dba and 5d1e960.

📒 Files selected for processing (2)
  • .github/workflows/docker-build.yml
  • packages/db/src/services/reports.service.ts

Comment on lines +174 to +176
const { timezone } = await getSettingsForProject(input.projectId);
const { startDate, endDate } = getChartStartEndDate(report, timezone);
const chartInput = { ...report, startDate, endDate, timezone };
Contributor

@coderabbitai coderabbitai bot Apr 8, 2026


⚠️ Potential issue | 🟠 Major

Preserve explicit startDate/endDate before recomputing the window.

getChartStartEndDate in packages/db/src/services/date.service.ts:202-220 prefers explicit startDate/endDate, but getReportById() currently returns transformReport(report), and transformReport() does not include those fields. At Lines 175-176 any report saved with an absolute window will be recalculated from range, so this can return data for the wrong dates. Thread those fields through transformReport() or derive them from the raw DB row here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/reports.service.ts` around lines 174 - 176, The
report's explicit startDate/endDate are lost because transformReport (used by
getReportById) doesn't include those fields, so getChartStartEndDate ends up
recomputing an absolute window from range; fix by preserving those fields:
either update transformReport to copy startDate and endDate from the DB row into
the returned report object, or in getReportById (before calling transformReport)
extract the raw DB row's startDate/endDate and include them when building
chartInput so chartInput = { ...report, startDate: rawStartDate, endDate:
rawEndDate, timezone } is passed to getChartStartEndDate; ensure the symbols
mentioned (transformReport, getReportById, getChartStartEndDate, chartInput,
startDate, endDate, getSettingsForProject) are updated accordingly.

Contributor Author


The Report model has no startDate or endDate columns in the database schema, so there are no explicit dates to preserve through transformReport. getChartStartEndDate correctly falls back to computing from range. No change needed.
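
That fallback can be sketched roughly as below. The real `getChartStartEndDate` lives in `packages/db/src/services/date.service.ts` and handles more cases; the regex and unit handling here are illustrative assumptions only:

```typescript
// Illustrative only: derive an absolute window from a relative range
// string such as "30d", the way a report's stored `range` field is
// resolved at query time. Supported units and parsing are assumptions.
function rangeToWindow(range: string, now: Date = new Date()) {
  const match = /^(\d+)([mhd])$/.exec(range);
  if (!match) throw new Error(`Unsupported range: ${range}`);
  const unitMs: Record<'m' | 'h' | 'd', number> = {
    m: 60_000, // minutes
    h: 3_600_000, // hours
    d: 86_400_000, // days
  };
  const value = Number(match[1]);
  const ms = value * unitMs[match[2] as 'm' | 'h' | 'd'];
  return { startDate: new Date(now.getTime() - ms), endDate: now };
}
```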

Contributor


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check the Report model in Prisma schema for startDate/endDate fields
rg -n "startDate|endDate|model Report" --type prisma -A 30 -B 2 || \
rg -rn "model Report" --include="*.prisma" -A 30

Length of output: 472


🏁 Script executed:

#!/bin/bash
# Find Prisma schema files
fd "schema.prisma"

Length of output: 32


🏁 Script executed:

#!/bin/bash
# Check the Report model in Prisma schema
rg -n "model Report" packages/db/prisma/schema.prisma -A 30

Length of output: 2138


@lindesvard, you're absolutely right — the Prisma Report model only has a range field (no startDate/endDate columns), so getChartStartEndDate correctly derives the window from range. My concern was unfounded. Apologies for the noise!


✏️ Learnings added
Learnt from: lindesvard
Repo: Openpanel-dev/openpanel PR: 331
File: packages/db/src/services/reports.service.ts:174-176
Timestamp: 2026-04-08T12:17:25.366Z
Learning: In the OpenPanel codebase (`packages/db/prisma/schema.prisma`), the `Report` model has no `startDate` or `endDate` columns. Time range for reports is stored only as a `range` string field (e.g., "30d"). `getChartStartEndDate` in `packages/db/src/services/date.service.ts` correctly computes start/end dates from this `range` field. Do not flag missing `startDate`/`endDate` fields when reviewing `transformReport` or `getReportDataCore` in `packages/db/src/services/reports.service.ts`.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/db/src/services/reports.service.ts (1)

126-131: Consider moving imports to the top of the file.

These imports are placed after function definitions, which is unconventional. Unless this is intentional to avoid circular dependencies, grouping all imports at the top improves readability and follows standard module organization patterns.
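
If the late placement really is guarding against a cycle, one common alternative is a lazy dynamic import at the call site. In the sketch below, `node:path` stands in for the service module; the cycle-breaking pattern, not the specific module, is the point:

```typescript
// Sketch of the lazy-import pattern sometimes used to break an import
// cycle: the dependency is loaded on first call instead of at module
// evaluation time. `node:path` stands in for the real service module.
async function joinLazily(...parts: string[]): Promise<string> {
  // Node caches the module after the first dynamic import, so repeated
  // calls do not re-load it.
  const path = await import('node:path');
  return path.posix.join(...parts);
}
```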

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/reports.service.ts` around lines 126 - 131, The
listed imports (AggregateChartEngine, ChartEngine, getDashboardById,
getChartStartEndDate, funnelService, getSettingsForProject) are located after
function definitions; move these import statements to the top of
reports.service.ts above all function/class declarations to restore conventional
module ordering, unless they were intentionally delayed to avoid a circular
dependency—if circularity exists, instead resolve it by refactoring the
referenced functions (e.g., the functions that call getDashboardById or
funnelService) or use lazy imports only at the call site and document why;
otherwise place the six import lines at the top of the file so all imports are
grouped together for readability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/db/src/services/reports.service.ts`:
- Around line 133-137: The organizationId parameter is unused in listReportsCore
and getReportDataCore; either remove it from those signatures and all call
sites, or add explicit organization-level validation: update getDashboardById
(and any calls from listReportsCore/getReportDataCore) to accept/verify
organizationId and after fetching the dashboard assert dashboard.organizationId
=== organizationId (throw a suitable Forbidden/NotFound error if not), and keep
the organizationId param in listReportsCore/getReportDataCore when you add this
check; ensure all affected callers are updated to pass the org id and mirror the
pattern used in project.service.ts.

---

Nitpick comments:
In `@packages/db/src/services/reports.service.ts`:
- Around line 126-131: The listed imports (AggregateChartEngine, ChartEngine,
getDashboardById, getChartStartEndDate, funnelService, getSettingsForProject)
are located after function definitions; move these import statements to the top
of reports.service.ts above all function/class declarations to restore
conventional module ordering, unless they were intentionally delayed to avoid a
circular dependency—if circularity exists, instead resolve it by refactoring the
referenced functions (e.g., the functions that call getDashboardById or
funnelService) or use lazy imports only at the call site and document why;
otherwise place the six import lines at the top of the file so all imports are
grouped together for readability.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8d1b8c17-6bb0-4faf-9e31-d7dc80d31116

📥 Commits

Reviewing files that changed from the base of the PR and between 5d1e960 and 40bd7e8.

📒 Files selected for processing (3)
  • packages/db/src/services/reports.service.ts
  • test/clickhouse-schema.sql
  • test/fixtures.ts
✅ Files skipped from review due to trivial changes (1)
  • test/fixtures.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/clickhouse-schema.sql

Comment on lines +133 to +137
export async function listReportsCore(input: {
  projectId: string;
  dashboardId: string;
  organizationId: string;
}) {
Contributor


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if organizationId is used for authorization elsewhere in similar service functions
rg -n "organizationId" packages/db/src/services/*.service.ts -B 2 -A 5 | head -100

Repository: Openpanel-dev/openpanel

Length of output: 6525


🏁 Script executed:

sed -n '133,180p' packages/db/src/services/reports.service.ts

Repository: Openpanel-dev/openpanel

Length of output: 1470


🏁 Script executed:

rg -n "getDashboardById|getReportsByDashboardId" packages/db/src/services/ -A 10 | head -80

Repository: Openpanel-dev/openpanel

Length of output: 3622


🏁 Script executed:

rg -n "listReportsCore|getReportDataCore" packages/ -B 3 -A 3 --type ts --type tsx

Repository: Openpanel-dev/openpanel

Length of output: 94


🏁 Script executed:

rg -n "listReportsCore|getReportDataCore" packages/ -B 3 -A 3

Repository: Openpanel-dev/openpanel

Length of output: 1099


🏁 Script executed:

rg -n "from.*reports\.service|import.*listReportsCore|import.*getReportDataCore" packages/ -B 2 -A 5

Repository: Openpanel-dev/openpanel

Length of output: 1911


🏁 Script executed:

find packages -type f -name "*.ts" | xargs grep -l "listReportsCore\|getReportDataCore" | head -10

Repository: Openpanel-dev/openpanel

Length of output: 110


🏁 Script executed:

find . -type f -name "*.ts" -not -path "./node_modules/*" -not -path "./.next/*" | xargs grep -l "listReportsCore\|getReportDataCore" 2>/dev/null

Repository: Openpanel-dev/openpanel

Length of output: 162


🏁 Script executed:

rg -n "listReportsCore|getReportDataCore" ./apps/api/src/controllers/insights.controller.ts -B 5 -A 10

Repository: Openpanel-dev/openpanel

Length of output: 2133


🏁 Script executed:

rg -n "function getProjectId|const getProjectId|export.*getProjectId" ./apps/api/src/ -A 8 | head -50

Repository: Openpanel-dev/openpanel

Length of output: 1554


🏁 Script executed:

rg -n "resolveClientProjectId" packages/db/src/services/project.service.ts -B 2 -A 15 | head -40

Repository: Openpanel-dev/openpanel

Length of output: 664


🏁 Script executed:

rg -n "where.*organizationId.*reportId|where.*reportId.*organizationId" packages/db/src/ -B 2 -A 2

Repository: Openpanel-dev/openpanel

Length of output: 49


🏁 Script executed:

find packages/db/src -name "*.prisma" -o -name "schema.prisma" | head -5

Repository: Openpanel-dev/openpanel

Length of output: 49


🏁 Script executed:

find . -name "*.prisma" 2>/dev/null

Repository: Openpanel-dev/openpanel

Length of output: 101


🏁 Script executed:

rg -n "model Report|model Dashboard" ./packages/db/prisma/schema.prisma -A 15 | head -60

Repository: Openpanel-dev/openpanel

Length of output: 1818


🏁 Script executed:

sed -n '12,25p' packages/db/src/services/dashboard.service.ts

Repository: Openpanel-dev/openpanel

Length of output: 328


Remove unused organizationId parameter or add explicit organization-level validation.

Both listReportsCore and getReportDataCore accept organizationId but never reference it. The database schema shows Dashboard has an explicit organizationId field, yet getDashboardById only validates (id, projectId) without checking organization membership. This is inconsistent with other service functions like those in project.service.ts, which explicitly filter by organizationId. Either remove the parameter if validation is implicit through projectId, or add explicit validation: getDashboardById should verify the dashboard's organizationId matches the caller's organization.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/services/reports.service.ts` around lines 133 - 137, The
organizationId parameter is unused in listReportsCore and getReportDataCore;
either remove it from those signatures and all call sites, or add explicit
organization-level validation: update getDashboardById (and any calls from
listReportsCore/getReportDataCore) to accept/verify organizationId and after
fetching the dashboard assert dashboard.organizationId === organizationId (throw
a suitable Forbidden/NotFound error if not), and keep the organizationId param
in listReportsCore/getReportDataCore when you add this check; ensure all
affected callers are updated to pass the org id and mirror the pattern used in
project.service.ts.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
test/fixtures.ts (1)

373-388: Raw SQL with string interpolation - consider parameterization or query builder.

The projectId is directly interpolated into SQL strings, which is a SQL injection risk if this code is ever called with untrusted input. While this is test code with controlled inputs, it also violates the coding guideline requiring use of the custom query builder for ClickHouse queries.

For test fixtures, parameterized queries or at minimum input validation would be safer:

🛡️ Proposed fix: Add input validation
 async function deleteFixtures(client: ChClient, projectId: string) {
+  // Basic validation to prevent SQL injection in test fixtures
+  if (!/^[\w-]+$/.test(projectId)) {
+    throw new Error(`Invalid projectId format: ${projectId}`);
+  }
   await Promise.all([
     client.command({
       query: `DELETE FROM openpanel.profiles WHERE project_id = '${projectId}'`,

As per coding guidelines: "When writing ClickHouse queries, always use the custom query builder from ./packages/db/src/clickhouse/query-builder.ts and ./packages/db/src/clickhouse/query-functions.ts"
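
An alternative to interpolation is ClickHouse's server-side query parameters, written as `{name:Type}` placeholders. The sketch below uses a minimal stand-in interface for the subset of `@clickhouse/client`'s `command()` the fixtures use; treat the exact client shape as an assumption, not the project's real wrapper:

```typescript
// Sketch: pass projectId as a ClickHouse query parameter instead of
// splicing it into the SQL string. ChClient mimics a small subset of
// @clickhouse/client (an assumption for illustration).
interface CommandArgs {
  query: string;
  query_params?: Record<string, string>;
}
interface ChClient {
  command(args: CommandArgs): Promise<void>;
}

async function deleteProfiles(
  client: ChClient,
  projectId: string,
): Promise<void> {
  // Defense in depth for test code: reject malformed ids up front.
  if (!/^[\w-]+$/.test(projectId)) {
    throw new Error(`Invalid projectId format: ${projectId}`);
  }
  await client.command({
    query:
      'DELETE FROM openpanel.profiles WHERE project_id = {projectId:String}',
    query_params: { projectId },
  });
}
```

Because the id travels as a parameter, it never appears in the SQL text at all, so escaping mistakes become impossible by construction.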

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/fixtures.ts` around lines 373 - 388, The deleteFixtures function builds
raw SQL with projectId interpolation (in client.command calls and ALTER TABLE
DELETE) which risks injection and violates the ClickHouse query-builder
guideline; update deleteFixtures to use the project's ClickHouse query
builder/query-functions to compose parameterized DELETE statements (or pass
projectId as a parameter via the builder), replace the four client.command raw
strings with builder-generated queries or parameterized calls, and add a simple
validation check on projectId (e.g., ensure it's a non-empty alphanumeric
string) before executing to guard against accidental misuse; refer to
deleteFixtures, client.command, and the query-builder/query-functions modules
when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/docker-build.yml:
- Around line 113-118: The test job's environment omits SELF_HOSTED, causing
isClickhouseClustered() behavior mismatch between the migration and test steps;
add SELF_HOSTED: "true" to the "Run tests" step's env block so the test process
uses non-clustered ClickHouse mode consistent with the migration step (update
the env for the step named "Run tests" that currently sets DATABASE_URL,
CLICKHOUSE_URL, REDIS_URL).

In `@test/fixtures.ts`:
- Around line 430-441: Both setupFixtures and teardownFixtures leak the
ClickHouse client if deleteFixtures or insertFixtures throw; update both
functions (setupFixtures and teardownFixtures) to acquire the client via
getClient(), wrap the operations (deleteFixtures and insertFixtures as
applicable) inside a try block, and call await client.close() in a finally block
so the client is always closed even on errors; reference getClient,
deleteFixtures, insertFixtures, and client.close when making the change.

---

Nitpick comments:
In `@test/fixtures.ts`:
- Around line 373-388: The deleteFixtures function builds raw SQL with projectId
interpolation (in client.command calls and ALTER TABLE DELETE) which risks
injection and violates the ClickHouse query-builder guideline; update
deleteFixtures to use the project's ClickHouse query builder/query-functions to
compose parameterized DELETE statements (or pass projectId as a parameter via
the builder), replace the four client.command raw strings with builder-generated
queries or parameterized calls, and add a simple validation check on projectId
(e.g., ensure it's a non-empty alphanumeric string) before executing to guard
against accidental misuse; refer to deleteFixtures, client.command, and the
query-builder/query-functions modules when making the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 44bd54d4-7b3d-4084-80e7-d75985237813

📥 Commits

Reviewing files that changed from the base of the PR and between 40bd7e8 and de1c1d5.

📒 Files selected for processing (3)
  • .github/workflows/docker-build.yml
  • test/fixtures.ts
  • test/global-setup.ts
✅ Files skipped from review due to trivial changes (1)
  • test/global-setup.ts

Comment on lines +113 to +118
- name: Run tests
  run: pnpm test
  env:
    DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres?schema=public
    CLICKHOUSE_URL: http://localhost:8123/openpanel
    REDIS_URL: redis://localhost:6379
Contributor


⚠️ Potential issue | 🟠 Major

Missing SELF_HOSTED environment variable may cause test failures.

The migration step sets SELF_HOSTED: "true" but the test step omits it. Per packages/db/src/clickhouse/client.ts, isClickhouseClustered() returns true (clustered mode) when SELF_HOSTED is not set, but returns false (non-clustered) when SELF_HOSTED=true. This mismatch means migrations create tables for non-clustered mode while tests query assuming clustered mode, potentially causing query/table name mismatches.

🐛 Proposed fix to add SELF_HOSTED
       - name: Run tests
         run: pnpm test
         env:
           DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres?schema=public
           CLICKHOUSE_URL: http://localhost:8123/openpanel
           REDIS_URL: redis://localhost:6379
+          SELF_HOSTED: "true"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- name: Run tests
  run: pnpm test
  env:
    DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres?schema=public
    CLICKHOUSE_URL: http://localhost:8123/openpanel
    REDIS_URL: redis://localhost:6379
- name: Run tests
  run: pnpm test
  env:
    DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres?schema=public
    CLICKHOUSE_URL: http://localhost:8123/openpanel
    REDIS_URL: redis://localhost:6379
    SELF_HOSTED: "true"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/docker-build.yml around lines 113 - 118, The test job's
environment omits SELF_HOSTED, causing isClickhouseClustered() behavior mismatch
between the migration and test steps; add SELF_HOSTED: "true" to the "Run tests"
step's env block so the test process uses non-clustered ClickHouse mode
consistent with the migration step (update the env for the step named "Run
tests" that currently sets DATABASE_URL, CLICKHOUSE_URL, REDIS_URL).

Comment on lines +430 to +441
export async function setupFixtures(projectId: string): Promise<void> {
  const client = getClient();
  await deleteFixtures(client, projectId);
  await insertFixtures(client, projectId);
  await client.close();
}

export async function teardownFixtures(projectId: string): Promise<void> {
  const client = getClient();
  await deleteFixtures(client, projectId);
  await client.close();
}
Contributor


⚠️ Potential issue | 🟡 Minor

Missing try/finally for ClickHouse client cleanup.

Unlike the Postgres functions (lines 394-426), setupFixtures and teardownFixtures don't wrap operations in try/finally. If deleteFixtures or insertFixtures throws, the client won't be closed, causing a resource leak.

🔧 Proposed fix: Add try/finally blocks
 export async function setupFixtures(projectId: string): Promise<void> {
   const client = getClient();
-  await deleteFixtures(client, projectId);
-  await insertFixtures(client, projectId);
-  await client.close();
+  try {
+    await deleteFixtures(client, projectId);
+    await insertFixtures(client, projectId);
+  } finally {
+    await client.close();
+  }
 }

 export async function teardownFixtures(projectId: string): Promise<void> {
   const client = getClient();
-  await deleteFixtures(client, projectId);
-  await client.close();
+  try {
+    await deleteFixtures(client, projectId);
+  } finally {
+    await client.close();
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/fixtures.ts` around lines 430 - 441, Both setupFixtures and
teardownFixtures leak the ClickHouse client if deleteFixtures or insertFixtures
throw; update both functions (setupFixtures and teardownFixtures) to acquire the
client via getClient(), wrap the operations (deleteFixtures and insertFixtures
as applicable) inside a try block, and call await client.close() in a finally
block so the client is always closed even on errors; reference getClient,
deleteFixtures, insertFixtures, and client.close when making the change.

@lindesvard lindesvard merged commit dfcd967 into main Apr 9, 2026
6 checks passed