Conversation
Walkthrough
Adds InfluxDB-backed telemetry: a new telemetry module, middleware, types, and Influx client; wires telemetry into the bot, commands, and multiple middlewares; updates env, the dependency list, and many middleware/class APIs to propagate telemetry context and timing.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Bot as Bot
  participant TelemetryMW as telemetry() Middleware
  participant Handler as Downstream Middleware / Handler
  participant Influx as InfluxClient
  Bot->>TelemetryMW: incoming update
  TelemetryMW->>TelemetryMW: create Point (tg_update) and ctx.stackTimes
  TelemetryMW->>Handler: call next()
  Handler->>Handler: enrich ctx.point (tags/fields, durations, errors)
  Handler-->>TelemetryMW: return
  TelemetryMW->>TelemetryMW: compute duration and add to ctx.point
  TelemetryMW->>Influx: writePoint(ctx.point)
  Influx->>Influx: queue and flush (throttled)
  Influx-->>TelemetryMW: ack / error
  TelemetryMW-->>Bot: update processed
```
```mermaid
sequenceDiagram
  participant User as User
  participant ManagedCmd as ManagedCommands
  participant CmdHook as conversationBegin/End
  participant Handler as Command Handler
  participant Influx as InfluxClient
  User->>ManagedCmd: /command
  ManagedCmd->>CmdHook: conversationBegin (attach ctx.point & start)
  ManagedCmd->>Handler: enter conversation
  Handler->>Handler: execute, add tags/fields
  alt success
    ManagedCmd->>CmdHook: conversationEnd
    CmdHook->>Influx: writePoint(with duration)
  else error
    ManagedCmd->>CmdHook: handlerError
    CmdHook->>Influx: writePoint(error tagged)
  end
  Influx-->>ManagedCmd: queued
  ManagedCmd-->>User: reply / error message
```
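The first diagram's flow can be sketched as a single grammY-style middleware. The `SimplePoint` shape, `TelemetryContext`, and `writePoint` callback below are simplified stand-ins I'm assuming for illustration, not the real `@influxdata/influxdb-client` or grammY types:

```typescript
// Hedged sketch of the telemetry() flow from the diagram above.
// SimplePoint / TelemetryContext are illustrative stand-ins only.
interface SimplePoint {
  measurement: string
  tags: Record<string, string>
  fields: Record<string, number>
}

interface TelemetryContext {
  point: SimplePoint
  stackTimes: Record<string, number>
}

function telemetry(writePoint: (p: SimplePoint) => void) {
  return async (ctx: TelemetryContext, next: () => Promise<void>): Promise<void> => {
    const start = Date.now()
    // create the Point and the timing map that downstream middlewares enrich
    ctx.point = { measurement: "tg_update", tags: {}, fields: {} }
    ctx.stackTimes = { telemetry: start }
    try {
      await next()
    } finally {
      // compute total duration, then hand the point to the (throttled) writer
      ctx.point.fields["duration_ms"] = Date.now() - start
      writePoint(ctx.point)
    }
  }
}
```

The `finally` block ensures a point is written even when a downstream handler throws, which matches the "ack / error" leg of the diagram.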
🚥 Pre-merge checks: ✅ 1 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/middlewares/auto-moderation-stack/index.ts (1)
232-259: ⚠️ Potential issue | 🔴 Critical
Bug: ctx.message is undefined for edited_message updates.
The filter [":text", ":caption"] matches both message and edited_message updates. However, ctx.message only exists for message updates. For edited_message updates, ctx.message will be undefined, causing a runtime error at lines 233 and 249. Use ctx.msg (the narrowed message object available on both update types) instead, consistent with other handlers like harmfulContentHandler at line 177.
🐛 Proposed fix

```diff
 private async nonLatinHandler(ctx: Filter<ModerationContext<C>, ":text" | ":caption">) {
-  const text = ctx.message.caption ?? ctx.message.text
+  const text = ctx.msg.caption ?? ctx.msg.text
   const match = text.match(NON_LATIN.REGEX)
   // ...
   if (match && (match.length - NON_LATIN.LENGTH_THR) / text.length > NON_LATIN.PERCENTAGE_THR) {
     // just delete the message and mute the user for 10 minutes
     const res = await Moderation.mute(
       ctx.from,
       ctx.chat,
       ctx.me,
       duration.zod.parse(NON_LATIN.MUTE_DURATION),
-      [ctx.message],
+      [ctx.msg],
       "Message contains non-latin characters"
     )
     if (res.isErr()) {
       logger.error(
-        { from: ctx.from, chat: ctx.chat, messageId: ctx.message.message_id },
+        { from: ctx.from, chat: ctx.chat, messageId: ctx.msg.message_id },
         "AUTOMOD: nonLatinHandler - Cannot mute"
       )
     }
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/middlewares/auto-moderation-stack/index.ts` around lines 232 - 259, The nonLatinHandler currently uses ctx.message which is undefined for edited_message updates; update the handler to use the narrowed message object ctx.msg everywhere (for reading text/caption, computing match with NON_LATIN.REGEX, passing the message to Moderation.mute, and logging message_id) so it works for both message and edited_message updates; keep the same logic and references to NON_LATIN.LENGTH_THR, NON_LATIN.PERCENTAGE_THR, NON_LATIN.MUTE_DURATION (parsed with duration.zod.parse) and preserve the existing logger.error call and its context fields but read messageId from ctx.msg.message_id.
🧹 Nitpick comments (2)
src/bot.ts (1)
139-144: Minor: Boolean value in Promise.all array.
The expression runner.isRunning() && runner.stop() evaluates to false when the runner isn't running, placing a boolean in the Promise array. This works (non-Promise values are auto-resolved), but a more explicit pattern would be:

```diff
 await Promise.all([
   MessageUserStorage.getInstance().sync(),
   redis.quit(),
-  runner.isRunning() && runner.stop(),
+  runner.isRunning() ? runner.stop() : Promise.resolve(),
   modules.stop(),
 ])
```

Not blocking since the current code functions correctly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bot.ts` around lines 139 - 144, The Promise.all array includes a boolean when the runner isn't running because of the expression runner.isRunning() && runner.stop(); change this to explicitly provide a Promise instead — e.g., use a conditional expression so that if runner.isRunning() is true you call runner.stop() otherwise return Promise.resolve() — leave the other entries (MessageUserStorage.getInstance().sync(), redis.quit(), modules.stop()) unchanged to keep the same shutdown ordering.
src/utils/throttle.ts (1)
13-33: Async functions are fire-and-forget; concurrent executions possible.
The signature change to (...args: A) => unknown allows async functions, but returned Promises aren't awaited. For the InfluxDB flush use case, this means:
- If a flush is still in progress when the next throttled call fires, both will run concurrently.
- Unhandled rejections from the async function won't propagate (though the current usage has internal try/catch).
This is likely acceptable for the flush use case since InfluxDB client should handle concurrent writes gracefully, but worth noting that throttle doesn't serialize async work.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/throttle.ts` around lines 13 - 33, Throttle currently calls func(...args) without awaiting returned Promises, so async functions can run concurrently and unhandled rejections may be lost; update throttle (function throttle and its inner handler) to detect async results and serialize calls by storing the pending Promise (e.g., lastPromise) and awaiting it inside the handler before scheduling the next run, or chain calls so each invocation awaits the previous one; also adjust the function signature/type to reflect Promise<void> or Promise<unknown> return when func returns a Promise, and ensure errors are caught and propagated or logged to avoid unhandled rejections.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.env.example:
- Line 9: The file ends without a trailing newline causing the EndingBlankLine
lint error; open the .env.example content that contains the commented
INFLUXDB_URL line and ensure the file ends with a single newline character
(i.e., add one blank line at EOF) so dotenv linting passes.
In `@src/env.ts`:
- Line 16: The INFLUXDB_URL schema currently allows any string; update the
schema entry for INFLUXDB_URL (the z.string().default("http://localhost:8086")
definition) to validate URL format by using
z.string().url().default("http://localhost:8086") so malformed URLs are rejected
at config parsing; ensure this change is applied to the INFLUXDB_URL constant
inside src/env.ts (and that z from zod is available/imported where the schema is
defined).
In `@src/lib/managed-commands/index.ts`:
- Around line 505-524: The deletion logic is inverted when a command fails
requirements: after calling ManagedCommands.parseCommand and checking
requirements.isErr(), the code currently deletes the original message when
isPrivate is true but the comment says only group/supergroup messages should be
deleted. Change the condition around ctx.deleteMessage() so it deletes when NOT
isPrivate (i.e., in groups/supergroups) and retains messages in private chats;
keep the rest of the error reply flow (ctx.reply and ephemeral) unchanged
(references: ManagedCommands.parseCommand, requirements.isErr(), isPrivate,
ctx.deleteMessage(), ctx.reply, ephemeral).
In `@src/middlewares/check-username.ts`:
- Line 22: The ephemeral fire-and-forget call in check-username.ts currently
passes the raw potentially-rejecting promise ctx.reply(msg) into ephemeral (line
with void ephemeral(ctx.reply(msg), 10_000)), which can produce unhandled
rejections; modify that call so the ctx.reply promise is handled before passing
it in (e.g., append a .catch(...) to swallow/log the error) or otherwise ensure
the returned promise is awaited/handled; update the invocation of ephemeral to
use the handled promise so rejections from ctx.reply are consistently caught
like the other fast-fail operations.
In `@src/utils/deferred-middleware.ts`:
- Line 10: The current returned function uses Promise.all([middleware(context),
next()]) which waits for and propagates failures from middleware(context);
instead, invoke middleware(context) without awaiting it so it runs in background
and cannot delay or fail the main chain: start middleware(context) and attach a
.catch to swallow or log errors, then immediately return next() (i.e., ensure
the returned function calls middleware(context).catch(...) and returns next() so
only next() controls the middleware chain).
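A minimal sketch of the shape this prompt describes — run the wrapped middleware in the background and let only next() drive the chain. The types are simplified stand-ins for grammY's, and this is illustrative rather than the project's src/utils/deferred-middleware.ts:

```typescript
// Simplified stand-in types for the grammY middleware signature.
type SimpleMiddleware<C> = (ctx: C) => Promise<void>
type NextFunction = () => Promise<void>

function deferredMiddleware<C>(middleware: SimpleMiddleware<C>) {
  return (ctx: C, next: NextFunction): Promise<void> => {
    // Fire-and-forget: failures are logged, never propagated to the chain.
    middleware(ctx).catch((err) => console.error("deferred middleware failed:", err))
    // Only next() controls the middleware chain.
    return next()
  }
}
```

Because the background Promise gets a .catch immediately, a slow or failing deferred middleware can neither delay nor abort the main update flow.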
---
Outside diff comments:
In `@src/middlewares/auto-moderation-stack/index.ts`:
- Around line 232-259: The nonLatinHandler currently uses ctx.message which is
undefined for edited_message updates; update the handler to use the narrowed
message object ctx.msg everywhere (for reading text/caption, computing match
with NON_LATIN.REGEX, passing the message to Moderation.mute, and logging
message_id) so it works for both message and edited_message updates; keep the
same logic and references to NON_LATIN.LENGTH_THR, NON_LATIN.PERCENTAGE_THR,
NON_LATIN.MUTE_DURATION (parsed with duration.zod.parse) and preserve the
existing logger.error call and its context fields but read messageId from
ctx.msg.message_id.
---
Nitpick comments:
In `@src/bot.ts`:
- Around line 139-144: The Promise.all array includes a boolean when the runner
isn't running because of the expression runner.isRunning() && runner.stop();
change this to explicitly provide a Promise instead — e.g., use a conditional
expression so that if runner.isRunning() is true you call runner.stop()
otherwise return Promise.resolve() — leave the other entries
(MessageUserStorage.getInstance().sync(), redis.quit(), modules.stop())
unchanged to keep the same shutdown ordering.
In `@src/utils/throttle.ts`:
- Around line 13-33: Throttle currently calls func(...args) without awaiting
returned Promises, so async functions can run concurrently and unhandled
rejections may be lost; update throttle (function throttle and its inner
handler) to detect async results and serialize calls by storing the pending
Promise (e.g., lastPromise) and awaiting it inside the handler before scheduling
the next run, or chain calls so each invocation awaits the previous one; also
adjust the function signature/type to reflect Promise<void> or Promise<unknown>
return when func returns a Promise, and ensure errors are caught and propagated
or logged to avoid unhandled rejections.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fad6fee7-b9d7-466d-a619-9ba0ae6097f7
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (27)
- .env.example
- package.json
- src/bot.ts
- src/commands/index.ts
- src/env.ts
- src/lib/managed-commands/command.ts
- src/lib/managed-commands/context.ts
- src/lib/managed-commands/index.ts
- src/middlewares/auto-moderation-stack/ai-moderation.ts
- src/middlewares/auto-moderation-stack/index.ts
- src/middlewares/bot-membership-handler.ts
- src/middlewares/check-username.ts
- src/middlewares/group-specific-actions.ts
- src/middlewares/mention-listener.ts
- src/middlewares/message-user-storage.ts
- src/modules/index.ts
- src/modules/telemetry/index.ts
- src/modules/telemetry/influxdb.ts
- src/modules/telemetry/middleware.ts
- src/modules/telemetry/types.ts
- src/modules/websocket.ts
- src/utils/chat.ts
- src/utils/deferred-middleware.ts
- src/utils/throttle.ts
- src/utils/types.ts
- tests/modules.test.ts
- tests/throttle.test.ts
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/lib/managed-commands/index.ts (1)
519-525: ⚠️ Potential issue | 🔴 Critical
Delete invalid command invocations only in groups.
This branch is still inverted: it deletes in private chats and leaves the invalid command visible in groups, which is the opposite of what the comment describes.
🐛 Proposed fix
```diff
- if (isPrivate) await ctx.deleteMessage()
+ if (!isPrivate) await ctx.deleteMessage().catch(() => {})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/managed-commands/index.ts` around lines 519 - 525, The code currently deletes invalid command messages when isPrivate is true, which is inverted; update the conditional in the requirements.isErr() branch so invalid commands are deleted only in group/supergroup contexts by changing the check from if (isPrivate) await ctx.deleteMessage() to if (!isPrivate) await ctx.deleteMessage(); keep the surrounding comment and the use of requirements.isErr() and ctx.deleteMessage() intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/index.ts`:
- Line 57: The current assignments to context.stackTimes (e.g.,
context.stackTimes = { managedCommands: now }) overwrite the whole object and
drop prior telemetry; instead merge/update the existing map so you preserve
upstream timestamps. Locate where context.stackTimes is assigned (references to
context.stackTimes and the managedCommands/commandStart keys) and change the
logic to update only the specific key (e.g., set
context.stackTimes.managedCommands = now or merge via Object.assign/ spread into
context.stackTimes) ensuring you handle a possibly undefined context.stackTimes
by initializing it before updating; apply the same change for the other
occurrence that sets context.stackTimes.
- Around line 51-56: The Influx point construction using context.point.tag(...)
is incorrectly treating high-cardinality values as tags; change the usages where
you currently call .tag("invoked_by", ...), .tag("invoked_from", ...) to use
.intField(...) (or .field(...) with numeric values) for the user/chat IDs, and
change any error-related .tag(...) to .stringField(...). Locate the point
builder calls that use context.point.tag(...) (e.g., the chain starting with
ManagedCommands.commandID(command) and the similar block referenced later around
the second occurrence) and replace only the invoked_by/invoked_from tags and
error string tags with appropriate .intField and .stringField calls while
leaving low-cardinality tags like "command" and "chat_type" as tags.
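To illustrate why the tag/field distinction matters, here is a toy line-protocol builder — purely illustrative, not the `@influxdata/influxdb-client` Point API. InfluxDB indexes tags and uses them to define series, so per-user IDs as tags would explode series cardinality, while fields are unindexed values:

```typescript
// Toy InfluxDB line-protocol builder (timestamp omitted for brevity).
class ToyPoint {
  private tags: string[] = []
  private fields: string[] = []
  constructor(private measurement: string) {}
  tag(k: string, v: string): this {
    this.tags.push(`${k}=${v}`) // tags are indexed; keep cardinality low
    return this
  }
  intField(k: string, v: number): this {
    this.fields.push(`${k}=${Math.trunc(v)}i`) // fields are unindexed values
    return this
  }
  toLine(): string {
    const tagPart = this.tags.length ? "," + this.tags.join(",") : ""
    return `${this.measurement}${tagPart} ${this.fields.join(",")}`
  }
}

const line = new ToyPoint("tg_command")
  .tag("command", "start")         // low cardinality → fine as an indexed tag
  .tag("chat_type", "supergroup")  // low cardinality → fine as an indexed tag
  .intField("invoked_by", 12345)   // per-user ID → unindexed field
  .intField("invoked_from", -100987)
  .toLine()
// → tg_command,command=start,chat_type=supergroup invoked_by=12345i,invoked_from=-100987i
```

Each distinct tag-set creates a new series in InfluxDB, which is why the review pushes user/chat IDs into fields.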
---
Duplicate comments:
In `@src/lib/managed-commands/index.ts`:
- Around line 519-525: The code currently deletes invalid command messages when
isPrivate is true, which is inverted; update the conditional in the
requirements.isErr() branch so invalid commands are deleted only in
group/supergroup contexts by changing the check from if (isPrivate) await
ctx.deleteMessage() to if (!isPrivate) await ctx.deleteMessage(); keep the
surrounding comment and the use of requirements.isErr() and ctx.deleteMessage()
intact.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3cb0a90f-7a8b-4aa0-b721-eafde4fd3b51
📒 Files selected for processing (9)
- src/commands/index.ts
- src/commands/test/args.ts
- src/commands/test/conversation.ts
- src/commands/test/db.ts
- src/commands/test/format.ts
- src/commands/test/index.ts
- src/commands/test/menu.ts
- src/lib/managed-commands/index.ts
- src/modules/telemetry/influxdb.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/modules/telemetry/influxdb.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/index.ts`:
- Around line 73-79: The log message in conversationEnd contains a stray closing
brace in the template literal; update the logger.debug call inside the
conversationEnd async handler (the one that calls
ManagedCommands.commandID(command) and printCtxFrom(context)) to remove the
extra `}` so the string interpolation is correct, then run a quick lint/test to
ensure the template literal compiles cleanly; other surrounding code
(context.point.intField, conversation.now(), context.stackTimes.managedCommands,
and modules.get("influx").writePoint) should remain unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: baaf2f91-9ae8-4242-9d1f-ca5a326fcf5d
📒 Files selected for processing (3)
- .env.example
- src/commands/index.ts
- src/lib/managed-commands/index.ts
✅ Files skipped from review due to trivial changes (1)
- .env.example
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib/managed-commands/index.ts
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/modules/telemetry/transformer.ts (1)
22-22: Store error_code as a numeric field for better telemetry queries.
The error_code field from the Telegram Bot API response is numeric (typically matching HTTP status codes like 400, 429, 500), but line 22 stores it as a string after calling .toString(). Use intField("error_code", res.error_code) to preserve numeric semantics for range filters, aggregations, and grouping consistency in InfluxDB.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/telemetry/transformer.ts` at line 22, In transformer.ts update the telemetry point construction so error_code is stored as a numeric field: replace the call that uses point.booleanField("success", false).stringField("error_code", res.error_code.toString()) with a numeric field (use intField("error_code", res.error_code)) so res.error_code is recorded as an integer for correct range/aggregation handling; keep the existing success boolean and error context intact while removing the .toString() conversion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/modules/telemetry/transformer.ts`:
- Around line 17-24: The telemetry path currently lets a rejected prev(...) skip
telemetry fields and also allows writePoint errors to bubble up; fix by invoking
prev(method, payload, signal) inside a try/catch where you capture both success
and failure: in the try block set point.booleanField("success", true) and return
the response; in the catch block set point.booleanField("success", false) and
set point.stringField("error_code", <extract from caught error or res if
available>), then rethrow the original error; in a finally block compute and set
point.intField("duration_ms", Date.now() - start) and call
modules.get("influx").writePoint(point) inside its own try/catch so any errors
from writePoint are swallowed or logged but do not affect the original API flow
(do not suppress the rethrown API error).
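The try/catch/finally shape this prompt describes can be sketched as follows. `SketchPoint`, the field helpers, and the `writePoint` parameter are simplified stand-ins for the project's real `Point` and influx module, introduced here only for illustration:

```typescript
// Stand-in point type; the real code uses the influxdb-client Point builder.
type SketchPoint = { fields: Record<string, unknown> }

async function instrumented<T>(
  point: SketchPoint,
  prev: () => Promise<T>,
  writePoint: (p: SketchPoint) => void
): Promise<T> {
  const start = Date.now()
  try {
    const res = await prev()
    point.fields["success"] = true
    return res
  } catch (err) {
    point.fields["success"] = false
    point.fields["error_code"] = err instanceof Error ? err.name : "unknown"
    throw err // the original API error still propagates to the caller
  } finally {
    point.fields["duration_ms"] = Date.now() - start
    try {
      writePoint(point) // best-effort: telemetry failures are swallowed here
    } catch {
      /* never let telemetry break the API flow */
    }
  }
}
```

Note the inner try/catch around writePoint: without it, a throw in `finally` would replace the rethrown API error, which is exactly the failure mode the review flags.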
---
Nitpick comments:
In `@src/modules/telemetry/transformer.ts`:
- Line 22: In transformer.ts update the telemetry point construction so
error_code is stored as a numeric field: replace the call that uses
point.booleanField("success", false).stringField("error_code",
res.error_code.toString()) with a numeric field (use intField("error_code",
res.error_code)) so res.error_code is recorded as an integer for correct
range/aggregation handling; keep the existing success boolean and error context
intact while removing the .toString() conversion.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b5c91f42-ed4f-4fc8-b37a-0564ba779664
📒 Files selected for processing (2)
- src/bot.ts
- src/modules/telemetry/transformer.ts
✅ Files skipped from review due to trivial changes (1)
- src/bot.ts
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/bot.ts`:
- Around line 141-146: The shutdown currently uses Promise.all([...]) which will
fail-fast and abort the rest of shutdown when any of
MessageUserStorage.getInstance().sync(), redis.quit(), runner.isRunning() &&
runner.stop(), or modules.stop() rejects; change this to await
Promise.allSettled(...) (or wrap each call in try/catch and await them
individually) so all shutdown steps run to completion, ensure the conditional
runner.isRunning() call yields a Promise (not boolean) before awaiting, and log
any rejections from the settled results rather than letting one rejection
short-circuit the terminate flow.
- Around line 101-112: The logException function can throw when calling
influx.writePoint (and if modules.get returns undefined), so wrap the telemetry
write and module access in a safe try/catch to ensure logException never
rejects: retrieve tgLogger and influx with defensive checks (optional
chaining/null checks), then put the influx.writePoint call inside its own
try/catch (or wrap the whole body) and swallow/log any errors to a safe fallback
(e.g., console.error) so tgLogger.exception is still awaited and its errors are
also caught; ensure no exception escapes from logException by catching and
handling all errors before returning.
In `@src/modules/telemetry/trpc.ts`:
- Around line 13-37: The current observable only writes metrics on the subscribe
`error` and `complete` handlers so aborted/timeout/cancelled calls (when the
subscription is unsubscribed early) never emit a point; update the code around
the observable/Point("backend_trpc_call")/next(op).subscribe block to always
finalize and write a point on teardown: either use RxJS `finalize` (or add a
subscription finalizer) to compute duration and call
`influxClient.writePoint(point)` with booleanField("success", false) and a
stringField like "status":"cancelled"/"aborted" when unsubscribed, ensuring you
don't double-write on error/complete (use a finished flag or detect unsubscribe
path) and keep the existing error and complete handlers unchanged.
- Around line 21-27: The error handler in the error(err) callback is persisting
raw backend error_message via point.stringField("error_message", err.message)
which can leak sensitive data; update the error(err) block (the error callback
that builds point and calls influxClient.writePoint(point)) to avoid storing raw
messages by either omitting error_message, replacing it with a
sanitized/truncated version, or storing a stable error identifier (e.g.,
error_name or a hashed/normalized message) instead; ensure you use the same
point and influxClient.writePoint(point) flow but transform err.message through
a sanitizer/truncate/hash function before passing to point.stringField (or
remove that field entirely).
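The "write exactly once" guard both of these comments rely on (the finished/markFinalized flag, so the error, complete, and unsubscribe paths cannot all write a point) can be sketched as a simple closure. This illustrates the pattern only; names here are not the project's code:

```typescript
// A once-guard: the first finalization path wins; later ones are suppressed.
function createFinalizeGuard(): () => boolean {
  let finalized = false
  return () => {
    if (finalized) return false
    finalized = true
    return true
  }
}

const markFinalized = createFinalizeGuard()
const writes: string[] = []

// The error path fires first and performs the single telemetry write:
if (markFinalized()) writes.push("error")
// A later teardown (e.g. unsubscribe) must not produce a second write:
if (markFinalized()) writes.push("cancelled")
```

Placing the guard check in error, complete, and the teardown/finalize callback means exactly one of them writes, whichever runs first.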
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0d1140bd-3fda-4352-b9e1-4247c8c35854
📒 Files selected for processing (6)
- src/backend.ts
- src/bot.ts
- src/modules/index.ts
- src/modules/telemetry/influxdb.ts
- src/modules/telemetry/transformer.ts
- src/modules/telemetry/trpc.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- src/modules/telemetry/transformer.ts
- src/modules/index.ts
- src/modules/telemetry/influxdb.ts
♻️ Duplicate comments (3)
src/modules/telemetry/transformer.ts (1)
36-38: ⚠️ Potential issue | 🟠 Major
Protect API flow from telemetry write failures.
On line 38, a throw from writePoint inside .finally can override the original API result/error and leak telemetry failures into user-facing bot behavior. Keep telemetry best-effort by isolating this write in its own try/catch.
Proposed fix

```diff
 .finally(() => {
   point.intField("duration_ms", Date.now() - start)
-  modules.get("influx").writePoint(point)
+  try {
+    modules.get("influx").writePoint(point)
+  } catch {
+    // telemetry must not affect Telegram API call outcomes
+  }
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/telemetry/transformer.ts` around lines 36 - 38, The telemetry write in the finally block (where point.intField("duration_ms", ...) and modules.get("influx").writePoint(point) are called) must be made best-effort so it cannot override the original API result/error; move the modules.get("influx").writePoint(point) call into its own try/catch inside the finally (or call an async fire-and-forget helper) and swallow/log any errors from writePoint rather than rethrowing them, ensuring point.intField remains set but any exceptions from writePoint are caught and logged locally.
src/modules/telemetry/trpc.ts (1)
28-35: ⚠️ Potential issue | 🟠 Major
Stop persisting raw backend error messages.
src/modules/telemetry/influxdb.ts:43-54 writes these fields verbatim, so line 34 can retain user input or internal backend details in InfluxDB. Keep a stable class/code here, or sanitize/truncate before writing.
🛡️ Minimal safe change

```diff
 error(err) {
+  const errorName = err instanceof Error ? err.name : "UnknownError"
   if (markFinalized()) {
     point
       .intField("duration_ms", Date.now() - start)
       .booleanField("success", false)
-      .stringField("error_name", err.name)
-      .stringField("error_message", err.message)
+      .stringField("error_name", errorName)
     influxClient.writePoint(point)
   }
   observer.error(err)
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/telemetry/trpc.ts` around lines 28 - 35, The error handler in the trpc telemetry (error(err)) is persisting raw error_name and error_message into Influx via influxClient.writePoint(point); instead, stop writing raw messages and either remove the stringField("error_message") and stringField("error_name") calls or replace them with a sanitized token (e.g., error_code, hashed_id, or truncated/safe class name) before calling markFinalized()/influxClient.writePoint(point). Update the point population in error(err) so only non-sensitive, stable identifiers (or truncated/sanitized strings) are written to InfluxDB and avoid storing full backend/user messages.
src/commands/index.ts (1)
86-88: ⚠️ Potential issue | 🟠 Major
Preserve existing telemetry timestamps instead of replacing the map.
Line 87 still overwrites the entire stackTimes object, dropping timings recorded by upstream telemetry middleware. This was flagged in a previous review and fixed on line 57, but not here.
🔧 Proposed fix

```diff
 commandMiddlewareStart: async ({ context, command }) => {
-  context.stackTimes = { managedCommands: Date.now() }
+  context.stackTimes = { ...context.stackTimes, managedCommands: Date.now() }
   context.point.tag("command", ManagedCommands.commandID(command)).tag("chat_type", context.chat.type)
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/index.ts` around lines 86 - 88, The code in commandMiddlewareStart currently replaces the entire telemetry map by setting context.stackTimes = { managedCommands: Date.now() }, which discards any existing timestamps recorded earlier; instead, ensure you preserve existing entries on context.stackTimes (initialize it if undefined) and set or update the managedCommands key (e.g., context.stackTimes.managedCommands = Date.now()) so other telemetry entries from upstream middleware remain intact; locate this change in the commandMiddlewareStart handler and update how context.stackTimes is assigned, referencing ManagedCommands.commandID(command) for the tag logic that follows.
🧹 Nitpick comments (1)
src/commands/index.ts (1)
54-55: Consider using intField for numeric IDs.
User and chat IDs are numeric values. While stringField works and addresses the cardinality concern from the previous review, intField is more semantically appropriate and enables numeric operations in InfluxDB queries.
♻️ Proposed refactor

```diff
 .tag("command", ManagedCommands.commandID(command))
 .tag("chat_type", context.chat.type)
-.stringField("invoked_by", context.from.id.toString(10))
-.stringField("invoked_from", context.chat.id.toString(10))
+.intField("invoked_by", context.from.id)
+.intField("invoked_from", context.chat.id)
 .timestamp(new Date(now))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/index.ts` around lines 54 - 55, Replace the two uses of stringField for numeric IDs with intField so InfluxDB stores them as integers: change .stringField("invoked_by", context.from.id.toString(10)) and .stringField("invoked_from", context.chat.id.toString(10)) to use .intField and pass numeric values (e.g., Number(context.from.id) / Number(context.chat.id) or parseInt(..., 10)) so the fields are written as integers and can be used for numeric queries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/commands/index.ts`:
- Around line 86-88: The code in commandMiddlewareStart currently replaces the
entire telemetry map by setting context.stackTimes = { managedCommands:
Date.now() }, which discards any existing timestamps recorded earlier; instead,
ensure you preserve existing entries on context.stackTimes (initialize it if
undefined) and set or update the managedCommands key (e.g.,
context.stackTimes.managedCommands = Date.now()) so other telemetry entries from
upstream middleware remain intact; locate this change in the
commandMiddlewareStart handler and update how context.stackTimes is assigned,
referencing ManagedCommands.commandID(command) for the tag logic that follows.
In `@src/modules/telemetry/transformer.ts`:
- Around line 36-38: The telemetry write in the finally block (where
point.intField("duration_ms", ...) and modules.get("influx").writePoint(point)
are called) must be made best-effort so it cannot override the original API
result/error; move the modules.get("influx").writePoint(point) call into its own
try/catch inside the finally (or call an async fire-and-forget helper) and
swallow/log any errors from writePoint rather than rethrowing them, ensuring
point.intField remains set but any exceptions from writePoint are caught and
logged locally.
In `@src/modules/telemetry/trpc.ts`:
- Around line 28-35: The error handler in the trpc telemetry (error(err)) is
persisting raw error_name and error_message into Influx via
influxClient.writePoint(point); instead, stop writing raw messages and either
remove the stringField("error_message") and stringField("error_name") calls or
replace them with a sanitized token (e.g., error_code, hashed_id, or
truncated/safe class name) before calling
markFinalized()/influxClient.writePoint(point). Update the point population in
error(err) so only non-sensitive, stable identifiers (or truncated/sanitized
strings) are written to InfluxDB and avoid storing full backend/user messages.
---
Nitpick comments:
In `@src/commands/index.ts`:
- Around line 54-55: Replace the two uses of stringField for numeric IDs with
intField so InfluxDB stores them as integers: change .stringField("invoked_by",
context.from.id.toString(10)) and .stringField("invoked_from",
context.chat.id.toString(10)) to use .intField and pass numeric values (e.g.,
Number(context.from.id) / Number(context.chat.id) or parseInt(..., 10)) so the
fields are written as integers and can be used for numeric queries.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 82337b18-1759-4b50-a6e5-651acb50fd5b
📒 Files selected for processing (8)
- src/backend.ts
- src/bot.ts
- src/commands/index.ts
- src/middlewares/auto-moderation-stack/index.ts
- src/modules/index.ts
- src/modules/telemetry/influxdb.ts
- src/modules/telemetry/transformer.ts
- src/modules/telemetry/trpc.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- src/modules/index.ts
- src/modules/telemetry/influxdb.ts
- src/bot.ts
Logic is mostly implemented as-is with regard to how InfluxDB abstracts data points. A Context flavor (and thus a plugin) is implemented to create a Point instance that flows through the whole middleware stack and is written to the DB at the end. A TrackedMiddleware class is provided for other classes to extend; it tracks the execution of a composer whose middlewares the subclass defines, allowing for granular timekeeping. ManagedCommands has also been slightly reworked to allow for better tracking of command executions, which (since they can span multiple Telegram updates through conversations) are handled as a separate measurement.
Note: AutoModeration was probably bugged, since the middlewares were declared one after the other and each filter would affect the next one. Since the operations in AutoModeration were supposed to be forked from the main flow, they each got their own separate tracking.