Indie AI engineer in Toronto. I build things, ship them before they're perfect, and write about what I learn along the way.
Currently obsessed with the question: how do you know if AI output is actually good? Not "feels good" — measurably good, against standards you define yourself.
qed.systems — My answer to the question above. I'm building resonance profiles — standards of judgment you train from examples, not rules. Show it what "good" looks like, and it scores everything else. No LLM in the loop, deterministic, sub-second. Live API, Claude Code plugin, go break it.
Word of Lore — AI newsletter covering what to build, what to use, and what to make of it all. Three tracks: Build AI, Apply AI, Understand AI.
Stream of Consciousness — A Claude Code plugin for brains that don't do well with to-do lists. Things flow in, decay over time, and either get resolved or restreamed. No tags, no priorities, just vibes and a clock.
interference — Where I think out loud about AI, building, and shipping as a one-person shop.
tarasyanchynskyy — The personal stuff.
Building in public, one bit at a time.