Posts

Showing posts from September, 2025

cursor commands

I'm liking cursor commands: a way to integrate your common AI prompts into cursor using the / command. Store your prompts (commands) as markdown in the repo folder, then access them in cursor chat using a slash command. I'm already storing my commands in a file in the repo, but this is a nice enhancement by cursor.
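If I understand the feature correctly, a command is just a markdown file checked into the repo; the path and contents below are my own hypothetical example of what one could look like, not something from the post:

```
# .cursor/commands/review.md  ->  shows up in cursor chat as /review

Review the staged changes for obvious bugs, missing tests,
and anything that conflicts with our cursor rules files.
```

Typing /review in chat then runs that stored prompt against whatever context you've attached.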

conductor - manage claude code instances

Running Claude Code in terminals is OK, but I felt that managing it all needed a better user interface to organize and track everything. conductor is an example of what such a user interface could be.

claude code - technology and usages

claude code npm tech stack:
typescript - claude code is written in TS (pssst: typescript is a must-know language imo)
react - the UI is written in react and uses ink for the interactive command line
ink - react components for the command line! used by a who's who of AI command line tools (see the tiny sketch below)
yoga - embeddable flexbox layout engine
bun - JS runner, runtime, and bundler; chosen over webpack/vite etc. because bun is faster
npm - used to distribute claude code

Started as a simple project to query what music someone was listening to. Then it added capabilities to access the file system and run batch commands. Usage spread rapidly within the Claude Code team, who dogfooded it. "around 90% of claude code is written with claude code" (not by, with). claude code is a thin wrapper over the claude model; they deliberately want people to experience the raw model, not obstructed by much business logic. A guiding principle is simplicity: choose the simplest approach possible. the most complex part of cl...
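To make the ink point concrete, here's a tiny sketch of my own (assuming the react and ink npm packages, nothing from claude code itself): a React component that renders straight to the terminal, with yoga handling the flexbox layout behind Box.

```tsx
// demo.tsx - my own toy example, not claude code's actual UI
import React from "react";
import { render, Box, Text } from "ink";

// A React component, but its "DOM" is the terminal
const Status = ({ task }: { task: string }) => (
  <Box borderStyle="round" padding={1} flexDirection="column">
    <Text color="green">cli demo</Text>
    <Text>Working on: {task}</Text>
  </Box>
);

render(<Status task="answering a prompt" />);
```

Run it with something like bun demo.tsx (or tsx demo.tsx) and you get a bordered, flexbox-laid-out status panel in the terminal.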

cursor chat token usage

Have you noticed that cursor chat requests can use a lot more tokens than you'd expect? Matt Pocock said it well: "tokens are the currency of LLMs". You're charged by the token. A little, but it does add up. So we need to pay attention to token usage. I ran some tests in cursor chat to review token usage and was surprised by how many tokens my chat requests used. I ran these tests in a large repo which has a number of cursor rules files defined.

Tests:
1. Ask a general tech question in chat, not related to specific code in the repo; context used: 19.7k tokens. Prompt: "how should I choose between useSWR and react-router v6 for data fetching?" For contrast, the same question in Claude 4.0 outside of cursor used 28 input tokens and 768 output tokens in the response. To contrast further, the same question in Gemini 2.5 Pro outside of cursor used 19 input tokens and 1060 output tokens in the response. Almost one twentieth the token usage of cursor chat!!! Wow.
2. ask to write...
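As an aside, you can eyeball token counts locally before pasting things into chat. Here's a minimal sketch assuming the js-tiktoken npm package; it uses an OpenAI-style BPE encoding (cl100k_base), so counts for Claude or Gemini models are only approximate, but the relative sizes still show how attached context inflates a request.

```ts
// count-tokens.ts - rough, local-only token estimate (my own sketch)
import { getEncoding } from "js-tiktoken";

const enc = getEncoding("cl100k_base");
const count = (text: string): number => enc.encode(text).length;

const question =
  "how should I choose between useSWR and react-router v6 for data fetching?";

// Stand-in for the rules files / repo context cursor attaches automatically
const attachedContext =
  "## frontend rules: prefer useSWR for client data\n".repeat(400);

console.log("bare question:", count(question), "tokens");
console.log("with attached context:", count(question + "\n" + attachedContext), "tokens");
```

The bare question comes out to a few dozen tokens at most, while the second number dwarfs it, which is roughly the gap the tests above show between a raw API call and a cursor chat request.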