120 bits per second


Do you know why velocity slows down when you add more people? If I do it myself, I just open the editor and go.

If I have to explain what I want to you, my words are being transmitted to you at roughly 120 bits per second (a normal speaking pace).

And that assumes I made it clear on the first go, which is just not how human conversation works.

We're going to have a bit of back and forth. You'll ask me if you understood me correctly, I'll explain more, bitrate drops.

And this works fairly well, as long as only a few people are involved, because conversation scales poorly:

If 3 people need to coordinate, 3 interactions are required.
If 4 people need to coordinate, 6 interactions are required.
If 6 people need to coordinate, 15 interactions are required.
If 10 people need to coordinate, 45 interactions are required.
...
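The pattern behind those numbers is the count of pairwise communication channels: n people have n(n-1)/2 possible pairs, which grows quadratically. A quick sketch to reproduce the figures above (function name is my own, purely illustrative):

```python
from math import comb

def channels(n: int) -> int:
    # Number of distinct pairs among n people: C(n, 2) = n * (n - 1) // 2
    return comb(n, 2)

for n in (3, 4, 6, 10, 50):
    print(f"{n} people -> {channels(n)} interactions")
```

At 50 people that's already 1,225 potential channels, which is why even partial coordination gets slow.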

That's not a great trajectory. Now sure, not everyone needs to talk to everyone, but it still makes things damn slow.

If you want to move fast, get as few talented people as you can get away with, give them a big ask and remove process.

You'd be surprised what a small, efficient team can pump out.

Yours,

Taj

The TP Daily Newsletter

Hi, I’m Taj Pelc. I write about technical leadership, business mindset and entrepreneurship. Daily advice on building fantastic tech teams that deliver great products. I'll see you inside.
