Our Perseids campsite at Song Kol. No cell signal, no Blind, no LinkedIn panic.
I check Blind the way some people check horoscopes: every morning, expecting drama, occasionally finding truth. The Economist I read for the opposite reason: it's calm, structured, backed by actual data. These two don't usually agree on much. But over the past couple of months, they've converged on the same narrative, and that's when I start paying attention.
Most people missed the December signal. No press conference, no earnings call. Just a Schumpeter column in The Economist [1] noting that SpaceX, OpenAI, and Anthropic are all circling public listings. Anthropic hired Wilson Sonsini, the firm that took Google and LinkedIn public. Valuation tripled in six months to $183 billion [2]. Revenue reportedly went up ninefold in a year [3].
The current debate around GenAI and C++ is a good illustration of the real problem. Many engineers report that models are worse than juniors. Others report dramatic speedups on the same language and problem space. Both observations are correct.
The difference is not the model. It is the absence or presence of state.
Most GenAI usage today is stateless. A model is dropped into an editor with a partial view of the codebase, no durable memory, no record of prior decisions, no history of failed attempts, and no awareness of long-running context. In that mode, the model behaves exactly like an amnesic junior engineer. It repeats mistakes, ignores constraints, and proposes changes without understanding downstream consequences.
When engineers conclude that “AI is not there yet for C++”, they are often reacting to this stateless setup.
At the same time, GenAI does not elevate engineering skill. It does not turn a junior into a senior. What it does is amplify the level at which an engineer already operates. A senior engineer using GenAI effectively becomes a faster senior, and a junior becomes a faster junior. Judgment is not transferred, and the gap does not close automatically.
These two facts are tightly coupled. In stateless, unstructured usage, GenAI amplifies noise. In a stateful, constrained workflow with explicit ownership and review, it amplifies competence.
This is why reported productivity gains vary so widely. Claims of 200–300% speedup are achievable, but only locally and only within the bounds of the user’s existing competence. Drafting, exploration, task decomposition, and mechanical transformation accelerate sharply. End-to-end throughput increases are lower because planning, integration, validation, and responsibility remain human-bound.
The question, then, is not whether GenAI is “good enough”. The question is what kind of system you embed it into.
Note
Everything I'll explain below is only applicable to the Stateful GenAI setup.
Sometimes you need to understand why something exists, and instead, you’re staring at a mystery. It feels like magic for a moment. But there is no magic in IT. There is always a reason, and usually it’s painfully concrete.
Today I learned that if git blame suddenly claims I wrote the entire million-line project, it might be lying 🙂
I ran into a situation where my local git blame attributed every line to a single recent commit, while GitLab showed the correct historical authors. At first glance, it looked like history had been rewritten; in fact, the local view was simply wrong.
Usually, you need just a few lines to initialize TSan in your project: you compile with the sanitizer flags, run the tests, and get a clear report of which threads touched which memory locations. On a modern Linux system, that simple expectation can fail in a very non-obvious way.
In my case, I attached TSan to a not-so-young C++ codebase and immediately encountered a fatal runtime error from the sanitizer, long before any of the project's code executed. No race report, no helpful stack trace, just a hard abort complaining about an "unexpected memory mapping."
If you can upgrade your toolchain to LLVM 18.1 or newer, this problem effectively disappears, because newer TSan builds know how to recover from the incompatible memory layout. But if you are pinned to an older LLVM (by CI images, production constraints, or corporate distro policy), you are in the same situation I was: you have to understand what the sanitizer is trying to do with the address space, and work around the failure mode yourself.
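For context, that error message matches the well-known clash between pre-18.1 sanitizer runtimes and the 32-bit mmap ASLR entropy that recent Linux kernels enable by default. If upgrading is off the table, the commonly cited workarounds look like this (system-level knobs; the binary name is a placeholder):

```shell
# Workarounds for "FATAL: ThreadSanitizer: unexpected memory mapping" on
# kernels with 32-bit mmap ASLR entropy, when LLVM 18.1+ is not an option.
# Both reduce address-space randomization; pick the scope you can live with.

# Option 1: lower the ASLR entropy system-wide (persists until reboot)
sudo sysctl -w vm.mmap_rnd_bits=28

# Option 2: disable ASLR only for the sanitized binary (name is illustrative)
setarch "$(uname -m)" -R ./my_tsan_instrumented_tests
```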
I just started a new embedded pet project on the Raspberry Pi, and I expect it'll be a pretty big one, so I've been thinking about the technology from the beginning. The overall goal is to create a glass-to-glass video pipeline example. Let's see how it's going. For now, I'm using a USB V4L2 camera while waiting for the native Pi modules to arrive, but it's enough to sketch the capture loop and start testing the build pipeline. The application itself is minimal—open /dev/video0, request YUYV at 1280x720, set up MMAP buffers, and iterate over frames—but the real challenge occurs when v4l triggers bindgen, and the build must cross-compile cleanly for aarch64.
The language choice immediately becomes part of the equation. Go is my favorite and is usually not even considered an option by many embedded developers. But it's a good choice for small embedded utilities because its cross-compilation story is nearly effortless. Need an ARM binary? One command and you have it!
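For the record, that one command looks like this (the package path and output name are my own placeholders):

```shell
# Cross-compile for a 64-bit Raspberry Pi from any host; no cross-toolchain
# needed as long as the code is pure Go (CGO_ENABLED=0 rules out C deps).
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o capture-arm64 ./cmd/capture
```

Once cgo enters the picture (e.g. via V4L2 bindings), you are back to providing a C cross-toolchain, which is exactly where the trade-off starts.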
A 10-minute ride and you have such a view from Chon Aryk hills.
I truly enjoy reading Blind and Levels. There is so much internal drama, messy details, and unexpected insights that you almost do not need reality shows anymore. And if you ever feel bored, you can always drop a mildly toxic comment into a thread and watch the whole thing ignite. It fits the overall style of Blind a little too well, but that is part of the fun.

And considering that mix of casual toxicity and surprisingly rational takes you see there, you would expect people to look at layoffs with a bit more perspective. But when the topic comes up, the conversation usually drifts to the same explanation. People blame AI. People say their jobs vanished because a model wrote some code. And while I understand the frustration, the logic never sits right with me. Nobody complained during the hiring boom of 2020 and 2021, when companies doubled their headcount like it was nothing. That part gets forgotten. Now that the correction is here, many want a simple villain. AI fits the story, but it does not fit the data.
Birch Grove near Bishkek. The grove is incredibly popular in the autumn.
When a system manages dozens of cameras or edge devices, packets alone don’t tell you much. An IP and port might change, SSRCs can roll over, and NATs tend to shuffle everything just enough to break simple assumptions. Yet every media packet still needs a clear identity — not for transport, but for logic.
There are many ways to attach that identity: control channels, per-session negotiation, external registries. But the simplest one is already part of RTP itself — the header extension defined by RFC 8285.
RTP was designed to be extensible. After the fixed header and payload, packets can carry short metadata blocks called header extensions. Each extension has a small numeric ID and a URI describing its purpose.
In her one year, Molly saw many more exciting places than I did until I was about 28. She does pretty well :-D
When working on performance experiments across C++ and Go, you obviously need a multilingual project structure. There were two paths forward: create separate build systems under a shared repository, or consolidate everything under a single, coherent framework. Bazel made that decision easy.
Using Bazel to unify builds isn’t just convenient—it should be the default choice for any serious engineering effort that involves multiple languages. It eliminates the friction of managing isolated tools, brings deterministic builds, and handles dependencies, benchmarking, and cross-language coordination with minimal ceremony.
Here’s why Bazel makes sense for performance-critical, multilingual projects like this one—no fragile tooling, no redundant setups, just clean integration that scales.
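To give a flavor (target names and sources are illustrative, and the real setup would pull in rules_go via MODULE.bazel), the C++ and Go sides can sit next to each other in one BUILD file:

```starlark
# BUILD.bazel -- C++ and Go benchmark targets side by side (names are mine).
load("@rules_go//go:def.bzl", "go_binary")

cc_binary(
    name = "bench_cpp",
    srcs = ["bench.cc"],
    copts = ["-O3"],
)

go_binary(
    name = "bench_go",
    srcs = ["bench.go"],
)
```

One `bazel build //...` then produces both binaries with the same caching, the same dependency graph, and the same CI entry point.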
Teskey-Torpok is a lovely pass leading you to Song-Kol Lake, Naryn region.
I’ve been on a bit of a Leetcode streak lately, poking at problems from companies I secretly admire. To keep things interesting (and to avoid nodding off in front of the screen), I challenged myself to solve the same task three ways: good old C++, its shiny modern C++20 cousin, and Elixir. It turns out that staring at a problem through a functional programming lens is like putting on X-ray specs—you see the same lines of code, but suddenly, there’s a weird beauty in that filter chain.
My first pass was as traditional as it gets—a simple C++ class with an array and a loop. No magic here, just pushing timestamps and scanning them one by one. The implementation is pretty naive, but considering the constraints provided by the Design a hit counter challenge, even an unscalable approach is acceptable. So, we will push all new timestamps into the std::vector and then simply count.
Birch Grove is just 40 minutes from Bishkek. Although I was concerned about the thick fog, it was an amazing opportunity for photography!
C++ is a powerful language, and I genuinely love it, but sometimes, even in modern versions, it lacks some surprisingly simple features. One such missing feature is switch on std::string. You’d think that by now, we could use a switch statement on strings just like we do with integers or enums—after all, Go has it! But no, C++ keeps us on our toes.
Why Doesn’t C++ Support switch on strings?

Because "you only pay for what you use," which is the standard C++ mantra. The switch statement in C++ relies on integral types. Under the hood, it works by converting the case values into jump table indices for efficient execution. But std::string (or even std::string_view) is not an integral type—it’s a more complex data structure. That’s why you can’t simply do:
switch (msg->get_value<std::string>()) {  // Nope, not possible :-(
    case "topology":
        // Handle network topology
        break;
    case "broadcast":
        // Handle network broadcast
        break;
}