I’m really excited about KittenTTS [1], an amazing project by KittenML [2]. It’s worth exploring!
State-of-the-art TTS model under 25MB 😻
References:
[1]: https://github.com/KittenML/KittenTTS
[2]: https://github.com/KittenML
Learning to agent
All we are hearing lately is that agents are the future, and something flipped
around Nov 2025 with Opus 4.5. It turned snake oil into action. It changed
"programmers will be replaced in 6 months" to "now." Not all of them, but probably
most of us who are not extraordinary. If you fall into the camp of folks not
adopting, I've got no issue with that. No one is twisting your arm; well, maybe
your boss or CTO is, and that's on them. I don't mean to say this is the future as
in "get in or get left behind." I mean it as: this is where your other engineers
probably are; the junior to mid-level engineers are here. If you are not
trying to meet them where they are, how are you going to lead them?
Studio Ghibli Images in the Wild
I just stumbled onto an image in my org chart of someone who clearly turned
themselves into a Studio Ghibli character in ChatGPT, during the small window of
time when it seemed to do this for everything. It's clearly the aesthetic it
would produce by default that week, then would not do whatsoever afterward. I'd
link it, but it's from an org chart. I mostly found it interesting how we now
have these recognizable artifacts from specific moments in time.
Ping 36
I feel like there's an inevitable phase to every AI/agentic feature or epic
where you have to get in and chat with it 2025-style (except it actually works
and doesn't turn your project to shit). Planning is great, and planning out epics
for full orchestrators to churn on for hours is amazing, but it always leaves
me with a handful of thorns, multiplied by the complexity of the work: things I
can shout at it as a list of 6 items at a time that it can one-shot. I haven't seen
anyone put a name to this phase yet, so I'm going to call it the UAT phase
for now, and it seems like a very necessary part of the SDLC. It was
important before, but feels more so now as engineers distance themselves
from the implementation.
Research, Plan, Implement
I heard this term yesterday, and I think a lot of people are missing out on
step 1. It's important to experiment with agents and learn what they can do
well and what they can't; this changes every couple of weeks at this point. You
might be spending hours planning something that could have been implemented
right away, or wasting time planning something that needed more research and
more context engineering. Agents start fresh every session; they can't remember
what you asked them to do 5 minutes ago in the other session, so getting the right
tokens into the session is critical.
Today I learned that Docker creates an empty /.dockerenv file at the container
root to indicate that you are running in a Docker container. Other runtimes like
Podman commonly use /run/.containerenv. Kubernetes uses neither of these; the
most common way to detect whether you are running in Kubernetes is to check for
the presence of the KUBERNETES_SERVICE_HOST environment variable. There will
also be a directory at /var/run/secrets/kubernetes.io/serviceaccount that
contains the service account credentials if you are running in Kubernetes.
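As a quick sketch, those checks can be rolled into one best-effort helper. The function name is mine, and this is a heuristic, not an official API of any runtime:

```python
import os
from pathlib import Path

def detect_container_runtime() -> str:
    """Best-effort guess at the container runtime, using the marker
    files and env var described above. Heuristic only."""
    # Kubernetes injects service-discovery env vars into every pod.
    if "KUBERNETES_SERVICE_HOST" in os.environ:
        return "kubernetes"
    # Docker touches an empty /.dockerenv at the container root.
    if Path("/.dockerenv").exists():
        return "docker"
    # Podman (and some other OCI runtimes) write /run/.containerenv.
    if Path("/run/.containerenv").exists():
        return "podman"
    return "none"
```

Note the ordering: a Kubernetes pod running containerd or Docker under the hood may have the marker files too, so the env var check goes first.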
Context Poisoning Was There All Along
I wrote some code by hand on Sunday. Sat down with my son and started building
out a game in pygame from scratch. We went to Google, we searched how to do
something, we copied and pasted from the docs. Not because we are dumb, but
because we can't remember some aspects of the pygame API. Now that these
patterns are established we no longer have to google them; we simply grep our
codebase and replicate the pattern. Easy, right? It's funny that it took AI to
coin the term `context poisoning` even though it was there all along.
If you’re into interesting projects, don’t miss out on qmd [1], created by tobi [2].
A mini CLI search engine for your docs, knowledge bases, meeting notes, whatever. It tracks current SOTA approaches while staying fully local.
References:
[1]: https://github.com/tobi/qmd
[2]: https://github.com/tobi
Looking for inspiration? OrcaSlicer-FullSpectrum [1] by ratdoux [2].
G-code generator for Snapmaker U1 with Full Spectrum layer blending
References:
[1]: https://github.com/ratdoux/OrcaSlicer-FullSpectrum
[2]: https://github.com/ratdoux
Agents cannot replace the thinking, they only amplify it
Agents cannot replace the thinking, they only amplify it. If you set the
agents off in the wrong direction, that's where they will go. They will sprint
there faster than you ever could. This is OK; it's one of their advantages: they
can give you signal quickly. Remember, if they are off in the wrong direction,
more research and planning is needed, and maybe a little more thinking on
your end to steer them the right way.
Dreaming of a ten-year computer – alexwlchan
alexwlchan.net [1]
Great gusto here from someone looking to fill landfills less, get more use out of what they paid for, and dodge some tough times in the hardware industry. I'm going to argue that the 10-year computer is not one bit crazy right now. No idea what the future entails; if local LLMs get good enough to feel genuinely required, this could easily change. One issue I had with the post: as they are looking to get a machine for the next 10 years, they were so focused on themselves that they missed the point. They were so focused on buying something that would work for them for 10 years that they bought something brand new, rather than thinking about the bigger issue of how we get hardware to last 10+ years. Some factor of this involves giving our devices a second life. Two things went wrong here. First, it appears they have a perfectly good iMac with a broken screen. I know nothing about Apple/iMac, but assuming the screen is toast and unrepairable, and knowing you can SSH into a Mac, this feels like good potential server hardware. Next, they purchased a brand new Mac mini. Hardware has been good for a long time,...
Very interesting takes from @thdxr in this interview. A lot has been hashed out by others all over the place, but a hot take here is that code quality is higher than ever right now. Codebases are becoming more consistent than ever. If you are not starting with a good, consistent base from the start, you are poisoning your context and doomed to fail, with all the common failures of AI-written code. He still reads almost every PR, and will read all of the code eventually. There are a few cases where reading the PR is not worthwhile, but only when it's low stakes and he knows that good patterns have been established and followed. He argues that someone still needs to be the expert on the code and on the product, and fears that too many people not looking at PRs will fail companies.
Note
This post is a thought [1]. It’s a short note that I make
about someone else’s content online #thoughts
References:
[1]: /thoughts/
Thinking about ai productivity again
Thinking about AI productivity again. It's allowing massive amounts of work to
get done, to levels that humans cannot physically type out in some cases. But
not all of this work is necessarily high value work. Right now I'm working on
one of the biggest PRs to an internal CLI library, probably the largest PR
I've ever done professionally. It touches all of the CLI: it refactors every
command, reaches into the business logic layers to drive deeper separation,
and reaches into the common layers to drive consistency. It ensures that every
command (50 or so) has similar flags and supports --plain and --no-color. It
specs out contracts to ensure that data goes out stdout and anything extra goes
out stderr.
This makes everything unix pipe friendly. There was quite a bit of research and
prep that went in, that turns out to already be distilled down into clig.dev.
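As a sketch of that contract, here is what one hypothetical command might look like. The `mycli` name and flag wiring are mine, not the library's; the point is only the data-to-stdout, diagnostics-to-stderr split:

```python
import argparse
import sys

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical command; flag names follow the conventions above.
    p = argparse.ArgumentParser(prog="mycli")
    p.add_argument("--plain", action="store_true",
                   help="undecorated, machine-readable output")
    p.add_argument("--no-color", action="store_true",
                   help="disable ANSI colors")
    return p

def run(args: argparse.Namespace, items: list[str]) -> None:
    # Contract: progress and diagnostics go to stderr, so piping
    # stdout into another tool never mixes in noise.
    print(f"processing {len(items)} items", file=sys.stderr)
    for item in items:
        if args.plain:
            # Data only: one record per line, nothing else on stdout.
            print(item)
        else:
            color = "" if args.no_color else "\033[36m"
            reset = "" if args.no_color else "\033[0m"
            print(f"{color}* {item}{reset}")

if __name__ == "__main__":
    run(build_parser().parse_args(), ["alpha", "beta"])
```

With that split, `mycli --plain | sort | uniq -c` behaves like any other unix tool, which is exactly the clig.dev advice.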
The point is that this is all good work. It will make the product consistent,
repeatable, expected, and most of all boring. Most of the time, it wi...