SQLite as the production database — when it's actually fine
Every time someone mentions using SQLite in production, the response is predictable: “it doesn’t scale,” “no concurrent writes,” “use Postgres.” And they’re right — if you’re building the next Twitter. But most of us aren’t. Most of us are running apps that serve hundreds of requests per minute, not thousands per second. And for that, SQLite is not just fine — it’s better.
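SQLite's concurrency story is better than its reputation suggests, provided you turn on write-ahead logging: WAL mode lets readers proceed while a single writer commits, which covers most low-write-volume apps. A minimal sketch, assuming a hypothetical database file named `app.db`:

```shell
# Hypothetical file name; journal_mode=WAL is persistent (stored in the
# database file), busy_timeout is per-connection.
python3 - <<'EOF'
import sqlite3

conn = sqlite3.connect("app.db")
# Switch to write-ahead logging; returns the resulting mode
print(conn.execute("PRAGMA journal_mode=WAL").fetchone()[0])
# Wait up to 5 seconds for a lock instead of failing with "database is locked"
conn.execute("PRAGMA busy_timeout=5000")
conn.close()
EOF
```

On a file-backed database the first pragma prints `wal`, confirming the switch took effect.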
Spot instances on EKS — cutting costs without cutting reliability
Our EKS bill was growing faster than our traffic. Most of it was compute — on-demand `t3.medium` and `t3a.medium` instances running 24/7 for services that could tolerate occasional restarts. Spot instances are 60-70% cheaper than on-demand, but the trade-off is that AWS can reclaim them with two minutes' notice. The question was: which services can handle that, and how do you set it up without making the cluster fragile?
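EKS managed node groups created with a SPOT capacity type label their nodes `eks.amazonaws.com/capacityType=SPOT` automatically, so steering tolerant workloads onto them is a scheduling concern. A minimal sketch of the pod-side half; the service name and image are hypothetical, the label is the one EKS applies itself:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker                # hypothetical interruption-tolerant service
spec:
  replicas: 3
  selector:
    matchLabels: { app: worker }
  template:
    metadata:
      labels: { app: worker }
    spec:
      nodeSelector:
        # Only schedule onto spot capacity
        eks.amazonaws.com/capacityType: SPOT
      # Shutdown must fit inside the 2-minute reclaim window
      terminationGracePeriodSeconds: 90
      containers:
        - name: worker
          image: example/worker:latest   # hypothetical image
```

Running several replicas matters here: a reclaimed node takes out one pod while the others keep serving.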
React 19 + Vite — what changed from the webpack days
I’ve been building React apps since the `create-react-app` days. Webpack configs, Babel plugins, 45-second cold starts. It was fine — it was all we had. Then Vite happened, and then React 19, and now frontend development feels like a different job. A better one.
Upgrading EKS across four environments — the rolling strategy
Upgrading Kubernetes on EKS sounds simple — change a version number, apply, done. In practice, with four environments (devnet, testnet, preprod, prod) and services that can’t afford downtime, it’s a multi-week process with a lot of “apply and watch” in between. I just finished rolling from 1.29 to 1.33 across the board, and here’s what that actually looked like.
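One constraint shapes the whole process: EKS moves the control plane one minor version at a time, so 1.29 to 1.33 means four sequential upgrades per cluster, times four environments. A sketch of the sequence for one cluster, shown as echoed commands rather than live calls, with a hypothetical cluster name:

```shell
# EKS upgrades one minor version per step; managed node groups and
# add-ons (vpc-cni, coredns, kube-proxy) follow each control-plane bump.
for version in 1.30 1.31 1.32 1.33; do
  echo "aws eks update-cluster-version --name devnet --kubernetes-version $version"
done
```

Each step is "apply and watch": the control plane upgrade alone can take a while, and nodes and add-ons come after it, not alongside.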
Building with viem instead of ethers.js — the migration
I spent a good chunk of last year migrating from `ethers.js` v5 to `viem`. Not because I had to — `ethers` was working fine. But I kept seeing `viem` pop up in every modern web3 project, and after reading the docs I understood why. It’s a fundamentally better approach to the same problem.
Fail2ban and firewall hardening on a public-facing VPS
The first time I ran `grep "Failed password" /var/log/auth.log | wc -l` on a new VPS, the number was embarrassing. Thousands of failed SSH attempts within 48 hours of provisioning. Bots scan the entire IPv4 space continuously — your server is being probed within minutes of getting a public IP. Let’s do something about it.
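Fail2ban watches `auth.log` for exactly that pattern and bans offending IPs at the firewall. A minimal `/etc/fail2ban/jail.local` sketch; the thresholds here are illustrative defaults, not a recommendation:

```ini
[DEFAULT]
# seconds an IP stays banned
bantime = 3600
# window (seconds) in which failures are counted
findtime = 600
# failures allowed within findtime before a ban
maxretry = 5

[sshd]
enabled = true
```

Keeping overrides in `jail.local` rather than editing `jail.conf` means package upgrades won't clobber your settings.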
Loki + Promtail for log aggregation on a budget
I’ve been running Prometheus + Grafana for over a year now and it’s great for metrics. But metrics tell you what happened — not why. For that you need logs. And my logging strategy was to `ssh` into the server and `tail -f` whatever PM2 was writing to disk. Not scalable, not searchable, and definitely not “check this from my phone at 2am” friendly.
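Promtail's job is narrow: tail files, attach labels, push to Loki. A minimal scrape-config sketch; the PM2 log path and the Loki address are assumptions for a single-server setup with both on the same host:

```yaml
# promtail-config.yml (fragment)
clients:
  - url: http://localhost:3100/loki/api/v1/push   # Loki on the same host

scrape_configs:
  - job_name: pm2
    static_configs:
      - targets: [localhost]
        labels:
          job: pm2
          # PM2 writes per-app out/err logs under ~/.pm2/logs by default;
          # the "deploy" user is hypothetical
          __path__: /home/deploy/.pm2/logs/*.log
```

The labels are what make this searchable later: in Grafana you query `{job="pm2"}` instead of remembering file paths.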
First smart contract on Base — what surprised me
I’ve been writing Solidity for a while, but always on mainnet or testnets. Base launched in August 2023 and I figured it was time to try an L2 for real — deploy something, see how the tooling and gas economics actually differ. What follows is everything that surprised me — good and bad — about deploying on Base using the Foundry toolchain.
Ansible for a single server — overkill or exactly right?
When I first set up my VPS, I configured everything by hand. SSH’d in, ran commands, tweaked config files, forgot what I did three weeks later. The second time I set up a server I wrote bash scripts. Big bash scripts. Scripts that grew organically until they were unreadable, non-idempotent, and broke in subtle ways if they’d already been partially run. The third time I used Ansible. I haven’t looked back.
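The property those bash scripts lacked, idempotency, is what Ansible modules give you for free: a task describes a desired state, and running it twice changes nothing the second time. A minimal playbook sketch, with a hypothetical host group and package:

```yaml
# playbook.yml — safe to run repeatedly; tasks only act when state differs
- hosts: vps
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure nginx is enabled and running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The first run installs and starts nginx; every run after that reports "ok" and touches nothing, which is exactly the guarantee the hand-rolled scripts never had.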
Deploying with PM2 — why I stopped using Docker for Node.js apps
Hot take: Docker is overkill for deploying Node.js apps on a single server. I know, I know. Containers are great. Isolation, reproducibility, all that. But when you’re running 4 Express apps on one VPS and your “deployment” is `rsync` + restart, Docker adds a layer of complexity that earns you almost nothing.
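What PM2 gives you in place of a container runtime is process supervision driven by one manifest. A minimal `ecosystem.config.js` sketch; the app name, entry point, and limits are hypothetical:

```javascript
// ecosystem.config.js — start with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "api",                 // hypothetical app name
      script: "./dist/server.js",  // entry point after build
      instances: 2,                // cluster mode: 2 workers share the port
      exec_mode: "cluster",
      max_memory_restart: "300M",  // restart a worker that grows past 300 MB
      env: { NODE_ENV: "production" },
    },
  ],
};
```

Deploys then really are `rsync` + `pm2 reload api`, and cluster mode reloads workers one at a time so the port never goes dark.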