I have wasted too many tokens getting AI editors to work well with Terraform code. The authoritative provider
documentation lives on a JavaScript-heavy website, which makes it nearly impossible to feed to AI editors as a
reference. Even adding those links to Cursor’s doc index doesn’t work. So you get hallucinations and completely wrong
code from even the best models.
I have been using Helm charts, like everyone else, for the Kubernetes cluster in my homelab. Until a few months back, I
never gave a thought to the reliability of the Helm chart repositories I was using. Then the Bitnami news
dropped: they announced they were going to stop supporting their public Helm chart repositories.
Everyone has been scrambling to handle this situation, and most are settling on one of two options:
Vendor the sources of the existing charts into their Git repositories.
Use a Helm chart repository, paid or free, to mirror them in a more scalable way.
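The first option can be as simple as pulling the chart source straight into the repo. A sketch, where the chart name and paths are illustrative:

```shell
# Sketch of option 1: vendor a chart's source into the Git repo.
# Pull the chart from the OCI registry and unpack it in-tree.
helm pull oci://registry-1.docker.io/bitnamicharts/postgresql \
  --untar --destination charts/

# Commit it alongside the code so deploys no longer depend on the
# upstream repository staying available.
git add charts/postgresql
git commit -m "Vendor the postgresql chart"
```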
I normally install Ubuntu on my Raspberry Pi machines because I am comfortable with its ecosystem. Most of the time
though, I have been using these boxes connected to my network via Ethernet.
Recently, I got a new Raspberry Pi 5, and as usual, I installed Ubuntu on it. This time I used the official HAT
to install the OS on an NVMe drive. The Raspberry Pi imager tool does a great job of setting up the machine with Wi-Fi
enabled. What I never paid attention to was how much the country setting in the options affects the Wi-Fi band.
After booting up the machine, I noticed that Wi-Fi had connected on the 2.4GHz band, which seemed odd. What followed was a lot of detail that, as usual, I wish I didn’t need to know, but now had to. :(
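The short version: if the country option is unset or wrong, the kernel’s Wi-Fi regulatory domain disallows most 5GHz channels, and the Pi falls back to 2.4GHz. The commands below are standard `iw` usage; the country code and interface name are just examples:

```shell
# Check the current regulatory domain; "country 00" means unset,
# which blocks many 5GHz channels and can pin you to 2.4GHz.
iw reg get

# Set the domain for the current boot (country code is an example).
sudo iw reg set IN

# Confirm which band/frequency the interface actually associated on.
iw dev wlan0 link
```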
I like to use Homebrew on my Linux development machine as well, instead of random apt packages which
may or may not be up to date for common tools.
One annoyance I found a solution for is getting bash completion for Homebrew commands to work on Linux. The
problem is that Ubuntu (and other Linux distributions) have their own bash completion scripts for system commands. But
the way most bash completion scripts work is that they have a check to see if completions are already loaded.
So if I have a line in my bashrc to load Homebrew’s completions, they won’t load, because the script detects that the
system’s completions are already there. Specifically, it checks the variable BASH_COMPLETION_VERSINFO, which the main
bash-completion script sets once it has loaded.
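The workaround I use is a ~/.bashrc fragment that clears that guard variable before sourcing Homebrew’s copy. A sketch, assuming Homebrew’s default Linux prefix:

```shell
# ~/.bashrc fragment (assumes Homebrew is installed and on PATH).
if type brew &>/dev/null; then
  # Clear the guard the distro's bash-completion script sets, so
  # Homebrew's copy doesn't bail out thinking it's already loaded.
  unset BASH_COMPLETION_VERSINFO
  if [ -r "$(brew --prefix)/etc/profile.d/bash_completion.sh" ]; then
    source "$(brew --prefix)/etc/profile.d/bash_completion.sh"
  fi
fi
```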
Reddit used a testing technique called “tap compare” for read migrations. The concept is straightforward:
A small percentage of traffic gets routed to the new Go microservice.
The new service generates its response internally.
Before returning anything, it calls the old Python endpoint to get that response too.
The system compares both responses and logs any differences.
The old endpoint’s response is what actually gets returned to users.
This approach meant that if the new service had bugs, users never saw them. The team got to validate their new code in
production with real traffic while maintaining zero risk to user experience.
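The flow above can be sketched in a few lines. The handler names and payloads here are hypothetical stand-ins; in the real setup this logic lives inside the new service:

```python
# Tap-compare sketch: compare responses, log mismatches, always
# return the old endpoint's response. All names are hypothetical.
import logging

logger = logging.getLogger("tap_compare")

def call_new_service(request):
    # Stub standing in for the new Go microservice's internal handler.
    return {"user": "alice", "karma": 101}

def call_old_service(request):
    # Stub standing in for a call back to the old Python endpoint.
    return {"user": "alice", "karma": 100}

def handle(request):
    new_resp = call_new_service(request)  # generated, but never returned
    old_resp = call_old_service(request)  # source of truth for users
    if new_resp != old_resp:
        logger.warning("tap-compare mismatch: new=%r old=%r",
                       new_resp, old_resp)
    return old_resp  # users always see the old endpoint's response

print(handle({"path": "/api/user/alice"}))  # -> {'user': 'alice', 'karma': 100}
```

Because the new response is computed but discarded, a bug in the new service shows up only in the mismatch logs, never in what users receive.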
I was working away from home today, trying to push out changes to a bunch of my homelab servers. As usual I was using
Ansible, connected to the home network over Tailscale.
Normally I would just create a SOCKS/HTTP proxy through one of my home machines, set a proxy environment variable
like HTTP_PROXY, and most apps would just work. But Ansible doesn’t respect that environment variable.
Ansible does have an environment keyword that lets you set http_proxy variables, but that applies to tasks executing
remotely: commands running on the target can use it to reach out over the Internet. What we need is a way to reach the
target host in the first place.
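What does work is pointing the underlying SSH connection at the proxy via `ansible_ssh_common_args`. A sketch, assuming a local SOCKS proxy on port 1080 (e.g. from `ssh -D 1080 jumphost`) and OpenBSD netcat; the host address is an example:

```ini
# inventory.ini -- host address and proxy port are examples
[homelab]
node1 ansible_host=192.168.1.10

[homelab:vars]
# Tunnel Ansible's SSH connections through the local SOCKS proxy
ansible_ssh_common_args=-o ProxyCommand="nc -x localhost:1080 %h %p"
```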
A new study released Thursday by research group Epoch AI projects that tech companies will exhaust the supply of
publicly available training data for AI language models by roughly the turn of the decade—sometime between 2026 and
2032.
…
In the longer term, there won’t be enough new blogs, news articles and social media commentary to sustain the current
trajectory of AI development, putting pressure on companies to tap into sensitive data now considered private—such as
emails or text messages—or relying on less-reliable “synthetic data” spit out by the chatbots themselves.
As more and more slop gets churned out onto the internet and floods our search results, there is less and less incentive for humans to generate content in the first place. This data doom feels like a self-fulfilling prophecy.
One of the major reasons I switched to VS Code completely some years back is its excellent extension system, in
particular the seamless remote SSH editing extension.
Remote SSH extension is really cool!
Using a local editor on a remote filesystem without fiddling with sshfs or the like, with extensions like Python, Go
and Copilot all set up and configured locally but operating on content on a remote system, has been pretty cool.
It also helps me keep my office and personal content separate: when I need to, I edit my personal codebase, which lives
on my personal computers, from VS Code on my office laptop.
The annoyance with the code CLI
But one of the most annoying things about using the VS Code Remote SSH extension was the VS Code CLI, which I use a lot.
I spend a lot of time on the terminal inside VS Code, and sometimes it is just easier to open a file from the command
line using code FILE instead of reaching for the mouse to click on the file in the explorer.
Here is where it gets tricky.
If you have a VS Code installation on the remote computer, and you used it to install the CLI, that CLI executable (on
macOS, installed at /usr/local/bin/code) will always open files in that machine’s local VS Code installation. It will
not open the file in the Remote SSH VS Code workspace that you have open.
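A minimal sketch of the kind of wrapper that fixes this (this is not the author’s actual script; the server path layout and fallback location are assumptions about a typical setup). Inside a Remote SSH terminal, VS Code exports VSCODE_IPC_HOOK_CLI and ships its own CLI under ~/.vscode-server:

```shell
#!/usr/bin/env bash
# Sketch of a `code` wrapper for Remote SSH sessions.
if [ -n "$VSCODE_IPC_HOOK_CLI" ]; then
  # Pick the most recently installed server's bundled remote CLI,
  # which talks to the connected Remote SSH workspace.
  remote_cli=$(ls -t ~/.vscode-server/bin/*/bin/remote-cli/code 2>/dev/null | head -n 1)
  if [ -n "$remote_cli" ]; then
    exec "$remote_cli" "$@"
  fi
fi
# Otherwise fall back to the regular local CLI.
exec /usr/local/bin/code "$@"
```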
TLDR: Use the script at the end of this post instead of the code cli in path if you want to open files in the remote ssh extension workspace.