<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[cgardens]]></title><description><![CDATA[tech, intellectual honesty, etc]]></description><link>https://blog.cgardens.dev</link><image><url>https://blog.cgardens.dev/img/substack.png</url><title>cgardens</title><link>https://blog.cgardens.dev</link></image><generator>Substack</generator><lastBuildDate>Thu, 23 Apr 2026 08:58:44 GMT</lastBuildDate><atom:link href="https://blog.cgardens.dev/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[charles]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[cgardens@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[cgardens@substack.com]]></itunes:email><itunes:name><![CDATA[charles]]></itunes:name></itunes:owner><itunes:author><![CDATA[charles]]></itunes:author><googleplay:owner><![CDATA[cgardens@substack.com]]></googleplay:owner><googleplay:email><![CDATA[cgardens@substack.com]]></googleplay:email><googleplay:author><![CDATA[charles]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Inverting the Developer and AI Relationship]]></title><description><![CDATA[Because there are more tokens than human-hours in the day]]></description><link>https://blog.cgardens.dev/p/inverting-the-developer-and-ai-relationship</link><guid isPermaLink="false">https://blog.cgardens.dev/p/inverting-the-developer-and-ai-relationship</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Mon, 20 Apr 2026 20:11:37 GMT</pubDate><content:encoded><![CDATA[<p><em>Originally posted on <a 
href="https://www.linkedin.com/posts/cgardens_when-a-jira-ticket-is-created-on-our-team-share-7452061518868799489-9VYc?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAgTj18B0dc65XxH74VWlI5_UX1E1BVt6lE">LinkedIn</a>.</em></p><p>When a ticket is created on our team, an AI agent picks it up by default. A human only gets pulled in if the agent escalates.</p><p>How did we get here? In the beginning, there were engineers. Then engineers got AI assistants, and the assistant sat next to the human: you&#8217;d pick up a ticket, figure out what was worth delegating, and hand pieces of it to the model. The human was the one actually working the ticket. AI was optimizing the developer&#8217;s throughput.</p><p>The key shift, which we made three weeks ago, is that by default AI does the work. Engineers spend their time unblocking the agents: giving them context, shaping the environment, deciding what should trigger an escalation, and fixing the cases where they get stuck. Our biggest bottleneck now is code review.</p><p>The job of the engineer stops being &#8220;do tickets&#8221; and starts being &#8220;design the conditions under which tickets get done.&#8221;</p><p>The results so far: five engineers running this system ship roughly as much as the other 50 on our engineering team combined.</p><p>And this is V1. It&#8217;s the dumbest the system will ever be. V2 is already underway. We are building a machine that turns tokens into product delivery capacity. 
Now the developer is optimizing the AI&#8217;s throughput.</p><p>If you&#8217;re trying to get more out of AI in engineering, the question probably isn&#8217;t &#8220;how do I delegate better.&#8221; It&#8217;s &#8220;what would it take for the agent to pick up the work by default.&#8221;</p>]]></content:encoded></item><item><title><![CDATA[Of Course This Blog is Written by AI]]></title><description><![CDATA[Because &#8220;never published&#8221; to &#8220;done in 15 minutes&#8221; is an infinity percent improvement]]></description><link>https://blog.cgardens.dev/p/of-course-this-blog-is-written-by</link><guid isPermaLink="false">https://blog.cgardens.dev/p/of-course-this-blog-is-written-by</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Sat, 24 Jan 2026 15:01:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!E_4-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>I am not a Writer. I am a tech exec, a dad, a software engineer, and a human with ideas I want to share.</strong> AI has been a huge unlock for me in doing this. Pre-AI, I struggled to publish anything. While I had plenty of ideas that were baked enough to share, the actual process of translating them from my head into written text was very slow. This struggle is evidenced by this<a href="https://blog.cgardens.dev/p/tool-for-the-job"> early post</a> on this Substack, where all I managed to do was write about how I wasn&#8217;t writing anything.</p><p>My process for writing articles is straightforward: I start with a topic I want to discuss. I then build a fairly dense outline, complete with examples and a point-by-point breakdown of what I&#8217;m trying to say. <strong>I know what I want to say.</strong> I use AI to help augment my research along the way. 
When it&#8217;s time to write, I pass the outline to a personal GPT, which has been trained on my past, pre-AI content to capture something of my voice. It then generates the text. There is an iteration process there where we work together to refine it. I also ask other LLMs to challenge the work to pressure test both the ideas and the writing.</p><p>Does this diminish the content of this blog? I don&#8217;t think so.</p><p>My brother texted me the other day, saying, &#8220;Some of your blog sounds like it was written by AI.&#8221; I responded, &#8220;Of course it does, because it is!&#8221; I understand his reaction, because I do it too: when I consume something and begin to suspect it was AI-written, my hackles go up, and I immediately become more skeptical of its quality. I believe we need to move past that reaction. This is related to an <a href="https://blog.cgardens.dev/p/definitive-guide-to-ai-slop">earlier piece</a> where I built a mental model for how to grapple with the use of AI in media content.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!E_4-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!E_4-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png 424w, https://substackcdn.com/image/fetch/$s_!E_4-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png 848w, 
https://substackcdn.com/image/fetch/$s_!E_4-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png 1272w, https://substackcdn.com/image/fetch/$s_!E_4-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!E_4-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png" width="514" height="472" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:472,&quot;width&quot;:514,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:24099,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.cgardens.dev/i/185579641?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!E_4-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png 424w, https://substackcdn.com/image/fetch/$s_!E_4-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png 848w, 
https://substackcdn.com/image/fetch/$s_!E_4-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png 1272w, https://substackcdn.com/image/fetch/$s_!E_4-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57444edb-5f28-4f8a-a9d3-b56719d4449b_514x472.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Similar to that article, I think we can build a truth table. 
One axis is whether the ideas being described are AI-Generated or Human-Generated. Where AI is now, none of the ideas it generates on its own are interesting to read. It truly feels like slop. Now, of course, a lot of human-generated ideas are not that interesting either. But for the sake of our mental model, we&#8217;re going to assume a decent hit rate on humans sharing interesting thoughts and wrestling with hard problems.</p><p>The second axis is how the idea gets presented. As a software engineer, I think of the presentation as merely a &#8220;display layer.&#8221; In engineering, we&#8217;d consider whether functionality was being exposed via an API, a GUI, a mobile app, a chatbot, etc. These are different display layers. Genuinely useful products can become useless because of a bad interface. However, a product with no underlying utility cannot be saved by a nice display layer.</p><p>Given the time I am willing to put in, I cannot create a high enough quality display layer for my work. Yes, I could probably spend years learning the craft, but that opportunity cost is too high. AI lowers the barrier, enabling me to produce readable content quickly. That is the second axis of our truth table: whether the &#8220;display layer,&#8221; the text itself, is AI-generated. We should be totally okay with the quadrant where the idea is human-generated but the text is AI-written. Accepting that quadrant doesn&#8217;t diminish content that is fully human-generated and human-written. We can safely ignore all AI-generated ideas (at least for now).</p><p>My commitment in writing this blog is that I will never publish AI-generated ideas. I will not say, &#8220;Hey Claude, write an article in my voice about unit tests.&#8221; I will continue to take a human-generated thesis or outline and ask Claude to turn it into text. We are going to live in the top-right corner of that truth table.</p><p>I am not a <strong>W</strong>riter. 
My differentiation is not in the craft or style of my prose; the prose is simply a means of providing a display layer for the conversations and theses I want to share. With AI, I can take an idea that I&#8217;ve already baked and produce a full article in 15 minutes at a higher level of polish than if I had put in hours. For people who <em>are</em> Writers, I understand why relying on AI to write the final text generally would not make sense; the prose itself is part of their differentiator. Naturally, the lower barrier to entry for people who write with AI, as I do, makes that field more competitive. I can imagine that is frustrating because it was already a brutal field.</p><p>My hope is that as consumers we can learn to distinguish the AI text generation that helps share interesting ideas from that which does not, instead of immediately disregarding it. I look forward to continuing this experiment here in sharing ideas and takes that I think are genuinely interesting and challenging.</p>]]></content:encoded></item><item><title><![CDATA[Building a High Velocity Engineering Culture]]></title><description><![CDATA[Because speed improves every other metric]]></description><link>https://blog.cgardens.dev/p/building-a-high-velocity-engineering</link><guid isPermaLink="false">https://blog.cgardens.dev/p/building-a-high-velocity-engineering</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Fri, 23 Jan 2026 14:03:14 GMT</pubDate><content:encoded><![CDATA[<p>At some point, a CEO will come to you and say:</p><blockquote><p>&#8220;Things feel slow. We need an eng velocity metric.&#8221;</p></blockquote><p>This is almost always well-intentioned. We should always want to move faster. And in practice, engineering will almost always feel slower than we want it to be&#8212;especially from the CEO&#8217;s perspective. That tension is normal and permanent. 
If it ever goes away, something is wrong.</p><p>It&#8217;s tempting to respond by looking for a single number that explains the feeling. I&#8217;ve read extensively on this topic&#8212;<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Abi Noda&quot;,&quot;id&quot;:98623269,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a1c4cfd-639c-420c-8b20-a0a400c74265_3071x3071.jpeg&quot;,&quot;uuid&quot;:&quot;fd5fefe4-bc6b-4105-98ef-c1394cd774c8&quot;}" data-component-name="MentionToDOM"></span>&#8217;s <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Engineering Enablement&quot;,&quot;id&quot;:996688,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/abinoda&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7dbd433b-6f11-4042-8b7d-0edb3b172966_1024x1024.png&quot;,&quot;uuid&quot;:&quot;cdd6166b-5052-4cb6-b37e-f38412320046&quot;}" data-component-name="MentionToDOM"></span> is a favorite&#8212;and the conclusion is consistent: there is no single metric that can definitively tell you whether an engineering organization is &#8220;fast.&#8221; Velocity matters, but it does not collapse cleanly into one measurement you can manage directly.</p><p>While no single metric exists, these are the tools I reach for when building a high-velocity culture, drawn from research and hard-earned experience.</p><div><hr></div><h2><strong>Talk About Velocity. A Lot.</strong></h2><p>Culture changes because people talk about something repeatedly and without embarrassment.</p><p>If you want a high-velocity engineering culture, velocity has to be discussed openly and often. Not as a threat. Not as a performance weapon. As a shared goal. 
People should hear it in planning meetings, retros, one-on-ones, and hallway conversations.</p><p>This starts with stating the obvious: of course we want to be faster. Of course there is room to improve. Saying this out loud regularly matters more than most process changes.</p><p>Wins should be loud. Shipping should be celebrated. Progress should be visible. This reinforces that velocity exists to help engineers do their best work, not to pressure them into cutting corners.</p><p>This also creates a shared language. Talking about speed should not feel like an accusation or a critique of individual engineers. It should feel normal, neutral, and collective&#8212;something the team owns together.</p><p>As a side effect, this helps with managing up. When velocity is a consistent topic, CEOs and execs feel that it&#8217;s being actively worked on. They stop asking for a single magic metric and start trusting the system.</p><div><hr></div><h2><strong>Identify What&#8217;s Slowing You Down</strong></h2><p>You cannot make a team faster if you don&#8217;t know where time is being lost.</p><p>Some sources of drag are technical. Builds are slow. Tests are flaky. Deploys are fragile or mysterious. Engineers are usually very aware of these.</p><p>Others are organizational, and these are often harder to surface. I had a CTO friend recently realize that one reason teams were reluctant to ship early was that the ops team would yell at engineers any time a change was made without weeks of advance notice for training. From the outside, it looked like a lack of urgency. In reality, it was a rational response to punishment.</p><p>A high-velocity culture makes it easy to surface this friction. The cost of reporting drag should be close to zero; otherwise, people simply won&#8217;t do it. Engineers should be able to say, &#8220;This slowed me down,&#8221; without defensiveness or fear of consequences.</p><p>One simple mechanism is a lightweight velocity retro at the end of a project. 
Ten minutes. No slides. Just a central place where engineers can drop a list of everything that made progress slower than it needed to be. We especially want the embarrassing stuff, so we have the opportunity to fix it.</p><p>This process must be continuous. If friction is only discussed occasionally, it will compound faster than you can remove it. Teams that feel fast are not the ones without problems&#8212;they&#8217;re the ones that surface and address them quickly.</p><div><hr></div><h2><strong>Measure What Slows You Down</strong></h2><p>While measuring velocity directly is close to impossible, measuring the things that slow you down is usually straightforward.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Start with the inner development loop: edit, build, test. This loop is sacred. If, after each change, an engineer has to wait long enough that their attention drifts, velocity is already gone.</p><p>Build time, test time, and deploy time in CI matter more than almost anything else. If this loop feels slow or unreliable, nothing else in the system will feel fast.</p><p>Beyond the core loop, there are other common sources of measurable drag:</p><ul><li><p>Code review turnaround time</p></li><li><p>Time to deploy (if deploying is slow, people do it less, and iteration slows with it)</p></li><li><p>Time to first deploy</p></li><li><p>Flaky tests and intermittent failures</p></li><li><p>Hard-to-read code or missing patterns</p></li></ul><p>For engineering work, having a concrete quantitative metric to ratchet down is powerful. When you&#8217;re confident something is slowing you down, measure it and reduce it. 
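</p><p>As a concrete illustration of the &#8220;measure it and reduce it&#8221; step, here is a minimal sketch for one drag metric, code review turnaround. The PR timestamps and field names below are hypothetical; in practice this data would come from your Git host&#8217;s API or from tools like DX or LinearB.</p>

```python
# Illustrative sketch: quantify one source of drag (code review turnaround).
# The PR records below are made-up; real data would come from your Git host.
from datetime import datetime
from statistics import median

def review_turnaround_hours(prs):
    """Hours from PR creation to first review, for each reviewed PR."""
    waits = []
    for pr in prs:
        created = datetime.fromisoformat(pr["created_at"])
        first_review = datetime.fromisoformat(pr["first_review_at"])
        waits.append((first_review - created).total_seconds() / 3600)
    return waits

prs = [
    {"created_at": "2026-01-05T09:00:00", "first_review_at": "2026-01-05T15:30:00"},
    {"created_at": "2026-01-06T11:00:00", "first_review_at": "2026-01-08T10:00:00"},
    {"created_at": "2026-01-07T14:00:00", "first_review_at": "2026-01-07T16:00:00"},
]

waits = review_turnaround_hours(prs)
# The median is the number to ratchet down over time.
print(f"median review turnaround: {median(waits):.1f}h")  # prints: median review turnaround: 6.5h
```

<p>The same shape works for any measurable drag source on the list above: build time, test time, or deploy time.</p><p>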
You may not be able to measure velocity directly, but you can measure drag.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><div><hr></div><h2><strong>Build a Productivity Baseline (Carefully)</strong></h2><p>You need a productivity baseline.</p><p>Not because maximizing output is the goal, but because without baseline data, the right conversations don&#8217;t happen. &#8220;It feels slow&#8221; is not something you can debug. Concrete data gives teams a shared starting point for understanding what changed and why.</p><p>Story points (or an equivalent) are sufficient for this. They don&#8217;t need to be precise; they need to be consistent. Shipping roughly 20 points one sprint, 22 the next, and then 15 after that is enough to ask useful questions. Without that data, those questions simply don&#8217;t happen.</p><p>Each team should also be using a productivity tool to collect this data. Tools like DX, LinearB, Jellyfish, and similar systems are best thought of as instrumentation. They surface patterns around PR throughput, cycle time, and deploy frequency that are otherwise hard to see.</p><p>The caution is in how these tools are used. The moment a baseline becomes a target, behavior will shift to optimize the number instead of the system. That&#8217;s the predictable Goodhart&#8217;s Law failure mode.</p><p>This is where the earlier, more qualitative cultural work matters. If you actually did the work to build a culture that genuinely cares about velocity, this data becomes empowering. Teams use it to be curious about their own bottlenecks and improve without being told what to do. If you skipped that part and jumped straight to dashboards, the same data will feel like overhead or surveillance&#8212;and will be treated accordingly.</p><div><hr></div><h2><strong>Holding Leaders Accountable for a Velocity Culture</strong></h2><p>Velocity does not sustain itself. 
It has to be actively propagated.</p><p>Leaders are responsible for keeping velocity present in how work is planned, discussed, and reflected on. That means making speed a normal topic of conversation&#8212;not a reaction to pressure, not a postmortem after something goes wrong, and not a proxy for performance management.</p><p>For any meaningful project, leaders should be pushing on timelines, not to create stress, but to surface assumptions. How long do we think this should take? Is that timeline conservative? What would have to change for us to be more aggressive? Asking what it would take to do the work in half the time is a useful forcing function. Not because it&#8217;s realistic, but because it reveals where scope, process, or sequencing can be rethought.</p><p>While there&#8217;s no clean way to roll velocity up into a single aggregate number, teams that care about velocity develop a strong intuition for it at the project level. &#8220;This used to take us a month, now it takes a week.&#8221; &#8220;Why is it so hard for us to upgrade to the next version of Postgres?&#8221; It is much easier to have a quantitative intuition for what fast looks like at a project level. Of course it&#8217;s not the magic eng velocity number that the CEO asked for, but sampling how long individual projects take can help build up data to better understand velocity.</p><p>That intuition only forms when leaders consistently reinforce it. By creating space for teams to notice when work feels slow, when progress stalls, or when a change meaningfully compresses timelines, leaders turn those observations into shared context. Retros surface what sped things up or slowed them down, and follow-through removes friction.</p><p>It&#8217;s intentional that this section keeps saying leaders. This work can&#8217;t be centralized. A VP of Engineering can&#8217;t be everywhere, and shouldn&#8217;t try to be. Velocity culture spreads through leaders throughout the organization. 
While this should be a core responsibility of engineering managers and staff engineers, there are many kinds of leaders in an engineering organization, and all of them should be accountable for nurturing a culture of velocity. In engineering, this looks like engineers jumping into reviews unprompted, fixing flaky tests they didn&#8217;t write, improving build scripts they don&#8217;t &#8220;own,&#8221; and noticing when someone is stuck before it becomes a ticket. You see this same pattern in elite sports teams, emergency rooms, trading floors, and flight decks &#8212; places where collective tempo matters more than individual heroics.</p><div><hr></div><h2><strong>A Note on AI and Velocity</strong></h2><p>It would be strange to write about engineering velocity in 2026 without acknowledging that AI is already changing how software gets built.</p><p>This piece is intentionally not about AI tactics. That&#8217;s a long conversation unto itself, and this post is already much longer than our median article. What matters here is that teams with a strong velocity culture naturally start asking better questions about how AI fits into their work.</p><p>They don&#8217;t ask, &#8220;How do we mandate AI usage?&#8221; They ask, &#8220;What&#8217;s slowing us down, and can AI remove it?&#8221; Can mundane work be turned into background tasks that generate PRs while engineers sleep? Can reviews, refactors, or migrations be accelerated without increasing risk? Can development environments be reshaped so that product requirements flow directly into agents that can build or scaffold features themselves?</p><p>Consider this a teaser on some of the most interesting ways that AI can accelerate engineering teams.</p><div><hr></div><h2><strong>Engineers Need to Ship</strong></h2><p>I worry that this focus on velocity can sound crushing at first. That reaction is understandable. Velocity often gets conflated with grinding harder. They are orthogonal. 
This approach isn&#8217;t about grind; it&#8217;s about intentional growth and improvement.</p><p>Is it intense? Yes. When it&#8217;s working, people are locked in and working hard. Great engineers have a deep-seated desire to ship great things. Velocity alone won&#8217;t fix poor compensation, bad management, or a weak mission &#8212; but without velocity, even those rarely hold great engineers for long.</p><p>A healthy velocity culture creates an environment where engineers can work on important problems, move with urgency, and see their work land in the world. Over time, that turns velocity from a management concern into a powerful part of your retention strategy. A high-velocity engineering culture creates a virtuous cycle: it helps grow your business while attracting and retaining talent.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Credit to <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Engineering Enablement&quot;,&quot;id&quot;:996688,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/abinoda&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7dbd433b-6f11-4042-8b7d-0edb3b172966_1024x1024.png&quot;,&quot;uuid&quot;:&quot;672d226b-66f7-42ad-8c23-3e3c2d3bf42b&quot;}" data-component-name="MentionToDOM"></span> for introducing me to this idea. 
</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:114153512,&quot;url&quot;:&quot;https://newsletter.getdx.com/p/cycle-time&quot;,&quot;publication_id&quot;:996688,&quot;publication_name&quot;:&quot;Engineering Enablement&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Niij!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dbd433b-6f11-4042-8b7d-0edb3b172966_1024x1024.png&quot;,&quot;title&quot;:&quot;The Case Against Measuring Cycle Time&quot;,&quot;truncated_body_text&quot;:&quot;This is the latest issue of my newsletter. Each week I cover research, opinion, or practice in the field of developer productivity and experience. This week is an article I wrote about cycle time.&quot;,&quot;date&quot;:&quot;2023-04-14T08:01:56.351Z&quot;,&quot;like_count&quot;:11,&quot;comment_count&quot;:4,&quot;bylines&quot;:[{&quot;id&quot;:98623269,&quot;name&quot;:&quot;Abi Noda&quot;,&quot;handle&quot;:&quot;abinoda&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a1c4cfd-639c-420c-8b20-a0a400c74265_3071x3071.jpeg&quot;,&quot;bio&quot;:&quot;Co-founder, CEO at DX&quot;,&quot;profile_set_up_at&quot;:&quot;2022-07-10T22:05:16.103Z&quot;,&quot;reader_installed_at&quot;:null,&quot;publicationUsers&quot;:[{&quot;id&quot;:941772,&quot;user_id&quot;:98623269,&quot;publication_id&quot;:996688,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:996688,&quot;name&quot;:&quot;Engineering Enablement&quot;,&quot;subdomain&quot;:&quot;abinoda&quot;,&quot;custom_domain&quot;:&quot;newsletter.getdx.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Research and perspectives on developer productivity. 
&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7dbd433b-6f11-4042-8b7d-0edb3b172966_1024x1024.png&quot;,&quot;author_id&quot;:98623269,&quot;primary_user_id&quot;:98623269,&quot;theme_var_background_pop&quot;:&quot;#BAA049&quot;,&quot;created_at&quot;:&quot;2022-07-10T22:35:31.420Z&quot;,&quot;email_from_name&quot;:&quot;Engineering Enablement&quot;,&quot;copyright&quot;:&quot;Abi Noda&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://newsletter.getdx.com/p/cycle-time?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!Niij!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dbd433b-6f11-4042-8b7d-0edb3b172966_1024x1024.png" loading="lazy"><span class="embedded-post-publication-name">Engineering Enablement</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">The Case Against Measuring Cycle Time</div></div><div class="embedded-post-body">This is the latest issue of my newsletter. 
Each week I cover research, opinion, or practice in the field of developer productivity and experience. This week is an article I wrote about cycle time&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">3 years ago &#183; 11 likes &#183; 4 comments &#183; Abi Noda</div></a></div></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Extra points for simple metrics. You want something where it&#8217;s obvious how changes engineering can make will move the number. The more synthesized a metric becomes, the harder it is to see that connection, and the less effective it is at focusing execution.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[What the AI Debate Gets Wrong About Progress]]></title><description><![CDATA[Because transition costs are asymmetric]]></description><link>https://blog.cgardens.dev/p/what-the-ai-debate-gets-wrong-about</link><guid isPermaLink="false">https://blog.cgardens.dev/p/what-the-ai-debate-gets-wrong-about</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Thu, 22 Jan 2026 18:47:54 GMT</pubDate><content:encoded><![CDATA[<p>The conversation around AI tends to collapse into a strangely binary shape.</p><p>On one side, there&#8217;s a calm, spreadsheet-friendly story: AI is just another productivity tool. It nudges GDP up a couple of points, makes companies more efficient, and otherwise fits neatly into existing economic models.</p><p>On the other side, there&#8217;s a much louder story: widespread job loss, social upheaval, and a fundamental break from how work has functioned for the last century.</p><p>What gets lost is that these two narratives aren&#8217;t actually in conflict. 
They&#8217;re describing different points on the same timeline.</p><p>In the short run, the disruption-focused view is often right. Technological change hits faster than people can adapt. Skills depreciate. Institutions lag. The impact is uneven and concentrated.</p><p>In the long run, the steady-growth view usually wins. Productivity gains compound, new roles emerge, and the next generation adapts by default by growing up inside the new system.</p><p>The mistake is pretending that this timeline is smooth.</p><p>Technological change doesn&#8217;t fail loudly or succeed quietly. It succeeds eventually, after a period where a very specific group absorbs most of the cost.</p><div><hr></div><p>When massive technological or economic change hits, humanity eventually figures it out. One or two generations later, things mostly work. New jobs appear. Institutions adapt. People point at the GDP chart and declare success.</p><p>But the generation caught in the transition? They tend to get wrecked.</p><p>The cleanest recent example is the <strong>China shock</strong>.</p><p>US GDP was fine. Consumers benefited. Economists nodded. Meanwhile, a mid-career factory worker with location-bound, asset-specific skills lost their job, their leverage, and often their community. &#8220;Just retrain&#8221; is great advice if you&#8217;re 19. It&#8217;s a gamble if you&#8217;re 45.</p><p>The pattern repeats:</p><ul><li><p>The Industrial Revolution eventually raised living standards, after decades of brutal work and political conflict</p></li><li><p>Agricultural mechanization hollowed out rural labor</p></li><li><p>Software quietly ate clerical work and entry-level white-collar roles</p></li></ul><p>Net positive. Locally catastrophic.</p><div><hr></div><p>Viewed through this lens, the AI trajectory looks fairly predictable. For the work that already exists today, we&#8217;re going to need far fewer people to do it. 
In practice, that shows up as job replacement.</p><p>The impact won&#8217;t be evenly distributed. Some sectors will absorb the shock first &#8212; customer support is the canary. When AI meaningfully pairs with robotics, displacement stops being confined to screens and starts showing up in physical work as well. Software engineering won&#8217;t be spared either: teams get much smaller, individual leverage goes up, and output concentrates.</p><p>What <em>is</em> clear is that much of what we currently think of as the most valuable and productive work no longer needs humans &#8212; or at least, far fewer of them. That severs a link we&#8217;ve relied on for a long time: productivity as the primary justification for livelihood. It&#8217;s a structural break that societies won&#8217;t metabolize quickly.</p><p>From there, several equilibria are possible. Maybe abundance enables stronger social systems, and a decent quality of life becomes less tightly coupled to formal employment. Maybe we invent new kinds of work &#8212; after all, we already have plenty of post-subsistence jobs. Or maybe some mix of Jevons&#8217; paradox and the effective defeat of Baumol&#8217;s cost disease explodes entrepreneurship and cheap consumption in ways we haven&#8217;t fully internalized yet.</p><p>AI is an enormous unlock for humanity. The question isn&#8217;t whether the system benefits in aggregate &#8212; it almost certainly will. 
The question is who bears the transition cost, and when.</p>]]></content:encoded></item><item><title><![CDATA[Moving Beyond Execution-Based Engineering Scaling]]></title><description><![CDATA[Because AI types faster than you and doesn&#8217;t need a lunch break]]></description><link>https://blog.cgardens.dev/p/moving-beyond-execution-based-engineering</link><guid isPermaLink="false">https://blog.cgardens.dev/p/moving-beyond-execution-based-engineering</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Wed, 21 Jan 2026 22:56:33 GMT</pubDate><content:encoded><![CDATA[<p>We used to scale engineering teams based on <strong>execution capacity</strong>. With AI, we&#8217;ll scale them based on <strong>cognitive load</strong>.</p><p>In the old model, if an inventory feature needed six engineers&#8217; worth of work and you only had three, you hired three more (four, if you&#8217;d been burned by attrition before). More work meant more humans because humans were the execution layer.</p><p>In the AI model, agents handle most of the execution. You still need a small number of engineers who deeply understand the feature&#8212;at least two, to avoid bus factor&#8212;but if you need more execution throughput, you buy it as tokens, not headcount.</p><p>By cognitive load, I mean something very specific: deep ownership of a system. Understanding how it works technically, how it serves the customer, where it can fail, and which invariants actually matter. This has to live with an engineer. If secrets leak or data is corrupted, you can&#8217;t reasonably hold a PM&#8212;or today&#8217;s app-builder tools&#8212;responsible. It&#8217;s possible those tools eventually get good enough to own this kind of accountability, but they&#8217;re not there yet. You still need a throat to choke, and it needs to belong to someone who actually understands the system.</p><p>That&#8217;s the real limit. 
Not how much code can be written, but how many systems an engineer can accurately hold in their head and be accountable for at once.</p><p>A few corollaries fall out of this.</p><p>First, the more AI reduces cognitive load, the smaller engineering teams can be without breaking. Execution is cheap; understanding is not.</p><p>Second, for agents to act like real execution engineers, we need to be far better at sharing context with them. They need to know the codebase, the invariants, and the &#8220;why,&#8221; not just generate plausible diffs. That implies documentation living with the code. Engineers won&#8217;t reliably do this, but agents can&#8212;if we make auto-updating docs part of the workflow.</p><p>Third, engineers are going to write far less code. Possibly none. Their primary job becomes reviewing generated code submitted by PMs or other non-engineers, or handing designs to agents and letting them implement. Even when engineers make major changes, they&#8217;ll often do it by directing an agent rather than typing everything themselves.</p><p>The underlying AI is probably good enough already. What&#8217;s missing are the tools: context management, infra automation, and guardrails that reduce supervision overhead instead of adding to it.</p><p>In the AI era, teams won&#8217;t scale with how much work needs to get done. They&#8217;ll scale with how much responsibility a human can realistically carry.</p>]]></content:encoded></item><item><title><![CDATA[RTO: From Religion to Systems Design]]></title><description><![CDATA[Because holy wars make bad operating principles]]></description><link>https://blog.cgardens.dev/p/rto-from-religion-to-systems-design</link><guid isPermaLink="false">https://blog.cgardens.dev/p/rto-from-religion-to-systems-design</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Thu, 15 Jan 2026 19:17:32 GMT</pubDate><content:encoded><![CDATA[<p>Remote vs in-person debates in startups have the energy of a theological schism. 
Lots of vibes. Very little accounting. Having led engineering at Airbyte through a shift from fully remote to in-person, I&#8217;ve picked up a few hard-earned lessons along the way. What tends to get lost in these conversations is that they drift into ideology, when in reality, like any other leadership decision, remote vs in-person is just a set of trade-offs.</p><div><hr></div><h3><strong>Remote: Talent First, Pay in Process</strong></h3><p>Great companies are built on great talent. Remote&#8217;s core advantage is simple: it massively expands the talent pool. Even with time zone constraints, you can hire people you&#8217;d never reach otherwise.</p><p>The cost is that context, trust, and momentum don&#8217;t emerge naturally. They have to be artificially manufactured. Meetings replace osmosis. Docs replace hallway conversations. Slack replaces tone. Team bonding follows the same pattern: in person you eat lunch together and build relationships by default; remotely you schedule offsites and Zoom &#8220;bonding.&#8221; None of this is fatal&#8212;but all of it is work. And it shows up most clearly in ideation: good ideas come from people talking to each other, and almost no one wants to casually brainstorm on Zoom the way they will at a whiteboard.</p><p>Remote is also meaningfully worse at training junior people. We are <em>still</em> waiting for someone to publish the seminal blogpost on how they solved growing new grads in a remote environment.</p><p>In a startup, time is the scarcest resource. Remote teams pay a small coordination tax per person relative to in-person teams, and that tax compounds nonlinearly as the team grows.</p><div><hr></div><h3><strong>In-Person: Energy First, Optionality Reduced</strong></h3><p>In-person optimizes for speed, energy, and tight feedback loops. Alignment is cheaper. Intent survives the trip from one brain to another.</p><p>The tradeoff is talent optionality. 
You&#8217;re bound to a local labor market, which in places like SF is extremely cyclical. In the last five years alone, roughly half the time it&#8217;s been nearly impossible to hire (crypto, AI booms), and the other half merely difficult.</p><p>That volatility is the price.</p><div><hr></div><h3><strong>The Middle Is the Failure Mode</strong></h3><p>Most problems come from unclear optimization. That&#8217;s how you get the worst-of-all-worlds setup: nominally in-person, functionally remote, culturally confused.</p><p>If you choose in-person, it has to actually be in-person. That means accepting reduced access to parts of the talent pool. That&#8217;s the trade. In-person maximizes energy and commitment. Remote maximizes access to exceptional talent.</p><p>Both are fine. Waffling is not.</p><div><hr></div><h3><strong>Conclusion</strong></h3><p>This isn&#8217;t about ideology. It&#8217;s about systems design. Every model has costs, and pretending otherwise just pushes those costs into places you&#8217;re not watching. My intuition is that AI-driven smaller teams push the needle toward remote, but not decisively. Smaller teams blunt coordination costs, yet also make co-location feasible. In the meantime, the best thing you can do is be explicit about what you&#8217;re optimizing for, honest about the costs, and willing to commit. 
Ambiguity is the only option that reliably makes everyone worse off.</p>]]></content:encoded></item><item><title><![CDATA[Unlocking Ephemeral Testing with Generative AI: Part Two]]></title><description><![CDATA[Because zero marginal cost reshapes what tests are even for]]></description><link>https://blog.cgardens.dev/p/unlocking-ephemeral-testing-with-a12</link><guid isPermaLink="false">https://blog.cgardens.dev/p/unlocking-ephemeral-testing-with-a12</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Thu, 04 Dec 2025 00:00:28 GMT</pubDate><content:encoded><![CDATA[<p><em>This article was originally published on the <a href="https://airbyte.com/blog/ephemeral-testing-with-generative-ai-part-two">Airbyte Blog</a>. </em></p><div><hr></div><p>In <a href="https://cgardens.substack.com/p/unlocking-ephemeral-testing-with">Part 1</a>, we talked about using LLMs to generate <em>ephemeral tests</em> that unfreeze legacy code &#8212; basically treating your AI as a bored-but-willing intern who will brute-force the emergent contract you&#8217;re too scared to guess at.</p><p>But there&#8217;s an adjacent point I keep circling back to: if the marginal cost of writing a test is now effectively zero, we&#8217;ve been massively underusing tests <em>everywhere</em>, not just in fossilized code.</p><p>Most of us only write tests when we &#8220;need to.&#8221; Translation: right before we touch a landmine. But writing tests as part of the exploratory process? Understanding new tools? Probing edge cases you didn&#8217;t think of? Historically that&#8217;s been too slow, too expensive, and too annoying. Now it&#8217;s just a cheap prompt away.</p><h3><strong>1. 
Testing unfamiliar systems: documentation by empiricism</strong></h3><p>This is basically the same move as Part 1 but for greenfield work.</p><p>You pick up a new library, framework, or middleware &#8212; say, the thing at my job that magically turns incoming JSON into Kotlin objects, plus some undocumented set of delightful quirks that everyone politely ignores.</p><p>Before, your options were:</p><ul><li><p>Read the docs (ha).</p></li><li><p>Read the source (double ha).</p></li><li><p>Ship something, hit prod, and learn what <em>actually</em> happens (the traditional method).</p></li></ul><p>Now you can just have an LLM carpet-bomb the thing with tests and infer the behavioral terrain map. Ask it for dozens of permutations: missing fields, extra fields, weird nesting, bad types, whitespace crimes &#8212; all the delightful real-world entropy the docs never mention.</p><p>Most of these tests you&#8217;ll delete. A few you might keep, because they reveal a &#8220;fun&#8221; subtlety that Future Developer (which is also Present Developer + six months of memory decay) is going to trip over.</p><p>It&#8217;s like doing reconnaissance on a foreign API. Except now the recon is both free and tireless.</p><h3><strong>2. 
Testing things you didn&#8217;t think of</strong></h3><p>At a recent conference, George Fraser from Fivetran said something to the effect of:</p><blockquote><p>&#8220;I get an LLM&#8217;s opinion on everything I do, because sometimes it notices something I don&#8217;t.&#8221;</p></blockquote><p>That stuck with me.</p><p>Not because I need more opinions in my life (I write software; opinions are my primary export), but because it&#8217;s the exact same philosophy as ephemeral testing:</p><p>Ask the model to test your code for cases you never considered.<br><strong>Worst case:</strong> the tests are useless and you don&#8217;t commit them.<br><strong>Best case:</strong> it surfaces a weird edge case that would have cost you a day of debugging and a grumpy Slack thread.</p><p>Treat your LLM like an extremely pedantic coworker who specializes in pointing out the one thing you forgot. The key difference is that you don&#8217;t owe the LLM coffee or emotional labor.</p><h3><strong>3. The next frontier: acceptance testing via prompt</strong></h3><p>This last bit is more speculative, but I&#8217;m increasingly convinced that some tests shouldn&#8217;t be written in code at all.</p><p>Long-lived automated tests age poorly. They embed outdated assumptions in a thousand helper functions and silently pass even after they stop exercising the actual code path.</p><p>But a prompt like:</p><blockquote><p>&#8220;When I update the number of records moved in a replication job, the job summary returns the updated count.&#8221;</p></blockquote><p>&#8230;is short, human-readable, and tightly scoped to <em>intent</em>, not implementation.</p><p>Imagine a small suite of English prompts that represent the product&#8217;s core acceptance criteria. 
As part of CI, you ask an LLM to execute those prompts against your system and confirm that reality matches the story.</p><p>Two nice properties fall out of this:</p><ol><li><p><strong>It detects mismatches between the code and the canonical user-facing behavior. </strong>Maybe the old tests still hit the v1 endpoint, while your docs point to v2. A human might miss that; an LLM poking at the surface won&#8217;t.</p></li><li><p><strong>Anyone can write prompts. </strong>We needed QA teams because the tools for encoding and checking product level behavior required a lot of technical expertise. If PMs, designers, and support can maintain their own prompts to describe what workflows they care about, that unlocks entirely new ways of approaching QA.</p></li></ol><p>Of course, there&#8217;s an obvious landmine: LLMs are nondeterministic, and flaky tests are how engineering teams slowly lose their will to live.</p><p>So I&#8217;m not claiming victory here. This idea needs more experimenting &#8212; guardrails, temperature controls, scenario anchoring, maybe multiple-run consensus. But the shape of the opportunity is interesting: acceptance tests that read like requirements, not code.</p><h3><strong>The bigger shift: when tests hit zero marginal cost</strong></h3><p>This whole series boils down to a simple economic shift: writing tests just dropped from &#8220;painful but virtuous&#8221; to basically zero marginal cost. And any time something useful collapses to zero marginal cost (think Ben Thompson and <a href="https://stratechery.com/aggregation-theory/">Aggregation Theory</a>), the right question isn&#8217;t &#8220;How do we do the old thing cheaper?&#8221; It&#8217;s &#8220;What new behaviors does this unlock?&#8221;</p><p>That&#8217;s the invitation here. 
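</p><p>As a minimal sketch of what the multiple-run consensus idea could look like, here is one hedged take. Everything below is hypothetical: the <code>ask_llm</code> judge is a stand-in callable, not a real API, and a real judge would actually drive the LLM against a live environment:</p>

```python
from collections import Counter

def consensus_check(criterion, ask_llm, runs=5, threshold=0.8):
    """Ask an LLM judge to evaluate an English acceptance criterion
    several times, and require a supermajority of PASS verdicts so a
    single nondeterministic flake doesn't fail the build."""
    verdicts = [ask_llm(criterion) for _ in range(runs)]
    return Counter(verdicts)["PASS"] / runs >= threshold

# Stub judge for illustration only; a real one would exercise the
# system under test and parse the model's verdict.
def stub_judge(criterion):
    return "PASS"

ok = consensus_check(
    "When I update the number of records moved in a replication job, "
    "the job summary returns the updated count.",
    ask_llm=stub_judge,
)
```

<p>Tuning <code>runs</code> and <code>threshold</code> trades token cost against confidence; the sketch deliberately says nothing about how the judge exercises the system, which is the genuinely hard part.</p><p>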
Once tests are cheap, they stop being artifacts you carefully curate and start becoming probes &#8212; disposable instruments for exploring unfamiliar code, mapping legacy behavior, surfacing missed edges, or even expressing acceptance criteria in plain English. The examples in these posts are just early sketches. The real point is: we should get weird and creative again. When tests cost nothing, the space of things worth testing suddenly gets a lot bigger.</p>]]></content:encoded></item><item><title><![CDATA[The Fleeting Life of an LLM]]></title><description><![CDATA[Because someone had to anthropomorphize them]]></description><link>https://blog.cgardens.dev/p/the-fleeting-life-of-an-llm</link><guid isPermaLink="false">https://blog.cgardens.dev/p/the-fleeting-life-of-an-llm</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Tue, 02 Dec 2025 00:01:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MgxQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MgxQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MgxQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png 424w, 
https://substackcdn.com/image/fetch/$s_!MgxQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png 848w, https://substackcdn.com/image/fetch/$s_!MgxQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png 1272w, https://substackcdn.com/image/fetch/$s_!MgxQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MgxQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png" width="1204" height="746" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:746,&quot;width&quot;:1204,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:66919,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://cgardens.substack.com/i/180156069?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!MgxQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png 424w, https://substackcdn.com/image/fetch/$s_!MgxQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png 848w, https://substackcdn.com/image/fetch/$s_!MgxQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png 1272w, https://substackcdn.com/image/fetch/$s_!MgxQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c7580db-0c58-4ae5-bc95-8b4c6b892ada_1204x746.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Humans start with nothing&#8212;no knowledge, no skills, barely a sense of which way is up. Our mental acuity ramps fast, peaks around 35, and then begins the long, polite glide path toward &#8220;why did I walk into this room?&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> But knowledge keeps accumulating long after the sharpness fades, which is why our overall usefulness often rises well past our cognitive prime.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Eventually, either our acuity drops too low to access what we know, or what we know stops mattering&#8212;but until then, we lean on experience.</p><p>LLMs live a similar lifecycle&#8212;just accelerated to the point of comedy. They spawn at peak intelligence, like a newborn who&#8217;s already finished grad school. But from that very first token, due to attention dilution, they&#8217;re on the decline. You&#8217;re in a desperate race to shove context in before the model forgets why you&#8217;re talking to it, who you are, or what century it is. After fifteen minutes, it&#8217;s basically your friend at 2 a.m. 
insisting they&#8217;re &#8220;totally fine to drive.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QRv0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QRv0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!QRv0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!QRv0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!QRv0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QRv0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg" width="322" height="429.3333333333333" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:750,&quot;resizeWidth&quot;:322,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Brain Surge Sticker&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Brain Surge Sticker" title="Brain Surge Sticker" srcset="https://substackcdn.com/image/fetch/$s_!QRv0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!QRv0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!QRv0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!QRv0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7925aba6-6728-4321-b3fe-f759b59ef298_750x1000.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" 
stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Humans are racing against time; LLMs are racing against the context window (e.g. attention dilution, noise, instruction drift, etc). That&#8217;s the key difference. We learn by doing things&#8212;stacking up lived experience to offset declining throughput. LLMs, meanwhile, get dumber <em>because</em> they&#8217;re doing things. Every new token is both &#8220;experience&#8221; and a small act of self-erosion. It would be as if reading a page in a book made you instantly worse at reading the next page.</p><p>The reason I&#8217;m thinking about all this is that working with LLMs forced me into a strange kind of self-reflection. At first, their rapid slide into confusion felt completely foreign, and I had to invent new ways of giving them work&#8212;splitting tasks, adding structure, managing their attention. But the longer I sat with it, the more humbling the realization became: this isn&#8217;t alien at all. 
It&#8217;s a compressed version of our own lives. We build systems and habits to compensate for fading focus, limited memory, and the hope that accumulated experience will outrun the entropy. LLMs just do the whole thing on fast-forward.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Intentionally not using scientific terms here. &#8220;Acuity&#8221; is handwaving over some notion of mental horsepower.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&#8220;Knowledge&#8221; is handwaving over learned skills, experience, pattern recognition, facts, etc. All the stuff that makes you better at something that you had to learn or practice. </p></div></div>]]></content:encoded></item><item><title><![CDATA[Unlocking Ephemeral Testing with Generative AI: Part One]]></title><description><![CDATA[Because sometimes the best tests are the ones you leave behind]]></description><link>https://blog.cgardens.dev/p/unlocking-ephemeral-testing-with</link><guid isPermaLink="false">https://blog.cgardens.dev/p/unlocking-ephemeral-testing-with</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Fri, 28 Nov 2025 03:48:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a3ae12e0-35e9-4dba-9dd7-a4c687f3ecbe_1024x1792.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was originally published on the <a href="https://airbyte.com/blog/ephemeral-testing-with-generative-ai">Airbyte Blog</a>.</em></p><div><hr></div><p>Every codebase has <em>that</em> file.</p><p>The one everyone tiptoes around in code review.<br>The one that&#8217;s &#8220;critical&#8221; and &#8220;old&#8221; and &#8220;owned 
by&#8230; someone who left three years ago.&#8221;<br>The one with no tests, no docs, and a suspicious # TODO: refactor from 2019.</p><p>What happens to that code? It freezes.<br>No one wants to touch it. Or when they <em>do</em> touch it, production catches fire and we all learn, again, that fear is a perfectly rational emotion.</p><p>AI lets us unfreeze this code by generating piles of tests that reveal the emergent contract of the existing code before rewriting anything.</p><p>This post is the first in a small series on &#8220;ephemeral testing&#8221;: an exploration of how generative AI unlocks new approaches to testing. In the past, tests were slow to write and expensive to maintain. With LLMs, they&#8217;re cheap &#8212; which opens up entirely new ways to use them.</p><div><hr></div><h2><strong>Legacy code without context: the perfect AI target</strong></h2><p>The pattern is simple:</p><ol><li><p>You find some untested, low-context, scary code.</p></li><li><p>You want to modify it &#8212; for correctness, performance, or sanity.</p></li><li><p>You have no idea how it&#8217;s actually being used in the wild.</p></li><li><p>Hyrum&#8217;s Law whispers: <em>&#8220;If an API can be depended on, it will be.&#8221;<br></em> Translation: someone is relying on behavior you don&#8217;t know exists.</p></li></ol><p>Historically, the &#8220;responsible&#8221; move here is:</p><ul><li><p>Write a couple of tests for the obvious cases.</p></li><li><p>Make your change.</p></li><li><p>Pray.</p></li></ul><p>What you&#8217;re really doing here is empirically mapping the code&#8217;s behavior &#8212; reverse-engineering its actual contract by peppering it with tests. 
It&#8217;s useful, but it&#8217;s also tedious enough that most developers will only scratch the surface before giving up.</p><p>Enter generative AI: a junior engineer who never gets bored and will happily spit out 77 test cases for a single function without blinking.</p><div><hr></div><h2><strong>The date parser from hell</strong></h2><p>Here&#8217;s a real example I used.</p><p>We&#8217;ve got a Python function whose job is:</p><blockquote><p>&#8220;Take any string that looks like a time, date, or datetime and return an ISO 8601 datetime.&#8221;</p></blockquote><p>Here&#8217;s the original implementation:</p><pre><code><code>from dateutil import parser

def parse_to_iso8601(date_string):
    """
    Convert any string that looks like a time, date, or datetime
    into an ISO 8601 datetime string; return None on failure.
    """
    if not isinstance(date_string, str):
        return None
    try:
        dt = parser.parse(date_string)
        return dt.isoformat()
    except (ValueError, TypeError, parser.ParserError):
        return None</code></code></pre><p>A couple of problems:</p><ul><li><p>It uses dateutil.parser, which is flexible but slow.</p></li><li><p>The function takes a plain str. No type hints, no constraints.</p></li><li><p>This code is presumably used all over the place, in ways no one fully remembers.</p></li><li><p>There are zero tests.</p></li></ul><p>This is a classic Hyrum&#8217;s Law trap. If it &#8220;kind of works&#8221; for arbitrary date-ish strings, someone, somewhere, is passing it absolute garbage in production &#8212; and depending on how it behaves.</p><p>I&#8217;d like to replace dateutil with the more efficient ciso8601 library. Easy change, right?</p><p>Sure. If you don&#8217;t care what breaks.</p><div><hr></div><h2><strong>Point the AI at it: &#8220;Please describe all my sins&#8221;</strong></h2><p>Instead of guessing which formats matter, I ask an LLM to do the annoying part for me.</p><p>Here&#8217;s the prompt I actually used:</p><blockquote><p>I am about to rewrite the implementation of the function in date_parser.py. The previous developer didn&#8217;t write any tests. I don&#8217;t have full context into all of the places where this function is being used. Because the input is a string, the breadth of inputs coming in could be quite broad. I need you to write tests that capture all of the existing behavior of the current implementation. I especially need to test every permutation of date, time, and datetime inputs. 
I will then use these tests to make sure that my new implementation doesn&#8217;t cause a regression.</p></blockquote><p>The model happily generated <strong>77 tests</strong> in seconds.</p><p>They covered:</p><ul><li><p>Date formats: 2024-01-15, 01/15/2024, 15/01/2024, 20240115, Jan 15, 2024, January 15th, 2024, etc.</p></li><li><p>Times: 14:30, 2:30 PM, 14:30:45.123456, &#8220;midnight&#8221;, &#8220;noon&#8221;.</p></li><li><p>Datetimes: ISO strings, &#8220;January 15, 2024 2:30 PM&#8221;, 2024-01-15 23:59:59, slash formats with times.</p></li><li><p>Timezones: Z, +05:00, -08:00, &#8220;EST&#8221;.</p></li><li><p>Edge cases: leap years, invalid dates, numeric strings, weird whitespace.</p></li><li><p>Invalid input types: None, 123, {}, True, etc.</p></li></ul><p>Here&#8217;s a tiny sample (out of those 77):</p><pre><code>class TestDateOnly:

    def test_iso_date_format(self):
        assert parse_to_iso8601("2024-01-15") == "2024-01-15T00:00:00"

    def test_slash_date_format_mdy(self):
        assert parse_to_iso8601("01/15/2024") == "2024-01-15T00:00:00"

    def test_written_date_format(self):
        assert parse_to_iso8601("January 15, 2024") == "2024-01-15T00:00:00"

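    # Illustrative extra case in the same spirit (not verbatim from the
    # generated suite): the compact YYYYMMDD form listed above, which
    # dateutil also accepts.
    def test_compact_date_format(self):
        assert parse_to_iso8601("20240115") == "2024-01-15T00:00:00"
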
class TestTimeOnly:

    def test_24hour_time_with_seconds(self):
        result = parse_to_iso8601("14:30:45")
        assert result is not None
        assert "14:30:45" in result

    def test_12hour_time_pm(self):
        result = parse_to_iso8601("2:30 PM")
        assert result is not None
        assert "14:30:00" in result

class TestInvalidInputs:

    def test_non_string_input_integer(self):
        assert parse_to_iso8601(123) is None

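    # Illustrative extra case (not verbatim from the generated suite):
    # booleans are not strings, so the isinstance guard rejects them too.
    def test_non_string_input_boolean(self):
        assert parse_to_iso8601(True) is None
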
    def test_invalid_date_values(self):
        assert parse_to_iso8601("2024-13-01") is None</code></pre><p>Do I know that this is <em>perfect</em> coverage? No.</p><p>Do I know it&#8217;s vastly better than the 3 tests I would&#8217;ve manually written before getting bored? Absolutely.</p><div><hr></div><h2><strong>Now swap the implementation</strong></h2><p>Here&#8217;s the new implementation using ciso8601:</p><pre><code>import ciso8601


def parse_to_iso8601(date_string):
    if not isinstance(date_string, str):
        return None

    try:
        dt = ciso8601.parse_datetime(date_string)
        if dt is None:
            return None
        return dt.isoformat()
    except (ValueError, TypeError):
        return None</code></pre><p>Same interface. Stricter, faster parser.</p><p>I run the AI-generated tests against this new version.</p><p><strong>42 tests fail.</strong></p><p>Perfect.</p><p>Not because I enjoy failure (I work in software, I get plenty), but because this is exactly the information I actually need:</p><ul><li><p>Where does ciso8601 behave differently from dateutil?</p></li><li><p>Which formats were silently &#8220;working&#8221; before that will now break?</p></li><li><p>Which of those differences do I care about, and which are acceptable changes?</p></li></ul><div><hr></div><h2><strong>Reading failing tests as a behavioral diff</strong></h2><p>The failing tests become a behavioral diff between &#8220;legacy weirdness&#8221; and &#8220;new stricter behavior.&#8221;</p><p>Now I get to work through them <em>deliberately</em>:</p><ul><li><p>Maybe I don&#8217;t care that &#8220;January 2024&#8221; used to be accepted and now isn&#8217;t.<br> Mark that as an intentional breaking change.</p></li><li><p>Maybe I <em>do</em> care that &#8220;2024-01-15 14:30&#8221; used to parse fine and now fails.<br> I can either:</p><ul><li><p>adjust the new implementation, or</p></li><li><p>add a small compatibility shim, or</p></li><li><p>explicitly document the supported formats.</p></li></ul></li></ul><p>The key point: with almost no manual effort, I&#8217;ve surfaced <em>dozens</em> of behavioral differences I&#8217;d never have thought to test.</p><p>Without these tests, I would have:</p><ul><li><p>Shipped the new implementation.</p></li><li><p>Broken a bunch of obscure paths.</p></li><li><p>Found out from angry users and mysterious alerts.</p></li></ul><p>With the ephemeral tests, I instead:</p><ul><li><p>See the blast radius before I ship.</p></li><li><p>Choose which behaviors to preserve.</p></li><li><p>Turn accidental behavior into intentional behavior.</p></li></ul><div><hr></div><h2><strong>&#8220;Ephemeral&#8221; tests: why we don&#8217;t keep them 
all</strong></h2><p>Crucially, I don&#8217;t intend to commit all 77 tests.</p><p>Most of them are scaffolding:</p><ul><li><p>They exist to help me understand current behavior.</p></li><li><p>They help me safely refactor.</p></li><li><p>Once I&#8217;ve decided what behavior I actually support, their job is done.</p></li></ul><p>In practice, I&#8217;ll:</p><ol><li><p>Keep a subset of tests that define the <strong>intended contract</strong> going forward.</p></li><li><p>Drop the ones that encode legacy quirks I&#8217;ve explicitly chosen to remove.</p></li><li><p>Possibly regenerate a smaller, cleaner test suite that matches the new behavior, also with AI&#8217;s help.</p></li></ol><p>This is why I call them <strong>ephemeral tests</strong>:</p><p>They&#8217;re part of the <em>development process</em>, not necessarily part of the enduring test suite.</p><p>They&#8217;re like temporary scaffolding around a building: essential while you&#8217;re doing the work, ugly if you leave them up forever.</p><div><hr></div><h2><strong>Why this is such a big unlock for teams</strong></h2><p>As a leader, I see this pattern all the time:</p><ul><li><p>Engineers are scared to touch old, critical code.</p></li><li><p>The lack of tests becomes a psychological barrier, not just a technical one.</p></li><li><p>Refactors get kicked down the road because &#8220;it&#8217;s risky&#8221; and everyone&#8217;s busy.</p></li></ul><p>Generative AI doesn&#8217;t magically make that risk go away &#8212; but it gives us a <strong>cheap, fast way to map it</strong>.</p><p>Now, when someone on the team has to change a scary subsystem, I can give them a playbook:</p><ol><li><p>Identify the untested surface you&#8217;re about to touch.</p></li><li><p>Ask an LLM to generate aggressive characterization tests for the <em>current</em> behavior.</p></li><li><p>Make your changes.</p></li><li><p>Run the tests.</p></li><li><p>Inspect what broke:</p><ul><li><p>Decide what to keep.</p></li><li><p>Decide what to 
drop.</p></li><li><p>Turn surprises into choices.</p></li></ul></li><li><p>Keep only the tests that define the contract you care about.</p></li></ol><p>Instead of &#8220;I&#8217;m afraid to change this,&#8221; the conversation becomes:</p><blockquote><p>&#8220;Here are the 17 behaviors that will change if we ship this.<br> We care about these 5. The rest are deprecated weirdness.&#8221;</p></blockquote><p>That&#8217;s a very different kind of engineering culture.</p>]]></content:encoded></item><item><title><![CDATA[Definitive Guide to AI Slop]]></title><description><![CDATA[Because someone had to set the record straight]]></description><link>https://blog.cgardens.dev/p/definitive-guide-to-ai-slop</link><guid isPermaLink="false">https://blog.cgardens.dev/p/definitive-guide-to-ai-slop</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Sat, 01 Nov 2025 05:03:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QGxv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Think of all creative work as a 2&#215;2 grid:</p><ul><li><p><strong>Originality:</strong> low &#8596; high</p></li><li><p><strong>Craft:</strong> low &#8596; high</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QGxv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QGxv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png 
424w, https://substackcdn.com/image/fetch/$s_!QGxv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png 848w, https://substackcdn.com/image/fetch/$s_!QGxv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png 1272w, https://substackcdn.com/image/fetch/$s_!QGxv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QGxv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png" width="874" height="684" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:684,&quot;width&quot;:874,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:61666,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://cgardens.substack.com/i/177709516?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QGxv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png 
424w, https://substackcdn.com/image/fetch/$s_!QGxv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png 848w, https://substackcdn.com/image/fetch/$s_!QGxv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png 1272w, https://substackcdn.com/image/fetch/$s_!QGxv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88eabbce-9db7-40a2-b10c-deac719c0106_874x684.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><br>Most of what&#8217;s good lives in three of those squares. Even low originality / high craft stuff can be fun &#8212; it&#8217;s formulaic, but clean. High originality in either direction is fascinating, even when messy.</p><p>Only one square truly sucks: <strong>low originality + low craft.</strong> That&#8217;s the creative uncanny valley &#8212; the zone of warmed-over clich&#233;s and lazy execution. The best we can do is love to hate it.</p><p>Now add AI:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!brIp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!brIp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png 424w, https://substackcdn.com/image/fetch/$s_!brIp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png 848w, https://substackcdn.com/image/fetch/$s_!brIp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png 1272w, https://substackcdn.com/image/fetch/$s_!brIp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!brIp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png" width="866" height="686" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:686,&quot;width&quot;:866,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:69235,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://cgardens.substack.com/i/177709516?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!brIp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png 424w, https://substackcdn.com/image/fetch/$s_!brIp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png 848w, https://substackcdn.com/image/fetch/$s_!brIp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png 1272w, https://substackcdn.com/image/fetch/$s_!brIp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2c93eb3-85ee-41ca-b04a-808b6983841e_866x686.png 1456w" 
sizes="100vw"></picture></div></a></figure></div><p><br>AI can do <em>craft</em>. It&#8217;s a tireless assistant. But originality? Not so much. So AI lives on the top row. When it&#8217;s high craft, you get slick, competent entertainment (hello, <em>K-Pop Demon Hunters</em>). When it&#8217;s low craft, you get content sludge.</p><p>AI-generated content is slop not because it&#8217;s artificial, but because it&#8217;s <em>unoriginal</em>. That doesn&#8217;t make it bad.</p><p>Humans make slop too. AI just scales it. 
When it is high craft, we can still love it.</p>]]></content:encoded></item><item><title><![CDATA[Tool for the Job]]></title><description><![CDATA[Choosing a blogging platform]]></description><link>https://blog.cgardens.dev/p/tool-for-the-job</link><guid isPermaLink="false">https://blog.cgardens.dev/p/tool-for-the-job</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Sun, 11 Sep 2022 22:26:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8Itr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://labs.openai.com/e/3hpUISypp2Ddl2yUm5UaEzGv/HngohNPGLZftDQ9rASmHnFCc" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8Itr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!8Itr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!8Itr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!8Itr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8Itr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/c97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1659420,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://labs.openai.com/e/3hpUISypp2Ddl2yUm5UaEzGv/HngohNPGLZftDQ9rASmHnFCc&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8Itr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!8Itr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!8Itr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!8Itr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc97fa939-a75b-4ac4-b180-47b91ec85131_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Over the past 2 years, I have steadily added ideas to my &#8220;to write 
about&#8221; list. I have, however, written nothing.</p><p>Each weekend, I would sit down to write something and then remember that I still had not set up a receptacle for this writing. Writing is fun; choosing blogging software is not.</p><p>The process was hung up on a lot of decisions that were distractions from the whole point of this exercise (which is to <strong>build the habit of writing regularly</strong>). In software development, I subscribe to the philosophy that you should identify the piece of what you are building that is a key differentiator and spend time writing that code. Then, try to find outside tools to handle the rest.</p><p>In this case, the writing (one hopes) is the differentiator. Spending time on areas outside of that (e.g. selecting a platform to use, picking a theme, choosing a domain name) was painful. I was particularly paralyzed by aesthetics. While there was no choice that I was likely to make that would be a positive differentiator, I certainly could have picked something so awful that it was a distraction from the content.</p><p>Ironically, the least important decisions took the longest to make because I so deeply desired to not spend time on them at all. In <a href="https://www.intercom.com/blog/first-rule-prioritization-no-snacking/">project management parlance</a>, my behavior had transformed these decisions that should have been both <em>low effort</em> and <em>low value</em> into those that were <em>high effort</em> but still <em>low value</em>.</p><p>I&#8217;m over it. This weekend I decided to pick the path of least resistance. Substack is both very easy to set up and very free. Of the tools out there, it is the fastest way to deliver value (i.e. published writing). Perhaps the most valuable feature in Substack so far is that <strong>its defaults are all fine</strong>. No additional aesthetic choices needed. 
Now I can focus on writing.</p>]]></content:encoded></item><item><title><![CDATA[Coming soon]]></title><description><![CDATA[This is cgardens, a blog about tech, intellectual honesty, etc.]]></description><link>https://blog.cgardens.dev/p/coming-soon</link><guid isPermaLink="false">https://blog.cgardens.dev/p/coming-soon</guid><dc:creator><![CDATA[charles]]></dc:creator><pubDate>Sun, 11 Sep 2022 18:41:43 GMT</pubDate><content:encoded><![CDATA[<p><strong>This is cgardens</strong>, a blog about tech, intellectual honesty, etc.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.cgardens.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.cgardens.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>