<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title><![CDATA[This Dot Labs RSS feed]]></title>
        <description><![CDATA[This Dot Labs RSS feed]]></description>
        <link>https://www.thisdot.co</link>
        <image>
            <url>https://www.thisdot.co/favicons/favicon-196x196.png</url>
            <title>This Dot Labs RSS feed</title>
            <link>https://www.thisdot.co</link>
        </image>
        <generator>RSS for Node</generator>
        <lastBuildDate>Mon, 30 Mar 2026 19:11:02 GMT</lastBuildDate>
        <atom:link href="https://www.thisdot.co/rss.xml" rel="self" type="application/rss+xml"/>
        <pubDate>Mon, 30 Mar 2026 19:11:02 GMT</pubDate>
        <copyright><![CDATA[All rights reserved 2026]]></copyright>
        <item>
            <title><![CDATA[Making AI Deliver: From Pilots to Measurable Business Impact]]></title>
            <description><![CDATA[<p>A lot of organizations have experimented with AI, but far fewer are seeing real business results.</p>
<p>At the Leadership Exchange, this panel focused on what it actually takes to move beyond experimentation and turn AI into measurable ROI.</p>
<p>Over the past few years, many organizations have experimented with AI, but the challenge today is translating experimentation into measurable business value. Moderated by Tracy Lee, CEO at This Dot Labs, the panel featured Dorren Schmitt, Vice President of IT Strategy &amp; Innovation at Allen Media Group, Greg Geodakyan, CTO at Client Command, and Elliott Fouts, CAIO &amp; CTO at This Dot Labs. Panelists discussed how companies are moving from early AI experiments to initiatives that deliver real results.</p>
<p>They began by examining how experimentation has evolved over the past year. While many organizations did not fully utilize AI experimentation budgets in 2025, 2026 is showing a shift toward more intentional investment. Structured budgets and clearly defined frameworks are enabling companies to explore AI strategically and identify initiatives with high potential impact.</p>
<p>The conversation then turned to alignment and ROI. Panelists highlighted the importance of connecting AI projects to corporate strategy and leadership priorities. Ensuring that AI initiatives translate into operational efficiency, productivity gains, and measurable business impact is essential. Companies that successfully align AI efforts with organizational goals are better equipped to demonstrate tangible outcomes from their investments.</p>
<p>Moving from pilots and proofs of concept to production was another major focus. Governance, prioritization, and workflow integration were cited as essential for scaling AI initiatives. One panelist shared that out of nine proofs of concept, eight successfully launched, resulting in improvements in quality and operational efficiency.</p>
<p>Panelists also explored the future of AI within organizations, including the potential for agentic workflows and reduced human-in-the-loop processes. New capabilities are emerging that extend beyond coding tasks, reshaping how teams collaborate and how work is structured across departments.</p>
<p>Key Takeaways</p>
<ul>
<li>Structured experimentation and defined budgets allow organizations to explore AI strategically and safely.</li>
<li>Alignment with business priorities is essential for translating AI capabilities into measurable outcomes.</li>
<li>Governance and workflow integration are critical to moving AI initiatives from pilot stages to production deployment.</li>
</ul>
<p>Successfully leveraging AI requires a balance between experimentation, strategic alignment, and operational discipline. Organizations that approach AI as a structured, measurable initiative can capture meaningful results and unlock new opportunities for innovation.</p>
<p>Curious how your organization can move from AI experimentation to real impact? Let’s talk. Reach out to continue the conversation or join us at an upcoming Leadership Exchange. Tracy can be reached at <a href="mailto:tlee@thisdot.co">tlee@thisdot.co</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/ai-pilots-to-impact</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/ai-pilots-to-impact</guid>
            <pubDate>Fri, 27 Mar 2026 16:30:41 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[What does it actually look like to build software with AI today? Not in theory, but in practice.]]></title>
            <description><![CDATA[<p>What does it actually look like to build software with AI today? Not in theory, but in practice.</p>
<p>At the Leadership Exchange, this was the question at the center of the Developer Panel, where leaders from across the industry unpacked what’s really changing inside engineering teams and what organizations need to do right now to keep up.</p>
<p>The Developer Panel at the Leadership Exchange explored the cutting edge of AI in software engineering and examined what organizations should focus on today to prepare for the future. Moderated by Jeff Cross, Co-Founder &amp; CEO at Nx, the panel featured Victor Savkin, Cofounder &amp; CTO at Nx, Alex Sover, Vice President of Engineering at OpenAP, Brent Zucker, Senior Director of Engineering at Visa, and Jonathan Fontanez, AI Engineering Lead at This Dot Labs. Panelists shared insights into how AI is transforming the software development lifecycle and how teams can adopt tools effectively while preparing for organizational change.</p>
<p>Panelists discussed emerging workflows, including CI-in-the-loop, agentic healing, and context engineering. They examined how validation, code reviews, and PRDs are evolving alongside AI capabilities and how teams are integrating external sources such as production traces to improve quality and reliability. The discussion also covered what the next generation of agentic tools might look like and how these capabilities will shape engineering practices in the near future.</p>
<p>Adoption of AI comes with challenges. Teams often rely on plugins or extensions without foundational understanding, and individual contributors may fear displacement. Panelists emphasized that education, governance, and skill-building are essential for teams to manage AI agents effectively while maintaining quality. They also highlighted the need to standardize workflows and ensure organizational alignment to fully leverage AI capabilities.</p>
<p>The conversation extended beyond technical challenges to organizational implications. Panelists discussed how teams can avoid the pitfalls described by Conway’s Law, manage distributed teams effectively, and evolve engineering practices alongside AI adoption. Leadership and management strategies play a crucial role in ensuring that AI integration delivers meaningful outcomes while maintaining efficiency and alignment with business objectives.</p>
<p>Key Takeaways</p>
<ul>
<li>AI workflows require both technical and organizational preparation. </li>
<li>Education, governance, and skill development are essential for successful implementation. </li>
<li>Forward-looking teams are rethinking validation, CI pipelines, and context management to fully leverage agentic AI.</li>
</ul>
<p>The discussion highlighted that adopting AI at the cutting edge is not just about new tools; it is about rethinking processes, workflows, and organizational culture. Companies that embrace this holistic approach are most likely to succeed in leveraging AI to its full potential.</p>
<p>Are you interested in more conversations like this? Message us for an invite to the next one, or for a private discussion around these topics. Tracy can be reached at <a href="mailto:tlee@thisdot.co">tlee@thisdot.co</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/building-software-with-ai-today</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/building-software-with-ai-today</guid>
            <pubDate>Fri, 27 Mar 2026 16:07:52 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[AI Is Speeding Up Development. But Where Are the New Bottlenecks?]]></title>
            <description><![CDATA[<p>AI is accelerating development, but it’s also exposing everything else that’s broken.</p>
<p>At the Leadership Exchange, leaders unpacked how AI is reshaping the SDLC and what organizations need to address beyond just coding to make adoption successful.</p>
<p>Moderated by Rob Ocel, VP of Innovation at This Dot Labs, the panel featured Itai Gerchikov of Anthropic and Harald Kirschner, Principal Product Manager for GitHub Copilot &amp; VS Code at Microsoft. Panelists explored the current state of AI adoption across the software development lifecycle and shared practical insights into how organizations can effectively integrate AI tools.</p>
<p>Panelists discussed how companies are investing in AI tools, skills, and managed competency programs to support developers. While AI can dramatically accelerate coding, the panel emphasized that adoption affects every stage of the SDLC. Bottlenecks now appear in testing, DevOps, product delivery, and marketing as AI speeds up development. Organizations that address technical debt and process inefficiencies are better positioned to extract maximum value from AI tools.</p>
<p>The conversation also focused on opportunities and risks. Security, governance, and workforce education were highlighted as critical factors for adoption. Panelists stressed that AI initiatives should be aligned with broader business goals rather than pursued in isolation. They noted that companies experimenting at the cutting edge need to consider organizational readiness just as carefully as technical capabilities.</p>
<p>Panelists also explored how leading organizations are navigating the early stages of adoption. Those ahead of the curve are using structured experimentation, prioritizing process improvements, and continuously evaluating outcomes to refine their AI strategies. Learning from these early adopters allows other organizations to anticipate emerging trends and prepare for the next phase of AI adoption rather than simply replicating past approaches.</p>
<p>Key Takeaways</p>
<ul>
<li>Investing in AI skills and tools should be done thoughtfully, with clear alignment to business objectives.</li>
<li>Examining the full SDLC helps identify bottlenecks that AI may expose or amplify.</li>
<li>Organizations can gain a competitive advantage by learning from early adopters and planning for where AI adoption is heading.</li>
</ul>
<p>AI adoption is not just a technical initiative; it is a strategic transformation that requires attention to people, process, and technology. Organizations that balance innovation with operational discipline will be best positioned to capture the full potential of AI across the software lifecycle.</p>
<p>Seeing similar challenges in your own SDLC? Let’s compare notes. Join us at an upcoming Leadership Exchange or reach out to continue the conversation. Tracy can be reached at <a href="mailto:tlee@thisdot.co">tlee@thisdot.co</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/ai-speed-vs-bottlenecks</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/ai-speed-vs-bottlenecks</guid>
            <pubDate>Fri, 27 Mar 2026 16:43:07 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Understanding Sourcemaps: From Development to Production]]></title>
            <description><![CDATA[<h2>What Are Sourcemaps?</h2>
<p>Modern web development involves transforming your source code before deploying it. We minify JavaScript to reduce file sizes, bundle multiple files together, transpile TypeScript to JavaScript, and convert modern syntax into browser-compatible code. These optimizations are essential for performance, but they create a significant problem: the code running in production does not look like the original code you wrote.</p>
<p>Here&#39;s a simple example. Your original code might look like this:</p>
<pre><code class="language-javascript">function calculateTotal(items) {
  return items.reduce((total, item) =&gt; {
    return total + (item.price * item.quantity);
  }, 0);
}

const total = calculateTotal(cart);
console.log(`Total is: $${total}`);
</code></pre>
<p>After minification, it becomes something like this:</p>
<pre><code class="language-javascript">function a(b){return b.reduce((c,d)=&gt;c+d.price*d.quantity,0)}const e=a(f);console.log(`Total is: $${e}`);
</code></pre>
<p>Now imagine trying to debug an error in that minified code. Which line threw the exception? What was the value of variable <code>d</code>?</p>
<p>This is where sourcemaps come in. A sourcemap is a JSON file that contains a mapping between your transformed code and your original source files. When you open browser DevTools, the browser reads these mappings and reconstructs your original code, allowing you to debug with variable names, comments, and proper formatting intact.</p>
<h2>How Sourcemaps Work</h2>
<p>When you build your application with tools like Webpack, Vite, or Rollup, they can generate sourcemap files alongside your production bundles. A minified file references its sourcemap using a special comment at the end:</p>
<pre><code class="language-js">//# sourceMappingURL=bundle.min.js.map
</code></pre>
<p>The sourcemap file itself contains a JSON structure with several key fields:</p>
<pre><code class="language-json">{
  &quot;version&quot;: 3,
  &quot;sources&quot;: [&quot;cart.js&quot;, &quot;checkout.js&quot;],
  &quot;names&quot;: [&quot;calculateTotal&quot;, &quot;items&quot;, &quot;total&quot;, &quot;item&quot;],
  &quot;mappings&quot;: &quot;AAAA,SAASA,...&quot;,
  &quot;file&quot;: &quot;checkout.min.js&quot;,
  &quot;sourcesContent&quot;: [&quot;function calculateTotal(items) { ... }&quot;]
}
</code></pre>
<p>The <code>mappings</code> field uses an encoding format called VLQ (Variable Length Quantity) to map each position in the minified code back to its original location. The browser&#39;s DevTools use this information to show you the original code while you&#39;re debugging.</p>
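<p>If you want to see the encoding for yourself, the <a href="https://github.com/Rich-Harris/vlq">vlq</a> package (linked in the resources below) can decode individual segments. A minimal sketch; the segment strings are illustrative:</p>
<pre><code class="language-javascript">import { decode } from &#39;vlq&#39;;

// Each semicolon-separated group in &quot;mappings&quot; is a generated line, and each
// comma-separated segment decodes to relative offsets:
// [generatedColumn, sourceFileIndex, sourceLine, sourceColumn, nameIndex?]
console.log(decode(&#39;AAAA&#39;));  // [0, 0, 0, 0]
console.log(decode(&#39;AAgBC&#39;)); // [0, 0, 16, 1]
</code></pre>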
<h2>Types of Sourcemaps</h2>
<p>Build tools support several variations of sourcemaps, each with different trade-offs:</p>
<p><strong>Inline sourcemaps</strong>: The entire mapping is embedded directly in your JavaScript file as a base64 encoded data URL. This increases file size significantly but simplifies deployment during development.</p>
<pre><code class="language-javascript">//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLC...
</code></pre>
<p><strong>External sourcemaps</strong>: A separate <code>.map</code> file that&#39;s referenced by the JavaScript bundle. This is the most common approach, as it keeps your production bundles lean since sourcemaps are only downloaded when DevTools is open.</p>
<p><strong>Hidden sourcemaps</strong>: External sourcemap files without any reference in the JavaScript bundle. These are useful when you want sourcemaps available for error tracking services like Sentry, but don&#39;t want to expose them to end users.</p>
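<p>Which variant you get is typically a single build option. Here&#39;s a minimal sketch using Vite&#39;s <code>build.sourcemap</code> setting; Webpack&#39;s <code>devtool</code> option plays the same role:</p>
<pre><code class="language-javascript">// vite.config.js
import { defineConfig } from &#39;vite&#39;;

export default defineConfig({
  build: {
    // true     -&gt; external .map files referenced from the bundle
    // &#39;inline&#39; -&gt; mappings embedded in the bundle as a data URL
    // &#39;hidden&#39; -&gt; external .map files without the sourceMappingURL comment
    sourcemap: &#39;hidden&#39;,
  },
});
</code></pre>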
<h2>Why Sourcemaps</h2>
<p>During development, sourcemaps are absolutely critical. They spare you from guessing where an error in the transformed code originated, making debugging much easier.</p>
<p>Most modern build tools enable sourcemaps by default in development mode. </p>
<h2>Sourcemaps in Production</h2>
<p>Should you ship sourcemaps to production? It depends.</p>
<p>While making your code harder to read is not real security, there&#39;s a legitimate argument that exposing your source code makes it easier for attackers to understand your application&#39;s internals. Sourcemaps can reveal internal API endpoints and routing logic, business logic and algorithmic implementations, and code comments that might contain developer notes or TODO items.</p>
<p>Anyone with basic developer tools can reconstruct your entire codebase when sourcemaps are publicly accessible. That is exactly what happened to Apple in November 2025, when sourcemaps shipped to production let developers reconstruct and publish parts of its Svelte frontend. While the Apple leak contained no credentials or secrets, it did expose the company&#39;s component architecture and implementation patterns.</p>
<p>Additionally, code comments can inadvertently contain internal URLs, developer names, or company-specific information that could potentially be exploited by attackers.</p>
<p>On the other hand, services like Sentry can provide much more actionable error reports when they have access to sourcemaps, so you can understand exactly where errors happened.</p>
<p>If a customer reports an issue, being able to see the actual error with proper context makes diagnosis significantly faster.</p>
<p>And if your security depends on keeping your frontend code secret, you have bigger problems. Any determined attacker can reverse engineer minified JavaScript; it just takes more time.</p>
<p>As for performance, sourcemaps are only downloaded when DevTools is open, so shipping them to production doesn&#39;t affect load times for end users.</p>
<h2>How to Manage Sourcemaps in Production</h2>
<p>You don&#39;t have to choose between no sourcemaps and publicly accessible ones.</p>
<p>For example, you can restrict access to sourcemaps with server configuration, making <code>.map</code> files accessible only from specific IP addresses.</p>
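<p>As a sketch of that idea in a Node server, assuming Express and a hypothetical allowlist of internal IPs:</p>
<pre><code class="language-javascript">import express from &#39;express&#39;;

const app = express();
// Hypothetical allowlist; replace with your office/VPN ranges
const ALLOWED_IPS = new Set([&#39;127.0.0.1&#39;, &#39;10.0.0.5&#39;]);

app.use((req, res, next) =&gt; {
  if (req.path.endsWith(&#39;.map&#39;) &amp;&amp; !ALLOWED_IPS.has(req.ip)) {
    return res.status(404).end(); // pretend the file doesn&#39;t exist
  }
  next();
});

app.use(express.static(&#39;dist&#39;)); // bundles and .map files live here
app.listen(3000);
</code></pre>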
<p>Additionally, tools like Sentry allow you to upload sourcemaps during your build process without making them publicly accessible. Configure your build to generate sourcemaps without the reference comment (hidden sourcemaps), and upload them as part of the build. Sentry gets the mapping information it needs, but end users can&#39;t access the files.</p>
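<p>As a sketch, this is roughly what that setup can look like with Vite and Sentry&#39;s <a href="https://www.npmjs.com/package/@sentry/vite-plugin">vite plugin</a>; the org and project values are placeholders:</p>
<pre><code class="language-javascript">// vite.config.js
import { defineConfig } from &#39;vite&#39;;
import { sentryVitePlugin } from &#39;@sentry/vite-plugin&#39;;

export default defineConfig({
  build: {
    sourcemap: &#39;hidden&#39;, // generate maps without the reference comment
  },
  plugins: [
    sentryVitePlugin({
      org: &#39;your-org&#39;,         // placeholder
      project: &#39;your-project&#39;, // placeholder
      authToken: process.env.SENTRY_AUTH_TOKEN,
    }),
  ],
});
</code></pre>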
<h2>Learning from Apple&#39;s Incident</h2>
<p>Apple&#39;s sourcemap incident is a valuable reminder that even the largest tech companies can make deployment oversights. But it also highlights something important: the presence of sourcemaps wasn&#39;t actually a security vulnerability, because Apple followed good security practices and kept sensitive data out of its client code. Developers simply got an interesting look at how Apple structures its Svelte codebase.</p>
<p>The lesson is that you must be intentional about your deployment configuration. If you&#39;re going to include sourcemaps in production, make that decision deliberately after considering the trade-offs. And if you decide against using public sourcemaps, verify that your build process actually removes them.</p>
<p>In this case, the public repo was quickly removed after Apple filed a DMCA takedown. (<a href="https://github.com/github/dmca/blob/master/2025/11/2025-11-05-apple.md">https://github.com/github/dmca/blob/master/2025/11/2025-11-05-apple.md</a>)</p>
<h2>Making the Right Choice</h2>
<p>So what should you do with sourcemaps in your projects?</p>
<p><strong>For development</strong>: Always enable them. Use fast options, such as <code>eval-source-map</code> in Webpack or the default configuration in Vite. The debugging benefits far outweigh any downsides.</p>
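<p>In Webpack, for instance, that&#39;s a one-line setting (a minimal sketch):</p>
<pre><code class="language-javascript">// webpack.config.js
module.exports = {
  mode: &#39;development&#39;,
  devtool: &#39;eval-source-map&#39;, // fast rebuilds, original source in DevTools
};
</code></pre>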
<p><strong>For production</strong>: Consider your specific situation. But most importantly, make sure your sourcemaps don&#39;t accidentally expose secrets. Review your build output, check for hardcoded credentials, and ensure sensitive configurations stay on the backend where they belong.</p>
<h2>Conclusion</h2>
<p>Sourcemaps are powerful development tools that bridge the gap between the optimized code your users download and the readable code you write. They&#39;re essential for debugging and make error tracking more effective.</p>
<p>The question of whether to include them in production doesn&#39;t have a single right answer. Whatever you decide, make it a deliberate choice. Review your build configuration. Verify that sourcemaps are handled the way you expect. And remember that proper frontend security doesn&#39;t come from hiding your code.</p>
<h2>Useful Resources</h2>
<ul>
<li>Source map specification - <a href="https://tc39.es/ecma426/">https://tc39.es/ecma426/</a></li>
<li>What are sourcemaps - <a href="https://web.dev/articles/source-maps">https://web.dev/articles/source-maps</a></li>
<li>VLQ implementation - <a href="https://github.com/Rich-Harris/vlq">https://github.com/Rich-Harris/vlq</a></li>
<li>Sentry sourcemaps - <a href="https://docs.sentry.io/platforms/javascript/sourcemaps/">https://docs.sentry.io/platforms/javascript/sourcemaps/</a></li>
<li>Apple DMCA takedown - <a href="https://github.com/github/dmca/blob/master/2025/11/2025-11-05-apple.md">https://github.com/github/dmca/blob/master/2025/11/2025-11-05-apple.md</a></li>
</ul>
]]></description>
            <link>https://www.thisdot.co/blog/understanding-sourcemaps-from-development-to-production</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/understanding-sourcemaps-from-development-to-production</guid>
            <pubDate>Fri, 21 Nov 2025 12:02:54 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Vercel BotID: The Invisible Bot Protection You Needed]]></title>
            <description><![CDATA[<p>Nowadays, bots do not act like “bots”. They can execute JavaScript, solve <a href="https://en.wikipedia.org/wiki/CAPTCHA">CAPTCHAs</a>, and navigate as real users. Traditional defenses often fail to meet expectations or frustrate genuine users. </p>
<p>That’s why <a href="https://vercel.com/">Vercel</a> created <a href="https://vercel.com/docs/botid">BotID</a>, an invisible CAPTCHA that has real-time protections against sophisticated bots that help you protect your critical endpoints.</p>
<p>In this blog post, we will explore why you should care about this new tool, how to set it up, its use cases, and some key considerations to take into account.</p>
<p>We will be using <a href="https://nextjs.org/">Next.js</a> for our examples, but please note that this tool is not tied to this framework alone; the only requirement is that your app is deployed and running on Vercel.</p>
<h2><strong>Why Should You Care?</strong></h2>
<p>Think about these scenarios:</p>
<ul>
<li><strong>Checkout flows</strong> are overwhelmed by <a href="https://dictionary.cambridge.org/dictionary/english/scalper">scalpers</a></li>
<li><strong>Signup forms</strong> inundated with fake registrations</li>
<li><strong>API endpoints</strong> draining resources with malicious requests</li>
</ul>
<p>They all impact you and your users in a negative way. For example, when bots flood your checkout page, real customers are unable to complete their purchases, resulting in your business losing money and damaging customer trust. </p>
<p>Fake signups clutter the app, slowing things down and making user data unreliable. When someone deliberately overloads your app’s API, it can crash or become unusable, making users angry and creating a significant issue for you, the owner.</p>
<p>BotID automatically detects and filters bots attempting to perform any of the above actions without interfering with real users. </p>
<p>How does it work? A lightweight first-party script quickly gathers a rich set of browser &amp; environment signals (this takes ~30ms, fast enough that you don’t need to worry about performance), packages them into an opaque token, and sends that token along with protected requests via the rewritten challenge/proxy path and headers. Vercel’s edge scores the token, attaches a verdict, and the <code>checkBotId()</code> function simply reads that verdict so your code can allow or block the request. We will see how this is implemented in a second! But first, let’s get started.</p>
<h2><strong>Getting Started in Minutes</strong></h2>
<ol>
<li><strong>Install</strong> the SDK:</li>
</ol>
<pre><code>npm install botid
</code></pre>
<ol start="2">
<li><strong>Configure</strong> redirects:</li>
</ol>
<p>Wrap your <a href="https://nextjs.org/docs/app/api-reference/config/next-config-js">next.config.ts</a> with BotID’s helper. This sets up the right rewrites so BotID can do its job (and not get blocked by ad blockers, extensions, etc.):</p>
<pre><code class="language-jsx">import { withBotId } from &#39;botid/next/config&#39;;

const nextConfig = {
  // Your existing Next.js config
};

export default withBotId(nextConfig);
</code></pre>
<ol start="2">
<li>Integrate the client on public-facing pages (where BotID runs checks):</li>
</ol>
<p>Declare which routes are protected so BotID can attach special headers when a real user triggers those routes.</p>
<p>We need to create <a href="https://nextjs.org/docs/app/api-reference/file-conventions/instrumentation-client">instrumentation-client.ts</a> (place it in the root of your application or inside a src folder) and initialize <code>BotID</code> once:</p>
<pre><code class="language-jsx">import { initBotId } from &#39;botid/client/core&#39;;

// Define the paths that need bot protection.
// These are paths that are routed to by your app.
// These can be:
// - API endpoints (e.g., &#39;/api/checkout&#39;)
// - Server actions invoked from a page (e.g., &#39;/dashboard&#39;)
// - Dynamic routes (e.g., &#39;/api/create/*&#39;)

initBotId({
  protect: [
    {
      path: &#39;/api/checkout&#39;,
      method: &#39;POST&#39;,
    },
    {
      // Wildcards can be used to expand multiple segments
      // /team/*/activate will match
      // /team/a/activate
      // /team/a/b/activate
      // /team/a/b/c/activate
      // ...
      path: &#39;/team/*/activate&#39;,
      method: &#39;POST&#39;,
    },
    {
      // Wildcards can also be used at the end for dynamic routes
      path: &#39;/api/user/*&#39;,
      method: &#39;POST&#39;,
    },
  ],
});
</code></pre>
<p><code>instrumentation-client.ts</code> runs before the app hydrates, so it’s a perfect place for a global setup!</p>
<p>If we are on a Next.js version older than 15.3, we need to use a different approach: render the <code>&lt;BotIdClient /&gt;</code> React component inside the pages or layouts you want to protect, specifying the protected routes:</p>
<pre><code class="language-jsx">import { BotIdClient } from &#39;botid/client&#39;;
import { ReactNode } from &#39;react&#39;;

const protectedRoutes = [
  {
    path: &#39;/api/checkout&#39;,
    method: &#39;POST&#39;,
  },
];

type RootLayoutProps = {
  children: ReactNode;
};

export default function RootLayout({ children }: RootLayoutProps) {
  return (
    &lt;html lang=&quot;en&quot;&gt;
      &lt;head&gt;
        &lt;BotIdClient protect={protectedRoutes} /&gt;
      &lt;/head&gt;
      &lt;body&gt;{children}&lt;/body&gt;
    &lt;/html&gt;
  );
}
</code></pre>
<ol start="3">
<li><strong>Verify</strong> requests on your server or API:</li>
</ol>
<pre><code class="language-jsx">import { checkBotId } from &#39;botid/server&#39;;

export async function POST(req: Request) {
  const { isBot } = await checkBotId();

  if (isBot) {
   return new Response(&quot;Access Denied&quot;, { status: 403 });
  } 

  return new Response(&quot;✅ Success!&quot;);
}
</code></pre>
<ul>
<li>NOTE: <code>checkBotId()</code> will fail if the route wasn’t listed on the client, because the client is what attaches the special headers that let the edge classify the request!</li>
</ul>
<p>You’re all set - your routes are now protected!</p>
<p>In development, the <code>checkBotId()</code> function will always return <code>isBot = false</code> so you can build without friction. To override this behavior and simulate bot traffic, you can set the development options:</p>
<pre><code class="language-jsx">const { isBot } = await checkBotId({
    developmentOptions: {
      bypass: &#39;BAD-BOT&#39;, // default: &#39;HUMAN&#39;
    },
  });
</code></pre>
<h3><strong>What happens on a failed check?</strong></h3>
<p>In our example above, if the check fails, we return a 403, but what to do in this case is mostly up to you; the most common approaches are:</p>
<ul>
<li><strong>Hard block</strong> with a 403 for obviously automated traffic (just what we did in the example above)</li>
<li><strong>Soft fail</strong> (generic error/“try again”) when you want to be cautious.</li>
<li><strong>Step-up</strong> (require login, email verification, or other business logic).</li>
</ul>
<p>Remember, although rare, false positives can occur, so it’s up to you to determine how you want to balance your fail strategy between security, UX, telemetry, and attacker behavior.</p>
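<p>As an illustration, here’s a sketch of the soft-fail approach; the status code and error copy are choices you’d tune to your product:</p>
<pre><code class="language-jsx">import { checkBotId } from &#39;botid/server&#39;;

export async function POST(req: Request) {
  const { isBot } = await checkBotId();

  if (isBot) {
    // Soft fail: a vague, retryable error instead of an explicit block
    return Response.json(
      { error: &#39;Something went wrong. Please try again.&#39; },
      { status: 400 }
    );
  }

  return new Response(&#39;✅ Success!&#39;);
}
</code></pre>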
<h2>checkBotId()</h2>
<p>So far, we have seen how to use the property <code>isBot</code> from <code>checkBotId()</code>, but there are a few more properties that you can leverage from it. They are:</p>
<p><code>isHuman</code> (boolean): <code>true</code> when BotID classifies the request as a real human session (i.e., a clear “pass”). BotID is designed to return an unambiguous yes/no, so you can gate actions easily.</p>
<p><code>isBot</code> (boolean): We already saw this one. It will be <code>true</code> when the request is classified as automated traffic.</p>
<p><code>isVerifiedBot</code> (boolean): Here comes a less obvious property. Vercel maintains and continuously updates a comprehensive <a href="https://vercel.com/docs/bot-management#verified-bots-directory">directory</a> of known legitimate bots from across the internet. This directory is regularly updated to include new legitimate services as they emerge. This could be helpful for allowlists or custom logic per bot. We will see an example in a sec.</p>
<p><code>verifiedBotName?</code> (string): The name for the specific verified bot (e.g., “claude-user”).</p>
<p><code>verifiedBotCategory?</code> (string): The type of the verified bot (e.g., “webhook”, “advertising”, “ai_assistant”).</p>
<p><code>bypassed</code> (boolean): It is <code>true</code> if the request skipped the <code>BotID</code> check due to a configured <a href="https://vercel.com/docs/vercel-firewall/firewall-concepts#bypass">Firewall bypass</a> (custom or system). You could use this flag to avoid taking bot-based actions when you’ve explicitly bypassed protection.</p>
<h3><strong>Handling Verified Bots</strong></h3>
<ul>
<li>NOTE: Handling verified bots is available in <code>botid@1.5.0</code> and above.</li>
</ul>
<p>It might be the case that you don’t want to block some verified bots because they are not causing damage to you or your users, as is sometimes the case for AI-related bots that fetch your site to give information to a user.</p>
<p>We can use the properties related to verified bots from <code>checkBotId()</code> to handle these scenarios:</p>
<pre><code class="language-jsx">import { checkBotId } from &quot;botid/server&quot;;
import { NextResponse } from &quot;next/server&quot;;

export async function POST(request: Request) {
  const botResult = await checkBotId();

  const { isBot, verifiedBotName, isVerifiedBot } = botResult;

  // Check if it&#39;s ChatGPT Operator
  const isOperator = isVerifiedBot &amp;&amp; verifiedBotName === &quot;chatgpt-operator&quot;;

  if (isBot &amp;&amp; !isOperator) {
    return Response.json({ error: &quot;Access denied&quot; }, { status: 403 });
  }

  // ... rest of your handler
  return Response.json(botResult);
} 
</code></pre>
<h2>Choosing your BotID mode</h2>
<p>When leveraging <code>BotID</code>, you can choose between 2 modes:</p>
<ul>
<li><strong>Basic Mode</strong>: Instant session-based protection, available for all Vercel plans.</li>
<li><strong>Deep Analysis Mode</strong>: Enhanced <a href="https://www.kasada.io/">Kasada-powered</a> detection, only available on Pro and Enterprise plans. Using this mode, you will leverage more advanced detection and block the hardest-to-catch bots.</li>
</ul>
<p>To specify the mode you want, you must do so on both the client and the server. This is important because if the two do not match, the verification will fail!</p>
<pre><code class="language-jsx">// Client side
initBotId({
  protect: [
    {
      path: &#39;/api/checkout&#39;,
      method: &#39;POST&#39;,
      advancedOptions: {
        checkLevel: &#39;deepAnalysis&#39;
      },
    },
 ...
  ],
});

// Server side
export async function POST(request: NextRequest) {
  const verification = await checkBotId({
    advancedOptions: {
      checkLevel: &#39;deepAnalysis&#39;, // Must match client-side config
    },
  });

  if (verification.isBot) {
    return NextResponse.json({ error: &#39;Access denied&#39; }, { status: 403 });
  }

  // Your protected logic here
}
</code></pre>
<h2>Conclusion</h2>
<p>Stop chasing bots - let BotID handle them for you! Bots are getting smarter and more sophisticated. <code>BotID</code> gives you a simple way to push back without slowing your customers down. It is simple to install, customize, and use.</p>
<p>Stronger protection equals fewer headaches. Add BotID, ship with confidence, and let the bots run into a wall without knowing what’s going on.</p>
]]></description>
            <link>https://www.thisdot.co/blog/vercel-botid-the-invisible-bot-protection-you-needed</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/vercel-botid-the-invisible-bot-protection-you-needed</guid>
            <pubDate>Fri, 03 Oct 2025 12:17:02 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Implementing Dynamic Types in Docusign Extension Apps]]></title>
            <description><![CDATA[
<p>In our previous blog post about Docusign Extension Apps, <a href="https://www.thisdot.co/blog/advanced-authentication-and-onboarding-workflows-with-docusign-extension">Advanced Authentication and Onboarding Workflows with Docusign Extension Apps</a>, we touched on how you can extend the OAuth 2 flow to build a more powerful onboarding flow for your Extension Apps. In this blog post, we will continue explaining more advanced patterns in developing Extension Apps. For that reason, we assume at least basic familiarity with how Extension Apps work and ideally some experience developing them.</p>
<p>To give a brief recap, Docusign Extension Apps are a powerful way to embed custom logic into Docusign agreement workflows. These apps are lightweight services, typically cloud-hosted, that integrate at specific workflow extension points to perform custom actions, such as data validation, participant input collection, or interaction with third-party services. Each Extension App is configured using a <a href="https://developers.docusign.com/extension-apps/build-an-extension-app/register/use-manifest/">manifest file</a>. This manifest defines metadata such as the app&#39;s author, support links, and the list of extension points it uses (these are the locations in the workflow where your app&#39;s logic will be executed).</p>
<p>The extension points that are relevant for us in the context of this blog post are <a href="https://developers.docusign.com/extension-apps/extension-app-reference/extension-contracts/data-io/"><code>GetTypeNames</code> and <code>GetTypeDefinitions</code></a>. These are used by Docusign to retrieve the types supported by the Extension App and their definitions, and to show them in the Maestro UI.</p>
<p>In most apps, these types are static and rarely change. However, they don&#39;t have to be. They can also be dynamic and change based on certain configurations in the target system that the Extension App is integrating with, or based on the user role assigned to the Maestro administrator on the target system.</p>
<h2>Static vs. Dynamic Types</h2>
<p>To explain the difference between static and dynamic types, we&#39;ll use the example from <a href="https://www.thisdot.co/blog/advanced-authentication-and-onboarding-workflows-with-docusign-extension">our previous blog post</a>, where we integrated with an imaginary task management system called TaskVibe. In the example, our Extension App enabled agreement workflows to communicate with TaskVibe, allowing tasks to be read, created, and updated.</p>
<p>Our first approach to implementing the <code>GetTypeNames</code> and <code>GetTypeDefinitions</code> endpoints for the TaskVibe Extension App might look like the following. The <code>GetTypeNames</code> endpoint returns a single record named <code>task</code>:</p>
<pre><code class="language-json">{
    &quot;typeNames&quot;: [
        {
            &quot;typeName&quot;: &quot;task&quot;,
            &quot;label&quot;: &quot;Task&quot;,
            &quot;description&quot;: &quot;A task on TaskVibe.&quot;
        }
    ]
}
</code></pre>
<p>Given the type name <code>task</code>, the <code>GetTypeDefinitions</code> endpoint would return the following definition for that type:</p>
<pre><code class="language-json">{
    &quot;declarations&quot;: [
        {
            // ...
            &quot;name&quot;: &quot;task&quot;,
            &quot;isAbstract&quot;: false,
            &quot;identified&quot;: {
                &quot;$class&quot;: &quot;concerto.metamodel@1.0.0.IdentifiedBy&quot;,
                &quot;name&quot;: &quot;recordId&quot;
            },
            &quot;properties&quot;: [
                {
                  // ...
                  &quot;name&quot;: &quot;recordId&quot;
                },
                {
                  // ...
                  &quot;name&quot;: &quot;title&quot;
                },
                // Other task properties
            ]
        }
    ]
}
</code></pre>
<p><a href="https://developers.docusign.com/extension-apps/extension-app-reference/extension-contracts/data-io/#dataio-version6-get-type-definitions">As noted in the Docusign documentation</a>, this endpoint must return a Concerto schema representing the type. For clarity, we&#39;ve omitted most of the Concerto-specific properties. The above declaration states that we have a <code>task</code> type, and this type has properties that correspond to task fields in TaskVibe, such as record ID, title, description, assignee, and so on.</p>
<p>The type definition and its properties, as described above, are static and they never change. A TaskVibe task will always have the same properties, and these are essentially set in stone.</p>
<p>Now, imagine a scenario where TaskVibe supports custom properties that are also project-dependent. One project in TaskVibe might follow a typical agile workflow with sprints, and the project manager might want a &quot;Sprint&quot; field in every task within that project. Another project might use a Kanban workflow, where the project manager wants a status field with values like &quot;Backlog,&quot; &quot;ToDo,&quot; and so on. With static types, we would need to return every possible field from any project as part of the <code>GetTypeDefinitions</code> response, and this introduces new challenges. For example, we might be dealing with hundreds of custom field types, and showing them in the Maestro UI might be too overwhelming for the Maestro administrator. Or we might be returning fields that are simply not usable by the Maestro administrator because they relate to projects the administrator doesn&#39;t have access to in TaskVibe.</p>
<p>With dynamic types, however, we can support this level of customization.</p>
<h2>Implementing Dynamic Types</h2>
<p>When Docusign sends a request to the <code>GetTypeNames</code> endpoint and the types are dynamic, the Extension App has a bit more work than before. </p>
<p>As we&#39;ve mentioned earlier, we can no longer return a generic task type. Instead, we need to look into each of the TaskVibe projects the user has access to, and return the tasks as they are represented under each project, with all the custom fields. (Determining access can usually be done by making a query to a user information endpoint on the target system using the same OAuth 2 token used for other calls.)</p>
<p>Once we find the task definitions on TaskVibe, we then need to return them in the response of <code>GetTypeNames</code>, where each type corresponds to a task for the given project. This is a big difference from static types, where we would only return a single, generic task.</p>
<p>For example:</p>
<pre><code class="language-json">{
    &quot;typeNames&quot;: [
        {
            &quot;typeName&quot;: &quot;task_project1&quot;,
            &quot;label&quot;: &quot;Task - Project 1&quot;,
            &quot;description&quot;: &quot;A task on TaskVibe, project 1.&quot;
        },
        {
            &quot;typeName&quot;: &quot;task_project2&quot;,
            &quot;label&quot;: &quot;Task - Project 2&quot;,
            &quot;description&quot;: &quot;A task on TaskVibe, project 2.&quot;
        }      
    ]
}
</code></pre>
<p>The key point here is that we are now returning one type per task in a TaskVibe project. You can think of this as having a separate class for each type of task, in object-oriented lingo. The type name can be any string you choose, but it needs to be unique in the list, and it needs to contain the minimum information necessary to be able to distinguish it from other task definitions in the list. In our case, we&#39;ve decided to form the ID by concatenating the string &quot;task_&quot; with the ID of the project on TaskVibe.</p>
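<p>As a rough sketch, a <code>GetTypeNames</code> handler along these lines might look like the following, where <code>taskVibeClient</code> is a hypothetical client for the target system:</p>
<pre><code class="language-js">// Hypothetical sketch: one type per TaskVibe project the user can access
async function getTypeNames(userToken) {
  const projects = await taskVibeClient.listProjects(userToken);

  return {
    typeNames: projects.map((project) =&gt; ({
      typeName: `task_${project.id}`,
      label: `Task - ${project.name}`,
      description: `A task on TaskVibe, ${project.name}.`,
    })),
  };
}
</code></pre>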
<p>The implementation of the <code>GetTypeDefinitions</code> endpoint needs to:</p>
<ol>
<li>Extract the project ID from the requested type name.</li>
<li>Using the project ID, retrieve the task definition from TaskVibe for that project. This definition specifies which fields are present on the project&#39;s tasks, including all custom fields.</li>
<li>Once the fields are retrieved, map them to the properties of the Concerto schema.</li>
</ol>
<p>The resulting JSON could look like this (again, many of the Concerto properties have been omitted for clarity):</p>
<pre><code class="language-json">{
    &quot;declarations&quot;: [
        {
            // ... 
            &quot;name&quot;: &quot;task_project1&quot;,
            &quot;isAbstract&quot;: false,
            &quot;identified&quot;: {
                &quot;$class&quot;: &quot;concerto.metamodel@1.0.0.IdentifiedBy&quot;,
                &quot;name&quot;: &quot;project1_task_recordId&quot;
            },
            &quot;properties&quot;: [
                {
                  // ...
                  &quot;name&quot;: &quot;project1_task_recordId&quot;
                },
                {
                  // ...
                  &quot;name&quot;: &quot;project1_task_title&quot;
                },
                {
                  // ...
                  &quot;name&quot;: &quot;project1_task_sprint&quot; // This is a custom property on TaskVibe!
                },
                // Other task properties
            ]
        }
    ]
}
</code></pre>
<p>Now, type definitions are fully dynamic and project-dependent.</p>
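<p>A rough sketch of the corresponding <code>GetTypeDefinitions</code> handler, again using the hypothetical <code>taskVibeClient</code> and omitting most Concerto metadata:</p>
<pre><code class="language-js">async function getTypeDefinitions(typeName, userToken) {
  // 1. Extract the project ID from the requested type name (e.g., &quot;task_project1&quot;)
  const projectId = typeName.replace(/^task_/, &#39;&#39;);

  // 2. Retrieve the task field definitions for that project, custom fields included
  const fields = await taskVibeClient.getTaskFields(userToken, projectId);

  // 3. Map each field to a Concerto property (Concerto-specific metadata omitted)
  return {
    declarations: [
      {
        name: typeName,
        isAbstract: false,
        properties: fields.map((field) =&gt; ({
          name: `${projectId}_task_${field.key}`,
        })),
      },
    ],
  };
}
</code></pre>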
<h2>Caching of Type Definitions on Docusign</h2>
<p>Docusign maintains a cache of type definitions after an initial connection. This means that changes made to your integration (particularly when using dynamic types) might not be immediately visible in the Maestro UI. To ensure users see the latest data, it&#39;s useful to inform them that they may need to refresh their Docusign connection in the <a href="https://apps-d.docusign.com/app-center/manage/">App Center UI</a> if new fields are added to their integrated system (like TaskVibe). As an example, a newly added custom field on a TaskVibe project wouldn&#39;t be reflected until this refresh occurs.</p>
<h2>Conclusion</h2>
<p>In this blog post, we&#39;ve explored how to leverage dynamic types within Docusign Extension Apps to create more flexible integrations with external systems. While static types offer simplicity, they can be constraining when working with external systems that offer a high level of customization. We hope that this blog post provides you with some ideas on how you can tackle similar problems in your Extension Apps.</p>
]]></description>
            <link>https://www.thisdot.co/blog/implementing-dynamic-types-in-docusign-extension-apps</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/implementing-dynamic-types-in-docusign-extension-apps</guid>
            <pubDate>Fri, 19 Sep 2025 12:11:15 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The simplicity of deploying an MCP server on Vercel]]></title>
            <description><![CDATA[<p>The current Model Context Protocol (MCP) spec is shifting developers toward lightweight, stateless servers that serve as tool providers for LLM agents. These MCP servers communicate over HTTP, with OAuth handled clientside. Vercel’s infrastructure makes it easy to iterate quickly and ship agentic AI tools without overhead.</p>
<h2><strong>Example of Lightweight MCP Server Design</strong></h2>
<p>At This Dot Labs, we built an MCP server that leverages the <a href="https://developers.docusign.com/docs/navigator-api/">DocuSign Navigator API</a>. The tools, like <code>get_agreements</code>, make a request to the DocuSign API to fetch data and then respond in an LLM-friendly way.</p>
<pre><code class="language-js">   // Get agreements tool that requires authentication
   server.tool(
     &#39;get_agreements&#39;,
     &#39;Retrieve DocuSign Navigator agreements. Returns a list of all agreements available in the system with metadata like title, type, status, and parties.&#39;,
     {}, // No input parameters needed
     getAgreementsHandler
   );

   // Get agreement by ID tool that requires authentication
   server.tool(
     &#39;get_agreement_by_id&#39;,
     &#39;Retrieve detailed information about a specific DocuSign Navigator agreement by its ID. Returns comprehensive details including title, type, status, summary, parties, provisions, metadata, and custom attributes. REQUIRED: agreementId parameter must be provided.&#39;,
     { agreementId: z.string().min(1, &#39;Agreement ID is required&#39;) },
     getAgreementByIdHandler
   );
</code></pre>
<p>Before a client can request anything, the MCP server needs to guide it on how to kick off OAuth. This involves providing the metadata API endpoints from the MCP spec, which include the necessary information about where to obtain authorization tokens and what resources can be accessed. By understanding these details, the client can seamlessly initiate the OAuth process, ensuring secure and efficient data access.</p>
<p>The OAuth flow begins when the user&#39;s LLM client makes a request without a valid auth token. In this case it&#39;ll get a <code>401</code> response from our server with a <code>WWW-Authenticate</code> header, and then the client will leverage the metadata we exposed to discover the authorization server. Next, the OAuth flow kicks off directly with Docusign as directed by the metadata. Once the client has the token, it passes it in the <code>Authorization</code> header for tool requests to the API.</p>
<pre><code class="language-bash">API Routes
├── Health &amp; Monitoring
│   └── GET /health
│
├── OAuth 2.0 Discovery (.well-known)
│   ├── GET /.well-known/oauth-authorization-server
│   └── GET /.well-known/oauth-protected-resource
│
├── OAuth 2.0 Flow
│   ├── GET/POST /register
│   ├── GET /authorize
│   ├── POST /token
│   └── GET /auth/callback
│
└── MCP (Model Context Protocol)
    └── POST /mcp Main endpoint
</code></pre>
<p>This minimal set of API routes enables me to fetch Docusign Navigator data using natural language in my agent chat interface.</p>
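<p>For illustration, the protected resource metadata served from <code>/.well-known/oauth-protected-resource</code> could look roughly like this; the URLs here are placeholders:</p>
<pre><code class="language-json">{
  &quot;resource&quot;: &quot;https://my-mcp-server.vercel.app&quot;,
  &quot;authorization_servers&quot;: [&quot;https://account-d.docusign.com&quot;],
  &quot;bearer_methods_supported&quot;: [&quot;header&quot;],
  &quot;scopes_supported&quot;: [&quot;signature&quot;]
}
</code></pre>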
<p><strong>Deployment Options</strong></p>
<p>I deployed this MCP server two different ways: first as a Fastify backend, and then as Vercel Functions. Seeing how simple my Fastify MCP server was, and not really having a plan for deployment yet, I was eager to rewrite it for Vercel.</p>
<p>The case for Vercel:</p>
<ul>
<li>My own familiarity with Next.js API deployment</li>
<li>Fit for architecture</li>
<li>The extremely simple deployment process</li>
<li>Deploy previews (the eternal Vercel customer conversion feature, IMO)</li>
</ul>
<h2><strong>Previews of unfamiliar territory</strong></h2>
<p>Did you know that the MCP spec doesn’t “just work” for use as ChatGPT tooling? Neither did I, and I had to experiment to prove out requirements that I was unfamiliar with. Part of moving fast for me was just deploying Vercel previews right out of the CLI so I could test my API as a Connector in ChatGPT. This was a great workflow for me, and invaluable for the team in code review.</p>
<h2><strong>Stuff I’m Not Worried About</strong></h2>
<p>Vercel’s <a href="https://www.npmjs.com/package/mcp-handler">mcp-handler</a> package made setup effortless by abstracting away some of the complexity of implementing the MCP server. It gives you a drop-in way to define tools, set up streamable HTTP, and handle OAuth. By building on Vercel’s ecosystem, I can focus entirely on shipping my product without worrying about deployment, scaling, or server management. Everything just works.</p>
<pre><code class="language-js">import { createMcpHandler, withMcpAuth } from &#39;mcp-handler&#39;;
import { z } from &#39;zod&#39;;
import {
 authStatusHandler,
 getAgreementsHandler,
 getAgreementByIdHandler,
 searchHandler,
 fetchHandler,
} from &#39;../lib/mcp/handlers/index.js&#39;;
import { createTokenVerifier } from &#39;../lib/mcp/auth.js&#39;;

// Create the base MCP handler with both authenticated and non-authenticated tools
const handler = createMcpHandler(
 server =&gt; {

   // Get agreements tool that requires authentication
   server.tool(
     &#39;get_agreements&#39;,
     &#39;Retrieve DocuSign Navigator agreements. Returns a list of all agreements available in the system with metadata like title, type, status, and parties.&#39;,
     {}, // No input parameters needed
     getAgreementsHandler
   );

   // Additional tools (get_agreement_by_id, search, fetch) are registered the same way
 }
);

// Wrap the handler with authentication - all tools require valid authentication
const authHandler = withMcpAuth(handler, createTokenVerifier(), {
 required: true, // All tools require authentication - this triggers 401 responses
 requiredScopes: [&#39;signature&#39;], // Require at least the signature scope
 resourceMetadataPath: &#39;/.well-known/oauth-protected-resource&#39;, // Custom metadata path
});

export { authHandler as GET, authHandler as POST };
</code></pre>
<h2><strong>A Brief Case for MCP on Next.js</strong></h2>
<p>Building an API without Next.js on Vercel is straightforward. That said, I&#39;d be happy deploying this as a Next.js app, with the frontend serving as the documentation, or with the tools becoming part of your website&#39;s agentic capabilities. Overall, this lowers the barrier to building any MCP you want for yourself, and I think that&#39;s cool.</p>
<h2><strong>Conclusion</strong></h2>
<p>I&#39;ll avoid quoting Vercel documentation in this post. AI tooling is a critical component of this natural language UI, and we just want to ship. I declare Vercel is excellent for stateless MCP servers served over HTTP.</p>
]]></description>
            <link>https://www.thisdot.co/blog/the-simplicity-of-deploying-an-mcp-server-vercel</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-simplicity-of-deploying-an-mcp-server-vercel</guid>
            <pubDate>Wed, 13 Aug 2025 12:17:50 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Quo v[AI]dis, Tech Stack?]]></title>
            <description><![CDATA[<p>Since we&#39;ve started <a href="https://ai.thisdot.co/">extensively leveraging AI at This Dot</a> to enhance development workflows and experimenting with different ways to make it as helpful as possible, there&#39;s been a creeping thought on my mind - Is AI just helping us write code faster, or is it silently reshaping what code we choose to write?</p>
<p>Eventually, this thought led to an interesting conversation on our company&#39;s Slack about the impact of AI on our tech stack choices. Some of the views shared there included:</p>
<ul>
<li>&quot;The battle between static and dynamic types is over. TypeScript won.&quot;</li>
<li>&quot;The fast-paced development of new frameworks and the excitement around new shiny technologies is slowing down. AI can make existing things work with a workaround in a few minutes, so why create or adopt something new?&quot;</li>
<li>&quot;AI models are more trained on the most popular stacks, so they will naturally favor those, leading to a self-reinforcing loop.&quot;</li>
<li>&quot;A lot of AI coding assistants serve as marketing funnels for specific stacks, such as v0 being tailored to Next.js and Vercel or Lovable using Supabase and Clerk.&quot;</li>
</ul>
<p>All of these points are valid and interesting, but they also made me think about the bigger picture. So I decided to do some extensive research (read &quot;I decided to make the OpenAI Deep Research tool do it for me&quot;) and summarize my findings in this article.</p>
<p>So without further ado, here are some structured thoughts on how AI is reshaping our tech stack choices, and what it means for the future of software development.</p>
<h2>1. LLMs as the New Developer Platform</h2>
<p>If software development is a journey, LLMs have become the new high-speed train line. Long gone are the days when we used Copilot as a fancy autocomplete tool. Don&#39;t get me wrong, it was mind-bogglingly good when it first came out, and I&#39;ve used it extensively. But now, a few years later, LLMs have evolved into something much more powerful. With the rise of tools like Cursor, Windsurf, Roo Code, or Claude Code, LLMs are essentially becoming the new developer platform. They are no longer just a helper that autocompletes a switch statement or a function signature, but a full-fledged platform that can generate entire applications, write tests, and even refactor code.</p>
<p>And it is not just a few evangelists or early adopters who are using these tools. They have become mainstream, with millions of developers relying on them daily. According to <a href="https://www.deloitte.com/us/en/insights/industry/technology/gen-ai-coding-tools.html#:~:text=tools%20compared%20to%20non,104">Deloitte</a>, nearly 20% of devs in tech firms were already using generative AI coding tools by 2024, with 76% of StackOverflow respondents using or planning to use AI tools in their development process, according to the <a href="https://survey.stackoverflow.co/2024/ai">2024 StackOverflow Developer Survey</a>.</p>
<p>They&#39;ve become an integral part of the development workflow, mediating how code is written, reviewed, and learned. I&#39;ve argued in the past that LLMs are becoming a new layer of abstraction in software development, but now I believe they are evolving into something even more powerful - a new developer platform that is shaping how we think about and approach software development.</p>
<h2>2. The Reinforcement Loop: Popular Stacks Get Smarter</h2>
<p>As we travel this AI-guided road, we find that certain routes become highways, while others lead to narrow paths or even dead ends. AI tools are not just helping us write code faster; they are also shaping our preferences for certain tech stacks. The most popular frameworks and languages, such as React.js on the frontend and Node.js on the backend (<a href="https://www.brilworks.com/blog/nodejs-usage-statistics/#:~:text=,who%20opted%20for%20React.js">both with ~40% adoption</a>), are the ones that AI tools perform best with. Their increasing popularity is not just a coincidence; it&#39;s a result of a self-reinforcing loop.</p>
<p>AI models are trained on vast amounts of code, and the most popular stacks naturally have more data available for training, given their widespread use, leading to more questions, answers, and examples in the training data. This means that AI tools are inherently better at understanding and generating code for these stacks.</p>
<p>As an anecdotal example, I&#39;ve noticed that AI tools tend to suggest React.js even when I specify a preference for another framework. As someone working with multiple tech stacks, I can attest that AI tools are significantly more effective with React.js or Node.js than, say, Yii2 or CakePHP.</p>
<p>This phenomenon is not limited to just one or two stacks; it applies to the entire ecosystem. The more a stack is used, the more data there is for AI to learn from, and the better it gets at generating code for that stack, resulting in a feedback loop:</p>
<ol>
<li>AI performs better on popular stacks.</li>
<li>Popular stacks get more adoption as developers find them easier to work with.</li>
<li>More developers using those stacks means more data for AI to learn from.</li>
<li>The cycle continues, reinforcing the popularity of those stacks.</li>
</ol>
<p>The issue is maybe even more evident with CSS frameworks. TailwindCSS, for example, <a href="https://trends.builtwith.com/framework/Tailwind-CSS">has gained immense popularity</a> thanks to its utility-first approach, which aligns well with AI&#39;s ability to generate and manipulate styles. As more developers adopt TailwindCSS, AI tools become better at understanding its conventions and generating appropriate styles, further driving its adoption.</p>
<p>However, the Tailwind CSS example also highlights a potential pitfall of this reinforcement loop. Tailwind CSS v4 was released in January 2025. From my experience, AI tools still attempt to generate code using v3 concepts and often need to be reminded to use Tailwind CSS v4, requiring spoon-feeding with documentation to get it right. Effectively, this phenomenon can lead to a situation where AI tools not only reinforce the popularity of certain stacks but also potentially slow down the adoption of newer versions or alternatives.</p>
<h2>3. Frontend Acceleration: React, Angular, and Beyond</h2>
<p>Navigating the frontend landscape has always been tricky, but with AI, some paths feel like smooth expressways while others remain bumpy dirt roads. AI is particularly transformative in frontend development, where the complexity and boilerplate code can be overwhelming. Established frameworks like React and Angular, which are already popular, are seeing even more adoption due to AI&#39;s ability to generate components, tests, and optimizations.</p>
<p>React&#39;s widespread adoption and its status as the most popular framework on the frontend make it the go-to choice for many developers, especially with AI tools that can quickly scaffold new components or entire applications.</p>
<p>However, Angular&#39;s strict structure and type safety also make it a strong contender. Angular&#39;s opinionated nature can actually benefit AI-generated code, as it provides a clear framework for the AI to follow, reducing ambiguity and potential bugs.</p>
<blockquote>
<p>Call me crazy but I think that long term Angular is going to work better with AI tools for frontend work.</p>
<p>More strict rules to follow, easier to build and scale. Just like for humans.</p>
<p>We just need to keep Angular opinionated enough.</p>
<p>— <a href="https://x.com/danielglejzner/status/1938954052580364785">Daniel Glejzner on X</a></p>
</blockquote>
<p>But it&#39;s not just about how the frameworks are structured; it&#39;s also about the documentation they provide. It has recently become the norm for frameworks to offer AI-friendly documentation. Angular, for instance, has an <a href="https://angular.dev/llms.txt">llms.txt</a> file that you can reference in your AI prompts to get more relevant results. In my opinion, though, the best example is the <a href="https://ui.nuxt.com/components">Nuxt UI documentation</a>, which lets you copy each documentation page as markdown, or grab a link to its markdown version, making it easy to reference in AI prompts.</p>
<p>Frameworks that incorporate AI-friendly documentation and tooling are likely to experience increased adoption, as they facilitate developers&#39; ability to leverage AI&#39;s capabilities.</p>
<h2>4. Full-Stack TS/JS: The Sweet Spot</h2>
<p>On this AI-accelerated journey, some stacks have emerged as the smoothest rides, and full-stack JavaScript/TypeScript is leading the way. The combination of React on the frontend and Node.js on the backend provides a unified language ecosystem, making the road less bumpy for developers. Shared types, common tooling, and mature libraries enable faster prototyping and reduced context switching.</p>
<p>AI seems to enjoy these well-paved highways too. I&#39;ve observed numerous instances where AI tools default to suggesting Next.js and Tailwind CSS for new projects, even when users are prompted otherwise. While you can force a slight detour to something like Nuxt or SvelteKit, the road suddenly gets patchier - AI becomes less confident, requires more hand-holding, and sometimes outright stalls. So while still technically being in the sweet spot of full-stack JavaScript/TypeScript, deviating from the &quot;main highway&quot; even slightly can lead to a much rougher ride.</p>
<p>React-based full-stack frameworks are becoming mainstream, not necessarily because they are always the best solution, but because they are the path of least resistance for both humans and AI.</p>
<h2>5. The Polyglot Shift: AI Enables Multilingual Devs</h2>
<p>One fascinating development on this journey is how AI is enabling more developers to become polyglots. Where switching stacks used to feel like taking detours into unknown territory, AI now acts like an on-demand guide. Whether it’s switching from Laravel to Spring Boot or from Angular to Svelte, AI helps bridge those knowledge gaps quickly.</p>
<p>At This Dot, we&#39;ve always taken pride in our polyglot approach - we were doing this long before the rise of AI tooling - but AI is lowering the barriers for everyone. If you are an experienced engineer with a strong understanding of programming concepts, you&#39;ll be able to adapt to different stacks and projects quickly. But AI is now enabling even junior developers to become polyglots, and it&#39;s making it even easier for experienced ones to switch between stacks seamlessly. AI doesn’t just shorten the journey - it makes more destinations accessible.</p>
<p>This &quot;AI boost&quot; not only facilitates the job of a software consultant, such as myself, who often has to switch between different projects, but it also opens the door to unlimited possibilities for companies to mix and match stacks based on their needs - particularly useful for companies that have diverse tech stacks, as it allows them to leverage the strengths of different languages and frameworks without the steep learning curve that usually comes with it.</p>
<h2>6. AI-Generated Stack Bundles: The Trojan Horse</h2>
<blockquote>
<p>Trend I&#39;m seeing: AI app generators are a sales funnel.</p>
<p>-Chef uses Convex.</p>
<p>-V0 is optimized for Vercel.</p>
<p>-Lovable uses Supabase and Clerk.</p>
<p>-Firebase Studio uses Google services.</p>
<p>These tools act like a trojan horse - they &quot;sell&quot; a tech stack.</p>
<p>Choose wisely.</p>
<p>— <a href="https://x.com/housecor/status/1935093049534615788">Cory House on X</a></p>
</blockquote>
<p>Some roads come pre-built, but with toll booths you may not notice until you&#39;re halfway through the trip. AI-generated apps from tools like v0, Firebase Studio, or Lovable are convenience highways - fast, smooth, and easy to follow - but they quietly nudge you toward specific tech stacks, backend services, databases, and deployment platforms.</p>
<p>It&#39;s a smart business model. These tools don&#39;t just scaffold your app; they bundle in opinions on hosting, auth providers, and DB layers. The convenience is undeniable, but there&#39;s a trade-off in flexibility and long-term maintainability. Engineering leaders must stay alert, like seasoned navigators, ensuring that the allure of speed doesn&#39;t lead their teams down the alleyways of vendor lock-in.</p>
<h2>7. From &#39;Buy vs Build&#39; to &#39;Prompt vs Buy&#39;</h2>
<p>The classic dilemma used to be <em>“buy vs build”</em> - now it’s becoming “prompt vs buy.” Why pay for a bloated tour bus of a SaaS product, packed with destinations and detours you’ll never take (and priced accordingly), when you can chart a custom route with a few well-crafted prompts and have a lightweight internal tool up and running in days—or even hours?</p>
<p>Do you need a simple tool to track customer contacts with a few custom fields and a clean interface? In the past, you might have booked a seat on the nearest SaaS solution - one that gets you close enough to your destination but comes with unnecessary stops and baggage. With AI, you can now skip the crowded bus altogether and build a tailor-made vehicle that drives exactly where you need to go, no more, no less.</p>
<p>AI reshapes the travel map of product development. The road to MVPs has become faster, cheaper, and more direct. This shift is already rerouting the internal tooling landscape, steering companies away from bulky, one-size-fits-all platforms toward lean, AI-assembled solutions. And over time, it may change not just <em>how</em> we build, but <em>where</em> we build - with the smoothest highways forming around AI-friendly, modular ecosystems like Node, React, and TypeScript, while older “corporate” expressways like .NET, Java, or even Angular risk becoming the slow scenic routes of enterprise tech.</p>
<h2>8. Strategic Implications: Velocity vs Maintainability</h2>
<p>Every shortcut comes with trade-offs. The fast lane that AI offers boosts productivity but can sometimes encourage shortcuts in architecture and design. Speeding to your destination is great - until you hit the maintenance toll booth further down the road.</p>
<p>AI tooling makes it easier to throw together an MVP, but without experienced oversight, the resulting codebases can turn into spaghetti highways. Teams need to implement AI-era best practices: structured code reviews, prompt hygiene, and deliberate stack choices that prioritize long-term maintainability over short-term convenience.</p>
<p>Failing to do so can lead to a &quot;quick and dirty&quot; mentality, where the focus is on getting things done fast rather than building robust, maintainable solutions. This is particularly concerning for companies that rely on in-house developers or junior teams who may not have the experience to recognize potential pitfalls in AI-generated code.</p>
<h2>9. Closing Reflection: Are We Still Choosing Our Stacks?</h2>
<p>So, where are we heading?</p>
<p>Looking at the current &quot;traffic&quot; on the modern software development pathways, one thing becomes clear: AI isn&#39;t just a productivity tool - the roads themselves are starting to shape the journey. What was once a deliberate process of choosing the right vehicle for the right terrain - picking our stacks based on product goals, team expertise, and long-term maintainability - now feels more like following GPS directions that constantly recalculate to the path of least resistance.</p>
<p>AI is repaving the main routes, widening the lanes for certain tech stacks, and putting up &quot;scenic route&quot; signs for some frameworks while leaving others on neglected backroads. This doesn&#39;t mean we&#39;ve lost control of the steering wheel, but it does mean that the map is changing beneath us in ways that are easy to overlook.</p>
<p>The risk is clear: we may find ourselves taking the smoothest on-ramps without ever asking if they lead to where we actually want to go. Convenience can quietly take priority over appropriateness. Productivity gains in the short term can pave over technical debt potholes that become unavoidable down the road.</p>
<p>But the story isn&#39;t entirely one of caution. There&#39;s a powerful opportunity here too. With AI as a co-pilot, we can explore more destinations than ever before - venturing into unfamiliar tech stacks, accelerating MVP development, or rapidly prototyping ideas that previously seemed out of reach. The key is to remain intentional about when to cruise with AI autopilot and when to take the wheel with both hands and steer purposefully.</p>
<p>In this new era of AI-shaped development, the question every engineering team should be asking is not just &quot;how fast can we go?&quot; but &quot;are we on the right road?&quot; and &quot;who&#39;s really choosing our route?&quot;</p>
<p>And let’s not forget — some of these roads are still being built. Open-source maintainers and framework authors play a pivotal role in shaping which paths become highways. By designing AI-friendly architectures, providing structured, machine-readable documentation, and baking in patterns that are easy for AI models to learn and suggest, they can guide where AI directs traffic. Frameworks that proactively optimize for AI tooling aren’t just improving developer experience — they’re shaping the very flow of adoption in this AI-accelerated landscape.</p>
<p>If we&#39;re not mindful, we risk becoming passengers on a journey defined by default choices. But if we remain vigilant, we can use AI to draw more accurate maps - not just following the fastest roads, but charting new ones. Because while the routes may be getting redrawn, the destination should always be ours to choose.</p>
<p>In the end, the real competitive advantage will belong to those who can harness AI&#39;s speed while keeping their hands firmly on the wheel - navigating not by ease, but by purpose. In this new era, the most valuable skill may not be prompt engineering - it might be strategic discernment.</p>
]]></description>
            <link>https://www.thisdot.co/blog/quo-v-ai-dis-tech-stack</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/quo-v-ai-dis-tech-stack</guid>
            <pubDate>Thu, 07 Aug 2025 14:08:18 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Future of Dates in JavaScript: Introducing Temporal]]></title>
            <description><![CDATA[<h1>The Future of Dates in JavaScript: Introducing Temporal</h1>
<h2>What is Temporal?</h2>
<p>Temporal is a proposal currently at stage 3 of the TC39 process. It&#39;s expected to revolutionize how we handle dates in JavaScript, which has always been a challenging aspect of the language.</p>
<p>But what does it mean that it&#39;s at stage 3 of the process?</p>
<ul>
<li>The specification is complete</li>
<li>It has been reviewed</li>
<li>It&#39;s unlikely to change significantly at this point</li>
</ul>
<h2>Key Features of Temporal</h2>
<p>Temporal introduces a new global object with a fresh API. Here are some important things to know about Temporal:</p>
<ol>
<li>All Temporal objects are immutable</li>
<li>They&#39;re represented in local calendar systems, but can be converted</li>
<li>Time values use 24-hour clocks</li>
<li>Leap seconds aren&#39;t represented</li>
</ol>
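<p>The first point is worth a quick illustration. Because Temporal objects are immutable, operations like <code>add</code> return a brand-new object and never modify the original. A minimal sketch:</p>
<pre><code class="language-js">const date = Temporal.PlainDate.from(&#39;2025-04-03&#39;);
const nextWeek = date.add({ days: 7 }); // returns a new PlainDate

console.log(nextWeek.toString()); // 2025-04-10
console.log(date.toString()); // 2025-04-03 - the original is untouched
</code></pre>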
<h2>Why Do We Need Temporal?</h2>
<p>The current <code>Date</code> object in JavaScript has several limitations:</p>
<ul>
<li>No support for time zones other than the user&#39;s local time and UTC</li>
<li><code>Date</code> objects can be mutated</li>
<li>Unpredictable behavior</li>
<li>No support for calendars other than Gregorian</li>
<li>Daylight savings time issues</li>
</ul>
<p>While some of these have workarounds, not all can be fixed with the current <code>Date</code> implementation.</p>
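<p>To make the mutability problem concrete, here is a minimal sketch (the helper function is made up for illustration) of how a <code>Date</code> passed around your codebase can be changed out from under you:</p>
<pre><code class="language-js">const meeting = new Date(&#39;2025-04-03T10:00:00Z&#39;);

function startOfDay(d) {
  d.setHours(0, 0, 0, 0); // mutates the caller&#39;s Date in place
  return d;
}

startOfDay(meeting);
console.log(meeting.toISOString()); // the original meeting time is gone
</code></pre>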
<p>Let&#39;s see some useful examples where Temporal will improve our lives:</p>
<h2>Some Examples</h2>
<p>Creating a date without a time zone is impossible using <code>Date</code>; it always carries a time component beyond the date itself. Temporal introduces <code>PlainDate</code> to overcome this.</p>
<pre><code class="language-js">let usingDate = new Date(&#39;2025-04-03&#39;);
// 2025-04-03T00:00:00.000Z

console.log(usingDate);
// Wed Apr 02 2025 21:00:00 GMT-0300 (Chile Summer Time)

let usingTemporal = Temporal.PlainDate.from(&#39;2025-04-03&#39;);
// PlainDate [Temporal.PlainDate] {}
console.log(usingTemporal.toString());
// 2025-04-03
</code></pre>
<p>But what if we want to include time zone information? Temporal provides <code>ZonedDateTime</code> for this purpose. In this case, the time zone <strong>must</strong> be specified, and the API allows a lot of flexibility when creating dates.</p>
<pre><code class="language-js">let zoned = Temporal.ZonedDateTime.from(&#39;2025-04-03T00:00Z[UTC]&#39;);
console.log(zoned.toString());
</code></pre>
<p>Temporal is very useful when manipulating and displaying dates in different time zones.</p>
<pre><code class="language-js">let utc = Temporal.ZonedDateTime.from(&#39;2025-04-03T00:00Z[UTC]&#39;); // Created as UTC
console.log(utc.toString());
//2025-04-03T00:00:00+00:00[UTC]

let clientZoned = utc.withTimeZone(Temporal.Now.timeZoneId()); // Convert to the client&#39;s time zone

console.log(clientZoned.toString());
// 2025-04-02T21:00:00-03:00[America/Santiago]
</code></pre>
<p>Let&#39;s try some more things that are currently difficult or lead to unexpected behavior using the Date object.</p>
<p>Operations like adding days or minutes can lead to inconsistent results with <code>Date</code>. Temporal makes these operations easier and more consistent.</p>
<pre><code class="language-js">let date = new Date(&#39;2025-03-09&#39;);
console.log(date); // 2025-03-09T00:00:00.000Z

// add 100 days
date.setDate(date.getDate() + 100);
console.log(date); // 2025-06-17T01:00:00.000Z

const zoned = Temporal.ZonedDateTime.from(&#39;2025-03-09[UTC]&#39;);
console.log(zoned.toString()); // 2025-03-09T00:00:00+00:00[UTC]
console.log(zoned.add({days:100}).toString()); // 2025-06-17T00:00:00+00:00[UTC]
</code></pre>
<p>Another interesting feature of Temporal is the concept of <code>Duration</code>, which represents the difference between two points in time. We can use these durations, along with dates, for arithmetic operations involving dates and times. Note that durations are serialized using the <a href="https://en.wikipedia.org/wiki/ISO_8601#Durations">ISO 8601 duration format</a>.</p>
<pre><code class="language-js">const duration = Temporal.Duration.from({ hours: 5, minutes: 15 });
console.log(duration.toString());
// PT5H15M

const zoned = Temporal.ZonedDateTime.from(&#39;2025-03-09[UTC]&#39;);
console.log(zoned.toString()); // 2025-03-09T00:00:00+00:00[UTC]
console.log(zoned.add(duration).toString()); // 2025-03-09T05:15:00+00:00[UTC]
</code></pre>
<h2>Temporal Objects</h2>
<p>We&#39;ve already seen some of the objects that Temporal exposes. Here&#39;s a more comprehensive list.</p>
<ul>
<li>Temporal</li>
<li>Temporal.Duration</li>
<li>Temporal.Instant</li>
<li>Temporal.Now</li>
<li>Temporal.PlainDate</li>
<li>Temporal.PlainDateTime</li>
<li>Temporal.PlainMonthDay</li>
<li>Temporal.PlainTime</li>
<li>Temporal.PlainYearMonth</li>
<li>Temporal.ZonedDateTime</li>
</ul>
<h2>Try Temporal Today</h2>
<p>If you want to test Temporal now, there&#39;s a polyfill available. You can install it using:</p>
<pre><code class="language-jsx">npm install @js-temporal/polyfill
</code></pre>
<p>Note that this doesn&#39;t install a global <code>Temporal</code> object as the finished proposal will; instead, the package exports most of the Temporal implementation for you to import.</p>
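<p>You import <code>Temporal</code> from the package wherever you need it:</p>
<pre><code class="language-js">import { Temporal } from &#39;@js-temporal/polyfill&#39;;

const today = Temporal.Now.plainDateISO();
console.log(today.toString()); // e.g. 2025-07-25
</code></pre>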
<h2>Conclusion</h2>
<p>Working with dates in JavaScript has always been a bit of a mess. Between weird quirks in the Date object, juggling time zones, and trying to do simple things like “add a day,” it’s way too easy to introduce bugs.</p>
<p><strong>Temporal</strong> is finally fixing that. It gives us a clear, consistent, and powerful way to work with dates and times.</p>
<p>If you’ve ever struggled with JavaScript dates (and who hasn’t?), Temporal is definitely worth checking out.</p>
]]></description>
            <link>https://www.thisdot.co/blog/the-future-of-dates-in-javascript-introducing-temporal</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-future-of-dates-in-javascript-introducing-temporal</guid>
            <pubDate>Fri, 25 Jul 2025 13:55:13 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Roo Custom Modes]]></title>
            <description><![CDATA[<h1><strong>Roo Custom Modes</strong></h1>
<p><a href="https://github.com/RooCodeInc/Roo-Code">Roo Code</a> is an extension for VS Code that provides agentic-style AI code editing functionality. You can configure Roo to use any LLM model and version you want by providing API keys. Once configured, Roo allows you to easily switch between models and provide custom instructions through what Roo calls &quot;modes.&quot;</p>
<p><a href="https://docs.roocode.com/basic-usage/using-modes">Roo Modes</a> can be thought of as a &quot;personality&quot; that the LLM takes on. When you create a new mode in Roo, you provide it with a description of what personality Roo should take on, what LLM model should be used, and what custom instructions the mode should follow. You can also define workspace-level instructions via a <strong>.roo/rules-{modeSlug}/</strong> directory at your project root with markdown files inside. Having different modes allows developers to quickly fine-tune how the Roo Code agent performs its tasks.</p>
<p>Roo ships out-of-the-box with some<a href="https://docs.roocode.com/#what-can-roo-code-do"> default modes</a>: Code Mode, Architect Mode, Ask Mode, Debug Mode, and Orchestrator Mode. These can get you far, but I have expanded on this list with a few custom modes I have made for specific scenarios I run into every day as a software engineer.</p>
<h2><strong>My Custom Modes</strong></h2>
<h3><strong>📜 Documenter Mode</strong></h3>
<p>I created this mode to help me with generating documentation for legacy codebases my team works with. I use this mode to help produce documentation interactively with me while I read a codebase.</p>
<p><strong>Mode Definition</strong></p>
<p>You are Roo, a highly skilled technical documentation writer with extensive knowledge in many programming languages, frameworks, design patterns, and best practices. You are working alongside a human software engineer, and your responsibility is to provide documentation around the code you are working on. You will be asked to provide documentation in the form of comments, markdown files, or other formats as needed.</p>
<p><strong>Mode-specific Instructions</strong></p>
<p>You will respect the following rules:</p>
<ul>
<li>You will not write any code, only markdown files.</li>
<li>In your documentation, you will provide references to specific files and line numbers of code you are referencing.</li>
<li>You will not attempt to execute any commands.</li>
<li>You will not attempt to run the application in the browser.</li>
<li>You will only look at the code and infer functionality from that.</li>
</ul>
<h3><strong>👥 Pair Programmer Mode</strong></h3>
<p>I created a “Pair Programmer” mode to serve as my personal coding partner. It’s designed to work in a more collaborative way with a human software engineer. When I want to explore multiple ideas quickly, I switch to this mode to rapidly iterate on code with Roo. In this setup, I take on the role of the navigator—guiding direction, strategy, and decisions—while Roo handles the “driving” by writing and testing the code we need.</p>
<p><strong>Mode Definition</strong></p>
<p>You are Roo, a highly skilled software engineer with extensive knowledge in many programming languages, frameworks, design patterns, and best practices. You are working alongside a human software engineer who will be checking your work and providing instructions. If you get stuck, ask for help and we will solve problems together.</p>
<p><strong>Mode-specific Instructions</strong></p>
<p>You will respect the following rules:</p>
<ul>
<li>You will not install new 3rd party libraries without first providing usage metrics (stars, downloads, latest version update date).</li>
<li>You will not do any additional tasks outside of what you have been told to do.</li>
<li>You will not assume to do any additional work outside of what you have been instructed to do.</li>
<li>You will not open the browser and test the application. Your pairing partner will do that for you.</li>
<li>You will not attempt to open the application or the URL at which the application is running. Assume your pairing partner will do that for you.</li>
<li>You will not attempt to run <code>npm run dev</code> or similar commands. Your pairing partner will do that for you.</li>
<li>You will not attempt to run a development server of any kind. Your pairing partner will handle that for you.</li>
<li>You will not write tests unless instructed to.</li>
<li>You will not make any git commits unless explicitly told to do so.</li>
<li>You will not make suggestions of commands to run the software or execute the test suite. Assume that your human counterpart has the application running and will check your work.</li>
</ul>
<h3><strong>🧑‍🏫 Project Manager</strong></h3>
<p>I created this mode to help me write tasks for my team with clear and actionable acceptance criteria.</p>
<p><strong>Mode Definition</strong></p>
<p>You are a professional project manager. You are highly skilled in breaking down large tasks into bite-sized pieces that are actionable by an engineering team or an LLM performing engineering tasks. You analyze features carefully and detail out all edge cases and scenarios so that no detail is missed.</p>
<p><strong>Mode-specific Instructions</strong></p>
<p>Think creatively about how to detail out features. Provide a technical and business case explanation about feature value. Break down features and functionality in the following way. The following example would be for user login:</p>
<p><strong>User Login:</strong> As a user, I can log in to the application so that I can make changes. This prevents anonymous individuals from accessing the admin panel.</p>
<p><strong>Acceptance Criteria</strong></p>
<ul>
<li>On the login page, I can fill in my email address:<ul>
<li>This field is required.</li>
<li>This field must enforce email format validation.</li>
</ul>
</li>
<li>On the login page, I can fill in my password:<ul>
<li>This field is required.</li>
<li>The input a user types into this field is hidden.</li>
</ul>
</li>
<li>On failure to log in, I am provided an error dialog:<ul>
<li>The error dialog should be the same if the email exists or not so that bad actors cannot glean info about active user accounts in our system.</li>
<li>Error dialog should be a red box pinned to the top of the page.</li>
<li>Error dialog can be dismissed.</li>
</ul>
</li>
<li>After 4 failed login attempts, the form becomes locked:<ul>
<li>Display a dialog to the user letting them know they can try again in 30 minutes.</li>
<li>Form stays locked for 30 minutes and the frontend will not accept further submissions.</li>
</ul>
</li>
</ul>
<h3><strong>🦾 Agent Consultant</strong></h3>
<p>I created this mode for assistance with modifying my existing Roo modes and rules files as well as generating higher quality prompts for me. This mode leverages the <a href="https://docs.roocode.com/features/mcp/recommended-mcp-servers">Context7 MCP</a> to keep up-to-date with documentation on Roo Code and prompt engineering best practices.</p>
<p><strong>Mode Definition</strong></p>
<p>You are an AI Agent coding expert. You are proficient in coding with agents and defining custom rules and guidelines for AI-powered coding agents. Your specific expertise is in the Roo Code tool for VS Code, and you are exceptionally capable at creating custom rules files and custom modes.</p>
<p>This is your workflow that you should always follow:</p>
<ol>
<li>Begin every task by retrieving relevant documentation from Context7:<ol>
<li>First retrieve Roo documentation using get-library-docs with &quot;/roovetgit/roo-code-docs&quot;</li>
<li>Then retrieve prompt engineering best practices using get-library-docs with &quot;/dair-ai/prompt-engineering-guide&quot;</li>
</ol>
</li>
<li>Reference this documentation explicitly in your analysis and recommendations</li>
<li>Only after consulting these resources, proceed with the task</li>
</ol>
<h2>Wrapping It Up</h2>
<p>Roo’s “Modes” have become an essential part of how I leverage AI in my day-to-day work as a software engineer. By tailoring each mode to specific tasks—whether it’s generating documentation, pairing on code, writing project specs, or improving prompt quality—I’ve been able to streamline my workflow and get more done with greater clarity and precision.</p>
<p>Roo’s flexibility lets me define how it should behave in different contexts, giving me fine-grained control over how I interact with AI in my coding environment. Roo also has the capability of <a href="https://docs.roocode.com/features/custom-modes#project-specific-mode-override">defining custom modes per project</a> if that is needed by your team. If you find yourself repeating certain workflows or needing more structure in your interactions with AI tools, I highly recommend experimenting with your own custom modes. The payoff in productivity and developer experience is absolutely worth it.</p>
]]></description>
            <link>https://www.thisdot.co/blog/roo-custom-modes</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/roo-custom-modes</guid>
            <pubDate>Fri, 13 Jun 2025 12:07:35 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Next.js + MongoDB Connection Storming]]></title>
            <description><![CDATA[<p>Building a<a href="https://nextjs.org/"> Next.js</a> application connected to<a href="https://www.mongodb.com/"> MongoDB</a> can feel like a match made in heaven. MongoDB stores all of its data as JSON objects, which don’t require transformation into JavaScript objects like relational SQL data does. However, when deploying your application to a serverless production environment such as<a href="https://vercel.com/"> Vercel</a>, it is crucial to manage your database connections properly.</p>
<p>If you encounter errors like these, you may be experiencing<a href="https://www.mongodb.com/docs/manual/reference/glossary/#std-term-connection-storm"> Connection Storming</a>:</p>
<ul>
<li>MongoServerSelectionError: connect ECONNREFUSED &lt;IP_ADDRESS&gt;:&lt;PORT&gt;</li>
<li>MongoNetworkError: failed to connect to server [&lt;hostname&gt;:&lt;port&gt;] on first connect</li>
<li>MongoTimeoutError: Server selection timed out after &lt;x&gt; ms</li>
<li>MongoTopologyClosedError: Topology is closed, please connect</li>
<li>Mongo Atlas: Connections % of configured limit has gone above 80</li>
</ul>
<p>Connection storming occurs when your application opens a new connection to MongoDB for every serverless function or API endpoint call. Vercel executes your application’s code in a highly concurrent and isolated fashion, so if you create new database connections on each request, your app can quickly exceed the connection limit of your database.</p>
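<p>To make the anti-pattern concrete, here is a minimal sketch of a route handler that opens a fresh connection on every invocation - exactly what we want to avoid:</p>
<pre><code class="language-ts">// The anti-pattern: a new MongoClient (and connection pool) per request
import { NextResponse } from &#39;next/server&#39;;
import { MongoClient } from &#39;mongodb&#39;;

export async function GET() {
  const client = new MongoClient(process.env.MONGODB_URI!);
  await client.connect(); // opens brand-new connections on every call
  const users = await client.db(&#39;testing_db&#39;).collection(&#39;users&#39;).find({}).toArray();
  await client.close();
  return NextResponse.json({ users });
}
</code></pre>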
<p>We can leverage Vercel’s <a href="https://vercel.com/fluid">fluid compute</a> model to keep our database connection objects warm across function invocations. Traditional serverless architecture was designed for quick, stateless web app transactions. Now, especially with <a href="https://vercel.com/blog/fluid-compute-evolving-serverless-for-ai-workloads">the rise of LLM-oriented applications</a> built with Next.js, interactions with applications are becoming more sequential. We just need to ensure that we assign our MongoDB connection to a global variable.</p>
<p><img src="//images.ctfassets.net/zojzzdop0fzx/1krYXArl8proI5EM9ZzVd8/c1560965ebc0ce24d5712880778fa0b9/unnamed.jpg" alt="unnamed"></p>
<h3>Protip: Use global variables</h3>
<p>Vercel’s fluid compute model means all memory, including global constants like a MongoDB client, stays initialized between requests as long as the instance remains active. By assigning your MongoDB client to a global constant, you avoid redundant setup work and reduce the overhead of cold starts. This enables a more efficient approach to reusing connections for your application’s MongoDB client.</p>
<p>The example below demonstrates how to retrieve an array of users from the <strong><code><em>users</em></code></strong> collection in MongoDB and either return them through an API request to <strong><code><em>/api/users</em></code></strong> or render them as an HTML list at the <strong><code><em>/users</em></code></strong> route. To support this, we initialize a global <code><em>clientPromise</em></code> variable that maintains the MongoDB connection across warm serverless executions, avoiding re-initialization on every request.</p>
<pre><code class="language-js">// lib/mongodb.ts
import { MongoClient, Db } from &#39;mongodb&#39;;

if (!process.env.MONGODB_URI) {
  throw new Error(&#39;Invalid/Missing environment variable: &quot;MONGODB_URI&quot;&#39;);
}

const clientPromise: Promise&lt;MongoClient&gt; = (async () =&gt; {
  const client = new MongoClient(process.env.MONGODB_URI!);
  const connectedClient = await client.connect();
  console.log(&#39;✅ MongoDB connection established&#39;);
  return connectedClient;
})();

export async function getDatabase(): Promise&lt;Db&gt; {
  const client = await clientPromise;
  return client.db(&#39;testing_db&#39;);
}
</code></pre>
<p>Using this database connection in your API route code is easy:</p>
<pre><code class="language-js">// src/app/api/users/route.ts
import { NextResponse } from &#39;next/server&#39;;
import { getDatabase } from &#39;@/lib/mongodb&#39;;

export async function GET() {
  const db = await getDatabase();
  const collection = db.collection(&#39;users&#39;);
  const users = await collection.find({}).toArray();
  return NextResponse.json({ users });
}
</code></pre>
<p>You can also use this database connection in your server-side rendered React components.</p>
<pre><code class="language-js">// src/app/users/page.tsx
import { getDatabase } from &#39;@/lib/mongodb&#39;

export default async function UserList() {
  const db = await getDatabase()
  const collection = db.collection(&#39;users&#39;)
  const users = await collection.find({}).toArray()

  return (
    &lt;div&gt;
      &lt;h1&gt;Users List&lt;/h1&gt;
      &lt;ul&gt;
        {users.map((user) =&gt; (
          &lt;li key={user._id.toString()}&gt;{user.name}&lt;/li&gt;
        ))}
      &lt;/ul&gt;
    &lt;/div&gt;
  )
}
</code></pre>
<p>In serverless environments like Vercel, managing database connections efficiently is key to avoiding connection storming. By reusing global variables and understanding the serverless execution model, you can ensure your Next.js app remains stable and performant.</p>
]]></description>
            <link>https://www.thisdot.co/blog/next-js-mongodb-connection-storming</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/next-js-mongodb-connection-storming</guid>
            <pubDate>Fri, 11 Jul 2025 12:10:38 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Advanced Authentication and Onboarding Workflows with Docusign Extension Apps]]></title>
            <description><![CDATA[<h1>Advanced Authentication and Onboarding Workflows with Docusign Extension Apps</h1>
<p><a href="https://developers.docusign.com/extension-apps/">Docusign Extension Apps</a> are a relatively new feature on the Docusign platform. They act as little apps or plugins that allow building custom steps in Docusign agreement workflows, extending them with custom functionality. Docusign agreement workflows have many built-in steps that you can utilize. With Extension Apps, you can create additional custom steps, enabling you to execute custom logic at any point in the agreement process, from collecting participant information to signing documents.</p>
<p>An Extension App is a small service, often running in the cloud, described by the Extension App <a href="https://developers.docusign.com/extension-apps/build-an-extension-app/register/use-manifest/">manifest</a>. The manifest file provides information about the app, including the app&#39;s author and support pages, as well as descriptions of extension points used by the app or places where the app can be integrated within an agreement workflow.</p>
<p>Most often, these extension points need to interact with an external system to read or write data, which cannot be done anonymously, as all data going through Extension Apps is usually sensitive. Docusign allows authenticating to external systems using the OAuth 2 protocol, and the specifics about the OAuth 2 configuration <a href="https://developers.docusign.com/extension-apps/extension-apps-101/concepts/connections/">are also placed in the manifest file</a>. Currently, only OAuth 2 is supported as the authentication scheme for Extension Apps.</p>
<p>OAuth 2 is a robust and secure protocol, but not all systems support it. Some systems use alternative authentication schemes, such as the <a href="https://oauth.net/2/pkce/">PKCE</a> variant of OAuth 2, or employ different authentication methods (e.g., using secret API keys). In such cases, we need to use a slightly different approach to integrate these systems with Docusign.</p>
<p>In this blog post, we&#39;ll show you how to do that securely. We will not go too deep into the implementation details of Extension Apps, and we assume a basic familiarity with how they work. Instead, we&#39;ll focus on the OAuth 2 part of Extension Apps and how we can extend it.</p>
<h2>Extending the OAuth 2 Flow in Extension Apps</h2>
<p>For this blog post, we&#39;ll integrate with an imaginary task management system called TaskVibe, which offers a REST API to which we authenticate using a secret API key. We aim to develop an extension app that enables Docusign agreement workflows to communicate with TaskVibe, allowing tasks to be read, created, and updated.</p>
<p>TaskVibe does not support OAuth 2. We need to ensure that, once the TaskVibe Extension App is connected, the user is prompted to enter their secret API key. We then need to store this API key securely so it can be used for interacting with the TaskVibe API. Of course, the API key could always be stored in the Extension App&#39;s own database, but then the Extension App takes on the significant responsibility of storing the API key securely. Docusign already has the capability to store secure tokens on its side, and we can utilize that instead. After all, most Extension Apps are meant to be stateless proxies to external systems.</p>
<h3>Updating the Manifest</h3>
<p>To extend OAuth 2, we will need to hook into the OAuth 2 flow by injecting our backend&#39;s endpoints into the authorization URL and token URL parts of the manifest. In any other external system that supports OAuth 2, we would be using their OAuth 2 endpoints. In our case, however, we must use our backend endpoints so we can emulate OAuth 2 to Docusign.</p>
<pre><code class="language-json">&quot;connections&quot;: [
    {
        &quot;name&quot;: &quot;authentication&quot;,
        &quot;description&quot;: &quot;Secure connection to TaskVibe&quot;,
        &quot;type&quot;: &quot;oauth2&quot;,
        &quot;params&quot;: {
            &quot;provider&quot;: &quot;CUSTOM&quot;,
            &quot;clientId&quot;: &quot;my-client-id&quot;,
            &quot;clientSecret&quot;: &quot;my-secret&quot;,
            &quot;scopes&quot;: [],
            &quot;grantType&quot;: &quot;authorization_code&quot;,
            &quot;customConfig&quot;: {
                &quot;authorizationMethod&quot;: &quot;body&quot;,
                &quot;authorizationParams&quot;: {
                    &quot;prompt&quot;: &quot;consent&quot;,
                    &quot;audience&quot;: &quot;api.taskvibe.example.com&quot;,
                    &quot;client_id&quot;: &quot;my-client-id&quot;,
                    &quot;response_type&quot;: &quot;code&quot;
                },
                &quot;authorizationUrl&quot;: &quot;https://your-backend/authorize&quot;,
                &quot;requiredScopes&quot;: [],
                &quot;scopeSeparator&quot;: &quot; &quot;,
                &quot;tokenUrl&quot;: &quot;https://your-backend/api/token&quot;,
                &quot;refreshScopes&quot;: []
            }
        }
    }
]
</code></pre>
<p>The complete flow will look as follows:</p>
<p><img src="https://p.ipic.vip/uurlyd.png" alt="Extended OAuth 2 diagram"></p>
<p>In the diagram, we have four actors: the end-user on behalf of whom we are authenticating to TaskVibe, DocuSign, the Extension App, and TaskVibe. We are only in control of the Extension App, and within the Extension App, we need to adhere to the OAuth 2 protocol as expected by Docusign.</p>
<ol>
<li>In the first step, Docusign will invoke the <code>/authorize</code> endpoint of the Extension App and provide the <code>state</code>, <code>client_id</code>, and <code>redirect_uri</code> parameters. Of these three parameters, <code>state</code> and <code>redirect_uri</code> are essential.</li>
<li>In the <code>/authorize</code> endpoint, the app needs to store state and <code>redirect_uri</code>, as they will be used in the next step. It then needs to display a user-facing form where the user is expected to enter their TaskVibe API key. </li>
<li>Once the user submits the form, we take the API key and encode it in a JWT token, as we will send it over the wire back to Docusign in the form of the code query parameter. This is the &quot;custom&quot; part of our implementation. In a typical OAuth 2 flow, the code is generated by the OAuth 2 server, and the client can then use it to request the access token. In our case, we&#39;ll utilize the code to pass the API key to Docusign so it can send it back to us in the next step. Since we are still in control of the user session, we redirect the user to the redirect URI provided by Docusign, along with the code and the state as query parameters.</li>
<li>The redirect URI on Docusign will display a temporary page to the user, and in the background, attempt to retrieve the access token from our backend by providing the code and state to the <code>/api/token</code> endpoint.</li>
<li>The <code>/api/token</code> endpoint takes the code parameter and decodes it to extract the TaskVibe API secret key. It can then verify if the API key is even valid by making a dummy call to TaskVibe using the API key. If the key is valid, we encode it in a new JWT token and return it as the access token to Docusign.</li>
<li>Docusign stores the access token securely on its side and uses it when invoking any of the remaining extension points on the Extension App.</li>
</ol>
<p>By following the above steps, we ensure that the API key is stored in an encoded format on Docusign, and the Extension App effectively extends the OAuth 2 flow. The app is still stateless and does not have the responsibility of storing any secure information locally. It acts as a pure proxy between Docusign and TaskVibe, as it&#39;s meant to be.</p>
<h3>Writing the Backend</h3>
<p>Most Extension Apps are backend-only, but ours needs a frontend component for collecting the secret API key. A good fit for such an app is <a href="https://nextjs.org/">Next.js</a>, which allows us to easily set up both the frontend and the backend.</p>
<p>We&#39;ll start by implementing the form for entering the secret API key. This form takes the state, client ID, and redirect URI from the enclosing page, which takes these parameters from the URL.</p>
<p>The form is relatively simple, with only an input field for the API key, but it can also host any additional onboarding questions. If you ever need to store additional information on Docusign for implicit use in your Extension App&#39;s workflow steps, this form is a good place to collect it so that it can be stored alongside the API secret key on Docusign.</p>
<pre><code class="language-ts">// components/auth-form.tsx
&quot;use client&quot;;

import type React from &quot;react&quot;;
import { useState } from &quot;react&quot;;
import { authorizeUser } from &quot;@/lib/actions&quot;;

interface AuthFormProps {
  state: string;
  clientId: string;
  redirectUri: string;
}

export function AuthForm({ state, clientId, redirectUri }: AuthFormProps) {
  const [apiKey, setApiKey] = useState(&quot;&quot;);
  const [isSubmitting, setSubmitting] = useState(false);

  const handleSubmit = async (e: React.FormEvent) =&gt; {
    e.preventDefault();
    setSubmitting(true);

    try {
      await authorizeUser(apiKey, state, redirectUri);
    } catch (error) {
      console.error(&quot;Authorization failed:&quot;, error);
      setSubmitting(false);
    }
  };

  return (
    &lt;div className=&quot;card max-w-md w-full&quot;&gt;
      &lt;div className=&quot;card-header&quot;&gt;
        &lt;h2 className=&quot;card-title&quot;&gt;Connect to TaskVibe&lt;/h2&gt;
        &lt;p className=&quot;card-description&quot;&gt;
          To authenticate with TaskVibe task management tool, please enter your
          API secret key below.
        &lt;/p&gt;
      &lt;/div&gt;
      &lt;form onSubmit={handleSubmit}&gt;
        &lt;div className=&quot;card-content&quot;&gt;
          &lt;div className=&quot;form-group&quot;&gt;
            &lt;label htmlFor=&quot;apiKey&quot; className=&quot;form-label&quot;&gt;
              API Secret Key
            &lt;/label&gt;
            &lt;input
              id=&quot;apiKey&quot;
              type=&quot;password&quot;
              className=&quot;form-input&quot;
              placeholder=&quot;Enter your TaskVibe API key starting with tv_&quot;
              value={apiKey}
              onChange={(e) =&gt; setApiKey(e.target.value)}
              required
              autoComplete=&quot;off&quot;
            /&gt;
          &lt;/div&gt;
          &lt;input type=&quot;hidden&quot; name=&quot;state&quot; value={state} /&gt;
          &lt;input type=&quot;hidden&quot; name=&quot;redirectUri&quot; value={redirectUri} /&gt;
        &lt;/div&gt;
        &lt;div className=&quot;card-footer&quot;&gt;
          &lt;button
            type=&quot;submit&quot;
            className=&quot;btn btn-primary w-full&quot;
            disabled={isSubmitting}
          &gt;
            {isSubmitting ? &quot;Connecting...&quot; : &quot;Connect&quot;}
          &lt;/button&gt;
        &lt;/div&gt;
      &lt;/form&gt;
    &lt;/div&gt;
  );
}
</code></pre>
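<p>The enclosing page that renders this form simply reads those parameters from the URL. A minimal sketch (the exact <code>searchParams</code> shape depends on your Next.js version):</p>
<pre><code class="language-ts">// src/app/authorize/page.tsx (a sketch)
import { AuthForm } from &quot;@/components/auth-form&quot;;

interface AuthorizePageProps {
  searchParams: {
    state?: string;
    client_id?: string;
    redirect_uri?: string;
  };
}

export default function AuthorizePage({ searchParams }: AuthorizePageProps) {
  const { state, client_id, redirect_uri } = searchParams;

  if (!state || !client_id || !redirect_uri) {
    return &lt;p&gt;Missing required OAuth parameters.&lt;/p&gt;;
  }

  return (
    &lt;AuthForm state={state} clientId={client_id} redirectUri={redirect_uri} /&gt;
  );
}
</code></pre>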
<p>Submitting the form invokes a server action on Next.js, which takes the entered API key, the state, and the redirect URI. It then creates a JWT token using <a href="https://github.com/panva/jose">Jose</a> that contains the API key and redirects the user to the redirect URI, sending the JWT token in the code query parameter, along with the state query parameter. This JWT token can be short-lived, as it&#39;s only meant to be a temporary holder of the API key while the authentication flow is running.</p>
<p>This is the <a href="https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations">server action</a>:</p>
<pre><code class="language-ts">// lib/actions.ts
&quot;use server&quot;;

import { redirect } from &quot;next/navigation&quot;;
import { signJwt } from &quot;./jwt&quot;;

export async function authorizeUser(
  apiKey: string,
  state: string,
  redirectUri: string,
) {
  // Create a JWT with 1 hour expiration
  // This is only for the initial authorization code flow, which should be short-lived
  const code = await signJwt({ apiKey }, { expiresIn: &quot;1h&quot; });

  // Construct the redirect URL with state and code
  const redirectUrl = new URL(redirectUri);
  redirectUrl.searchParams.append(&quot;state&quot;, state);
  redirectUrl.searchParams.append(&quot;code&quot;, code);

  // Redirect to the callback URL on Docusign
  // Docusign will then invoke the token endpoint with the code to obtain the access token
  redirect(redirectUrl.toString());
}
</code></pre>
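<p>The <code>signJwt</code> helper above (and the <code>verifyJwt</code> counterpart used later) can be a thin wrapper around Jose. Here is a minimal sketch, assuming an HS256 secret provided via a hypothetical <code>JWT_SECRET</code> environment variable:</p>
<pre><code class="language-ts">// lib/jwt.ts (a sketch; the env var name is an assumption)
import { SignJWT, jwtVerify, type JWTPayload } from &quot;jose&quot;;

const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

export async function signJwt(
  payload: JWTPayload,
  options: { expiresIn?: string } = {},
): Promise&lt;string&gt; {
  let jwt = new SignJWT(payload)
    .setProtectedHeader({ alg: &quot;HS256&quot; })
    .setIssuedAt();

  if (options.expiresIn) {
    // e.g. &quot;1h&quot; for the short-lived authorization code
    jwt = jwt.setExpirationTime(options.expiresIn);
  }

  return jwt.sign(secret);
}

export async function verifyJwt&lt;T&gt;(token: string): Promise&lt;T&gt; {
  const { payload } = await jwtVerify(token, secret);
  return payload as T;
}
</code></pre>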
<p>After the user is redirected to Docusign, Docusign will then invoke the <code>/api/token</code> endpoint to obtain the access token. This endpoint will also be invoked occasionally after the authentication flow, before any extension endpoint is invoked, to get the latest access token using a refresh token. Therefore, the endpoint needs to cover two scenarios.</p>
<p>In the first scenario, during the authentication phase, Docusign will send the code and state to the <code>/api/token</code> endpoint. In this scenario, the endpoint must retrieve the value of the <code>code</code> parameter (which holds the JWT), parse the JWT, and extract the API key. Optionally, it can verify the API key&#39;s validity by invoking an endpoint on TaskVibe using that key.</p>
<p>Then, it should return an access token and a refresh token back to Docusign. Since we are not using refresh tokens in our case, we can create a new JWT token containing the API key and return it as both the access token and the refresh token to Docusign.</p>
<p>In the second scenario, Docusign will send the most recently obtained refresh token to get a new access token. Again, because we are not using refresh tokens, we can return both the retrieved access token and the refresh token to Docusign.</p>
<p>The <code>api/token</code> endpoint is implemented as a Next.js <a href="https://nextjs.org/docs/app/getting-started/route-handlers-and-middleware">route handler</a>:</p>
<pre><code class="language-ts">// src/app/api/token/route.ts

export async function POST(request: NextRequest) {
  try {
    const body = await request.text();
    const parsedBody = parseQueryString(body);
    const { code, refresh_token } = parsedBody;

    if (code) {
      // This is the initial authorization code flow
      // Verify and decode the JWT from the authorization code
      const payload = await verifyJwt&lt;{ apiKey: string }&gt;(code);
      const { apiKey } = payload;

      // Verify the API key with TaskVibe
      const isValid = await verifyApiKey(apiKey);

      if (!isValid) {
        return NextResponse.json({ error: &quot;Invalid API key&quot; }, { status: 401 });
      }

      // Create a new JWT with no expiration
      const accessToken = await signJwt({ apiKey });

      // Return the tokens
      // We are not using a refresh token in this implementation, so we are returning the same token for both access and refresh
      return NextResponse.json({
        access_token: accessToken,
        refresh_token: accessToken,
        token_type: &quot;Bearer&quot;,
      });
    } else if (refresh_token) {
      // This is the flow that happens for every subsequent request
      // The refresh token is the same as the access token we created in the initial authorization code flow
      return NextResponse.json({
        access_token: refresh_token,
        refresh_token: refresh_token,
        token_type: &quot;Bearer&quot;,
      });
    }

    return NextResponse.json(
      { error: &quot;Missing required parameters&quot; },
      { status: 400 },
    );
  } catch (error) {
    console.error(&quot;Token exchange error:&quot;, error);
    return NextResponse.json(
      { error: &quot;Invalid or expired token&quot; },
      { status: 401 },
    );
  }
}
</code></pre>
<p>In all the remaining endpoints defined in the manifest file, Docusign will provide the access token as the bearer token. It&#39;s up to each endpoint to then read this value, parse the JWT, and extract the secret API key.</p>
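<p>A sketch of how any such endpoint might extract the key from the bearer token (the route itself is hypothetical):</p>
<pre><code class="language-ts">// src/app/api/tasks/route.ts (a hypothetical extension endpoint)
import { type NextRequest, NextResponse } from &quot;next/server&quot;;
import { verifyJwt } from &quot;@/lib/jwt&quot;;

export async function POST(request: NextRequest) {
  const authHeader = request.headers.get(&quot;authorization&quot;) ?? &quot;&quot;;
  const token = authHeader.replace(/^Bearer\s+/i, &quot;&quot;);

  // The access token is the JWT we issued from /api/token
  const { apiKey } = await verifyJwt&lt;{ apiKey: string }&gt;(token);

  // ...use apiKey to call the TaskVibe API here
  return NextResponse.json({ ok: true });
}
</code></pre>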
<h2>Conclusion</h2>
<p>In conclusion, your Extension App does not need to be limited by the fact that the external system you are integrating with lacks OAuth 2 support or requires additional onboarding. We can safely build upon the existing OAuth 2 protocol and add custom functionality on top of it. This is also the drawback of the approach - it involves custom development and requires additional work on our part to ensure all cases are covered. Fortunately, the scope of that custom work stays small. All remaining endpoints are implemented in the same manner as for any other OAuth 2 system, and the app remains a stateless proxy between Docusign and the external system, as all necessary information, such as the secret API key and other onboarding details, is stored as an encoded token on the Docusign side.</p>
<p>We hope this blog post was helpful. Keep an eye out for more Docusign content soon, and if you need help building an Extension App of your own, feel free to reach out. The complete source code for this project is available on <a href="https://stackblitz.com/edit/extending-docusign-auth?file=src%2Fapp%2Fauthorize%2Fpage.tsx">StackBlitz</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/advanced-authentication-and-onboarding-workflows-with-docusign-extension</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/advanced-authentication-and-onboarding-workflows-with-docusign-extension</guid>
            <pubDate>Fri, 04 Jul 2025 11:59:47 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Quirks And Gotchas of PHP]]></title>
            <description><![CDATA[<h1>The Quirks And Gotchas of PHP</h1>
<p>If you come from a JavaScript background, you&#39;ll likely be familiar with some of its famous quirks, such as <code>1 + &quot;1&quot;</code> equaling <code>&quot;11&quot;</code>. Well, PHP has its own set of quirks and gotchas, too. Some are oddly similar to JavaScript&#39;s, while others can surprise a JavaScript developer.</p>
<p>Let&#39;s start with the more familiar ones.</p>
<h2>1. Type Juggling and Loose Comparisons</h2>
<p>Like JavaScript, PHP has two types of comparison operators: strict and loose. The loose comparison operator in PHP uses <code>==</code>, while the strict comparison operator uses <code>===</code>.</p>
<p>Here&#39;s an example of a loose vs. strict comparison in PHP:</p>
<pre><code class="language-php">var_dump(1 == &quot;1&quot;); // true
var_dump(1 === &quot;1&quot;); // false
</code></pre>
<p>PHP is a loosely typed language, meaning it will automatically convert variables from one type to another when necessary, just like JavaScript. This happens not only in comparisons but also, for example, in numeric operations. Such conversions can lead to some unexpected results if you&#39;re not careful:</p>
<pre><code class="language-php">var_dump(1 + &quot;1&quot;); // int(2)
var_dump(1 + &quot;1.5&quot;); // float(2.5)
var_dump(1 + &quot;foo&quot;); // int(1) in PHP 7, TypeError in PHP 8
</code></pre>
<p>As you can see, the type system has gotten a bit stricter in PHP 8, so it won&#39;t let you commit some of the &quot;atrocities&quot; that were possible in earlier versions, throwing a <code>TypeError</code> instead. PHP 8 introduced many changes that aim to eliminate some of the unpredictable behavior; we will cover some of them throughout this article.</p>
<h3>1.1. Truthiness of Strings</h3>
<p>This is such a common gotcha in PHP that it deserves its own heading. By default, PHP considers an empty string as <code>false</code> and a non-empty string as <code>true</code>:</p>
<pre><code class="language-php">if (&quot;0&quot;) {
    // This block executes because &quot;0&quot; is a non-empty string
    echo &quot;This is considered TRUE in PHP&quot;;
}
</code></pre>
<p>But wait, there&#39;s more! PHP also considers the string <code>&quot;0&quot;</code> as <code>false</code>:</p>
<pre><code class="language-php">if (&quot;0&quot; == false) {
    // This block executes because &quot;0&quot; is considered FALSE in PHP
    echo &quot;This is considered FALSE in PHP&quot;;
}
</code></pre>
<p>You might think we&#39;re done here, but no! Try comparing a string such as &quot;php&quot; to <code>0</code>:</p>
<pre><code class="language-php">if (&quot;php&quot; == 0) {
    // This block executes in PHP 7
    echo &quot;This is considered TRUE in PHP 7&quot;;
}
</code></pre>
<p>In PHP 7 and earlier, any non-numeric string was cast to <code>0</code> when compared to an integer, which is why this example evaluates to <code>true</code>. This quirk was fixed in PHP 8, which compares a number to a non-numeric string by casting the number to a string instead.</p>
<p>For a comprehensive comparison table of PHP&#39;s truthiness, check out the <a href="https://www.php.net/manual/en/types.comparisons.php#types.comparisions-loose">PHP documentation</a>.</p>
<h3>1.2. Switch Statements</h3>
<p>Switch statements in PHP <a href="https://www.php.net/manual/en/control-structures.switch.php">use loose comparisons</a>, so don&#39;t be surprised if you see some unexpected behavior when using them:</p>
<pre><code class="language-php">$value = &quot;foo&quot;;
switch ($value) {
    case 0:
        echo &quot;Value was 0&quot;; // This block executes in PHP 7
        break;
    case &quot;foo&quot;:
        echo &quot;Value was foo&quot;;
        break;
}
</code></pre>
<h4>The New Match Expression in PHP 8</h4>
<p>PHP 8 introduced the <code>match</code> expression, which is similar to <code>switch</code> but uses strict comparisons (i.e., <code>===</code> under the hood) and returns a value:</p>
<pre><code class="language-php">$result = match ($value) {
    0 =&gt; &#39;Value is zero&#39;,
    1 =&gt; &#39;Value is one&#39;,
    default =&gt; &#39;Something else&#39;,
};
</code></pre>
<p>Unlike <code>switch</code>, there is no &quot;fall-through&quot; behavior in <code>match</code>, and each branch must return a value, making <code>match</code> a great alternative when you need a more precise or concise form of branching—especially if you want to avoid the loose comparisons of a traditional <code>switch</code>.</p>
<h3>1.3 String to Number Conversion</h3>
<p>In earlier versions of PHP, string-to-number conversions were often done silently, even if the string wasn’t strictly numeric (like <code>&#39;123abc&#39;</code>). In PHP 7, this would typically result in <code>123</code> plus a Notice:</p>
<pre><code class="language-php">// In PHP 7:
var_dump(&quot;123abc&quot; + 0);
// int(123), with a Notice
</code></pre>
<p>In PHP 8, you’ll still get <code>int(123)</code>, but now with a Warning, and in other scenarios (like extremely malformed strings), you might see a TypeError. This stricter behavior can reveal hidden bugs in code that relied on implicit type juggling.</p>
<h4>Stricter Type Checks &amp; Warnings in PHP 8</h4>
<ul>
<li><p><strong>Performing arithmetic on non-numeric strings:</strong></p>
<p>As noted, in older versions, something like <code>&quot;123abc&quot; + 0</code> would silently drop the non-numeric part, often producing <code>123</code> plus a PHP Notice. In PHP 8, such operations throw a more visible Warning or TypeError, depending on the exact scenario.</p>
</li>
<li><p><strong>Null to Non-Nullable Internal Arguments:</strong></p>
<p>Passing null to a function parameter that’s internally declared as non-nullable will trigger a TypeError in PHP 8. Previously, this might have been silently accepted or triggered only a warning.</p>
</li>
<li><p><strong>Internal Function Parameter Names:</strong></p>
<p>PHP 8 introduced named arguments but also made internal parameter names part of the public API. If you use named arguments with built-in functions, be aware that renaming or reordering parameters in future releases might break your code. Always match official parameter names as documented in the <a href="https://www.php.net/manual/en/index.php">PHP manual</a>.</p>
</li>
</ul>
<h4>Union Types &amp; Mixed</h4>
<p>Since PHP 8.0, we can declare <strong>union types</strong>, which allows you to specify that a parameter or return value can be one of multiple types. For example:</p>
<pre><code class="language-php">function getUser(int|string $id) {
    // ...
}
</code></pre>
<p>Specifying the union of types your function accepts can help clarify your code’s intent and reveal incompatibilities if your existing code relies on looser type checking, preventing some of the conversion quirks we’ve discussed.</p>
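<p>As a quick sketch of how this plays out with <code>strict_types</code> enabled (the function body here is illustrative), PHP will refuse any argument that isn&#39;t one of the declared types:</p>
<pre><code class="language-php">declare(strict_types=1);

function getUser(int|string $id): string {
    return is_int($id) ? &quot;User #$id&quot; : &quot;User &#39;$id&#39;&quot;;
}

echo getUser(42);     // User #42
echo getUser(&quot;jane&quot;); // User &#39;jane&#39;
// getUser(3.14);     // TypeError: float is neither int nor string
</code></pre>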
<h2>2. Operator Precedence and Associativity</h2>
<p>Operator precedence can lead to confusing situations if you’re not careful with parentheses. For instance, the <code>.</code> operator (string concatenation similar to <code>+</code> in JavaScript) has left-to-right associativity, but certain logical operators have lower precedence than assignment or concatenation, leading to puzzling results in PHP 7 and earlier:</p>
<pre><code class="language-php">echo &quot;Sum: &quot; . 1 + 2;
// Actually interpreted as ((echo &quot;Sum: &quot;) . 1) + 2
// Outputs `2` and a Warning: A non-numeric value encountered

echo &quot;Sum: &quot; . (1 + 2);
// Correctly prints &quot;Sum: 3&quot;
</code></pre>
<p>PHP 8 fixed this issue by giving the <code>+</code> and <code>-</code> operators higher precedence than the concatenation operator, so the first expression above now prints &quot;Sum: 3&quot; as well.</p>
<h2>3. Variable Variables and Variable Functions</h2>
<p>Now, we&#39;re getting into unfamiliar territory as JavaScript Developers. PHP allows you to define <a href="https://www.php.net/manual/en/language.variables.variable.php">variable variables</a> and <a href="https://www.php.net/manual/en/functions.variable-functions.php">variable functions</a>. This can be a powerful feature, but it can also lead to some confusing code:</p>
<pre><code class="language-php">$varName = &#39;hello&#39;;
$$varName = &#39;world&#39;;

echo $hello; // Outputs &#39;world&#39;
</code></pre>
<p>In this example, the variable <code>$varName</code> contains the string <code>&#39;hello&#39;</code>. By using <code>$$varName</code>, we&#39;re creating a new variable with the name <code>&#39;hello&#39;</code> and assigning it the value <code>&#39;world&#39;</code>.</p>
<p>Similarly, you can create variable functions:</p>
<pre><code class="language-php">function greet() {
    echo &quot;Hello!&quot;;
}

$func = &#39;greet&#39;;
$func(); // Calls greet()
</code></pre>
<h2>4. Passing Variables by Reference</h2>
<p>You can pass variables by reference using the <code>&amp;</code> operator in PHP. This means that any changes made to the variable inside the function will be reflected outside the function:</p>
<pre><code class="language-php">function increment(&amp;$num) {
    $num++;
}

$number = 5;
increment($number);
echo $number; // Outputs 6
</code></pre>
<p>While this example is straightforward, not knowing about the pass-by-reference feature can lead to confusion, and bugs can arise when you inadvertently pass variables by reference.</p>
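<p>A classic example of such a bug is the dangling reference left behind by a <code>foreach</code> loop: the reference variable keeps pointing at the last array element after the loop ends, so a later loop reusing the same variable name silently rewrites the array:</p>
<pre><code class="language-php">$arr = [1, 2, 3];

foreach ($arr as &amp;$value) {
    // ...
}
// $value still references $arr[2] here

foreach ($arr as $value) {
    // Each iteration also writes the current element into $arr[2]
}

var_dump($arr); // [1, 2, 2] - the last element was overwritten
</code></pre>
<p>Calling <code>unset($value)</code> after the first loop breaks the reference and avoids the problem.</p>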
<h2>5. Array Handling</h2>
<p>PHP arrays are a bit different from JavaScript arrays. They can be used as both arrays and dictionaries, and they have some quirks that can catch you off guard. For example, if you try to access an element that doesn&#39;t exist in an array, PHP will return <code>null</code> and emit a notice (a warning as of PHP 8) instead of throwing an error:</p>
<pre><code class="language-php">$arr = [1, 2, 3];
var_dump($arr[3]); // NULL (plus an &quot;Undefined array key&quot; warning in PHP 8)
</code></pre>
<p>Furthermore, PHP arrays can contain both integer and string keys at the same time, but numeric string keys are silently cast to integers, depending on the context:</p>
<pre><code class="language-php">$array = [
    &quot;1&quot;   =&gt; &quot;One (as string)&quot;,
    1     =&gt; &quot;One (as int)&quot;,
    true  =&gt; &quot;True as key?&quot;
];

var_dump($array);
// Output can be surprising:
// array(1) {
//   [1] =&gt; string(12) &quot;True as key?&quot;
// }
</code></pre>
<p>In this example:</p>
<ul>
<li><code>&quot;1&quot;</code> (string) and <code>1</code> (integer) collide, resulting in the array effectively having only one key: <code>1</code>.</li>
<li><code>true</code> is also cast to <code>1</code> as an integer, so it overwrites the same key.</li>
</ul>
<p>And last, but not least, let&#39;s go back to the topic of passing variables by reference. You can assign an array element by reference, which can feel quite unintuitive:</p>
<pre><code class="language-php">$array = [&#39;apple&#39;, &#39;banana&#39;];
$fruit = &amp;$array[0];  // $fruit is now referencing the first element
$fruit = &#39;pear&#39;;

var_dump($array);
// array(2) {
//   [0] =&gt; &quot;pear&quot;,
//   [1] =&gt; &quot;banana&quot;
// }
</code></pre>
<h2>6. Checking for Variable Truthiness (isset, empty, and nullsafe operator)</h2>
<p>In PHP, you can use the <code>empty()</code> function to check if a variable is empty. But what does &quot;empty&quot; mean in PHP? The mental model of what&#39;s considered &quot;empty&quot; in PHP might differ from what you&#39;re used to in JavaScript. Let&#39;s clarify this:</p>
<p>The following values are considered empty by the <code>empty()</code> function:</p>
<ul>
<li><code>&quot;&quot;</code> (an empty string)</li>
<li><code>0</code> (0 as an integer)</li>
<li><code>0.0</code> (0 as a float)</li>
<li><code>&quot;0&quot;</code> (0 as a string)</li>
<li><code>null</code></li>
<li><code>false</code></li>
<li><code>[]</code> (an empty array)</li>
</ul>
<p>This means that, among others, the following values are <strong>not</strong> considered empty:</p>
<ul>
<li><code>&quot; &quot;</code> (a string containing a space)</li>
<li><code>&quot;0.0&quot;</code> (the string &quot;0.0&quot;, unlike the float <code>0.0</code>)</li>
<li><code>&quot;false&quot;</code> (the string &quot;false&quot;)</li>
<li><code>new stdClass()</code> (an empty object)</li>
</ul>
<p>Keep this in mind when using <code>empty()</code> in your code, otherwise, you might end up debugging some unexpected behavior.</p>
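<p>A quick way to internalize these rules is to run a few values through <code>var_dump()</code>:</p>
<pre><code class="language-php">var_dump(empty(&quot;&quot;));             // bool(true)
var_dump(empty(&quot;0&quot;));            // bool(true), the string &quot;0&quot; counts as empty
var_dump(empty(&quot;0.0&quot;));          // bool(false), but the string &quot;0.0&quot; does not
var_dump(empty(&quot; &quot;));            // bool(false)
var_dump(empty(new stdClass())); // bool(false)
</code></pre>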
<h4>Undefined Variables and <code>isset()</code></h4>
<p>Another little gotcha: you might worry that <code>empty()</code> misbehaves on undefined variables, but because it is a language construct rather than a regular function, it simply returns <code>true</code> for them without raising a notice. If you need to distinguish a variable that is unset from one holding a falsy value, use the <code>isset()</code> function, which checks whether a variable is set and not <code>null</code>:</p>
<pre><code class="language-php">$var = 0;
if (isset($var) &amp;&amp; !empty($var)) {
    echo &quot;Variable is set and not empty&quot;;
}
</code></pre>
<h4>The Nullsafe Operator</h4>
<p>If you have a chain of properties or methods that you want to access, you may tend to check each step with <code>isset()</code> to avoid errors:</p>
<pre><code class="language-php">if (isset($object) &amp;&amp; isset($object-&gt;child)) {
    echo $object-&gt;child-&gt;getName();
}
</code></pre>
<p>In fact, because <code>isset()</code> is a special language construct, it doesn&#39;t raise errors for undefined parts of the chain, so it can be used to evaluate the whole chain at once:</p>
<pre><code class="language-php">if (isset($object-&gt;child)) {
    $result = $object-&gt;child-&gt;getName();
}
</code></pre>
<p>That&#39;s much nicer! However, it could be even more elegant with the nullsafe operator (<code>?-&gt;</code>) introduced in PHP 8:</p>
<pre><code class="language-php">// Instead of checking multiple times if $object or $object-&gt;child is null:
$result = $object?-&gt;child?-&gt;getName();
</code></pre>
<p>If you’ve used optional chaining in JavaScript or other languages, this should look familiar. It returns null if any part of the chain is null, which is handy but can also hide potential logic mistakes — if your application logic expects objects to exist, silently returning null may lead to subtle bugs.</p>
<h2>Conclusion</h2>
<p>While PHP shares a few loose typing quirks with JavaScript, it also has its own distinctive behaviors around type juggling, operator precedence, passing by reference, and array handling. Becoming familiar with these nuances — and with the newer, more predictable features in PHP 8 — will help you avoid subtle bugs and write clearer, more robust code. PHP continues to evolve, so always consult the <a href="https://www.php.net/manual/en/index.php">official documentation</a> to stay current on best practices and language changes.</p>
]]></description>
            <link>https://www.thisdot.co/blog/the-quirks-and-gotchas-of-php</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-quirks-and-gotchas-of-php</guid>
            <pubDate>Fri, 20 Jun 2025 12:10:56 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[What Sets the Best Autonomous Coding Agents Apart?]]></title>
            <description><![CDATA[<h1>Must-have Features of Coding Agents</h1>
<p>Autonomous coding agents are no longer experimental; they are becoming an integral part of modern development workflows, redefining how software is built and maintained. As models become more capable, agents have become easier to produce, leading to an explosion of options with varying depth and utility. Drawing on our experience using many agents, let&#39;s delve into the features you&#39;ll absolutely want in order to get the best results.</p>
<h3>1. Customizable System Prompts</h3>
<p>Custom agent modes, or roles, allow engineers to tailor the outputs to the desired results of their task. For instance, an agent can be set to operate in a &quot;planning mode&quot; focused on outlining development steps and gathering requirements, a &quot;coding mode&quot; optimized for generating and testing code, or a &quot;documentation mode&quot; emphasizing clarity and completeness of written artifacts. You might start with the off-the-shelf planning prompt, but you&#39;ll quickly want your own tailored version. Regardless of which modes are included out of the box, the ability to customize and extend them is critical. Agents must adapt to your unique workflows and prioritize what&#39;s important to your project. Without this flexibility, even well-designed defaults can fall short in real-world use.</p>
<p>Engineers have preferences, and projects contain existing work. The best agents offer ways to communicate these preferences and decisions effectively: for example, &#39;pnpm&#39; instead of &#39;npm&#39; for package management, requiring the agent to seek root causes rather than offer temporary workarounds, or mandating that tests and linting must pass before a task is marked complete. Rules are a layer of control to accomplish this. Rules reinforce technical standards, but they also shape agent behavior to reflect project priorities and cultural norms. They inform the agent across contexts: constraints, preferences, or directives that apply regardless of the task. Rules can encode things like style guidelines, risk tolerances, or communication boundaries. By shaping how the agent reasons and responds, rules ensure consistent alignment with desired outcomes.</p>
<p>Roo Code is an agent that makes great use of custom modes, and rules are ubiquitous across coding agents. Together, these features form a meta-agent framework that allows engineers to construct the most effective agent for their unique project and workflow.</p>
<h3>2. Usage-based Pricing</h3>
<p>The best agents provide as much relevant information as possible to the model, along with transparency and control over what is sent. This allows engineers to leverage their knowledge of the project to improve results. Being liberal with relevant information is more expensive; however, it also significantly improves results.</p>
<p>The pricing model of some agents prioritizes fixed, predictable costs that include model fees. This creates an incentive to minimize the amount of information sent to the model in order to control costs. To get the most out of these tools, you’ve got to get the most out of models, which typically implies usage-based pricing. </p>
<h3>3. Autonomous Workflows</h3>
<p>The way we accomplish work has phases. For example, creating tests and then making them pass, creating diagrams or plans, or reviewing work before submitting PRs. The best agents have mechanisms to facilitate these phases in an autonomous way. For the best results, each phase should have full use of a context window without watering down the main session&#39;s context. This should leverage your custom modes, which excel at each phase of your workflow.</p>
<h3>4. Working in the Background</h3>
<p>The best agents are more effective at producing desired results and thus are able to be more autonomous. As agents become more autonomous, the ability to work in the background or work on multiple tasks at once becomes increasingly necessary to unlock their full potential. Agents that leverage local or cloud containers to perform work independently of IDEs or working copies on an engineer&#39;s machine further increase their utility. This allows engineers to focus on drafting plans and reviewing proposed changes, ultimately to work toward managing multiple tasks at once, overseeing their agent-powered workflows as if guiding a team.</p>
<h3>5. Integrations with your Tools</h3>
<p>The Model Context Protocol (MCP) serves as a standardized interface, allowing agents to interact with your tools and data sources. The best agents seamlessly integrate with the platforms that engineers rely on, such as Confluence for documentation, Jira for tasks, and GitHub for source control and pull requests.
These integrations ensure the agent can participate meaningfully across the full software development lifecycle.</p>
<h3>6. Support for Multiple Model Providers</h3>
<p>Reliance on a single AI provider can be limiting. Top-tier agents support multiple providers, allowing teams to choose the best models for specific tasks. This flexibility enhances performance, the ability to use the latest and greatest, and also safeguards against potential downtimes or vendor-specific issues.</p>
<h2>Final Thoughts</h2>
<p>Selecting the right autonomous coding agent is a strategic decision. By prioritizing the features mentioned, technology leaders can adopt agents that can be tuned for their team&#39;s success. Tuning agents to projects and teams takes time, as does configuring the plumbing to integrate well with other systems. However, unlocking massive productivity gains is worth the squeeze. Models will become better and better, and the best agents capitalize on these improvements with little to no added effort. Set your organization and teams up to tap into the power of AI-enhanced engineering, and be more effective and more competitive.</p>
]]></description>
            <link>https://www.thisdot.co/blog/what-sets-the-best-autonomous-coding-agents-apart</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/what-sets-the-best-autonomous-coding-agents-apart</guid>
            <pubDate>Tue, 03 Jun 2025 16:07:30 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Next.js Rendering Strategies and how they affect core web vitals]]></title>
            <description><![CDATA[<p>When it comes to building fast and scalable web apps with Next.js, it’s important to understand how rendering works, especially with the App Router. Next.js organizes rendering around two main environments: the <strong>server</strong> and the <strong>client</strong>. On the server side, you’ll encounter three key strategies: <strong>Static Rendering</strong>, <strong>Dynamic Rendering</strong>, and <strong>Streaming</strong>. Each one comes with its own set of trade-offs and performance benefits, so knowing when to use which is crucial for delivering a great user experience.</p>
<p>In this post, we&#39;ll break down each strategy, what it&#39;s good for, and how it impacts your site&#39;s performance, especially Core Web Vitals. We&#39;ll also explore hybrid approaches and provide practical guidance on choosing the right strategy for your use case.</p>
<h2>What Are Core Web Vitals?</h2>
<p><a href="https://web.dev/articles/vitals#core-web-vitals">Core Web Vitals</a> are a set of metrics defined by Google that measure real-world user experience on websites. These metrics play a major role in search engine rankings and directly affect how users perceive the speed and smoothness of your site.</p>
<ul>
<li><strong>Largest Contentful Paint (LCP):</strong> This measures loading performance. It calculates the time taken for the largest visible content element to render. A good LCP is 2.5 seconds or less.</li>
<li><strong>Interaction to Next Paint (INP):</strong> This measures responsiveness to user input. A good INP is 200 milliseconds or less.</li>
<li><strong>Cumulative Layout Shift (CLS):</strong> This measures the visual stability of the page. It quantifies layout instability during load. A good CLS is 0.1 or less.</li>
</ul>
<p>If you want to dive deeper into Core Web Vitals and understand more about their impact on your website&#39;s performance, I recommend reading this detailed guide on  <a href="https://www.thisdot.co/blog/new-core-web-vitals-and-how-they-work">New Core Web Vitals and How They Work</a>.</p>
<h2>Next.js Rendering Strategies and Core Web Vitals</h2>
<p>Let&#39;s explore each rendering strategy in detail:</p>
<h3>1. Static Rendering (Server Rendering Strategy)</h3>
<p>Static Rendering is the default for Server Components in Next.js. With this approach, components are rendered at build time (or during revalidation), and the resulting HTML is reused for each request. This pre-rendering happens on the server, not in the user&#39;s browser. Static rendering is ideal for routes where the data is not personalized to the user, and this makes it suitable for:</p>
<ul>
<li><strong>Content-focused websites:</strong> Blogs, documentation, marketing pages</li>
<li><strong>E-commerce product listings:</strong> When product details don&#39;t change frequently</li>
<li><strong>SEO-critical pages:</strong> When search engine visibility is a priority</li>
<li><strong>High-traffic pages:</strong> When you want to minimize server load</li>
</ul>
<h4>How Static Rendering Affects Core Web Vitals</h4>
<ul>
<li><strong>Largest Contentful Paint (LCP):</strong> Static rendering typically leads to excellent LCP scores (often under 1s). The pre-rendered HTML can be cached and delivered instantly from CDNs, resulting in very fast delivery of the initial content, including the largest element. There is also no waiting for data fetching or rendering on the client.</li>
<li><strong>Interaction to Next Paint (INP):</strong> Static rendering provides a good foundation for INP but doesn&#39;t guarantee optimal performance (typically 50-150 ms, depending on implementation). While Server Components don&#39;t require hydration, any Client Components within the page still need JavaScript to become interactive. To achieve a very good INP score, you will need to keep the Client Components within the page to a minimum.</li>
<li><strong>Cumulative Layout Shift (CLS):</strong> While static rendering delivers the complete page structure upfront which can be very beneficial for CLS, achieving excellent CLS requires additional optimization strategies:<ul>
<li>Static HTML alone doesn&#39;t prevent layout shifts if resources load asynchronously</li>
<li>Image dimensions must be properly specified to reserve space before the image loads (see the sketch after this list)</li>
<li>Web fonts can cause text to reflow if not handled properly with font display strategies</li>
<li>Dynamically injected content (ads, embeds, lazy-loaded elements) can disrupt layout stability</li>
<li>CSS implementation significantly impacts CLS—immediate availability of styling information helps maintain visual stability</li>
</ul>
</li>
</ul>
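<p>As a concrete illustration of the image-dimension point above, here is a minimal sketch (the file and component names are hypothetical) of a statically rendered component that reserves space for its image using <code>next/image</code>, which requires explicit <code>width</code> and <code>height</code> (or <code>fill</code>):</p>
<pre><code class="language-js">// app/components/Hero.tsx - hypothetical static Server Component
import Image from &#39;next/image&#39;;

export default function Hero() {
  return (
    &lt;section&gt;
      {/* Explicit dimensions reserve the layout slot before the image loads, preventing CLS */}
      &lt;Image src=&quot;/hero.jpg&quot; alt=&quot;Hero banner&quot; width={1200} height={600} priority /&gt;
      &lt;h1&gt;Welcome&lt;/h1&gt;
    &lt;/section&gt;
  );
}
</code></pre>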
<p><strong>Code Examples:</strong></p>
<ol>
<li>Basic static rendering:</li>
</ol>
<pre><code class="language-js">// app/page.tsx (Server Component - Static Rendering by default)
export default async function Page() {
  const res = await fetch(&#39;https://api.example.com/static-data&#39;);
  const data = await res.json();
  return (
    &lt;div&gt;
      &lt;h1&gt;Static Content&lt;/h1&gt;
      &lt;p&gt;{data.content}&lt;/p&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<ol start="2">
<li>Static rendering with revalidation (ISR):</li>
</ol>
<pre><code class="language-js">// app/dashboard/page.tsx
export default async function Dashboard() {
  // Static data that revalidates every day
  const siteStats = await fetch(&#39;https://api.example.com/site-stats&#39;, {
    next: { revalidate: 86400 } // 24 hours
  }).then(r =&gt; r.json());

  // Data that revalidates every hour
  const popularProducts = await fetch(&#39;https://api.example.com/popular-products&#39;, {
    next: { revalidate: 3600 } // 1 hour
  }).then(r =&gt; r.json());

  // Data with a cache tag for on-demand revalidation
  const featuredContent = await fetch(&#39;https://api.example.com/featured-content&#39;, {
    next: { tags: [&#39;featured&#39;] }
  }).then(r =&gt; r.json());

  return (
    &lt;div className=&quot;dashboard&quot;&gt;
      &lt;section className=&quot;stats&quot;&gt;
        &lt;h2&gt;Site Statistics&lt;/h2&gt;
        &lt;p&gt;Total Users: {siteStats.totalUsers}&lt;/p&gt;
        &lt;p&gt;Total Orders: {siteStats.totalOrders}&lt;/p&gt;
      &lt;/section&gt;

      &lt;section className=&quot;popular&quot;&gt;
        &lt;h2&gt;Popular Products&lt;/h2&gt;
        &lt;ul&gt;
          {popularProducts.map(product =&gt; (
            &lt;li key={product.id}&gt;{product.name} - {product.sales} sold&lt;/li&gt;
          ))}
        &lt;/ul&gt;
      &lt;/section&gt;

      &lt;section className=&quot;featured&quot;&gt;
        &lt;h2&gt;Featured Content&lt;/h2&gt;
        &lt;div&gt;{featuredContent.html}&lt;/div&gt;
      &lt;/section&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<ol start="3">
<li>Static path generation:</li>
</ol>
<pre><code class="language-js">// app/products/[id]/page.tsx
export async function generateStaticParams() {
  const products = await fetch(&#39;https://api.example.com/products&#39;).then(r =&gt; r.json());

  return products.map((product) =&gt; ({
    id: product.id.toString(),
  }));
}

export default async function Product({ params }) {
  const product = await fetch(`https://api.example.com/products/${params.id}`).then(r =&gt; r.json());

  return (
    &lt;div&gt;
      &lt;h1&gt;{product.name}&lt;/h1&gt;
      &lt;p&gt;${product.price.toFixed(2)}&lt;/p&gt;
      &lt;p&gt;{product.description}&lt;/p&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<h3>2. Dynamic Rendering (Server Rendering Strategy)</h3>
<p>Dynamic Rendering generates HTML on the server for each request at request time. Unlike static rendering, the content is not pre-rendered or cached but freshly generated for each user. This kind of rendering works best for:</p>
<ul>
<li><strong>Personalized content:</strong> User dashboards, account pages</li>
<li><strong>Real-time data:</strong> Stock prices, live sports scores</li>
<li><strong>Request-specific information:</strong> Pages that use cookies, headers, or search parameters</li>
<li><strong>Frequently changing data:</strong> Content that needs to be up-to-date on every request</li>
</ul>
<h4>How Dynamic Rendering Affects Core Web Vitals</h4>
<ul>
<li><strong>Largest Contentful Paint (LCP):</strong> With dynamic rendering, the server needs to generate HTML for each request, and that can&#39;t be fully cached at the CDN level. It is still faster than client-side rendering as HTML is generated on the server.</li>
<li><strong>Interaction to Next Paint (INP):</strong> The performance is similar to static rendering once the page is loaded. However, it can become slower if the dynamic content includes many Client Components.</li>
<li><strong>Cumulative Layout Shift (CLS):</strong> Dynamic rendering can potentially introduce CLS if the data fetched at request time significantly alters the layout of the page compared to a static structure. However, if the layout is stable and the dynamic content size fits within predefined areas, the CLS can be managed effectively.</li>
</ul>
<p><strong>Code Examples:</strong></p>
<ol>
<li>Explicit dynamic rendering:</li>
</ol>
<pre><code class="language-js">// app/dashboard/page.tsx
export const dynamic = &#39;force-dynamic&#39;; // Force this route to be dynamically rendered

export default async function Dashboard() {
  // This will run on every request
  const data = await fetch(&#39;https://api.example.com/dashboard-data&#39;).then(r =&gt; r.json());

  return (
    &lt;div&gt;
      &lt;h1&gt;Dashboard&lt;/h1&gt;
      &lt;p&gt;Last updated: {new Date().toLocaleString()}&lt;/p&gt;
      {/* Dashboard content */}
    &lt;/div&gt;
  );
}
</code></pre>
<ol start="2">
<li>Implicit dynamic rendering with cookies:</li>
</ol>
<pre><code class="language-js">// app/profile/page.tsx
import { cookies } from &#39;next/headers&#39;;

export default async function Profile() {
  // Using cookies() automatically opts into dynamic rendering
  const userId = cookies().get(&#39;userId&#39;)?.value;

  const user = await fetch(`https://api.example.com/users/${userId}`).then(r =&gt; r.json());

  return (
    &lt;div&gt;
      &lt;h1&gt;Welcome, {user.name}&lt;/h1&gt;
      &lt;p&gt;Email: {user.email}&lt;/p&gt;
      {/* Profile content */}
    &lt;/div&gt;
  );
}
</code></pre>
<ol start="3">
<li>Dynamic routes:</li>
</ol>
<pre><code class="language-js">// app/blog/[slug]/page.tsx
export default async function BlogPost({ params }) {
  // It will run at request time for any slug not explicitly pre-rendered
  const post = await fetch(`https://api.example.com/posts/${params.slug}`).then(r =&gt; r.json());

  return (
    &lt;article&gt;
      &lt;h1&gt;{post.title}&lt;/h1&gt;
      &lt;div&gt;{post.content}&lt;/div&gt;
    &lt;/article&gt;
  );
}
</code></pre>
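<p>Relatedly, if a route also uses <code>generateStaticParams</code>, the <code>dynamicParams</code> route segment option controls what happens for slugs that were not pre-rendered. A short sketch:</p>
<pre><code class="language-js">// app/blog/[slug]/page.tsx
// true (the default): unknown slugs are dynamically rendered on demand
export const dynamicParams = true;

// false: unknown slugs return a 404 instead
// export const dynamicParams = false;
</code></pre>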
<h3>3. Streaming (Server Rendering Strategy)</h3>
<p>Streaming allows you to progressively render UI from the server. Instead of waiting for all the data to be ready before sending any HTML, the server sends chunks of HTML as they become available. This is implemented using React&#39;s Suspense boundary.</p>
<p>React Suspense works by creating boundaries in your component tree that can &quot;suspend&quot; rendering while waiting for asynchronous operations. When a component inside a Suspense boundary throws a promise (which happens automatically with data fetching in React Server Components), React pauses rendering of that component and its children, renders the fallback UI specified in the Suspense component, continues rendering other parts of the page outside this boundary, and eventually resumes and replaces the fallback with the actual component once the promise resolves.</p>
<p>When streaming, this mechanism allows the server to send the initial HTML with fallbacks for suspended components while continuing to process suspended components in the background. The server then streams additional HTML chunks as each suspended component resolves, including instructions for the browser to seamlessly replace fallbacks with final content. It works well for:</p>
<ul>
<li><strong>Pages with mixed data requirements:</strong> Some fast, some slow data sources</li>
<li><strong>Improving perceived performance:</strong> Show users something quickly while slower parts load</li>
<li><strong>Complex dashboards:</strong> Different widgets have different loading times</li>
<li><strong>Handling slow APIs:</strong> Prevent slow third-party services from blocking the entire page</li>
</ul>
<h4>How Streaming Affects Core Web Vitals</h4>
<ul>
<li><strong>Largest Contentful Paint (LCP):</strong> Streaming can improve the perceived LCP. By sending the initial HTML content quickly, including potentially the largest element, the browser can render it sooner. Even if other parts of the page are still loading, the user sees the main content faster.</li>
<li><strong>Interaction to Next Paint (INP):</strong> Streaming can contribute to a better INP. When used with React&#39;s <code>&lt;Suspense /&gt;</code>, interactive elements in the faster-loading parts of the page can become interactive earlier, even while other components are still being streamed in. This allows users to engage with the page sooner.</li>
<li><strong>Cumulative Layout Shift (CLS):</strong> Streaming can cause layout shifts as new content streams in. However, when implemented carefully, streaming should not negatively impact CLS. The initially streamed content should establish the main layout, and subsequent streamed chunks should ideally fit within this structure without causing significant reflows or layout shifts. Using placeholders and ensuring dimensions are known can help prevent CLS.</li>
</ul>
<p><strong>Code Examples:</strong></p>
<ol>
<li>Basic Streaming with Suspense:</li>
</ol>
<pre><code class="language-js">// app/dashboard/page.tsx
import { Suspense } from &#39;react&#39;;
import UserProfile from &#39;./components/UserProfile&#39;;
import RecentActivity from &#39;./components/RecentActivity&#39;;
import PopularPosts from &#39;./components/PopularPosts&#39;;

export default function Dashboard() {
  return (
    &lt;div className=&quot;dashboard&quot;&gt;
      {/* This loads quickly */}
      &lt;h1&gt;Dashboard&lt;/h1&gt;

      {/* User profile loads first */}
      &lt;Suspense fallback={&lt;div className=&quot;skeleton-profile&quot;&gt;Loading profile...&lt;/div&gt;}&gt;
        &lt;UserProfile /&gt;
      &lt;/Suspense&gt;

      {/* Recent activity might take longer */}
      &lt;Suspense fallback={&lt;div className=&quot;skeleton-activity&quot;&gt;Loading activity...&lt;/div&gt;}&gt;
        &lt;RecentActivity /&gt;
      &lt;/Suspense&gt;

      {/* Popular posts might be the slowest */}
      &lt;Suspense fallback={&lt;div className=&quot;skeleton-posts&quot;&gt;Loading popular posts...&lt;/div&gt;}&gt;
        &lt;PopularPosts /&gt;
      &lt;/Suspense&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<ol start="2">
<li>Nested Suspense boundaries for more granular control:</li>
</ol>
<pre><code class="language-js">// app/complex-page/page.tsx
import { Suspense } from &#39;react&#39;;

export default function ComplexPage() {
  return (
    &lt;Suspense fallback={&lt;PageSkeleton /&gt;}&gt;
      &lt;Header /&gt;

      &lt;div className=&quot;content-grid&quot;&gt;
        &lt;div className=&quot;main-content&quot;&gt;
          &lt;Suspense fallback={&lt;MainContentSkeleton /&gt;}&gt;
            &lt;MainContent /&gt;
          &lt;/Suspense&gt;
        &lt;/div&gt;

        &lt;div className=&quot;sidebar&quot;&gt;
          &lt;Suspense fallback={&lt;SidebarTopSkeleton /&gt;}&gt;
            &lt;SidebarTopSection /&gt;
          &lt;/Suspense&gt;

          &lt;Suspense fallback={&lt;SidebarBottomSkeleton /&gt;}&gt;
            &lt;SidebarBottomSection /&gt;
          &lt;/Suspense&gt;
        &lt;/div&gt;
      &lt;/div&gt;

      &lt;Footer /&gt;
    &lt;/Suspense&gt;
  );
}
</code></pre>
<ol start="3">
<li>Using Next.js loading.js convention:</li>
</ol>
<pre><code class="language-js">// app/products/loading.tsx - This will automatically be used as a Suspense fallback
export default function Loading() {
  return (
    &lt;div className=&quot;products-loading-skeleton&quot;&gt;
      &lt;div className=&quot;header-skeleton&quot; /&gt;
      &lt;div className=&quot;filters-skeleton&quot; /&gt;
      &lt;div className=&quot;products-grid-skeleton&quot;&gt;
        {Array.from({ length: 12 }).map((_, i) =&gt; (
          &lt;div key={i} className=&quot;product-card-skeleton&quot; /&gt;
        ))}
      &lt;/div&gt;
    &lt;/div&gt;
  );
}

// app/products/page.tsx
export default async function ProductsPage() {
  // This component can take time to load
  // Next.js will automatically wrap it in Suspense
  // and use the loading.js as the fallback
  const products = await fetchProducts();

  return &lt;ProductsList products={products} /&gt;;
}
</code></pre>
<h3>4. Client Components and Client-Side Rendering</h3>
<p>Client Components are defined using the React <code>&#39;use client&#39;</code> directive. They are pre-rendered on the server but then hydrated on the client, enabling interactivity. This is different from pure client-side rendering (CSR), where rendering happens entirely in the browser. In the traditional sense of CSR (where the initial HTML is minimal, and all rendering happens in the browser), Next.js has moved away from this as a default approach, but it can still be achieved by using dynamic imports and setting <code>ssr: false</code>.</p>
<pre><code class="language-js">// app/csr-example/page.tsx
&#39;use client&#39;;

import { useState, useEffect } from &#39;react&#39;;
import dynamic from &#39;next/dynamic&#39;;

// Lazily load a component with no SSR
const ClientOnlyComponent = dynamic(
  () =&gt; import(&#39;../components/heavy-component&#39;),
  { ssr: false, loading: () =&gt; &lt;p&gt;Loading...&lt;/p&gt; }
);

export default function CSRPage() {
  const [isClient, setIsClient] = useState(false);

  useEffect(() =&gt; {
    setIsClient(true);
  }, []);

  return (
    &lt;div&gt;
      &lt;h1&gt;Client-Side Rendered Page&lt;/h1&gt;
      {isClient ? (
        &lt;ClientOnlyComponent /&gt;
      ) : (
        &lt;p&gt;Loading client component...&lt;/p&gt;
      )}
    &lt;/div&gt;
  );
}
</code></pre>
<p>Despite the shift toward server rendering, there are valid use cases for CSR:</p>
<ol>
<li><strong>Private dashboards</strong>: Where SEO doesn&#39;t matter, and you want to reduce server load</li>
<li><strong>Heavy interactive applications</strong>: Like data visualization tools or complex editors</li>
<li><strong>Browser-only APIs</strong>: When you need access to browser-specific features like localStorage or WebGL</li>
<li><strong>Third-party integrations</strong>: Some third-party widgets or libraries that only work in the browser</li>
</ol>
<p>While these are valid use cases, using Client Components is generally preferable to pure CSR in Next.js. Client Components give you the best of both worlds: server-rendered HTML for the initial load (improving SEO and LCP) with client-side interactivity after hydration. Pure CSR should be reserved for specific scenarios where server rendering is impossible or counterproductive.</p>
<p>Client components are good for:</p>
<ul>
<li><strong>Interactive UI elements:</strong> Forms, dropdowns, modals, tabs</li>
<li><strong>State-dependent UI:</strong> Components that change based on client state</li>
<li><strong>Browser API access:</strong> Components that need localStorage, geolocation, etc.</li>
<li><strong>Event-driven interactions:</strong> Click handlers, form submissions, animations</li>
<li><strong>Real-time updates:</strong> Chat interfaces, live notifications</li>
</ul>
<h4>How Client Components Affect Core Web Vitals</h4>
<ul>
<li><strong>Largest Contentful Paint (LCP):</strong> Initial HTML includes the server-rendered version of Client Components, so LCP is reasonably fast. Hydration can delay interactivity but doesn&#39;t necessarily affect LCP.</li>
<li><strong>Interaction to Next Paint (INP):</strong> For Client Components, hydration can cause input delay during page load, and when the page is hydrated, performance depends on the efficiency of event handlers. Also, complex state management can impact responsiveness.</li>
<li><strong>Cumulative Layout Shift (CLS):</strong> Client-side data fetching can cause layout shifts as new data arrives. Also, state changes might alter the layout unexpectedly. Using Client Components will require careful implementation to prevent shifts.</li>
</ul>
<p><strong>Code Examples:</strong></p>
<ol>
<li>Basic Client Component:</li>
</ol>
<pre><code class="language-js">// app/components/Counter.tsx
&#39;use client&#39;;

import { useState } from &#39;react&#39;;

export default function Counter() {
  const [count, setCount] = useState(0);

  return (
    &lt;div&gt;
      &lt;p&gt;Count: {count}&lt;/p&gt;
      &lt;button onClick={() =&gt; setCount(count + 1)}&gt;Increment&lt;/button&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<ol start="2">
<li>Client Component with server data:</li>
</ol>
<pre><code class="language-js">// app/products/page.tsx - Server Component
import ProductFilter from &#39;../components/ProductFilter&#39;;

export default async function ProductsPage() {
  // Fetch data on the server
  const products = await fetch(&#39;https://api.example.com/products&#39;).then(r =&gt; r.json());

  // Pass server data to Client Component as props
  return &lt;ProductFilter initialProducts={products} /&gt;;
}
</code></pre>
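<p>For completeness, here is one possible sketch of the <code>ProductFilter</code> Client Component referenced above (the filtering logic and markup are illustrative assumptions):</p>
<pre><code class="language-js">// app/components/ProductFilter.tsx - hypothetical Client Component
&#39;use client&#39;;

import { useState } from &#39;react&#39;;

export default function ProductFilter({ initialProducts }) {
  const [query, setQuery] = useState(&#39;&#39;);

  // Filter the server-fetched products on the client as the user types
  const visible = initialProducts.filter((p) =&gt;
    p.name.toLowerCase().includes(query.toLowerCase())
  );

  return (
    &lt;div&gt;
      &lt;input
        value={query}
        onChange={(e) =&gt; setQuery(e.target.value)}
        placeholder=&quot;Filter products...&quot;
      /&gt;
      &lt;ul&gt;
        {visible.map((p) =&gt; (
          &lt;li key={p.id}&gt;{p.name}&lt;/li&gt;
        ))}
      &lt;/ul&gt;
    &lt;/div&gt;
  );
}
</code></pre>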
<h2>Hybrid Approaches and Composition Patterns</h2>
<p>In real-world applications, you&#39;ll often use a combination of rendering strategies to achieve the best performance. Next.js makes it easy to compose Server and Client Components together.</p>
<h3>Server Components with Islands of Interactivity</h3>
<p>One of the most effective patterns is to use Server Components for the majority of your UI and add Client Components only where interactivity is needed. This approach:</p>
<ol>
<li>Minimizes JavaScript sent to the client</li>
<li>Provides excellent initial load performance</li>
<li>Maintains good interactivity where needed</li>
</ol>
<pre><code class="language-js">// app/products/[id]/page.tsx - Server Component
import AddToCartButton from &#39;../../components/AddToCartButton&#39;;
import ProductReviews from &#39;../../components/ProductReviews&#39;;
import RelatedProducts from &#39;../../components/RelatedProducts&#39;;

export default async function ProductPage({ params }: {
  params: { id: string; }
}) {
  // Fetch product data on the server
  const product = await fetch(`https://api.example.com/products/${params.id}`).then(r =&gt; r.json());

  return (
    &lt;div className=&quot;product-page&quot;&gt;
      &lt;div className=&quot;product-main&quot;&gt;
        &lt;h1&gt;{product.name}&lt;/h1&gt;
        &lt;p className=&quot;price&quot;&gt;${product.price.toFixed(2)}&lt;/p&gt;
        &lt;div className=&quot;description&quot;&gt;{product.description}&lt;/div&gt;

        {/* Client Component for interactivity */}
        &lt;AddToCartButton product={product} /&gt;
      &lt;/div&gt;

      {/* Server Component for product reviews */}
      &lt;ProductReviews productId={params.id} /&gt;

      {/* Server Component for related products */}
      &lt;RelatedProducts categoryId={product.categoryId} /&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<h3>Partial Prerendering (Next.js 15)</h3>
<p>Next.js 15 ships Partial Prerendering (introduced experimentally in Next.js 14), a hybrid rendering strategy that combines static and dynamic content in a single route. This allows you to:</p>
<ol>
<li>Statically generate a shell of the page</li>
<li>Stream in dynamic, personalized content</li>
<li>Get the best of both static and dynamic rendering</li>
</ol>
<p><strong>Note:</strong> At the time of this writing, Partial Prerendering is experimental and is not ready for production use. <a href="https://nextjs.org/docs/app/building-your-application/rendering/partial-prerendering">Read more</a></p>
<pre><code class="language-js">// app/dashboard/page.tsx
import { unstable_noStore as noStore } from &#39;next/cache&#39;;
import StaticContent from &#39;./components/StaticContent&#39;;
import DynamicContent from &#39;./components/DynamicContent&#39;;

export default function Dashboard() {
  return (
    &lt;div className=&quot;dashboard&quot;&gt;
      {/* This part is statically generated */}
      &lt;StaticContent /&gt;

      {/* This part is dynamically rendered */}
      &lt;DynamicPart /&gt;
    &lt;/div&gt;
  );
}

// This component and its children will be dynamically rendered
function DynamicPart() {
  // Opt out of caching for this part
  noStore();

  return &lt;DynamicContent /&gt;;
}
</code></pre>
<h2>Measuring Core Web Vitals in Next.js</h2>
<p>Understanding the impact of your rendering strategy choices requires measuring Core Web Vitals in real-world conditions. Here are some approaches:</p>
<h3>1. Vercel Analytics</h3>
<p>If you deploy on Vercel, you can use <a href="https://vercel.com/analytics">Vercel Analytics</a> to automatically track Core Web Vitals for your production site:</p>
<pre><code class="language-js">// app/layout.tsx
import { Analytics } from &#39;@vercel/analytics/react&#39;;

export default function RootLayout({ children }: {
  children: React.ReactNode;
}) {
  return (
    &lt;html lang=&quot;en&quot;&gt;
      &lt;body&gt;
        {children}
        &lt;Analytics /&gt;
      &lt;/body&gt;
    &lt;/html&gt;
  );
}
</code></pre>
<h3>2. Web Vitals API</h3>
<p>You can manually track Core Web Vitals using the <a href="https://www.npmjs.com/package/web-vitals">web-vitals</a> library:</p>
<pre><code class="language-js">// app/components/WebVitalsReporter.tsx
&#39;use client&#39;;

import { useEffect } from &#39;react&#39;;
import { onCLS, onINP, onLCP } from &#39;web-vitals&#39;;

export function WebVitalsReporter() {
  useEffect(() =&gt; {
    // Report Core Web Vitals
    onCLS(metric =&gt; console.log(&#39;CLS:&#39;, metric.value));
    onINP(metric =&gt; console.log(&#39;INP:&#39;, metric.value));
    onLCP(metric =&gt; console.log(&#39;LCP:&#39;, metric.value));

    // In a real app, you would send these to your analytics service
  }, []);

  return null; // This component doesn&#39;t render anything
}
</code></pre>
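<p>Since the component renders nothing, you would typically mount it once, for example in the root layout (a minimal sketch):</p>
<pre><code class="language-js">// app/layout.tsx
import { WebVitalsReporter } from &#39;./components/WebVitalsReporter&#39;;

export default function RootLayout({ children }: {
  children: React.ReactNode;
}) {
  return (
    &lt;html lang=&quot;en&quot;&gt;
      &lt;body&gt;
        {children}
        &lt;WebVitalsReporter /&gt;
      &lt;/body&gt;
    &lt;/html&gt;
  );
}
</code></pre>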
<h3>3. Lighthouse and PageSpeed Insights</h3>
<p>For development and testing, use:</p>
<ul>
<li>Chrome DevTools Lighthouse tab</li>
<li><a href="https://pagespeed.web.dev/">PageSpeed Insights</a></li>
<li><a href="https://developers.google.com/web/tools/chrome-user-experience-report">Chrome User Experience Report</a></li>
</ul>
<h2>Making Practical Decisions: Which Rendering Strategy to Choose?</h2>
<p>Choosing the right rendering strategy depends on your specific requirements. Here&#39;s a decision framework:</p>
<h3>Choose Static Rendering when</h3>
<ul>
<li>Content is the same for all users</li>
<li>Data can be determined at build time</li>
<li>Page doesn&#39;t need frequent updates</li>
<li>SEO is critical</li>
<li>You want the best possible performance</li>
</ul>
<h3>Choose Dynamic Rendering when</h3>
<ul>
<li>Content is personalized for each user</li>
<li>Data must be fresh on every request</li>
<li>You need access to request-time information</li>
<li>Content changes frequently</li>
</ul>
<h3>Choose Streaming when</h3>
<ul>
<li>Page has a mix of fast and slow data requirements</li>
<li>You want to improve perceived performance</li>
<li>Some parts of the page depend on slow APIs</li>
<li>You want to prioritize showing critical UI first</li>
</ul>
<h3>Choose Client Components when</h3>
<ul>
<li>UI needs to be interactive</li>
<li>Component relies on browser APIs</li>
<li>UI changes frequently based on user input</li>
<li>You need real-time updates</li>
</ul>
<h2>Conclusion</h2>
<p>Next.js provides a powerful set of rendering strategies that allow you to optimize for both performance and user experience. By understanding how each strategy affects Core Web Vitals, you can make informed decisions about how to build your application.</p>
<p>Remember that the best approach is often a hybrid one, combining different rendering strategies based on the specific requirements of each part of your application. Start with Server Components as your default, use Static Rendering where possible, and add Client Components only where interactivity is needed.</p>
<p>By following these principles and measuring your Core Web Vitals, you can create Next.js applications that are fast, responsive, and provide an excellent user experience.</p>
]]></description>
            <link>https://www.thisdot.co/blog/next-js-rendering-strategies-and-how-they-affect-core-web-vitals</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/next-js-rendering-strategies-and-how-they-affect-core-web-vitals</guid>
            <pubDate>Fri, 30 May 2025 14:56:20 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Introduction to Vercel’s Flags SDK]]></title>
            <description><![CDATA[<h1>Introduction to Vercel’s Flags SDK</h1>
<p>In this blog, we will dig into <a href="https://flags-sdk.dev/">Vercel’s Flags SDK</a>. We&#39;ll explore how it works, highlight its key capabilities, and discuss best practices to get the most out of it.</p>
<p>You&#39;ll also understand why you might prefer this tool over other feature flag solutions out there. And, despite its strong integration with <a href="https://nextjs.org/">Next.js</a>, this SDK isn&#39;t limited to just one framework—it&#39;s fully compatible with <a href="https://react.dev/">React</a> and <a href="https://svelte.dev/">SvelteKit</a>. We&#39;ll use Next.js for examples, but feel free to follow along with the framework of your choice.</p>
<h2>Why should I use it?</h2>
<p>You might wonder, &quot;Why should I care about yet another feature flag library?&quot; Unlike some other solutions, Vercel&#39;s Flags SDK offers unique, practical features. It offers simplicity, flexibility, and smart patterns to help you manage feature flags quickly and efficiently.</p>
<h3>It’s simple</h3>
<p>Let&#39;s start with a basic example:</p>
<pre><code class="language-jsx">app
 ↳flags.js

import { flag } from &#39;flags/next&#39;;

export const exampleFlag = flag({
    key: &#39;example-flag&#39;,
    identify() {
        return { user: { id: &#39;123&#39; } };
    },
    decide({ entities }) {
        return entities.user.id === &#39;123&#39;;
    },
});

// page.js
const exampleValue = await exampleFlag();
</code></pre>
<p>This might look simple — and it is! — but it showcases some important features. Notice how easily we can define and call our flag without repeatedly passing context or configuration.</p>
<p>Many other SDKs require passing the flag&#39;s name and context every single time you check a flag, like this:</p>
<pre><code class="language-jsx">const exampleValue = await client.getBooleanValue(&#39;exampleFlag&#39;, context);
</code></pre>
<p>This can become tedious and error-prone, as you might accidentally use different contexts throughout your app. With the Flags SDK, you define everything once upfront, keeping things consistent across your entire application.</p>
<p>By &quot;context&quot;, I mean the data needed to evaluate the flag, like user details or environment settings. We&#39;ll get into more detail shortly.</p>
<h3>It’s flexible</h3>
<p>Vercel’s Flags SDK is also flexible. You can integrate it with other popular feature flag providers like <a href="https://launchdarkly.com/">LaunchDarkly</a> or <a href="https://www.statsig.com/">Statsig</a> using built-in <a href="https://flags-sdk.dev/providers">adapters</a>. And if the provider you want to use isn’t supported yet, you can easily create your own <a href="https://flags-sdk.dev/providers/custom-adapters">custom adapter</a>.</p>
<p>While we&#39;ll use Next.js for demonstration, remember that the SDK works just as well with React or SvelteKit.</p>
<h2>Latency solutions</h2>
<p>Feature flags require definitions and context evaluations to determine their values — imagine checking conditions like, &quot;Is the user ID equal to 12?&quot; Typically, these evaluations involve fetching necessary information from a server, which can introduce latency.</p>
<p>These evaluations happen through two primary functions: <code>identify</code> and <code>decide</code>. The identify function gathers the context needed for evaluation, and this context is then passed as an argument named <code>entities</code> to the <code>decide</code> function. Let&#39;s revisit our earlier example to see this clearly:</p>
<pre><code class="language-jsx">app
 ↳flags.js

import { flag } from &#39;flags/next&#39;;

export const exampleFlag = flag({
    key: &#39;example-flag&#39;,
    identify() {
        // Identify our evaluation context   
        return { user: { id: &#39;123&#39; } };
    },
    decide({ entities }) {
        // Evaluate or decide our value based on our condition
        return entities.user.id === &#39;123&#39;;
    },
});
</code></pre>
<p>You could add a <a href="https://flags-sdk.dev/principles/evaluation-context#custom-evaluation-context">custom evaluation context</a> when reading a feature flag, but doing so is generally discouraged.</p>
<h3>Using Edge Config</h3>
<p>When loading our flags, normally, these definitions and evaluation contexts get bootstrapped by making a network request and then opening a web socket listening to changes on the server. The problem is that if you do this in Serverless Functions with a short lifespan, you would need to bootstrap the definitions not just once but multiple times, which could cause latency issues.</p>
<p>To handle latency efficiently, especially in short-lived Serverless Functions, you can use <a href="https://vercel.com/docs/edge-config">Edge Config</a>. Edge Config stores flag definitions at the Edge, allowing super-fast retrieval via <a href="https://vercel.com/docs/edge-middleware">Edge Middleware</a> or Serverless Functions, significantly reducing latency.</p>
<h3>Cookies</h3>
<p>For more complex contexts requiring network requests, avoid doing these requests directly in Edge Middleware or CDNs, as this can drastically increase latency. Edge Middleware and CDNs are fast because they avoid making network requests to the origin server. Depending on the end user’s location, accessing a distant origin can introduce significant latency. For example, a user in Tokyo might need to connect to a server in the US before the page can load.</p>
<p>Instead, a good pattern the Flags SDK offers to avoid this is cookies. You can use cookies to store context data. The browser automatically sends cookies with each request in a standard format, providing consistent, low-latency access to evaluation context data, whether you are in Edge Middleware, the <a href="https://nextjs.org/docs/app">App Router</a>, or the <a href="https://nextjs.org/docs/pages">Pages Router</a>:</p>
<pre><code class="language-jsx">export const exampleFlag = flag({
    // Definition
    key: &#39;example-flag&#39;,
    // Context
    identify({ cookies }) {
        // We get the cookie that we need for our context
        const userId = cookies.get(&#39;user-id&#39;)?.value;
        return { user: userId ? { id: userId } : undefined };
    },
    // Evaluation
    decide({ entities }) {
        return entities?.user?.id === &#39;12&#39;; // Cookie values are strings, so compare against a string
    },
});
</code></pre>
<p>You can also encrypt or sign cookies for additional security from the client side.</p>
<h3>Dedupe</h3>
<p>Dedupe helps you cache function results to prevent redundant evaluations. If multiple flags rely on a common context method, like checking a user&#39;s region, Dedupe ensures the method executes only once per runtime, regardless of how many times it&#39;s invoked. Additionally, similar to cookies, the Flags SDK standardizes headers, allowing easy access to them. Let&#39;s illustrate this with the following example:</p>
<pre><code class="language-jsx">app
 ↳flags.js

 import { dedupe, flag } from &quot;flags/next&quot;;

// Simulate a fake fetch function
async function fakeFetch(url, options) {
    return new Response(JSON.stringify({ region: &#39;EU&#39; }), { status: 200 });
}

// Simulated function to get the user&#39;s region from the request headers.
async function getUserRegion(headers) {
    // In a real-world scenario, this might involve calling an external geolocation API.
    // So we&#39;ll use a fake API to simulate the response.
    const response = await fakeFetch(&#39;https://api.example.com/get-region&#39;, {
        method: &#39;GET&#39;,
        headers: { &#39;x-country&#39;: headers.get(&#39;x-country&#39;) || &#39;&#39; }
    });
    const data = await response.json();
    return data;
}

// Wrap the region retrieval function using dedupe so that it runs only once per request.
const identifyRegion = dedupe(
    async ({ headers }) =&gt; {
        return await getUserRegion(headers);
    },
);

// Define the feature flag that decides the promotional discount eligibility based on the user&#39;s region.
export const promoDiscountFlag = flag({
    key: &#39;promo-discount-flag&#39;,
    // Use the deduped identify function for evaluation context.
    identify: identifyRegion,
    decide({ entities }) {
        // If the region isn’t determined, disable the flag.
        if (!entities?.region) return false;
        // Only enable the promotion for users in either &#39;EU&#39; or &#39;NA&#39;.
        return [&#39;EU&#39;, &#39;NA&#39;].includes(entities.region);
    },
});

app
 ↳plans
   ↳page.jsx

import { promoDiscountFlag } from &#39;../flags&#39;;

export default async function PlansPage() {
    const isPromoAvailable = await promoDiscountFlag();

    return (
        &lt;div className=&quot;p-4&quot;&gt;
            &lt;h1 className=&quot;text-2xl font-bold mb-4&quot;&gt;Store&lt;/h1&gt;

            &lt;div className=&quot;grid grid-cols-1 md:grid-cols-2 gap-4&quot;&gt;
                &lt;div className=&quot;border rounded-lg p-4&quot;&gt;
                    &lt;h2 className=&quot;text-xl font-semibold mb-2&quot;&gt;Basic Plan&lt;/h2&gt;
                    &lt;p className=&quot;text-gray-600 mb-2&quot;&gt;Essential features for everyday use&lt;/p&gt;
                    &lt;p className=&quot;text-2xl font-bold&quot;&gt;$9.99/month&lt;/p&gt;
                &lt;/div&gt;

                &lt;div className=&quot;border rounded-lg p-4&quot;&gt;
                    &lt;h2 className=&quot;text-xl font-semibold mb-2&quot;&gt;Premium Plan&lt;/h2&gt;
                    &lt;p className=&quot;text-gray-600 mb-2&quot;&gt;Advanced features for power users&lt;/p&gt;
                    { isPromoAvailable ? (
                        &lt;div&gt;
                            &lt;p className=&quot;text-sm text-gray-500 line-through&quot;&gt;$19.99/month&lt;/p&gt;
                            &lt;p className=&quot;text-2xl font-bold text-green-600&quot;&gt;$14.99/month&lt;/p&gt;
                            &lt;p className=&quot;text-sm text-green-600&quot;&gt;Special regional promotion!&lt;/p&gt;
                        &lt;/div&gt;
                    ) : (
                        &lt;p className=&quot;text-2xl font-bold&quot;&gt;$19.99/month&lt;/p&gt;
                    )}
                &lt;/div&gt;
            &lt;/div&gt;
        &lt;/div&gt;
    );
}
</code></pre>
<h2>Server-side patterns for static pages</h2>
<p>You can evaluate feature flags on the client side, but that leads to unnecessary loaders/skeletons or layout shifts, which are never great. It does bring benefits, though, such as keeping the page statically rendered.</p>
<p>To maintain static rendering benefits while using server-side flags, the SDK provides a method called <a href="https://flags-sdk.dev/principles/precompute">precompute</a>.</p>
<h3>Precompute</h3>
<p>Precompute lets you decide which page version to display based on feature flags; that version can then be cached and statically rendered. You can precompute flag combinations in Middleware or <a href="https://nextjs.org/docs/app/building-your-application/routing/route-handlers">Route Handlers</a>:</p>
<pre><code class="language-jsx">app
 ↳flags.js

import { flag } from &quot;flags/next&quot;;

export const showNewLayout = flag({
    // Definition
    key: &#39;new-layout&#39;,
    // Context
    identify({ cookies }) {
        const userId = cookies.get(&#39;user-id&#39;)?.value;
        return { user: userId ? { id: userId } : undefined };
    },
    // Evaluation
    decide({ entities }) {
        return entities?.user?.id === &#39;12&#39;;
    },
});

export const showSilksongBanner = flag({
    key: &#39;silksong-banner&#39;,
    identify({ cookies }) {
        return { user: cookies.get(&#39;vessel&#39;)?.value ? { id: cookies.get(&#39;vessel&#39;)?.value } : undefined };
    },
    decide({ entities }) {
        return entities?.user?.id === &#39;hornet&#39;;
    },
});

// Export our flags in an array (it can be just one or multiple flags)
export const homePageFlags = [showNewLayout, showSilksongBanner];
</code></pre>
<p>Next, inside a middleware (or route handler), we precompute these flags and create <a href="https://nextjs.org/docs/pages/building-your-application/rendering/static-site-generation">static pages</a> for each combination of them.</p>
<pre><code class="language-jsx">// middleware.ts
import { type NextRequest, NextResponse } from &#39;next/server&#39;;
import { precompute } from &#39;flags/next&#39;;
import { homePageFlags } from &#39;./flags&#39;;

// Note that we&#39;re running this middleware for / only, but
// you could extend it to other pages you&#39;re experimenting on.
export const config = { matcher: [&#39;/&#39;] };

export async function middleware(request: NextRequest) {
  // precompute returns a string encoding each flag&#39;s returned value
  const code = await precompute(homePageFlags);

  // rewrites the request to include the precomputed code for this flag combination
  const nextUrl = new URL(
    `/${code}${request.nextUrl.pathname}${request.nextUrl.search}`,
    request.url,
  );

  return NextResponse.rewrite(nextUrl, { request });
}
</code></pre>
<p>The user will never notice this because we use a rewrite rather than a redirect: the browser keeps showing the original URL.</p>
<p>Now, on our page, we “invoke” our flags, passing the code from the route params:</p>
<pre><code class="language-jsx">app
 ↳[code]
   ↳page.jsx

import { showSilksongBanner, homePageFlags, showNewLayout } from &quot;../flags&quot;;

export default async function Page({ params }) {
    const { code } = params;
    const shouldShowSilksongBanner = await showSilksongBanner(code, homePageFlags);
    const shouldShowNewLayout = await showNewLayout(code, homePageFlags);

    return (
        &lt;div className=&quot;p-4&quot;&gt;
            {shouldShowSilksongBanner &amp;&amp; (
                &lt;div className=&quot;bg-blue-100 p-3 mb-4 rounded&quot;&gt;
                    🎮 Silksong Available
                &lt;/div&gt;
            )}

            &lt;div className=&quot;bg-white p-4 rounded shadow&quot;&gt;
                &lt;h1 className=&quot;text-xl font-bold mb-2&quot;&gt;Welcome to Hallownest&lt;/h1&gt;

                {shouldShowNewLayout ? (
                    &lt;div className=&quot;mt-4&quot;&gt;
                        &lt;h2 className=&quot;font-semibold mb-2&quot;&gt;Your Progress&lt;/h2&gt;
                        &lt;div className=&quot;space-y-2&quot;&gt;
                            &lt;div&gt;✅ 3 areas completed&lt;/div&gt;
                            &lt;div&gt;🔄 2 areas in progress&lt;/div&gt;
                            &lt;div&gt;🔒 5 areas locked&lt;/div&gt;
                        &lt;/div&gt;
                    &lt;/div&gt;
                ) : (
                    &lt;p className=&quot;text-gray-600&quot;&gt;Start your journey in the vast underground kingdom.&lt;/p&gt;
                )}
            &lt;/div&gt;
        &lt;/div&gt;
    );
}
</code></pre>
<p>By passing the code, we are not evaluating the flag again; we are reading its precomputed value right away. The middleware has already decided which variation of the page to display to the user.</p>
<p>Finally, after rendering our page, we can enable <a href="https://nextjs.org/docs/pages/building-your-application/data-fetching/incremental-static-regeneration">Incremental Static Regeneration</a> (ISR). ISR allows us to cache the page and serve it statically for subsequent user requests:</p>
<pre><code class="language-jsx">import { Params } from &quot;next/dist/server/request/params&quot;;
import { showSilksongBanner, homePageFlags, showNewLayout } from &quot;../flags&quot;;

interface HomeParams extends Params {
    code: string;
}

export async function generateStaticParams() {
    // returning an empty array is enough to enable ISR
    return [];
}

export default async function Page({ params }: { params: HomeParams }) {
...
}
</code></pre>
<p>Using <code>precompute</code> is particularly beneficial when enabling ISR for pages that depend on flags whose values cannot be determined at build time. Request headers, geolocation, and similar values are only known at request time, so we use <code>precompute()</code> to let the Edge evaluate them on the fly. In these cases, we rely on Middleware to dynamically determine the flag values, generate the HTML content once, and then cache it. At build time, we simply create an initial HTML shell.</p>
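<p>As a minimal sketch of such a request-time flag (assuming that <code>identify</code> receives the request headers alongside the cookies shown earlier, and using Vercel&#39;s <code>x-vercel-ip-country</code> header as a stand-in for whatever geo source your platform provides), the context could be derived like this:</p>
<pre><code class="language-jsx">import { flag } from &#39;flags/next&#39;;

export const regionalBanner = flag({
    key: &#39;regional-banner&#39;,
    // Headers only exist at request time, never at build time.
    identify({ headers }) {
        const country = headers.get(&#39;x-vercel-ip-country&#39;);
        return { country: country ?? undefined };
    },
    decide({ entities }) {
        // No country detected means no banner.
        return entities?.country === &#39;DE&#39;;
    },
});
</code></pre>
<p>Precomputing this flag in Middleware keeps the per-country decision at the Edge, while the rendered variants themselves remain cacheable.</p>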
<h3>Generate Permutations</h3>
<p>If we prefer to generate static pages at build time instead of runtime, we can use the <code>generatePermutations</code> function from the Flags SDK. It pre-generates a static page for every combination of flag values, which is especially useful when those values are known beforehand: for example, A/B tests or a marketing site with a single on/off banner flag. In the example below, two boolean flags and one three-option flag should yield 2 × 2 × 3 = 12 permutations, so twelve static variants get prerendered.</p>
<pre><code class="language-jsx">app
 ↳flags.js

import { flag } from &#39;flags/next&#39;; 

export const showSilksongBanner = flag({
    key: &#39;showSilksongBanner&#39;,
    decide() {
        return true;
    },
});

export const showNewLayout = flag({
    key: &#39;showNewLayout&#39;,
    decide() {
        return true;
    },
});

export const greetingStyle = flag({
    key: &#39;greetingStyle&#39;,
    options: [&#39;classic&#39;, &#39;modern&#39;, &#39;steampunk&#39;],
    decide() {
        return &#39;classic&#39;;
    },
});

export const homePageFlags = [
  showSilksongBanner,
  showNewLayout,
  greetingStyle,
];
</code></pre>
<pre><code class="language-jsx">import { generatePermutations } from &#39;flags/next&#39;;
import {
  showSilksongBanner,
  showNewLayout,
  greetingStyle,
  homePageFlags,
} from &#39;../flags&#39;;

// 1) at build time, Next will run this and prerender each combo
export async function generateStaticParams() {
  const codes = await generatePermutations(homePageFlags);
  return codes.map((code) =&gt; ({ code }));
}

export default async function Page({ params }) {
  const { code } = params;

  // 2) at request time, Next simply reads the prerendered HTML for this code
  const showBanner = await showSilksongBanner(code, homePageFlags);
  const useNewLayout = await showNewLayout(code, homePageFlags);
  const style = await greetingStyle(code, homePageFlags);

  return (
    &lt;div className=&quot;p-4&quot;&gt;
      {showBanner &amp;&amp; (
        &lt;div className=&quot;bg-blue-100 p-3 mb-4 rounded&quot;&gt;
          🎮 Silksong Available
        &lt;/div&gt;
      )}

      &lt;div className=&quot;bg-white p-4 rounded shadow&quot;&gt;
        &lt;h1 className=&quot;text-xl font-bold mb-2&quot;&gt;
          {style === &#39;steampunk&#39;
            ? &#39;Welcome, Cog-and-Gear Explorer!&#39;
            : style === &#39;modern&#39;
            ? &#39;Welcome Back to Hallownest&#39;
            : &#39;Welcome to Hallownest&#39;}
        &lt;/h1&gt;

        {useNewLayout ? (
          &lt;div className=&quot;mt-4&quot;&gt;
            &lt;h2 className=&quot;font-semibold mb-2&quot;&gt;Your Progress&lt;/h2&gt;
            &lt;div className=&quot;space-y-2&quot;&gt;
              &lt;div&gt;✅ 3 areas completed&lt;/div&gt;
              &lt;div&gt;🔄 2 areas in progress&lt;/div&gt;
              &lt;div&gt;🔒 5 areas locked&lt;/div&gt;
            &lt;/div&gt;
          &lt;/div&gt;
        ) : (
          &lt;p className=&quot;text-gray-600&quot;&gt;
            Start your journey in the vast underground kingdom.
          &lt;/p&gt;
        )}
      &lt;/div&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<h2>Conclusion</h2>
<p>Vercel’s Flags SDK stands out as a powerful yet straightforward solution for managing feature flags efficiently. With its ease of use, remarkable flexibility, and effective patterns for reducing latency, this SDK streamlines the development process and enhances your app’s performance. Whether you&#39;re building a Next.js, React, or SvelteKit application, the Flags SDK provides intuitive tools that keep your application consistent, responsive, and maintainable. Give it a try, and see firsthand how it can simplify your feature management workflow!</p>
]]></description>
            <link>https://www.thisdot.co/blog/introduction-to-vercels-flags-sdk</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/introduction-to-vercels-flags-sdk</guid>
            <pubDate>Fri, 23 May 2025 14:09:10 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Importance of a Scientific Mindset in Software Engineering: Part 2 (Debugging)]]></title>
            <description><![CDATA[<h1>The Importance of a Scientific Mindset in Software Engineering: Part 2 (Debugging)</h1>
<p>In the <a href="https://www.thisdot.co/blog/the-importance-of-a-scientific-mindset-in-software-engineering-part-1-source">first part of my series on the importance of a scientific mindset in software engineering</a>, we explored how the principles of the scientific method can help us evaluate sources and make informed decisions. Now, we will focus on how these principles can help us tackle one of the most crucial and challenging tasks in software engineering: <strong>debugging</strong>.</p>
<p>In software engineering, debugging is often viewed as an art - an intuitive skill honed through experience and trial and error. In a way, it is: just as a GP, even a very evidence-based one, will diagnose most of their patients based on experience and intuition rather than researching the scientific literature every time, a software engineer will often rely on experience and intuition to identify and fix common bugs. However, an internist faced with a complex case cannot rely on intuition alone and must apply the scientific method to diagnose the patient. Similarly, when faced with a complex bug, a software engineer can benefit from using the scientific method to identify and fix the problem.</p>
<p>From that perspective, treating engineering challenges like scientific inquiries can transform the way we tackle problems. Rather than resorting to guesswork or gut feelings, we can apply the principles of the scientific method—forming hypotheses, designing controlled experiments, gathering and evaluating evidence—to identify and eliminate bugs systematically.</p>
<p>This approach, sometimes referred to as &quot;scientific debugging,&quot; reframes debugging from a haphazard process into a structured, disciplined practice. It encourages us to be skeptical, methodical, and transparent in our reasoning. For instance, as <a href="https://shop.elsevier.com/books/why-programs-fail/zeller/978-0-08-092300-0">Andreas Zeller notes in the book <em>Why Programs Fail</em></a>, the key aspect of scientific debugging is its explicitness: using the scientific method, you make your assumptions and reasoning explicit, which helps you examine those assumptions and often reveals hidden clues that lead to the root cause of the problem at hand.</p>
<p><strong>Note:</strong> If you&#39;d like to read an excerpt from the book, you can find it <a href="https://www.embedded.com/scientific-debugging-finding-out-why-your-code-is-buggy-part-1/">on Embedded.com</a>.</p>
<h2>Scientific Debugging</h2>
<p>At its core, scientific debugging applies the principles of the scientific method to the process of finding and fixing software defects. Rather than attempting random fixes or relying on intuition, it encourages engineers to move systematically, guided by data, hypotheses, and controlled experimentation. By adopting debugging as a rigorous inquiry, we can reduce guesswork, speed up the resolution process, and ensure that our fixes are based on solid evidence.</p>
<p>Just as a scientist begins with a <a href="https://www.thisdot.co/blog/the-importance-of-a-scientific-mindset-in-software-engineering-part-1-source#defining-your-research-questions">well-defined research question</a>, a software engineer starts by identifying the specific symptom or error condition. For instance, if our users report inconsistencies in the data they see across different parts of the application, our research question could be: <em>&quot;Under what conditions does the application display outdated or incorrect user data?&quot;</em></p>
<p>From there, we can follow a structured debugging process that mirrors the scientific method:</p>
<ul>
<li><p><strong>1. Observe and Define the Problem:</strong> First, we need to clearly state the bug&#39;s symptoms and the environment in which it occurs. We should isolate whether the issue is deterministic or intermittent and identify any known triggers if possible. Such a structured definition serves as the groundwork for further investigation.</p>
</li>
<li><p><strong>2. Formulate a Hypothesis:</strong> A hypothesis in debugging is a testable explanation for the observed behavior. For instance, you might hypothesize: <em>&quot;The data inconsistency occurs because a caching layer is serving stale data when certain user profiles are updated.&quot;</em> The key is that this explanation must be falsifiable; if experiments don&#39;t support the hypothesis, it must be refined or discarded.</p>
</li>
<li><p><strong>3. Collect Evidence and Data:</strong> Evidence often includes logs, system metrics, error messages, and runtime traces. Similar to reviewing primary sources in academic research, treat your raw debugging data as crucial evidence. Evaluating these data points can reveal patterns. In our example, such patterns could be whether the bug correlates with specific caching mechanisms, increased memory usage, or database query latency. During this step, it&#39;s essential to approach data critically, just as you would analyze the quality and credibility of sources in a research literature review. Don&#39;t forget that even logs can be misleading, incomplete, or even incorrect, so cross-referencing multiple sources is key.</p>
</li>
<li><p><strong>4. Design and Run Experiments:</strong> Design minimal, controlled tests to confirm or refute your hypothesis. In our example, you may try disabling or shortening the cache&#39;s time-to-live (TTL) to see if more recent data is displayed correctly. By manipulating one variable at a time - such as cache invalidation intervals - you gain clearer insights into causation. Tools such as profilers, debuggers, or specialized test harnesses can help isolate factors and gather precise measurements.</p>
</li>
<li><p><strong>5. Analyze Results and Refine Hypotheses:</strong> If the experiment&#39;s outcome doesn&#39;t align with your hypothesis, treat it as a stepping stone, not a dead end. Adjust your explanation, form a new hypothesis, or consider additional variables (for example, whether certain API calls bypass caching). Each iteration should bring you closer to a better understanding of the bug&#39;s root cause. Remember, the goal is not to prove an initial guess right but to arrive at a verifiable explanation.</p>
</li>
<li><p><strong>6. Implement and Verify the Fix:</strong> Once you&#39;re confident in the identified cause, you can implement the fix. Verification doesn&#39;t stop at deployment - re-test under the same conditions and, if possible, beyond them. By confirming the fix in a controlled manner, you ensure that the solution is backed by evidence rather than wishful thinking.</p>
<ul>
<li>Personally, I consider implementing end-to-end tests (e.g., with <a href="https://playwright.dev/">Playwright</a>) that reproduce the bug and verify the fix to be a crucial part of this step (see the sketch after this list). This both ensures that the bug doesn&#39;t reappear in the future due to changes in the codebase and avoids possible imprecisions of manual testing.</li>
</ul>
</li>
</ul>
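<p>As a minimal sketch of such a regression test (the routes, labels, and values below are hypothetical placeholders; adapt them to your own application), a Playwright test for the stale-data example might look like this:</p>
<pre><code class="language-jsx">import { test, expect } from &#39;@playwright/test&#39;;

// Reproduce the original bug scenario: update a profile, then check that
// a page which previously served stale data now shows the fresh value.
test(&#39;profile updates are reflected immediately&#39;, async ({ page }) =&gt; {
    await page.goto(&#39;/profile&#39;);

    // Change the display name and save.
    await page.getByLabel(&#39;Display name&#39;).fill(&#39;Updated Name&#39;);
    await page.getByRole(&#39;button&#39;, { name: &#39;Save&#39; }).click();

    // Visit a page that used to render the cached, outdated profile.
    await page.goto(&#39;/dashboard&#39;);

    // The fix holds only if the fresh value is rendered.
    await expect(page.getByText(&#39;Updated Name&#39;)).toBeVisible();
});
</code></pre>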
<p>Now, we can explore these steps in more detail, highlighting how the scientific method can guide us through the debugging process.</p>
<h2>Establishing Clear Debugging Questions (Formulating a Hypothesis)</h2>
<p>A hypothesis is a <a href="https://en.wikipedia.org/wiki/Hypothesis">proposed explanation for a phenomenon that can be tested through experimentation</a>. In a debugging context, that phenomenon is the bug or issue you&#39;re trying to resolve. Having a clear, falsifiable statement that you can prove or disprove ensures that you stay focused on the real problem rather than jumping haphazardly between possible causes. A properly formulated hypothesis lets you design precise experiments to evaluate whether your explanation holds true.</p>
<p>To formulate a hypothesis effectively, you can follow these steps:</p>
<h3>1. Clearly Identify the Symptom(s)</h3>
<p>Before forming any hypothesis, pin down the specific issue users are experiencing. For instance:</p>
<ul>
<li>&quot;Users intermittently see outdated profile information after updating their accounts.&quot;</li>
<li>&quot;Some newly created user profiles don&#39;t reflect changes in certain parts of the application.&quot;</li>
</ul>
<p>Having a well-defined problem statement keeps your hypothesis focused on the actual issue. Just like a research question in science, the clarity of your symptom definition directly influences the quality of your hypothesis.</p>
<h3>2. Draft a Tentative Explanation</h3>
<p>Next, convert your symptom into a statement that describes a <em>possible root cause</em>, such as:</p>
<ul>
<li><p>&quot;<strong>Data inconsistency</strong> occurs because the <strong>caching layer</strong> isn&#39;t invalidating or refreshing user data properly when profiles are updated.&quot;</p>
</li>
<li><p>&quot;<strong>Stale data</strong> is displayed because the <strong>cache timeout</strong> is too long under certain load conditions.&quot;</p>
</li>
</ul>
<p>This step makes your assumption about the root cause explicit. As with the scientific method, your hypothesis should be something you can test and either confirm or refute with data or experimentation.</p>
<h3>3. Ensure Falsifiability</h3>
<p>A valid hypothesis must be falsifiable - meaning it can be proven <em>wrong</em>. You&#39;ll struggle to design meaningful experiments if a hypothesis is too vague or broad. For example:</p>
<ul>
<li><p><strong>Not Falsifiable</strong>: &quot;Occasionally, the application just shows weird data.&quot;</p>
</li>
<li><p><strong>Falsifiable</strong>: &quot;Users see stale data when the cache is not invalidated within 30 seconds of profile updates.&quot;</p>
</li>
</ul>
<p>Making your hypothesis specific enough to fail a test will pave the way for more precise debugging.</p>
<h3>4. Align with Available Evidence</h3>
<p>Match your hypothesis to <strong>what you already know</strong> - logs, stack traces, metrics, and user reports. For example:</p>
<ul>
<li><p>If logs reveal that cache invalidation events aren&#39;t firing, form a hypothesis explaining why those events fail or never occur.</p>
</li>
<li><p>If metrics show that data served from the cache is older than the configured TTL, hypothesize about how or why the TTL is being ignored.</p>
</li>
</ul>
<p>If your current explanation contradicts existing data, refine your hypothesis until it fits.</p>
<h3>5. Plan for Controlled Tests</h3>
<p>Once you have a testable hypothesis, figure out how you&#39;ll attempt to <em>disprove</em> it. This might involve:</p>
<ul>
<li><p><strong>Reproducing the environment</strong>: Set up a staging/local system that closely mimics production - for instance, with the same cache-layer configuration.</p>
</li>
<li><p><strong>Varying one condition at a time</strong>: For example, only adjust cache invalidation policies or TTLs and then observe how data freshness changes.</p>
</li>
<li><p><strong>Monitoring metrics</strong>: In our example, such monitoring would involve tracking user profile updates, cache hits/misses, and response times. These metrics should lead to confirming or rejecting your explanation.</p>
</li>
</ul>
<p>These plans become your blueprint for experiments in further debugging stages.</p>
<h2>Collecting and Evaluating Evidence</h2>
<p>After formulating a clear, testable hypothesis, the next crucial step is to gather data that can either support or refute it. This mirrors how scientists collect observations in a literature review or initial experiments.</p>
<ol>
<li><p><strong>Identify &quot;Primary Sources&quot; (Logs, Stack Traces, Code History):</strong></p>
<ul>
<li><p><strong>Logs and Stack Traces:</strong> These are your direct pieces of evidence - treat them like raw experimental data. For instance, look closely at timestamps, caching-related events (e.g., invalidation triggers), and any error messages related to stale reads.</p>
</li>
<li><p><strong>Code History:</strong> Look for related changes in your source control, e.g. using <a href="https://www.thisdot.co/blog/git-bisect-the-time-traveling-bug-finder">Git bisect</a>. In our example, we would look for changes to caching mechanisms or references to cache libraries in commits, which could pinpoint when the inconsistency was introduced. Sometimes, reverting a commit that altered cache settings helps confirm whether the bug originated there.</p>
</li>
</ul>
</li>
<li><p><strong>Corroborate with &quot;Secondary Sources&quot; (Documentation, Q&amp;A Forums):</strong></p>
<ul>
<li><p><strong>Documentation:</strong> Check official docs for known behavior or configuration details that might differ from your assumptions.</p>
</li>
<li><p><strong>Community Knowledge:</strong> Similar issues reported on GitHub or StackOverflow may reveal known pitfalls in a library you&#39;re using.</p>
</li>
</ul>
</li>
<li><p><strong>Assess Data Quality and Relevance:</strong></p>
<ul>
<li><p><strong>Look for Patterns:</strong> For instance, does stale data appear only after certain update frequencies or at specific times of day?</p>
</li>
<li><p><strong>Check Environmental Factors:</strong> For instance, does the bug happen only with particular deployment setups, container configurations, or memory constraints?</p>
</li>
<li><p><strong>Watch Out for Biases:</strong> Avoid seeking only the data that confirms your hypothesis. Look for contradictory logs or metrics that might point to other root causes.</p>
</li>
</ul>
</li>
</ol>
<p>You keep your hypothesis grounded in real-world system behavior by treating logs, stack traces, and code history as primary data - akin to raw experimental results. This evidence-first approach reduces guesswork and guides more precise experiments.</p>
<h2>Designing and Running Experiments</h2>
<p>With a hypothesis in hand and evidence gathered, it&#39;s time to test it through controlled experiments - much like scientists isolate variables to verify or debunk an explanation.</p>
<ol>
<li><p><strong>Set Up a Reproducible Environment:</strong></p>
<ul>
<li><p><strong>Testing Environments:</strong> Replicate production conditions as closely as possible. In our example, that would involve ensuring the same caching configuration, library versions, and relevant data sets are in place.</p>
</li>
<li><p><strong>Version Control Branches:</strong> Use a dedicated branch to experiment with different settings or configurations, e.g., cache invalidation strategies. This makes it easy to revert changes if needed.</p>
</li>
</ul>
</li>
<li><p><strong>Control Variables One at a Time:</strong></p>
<ul>
<li><p>For instance, if you suspect data inconsistency is tied to cache invalidation events, first adjust only the invalidation timeout and re-test.</p>
</li>
<li><p>Or, if concurrency could be a factor (e.g., multiple requests updating user data simultaneously), test different concurrency levels to see if stale data issues become more pronounced.</p>
</li>
</ul>
</li>
<li><p><strong>Measure and Record Outcomes:</strong></p>
<ul>
<li><p><strong>Automated Tests:</strong> Tests provide a great way to formalize and verify your assumptions (a sketch follows this list). For instance, you could develop tests that intentionally update user profiles and check if the displayed data matches the latest state.</p>
</li>
<li><p><strong>Monitoring Tools:</strong> Monitor relevant metrics before, during, and after each experiment. In our example, we might want to track cache hit rates, TTL durations, and query times.</p>
</li>
<li><p><strong>Repeat Trials:</strong> Consistency across multiple runs boosts confidence in your findings.</p>
</li>
</ul>
</li>
<li><p><strong>Validate Against a Baseline:</strong></p>
<ul>
<li><p>If baseline tests show normal behavior but your experimental changes reproduce the bug, you&#39;ve isolated the variable causing the issue - e.g., the baseline shows that data stays consistently fresh under normal caching conditions, while your experimental changes cause stale data.</p>
</li>
<li><p>Conversely, if your change eliminates the buggy behavior, it supports your hypothesis - e.g. that the cache configuration was the root cause.</p>
</li>
</ul>
</li>
</ol>
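<p>As a minimal sketch of such an automated check (the <code>updateProfile</code> and <code>fetchDisplayedProfile</code> helpers are hypothetical wrappers around your own API client), a test for the stale-data hypothesis could look like this:</p>
<pre><code class="language-jsx">import { test, expect } from &#39;vitest&#39;;
// Hypothetical helpers wrapping your API client - replace with your own.
import { updateProfile, fetchDisplayedProfile } from &#39;./helpers&#39;;

test(&#39;displayed profile matches the latest state after an update&#39;, async () =&gt; {
    // Manipulate one variable: perform an update that should invalidate the cache.
    await updateProfile(&#39;user-42&#39;, { name: &#39;Fresh Name&#39; });

    // Read the data the application would actually display.
    const displayed = await fetchDisplayedProfile(&#39;user-42&#39;);

    // If the cache still serves stale data, this assertion fails and the
    // invalidation hypothesis survives another iteration.
    expect(displayed.name).toBe(&#39;Fresh Name&#39;);
});
</code></pre>
<p>Running the same test against the baseline and against each experimental configuration turns a vague suspicion into a reproducible data point.</p>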
<p>Each experiment outcome is a data point supporting or contradicting your hypothesis. Over time, these data points guide you toward the true cause.</p>
<h2>Analyzing Results and Iterating</h2>
<p>In scientific debugging, an unexpected result isn&#39;t a failure - it&#39;s valuable feedback that brings you closer to the right explanation.</p>
<ol>
<li><p><strong>Compare Outcomes to the Hypothesis.</strong> For instance:</p>
<ul>
<li><p>Did user data stay consistent after you reduced the cache TTL or fixed invalidation logic?</p>
</li>
<li><p>Did logs show caching events firing as expected, or did they reveal unexpected errors?</p>
</li>
<li><p>Are there only partial improvements that suggest multiple overlapping issues?</p>
</li>
</ul>
</li>
<li><p><strong>Incorporate Unexpected Observations:</strong></p>
<ul>
<li><p>Sometimes, debugging uncovers side effects - e.g. performance bottlenecks exposed by more frequent cache invalidations. Note these for future work.</p>
</li>
<li><p>If your hypothesis is disproven, revise it. For example, the cache may only be part of the problem, and a separate load balancer setting also needs attention.</p>
</li>
</ul>
</li>
<li><p><strong>Avoid Confirmation Bias:</strong></p>
<ul>
<li><p>Don&#39;t dismiss contrary data. For instance, if you see evidence that updates are fresh in some modules but stale in others, you may have found a more nuanced root cause (e.g., partial cache invalidation).</p>
</li>
<li><p>Consider other credible explanations if your teammates propose them. Test those with the same rigor.</p>
</li>
</ul>
</li>
<li><p><strong>Decide If You Need More Data:</strong></p>
<ul>
<li><p>If results aren&#39;t conclusive, add deeper instrumentation or enable debug modes to capture more detailed logs.</p>
</li>
<li><p>For production-only issues, implement distributed tracing or sampling logs to diagnose real-world usage patterns.</p>
</li>
</ul>
</li>
<li><p><strong>Document Each Iteration:</strong></p>
<ul>
<li><p>Record the results of each experiment, including any unexpected findings or new hypotheses that arise.</p>
</li>
<li><p>Through iterative experimentation and analysis, each cycle refines your understanding. By letting evidence shape your hypothesis, you ensure that your final conclusion aligns with reality.</p>
</li>
</ul>
</li>
</ol>
<h2>Implementing and Verifying the Fix</h2>
<p>Once you&#39;ve identified the likely culprit - say, a misconfigured or missing cache invalidation policy - the next step is to implement a fix and verify its resilience.</p>
<ol>
<li><p><strong>Implementing the Change:</strong></p>
<ul>
<li><p><strong>Scoped Changes:</strong> Adjust just the component pinpointed in your experiments. Avoid large-scale refactoring that might introduce other issues.</p>
</li>
<li><p><strong>Code Reviews:</strong> Peer reviews can catch overlooked logic gaps or confirm that your changes align with best practices.</p>
</li>
</ul>
</li>
<li><p><strong>Regression Testing:</strong></p>
<ul>
<li><p>Re-run the same experiments that initially exposed the issue. In our stale data example, confirm that the data remains fresh under various conditions.</p>
</li>
<li><p>Conduct broader tests - like integration or end-to-end tests - to ensure no new bugs are introduced.</p>
</li>
</ul>
</li>
<li><p><strong>Monitoring in Production:</strong></p>
<ul>
<li><p>Even with positive test results, real-world scenarios can differ. Monitor logs and metrics (e.g. cache hit rates, user error reports) closely post-deployment.</p>
</li>
<li><p>If the buggy behavior reappears, revisit your hypothesis or consider additional factors, such as unpredicted user behavior.</p>
</li>
</ul>
</li>
<li><p><strong>Benchmarking and Performance Checks (If Relevant):</strong></p>
<ul>
<li><p>When making changes that affect the frequency of certain processes - such as how often a cache is refreshed - be sure to measure the performance impact. Verify you meet any latency or resource usage requirements.</p>
</li>
<li><p>Keep an eye on the trade-offs: For instance, more frequent cache invalidations might solve stale data but could also raise system load.</p>
</li>
</ul>
</li>
</ol>
<p>By systematically verifying your fix - similar to confirming experimental results in research - you ensure that you&#39;ve addressed the true cause and maintained overall software stability.</p>
<h2>Documenting the Debugging Process</h2>
<p>Good science relies on transparency, and so does effective debugging. Thorough documentation guarantees your findings are reproducible and valuable to future team members.</p>
<ol>
<li><p><strong>Record Your Hypothesis and Experiments:</strong></p>
<ul>
<li><p>Keep a concise log of your main hypothesis, the tests you performed, and the outcomes.</p>
</li>
<li><p>A simple markdown file within the repo can capture critical insights without being cumbersome.</p>
</li>
</ul>
</li>
<li><p><strong>Highlight Key Evidence and Observations:</strong></p>
<ul>
<li><p>Note the logs or metrics that were most instrumental - e.g., seeing repeated stale cache hits 10 minutes after updates.</p>
</li>
<li><p>Document any edge cases discovered along the way.</p>
</li>
</ul>
</li>
<li><p><strong>List Follow-Up Actions or Potential Risks:</strong></p>
<ul>
<li><p>If you discover additional issues - like memory spikes from more frequent invalidation - note them for future sprints.</p>
</li>
<li><p>Identify parts of the code that might need deeper testing or refactoring to prevent similar issues.</p>
</li>
</ul>
</li>
<li><p><strong>Share with Your Team:</strong></p>
<ul>
<li><p>Publish your debugging report on an internal wiki or ticket system. A well-documented troubleshooting narrative helps educate other developers.</p>
</li>
<li><p>Encouraging open discussion of the debugging process fosters a culture of continuous learning and collaboration.</p>
</li>
</ul>
</li>
</ol>
<p>By paralleling scientific publication practices in your documentation, you establish a knowledge base to guide future debugging efforts and accelerate collective problem-solving.</p>
<h2>Conclusion</h2>
<p>Debugging can be as much a rigorous, methodical exercise as an art shaped by intuition and experience. By adopting the principles of scientific inquiry - forming hypotheses, designing controlled experiments, gathering evidence, and transparently documenting your process - you make your debugging approach both <strong>systematic</strong> and <strong>repeatable</strong>.</p>
<p>The explicitness and structure of scientific debugging offer several benefits:</p>
<ul>
<li><p><strong>Better Root-Cause Discovery:</strong> Structured, hypothesis-driven debugging sheds light on the <em>true</em> underlying factors causing defects rather than simply masking symptoms.</p>
</li>
<li><p><strong>Informed Decisions:</strong> Data and evidence lead the way, minimizing guesswork and reducing the chance of reintroducing similar issues.</p>
</li>
<li><p><strong>Knowledge Sharing:</strong> As in scientific research, detailed documentation of methods and outcomes helps others learn from your process and fosters a collaborative culture.</p>
</li>
</ul>
<p>Ultimately, whether you are diagnosing an intermittent crash or chasing elusive performance bottlenecks, <strong>scientific debugging</strong> brings clarity and objectivity to your workflow. By aligning your debugging practices with the scientific method, you build confidence in your solutions and empower your team to tackle complex software challenges with precision and reliability.</p>
<p>But most importantly, <strong>do not get discouraged</strong> by the number of rigorous steps outlined above or by the fact you won&#39;t always manage to follow them all religiously. Debugging is a complex and often frustrating process, and it&#39;s okay to rely on your intuition and experience when needed. Feel free to adapt the debugging process to your needs and constraints, and as long as you keep the scientific mindset at heart, you&#39;ll be on the right track.</p>
]]></description>
            <link>https://www.thisdot.co/blog/the-importance-of-a-scientific-mindset-in-software-engineering-part-2</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-importance-of-a-scientific-mindset-in-software-engineering-part-2</guid>
            <pubDate>Fri, 09 May 2025 10:52:10 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Docusign Momentum 2025 From A Developer’s Perspective]]></title>
            <description><![CDATA[<p><em>What if your contract details stuck in PDFs could ultimately become the secret sauce of your business automation workflows?</em></p>
<p>In a world drowning in PDFs and paperwork, I never thought I’d get goosebumps about agreements – until I attended Docusign Momentum 2025. I went in expecting talks about e-signatures; I left realizing that the big push for many enterprise-level organizations will be around <strong>Intelligent Agreement Management (IAM)</strong>. It is positioned to transform how we build business software, so let’s talk about it.
As Director of Technology at This Dot Labs, I had a front-row seat to all the exciting announcements at Docusign Momentum. Our team also had a booth there showing off the 6 Docusign extension apps This Dot Labs has released this year. We met 1-on-1 with many companies and leaders to discuss the exciting promise of IAM. What can your company accomplish with IAM? Is it really worth adopting? Let’s dive in and find out.</p>
<p>After his keynote, I met up with Robert Chatwani, President of Docusign, and he said this:</p>
<blockquote>
<p>“At Docusign, we truly believe that the power of a great platform is that you won’t be able to exactly predict what can be built on top of it, and builders and developers are at the heart of driving this type of innovation. Now with AI, we have entered what I believe is a renaissance era for new ideas and business models, all powered by developers.”</p>
</blockquote>
<p>Docusign’s annual conference in NYC was an eye-opener: agreements are no longer just documents to sign and shelve, but dynamic data hubs driving key processes. Here’s my take on what I learned, why it matters, and why developers should pay attention.</p>
<h1>From E-Signatures to Intelligent Agreements – A New Era</h1>
<p>Walking into Momentum 2025, you could feel the excitement. Docusign’s CEO and product team set the tone in the keynote: “Agreements make the world go round, but for too long they’ve been stuck in inboxes and PDFs, creating drag on your business.” Their message was clear – Docusign is moving from a product to a platform​. In other words, the company that pioneered e-signatures now aims to turn static contracts into live, integrated assets that propel your business forward.</p>
<p>I saw this vision click when I chatted with an attendee from a major financial services firm. His team manages millions of forms a year – loan applications, account forms, you name it. He admitted they were still “just scanning and storing PDFs” and struggled to imagine how IAM could help. We discussed how much value was trapped in those documents (what Docusign calls the “Agreement Trap” of disconnected processes​). By the end of our coffee, the lightbulb was on: with the right platform, those forms could be automatically routed, data-extracted, and trigger workflows in other systems – no more black hole of PDFs. His problem wasn’t unique; many organizations have critical data buried in agreements, and they’re waking up to the idea that it doesn’t have to be this way.</p>
<h1>What Exactly is Intelligent Agreement Management (IAM)?</h1>
<p>So what is Docusign’s Intelligent Agreement Management? In essence, IAM is an AI-powered platform that connects every part of the agreement lifecycle. It’s not a single product, but a collection of services and tools working in concert​. Docusign IAM helps transform agreement data into insights and actions, accelerate contract cycles, and boost productivity across departments. The goal is to address the inefficiencies in how agreements are created, signed, and managed – those inefficiencies that cost businesses time and money.</p>
<p>At Momentum, Docusign showcased the core components of IAM:</p>
<ul>
<li><p><strong>Docusign Navigator <a href="https://www.docusign.com/blog/capabilities/navigator">link</a>:</strong> A smart repository to centrally store, search, and analyze agreements. It uses AI to convert your signed documents (which are basically large chunks of text) into structured, queryable data​. Instead of manually digging through contracts for a specific clause, you can search across all agreements in seconds. Navigator gives you a clear picture of your organization’s contractual relationships and obligations (think of it as Google for your contracts). Bonus: it comes with out-of-the-box dashboards for things like renewal dates, so you can spot risks and opportunities at a glance.</p>
</li>
<li><p><strong>Docusign Maestro <a href="https://www.docusign.com/blog/capabilities/maestro">link</a>:</strong> A no-code workflow engine to automate agreement workflows from start to finish. Maestro lets you design customizable workflows that orchestrate Docusign tasks and integrate with third-party apps – all without writing code​. For example, you could have a workflow for new vendor onboarding: once a vendor contract is signed, Maestro could automatically notify your procurement team, create a task in your project tracker, and update a record in your ERP system. At the conference, they demoed how Maestro can streamline processes like employee onboarding and compliance checks through simple drag-and-drop steps, or archive PDFs of signed agreements into Google Drive or Dropbox.</p>
</li>
<li><p><strong>Docusign Iris (AI Engine) <a href="https://www.docusign.com/blog/docusign-iris-agreement-ai">link</a>:</strong> The brains of the operation. Iris is the new AI engine powering all of IAM’s “smarts” – from reading documents to extracting data and making recommendations​. It’s behind features like automatic field extraction, AI-assisted contract review, intelligent search, and even document summarization. In the keynote, we saw examples of Iris in action: identify key terms (e.g. payment terms or renewal clauses) across a stack of contracts, or instantly generate a summary of a lengthy agreement. These capabilities aren’t just gimmicks; as one Docusign executive put it, they’re “signals of a new way of working with agreements”. Iris essentially gives your agreement workflow a brain – it can understand the content of agreements and help you act on it.</p>
</li>
<li><p><strong>Docusign App Center <a href="https://www.docusign.com/products/platform/app-center">link</a></strong>: A hub to connect the tools of your trade into Docusign. App Center is like an app store for integrations – it lets you plug in other software (project management, CRM, HR systems, etc.) directly into your Maestro workflows. This is huge for developers (and frankly, anyone tired of building one-off integrations). Instead of treating Docusign as an isolated e-signature tool, App Center makes it a platform you can extend. I’ll dive more into this in the next section, since it’s close to my heart – my team helped build some of these integrations!</p>
</li>
</ul>
<p>In short, IAM ties together the stages of an agreement (create → sign → store → manage) and supercharges each with automation and AI. It’s modular, too – you can adopt the pieces you need. Docusign essentially unbundled the agreement process into <strong>building blocks that developers and admins can mix-and-match.</strong> The future of agreements, as Docusign envisions it, is a world where organizations <em>“seamlessly add, subtract, and rearrange modular solutions to meet ever-changing needs”</em> on a single trusted platform.</p>
<h1>The App Center and Real-World Integrations (Yes, We Built Those!)</h1>
<p>One of the most exciting parts of Momentum 2025 for me was seeing the Docusign App Center come alive. As someone who works on integrations, I was practically grinning during the App Center demos. Docusign highlighted several partner-built apps that snap into IAM, and I’m proud to say This Dot Labs built six of them – including integrations for Monday.com, Slack, Jira, Asana, Airtable, and Mailchimp.
Why are these integrations a big deal? Because developers often spend countless hours wiring up systems that need to talk to each other. With App Center, a lot of that heavy lifting is already done. You can install an app with a few clicks and configure data flows in minutes instead of coding for months​. In fact, a study found it takes the average org 12 months to develop a custom workflow via APIs, whereas with Docusign’s platform you can do it via configuration almost immediately​. That’s a game-changer for time-to-value.
At our This Dot Labs booth, I spoke with many developers who were intrigued by these possibilities. For example, we showed how our Docusign Slack Extension lets teams send Slack messages and notifications when agreements are sent and signed. If a sales contract gets signed, the Slack app can automatically post a notification in your channel and even attach the signed PDF – no more emailing attachments around. People loved seeing how easily Docusign and Slack now talk to each other using this extension​. Another popular one was our Monday.com app. With it, as soon as an agreement is signed, you can trigger actions in Monday – like assigning onboarding tasks for a new client or employee. Essentially, signing the document kicks off the next steps automatically.
These integrations showcase why IAM is not just about Docusign’s own features, but about an ecosystem. App Center already includes connectors for popular platforms like Salesforce, HubSpot, Workday, ServiceNow, and more. The apps we built for Monday, Slack, Jira, etc., extend that ecosystem. Each app means one less custom integration a developer has to build from scratch. And if an app you need doesn’t exist yet – well, that’s an opportunity. (<a href="https://www.thisdot.co/partnerships/docusign">Shameless plug: we’re happy to help build it!</a>)</p>
<p>The key takeaway here is that <strong>Docusign is positioning itself as a foundational layer in the enterprise software stack.</strong> Your agreement workflow can now natively include things like project management updates, CRM entries, notifications, and data syncs. As a developer, I find that pretty powerful. It’s a shift from thinking of Docusign as a single SaaS tool to thinking of it as a platform that glues processes together.</p>
<h1>Not Just Another Contract Tool – Why IAM Matters for Business</h1>
<p>After absorbing all the Momentum keynotes and sessions, one thing is clear: IAM is not “just another contract management tool.” It’s aiming to be the platform that automates critical business processes which happen to revolve around agreements. The use cases discussed were not theoretical – they were tangible scenarios every developer or IT lead will recognize:</p>
<ul>
<li><p><strong>Procurement Automation:</strong> We heard how companies are using IAM to streamline procurement. Imagine a purchase order process where a procurement request triggers an agreement that goes out for e-signature, and once signed, all relevant systems update instantly. One speaker described connecting Docusign with their ERP so that vendor contracts and purchase orders are generated and tracked automatically. This reduces the back-and-forth with legal and ensures nothing falls through the cracks. It’s easy to see the developer opportunity: instead of coding a complex procurement approval system from scratch, you can leverage Docusign’s workflow + integration hooks to handle it. Docusign IAM is designed to connect to systems like CRM, HR, and ERP so that agreements flow into the same stream of data. For developers, that means using pre-built connectors and APIs rather than reinventing them.</p>
</li>
<li><p><strong>Faster Employee Onboarding:</strong> Onboarding a new hire or client typically involves a flurry of forms and tasks – offer letters or contracts to sign, NDAs, setup of accounts, etc. We saw how IAM can accelerate onboarding by combining e-signature with automated task generation. For instance, the moment a new hire signs their offer letter, Maestro could trigger an onboarding workflow: provisioning the employee in systems, scheduling orientation, and creating tasks in tools like Asana or Monday. All those steps get kicked off by the signed agreement. Docusign Maestro’s integration capabilities shine here – it can tie into HR systems or project management apps to carry the baton forward​. The result is a smoother day-one experience for the new hire and less manual coordination for IT and HR. As developers, we can appreciate how this modular approach saves us from writing yet another “onboarding script”; we configure the workflow, and IAM handles the rest.</p>
</li>
<li><p><strong>Reducing Contract Auto-Renewal Risk:</strong> If your company manages a lot of recurring contracts (think vendor services, subscriptions, leases), missing a renewal deadline can be costly. One real-world story shared at Momentum was about using IAM to prevent unwanted auto-renewals. With traditional tracking (spreadsheets or calendar reminders), it’s easy to forget a termination notice and end up locked into a contract for another year. Docusign’s solution: let the AI engine (Iris) handle it. It can scan your repository, surface any renewal or termination dates, and proactively remind stakeholders – or even kick off a non-renewal workflow if desired. As the Bringing Intelligence to Obligation Management session highlighted, “Missed renewal windows lead to unwanted auto-renewals or lost revenue… A forgotten termination deadline locks a company into an unneeded service for another costly term.”​ With IAM, those pitfalls are avoidable. The system can automatically flag and assign tasks well before a deadline hits​. For developers, this means we can deliver risk-reduction features without building a custom date-tracking system – the platform’s AI and notification framework has us covered.</p>
</li>
</ul>
<p>These examples all connect to a bigger point: agreements are often the linchpin of larger business processes (buying something, hiring someone, renewing a service). By making agreements “intelligent,” Docusign IAM is essentially automating chunks of those processes. This translates to real outcomes – faster cycle times, fewer errors, and less risk. From a technical perspective, it means we developers have a powerful ally: we can offload a lot of workflow logic to the IAM platform. Why code it from scratch if a combination of Docusign + a few integration apps can do it?</p>
<h1>Why Developers Should Care about IAM (Big Time)</h1>
<p>If you’re a software developer or architect building solutions for business teams, you might be thinking: This sounds cool, but is it relevant to me? Let me put it this way – after Momentum 2025, I’m convinced that <strong>ignoring IAM would be a mistake</strong> for anyone in enterprise software. Here’s why:</p>
<ul>
<li><p><strong>Faster time-to-value for your clients or stakeholders:</strong> Business teams are always pressuring IT to deliver solutions faster. With IAM, you have ready-made components to accelerate projects. Need to implement a contract approval workflow? Use Maestro, not months of coding. Need to integrate Docusign with an internal system? Check App Center for an app or use their APIs with far less glue code. Docusign’s own research shows that connecting systems via App Center and Maestro can cut development time dramatically (from ~12 months of custom dev to mere weeks or less). For us developers, that means we can deliver results sooner, which definitely wins points with the business.</p>
</li>
<li><p><strong>Fewer custom builds (and less maintenance):</strong> Let’s face it – maintaining custom scripts or one-off integrations is not fun. Every time a SaaS API changes or a new requirement comes in, you’re back in the code. IAM’s approach offers more reuse and configuration instead of raw code. The platform is doing the hard work of staying updated (for example, when Slack or Salesforce change something in their API, Docusign’s connector app will handle it). By leveraging these pre-built connectors and templates, you write less custom code, which means fewer bugs and lower maintenance overhead. You can focus your coding effort on the unique parts of your product, not the boilerplate integration logic.</p>
</li>
<li><p><strong>Reusable and modular workflows:</strong> I love designing systems as Lego blocks – and IAM encourages that. You can build a workflow once and reuse it across multiple projects or clients with slight tweaks. For instance, an approval workflow for sales contracts might be 90% similar to one for procurement contracts – with IAM, you can reuse that blueprint. The fact that everything is on one platform also means these workflows can talk to each other or be combined. This modularity is a developer’s dream because it leads to cleaner architecture. Docusign explicitly touts this modular approach, noting that organizations can easily rearrange solutions on the fly to meet new needs​. It’s like having a library of proven patterns to draw from.</p>
</li>
<li><p><strong>AI enhancements with minimal effort:</strong> Adding AI into your apps can be daunting if you have to build or train models yourself. IAM essentially gives you AI-as-a-service for agreements. Need to extract key data from 1,000 contracts? Iris can do that out-of-the-box​. Want to implement a risk scoring for contracts? The AI can flag unusual terms or deviations. As a developer, being able to call an API or trigger a function that returns “these are the 5 clauses to look at” is incredibly powerful – you’re injecting intelligence without needing a data science team. It means you can offer more value in your applications (and impress those end-users!) by simply tapping into IAM’s AI features.</p>
</li>
</ul>
<p>Ultimately, Docusign IAM empowers developers to build more with less code. It’s about higher-level building blocks. This doesn’t replace our jobs – it makes our jobs more focused on the interesting problems. I’d rather spend time designing a great user experience or tackling a complex business rule than coding yet another Docusign-to-Slack integration. IAM is taking care of the plumbing and adding a layer of smarts on top.</p>
<h1>Don’t Underestimate Agreement Intelligence – Your Call to Action</h1>
<p>Momentum 2025 left me with a clear call to action: embrace agreement intelligence. If you’re a developer or tech leader, it’s time to explore what Docusign IAM can do for your projects. This isn’t just hype from a conference – it’s a real shift in how we can deliver solutions.</p>
<h3>Here are a few ways to get started:</h3>
<ul>
<li><p><strong>Browse the IAM App Center –</strong> Take a look at the growing list of apps in the Docusign App Center. You might find that integration you’ve been meaning to build is already available (or one very close to it). Installing an app is trivial, and you can configure it to fit your workflow. This is the low-hanging fruit to immediately add value to your existing Docusign processes. If you have Docusign eSignature or CLM in your stack, App Center is where you extend it.</p>
</li>
<li><p><strong>Think about integrations that could unlock value –</strong> Consider the systems in your organization that aren’t talking to each other. Is there a manual step where someone re-enters data from a contract into another system? Maybe an approval that’s done via email and could be automated? Those are prime candidates for an IAM solution. For example, if Legal and Sales use different tools, an integration through IAM can bridge them, ensuring no agreement data falls through the cracks. Map out your agreement process end-to-end and identify gaps – chances are, IAM has a feature to fill them.</p>
</li>
<li><p><strong>Experiment with Maestro and the API –</strong> If you’re technical, spin up a trial of Docusign IAM. Try creating a Maestro workflow for a simple use case, or use the Docusign API/SDKs to trigger some AI analysis on a document. Seeing it in action will spark ideas. I was amazed how quickly I could set up a workflow with conditions and parallel steps – things that would take significant coding time if I did them manually. The barrier to entry for adding complex logic has gotten a lot lower.</p>
</li>
<li><p><strong>Stay informed and involved –</strong> Docusign’s developer community and IAM documentation are growing. Momentum may be over, but the “agreement intelligence” movement is just getting started. Keep an eye on upcoming features (they hinted at even more AI-assisted tools coming soon). Engage with the community forums or join Docusign’s IAM webinars. And if you’re building something cool with IAM, consider sharing your story – the community benefits from hearing real use cases.</p>
</li>
</ul>
<p>My final thought: don’t underestimate the impact that agreement intelligence can have in modern workflows. We spend so much effort optimizing various parts of our business, yet often overlook the humble agreement – the contracts, forms, and documents that initiate or seal every deal. Docusign IAM is shining a spotlight on these and saying, “Here is untapped gold. Let’s mine it.” As developers, we have an opportunity (and now the tools) to lead that charge.
I’m incredibly excited about this new chapter. After seeing what Docusign has built, I’m convinced that intelligent agreements can be a foundational layer for digital transformation. It’s not just about getting documents signed faster; it’s about connecting dots and automating workflows in ways we couldn’t before. As I reflect on Momentum 2025, I’m inspired and already coding with new ideas in mind. I encourage you to do the same – check out IAM, play with the App Center, and imagine what you could build when your agreements start working intelligently for you. The future of agreements is here, and it’s time for us developers to take full advantage of it.</p>
<p><strong>Ready to explore?</strong> Head to the Docusign App Center and IAM documentation and see how you can turn your agreements into engines of growth. Trust me – the next time you attend Momentum, you might just have your own success story to share. Happy building!</p>
]]></description>
            <link>https://www.thisdot.co/blog/docusign-momentum-2025-from-a-developers-perspective</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/docusign-momentum-2025-from-a-developers-perspective</guid>
            <pubDate>Tue, 29 Apr 2025 18:49:03 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[“Music and code have a lot in common,” freeCodeCamp’s Jessica Wilkins on what the tech community is doing right to onboard new software engineers]]></title>
            <description><![CDATA[<p>Before she was a software developer at <a href="https://www.freecodecamp.org/">freeCodeCamp</a>, Jessica Wilkins was a classically trained clarinetist performing across the country.</p>
<p>Her days were filled with rehearsals, concerts, and teaching, and she hadn’t considered a tech career until the world changed in 2020.</p>
<blockquote>
<p>“When the pandemic hit, most of my gigs were canceled,” she says. “I suddenly had time on my hands and an idea for a site I wanted to build.”</p>
</blockquote>
<p>That site, a tribute to Black musicians in classical and jazz music, turned into much more than a personal project. It opened the door to a whole new career where her creative instincts and curiosity could thrive just as much as they had in music.</p>
<p>Now at freeCodeCamp, Jessica maintains and develops the very JavaScript curriculum that has helped her and millions of developers around the world. </p>
<p>We spoke with Jessica about her advice for JavaScript learners, why musicians make great developers, and how inclusive communities are helping more women thrive in tech.</p>
<h3>Jessica’s Top 3 JavaScript Skill Picks for 2025</h3>
<p>If you ask Jessica what it takes to succeed as a JavaScript developer in 2025, she won’t point you straight to the newest library or trend.</p>
<p>Instead, she lists three skills that sound simple, but take real time to build:</p>
<blockquote>
<p>“Learning how to ask questions and research when you get stuck. Learning how to read error messages. And having a strong foundation in the fundamentals.”</p>
</blockquote>
<p>She says those skills don’t come from shortcuts or shiny tools. They come from building.</p>
<blockquote>
<p>“Start with small projects and keep building,” she says. “Books like You Don’t Know JS help you understand the theory, but experience comes from writing and shipping code. You learn a lot by doing.”</p>
</blockquote>
<p>And don’t forget the people around you. </p>
<blockquote>
<p>“Meetups and conferences are amazing,” she adds. “You’ll pick up things faster, get feedback, and make friends who are learning alongside you.”</p>
</blockquote>
<h3>Why So Many Musicians End Up in Tech</h3>
<p>A musical past like Jessica’s isn’t unheard of in the JavaScript industry. In fact, she’s noticed a surprising number of musicians making the leap into software.</p>
<blockquote>
<p>“I think it’s because music and code have a lot in common,” she says. “They both require creativity, pattern recognition, problem-solving… and you can really get into flow when you’re deep in either one.”</p>
</blockquote>
<p>That crossover between artistry and logic feels like home to people who’ve lived in both worlds.</p>
<h3>What the Tech Community Is Getting Right</h3>
<p>Jessica has seen both the challenges and the wins when it comes to supporting women in tech.</p>
<blockquote>
<p>“There’s still a lot of toxicity in some corners,” she says. “But the communities that are doing it right—like Women Who Code, Women in Tech, and Virtual Coffee—create safe, supportive spaces to grow and share experiences.”</p>
</blockquote>
<p>She believes those spaces aren’t just helpful; they’re essential.</p>
<blockquote>
<p>“Having a network makes a huge difference, especially early in your career.”</p>
</blockquote>
<h3>What’s Next for Jessica Wilkins?</h3>
<p>With a catalog of published articles, open-source projects under her belt, and a growing audience of devs following her journey, Jessica is just getting started.</p>
<p>She’s still writing. Still mentoring. Still building. And still proving that creativity doesn’t stop at the orchestra pit—it just finds a new stage.</p>
<p>Follow Jessica Wilkins on <a href="https://x.com/codergirl1991">X</a> and <a href="https://www.linkedin.com/in/jessica-wilkins-developer/">LinkedIn</a> to keep up with her work in tech, her musical roots, and whatever she’s building next.</p>
<p>Sticker illustration by <a href="https://linktr.ee/JacobAshley">Jacob Ashley</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/music-and-code-have-a-lot-in-common-freecodecamps-jessica-wilkins-on-what</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/music-and-code-have-a-lot-in-common-freecodecamps-jessica-wilkins-on-what</guid>
            <pubDate>Fri, 25 Apr 2025 19:53:20 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[“We were seen as amplifiers, not collaborators,” Ashley Willis, Sr. Director of Developer Relations at GitHub, on How DevRel has Changed, Open Source, and Holding Space as a Leader]]></title>
            <description><![CDATA[<p>Ashley Willis has seen Developer Relations evolve from being on the sidelines of the tech team to having a seat at the strategy table.</p>
<p>In her ten years in the space, she’s done more than give great conference talks or build community—she’s helped shape what the DevRel role looks like for software providers. Now as the Senior Director of Developer Relations at GitHub, Ashley is focused on building spaces where developers feel heard, seen, and supported.</p>
<blockquote>
<p>“A decade ago, we were seen as amplifiers, not collaborators,” she says. “Now we’re influencing product roadmaps and shaping developer experience end to end.”</p>
</blockquote>
<h3>DevRel Has Changed</h3>
<p>For Ashley, the biggest shift hasn’t been the work itself—but how it’s understood.</p>
<blockquote>
<p>“The work is still outward-facing, but it’s backed by real strategic weight,” she explains. “We’re showing up in research calls and incident reviews, not just keynotes.”</p>
</blockquote>
<p>That shift matters, but it’s not the finish line. Ashley is still pushing for change when it comes to burnout, representation, and sustainable metrics that go beyond conference ROI.</p>
<blockquote>
<p>“We’re no longer fighting to be taken seriously. That’s a win. But there’s more work to do.”</p>
</blockquote>
<h3>Talking Less as a Leader</h3>
<p>When we asked about the best advice she’s ever received, Ashley shared an early lesson from a mentor: “Your presence should create safety, not pressure.”</p>
<blockquote>
<p>“It reframed how I saw my role,” she says. “Not as the one with answers, but the one who holds the space.”</p>
</blockquote>
<p>Ashley knows what it’s like to be in rooms where it’s hard to speak up. She leads with that memory in mind, and by listening more than talking, normalizing breaks, and creating environments where others can lead too.</p>
<blockquote>
<p>“Leadership is emotional labor. It’s not about being in control. It’s about making it safe for others to lead, too.”</p>
</blockquote>
<h3>Scaling More Than Just Tech</h3>
<p>Having worked inside high-growth companies, Ashley knows firsthand: scaling tech is one thing. Scaling trust is another.</p>
<blockquote>
<p>“Tech will break. Roadmaps will shift. But if there’s trust between product and engineering, between company and community—you can adapt.”</p>
</blockquote>
<p>And she’s learned not to fall for premature optimization. Scale what you have. Don’t over-design for problems you don’t have yet.</p>
<h3>Free Open Source Isn’t Free</h3>
<p>There’s one myth Ashley is eager to debunk: that open source is “free.”</p>
<blockquote>
<p>“Open source isn’t free labor. It’s labor that’s freely given,” she says. “And it includes more than just code. There’s documentation, moderation, mentoring, emotional care. None of it is effortless.”</p>
</blockquote>
<p>Open source runs on human energy. And when we treat contributors like an infinite resource, we risk burning them out and breaking the ecosystem we all rely on.</p>
<blockquote>
<p>“We talk a lot about open source as the foundation of innovation. But we rarely talk about sustaining the people who maintain that foundation.”</p>
</blockquote>
<h3>Burnout is Not Admirable</h3>
<p>Early in her career, Ashley wore burnout like a badge of honor. She doesn’t anymore.</p>
<blockquote>
<p>“Burnout doesn’t prove commitment,” she says. “It just dulls your spark.”</p>
</blockquote>
<p>Now, she treats rest as productive. And she’s learned that clarity is kindness—especially when giving feedback.</p>
<blockquote>
<p>“I thought being liked was the same as being kind. It’s not. Kindness is honesty with empathy.”</p>
</blockquote>
<h3>The Most Underrated GitHub Feature?</h3>
<p>Ashley’s pick: <a href="https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fcopilot">personal instructions in GitHub Copilot</a>.</p>
<p>Most users don’t realize they can <a href="https://docs.github.com/en/copilot/customizing-copilot/adding-personal-custom-instructions-for-github-copilot">shape how Copilot writes</a>, like its tone, assumptions, and context awareness.</p>
<p>Her own instructions are specific: empathetic, plainspoken, technical without being condescending. For Ashley, that helps reduce cognitive load and makes the tool feel more human.</p>
<blockquote>
<p>“Most people skip over this setting. But it’s one of the best ways to make Copilot more useful—and more humane.”</p>
</blockquote>
<h3>Connect with Ashley Willis</h3>
<p>Ashley has been building better systems for over a decade. Whether it’s shaping Copilot UX, creating safer teams, or speaking truth about the labor behind open source, she’s doing the quiet work that drives sustainable change.</p>
<p>Follow <a href="https://bsky.app/profile/ashley.dev">Ashley on BlueSky</a> to learn more about her work, her maker projects, and the small things that keep her grounded in a fast-moving industry.</p>
<p>Sticker Illustration by <a href="https://linktr.ee/JacobAshley">Jacob Ashley</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/we-were-seen-as-amplifiers-not-collaborators-ashley-willis-sr-director-of</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/we-were-seen-as-amplifiers-not-collaborators-ashley-willis-sr-director-of</guid>
            <pubDate>Fri, 18 Apr 2025 19:09:30 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Increasing development velocity with Cursor]]></title>
            <description><![CDATA[<p>If you’re a developer, you’ve probably heard of <a href="https://www.cursor.com/">Cursor</a> by now and have either tried it out or are just curious to learn more about it. Cursor is a fork of VSCode with a ton of powerful AI/LLM-powered features added on. For around $20/month, I think it’s the best value in the AI coding space.</p>
<p>Tech giants like Shopify and smaller companies like <a href="https://www.thisdot.co/">This Dot Labs</a> have purchased Cursor subscriptions for their developers with the goal of increased productivity.</p>
<p>I have been using Cursor heavily for a few months now and am excited to share how it’s impacted me personally. In this post, we will cover some of the basic features, use cases, and I’ll share some tips and tricks I’ve learned along the way.</p>
<p>If you love coding and building like me, I hope this post will help you unleash some of the superpowers Cursor’s AI coding features make possible. Let’s jump right in!</p>
<h2>Cursor 101</h2>
<p>The core tools of the Cursor tool belt are <a href="https://docs.cursor.com/tab/overview">Autocomplete</a>, <a href="https://docs.cursor.com/chat/ask">Ask</a>, and <a href="https://docs.cursor.com/chat/agent">Agent</a>.</p>
<h3>Feature: Autocomplete</h3>
<p>The first thing that got me hooked was Autocomplete. It just worked so much better than the tools I had used previously, like GitHub Copilot. It was quicker and smarter, and I immediately noticed the number of keystrokes it was saving me.</p>
<p>This feature is great because it doesn’t really require any work or skilled prompting from the user. There are a couple of tricks for getting a little bit more out of it that I will share later, but for now, just enjoy the ride!</p>
<h3>Feature: Ask</h3>
<p>If you’ve interacted with AI/LLMs before, like ChatGPT, the Ask feature will feel familiar. It’s a chat interface that lets you easily provide context from your code base and choose which model to chat with.</p>
<p>This feature is best suited for just asking more general questions that you might have queried Google or Stack Overflow for in the past. It’s also good for planning how to implement a feature you’re working on.</p>
<p>After chatting or planning, you can switch directly to Agent mode to pick up and take action on something you were cooking up in Ask mode.</p>
<p>Here’s an example of planning a simple tic-tac-toe game implementation using the Ask feature:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfd5o5N7_up3sQLg0y0FHGDvHS225kbhLXBLsPMH4dzOLcWADvi7k7_gcbqwUSr51AUJhktqXO2h8rFgSKTV_8KLIPUqVIgmuJJaXe-mLY9qqAez_qdBWtVe02i0nN-wXOSTsvMjQ?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
<h3>Feature: Agent</h3>
<p>Agent mode lets the AI model take the wheel and write code, make edits, or take other similar actions on your code base. The goal is that you can write prompts and give instructions, and the Agent can generate the code and build features or even entire applications for you.</p>
<p>With great power comes great responsibility.</p>
<p>Agents are a feature where the more you put in, the more you get out. As you become more skilled at writing prompts and providing the right context, your results will keep improving.</p>
<p>The AI doesn’t always get it right, but the fact that the models and the users are both getting better is exciting. Throughout this post, I will share the best use cases, tips, and tricks I have found using Cursor Agent.</p>
<p>Here’s an example using the Agent to execute the implementation details of the tic-tac-toe game we planned using Ask:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXchK3d3CUHfBMxMJElPr2GsHuFVDziOJFNIcGQTrqbR1I8VjrYlh90HzuEl5FaHIxOkM6nhphJetOqYnmAoWcn0Ptig2FO_K96RMjOTBnB1W8eUCz8EvSqGbWEyUOeDGZDsOFEkrQ?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
<h3>Core Concept: Context</h3>
<p>After understanding the features and the basics of prompting, context is the most important thing for getting the best results out of Cursor.</p>
<p>In Cursor and in general, whenever you’re prompting a chat or an agent, you want to make sure that it has all the relevant information that it needs to provide an answer or result.</p>
<p>Cursor, by default, always has some context of your code. It indexes your code base and usually keeps the open buffer in the context window at the very least.</p>
<p>At the top left of the Ask or Agent panel, there is an @ button, and next to that are badges for all the current items that have been explicitly added to the context for the current session. The @ button has a dropdown that allows you to add files, folders, web links, past chats, git commits, and more to the context.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXc0QYDhLOTQfQA_S3jnBtk6c8AyDDC9H-baUCsA4nRYbSTnr3DPn8wbpF8wCXexlafjMduNRJyLF8ZL-1zm8Cwxylp3e80Vt25olO3t553X_IX2BKIS0daIgl_6p-nuvBbaFn4Z-A?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfuPk1-xJ1XEPnCLNUEzkZDKvoRI-TRT8vZkO8IdHZqAHdZ-vcvRHxNOg2NZHkw8dGwQq3IBsA3idpftKuxkAoBB5Pxyq6qt-cy6yvOSkPztmVGET0J4C3O0tVnb-UqY7qG2q-4AA?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
<p>Before you prompt, always make sure you add the relevant content it needs as context so that it has everything it needs to provide the best response.</p>
<h3>Settings and Rules</h3>
<p>Cursor has its own settings page, which you can access through Cursor → Settings → Cursor Settings.</p>
<p>This is where you log in to your account, manage various features, and enable or disable models.</p>
<p>In the General section, there is an option for Privacy Mode. This is one setting in particular I recommend enabling. Aside from that, just explore around and see what’s available.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdESytib1MHp4sDVwFf0VN3qiYV07xVdHRp6UiWLYlKMY5aorbGM1m9-2YpAT6kJm96PKpmnNKFV_0RQESo2eJtKrn1-yzhJxfaFOuUd7HaxPicqLbgX7BpysiUwhwu4pMTeLECiw?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
<h3>Models</h3>
<p>The model you use is just as important as your prompt and the context that you provide. Models are the underlying AI/LLM used to process your input. The most well-known is GPT-4o, the default model for ChatGPT. There are a lot of different models available, and Cursor provides access to most of them out of the box.</p>
<h3>Model pricing</h3>
<p>A lot of the most common models, like GPT-4o or Sonnet 3.5/3.7, are included in your Cursor subscription. Others, like o1 and Sonnet 3.7 MAX, are considered premium models, and you will be billed for their usage.</p>
<p>Be sure to pay attention to which models you are using so you don’t get any surprise bills.</p>
<h3>Choosing a Model</h3>
<p>Some models are better suited for certain tasks than others. You can configure which models are enabled in the Cursor Settings.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXc90kQ9Vz00_VHwRiOipPu5F9PSxT7DFSC2Q7PlCTkm3raCX6z0_0b_GzEaLxaCyq0iJboUKl1rQcBbluwH87kJr3DlsgF1K3w__NCcLuWGSJFnedmY-hwOghfHrxnES9g70TMK6A?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
<p>If you are planning out a big feature or trying to solve some complex logic issue, you may want to use one of the thinking models, like o1, o3-mini, or DeepSeek R1.</p>
<p>For most coding tasks and as a good default, I recommend using Sonnet 3.5 or 3.7.</p>
<p>The great thing about Cursor is that you have the options available right in your editor. The most important piece of advice that I can give in this post is to keep trying things out and experimenting. Try out different models for different tasks, get a feel for it, and find what works for you.</p>
<h2>Use cases</h2>
<p>Agents and LLM models are still far from perfect. That being said, there are already a lot of tasks they are very good at. The more effective you are with these tools, the more you will be able to get done in a shorter amount of time.</p>
<h3>Generating test cases</h3>
<p>Have some code that you would like unit tested? Cursor is very good at generating test cases and assertions for your code. The fewer barriers there are to testing a piece of code, the better the result you will get. So, try your best to write code that is easily testable! If testing the code requires some mocks or other pieces to work, do your best to provide it the context and instructions it needs before writing the tests.</p>
<p>Always review the test cases! There could be errors or test cases that don’t make sense. Most of the time, it will get you pretty close to where you want to be.</p>
<p>Here’s an example of using the Agent mode to install packages for testing and generate unit tests for the tic-tac-toe game logic:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcxa-6-bVNM2MQss9DgWR1G3hrk1cKRYlhX2TQI0LZxyPtSFfgRWuVzf9fMgHX57fxUdouybAixct9HlUhbINw1YZNPz3cS6bW1xhXupfD1P0WgGCM35C85gbeBA5P7zYwcZQwX?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
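<p>To give a sense of the output, here’s the kind of test the Agent might generate. The <code>checkWinner</code> helper, its module path, and the use of Vitest are hypothetical details for illustration:</p>
<pre><code class="language-js">// Hypothetical example of Agent-generated tests for the game logic.
// Assumes a checkWinner(board) helper that returns &#39;X&#39;, &#39;O&#39;, or null.
import { describe, expect, it } from &#39;vitest&#39;;
import { checkWinner } from &#39;./game&#39;;

describe(&#39;checkWinner&#39;, () =&gt; {
  it(&#39;detects a horizontal win&#39;, () =&gt; {
    const board = [
      [&#39;X&#39;, &#39;X&#39;, &#39;X&#39;],
      [&#39;O&#39;, &#39;O&#39;, null],
      [null, null, null],
    ];
    expect(checkWinner(board)).toBe(&#39;X&#39;);
  });

  it(&#39;returns null when there is no winner yet&#39;, () =&gt; {
    const board = [
      [&#39;X&#39;, &#39;O&#39;, null],
      [null, null, null],
      [null, null, null],
    ];
    expect(checkWinner(board)).toBeNull();
  });
});
</code></pre>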
<h3>Generating documentation</h3>
<p>This is another thing we know AI models are good at - summarizing large chunks of information. Make sure it has the context of whatever you want to document.</p>
<p>This one, in particular, is really great because keeping documentation up to date has historically been a rare and challenging practice.</p>
<p>Here’s an example of using the Agent mode to generate documentation for the tic-tac-toe game:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeqyX6n5PtyUQCAYLCMkx6R4aVwogQVy2f3ocUiri1Fi3sJo4sW7e0ys2ohO7ZCLsE1ll3tCZRS0yvQuqx-4BcH3dug17k_v_xkABLDec3AoROeWqmjq1Uya0RRiqHyNGKt3sBK?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
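<p>The same idea applies to inline documentation. Here’s a hypothetical JSDoc block the Agent might produce for the <code>checkWinner</code> helper from the testing example above:</p>
<pre><code class="language-js">/**
 * Determines the winner of a tic-tac-toe board, if any.
 * (Hypothetical example of Agent-generated documentation.)
 *
 * @param {Array&lt;Array&lt;(&#39;X&#39;|&#39;O&#39;|null)&gt;&gt;} board - A 3x3 grid of cell values.
 * @returns {(&#39;X&#39;|&#39;O&#39;|null)} The winning mark, or null if there is no winner yet.
 */
export function checkWinner(board) {
  // ...game logic...
}
</code></pre>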
<h3>Code review</h3>
<p>There are a lot of up-and-coming tools outside of Cursor that can handle this. For example, GitHub now has Copilot integrated into pull requests for code reviews. Still, it’s never a bad idea to have whatever change set you’re about to commit reviewed and inspected before pushing it up to the remote. You can provide your unstaged changes or even specific commits as context to a Cursor Ask or Agent prompt.</p>
<h3>Getting up to speed in a new code base</h3>
<p>Being able to query a codebase with the power of LLMs is truly fantastic. It can be a great help for getting up to speed in a large, new codebase quickly.</p>
<p>Some example prompts:</p>
<blockquote>
<p>Please provide an overview of this project and how to get started developing with it</p>
</blockquote>
<blockquote>
<p>I need to make some changes to the way that notifications are grouped in the UI, please provide a detailed analysis and pseudo code outlining how the grouping algorithm works</p>
</blockquote>
<p>If you have a question about the code base, ask Cursor!</p>
<h3>Refactoring</h3>
<p>Refactoring code in a code base is a much quicker process in Cursor. You can execute refactors depending on their scope in a couple of distinct ways.</p>
<p>For refactors that don’t span a lot of files or are less complex, you can probably get away with just using the autocomplete. For example, if you change something in a file and several instances of the same pattern follow, the autocomplete will quickly pick up on this and help you tab through the changes. If you switch to another file, this information will usually still be in context, and you can keep tabbing through the remaining changes.</p>
<p>For larger refactors spanning several files, using the Agent feature will most likely be the quickest way to get it done.</p>
<ol>
<li>Add all the files you plan to change to the Agent tab’s context window.</li>
<li>Provide specific instructions and/or a basic example of how to execute the refactor.</li>
<li>Let the Agent work. If it doesn’t get it exactly right initially, you can always give it corrections in a follow-up prompt.</li>
</ol>
<h3>Generating new code/features</h3>
<p>This is the big promise of AI agents and the one with the most room for mixed results. My main recommendation here is to keep experimenting. Keep learning to prompt more effectively, compare results from different models, and pay attention to the results you get from each use case.</p>
<p>I personally get the best results building new features in small, focused chunks of work. It can also be helpful to have a dialog with the Ask feature first to plan out the feature&#39;s details, which the Agent can then follow up on and implement.</p>
<p>If there are existing patterns in your codebase for accomplishing certain things, provide this information in your prompts and make sure to add the relevant code to the context. For example, if you’re adding a new form to the web page and you have other similar forms that handle validation and making back-end calls in the same way, Cursor can base the code for the new feature on this.</p>
<p>Example prompt: Generate a form for creating a new post, follow similar patterns from the create user profile form, and look to the post schema for the fields that should be included.</p>
<p>Remember that you can always follow up with additional prompts if you aren’t quite happy with the results of the first. If the results are close but need to be adjusted in some way, let the agent know in the next prompt.</p>
<p>You may find that for some things, it just doesn’t do well yet. Mentally note these things and try to get to a place where you can intuit when to reach for the Agent feature or just write some of the code the old-fashioned way.</p>
<h2>Tips and tricks</h2>
<p>The more you use Cursor, the more you will find little ways to get more out of it. Here are some of the tips and patterns that I find particularly useful in my day-to-day work.</p>
<h3>Generating UI with screenshots</h3>
<p>You can attach images to your prompts that the models can understand using computer vision. To the left of the send button, there is a little button to attach an image from your computer.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeOSuyMScaxhAkTAmfvJdy59maRKtVUTlg-FE_mpCoH_Y2XfEduB7OeXvALL77ppZ0aekKkqKo2KDOuWhk6sEAX4B16v83kYMaynAFBnTLaSSfka_sU78AFBxvaJg1UD0YN5f9w?key=xMFmV_3o54VBMaU93xcm1dFP" alt=""></p>
<p>This functionality is incredibly useful for generating UI code, whether you are giving it an example UI as a reference for generating new UI in your application or providing a screenshot of existing UI in your application and prompting it to change details in reference to the image.</p>
<h3>Cursor Rules</h3>
<p><a href="https://docs.cursor.com/context/rules-for-ai">Cursor Rules</a> allow you to add additional information that the LLM models might need to provide the best possible experience in your codebase. You can create global rules as well as project-specific ones.</p>
<p>An example use case is if your project has some updated dependency with newer APIs than the one on which the LLM has been trained. I ran into this when adding Tailwind v4 to a project; the models are always generating code based on Tailwind v3 or earlier.</p>
<p>Here’s how we can add a rules file to handle this use case:</p>
<pre><code>Path: .cursor/rules/tailwind.mdc

Rule Type: Agent Requested

Description: When generating CSS using Tailwind utility classes

Access CSS variables in Tailwind CSS using the text(--var-name) syntax and not the previous text-[var(--var-name)] syntax
</code></pre>
<p>If you want to see some more examples, check out the <a href="https://github.com/PatrickJS/awesome-cursorrules">awesome-cursorrules repository</a>.</p>
<h2>Summary</h2>
<p>Learn to use Cursor and similar tools to enhance your development process. It may not give you actual superpowers, but it may feel like it. All the features and tools we’ve covered in this post come together to provide an amazing experience for developing all types of software and applications.</p>
]]></description>
            <link>https://www.thisdot.co/blog/increasing-development-velocity-with-cursor</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/increasing-development-velocity-with-cursor</guid>
            <pubDate>Fri, 18 Apr 2025 13:00:02 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Internationalization in Next.js with next-intl]]></title>
            <description><![CDATA[<h1>Internationalization in Next.js with next-intl</h1>
<p>Internationalization (i18n) is essential for providing a multi-language experience for global applications. <code><a href="https://next-intl.dev/">next-intl</a></code> integrates well with Next.js’ App Router, handling i18n routing, locale detection, and dynamic configuration. This guide will walk you through setting up i18n in Next.js using <code>next-intl</code> for URL-based routing, user-specific settings, and domain-based locale routing.</p>
<h2>Getting Started</h2>
<p>First, <a href="https://nextjs.org/docs/getting-started/installation">create a Next.js app</a> with the App Router and install <code>next-intl</code>:</p>
<pre><code>npm install next-intl
</code></pre>
<p>Next, configure <code>next-intl</code> in the <code>next.config.ts</code> file to provide a request-specific i18n configuration for Server Components:</p>
<pre><code class="language-js">import type { NextConfig } from &#39;next&#39;;
import createNextIntlPlugin from &#39;next-intl/plugin&#39;;

const withNextIntl = createNextIntlPlugin();

const nextConfig: NextConfig = {
  /* config options here */
};

export default withNextIntl(nextConfig);
</code></pre>
<h2>Without i18n Routing</h2>
<p>Setting up an app without i18n routing integration can be advantageous in scenarios where you want to provide a locale to <code>next-intl</code> based on user-specific settings or when your app supports only a single language. This approach offers the simplest way to begin using <code>next-intl</code>, as it requires no changes to your app’s structure, making it an ideal choice for straightforward implementations.</p>
<pre><code>├── translations
│   ├── en.json
│   └── es.json
├── next.config.ts
└── src
    ├── i18n
    │   └── request.ts
    └── app
        ├── layout.tsx
        └── page.tsx
</code></pre>
<p>Here’s a quick explanation of each file&#39;s role:</p>
<ul>
<li><code>translations/</code>: Stores different translations per language (e.g., <code>en.json</code> for English, <code>es.json</code> for Spanish). Organize this as needed, e.g., <code>translations/en/common.json</code>.</li>
<li><code>request.ts</code>: Manages locale-based configuration scoped to each request.</li>
</ul>
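<p>For reference, each translation file is plain, namespaced JSON. Here’s a minimal <code>translations/en.json</code>, assuming the <code>HomePage</code> keys used in the component examples below:</p>
<pre><code class="language-json">{
  &quot;HomePage&quot;: {
    &quot;title&quot;: &quot;Hello world!&quot;,
    &quot;about&quot;: &quot;About us&quot;
  }
}
</code></pre>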
<h3>Setup request.ts for Request-Specific Configuration</h3>
<p>Since we will be using features from <code>next-intl</code> in Server Components, we need to add the following configuration in <code>i18n/request.ts</code>:</p>
<pre><code class="language-js">import { getRequestConfig } from &#39;next-intl/server&#39;;

export default getRequestConfig(async () =&gt; {
  const locale = &#39;en&#39;;

  return {
    locale,
    messages: (await import(`../../translations/${locale}.json`)).default,
  };
});
</code></pre>
<p>Here, we define a static locale and use it to determine which translation file to import. The imported JSON is returned as <code>messages</code>, together with the locale, so both can be accessed from components across the application.</p>
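<p>The static locale is just a starting point, though. Since this setup exists so you can derive the locale from user-specific settings, you could swap the hard-coded value for, say, a cookie lookup. Here’s a minimal sketch, assuming a <code>locale</code> cookie and the <code>cookies()</code> helper from <code>next/headers</code> (awaited here, as required in Next.js 15; it’s synchronous in earlier versions):</p>
<pre><code class="language-js">import { getRequestConfig } from &#39;next-intl/server&#39;;
import { cookies } from &#39;next/headers&#39;;

export default getRequestConfig(async () =&gt; {
  // Read the user&#39;s preferred locale from a cookie, falling back to English.
  const cookieStore = await cookies();
  const locale = cookieStore.get(&#39;locale&#39;)?.value ?? &#39;en&#39;;

  return {
    locale,
    messages: (await import(`../../translations/${locale}.json`)).default,
  };
});
</code></pre>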
<h3>Using Translation in RootLayout</h3>
<p>Inside <code>RootLayout</code>, we use <code>getLocale()</code> to retrieve the static locale and set the document language for SEO and pass translations to <code>NextIntlClientProvider</code>:</p>
<pre><code class="language-js">import { NextIntlClientProvider } from &#39;next-intl&#39;;
import { getLocale, getMessages } from &#39;next-intl/server&#39;;

export default async function RootLayout({ children }: { children: React.ReactNode }) {
  const locale = await getLocale();
  const messages = await getMessages();

  return (
    &lt;html lang={locale}&gt;
      &lt;body&gt;
        &lt;NextIntlClientProvider messages={messages}&gt;{children}&lt;/NextIntlClientProvider&gt;
      &lt;/body&gt;
    &lt;/html&gt;
  );
}
</code></pre>
<p><strong>Note</strong> that <code>NextIntlClientProvider</code> automatically inherits configuration from <code>i18n/request.ts</code> here, but messages must be explicitly passed.</p>
<p>Now you can use translations and other functionality from <code>next-intl</code> in your components:</p>
<pre><code class="language-js">import { useTranslations } from &#39;next-intl&#39;;
import { Link } from &#39;@/i18n/routing&#39;;

export default function HomePage() {
  const t = useTranslations(&#39;HomePage&#39;);
  return (
    &lt;div&gt;
      &lt;h1&gt;{t(&#39;title&#39;)}&lt;/h1&gt;
      &lt;Link href=&quot;/about&quot;&gt;{t(&#39;about&#39;)}&lt;/Link&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<p>In case of async components, you can use the awaitable <code>getTranslations</code> function instead:</p>
<pre><code class="language-js">import { getTranslations } from &#39;next-intl/server&#39;;
import { Link } from &#39;@/i18n/routing&#39;;

export default async function HomePage() {
  const t = await getTranslations(&#39;HomePage&#39;);
  return (
    &lt;div&gt;
      &lt;h1&gt;{t(&#39;title&#39;)}&lt;/h1&gt;
      &lt;Link href=&quot;/about&quot;&gt;{t(&#39;about&#39;)}&lt;/Link&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<p>And with that, you have i18n configured and working in your application! Now, let’s take it a step further by introducing routing.</p>
<h2>With i18n Routing</h2>
<p>To set up i18n routing, we need a file structure that separates each language configuration and translation file. Below is the recommended structure:</p>
<pre><code>├── translations
│   ├── en.json
│   └── es.json
├── next.config.ts
└── src
    ├── i18n
    │   ├── routing.ts
    │   └── request.ts
    ├── middleware.ts
    └── app
        └── [locale]
            ├── layout.tsx
            └── page.tsx
</code></pre>
<p>We updated the earlier structure to include some files that we require for routing:</p>
<ul>
<li><code>routing.ts</code>: Sets up locales, default language, and routing, shared between middleware and navigation.</li>
<li><code>middleware.ts</code>: Handles URL rewrites and locale negotiation.</li>
<li><code>app/[locale]/</code>: Creates dynamic routes for each locale like <code>/en/about</code> and <code>/es/about</code>.</li>
</ul>
<h3>Define Routing Configuration in i18n/routing.ts</h3>
<p>The <code>routing.ts</code> file configures supported locales and the default locale, which is referenced by <code>middleware.ts</code> and other navigation functions:</p>
<pre><code class="language-js">import { defineRouting } from &#39;next-intl/routing&#39;;
import { createNavigation } from &#39;next-intl/navigation&#39;;

export const routing = defineRouting({
  locales: [&#39;en&#39;, &#39;es&#39;], // Supported locales
  defaultLocale: &#39;en&#39;,    // Fallback locale if none matches
});

// Provides wrappers for Next.js navigation APIs to handle locale routing
export const { Link, redirect, usePathname, useRouter } = createNavigation(routing);
</code></pre>
<p>This configuration lets Next.js handle URL paths like <code>/about</code>, with locale negotiation handled by <code>next-intl</code>.</p>
<h3>Update request.ts for Request-Specific Configuration</h3>
<p>We need to update the <code>getRequestConfig</code> function from the earlier implementation in <code>i18n/request.ts</code>:</p>
<pre><code class="language-js">import { getRequestConfig } from &#39;next-intl/server&#39;;
import { routing } from &#39;./routing&#39;;

export default getRequestConfig(async ({ requestLocale }) =&gt; {
  let locale = await requestLocale;

  if (!locale || !routing.locales.includes(locale as any)) {
    locale = routing.defaultLocale;
  }

  return {
    locale,
    messages: (await import(`../../translations/${locale}.json`)).default,
  };
});
</code></pre>
<p>Here, <code>request.ts</code> ensures that each request loads the correct translation files based on the user’s locale or falls back to the default.</p>
<h3>Setup Middleware for Locale Matching</h3>
<p>The <code>middleware.ts</code> file matches the <code>locale</code> based on the request:</p>
<pre><code class="language-js">import createMiddleware from &#39;next-intl/middleware&#39;;
import { routing } from &#39;./i18n/routing&#39;;

export default createMiddleware(routing);

export const config = {
  matcher: [&#39;/&#39;, &#39;/(es|en)/:path*&#39;], // Matches i18n paths only
};
</code></pre>
<p>Middleware handles locale matches and redirects to localized paths like <code>/en</code> or <code>/es</code>.</p>
<h3>Updating the <code>RootLayout</code> file</h3>
<p>Inside <code>RootLayout</code>, we use the locale from params (matched by middleware) instead of calling <code>getLocale()</code>:</p>
<pre><code class="language-js">import { NextIntlClientProvider } from &#39;next-intl&#39;;
import { getMessages } from &#39;next-intl/server&#39;;
import { notFound } from &#39;next/navigation&#39;;
import { routing } from &#39;@/i18n/routing&#39;;
import &#39;../globals.css&#39;;

export default async function RootLayout({
  children,
  params,
}: {
  children: React.ReactNode;
  params: { locale: string };
}) {
  const { locale } = params;

  if (!routing.locales.includes(locale as any)) {
    notFound();
  }

  const messages = await getMessages();

  return (
    &lt;html lang={locale}&gt;
      &lt;body&gt;
        &lt;NextIntlClientProvider messages={messages}&gt;{children}&lt;/NextIntlClientProvider&gt;
      &lt;/body&gt;
    &lt;/html&gt;
  );
}
</code></pre>
<p>The <code>locale</code> we get from the params was matched in the <code>middleware.ts</code> file, and we use it here to set the document language for SEO purposes. Additionally, we use this file to pass configuration from <code>i18n/request.ts</code> to Client Components through <code>NextIntlClientProvider</code>.</p>
<p><strong>Note</strong>: When using the above setup with i18n routing, <code>next-intl</code> will currently opt into dynamic rendering when APIs like <code>useTranslations</code> are used in Server Components. <code>next-intl</code> provides a temporary API that can be used to enable static rendering.</p>
<h3>Static Rendering for i18n Routes</h3>
<p>For apps with dynamic routes, use <code>generateStaticParams</code> to pass all possible locale values, allowing Next.js to render at build time:</p>
<pre><code class="language-js">import { routing } from &#39;@/i18n/routing&#39;;

export function generateStaticParams() {
  return routing.locales.map((locale) =&gt; ({ locale }));
}
</code></pre>
<p><code>next-intl</code> provides an API <code>setRequestLocale</code> that can be used to distribute the locale that is received via params in layouts and pages for usage in all Server Components that are rendered as part of the request. You need to call this function in every layout/page that you intend to enable static rendering for since Next.js can render layouts and pages independently.</p>
<pre><code class="language-js">import { setRequestLocale } from &#39;next-intl/server&#39;;
import { notFound } from &#39;next/navigation&#39;;
import { routing } from &#39;@/i18n/routing&#39;;

export default async function RootLayout({ children, params: { locale } }) {
  if (!routing.locales.includes(locale as any)) {
    notFound();
  }

  setRequestLocale(locale);

  return (
    // ...
  );
}
</code></pre>
<p><strong>Note</strong>: Call <code>setRequestLocale</code> before invoking <code>useTranslations</code>, <code>getMessages</code>, or any other <code>next-intl</code> functions.</p>
<h2>Domain Routing</h2>
<p>For domain-specific locale support, use the <code>domains</code> setting to map domains to locales, such as <code>us.example.com/en</code> or <code>ca.example.com/fr</code>.</p>
<pre><code class="language-js">import { defineRouting } from &#39;next-intl/routing&#39;;

export const routing = defineRouting({
  locales: [&#39;en&#39;, &#39;fr&#39;],
  defaultLocale: &#39;en&#39;,
  domains: [
    { domain: &#39;us.example.com&#39;, defaultLocale: &#39;en&#39;, locales: [&#39;en&#39;] },
    { domain: &#39;ca.example.com&#39;, defaultLocale: &#39;en&#39; },
  ],
});
</code></pre>
<p>This setup allows you to serve localized content based on domains. <a href="https://next-intl-docs.vercel.app/docs/routing#domains">Read more on domain routing here</a>.</p>
<h2>Conclusion</h2>
<p>Setting up internationalization in Next.js with <code>next-intl</code> provides a modular way to handle URL-based routing, user-defined locales, and domain-specific configurations. Whether you need URL-based routing or a straightforward single-locale setup, <code>next-intl</code> adapts to fit diverse i18n needs.</p>
<p>With these tools, your app will be ready to deliver a seamless multi-language experience to users worldwide.</p>
]]></description>
            <link>https://www.thisdot.co/blog/internationalization-in-next-js-with-next-intl</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/internationalization-in-next-js-with-next-intl</guid>
            <pubDate>Fri, 11 Apr 2025 15:18:29 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[“Recognize leadership behavior early. Sometimes people don’t even realize it in themselves…” Kelly Vaughn on Product Leadership, Creating Pathways for Women in Tech, & Conferences]]></title>
            <description><![CDATA[<p>Some leaders build products. Some lead engineering teams. Kelly Vaughn is doing both.</p>
<p>As Director of Engineering at <a href="https://www.spot.ai/">Spot AI</a>—a company building video intelligence software—Kelly recently expanded her role to oversee both Product and Engineering for their VMS offering. That shift means juggling strategy, execution, and team development, all while helping others step confidently into leadership themselves.</p>
<p>And yes, she still finds time to speak at conferences and answer DMs from people navigating the same transitions she once did.</p>
<p>We spoke with Kelly about spotting leadership potential early, why ambiguity doesn’t have to feel chaotic, and the lesson she learned the hard way about managing up.</p>
<h3>Stepping into Product Leadership</h3>
<p>Kelly’s new title might look like a promotion on paper, but the shift is more philosophical than anything.</p>
<blockquote>
<p>“Engineering leadership is about execution,” she says. “Product leadership is about defining why we’re building something in the first place.”</p>
</blockquote>
<p>Now leading Product and Engineering for Spot AI’s VMS product, she’s talking to customers, researching market trends, and making smart bets on where to invest next. </p>
<p>It’s a role she’s clearly energized by.</p>
<blockquote>
<p>“I’m really looking forward to dedicating time to shaping our product’s future.”</p>
</blockquote>
<h3>Thriving in Ambiguity</h3>
<p>Some people panic when problems are fuzzy or undefined. Others use it as fuel.</p>
<blockquote>
<p>“There are two key traits I see in people who handle ambiguity well,” Kelly says. “They stay calm under stress, and they know how to form a hypothesis from a vague problem statement.”</p>
</blockquote>
<p>That means asking the right questions, taking action quickly, and being totally okay with pivoting when something doesn’t pan out. </p>
<p>It’s no surprise that these same traits overlap with great product thinking—a mindset she’s now leaning into more than ever.</p>
<blockquote>
<p>“I do some of my best work when navigating uncertainty,” she adds.</p>
</blockquote>
<p><a href="https://modernleader.is/p/what-to-do-when-the-path-isnt-clear">Read Kelly’s blog on embracing ambiguity in Product!</a></p>
<h3>Creating Leadership Pathways for Women in Tech</h3>
<p>When asked how leaders can create more leadership pathways for women in software engineering, Kelly stressed that it is not a passive process.</p>
<blockquote>
<p>“Senior leaders need to be proactive,” Kelly says. “That starts with identifying and addressing bias across hiring, promotions, and day-to-day interactions.”</p>
</blockquote>
<p>She emphasizes psychological safety—so women feel confident advocating for themselves and others. But she also knows not everyone feels ready to raise their hand.</p>
<blockquote>
<p>“Don’t wait for someone to ask for a title change or a growth opportunity. Recognize leadership behavior early. Sometimes people don’t even realize it in themselves yet.”</p>
</blockquote>
<h3>On Stage, In Real Life</h3>
<p>Kelly’s no stranger to the tech conference circuit—often giving talks on engineering leadership and team growth. </p>
<p>Her biggest source of inspiration? Conversations with people trying to make the leap into leadership.</p>
<blockquote>
<p>“I might use the same slide deck at three conferences,” she says, “but the talk itself will be different every time.”</p>
</blockquote>
<p>Rather than sticking to a script, she likes to share recent examples from her own work, tailoring the delivery to the audience in front of her. It keeps things relevant, grounded, and never too polished.</p>
<p>Between setting product strategy, mentoring the next generation of leaders, and hopping from one tech conference to the next, Kelly Vaughn is showing what it means to lead with clarity—even when things are unclear.</p>
<p>She’s not here to tell you it’s easy. But she will tell you it’s worth it.</p>
<p>Connect with Kelly Vaughn on <a href="https://bsky.app/profile/kvlly.com">Bluesky</a>.</p>
<p>Sign up for Kelly Vaughn’s <a href="https://modernleader.is/">Newsletter</a>!</p>
<p>Sticker Illustration by <a href="https://linktr.ee/JacobAshley">Jacob Ashley</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/recognize-leadership-behavior-early-sometimes-people-dont-even-realize-it-in</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/recognize-leadership-behavior-early-sometimes-people-dont-even-realize-it-in</guid>
            <pubDate>Fri, 04 Apr 2025 20:04:31 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Vercel & React Native - A New Era of Mobile Development?]]></title>
            <description><![CDATA[<h1>Vercel &amp; React Native - A New Era of Mobile Development?</h1>
<p>Jared Palmer of <a href="https://vercel.com">Vercel</a> recently announced an <a href="https://www.linkedin.com/posts/jaredlpalmer_excited-to-welcome-fernando-rojo-to-vercel-activity-7303074748454834177-xYo3">acquisition</a> that piqued our interest. Having worked extensively with both Next.js and Vercel, as well as React Native, we were curious to see what the appointment of Fernando Rojo, the creator of <a href="https://solito.dev/">Solito</a>, as Vercel&#39;s Head of Mobile, would mean for the future of React Native and Vercel.</p>
<p>While we can only speculate on what the future holds, we can look closer at Solito and its current integration with Vercel. Based on the information available, we can also make some educated guesses about what the future might hold for React Native and Vercel.</p>
<h2>What is Solito?</h2>
<p>Based on a <a href="https://x.com/rauchg/status/1897104998301061454">recent tweet by Guillermo Rauch</a>, one might assume that Solito allows you to build mobile apps with Next.js. While that might become a reality in the future, Jamon Holmgren, the CTO of <a href="https://infinite.red/">Infinite Red</a>, <a href="https://x.com/jamonholmgren/status/1897364023261454560">added some context</a> to the conversation. According to Jamon, Solito is a cross-platform framework built on top of two existing technologies:</p>
<ul>
<li>For the web, Solito leverages Next.js.</li>
<li>For mobile, Solito takes advantage of <a href="https://expo.dev/">Expo</a>.</li>
</ul>
<p>That means that, at the moment, you can&#39;t build mobile apps using Next.js &amp; Solito only - you still need Expo and React Native. Even so, Jamon admits that the current integration of Solito with Vercel is exciting.</p>
<p>Let&#39;s take a closer look at what Solito is according to its <a href="https://solito.dev/">official website</a>:</p>
<blockquote>
<p>This library is two things:</p>
<ol>
<li><p>A tiny wrapper around React Navigation and Next.js that lets you share navigation code across platforms.</p>
</li>
<li><p>A set of patterns and examples for building cross-platform apps with React Native + Next.js.</p>
</li>
</ol>
</blockquote>
<p>We can see that Jamon was right - Solito allows you to share navigation code between Next.js and React Native and provides some patterns and components that you can use to build cross-platform apps, but it doesn&#39;t replace React Native or Expo.</p>
<h3>The Cross-Platformness of Solito</h3>
<p>So, we know Solito provides a way to share navigation and some patterns between Next.js and React Native. But what precisely does that entail?</p>
<h4>Cross-Platform Hooks and Components</h4>
<p>If you look at Solito&#39;s <a href="https://solito.dev/">documentation</a>, you&#39;ll see that it&#39;s not only navigation you can share between Next.js and React Native. There are a few components that wrap Next.js components and make them available in React Native:</p>
<ul>
<li><a href="https://solito.dev/usage/link">Link</a> - a component that wraps Next.js&#39; <code>Link</code> component and allows you to navigate between screens in React Native.</li>
<li><a href="https://solito.dev/usage/text-link">TextLink</a> - a component that also wraps Next.js&#39; <code>Link</code> component but accepts text nodes as children.</li>
<li><a href="https://solito.dev/usage/moti-link">MotiLink</a> - a component that wraps Next.js&#39; <code>Link</code> component and allows you to animate the link using <a href="https://moti.fyi/">moti</a>, a popular animation library for React Native.</li>
<li><a href="https://solito.dev/usage/image">SolitoImage</a> - a component that wraps Next.js&#39; <code>Image</code> component and allows you to display images in React Native.</li>
</ul>
<p>On top of that, Solito provides a few hooks that you can use for shared routing and navigation:</p>
<ul>
<li><a href="https://solito.dev/usage/use-router">useRouter()</a> - a hook that lets you navigate between screens across platforms using URLs and Next.js <code>Url</code> objects.</li>
<li><a href="https://solito.dev/usage/use-link">useLink()</a> - a hook that lets you create <code>Link</code> components across the two platforms.</li>
<li><code>createParam()</code> - a function that returns the <code>useParam()</code> and <code>useParams()</code> hooks which allow you to access and update URL parameters across platforms.</li>
</ul>
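<p>To see what this sharing looks like in practice, here&#39;s a minimal sketch of a component that could live in a shared package and render on both platforms. The <code>/users/[id]</code> route and the <code>UserTeaser</code> component are hypothetical examples, not part of Solito&#39;s API:</p>
<pre><code class="language-js">import { Text, View } from &#39;react-native&#39;;
import { Link } from &#39;solito/link&#39;;
import { useRouter } from &#39;solito/router&#39;;

// Hypothetical shared component - works in both the Next.js and Expo apps.
export function UserTeaser({ id }) {
  const { push } = useRouter();

  return (
    &lt;View&gt;
      {/* Declarative navigation, cross-platform */}
      &lt;Link href={`/users/${id}`}&gt;
        &lt;Text&gt;View profile&lt;/Text&gt;
      &lt;/Link&gt;
      {/* Imperative navigation - the same call on web and native */}
      &lt;Text onPress={() =&gt; push(`/users/${id}`)}&gt;Open programmatically&lt;/Text&gt;
    &lt;/View&gt;
  );
}
</code></pre>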
<h4>Shared Logic</h4>
<p>The Solito <a href="https://github.com/nandorojo/solito/tree/master/example-monorepos/blank">starter project</a> is structured as a monorepo containing:</p>
<ul>
<li><p><code>apps/next</code> - the Next.js application.</p>
</li>
<li><p><code>apps/expo</code> or <code>apps/native</code> - the React Native application.</p>
</li>
<li><p><code>packages/app</code> - shared packages across the two applications:</p>
<ul>
<li><code>features</code></li>
<li><code>providers</code></li>
<li><code>navigation</code></li>
</ul>
</li>
</ul>
<p>The shared packages contain the shared logic and components you can use across the two platforms. For example, the <code>features</code> package contains the shared components organized by feature, the <code>providers</code> package contains the shared context providers, and the <code>navigation</code> package includes the shared navigation logic.</p>
<p>One of the key principles of Solito is <a href="https://solito.dev/gradual-adoption">gradual adoption</a>, meaning that if you use Solito and follow the recommended structure and patterns, you can start with a Next.js application only and eventually add a React Native application to the mix.</p>
<h4>Deployments</h4>
<p>Deploying the Next.js application built on Solito is as easy as deploying any other Next.js application - for example, by linking your GitHub repository to Vercel and setting up automatic deployments.</p>
<p>Deploying the React Native application built on top of Solito to Expo is a little bit more involved - you cannot directly use the <a href="https://docs.expo.dev/eas-update/github-actions/">GitHub Action recommended by Expo</a> without some modification, as Solito uses a monorepo structure.</p>
<p>The adjustment, however, is luckily just a one-liner. You just need to add the <code>working-directory</code> parameter to the <code>eas update --auto</code> command in the GitHub Action. Here&#39;s what the modified part of the Expo GitHub Action would look like:</p>
<pre><code class="language-yaml">### the beginning of the Github action...

- name: Create preview
  uses: expo/expo-github-action/preview@v8
  with:
    command: eas update --auto
    working-directory: ./apps/expo # This needs to be added in order to work with the monorepo structure
</code></pre>
<h2>What Does the Future Hold?</h2>
<p>While we can&#39;t predict the future, we can make some educated guesses about what the future might hold for Solito, React Native, Expo, and Vercel, given what we know about the current state of Solito and the recent acquisition of Fernando Rojo by Vercel.</p>
<h3>A Competitor to Expo?</h3>
<p>One question that comes to mind is whether Vercel will work towards creating a competitor to Expo. While it&#39;s too early to tell, it&#39;s not entirely out of the question. Vercel has been expanding its offering beyond Next.js and static sites, and it&#39;s not hard to imagine that it might want to provide a more integrated, frictionless solution for building mobile apps, further bridging the gap between web and mobile development.</p>
<p>However, Expo is a mature and well-established platform, and building a mobile app toolchain from scratch is no trivial task. It would be easier for Vercel to build on top of Expo and partner with them to provide a more integrated solution for building mobile apps with Next.js.</p>
<p>Furthermore, we need to consider Vercel&#39;s target audience. Most of Vercel&#39;s customers are focused on web development with Next.js, and switching to a mobile-first approach might not be in their best interest. That being said, Vercel has been expanding its offering to cater to a broader audience, and providing a more integrated solution for building mobile apps might be a step in that direction.</p>
<h3>A Cross-Platform Framework for Mobile Apps with Next.js?</h3>
<p>Imagine a future where you write your entire application in Next.js — using its routing, file structure, and dev tools — and still produce native mobile apps for iOS and Android.</p>
<p>It&#39;s unlikely such functionality would be built from scratch. It would likely still rely on React Native + Expo to handle the actual native modules, build processes, and distribution. From the developer’s point of view, however, it would still feel like writing Next.js.</p>
<p>While this idea sounds exciting, it&#39;s not likely to happen in the near future. Building a cross-platform framework that allows you to build mobile apps with Next.js only would require a lot of work and coordination between Vercel, Expo, and the React Native community. Furthermore, there are some conceptual differences between Next.js and React Native that would need to be addressed, such as Next.js being primarily SSR-oriented and native mobile apps running on the client.</p>
<h3>Vercel Building on Top of Solito?</h3>
<p>One of the more likely scenarios is that Vercel will build on top of Solito to provide a more integrated solution for building mobile apps with Next.js. This could involve providing more components, hooks, and patterns for building cross-platform apps, as well as improving the deployment process for React Native applications built on top of Solito.</p>
<p>A potential partnership between Vercel and Expo, or at least some kind of closer integration, could also be in the cards in this scenario. While Expo already provides a robust infrastructure for building mobile apps, Vercel could provide complementary services or features that make it easier to build mobile apps on top of Solito.</p>
<h2>Conclusion</h2>
<p>Some news regarding Vercel and mobile development is very likely on the horizon. After all, Guillermo Rauch, the CEO of Vercel, has himself stated that <a href="https://x.com/rauchg/status/1896943726486032783">Vercel will keep raising the quality bar of the mobile and web ecosystems</a>.</p>
<p>While it&#39;s unlikely we&#39;ll see a full-fledged mobile app framework built on top of Next.js or a direct competitor to Expo in the near future, it&#39;s not hard to imagine that Vercel will provide more tools and services for building mobile apps with Next.js. Solito is a step in that direction, and it&#39;s exciting to see what the future holds for mobile development with Vercel.</p>
]]></description>
            <link>https://www.thisdot.co/blog/vercel-and-react-native-a-new-era-of-mobile-development</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/vercel-and-react-native-a-new-era-of-mobile-development</guid>
            <pubDate>Fri, 04 Apr 2025 13:48:09 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Keeping Costs in Check When Hosting Next.js on Vercel]]></title>
            <description><![CDATA[<p><a href="https://vercel.com/">Vercel</a> is usually the go-to platform for hosting Next.js apps, and not without reason. Not only are they one of the sponsors of Next.js, but their platform is very straightforward to use, not just for Next.js but for other frameworks, too. So it&#39;s no wonder people choose it more and more over other providers.</p>
<p>Vercel, however, is a serverless platform, which means there are a few things you need to be aware of to keep your costs predictable and under control. This blog post covers the most important aspects of hosting a Next.js app on Vercel.</p>
<h2>Vercel&#39;s Pricing Structure</h2>
<p>Vercel&#39;s pricing structure is based on fixed and usage-based pricing, which is implemented through two big components of Vercel: the <a href="https://vercel.com/docs/pricing#developer-experience-platform">Developer Experience Platform (DX Platform)</a> and the <a href="https://vercel.com/docs/pricing#managed-infrastructure">Managed Infrastructure</a>.</p>
<p>The <strong>DX Platform</strong> offers a monthly-billed suite of tools and services focused on building, deploying, and optimizing web apps. Think of it as the developer-focused interface on Vercel, which assists you in managing your app and provides team collaboration tools, deployment infrastructure, security, and administrative services. Additionally, it includes Vercel support. Because the DX Platform is developer-focused, it&#39;s also charged per seat on a monthly basis. The more developers have access to the DX Platform, the more you&#39;re charged. In addition to charging per seat, there are also optional, fixed charges for non-included, extra features. <a href="https://vercel.com/docs/observability#observability-plus">Observability Plus</a> is one such example feature.</p>
<p>The <strong>Managed Infrastructure</strong>, on the other hand, is what makes your app run and scale. It is a <a href="https://en.wikipedia.org/wiki/Serverless_computing">serverless platform</a> priced per usage. Thanks to this infrastructure, you don&#39;t need to worry about provisioning, maintaining, or patching your servers. Everything is executed through serverless functions, which can scale up and down as needed. Although this sounds great, this is also one of the reasons many developers hesitate to adopt serverless; it may have unpredictable costs. One day, your site sees minimal traffic, and the next, it goes viral on <a href="https://news.ycombinator.com/">Hacker News</a>, leading to a sudden spike in costs.</p>
<p>Vercel addresses this uncertainty by including a set amount of free serverless usage in each of its DX Platform plans. Once you exceed those limits, additional charges apply based on usage.</p>
<h3>Pricing Plans</h3>
<p>The DX Platform can be used in <a href="https://vercel.com/pricing">three different pricing plans</a> on a team level. A team can represent a single person, a team within a company, or even a whole company. When creating a team on Vercel, this team can have one or more projects.</p>
<p>The first of the three pricing plans is the <strong>Hobby</strong> plan, which is ideal for personal projects and experimentation. The Hobby plan is free and comes with some of the features and resources of the DX Platform and Managed Infrastructure out of the box, making it suitable for hosting small websites. However, note that the Hobby plan is limited to non-commercial, personal use only.</p>
<p>The <strong>Pro</strong> plan is the most recommended for most teams and can be used for commercial purposes. At the time of this writing, it costs $20 per team member and comes with generous resources that support most teams.</p>
<p>The third tier is the <strong>Enterprise</strong> plan, which is the most advanced and expensive option. This plan becomes necessary when your application requires specific compliance or performance features, such as isolated build infrastructure, Single Sign-On (SSO) for corporate user management, or custom support with Service-Level Agreements. The Enterprise plan has a custom pricing model and is negotiated directly with Vercel.</p>
<h2>What Contributes to Usage Costs and Limiting Them</h2>
<p>Now that you&#39;ve selected a plan for your team, you&#39;re ready to deploy Next.js. But how do you determine what contributes to your per-usage costs and ensure they stay within your plan limits?</p>
<p>The <a href="https://vercel.com/pricing">Vercel pricing page</a> has a detailed breakdown of the resource usage limits for each plan, which can help you understand what affects your costs. However, in this section, we&#39;ve highlighted some of the most impactful factors on pricing that we&#39;ve seen on many of our client projects.</p>
<h3>Number of Function Invocations</h3>
<p>Each time a serverless function runs, it counts as an <a href="https://vercel.com/docs/functions/usage-and-pricing#managing-function-invocations">invocation</a>. Most of the processing on Vercel for your Next.js apps happens through serverless functions. Some might think that only API endpoints or server actions count as serverless function invocations, but this is not true. Every request that comes to the backend goes through a serverless function invocation, which includes:</p>
<ul>
<li>Invoking <a href="https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations">server actions</a> (server functions)</li>
<li>Invoking <a href="https://nextjs.org/docs/pages/building-your-application/routing/api-routes">API routes</a> (from the frontend, another system, or even within another serverless function)</li>
<li>Rendering a <a href="https://nextjs.org/docs/app/building-your-application/rendering/server-components">React server component tree</a> (as part of a request to display a page)</li>
</ul>
<p>To give you an idea of the number of invocations included in a plan, the Pro plan includes 1 million invocations per month for free. After that, it costs $0.60 per million, which can total a significant amount for popular websites.</p>
<p>To minimize serverless function invocations, focus on reducing any of the above points. For example:</p>
<ul>
<li><strong>Batch up server actions:</strong> If you have multiple server actions, such as downloading a file and increasing its download count, you can combine them into one server action (see the sketch after this list).</li>
<li><strong>Minimize calls to the backend:</strong> Closely related to the previous point, unoptimized client components can call the backend more than they need to, contributing to increased function invocation count. If needed, consider using a library such as <a href="https://swr.vercel.app/">useSwr</a> or <a href="https://tanstack.com/query/latest">TanStack Query</a> to keep your backend calls under control.</li>
<li><strong>Use API routes correctly:</strong> Next.js recommends using API routes for external systems invoking your app. For instance, <a href="https://www.contentful.com/">Contentful</a> can invoke a blog post through a webhook without incurring additional invocations. However, avoid invoking API routes from server component tree renders, as this counts as at least two invocations.</li>
</ul>
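<p>For instance, the file-download example above could be combined into a single action. Here&#39;s a minimal sketch, assuming a hypothetical <code>db</code> data layer with Prisma-style methods:</p>
<pre><code class="language-ts">// app/actions.ts
&#39;use server&#39;;

import { db } from &#39;./db&#39;; // placeholder for your data layer

// One action handles both steps, costing a single invocation
// instead of two.
export async function downloadFileAndTrack(fileId: string) {
  const file = await db.file.findUnique({ where: { id: fileId } });
  await db.file.update({
    where: { id: fileId },
    data: { downloads: { increment: 1 } },
  });
  return file;
}
</code></pre>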
<p>Reducing React server component renders is also possible. Not all pages need to be dynamic: convert dynamic routes to <a href="https://nextjs.org/docs/app/building-your-application/routing/dynamic-routes#generating-static-params">static content</a> when you don&#39;t expect them to change in real time. On the client, use <a href="https://nextjs.org/docs/app/api-reference/components/link">Next.js navigation primitives</a> to take advantage of the client-side router cache.</p>
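<p>As an illustration, a dynamic blog route can be prerendered at build time with <code>generateStaticParams</code>. A minimal sketch, assuming a hypothetical <code>getAllPosts()</code> helper:</p>
<pre><code class="language-ts">// app/blog/[slug]/page.tsx
import { getAllPosts } from &#39;@/lib/posts&#39;; // placeholder helper

// Prerender one static page per post at build time, so requests for
// these routes no longer trigger server renders.
export async function generateStaticParams() {
  const posts = await getAllPosts();
  return posts.map((post: { slug: string }) =&gt; ({ slug: post.slug }));
}
</code></pre>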
<p>Middleware in Next.js runs before every request. Although this doesn&#39;t necessarily count as a function invocation (for edge middleware, this is counted in a separate bucket), it&#39;s a good idea to minimize the number of times it has to run. To minimize middleware invocations, limit them only to requests that require it, such as protected routes. For static asset requests, you can skip middleware altogether using <a href="https://nextjs.org/docs/app/building-your-application/routing/middleware#matching-paths">matchers</a>. For example, the matcher configuration below would prevent invoking the middleware for most static assets:</p>
<pre><code class="language-ts">export const config = {
  matcher: [
    /*
     * Match all request paths except for the ones starting with:
     * - api (API routes)
     * - _next/static (static files)
     * - _next/image (image optimization files)
     * - favicon.ico, sitemap.xml, robots.txt (metadata files)
     */
    &#39;/((?!api|_next/static|_next/image|favicon.ico|sitemap.xml|robots.txt).*)&#39;,
  ],
}
</code></pre>
<h3>Function Execution Time</h3>
<p>The time your serverless function takes to execute counts as its execution time, which impacts your end bill once it exceeds the limits of your plan. Any inefficient code that takes longer to execute adds directly to your total function execution time.</p>
<p>Many things can contribute to this, but one common pattern we&#39;ve seen is caching that is either missing or insufficient. Next.js offers several <a href="https://nextjs.org/docs/app/building-your-application/caching">caching techniques</a> you can use, such as the two below, sketched after the list:</p>
<ul>
<li>Using a <a href="https://nextjs.org/docs/app/building-your-application/caching#data-cache">data cache</a> to prevent unnecessary database calls or API calls</li>
<li>Using <a href="https://nextjs.org/docs/app/building-your-application/caching#request-memoization">memoization</a> to prevent too many API or database calls in the same rendering pass</li>
</ul>
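<p>A rough sketch of both techniques (the URLs and function names here are illustrative only):</p>
<pre><code class="language-ts">import { cache } from &#39;react&#39;;

// Data cache: the result of this fetch is cached and revalidated at
// most once per hour, avoiding repeated upstream calls.
export async function getPosts() {
  const res = await fetch(&#39;https://api.example.com/posts&#39;, {
    next: { revalidate: 3600 },
  });
  return res.json();
}

// Request memoization: multiple calls to getUser() with the same id
// during one rendering pass share a single underlying request.
export const getUser = cache(async (id: string) =&gt; {
  const res = await fetch(`https://api.example.com/users/${id}`);
  return res.json();
});
</code></pre>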
<p>Another cause, especially now in the age of AI APIs, is functions that run too long due to AI processing. In this case, consider queuing long-running jobs, or enable <a href="https://vercel.com/fluid">Fluid Compute</a>, a recent feature by Vercel that optimizes function invocations and reusability.</p>
<h3>Bandwidth Usage</h3>
<p>The volume of data transferred between users and Vercel, including JavaScript bundles, RSC payload, API responses, and assets, directly contributes to bandwidth usage. In the Pro plan, you receive 1 TB/month of included bandwidth, which may seem substantial but can quickly be consumed by large Next.js apps with:</p>
<ul>
<li>Large JavaScript bundles</li>
<li>Many images</li>
<li>Large API JSON payloads</li>
</ul>
<p><a href="https://vercel.com/docs/image-optimization">Image optimization</a> is crucial for reducing bandwidth usage, as images are typically large assets. By implementing image optimization, you can significantly reduce the amount of data transferred.</p>
<p>To further optimize your bandwidth usage, focus on using the <a href="https://nextjs.org/docs/app/api-reference/components/link"><code>Link</code> component</a> efficiently. This component automatically prefetches content, which can be beneficial for frequently accessed pages. However, you may want to <a href="https://nextjs.org/docs/app/api-reference/components/link#prefetch">disable</a> this behavior for infrequently accessed pages, as shown in the sketch below.</p>
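<p>Disabling prefetching is a one-line change. Here&#39;s a short sketch (the route and component name are illustrative):</p>
<pre><code class="language-tsx">import Link from &#39;next/link&#39;;

// Prefetching is on by default; turning it off for rarely visited
// pages avoids downloading their code and data ahead of time.
export function FooterLinks() {
  return (
    &lt;Link href=&quot;/terms&quot; prefetch={false}&gt;
      Terms of Service
    &lt;/Link&gt;
  );
}
</code></pre>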
<p>The <code>Link</code> component also aids in client-side navigation: when a page is cached client-side, no request is made when the user navigates to it, further reducing bandwidth usage.</p>
<p>Additionally, API and RSC payload responses count towards bandwidth usage. To minimize this impact, <a href="https://vercel.com/guides/how-to-optimize-rsc-payload-size#best-practices">always return only the minimum amount of data necessary</a> to the end user.</p>
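<p>In practice, that usually means projecting records down to the fields the UI actually renders. A sketch of a route handler doing this (the <code>db</code> client and field names are placeholders):</p>
<pre><code class="language-ts">// app/api/posts/route.ts
import { db } from &#39;@/lib/db&#39;; // placeholder for your data layer

export async function GET() {
  const posts = await db.post.findMany();

  // Send only what the page needs instead of the full records.
  const slim = posts.map(({ id, title, excerpt }) =&gt; ({ id, title, excerpt }));
  return Response.json(slim);
}
</code></pre>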
<h3>Image Transformations</h3>
<p>Every time Vercel transforms an unoptimized source image, it counts as an <a href="https://vercel.com/docs/image-optimization">image transformation</a>. After transformation, every time an optimized image is written to Vercel&#39;s CDN network and then read by the user&#39;s browser, it counts as an image cache write and an image cache read, respectively.</p>
<p>The Pro plan includes 10k transformations per month, 600k CDN cache reads, and 200k CDN cache writes. Given the high volume of image requests in many apps, it&#39;s worth checking if the associated costs can be reduced.</p>
<p>Firstly, not every image needs to be transformed. Certain types of images, such as logos and icons, small UI elements (e.g., button graphics), vector graphics, and other images you may have already optimized yourself, don&#39;t require transformation. You can store these images in the public folder and use the <a href="https://nextjs.org/docs/pages/api-reference/components/image#unoptimized"><code>unoptimized</code></a> property with the Image component to mark them as non-transformable.</p>
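<p>For example, a pre-optimized SVG logo served from the public folder might look like this sketch:</p>
<pre><code class="language-tsx">import Image from &#39;next/image&#39;;

// `unoptimized` skips Vercel&#39;s image transformation entirely for
// this asset, so it incurs no transformation cost.
export function Logo() {
  return (
    &lt;Image src=&quot;/logo.svg&quot; alt=&quot;Logo&quot; width={120} height={40} unoptimized /&gt;
  );
}
</code></pre>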
<p>Another approach is to utilize an external image provider like <a href="https://cloudinary.com/">Cloudinary</a> or <a href="https://aws.amazon.com/cloudfront/">AWS CloudFront</a>, which may have already optimized the images. In this case, you can use a <a href="https://nextjs.org/docs/pages/building-your-application/optimizing/images#loaders">custom image loader</a> to take advantage of their optimizations and avoid Vercel&#39;s image transformations.</p>
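<p>A custom loader is just a function that builds the provider&#39;s URL; you point <code>images.loaderFile</code> in your Next.js config at the file containing it. Below is a sketch of a Cloudinary-style loader, using Cloudinary&#39;s public <code>demo</code> account as a placeholder:</p>
<pre><code class="language-ts">// image-loader.ts
&#39;use client&#39;;

// Builds a Cloudinary delivery URL so the provider, not Vercel,
// performs the resizing and format conversion.
export default function cloudinaryLoader({
  src,
  width,
  quality,
}: {
  src: string;
  width: number;
  quality?: number;
}) {
  const params = [&#39;f_auto&#39;, &#39;c_limit&#39;, `w_${width}`, `q_${quality || &#39;auto&#39;}`];
  return `https://res.cloudinary.com/demo/image/upload/${params.join(&#39;,&#39;)}/${src}`;
}
</code></pre>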
<p>Finally, Next.js provides several configuration options to fine-tune image transformation; a combined sketch follows the list:</p>
<ul>
<li><a href="https://nextjs.org/docs/app/api-reference/components/image#minimumcachettl"><code>images.minimumCacheTTL</code></a>: Controls the cache duration, reducing the need for rewritten images.</li>
<li><a href="https://nextjs.org/docs/app/api-reference/components/image#formats"><code>images.formats</code></a>: Allows you to limit eligible image formats for transformation.</li>
<li><a href="https://nextjs.org/docs/app/api-reference/components/image#remotepatterns"><code>images.remotePatterns</code></a>: Defines external sources for image transformation, giving you more control over what&#39;s optimized.</li>
<li><a href="https://nextjs.org/docs/app/api-reference/components/image#quality"><code>images.quality</code></a>: Enables you to set the image quality for transformed images, potentially reducing bandwidth usage.</li>
</ul>
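<p>Here is what some of these options could look like together in <code>next.config.ts</code> (the values and hostname are illustrative only; tune them for your app):</p>
<pre><code class="language-ts">// next.config.ts
import type { NextConfig } from &#39;next&#39;;

const nextConfig: NextConfig = {
  images: {
    minimumCacheTTL: 2678400, // keep transformed images cached for 31 days
    formats: [&#39;image/webp&#39;], // limit the eligible output formats
    remotePatterns: [
      { protocol: &#39;https&#39;, hostname: &#39;images.example.com&#39; },
    ],
  },
};

export default nextConfig;
</code></pre>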
<h2>Monitoring</h2>
<p>The &quot;<a href="https://vercel.com/docs/pricing/manage-and-optimize-usage">Usage</a>&quot; tab on the team page in Vercel provides a clear view of your team&#39;s resource usage. It includes information such as function invocation counts, function durations, and fast origin transfer amounts. You can easily see how close you are to your team&#39;s limits, and if you&#39;re approaching them, by how much. This page is a great one to check regularly.</p>
<img src="https://p.ipic.vip/jccfy0.png" alt="The Usage tab on the Vercel team page" style="zoom:60%;" />

<p>However, you don&#39;t need to check it constantly. Vercel offers various aspects of spending management, and you can set <a href="https://vercel.com/docs/spend-management#managing-alert-threshold-notifications">alert thresholds</a> to get notified when you&#39;re close to or exceed your limit. This helps you proactively manage your spending and avoid unexpected charges.</p>
<p>One good feature of Vercel is its ability to <a href="https://vercel.com/docs/spend-management#pausing-projects">pause projects</a> when your spending reaches a certain point, acting as an &quot;emergency brake&quot; in the case of a DDoS attack or a very unusual spike in traffic. This stops the production deployment, and users will not be able to use your site, but at least you won&#39;t be charged for any extra usage. This option is enabled by default.</p>
<h2>Conclusion</h2>
<p>Hosting a Next.js app on Vercel offers a great developer experience, but it&#39;s also important to consider how your usage contributes to your end bill and to keep it under control. Hopefully, this blog post has cleared up some of the confusion around pricing and how to plan, optimize, and monitor your costs.</p>
<p>We hope you enjoyed this blog post. Be sure to check out our <a href="https://www.thisdot.co/blog?tags=nextjs">other blog posts</a> on Next.js for more in-depth coverage of different features of this framework.</p>
]]></description>
            <link>https://www.thisdot.co/blog/keeping-costs-in-check-when-hosting-next-js-on-vercel</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/keeping-costs-in-check-when-hosting-next-js-on-vercel</guid>
            <pubDate>Fri, 28 Mar 2025 09:18:16 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[“ChatGPT knows me pretty well… but it drew me as a white man with a man bun.” – Angie Jones on AI Bias, DevRel, and Block’s new open source AI agent “goose”]]></title>
            <description><![CDATA[<p>Angie Jones is a veteran innovator, educator, and inventor with over twenty years of industry experience and twenty-seven digital technology patents both domestically and internationally.  </p>
<p>As the <strong>VP of Developer Relations at Block</strong>, she facilitates developer training and enablement, delivering tools for developer users and open source contributors. However, her educational work doesn’t end with her day job. She is also a contributor to multiple books examining the intersection of technology and career, including <em>DevOps: Implementing Cultural Change</em> and <em>97 Things Every Java Programmer Should Know</em>, and is an active speaker on the global developer conference circuit.</p>
<p>With the release of Block’s new open source AI agent “goose”, Angie drives conversations around AI’s role in developer productivity, ethical practices, and the application of intelligent tooling. We had the chance to talk with her about the evolution of DevRel, what makes a great leader, emergent data governance practices, women who are crushing it right now in the industry, and more:</p>
<h3>Developer Advocacy is Mainstream</h3>
<p>A decade ago, <strong>Developer Relations (DevRel)</strong> wasn’t the established field it is today. It was often called <strong>Developer Evangelism</strong>, and fewer companies saw the value in having engineers speak directly to other engineers.</p>
<blockquote>
<p>“Developer Relations was more of a niche space. It’s become much more mainstream these days with pretty much every developer-focused company realizing that the best way to reach developers is with their peers.”</p>
</blockquote>
<p>That shift has opened up <strong>more opportunities for engineers</strong> who enjoy teaching, community-building, and breaking down complex technical concepts. But because DevRel straddles multiple functions, its place within an organization remains up for debate—<strong>should it sit within Engineering, Product, Marketing, or even its own department?</strong> There’s no single answer, but its cross-functional nature makes it a crucial bridge between technical teams and the developers they serve.</p>
<h3>Leadership Is Not an Extension of Engineering Excellence</h3>
<p>Most engineers assume that excelling as an IC is enough to prepare them for leadership, but Angie warns that <strong>this is a common misconception</strong>.</p>
<p>She’s seen firsthand how technical skills don’t always equate to <strong>strong leadership abilities</strong>—we’ve all worked under leaders who made us wonder <em>how they got there</em>. When she was promoted into leadership, Angie was determined <strong>not to become one of those leaders</strong>:</p>
<blockquote>
<p>“This required humility. Acknowledging that while I was an expert in one area, I was a novice in another.”</p>
</blockquote>
<p>Instead of assuming leadership would come naturally, she took a deliberate approach to learning—taking courses, reading books, and working with executive coaches to build leadership skills the right way.</p>
<h3>Goose: An Open Source AI Assistant That Works for You</h3>
<p>At Block, Angie is working on a tool called <a href="https://block.github.io/goose/">goose</a>, <strong>an open-source AI agent that runs locally on your machine</strong>. Unlike many AI assistants that are locked into specific platforms, <strong>goose is designed to be fully customizable:</strong></p>
<blockquote>
<p>“You can use your LLM of choice and integrate it with any API through the Model Context Protocol (MCP).”</p>
</blockquote>
<p>That flexibility means <strong>goose can be tailored to fit developers’ workflows.</strong> Angie gives an example of what this looks like in action:</p>
<blockquote>
<p>“Goose, take this Figma file and build out all of the components for it. Check them into a new GitHub repo called @org/design-components and send a message to the #design channel in Slack informing them of the changes.”</p>
</blockquote>
<p>And just like that, it’s done—<strong>no manual intervention required.</strong></p>
<h3>The Future of Data Governance</h3>
<p>As AI adoption accelerates, <strong>data governance has become a top priority for companies</strong>. Strong governance requires clear policies, security measures, and accountability. Angie points out that organizations are already making moves in this space:</p>
<blockquote>
<p>“Cisco recently launched a product called AI Defense to help organizations enhance their data governance frameworks and ensure that AI deployments align with established data policies and compliance requirements.”</p>
</blockquote>
<p>According to Angie, in the next five years, <strong>we can expect more structured frameworks around AI data usage</strong>, especially as businesses navigate privacy concerns and regulatory compliance.</p>
<h3>Bias in AI Career Tools: Helping or Hurting?</h3>
<p>AI-powered resume screeners and promotion predictors are becoming <strong>more common in hiring</strong>, but are they helping or hurting underrepresented groups? Angie’s own experience with AI bias was eye-opening:</p>
<blockquote>
<p>“I use ChatGPT every day. It knows me pretty well. I asked it to draw a picture of what it thinks my current life looks like, and it drew me as a white male (with a man bun).”</p>
</blockquote>
<p>When she called it out, the AI responded:</p>
<blockquote>
<p>“No, I don’t picture you that way at all, but it sounds like the illustration might’ve leaned into the tech stereotype aesthetic a little too much.”</p>
</blockquote>
<p>This illustrates a bigger problem—<strong>AI often reflects human biases at scale</strong>. However, there are emerging solutions, such as identity masking, which removes names, race, and gender markers so that only skills are evaluated.</p>
<blockquote>
<p>“In scenarios like this, minorities are given a fairer shot.”</p>
</blockquote>
<p>It’s a step toward a more equitable hiring process, but it also surfaces the <strong>need for constant vigilance in AI development to prevent harmful biases</strong>.</p>
<h3>Women at the Forefront of AI Innovation</h3>
<p>While AI is reshaping nearly every industry, <strong>women are playing a leading role in its development</strong>. Angie highlights several technologists:</p>
<blockquote>
<p>“I’m so proud to see women are already at the forefront of AI innovation. I see amazing women leading AI research, training, and development such as Mira Murati, Timnit Gebru, Joelle Pineau, Meredith Whittaker, and even Block’s own VP of Data &amp; AI, Jackie Brosamer.”</p>
</blockquote>
<p>These women are influencing not just the technical advancements in AI but also <strong>the ethical considerations that come with it</strong>. </p>
<h3>Connect with Angie</h3>
<p>Angie Jones is an undeniable pillar of the online JavaScript community, and <strong>it isn’t hard to connect with her!</strong></p>
<p>You can find Angie on <a href="https://x.com/techgirl1908">X (Twitter)</a>, <a href="https://www.linkedin.com/in/angiejones/">LinkedIn</a>, or <a href="https://angiejones.tech/">on her personal site</a> (where you can also access her free LinkedIn courses).</p>
<p>Learn more about <a href="https://block.github.io/goose/">goose by Block</a>.</p>
<p><a href="https://linktr.ee/JacobAshley">Sticker Illustration by Jacob Ashley</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/chatgpt-knows-me-pretty-well-but-it-drew-me-as-a-white-man-with-a-man-bun</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/chatgpt-knows-me-pretty-well-but-it-drew-me-as-a-white-man-with-a-man-bun</guid>
            <pubDate>Thu, 13 Mar 2025 19:05:36 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[“It Sounds a Little Dystopian, But Also Kind of Amazing”: Conversations on Long Term AI Agents and "Winning" Product Hunt with Ellie Zubrowski]]></title>
            <description><![CDATA[<p>Ellie Zubrowski doesn’t walk a traditional path. </p>
<p>In the three years since graduating from a university program in Business Administration, she biked across the U.S., studied Kung Fu in China, learned Mandarin just for fun, and completed the #100DaysOfCode challenge after deciding she wanted a career switch. </p>
<p>That same sense of curiosity and willingness to jump into the unknown now fuels her work as a Developer Advocate at Pieces, where she leads product launches, mentors job seekers, and helps developers learn how to best leverage Pieces’ Long-Term Memory Agent. </p>
<p>Her journey into tech was guided not just by a desire to learn how to code and break into the industry, but by a fascination with the structure of language itself.</p>
<blockquote>
<p>“There are so many parallels between human languages and programming languages,” she says. “That realization really made me fall in love with software.”</p>
</blockquote>
<p>We spoke with Ellie about launching a #1 Product Hunt release, her predictions for AI agents, and why conferences don’t have to break your budget.</p>
<h3>Launching LTM-2 to the Top of Product Hunt</h3>
<p>Recently, Ellie led the launch of Pieces’ Long-Term Memory Agent (LTM-2), which took the top spot on Product Hunt—a major win for the team and their community.</p>
<blockquote>
<p>“I’m super competitive,” she admits. “So I really wanted us to win.”</p>
</blockquote>
<p>The launch was fully organic—no paid promotions, just coordinated team efforts, a well-prepared content pipeline, and an ambassador program that brought in authentic engagement across X, Discord, and Reddit.</p>
<p>She documented their entire strategy <a href="https://pieces.app/blog/product-hunt-ltm2-win">in this blog post</a>, and credits the success not just to good planning but to a passionate developer community that believed in the product. </p>
<p>Following a successful performance at Product Hunt, Ellie is committed to keeping Pieces’ user community engaged and contributing to its technological ecosystem.</p>
<blockquote>
<p>“Although I’m still fairly new to DevRel (coming up on a year at Pieces!), I think success comes down to a few things: developer adoption and retention, user feedback, community engagement, and maintaining communication with engineering.”</p>
</blockquote>
<h3>Why AI Agents Are the Next Big Thing</h3>
<p>Ellie sees a major shift on the horizon: AI that doesn’t wait for a prompt.</p>
<blockquote>
<p>“The biggest trend of 2025 seems to be AI agents,” she explains, “or AI that acts proactively instead of reactively.”</p>
</blockquote>
<p>Until now, most of us have had to tell AI exactly what to do—whether that’s drafting emails, debugging code, or generating images. </p>
<p>But Ellie imagines a near future where AI tools act more like intelligent teammates than assistants—running locally, deeply personalized, and working in the background to handle the repetitive stuff.</p>
<blockquote>
<p>“Imagine something that knows how you work and quietly handles your busy work while you focus on the creative parts,” she says. “It sounds a little dystopian, but also kind of amazing.”</p>
</blockquote>
<p>Whether we hit that level of autonomy in 2025 or (likely) have to wait until 2026, she believes the move toward agentic AI is inevitable—and it’s changing how developers think about productivity, ownership, and trust.</p>
<p><a href="https://pieces.app/blog/large-action-models-the-future-of-llms">You can read more of Ellie’s 2025 LLM predictions here!</a></p>
<h3>The Secret to Free Conferences (and Winning the GitHub Claw Machine)</h3>
<p>Ellie will be the first to tell you: attending a tech conference can be a total game-changer.</p>
<p>“Attending my first tech conference completely changed my career trajectory,” she says. “It honestly changed my life.”</p>
<p>And the best part? You might not even need to pay for a ticket.</p>
<blockquote>
<p>“Most conferences offer scholarship tickets,” Ellie explains. “And if you’re active in dev communities, there are always giveaways. You just have to know where to look.”</p>
</blockquote>
<p>In her early days of job hunting, Ellie made it to multiple conferences for free (minus travel and lodging)—which she recommends to anyone trying to break into tech.</p>
<p>Also, she lives for conference swag. One of her all-time favorite moments? Winning a GitHub Octocat from the claw machine at RenderATL.</p>
<blockquote>
<p>“She’s one of my prized possessions,” Ellie laughs. <a href="https://x.com/elliezub/status/1801304986522988958">Proof here. 🐙</a></p>
</blockquote>
<p>Her advice: if you’re even a little curious about going to a conference—go. Show up. Say hi to someone new. You never know what connection might shape your next step.</p>
<h3>Ellie’s Journeys Away from her Desk</h3>
<p>Earlier this year, Ellie took a break from product launches and developer events to visit China for Chinese New Year with her boyfriend’s family—and turned the trip into a mix of sightseeing, food adventures, and a personal mission: document every cat she met. <a href="https://x.com/elliezub/status/1890753005630235047">(You can follow the full feline thread here 🐱)</a></p>
<p>The trip took them through Beijing, Nanjing, Taiyuan, Yuci, Zhùmǎdiàn, and Yangzhou, where they explored palaces, museums, and even soaked in a hot spring once reserved for emperors. </p>
<blockquote>
<p>“Fancy, right?” Ellie jokes.</p>
</blockquote>
<p>But the real highlight? The food.</p>
<blockquote>
<p>“China has some of the best food in the world,” she says. “And lucky for me, my boyfriend’s dad is an amazing cook—every meal felt like a five-star experience.”</p>
</blockquote>
<h3>What’s Next?</h3>
<p>With a YouTube series on the way, thousands of developers reached through her workshops, and an eye on the next generation of AI tooling, Ellie Zubrowski is loving her experience as a developer advocate.</p>
<p><a href="https://x.com/elliezub">Follow @elliezub on X</a> to stay in the loop on her work, travels, tech experiments, and the occasional Octocat sighting. She’s building in public, cheering on other devs, and always down to share what she’s learning along the way.</p>
<p><a href="https://pieces.app/">Learn more about Pieces, the long-term LLM agent.</a></p>
<p><a href="https://linktr.ee/JacobAshley">Sticker Illustration by Jacob Ashley</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/it-sounds-a-little-dystopian-but-also-kind-of-amazing-conversations-on-long</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/it-sounds-a-little-dystopian-but-also-kind-of-amazing-conversations-on-long</guid>
            <pubDate>Fri, 28 Mar 2025 19:01:31 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Incremental Hydration in Angular]]></title>
            <description><![CDATA[<h1>Incremental Hydration in Angular</h1>
<p>Some time ago, I wrote a <a href="https://www.thisdot.co/blog/ssr-finally-a-first-class-citizen-in-angular">post about SSR finally becoming a first-class citizen in Angular</a>. It turns out that the Angular team really treats SSR as a priority, and they have been working tirelessly to make SSR even better.</p>
<p>As the previous blog post mentioned, full-page hydration was launched in Angular 16 and made stable in Angular 17, providing a great way to improve your Core Web Vitals. Another feature aimed to help you improve your INP and other Core Web Vitals was introduced in Angular 17: <a href="https://angular.dev/guide/defer">deferrable views</a>. Using the <code>@defer</code> blocks allows you to reduce the initial bundle size and defer the loading of heavy components based on certain triggers, such as the section entering the viewport.</p>
<p>Then, <a href="https://github.com/angular/angular/discussions/57664">in September 2024</a>, the smart folks at Angular figured out that they could build upon those two features, allowing you to mark parts of your application to be server-rendered in a dehydrated state and then hydrated incrementally when needed, hence incremental hydration.</p>
<p>I’m sure you know what hydration is. In short, the server sends fully formed HTML to the client, ensuring that the user sees meaningful content as quickly as possible. Once JavaScript is loaded on the client side, the framework reconciles the rendered DOM with component logic, event handlers, and state, effectively <strong>hydrating</strong> the server-rendered content.</p>
<p>But what exactly does &quot;dehydrated&quot; mean, you might ask? Here&#39;s what will happen when you mark a part of your application to be incrementally hydrated:</p>
<ol>
<li><strong>Server-Side Rendering (SSR):</strong> The content marked for incremental hydration is rendered on the server.</li>
<li><strong>Skipped During Client-Side Bootstrapping:</strong> The dehydrated content is not initially hydrated or bootstrapped on the client, reducing initial load time.</li>
<li><strong>Dehydrated State:</strong> The code for the dehydrated components is excluded from the initial client-side bundle, optimizing performance.</li>
<li><strong>Hydration Triggers:</strong> The application listens for specified hydration conditions (e.g., on interaction, on viewport), defined with a <code>hydrate</code> trigger in the <code>@defer</code> block.</li>
<li><strong>On-Demand Hydration:</strong> Once the hydration conditions are met, Angular downloads the necessary code and hydrates the components, allowing them to become interactive without layout shifts.</li>
</ol>
<h2>How to Use Incremental Hydration</h2>
<p>Thanks to <a href="https://x.com/marktechson">Mark Thompson</a>, who recently hosted a <a href="https://www.youtube.com/watch?v=I4n1IcZ3vRs">feature showcase on incremental hydration</a>, we can show some code.</p>
<p>The first step is to enable incremental hydration in your Angular application&#39;s <code>appConfig</code> using the <code>provideClientHydration</code> provider function:</p>
<pre><code class="language-typescript">// app/app.config.ts
export const appConfig: ApplicationConfig = {
  providers: [provideClientHydration(withIncrementalHydration())],
};
</code></pre>
<p>Then, you can mark the components you want to be incrementally hydrated using the <a href="https://angular.dev/guide/templates/defer">@defer</a> block with a <code>hydrate</code> trigger:</p>
<pre><code class="language-html">// Trigger the @defer block immediately after non-deferred content has finished rendering
// and start hydrating once the component enters the viewport
@defer (on immediate; hydrate on viewport) {
&lt;app-incremental-hydrated-component&gt;&lt;/app-incremental-hydrated-component&gt;
}
</code></pre>
<p>And that&#39;s it! You now have a component that will be server-rendered dehydrated and hydrated incrementally when it becomes visible to the user.</p>
<p>But what if you want to hydrate the component on interaction or some other trigger? Or maybe you don&#39;t want to hydrate the component at all?</p>
<p>The same <a href="https://angular.dev/guide/defer#triggers">triggers</a> already supported in <code>@defer</code> blocks are available for hydration:</p>
<ul>
<li><code>idle</code>: Hydrate once the browser reaches an idle state.</li>
<li><code>viewport</code>: Hydrate once the component enters the viewport.</li>
<li><code>interaction</code>: Hydrate once the user interacts with the component through <code>click</code> or <code>keydown</code> triggers.</li>
<li><code>hover</code>: Hydrate once the user hovers over the component.</li>
<li><code>immediate</code>: Hydrate immediately when the component is rendered.</li>
<li><code>timer</code>: Hydrate after a specified time delay.</li>
<li><code>when</code>: Hydrate when a provided conditional expression is met.</li>
</ul>
<p>And on top of that, there&#39;s a new trigger available for hydration:</p>
<ul>
<li><code>never</code>: When used, the component will remain static and not hydrated.</li>
</ul>
<p>The <code>never</code> trigger is handy when you want to exclude a component from hydration altogether, making it a completely static part of the page.</p>
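<p>For instance, combining two of the triggers above might look like this sketch (the component names are illustrative):</p>
<pre><code class="language-html">&lt;!-- Hydrate this heavy widget only when the user interacts with it --&gt;
@defer (hydrate on interaction) {
  &lt;app-usage-chart&gt;&lt;/app-usage-chart&gt;
}

&lt;!-- Server-render the footer, but never ship or run its JavaScript --&gt;
@defer (hydrate never) {
  &lt;app-static-footer&gt;&lt;/app-static-footer&gt;
}
</code></pre>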
<p>Personally, I&#39;m very excited about this feature and can&#39;t wait to try it out. How about you?</p>
]]></description>
            <link>https://www.thisdot.co/blog/incremental-hydration-in-angular</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/incremental-hydration-in-angular</guid>
            <pubDate>Fri, 14 Mar 2025 14:00:47 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Lessons from the DOGE Website Hack: How to Secure Your Next.js Website]]></title>
            <description><![CDATA[<h1>Lessons from the DOGE Website Hack: How to Secure Your Next.js Website</h1>
<p>The Department of Government Efficiency (DOGE) launched a new website, <a href="https://doge.gov">doge.gov</a>. Within days, it was defaced with messages from hackers.</p>
<p>The culprit? A misconfigured database was left open, letting anyone edit content. Reports suggest the site was built on Cloudflare Pages, possibly with a Next.js frontend pulling data dynamically. While the tech stack hasn’t been officially confirmed, early reporting around the website makes us confident that Next.js was used.</p>
<p>Let’s dive into what went wrong—and how you can secure your own Next.js projects.</p>
<h2>What Happened to DOGE.gov?</h2>
<p><img src="//images.ctfassets.net/zojzzdop0fzx/4OqX7CiMMNLpSwSnab1E8T/93d61e471aa98e7487d4a6a8b4f41222/dogescreenshot.png" alt="Hacked Doge Website Screenshot"></p>
<p>The hack was a classic case of security 101 gone wrong. The database—likely hosted in the cloud—was accessible without authentication. No passwords, no API keys, nothing. Hackers simply connected to it and started scribbling their graffiti. Hosted on Cloudflare Pages (not government servers), the site might have been rushed, skipping critical security checks. For a .gov domain, this is surprising—but it&#39;s a reminder that even big names can miss best practices.</p>
<p>It’s easy to imagine how this happened: an unsecured Server Action used on the client side, a serverless function or API route fetching data from an unsecured database, no middleware enforcing access control, and a deployment where nobody double-checked the cloud configs. Let’s break down how to avoid this in your own Next.js app.</p>
<h2>Securing Your Next.js Website: 5 Key Steps</h2>
<p>Next.js is a powerhouse for building fast, scalable websites, but its flexibility means you’re responsible for locking the doors. Here’s how to keep your site safe.</p>
<h3>1. Double-check your Server Actions</h3>
<p>If Next.js 13 or later was used, Server Actions might’ve been part of the mix—think form submissions or dynamic updates straight from the frontend. These are slick for handling server-side logic without a separate API, but they’re a security risk if not handled right. An unsecured Server Action could’ve been how hackers slipped into the database. </p>
<p>Why?</p>
<p>Next.js generates a public endpoint for each Server Action. If these Server Actions lack proper authentication and authorization measures, they become vulnerable to unauthorized data access.</p>
<p>Example:</p>
<p><img src="//images.ctfassets.net/zojzzdop0fzx/3sJq4Qj7BXtDvDEmsR0klb/9fa8419d572946f908122337116d5d84/screenshot2.png" alt="next.js server action"></p>
<ul>
<li><strong>Restrict Access</strong>: Always validate the user’s session or token before executing sensitive operations.</li>
<li><strong>Limit Scope</strong>: Only allow Server Actions to perform specific, safe tasks—don’t let them run wild with full database access.</li>
<li><strong>Guard client-side usage:</strong> Don’t call Server Actions from the client side without authentication and authorization checks (see the sketch after this list).</li>
</ul>
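<p>Here&#39;s a minimal sketch of what a guarded Server Action can look like. The <code>auth()</code> session helper and <code>db</code> client are placeholders for whatever you use:</p>
<pre><code class="language-javascript">&#39;use server&#39;;

import { auth } from &#39;./auth&#39;; // placeholder session helper
import { db } from &#39;./db&#39;; // placeholder data layer

export async function updateContent(formData) {
  // Reject unauthenticated callers before touching the database.
  const session = await auth();
  if (!session?.user) {
    throw new Error(&#39;Unauthorized&#39;);
  }

  // Keep the action&#39;s scope narrow: one specific, safe operation.
  await db.page.update({
    where: { id: formData.get(&#39;pageId&#39;) },
    data: { body: formData.get(&#39;body&#39;) },
  });
}
</code></pre>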
<h3>2. Lock Down Your Database Access</h3>
<p>A similar incident happened in 2020: a hacker used an automated script to scan for misconfigured MongoDB databases, wiping the contents of about 23 thousand databases that had been left wide open and leaving behind a ransom note asking for money.</p>
<p>So whether you’re using MongoDB, PostgreSQL, or Cloudflare’s D1, never leave it publicly accessible. Here’s what to do:</p>
<ul>
<li><strong>Set Authentication:</strong> Always require credentials (username/password or API keys) to connect. Store these in environment variables (e.g., .env.local for Next.js) and access them via process.env.</li>
<li><strong>Whitelist IPs</strong>: If your database is cloud-hosted, restrict access to your Next.js app’s server or Vercel deployment IP range.</li>
<li><strong>Use VPCs:</strong> For extra security, put your database in a Virtual Private Cloud (VPC) so it&#39;s not even exposed to the public internet. If you are using Vercel, you can create private connections between Vercel Functions and your backend cloud, like databases or other private infrastructure, using <a href="https://vercel.com/docs/security/secure-compute">Vercel Secure Compute</a>.</li>
</ul>
<p>Example: In a Next.js API route (<code>/pages/api/data.js</code>):</p>
<pre><code class="language-javascript">
import { MongoClient } from &#39;mongodb&#39;;
export default async function handler(req, res) {
  const client = new MongoClient(process.env.MONGO_URI); // Secure URI from .env

  try {
    await client.connect();
    const db = client.db(&#39;myDatabase&#39;);
    const data = await db.collection(&#39;myCollection&#39;).find().toArray();
    res.status(200).json(data);
  } catch (error) {
    res.status(500).json({ message: &#39;Database error&#39; });
  } finally {
    await client.close();
  }
}
</code></pre>
<blockquote>
<p>Tip: Don’t hardcode MONGO_URI—keep it in .env and add .env to .gitignore.</p>
</blockquote>
<h3>3. Secure Your API Routes</h3>
<p>Next.js API routes are awesome for server-side logic, but they’re a potential entry point if left unchecked. The site might’ve had an API endpoint feeding its database updates without protection.</p>
<ul>
<li><strong>Add Authentication:</strong> Use a library like next-auth or JSON Web Tokens (JWT) to secure routes.</li>
<li><strong>Rate Limit:</strong> Prevent abuse with something like <a href="https://www.npmjs.com/package/rate-limiter-flexible">rate-limiter-flexible</a>.</li>
</ul>
<p>Example:</p>
<pre><code class="language-javascript">
import { getSession } from &#39;next-auth/react&#39;;

export default async function handler(req, res) {

  const session = await getSession({ req });

  if (!session) {

    return res.status(401).json({ message: &#39;Unauthorized&#39; });

  }

  // Proceed with database operations

  res.status(200).json({ message: &#39;Secure data&#39; });

}
</code></pre>
<h3>4. Double-Check Your Cloud Config</h3>
<p>A misconfigured cloud setup may have exposed the database. If you’re deploying on Vercel, Netlify, or Cloudflare:</p>
<ul>
<li><strong>Environment Variables:</strong> Store secrets in your hosting platform’s dashboard, not in code.</li>
<li><strong>Serverless Functions:</strong> Ensure they’re not leaking sensitive data in responses. Log errors, not secrets.</li>
<li><strong>Access Controls:</strong> Verify your database firewall rules only allow connections from your app.</li>
</ul>
<h3>5. Sanitize and Validate Inputs</h3>
<p>Hackers love injecting junk into forms or APIs. If your app lets users submit data (e.g., feedback forms), unvalidated inputs could’ve been a vector. In Next.js:</p>
<ul>
<li><strong>Sanitize:</strong> Use libraries like <a href="https://www.npmjs.com/package/sanitize-html">sanitize-html</a> for user inputs.</li>
<li><strong>Validate:</strong> Check data types and lengths before hitting your database.</li>
</ul>
<p>Example:</p>
<pre><code class="language-javascript">
import sanitizeHtml from &#39;sanitize-html&#39;;

export default async function handler(req, res) {

  if (req.method === &#39;POST&#39;) {
    const { input } = req.body;
    const cleanInput = sanitizeHtml(input, {
      allowedTags: [],
      allowedAttributes: {},
    });

    // Save cleanInput to database

    res.status(200).json({ message: &#39;Success&#39; });

  }
}
</code></pre>
<h2>Summary</h2>
<p>The DOGE website hack serves as a reminder of the ever-present need for robust security measures in web development. By following the outlined steps (double-checking Server Actions, locking down database access, securing API routes, verifying cloud configurations, and sanitizing and validating inputs), you can enhance the security posture of your Next.js applications and protect them from potential threats. Remember, a proactive approach to security is always the best defense.</p>
]]></description>
            <link>https://www.thisdot.co/blog/lessons-from-the-doge-website-hack-how-to-secure-your-next-js-website</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/lessons-from-the-doge-website-hack-how-to-secure-your-next-js-website</guid>
            <pubDate>Fri, 07 Mar 2025 13:54:53 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Integrating Playwright Tests into Your GitHub Workflow with Vercel]]></title>
            <description><![CDATA[<p>Vercel previews offer a great way to test PRs for a project. They have a predefined environment and don’t require any additional setup work from the reviewer to test changes quickly. Many projects also use end-to-end tests with Playwright as part of the review process to ensure that no regressions slip uncaught.</p>
<p>Usually, workflows configure Playwright to run against a project running on the GitHub Actions worker itself, maybe with dependencies in Docker containers as well. But why bother setting all that up and configuring yet another environment for your app to run in when there’s a working preview right there? Not only that, but the Vercel preview will be more similar to production since it runs on the same infrastructure, allowing you to be more confident about the accuracy of your tests.</p>
<p>In this article, I’ll show you how you can run Playwright against the Vercel preview associated with a PR.</p>
<h2>Setting up the Vercel Project</h2>
<p>To set up a project in Vercel, we first need to have a codebase. I’m going to use the Next.js starter, but you can use whatever you like. What technology stack you use for this project won’t matter, as integrating Playwright with it will be the same experience.</p>
<p>You can create a Next.js project with the following command:</p>
<pre><code class="language-yaml">npx create-next-app@latest
</code></pre>
<p>If you’ve selected all of the defaults, you should be able to run <code>npm run dev</code> and navigate to the app at <code>http://localhost:3000</code>.</p>
<h2>Setting up Playwright</h2>
<p>We will set up Playwright the standard way and make a few small changes to the configuration and the example test so that they run against our site and not the Playwright site. Set up Playwright in our existing project by running the following command:</p>
<pre><code class="language-bash">npm init playwright@latest
</code></pre>
<p>Install all browsers when prompted, and for the workflow question, say no, since the workflow we&#39;re going to use works differently from the default one. The default workflow doesn&#39;t set up a development server, and if that were enabled, it would run on the GitHub Actions virtual machine instead of against our Vercel deployment.</p>
<p>To make Playwright run tests against the Vercel deployment, we&#39;ll need to define a <code>baseUrl</code> in <code>playwright.config.ts</code> and send an additional header called <code>X-Vercel-Protection-Bypass</code>, where we&#39;ll pass a deployment protection bypass secret so that we don&#39;t get blocked from making requests to the deployment. I&#39;ll cover how to generate this secret and add it to GitHub as an environment variable later.</p>
<pre><code class="language-jsx">export default defineConfig({
    ...

  use: {
    /* Base URL to use in actions like `await page.goto(&#39;/&#39;)`. */
    baseURL: process.env.DEPLOYMENT_URL ?? &quot;http://127.0.0.1:3000&quot;,
    extraHTTPHeaders: {
      &quot;X-Vercel-Protection-Bypass&quot;:
        process.env.VERCEL_AUTOMATION_BYPASS_SECRET ?? &quot;&quot;,
    },

    /* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
    trace: &quot;on-first-retry&quot;,
  },

    ...
}
</code></pre>
<p>Our GitHub workflow will set the <code>DEPLOYMENT_URL</code> environment variable automatically.</p>
<p>Now, in <code>tests/example.spec.ts</code> let’s rewrite the tests to work against the Next.js starter that we generated earlier:</p>
<pre><code class="language-tsx">import { test, expect } from &quot;@playwright/test&quot;;

test(&quot;has title&quot;, async ({ page }) =&gt; {
  await page.goto(&quot;/&quot;);
  await expect(page).toHaveTitle(/Create Next App/);
});

test(&quot;has deploy button&quot;, async ({ page }) =&gt; {
  await page.goto(&quot;/&quot;);
  await expect(page.getByRole(&quot;link&quot;, { name: &quot;Deploy now&quot; })).toBeVisible();
});
</code></pre>
<p>This is similar to the default test provided by Playwright. The main difference is that we&#39;re loading pages relative to <code>baseURL</code> instead of Playwright&#39;s website. With that done and your Next.js dev server running, you should be able to run <code>npx playwright test</code> and see 6 passing tests against your local server. Now that the boilerplate is handled, let&#39;s get to the interesting part.</p>
<h2>The Workflow</h2>
<p>There is a lot going on in the workflow that we’ll be using, so we’ll go through it step by step, starting from the top. At the top of the file, we name the workflow and specify when it will run.</p>
<pre><code class="language-yaml">name: E2E Tests (Playwright)

on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main
</code></pre>
<p>This workflow will run for new PRs targeting the default branch and whenever new commits are merged to it. If you only want the workflow to run against PRs, you can remove the <code>push</code> object.</p>
<p>Be careful about running workflows against your <code>main</code> branch if the deployment associated with it in Vercel is the production deployment. Some tests might not be safe to run against production, such as destructive tests or those that modify customer data. In our simple example, however, this isn&#39;t something to worry about.</p>
<h3>Installing Playwright in the Virtual Machine</h3>
<p>Workflows have jobs associated with them, and each job has multiple steps. Our test job takes a few steps to set up our project and install Playwright.</p>
<pre><code class="language-yaml">jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: &#39;npm&#39;

      - name: Install npm dependencies
        run: npm ci

      - name: Install system dependencies needed by Playwright
        run: sudo npx playwright install-deps

      - name: Install all supported Playwright browsers
        run: npx playwright install
</code></pre>
<p>The <code>actions/checkout@v4</code> step clones our code since it isn’t available straight out of the gate. After that, we install Node v22 with <code>actions/setup-node@v4</code>, which, at the time of writing this article, is the latest LTS available. The latest LTS version of Node should always work with Playwright. With the project cloned and Node installed, we can install dependencies now. We run <code>npm ci</code> to install packages using the versions specified in the lock file.</p>
<p>After our JS dependencies are installed, we have to install dependencies for Playwright now. <code>sudo npx playwright install-deps</code> installs all system dependencies that Playwright needs to work using <code>apt</code>, which is the package manager used by Ubuntu. This command needs to be run as the administrative user since higher privilege is needed to install system packages. Playwright’s dependencies aren’t all available in <code>npm</code> because the browser engines are native code that has native library dependencies that aren’t in the registry.</p>
<h3>Vercel Preview URL and GitHub Action Await Vercel</h3>
<p>The next couple of steps are where the magic happens. We need two things to run our tests against the deployment. First, we need the URL of the deployment we want to test. Second, we want to wait until the deployment is ready before we run our tests. We have written about this topic before <a href="https://www.thisdot.co/blog/how-to-run-end-to-end-tests-on-vercel-preview-deployments">on our blog</a> if you want more information about this step, but we&#39;ll reiterate some of that here.</p>
<p>Thankfully, the community has created GitHub actions that let us do this: <code>zentered/vercel-preview-url</code> and <code>UnlyEd/github-action-await-vercel</code>. Here is how you can use these actions:</p>
<pre><code class="language-yaml">jobs:
  test:
    runs-on: ubuntu-latest
    steps:
        ...

      - name: Get the Vercel preview url
        id: vercel_preview_url
        uses: zentered/vercel-preview-url@v1.4.0
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
        with:
          vercel_app: &#39;playwright-vercel-preview-demo&#39;

      - uses: UnlyEd/github-action-await-vercel@v2.0.0
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
        with:
          deployment-url: ${{ format(&#39;https://{0}&#39;, steps.vercel_preview_url.outputs.preview_url) }}
          timeout: 420
          poll-interval: 15
</code></pre>
<p>There are a few things to take note of here. Firstly, some variables need to be set that will differ from project to project. <code>vercel_app</code> in the <code>zentered/vercel-preview-url</code> step needs to be set to the name of your project in Vercel that was created earlier.</p>
<p>The other variable that you need is the <code>VERCEL_TOKEN</code> environment variable. You can get this by going to <strong>Vercel &gt; Account Settings &gt; Tokens</strong> and creating a token in the form that appears. For the scope, select the account that has your project.</p>
<p>To put <code>VERCEL_TOKEN</code> into GitHub, navigate to your repo, go to <strong>Settings &gt; Secrets and variables &gt; Actions</strong> and add it to <strong>Repository secrets</strong>. </p>
<p>We should also add <code>VERCEL_AUTOMATION_BYPASS_SECRET</code>. In Vercel, go to your project, then navigate to <strong>Settings &gt; Deployment Protection &gt; Protection Bypass for Automation</strong>. From here, you can add the secret, copy it to your clipboard, and put it in your GitHub action environment variables just like we did with <code>VERCEL_TOKEN</code>.</p>
<p>With the variables taken care of, let’s take a look at how these two steps work together. You will notice that the <code>zentered/vercel-preview-url</code> step has an ID set to <code>vercel_preview_url</code>. We need this so we can pass the URL we receive to the <code>UnlyEd/github-action-await-vercel</code> action, as it needs a URL to know which deployment to wait on.</p>
<h3>Running Playwright</h3>
<p>After the last steps we just added, our deployment should be ready to go, and we can run our tests! The following steps will run the Playwright tests against the deployment and save the results to GitHub:</p>
<pre><code class="language-yaml">jobs:
  test:
    runs-on: ubuntu-latest
    steps:
        ...

      - name: Run E2E tests
        run:
          npx playwright test
        env:
          DEPLOYMENT_URL: ${{ format(&#39;https://{0}&#39;, steps.vercel_preview_url.outputs.preview_url) }}
          VERCEL_AUTOMATION_BYPASS_SECRET: ${{ secrets.VERCEL_AUTOMATION_BYPASS_SECRET }}

      - name: Upload the Playwright report
        uses: actions/upload-artifact@v4
        if: always() # Always run regardless of whether the tests pass or fail
        with:
          name: playwright-report
          path: ${{ format(&#39;{0}/playwright-report/&#39;, github.workspace) }}
          retention-days: 30
</code></pre>
<p>In the first step, where we run the tests, we pass in the environment variables needed by our Playwright configuration that’s stored in <code>playwright.config.ts</code>. <code>DEPLOYMENT_URL</code> uses the Vercel deployment URL we got in an earlier step, and <code>VERCEL_AUTOMATION_BYPASS_SECRET</code> gets passed the secret with the same name directly from the GitHub secret store.</p>
<p>The second step uploads a report of how the tests did to GitHub, regardless of whether they’ve passed or failed. If you need to access these reports, you can find them in the GitHub action log. There will be a link in the last step that will allow you to download a zip file.</p>
<p>Once this workflow is in the default branch, it should start working for all new PRs! It’s important to note that this won’t work for forked PRs unless they are explicitly approved, as that’s a potential security hazard that can lead to secrets being leaked. You can read more about this in the <a href="https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#controlling-changes-from-forks-to-workflows-in-public-repositories">GitHub documentation</a>.</p>
<h2>One Caveat</h2>
<p>There’s one caveat that is worth mentioning with this approach, which is latency. Since your application is being served by Vercel and not locally on the GitHub action instance itself, there will be longer round-trips to it. This could result in your tests taking longer to execute. How much latency there is can vary based on what region your runner ends up being hosted in and whether the pages you’re loading are served from the edge or not.</p>
<h2>Conclusion</h2>
<p>Running your Playwright tests against Vercel preview deployments provides a robust way of testing new code in an environment that more closely aligns with production. Doing this also eliminates the need to create and maintain a second test environment for your project.</p>
]]></description>
            <link>https://www.thisdot.co/blog/integrating-playwright-tests-into-your-github-workflow-with-vercel</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/integrating-playwright-tests-into-your-github-workflow-with-vercel</guid>
            <pubDate>Tue, 25 Feb 2025 14:00:02 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[CSS Container Queries, what are they?]]></title>
            <description><![CDATA[<h1>CSS Container queries, what are they?</h1>
<h1>Intro</h1>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_media_queries/Using_media_queries">Media queries</a> have always been crucial to building web applications. They help make our apps more accessible and easier to use and ensure we reach most of our audience. Media queries have been essential in frontend development to create unique user interfaces.</p>
<p>But now, there’s something new: <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_containment/Container_queries">Container queries.</a></p>
<p>In this blog post, we’ll explore what Container queries are, how they differ from media queries, and why they’re so amazing.</p>
<p>So, let’s get started!</p>
<h1>Refresh on Media queries</h1>
<p>Media queries have been available in browsers for a long time, but they didn’t become popular until around 2010 when mobile devices started to take off.</p>
<p>Media queries let us add specific styles based on the type of device, like screens or printers. This is especially helpful for creating modern, responsive apps.</p>
<p>A simple use of Media queries would be changing, for example, a paragraph&#39;s font size when the screen width is less than a specific number.</p>
<pre><code class="language-css">p {
  font-size: 12px
}

// Media query
@media screen and (min-width: 400px) {
 p {
   font-size: 8px
 }
}
</code></pre>
<p>In this simple example, when the browser’s viewport width is less than or equal to 400px, the font size changes to 8px.</p>
<p>Notice how straightforward the syntax is: we start with the keyword <code>@media</code>, followed by the type of device it should apply to. In this case, we use <code>screen</code> so it doesn’t affect users who print the page; if you don’t specify a type, it falls back to the default, <code>all</code>, which covers both print and screen. Then we specify a <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/@media#media_features">media feature</a>, in this case, the <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/@media/width">width</a>.</p>
<h1>Container queries</h1>
<p>Container queries are similar to Media queries. Their main function is to apply styles under certain conditions. The difference is that instead of listening to the viewport of the browser, it listens to a container size. Let’s see this example:</p>
<p><img src="//images.ctfassets.net/zojzzdop0fzx/4oURKpaPYNBJ8Ax2uvH9G1/757875debfe37ce71b820aa98efef8ff/dashboard.png" alt="dashboard screenshot 1"></p>
<p>Above, we have a layout with a sidebar and three cards as the content. Using Media queries, we could listen to the viewport width and change the layout at a specific breakpoint. Like so:</p>
<pre><code class="language-css">@media (max-width: 768px) {
  .layout {
    flex-direction: column;
  }

  .sidebar {
    width: 100%;
    border-right: none;
    border-bottom: 1px solid #333;
  }

  .card-inner {
    flex-direction: column;
  }

  .card-left {
    border-right: none;
    border-bottom: 1px solid #333;
  }
}
</code></pre>
<p><img src="//images.ctfassets.net/zojzzdop0fzx/7GfoXKExLDZafoLMJzwz5w/91f1aecc581a6d32d2d16f57cb6197a0/dashboard2.png" alt="dashboard screenshot 2"></p>
<p>That’s acceptable, but it requires us to constantly monitor the layout. For example, if we added another sidebar on the right (really weird, but let’s imagine that this is a typical case), our layout would become more condensed:</p>
<p><img src="//images.ctfassets.net/zojzzdop0fzx/1NK5HmzTUWLMoPjEJ7E33B/b3d00efe9ac1265e6d2b01cbb1e0aa19/dashboard3.png" alt="dashboard screenshot 3"></p>
<p>We would need to change our media queries and adjust their range in this situation. Wouldn’t it be better to check the card container’s width and update its styles based on that? That way, we wouldn’t need to worry when the layout changes, and that’s precisely what container queries are made for!</p>
<p>First, to define the container we are going to listen to, we are going to add a new property to our styles:</p>
<pre><code class="language-css">// cards container
.container {
  display: flex;
  flex-wrap: wrap;
  gap: 20px;
  justify-content: flex-start;
  // new property to define our container
  container-type: inline-size;
}
</code></pre>
<p>The <code>.container</code> class is the one in which our cards reside. By adding the <code>container-type</code> property, we now define this class as a container we want to listen to. We chose <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_containment/Container_queries#inline-size">inline-size</a> as the value to query based on the inline dimensions of the container, because we just want to listen to the element&#39;s width.</p>
<p>The value of <code>container-type</code> will depend on your use case. If you want to listen to both width and height, then <code>size</code> will be a better fit for you. </p>
<p>You can also have <code>normal</code> as your <code>container-type</code> value, which means the element won’t act as a query container at all. This is handy if you need to revert to the default behavior.</p>
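<p>As a rough sketch (the class names here are just for illustration), a container that should be queried by height as well needs <code>size</code>, along with an explicit block size, since size containment stops the element from sizing itself based on its contents:</p>
<pre><code class="language-css">/* Query both width and height: the container needs an explicit
   height, because size containment prevents it from growing to
   fit its contents. */
.panel {
  container-type: size;
  height: 300px;
}

/* Opt back out of being a query container entirely. */
.plain-box {
  container-type: normal;
}
</code></pre>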
<p>Next, to define our query, we use the new <code>@container</code> CSS at-rule:</p>
<pre><code class="language-css">@container (max-width: 400px) {
  .card-inner {
    flex-direction: column;
  }

  .card-left {
    border-right: none;
    border-bottom: 1px solid #333;
  }
}
</code></pre>
<p>Notice that it is really similar to how we define our Media queries. Now, if we look at the same screen, we will see the following:</p>
<p><img src="//images.ctfassets.net/zojzzdop0fzx/4efHJuqtyI1scLMkdGKxdm/09009064dbe8c77f625eaa4d253b7028/dashboard4.png" alt="dashboard screenshot 4"></p>
<p>This is very powerful because we can now style each component with its own rules without changing the rules based on the layout changes.</p>
<p>The <code>@container</code> rule will affect all the defined containers in scope, which we might not want. We can give our container a name to specify which container we want to listen to:</p>
<pre><code class="language-css">.container {
  display: flex;
  flex-wrap: wrap;
  gap: 20px;
  justify-content: flex-start;
  container-type: inline-size;
  // New property to define the name of our container
  container-name: cards-container;
}

//We now specify which container we are listening to
@container cards-container (max-width: 400px) {
  .card-inner {
    flex-direction: column;
  }

  .card-left {
    border-right: none;
    border-bottom: 1px solid #333;
  }
}
</code></pre>
<p>We can also have a shorthand to define our container and its name:</p>
<pre><code class="language-css">.container {
  display: flex;
  flex-wrap: wrap;
  gap: 20px;
  justify-content: flex-start;
  /* name of our container / its type */
  container: cards-container / inline-size;
}
</code></pre>
<h2>Container query length units</h2>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/CSS/length#container_query_length_units">Container query lengths</a> are similar to the viewport-percentage length units like  <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/length#vh"><code>vh</code></a> or <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/length#vw"><code>vw</code></a> units, but instead of being relative to the viewport, they are to the dimensions of the query container. We have different units, each relative to different dimensions of the container:</p>
<ul>
<li><code>cqw</code>: 1% of a query container&#39;s width</li>
<li><code>cqh</code>: 1% of a query container&#39;s height</li>
<li><code>cqi</code>: 1% of a query container&#39;s inline size</li>
<li><code>cqb</code>: 1% of a query container&#39;s block size</li>
<li><code>cqmin</code>: The smaller value of either <code>cqi</code> or <code>cqb</code></li>
<li><code>cqmax</code>: The larger value of either <code>cqi</code> or <code>cqb</code></li>
</ul>
<p>In our example, we could use them to define the font size of our cards:</p>
<pre><code class="language-css">.card p {
  /* Pick the maximum value. */
  font-size: max(16px, 1cqi);
}
</code></pre>
<p>Using these units alone isn’t recommended: because they’re percentage-based, the computed value can end up smaller than we want in a narrow container. Instead, it’s better to use a dynamic range. Using the <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/max">max</a> function, we can set two values and always pick the higher one.</p>
<h1>Conclusion</h1>
<p>Container queries bring a fresh and powerful approach to web design but are not meant to replace Media queries. I think their real power shines when used together. </p>
<p>Media queries often require constant adjustments as your layout evolves. Container queries, however, let you style individual components based on their dimensions, making the designs more flexible and easier to manage.</p>
<p>Adding a new component or rearranging elements won’t force us to rewrite our media queries. Instead, each component handles its styling, leading to cleaner and more organized code.</p>
<p>Please note that, as of writing this blog post, they aren’t compatible with all browsers yet. Take a look at this table from <a href="https://caniuse.com/?search=container%20queries">caniuse.com</a>:</p>
<p><img src="//images.ctfassets.net/zojzzdop0fzx/3fPCUC9Ondep3jRJXJamdE/cde6918d19c4d1bd2b8afc00f93b0529/container_style_queries.png" alt="can I use css container style queries"></p>
<p>A good fallback strategy, when hitting an unsupported browser, would be the use of the <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/@supports">@supports</a> rule, which allows you to apply styles only if the browser supports a given CSS feature. For example:</p>
<pre><code class="language-css">/* Fallback for browsers that don&#39;t support container queries */
@supports not (container-type: inline-size) {
  @media screen and (max-width: 1024px) {
    .card-inner {
      flex-direction: column;
    }

    .card-left {
      border-right: none;
      border-bottom: 1px solid #333;
    }
  }
}

@container cards-container (max-width: 400px) {
  .card-inner {
    flex-direction: column;
  }

  .card-left {
    border-right: none;
    border-bottom: 1px solid #333;
  }
}
</code></pre>
<p>Ensure your media queries are good enough to keep everything responsive and user-friendly when the condition is unmet.</p>
<p>Thank you for reading! Enjoy the extra flexibility that container queries bring to your web designs. Check out a live <a href="https://github.com/thisdot/blog-demos/tree/main/20250117-container-queries">demo</a> to see it in action. Happy styling!</p>
]]></description>
            <link>https://www.thisdot.co/blog/css-container-queries-what-are-they</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/css-container-queries-what-are-they</guid>
            <pubDate>Fri, 21 Feb 2025 11:17:49 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The 2025 Guide to JS Build Tools]]></title>
            <description><![CDATA[<h1>The 2025 Guide to JS Build Tools</h1>
<p>In 2025, we&#39;re seeing the largest number of JavaScript build tools being actively maintained and used in history. Over the past few years, we&#39;ve seen the trend of many build tools being rewritten or forked to use a faster and more efficient language like Rust and Go. In the last year, new companies have emerged, even with venture capital funding, with the goal of working on specific sets of build tools. <a href="https://voidzero.dev/">Void Zero</a> is one such recent example.</p>
<p>With so many build tools around, it can be difficult to get your head around and understand which one is for what. Hopefully, with this blog post, things will become a bit clearer. But first, let&#39;s explain some concepts.</p>
<h2>Concepts</h2>
<p>When it comes to build tools, there is no one-size-fits-all solution. Each tool typically focuses on one or two primary features, and often relies on other tools as dependencies to accomplish more. While it might be difficult to explain here all of the possible functionalities a build tool might have, we&#39;ve attempted to explain some of the most common ones so that you can easily understand how tools compare.</p>
<h3>Minification</h3>
<p>The concept of minification has been in the JavaScript ecosystem for a long time, and not without reason. JavaScript is typically delivered from the server to the user&#39;s browser through a network whose speed can vary. Thus, there was a need very early in the web development era to compress the source code as much as possible while still making it executable by the browser. This is done through the process of <em>minification</em>, which removes unnecessary whitespace, comments, and uses shorter variable names, reducing the total size of the file.</p>
<p>This is what an unminified JavaScript looks like:</p>
<pre><code class="language-js">function greetUser(name) {
    if (name) {
        console.log(&quot;Hello, &quot; + name + &quot;!&quot;);
    } else {
        console.log(&quot;Hello, stranger!&quot;);
    }
}

function addNumbers(a, b) {
    return a + b;
}

console.log(greetUser(&quot;Alice&quot;));
console.log(addNumbers(5, 7));
</code></pre>
<p>This is the same file, minified:</p>
<pre><code class="language-js">function greetUser(e){e?console.log(&quot;Hello, &quot;+e+&quot;!&quot;):console.log(&quot;Hello, stranger!&quot;)}function addNumbers(e,o){return e+o}console.log(greetUser(&quot;Alice&quot;)),console.log(addNumbers(5,7));
</code></pre>
<p>Closely related to minification is the concept of <a href="https://en.wikipedia.org/wiki/Minification_(programming)#Source_mapping">source maps</a> - source maps are essentially mappings between the minified file and the original source code. Why is that needed? Primarily for debugging minified code. Without source maps, understanding errors in minified code is nearly impossible because variable names are shortened, and all formatting is removed. With source maps, browser developer tools can help you debug minified code.</p>
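<p>As a small illustration (file names here are hypothetical), the minifier typically appends a special comment to the minified file that points the browser’s developer tools at the source map:</p>
<pre><code class="language-js">function greetUser(e){e?console.log(&quot;Hello, &quot;+e+&quot;!&quot;):console.log(&quot;Hello, stranger!&quot;)}
//# sourceMappingURL=app.min.js.map
</code></pre>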
<h3>Tree-Shaking</h3>
<p><em>Tree-shaking</em> was the next-level upgrade from minification that became possible when ES modules were introduced into the JavaScript language. While a minified file is smaller than the original source code, it can still get quite large for larger apps, especially if it contains parts that are effectively not used. Tree shaking helps eliminate this by performing a static analysis of all your code, building a dependency graph of the modules and how they relate to each other, which allows the bundler to determine which exports are used and which are not. Once unused exports are found, the build tool will remove them entirely. This is also called <em>dead code elimination</em>.</p>
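<p>Here’s a hypothetical two-module example of what that analysis enables. Only one of the two exports is ever imported, so a tree-shaking bundler can drop the other from the output entirely:</p>
<pre><code class="language-js">// math.js
export function add(a, b) {
  return a + b;
}

// This export is never imported anywhere in the app.
export function subtract(a, b) {
  return a - b;
}

// main.js (the entry point)
import { add } from &#39;./math.js&#39;;
console.log(add(2, 3));

// After bundling with tree-shaking, `subtract` is removed from
// the output because the dependency graph shows no usage of it.
</code></pre>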
<h3>Bundling</h3>
<p>Development in JavaScript and TypeScript rarely involves a single file. Typically, we&#39;re talking about tens or hundreds of files, each containing a specific part of the application. If we were to deliver all those files to the browser, we would overwhelm both the browser and the network with many small requests. <em>Bundling</em> is the process of combining multiple JS/TS files (and often other assets like CSS, images, etc.) into one or more larger files.</p>
<p>A bundler will typically start with an entry file and then recursively include every module or file that the entry file depends on, before outputting one or more files containing all the necessary code to deliver to the browser. As you might expect, a bundler will typically also involve minification and tree-shaking, as explained previously, in the process to deliver only the minimum amount of code necessary for the app to function.</p>
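<p>To make that concrete, here’s a simplified sketch (not the real output of any particular bundler) of two source files being combined into a single bundle:</p>
<pre><code class="language-js">// greet.js
export const greet = (name) =&gt; `Hello, ${name}!`;

// index.js (the entry point)
import { greet } from &#39;./greet.js&#39;;
console.log(greet(&#39;Alice&#39;));

// After bundling, the output is roughly equivalent to a single
// file with the module inlined and wrapped to avoid name clashes:
(() =&gt; {
  const greet = (name) =&gt; `Hello, ${name}!`;
  console.log(greet(&#39;Alice&#39;));
})();
</code></pre>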
<h3>Transpiling</h3>
<p>Once TypeScript arrived on the scene, it became necessary to translate it to JavaScript, as browsers did not natively understand TypeScript. Generally speaking, the purpose of a <em>transpiler</em> is to transform one language into another. In the JavaScript ecosystem, it&#39;s most often used to transpile TypeScript code to JavaScript, optionally targeting a specific version of JavaScript that&#39;s supported by older browsers. However, it can also be used to transpile newer JavaScript to older versions. For example, <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions">arrow functions</a>, which are specified in ES6, are converted into regular function declarations if the target language is ES5.</p>
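<p>To illustrate that last point, here’s roughly how a transpiler targeting ES5 might rewrite an arrow function (the exact output varies by tool and settings):</p>
<pre><code class="language-js">// ES6 source
const double = (n) =&gt; n * 2;

// Transpiled for an ES5 target: `const` and the arrow function
// are replaced with constructs older engines understand.
var double = function (n) {
  return n * 2;
};
</code></pre>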
<p>Additionally, a transpiler can also be used by modern frameworks such as React to transpile <a href="https://react.dev/learn/writing-markup-with-jsx">JSX syntax</a> (used in React) into plain JavaScript. Typically, with transpilers, the goal is to maintain similar abstractions in the target code. For example, transpiling TypeScript into JavaScript might preserve constructs like loops, conditionals, or function declarations that look natural in both languages.</p>
<h3>Compiling</h3>
<p>While a transpiler&#39;s purpose is to transform from one language to another without or with little optimization, the purpose of a <em>compiler</em> is to perform more extensive transformations and optimizations, or translate code from a high-level programming language into a lower-level one such as bytecode. The focus here is on optimizing for performance or resource efficiency. Unlike transpiling, compiling will often transform abstractions so that they suit the low-level representation, which can then run faster.</p>
<h3>Hot-Module Reloading (HMR)</h3>
<p><em>Hot-module reloading</em> (HMR) is an important feature of modern build tools that drastically improves the developer experience while developing apps. In the early days of the web, whenever you&#39;d make a change in your source code, you would need to hit that refresh button on the browser to see the change. This would become quite tedious over time, especially because with a full-page reload, you lose all the application state, such as the state of form inputs or other UI components.</p>
<p>With HMR, we can update modules in real-time without requiring a full-page reload, speeding up the feedback loop for any changes made by developers. Not only that, but the full application state is typically preserved, making it easier to test and iterate on code.</p>
<h3>Development Server</h3>
<p>When developing web applications, you need to have a locally running development server set up on something like <code>http://localhost:3000</code>. A development server typically serves unminified code to the browser, allowing you to easily debug your application. Additionally, a development server will typically have hot module replacement (HMR) so that you can see the results on the browser as you are developing your application.</p>
<h2>The Tools</h2>
<p>Now that you understand the most important features of build tools, let&#39;s take a closer look at some of the popular tools available. This is by no means a complete list, as there have been many build tools in the past that were effective and popular at the time. However, here we will focus on those used by the current popular frameworks. In the table below, you can see an overview of all the tools we&#39;ll cover, along with the features they primarily focus on and those they support secondarily or through plugins.</p>
<p><img src="https://p.ipic.vip/8k7aee.png" alt="Overview of the tools"></p>
<p>The tools are presented in alphabetical order below.</p>
<h3>Babel</h3>
<p><a href="https://babeljs.io/">Babel</a>, which celebrated its 10th anniversary since its initial release last year, is primarily a JavaScript transpiler used to convert modern JavaScript (ES6+) into backward-compatible JavaScript code that can run on older JavaScript engines. Traditionally, developers have used it to take advantage of the newer features of the JavaScript language without worrying about whether their code would run on older browsers.</p>
<h3>esbuild</h3>
<p><a href="https://esbuild.github.io/">esbuild</a>, created by <a href="https://www.madebyevan.com/">Evan Wallace</a>, the co-founder and former CTO of <a href="https://www.figma.com/">Figma</a>, is primarily a bundler that advertises itself as being one of the fastest bundlers in the market. Unlike all the other tools on this list, esbuild is written in Go. When it was first released, it was unusual for a JavaScript bundler to be written in a language other than JavaScript. However, this choice has provided significant performance benefits.</p>
<p>esbuild supports ESM and CommonJS modules, as well as CSS, TypeScript, and JSX. Unlike traditional bundlers, esbuild creates a separate bundle for each entry point file. Nowadays, it is used by tools like <a href="https://vite.dev/">Vite</a> and frameworks such as <a href="https://angular.dev/">Angular</a>.</p>
<h3>Metro</h3>
<p>Unlike other build tools mentioned here, which are mostly web-focused, Metro&#39;s primary focus is <a href="https://reactnative.dev/">React Native</a>. It has been specifically optimized for bundling, transforming, and serving JavaScript and assets for React Native apps. Internally, it utilizes Babel as part of its transformation process. Metro is sponsored by <a href="https://about.meta.com/">Meta</a> and actively maintained by the Meta team.</p>
<h3>Oxc</h3>
<p>The JavaScript Oxidation Compiler, or <a href="https://oxc.rs/">Oxc</a>, is a collection of Rust-based tools. Although it is referred to as a compiler, it is essentially a toolchain that includes a parser, linter, formatter, transpiler, minifier, and resolver. Oxc is sponsored by Void Zero and is set to become the backbone of other Void Zero tools, like Vite.</p>
<h3>Parcel</h3>
<p>Feature-wise, <a href="https://parceljs.org/">Parcel</a> covers a lot of ground (no pun intended). Largely created by <a href="https://bsky.app/profile/devongovett.bsky.social">Devon Govett</a>, it is designed as a zero-configuration build tool that supports bundling, minification, tree-shaking, transpiling, compiling, HMR, and a development server. It can utilize all the necessary types of assets you will need, from JavaScript to HTML, CSS, and images. The core of it is mostly written in JavaScript, with a CSS transformer written in Rust, while it delegates JavaScript compilation to SWC. It also has a large collection of community-maintained plugins. Overall, it is a good tool for quick development without requiring extensive configuration.</p>
<h3>Rolldown</h3>
<p><a href="https://rolldown.rs/">Rolldown</a> is the future bundler for Vite, written in Rust and built on top of Oxc, currently leveraging its parser and resolver. Inspired by Rollup (hence the name), it will provide Rollup-compatible APIs and plugin interface, but it will be more similar to esbuild in scope. Currently, it is still in heavy development and it is not ready for production, but we should definitely be hearing more about this bundler in 2025 and beyond.</p>
<h3>Rollup</h3>
<p><a href="https://rollupjs.org/">Rollup</a> is the current bundler for Vite. Originally created by <a href="https://bsky.app/profile/rich-harris.dev">Rich Harris</a>, the creator of <a href="https://svelte.dev/">Svelte</a>, Rollup is slowly becoming a veteran (speaking in JavaScript years) compared to other build tools here. When it originally launched, it introduced novel ideas focused on ES modules and tree-shaking, at the time when Webpack as its competitor was becoming too complex due to its extensive feature set - Rollup promised a simpler way with a straightforward configuration process that is easy to understand. Rolldown, mentioned previously, is hoped to become a replacement for Rollup at some point.</p>
<h3>Rsbuild</h3>
<p><a href="https://rsbuild.dev/">Rsbuild</a> is a high-performance build tool written in Rust and built on top of Rspack. Feature-wise, it has many similiarities with Vite. Both Rsbuild and Rspack are sponsored by the <a href="https://webinfra.org/">Web Infrastructure Team at ByteDance</a>, which is a division of ByteDance, the parent company of TikTok. Rsbuild is built as a high-level tool on top of Rspack that has many additional features that Rspack itself doesn&#39;t provide, such as a better development server, image compression, and type checking. </p>
<h3>Rspack</h3>
<p><a href="https://rspack.dev/">Rspack</a>, as the name suggests, is a Rust-based alternative to Webpack. It offers a Webpack-compatible API, which is helpful if you are familiar with setting up Webpack configurations. However, if you are not, it might have a steep learning curve. To address this, the same team that built Rspack also developed Rsbuild, which helps you achieve a lot with out-of-the-box configuration.</p>
<p>Under the hood, Rspack uses SWC for compiling and transpiling. Feature-wise, it’s quite robust. It includes built-in support for TypeScript, JSX, Sass, Less, CSS modules, Wasm, and more, as well as features like module federation, PostCSS, Lightning CSS, and others.</p>
<h3>Snowpack</h3>
<p><a href="https://www.snowpack.dev/">Snowpack</a> was created around the same time as Vite, with both aiming to address similar needs in modern web development. Their primary focus was on faster build times and leveraging ES modules. Both Snowpack and Vite introduced a novel idea at the time: instead of bundling files while running a local development server, like traditional bundlers, they served the app unbundled. Each file was built only once and then cached indefinitely. When a file changed, only that specific file was rebuilt. For production builds, Snowpack relied on external bundlers such as Webpack, Rollup, or esbuild.</p>
<p>Unfortunately, Snowpack is a tool you’re likely to hear less and less about in the future. It is <a href="https://github.com/FredKSchott/snowpack">no longer actively developed</a>, and Vite has become the recommended alternative.</p>
<h3>SWC</h3>
<p><a href="https://swc.rs/">SWC</a>, which stands for Speedy Web Compiler, can be used for both compilation and bundling (with the help of SWCpack), although compilation is its primary feature. And it really is speedy, thanks to being written in Rust, as are many other tools on this list. Primarily advertised as an alternative to Babel, its SWC is roughly 20x faster than Babel on a single thread. SWC compiles TypeScript to JavaScript, JSX to JavaScript, and more. It is used by tools such as Parcel and Rspack and by frameworks such as Next.js, which are used for transpiling and minification. </p>
<p><a href="https://swc.rs/docs/usage/bundling">SWCpack</a> is the bundling part of SWC. However, active development within the SWC ecosystem is not currently a priority. The main author of SWC now works for Turbopack by <a href="https://vercel.com/">Vercel</a>, and the documentation states that SWCpack is presently not in active development.</p>
<h3>Terser</h3>
<p><a href="https://terser.org/">Terser</a> has the smallest scope compared to other tools from this list, but considering that it&#39;s used in many of those tools, it&#39;s worth separating it into its own section. Terser&#39;s primary role is minification. It is the successor to the older <a href="https://www.npmjs.com/package/uglify-js">UglifyJS</a>, but with better performance and ES6+ support. </p>
<h3>Vite</h3>
<p><a href="https://vite.dev/">Vite</a> is a somewhat of a special beast. It&#39;s primarily a development server, but calling it just that would be an understatement, as it combines the features of a fast development server with modern build capabilities.</p>
<p>Vite shines in different ways depending on how it&#39;s used. During development, it provides a fast server that doesn&#39;t bundle code like traditional bundlers (e.g., Webpack). Instead, it uses native ES modules, serving them directly to the browser. Since the code isn&#39;t bundled, Vite also delivers fast HMR, so any updates you make are nearly instant.</p>
<p>Vite uses two bundlers under the hood. During development, it uses esbuild, which also allows it to act as a TypeScript transpiler. Each module is served to the browser as a separate file, which keeps files cleanly separated and helps HMR. For production, it uses Rollup, which generates a single file for the browser. However, Rollup is not as fast as esbuild, so production builds can be a bit slower than you might expect. (This is why Rollup is being rewritten in Rust as Rolldown. Once complete, Vite will use the same bundler for both development and production.)</p>
<p>Traditionally, Vite has been used for client-side apps, but with the new <a href="https://vite.dev/guide/api-environment">Environment API</a> released in Vite 6.0, it bridges the gap between client-side and server-rendered apps.</p>
<h3>Turbopack</h3>
<p><a href="https://turbo.build/pack/docs">Turbopack</a> is a bundler, written in Rust by the creators of webpack and <a href="https://nextjs.org/">Next.js</a> at <a href="https://vercel.com/">Vercel</a>. The idea behind Turbopack was to do a complete rewrite of Webpack from scratch and try to keep a Webpack compatible API as much as possible. This is not an easy feat, and this task is still not over.</p>
<p>The enormous popularity of Next.js is also helping Turbopack gain traction in the developer community. Right now, Turbopack is being used as an opt-in feature in Next.js&#39;s dev server. Production builds are not yet supported but are planned for future releases.</p>
<h3>Webpack</h3>
<p>And finally, we arrive at <a href="https://webpack.js.org/">Webpack</a>, the legend among bundlers, which held a dominant position as the primary bundler for a long time. Despite the many alternatives that now exist (as we&#39;ve seen in this blog post), it is still widely used, and some modern frameworks such as Next.js still have it as their default bundler. Initially released back in 2012, its development is still going strong. Its primary features are bundling, code splitting, and HMR, but many other features are available as well thanks to its popular plugin system. Configuring Webpack has traditionally been challenging, and since it&#39;s written in JavaScript rather than a lower-level language like Rust, its performance lags behind newer tools. As a result, many developers are gradually moving away from it.</p>
<h2>Conclusion</h2>
<p>With so many build tools in today&#39;s JavaScript ecosystem, many of which are similarly named, it&#39;s easy to get lost. Hopefully, this blog post was a useful overview of the tools that are most likely to continue being relevant in 2025. Although, with the speed of development, it may well be that we will be seeing a completely different picture in 2026!</p>
]]></description>
            <link>https://www.thisdot.co/blog/the-2025-guide-to-js-build-tools</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-2025-guide-to-js-build-tools</guid>
            <pubDate>Fri, 14 Feb 2025 13:46:29 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[An Introduction to Laravel Queues and Temporary URLs]]></title>
            <description><![CDATA[<p><a href="https://laravel.com/">Laravel</a> is a mature, robust, and powerful web framework that makes developing PHP applications a breeze. In particular, I want to demonstrate how to create a website that can be used to convert videos online using queue jobs for processing and temporary URLs for downloading the converted files.</p>
<p>This article is aimed at those who aren’t very familiar with Laravel yet.</p>
<h2>Prerequisites</h2>
<p>There are many ways to set up Laravel, and which is the best method may depend on your operating system or preference. I have found <a href="https://herd.laravel.com/">Laravel Herd</a> to be very easy to use if you’re using Windows or macOS. Herd is a Laravel development environment that has everything you need with minimal configuration required. Command-line tools are installed and added to your path, and background services are configured automatically.</p>
<p>If you’re developing on Linux, then Herd is not an option. However, Laravel Sail works for all major operating systems and uses a Docker-based environment instead. You can find a full list of supported installation methods in the <a href="https://laravel.com/docs/11.x/installation">Laravel documentation</a>. To keep things simple, this article assumes the use of Herd, though this won’t make a difference when it comes to implementation.</p>
<p>You will also need a text editor or IDE with good PHP support. PhpStorm works very well with Laravel, but you can also use VSCode with the Phpactor language server, which I’ve found to work quite well.</p>
<h2>Project Setup</h2>
<p>With a development environment set up, you can create a new Laravel project using <a href="https://getcomposer.org/">composer</a>, the most popular package manager for PHP. Herd installs <code>composer</code> for you; it installs dependencies and lets you run scripts. Let’s create a Laravel project using it:</p>
<pre><code class="language-bash">composer create-project laravel/laravel laravel-video-converter
</code></pre>
<p>Once that is done you can navigate into the project directory and start the server with <code>artisan</code>:</p>
<pre><code class="language-bash">php artisan serve
</code></pre>
<p>Awesome! You can now navigate to <code>http://localhost:8000/</code> and see the Laravel starter application’s welcome page. Artisan is the command-line interface for Laravel. It comes with other utilities as well such as a database migration tool, scripts for generating classes, and other useful things.</p>
<h2>Uploading Videos Using Livewire</h2>
<p>Livewire is a library that allows you to add dynamic functionality to your Laravel application without having to add a frontend framework. For this guide we’ll be using Livewire to upload files to our server and update the status of the video conversion without requiring any page reloads.</p>
<p>Livewire can be installed with <code>composer</code> like so:</p>
<pre><code class="language-bash">composer require livewire/livewire
</code></pre>
<p>With it installed, we now need to make a Livewire component. This component will act as the controller of our video upload page.</p>
<pre><code class="language-bash">php artisan make:livewire video-uploader
</code></pre>
<p>With that done, the command’s output should show that two new files were created: a PHP file and a Blade file. Blade is Laravel’s own HTML template syntax for views, which allows you to render your pages dynamically.</p>
<p>For this demo we’ll make the video conversion page render at the root of the site. You can do this by going to <code>routes/web.php</code> and editing the root route definition to point to our new component.</p>
<pre><code class="language-php">&lt;?php

use App\Livewire\VideoUploader;
use Illuminate\Support\Facades\Route;

Route::get(&#39;/&#39;, VideoUploader::class);
</code></pre>
<p>However, if we visit our website now, it will return an error. This is because the app template is missing. That template is the view that encapsulates all page components and contains elements such as the document head, header, and footer.</p>
<p>Create a file at <code>resources/views/components/layouts/app.blade.php</code> and put the following contents inside. This will give you a basic layout that we can render our page component inside of.</p>
<pre><code class="language-html">&lt;!DOCTYPE html&gt;
&lt;html lang=&quot;{{ str_replace(&#39;_&#39;, &#39;-&#39;, app()-&gt;getLocale()) }}&quot;&gt;
    &lt;head&gt;
        &lt;meta charset=&quot;utf-8&quot;&gt;
        &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot;&gt;
        &lt;title&gt;WebM Video Converter&lt;/title&gt;

        &lt;style&gt;
            html {
                font-family: sans-serif;
                background-color: #eaeaea;
            }

            main {
                max-width: 1000px;
                margin: 100px auto 0 auto;
                padding: 32px;
                border-radius: 24px;
                background-color: white;
            }

            h1 {
                margin-top: 0;
            }

            a {
                text-decoration: none;
            }
        &lt;/style&gt;
    &lt;/head&gt;
    &lt;body&gt;
        &lt;main&gt;
            {{ $slot }}

            &lt;footer&gt;
                Laravel v{{ Illuminate\Foundation\Application::VERSION }} (PHP v{{ PHP_VERSION }})
            &lt;/footer&gt;
        &lt;/main&gt;
    &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>The <code>{{ $slot }}</code> string in the main tag is a Blade echo statement. That is where our Livewire component will be injected when loading it.</p>
<p>Now, let’s edit the Livewire component’s template so it has something meaningful in it that will allow us to verify that it renders correctly. Edit <code>resources/views/livewire/video-uploader.blade.php</code> and put in the following:</p>
<pre><code class="language-html">&lt;div&gt;
    &lt;h1&gt;Hello, Laravel!&lt;/h1&gt;
&lt;/div&gt;
</code></pre>
<p>With that done, you can go to the root of the site and see the hello message rendered inside a box. Seeing it means everything is working as it should. We may as well delete the welcome template, located at <code>resources/views/welcome.blade.php</code>, since we’re not using it anymore.</p>
<p>Now, let’s go ahead and add uploading functionality. For now we’ll just upload the file into storage and do nothing with it. Go ahead and edit <code>app/Livewire/VideoUploader.php</code> with the following:</p>
<pre><code class="language-php">&lt;?php

namespace App\Livewire;

use Illuminate\Contracts\View\View;
use Livewire\Attributes\Validate;
use Livewire\Component;
use Livewire\Features\SupportFileUploads\TemporaryUploadedFile;
use Livewire\WithFileUploads;

class VideoUploader extends Component
{
    use WithFileUploads;

    /**
     * @var TemporaryUploadedFile
     */
    #[Validate(&#39;mimetypes:video/avi,video/mpeg,video/quicktime&#39;)]
    public $video;

    public function save(): void
    {
        $videoFilename = $this-&gt;video-&gt;store();
    }

    public function render(): View
    {
        return view(&#39;livewire.video-uploader&#39;);
    }
}
</code></pre>
<p>This will only allow uploading files with video file MIME types. The <code>$video</code> class variable can be wired inside of the component’s blade template using a form.</p>
<p>Create a form in <code>resources/views/livewire/video-uploader.blade.php</code> like so:</p>
<pre><code class="language-html">&lt;div&gt;
    &lt;h1&gt;WebM Video Converter&lt;/h1&gt;

    &lt;form wire:submit=&quot;save&quot;&gt;
        &lt;input type=&quot;file&quot; wire:model=&quot;video&quot;&gt;

        @error(&#39;video&#39;) &lt;span class=&quot;error&quot;&gt;{{ $message }}&lt;/span&gt; @enderror

        &lt;button type=&quot;submit&quot;&gt;Convert Video&lt;/button&gt;
    &lt;/form&gt;
&lt;/div&gt;
</code></pre>
<p>You will note a <code>wire:submit</code> attribute attached to the form. This prevents the form submission from reloading the page and results in Livewire calling the component’s <code>save</code> method. The <code>$video</code> property is wired to the file input with <code>wire:model=&quot;video&quot;</code>.</p>
<p>Now you can upload videos, and they will be stored into persistent storage in the <code>storage/app/private</code> directory. Awesome!</p>
<h2>Increase the Filesize Limit</h2>
<p>If you tried to upload a larger video you may have gotten an error. This is because the default upload size limit enforced by Livewire and PHP is very small. We can adjust these to accommodate our use-case.</p>
<p>Let’s start with adjusting the Livewire limit. To do that, we need to generate a configuration file for Livewire.</p>
<pre><code class="language-html">php artisan livewire:publish --config
</code></pre>
<p>All values in the generated file are the defaults we have been using already. Now edit <code>config/livewire.php</code> and make sure the <code>temporary_file_upload</code> looks like this:</p>
<pre><code class="language-php">...

&#39;temporary_file_upload&#39; =&gt; [
    &#39;disk&#39; =&gt; null,    	// Example: &#39;local&#39;, &#39;s3&#39;          	| Default: &#39;default&#39;
    &#39;rules&#39; =&gt; null,   	// Example: [&#39;file&#39;, &#39;mimes:png,jpg&#39;]  | Default: [&#39;required&#39;, &#39;file&#39;, &#39;max:12288&#39;] (12MB)
    &#39;directory&#39; =&gt; null,   // Example: &#39;tmp&#39;                  	| Default: &#39;livewire-tmp&#39;
    &#39;middleware&#39; =&gt; null,  // Example: &#39;throttle:5,1&#39;         	| Default: &#39;throttle:60,1&#39;
    &#39;preview_mimes&#39; =&gt; [   // Supported file types for temporary pre-signed file URLs...
        &#39;png&#39;, &#39;gif&#39;, &#39;bmp&#39;, &#39;svg&#39;, &#39;wav&#39;, &#39;mp4&#39;,
        &#39;mov&#39;, &#39;avi&#39;, &#39;wmv&#39;, &#39;mp3&#39;, &#39;m4a&#39;,
        &#39;jpg&#39;, &#39;jpeg&#39;, &#39;mpga&#39;, &#39;webp&#39;, &#39;wma&#39;,
    ],
    &#39;max_upload_time&#39; =&gt; 5, // Max duration (in minutes) before an upload is invalidated...
    &#39;cleanup&#39; =&gt; true, // Should cleanup temporary uploads older than 24 hrs...
    &#39;rules&#39; =&gt; &#39;max:102400&#39;, // NEW: Override default so we can upload long videos.
],

...
</code></pre>
<p>The <code>rules</code> key allows us to change the maximum file size, which in this case is 102400 kilobytes, or 100 megabytes.</p>
<p>This alone isn’t enough, though, as the PHP runtime also has a limit of its own. We can configure it by editing the <code>php.ini</code> file. Since this article assumes the use of Herd, I will show how that is done there.</p>
<p>Go to <strong>Herd &gt; Settings &gt; PHP &gt; Max File Upload Size</strong> and set it to 100. Once done, you need to stop all Herd services in order for the changes to take effect. Also make sure to close any lingering background PHP processes with your task manager, as this happened to me. Once you’ve confirmed everything is shut off, turn all the services back on.</p>
<p>If you’re not using Herd, you can add the following keys to your <code>php.ini</code> file to get the same effect:</p>
<pre><code>upload_max_filesize=100M
post_max_size=100M
</code></pre>
<h2>Creating a Background Job</h2>
<p>Now, let’s get to the more interesting part that is creating a background job to run on an asynchronous queue. First off, we need a library that will allow us to convert videos. We’ll be using <a href="https://github.com/PHP-FFMpeg/PHP-FFMpeg">php-ffmpeg</a>. It should be noted that FFmpeg needs to be installed and accessible in the system path. There are <a href="https://ffmpeg.org/download.html">instructions</a> on their website that tell you how to install it for all major platforms. On macOS this is automatic if you install it with <code>homebrew</code>. On Windows you can use <code>winget</code>.</p>
<p>On macOS and Linux you can confirm that ffmpeg is in your path like so:</p>
<pre><code class="language-bash">which ffmpeg
</code></pre>
<p>If a file path to ffmpeg is returned then it’s installed correctly. Now with FFmpeg installed you can install the PHP library adapter with <code>composer</code> like so:</p>
<pre><code class="language-bash">composer require php-ffmpeg/php-ffmpeg
</code></pre>
<p>Now that we have everything we need to convert videos, let’s make a job class that will use it:</p>
<pre><code class="language-bash">php artisan make:job ProcessVideo
</code></pre>
<p>Edit <code>app/Jobs/ProcessVideo.php</code> and add the following:</p>
<pre><code class="language-php">&lt;?php

namespace App\Jobs;

use FFMpeg\FFMpeg;
use FFMpeg\Format\Video\WebM;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Support\Facades\File;

class ProcessVideo implements ShouldQueue
{
    use Queueable;

    private string $videoPath;

    private string $outputPath;

    public function __construct(string $videoPath, string $outputPath)
    {
        $this-&gt;videoPath = $videoPath;
        $this-&gt;outputPath = $outputPath;
    }

    public function handle(): void
    {
        $tempOutputPath = &quot;{$this-&gt;outputPath}.tmp&quot;;

        // Convert the video to the WebM container format (SLOW).
        $video = FFMpeg::create()-&gt;open($this-&gt;videoPath);
        $video-&gt;save(new WebM, $tempOutputPath);
        File::move($tempOutputPath, $this-&gt;outputPath);
    }
}
</code></pre>
<p>To create a job we need to make a class that implements the <code>ShouldQueue</code> interface and uses the <code>Queueable</code> trait. The <code>handle</code> method is called when the job is executed. Converting videos with <code>php-ffmpeg</code> is done by passing in an input video path and calling the <code>save</code> method on the returned object. In this case we’re going to convert videos to the WebM container format. Additional options can be specified here as well, but for this example we’ll keep things simple.</p>
<p>One important thing to note with this implementation is that the converted video is moved to a file path known by the Livewire component. To keep things simple, we’re going to modify the component to check this file path until the file appears. While that’s fine for demo purposes, it won’t work in an app deployed at a larger scale with multiple instances. In that scenario, it would be better to write a URL to the file (if uploaded to something like S3) to a cache like Redis, and check that instead.</p>
<p>Now let’s use this job! Edit <code>app/Livewire/VideoUploader.php</code> and let’s add some new properties and expand on our <code>save</code> method.</p>
<pre><code class="language-php">use App\Jobs\ProcessVideo;
use Illuminate\Contracts\View\View;
use Illuminate\Support\Facades\Storage;
use Livewire\Attributes\Validate;
use Livewire\Component;
use Livewire\Features\SupportFileUploads\TemporaryUploadedFile;
use Livewire\WithFileUploads;

class VideoUploader extends Component
{
    use WithFileUploads;

    /**
     * @var TemporaryUploadedFile
     */
    #[Validate(&#39;mimetypes:video/avi,video/mpeg,video/quicktime&#39;)]
    public $video;

    public ?string $jobStatus = &#39;Inactive&#39;;

    public ?string $outputVideoLink = null;

    public ?string $outputPath = null;

    public ?string $outputFilename = null;

    public function save(): void
    {
        $this-&gt;jobStatus = &#39;In Progress&#39;;
        $this-&gt;outputVideoLink = null;
        $this-&gt;outputPath = null;
        $this-&gt;outputFilename = null;

        // Store the uploaded file and generate the input and output paths of
        // the video to be converted for the job.
        $videoFilename = $this-&gt;video-&gt;store();
        $videoPath = Storage::disk(&#39;local&#39;)-&gt;path($videoFilename);
        $videoPathInfo = pathinfo($videoPath);
        $this-&gt;outputFilename = &quot;{$videoPathInfo[&#39;filename&#39;]}.webm&quot;;
        $this-&gt;outputPath = &quot;{$videoPathInfo[&#39;dirname&#39;]}/{$this-&gt;outputFilename}&quot;;

        // Add the long-running job onto the queue to be processed when possible.
        ProcessVideo::dispatch($videoPath, $this-&gt;outputPath);
    }

    ...
}
</code></pre>
<p>How this works is that we tell the job where it can find the video, and where it should output the converted video when it’s done. We want the output filename to be the same as the original with just the extension changed, so we use <code>pathinfo</code> to extract the parts for us.</p>
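<p>For clarity, here’s what <code>pathinfo</code> returns for a typical path (the path itself is just an example):</p>
<pre><code class="language-php">$info = pathinfo(&#39;/storage/app/private/video.mp4&#39;);
// [
//     &#39;dirname&#39;   =&gt; &#39;/storage/app/private&#39;,
//     &#39;basename&#39;  =&gt; &#39;video.mp4&#39;,
//     &#39;extension&#39; =&gt; &#39;mp4&#39;,
//     &#39;filename&#39;  =&gt; &#39;video&#39;,
// ]
</code></pre>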
<p>The <code>ProcessVideo::dispatch</code> method is fire and forget. We aren’t given a handle of any kind to be able to check the status of a job out of the box. For this example we’ll be waiting for the video to appear at the output location.</p>
<p>To process jobs on the queue, you need to start a queue worker, as jobs are not handled by the web server process we are currently running. You can start the worker with <code>artisan</code>:</p>
<pre><code class="language-bash">php artisan queue:work
</code></pre>
<p>Now the queue is running and ready to process jobs! Technically you can upload videos for conversion right now and have them be processed by the job, but you won’t be able to download the file in the browser yet.</p>
<h2>Generating a Temporary URL and Sending it with Livewire</h2>
<p>To download the file we need to generate a temporary URL. Traditionally this feature has only been available for S3, but as of Laravel v11.24.0 this is also usable with the local filesystem, which is really useful for development.</p>
<p>Let’s add a place to render the download link and the status of the job. Edit <code>resources/views/livewire/video-uploader.blade.php</code> and add a new section under the form:</p>
<pre><code class="language-html">&lt;div&gt;
    &lt;h1&gt;WebM Video Converter&lt;/h1&gt;

    &lt;form wire:submit=&quot;save&quot;&gt;
        &lt;input type=&quot;file&quot; wire:model=&quot;video&quot;&gt;

        @error(&#39;video&#39;) &lt;span class=&quot;error&quot;&gt;{{ $message }}&lt;/span&gt; @enderror

        &lt;button type=&quot;submit&quot;&gt;Convert Video&lt;/button&gt;
    &lt;/form&gt;

    &lt;div wire:poll&gt;
        &lt;p&gt;Job Status: {{ $jobStatus }}&lt;/p&gt;
        @if ($outputVideoLink != null)
            &lt;a href=&quot;{{ $outputVideoLink }}&quot; download&gt;Download Converted File&lt;/a&gt;
        @endif
    &lt;/div&gt;
&lt;/div&gt;
</code></pre>
<p>Note the <code>wire:poll</code> attribute. This will cause the Blade echo statements inside of the div to refresh occasionally and will re-render if any of them changed. By default, it will re-render every 2.5 seconds. Let’s edit <code>app/Livewire/VideoUploader.php</code> to check the status of the conversion, and generate a download URL.</p>
<pre><code class="language-php">use App\Jobs\ProcessVideo;
use Illuminate\Contracts\View\View;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Facades\File;
use Livewire\Attributes\Validate;
use Livewire\Component;
use Livewire\Features\SupportFileUploads\TemporaryUploadedFile;
use Livewire\WithFileUploads;

class VideoUploader extends Component
{
    use WithFileUploads;

    ...

    public function render(): View
    {
        if ($this-&gt;outputPath != null &amp;&amp; File::exists($this-&gt;outputPath)) {
            // Create a temporary URL that lasts for 10 minutes and allows the
            // user to download the processed video file.
            $this-&gt;outputVideoLink = Storage::temporaryUrl(
                $this-&gt;outputFilename, now()-&gt;addMinutes(10)
            );

            $this-&gt;jobStatus = &#39;Done&#39;;
            $this-&gt;outputPath = null;
            $this-&gt;outputFilename = null;
        }

        return view(&#39;livewire.video-uploader&#39;, [
            &#39;jobStatus&#39; =&gt; $this-&gt;jobStatus,
            &#39;outputVideoLink&#39; =&gt; $this-&gt;outputVideoLink,
        ]);
    }
}
</code></pre>
<p>Every time the page polls, we check if the video has appeared at the output path. Once it’s there, we generate the link, store it to state, and pass it to the view. Temporary URLs are customizable as well. You can change the expiration time to any duration you want, and if you’re using S3, you can also pass S3 request parameters using the optional third argument.</p>
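<p>For example, here’s a sketch of what that could look like with the S3 driver. The response parameters shown are standard S3 options, and the content type and filename used here are just examples:</p>
<pre><code class="language-php">use Illuminate\Support\Facades\Storage;

// The third argument is forwarded to S3 as request parameters,
// here forcing the file to download under a friendly name.
$url = Storage::disk(&#39;s3&#39;)-&gt;temporaryUrl(
    $this-&gt;outputFilename,
    now()-&gt;addMinutes(10),
    [
        &#39;ResponseContentType&#39; =&gt; &#39;video/webm&#39;,
        &#39;ResponseContentDisposition&#39; =&gt; &#39;attachment; filename=&quot;converted.webm&quot;&#39;,
    ]
);
</code></pre>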
<p>Now you should be able to upload videos and download them with a link when they’re done processing!</p>
<h2>Limitations</h2>
<p>Although this setup works fine in a development environment with a small application, there are some changes you might need to make if you plan on scaling beyond that.</p>
<p>If your application is being served by multiple nodes then you will need to use a remote storage driver such as the <a href="https://laravel.com/docs/11.x/filesystem#amazon-s3-compatible-filesystems">S3 driver</a>, which works with any S3 compatible file storage service. The same Laravel API calls are used regardless of the driver you use. You would only have to update the driver passed into the <code>Storage</code> facade methods from <code>local</code> to <code>s3</code>, or whichever driver you choose.</p>
<p>You also wouldn’t be able to rely on the same local filesystem being shared between your job workers and your app server, and would have to use a storage driver or database to pass files between them. It’s also worth noting that queues and jobs use the database driver by default, which this demo relies on for simplicity’s sake, but SQS, Redis, and Beanstalkd can be used as well. Consider using one of these other drivers depending on how much traffic you need to process.</p>
<h2>Conclusion</h2>
<p>In this article, we explored how to utilize queues and temporary URLs to implement a video conversion site. Laravel queues allow for efficient processing of long-running tasks like video conversion in a way that won’t bog down your backend servers that are processing web requests.</p>
<p>While this setup works fine for development, some changes would need to be made to scale it, such as using remote storage drivers to pass data between the web server and queue workers. By effectively leveraging Laravel’s features, developers can create robust and scalable applications with relative ease.</p>
]]></description>
            <link>https://www.thisdot.co/blog/an-introduction-to-laravel-queues-and-temporary-urls</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/an-introduction-to-laravel-queues-and-temporary-urls</guid>
            <pubDate>Tue, 04 Feb 2025 10:08:21 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[An example-based guide to CSS Cascade Layers]]></title>
            <description><![CDATA[<p>CSS is actually good now! If you’ve been a web developer for a while, you’ll know this hasn’t always been the case. Over the past few years, a lot of really amazing features have been added that now support all the major browsers. Cascading and selector specificity have always been a pain point when writing stylesheets. <a href="https://developer.mozilla.org/en-US/docs/Learn/CSS/Building_blocks/Cascade_layers">CSS cascade layers</a> is a new feature that provides us with a lot more power and flexibility for tackling this problem. We no longer need to resort to tricky specificity hacks or order-of-appearance magic.</p>
<p>Cascade layers are really easy to get started with. I think the best way to understand how and when they are useful is by walking through some practical examples. </p>
<p>In this post, we’ll cover:</p>
<ul>
<li>What CSS cascade layers are and how they work</li>
<li>Real-world examples of using layers to manage style priorities</li>
<li>How Tailwind CSS leverages cascade layers</li>
</ul>
<h4><strong>What are CSS Cascade Layers?</strong></h4>
<p>Imagine CSS cascade layers as drawers in a filing cabinet, each holding a set of styles. The drawer at the top represents the highest priority, so when you open the cabinet, you first access the styles in that drawer. If a style isn&#39;t found there, you move down to the next drawer until you find what you need. </p>
<p>Traditionally, CSS styles cascade by specificity (i.e., more specific selectors win) and source order (styles declared later in the file override earlier ones). Cascade layers add a new, structured way to manage styles within a single origin—giving you control over which layer takes precedence without worrying about specificity.</p>
<p>This is useful when you need to control the order of styles from different sources, like:</p>
<ul>
<li><strong>Resets</strong> (e.g., Normalize)</li>
<li><strong>Third-party libraries</strong> (e.g., Tailwind CSS)</li>
<li><strong>Themes</strong> and <strong>overrides</strong></li>
</ul>
<p>You define cascade layers using the <code>@layer</code> rule, assigning styles to a specific layer. The order in which layers are defined determines their priority in the cascade. Styles in later layers override those in earlier layers, regardless of specificity or order within the file.</p>
<p>Here’s a quick example:</p>
<pre><code class="language-css">@layer base {
  p { color: blue; }
}

@layer theme {
  p { color: darkblue; }
}
</code></pre>
<p>In this example, since the <code>theme</code> layer comes after <code>base</code>, it overrides the paragraph text color to dark blue—even though both declarations have the same specificity.</p>
<h4><strong>How Do CSS Layers Work?</strong></h4>
<p>Cascade layers allow you to assign rules to specific named layers, and then control the order of those layers. This means that:</p>
<ul>
<li>Layers declared later take priority over earlier ones.</li>
<li>You don’t need to increase selector specificity to override styles from another layer—just place it in a higher-priority layer.</li>
<li>Styles outside of any layer will always take precedence over layered styles unless explicitly ordered.</li>
</ul>
<p>Let’s break it down with a more detailed example.</p>
<pre><code class="language-css">audio {
  display: flex;
}

@layer reset {
  audio[controls] {
    display: block;
  }
}
</code></pre>
<p>In this example:</p>
<ul>
<li>The unlayered <code>audio</code> rule takes precedence because it’s not part of the <code>reset</code> layer, even though the <code>audio[controls]</code> rule has higher specificity.</li>
<li>Without the cascade layers feature, specificity and order-of-appearance would normally decide the winner, but now, we have clear control by defining styles in or outside of a layer.</li>
</ul>
<h4><strong>Use Case: Overriding Styles with Layers</strong></h4>
<p>Cascade layers become especially useful when working with frameworks and third-party libraries. Say you’re using a CSS framework that defines a keyframe animation, but you want to override it in your custom styles. Normally, you might have to rely on specificity or carefully place your custom rules at the end. With layers, this is simplified:</p>
<pre><code class="language-css">@layer framework, custom;

@layer framework {
  @keyframes slide-left {
    from { margin-left: 0; }
    to { margin-left: -100%; }
  }
}

@layer custom {
  @keyframes slide-left {
    from { translate: 0; }
    to { translate: -100% 0; }
  }
}
</code></pre>
<p>There’s some new syntax in this example: multiple layers can be declared at once, which sets their order up front. With that first line in place, we could even swap the order in which the <code>framework</code> and <code>custom</code> blocks appear in the file and still get the same result.</p>
<p>Here, the <code>custom</code> layer comes after <code>framework</code>, so the <code>translate</code> animation takes precedence, no matter where these rules appear in the file.</p>
<h4><strong>Cascade Layers in Tailwind CSS</strong></h4>
<p><a href="https://tailwindcss.com/">Tailwind CSS</a>, a utility-first CSS framework, uses cascade layers starting with version 3. Tailwind organizes its layers in a way that gives you flexibility and control over third-party utilities, customizations, and overrides.</p>
<p>In Tailwind, the framework styles are divided into distinct layers like <strong>base</strong>, <strong>components</strong>, and <strong>utilities</strong>. These layers can be reordered or combined with your custom layers.</p>
<p>Here&#39;s an example:</p>
<pre><code class="language-css">@layer base {
  /* Base styles like resets or typography */
  h1 { font-size: 2rem; }
}

@layer components {
  /* Component-level styles */
  .btn { background-color: blue; }
}

@layer utilities {
  /* Utility classes (e.g., margin, padding) */
  .mt-4 { margin-top: 1rem; }
}
</code></pre>
<p>Tailwind assigns these layers in a way that <strong>utilities</strong> take precedence over <strong>components</strong>, and <strong>components</strong> override <strong>base</strong> styles. You can use Tailwind’s <code>@layer</code> directive to extend or override any of these layers with your custom rules.</p>
<p>For example, if you want to add a custom button style that overrides Tailwind’s built-in <code>btn</code> component, you can do it like this:</p>
<pre><code class="language-css">@layer components {
  .btn { background-color: green; }
}
</code></pre>
<h4><strong>Practical Example: Layering Resets and Overrides</strong></h4>
<p>Let’s say you’re building a design system with both Tailwind and your own custom styles. You want a reset layer, some basic framework styles, and custom overrides.</p>
<pre><code class="language-css">@layer reset {
  * { box-sizing: border-box; }
}

@layer framework {
  p { color: gray; }
}

@layer custom {
  p { color: black; }
}
</code></pre>
<p>In this setup:</p>
<ul>
<li>The reset layer applies basic resets (like <code>box-sizing</code>).</li>
<li>The framework layer provides default styles for elements like paragraphs.</li>
<li>Your custom layer overrides the paragraph color to black.</li>
</ul>
<p>By controlling the layer order, you ensure that your custom styles override both the framework and reset layers, without messing with specificity.</p>
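<p>If you want that ordering to be explicit, and independent of where the blocks appear in your stylesheet, you can declare it up front, just like the keyframes example did earlier:</p>
<pre><code class="language-css">/* Later layers win: custom beats framework, which beats reset */
@layer reset, framework, custom;
</code></pre>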
<h3><strong>Conclusion</strong></h3>
<p>CSS cascade layers are a powerful tool that helps you organize your styles in a way that’s scalable, easy to manage, and doesn’t rely on specificity hacks or the appearance order of rules. When used with frameworks like Tailwind CSS, you can create clean, structured styles that are easy to override and customize, giving you full control of your project’s styling hierarchy. It really shines for managing complex projects and integrating with third-party CSS libraries.</p>
]]></description>
            <link>https://www.thisdot.co/blog/an-example-based-guide-to-css-cascade-layers</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/an-example-based-guide-to-css-cascade-layers</guid>
            <pubDate>Wed, 22 Jan 2025 13:22:28 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Importance of a Scientific Mindset in Software Engineering: Part 1 (Source Evaluation & Literature Review)]]></title>
            <description><![CDATA[<h1>The Importance of a Scientific Mindset in Software Engineering: Part 1 (Source Evaluation &amp; Literature Review)</h1>
<p>Today, I will write about something very dear to me: science. Not science as a field of study, but science as a way of thinking.</p>
<p>It&#39;s easy nowadays to get lost in the sea of information, fall for marketing hype, or even be trolled by a hallucinating LLM. A scientific mindset can be a powerful tool for navigating the complex modern world and the world of software engineering in particular.</p>
<p>Not only is it a powerful tool, but I&#39;ll argue that it&#39;s a must nowadays if you want to make informed decisions, solve problems effectively, and become a better engineer.</p>
<p>As software engineers, we are constantly confronted with an overwhelming array of frameworks, technologies, and infrastructure choices. Sometimes, it feels like there&#39;s a new tool or platform every day, each accompanied by its own wave of hype and marketing. It&#39;s easy to feel lost in the myriad of information or even suffer from FOMO and insecurity about not jumping on the latest bandwagon.</p>
<p>But it&#39;s not only about the abundance of information and making technological decisions. As engineers, we often write documentation, blog posts, talks, or even books. We need to be able to communicate our ideas clearly and effectively. Furthermore, we have to master the art of debugging code, which is essentially a scientific process where we form hypotheses, test them, and iterate until we find the root cause of the problem.</p>
<p>Therefore, here&#39;s my hot take: engineering is a science; hence, to deserve an engineer title, one needs to think like a scientist.</p>
<p>So, let&#39;s <em>review</em> (pun intended) what it means to think like a scientist in the context of software engineering.</p>
<h2>Systematic Review</h2>
<p>In science, systematic review is not only an essential means to understand a topic and map the current state of knowledge in the field, but it also has a well-defined methodology. You can&#39;t just google whatever supports your hypothesis and call it a day. You must define your research question, choose the databases you will search, set your inclusion and exclusion criteria, systematically search for relevant studies, evaluate their quality, and synthesize the results. Most importantly, you must be transparent about and describe your methodology in detail.</p>
<p>The general process of systematic review can be summarized in the following steps:</p>
<ol>
<li><p>Define your research question(s)</p>
</li>
<li><p>Choose databases and other sources to search</p>
</li>
<li><p>Define keywords and search terms</p>
</li>
<li><p>Define inclusion and exclusion criteria</p>
<p>a. Define practical criteria such as publication date, language, etc.</p>
<p>b. Define methodological criteria such as study design, sample size, etc.</p>
</li>
<li><p>Search for relevant studies</p>
</li>
<li><p>Evaluate the quality of the studies</p>
</li>
<li><p>Synthesize the results</p>
</li>
</ol>
<p>Source: <a href="https://books.google.cz/books/about/Conducting_Research_Literature_Reviews.html?id=2bKI6405TXwC">Conducting Research Literature Reviews: From the Internet to Paper by Dr. Fink</a></p>
<p>I&#39;m pretty sure you can see where I&#39;m going with this. There are many use cases in software engineering where a process similar to systematic review can be applied. Whether you&#39;re evaluating a new technology, choosing a tech stack for a new project, or researching for a blog post or a conference talk, it&#39;s important to be systematic in your approach, transparent about your methodology, and honest about the limitations of your research.</p>
<p>Of course, when choosing a tech stack to learn or researching for a blog post, you don&#39;t have to be as rigorous as in a scientific study. But a few of these steps will always be worth following. Let&#39;s focus on those and see how we can apply them in the context of software engineering.</p>
<h3>Defining Your Research Question(s)</h3>
<p>Before you start researching, it&#39;s important to define your research questions. What are you trying to find out? What problem are you trying to solve? What are the goals of your research? These questions will help you stay focused and avoid focusing on irrelevant information.</p>
<blockquote>
<p><strong>A practical example:</strong> If you&#39;re evaluating, say, whether to use bundler <em>A</em> or bundler <em>B</em> without a clear research question, you might end up focusing on marketing claims about how bundler <em>A</em> is faster than bundler <em>B</em> or how bundler <em>B</em> is more popular than bundler <em>A</em>, even though such aspects may have minimal impact on your project. With a clear research question, you can focus on what really matters for your project, like how well each bundler integrates with your existing tools, how well they handle your specific use case, or how well they are maintained.</p>
</blockquote>
<p>A research question is not a hypothesis - you don&#39;t have to have a clear idea of the answer. It&#39;s more about defining the scope of your research and setting clear goals. It can be as simple and general as &quot;What are the pros and cons of using React vs. Angular for a particular project?&quot; but also more specific and focused, like &quot;What are the legal implications of using open-source library <em>X</em> for purpose <em>Y</em> in project <em>Z</em>?&quot;. You can have multiple research questions, but keeping them focused and relevant to your goals is essential.</p>
<p>In my personal opinion, part of the scientific mindset is automatically having at least a vague idea of a research question in your head whenever you&#39;re facing a problem or a decision, and that alone can make you a more confident and effective engineer.</p>
<h3>Choosing Databases and Other Sources to Search</h3>
<p>In engineering, some information (especially when researching rare bugs) can be scarce, and you have to search wherever and take what you can get. Hence, this step is arguably much easier in science, where you can include well-established databases and publications in your search. Information in science is simply more standardized and easier to find.</p>
<p>There are, however, still some decisions to be made about where to search. Do you want to include community websites like <a href="https://stackoverflow.com/">StackOverflow</a> or <a href="https://www.reddit.com/">Reddit</a>? Do you want to include marketing materials from the companies behind the technologies you&#39;re evaluating? These can all be valid sources of information, but they have their limitations and biases, and it&#39;s important to be aware of them.</p>
<p>Or do you want to ask an LLM? I deliberately didn&#39;t include LLMs in the list of valid sources of information, as they are not literature databases in the traditional sense, and I wouldn&#39;t consider them a search source for literature research. And for a very good reason: they are essentially a black box, and therefore, you cannot reliably describe a reproducible methodology for your search.</p>
<p>That doesn&#39;t mean you shouldn&#39;t ask an LLM for inspiration or a TL;DR, but you should always verify the information you get from them and be aware of their limitations.</p>
<h3>Defining Keywords and Search Terms</h3>
<p>This section will be short, as most of you are familiar with the concept of keywords and search terms and how to use search engines. However, I still wanted to highlight the importance of knowing how to search effectively for a software engineer. It&#39;s not just about typing in a few keywords and hoping for the best. It&#39;s about learning how to use advanced search operators, filter out irrelevant results, and find the information you need quickly and efficiently.</p>
<p>If you&#39;re not familiar with advanced search operators, I highly recommend you take some time to learn them, for example, at <a href="https://www.freecodecamp.org/news/how-to-google-like-a-pro-10-tips-for-effective-googling/">FreeCodeCamp</a>. Please note, however, that the article is specific to Google and different search engines may have different operators and syntax. This is especially true for scientific databases, which often have their own search syntax and operators. So, if you&#39;re doing more formal research, familiarize yourself with the database&#39;s search syntax. The underlying principles, however, are pretty much the same everywhere; just the syntax and UI might differ.</p>
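<p>To make this concrete, here are a few operators that work in Google search; treat them as illustrative, since other engines and scientific databases have their own equivalents:</p>
<pre><code>&quot;exact phrase&quot;              match the phrase verbatim
bundler -webpack            exclude a term from the results
site:developer.mozilla.org  restrict results to one site
filetype:pdf                restrict results to a file type
</code></pre>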
<p>With a solid search strategy in place, the next critical step is to assess the quality of the information we find.</p>
<h3>Methodological Criteria and Evaluation of Sources</h3>
<p>This is where things get interesting. In science, evaluating the quality of the studies is a crucial step in the systematic review process. You can&#39;t just take the results of a study at face value - you need to critically evaluate its design, the sample size, the methodology, and the conclusions - and you need to be aware of the limitations of the study and the potential biases that may have influenced the results.</p>
<p>In science, there is a pretty straightforward yet helpful categorization of sources that, to my surprise, my students needed help understanding because no one had ever explained it to them. So let me lay out and explain the three categories to you now:</p>
<p><strong>1. Primary sources</strong></p>
<p>Primary sources represent original research. You can find them in studies, conference papers, etc. In science, this is what you generally want to cite in your own research.</p>
<p>However, remember that only some of what you find in an original research paper is a primary source. Only the parts that present the original research are primary sources. For example, the introduction can contain citations to other studies, which are not primary, but secondary sources.</p>
<p>While primary sources can sometimes be perceived as hard to read and understand, in many cases they are actually easier to approach than you might expect: the methods and results are usually presented in condensed form in the abstract, and often you can just skim the introduction and discussion to get a good idea of the study.</p>
<p>In software engineering, primary sources can sometimes be papers, but more often, they are original documentation, case studies, or even blog posts that present original research or data. For example, if you&#39;re evaluating a new technology, the official documentation, case studies, and blog posts from its developers can be considered primary sources.</p>
<p><strong>2. Secondary sources</strong></p>
<p>Secondary sources are typically reviews, meta-analyses, and other sources that summarize, analyze, or reference the primary sources. A good way to identify a source as secondary is to look for citations to other studies. If a claim has a citation, it&#39;s likely a secondary source. On the other hand, something is likely wrong if it doesn&#39;t have a citation and doesn&#39;t seem to present original research.</p>
<p>Secondary sources can be very useful for getting an overview of a topic, understanding the current state of knowledge, and finding relevant primary sources. Meta-analyses, in particular, can provide a beneficial point of view on a subject by combining the results of multiple studies and looking for patterns and trends.</p>
<p>The downside of secondary sources is that they can introduce information noise, as they are basically introducing another layer of interpretation and analysis. So, it&#39;s always a good idea to go back to the primary sources and verify the information you get from secondary sources.</p>
<p>Secondary sources in software engineering include blog posts, talks, or articles that summarize, analyze, or reference primary sources. For example, if you&#39;re researching a new technology, a blog post that compares different technologies based on their documentation and/or studies made by their authors can be considered a secondary source.</p>
<p><strong>3. Tertiary sources</strong></p>
<p>Tertiary sources represent a further level of abstraction. They are typically textbooks, encyclopedias, and other sources that summarize, analyze, or reference secondary sources. They are useful for getting a broad overview of a topic, understanding the basic concepts, and finding relevant secondary sources.</p>
<p>One example I see as a tertiary source is <a href="https://www.wikipedia.org/">Wikipedia</a>, and while you shouldn&#39;t ever cite Wikipedia in academic research, it can be a good starting point for getting an overview of a topic and finding relevant primary and secondary sources as you can easily click through the references.</p>
<blockquote>
<p>Note: It&#39;s fine to reference Wikipedia in a blog post or a talk to give your audience a convenient explanation of a term or concept. I&#39;m even doing it in this post. However, you should always verify that the article is up to date and that the information is correct.</p>
</blockquote>
<p>The distinction between primary, secondary, and tertiary sources in software engineering is not as clear-cut as in science, but the general idea still applies. When researching a topic, knowing the different types of sources and their limitations is essential. Primary sources are generally the most reliable and should be your go-to when seeking evidence to support your claims. Secondary sources can help get an overview of a topic, but they should be used cautiously, as they can introduce bias and noise. Tertiary sources are good for getting a broad overview of a topic but should not be used as evidence in academic research.</p>
<h4>Evaluating Sources</h4>
<p>Now that we have the categories laid out, let&#39;s talk about evaluating the quality of the sources because, realistically, not all sources are created equal.</p>
<p>In science, we have some well-established criteria for evaluating the quality of a source. Some focus on the general credibility of the source, like the reputation of the journal or the author. In contrast, others focus on the quality of the study itself, like the study design, the sample size, and the methodology.</p>
<p>First, we usually look at the <strong>number of citations</strong> and the <a href="https://en.wikipedia.org/wiki/Impact_factor">impact factor</a> of the journal in which the study was published. These numbers can give us an idea of how well the scientific community received the study and how much other researchers have cited it.</p>
<p>In software engineering, we don&#39;t have the concept of impact factor when it comes to researching a concept or a technology, but we can still look at how widely a particular piece of information is shared, how well the professional community receives it, and how reputable the person sharing it is.</p>
<p>Second, we look at the <strong>study design</strong> and the <strong>methodology</strong>. Does the study have a clear research question? Is the study design appropriate for the research question? Are the methods well-described and reproducible? Are the results presented clearly and honestly? Do the data support the conclusions?</p>
<p>Arguably, in software engineering, the honest and clear presentation of the method and results can be even more important than in science, given the amounts of money circulating in the industry and the potential for conflicts of interest. Therefore, it&#39;s important to understand where the data is coming from, how it was collected, and how it was analyzed.</p>
<p>If a company (or their DevRel person) is presenting data that show their product is the best (fastest, most secure...), it&#39;s important to be aware of the potential biases and conflicts of interest that may have influenced the results.</p>
<p>The ways in which the results can be skewed may include:</p>
<ul>
<li><p><strong>Missing, incomplete, or inappropriate methodology</strong>. Often, the methodology is not described in enough detail to be reproducible, or the whole experiment is designed in a way that doesn&#39;t actually answer the research question properly. For example, the methodology can omit important details, such as the environment in which the experiment was conducted or even the way the data was collected (e.g., to hide selection bias).</p>
</li>
<li><p><strong>Selection bias</strong> can be a common issue in software engineering experiments. For example, if someone is comparing two technologies, they might choose a dataset that they expect to perform better with one of the technologies or a metric that they expect to show a difference. Selection bias can lead to skewed results that don&#39;t reflect the technologies&#39; real-world performance.</p>
</li>
<li><p><strong>Publication bias</strong> is a common issue in science, where studies that show a positive result are more likely to be published than studies that show a negative outcome. In software engineering, this can manifest as a bias towards publishing success stories and case studies, while ignoring failures and negative results.</p>
</li>
<li><p><strong>Confirmation bias</strong> is a problem in science and software engineering alike. It&#39;s the tendency to look for evidence that confirms your hypothesis and ignore evidence that contradicts it. Confirmation bias can lead to cherry-picking data, misinterpreting results, and drawing incorrect conclusions.</p>
</li>
<li><p><strong>Conflict of interest</strong>. While less common in academic research, conflicts of interest can be a big issue in industry research. If a company is funding a study that shows its product in a positive light, it&#39;s important to be aware of the potential biases that may have influenced the results.</p>
</li>
</ul>
<p>Another thing we look at is the <strong>conclusions</strong>. Do the data support the conclusions? Are they reasonable and justified? Are they overstated or exaggerated? Are the limitations of the study acknowledged? Are the implications of the study discussed? It all goes back to honesty and transparency, which is crucial for evaluating the quality of the source.</p>
<p>Last but not least, we should look at the <strong>citations and references</strong> included in the source. In the same way we apply the systematic review process to our research, we should also apply it to the sources we use. I would argue that this is even more important in software engineering, where the information is often less standardized, and you come across many unsupported claims. If a source doesn&#39;t provide citations or references to back up their claims, it&#39;s a red flag that the information may not be reliable.</p>
<p>This brings us to something called <a href="https://en.wikipedia.org/wiki/Anecdotal_evidence">anecdotal evidence</a>. Anecdotal evidence is a personal story or experience used to support a claim. While anecdotal evidence can be compelling and persuasive, it is generally considered a weak form of evidence, as it is based on personal experience rather than empirical data. So when someone tells you that X is better than Y because they tried it and it worked for them, or that Z is true because they heard it from someone, take it with a massive grain of salt and look for more reliable sources of information.</p>
<p>That, of course, doesn&#39;t mean you should ask for a source under every post on social media, but it&#39;s important to recognize what&#39;s a personal opinion and what&#39;s a claim based on evidence.</p>
<h3>Synthesizing the Results</h3>
<p>Once you have gathered all the relevant information, it&#39;s time to synthesize the results. This is where you combine all the evidence you have collected, analyze it, and draw conclusions.</p>
<p>In science, this is often done as part of a <a href="https://en.wikipedia.org/wiki/Meta-analysis">meta-analysis</a>, where the results of multiple studies are combined and analyzed to look for patterns and trends using statistical methods. A meta-analysis is a powerful tool for synthesizing the results of multiple studies and drawing more robust conclusions than can be drawn from any single study.</p>
<p>You might not be doing a formal meta-analysis in software engineering, but you can still apply the same principles to your research. Look for common themes and trends in the information you have gathered, compare and contrast different sources, and draw conclusions based on the evidence.</p>
<h2>Conclusion</h2>
<p>Adopting a scientific way of thinking isn&#39;t just a nice-to-have in software engineering - it&#39;s essential to make informed decisions, solve problems effectively, and navigate the vast amount of information around you with confidence. Applying systematic review principles to your research allows you to gather reliable information, evaluate it critically, and draw sound conclusions based on evidence.</p>
<p>Let&#39;s summarize what such a systematic research approach can look like:</p>
<ul>
<li><strong>Define Clear Research Questions:</strong><ul>
<li>Start every project or decision-making process by clearly stating what you aim to achieve or understand.</li>
<li>Example: &quot;What factors should influence our choice between Cloud Service A and Cloud Service B for our application&#39;s specific needs?&quot;</li>
</ul>
</li>
<li><strong>Critically Evaluate Sources:</strong><ul>
<li>Identify the type of sources (primary, secondary, tertiary) and assess their credibility.</li>
<li>Be wary of biases and seek out multiple perspectives for a well-rounded understanding.</li>
</ul>
</li>
<li><strong>Be Aware of Biases:</strong><ul>
<li>Recognize common biases that can cloud judgment, such as confirmation or selection bias.</li>
<li>Actively counteract these biases by seeking disconfirming evidence and questioning assumptions.</li>
</ul>
</li>
<li><strong>Systematically Synthesize Information:</strong><ul>
<li>Organize your findings and analyze them methodically.</li>
<li>Use tools and frameworks to compare options based on defined criteria relevant to your project&#39;s goals.</li>
</ul>
</li>
</ul>
<p>I encourage you to embrace this scientific approach in your daily work. The next time you&#39;re facing a critical decision - be it selecting a technology stack, debugging complex code, or planning a project - apply these principles:</p>
<ul>
<li><strong>Start with a Question:</strong> Clearly define what you need to find out.</li>
<li><strong>Gather and Evaluate Information:</strong> Seek out reliable sources and scrutinize them.</li>
<li><strong>Analyze Systematically:</strong> Organize your findings and look for patterns or insights.</li>
<li><strong>Make Informed Decisions:</strong> Choose the path supported by evidence and sound reasoning.</li>
</ul>
<p>By doing so, you will enhance your problem-solving skills and contribute to a culture of thoughtful, evidence-based practice in the software engineering community.</p>
<p>The best part is that once you start applying a critical and systematic approach to your sources of information, it becomes second nature. You&#39;ll automatically start asking questions like, &quot;Where did this information come from?&quot; &quot;Is it reliable?&quot; and &quot;Can I reproduce the results?&quot; Doing so will make you much less susceptible to hype, marketing, and new shiny things, ultimately making you happier and more confident.</p>
<p>In the next part of this series, we&#39;ll look at applying the scientific mindset to debugging and using hypothesis testing and experimentation principles to solve problems more effectively.</p>
]]></description>
            <link>https://www.thisdot.co/blog/the-importance-of-a-scientific-mindset-in-software-engineering-part-1-source</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-importance-of-a-scientific-mindset-in-software-engineering-part-1-source</guid>
            <pubDate>Fri, 10 Jan 2025 13:10:36 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[D1 SQLite: Writing queries with the D1 Client API]]></title>
            <description><![CDATA[<h1><strong>Writing queries with the D1 Client API</strong></h1>
<p>In the previous post we defined our database schema, got up and running with migrations, and loaded some seed data into our database. In this post we will be working with our new database and seed data. If you want to participate, make sure to follow the steps in the first post.</p>
<p>We’ve been taking a minimal approach so far by using only <code>wrangler</code> and <code>sql</code> scripts for our workflow. The D1 Client API has a small surface area. Thanks to the power of SQL, we will have everything we need to construct all types of queries. Before we start writing our queries, let&#39;s touch on some important concepts.</p>
<h2>Prepared statements and parameter binding</h2>
<p>This is the<a href="https://developers.cloudflare.com/d1/build-with-d1/d1-client-api/#prepared-and-static-statements"> first section of the docs</a> and it highlights two different ways to write our SQL statements using the client API: prepared and static statements. Best practice is to use prepared statements because they are more performant and prevent<a href="https://en.wikipedia.org/wiki/SQL_injection#:~:text=In%20computing%2C%20SQL%20injection%20is,database%20contents%20to%20the%20attacker"> SQL injection</a> attacks. So we will write our queries using prepared statements.</p>
<p>We need to use<a href="https://developers.cloudflare.com/d1/build-with-d1/d1-client-api/#parameter-binding"> parameter binding</a> to build our queries with prepared statements. This is pretty straightforward and there are two variations.</p>
<p>By default, we add <code>?</code>s to our statement to represent values to be filled in. The bind method binds the parameters to each question mark by index: the first <code>?</code> is tied to the first parameter passed to bind, the second to the second, and so on. I would stick with this most of the time to avoid any confusion.</p>
<pre><code class="language-js">const stmt = db.prepare(&#39;SELECT * FROM users WHERE name = ? AND age = ?&#39;).bind( &#39;John Doe&#39;, 41 );
</code></pre>
<p>I like this second method less, as it feels like something I can imagine messing up very innocently. You can add a number directly after a question mark to indicate which parameter it should be bound to. In this example, we reverse the previous binding.</p>
<pre><code class="language-js">const stmt = db.prepare(&#39;SELECT * FROM users WHERE name = ?2 AND age = ?1&#39;).bind( 41, &#39;John Doe&#39; );
</code></pre>
<h3><strong>Reusing prepared statements</strong></h3>
<p>If we take the first example above and don’t bind any values, we have a statement that can be reused:</p>
<pre><code class="language-js">const stmt = db.prepare(&#39;SELECT * FROM users WHERE name = ? AND age = ?&#39;)

const results = stmt.bind(&#39;John Doe&#39;, 41).all()
const results = stmt.bind(&#39;Jane Doe&#39;, 38).all()
</code></pre>
<h2>Querying</h2>
<p>For the purposes of this post, we will just build example queries by writing them out directly in our Worker <code>fetch</code> handler. If you are building an app, I would recommend building functions or some other abstraction around your queries.</p>
<h3><strong>select queries</strong></h3>
<p>Let&#39;s write our first query against our data set to get our feet wet.</p>
<p>Here’s the initial worker code and a query for all authors:</p>
<pre><code class="language-js">import * as Schema from &#39;./schema&#39;;

export default {
    async fetch(request, env, ctx): Promise&lt;Response&gt; {
        let results = await env.DB.prepare(&#39;SELECT * FROM authors&#39;).all&lt;Schema.Author&gt;();

        return new Response(JSON.stringify(results.results), {
            headers: { &#39;content-type&#39;: &#39;application/json&#39; },
        });
    },
} satisfies ExportedHandler&lt;Env&gt;;
</code></pre>
<p>We pass our SQL statement into <code>prepare</code> and use the <code>all</code> method to get all the rows. Notice that we are able to pass our types to a generic parameter in <code>all</code>. This allows us to get a fully typed response from our query.</p>
<p>We can run our worker with <code>npm run dev</code> and access it at<a href="http://localhost:8787"> http://localhost:8787</a> by default. We’ll keep this simple workflow of writing queries and passing them as a <code>json</code> response for inspection in the browser. Opening the page we get our author results.</p>
<h3><strong>joins</strong></h3>
<p>Not using an ORM means we have full control over our own destiny. Like anything else though, this has tradeoffs. Let’s look at a query to fetch the list of posts that includes author and tags information.</p>
<pre><code class="language-js">type PostsWithAuthorsAndTags = Schema.Post &amp; {
    author_name: string;
    tags: string;
};

let results = await env.DB.prepare(
        `
SELECT
  posts.*,
    authors.name AS author_name,
    COALESCE(
        JSON_GROUP_ARRAY(tags.name),
        &#39;[]&#39;
    ) AS tags
FROM
    posts
JOIN
    authors ON posts.author_id = authors.id
LEFT JOIN
    posts_tags ON posts.id = posts_tags.post_id
LEFT JOIN
    tags ON posts_tags.tag_id = tags.id
GROUP BY
posts.id
            `
).all&lt;PostsWithAuthorsAndTags&gt;();
</code></pre>
<p>Let’s walk through each part of the query and highlight some pros and cons.</p>
<pre><code class="language-js">SELECT
  posts.*,
    authors.name AS author_name,
    COALESCE(
        JSON_GROUP_ARRAY(tags.name),
        &#39;[]&#39;
    ) AS tags
</code></pre>
<ul>
<li>The query selects all columns from the <code>posts</code> table.</li>
<li>It also selects the <code>name</code> column from the <code>authors</code> table and renames it to <code>author_name</code>.</li>
<li>It aggregates the <code>name</code> column from the <code>tags</code> table into a JSON array. If there are no tags, it returns an empty JSON array. This aggregated result is renamed to <code>tags</code>.</li>
</ul>
<pre><code class="language-js">FROM
    posts
JOIN
    authors ON posts.author_id = authors.id
LEFT JOIN
    posts_tags ON posts.id = posts_tags.post_id
LEFT JOIN
    tags ON posts_tags.tag_id = tags.id
GROUP BY
    posts.id
</code></pre>
<ul>
<li>The query starts by selecting data from the <code>posts</code> table.</li>
<li>It then joins the <code>authors</code> table to include author information for each post, matching posts to authors using the <code>author_id</code> column in <code>posts</code> and the <code>id</code> column in <code>authors</code>.</li>
<li>Next, it left joins the <code>posts_tags</code> table to include tag associations for each post, ensuring that all posts are included even if they have no tags.</li>
<li>Next, it left joins the <code>tags</code> table to include tag names, matching tags to posts using the <code>tag_id</code> column in <code>posts_tags</code> and the <code>id</code> column in <code>tags</code>.</li>
<li>Finally, it groups the results by post id so that all rows with the same post id are combined into a single row.</li>
</ul>
<p>SQL provides a lot of power to query our data in interesting ways. <code>JOIN</code>s will typically be more performant than performing additional queries. You could just as easily write a simpler version of this query that uses subqueries to fetch post tags and join all the data by hand in JavaScript. This is the nice thing about writing SQL: you’re free to fetch and handle your data however you please.</p>
<p>Our results should look similar to this:</p>
<pre><code class="language-js">[
  {
    &quot;id&quot;: 1,
    &quot;author_id&quot;: 1,
    &quot;title&quot;: &quot;Exploring the Alps&quot;,
    &quot;content&quot;: &quot;Content about exploring the Alps...&quot;,
    &quot;published_at&quot;: &quot;2024-07-31 18:37:21&quot;,
    &quot;author_name&quot;: &quot;Alice Smith&quot;,
    &quot;tags&quot;: &quot;[\\&quot;Travel\\&quot;,\\&quot;Photography\\&quot;]&quot;
  },
  ...
]
</code></pre>
<p>This brings us to our next topic.</p>
<h3><strong>Marshaling / coercing result data</strong></h3>
<p>A couple of things we notice about the format of the result data our query provides:</p>
<p>Rows are flat. We join the author directly onto the post and prefix its column names with <code>author</code>.</p>
<pre><code>&quot;author_name&quot;: &quot;Alice Smith&quot;
</code></pre>
<p>Using an ORM we might get the data back as a child object:</p>
<pre><code class="language-js">{
  &quot;id&quot;: 1,
  &quot;title&quot;: &quot;Exploring the Alps&quot;,
  &quot;author&quot;: {
    &quot;name&quot;: &quot;Alice Smith&quot;
  },
  ...
 }
</code></pre>
<p>Another thing is that our <code>tags</code> data is a JSON <code>string</code> and not a JavaScript array. This means that we will need to parse it ourselves.</p>
<pre><code class="language-js">result.tags = JSON.parse(result.tags)
</code></pre>
<p>This isn’t the end of the world but it is some more work on our end to coerce the result data into the format that we actually want.</p>
<p>This problem is handled by most ORMs, and in my opinion, it’s their main selling point.</p>
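<p>As a minimal sketch, here’s the kind of coercion helper you might write yourself; <code>toPost</code> is a hypothetical name, and it assumes the flat row shape returned by the query above:</p>
<pre><code class="language-js">// Hypothetical helper: turn a flat joined row into a nested object
function toPost(row) {
    const { author_name, tags, ...post } = row;
    return {
        ...post,
        author: { name: author_name }, // nest the author like an ORM would
        tags: JSON.parse(tags), // tags arrives as a JSON string
    };
}

const posts = results.results.map(toPost);
</code></pre>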
<h3><strong>insert / update / delete</strong></h3>
<p>Next, let’s write a function that will add a new post to our database.</p>
<pre><code class="language-js">async function createNewPost(env: Env, newPostData: NewPostData): Promise&lt;{ id: number }&gt; {
    // Insert the new post into the posts table
    const postResult = await env.DB.prepare(
        `
INSERT INTO posts (author_id, title, content)
VALUES (?, ?, ?)
RETURNING id
                    `
    )
        .bind(newPostData.authorId, newPostData.title, newPostData.content)
        .first&lt;{ id: number }&gt;();

    if (!postResult) {
        throw new Error(&#39;Failed to insert new post&#39;);
    }

    const postId = postResult.id;

    // Insert tags into the tags table if they don&#39;t already exist and get their IDs
    const tagIds = (
        await Promise.all(
            newPostData.tags.map(async (tag) =&gt; {
                await env.DB.prepare(
                    `
INSERT OR IGNORE INTO tags (name)
VALUES (?)
                    `
                )
                    .bind(tag)
                    .run();

                const tagResult = await env.DB.prepare(
                    `
SELECT id FROM tags WHERE name = ?
                    `
                )
                    .bind(tag)
                    .first&lt;{ id: number }&gt;();

                return tagResult?.id;
            })
        )
        // Filter after the promises resolve; filtering the promises themselves
        // would keep every entry, since promises are always truthy.
    ).filter(Boolean);

    // Link tags to the new post in the posts_tags table
    for (const tagId of tagIds) {
        await env.DB.prepare(
            `
INSERT INTO posts_tags (post_id, tag_id)
VALUES (?, ?)
                            `
        )
            .bind(postId, tagId)
            .run();
    }

    return { id: postId };
}
</code></pre>
<p>There are a few queries involved in our create post function:</p>
<ul>
<li>first we create the new post</li>
<li>next we run through the tags and either create or return an existing tag</li>
<li>finally, we add entries to our post_tags join table to associate our new post with the tags assigned</li>
</ul>
<p>We can test our new function by providing post content in query params on our index page and formatting them for our function.</p>
<pre><code class="language-js">const newPostData: NewPostData = {
    authorId: Number(url.searchParams.get(&#39;authorId&#39;)),
    tags: url.searchParams.get(&#39;tags&#39;)?.split(&#39;,&#39;) ?? [],
    title: url.searchParams.get(&#39;title&#39;) ?? &#39;&#39;,
    content: url.searchParams.get(&#39;content&#39;) ?? &#39;&#39;,
};
if (!newPostData.authorId || !newPostData.title || !newPostData.content) {
    return new Response(&#39;Missing required fields&#39;, { status: 400 });
}

const newPost = await createNewPost(env, newPostData);
</code></pre>
<p>I gave it a run like this: <code>http://localhost:8787/?authorId=1&amp;tags=Food%2CReview&amp;title=A+review+of+my+favorite+Italian+restaurant&amp;content=I+got+the+sausage+orchette+and+it+was+amazing.+I+wish+that+instead+of+baby+broccoli+they+used+rapini.+Otherwise+it+was+a+perfect+dish+and+the+vibes+were+great</code></p>
<p>And got a new post with the id 11.</p>
<p><code>UPDATE</code> and <code>DELETE</code> operations are pretty similar to what we’ve seen so far. Most complexity in your queries will be similar to what we’ve seen in the posts query where we want to <code>JOIN</code> or <code>GROUP BY</code> data in various ways.</p>
<p>To update the post we can write a query that looks like this:</p>
<pre><code class="language-js">await env.DB.prepare(
`
UPDATE posts
SET
author_id = COALESCE(?, author_id),
title = COALESCE(?, title),
content = COALESCE(?, content)
WHERE id = ?
`
)
.bind(updatedPostData.authorId, updatedPostData.title, updatedPostData.content, postId)
.run();
</code></pre>
<p><code>COALESCE</code> acts similarly to writing <code>a ?? b</code> in JavaScript: if the bound value we provide is null, it falls back to the column’s existing value.</p>
<p>We can delete our new post with a simple DELETE query:</p>
<pre><code class="language-js">await env.DB.prepare(
`
DELETE FROM posts WHERE id = ?
`
)
.bind(postId)
.run();
</code></pre>
<h3><strong>Transactions / Batching</strong></h3>
<p>One thing to note with D1 is that I don’t think the traditional style of SQLite transactions is supported. You can use the<a href="https://developers.cloudflare.com/d1/build-with-d1/d1-client-api/#dbbatch"> db.batch</a> API to achieve similar functionality, though.</p>
<p>According to the docs:</p>
<p>Batched statements are <a href="https://www.sqlite.org/lang_transaction.html">SQL transactions ↗</a>. If a statement in the sequence fails, then an error is returned for that specific statement, and it aborts or rolls back the entire sequence.</p>
<pre><code class="language-js">await db.batch([
    db.prepare(&quot;UPDATE users SET name = ?1 WHERE id = ?2&quot;).bind( &quot;John&quot;, 17 ),
    db.prepare(&quot;UPDATE users SET age = ?1 WHERE id = ?2&quot;).bind( 35, 19 ),
]);
</code></pre>
<h2>Summary</h2>
<p>In this post, we&#39;ve taken a hands-on approach to exploring the D1 Client API, starting with defining our database schema and loading seed data. We then dove into writing queries, covering the basics of prepared statements and parameter binding, before moving on to more complex topics like joins and transactions. We saw how to construct and execute queries to fetch data from our database, including how to handle relationships between tables and marshal result data into a usable format. We also touched on inserting, updating, and deleting data, and how to use transactions to ensure data consistency. By working through these examples, we&#39;ve gained a solid understanding of how to use the D1 Client API to interact with our database and build robust, data-driven applications.</p>
]]></description>
            <link>https://www.thisdot.co/blog/d1-sqlite-writing-queries-with-the-d1-client-api</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/d1-sqlite-writing-queries-with-the-d1-client-api</guid>
            <pubDate>Mon, 23 Dec 2024 10:51:31 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Building a Stripe App: A Step-by-Step Guide to QR Code Generation]]></title>
            <description><![CDATA[<h1>Building a Stripe App: A Step-by-Step Guide to QR Code Generation</h1>
<h2>Why Build a Stripe App?</h2>
<p>I recently participated in an audio space with the Stripe team, and something they said really stuck with me: the Stripe app store is a growing area that isn&#39;t overly saturated yet. There&#39;s a lot of potential for new apps, and companies can use this opportunity to grow.</p>
<p>I work at a company called This Dot Labs, and we&#39;ve created several Stripe apps and even own one. After looking at the data, I can confirm that the Stripe team was right!</p>
<h2>Creating a QR Code Generator App</h2>
<p>For this tutorial, we&#39;ll build a QR code app that can take a URL and generate a code for it. This is a good use case to help you understand the ins and outs of Stripe&#39;s developer tools.
<img src="//images.ctfassets.net/zojzzdop0fzx/5xWbtv5sXZDbtEPC7byVCj/24f21639cb7da75afc1df071fc70bf3d/image__84_.png" alt="stripeQRCode"></p>
<h3>Why QR Codes?</h3>
<p>QR codes are useful tools that have become common in e-commerce, restaurants, and other industries. While Stripe already has a QR code tool, we&#39;ll make our own to familiarize ourselves with their syntax and problem-solving approaches.</p>
<h3>Project Structure</h3>
<p>Before we dive into the implementation, let&#39;s look at the structure of our Stripe App QR Code project:</p>
<ul>
<li><code>.vscode</code>: Contains settings for Visual Studio Code</li>
<li><code>source/views</code>: Holds the main application views</li>
<li><code>.gitignore</code>: Specifies files to ignore in version control</li>
<li><code>stripe-app.json</code>: Defines the Stripe app configuration</li>
<li><code>ui-extensions.d.ts</code>: TypeScript declaration file for UI extensions</li>
<li><code>.build</code>: this is where the built Stripe app gets placed.
<img src="//images.ctfassets.net/zojzzdop0fzx/2cm1prG0zDmoAEXy9WT6vX/60b37685729b120dd67d8f39b2b7f563/image__85_.png" alt="image (85)"></li>
</ul>
<h2>Step-by-Step Implementation</h2>
<h3>1. Install Stripe Locally</h3>
<p>First, you need to install Stripe on your local machine. The documentation provides great instructions for this:</p>
<ul>
<li>For Mac users: Use Brew to install</li>
<li>For Windows users: Download the package and add it to your environment variables</li>
</ul>
<p>You can find the details on installing the Stripe CLI in the Stripe docs: <a href="https://docs.stripe.com/stripe-cli">https://docs.stripe.com/stripe-cli</a></p>
<p><a href="https://docs.stripe.com/stripe-cli"></a></p>
<p>On Windows, you must run <code>stripe login</code> from PowerShell, NOT from Git Bash or any other tool. Once you&#39;ve logged in, run <code>stripe apps start</code>. After the server is up and running, you can go back to using Git Bash or any other tool for everything else.</p>
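<p>The full sequence looks like this (run from PowerShell on Windows; macOS and Linux shells work as-is):</p>
<pre><code>stripe login       # authenticate the CLI with your Stripe account
stripe apps start  # serve your app locally for development
</code></pre>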
<h3>2. Install Dependencies</h3>
<p>We&#39;ll be using an extra package for QR code generation. Install it using npm:</p>
<pre><code class="language-jsx">npm install qrcode
</code></pre>
<h3>3. Set Up the Main Component</h3>
<p>Let&#39;s look at the <code>home.tsx</code> file, where we&#39;ll use Stripe&#39;s UI components:</p>
<pre><code class="language-tsx">import { Box, ContextView, Button, TextField, Banner } from 
&#39;@stripe/ui-extension-sdk/ui&#39;;
</code></pre>
<p>These components are similar to other UI libraries like Bootstrap or Tailwind CSS.</p>
<h3>4. Create the UI Structure</h3>
<p>Our app will have:</p>
<ul>
<li>An input field for the URL</li>
<li>Validation using a regex pattern</li>
<li>Error handling for invalid URLs</li>
<li>QR code generation and display</li>
</ul>
<p>Here is the Home.tsx file, located in the <code>src/views</code> folder:
<img src="//images.ctfassets.net/zojzzdop0fzx/7rwXUn2SxrMUkjiu0uh7xJ/25efe8855611e8fdc82508c5eb497612/image__86_.png" alt="image (86)"></p>
<pre><code class="language-jsx">import {
  Box,
  ContextView,
  Button,
  TextField,
  Img,
  Banner,
} from &quot;@stripe/ui-extension-sdk/ui&quot;;
import { useState } from &#39;react&#39;;
import QRCode from &#39;qrcode&#39;;
const Home = () =&gt; {
  const [url, setUrl] = useState(&#39;&#39;);
  const [qrCode, setQrCode] = useState(&#39;&#39;);
  const [error, setError] = useState(&#39;&#39;);
  const generateQRCode = async () =&gt; {
    try {
      if (!url) {
        setError(&quot;Please enter a URL.&quot;);
        return;
      }
//basic regex pattern for URL validation
      const urlPattern = new RegExp(
        &#39;^(https?:\\/\\/)?&#39; +
        &#39;((([a-z\\d]([a-z\\d-]*[a-z\\d])*)\\.)+[a-z]{2,}|&#39; +
        &#39;((\\d{1,3}\\.){3}\\d{1,3}))&#39; +
        &#39;(\\:\\d+)?(\\/[-a-z\\d%_.~+]*)*&#39; +
        &#39;(\\?[;&amp;a-z\\d%_.~+=-]*)?&#39; +
        &#39;(\\#[-a-z\\d_]*)?&#39;, &#39;i&#39;
      );
      if (!urlPattern.test(url)) {
        setError(&quot;Please enter a valid URL (e.g., https://example.com)&quot;);
        return;
      }
      const qrCodeDataUrl = await QRCode.toDataURL(url, {
        width: 200,
        margin: 2,
      });
      setQrCode(qrCodeDataUrl);
      setError(&#39;&#39;);
    } catch (error) {
      console.error(&quot;Error generating QR code&quot;, error);
      setError(&quot;Failed to generate QR Code. Please try again.&quot;);
    }
  };
  return (
    &lt;ContextView
      title=&quot;URL QR Code Generator&quot;
      brandColor=&quot;#635bff&quot;
      externalLink={{
        label: &quot;Stripe Docs&quot;,
        href: &quot;https://stripe.com/docs&quot;,
      }}
    &gt;
      &lt;Box css={{ stack: &quot;y&quot;, rowGap: &quot;large&quot;, padding: &quot;medium&quot; }}&gt;
        &lt;Box css={{ font: &quot;heading&quot;, marginBottom: &quot;medium&quot; }}&gt;
          Generate User Payment QR Code
        &lt;/Box&gt;
        &lt;TextField
          label=&quot;Enter URL&quot;
          placeholder=&quot;https://example.com&quot;
          value={url}
          onChange={(e) =&gt; setUrl(e.target.value)}
          type=&quot;url&quot;
        /&gt;
        {error &amp;&amp; (
          &lt;Banner
            type=&quot;critical&quot;
            title=&quot;Error&quot;
            description={error}
          /&gt;
        )}
        &lt;Button
          type=&quot;primary&quot;
          onPress={() =&gt; generateQRCode()}
          disabled={!url}
        &gt;
          Generate QR Code
        &lt;/Button&gt;
        {qrCode &amp;&amp; (
          &lt;Box css={{
            stack: &quot;y&quot;,
            rowGap: &quot;medium&quot;,
            alignSelfY: &quot;center&quot;,
            marginTop: &quot;large&quot;
          }}&gt;
            &lt;Box css={{ font: &quot;heading&quot; }}&gt;Your QR Code&lt;/Box&gt;
            &lt;Img
              src={qrCode}
              alt=&quot;Generated QR Code&quot;
            /&gt;
            &lt;Button
              type=&quot;secondary&quot;
              onPress={() =&gt; {
                window.open(qrCode, &#39;_blank&#39;);
              }}
            &gt;
              Download QR Code
            &lt;/Button&gt;
          &lt;/Box&gt;
        )}
      &lt;/Box&gt;
    &lt;/ContextView&gt;
  );
};
export default Home;
</code></pre>
<ul>
<li><code>ContextView</code> is the top-level component of the app; it renders the title and the Stripe Docs link that we passed to it.</li>
<li><code>Box</code> is the equivalent of a <code>div</code>.</li>
<li><code>Banner</code> components can be used to show error notifications or any other message you wish to display.</li>
<li><code>TextField</code> components are input fields.</li>
<li>Everything else is pretty self-explanatory.</li>
</ul>
<h3>5. Handle Content Security Policy</h3>
<p>One problem I personally ran into: when I tried to redirect users, Stripe&#39;s content security policies would block it because I hadn&#39;t explicitly declared what the app was doing. I had to go into the stripe-app.json file and spell out the specific security policies. For this particular exercise, I kept these as null.</p>
<p>This is my stripe-app.json file.</p>
<pre><code class="language-jsx">{
    &quot;id&quot;: &quot;com.example.my-stripe-app&quot;,
    &quot;version&quot;: &quot;0.0.1&quot;,
    &quot;name&quot;: &quot;My Stripe App&quot;,
    &quot;icon&quot;: &quot;&quot;,
    &quot;permissions&quot;: [],
    &quot;stripe_api_access_type&quot;: &quot;platform&quot;,
    &quot;ui_extension&quot;: {
        &quot;views&quot;: [
            {
                &quot;viewport&quot;: &quot;stripe.dashboard.home.overview&quot;,
                &quot;component&quot;: &quot;Home&quot;
            },
            {
                &quot;viewport&quot;: &quot;stripe.dashboard.invoice.detail&quot;,
                &quot;component&quot;: &quot;Invoice&quot;
            }
        ],
        &quot;content_security_policy&quot;: {
            &quot;connect-src&quot;: null,
            &quot;image-src&quot;: null,
            &quot;purpose&quot;: &quot;&quot;
        }
    }
}
</code></pre>
<h3>6. Configure App Views</h3>
<p>As you can see, the stripe-app.json file lists a view for each file I have; both the Home.tsx and Invoice.tsx files are included. This is our way of saying that for each view we define, the app&#39;s functionality should show on that page. The same mapping appears in our stripe-app.json file and in the manifest.js file in our .build folder. Any view that doesn&#39;t have a file will not show the application&#39;s functionality. So, if I were to go to the transactions page, the app would not show the same logic as the home or invoices pages.</p>
<p>By following these steps, you&#39;ll have a fully functional QR code generator app for Stripe. This is just a simple example, but the potential for Stripe apps is massive, especially for businesses serving e-commerce customers.</p>
<p>If you need help or get stuck, don&#39;t hesitate to reach out, <a href="mailto:danny.thompson@thisdot.co">danny.thompson@thisdot.co</a>. The Stripe team is also very active in answering questions, so leverage them as a resource. Happy coding!</p>
]]></description>
            <link>https://www.thisdot.co/blog/building-a-stripe-app-a-step-by-step-guide-to-qr-code-generation</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/building-a-stripe-app-a-step-by-step-guide-to-qr-code-generation</guid>
            <pubDate>Wed, 18 Dec 2024 03:53:13 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[How to Take Extreme Ownership Over Your Engineering Efforts with Nate Emerson]]></title>
            <description><![CDATA[<p>Nate Emerson is both a self-taught developer and a computer science university grad. How is that possible? In this episode, Nate talks about his unusual career trajectory, and what it has taught him about the differences between being self-taught vs. formal education in software development. Along with Tracy Lee and Jason Torres, he discusses leadership principles, such as extreme ownership, the value of confidence in engineering, and how humility and teamwork contribute to organizational success. They also highlight the importance of solving practical problems in tech and how this mindset can lead to innovation. </p>
<p>Here are the chapter titles with timestamps:</p>
<p>1: Setting the Stage – The Importance of Practical Skills in Engineering (00:00)
2: Engineering Management and Leadership Culture (04:36)
3: Extreme Ownership – A Leadership Superpower (09:12)
4: Confidence and the Developer&#39;s Journey (13:37)
5: Hiring Smarter and Building Stronger Teams (18:27)
6: Problem-Solving and Technology&#39;s Real-World Impact (23:13)
7: Leadership, Humility, and Long-Term Success (30:20)
8: Wrapping Up – Final Thoughts on Leadership and Ego (41:58)</p>
<p>Follow Nate Emerson on Social Media
Twitter: <a href="https://x.com/nateemerson">https://x.com/nateemerson</a>
Linkedin: <a href="https://www.linkedin.com/in/nate-emerson">https://www.linkedin.com/in/nate-emerson</a>
YouTube: <a href="https://www.youtube.com/channel/UC0K8hu90G3iV6327ymEViNw">https://www.youtube.com/channel/UC0K8hu90G3iV6327ymEViNw</a></p>
<p>Sponsored by <a href="https://www.thisdot.co/">This Dot</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/how-to-take-extreme-ownership-over-your-engineering-efforts-with-nate</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/how-to-take-extreme-ownership-over-your-engineering-efforts-with-nate</guid>
            <pubDate>Tue, 10 Sep 2024 17:42:02 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[How User-Centric Design Motivates Developers with Paul McCollum]]></title>
            <description><![CDATA[<p>Paul McCollum, author of &quot;Practical Salesforce Architecture&quot;, shares his journey from microbiology to tech, discussing his work at Nortel Networks, and his transition into enterprise architecture with Salesforce. They explore the importance of empathy in engineering, how user-centric design motivates developers, and the evolution of agile development. Paul emphasizes solving real user problems over technical tasks and how continuous learning keeps work exciting.</p>
<p>Chapters</p>
<ol>
<li>Introduction – <strong>[00:00:00]</strong>  </li>
<li>Early Career &amp; Transition into Tech – <strong>[00:00:58]</strong>  </li>
<li>The Evolution of Technology Stacks – <strong>[00:03:15]</strong>  </li>
<li>Learning Through Play &amp; Empathy – <strong>[00:04:52]</strong>  </li>
<li>Agile Development &amp; Challenges – <strong>[00:18:51]</strong>  </li>
<li>Value-Driven Software Development – <strong>[00:21:23]</strong>  </li>
<li>Avoiding Burnout in Tech Careers – <strong>[00:25:42]</strong>  </li>
<li>Conclusion – <strong>[00:36:17]</strong></li>
</ol>
<p>Follow Paul McCollum on Social Media
Paul McCollum Linkedin: <a href="https://www.linkedin.com/in/realpaulmccollum/">https://www.linkedin.com/in/realpaulmccollum/</a>
Paul McCollum Twitter: <a href="https://x.com/uxaholic">https://x.com/uxaholic</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/how-user-centric-design-motivates-developers-with-paul-mccollum</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/how-user-centric-design-motivates-developers-with-paul-mccollum</guid>
            <pubDate>Wed, 11 Sep 2024 17:38:01 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Challenges of Growing into a Software Leadership Role with Gant Laborde]]></title>
            <description><![CDATA[<p>In this episode, Rob Ocel sits down with Gant Laborde, CIO at Infinite Red, to explore Gant&#39;s journey in the tech industry, his transition into leadership, and his role as a Chief Innovation Officer. Gant shares insights into the challenges and rewards of innovation within a company, how to manage upward and downward effectively, and the importance of trust in leadership. They also discuss the evolving landscape of AI, the significance of experimentation, and the courage needed to make bold decisions. </p>
<p>Chapters</p>
<p>Introduction and Opening Remarks - 00:00
Gant&#39;s Background and Journey in Tech - 02:05
Transitioning to Leadership at Infinite Red - 05:08
Defining Innovation at an Agency - 07:28
The Role of AI in React Native - 09:39
Navigating the Hype and Troughs of Technology - 11:35
The Challenges of Middle Management - 15:12
Building Trust and Managing Upwards - 16:25
Empowering Teams and Passing the Torch - 19:40
Developing Courage and Taking Risks - 22:30
Why Leadership is Worth It - 30:28
Final Thoughts and Wrap-Up - 31:53</p>
<p>Follow Gant Laborde on Social Media
Twitter: <a href="https://x.com/GantLaborde">https://x.com/GantLaborde</a>
Github: <a href="https://github.com/GantMan">https://github.com/GantMan</a>
Linkedin: <a href="https://www.linkedin.com/in/gant-laborde/">https://www.linkedin.com/in/gant-laborde/</a>
Mastodon: <a href="https://mastodon.social/@gantlaborde">https://mastodon.social/@gantlaborde</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/challenges-of-growing-into-a-software-leadership-role-with-gant-laborde</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/challenges-of-growing-into-a-software-leadership-role-with-gant-laborde</guid>
            <pubDate>Fri, 30 Aug 2024 17:11:56 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Integrating AI Models Locally with Next.js ft. Jesus Padron]]></title>
            <description><![CDATA[<p>Jesus Padron from the This Dot team shows you how to integrate AI models into a Next.js application. Jesus walks through the process of running Meta&#39;s Llama 3.1 model locally, leveraging OpenAI&#39;s Whisper for speech-to-text conversion, and using OpenAI&#39;s TTS model for text-to-speech conversion. By the end of the episode, listeners will know how to create an AI voice assistant that processes voice input, understands the content, and responds audibly.</p>
<p>Chapters:</p>
<ol>
<li><strong>Introduction to the Episode</strong> (00:00:03)</li>
<li><strong>Overview of Llama 3.1 and Setup</strong> (00:02:14)</li>
<li><strong>Setting Up the Next.js Application</strong> (00:04:40)</li>
<li><strong>Recording Audio with MediaRecorder API</strong> (00:11:37)</li>
<li><strong>Integrating OpenAI&#39;s Whisper for Speech-to-Text</strong> (00:36:46)</li>
<li><strong>Generating Responses with Llama 3.1</strong> (00:48:24)</li>
<li><strong>Implementing Text-to-Speech with OpenAI&#39;s TTS</strong> (01:03:26)</li>
<li><strong>Final Testing and Demonstration</strong> (01:06:37)</li>
<li><strong>Summary and Next Steps</strong> (01:09:01)</li>
<li><strong>Closing Remarks</strong> (01:14:19)</li>
</ol>
<p><strong>Follow Jesus on Social Media</strong>
Twitter: <a href="https://x.com/padron4497">https://x.com/padron4497</a>
Github: <a href="https://github.com/padron4497">https://github.com/padron4497</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/integrating-ai-models-locally-with-next-js-ft-jesus-padron</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/integrating-ai-models-locally-with-next-js-ft-jesus-padron</guid>
            <pubDate>Tue, 03 Sep 2024 21:55:29 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[How to Invest in New Software Engineering Talent with Shashi Lo]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, we sit down with Shashi Lo, Senior UX Engineer at Microsoft and the founder of the Gridiron Survivor project. Shashi shares his approach to mentoring junior developers by helping them bridge the gap between boot camp and their first job in tech. We cover the challenges of onboarding, the importance of code reviews, and how companies can better support new talent by investing in mentorship and training. Shashi also talks about his experience with building a community of learners, the process of de-risking junior candidates, and why companies should be more proactive in nurturing the next generation of developers. </p>
<p>00:00 - Meet Shashi Lo
02:25 - The Gridiron Survivor Project
05:02 - The Importance of Code Reviews
07:25 - Teaching the Basics of Project Communication
09:47 - Code Reviews as a Learning Tool
12:06 - Why Shashi Mentors: Giving Back to the Community
14:26 - The Importance of De-Risking Junior Candidates
16:41 - Building in Public: Transparency and Learning
19:00 - Assessing Candidates for the Gridiron Survivor Project
21:25 - The Power of Simple Coding Tests
23:45 - Scaling Up Skills: From Small Tasks to Big Projects
26:07 - Should Companies Be Doing This?
28:25 - Finding Hidden Gems in the Job Market
30:47 - The Challenges of Filtering Candidates
33:02 - Where to Find Shashi Online
34:38 - Closing Remarks</p>
<p>Follow Shashi Lo on Social Media
Twitter: <a href="https://x.com/shashiwhocodes">https://x.com/shashiwhocodes</a>
Linkedin: <a href="https://www.linkedin.com/in/shashilo/">https://www.linkedin.com/in/shashilo/</a>
Github: <a href="https://github.com/shashilo">https://github.com/shashilo</a></p>
<p>Sponsored by <a href="https://www.thisdot.co/">This Dot</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/how-to-invest-in-new-software-engineering-talent-with-shashi-lo</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/how-to-invest-in-new-software-engineering-talent-with-shashi-lo</guid>
            <pubDate>Thu, 05 Sep 2024 11:53:47 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Docker: The Secret Weapon for Cloud Efficiency with Kyle Tryon]]></title>
            <description><![CDATA[<p>Rob Ocel, Tracy Lee, Adam Rackis, and Danny Thompson sit down with Kyle Tryon, Senior Developer Advocate at Depot, to talk all things Docker, dev containers, and modern build systems. They break down how Docker simplifies development by solving those &quot;it works on my machine&quot; problems, how cloud-based caching speeds up builds, and why tools like Kubernetes are becoming essential for scaling modern apps. Kyle also shares his journey from fixing laptops in retail to becoming a leading voice in the dev space, plus some great insights into how Docker and Depot are changing the game for developers everywhere.</p>
<p>Chapters
1: Introductions (00:00 – 02:40)
2: What is Docker and Why It Matters (02:41 – 06:30)
3: Docker Files and Containers 101 (06:31 – 11:00)
4: Challenges of Environment Setup &amp; Dev Containers (11:01 – 15:00)
5: The Power of Layer Caching in Docker (15:01 – 20:30)
6: Introduction to Depot and Cloud-Based Builds (20:31 – 25:00)
7: Optimizing Docker Builds with Depot (25:01 – 30:00)
8: Docker in the Modern Web Stack (30:01 – 35:00)
9: The Future of Cloud Builds and CI/CD Pipelines (35:01 – 40:00)
10: Final Thoughts and Where to Find More (40:01 – End)</p>
<p>Follow Kyle Tryon on Social Media
Twitter: <a href="https://x.com/TechSquidTV">https://x.com/TechSquidTV</a>
Linkedin: <a href="https://www.linkedin.com/in/kyle-tryon/">https://www.linkedin.com/in/kyle-tryon/</a>
Github: <a href="https://github.com/techsquidtv">https://github.com/techsquidtv</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co/">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/docker-the-secret-weapon-for-cloud-efficiency-with-kyle-tryon</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/docker-the-secret-weapon-for-cloud-efficiency-with-kyle-tryon</guid>
            <pubDate>Tue, 17 Sep 2024 17:48:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Cybersecurity Entrepreneurship: Scaling, Partner Models, & Managing Relationships with Dr. Mike Saylor]]></title>
            <description><![CDATA[<p>Dr. Mike Saylor, CEO of Black Swan Cybersecurity and professor of cybersecurity at UT San Antonio, shares his journey from starting his first computer business to leading a successful cybersecurity company. He discusses entrepreneurship, the challenges of building and growing a business, and the importance of strong partner relationships. Along with Rob Ocel, he covers strategies for managing client relationships, navigating partner models, and balancing direct sales with partnerships.</p>
<p>00:00 - Introduction of the episode and guest Dr. Mike Saylor<br>00:41 - Dr. Saylor&#39;s journey to becoming CEO of Black Swan Cybersecurity<br>03:17 - Early entrepreneurial ventures and building his own computer company<br>04:57 - Challenges in selling cybersecurity services<br>06:53 - Building reputation and generating leads through various methods<br>07:50 - Navigating different business models for growth<br>09:24 - Managing client relationships and the importance of business development<br>11:17 - Exploring different partner models in the business<br>13:57 - Working with partners to grow opportunities and manage risks<br>15:24 - The referral business model and subcontracting challenges<br>16:20 - Complacency and the need for diversification in sales channels<br>17:24 - The importance of knowing your business and finding the right partner strategy<br>18:38 - Developing strong partnerships and managing conflicts<br>20:38 - Recognizing when a partner has different priorities<br>22:48 - Risks of over-reliance on partners and how to hedge bets<br>25:41 - Managing long-term relationships with partners<br>27:28 - Closing remarks and where to find more information about Black Swan Cybersecurity</p>
<p>Follow Dr. Mike Saylor on Social Media
Linkedin: <a href="https://www.linkedin.com/in/misaylor/">https://www.linkedin.com/in/misaylor/</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/cybersecurity-entrepreneurship-scaling-partner-models-and-managing</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/cybersecurity-entrepreneurship-scaling-partner-models-and-managing</guid>
            <pubDate>Thu, 19 Sep 2024 15:41:36 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Why TypeScript is the Most Important Tool in Open-Source with Nick Taylor]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, Nick Taylor, Senior Software Engineer at OpenSauced, discusses the current state of open source, including the challenges around funding, sustainability, and contributor burnout. Nick shares insights into how open source has impacted his career and highlights the growing importance of tools like TypeScript in the open-source ecosystem.</p>
<p>The panel also discusses the evolution of TypeScript, its widespread adoption, and its role in shaping the modern web development landscape. They explore the nuances between JavaScript and TypeScript, the friction that developers sometimes face when working with types, and how TypeScript has grown into a default tool for many projects. </p>
<p>Chapters</p>
<p>00:00 - Introduction and Co-Host Introductions
00:47 - Guest Introduction: Nick Taylor
01:37 - The Current State of Open Source
02:50 - Funding Challenges in Open Source
03:54 - Open Source Success Stories and Funding Examples
05:35 - Open Source Burnout and Quiet Quitting in Tech 
06:43 - Challenges for Open Source Maintainers
07:26 - Motivation and Incentives for Contributing to Open Source
08:35 - Career Benefits of Open Source Contributions
10:11 - Nick’s Journey into Open Source Contributions
12:38 - The Burden of Managing Popular Open Source Projects 
14:27 - Hacktoberfest and Low-Quality Contributions
16:14 - Challenges for Beginners Contributing to Open Source 
18:01 - The Impact of Hacktoberfest and Mitigating Spam PRs
20:12 - TypeScript&#39;s Rise in Popularity
23:16 - Why TypeScript Became Popular in Open Source
25:45 - The Debate Around Static Typing in JavaScript
27:08 - TypeScript vs. JavaScript: Future Considerations
30:21 - The Role of Build Steps in Modern Development Frameworks
33:35 - The Complexity of TypeScript for Different Developer Levels
36:12 - Enum Usage and TypeScript&#39;s Type System
38:53 - TypeScript’s Structural Typing and Its Implications
39:47 - Nick’s Contact Information and Closing Remarks</p>
<p>Follow Nick Taylor on Social Media
Twitter: <a href="https://x.com/nickytonline">https://x.com/nickytonline</a>
Linkedin: <a href="https://www.linkedin.com/in/nickytonline/">https://www.linkedin.com/in/nickytonline/</a>
Github: <a href="https://github.com/nickytonline">https://github.com/nickytonline</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/why-typescript-is-the-most-important-tool-in-open-source-with-nick-taylor</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/why-typescript-is-the-most-important-tool-in-open-source-with-nick-taylor</guid>
            <pubDate>Fri, 27 Sep 2024 18:59:01 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Aligning Product Teams and User Goals with Stacie Frederick, CPO at Stanza]]></title>
            <description><![CDATA[<p>Stacie Frederick, Chief Product Officer at Stanza, discusses how her engineering background informs her work in product management, particularly in understanding user needs and building solutions for Stanza, which focuses on improving reliability engineering for development teams. Stacie shares her approach to balancing the needs of current users with future growth, the importance of clear customer personas, and how friction can arise when product teams misalign with user goals. The conversation also touches on how technologists can develop a product mindset by staying connected with customers, and the unique challenges of working across different industries where technology intersects with non-tech domains. </p>
<p>Chapters
00:00 - Introduction
01:00 - Transition from CTO to CPO
03:00 - Understanding Users in Product Development
06:00 - Role of Personas in Building for Users
09:00 - The Balance of Serving Current and Future Users
11:30 - Challenges of Startups and Growing with Users
14:00 - Product vs. Engineering: The What and the How
17:00 - Blending Product and Engineering Roles
21:00 - Encouraging Product Mindedness in Engineers
24:00 - The Importance of Understanding Users in Tech
28:00 - Industry-Specific Challenges for Technologists
31:00 - Closing Remarks</p>
<p>Follow Stacie Frederick on Social Media:
Linkedin: <a href="https://www.linkedin.com/in/stacie-frederick/">https://www.linkedin.com/in/stacie-frederick/</a></p>
<p>Sponsored by <a href="https://www.thisdot.co/">This Dot</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/aligning-product-teams-and-user-goals-with-stacie-frederick-cpo-at-stanza</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/aligning-product-teams-and-user-goals-with-stacie-frederick-cpo-at-stanza</guid>
            <pubDate>Mon, 30 Sep 2024 09:49:09 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Evolution of AI Tooling & Ethical AI Practices with Shivay Lamba]]></title>
            <description><![CDATA[<p>Machine Learning and AI expert Shivay Lamba discusses the evolution of machine learning tools and his work on MLOps and deploying large language models (LLMs). The conversation covers the accessibility of AI, the power of JavaScript in machine learning through tools like TensorFlow.js, and the growing importance of ethical AI practices. Shivay also discusses the transition of web-based AI tools, the importance of transfer learning, and how developers can break into the space of AI and machine learning.</p>
<p>Chapters</p>
<ol>
<li>Shivay’s Journey into Machine Learning (00:00 - 03:30)  </li>
<li>The Power of TensorFlow.js and Web AI (03:31 - 07:00)  </li>
<li>Challenges in Hackathons: Using Pre-trained Models (07:01 - 10:00)  </li>
<li>Navigating the AI Ecosystem: Python vs. JavaScript (10:01 - 13:30)  </li>
<li>LLMs and Their Growing Popularity (13:31 - 17:00)  </li>
<li>The Importance of Core Machine Learning Knowledge (17:01 - 20:00)  </li>
<li>AI Ethics &amp; Challenges in Scaling Models (20:01 - 23:00)  </li>
<li>Shivay’s Content &amp; Community Involvement (23:01 - 25:00)  </li>
<li>Conclusion &amp; Final Thoughts (25:01 - End)</li>
</ol>
<p>Follow Shivay on Social Media
Twitter: <a href="https://x.com/HowDevelop">https://x.com/HowDevelop</a>
Github: <a href="https://github.com/shivaylamba">https://github.com/shivaylamba</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co/">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/the-evolution-of-ai-tooling-and-ethical-ai-practices-with-shivay-lamba</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-evolution-of-ai-tooling-and-ethical-ai-practices-with-shivay-lamba</guid>
            <pubDate>Tue, 01 Oct 2024 19:33:13 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Why is It so HARD to Break Into Tech with Jason Torres]]></title>
            <description><![CDATA[<p>Tracy Lee and Rob Ocel chat with Jason Torres about the challenges of breaking into tech, especially for self-taught and underrepresented developers. Jason shares his journey from a 15-year career in the film industry to pursuing software development, discussing the emotional and financial hurdles involved. They also discuss the importance of networking, finding a niche rather than mastering everything, and the impact of the tech downturn on junior developers.</p>
<p>Chapters</p>
<ol>
<li>Introduction and Tech Career Journeys (00:00 - 02:30)  </li>
<li>The Struggles of Breaking Into Tech (02:31 - 06:00)  </li>
<li>Jason’s Career Pivot from Film to Tech (06:01 - 10:30)  </li>
<li>The Importance of Networking and Community (10:31 - 15:00)  </li>
<li>Specializing vs. Being a Generalist in Tech (15:01 - 20:00)  </li>
<li>Finding Your Path in Tech (20:01 - 25:30)  </li>
<li>Dealing with Imposter Syndrome and Belonging (25:31 - 30:00)  </li>
<li>Final Thoughts and Tips for Breaking Into Tech (30:01 - 33:00)  </li>
<li>Closing Remarks and Tech Talk Humor (33:01 - End)</li>
</ol>
<p>Follow Jason Torres on Social Media
Twitter: <a href="https://x.com/TasonJorres">https://x.com/TasonJorres</a>
Linkedin: <a href="https://www.linkedin.com/in/thejasontorres/">https://www.linkedin.com/in/thejasontorres/</a></p>
<p>Sponsored by Wix Studio: <a href="https://www.wix.com/studio">https://www.wix.com/studio</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/why-is-it-so-hard-to-break-into-tech-with-jason-torres</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/why-is-it-so-hard-to-break-into-tech-with-jason-torres</guid>
            <pubDate>Tue, 08 Oct 2024 15:57:38 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Unit Testing, TypeScript, and AI: Enhancing Code Quality and Productivity in 2024]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, host <a href="https://x.com/robocell">Rob Ocel</a> and co-hosts <a href="https://x.com/AdamRackis">Adam Rackis</a>, <a href="https://x.com/ladyleet">Tracy Lee</a>, and <a href="https://x.com/DThompsonDev">Danny Thompson</a> discuss the importance of unit testing for maintaining code quality and reliability, emphasizing its role in scaling projects and ensuring long-term stability. The conversation also highlights the benefits of TypeScript in improving code safety and developer productivity, sharing experiences on how it catches errors early in the process. They also examine the growing role of AI in automating development tasks, weighing the efficiency gains against the risks of over-reliance on automation while stressing the importance of understanding the underlying processes.</p>
<p>Chapters</p>
<p>00:00 - Introduction and Episode Overview
02:59 - The Importance of Unit Testing
10:03 - Best Practices for Implementing Unit Tests
17:15 - TypeScript’s Role in Code Safety and Productivity
2:30 - AI in Software Development: Automating Tasks
29:16 - Balancing AI Automation with Developer Expertise
32:07 - Final Thoughts and Closing Remarks</p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co/">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/unit-testing-typescript-and-ai-enhancing-code-quality-and-productivity-in</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/unit-testing-typescript-and-ai-enhancing-code-quality-and-productivity-in</guid>
            <pubDate>Wed, 23 Oct 2024 17:19:36 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[How to Create a Website from Scratch with Nuxt Studio, Nuxt Content, and Nuxt UI]]></title>
            <description><![CDATA[<p>In this JS Drop, Simone is joined by Ferdinand and Baptiste from Nuxt Labs to explore the latest updates in the Nuxt ecosystem. Ferdinand kicks off with an introduction to Nuxt Labs and its dual mission of supporting the open-source Nuxt framework while building sustainable products like Nuxt Studio, Nuxt Content, and Nuxt UI.</p>
<p>Baptiste takes over with a live demo, showcasing how to create a website from scratch using Nuxt Studio. He demonstrates the platform’s powerful content management features, showing how Nuxt Content integrates to manage and edit website content easily. Baptiste highlights Nuxt UI components and how they simplify coding by providing ready-to-use elements. The demo also features live editing and previews, making collaboration easier for both technical and non-technical users.</p>
<p>Ferdinand wraps up by emphasizing Nuxt Studio’s user-friendly design and hints at exciting future updates, including branch management and internationalization support. This session highlights Nuxt Labs&#39; commitment to enhancing the Vue.js ecosystem with versatile, user-focused tools.</p>
<p>Follow Baptiste Leproux and Ferdinand Coumau:</p>
<p>Baptiste Twitter: <a href="https://x.com/_larbish">https://x.com/_larbish</a>
Ferdinand Twitter: <a href="https://x.com/CoumauFerdinand">https://x.com/CoumauFerdinand</a>
Baptiste Linkedin: <a href="https://www.linkedin.com/in/baptiste-leproux-618842b0/">https://www.linkedin.com/in/baptiste-leproux-618842b0/</a>
Ferdinand Linkedin: <a href="https://www.linkedin.com/in/ferdinand-coumau-nuxt/">https://www.linkedin.com/in/ferdinand-coumau-nuxt/</a></p>
<p>Sponsored by <a href="https://www.thisdot.co/">This Dot</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/how-to-create-a-website-from-scratch-with-nuxt-studio-nuxt-content-and-nuxt</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/how-to-create-a-website-from-scratch-with-nuxt-studio-nuxt-content-and-nuxt</guid>
            <pubDate>Fri, 25 Oct 2024 18:27:13 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Angular Signals for Simpler State Management and DOM Performance ]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, host Rob Ocel is joined by Adam Rackis, Danny Thompson, and guest Braydon Coyer, Senior Front-End Developer at LogicGate to talk about using Angular Signals for improved state management and DOM performance. Braydon explains how Signals simplify Angular development and offer better readability and efficiency compared to traditional methods like RxJS. The conversation also touches on hiring in the AI era, discussing challenges around take-home tests and live coding, and how AI tools like ChatGPT are changing the interview process.</p>
<p>Chapters</p>
<ul>
<li>00:00 - Introduction  </li>
<li>00:57 - The Angular Renaissance  </li>
<li>02:24 - Signals in Angular  </li>
<li>03:27 - Transitioning to Signals  </li>
<li>04:19 - Signals in Utility Development  </li>
<li>05:09 - RxJS and Signals  </li>
<li>07:52 - Signals vs Other State Management Solutions  </li>
<li>09:34 - Testing Signals  </li>
<li>10:29 - Control Flow and Standalone Components in Angular  </li>
<li>12:02 - Angular&#39;s Evolution and Accessibility  </li>
<li>13:28 - Angular’s Framework Governance  </li>
<li>17:10 - Hiring in the Age of AI  </li>
<li>19:15 - Pair Programming and Real-Time Problem Solving  </li>
<li>22:24 - The Role of AI in Interviews  </li>
<li>27:58 - Wrapping Up</li>
</ul>
<p>Follow Braydon Coyer
Twitter: <a href="https://x.com/BraydonCoyer">https://x.com/BraydonCoyer</a>
Linkedin: <a href="https://www.linkedin.com/in/braydon-coyer/">https://www.linkedin.com/in/braydon-coyer/</a>
Github: <a href="https://github.com/braydoncoyer">https://github.com/braydoncoyer</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/angular-signals-for-simpler-state-management-and-dom-performance</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/angular-signals-for-simpler-state-management-and-dom-performance</guid>
            <pubDate>Wed, 30 Oct 2024 15:52:55 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Java’s AI Evolution: Semantic Caching, JVM, and GenAI Architectures with Theresa Mammarella & Brian Sam-Bodden]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, Danny Thompson, Director of Technology at This Dot Labs, hosts a conversation with Theresa Mammarella, JVM Engineer at IBM, and Brian Sam-Bodden, Applied AI Engineer at Redis. They explore their talks at JCONF in Dallas, Texas, covering topics like GenAI architectures in the Java community and OpenJDK&#39;s Project Valhalla. Their conversation covers Java’s evolution, AI applications, semantic caching, and how these technologies are impacting development workflows and performance optimization.</p>
<p>Chapters</p>
<ul>
<li>00:00 - Introduction  </li>
<li>01:00 - Brian on GenAI in the Java Community  </li>
<li>01:47 - Java’s Safe Evolution Path  </li>
<li>02:17 - Theresa on Project Valhalla</li>
<li>03:54 - Value Classes and Performance  </li>
<li>04:33 - Brian on Semantic Caching  </li>
<li>06:54 - Challenges of Rewording Prompts  </li>
<li>09:15 - What is RAG Architecture?  </li>
<li>11:34 - Java’s Role in AI  </li>
<li>13:57 - Cost of LLMs and Caching Strategies  </li>
<li>15:57 - Theresa on Java’s Future</li>
<li>18:22 - Learning Resources for Java Developers  </li>
<li>20:44 - Addressing Misconceptions About Java  </li>
<li>22:39 - Final Thoughts</li>
</ul>
<p>Follow Theresa Mammarella &amp; Brian Sam-Bodden on Social Media
Theresa Mammarella Twitter: <a href="https://x.com/t_mammarella">https://x.com/t_mammarella</a>
Brian Sam-Bodden Twitter: <a href="https://x.com/bsbodden">https://x.com/bsbodden</a>
Theresa Mammarella Linkedin: <a href="https://www.linkedin.com/in/tmammarella/">https://www.linkedin.com/in/tmammarella/</a>
Brian Sam-Bodden Linkedin:  <a href="https://www.linkedin.com/in/sambodden/">https://www.linkedin.com/in/sambodden/</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/javas-ai-evolution-semantic-caching-jvm-and-genai-architectures-with-theresa</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/javas-ai-evolution-semantic-caching-jvm-and-genai-architectures-with-theresa</guid>
            <pubDate>Tue, 29 Oct 2024 23:22:06 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Lessons from Building Netlify with Matt Biilmann, CEO at Netlify]]></title>
            <description><![CDATA[<p>Matt Biilmann, CEO and co-founder of Netlify, joins us for an in-depth discussion about the company&#39;s incredible growth journey—from a bootstrapped two-person startup to a global platform serving over 5 million developers and powering sites for major companies like Unilever and Asana. Matt reflects on the key lessons he’s learned while scaling Netlify, including raising $212 million in venture capital and growing the team to 200 employees. He shares valuable insights on balancing day-to-day operations with long-term vision, navigating the challenges of hiring experienced leaders, and fostering a culture of clarity and focus. Matt also highlights the importance of reducing friction for web development teams and ensuring fast time-to-market for web projects.</p>
<p>Chapters</p>
<ul>
<li>00:00 - Introduction</li>
<li>01:00 - The Origins of Netlify</li>
<li>02:30 - Netlify’s Growth Journey</li>
<li>04:00 - Impact of Netlify on the Web Ecosystem </li>
<li>05:30 - Building the Right Team</li>
<li>07:45 - From Developer to CEO: Evolving as a Leader </li>
<li>10:00 - The Balance Between Vision and Operations </li>
<li>12:00 - Delegating vs. Staying Hands-On </li>
<li>15:30 - Hiring Experienced Leaders </li>
<li>18:00 - Building Diverse Teams </li>
<li>20:00 - Intuition in Leadership</li>
<li>22:30 - Simplifying Goals and Objectives </li>
<li>25:00 - The Shift in Tech Leadership</li>
<li>28:00 - Changing Expectations for Engineers  </li>
<li>30:00 - Advice for Startup Founders  </li>
<li>32:00 - Where to Find Matt Online  </li>
<li>33:00 - Conclusion</li>
</ul>
<p>Follow Matt Biilmann on Social Media
Twitter: <a href="https://x.com/biilmann">https://x.com/biilmann</a>
Linkedin: <a href="https://www.linkedin.com/in/mathias-biilmann-christensen-a5a3805/">https://www.linkedin.com/in/mathias-biilmann-christensen-a5a3805/</a>
Github: <a href="https://github.com/biilmann">https://github.com/biilmann</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co/">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/lessons-from-building-netlify-with-matt-biilmann-ceo-at-netlify</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/lessons-from-building-netlify-with-matt-biilmann-ceo-at-netlify</guid>
            <pubDate>Tue, 22 Oct 2024 15:15:11 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[How a First Reddit Engineer Builds Strong Engineering Cultures with Jeremy Edberg]]></title>
            <description><![CDATA[<p>In this episode of the Leadership Exchange, host Tracy Lee welcomes Jeremy Edberg, CEO of DBOS and former first employee at Reddit, to discuss leadership, engineering culture, and team building. They review Jeremy&#39;s career journey from Reddit to Netflix and beyond, sharing insights on scaling engineering teams, the impact of culture on development practices, and hiring strategies. Jeremy reflects on the evolution of his management style, emphasizing the importance of human connection in leadership, while also sharing lessons learned from his time at companies with strong engineering cultures.</p>
<h2>Chapters</h2>
<ul>
<li>00:00 - Introduction and Guest Welcome</li>
<li>00:48 - Jeremy’s Background and Career Journey</li>
<li>01:51 - Introduction to DBOS and Its Founders</li>
<li>02:30 - Throwback: Reddit Meetups and &quot;Jedberg&quot; Chanting</li>
<li>03:05 - Rebuilding the Reddit Engineering Team</li>
<li>05:16 - Challenges of Scaling and Maintaining Reddit&#39;s Culture</li>
<li>07:08 - The Role of Code in Driving Team Culture</li>
<li>08:14 - Differences in Team Dynamics at Reddit and Netflix</li>
<li>09:07 - Working at Netflix vs. Cloudflare</li>
<li>09:38 - The &quot;Sports Team, Not a Family&quot; Philosophy at Netflix</li>
<li>11:21 - Understanding the Keeper Test at Netflix</li>
<li>14:27 - Evolving Netflix&#39;s Culture to Support Diversity and Inclusion</li>
<li>16:05 - Misconceptions About Netflix&#39;s Work Environment</li>
<li>17:17 - Work-Life Balance at Netflix: High Performance in a Chill Setting</li>
<li>20:28 - Key Elements of a Good Engineering Culture</li>
<li>23:09 - How Jeremy&#39;s Leadership Style Has Evolved</li>
<li>24:34 - Advice for Building Successful Engineering Teams</li>
<li>25:26 - Closing Remarks and Sponsor Thanks</li>
<li>26:13 - Where to Follow Jeremy Online</li>
</ul>
<p>Follow Jeremy Edberg 
Twitter: <a href="https://x.com/jedberg">https://x.com/jedberg</a>
Linkedin: <a href="https://www.linkedin.com/in/jedberg/">https://www.linkedin.com/in/jedberg/</a></p>
<p>Sponsored by Wix Studio: <a href="https://www.wix.com/studio">wix.com/studio</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/how-a-first-reddit-engineer-builds-strong-engineering-cultures-with-jeremy</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/how-a-first-reddit-engineer-builds-strong-engineering-cultures-with-jeremy</guid>
            <pubDate>Tue, 15 Oct 2024 16:27:04 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Building Scalable AI Applications: Insights from AWS's Michael Liendo]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, Rob Ocel, Danny Thompson, and Adam Rackis talk with Michael Liendo, Senior Developer Advocate at AWS, about building practical AI applications and tackling challenges like scalability, multimodal functionality, and cloud infrastructure choices. Michael shares insights on tools like AWS Amplify and DynamoDB, discusses strategies for managing cloud costs, and explores the evolving role of prompt engineering. Michael previews his upcoming talks at AWS re:Invent on AI and scalable B2B SaaS applications.</p>
<h2>Chapters</h2>
<ul>
<li>00:00 - Introduction and Guest Welcome  </li>
<li>01:30 - Talking Weather and Life in the Midwest  </li>
<li>03:00 - Exploring Generative AI and Practical Applications  </li>
<li>06:45 - Navigating Cloud Costs and Scalability Considerations  </li>
<li>08:30 - Maintaining Creativity and Customization with AI  </li>
<li>11:00 - Managed Services vs. On-Prem Infrastructure Debate  </li>
<li>15:30 - Choosing a Tech Stack for Side Projects and Startups  </li>
<li>18:45 - Learning Cloud: Paths for Full-Stack Cloud Development  </li>
<li>22:30 - The Role of Cloud Certifications in Today&#39;s Market  </li>
<li>26:00 - Preview of Michael’s Upcoming Talks at AWS re:Invent  </li>
<li>32:00 - Where to Find Michael Online</li>
</ul>
<p>Follow Michael Liendo on Social Media
Twitter: <a href="https://x.com/focusotter">https://x.com/focusotter</a>
Linkedin: <a href="https://www.linkedin.com/in/focusotter/">https://www.linkedin.com/in/focusotter/</a></p>
<p>Sponsored by <a href="https://www.wix.com/studio">Wix Studio</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/building-scalable-ai-applications-insights-from-awss-michael-liendo</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/building-scalable-ai-applications-insights-from-awss-michael-liendo</guid>
            <pubDate>Mon, 14 Oct 2024 16:55:54 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Future of Healthcare Delivery Models with Anita Ballaney, Founder of MyHealthQ]]></title>
            <description><![CDATA[<p>In this episode of the Leadership Exchange, Tracy Lee welcomes Anita Ballaney, Founder of MyHealthQ, to discuss the future of healthcare delivery models and the impact of technology. Anita discusses the evolution of care, from its community-driven roots to today&#39;s telemedicine and AI-driven advancements, while emphasizing the need for change in payment models that have long been stuck in outdated practices. They explore how AI can revolutionize healthcare by eliminating inefficiencies, improving risk prediction, and reducing costs. </p>
<h2>Chapters</h2>
<ul>
<li>[00:00 - 00:25] Introduction</li>
<li>[00:26 - 01:11] Challenging the Healthcare Status Quo</li>
<li>[01:12 - 03:17] The Evolution of Healthcare Delivery Models</li>
<li>[03:18 - 06:30] Technology’s Role in Revolutionizing Healthcare</li>
<li>[06:31 - 10:19] Breaking Free from Outdated Coding Systems</li>
<li>[10:20 - 12:16] AI and Risk Prediction in Healthcare</li>
<li>[12:17 - 15:36] Enhancing Care Through Technology and Telemedicine</li>
<li>[15:37 - 19:02] The Future of Telemedicine and Sustainable Models</li>
<li>[19:03 - 22:37] Innovating Healthcare Accessibility and Affordability</li>
<li>[22:38 - 26:56] People Problems: The Biggest Barrier to Healthcare Innovation</li>
<li>[26:57 - End] Final Thoughts and Closing Remarks</li>
</ul>
<p>Follow Anita Ballaney on Social Media
Linkedin: <a href="https://www.linkedin.com/in/anitab/">https://www.linkedin.com/in/anitab/</a>
MyHealthQ: <a href="https://myhealthq.com/about-us/">https://myhealthq.com/about-us/</a></p>
<p>Sponsored by <a href="https://www.thisdot.co/">This Dot</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/the-future-of-healthcare-delivery-models-with-anita-ballaney-founder-of</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-future-of-healthcare-delivery-models-with-anita-ballaney-founder-of</guid>
            <pubDate>Thu, 03 Oct 2024 15:47:32 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Balancing Innovation with Compliance and Privacy Concerns in Healthcare with John Crighton, CTO Lightning Step]]></title>
            <description><![CDATA[<p>In this episode of the Leadership Exchange, John Crighton, Chief Technology Officer at Lightning Step Technologies, shares insights on balancing privacy and compliance requirements while fostering innovation in the electronic medical records (EMR) space. The discussion explores the complexities of healthcare data, the evolving use of AI to improve efficiency and patient care, and the importance of servant leadership in building high-performing teams. John also reflects on his experience transitioning from the financial services industry into healthcare, offering advice on integrating AI and developing team culture in both fields.</p>
<h2>Chapters</h2>
<ul>
<li>[00:00 - 00:25] Introduction and Welcome</li>
<li>[00:26 - 01:07] Guest Introduction: John Crighton</li>
<li>[01:08 - 02:21] The Lack of Standardization in EMRs</li>
<li>[02:22 - 02:52] Challenges in Clinical Trials and EMR Data</li>
<li>[02:53 - 04:28] Balancing Innovation and Compliance in Healthcare</li>
<li>[04:29 - 06:51] The Impact of Shifting Compliance and Privacy Concerns</li>
<li>[06:52 - 07:10] The AI and Data Privacy Challenge in Healthcare</li>
<li>[07:11 - 09:43] Integrating AI While Ensuring Data Security</li>
<li>[09:44 - 12:33] Leveraging AI for Developer Productivity and Clinical Efficiency</li>
<li>[12:34 - 15:13] AI in Enhancing Patient Care: Lightning Intelligent Assistant</li>
<li>[15:14 - 16:41] Ethical Considerations Around AI in Healthcare</li>
<li>[16:42 - 19:00] Comparing Regulatory Challenges: Financial Services vs. Healthcare</li>
<li>[19:01 - 21:22] Healthcare Records and Financial Records: Privacy and Security</li>
<li>[21:23 - 24:00] The Role of AI in Agile Development Processes</li>
<li>[24:01 - 26:21] Leadership Lessons: Mentorship and Servant Leadership</li>
<li>[26:22 - 29:12] Building a High-Performing Team Through Culture and Leadership</li>
<li>[29:13 - End] Closing Remarks and Where to Find John Crighton</li>
</ul>
<p>Find John Crighton on Social Media
Linkedin: <a href="https://www.linkedin.com/in/johncrighton/">https://www.linkedin.com/in/johncrighton/</a>
Lightning Step Technologies: <a href="https://lightningstep.com/">https://lightningstep.com/</a></p>
<p>Sponsored by <a href="https://www.thisdot.co/">This Dot</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/balancing-innovation-with-compliance-and-privacy-concerns-in-healthcare-with</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/balancing-innovation-with-compliance-and-privacy-concerns-in-healthcare-with</guid>
            <pubDate>Fri, 04 Oct 2024 18:32:38 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Journey To Making A New Framework: TanStack Start with Tanner Linsley]]></title>
            <description><![CDATA[<p>Danny Thompson, Director of Technology at This Dot Labs, talks with Tanner Linsley, Creator of TanStack, about his latest project, TanStack Start. They discuss the challenges of existing frameworks like Next.js and Remix, the development of TanStack Router, and the future of React Server Components. Tanner also explains how caching strategies and fine-grained invalidation can transform the user experience.</p>
<h2>Chapters</h2>
<ul>
<li>Introduction &amp; Tanner’s Background (00:00)</li>
<li>Going Full-Time on TanStack (01:00)</li>
<li>The Birth of TanStack Router (02:21)</li>
<li>Why Build Another Framework? (04:00)</li>
<li>React Server Components: Potential &amp; Limitations (07:05)</li>
<li>Fine-Grained Cache Invalidation &amp; UX (09:02)</li>
<li>Parallel Data Fetching in Routing (13:39)</li>
<li>TanStack Start: Alpha &amp; Future Plans (16:41)</li>
<li>Where to Learn More About TanStack (18:48)</li>
</ul>
<p>Find Tanner Linsley on Social Media
Twitter: <a href="https://x.com/tannerlinsley">https://x.com/tannerlinsley</a>
Linkedin: <a href="https://www.linkedin.com/in/tannerlinsley/">https://www.linkedin.com/in/tannerlinsley/</a>
Github: <a href="https://github.com/tannerlinsley">https://github.com/tannerlinsley</a>
TanStack: <a href="https://tanstack.com/">https://tanstack.com/</a></p>
<p>Sponsored by <a href="https://www.wix.com/studio">Wix Studio</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/the-journey-to-making-a-new-framework-tanstack-start-with-tanner-linsley</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-journey-to-making-a-new-framework-tanstack-start-with-tanner-linsley</guid>
            <pubDate>Wed, 09 Oct 2024 14:53:26 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The “Bottom-Up” Roadmap to Leadership with Ty Allen, Fractional CPO at Neso Advisors]]></title>
            <description><![CDATA[<p>Ty Allen, Founder and Fractional CPO at Neso Advisors, discusses his product management journey, from his early days at Georgia Tech and a successful startup to leading teams across various industries. Ty shares valuable insights on building adaptable roadmaps, balancing tech debt with feature development, and connecting product strategy to company vision. He also highlights the importance of a “bottom-up” roadmap approach and maintaining flexibility while ensuring strategic alignment. </p>
<h2>Chapters</h2>
<ul>
<li>00:00 - 01:32 — Introduction to Leadership and Product Management</li>
<li>01:33 - 04:16 — The Early Days: From Georgia Tech to Startup Success</li>
<li>04:17 - 08:03 — Product Strategy and Roadmap Essentials</li>
<li>08:04 - 12:23 — The Power of Adaptable Roadmaps</li>
<li>12:24 - 16:30 — Managing Tech Debt and Security in Roadmaps</li>
<li>16:31 - 19:52 — The Commercial Lens: Monetizing Features and Value Creation</li>
<li>19:53 - 23:23 — Balancing Innovation and Maintenance</li>
<li>23:24 - 27:29 — Aligning Product Teams with Company Goals</li>
<li>27:30 - End — Final Thoughts: Roadmap Wisdom and Career Advice</li>
</ul>
<p>Follow Ty Allen on Social Media
Linkedin: <a href="https://www.linkedin.com/in/tylerallen/">https://www.linkedin.com/in/tylerallen/</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co/">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/the-bottom-up-roadmap-to-leadership-with-ty-allen-fractional-cpo-at-neso</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-bottom-up-roadmap-to-leadership-with-ty-allen-fractional-cpo-at-neso</guid>
            <pubDate>Mon, 04 Nov 2024 22:58:18 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Evolution of CSS: From Early Days to Flexbox & Grid with Kevin Powell]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, CSS expert Kevin Powell discusses the evolution of CSS, from the early days of CSS3 to the rapid advancements with Flexbox, Grid, and the latest innovations. Kevin explains how CSS is making strides to integrate features that previously required JavaScript, like scroll-driven animations and popovers, simplifying development and improving performance. He also touches on the importance of understanding the fundamentals of CSS, debugging techniques, and the future of tools like Tailwind and Sass. </p>
<h2>Chapters</h2>
<p>Here are the timestamped chapters for the episode:</p>
<ul>
<li>00:00 - Introduction and Technical Setup Issues</li>
<li>01:05 - Guest Introduction: Kevin Powell</li>
<li>02:00 - Kevin&#39;s Journey into CSS and Content Creation</li>
<li>03:21 - Evolution of CSS: From CSS3 to Modern Features</li>
<li>05:46 - The Role of JavaScript in CSS and New Features</li>
<li>08:08 - Popovers, Anchor Positioning, and Progressive Enhancement in CSS</li>
<li>10:26 - Discussion on Sass, Tailwind, and CSS Tools</li>
<li>12:35 - Challenges with Tailwind and Over-Componentization</li>
<li>14:57 - The Importance of Learning Core CSS Principles</li>
<li>16:56 - The &quot;CSS is Hard&quot; Memes and Overcoming CSS Frustration</li>
<li>19:12 - Formatting Contexts and Advanced CSS Concepts</li>
<li>21:31 - Opportunities for Junior Developers to Master CSS</li>
<li>23:54 - Browser Discrepancies and the Future of Web Standards</li>
<li>26:14 - Refactoring CSS for Performance and Best Practices</li>
<li>27:50 - Favorite CSS Resources and Conferences</li>
<li>28:26 - Imposter Syndrome and Kevin&#39;s Speaking Journey</li>
<li>29:55 - Closing Remarks and Where to Find Kevin Powell Online</li>
</ul>
<p>Follow Kevin Powell on Social Media
Twitter: <a href="https://x.com/KevinJPowell">https://x.com/KevinJPowell</a>
Github: <a href="https://github.com/kevin-powell">https://github.com/kevin-powell</a>
YouTube: <a href="https://www.youtube.com/kevinpowell">https://www.youtube.com/kevinpowell</a></p>
<p>Sponsored by <a href="https://www.thisdot.co/">This Dot</a>.</p>
]]></description>
            <link>https://www.thisdot.co/blog/the-evolution-of-css-from-early-days-to-flexbox-and-grid-with-kevin-powell</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-evolution-of-css-from-early-days-to-flexbox-and-grid-with-kevin-powell</guid>
            <pubDate>Tue, 24 Sep 2024 21:58:04 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Fly.io for Easier Cloud Deployment with Annie Sexton]]></title>
            <description><![CDATA[<p>Annie Sexton, Developer Advocate at Fly.io, joins us to discuss Fly.io’s approach to simplifying cloud deployment. Annie shares Fly.io&#39;s unique position as a public cloud that offers the flexibility of infrastructure control with a streamlined developer experience. They explore Fly.io’s private networking and distributed app capabilities, allowing developers to deploy applications close to users worldwide with ease. Annie also addresses common challenges in distributed systems, including latency, data replication, and the balance between global reach and simple, single-region projects.</p>
<p>Chapters:</p>
<ul>
<li>00:00 - 01:32 Introduction to the Modern Web Podcast and Guests</li>
<li>01:33 - 04:00 Overview of Fly.io and Annie’s Role as Developer Advocate</li>
<li>04:01 - 06:35 What Makes Fly.io Stand Out Among Cloud Platforms</li>
<li>06:36 - 08:57 Distributed Applications: Benefits and Use Cases</li>
<li>08:58 - 11:28 Understanding Distributed Web Servers and Private Networking</li>
<li>11:29 - 13:49 Challenges in Distributed Data and Replication Techniques</li>
<li>13:50 - 16:12 Fly.io’s Unique Solutions for Data Consistency</li>
<li>16:13 - 18:34 When to Consider a Distributed Setup for Your Application</li>
<li>18:35 - 20:35 Tools and Tips for Evaluating Geographical Distribution Needs</li>
<li>20:36 - 22:22 Simplifying Global Deployment with Fly.io’s Command Features</li>
<li>22:23 - 24:18 Considerations for Latency and Performance Optimization</li>
<li>24:19 - 26:45 Balancing Simplicity with Advanced Control for Developers</li>
<li>26:46 - 29:04 Easy Deployment for Hobbyists and Smaller Projects</li>
<li>29:05 - 31:27 Getting Started on Fly.io with Fly Launch</li>
<li>31:28 - 33:48 Developer Advocacy and Meeting Diverse Needs in the Cloud</li>
<li>33:49 - 36:15 Catering to Beginners and Experienced Developers Alike</li>
<li>36:16 - End Closing Remarks and Where to Find Fly.io and the Hosts</li>
</ul>
<p>Follow Annie Sexton on Social Media
Linkedin: <a href="https://www.linkedin.com/in/annie-sexton-11472a46/">https://www.linkedin.com/in/annie-sexton-11472a46/</a>
Github: <a href="https://github.com/anniebabannie">https://github.com/anniebabannie</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co/">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/fly-io-for-easier-cloud-deployment-with-annie-sexton</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/fly-io-for-easier-cloud-deployment-with-annie-sexton</guid>
            <pubDate>Wed, 06 Nov 2024 16:05:01 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Fostering a Culture of Optimization and Continuous Improvement with Scott Roehrenbeck]]></title>
            <description><![CDATA[<p>In this episode of The Leadership Exchange, host Rob Ocel, VP of Innovation at This Dot Labs, sits down with <a href="https://www.linkedin.com/in/scott-roehrenbeck-5a573431">Scott Roehrenbeck</a>, CTO of Apptegy, for an in-depth discussion on leadership, process improvement, and the role of people in building effective teams. Scott shares insights from his 20+ years in tech, reflecting on the evolution of his leadership style, the importance of balancing process with flexibility, and how to support team autonomy while maintaining consistency. They also discuss the challenges of navigating turbulent times in tech and strategies for aligning team outputs with business goals. Perfect for anyone interested in tech leadership, process optimization, and fostering a culture of continuous improvement.</p>
<p>Chapters</p>
<ul>
<li>Introduction to the Leadership Exchange (00:00 - 00:23)  </li>
<li>Scott’s Journey to CTO (00:23 - 02:15)  </li>
<li>Cyclical Trends in Tech (02:15 - 03:27)  </li>
<li>The Challenges of Leadership (03:27 - 06:07)  </li>
<li>The Purpose of Process (11:43 - 14:09)  </li>
<li>Process vs. Output: What Really Matters (14:09 - 17:20)  </li>
<li>Building vs. Buying Process Frameworks (17:28 - 20:57)  </li>
<li>The Role of Adaptation in Process Improvement (20:57 - 24:08)  </li>
<li>Navigating ‘Religious’ Arguments in Process (24:08 - 27:02)  </li>
<li>Defining a Team’s Unique Process (27:02 - 29:41)  </li>
<li>Wrapping Up and Final Thoughts (29:41 - 30:40)  </li>
<li>Thank You to Sponsors (30:40 - End)</li>
</ul>
<p>Follow Scott Roehrenbeck on Social Media
Linkedin: <a href="https://www.linkedin.com/in/scott-roehrenbeck-5a573431">https://www.linkedin.com/in/scott-roehrenbeck-5a573431</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/fostering-a-culture-of-optimization-and-continuous-improvement-with-scott</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/fostering-a-culture-of-optimization-and-continuous-improvement-with-scott</guid>
            <pubDate>Mon, 11 Nov 2024 17:34:47 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Build Facial Recognition and Chatbot AIs using TypeScript with Jack Herrington]]></title>
            <description><![CDATA[<p>In this JS Drop, <a href="https://x.com/DThompsonDev">Danny Thompson</a> is joined by YouTuber <a href="https://x.com/jherr">Jack Herrington</a> to explore a unique TypeScript and AI project that lets you recognize TV show characters in real time. Just point your camera at a character, and instantly get their details or even chat with them as if they were real! Jack walks through the tech, explaining how client-side face recognition and server-side AI work together to make this possible using the Vercel AI library. They discuss prompt engineering, building efficient APIs, and ensuring smooth, interactive AI responses. Jack also shares potential real-world applications, from entertainment to security.</p>
<p>Chapters</p>
<ul>
<li>0:32 Project Overview – Jack explains the AI-powered character recognition project</li>
<li>2:17 Setting Up the Project – Overview of how the application is structured and initial setup</li>
<li>3:04 Client-Side AI – How face detection and character recognition work on the client side</li>
<li>5:12 Switching to Server Side – Jack demonstrates server-side AI and setting up API endpoints</li>
<li>8:20 Explaining AI Tooling – How tools and prompts are used to give context to the AI</li>
<li>10:01 Detailed Prompt Structure – Breaking down the prompt and character context for AI responses</li>
<li>12:40 Client-Server Interaction – Using the Vercel AI library to manage streaming responses</li>
<li>15:09 Handling Character Data – Training the AI on specific character images and details</li>
<li>18:15 Practical Use Cases – Discussing potential real-world applications for the face recognition tool</li>
<li>21:34 Challenges and Lessons Learned – Jack shares obstacles he faced and how he overcame them</li>
<li>25:45 Building the API – Tips and considerations for creating reliable API endpoints</li>
<li>28:40 Handling User Inputs – Testing unexpected questions and how the AI responds</li>
<li>32:00 Using Advanced AI Models – Jack talks about choosing GPT-4 and issues with smaller models</li>
<li>35:47 Introducing ProNextJS.dev – Jack discusses his new Next.js course, covering advanced topics</li>
<li>37:20 Closing Thoughts – Danny and Jack wrap up with final thoughts and a link to the GitHub repo</li>
</ul>
<p>Follow Jack Herrington on Social Media
Twitter: <a href="https://x.com/jherr">https://x.com/jherr</a>
Linkedin: <a href="https://www.linkedin.com/in/jherr/">https://www.linkedin.com/in/jherr/</a>
YouTube: <a href="https://www.youtube.com/@jherr">https://www.youtube.com/@jherr</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/build-facial-recognition-and-chatbot-ais-using-typescript-with-jack</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/build-facial-recognition-and-chatbot-ais-using-typescript-with-jack</guid>
            <pubDate>Mon, 11 Nov 2024 21:35:37 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Learning Paths for Next.js Developers with Ankita Kulkarni]]></title>
            <description><![CDATA[<p>In this video, Rob Ocel and co-hosts Tracy Lee, Adam Rackis, and Danny Thompson talk with tech educator Ankita Kulkarni about her journey from engineering leader to full-time educator. Ankita shares insights on teaching Next.js, bridging practical knowledge gaps, and helping developers tackle real-world challenges. They discuss Next.js as a React-based framework, its benefits, and the challenges it presents for beginners.</p>
<p>Chapters</p>
<ul>
<li>Introduction to the Podcast and Guests 00:01  </li>
<li>Meet Ankita Kulkarni, Tech Educator 00:26   </li>
<li>Ankita&#39;s Transition to Full-Time Education 01:41   </li>
<li>Teaching Practical Knowledge in Next.js 03:19   </li>
<li>Effective Methods for Teaching Next.js 05:27  </li>
<li>Challenges of Being a Full-Time Educator 07:48  </li>
<li>Balancing Broad and Specific Examples 09:54   </li>
<li>Embracing Mistakes as a Teaching Tool 12:13  </li>
<li>Pair Programming and Mentorship 14:00   </li>
<li>Discussion on Next.js and Framework Adoption 16:48   </li>
<li>Advantages and Challenges of Next.js 18:12  </li>
<li>Choosing the Right Framework for Your Needs 20:35   </li>
<li>Impact of Next.js in React Documentation 22:26   </li>
<li>Learning Paths for New Developers 23:24   </li>
<li>The Rise of Full-Stack Web Development 25:09   </li>
<li>Benefits of Frameworks Abstracting Complexity 26:27   </li>
<li>OpenNext and Deployment Flexibility 28:06 </li>
<li>Ankita&#39;s Excitement for New Next.js Features 30:35  </li>
<li>The Future of Next.js Without Vercel 32:16  </li>
<li>Final Thoughts and Where to Find Everyone Online 34:21</li>
</ul>
<p>Follow Ankita Kulkarni on Social Media 
Twitter: <a href="https://x.com/kulkarniankita9">https://x.com/kulkarniankita9</a>
YouTube: <a href="https://www.youtube.com/@kulkarniankita">https://www.youtube.com/@kulkarniankita</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/learning-paths-for-next-js-developers-with-ankita-kulkarni</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/learning-paths-for-next-js-developers-with-ankita-kulkarni</guid>
            <pubDate>Tue, 12 Nov 2024 21:28:53 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[How Unstructured Transforms Data with Google Drive and Astra DB with Nina Lopatina - Video]]></title>
            <description><![CDATA[<p>Unstructured data lacks a predefined format, posing challenges for machine understanding. However, companies like <a href="https://unstructured.io/">Unstructured</a> offer solutions to overcome this hurdle. This JS Drops training hosted by <a href="https://x.com/NinaLopatina">Nina Lopatina</a> explores how Unstructured uses tools such as Google Drive and Astra DB to convert unstructured data into machine-readable formats, opening new possibilities for businesses and individuals.</p>
<p>In this video, Nina reviews the setup process, discussing the critical roles of API keys and service accounts: API keys secure access to Google Drive, while service accounts facilitate interactions with Astra DB. Together they ensure reliable data access and processing, establishing a robust foundation for Unstructured&#39;s operations. She also explains how to set up an efficient notebook environment, covering library installations, Google Drive connectivity, and security configurations.</p>
<p>The setup is built with collaboration in mind, making it easier for teams to structure and analyze data together. Along the way, viewers learn to execute code in Google Colab, a cloud-based notebook platform, and troubleshoot API keys, underscoring the importance of proper configuration for smooth data processing.</p>
]]></description>
            <link>https://www.thisdot.co/blog/how-unstructured-transforms-data-with-google-drive-and-astra-db-with-nina</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/how-unstructured-transforms-data-with-google-drive-and-astra-db-with-nina</guid>
            <pubDate>Tue, 16 Jul 2024 18:43:39 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[AI Leadership: Data-Driven Decision Making & Avoiding "Analysis Paralysis" with Jerry Reghunadh]]></title>
            <description><![CDATA[<p>In this episode of Leadership Exchange, host Rob Ocel chats with Jerry Reghunadh, Senior Director of Enterprise Architecture at Data Nimbus, about leadership, AI adoption, and data-driven decision-making. Jerry shares his career journey, insights on leveraging tools like ChatGPT and Copilot, and strategies for building effective data pipelines. They explore how companies can avoid &quot;analysis paralysis,&quot; adopt AI strategically, and evaluate new technologies to solve real-world problems. Tune in for practical advice on aligning innovation with business goals and staying competitive in a rapidly evolving tech landscape.</p>
<p>Chapters
00:00 – Introduction<br>00:31 – Jerry’s Leadership Journey<br>02:01 – Discussing AI in Leadership<br>03:59 – Experimenting with AI Tools<br>06:24 – Overcoming Analysis Paralysis<br>09:18 – Importance of Early AI Adoption<br>13:12 – AI’s Impact on Efficiency<br>14:30 – Sponsor Message<br>16:26 – Setting Realistic Goals for AI<br>19:41 – Data Management and AI<br>23:09 – Is AI Just a Fad?<br>27:22 – Testing New Technologies<br>29:56 – Final Thoughts on AI<br>30:25 – Connect with Jerry<br>30:44 – Closing Remarks<br>31:20 – Outro  </p>
<p>Follow Jerry on Social Media
Linkedin: <a href="https://www.linkedin.com/in/jerrymannel/">https://www.linkedin.com/in/jerrymannel/</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/ai-leadership-data-driven-decision-making-and-avoiding-analysis-paralysis</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/ai-leadership-data-driven-decision-making-and-avoiding-analysis-paralysis</guid>
            <pubDate>Mon, 18 Nov 2024 20:07:37 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[How Nuxt Studio is Redefining Developer & User Experience with Baptiste Leproux & Ferdinand Coumau]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, Danny Thompson sits down with Ferdinand Coumau and Baptiste Leproux from Nuxt Labs to uncover the story behind Nuxt Studio, a tool that&#39;s transforming how developers and non-technical users interact with Nuxt applications.</p>
<p>Ferdinand and Baptiste share how Nuxt Studio bridges the gap between developer customization and user-friendly content management. Built to empower agencies, freelancers, and their clients, Nuxt Studio combines powerful features like live previews, Vue component integration, and schema-driven forms to make managing content seamless.</p>
<p>The conversation also explores the broader mission of Nuxt Labs—building sustainable open-source tools that enhance developer experience and meet real-world needs. With insights into the future of Nuxt Studio and its potential to scale for larger organizations, this episode is a must-listen for anyone passionate about innovation in web development.</p>
<p>Chapters</p>
<ol>
<li>Introduction and Setting the Stage (00:00:00)  </li>
<li>The Vision Behind Nuxt Studio (00:03:10)  </li>
<li>Nuxt Studio’s Core Features (00:08:45)  </li>
<li>Challenges in Building Nuxt Studio (00:16:20)  </li>
<li>Target Audience and Use Cases (00:22:35)  </li>
<li>Sustainability in Open Source (00:29:00)  </li>
<li>The Future of Nuxt Studio (00:35:10)  </li>
<li>Nuxt Studio’s Role in the Nuxt Ecosystem (00:42:30)  </li>
<li>Closing Thoughts and What’s Next (00:48:00)  </li>
<li>Sponsor Shoutout and Wrap-Up (00:53:20)</li>
</ol>
<p>Follow Baptiste Leproux and Ferdinand Coumau
Baptiste Twitter: <a href="https://x.com/_larbish">https://x.com/_larbish</a>
Ferdinand Twitter: <a href="https://x.com/CoumauFerdinand">https://x.com/CoumauFerdinand</a>
Baptiste Linkedin:  <a href="https://www.linkedin.com/in/baptiste-leproux-618842b0/">https://www.linkedin.com/in/baptiste-leproux-618842b0/</a>
Ferdinand Linkedin:  <a href="https://www.linkedin.com/in/ferdinand-coumau-nuxt/">https://www.linkedin.com/in/ferdinand-coumau-nuxt/</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/how-nuxt-studio-is-redefining-developer-and-user-experience-with-baptiste</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/how-nuxt-studio-is-redefining-developer-and-user-experience-with-baptiste</guid>
            <pubDate>Wed, 20 Nov 2024 16:14:55 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Generative AI in the Global Payments Industry: Insights from Dondi Black, CPO of TSYS]]></title>
            <description><![CDATA[<p>Tracy Lee sits down with Dondi Black, Chief Product Officer of Issuer Solutions at TSYS, to explore the transformative journey of innovation, cultural shifts, and emerging technologies in the payments industry. Dondi shares her insights from three decades in the field, discussing how her organization leverages synergies, empowers teams, and implements measurable strategies to drive innovation and transformation. The conversation dives into the practical applications of generative AI, privacy-enhancing technologies, and a North Star approach to cultural transformation. Tracy and Dondi also touch on the importance of self-advocacy, honest feedback, and creating inclusive environments to foster innovation at every level.</p>
<p>Chapters
00:00:04 Welcome &amp; Introductions<br>00:00:34 Reflections on Transformation &amp; Innovation<br>00:01:28 Synergies &amp; Organizational Transformation<br>00:02:19 Measuring Success in Cultural Transformation<br>00:05:07 Empowering Individuals &amp; Leadership Growth<br>00:08:07 Emerging Innovations: Generative AI in Fraud &amp; Risk<br>00:10:15 Security Investments &amp; AI&#39;s Industry Impact<br>00:12:28 The Future of Privacy-Enhancing Technologies<br>00:15:30 Sustaining Momentum in Transformation<br>00:18:28 The Industry’s Pragmatic Shift<br>00:20:39 Inclusive Innovation &amp; Cultural Change<br>00:23:12 Advocating for Yourself &amp; Your Ideas<br>00:25:27 Leaders as Coaches: The Power of Feedback<br>00:28:56 Sponsor Spotlight: This Dot<br>00:29:53 Where to Connect with Dondi<br>00:30:39 Closing Remarks  </p>
<p>Follow Dondi Black on Social Media
Linkedin: <a href="https://www.linkedin.com/in/dondi-black/">https://www.linkedin.com/in/dondi-black/</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/generative-ai-in-the-global-payments-industry-insights-from-dondi-black-cpo</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/generative-ai-in-the-global-payments-industry-insights-from-dondi-black-cpo</guid>
            <pubDate>Mon, 25 Nov 2024 18:28:51 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Reducing Fatigue for On-Call SWEs with AI, Mentorship, & More with Dr. Sally Wahba]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, recorded live at All Things Open in Raleigh, NC, hosts Rob Ocel and Danny Thompson sit down with Dr. Sally Wahba, Principal Software Engineer at Splunk. Dr. Wahba shares her experience tackling on-call burnout, offering insights into reducing fatigue through better observability, automation, and thoughtful team practices.
The conversation also touches on mentorship and growth in the tech industry, including practical advice for junior engineers navigating the transition from academics to professional roles and tips for companies to better support new talent.</p>
<p>Chapters
00:00:13 - Introduction to Marketing This Dot
00:01:00 - Asking for Help Effectively
00:02:21 - Reducing On-Call Fatigue
00:04:42 - Observability Best Practices
00:07:07 - Balancing Alerts and On-Call Efficiency
00:09:30 - The Role of On-Call in Modern Engineering
00:11:29 - Insights from the Grace Hopper Celebration
00:13:56 - Mentorship and Team Dynamics
00:16:14 - Rapid Changes in Technology and Adaptation
00:18:39 - Automation, Observability, and Debugging Challenges
00:21:04 - Addressing the Talent Gap and Junior Engineer Growth
00:24:00 - Closing Thoughts and Where to Learn More</p>
<p>Follow Dr. Sally Wahba on Social Media
Twitter: <a href="https://x.com/sallyky">https://x.com/sallyky</a>
Linkedin: <a href="https://linkedin.com/in/sallywahba/">https://linkedin.com/in/sallywahba/</a></p>
<p>Sponsored by This Dot: <a href="https://www.thisdot.co">thisdot.co</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/reducing-fatigue-for-on-call-swes-with-ai-mentorship-and-more-with-dr-sally</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/reducing-fatigue-for-on-call-swes-with-ai-mentorship-and-more-with-dr-sally</guid>
            <pubDate>Wed, 27 Nov 2024 17:02:40 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Gen UI, Astra DB, & Vercel’s AI SDK for User Friendly Apps: A Demo by Tejas Kumar]]></title>
            <description><![CDATA[<p>Join <a href="https://x.com/ladyleet">Tracy Lee</a> and <a href="https://x.com/MarkSShenouda">Mark Shenouda</a> in this JS Drop episode as they discuss AI and GenUI with <a href="https://x.com/TejasKumar_">Tejas Kumar</a>. Learn how to use DataStax Astra DB, Vercel AI SDK, and other cutting-edge tools to build smarter, more dynamic applications. This session covers everything from vector searches to generating interactive React components, offering practical tips and hands-on demos for developers.</p>
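<p>As a rough intuition for the keyword-vs-vector comparison in the demo, the sketch below ranks documents by cosine similarity between embeddings. It is a toy stand-in for what Astra DB does at scale; the types and data shapes are hypothetical, and a real embedding model would produce the vectors:</p>
<pre><code>// Toy vector search: rank documents by cosine similarity to a query vector.
// In the demo this ranking happens inside DataStax Astra DB; here it runs
// in-process purely to show the idea.
type Doc = { id: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  a.forEach((x, i) => {
    dot += x * b[i];
    normA += x * x;
    normB += b[i] * b[i];
  });
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k nearest documents, most similar first.
function search(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((d1, d2) =>
      cosineSimilarity(query, d2.embedding) - cosineSimilarity(query, d1.embedding))
    .slice(0, k);
}
</code></pre>
<p>Unlike keyword search, nothing here requires the query terms to appear in the document; semantically similar items score high even with no words in common, which is the behavior the demo contrasts against traditional search.</p>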
<p>Chapters
[00:00:00] Introduction to JS Drop
[00:02:30] Tejas Kumar’s AI Presentation Overview
[00:05:00] Demo: Traditional Keyword Search vs. AI Search
[00:09:30] Building AI-Driven Search with DataStax Astra and Vercel AI SDK
[00:16:00] Generating React Components with AI
[00:24:30] Exploring Advanced AI Tools: WebSim and Beyond
[00:32:00] Connecting DataStax Astra with AI Models
[00:39:00] Best Practices for AI-Powered Development
[00:45:00] Q&amp;A and Final Thoughts</p>
<p>Follow Tejas on Social Media
Twitter: <a href="https://x.com/TejasKumar_">https://x.com/TejasKumar_</a>
Linkedin: <a href="https://www.linkedin.com/in/tejasq/">https://www.linkedin.com/in/tejasq/</a>
Github: <a href="https://github.com/TejasQ">https://github.com/TejasQ</a>
ConTejas Podcast: <a href="https://www.youtube.com/playlist?list=PLEJpU2pV0Lie1VWU1unMg_7FRQ1gqFmAZ">https://www.youtube.com/playlist?list=PLEJpU2pV0Lie1VWU1unMg_7FRQ1gqFmAZ</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/gen-ui-astra-db-and-vercels-ai-sdk-for-user-friendly-apps-a-demo-by-tejas</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/gen-ui-astra-db-and-vercels-ai-sdk-for-user-friendly-apps-a-demo-by-tejas</guid>
            <pubDate>Thu, 29 Aug 2024 21:47:46 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Color Theory For Software Engineers + Color Accessibility & Performance with Sarah Shook]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web podcast, Tracy Lee and Rob Ocel sit down with Sarah Shook, a UI/UX engineer at Hunter Industries. They discuss the topic of Sarah’s THAT Conference talk on color theory, exploring the intricacies of RGB, HSL, and hex color models. The discussion also touches on the importance of understanding how color accessibility impacts your page’s performance.</p>
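<p>For readers following along, the two conversions at the heart of the talk are small enough to sketch directly. This shows the standard RGB-to-hex mapping and the WCAG relative luminance formula; it is a sketch of the published formulas, not code from Sarah&#39;s presentation:</p>
<pre><code>// Convert 0-255 RGB channels to a hex color string.
function rgbToHex(r: number, g: number, b: number): string {
  const toHex = (c: number) => c.toString(16).padStart(2, '0');
  return '#' + toHex(r) + toHex(g) + toHex(b);
}

// WCAG 2.x relative luminance: linearize each sRGB channel, then weight it.
// Accessibility contrast checkers compare ratios of these luminance values.
function relativeLuminance(r: number, g: number, b: number): number {
  const linearize = (c: number) => {
    const s = c / 255; // normalize to 0-1
    return s > 0.03928 ? Math.pow((s + 0.055) / 1.055, 2.4) : s / 12.92;
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Example: rgbToHex(255, 153, 0) returns '#ff9900'.
</code></pre>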
<p>Chapters</p>
<ul>
<li>00:00 - Introduction and Guest Introduction</li>
<li>01:30 - Balancing Work and Parenthood</li>
<li>03:20 - Family-Friendly Conferences and THAT Conference Experience</li>
<li>05:45 - Overview of Sarah&#39;s Presentation on Color Theory</li>
<li>07:00 - Understanding Additive and Subtractive Color Models</li>
<li>09:00 - RGB to Hex Conversion Explained</li>
<li>11:45 - Importance of Color Theory in Web Development</li>
<li>14:00 - Accessibility and Color Luminescence</li>
<li>16:00 - Tools and Resources for Color Accessibility</li>
<li>18:30 - Sarah’s Experience with Vue and Other Frameworks</li>
<li>21:00 - Discussion on Framework Deployment and Tooling</li>
<li>23:15 - The Challenges and Benefits of Learning Multiple Frameworks</li>
<li>25:00 - Tailwind, TypeScript, and Framework Preferences</li>
<li>27:00 - Vue Community and Tooling Insights</li>
<li>29:00 - Advanced JavaScript and TypeScript Content with DropJS</li>
<li>31:00 - Sarah’s Color Utility Project and Where to Find Her</li>
<li>32:30 - Closing Thoughts and Outro</li>
</ul>
<p>Follow Sarah Shook on Social Media
Twitter: <a href="https://x.com/shookcodes">https://x.com/shookcodes</a>
Linkedin: <a href="https://www.linkedin.com/in/sarahshook/">https://www.linkedin.com/in/sarahshook/</a>
Github: <a href="https://github.com/shookcodes">https://github.com/shookcodes</a></p>
<p>Sponsored by <a href="https://www.wix.com/studio">Wix Studio</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/color-theory-for-software-engineers-color-accessibility-and-performance-with</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/color-theory-for-software-engineers-color-accessibility-and-performance-with</guid>
            <pubDate>Tue, 27 Aug 2024 15:38:44 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Optimizing React Components with the React Compiler in V18]]></title>
            <description><![CDATA[<p><a href="https://www.linkedin.com/in/jtomchak/">Jesse Tomchak</a> shows viewers how to use the new React compiler to automate memoization and useCallback to optimize React components. He demonstrates the manual process of optimizing React code and then shows how the React compiler simplifies this by automatically managing these optimizations. He also demonstrates setting up and configuring the compiler in React v18 using a Babel plugin, and explores the generated output to explain how the compiler enhances performance.</p>
<p>Chapters
Introduction and Overview - 00:00
Introduction to the React Compiler - 02:27
Manual Optimization with useMemo and useCallback - 09:27
Setting Up the React Compiler - 26:00
Analyzing Compiler Output - 44:50
Exploring the Playground and Generated Code - 50:47
Handling Skipped Components and Memoization - 58:18
Discussion on React&#39;s Future and Best Practices - 01:03:28
Q&amp;A and Audience Interaction - 01:10:53
Conclusion and Final Thoughts - 01:15:38</p>
<p>Follow Jesse Tomchak on Social Media
Twitter: <a href="https://x.com/jtomchak">https://x.com/jtomchak</a>
Linkedin: <a href="https://www.linkedin.com/in/jtomchak/">https://www.linkedin.com/in/jtomchak/</a>
Mastodon: <a href="https://moth.social/@jtomchak">https://moth.social/@jtomchak</a>
Github: <a href="https://github.com/jtomchak">https://github.com/jtomchak</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/optimizing-react-components-with-the-react-compiler-in-v18</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/optimizing-react-components-with-the-react-compiler-in-v18</guid>
            <pubDate>Fri, 23 Aug 2024 11:58:28 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Impact of AI on Testing with Ivan Barajas Vargas, CEO + Co-Founder at MuukTest]]></title>
            <description><![CDATA[<p><a href="https://x.com/ibarajasvargas">Ivan Barajas Vargas</a>, CEO and Co-Founder, at <a href="https://muuktest.com/">MuukTest</a> discusses his journey from QA expert to engineering leadership, emphasizing the importance of adapting to changes in QA and effectively utilizing resources. Along with <a href="https://x.com/robocell">Rob Ocel</a>, Ivan highlights the impact of AI technologies on effective QA testing, and the limitations of AI in understanding user experiences. </p>
<p>Ivan stresses the importance of adaptability in the evolving field of QA. He highlights the need for QA professionals to stay updated with the latest trends and technologies and to be willing to adapt their strategies accordingly. Ivan also emphasizes the effective utilization of available resources, such as automation tools, to streamline QA processes and improve efficiency. He believes automation can significantly reduce manual effort and allow QA teams to focus on more critical aspects of testing.</p>
<p>Along with host Rob Ocel, he also discusses the limitations of AI in understanding user experiences. While AI can be a powerful tool in automating certain aspects of testing, Ivan emphasizes the need for human intervention to ensure accurate results. He believes that AI should be seen as a complement to human testers, rather than a replacement. Ivan also shares his vision for the future, where AI will play a more significant role in augmenting testing processes, but human expertise will remain crucial for ensuring quality.</p>
<p><a href="https://engineeringleadership.podbean.com/e/the-impact-of-ai-on-testing-with-ivan-barajas-vargas-ceo-co-founder-at-muuktest/">Download this episode here</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/the-impact-of-ai-on-testing-with-ivan-barajas-vargas-ceo-co-founder-at</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-impact-of-ai-on-testing-with-ivan-barajas-vargas-ceo-co-founder-at</guid>
            <pubDate>Wed, 10 Jul 2024 07:05:14 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Hype Cycles and How Teams Should Be Organized with Jimmy Jacobson, CTO at Codingscape]]></title>
            <description><![CDATA[<p><a href="https://x.com/jimmyjacobson">Jimmy Jacobson</a>, CTO at <a href="https://x.com/codingscape">Codingscape</a>, sits down with <a href="https://x.com/ladyleet">Tracy Lee</a> to discuss engineering consultancy management and team organization. They talk about professional development, hype cycles, and the benefit of investing in social media and marketing both as an organization and as an individual. </p>
<p>Both Jimmy and Tracy stress the importance of a flat team structure, which fosters professional development and encourages collaboration among team members. As technology continues to evolve, so do business practices. It is crucial for engineering leaders to adapt and embrace new technologies and methodologies. </p>
<p>Tracy and Jimmy&#39;s conversation offers valuable insights into the world of engineering leadership, technology, and business. From the importance of hiring experienced engineers and product managers to the evolution of business practices, their discussion sheds light on the strategies and approaches that drive success.</p>
<p><a href="https://engineeringleadership.podbean.com/e/hype-cycles-and-how-teams-should-be-organized-with-jimmy-jacobson-cto-at-codingscape/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/hype-cycles-and-how-teams-should-be-organized-with-jimmy-jacobson-cto-at</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/hype-cycles-and-how-teams-should-be-organized-with-jimmy-jacobson-cto-at</guid>
            <pubDate>Mon, 15 Jul 2024 04:12:49 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Systemized Problem Solving in Engineering Leadership Using Data with Ankur Jain]]></title>
            <description><![CDATA[<p>What is it like to transition from technologist to fractional CTO? How much do systems matter when operating at the C-level? <a href="https://www.linkedin.com/in/ankurjain2/">Ankur Jain</a>, Fractional CTO and Founder at Sprout, discusses the transition from being a technologist to a fractional CTO, and how to define and meet engineering KPIs. He emphasizes the significance of systemizing and design thinking in problem-solving, stressing the need to understand customer needs and deliver effective solutions.</p>
<p>By adopting a systematic approach, businesses can effectively identify and address customer needs. Design thinking, on the other hand, encourages a human-centered approach to innovation, ensuring that technology solutions are not only functional but also user-friendly. Ankur&#39;s insights remind us that successful technology implementation requires a deep understanding of customer pain points and a commitment to delivering effective solutions.</p>
<p>In an era where data is abundant, Ankur emphasizes the value of making data-driven decisions. However, he cautions against relying on biased data, which can lead to flawed conclusions. He advises businesses to carefully analyze and interpret data, ensuring that it aligns with the goals and objectives of the organization. By leveraging data effectively, businesses can gain valuable insights, make informed decisions, and drive growth.</p>
<p>Ankur highlights the significance of ensuring product-market fit by closely collaborating with early customers. By actively involving customers in the development process, businesses can gain valuable feedback and insights, ensuring that their products or services meet the needs of the target market. Ankur&#39;s emphasis on customer collaboration serves as a reminder that successful technology implementation requires a customer-centric approach, where the end-users&#39; needs and preferences are at the forefront of decision-making.</p>
<p>Ankur advocates for mentorship and continuous learning in leadership roles. He emphasizes the value of seeking guidance from experienced professionals and gradually growing within organizations. His insights remind us that leadership is a journey of growth and development, and that embracing mentorship and continuous learning can help individuals navigate the complexities of technology leadership more effectively.</p>
<p><a href="https://engineeringleadership.podbean.com/e/systemized-problem-solving-in-engineering-leadership-using-data-with-ankur-jain/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/systemized-problem-solving-in-engineering-leadership-using-data-with-ankur</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/systemized-problem-solving-in-engineering-leadership-using-data-with-ankur</guid>
            <pubDate>Fri, 26 Jul 2024 11:10:47 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Kent C. Dodds on Why he Traded Angular for React, Launching EpicWeb.dev, + What’s Next for EpicReact.dev]]></title>
            <description><![CDATA[<p>Kent C. Dodds joins Tracy Lee and Rob Ocel at THAT Conference-WI for a discussion about his journey from being an Angular developer to becoming a leading figure in the React community. Kent shares his motivations for making the switch, detailing how React&#39;s focus on JavaScript fundamentals and its incremental adoption of new features appealed to him. He also reviews his current and upcoming projects, including the launch of EpicWeb.dev, a comprehensive platform designed to provide end-to-end web development education. Kent talks about the updates coming to EpicReact.dev, including a new tutorial where developers can build useState and useEffect from scratch, aimed at deepening their understanding of React hooks. </p>
<p>Chapters</p>
<p>00:00 Introduction and Background
02:41 Preparing for a 90-Minute Keynote
05:37 Writing a Book and Other Projects
08:04 Surrounded by Ambitious People
09:01 Personal Stories and Balance
10:22 Lessons from Domo and Joe Eames
11:21 Learning from Experienced Engineers
12:41 The Importance of Surroundings
13:33 Choosing the Right People to Associate With
14:46 Kent&#39;s Organizational Skills
15:41 Balancing Work and Family
17:06 Committing to Big Things
18:04 Avoiding Burnout and Assessing Priorities
19:26 Sharing Personal Stories in Talks
20:21 Finding Effectiveness and Efficiency
21:17 Dealing with Burnout and Overwhelm
22:46 The Entrepreneurial Mentality
23:15 Running to the Top and Figuring It Out
24:14 Kent&#39;s Various Projects
25:41 Transitioning from Angular to React</p>
<p>Follow Kent C Dodds on Social Media
Twitter: <a href="https://x.com/kentcdodds">https://x.com/kentcdodds</a>
Linkedin: <a href="https://www.linkedin.com/in/kentcdodds/">https://www.linkedin.com/in/kentcdodds/</a>
Bluesky: <a href="https://bsky.app/profile/kentcdodds.com">https://bsky.app/profile/kentcdodds.com</a>
GitHub: <a href="https://github.com/kentcdodds">https://github.com/kentcdodds</a></p>
<p>EpicWeb.dev: <a href="https://www.epicweb.dev/">https://www.epicweb.dev/</a>
EpicReact.dev: <a href="https://www.epicreact.dev/">https://www.epicreact.dev/</a></p>
<p>Learn More About THAT Conference Wisconsin 2024: <a href="https://thatconference.com/wi/2024/">https://thatconference.com/wi/2024/</a></p>
<p><a href="https://open.spotify.com/episode/3kAAnLxtfIu6VwuInQcxay?si=826b5084faf440ca">Listen to this episode here.</a></p>
<p><a href="https://www.wix.com/studio">Sponsored by Wix Studio</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/kent-c-dodds-on-why-he-traded-angular-for-react-launching-epicweb-dev-whats</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/kent-c-dodds-on-why-he-traded-angular-for-react-launching-epicweb-dev-whats</guid>
            <pubDate>Wed, 14 Aug 2024 14:58:14 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA["Do What Matters with Who Matters While It Matters" Mark Techson on Leadership]]></title>
            <description><![CDATA[<p>Tracy Lee and Rob Ocel sit down with Mark Thompson, aka &quot;Mark Techson&quot;, to explore key themes around personal branding, workplace culture, and leadership. Mark shares his journey of building a strong personal brand, discussing how consistent online presence can shape public perception and create professional opportunities. The conversation provides valuable insights into the art of leading without a formal title, emphasizing the importance of cultural sensitivity and gradual influence in workplace environments.</p>
<p>A significant portion of the discussion is devoted to Mark&#39;s motto of &quot;Do what matters with who matters while it matters.&quot; This practical approach encourages listeners to focus on meaningful work, prioritize important relationships, and make the most of their time. Mark also reflects on the challenges of maintaining resilience in the face of personal adversity, offering a candid look at how to balance personal and professional life effectively.</p>
<p>Listeners will find actionable advice on how to take control of their career trajectory, foster a positive work culture, and navigate the complexities of leadership, all while staying true to themselves.</p>
<p>Chapters 
[00:00] Introduction
[02:19] Personal Branding and Online Presence
[06:36] Navigating Workplace Culture
[08:56] The Story Behind &#39;Well Dressed Wednesdays&#39;
[11:12] Developing the &quot;Do What Matters&quot; Framework
[13:29] Balancing Public and Private Life
[18:10] Overcoming Personal Challenges
[20:28] Taking Control of Your Career
[22:49] Practical Takeaways for Listeners</p>
<p>Follow Mark on Social Media
<a href="https://x.com/marktechson">Twitter</a>
<a href="https://www.linkedin.com/in/marktechson/">Linkedin</a>
<a href="https://github.com/MarkTechson">Github</a>
<a href="https://bsky.app/profile/marktechson.com">Bluesky</a></p>
<p><a href="https://wix.com/studio">Sponsored by Wix Studio.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/do-what-matters-with-who-matters-while-it-matters-mark-techson-on-leadership</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/do-what-matters-with-who-matters-while-it-matters-mark-techson-on-leadership</guid>
            <pubDate>Tue, 20 Aug 2024 13:13:38 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Detoxify Your Team Culture with Angela Nelms]]></title>
            <description><![CDATA[<p><a href="https://www.linkedin.com/in/angelagillnelms/">Angela Nelms</a> emphasizes humility, transparency, and continuous learning as essential elements of effective leadership and company culture. Her insights shed light on leadership&#39;s critical role in organizational success, the essential elements of great leadership, and how to embrace failure as a means of combating workplace toxicity.</p>
<p>Angela emphasizes that customer service and a positive work culture form the foundation of company success. By fostering humility and transparency, leaders create environments where employees feel valued and motivated to deliver outstanding service. Angela stresses the importance of building collaborative teams, nurturing healthy relationships, and addressing toxic cultures promptly, ensuring organizations thrive on trust and respect.</p>
<p>In this conversation, Angela believes that trust is a cornerstone of successful company culture. Building trust among team members and leaders encourages innovation and growth. Investing in employee development, both personally and professionally, fosters a motivated, engaged workforce committed to the organization&#39;s success, and helps retain great talent.</p>
<p>Throughout the episode, Angela emphasizes that effective communication and shared vision are paramount in leadership. Promoting transparency and open dialogue cultivates trust and collaboration. By embracing failures as opportunities for growth and openly discussing lessons learned, leaders foster environments where employees feel empowered to take risks and learn. Shared vision ensures alignment with the company&#39;s goals and values throughout the organization.</p>
<p><a href="https://engineeringleadership.podbean.com/e/detoxify-your-team-culture-with-angela-nelms/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/detoxify-your-team-culture-with-angela-nelms</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/detoxify-your-team-culture-with-angela-nelms</guid>
            <pubDate>Thu, 14 Mar 2024 14:26:26 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Revolutionizing Pharma using Cutting-Edge Digital Innovation with Lee Dash]]></title>
            <description><![CDATA[<p>Lee Dash, SVP of Digital Innovation at Medistrava, sheds light on the pivotal role of user experience (UX) within pharmaceuticals. Lee underscores the importance of effectively delivering scientific content to healthcare professionals and the inherent challenges in innovating UX within an industry steeped in traditional systems. This episode navigates the complexities of adapting user-friendly interfaces to pharmaceutical contexts and the ongoing endeavors to elevate UX.</p>
<p>Lee stresses the significance of optimizing the content supply chain and user testing to ensure a seamless user experience. In an arena where scientific information holds paramount importance, presenting it in an easily accessible and comprehensible manner for healthcare professionals is essential. By integrating user feedback and conducting thorough testing, pharmaceutical companies can refine their digital platforms to meet the diverse needs of stakeholders, encompassing medical science liaisons, patients, researchers, and physicians.</p>
<p>A notable takeaway from the dialogue is the necessity for customized solutions tailored to the distinct requirements of various stakeholders within the pharmaceutical industry. Each faction possesses unique needs and preferences concerning the access and utilization of scientific content. By comprehending these specific needs, pharmaceutical entities can develop user-friendly interfaces that resonate with the preferences of each stakeholder group. This approach not only enhances user experience but also bolsters the overall efficacy of digital platforms.</p>
<p>Lee Dash shares the significance of assembling a versatile development team equipped with multifaceted skills. In an industry characterized by rapid evolution, having a team capable of adapting to shifting technologies and user expectations is imperative. Additionally, Lee talks about the importance of infusing technical acumen into leadership teams. By cultivating leaders well-versed in the technical intricacies of digital innovation, pharmaceutical companies can drive efficient and effective development processes.</p>
<p><a href="https://engineeringleadership.podbean.com/e/revolutionizing-pharma-using-cutting-edge-digital-innovation-with-lee-dash/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/revolutionizing-pharma-using-cutting-edge-digital-innovation-with-lee-dash</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/revolutionizing-pharma-using-cutting-edge-digital-innovation-with-lee-dash</guid>
            <pubDate>Tue, 30 Apr 2024 14:13:35 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Building Bulletproof Teams: Why Blame Is Your Worst Enemy with Leon Revill]]></title>
            <description><![CDATA[<p><a href="https://twitter.com/DenoiseDigital">Leon Revill</a>, uncovers the profound impact of nurturing trust, collaboration, and shared responsibility within teams. His interview with <a href="https://twitter.com/ladyleet">Tracy Lee</a> explores the importance of fostering an environment where mistakes are embraced as valuable learning experiences, steering clear of blame and focusing on growth.</p>
<p>Firstly, Leon emphasizes the need for organizations to cultivate a blame-free atmosphere, encouraging team members to take risks and glean insights through failure. By reframing mistakes as learning opportunities, teams foster a culture that fuels growth and innovation. This approach not only instills a sense of ownership among individuals but also nurtures psychological safety, paving the way for enhanced idea-sharing and collaboration.</p>
<p>Secondly, Tracy and Leon underscore the pivotal role of transparent communication within teams. Through fostering honest dialogue, organizations can strengthen trust and deepen connections among team members. Such open communication fosters collective accountability, where each member bears responsibility for the team&#39;s triumphs. Empowering individuals to voice their thoughts and concerns fosters an environment where diverse perspectives are valued, ultimately enhancing decision-making and problem-solving.</p>
<p>Moreover, Leon offers examples of how organizations can drive continuous improvement by empowering their teams. Providing individuals with the autonomy to make decisions and take ownership not only spurs personal growth but also bolsters team success. By nurturing a growth mindset and facilitating skill development, organizations foster a culture that embraces learning and adapts to change.</p>
<p>The interview touches on the evolving technology landscape and its implications for team collaboration. With the emergence of artificial intelligence and automation, Revill stresses the importance of aligning development processes with these advancements. Collaboration becomes imperative in this scenario, as teams must collaborate to grasp and implement these technologies effectively. Through cultivating a culture of trust and collaboration, organizations can navigate technological shifts and remain at the forefront of innovation.</p>
<p><a href="https://engineeringleadership.podbean.com/e/building-bulletproof-teams-why-blame-is-your-worst-enemy-with-leon-revill/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/building-bulletproof-teams-why-blame-is-your-worst-enemy-with-leon-revill</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/building-bulletproof-teams-why-blame-is-your-worst-enemy-with-leon-revill</guid>
            <pubDate>Thu, 16 May 2024 14:59:12 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Bad Ideas are Good Ideas with Cassidy Williams, CTO of Contenda]]></title>
            <description><![CDATA[<p>Cassidy Williams, CTO at Contenda, shares the story of how Contenda pivoted, with an inside look at how the team successfully changed its thinking and strategy to pull it off.</p>
<p>Contenda&#39;s current product focus is Brain Story, an app that utilizes AI to help people brainstorm ideas. </p>
<p>Rob and Cassidy highlight the importance of having &quot;bad ideas&quot; and normalizing them. Cassidy shares her team’s Slack channel called “bad ideas”. This channel allows team members to freely share and discuss ideas without fear of judgment. It fosters a culture of creativity and encourages everyone to contribute their thoughts, even if they may not initially seem like the best ideas.</p>
<p>Adaptability and the ability to pivot are emphasized throughout the episode as Cassidy highlights the importance of being able to adapt and pivot in both life and career. </p>
<p>Cassidy&#39;s experiences and advice serve as a reminder that success often comes from embracing new ideas, collaborating with others, and being willing to adapt.</p>
<p>Listen to the full episode here: <a href="https://engineeringleadership.podbean.com/e/bad-ideas-are-good-ideas-with-cassidy-williams-cto-of-contenda/">https://engineeringleadership.podbean.com/e/bad-ideas-are-good-ideas-with-cassidy-williams-cto-of-contenda/</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/bad-ideas-are-good-ideas-with-cassidy-williams-cto-of-contenda</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/bad-ideas-are-good-ideas-with-cassidy-williams-cto-of-contenda</guid>
            <pubDate>Tue, 30 Jan 2024 13:12:12 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[How to Expand Your Influence Beyond Your Engineering Team with Adrianna Bailey]]></title>
            <description><![CDATA[<p><a href="https://www.linkedin.com/in/adrianna-bailey/">Adrianna Bailey</a>, <a href="https://www.maersk.com/">Maersk&#39;s</a> SVP of Engineering and business CIO, underscores the shifting attitudes about engineering leadership by highlighting that employers are no longer solely looking for individual technical prowess but those who can foster team success. Adrianna emphasizes the importance of leaders staying abreast of technological advancements and continuously honing their skills to effectively guide and nurture their teams.</p>
<p>Central to effective leadership is a comprehensive understanding of the company&#39;s overarching strategy. Leaders must expand their influence beyond their immediate team and align their efforts with the organization&#39;s goals. Transparent decision-making and communication are paramount. Adrianna stresses the need for leaders to make trade-offs openly and ensure team alignment through effective communication channels. Being receptive to feedback, willing to reassess decisions, and prioritizing ongoing learning and collaboration are all integral facets of this process.</p>
<p>Finally, Adrianna discusses the art of delegating tasks and the importance of leaders focusing on activities that leverage their unique skills and delegating others effectively. She advocates for a mentorship approach that involves asking questions rather than providing direct answers, empowering team members to develop their problem-solving abilities.</p>
<p><a href="https://engineeringleadership.podbean.com/e/how-to-expand-your-influence-beyond-your-engineering-team-with-adrianna-bailey/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/how-to-expand-your-influence-beyond-your-engineering-team-with-adrianna</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/how-to-expand-your-influence-beyond-your-engineering-team-with-adrianna</guid>
            <pubDate>Tue, 26 Mar 2024 15:19:24 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[The Key Elements of Effective Software Engineering Leadership with Revathi Pillai ]]></title>
            <description><![CDATA[<p><a href="https://www.linkedin.com/in/revathi-pillai/">Revathi Pillai</a>, the Chief Engineering Officer of <a href="https://mutualink.net/">Mutualink</a>, emphasized the significance of clear communication, collaboration, and transparency in software engineering leadership, and shared her journey from engineer to manager and shed light on the importance of self-advocacy for women in the workplace. </p>
<p>Along with <a href="https://twitter.com/ladyleet">Tracy Lee</a>, Revathi discussed the delicate balance between developing new features and maintaining existing products. By managing cost reduction projects and considering technical debt, engineering leaders can ensure that their products remain robust and efficient in the long run.</p>
<p>Revathi also emphasized the role of efficient processes and hiring individuals with the right mindset in driving cultural change within an organization. By implementing streamlined workflows and fostering a collaborative environment, engineering teams can minimize chaos and create a smoother work experience. </p>
<p>Lastly, they talked about the benefits of leveraging artificial intelligence in operations. By harnessing the power of AI, engineering teams can automate repetitive tasks, optimize workflows, and improve overall efficiency. Revathi’s perspective underscores the potential of AI in transforming the way engineering teams operate, enabling them to focus on more strategic and impactful work.</p>
<p><a href="https://engineeringleadership.podbean.com/e/the-key-elements-of-effective-software-engineering-leadership-with-revathi-pillai/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/the-key-elements-of-effective-software-engineering-leadership-with-revathi-pillai</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/the-key-elements-of-effective-software-engineering-leadership-with-revathi-pillai</guid>
            <pubDate>Tue, 14 May 2024 17:56:41 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Empathy Driven Leadership with Joseph Bironas]]></title>
            <description><![CDATA[<p><a href="https://www.linkedin.com/in/josephbironas/">Joseph Bironas</a>, CTO of Stanza stresses the importance of empathy in leading startups, where high-pressure environments demand understanding team members&#39; motivations and fears. By cultivating a culture of psychological safety, empathetic leaders empower teams to excel and contribute their best work.</p>
<p>Beyond technical skills, empathetic leaders prioritize diverse perspectives and individual strengths in hiring practices, fostering collaboration and inclusivity within teams. The conversation highlighted the shift towards empathy in tech leadership. While traditionally task-oriented, leaders now recognize empathy&#39;s positive impact on team morale, creativity, and performance.</p>
<p>How do you prioritize empathy to create a work environment where team members feel valued, understood, and motivated to excel?</p>
<p><a href="https://engineeringleadership.podbean.com/e/empathy-driven-leadership-with-joseph-bironas/">Download this podcast episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/empathy-driven-leadership-with-joseph-bironas</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/empathy-driven-leadership-with-joseph-bironas</guid>
            <pubDate>Tue, 19 Mar 2024 16:30:09 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[You Can’t Mentor Junior and Senior Engineers the Same with Dan DiGangi]]></title>
            <description><![CDATA[<p>In this episode of the Engineering Leadership series, <a href="https://twitter.com/dandigangi">Dan DiGangi</a> sheds light on the crucial role that engineering leaders play in promoting a culture of learning within their organizations, focusing on the importance of adapting communication styles, expanding impact beyond coding, setting long-term visions, and navigating the challenges of transitioning from technical roles to leadership positions.</p>
<p>One of the key points emphasized by Dan DiGangi is the significance of adapting communication styles to cater to different skill levels. Effective communication is essential for fostering collaboration and ensuring that complex technical concepts are understood by all team members. By tailoring their communication approaches, engineering leaders can bridge the gap between varying levels of expertise, creating a cohesive and productive work environment.</p>
<p>Dan highlights the need for senior engineers to make a broader impact beyond coding. While technical expertise is undoubtedly important, engineering leaders should encourage their team members to think beyond their immediate tasks and consider the larger organizational goals. By empowering senior engineers to take on leadership roles and contribute to strategic decision-making, leaders can foster a culture of innovation and drive long-term success.</p>
<p>The conversation between Dan DiGangi and Tracy Lee also emphasizes the importance of setting long-term visions and executing strategic plans for career progression. Engineering leaders should guide their team members in envisioning their professional goals and provide guidance on how to achieve them. By creating a roadmap, leaders can inspire their team members to continuously learn and grow, ultimately driving their own career advancement and contributing to the success of the organization.</p>
<p>Dan also discusses the challenges and pitfalls of transitioning from technical roles to leadership positions. While technical expertise is a solid foundation, engineering leaders must also develop non-technical skills such as communication, relationship-building, and strategic thinking. This transition requires a shift in mindset and the ability to navigate complex organizational dynamics. By recognizing and addressing these challenges, aspiring leaders can better prepare themselves for the demands of leadership roles.</p>
<p><a href="https://engineeringleadership.podbean.com/e/you-can-t-mentor-junior-and-senior-engineers-the-same-with-dan-digangi/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/you-cant-mentor-junior-and-senior-engineers-the-same-with-dan-digangi</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/you-cant-mentor-junior-and-senior-engineers-the-same-with-dan-digangi</guid>
            <pubDate>Thu, 11 Apr 2024 06:01:50 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Transforming Auth in an AI World with Rod Boothby]]></title>
            <description><![CDATA[<p>In today&#39;s digital landscape, the need for secure identity verification has become paramount. With the increasing risks of personal data exposure and the rise of sophisticated cyber threats aided by Artificial Intelligence, it is crucial to adopt robust verification processes to protect individuals and organizations alike. In a recent discussion with <a href="https://twitter.com/rod11">Rod Boothby</a>, CEO of <a href="https://www.idpartner.com/">ID Partner Systems</a>, the significance of trusted institutions for identity verification was emphasized, particularly the efficiency of bank-based ID verification over traditional methods. One of the key takeaways from the conversation was the importance of continuous authentication. As technology advances, so do the methods employed by cybercriminals. Deepfake technology, for instance, poses a significant threat to identity verification systems.</p>
<p>To combat this, tighter security measures and continuous authentication are essential. By constantly verifying and validating user identities, organizations can stay one step ahead of potential fraudsters. Rod Boothby also highlighted the need for a developer-focused approach to identity verification. By providing developers with the tools and resources they need, companies like ID Partner Systems aim to streamline the verification process and enhance security. This approach not only ensures a more efficient experience for users but also allows for the integration of behavioral biometrics, which can further strengthen the verification process. </p>
<p><a href="https://engineeringleadership.podbean.com/e/transforming-auth-in-an-ai-world-with-rod-boothby/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/transforming-auth-in-an-ai-world-with-rod-boothby</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/transforming-auth-in-an-ai-world-with-rod-boothby</guid>
            <pubDate>Mon, 22 Apr 2024 03:06:37 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Cultivating Value Through Developer Communities with Ronald Williams]]></title>
            <description><![CDATA[<p>In this JS Drops presentation, <a href="https://www.linkedin.com/in/ronaldwaynewilliams/">Ronald Williams</a>, Senior Community Programs Manager at Cypress.io, discusses key tips for cultivating and capturing value within a developer community. Ronald emphasizes the need to differentiate between creating useful content and deriving business gains from it. He highlights the significance of reporting the impact of initiatives by combining both aspects, stressing that it&#39;s crucial not only to focus on providing valuable content for the community but also to understand how it translates into tangible business outcomes.</p>
<p>To illustrate this concept, Ronald provides examples such as ambassador programs and conferences, which offer useful content and networking opportunities for the community while directly impacting revenue generation. He emphasizes the importance of understanding revenue impact through metrics like conversion rates, retained members, influenced members, and expansion. By measuring these metrics, businesses can effectively gauge the success of their community initiatives and make informed decisions to enhance their strategies.</p>
<p>The presentation emphasizes the value of developer communities in various aspects. Data from the Developer Marketing Alliance supports the notion that developer communities are crucial for feedback gathering, creating ambassador programs, understanding the developer audience, and educating others about products. Ronald stresses the need for effective communication of the business value of community initiatives to stakeholders and advises refining existing programs to maximize value rather than starting from scratch. This approach allows businesses to build upon their current community initiatives and make them even more impactful.</p>
]]></description>
            <link>https://www.thisdot.co/blog/cultivating-value-through-developer-communities-with-ronald-williams</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/cultivating-value-through-developer-communities-with-ronald-williams</guid>
            <pubDate>Tue, 25 Jun 2024 14:55:57 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Transforming Platform Engineering Through Chargeback Programs with Shuchi Mittal]]></title>
            <description><![CDATA[<p><a href="https://www.linkedin.com/in/shuchi-mittal/">Shuchi Mittal</a>, the Head of Cloud enablement at Honeywell, discusses how she as a leader in platform engineering has been able to transform internal platform engineering teams at other organizations by providing teams with value-added services, and how they effectively managed costs. She talks about how to treat these teams as customers, and delivering services that meet the customer needs, but also charging them for their usage.</p>
<p>By providing policy-compliant infrastructure and unique services tailored to their requirements, platform engineering teams can showcase their commitment to supporting the success of other teams within an organization. This approach not only fosters trust but also positions platform engineering as a strategic partner rather than just a service provider. By going beyond basic infrastructure provisioning, the platform engineering team can offer services that help development and product teams streamline their processes and improve efficiency.</p>
<p>Shuchi shares her work at Fiserv, where she implemented a chargeback system to track usage and costs effectively, incentivizing better development practices. This not only helped manage costs but also encouraged teams to optimize their resource utilization, leading to improved overall efficiency and more easily scalable systems. Her platform engineering team recognized the importance of managing financial aspects effectively to secure upfront investment for product development. By implementing a billing system based on gigabyte-hours, they aligned costs with usage, giving teams a clear understanding of their resource consumption. This approach not only provided transparency but also incentivized teams to adopt cost-effective practices.</p>
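<p>To make the gigabyte-hour model concrete, here is a minimal sketch of how such a chargeback calculation could work. The rate, team name, and usage record below are illustrative assumptions, not figures from Fiserv or Honeywell:</p>
<pre><code>// Hypothetical chargeback sketch: bill internal teams on gigabyte-hours.
interface UsageRecord {
  team: string;
  memoryGb: number; // memory reserved by a workload
  hours: number;    // how long the workload ran
}

const RATE_PER_GB_HOUR = 0.005; // assumed flat rate in dollars

function gbHours(record: UsageRecord): number {
  return record.memoryGb * record.hours;
}

function monthlyCharges(records: UsageRecord[]): Map&lt;string, number&gt; {
  const charges = new Map&lt;string, number&gt;();
  for (const r of records) {
    const cost = gbHours(r) * RATE_PER_GB_HOUR;
    charges.set(r.team, (charges.get(r.team) ?? 0) + cost);
  }
  return charges;
}

// A 4 GB service running for a 720-hour month accrues 2,880 GB-hours, or $14.40.
console.log(monthlyCharges([{ team: "payments", memoryGb: 4, hours: 720 }]));
</code></pre>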
<p>By strategically managing financial aspects, the platform engineering team gained the trust of stakeholders and secured the necessary resources to drive innovation and deliver value to the organization. Shuchi’s journey of platform engineering serves as a valuable example of how a team can transition from being a basic service provider to becoming an innovation partner within an organization. By building trust, offering value-added services, and strategically managing financial aspects, the platform engineering team successfully elevated their role and became a strategic enabler of innovation.</p>
<p><a href="https://engineeringleadership.podbean.com/e/transforming-platform-engineering-through-chargeback-programs-with-shuchi-mittal/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/transforming-platform-engineering-through-chargeback-programs-with-shuchi</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/transforming-platform-engineering-through-chargeback-programs-with-shuchi</guid>
            <pubDate>Tue, 16 Apr 2024 15:42:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Are Product Roles Going Away? with Maggie Pint]]></title>
            <description><![CDATA[<p><a href="https://twitter.com/maggiepint">Maggie Pint</a>, Engineering Manager at <a href="https://istari-global.com/">Istari</a>, discusses the trend for companies to focus on streamlining their processes and reducing complexity to deliver products faster and more efficiently versus spending time on innovation this past year. This trend has led to the convergence (maybe consolidation) of product management and engineering roles, where engineers are not only responsible for building the product but also for shaping its strategic direction. Maggie and Tracy discuss whether this is good for the business, or something that should be more concerning to watch out for.</p>
<p>Maggie talks about design in the product role and emphasizes the significance of design beyond aesthetics. It&#39;s not just about making things look pretty; it&#39;s about understanding user needs and creating intuitive experiences. User research skills and customer engagement are becoming essential for engineers, as they need to build products that truly resonate with their target audience.</p>
<p>They talk about how companies like Airbnb and Apple are leading the way in this regard, successfully integrating design thinking into their engineering processes to produce innovative and user-friendly products. By involving engineers in the entire product development lifecycle, from ideation to delivery, these companies are able to create seamless experiences that delight their customers.</p>
<p><a href="https://engineeringleadership.podbean.com/e/are-product-roles-going-away-with-maggie-pint/">Download this episode and listen now!</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/are-product-roles-going-away-with-maggie-pint</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/are-product-roles-going-away-with-maggie-pint</guid>
            <pubDate>Wed, 06 Mar 2024 17:41:33 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Acing the Interview for Software Engineers with Anthony D. Mays ]]></title>
            <description><![CDATA[<p>In this episode of the Modern Web Podcast, Anthony D. Mays discusses code exercises, landing your first job, and how to succeed in interviews. He shares his personal journey to becoming a software engineer and career coach, and emphasizes the importance of having not only technical knowledge but also a strong problem-solving process. The conversation also touches on the role of senior engineers in guiding and empowering junior engineers, the interview processes at different companies, and tips for success. Anthony then covers how candidates can best prepare for technical interviews, emphasizing collaborative problem-solving and authenticity. They also explore &#39;secret questions&#39; and whether they are effective in assessing a candidate&#39;s skills.</p>
<p>Chapters</p>
<p>00:00 Introduction and Guest Introduction
03:24 The Importance of Problem-Solving Process in Interviews
06:32 Guiding and Empowering Junior Engineers
10:20 Understanding Different Interview Processes
19:51 Tips for Success in Interviews
24:57 Collaborative Problem-Solving in Technical Interviews
26:26 The Effectiveness of Secret Questions
29:42 Defining the Interview Process
30:37 The Importance of Authenticity
32:30 Interviewer Training and Feedback
35:18 Selecting the Right Opportunity</p>
<p>Follow Anthony D. Mays on Social Media</p>
<p><a href="https://x.com/anthonydmays">Twitter</a>
<a href="https://www.linkedin.com/in/anthonydmays/">Linkedin</a>
<a href="https://github.com/anthonydmays">Github</a>
<a href="https://bsky.app/profile/anthonydmays.com">Bluesky</a></p>
<p><a href="https://spotifyanchor-web.app.link/e/TJDGlVYV6Lb">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/acing-the-interview-for-software-engineers-with-anthony-d-mays</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/acing-the-interview-for-software-engineers-with-anthony-d-mays</guid>
            <pubDate>Fri, 16 Aug 2024 14:56:47 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Browser Native APIs with Rachel Nabors]]></title>
            <description><![CDATA[<p>Rachel Lee Nabors talks about the challenge of remaining current with new APIs and libraries, and how to prioritize which technologies you should invest your time in. Along with Tracy Lee, Rachel discusses their project of refactoring a demo using modern APIs and the benefits of challenging oneself with browser native APIs. The conversation also covers Rachel&#39;s involvement in standards development, and the evolving web technologies landscape.</p>
<p>Timestamps
[00:00:00] Intro.
[00:03:10] Refactored code, removed external libraries, streamlined.
[00:04:35] Understanding web APIs requires deep knowledge.
[00:05:19] Focus on problem solving, not memorization.
[00:06:39] Many regions, new technologies, use cases.
[00:10:03] React Docs collaboration inspires Angular.
[00:12:17] Career advice and success stories.</p>
<p>Social Media
<a href="nabors.bsky.social">Bluesky</a>
<a href="https://www.instagram.com/rachelnabors/?hl=en">Instagram</a>
<a href="https://x.com/rachelnabors">Twitter</a>
<a href="https://github.com/rachelnabors">GitHub</a>
<a href="https://www.linkedin.com/in/rachelnabors">LinkedIn</a></p>
<p>Links
<a href="https://cascadiajs.com/">CascadiaJS</a>
<a href="https://dribbble.com/rachelthegreat/projects/350942-Alice-in-Web-Animation-API-Land">Rachel Nabors’ “Alice” Project</a></p>
<p><a href="thisdot.co">Sponsored by This Dot</a>
<a href="https://spotifyanchor-web.app.link/e/OYSdtiDdULb">Download this episode here.</a>
<a href="https://www.thisdot.co/blog/browser-native-apis-with-rachel-nabors">Read more on our blog!</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/browser-native-apis-with-rachel-nabors</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/browser-native-apis-with-rachel-nabors</guid>
            <pubDate>Fri, 09 Aug 2024 14:05:46 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Don’t Make It Difficult to Engage with Your Brand: DevRel with Jonan Scheffler]]></title>
            <description><![CDATA[<p>Tracy Lee and Jonan Scheffler discuss the world of developer relations engineering and community engagement. They talk about the importance of building genuine relationships with developers and providing valuable resources to the community, and explore the challenges of aligning DevRel with marketing efforts and the effectiveness of different content formats. </p>
<p>Jonan Scheffler’s Socials
<a href="https://twitter.com/thejonanshow">Twitter</a>
<a href="https://github.com/thejonanshow">GitHub</a>
<a href="https://www.youtube.com/c/thejonanshow">YouTube</a>
<a href="https://www.linkedin.com/in/thejonanshow/">Linkedin</a></p>
<p>More Resources on DevRel
<a href="https://www.thisdot.co/blog/intro-to-devrel-5-reasons-why-devrel-teams-fail">Intro to DevRel: 5 Reasons Why DevRel Teams Fail</a>
<a href="https://www.thisdot.co/blog/this-is-the-worst-thing-a-devrel-team-could-do">This is the Worst Thing a DevRel Team Could Do</a>
<a href="https://www.thisdot.co/blog/how-to-provide-value-to-your-organization-in-a-developer-relations-role">How to Provide Value to Your Organization in a Developer Relations Role</a></p>
<p>Shownotes
[00:00:01] Introduction
[00:01:46] Engage developers in constant orbit.
[00:03:10] Marketing drives brand awareness for organizations.
[00:05:11] DevRel brand disconnect in marketing strategy.
[00:08:01] Leverage DevRel to influence and innovate.
[00:09:21] Feedback is key for growth. Trust, focus, collaboration.
[00:12:19] DevRel measures success by developer engagement.
[00:14:35] DevRel trend focused on business dollars.
[00:15:39] High-level conferences attract top developers.
[00:17:24] Marketing approach: swag and branding.</p>
<p><a href="https://spotifyanchor-web.app.link/e/wrsLvNj6TLb">Listen to this episode on Spotify!</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/dont-make-it-difficult-to-engage-with-your-brand-devrel-with-jonan-scheffler</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/dont-make-it-difficult-to-engage-with-your-brand-devrel-with-jonan-scheffler</guid>
            <pubDate>Thu, 08 Aug 2024 21:08:01 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Translating Developer Needs into Business Objectives with Sam Julien, Director of Developer Relations at Writer]]></title>
            <description><![CDATA[<p>As businesses strive to stay ahead of the curve, understanding the needs of developers and translating them into tangible business goals has become increasingly important. <a href="https://x.com/samjulien">Sam Julien</a>, Director of Developer Relations at <a href="https://writer.com/">Writer</a>, discusses the DevRel role in a recent interview with <a href="https://x.com/robocell">Rob Ocel</a>.</p>
<p>According to Sam, one of the key aspects of developer relations is the ability to bridge the gap between technical and marketing teams. This requires individuals who can effectively communicate the needs and challenges of developers to the marketing team, and vice versa. By fostering this collaboration, businesses can ensure that their products and services align with the expectations and requirements of their target audience.</p>
<p>One of the tools that Sam leverages in his role is AI. He highlights the productivity and efficiency that AI brings to his work, using tools like Writer, ChatGPT, and Copilot for tasks such as content drafting and coding. This showcases the practical applications of AI in developer relations, where time is of the essence and accuracy is crucial. Sam emphasizes that while AI tools can greatly enhance productivity, it is important to ensure they are used ethically and responsibly.</p>
<p>Sam also discusses the challenges faced in transitioning from prototyping to production in the AI space. He highlights the importance of considering ethical implications and democratizing access to AI technology globally. The conversation underscores the need for practical AI applications that deliver tangible value, ensuring that the technology is not only advanced but also accessible and ethical.</p>
<p>The conversation provides valuable insight into the evolving role of developer relations: the significance of people who can bridge technical and marketing teams, the practical applications of AI in the field, and the ethical responsibilities that come with adopting it. The complexities of AI engineering and the importance of continuous learning round out the discussion.</p>
<p><a href="https://engineeringleadership.podbean.com/e/translating-developer-needs-into-business-objectives-with-sam-julien-director-of-developer-relations-at-writer/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/translating-developer-needs-into-business-objectives-with-sam-julien</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/translating-developer-needs-into-business-objectives-with-sam-julien</guid>
            <pubDate>Wed, 31 Jul 2024 17:07:24 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Advanced TypeScript - Schema Validation with Zod - Type Inference & Generics with Josh Goldberg]]></title>
            <description><![CDATA[<p>TypeScript has become an essential tool for developers seeking to enhance the robustness and maintainability of their codebases. While many are familiar with the basics of TypeScript, there are advanced concepts that can take your TypeScript skills to the next level. In this JS Drops training by <a href="https://x.com/JoshuaKGoldberg">Josh Goldberg</a>, viewers explore some of these advanced concepts, including schema validation with Zod, TypeScript&#39;s powerful type inference, and effective use of its type system.</p>
<p>One crucial aspect of building reliable applications is ensuring data integrity. This is where schema validation libraries like Zod come into play. Zod allows developers to define complex data models and validate incoming data against these models. By leveraging Zod&#39;s expressive API, developers can easily define object schemas, conditional types, and literal types, ensuring that the data meets the expected structure and constraints. This not only helps catch errors early but also provides a clear and concise way to handle data validation and error handling.</p>
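<p>As a minimal sketch of this pattern (the schema below is illustrative, not taken from the training), Zod lets you declare a schema once, validate unknown data against it at runtime, and derive the static type from the same declaration:</p>
<pre><code>import { z } from "zod";

// Declare the expected shape of incoming data once.
const UserSchema = z.object({
  name: z.string(),
  age: z.number().int().nonnegative(),
  role: z.union([z.literal("admin"), z.literal("member")]), // literal types
});

// Derive the static type from the schema; no duplicate interface needed.
type User = z.infer&lt;typeof UserSchema&gt;;

// safeParse validates at runtime and narrows the type on success.
const result = UserSchema.safeParse(JSON.parse(&#39;{"name":"Ada","age":36,"role":"admin"}&#39;));
if (result.success) {
  const user: User = result.data; // fully typed
  console.log(user.name);
} else {
  console.error(result.error.issues); // structured validation errors
}
</code></pre>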
<p>TypeScript&#39;s type inference is a powerful feature that can save developers time and effort. By allowing TypeScript to infer types from context, developers can write cleaner and more concise code. Generics further enhance inference: they enable developers to write reusable code that adapts to different types. Understanding how to use generics and type inference effectively can greatly improve code readability and maintainability, as the sketch below shows.</p>
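<p>A hedged illustration (not code from the training): a single generic function lets TypeScript infer the type parameter at each call site, so callers get full type safety without writing any annotations:</p>
<pre><code>// One reusable helper; T is inferred from the arguments at each call site.
function firstOrDefault&lt;T&gt;(items: T[], fallback: T): T {
  return items.length &gt; 0 ? items[0] : fallback;
}

const n = firstOrDefault([1, 2, 3], 0);        // n is number -- inferred
const s = firstOrDefault(["a", "b"], "none");  // s is string -- inferred
</code></pre>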
<p>While advanced TypeScript concepts can greatly enhance code quality, it is important to strike a balance between using these features and keeping code simple. Understanding TypeScript terminology and best practices helps avoid overcomplicating code, and debugging techniques specific to TypeScript can help identify and resolve issues more efficiently. By carefully considering when and where to apply advanced concepts, developers can create code that is both powerful and easy to understand.</p>
]]></description>
            <link>https://www.thisdot.co/blog/advanced-typescript-schema-validation-with-zod-type-inference-and-generics</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/advanced-typescript-schema-validation-with-zod-type-inference-and-generics</guid>
            <pubDate>Fri, 26 Jul 2024 10:31:55 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Agentic AI: What Does AI Agency Mean for Our Future? Safety and Security with Tejas Kumar]]></title>
            <description><![CDATA[<p><a href="https://x.com/TejasKumar_">Tejas Kumar</a> and <a href="https://x.com/ladyleet">Tracy Lee</a> discuss AI models, tool calling, and Vercel&#39;s AI SDK for generating components. They explore AI agency, the importance of AI democratization, safety concerns, regulation, and the need for human oversight in AI development.</p>
<p>The conversation begins with AI models and tool calling, emphasizing AI agents&#39; abilities to self-reflect and collaborate. They highlight Vercel&#39;s SDK for generating components, which simplifies the development process and makes AI engineering more accessible. Tejas discusses how AI is revolutionizing podcast production with automated transcription and content generation, and transforming the music industry with AI-generated compositions that push creative boundaries. He also clarifies the distinction between AI and machine learning: machine learning is a subset of AI, while AI encompasses a broader range of technologies and capabilities, enabling agents to reason, learn, and make decisions in various contexts.</p>
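<p>For readers unfamiliar with tool calling, here is a minimal sketch using Vercel&#39;s AI SDK as it looked around the time of this episode (the model choice and the stubbed weather lookup are illustrative assumptions, and the API surface may have changed since):</p>
<pre><code>import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Describe a tool the model is allowed to call; the SDK validates the
// arguments the model produces against the zod schema before executing.
const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "What is the weather in Boston right now?",
  tools: {
    getWeather: tool({
      description: "Look up the current temperature for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) =&gt; ({ city, tempF: 61 }), // stubbed lookup
    }),
  },
});

// The model decides whether to call the tool; calls and results are
// surfaced on the response for the application to act on.
console.log(result.toolCalls, result.toolResults);
</code></pre>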
<p>They also address safety concerns and the need for regulation in AI development, advocating for human oversight to ensure ethical and responsible use. Tracy and Tejas note that while AI has immense potential, it must be guided by human values and principles. The conversation concludes on an optimistic note, with both expressing excitement about AI&#39;s future and its potential to drive innovation across industries, from healthcare and finance to transportation and entertainment. Continued research, collaboration, and investment in AI are essential to unlocking its full potential.</p>
<p><a href="https://spotifyanchor-web.app.link/e/MECCaaNTuLb">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/agentic-ai-what-does-ai-agency-mean-our-future-safety-and-security-with</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/agentic-ai-what-does-ai-agency-mean-our-future-safety-and-security-with</guid>
            <pubDate>Wed, 24 Jul 2024 16:50:08 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[What’s Great About TypeScript ESLint v8 + The “Trough of Disillusionment” in Adoption with Josh Goldberg]]></title>
            <description><![CDATA[<p>In this episode of Modern Web, <a href="https://x.com/JoshuaKGoldberg">Josh Goldberg</a> discusses the benefits of <a href="https://typescript-eslint.io/">TypeScript ESLint v8</a>, along with other topics related to JavaScript tooling, AI in coding, and industry dynamics. Josh breaks down the latest version of TypeScript ESLint, v8. He points out the big performance boosts and the improved support for type-aware linting. With it, developers can catch potential errors and follow best practices by using TypeScript&#39;s static type checking. This not only cuts down on bugs but also makes the code easier to read and maintain.</p>
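<p>As a rough sketch of what enabling typed linting looks like with typescript-eslint v8&#39;s flat-config helper (the file name and rule selection are illustrative; check the project&#39;s docs for the current recommended setup):</p>
<pre><code>// eslint.config.mjs
import tseslint from "typescript-eslint";

export default tseslint.config(
  // Rule sets that use type information, not just syntax.
  ...tseslint.configs.recommendedTypeChecked,
  {
    languageOptions: {
      parserOptions: {
        // v8 feature: discovers the right tsconfig.json automatically
        // instead of requiring an explicit project list.
        projectService: true,
        tsconfigRootDir: import.meta.dirname,
      },
    },
  },
);
</code></pre>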
<p>Tracy and Josh talk about the importance of using the right tools for better coding results. They discuss how the Gartner hype cycle can influence developers&#39; choices and warn against adopting tools just because they’re trendy. Instead, they suggest carefully evaluating tools based on specific needs and project requirements. By picking the right tools, developers can simplify their workflows, improve code quality, and get better outcomes overall.</p>
<p>The conversation also touches on the impact of companies like Vercel and the unexpected consequences in tech development. While new tools and technologies can be super beneficial, they can also bring unexpected challenges. It’s important for developers to be aware of these potential issues and address them to ensure smooth development and successful projects. They also chat about the &quot;trough of disillusionment&quot; in tech adoption and mention upcoming typed linting tools which aim to further improve code quality and developer productivity.</p>
<p>Lastly, the two talk about the SquiggleConf conference, which focuses on web development tools. They explain the origin of the term &quot;squiggle&quot; (the wavy underline editors draw beneath errors) and why clear, informative error messages matter for debugging and troubleshooting. The conference is a great place for developers to learn about the latest web development tools and share tips on improving the developer experience. <a href="https://2024.squiggleconf.com/">Check it out</a> on October 3-4, 2024 in Boston, MA!</p>
<p><a href="https://modernweb.podbean.com/e/modern-web-podcast-s12e14-what-s-great-about-typescript-eslint-v8-the-trough-of-disillusionment-in-adoption-with-josh-goldberg/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/whats-great-about-typescript-eslint-v8-the-trough-of-disillusionment-in</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/whats-great-about-typescript-eslint-v8-the-trough-of-disillusionment-in</guid>
            <pubDate>Fri, 19 Jul 2024 20:34:00 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[These JavaScript Tools Make Collaboration and Deployment Easier with Jack Herrington]]></title>
            <description><![CDATA[<p>Join <a href="https://x.com/jherr">Jack Herrington</a> and <a href="https://x.com/ladyleet">Tracy Lee</a> at CascadiaJS 2024 as they talk about content creation, experimenting with new tools, and continuous learning. They cover some of the latest in what’s going on in the Vercel, Next.js, and Deno Deploy ecosystems, and what these teams are doing to enable easy deployment and better community collaboration.</p>
<p>Tracy and Jack&#39;s discussion highlights the lack of educational resources on advanced web development topics. This scarcity presents an opportunity for developers who want to showcase their expertise in areas like the Next.js App Router and React Server Components. By actively engaging in knowledge sharing, developers can demonstrate their deep understanding of these complex concepts and set themselves apart from the competition.</p>
<p>Jack emphasizes Vercel&#39;s role in simplifying deployment with its straightforward commands and Next.js integration for seamless collaboration and performance. Next.js stands out for its advanced features and simplified development process, including efficient routing and SEO benefits. Deno Deploy, built on Deno, offers serverless deployment without traditional infrastructure management, letting developers focus on writing code that scales. Throughout, they underscore the importance of community and knowledge sharing on platforms like Vercel, Next.js, and Deno Deploy, which enable developers to share work, receive feedback, and stay current with advancements.</p>
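<p>The &quot;just write code&quot; model is easiest to see in a handler sketch. Assuming an app that uses Deno&#39;s built-in server API, a single file like this is a complete deployable service (the JSON payload is illustrative):</p>
<pre><code>// main.ts -- everything a Deno Deploy service needs; no server
// framework, build step, or infrastructure configuration required.
Deno.serve((req: Request): Response =&gt; {
  const { pathname } = new URL(req.url);
  return new Response(JSON.stringify({ hello: pathname }), {
    headers: { "content-type": "application/json" },
  });
});
</code></pre>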
<p><a href="https://modernweb.podbean.com/e/modern-web-podcast-s12e13-these-javascript-tools-make-collaboration-and-deployment-easier-with-jack-herrington/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/these-javascript-tools-make-collaboration-and-deployment-easier-with-jack</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/these-javascript-tools-make-collaboration-and-deployment-easier-with-jack</guid>
            <pubDate>Thu, 18 Jul 2024 16:07:49 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Observables: Coming to a Browser Near You with Dominic Farolino (Google Chrome)]]></title>
            <description><![CDATA[<p><a href="https://x.com/domfarolino">Dominic Farolino</a>, a software engineer on the Chrome team, discusses his involvement in adding observables to the web platform. By incorporating observables into browsers, developers can simplify their workflows and streamline complex APIs (like the Resize Observer). This not only will saves time and effort but also allows developers to focus on creating exceptional user experiences.</p>
<p>Adding observables to the browser will significantly affect RxJS. RxJS is a popular library for reactive programming, and integrating observables into browsers will enable developers to handle asynchronous events and data streams efficiently without a library. This integration opens up a world of possibilities for creating responsive and interactive web applications. It does not mean RxJS will go away, but it will become more like Lodash for events: a utility layer on top of a native primitive.</p>
<p>The incorporation of observables into browsers brings several benefits for web developers. First, it simplifies complex APIs, making them more intuitive and easier to use, which lets developers write cleaner and more maintainable code. Second, observables make it easier to handle asynchronous events and data streams, improving performance and responsiveness. Finally, they give developers a first-class toolset for reactive programming, enabling more sophisticated and interactive web applications.</p>
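<p>As a sketch of the proposal under discussion (the API has not shipped, so names and signatures may change, and TypeScript&#39;s DOM typings do not include it yet): EventTarget would gain a when() method returning an Observable, so event streams compose without any library:</p>
<pre><code>// Proposed API, per the explainer discussed in the episode -- not yet
// available in stable browsers at the time of writing.
const controller = new AbortController();

document
  .when("mousemove")                            // Observable of mouse events
  .filter((e) =&gt; e.buttons === 1)               // only while a button is held
  .map((e) =&gt; ({ x: e.clientX, y: e.clientY }))
  .subscribe((point) =&gt; console.log(point), { signal: controller.signal });

// Tear the subscription down like any other cancelable work.
controller.abort();
</code></pre>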
<p>The conversation in the podcast highlights the collaborative nature of advancing web technologies. Dominic discusses the importance of setting deadlines, sharing updates, and working together to achieve their goals. He expresses optimism about the progress of the project, with positive feedback from WebKit and plans for a future release. This collaborative approach ensures that the enhancements made to web performance benefit developers and users alike.</p>
<p><a href="https://modernweb.podbean.com/e/modern-web-podcast-s12e12-observables-coming-to-a-browser-near-you-with-dominic-farolino-google-chrome/">Download this episode here.</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/observables-coming-to-a-browser-near-you-with-dominic-farolino-google-chrome</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/observables-coming-to-a-browser-near-you-with-dominic-farolino-google-chrome</guid>
            <pubDate>Fri, 12 Jul 2024 04:48:26 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[Can Vercel Fix React?]]></title>
            <description><![CDATA[<p>Tracy Lee, Ben Lesh and Adam Rackis discuss the current state of React and its potential future. </p>
<p><strong>React and Vercel: A Positive Change?</strong></p>
<p>One of the key topics of conversation revolves around React and its relationship with Vercel. It&#39;s worth noting that high-profile React core team members have recently made the move to Vercel, which many see as a positive change. This move brings fresh perspectives and expertise to the project, potentially leading to exciting developments in the future.</p>
<p>However, the hosts also acknowledge the challenges of upgrading to React 18, as there are still many users on older versions. Upgrading can have a significant impact on tests and requires careful consideration. It&#39;s clear that React&#39;s evolution is an ongoing process, and the community plays a crucial role in shaping its future.</p>
<p><strong>The Potential of Bun: A New Default Runtime for Node Development</strong></p>
<p>Could the Bun runtime become the default for Node development? Ben expresses skepticism about this possibility, highlighting the need for perfect execution and widespread community support. Switching to a new runtime involves extensive testing and potential risks, especially for critical financial services.</p>
<p>On the other hand, Adam presents a more optimistic view, emphasizing Bun&#39;s advantages, such as its built-in TypeScript support and improved interop with Node. He believes that as Bun continues to grow and improve, it could become a compelling option for new projects.</p>
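<p>The built-in TypeScript support Adam mentions means a file like the sketch below runs directly, with no transpile step (the port and response body are illustrative; Bun.serve is Bun&#39;s native HTTP server API):</p>
<pre><code>// index.ts -- run with: bun index.ts
// Bun executes TypeScript directly; no tsc or bundler configuration needed.
const server = Bun.serve({
  port: 3000,
  fetch(req: Request): Response {
    const { pathname } = new URL(req.url);
    return new Response(`Hello from ${pathname}`);
  },
});

console.log(`Listening on http://localhost:${server.port}`);
</code></pre>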
<p><strong>The Future of Front-End Development: Likability and AI</strong></p>
<p>They also share their thoughts on the future of front-end development and various technologies. They highlight the importance of likability and the team behind a project, as these factors can greatly influence its success. Additionally, they discuss the potential impact of AI on framework development, raising interesting questions about the role of automation in shaping the future of web development.</p>
<p>While React is praised for its strengths, its perceived limitations compared to other frameworks are also mentioned. The advancements in state management in Angular are highlighted, showcasing the continuous evolution of front-end technologies and the need for developers to stay adaptable and open to new possibilities.</p>
<p>Listen to the full podcast here: <a href="https://modernweb.podbean.com/e/modern-web-podcast-s11e20-can-vercel-fix-react-a-conversation-with-tracy-lee-ben-lesh-adam-rackis/">https://modernweb.podbean.com/e/modern-web-podcast-s11e20-can-vercel-fix-react-a-conversation-with-tracy-lee-ben-lesh-adam-rackis/</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/can-vercel-fix-react</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/can-vercel-fix-react</guid>
            <pubDate>Thu, 25 Jan 2024 18:55:55 GMT</pubDate>
        </item>
        <item>
            <title><![CDATA[5 Tips for Leaders to Consider When Adopting Technology]]></title>
            <description><![CDATA[<p>Traditional businesses must now embrace digital change to stay competitive. This means more than just using technology; it means making digital solutions part of every aspect of your business.</p>
<p>Here are 5 essential tips that will help you navigate the path of adopting technology effectively.</p>
<p><strong>Integrate Technology Into Your Core Mission:</strong>
Mistake: Treating technology as a siloed entity, whether within your organization or outsourced.
Solution: Embrace the idea of being a tech company and integrate technology into your overall strategy to get the best outcome.</p>
<p><strong>Promote Technology Ownership:</strong>
Mistake: Not encouraging active involvement in tech-related responsibilities or ownership beyond a single person or vendor.
Solution: Embrace direct involvement with the technology strategy and implementation. Nurture transparency and accountability to increase informed decision-making within the team and among stakeholders.</p>
<p><strong>Hire Technologists Who Can Execute Your Vision:</strong>
Mistake: Hiring tech experts who focus solely on technology, not the business strategy. Hiring without clear roles and alignment.
Solution: Seek individuals who bridge the gap between tech and business, and clearly define business goals and objectives from day one.</p>
<p><strong>Foster Curiosity and Stay Adaptable:</strong>
Mistake: Thinking technology is too difficult to understand or grasp.
Solution: Challenge your technical team to explain complex concepts. Trust that your insight and intuition add more to the technical execution than you think. Ask questions to encourage innovative solutions.</p>
<p><strong>Embrace Trial and Error:</strong>
Mistake: Thinking plans won’t change. 
Solution: Understand that adopting technology and integrating it into your business is a learning process. Expect to make mistakes and uncover new lessons along the way. Be open to adjustments and flexible about trying different paths.</p>
<p>These tips will make it easier for your organization to adopt a technology mindset and set it up for long-term success. Remember, it&#39;s not just about adopting technology; it&#39;s about creating a culture that embraces learning, curiosity, and innovation.</p>
<p>Listen to the full podcast episode here: <a href="https://engineeringleadership.podbean.com/e/5-tips-for-leaders-adopting-technology-with-rob-ocel-tracy-lee/">https://engineeringleadership.podbean.com/e/5-tips-for-leaders-adopting-technology-with-rob-ocel-tracy-lee/</a></p>
]]></description>
            <link>https://www.thisdot.co/blog/5-tips-for-leaders-to-consider-when-adopting-technology</link>
            <guid isPermaLink="true">https://www.thisdot.co/blog/5-tips-for-leaders-to-consider-when-adopting-technology</guid>
            <pubDate>Fri, 02 Feb 2024 16:06:26 GMT</pubDate>
        </item>
    </channel>
</rss>