<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Frank Snelling</title>
    <description>The latest articles on DEV Community by Frank Snelling (@frank-895).</description>
    <link>https://dev.to/frank-895</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3726868%2F9e08f7bd-0b5b-4513-b28e-90765954ceab.png</url>
      <title>DEV Community: Frank Snelling</title>
      <link>https://dev.to/frank-895</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/frank-895"/>
    <language>en</language>
    <item>
      <title>What MongoDB taught me about Postgres.</title>
      <dc:creator>Frank Snelling</dc:creator>
      <pubDate>Sun, 22 Feb 2026 20:02:49 +0000</pubDate>
      <link>https://dev.to/frank-895/what-mongodb-taught-me-about-postgres-570e</link>
      <guid>https://dev.to/frank-895/what-mongodb-taught-me-about-postgres-570e</guid>
      <description>&lt;p&gt;&lt;strong&gt;Using MongoDB arguably taught me more about Postgres than using Postgres did.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hear me out. Previously, my knee-jerk reaction was to always opt for Postgres when starting a new project. Honestly — reasonably safe bet. But only using Postgres limited my understanding of it — as well as its benefits and its limitations.&lt;/p&gt;

&lt;p&gt;Sounds weird to say I learnt more about something by not using it. But it's true — and that's why I'm planning this as the first article of a series. Because I genuinely learnt so much from &lt;strong&gt;not&lt;/strong&gt; using Postgres.&lt;/p&gt;

&lt;h2&gt;
  
  
  How did I end up using MongoDB to start with?
&lt;/h2&gt;

&lt;p&gt;Well, my current startup had already opted for MongoDB when I started. This largely came down to the team's existing experience, as well as the fact that the schema was rapidly evolving. A rapidly evolving schema is well suited to a document database, where schema flexibility is effectively unlimited.&lt;/p&gt;

&lt;h2&gt;
  
  
  But this flexibility came at a cost.
&lt;/h2&gt;

&lt;p&gt;At first, it was a real joy not having a migrations-folder-of-death which harbored the scars of months of schema discovery in hundreds of back-and-forth migrations. Schema changes were fast and — when coupled with Pydantic — also continued to ensure a high level of consistency across the codebase.&lt;/p&gt;

&lt;p&gt;Nothing broke. The system always continued running. But it did become progressively slower and more expensive. Without rigid schemas, repeated validation was required in application code, not just on writes but also on reads.&lt;/p&gt;

&lt;p&gt;Once Pydantic validation was happening at scale, the CPU cost in our services quickly became noticeable. This was exacerbated by the fact that our validation relied on full models, limiting the effective use of projection. Simple queries started turning into full-document reads.&lt;/p&gt;

&lt;p&gt;I suddenly began to appreciate the intentional rigidity of Postgres. Keeping validation right next to the data is efficient — not only because it is optimized, but because validation only happens once. And Postgres constraints are powerful, turning assumptions into guarantees.&lt;/p&gt;

&lt;p&gt;That being said, it's also nice in a lot of ways to have validation close to the business logic. If we had handled migrations better, the need for defensive and expensive application code could have been reduced. And don't forget MongoDB schema validation — this is a feature I underutilized.&lt;/p&gt;
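
&lt;p&gt;As a rough sketch (the collection and field names here are hypothetical), a &lt;code&gt;$jsonSchema&lt;/code&gt; validator lets MongoDB reject malformed documents on write, much like a Postgres constraint:&lt;/p&gt;

```python
# Sketch of MongoDB schema validation. Applying it requires a live pymongo
# Database handle, shown commented out; the validator itself is plain data.

def build_user_validator() -> dict:
    """Build a $jsonSchema validator enforcing required, typed fields."""
    return {
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["email", "created_at"],
            "properties": {
                "email": {"bsonType": "string"},
                "created_at": {"bsonType": "date"},
            },
        }
    }

# With a real pymongo Database handle `db` (hypothetical):
# db.create_collection("users", validator=build_user_validator())
# or, for an existing collection:
# db.command("collMod", "users", validator=build_user_validator())
```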

&lt;p&gt;From my initial joy at the flexibility of MongoDB I quickly learnt an important truth. Schema discipline doesn't disappear — but you can choose whether it lives in your application code or in the database.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>database</category>
      <category>postgres</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>lru_cache vs singleton in Python — they're not the same thing.</title>
      <dc:creator>Frank Snelling</dc:creator>
      <pubDate>Thu, 12 Feb 2026 22:58:05 +0000</pubDate>
      <link>https://dev.to/frank-895/lrucache-vs-singleton-in-python-theyre-not-the-same-thing-1bmc</link>
      <guid>https://dev.to/frank-895/lrucache-vs-singleton-in-python-theyre-not-the-same-thing-1bmc</guid>
      <description>&lt;p&gt;It's common to see &lt;code&gt;@lru_cache&lt;/code&gt; used for quick singletons in Python. As you might guess from the name — that's not the intended purpose of the function. But that doesn't necessarily mean this opportunistic "trick" is bad. Just that there are subtle differences between a classic singleton and the least recently used cache. &lt;/p&gt;

&lt;h2&gt;
  
  
  Using &lt;code&gt;@lru_cache&lt;/code&gt; for singletons has a number of benefits.
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Instantiation is lazy, meaning you don't waste resources creating an instance that isn't used.&lt;/li&gt;
&lt;li&gt;Less boilerplate, more readable. A simple decorator is all you need, rather than a verbose class.&lt;/li&gt;
&lt;/ul&gt;
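
&lt;p&gt;For concreteness, the trick usually looks something like this (a minimal sketch; &lt;code&gt;Settings&lt;/code&gt; stands in for any expensive-to-build object):&lt;/p&gt;

```python
from functools import lru_cache

class Settings:
    """Stand-in for any object you want built once (config, client, ...)."""
    def __init__(self) -> None:
        self.debug = False  # imagine reading env vars or files here

@lru_cache(maxsize=1)
def get_settings() -> Settings:
    # First call builds the instance; later calls return the cached one.
    return Settings()

assert get_settings() is get_settings()  # same object every time
```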

&lt;p&gt;But it also has important drawbacks, especially when instantiation is slow. This is for two key reasons. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. Performance
&lt;/h3&gt;

&lt;p&gt;If instantiation is not instant, lazily creating your singleton can cause unnecessary delays. For example, in a FastAPI app, it isn't ideal if your first request is burdened with multiple long-running instantiations. These could be eagerly created during application startup. &lt;/p&gt;

&lt;h3&gt;
  
  
  2. Exactly-once guarantee
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;@lru_cache&lt;/code&gt; decorator is explicitly &lt;a href="https://docs.python.org/3/library/functools.html#functools.lru_cache" rel="noopener noreferrer"&gt;documented&lt;/a&gt; as thread-safe in the sense that its internal state will not be corrupted under concurrent access. But this does not guarantee a singleton factory or an "initialise-exactly-once" lifecycle contract. If the function is called twice with the same value before the value is computed and cached you could end up with multiple distinct objects created for the same key. In other words: &lt;strong&gt;not a singleton&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The risk of this race condition is generally negligible. But the risk is greater for long-running instantiations.&lt;/p&gt;
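
&lt;p&gt;The race can be forced deterministically with a barrier that holds both threads inside the constructor at once (all names illustrative):&lt;/p&gt;

```python
import threading
from functools import lru_cache

barrier = threading.Barrier(2)
created = []

class Client:
    """Slow-to-build object; the barrier simulates a long instantiation."""
    def __init__(self) -> None:
        barrier.wait(timeout=5)  # both threads sit inside __init__ together
        created.append(self)

@lru_cache(maxsize=1)
def get_client() -> Client:
    return Client()

results = []
threads = [threading.Thread(target=lambda: results.append(get_client()))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Two distinct objects were built for the same cache key: not a singleton.
assert len(created) == 2
assert results[0] is not results[1]
```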

&lt;h2&gt;
  
  
  So, when is it definitely bad to use &lt;code&gt;@lru_cache&lt;/code&gt; as a singleton?
&lt;/h2&gt;

&lt;p&gt;Multiprocessing, where the cached object itself spawns worker processes, is a good example (e.g., &lt;code&gt;ProcessPoolExecutor&lt;/code&gt;). In this case, instantiation requires due diligence and careful lifecycle management.&lt;/p&gt;

&lt;p&gt;If the process pool is created lazily, the first request will bear the full brunt of spawning — spiking latency. Also, if excess pools instantiate excess child processes, this can lead to unpredictable behaviour or unnecessary CPU and memory usage. The long-running nature of spawning not only compounds the latency issue, but increases the risk of multiple pools being created.&lt;/p&gt;

&lt;p&gt;Another consideration is teardown. While a classic singleton pattern can expose a method to close clients gracefully, &lt;code&gt;@lru_cache&lt;/code&gt; provides no hook for deterministic teardown. Again, not always a problem — but generally not ideal for multiprocessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  With this in mind, is it ever ok to use &lt;code&gt;@lru_cache&lt;/code&gt; for singletons?
&lt;/h2&gt;

&lt;p&gt;I would argue yes. But with an important caveat:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You must accept that multiple distinct instances could be created in edge cases.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In many cases this is acceptable. For example, an extra, unused database client is not going to hurt anyone. But in certain cases this can lead to subtle, yet dangerous, behaviour.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A good rule of thumb:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;@lru_cache(maxsize=1)&lt;/code&gt; for lightweight objects where extra instances are harmless. &lt;/li&gt;
&lt;li&gt;Use lifespan events (or a classic singleton pattern) for anything with a real lifecycle, requiring controlled startup and teardown.&lt;/li&gt;
&lt;/ul&gt;
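
&lt;p&gt;For contrast, a classic initialise-exactly-once singleton with a teardown hook can be sketched with double-checked locking (names hypothetical):&lt;/p&gt;

```python
import threading

class PoolManager:
    """Sketch of a classic singleton: exactly-once creation plus teardown."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls) -> "PoolManager":
        if cls._instance is None:        # fast path: no lock once initialised
            with cls._lock:              # slow path: serialise first creation
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance

    def __init__(self) -> None:
        self.closed = False  # imagine spawning a ProcessPoolExecutor here

    def close(self) -> None:
        self.closed = True   # the deterministic teardown hook lru_cache lacks

a, b = PoolManager.instance(), PoolManager.instance()
assert a is b
```

&lt;p&gt;In a FastAPI app, the same lifecycle is more naturally expressed with lifespan events, which also give you eager startup.&lt;/p&gt;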

&lt;p&gt;A purist might argue that using &lt;code&gt;lru_cache&lt;/code&gt; is not the right choice for lifecycle management. But in the words of &lt;a href="https://peps.python.org/pep-0020/" rel="noopener noreferrer"&gt;The Zen of Python&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Special cases aren't special enough to break the rules. &lt;strong&gt;Although practicality beats purity.&lt;/strong&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>python</category>
      <category>architecture</category>
      <category>backend</category>
      <category>concurrency</category>
    </item>
    <item>
      <title>from typing import FinallyAnExplanation</title>
      <dc:creator>Frank Snelling</dc:creator>
      <pubDate>Sat, 31 Jan 2026 09:20:52 +0000</pubDate>
      <link>https://dev.to/frank-895/from-typing-import-finallyanexplanation-8m8</link>
      <guid>https://dev.to/frank-895/from-typing-import-finallyanexplanation-8m8</guid>
      <description>&lt;p&gt;Typing in Python has been a long evolution. A long evolution attempting to reconcile the flexibility of dynamic typing with the safety of static typing.&lt;/p&gt;

&lt;p&gt;Quoting from &lt;em&gt;The Zen of Python&lt;/em&gt; (&lt;a href="https://peps.python.org/pep-0020/" rel="noopener noreferrer"&gt;PEP 20&lt;/a&gt;), I would definitely argue that this principle:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"There should be one — and preferably only one — obvious way to do it."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;does not hold true for typing in Python. There is definitely not one way to do typing and the right way is definitely not obvious — unless you take the time to understand its evolution. Which is exactly what we're going to do.&lt;/p&gt;

&lt;p&gt;So without further ado, here are some "typing quirks" in Python and how to know when to use what. By using the most modern approach for your project, you can prevent backwards compatibility from becoming an eternal migration.&lt;/p&gt;

&lt;h2&gt;
  
  
  But first. Let's establish two key terms for this discussion.
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Typing&lt;/strong&gt;: describing the intended shape and behaviour of values. It can be done in many ways, such as using &lt;code&gt;str&lt;/code&gt; or &lt;code&gt;int&lt;/code&gt; to indicate expected types. Typing is optional in Python and does not change runtime behaviour.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Annotation&lt;/strong&gt;: metadata attached to code via &lt;code&gt;:&lt;/code&gt; or &lt;code&gt;-&amp;gt;&lt;/code&gt;. It's stored in a dictionary called &lt;code&gt;__annotations__&lt;/code&gt;. The most common thing placed in an annotation is a type (but it can hold other metadata too).&lt;/li&gt;
&lt;/ul&gt;






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;my_var&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;my_var&lt;/code&gt; is &lt;strong&gt;annotated&lt;/strong&gt; as the &lt;strong&gt;type&lt;/strong&gt; &lt;code&gt;int&lt;/code&gt;.&lt;/p&gt;




&lt;p&gt;During static time, before Python runs code, type-checkers like MyPy will use the typing information inside annotations to analyse your code.&lt;/p&gt;
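
&lt;p&gt;You can inspect this metadata directly; it is the same information a type-checker consumes:&lt;/p&gt;

```python
def greet(name: str, excited: bool = False) -> str:
    suffix = "!" if excited else "."
    return "Hello, " + name + suffix

# The annotations live in a plain dict on the function object.
assert greet.__annotations__ == {"name": str, "excited": bool, "return": str}
```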

&lt;h2&gt;
  
  
   When do I use the &lt;code&gt;typing&lt;/code&gt; library?
&lt;/h2&gt;

&lt;p&gt;Standardized typing in Python dates back to 3.5 (see &lt;a href="https://peps.python.org/pep-0484/" rel="noopener noreferrer"&gt;PEP 484&lt;/a&gt;). The first standard solution to typing was a new (and aptly named) library called &lt;code&gt;typing&lt;/code&gt;. In Python 3.5, built-ins could not be subscripted (&lt;code&gt;list[int]&lt;/code&gt; raised a &lt;code&gt;TypeError&lt;/code&gt;), which is why you would use &lt;code&gt;from typing import List&lt;/code&gt; and similar aliases for generics the syntax didn't yet support.&lt;/p&gt;

&lt;p&gt;These type hints took advantage of annotations, a feature which was introduced in Python 3.0 (see &lt;a href="https://peps.python.org/pep-3107/" rel="noopener noreferrer"&gt;PEP 3107&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Typing using built-in generics, like &lt;code&gt;list&lt;/code&gt;, was introduced in Python 3.9 (see &lt;a href="https://peps.python.org/pep-0585/" rel="noopener noreferrer"&gt;PEP 585&lt;/a&gt;), while the use of &lt;code&gt;|&lt;/code&gt; for union types was introduced in Python 3.10 (see &lt;a href="https://peps.python.org/pep-0604/" rel="noopener noreferrer"&gt;PEP 604&lt;/a&gt;). Since 3.10, the main reason to use the &lt;code&gt;typing&lt;/code&gt; library is for typing-only constructs like &lt;code&gt;Protocol&lt;/code&gt;, &lt;code&gt;TypedDict&lt;/code&gt;, and &lt;code&gt;TYPE_CHECKING&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  But the introduction of typing created fresh challenges.
&lt;/h2&gt;

&lt;p&gt;The problem? After static time, while running code, Python would also try to evaluate types (because they're sitting in your code). This creates two big problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Forward references&lt;/strong&gt;: when you use a class (or model or method) before declaring it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Circular imports&lt;/strong&gt;: when two modules import each other's classes (or models or methods), causing an &lt;code&gt;ImportError&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Circular imports were addressed soon after the initial release of 3.5.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://peps.python.org/pep-0484/#runtime-or-type-checking" rel="noopener noreferrer"&gt;PEP 484&lt;/a&gt; goes into a lot more detail about this issue and proposes a solution(s), which was introduced in 3.5.2 (see &lt;a href="https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING" rel="noopener noreferrer"&gt;reference&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TYPE_CHECKING&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;TYPE_CHECKING&lt;/code&gt; is a variable that is &lt;code&gt;True&lt;/code&gt; during static time and &lt;code&gt;False&lt;/code&gt; during runtime. This lets you import modules when they're needed for static time without impacting runtime at all.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;TYPE_CHECKING&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
     &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;myFile&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;myClass&lt;/span&gt;
     &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I use these two examples because they illustrate the two main uses of &lt;code&gt;TYPE_CHECKING&lt;/code&gt;. The first is already discussed: by only importing &lt;code&gt;myClass&lt;/code&gt; at static time we can avoid circular imports. The second use is for heavy libraries which are &lt;strong&gt;only&lt;/strong&gt; needed for type checking, and would be wasteful to load in at runtime when they are not used.&lt;/p&gt;

&lt;p&gt;However, if we don't import &lt;code&gt;myClass&lt;/code&gt; at runtime we can't actually refer to &lt;code&gt;myClass&lt;/code&gt; in our annotations — because runtime Python won't know it exists! When a function was defined, each object named in its annotations was looked up eagerly and stored in &lt;code&gt;__annotations__&lt;/code&gt;, raising a &lt;code&gt;NameError&lt;/code&gt; if it did not exist. &lt;/p&gt;

&lt;p&gt;This is where it becomes necessary to use strings for annotations. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;myFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;myInstance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;myClass&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By wrapping the name in a string, it would not be looked up at definition time. This not only allowed devs to use &lt;code&gt;TYPE_CHECKING&lt;/code&gt; for imports, it also made forward references possible.&lt;/p&gt;
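
&lt;p&gt;A self-referential class shows the same trick in runnable form (names hypothetical):&lt;/p&gt;

```python
class Node:
    # "Node" is still being defined here, so the annotation is quoted.
    def add_child(self, child: "Node") -> "Node":
        self.children = getattr(self, "children", [])
        self.children.append(child)
        return child

root, leaf = Node(), Node()
assert root.add_child(leaf) is leaf
assert root.children == [leaf]
```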

&lt;h2&gt;
  
  
  Automated stringization arrived in Python 3.7, under the name "postponed evaluation".
&lt;/h2&gt;

&lt;p&gt;And that's all postponed evaluation is. A fancy phrase for automating something that was already being done. By including &lt;code&gt;from __future__ import annotations&lt;/code&gt;, Python would save annotations as strings by default.&lt;/p&gt;
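
&lt;p&gt;A minimal sketch of the effect:&lt;/p&gt;

```python
from __future__ import annotations  # postponed evaluation (PEP 563)

def scale(value: int, factor: float) -> float:
    return value * factor

# With postponed evaluation, annotations are stored as plain strings and
# never evaluated at definition time.
assert scale.__annotations__ == {"value": "int", "factor": "float",
                                 "return": "float"}
assert scale(3, 2.0) == 6.0
```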

&lt;p&gt;It's worth noting that &lt;a href="https://docs.python.org/3/library/__future__.html" rel="noopener noreferrer"&gt;&lt;code&gt;__future__&lt;/code&gt;&lt;/a&gt; is a special module for features intended to become default in Python. &lt;a href="https://peps.python.org/pep-0563/" rel="noopener noreferrer"&gt;PEP 563&lt;/a&gt; introduced &lt;code&gt;from __future__ import annotations&lt;/code&gt; and the idea was for this feature to become mandatory by Python 3.11. &lt;/p&gt;

&lt;h2&gt;
  
  
  But this was scrapped and postponed evaluation was never made default in Python.
&lt;/h2&gt;

&lt;p&gt;PEP 563 was superseded by &lt;a href="https://peps.python.org/pep-0649/" rel="noopener noreferrer"&gt;PEP 649&lt;/a&gt; and &lt;a href="https://peps.python.org/pep-0749/" rel="noopener noreferrer"&gt;PEP 749&lt;/a&gt; which introduced a new concept — &lt;strong&gt;deferred evaluation&lt;/strong&gt;. The reason is discussed in detail in PEP 649. &lt;/p&gt;

&lt;p&gt;In brief, storing annotations as strings causes problems for runtime users of annotations — which may or may not be using the annotation for typing. Converting string annotations back to the original objects was flaky, slow, and difficult.&lt;/p&gt;

&lt;p&gt;Interestingly, &lt;a href="https://peps.python.org/pep-0484/" rel="noopener noreferrer"&gt;PEP 484&lt;/a&gt; refers to the "hope that type hints will eventually become the sole use for annotations", which clearly never came to be. It also introduces some creative ideas for distinguishing type hints from other uses of annotations and acknowledges that storing type hints as objects does have the benefit of enabling "runtime type-checkers" (which has become very popular with Pydantic!).&lt;/p&gt;

&lt;h2&gt;
  
  
  So what is deferred evaluation? Doesn't "deferred" = "postponed"?
&lt;/h2&gt;

&lt;p&gt;Kind of. The &lt;a href="https://www.oed.com/search/dictionary/?scope=Entries&amp;amp;q=deferred" rel="noopener noreferrer"&gt;Oxford English Dictionary&lt;/a&gt; does define the adjective "deferred" as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;strong&gt;Postponed&lt;/strong&gt;, put off for a time, delayed."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But in the context of Python, postponed and deferred evaluations are different things. &lt;/p&gt;

&lt;p&gt;Deferred evaluation does not stringize annotations but stores them without evaluating them. Each function, class, and module with annotations gets an internal &lt;code&gt;__annotate__&lt;/code&gt; function. Annotations are only evaluated when requested. All happening without using strings. &lt;/p&gt;
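
&lt;p&gt;On Python 3.14+ this can be observed directly; the sketch below is guarded because older interpreters lack &lt;code&gt;annotationlib&lt;/code&gt;:&lt;/p&gt;

```python
import sys

if sys.version_info >= (3, 14):
    from annotationlib import Format, get_annotations

    class B: ...

    class A:
        # Stored unevaluated in A's hidden __annotate__ function; no strings.
        b: B

    # Evaluated only when requested, and as a real object by default.
    assert get_annotations(A) == {"b": B}
    # A string view is also available on demand, e.g. {'b': 'B'}.
    strings = get_annotations(A, format=Format.STRING)
```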




&lt;h2&gt;
  
  
   Now is a good time to be precise about what "evaluation" exactly means.
&lt;/h2&gt;

&lt;p&gt;Evaluation means that Python determines the value of something (either by executing it or resolving it). For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="c1"&gt;# evaluates to 3 by execution
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;In this case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;A&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;B&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;B&lt;/code&gt; is evaluated by resolution, which means finding the class &lt;code&gt;B&lt;/code&gt; (i.e., the value of &lt;code&gt;B&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;Deferred evaluation does not evaluate the class &lt;code&gt;B&lt;/code&gt; until it's explicitly needed by the users of annotations. This eliminates forward-reference errors.&lt;/p&gt;

&lt;p&gt;On the other hand, postponed evaluation is not really "postponed". Because &lt;strong&gt;it is&lt;/strong&gt; evaluated. It's just evaluated as a string and then messily converted back to the type.&lt;/p&gt;

&lt;p&gt;So — compared to postponed evaluation — deferred evaluation truly is "deferred". It is not evaluated as a string or as anything. It is simply not evaluated until someone uses the &lt;a href="https://docs.python.org/3/library/annotationlib.html" rel="noopener noreferrer"&gt;annotationlib&lt;/a&gt; library, which provides annotations in different formats for different static and runtime users. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;get_annotations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# returns real Python objects 
# {"b": &amp;lt;class B&amp;gt;}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;get_annotations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Format&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;STRING&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# returns strings
# {"b": "B"}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a lot of other cool tricks happening under the hood of deferred evaluation, including caching and "fake globals" environments. You can read about these in the PEPs. But the key takeaway is that the need for previous workarounds is meaningfully reduced, and life is easier for Python libraries that rely on annotations.&lt;/p&gt;

&lt;p&gt;Deferred evaluation was implemented in Python 3.14. &lt;code&gt;__future__ annotations&lt;/code&gt; is planned for deprecation when Python 3.13 reaches the end of its life (but this will likely be delayed). This highlights the importance of migrating to new techniques when possible, especially when writing fresh code in newer versions. &lt;/p&gt;

&lt;p&gt;So if you're using Python 3.14+ and not supporting older libraries, you can forgo many of the workarounds, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"typesAsStrings"&lt;/li&gt;
&lt;li&gt;&lt;code&gt;__future__ annotations&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One thing that is still useful in 3.14 is the &lt;code&gt;typing&lt;/code&gt; library. Not only for typing information that does not exist as built-in generics, but also for &lt;code&gt;if TYPE_CHECKING&lt;/code&gt;. While forward references have been addressed through deferral, circular imports remain unaddressed. The main alternatives are to place import statements where they are actually used (for example, inside functions) or to move shared types into a separate &lt;code&gt;types.py&lt;/code&gt; module; these options are, respectively, non-idiomatic and fragmented.&lt;/p&gt;

&lt;h2&gt;
  
  
  I find my AI agent often uses a mishmash of Python typing strategies.
&lt;/h2&gt;

&lt;p&gt;The reason I decided to do this deep dive was because I found my AI agents mixing various typing techniques and workarounds with no apparent correlation to the version of Python my project supports. I have a couple of theories why this happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If a certain typing technique (like &lt;code&gt;__future__ annotations&lt;/code&gt;) exists already in a project — perhaps as an LLM mutation or legacy code — this may be replicated and amplified in a self-reinforcing way, even if it is not always necessary.&lt;/li&gt;
&lt;li&gt;LLMs are generally optimized for working code. Also, LLMs will optimize for our own objectives. And it's easy for us to also optimize for working code over best practice. Using the &lt;code&gt;typing&lt;/code&gt; library for &lt;code&gt;List&lt;/code&gt; may be unnecessary but it will never break anything. So if you're &lt;strong&gt;only&lt;/strong&gt; optimizing for working code, you might as well use &lt;code&gt;List&lt;/code&gt; everywhere!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Regardless, these peculiarities highlight the importance of understanding the evolution of typing. Because using outdated strategies could make migration harder when things like &lt;code&gt;__future__ annotations&lt;/code&gt; are deprecated down the line. &lt;/p&gt;

&lt;h2&gt;
  
  
  So, to recap:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Can I use standard typing in Python?&lt;/strong&gt; &lt;br&gt;
Yes, if you are using &amp;gt;=3.5.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Should I use the &lt;code&gt;typing&lt;/code&gt; library?&lt;/strong&gt;&lt;br&gt;
Only when you need to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Should I put my types in strings?&lt;/strong&gt; &lt;br&gt;
Maybe. But probably not. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yes, if you are using &amp;lt;3.7.&lt;/li&gt;
&lt;li&gt;No, if you are using &amp;gt;=3.7: 

&lt;ul&gt;
&lt;li&gt;If &amp;gt;=3.7 and &amp;lt;3.14, use &lt;code&gt;__future__ annotations&lt;/code&gt; if you need stringized annotations. &lt;/li&gt;
&lt;li&gt;If &amp;gt;=3.14 you will rarely (if ever) need stringized annotations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Should I use &lt;code&gt;if TYPE_CHECKING&lt;/code&gt;?&lt;/strong&gt;&lt;br&gt;
This is mainly useful in two situations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A heavy library only needed for type checking. &lt;/li&gt;
&lt;li&gt;Circular/forward import statements.&lt;/li&gt;
&lt;/ul&gt;
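
&lt;p&gt;A minimal runnable sketch of the first situation (&lt;code&gt;pandas&lt;/code&gt; stands in for any heavy dependency; the body deliberately relies on duck typing only):&lt;/p&gt;

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import pandas as pd  # seen by the type checker only, never at runtime

def row_count(frame: "pd.DataFrame") -> int:
    # Quoted annotation: "pd" doesn't exist at runtime, and that's fine.
    return len(frame)

# Works with anything sized; the checker still sees a DataFrame contract.
assert row_count([1, 2, 3]) == 3
```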

</description>
      <category>python</category>
      <category>annotations</category>
      <category>typing</category>
      <category>peps</category>
    </item>
    <item>
      <title>Getting serious about FastAPI? Here's what I've learned.</title>
      <dc:creator>Frank Snelling</dc:creator>
      <pubDate>Fri, 23 Jan 2026 13:27:42 +0000</pubDate>
      <link>https://dev.to/frank-895/getting-serious-about-fastapi-heres-what-ive-learned-1l4o</link>
      <guid>https://dev.to/frank-895/getting-serious-about-fastapi-heres-what-ive-learned-1l4o</guid>
      <description>&lt;p&gt;FastAPI makes building backend services easier than ever - look at this endpoint masquerading as a simple function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/hello&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hello&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;world&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello, &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compare that to the boilerplate in this Java Servlet function I got ChatGPT to write up for me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;java.io.IOException&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;javax.servlet.http.*&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;HelloServlet&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;HttpServlet&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nd"&gt;@Override&lt;/span&gt;
  &lt;span class="kd"&gt;protected&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;doGet&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;HttpServletRequest&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;HttpServletResponse&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;IOException&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getParameter&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"world"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setStatus&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setContentType&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"application/json; charset=utf-8"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getWriter&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;write&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"{\"message\":\"Hello, "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"!\"}"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To be fair, this simplicity is not strictly FastAPI-specific. Given the incredible level of abstraction provided by modern frameworks (Java frameworks included), plus the fact that AI agents now do so much of the manual coding, it can be difficult to grasp exactly what's going on under the hood of a framework like FastAPI. When I was building simple CRUD apps, this was honestly fine. But - after being burnt a few too many times by AI (more on that below) - I've been leveling up my Python skills. I don't have any intention of replacing AI, but it's very clear to me that an advanced and well-crafted backend service needs human expertise. Which means really understanding what FastAPI is doing behind the scenes (and, by extension, most other async frameworks).&lt;/p&gt;

&lt;h2&gt;
  
  
  Let me illustrate with an example. Consider this.
&lt;/h2&gt;

&lt;p&gt;When I asked ChatGPT how to sandbox an under-tested simulation pipeline, it suggested spinning it up in a separate process - good instinct. The problem was where it told me to do it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_simulation&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;process&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;multiprocessing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;run_pipeline&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;started&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looks fine, right? Maybe not to a seasoned developer, but at first glance it could seem fair enough. Buried deep in a codebase, though, this is a dangerous flaw: a new process gets spawned on every request. It's something you might not even notice under the low traffic of your local environment. &lt;/p&gt;

&lt;p&gt;Luckily, I've learnt (after being caught out) never to trust code I don't understand. And now I finally had a use for that pesky &lt;code&gt;lifespan()&lt;/code&gt; hook in &lt;code&gt;main.py&lt;/code&gt;! &lt;/p&gt;

&lt;h2&gt;
  
  
  When you're writing regular-looking functions, it's easy to forget that FastAPI is executing them in a highly concurrent, event-driven environment.
&lt;/h2&gt;

&lt;p&gt;Or whatever that means. It's all good to throw around these words, but it never actually helped me logically wrap my head around what's going on. So here's my explanation, grounded in actual examples for you to picture. &lt;/p&gt;

&lt;p&gt;When you start a FastAPI service you might write something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvicorn main:app &lt;span class="nt"&gt;--workers&lt;/span&gt; 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Specifying &lt;strong&gt;4&lt;/strong&gt; workers just means you're running the service four times - or eight times, or however many you choose. So for the purpose of this discussion, we're talking about a single FastAPI worker/instance, but you can imagine everything happening "times 4" (or times whatever your worker count is). &lt;/p&gt;

&lt;h3&gt;
  
  
  Let's start with processes and threads.
&lt;/h3&gt;

&lt;p&gt;Your first "hello world" script probably executed synchronously (i.e., one line after the other). But this won't cut it if you want to serve millions of requests. &lt;/p&gt;

&lt;p&gt;Before I discuss the nature of asynchronous frameworks, it is important to understand the concepts of &lt;strong&gt;threads&lt;/strong&gt; and &lt;strong&gt;processes&lt;/strong&gt;. I was taught about these at university, but they always seemed a bit magical until I really understood how FastAPI (as an example) uses them in practice. &lt;/p&gt;

&lt;p&gt;When you run a simple Python script like &lt;code&gt;python script.py&lt;/code&gt;, the OS internally starts &lt;strong&gt;one&lt;/strong&gt; process with &lt;strong&gt;one&lt;/strong&gt; thread. The thread runs the script. Threads seem invisible in "normal" Python because the thread simply executes one line after the other. Internally, the thread has its own call stack (i.e., lines of code to run). I like to think of the thread as a worker. &lt;/p&gt;
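&lt;p&gt;You can see this for yourself with a tiny stdlib sketch (the variable names are just for illustration):&lt;/p&gt;

```python
import os
import threading

# A plain Python script: one process, one thread.
pid = os.getpid()                    # the single process the OS started
worker = threading.current_thread()  # the single thread executing our lines

print(pid)
print(worker is threading.main_thread())  # True - our one and only worker
```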

&lt;p&gt;But this can be confusing. It's important to understand that threads, which I'm calling workers, are completely distinct from the FastAPI worker, which is actually a process. In this analogy, the FastAPI worker (or process) is really more like "the factory", as it owns the resources and completes jobs using workers. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Quick side note: from the perspective of Uvicorn/Gunicorn a FastAPI instance is more of a worker, but I digress! For our purposes and for the rest of this post, a thread is a worker and a process is a factory.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now if your factory only does one thing at a time, it probably only needs one worker. But no factory - FastAPI included - only does one thing at a time. So the natural solution is to hire more workers. Now we're approaching the way FastAPI works, but there's one more layer of complexity. &lt;/p&gt;

&lt;h2&gt;
  
  
  If you've ever managed a group of workers, you'll know one thing you definitely don't want is for them to be idle.
&lt;/h2&gt;

&lt;p&gt;But when you're serving requests, workers will often be idle - maybe for entire seconds, while waiting for external calls (for example, from databases). Seconds might not seem like much to us, but to these hard-working threads, &lt;strong&gt;a lot&lt;/strong&gt; can get done in these precious moments.&lt;/p&gt;

&lt;p&gt;So FastAPI does something unexpected (at least it was unexpected to me the first time I internalized it!). Each FastAPI instance mainly relies on a single worker, a lone thread doing unimaginable quantities of soul-crushing work. At least for asynchronous requests, but more on that ↓&lt;/p&gt;

&lt;h3&gt;
  
  
  Enter the coroutine and the event loop.
&lt;/h3&gt;

&lt;p&gt;Your FastAPI process runs exactly one event loop. And the event loop manages coroutines. I like to think of coroutines as jobs - incoming work. And these jobs can be paused. So the event loop is the manager of our lone thread, and whenever the worker has a moment spare, the manager gives him a new job to work on.&lt;/p&gt;

&lt;p&gt;But how does the manager know when the worker has a moment to spare? Well, this is why we use &lt;code&gt;await&lt;/code&gt;. Every time you use this keyword, you're exposing the worker as "not busy" and saying "give him more work, he's waiting for the database to respond!" Instead of having a bunch of workers sitting around half the time, FastAPI works a single thread to the bone, never giving him an idle moment (again, with an important caveat).&lt;/p&gt;
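&lt;p&gt;Here's a minimal sketch of that hand-off in plain asyncio (the job names are made up). Notice that job B starts before job A resumes - the manager reassigned the thread the moment A hit &lt;code&gt;await&lt;/code&gt;:&lt;/p&gt;

```python
import asyncio

order = []

async def job(name):
    order.append(f"{name} starts")
    # await = "this worker is idle; manager, hand him another job"
    await asyncio.sleep(0.01)
    order.append(f"{name} resumes")

async def main():
    # Two jobs, one thread: they interleave at every await.
    await asyncio.gather(job("A"), job("B"))

asyncio.run(main())
print(order)
```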

&lt;p&gt;I don't know about other people, but I certainly feel like I initially rote-learned where to put the keyword &lt;code&gt;async&lt;/code&gt; and &lt;code&gt;await&lt;/code&gt;. But if you've made it this far in my post, you'll have a much better understanding of when to use them.  &lt;/p&gt;

&lt;p&gt;Whenever we use the keyword &lt;code&gt;async&lt;/code&gt; to define a function or an endpoint, calling it creates a coroutine (a job). And &lt;code&gt;await&lt;/code&gt; can only appear inside a coroutine, because pausing only makes sense for a job the event loop is managing - it doesn't make sense to &lt;code&gt;await&lt;/code&gt; in a plain function the manager can't see! &lt;/p&gt;
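&lt;p&gt;One detail that helped this click for me: calling an &lt;code&gt;async def&lt;/code&gt; function doesn't run it - it only creates the job. A quick stdlib sketch (&lt;code&gt;fetch_data&lt;/code&gt; is hypothetical):&lt;/p&gt;

```python
import asyncio
import inspect

async def fetch_data():
    return "payload"

job = fetch_data()               # nothing has run yet - this is just a job
print(inspect.iscoroutine(job))  # True

result = asyncio.run(job)        # the event loop actually executes it
print(result)                    # payload
```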

&lt;h3&gt;
  
  
  So what happens if you don't use &lt;code&gt;async&lt;/code&gt; to define a function?
&lt;/h3&gt;

&lt;p&gt;Ok, remember the caveat about FastAPI depending on a single worker? Well, this is only true for asynchronous endpoints. If you define an endpoint without &lt;code&gt;async&lt;/code&gt;, FastAPI runs it on the threadpool - not the event loop. The threadpool is a distinct mechanism, essentially a separate group of workers, ready to do work that could otherwise freeze the all-important event loop. You can explicitly offload work to the threadpool too.&lt;/p&gt;
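&lt;p&gt;A small stdlib sketch of the explicit version - &lt;code&gt;asyncio.to_thread&lt;/code&gt; (Python 3.9+) hands a blocking function to the threadpool while the event loop keeps working (the function names here are mine):&lt;/p&gt;

```python
import asyncio
import threading
import time

def blocking_work():
    # A plain def - in FastAPI, a non-async endpoint like this is run
    # on the threadpool so it can't freeze the event loop.
    time.sleep(0.01)
    return threading.current_thread().name

async def handler():
    # Explicit offload: the lone event-loop thread awaits while a
    # threadpool worker does the grunt work.
    return await asyncio.to_thread(blocking_work)

pool_thread = asyncio.run(handler())
print(pool_thread)  # a threadpool worker's name, not the main thread's
```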

&lt;h3&gt;
  
  
  This helped me un-rote-learn-and-logically-understand when to use &lt;code&gt;async&lt;/code&gt;.
&lt;/h3&gt;

&lt;p&gt;It's important not to be too liberal with your use of &lt;code&gt;async&lt;/code&gt;. If you have an endpoint or function that could block our lone worker for long periods of time, maybe because of CPU-heavy computation, don't use &lt;code&gt;async&lt;/code&gt;. You don't want to burden your star worker with the grunt work at which the threadpool excels. This can lead to the event loop getting overwhelmed and spending more time organising all the jobs that come in, rather than getting them done, a vicious cycle that spikes latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  So in my example with the risky simulation, why not use a thread rather than a process?
&lt;/h2&gt;

&lt;p&gt;Using the threadpool for my under-tested simulation certainly would have taken significant grunt work away from the event loop. However, my main concern was the possibility of the simulation crashing. The event loop and the threadpool live in the same process, meaning they share memory, resources, and a Python interpreter. They are essentially operating in the same factory.&lt;/p&gt;

&lt;p&gt;And this is where the analogy mostly breaks down. If the thread computing the simulation gets stuck, you might think we could just tell it to "start again". But that's not how threads work. You can't simply kill a thread in Python - at least not easily - because it shares memory and an interpreter with our event loop. So if the simulation crashes while running on a thread, it brings down the whole factory with it. &lt;/p&gt;

&lt;p&gt;What's the solution? Give the simulation its own factory, its own process - a sandbox, if you like. The simulation now runs with its own Python instance, with its own allocation of memory and CPU that it can do with as it likes. And if the factory goes down, we can simply build a new one and get the simulation running in there. The event loop is protected and our simulation runs safely and separately.&lt;/p&gt;
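&lt;p&gt;The difference is easy to demonstrate: the OS can kill a whole process outright, which you can't cleanly do to a thread. A sketch (using the "fork" start method to keep it simple, so this assumes a Unix-like OS):&lt;/p&gt;

```python
import multiprocessing
import time

def stuck_simulation():
    # Pretend the pipeline has hung forever.
    while True:
        time.sleep(0.1)

ctx = multiprocessing.get_context("fork")  # assumption: Unix-like OS
p = ctx.Process(target=stuck_simulation)
p.start()

# The simulation has its own factory, so we can demolish it safely.
p.terminate()
p.join(timeout=5)
print(p.is_alive())  # False - and our own process never noticed
```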

&lt;h2&gt;
  
  
  So what was the issue with our AI suggestion at the beginning?
&lt;/h2&gt;

&lt;p&gt;ChatGPT was recommending that we build a new factory (or process) every time a simulation was run. This is not only inefficient, but would eventually lead to resources being stretched thin across thousands of unused processes.&lt;/p&gt;

&lt;p&gt;So instead, when the application starts up, we build one or more long-lived processes, and we only rebuild a process if it crashes or times out.&lt;/p&gt;
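&lt;p&gt;Here's a minimal sketch of that pattern using the stdlib's &lt;code&gt;ProcessPoolExecutor&lt;/code&gt; - in a real FastAPI app the pool would be created inside the &lt;code&gt;lifespan()&lt;/code&gt; hook, and &lt;code&gt;run_pipeline&lt;/code&gt; here is a stand-in for the simulation:&lt;/p&gt;

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor, TimeoutError
from concurrent.futures.process import BrokenProcessPool

def run_pipeline(x):
    # Stand-in for the risky simulation.
    return x * 2

# Build the factory ONCE, at startup - not on every request.
ctx = multiprocessing.get_context("fork")  # assumption: Unix-like OS
pool = ProcessPoolExecutor(max_workers=1, mp_context=ctx)

def simulate(x, timeout=5.0):
    global pool
    future = pool.submit(run_pipeline, x)
    try:
        return future.result(timeout=timeout)
    except (TimeoutError, BrokenProcessPool):
        # Only now do we rebuild the factory. A production version would
        # also terminate the stuck worker before replacing the pool.
        pool.shutdown(wait=False, cancel_futures=True)
        pool = ProcessPoolExecutor(max_workers=1, mp_context=ctx)
        raise

print(simulate(21))  # 42
```

&lt;p&gt;The key point is that the expensive part - building the factory - happens once, while each request just submits a job to it.&lt;/p&gt;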

&lt;p&gt;I hope this long-winded analogy is helpful! It certainly allowed me to understand what's going on underneath the innocent-looking functions used by FastAPI. And this is important, because the framework (and by extension, probably the AI agent you're using) doesn't know your intent, so it won't stop you doing something architecturally catastrophic - that part is still on you.&lt;/p&gt;

&lt;p&gt;Feel free to share clarifications or your own insights in the comments!&lt;/p&gt;

&lt;h2&gt;
  
  
  Glossary if you get lost in the extended metaphor
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Threads&lt;/strong&gt;: I think of them as workers, completing work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processes&lt;/strong&gt;: Sort of like a factory, with its own Python interpreter, memory and CPU allocation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI instance or FastAPI worker&lt;/strong&gt;: This is an example of a process, a specific type of factory if you will. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coroutine&lt;/strong&gt;: A job or work request made to the factory. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event loop&lt;/strong&gt;: The manager of the factory, albeit only managing a single worker.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Threadpool&lt;/strong&gt;: A group of workers ready for grunt work. &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>fastapi</category>
      <category>asyncio</category>
      <category>python</category>
      <category>backend</category>
    </item>
  </channel>
</rss>
