
What Actually Matters When You Try to Reduce LLM Costs

· 7 min read
Founder of VCAL Project

Originally published on Medium.com on April 5, 2026.
Read the Medium.com version

After publishing the first release of AI Cost Firewall, I thought the hard part was done.

The idea was simple and it worked immediately: avoid sending duplicate or semantically similar requests to the LLM, and you reduce cost.

I described that initial approach in more detail here: How to Reduce OpenAI API Costs with Semantic Caching

And it did work.

But once I started pushing it further — adding more metrics, handling edge cases, running real traffic through it — it became clear that the initial idea was only a small part of the problem.

Reducing LLM cost is not just about caching. It’s about understanding where the cost actually comes from, what “savings” really mean, and what begins to break when a system moves from a controlled demo into something closer to production.


The First Insight Still Holds

The original observation hasn’t changed, and neither has the core architecture. The system still solves the same underlying problem.

  • Users repeat themselves.
  • Applications repeat themselves.
  • Agents repeat themselves.

Often the wording changes slightly, but the intent remains the same. From the model’s perspective, however, every variation is a brand new request. And every request has a cost.

So yes — caching works. It reduces cost immediately, often without any changes to the application itself.

But that’s only the surface. The deeper questions only appear once you try to rely on it.


The First Misconception: “Caching Is Free”

In the beginning, the results looked almost too good.

In a demo environment, exact cache hits dominated. When a request hit the cache, it meant no API call, no tokens, and almost zero latency. It felt like pure gain, as if cost reduction came with no trade-offs.

That illusion disappears the moment you introduce semantic caching properly. Because semantic caching requires embeddings.

To determine whether two requests are similar, you first need to convert them into vectors. That means calling an embedding model, storing the result, and comparing it against existing data. Only then can you decide whether to reuse a response or forward the request to the LLM.

And embeddings are not free.

At that point, the equation changes:

Net savings = avoided LLM cost − embedding cost

This is where things become more delicate.

If your similarity threshold is too strict, few lookups ever hit, and you pay for embeddings that produce nothing. If your traffic is highly unique, most of those embeddings never lead to a cache hit. If your embedding model is expensive, the optimization starts working against you.

What initially looked like a simple cost reduction mechanism becomes something that requires careful balance.
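The balance can be sketched with rough numbers. Everything below (prices, hit rate) is an illustrative assumption, not a measurement from the project:

```python
# Illustrative break-even calculation for semantic caching.
# All prices and rates below are invented for the sketch.

def net_savings(requests, hit_rate, llm_cost_per_call, embed_cost_per_call):
    """Net savings = avoided LLM cost - embedding cost.

    Every request pays for one embedding (to check similarity),
    but only cache hits avoid an LLM call.
    """
    avoided = requests * hit_rate * llm_cost_per_call
    embeddings = requests * embed_cost_per_call
    return avoided - embeddings

# Example: 10,000 requests, 30% semantic hit rate,
# $0.002 per LLM call, $0.0001 per embedding:
# roughly $6 avoided minus $1 of embeddings.
print(net_savings(10_000, 0.30, 0.002, 0.0001))
```

The same function shows the failure mode: with a 0% hit rate the result is negative, which is exactly the "optimization working against you" case.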

That was the moment when the project stopped being just a clever shortcut and started behaving like a system that needs tuning.


Where the Savings Actually Come From

Looking at real metrics changed another assumption.

Intuitively, semantic caching feels like the main feature. It’s the “intelligent” part of the system. But in practice, most of the savings come from something much simpler — exact matches.

A surprisingly large portion of traffic is not just similar — it is identical. The same prompt appears again and again, sometimes minutes apart, sometimes hours later. Once you see it in real data, it’s hard to ignore.

Semantic caching still matters, but its role is different. It extends the coverage rather than forming the base.

Without exact caching, the system loses most of its immediate impact. Without semantic caching, you miss additional opportunities. But they are not equal contributors, and treating them as such leads to wrong expectations.


What Breaks in Real Usage

As soon as real traffic enters the system, subtle issues begin to surface.

One of the first is payload size.

LLM requests tend to grow over time. Prompts accumulate context, system messages expand, conversation history becomes longer. In some cases, payloads become unexpectedly large — either naturally or intentionally.

Without limits, a single request can consume disproportionate resources. What seemed like a minor edge case quickly turns into something that needs explicit control.
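A minimal sketch of such a limit, assuming an arbitrary 64 KiB threshold (not the firewall's actual default):

```python
import json

# Illustrative payload-size guard; the 64 KiB limit is an
# arbitrary example value, not the firewall's real setting.
MAX_PAYLOAD_BYTES = 64 * 1024

def check_payload(body: dict) -> tuple[bool, str]:
    """Reject requests whose serialized size exceeds the limit."""
    size = len(json.dumps(body).encode("utf-8"))
    if size > MAX_PAYLOAD_BYTES:
        return False, f"payload too large: {size} bytes"
    return True, "ok"

ok, msg = check_payload(
    {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hi"}]}
)
print(ok)  # True: a small request passes
```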

Another issue is validation.

If you accept any model name, any payload structure, and any input format, your system remains flexible but your metrics lose meaning. Cost calculations become inconsistent, comparisons stop being reliable, and “savings” become difficult to interpret.

Adding strict validation changes that. It makes the system more predictable, but also more opinionated.
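A sketch of what "opinionated" validation might look like; the model names and error messages here are invented for illustration:

```python
# Illustrative strict-validation sketch: an allowlist of models and a
# minimal shape check. The allowed names are examples, not a real config.
ALLOWED_MODELS = {"gpt-4o-mini", "gpt-4o"}

def validate_request(body: dict) -> list[str]:
    """Return a list of validation errors; empty means the request passes."""
    errors = []
    if body.get("model") not in ALLOWED_MODELS:
        errors.append(f"unknown model: {body.get('model')!r}")
    messages = body.get("messages")
    if not isinstance(messages, list) or not messages:
        errors.append("messages must be a non-empty list")
    return errors

good = {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hi"}]}
print(validate_request(good))  # []: request passes
```

Rejecting unknown models is what makes cost metrics trustworthy: every forwarded request maps to a known price table.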

At that point, it is no longer just a transparent proxy. It becomes a controlled gateway. And that shift is intentional.

It’s worth noting that this layer is not a security firewall in the traditional sense. It does not attempt to detect malicious prompts, prevent prompt injection, or enforce content policies. Those concerns belong to the application layer, where context, user intent, and business logic are better understood. The goal here is different: to control cost, reduce unnecessary requests, and make LLM usage more predictable.


Architecture Matters More Than It Seems

From the outside, the architecture still looks simple. One layer in front of the LLM, minimal components, no intrusive changes to the application.

But over time, the reasoning behind each choice becomes more important.

  • Redis handles exact matches because it is fast and predictable.
  • Qdrant supports semantic search efficiently without adding unnecessary complexity.
  • Rust ensures that this layer can sit in the request path without introducing latency or instability.

Individually, these are straightforward decisions.

Together, they define whether the system can operate reliably under load. Because once this layer becomes part of the critical path, it is no longer optional. If it slows down, everything slows down. If it fails, everything fails.

At that point, optimization is no longer the only goal. Stability becomes equally important.


The Difference Between Demo Traffic and Reality

It is easy to produce impressive results in a controlled environment.

  • high cache hit rates
  • clear cost savings
  • clean, predictable behavior

But those results are shaped by the input.

Real systems behave differently. There are always new queries, unexpected variations, and edge cases that were not part of the initial design. You never reach 100% cache hits — and that’s not a failure.

A healthy system still generates misses. It still calls the model. It still adapts to new inputs.

The goal is not to eliminate LLM usage entirely. The goal is to eliminate unnecessary usage. That distinction becomes much more important once you move beyond a demo.


Where This Is Going

What started as a simple way to avoid duplicate requests is gradually evolving into something broader.

It is no longer just about caching. It becomes a control layer for how LLMs are used:

  • cost visibility
  • request validation
  • provider abstraction
  • traffic control

Caching is still at the center, but it is no longer the whole story. It is the entry point into a larger set of concerns that appear once LLM usage grows.


Final Thoughts

The original idea was simple: Don’t pay twice for the same answer.

That still holds. But applying that idea in a real system reveals a different challenge.

The difficult part is not avoiding the call. It is understanding when avoiding it actually makes sense. Because cost optimization, in practice, is not just about reducing usage. It is about understanding your system well enough to reduce costs in the right way.

You can explore the project here:

https://github.com/vcal-project/ai-firewall

How to Reduce OpenAI API Costs with Semantic Caching

· 6 min read
Founder of VCAL Project

Originally published on Medium.com on March 21, 2026.
Read the Medium.com version

A simple OpenAI-compatible gateway that eliminates duplicate requests and cuts token usage

While working on LLM-powered tools for a customer, I kept seeing something that didn’t feel right.

Users were asking the same or similar questions again and again. Support queries repeated. Internal assistants received nearly identical prompts. Even AI agents were looping through similar requests.

At first, it didn’t look like a problem. That’s just how users behave.

But then I looked at the cost.

Every repeated question meant another API call. Another batch of tokens. Another charge. Over time, it added up to more than I expected.

I realized something simple:

We are paying multiple times for the same answer.


Why Existing Solutions Didn’t Quite Work

Initially I looked at the available tools.

Redis helped with exact caching, but only when the prompt was identical. The moment a user rephrased the question slightly, the cache missed. “How do I get access to Jira?” and “Cannot get access to Jira” were treated as completely different requests.

I also explored RedisVL, which brings vector search capabilities into Redis. It moves in the right direction by combining caching and similarity in one place. But in practice, it still requires setting up embedding flows, defining schemas, tuning similarity thresholds, and integrating it manually into the LLM request pipeline.

Vector databases like Milvus, Weaviate, or Qdrant seemed promising as well. They can detect semantic similarity effectively, but integrating them into the request flow means building additional pipelines, managing embeddings, and writing glue code.

All of these tools are powerful, but they aren’t simple.

More importantly, none of them are designed as a drop-in layer in front of an LLM API. There was no unified solution that combined caching, semantic matching, and cost awareness in one place.


What I Built Instead

At some point, I decided to take a step back and ask a simple question:

What if we just put one smart layer in front of the LLM?

That’s how the AI Cost Firewall started.

Instead of modifying applications or adding complex pipelines, AI Firewall intercepts requests before they reach the model. If it’s already seen a similar request, it returns the cached response. If not, it forwards it, stores the result, and moves on.

From the application’s perspective, nothing changes. It still talks to an OpenAI-compatible API.

But behind the scenes, unnecessary calls disappear.


How It Works (Without the Complexity)

Screenshot: AI Cost Firewall Architecture

I intentionally kept the architecture minimal.

At the core, there’s a Rust-based API gateway that speaks the same language as the OpenAI API. For caching, I use Redis for exact matches and Qdrant for semantic similarity. Prometheus and Grafana provide visibility into what’s happening.

A request comes in, we check the cache, and only if needed do we call the LLM.

That’s it.

No SDK rewrites. No major architectural changes. Just one additional layer.


Why I Chose Rust

Since this component sits directly in the request path, performance matters.

I chose Rust because it provides low latency and predictable performance without garbage collection pauses. It handles concurrency well and keeps the memory footprint small, which makes it ideal for containerized deployments.

Most importantly, we can trust it not to become the bottleneck.

Why I Open-Sourced It

This layer sits between the application and the AI provider. That’s a sensitive place.

I felt it had to be transparent and auditable. Open source makes it easier to trust, easier to adopt, and easier to extend.

It also keeps the core idea simple: reducing costs shouldn’t introduce new risks or lock you into a vendor.


Getting Started in Minutes

I wanted the setup to be as simple as possible.

Clone the repository, start Docker, and point your application to a new endpoint.

git clone https://github.com/vcal-project/ai-firewall
cd ai-firewall
cp configs/ai-firewall.conf.example configs/ai-firewall.conf
nano configs/ai-firewall.conf # Replace the placeholders with your API keys
docker compose up -d

After that, you just replace your API base URL with:

http://localhost:8080/v1/chat/completions

That’s the entire integration.


What Changed for Me

Once I started using this approach, two things became obvious.

First, a surprisingly large portion of requests was served directly from cache. The reason? They had all been answered before.

Second, response times improved whenever the cache was hit.

I didn’t need to optimize prompts or switch models to see an effect. Just avoiding redundant calls made a noticeable difference.

To make this visible, I added a simple Grafana dashboard.

It shows how many requests are served from cache vs forwarded to the LLM, along with the estimated cost savings in real time.

Screenshot: Grafana dashboard showing cache hits and cost saving

The key metrics are:

  • cache hit ratio (how many requests never reach the LLM)
  • total tokens saved
  • estimated cost savings

What surprised me most was how quickly the savings accumulated even with relatively small traffic.


What Comes Next

I see this as a starting point rather than a finished product.

Next, I’m focusing on adding support for other LLM providers beyond OpenAI. Expanding analytics is another priority, along with exploring multi-model setups and smarter routing.

There’s still a lot to build — and that’s exactly the point.


Final Thoughts

AI costs don’t spike all at once. They grow quietly, request by request.

And in many cases, a large part of that cost is unnecessary.

We didn’t need a more complex system to reduce it. We just needed to stop sending the same request twice.

Sometimes the most effective optimization is the simplest one:

Not calling the model at all.


If you’re running LLM-powered tools and want to reduce costs without changing your application architecture, you can try it here:

https://github.com/vcal-project/ai-firewall

AI Cost Firewall: An OpenAI-Compatible Gateway That Cuts LLM Costs by 75%

· 9 min read
Founder of VCAL Project

Originally published on Dev.to on March 16, 2026.
Read the Dev.to version

Exact + semantic caching for AI applications


In today’s era of AI adoption, there is a distinct shift from integrating AI solutions into business processes to controlling their costs, whether that means the cost of a cloud solution, a local LLM deployment, or the tokens spent in chatbots. If your solution handles repeated questions through an OpenAI-compatible model, and you are looking for a simple, free, and effective way to cut your company’s daily token costs immediately, there is one infrastructural solution that does it right out of the box.

AI Cost Firewall is a free open-source API gateway that decides which requests actually need to reach the LLM and which can be answered from previous results without additional token costs.

The gateway consists of a Rust-based firewall “decider”, a Redis database, a Qdrant vector store, Prometheus for metrics scraping, and Grafana for monitoring. All the tools are deployed with a single docker compose command and are available for use in less than a minute.

Once deployed, AI Cost Firewall sits transparently between your application and the LLM provider. Your chatbot, AI assistant, or internal automation continues to send requests exactly as before; the only difference is that the API endpoint now points to the firewall instead of directly to the model provider. The firewall then performs an instant check before deciding whether the request should actually reach the LLM and raise your monthly bill.


How the firewall reduces token costs

AI Cost Firewall eliminates unnecessary token spend using two layers of caching.

Exact match cache (Redis / Valkey)

The first step is an extremely fast exact-match check. Each incoming request is normalized and hashed. If an identical request was previously processed, the firewall immediately returns the stored response from Redis. This lookup takes microseconds and costs zero tokens. For workloads with frequent identical prompts, such as customer support or internal documentation assistants, this alone can already eliminate a significant portion of LLM traffic.
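The normalize-and-hash step can be sketched as follows. This is a simplified illustration: an in-memory dict stands in for Redis, and the normalization rules (whitespace collapsing, key ordering) are assumptions, not the firewall's exact algorithm.

```python
import hashlib
import json

# In-memory dict stands in for Redis; the normalization below
# (whitespace collapse, sorted keys) is a simplified assumption.
cache: dict[str, str] = {}

def cache_key(body: dict) -> str:
    """Normalize the request and hash it for an exact-match lookup."""
    normalized = {
        "model": body.get("model"),
        "messages": [
            {"role": m["role"], "content": " ".join(m["content"].split())}
            for m in body.get("messages", [])
        ],
    }
    blob = json.dumps(normalized, sort_keys=True)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

req1 = {"model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "What is  Kubernetes?"}]}
req2 = {"model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "What is Kubernetes?"}]}
cache[cache_key(req1)] = "cached answer"
print(cache_key(req2) in cache)  # True: whitespace differences normalize away
```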

Semantic cache (Qdrant)

The second layer addresses the case of semantic similarity: questions are similar but not identical.

For example:

User A: Provide a one-sentence explanation of what Kubernetes is.
User B: What is Kubernetes? Give me a one-sentence explanation.

Even though the wording differs, the semantic value and thus meaning of these questions is essentially the same (if you are interested in what semantics in AI is, have a look at my article From words to vectors: how semantics traveled from linguistics to Large Language Models).

To detect these situations, AI Cost Firewall uses semantic vector search. Each request is embedded using a lightweight embedding model, and the resulting vector is compared against previously stored queries using Qdrant, a high-performance vector database designed specifically for this purpose. If the similarity score exceeds a configured threshold, the firewall returns the previously generated answer instead of sending the request to the LLM again. In this way, a single LLM response can be reused dozens or even hundreds of times without extra token expense.
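The lookup logic boils down to a nearest-neighbor search plus a threshold. Here is a toy version with hand-made vectors; the real system uses an embedding model and Qdrant, and the 0.9 threshold is an illustrative value:

```python
import math

# Toy semantic-cache lookup. Real deployments embed text with a model
# and query Qdrant; hand-made 3-d vectors and a 0.9 threshold are
# illustrative stand-ins.
THRESHOLD = 0.9

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

stored = [
    ([0.9, 0.1, 0.0], "Kubernetes is a container orchestration platform."),
]

def semantic_lookup(query_vec):
    """Return the cached answer if the best match clears the threshold."""
    best_score, best_answer = max(
        (cosine(query_vec, vec), ans) for vec, ans in stored
    )
    return best_answer if best_score >= THRESHOLD else None

print(semantic_lookup([0.88, 0.12, 0.01]) is not None)  # True: near-duplicate hits
print(semantic_lookup([0.0, 0.2, 0.9]) is None)         # True: unrelated query misses
```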

Forwarding only when necessary

If neither the exact cache nor the semantic cache contains a suitable answer, the firewall forwards the request to your upstream model provider. Besides being provided to the user, the returned response is then stored in both Redis and Qdrant for future reuse. The workflow therefore becomes (simplified):

Client → AI Cost Firewall
  1. Redis exact-match check
  2. Qdrant semantic check
  3. LLM API (only if neither cache has an answer)

The LLM is only called when a genuinely new question appears.

With this approach, AI Cost Firewall not only saves costs but also dramatically improves response time, which raises user satisfaction (Customer Satisfaction Score, CSAT).


OpenAI compatible by design

One of the most practical aspects of AI Cost Firewall is that you do not have to touch your application to integrate it. You simply switch the base URL to the firewall’s endpoint:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # point at the firewall instead of the provider
)

From the application’s perspective, nothing changes. The same requests and responses flow through the system. Behind the scenes, however, the firewall intelligently “decides” whether the model actually needs to be called and money needs to be spent.

This tool is compatible with:

  • OpenAI models
  • Azure OpenAI
  • local OpenAI-compatible servers
  • many hosted inference platforms

In other words, any system that already works with the OpenAI API can immediately benefit from cost reduction. Support for additional providers is planned as well.


Observability built in

One of the integrated features of the AI Cost Firewall is its built-in monitoring. It consists of Prometheus for scraping the metrics and integrated Grafana Dashboard. Both services are launched automatically by docker compose using preconfigured Prometheus YAML and a prebuilt Grafana dashboard JSON, so the monitoring stack is ready immediately without any manual configuration.

Prometheus metrics allow you to track:

  • number of cache hits
  • semantic matches
  • forwarded requests
  • estimated cost savings
  • active requests

You can immediately visualize these metrics with the Grafana dashboard to see exactly how much the firewall is saving in real time (with a five-second scrape delay, to be honest).


Why it works well

AI Cost Firewall works because it targets a structural characteristic present in almost every LLM application:

  • repeated user questions
  • overlapping knowledge queries
  • duplicated agent prompts

By caching responses and using semantic similarity search, the system converts repeated LLM calls into near-zero-cost lookups.

Why near-zero and not fully zero? Because semantic matching still requires generating embeddings for incoming queries. However, embedding costs are typically orders of magnitude lower than generating full LLM responses.

Another advantage of the firewall is its intentionally minimal architecture:

  • Rust firewall gateway
  • Redis for exact caching
  • Qdrant for semantic caching
  • Prometheus + Grafana for monitoring

This simplicity makes it easy to deploy, maintain, and scale.


When AI Cost Firewall is most effective

The biggest savings with AI Cost Firewall occur in systems where similar questions appear frequently. You will immediately benefit from the integration if your system includes one or more of the following components:

  • customer support chatbots
  • internal company knowledge assistants
  • documentation Q&A systems
  • developer copilots
  • AI help desks
  • AI Agents performing any of the above tasks

In these environments, the same core questions appear repeatedly across many users. Even when questions are phrased differently, the semantic cache can reuse the same answer multiple times.

Advanced users may also appreciate the integrated TTL (Time-to-Live) feature, which lets you configure how long a response is kept in Redis before it is replaced with a newly generated one. The same feature for Qdrant is currently under development and will be introduced soon.
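The TTL behavior can be illustrated with a small sketch. A dict with expiry timestamps stands in for Redis's built-in key expiration; the key names and TTL values are invented for the example:

```python
import time

# Illustrative TTL cache; a dict with expiry timestamps stands in
# for Redis's built-in key expiration.
class TTLCache:
    def __init__(self):
        self._data: dict[str, tuple[str, float]] = {}

    def set(self, key: str, value: str, ttl_seconds: float):
        self._data[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key: str):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # expired: the next request goes to the LLM
            return None
        return value

cache = TTLCache()
cache.set("prompt-hash", "cached answer", ttl_seconds=0.05)
print(cache.get("prompt-hash"))  # "cached answer" while fresh
time.sleep(0.06)
print(cache.get("prompt-hash"))  # None after expiry
```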


Try it in 60 seconds

If you want to see how AI Cost Firewall works, you can deploy the whole stack locally or on a small server in less than a minute.

Example:

git clone https://github.com/vcal-project/ai-firewall
cd ai-firewall
cp configs/ai-firewall.conf.example configs/ai-firewall.conf
nano configs/ai-firewall.conf # Replace the placeholders with your API keys
docker compose up -d

This launches:

  • AI Cost Firewall
  • Redis (exact cache)
  • Qdrant (semantic cache)
  • Prometheus (metrics scraping)
  • Grafana (monitoring dashboard)

Once the containers start, simply point your OpenAI client to:

http://localhost:8080/v1

Within seconds, the gateway is ready to accept OpenAI-compatible requests. Similar to Nginx and other infrastructure gateways, you only need to add your API keys to the configuration file. From that point on, every request automatically passes through the cost-saving pipeline while previous responses are stored in Redis and Qdrant for future reuse.

Because the gateway itself is stateless, multiple firewall instances can be deployed behind a load balancer, allowing the system to scale horizontally with growing traffic.

                    ┌───────────────────────┐
                    │        Clients        │
                    │  Chatbots / Agents /  │
                    │   Internal AI Apps    │
                    └───────────┬───────────┘
                                │
                    ┌───────────▼───────────┐
                    │     Load Balancer     │
                    │    Nginx / HAProxy    │
                    └───────────┬───────────┘
                                │
         ┌──────────────────────┼──────────────────────┐
         ▼                      ▼                      ▼
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│ AI Cost Firewall │  │ AI Cost Firewall │  │ AI Cost Firewall │
│    instance 1    │  │    instance 2    │  │    instance 3    │
└─────────┬────────┘  └─────────┬────────┘  └─────────┬────────┘
          │                     │                     │
          └──────────┬──────────┴──────────┬──────────┘
                     ▼                     ▼
          ┌──────────────────┐  ┌──────────────────┐
          │  Redis / Valkey  │  │      Qdrant      │
          │   Exact cache    │  │  Semantic cache  │
          └─────────┬────────┘  └─────────┬────────┘
                    │                     │
                    └──────────┬──────────┘
                               ▼
                   ┌───────────────────────┐
                   │   Upstream LLM API    │
                   │ OpenAI / Azure / vLLM │
                   └───────────────────────┘

┌─────────────────────── Observability Stack ─────────────────┐
│                                                             │
│  AI Cost Firewall metrics ─────► Prometheus ─────► Grafana  │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Architecture of a horizontally scalable AI Cost Firewall deployment.


Conclusion

With the growing adoption of AI, it is sometimes painful to watch the steady increase of company expenses related to LLM tokens. Large Language Models are extremely powerful, but they can also become quite expensive when used at scale. It feels even more unfair when you realize that a significant portion of LLM traffic consists of repeated or semantically similar questions.

AI Cost Firewall addresses this inefficiency with a simple idea: do not send the same or similar question to the model again and again. Instead, reuse answers that were already generated for identical or semantically similar queries.

By combining exact caching with semantic similarity search, the firewall allows previously generated answers to be reused safely and efficiently. The result is lower token consumption, faster responses, and reduced infrastructure costs.

Because the gateway is OpenAI-compatible, integration requires only a small configuration change. No application refactoring is needed. If your system includes chatbots, knowledge assistants, developer copilots, or AI agents that answer recurring questions, AI Cost Firewall can reduce token usage by 30–75% immediately after deployment.

And since the entire stack runs with a single docker compose command, you can try it in minutes.

Sometimes the most effective optimization is not a new model, a larger GPU cluster, or a complex architecture.

Sometimes it is simply not paying twice for the same answer.


If you find this open-source project useful, a GitHub star will help the project grow.

https://github.com/vcal-project/ai-firewall

From Words to Vectors: How Semantics Traveled from Linguistics to Large Language Models

· 8 min read
Founder of VCAL Project

Originally published on Dev.to on January 17, 2026.
Read the Dev.to version

Why meaning moved from definitions to structure — and what that changed for modern AI


When engineers talk about semantic search, embeddings, or LLMs that "understand" language, it often sounds like something fundamentally new. Yet the problems modern AI systems face — meaning, reference, ambiguity, and context — were already central questions in linguistics and philosophy more than a century ago.

This article traces how the concept of semantics evolved across disciplines: from linguistics and philosophy, through symbolic AI and statistical NLP, and finally into the neural architectures that power modern large language models, and why this history matters for how we design retrieval, memory, and language systems today. The journey reveals that today's AI systems are not a break from the past, but the convergence of long-standing ideas finally made computationally feasible.


Linguistic Origins: Meaning as a System, Not a Label

Modern semantics begins not with computers, but with language itself. In the late 19th and early 20th centuries, linguists began to reject the naive idea that words simply "point" to things in the world. One of the most influential figures in this shift was Ferdinand de Saussure, who argued that language is a structured system of signs rather than a naming scheme.

Saussure proposed that each linguistic sign consists of two inseparable parts: the signifier (the sound or written form) and the signified (the concept evoked). Crucially, the relationship between the two is arbitrary. There is nothing inherently "dog-like" about the word dog. Its meaning arises because it occupies a position within a broader system of contrasts: dog is meaningful because it is not cat, not wolf, not table.

This was a radical idea at the time. Meaning, Saussure claimed, is relational. Words derive significance from how they differ from other words, not from direct correspondence with reality. This insight quietly laid the conceptual groundwork for everything from structural linguistics to modern vector-based representations.


Philosophy of Language: Meaning, Logic, and Composition

While linguists focused on structure, philosophers sought precision. In particular, Gottlob Frege transformed semantics by embedding it into formal logic. Frege introduced a critical distinction between sense — the mode of presentation of an idea, and reference — the actual object being referred to.

This distinction explained how two expressions could refer to the same thing while conveying different information. "The morning star" and "the evening star" both refer to Venus, yet they are not interchangeable in all contexts. Meaning, therefore, could not be reduced to reference alone.

More importantly, Frege formalized the idea of compositionality: the meaning of a sentence is determined by the meanings of its parts and the rules used to combine them. This principle became foundational not only in philosophy, but later in programming languages, logic systems, and early AI models.

In retrospect, compositionality is what allowed meaning to be treated as something computable, at least in theory.


Early Artificial Intelligence: When Meaning Was Symbolic

When I studied linguistics at university many years ago, everything up to this point was already part of the curriculum. Structural linguistics, philosophy of language, and formal semantics provided a solid theoretical foundation. What none of us could have anticipated at the time was how directly these ideas would later intersect with computer science in what would come to be called artificial intelligence.

When AI emerged as a field in the mid-20th century, it inherited philosophy's confidence in symbols and logic. Early systems assumed that meaning could be explicitly represented through formal structures: symbols, predicates, rules, and ontologies. To "understand" language was to transform symbols according to carefully designed rules.

For a while, this worked. Expert systems, knowledge graphs, and first-order logic engines achieved impressive results in narrowly defined domains such as medical diagnosis, chemical analysis, and configuration problems. Within carefully bounded worlds, symbolic semantics appeared tractable.

Natural language, however, quickly exposed the limits of this approach. Human language is ambiguous, context-dependent, and constantly evolving. Encoding all possible meanings and interpretations proved not merely difficult, but fundamentally unscalable. Symbolic systems were brittle: they failed not gradually, but catastrophically, when faced with inputs that deviated even slightly from their assumptions.

Semantics, it turned out, was far messier than logic had allowed, and far more resistant to being fully written down.


The Statistical Shift: Meaning Emerges from Usage

A quiet revolution began when linguists and computer scientists started to look not at rules, but at usage patterns. The idea that meaning could be inferred from how words are used rather than how they are defined gained traction in the mid-20th century.

The core insight was simple but profound: words that appear in similar contexts tend to have similar meanings. Instead of encoding semantics explicitly, one could measure it statistically by analyzing large corpora of text.

This approach, known as distributional semantics, reframed meaning as something empirical rather than prescriptive. Words became vectors of co-occurrence statistics. Similarity was no longer binary or rule-based, but graded and approximate.

This was a decisive break from symbolic AI and a return, in spirit, to Saussure's relational view of meaning.


Word Embeddings: Geometry Becomes Semantics

Distributional ideas matured dramatically with the introduction of neural word embeddings, particularly models like Word2Vec. Instead of relying on sparse frequency counts, these models learned dense, low-dimensional vector representations optimized to predict linguistic context.

What emerged surprised even their creators. Semantic relationships appeared as geometric regularities in vector space. Differences between vectors encoded analogies, hierarchies, and semantic proximity. Meaning became something you could measure with cosine similarity.

This was not symbolic understanding, but it was not random either. It was structure: learned rather than designed.

For the first time, machines exhibited behavior that looked like semantic intuition, despite having no explicit definitions or rules.


Contextual Semantics: Meaning Is Not Fixed

Static embeddings had a fundamental limitation: each word had exactly one vector, regardless of context. But human language does not work that way. The meaning of a word shifts depending on surrounding words, speaker intent, situation, and even emotion.

Transformer-based models, particularly BERT, addressed this by making representations contextual. Instead of asking "What does this word mean?", the model learned to ask "What does this word mean here?"

Through attention mechanisms, transformers model relationships between tokens dynamically. Meaning is no longer stored in a single vector per word, but distributed across layers and activations that respond to context.

This marked a crucial step toward pragmatic semantics: language as it is actually used, not as it is abstractly defined.


Large Language Models: Semantics as Emergent Structure

Large language models such as GPT do not contain explicit semantic representations in the traditional sense. They are trained to predict the next token in a sequence. And yet, at scale, they display behaviors that look strikingly semantic: summarization, reasoning, translation, abstraction.

The key idea is emergence. As models compress vast amounts of linguistic data, they internalize regularities about the world, language, and human communication. Semantics arises not as a module, but as a side effect of learning efficient representations.

These models do not "know" meaning in a philosophical sense. But they operate in a space where syntax, semantics, and pragmatics are inseparable, and where relational structure dominates.


When Meaning Becomes Operational

For practitioners building semantic search systems, RAG pipelines, or LLM-adjacent infrastructure, this history is not academic background — it is an explanation of why certain designs consistently work while others fail. Exact matching breaks down because natural language rarely repeats itself verbatim. Embeddings succeed not because they are clever, but because they mirror how meaning behaves in practice: approximately, relationally, and with tolerance for variation.

Once this is understood, several architectural consequences follow naturally. Retrieval quality depends less on perfect recall and more on selecting representations that preserve semantic neighborhoods. Caching strategies become viable only when equivalence is defined by similarity rather than identity. Evaluation metrics must account for graded relevance instead of binary correctness. Even system boundaries shift: components no longer exchange "facts", but approximations of meaning that remain useful within context.

Semantic systems are effective precisely because they do not attempt to eliminate ambiguity. They absorb it. Whether you are designing a vector store, placing a semantic cache in front of an LLM, or building a long-term memory layer for conversational systems, you are implicitly making choices about how much approximation your system tolerates and where that tolerance is enforced.
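The point about equivalence-by-similarity can be made concrete. The sketch below uses a bag-of-words count vector as a stand-in embedding (a production system would use a learned model) and a hypothetical threshold value:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: a bag-of-words count vector. A production
    # system would use a learned embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

THRESHOLD = 0.6  # the tolerance knob: how much variation still counts as "the same"

q1 = "how do I reset my password"
q2 = "how do I reset my account password"

# Identity fails where similarity succeeds: exactly the property that
# makes semantic caching viable.
assert q1 != q2
assert cosine(embed(q1), embed(q2)) >= THRESHOLD
```

Choosing `THRESHOLD` is precisely the design decision described above: where in the system ambiguity tolerance is enforced.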


Closing Thought: Semantics as Shared Infrastructure

What began as a linguistic insight, that words gain meaning through their relations to other words, has quietly become an organizing principle for entire computational systems. Meaning no longer lives in dictionaries, rules, or symbols, but in patterns: in how expressions cluster, diverge, and reappear across vast landscapes of language. Semantics is no longer something a system contains; it is something a system moves through.

This shift took more than a century to unfold. It required philosophers to separate sense from reference, linguists to abandon naming theories, and engineers to accept approximation over certainty. Only when data became abundant and computation relatively cheap did this long trajectory converge into something operational. Semantics, once debated in lecture halls and footnotes, has become infrastructure — implicit, distributed, and shared.

That idea, radical when first proposed, has been waiting over a hundred years for enough data and compute to become practical.

And now, finally, it has.

Why Edge AI Needs Lightweight Semantic Caches — and What Makes Them Hard to Build

· 6 min read
Founder of VCAL Project

Originally published on Medium.com on November 27, 2025.
Read the Medium.com version


Today edge computing is reshaping the way AI systems are deployed. Instead of sending every request to centralized cloud infrastructure, more computation is happening on devices closer to end-users. These “edge environments” include IoT gateways, on-premise servers, mobile devices, micro-VMs, serverless functions, and browser-based applications. The appeal is clear: moving computation closer to where data is generated reduces latency, minimizes bandwidth requirements and allows organizations to satisfy strict data-privacy rules.

At the same time, WebAssembly (WASM) has emerged as a portable, sandboxed runtime for executing code in highly constrained or security-sensitive environments. Originally designed for browsers, WASM now runs in cloud edge workers, serverless platforms, and isolated environments where traditional binaries cannot be executed. These runtimes often restrict access to system calls such as networking, threading, or the local filesystem. They operate under strict memory limits, sometimes as low as tens of megabytes, and they prioritize deterministic, predictable execution.

Yet while edge computing offers obvious advantages, running AI components at the edge introduces its own challenges, especially when applications rely on semantic search, embeddings, or large language models (LLMs).


A major issue arises when AI applications repeatedly generate similar responses to similar prompts. In a cloud setting this inefficiency is tolerable, but at the edge it becomes costly. Edge nodes often have hard limits on CPU time and memory allocation, meaning that even small local language models may struggle to meet real-time latency budgets. A semantic cache — a system that stores answers together with an embedding vector and returns a cached answer when the incoming request is semantically similar — is a natural solution. However, building such a system for constrained environments is significantly more difficult than building one for the cloud.

The first challenge is memory. Classical vector databases and similarity search engines rely on complex indexing structures such as HNSW graphs, which are fast but memory-intensive. Standard configurations easily grow to hundreds of megabytes and often assume the availability of multi-threading, background maintenance processes, and dynamic memory growth. Edge workers and WASM isolates cannot accommodate this. In many cases, the runtime enforces strict caps on linear memory and disallows growing beyond a fixed boundary. This immediately rules out most existing semantic search libraries, even before considering cold-start overhead or storage.

The second constraint is the execution environment itself. WASM runtimes typically do not expose POSIX-like APIs (Portable Operating System Interface, a family of IEEE standards that define consistent application programming interfaces). Features such as mmap, file descriptors, or native sockets are unavailable unless the host explicitly provides them through WASI (WebAssembly System Interface), and even then, support varies. This makes it almost impossible to run vector databases “as-is,” because they depend heavily on operating system functionality and persistent background services. In edge environments, developers have only a few milliseconds to initialize modules, produce a response, and return control to the runtime. A semantic cache that takes hundreds of milliseconds to load an index simply cannot be deployed in these contexts.

Cold-start behavior is another architectural concern. Unlike long-running cloud servers, edge workers may be rapidly created and destroyed. A new isolate might handle only one or two requests before being recycled. For AI applications, this means that any semantic cache must load extremely quickly — ideally in a few milliseconds — and must not rely on heavy initialization or dynamic graph reconstruction. Snapshotting becomes essential: developers need the ability to store the cache state in a compact format that loads deterministically and quickly into memory.
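As a sketch of why fixed snapshot layouts load quickly, the hypothetical binary format below (length-prefixed float32 vectors plus answers — not VCAL's actual format, which is not documented here) restores with a single sequential pass and no index rebuild:

```python
import struct

# Hypothetical snapshot layout: a u32 entry count, then per entry a u32
# dimension, that many little-endian float32s, a u32 byte length, and
# the UTF-8 answer. Fixed layouts like this load deterministically in
# one sequential pass, with no graph reconstruction at startup.

def save_snapshot(entries):
    out = bytearray(struct.pack("<I", len(entries)))
    for vec, answer in entries:
        blob = answer.encode("utf-8")
        out += struct.pack("<I", len(vec))
        out += struct.pack(f"<{len(vec)}f", *vec)
        out += struct.pack("<I", len(blob))
        out += blob
    return bytes(out)

def load_snapshot(buf):
    count = struct.unpack_from("<I", buf, 0)[0]
    off, entries = 4, []
    for _ in range(count):
        dim = struct.unpack_from("<I", buf, off)[0]; off += 4
        vec = list(struct.unpack_from(f"<{dim}f", buf, off)); off += 4 * dim
        n = struct.unpack_from("<I", buf, off)[0]; off += 4
        entries.append((vec, buf[off:off + n].decode("utf-8"))); off += n
    return entries

entries = [([0.5, 0.25], "cached answer"), ([1.0, 0.0], "another answer")]
restored = load_snapshot(save_snapshot(entries))
```

Because the layout is position-determined, loading is bounded by read bandwidth rather than by allocation or index construction, which is what cold-start budgets demand.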

There is also the question of energy and cost efficiency. Edge nodes operate on limited power budgets, especially in IoT scenarios. Recomputing the same embedding or calling an external LLM repeatedly wastes both energy and bandwidth. Reducing redundant inference calls requires a semantic memory layer that can match incoming queries to existing knowledge without exceeding stringent resource constraints.

Privacy regulations add an additional layer of complexity. One of the motivations for moving AI workloads to the edge is to keep sensitive data local. But to do that effectively, the system must avoid unnecessarily sending repeated questions or logs to a central model. A semantic cache therefore becomes not just a performance optimization but a privacy mechanism: if the system can answer from its local memory, no data transmission to external LLMs is required. Unfortunately, building such a cache in environments with restricted storage, no access to background processes, and strict runtime quotas is a non-trivial task.

These are the conditions under which traditional semantic search infrastructure begins to struggle. Large vector databases simply assume too much: too much RAM, too much access to the operating system, too much startup time, and too much persistence. Even lightweight semantic caches designed for server applications often rely on threading, shared memory, file-based checkpointing, or dynamically growing allocations. Most embedding-based caches were never designed with WASM runtimes, edge workers, or IoT gateways in mind.


This is precisely the gap that newer designs aim to address. Solutions like VCAL approach semantic caching not as a distributed system or standalone service but as a small in-process library that can run with minimal memory and without heavy OS dependencies. Instead of behaving like a database, it behaves more like a CPU-level cache for AI reasoning, storing question-answer pairs and their embeddings in an optimized structure that can fit within the constraints of edge and WASM environments. By avoiding reliance on network calls, background threads, or large indices, such systems become suitable for serverless workers, browser WASM modules, or embedded devices with limited RAM.

In this sense, the semantic cache becomes a missing piece of infrastructure for edge AI. As more organizations push inference closer to the user, the need for a lightweight, deterministic, low-memory semantic lookup system grows. The limitations of edge platforms — from strict memory caps to rapid cold starts — make this a difficult problem, and the lack of suitable solutions has slowed the adoption of AI features outside centralized cloud environments. As WASM matures and edge utilities evolve, semantic caching may become a standard part of the AI pipeline, enabling faster, cheaper, and more privacy-preserving deployments across a wide range of devices.

Beyond Vector Databases: The Case for Local Semantic Caching

· 6 min read
Founder of VCAL Project

Originally published on Medium.com on November 6, 2025.
Read the Medium.com version


When “intelligence” wastes cycles

Most teams building LLM-powered products eventually realize that a large portion of their API costs come not from new insights, but from repeated questions.

A support bot, an internal assistant, or an analytics copilot, all encounter thousands of near-identical queries:

“How do I pass the API key to the local model gateway?”
“Why is the dev database connection timing out?”
“How can I refresh the cache without restarting the service?”

Each of those prompts gets re-tokenized, re-embedded, and re-sent to an LLM even when the model has already answered an equivalent question a minute earlier.

What do we have as a result? Burned tokens, wasted latency, and duplicated reasoning.

Vector databases solved storage, not reuse

The industry's first instinct was to throw vector databases at the problem. They excel at persistent embeddings and semantic retrieval, but they were never built for reuse. What they lack are TTL policies, eviction strategies, and atomic snapshotting of in-flight state. In other words, they store knowledge, not memory.

Traditional vector databases follow a key-value paradigm: they persist embeddings indefinitely so they can be queried later, much like records in a datastore. A semantic cache, by contrast, treats embeddings as dynamic memory — governed by similarity, expiration, and adaptive retention. Its goal is not to archive information, but to avoid redundant reasoning across millions of semantically similar requests.

With a semantic cache such as VCAL, cached answers can stay valid for days or weeks, depending on data volatility and TTL settings. This moves caching from short-term repetition avoidance to long-horizon semantic reuse where reasoning itself becomes a reusable resource rather than a recurring cost.

In essence, VCAL bridges the gap between data retrieval and cognitive efficiency, turning past computation into future acceleration.

From data stores to memory layers

In my previous Dev.to article, I explained how we built VCAL, a Rust-based semantic cache that sits between your app and the LLM. Instead of persisting every vector, it memorizes embeddings for a short time, indexed by semantic similarity and metadata.

When a new query arrives, VCAL compares it to cached vectors. If it is close enough — a cache hit — the LLM call is skipped, and the stored answer is returned in milliseconds. Otherwise, the request proceeds normally, and the response is stored for future matches.
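The control flow just described can be sketched in a few lines. The `embed` and `call_llm` callables, the list-based store, and the threshold value are all illustrative stand-ins; VCAL itself is a Rust library with an HNSW index, not this Python loop:

```python
import math

SIM_THRESHOLD = 0.9   # tunable: higher means stricter matching
cache = []            # (embedding, answer) pairs; VCAL uses an HNSW index instead

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def get_or_generate(query, embed, call_llm):
    """Return a cached answer on a semantic hit; otherwise call the model and store."""
    vec = embed(query)
    best = max(cache, key=lambda entry: cosine(vec, entry[0]), default=None)
    if best is not None and cosine(vec, best[0]) >= SIM_THRESHOLD:
        return best[1], "hit"        # skip the LLM entirely
    answer = call_llm(query)
    cache.append((vec, answer))      # remember for future matches
    return answer, "miss"

# Stand-ins for a real embedding model and LLM client:
fake_embed = lambda q: [1.0, 0.0] if "password" in q else [0.0, 1.0]
fake_llm = lambda q: f"answer to: {q}"

a1, s1 = get_or_generate("reset my password", fake_embed, fake_llm)        # miss
a2, s2 = get_or_generate("how to reset a password", fake_embed, fake_llm)  # hit
```

The second call never reaches the model: the two queries differ as strings but land close enough in embedding space.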

The design combines concepts from vector search and traditional caching systems, enhanced with features for resilience and monitoring:

  • HNSW index for ultra-fast approximate similarity search.
  • TTL and LRU eviction for automatic cache turnover.
  • Snapshotting for persistence between restarts.
  • Prometheus metrics for observability.

All of it runs on-prem, next to your model or gateway, with no remote dependencies.

Why local caching changes the economics

Unlike vector databases, a local semantic cache has one simple purpose: avoid redundant reasoning. Each avoided LLM call translates directly into saved tokens, lower API bills, and shorter response times.

In real deployments we’ve seen:

  • 30–60% reduction in LLM calls
  • Millisecond-level latency on repeated queries enabling near-real-time responsiveness
  • Predictable resource usage: no external round-trips, no cloud egress costs, and no multi-tenant contention

At scale, the more your users interact, the greater the savings become. Instead of paying per token for every repetition, you amortize prior reasoning across sessions and teams.
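A back-of-envelope calculation shows how hit rate translates into dollars. All numbers below are illustrative assumptions, not measurements; the hit rate is picked from the 30–60% range above, and the small residual cost of embedding cache hits is ignored:

```python
# Illustrative assumptions only: 1M requests/month, ~1,500 tokens per
# round-trip, a $3.00 blended price per million tokens, and a 40% hit
# rate. None of these are measured figures.
requests_per_month = 1_000_000
tokens_per_request = 1_500
price_per_million_tokens = 3.00
hit_rate = 0.40

total_tokens_millions = requests_per_month * tokens_per_request / 1_000_000
baseline_cost = total_tokens_millions * price_per_million_tokens
saved = baseline_cost * hit_rate

print(f"baseline ${baseline_cost:,.0f}/mo, saved ${saved:,.0f}/mo")
# → baseline $4,500/mo, saved $1,800/mo
```

The savings scale linearly with both traffic and hit rate, which is why repetitive workloads benefit most.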

And because VCAL runs inside your private environment, all caching and embeddings stay under your control, ensuring data privacy, compliance, and deterministic performance even in regulated industries.

A new layer in the AI stack

If you visualize the modern LLM stack, the simplified design looks like this:

User → Application → LLM Gateway → Model

or, if a RAG (Retrieval-Augmented Generation) framework is involved:

User → Application → Retriever → Vector DB → (context) → LLM Gateway → Model

Adding a semantic cache such as VCAL introduces this new dimension:

User → Application → Retriever → Vector DB → (context) → Semantic Cache → LLM Gateway → Model

Here, the cache checks whether a semantically equivalent query was already answered. If found, the response is returned instantly — skipping tokenization, embedding, and inference altogether. If not, the request continues as usual, and the new answer is stored for future reuse.

Vector databases still matter, but they belong to the knowledge layer, not the inference path. What has been missing so far is a memory layer that prevents repeated reasoning altogether. Semantic caching fills that missing “memory” slot in between. It is not a replacement for RAG; it is a complement. While RAG injects context, caching avoids duplication.

Engineering for low latency

Achieving millisecond response times in semantic caching requires more than just a fast similarity search algorithm. It’s the result of careful coordination between data structures, memory layout, and concurrency control.

The cache can be implemented efficiently in systems programming languages such as Rust, using an HNSW-based index for approximate nearest-neighbor search. HNSW provides logarithmic-scale query complexity while maintaining accuracy for large collections of embeddings, making it suitable for workloads that reach millions of cached entries.

Low latency also depends on predictable memory management and lock-free or fine-grained synchronization between threads. Instead of allocating and freeing vectors dynamically, embeddings are often stored in preallocated arenas or memory-mapped regions to minimize fragmentation and system calls. Parallel workers can update the index or evaluate similarity thresholds concurrently, so that retrieval scales with the number of available cores.
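The arena idea can be sketched as follows: one preallocated, contiguous buffer holds unit-normalized vectors, so lookup reduces to a dot-product scan. The dimensions, capacity, and brute-force search are illustrative stand-ins for a real HNSW-backed implementation:

```python
import math
from array import array

DIM = 4          # illustrative; production embeddings have hundreds of dimensions
CAPACITY = 1024

# One contiguous, preallocated float32 buffer for all embeddings: no
# per-insert allocation and cache-friendly sequential scans. A real
# system would layer an ANN index such as HNSW on top of this storage.
arena = array("f", [0.0] * (CAPACITY * DIM))
count = 0

def insert(vec):
    """Normalize and copy a vector into the next arena slot; return its index."""
    global count
    assert count < CAPACITY and len(vec) == DIM
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    base = count * DIM
    for i, x in enumerate(vec):
        arena[base + i] = x / norm
    count += 1
    return count - 1

def nearest(vec):
    """Return (index, cosine) of the closest stored vector.
    With unit vectors, cosine similarity is just a dot product."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    q = [x / norm for x in vec]
    best_i, best_s = -1, -2.0
    for j in range(count):
        base = j * DIM
        s = sum(q[i] * arena[base + i] for i in range(DIM))
        if s > best_s:
            best_i, best_s = j, s
    return best_i, best_s
```

Normalizing at insert time moves work off the query path, and the fixed buffer keeps memory usage predictable, exactly the properties a latency-sensitive cache needs.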

In practice, a semantic cache can be deployed as a lightweight service beside an inference gateway, communicating over local HTTP or gRPC. It can also be embedded directly into an application process when minimal overhead is required, for example, within an agent runtime or API handler.

The bigger picture

Caching has always been an invisible driver of performance — from CPU caches that reuse instructions and data, to CDNs that reuse content, to databases that reuse query results.

Each generation of systems extends the notion of what can be reused. As language models enter production, we are witnessing a shift toward semantic reuse: reusing meaning rather than data. This enables systems to recall previous reasoning instead of repeating it — a step toward more efficient and sustainable AI infrastructure.

In this new layer of the AI stack, semantic caching becomes a form of reasoning memory: it stores the results of understanding, not just raw data. Instead of recomputing the same insight across thousands of near-identical prompts, we can recall it instantly — with full control over latency, privacy, and cost.

Further reading

For readers interested in implementation details and open-source examples:


Thank you for reading! Semantic caching is still an emerging concept — every real-world use case helps shape how we think about efficient reasoning. Share yours in the comments if you’d like to join the conversation.

How I Created a Semantic Cache Library for AI

· 4 min read
Founder of VCAL Project

Originally published on Dev.to on October 27, 2025.
Read the Dev.to version


Have you ever wondered why LLM apps get slower and more expensive as they scale, even though 80% of user questions sound pretty similar? That’s exactly what got me thinking recently: why are we constantly asking the model the same thing?

That question led me down the rabbit hole of semantic caching, and eventually to building VCAL (Vector Cache-as-a-Library), an open-source project that helps AI apps remember what they’ve already answered.


The “Eureka!” Moment

It started while optimizing an internal support chatbot that ran on top of a local LLM. Logs showed hundreds of near-identical queries:

“How do I request access to the analytics dashboard?”
“Who approves dashboard access for my team?”
“My access to analytics was revoked — how do I get it back?”

Each one triggered a full LLM inference: embedding the query, generating a new answer, and consuming hundreds of tokens even though all three questions meant the same thing.

So I decided to create a simple library that would embed each question, compare it with questions submitted earlier, and, if one was similar enough, return the stored answer instead of generating a new LLM response, all before the model is ever asked.

I wrote a prototype in Rust — for performance and reliability — and designed it as a small vcal-core open-source library that any app could embed.

The first version of VCAL could:

  • Store and search vector embeddings in RAM using HNSW graph indexing
  • Handle TTL and LRU evictions automatically
  • Save snapshots to disk so it could restart fast
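The TTL-plus-LRU policy from that list can be modeled with an `OrderedDict` as the recency queue. This is an illustrative Python sketch, not VCAL's Rust implementation, and it matches on exact keys only to keep the example short (the real cache matches by embedding similarity):

```python
import time
from collections import OrderedDict

# Illustrative model of combined TTL + LRU eviction. The `now`
# parameter makes expiry testable without real clock time.
class TtlLruCache:
    def __init__(self, capacity, ttl_seconds):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.entries = OrderedDict()  # key -> (answer, inserted_at)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        item = self.entries.get(key)
        if item is None:
            return None
        answer, inserted_at = item
        if now - inserted_at > self.ttl:       # TTL: drop stale entries
            del self.entries[key]
            return None
        self.entries.move_to_end(key)          # LRU: refresh recency on hit
        return answer

    def put(self, key, answer, now=None):
        now = time.monotonic() if now is None else now
        self.entries[key] = (answer, now)
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:  # evict least recently used
            self.entries.popitem(last=False)
```

TTL bounds staleness while LRU bounds memory; together they keep the cache fresh and fixed-size without any background maintenance thread.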

Later came VCAL Server, a drop-in HTTP API version for teams that wanted to cache answers across multiple services, deployed on-prem or in the cloud.

Screenshot: Grafana dashboard showing cache hits and cost saving


What It Feels Like to Use

I didn’t want to build another vector database. Unlike one, VCAL isn’t designed for long-term storage or analytics; it is intentionally lightweight — a fast, in-memory semantic cache optimized for repeated LLM queries.

Integrating VCAL takes minutes.
Instead of calling the model directly, you send your query to VCAL first.
If a similar question has been asked before — and the similarity threshold can be tuned — VCAL returns the answer from its cache in milliseconds. If it’s a new question, VCAL asks the LLM, stores the result, and returns it.
Next time, if a semantically similar question comes in, VCAL answers instantly.

It’s like adding a memory layer between your app and the model — lightweight, explainable, and under your full control.

Flow diagram: user → VCAL → LLM


Lessons Learned

  • LLMs love redundancy. Once you start caching semantically, you realize how often people repeat the same question with different words.
  • Caching semantics ≠ caching text. Cosine similarity and vector distances matter more than exact matches.
  • Performance scales beautifully. A well-tuned cache can handle thousands of lookups per second, even on modest hardware.
  • It scales big. A single VCAL Server instance can comfortably store and serve up to 10 million cached answers in memory, depending on embedding dimensions and hardware.

What’s Next

We’re now working on a licensing server, enterprise snapshot formats, and RAG-style extensions, so teams can use VCAL not just for Q&A caching, but as the foundation for private semantic memory.

If you’re building AI agents, support desks, or knowledge assistants, you’ll likely benefit from giving your system a brain that remembers.

You can explore more at vcal-project.com: try the free 30-day Trial Evaluation of VCAL Server, or jump into the open-source vcal-core version on GitHub.


Thanks for reading!
If this resonates with you, please drop a comment. I’d love to hear how you’re approaching caching and optimization for AI apps.