# AI coding assistant guidelines

If you are using an AI coding assistant, don’t generate any code yet, but please make sure that you refer to these guidelines when generating Akka code later.

*Patterns and conventions for consistency, maintainability, and legibility*

The following is a set of guidelines and patterns that will help keep your Akka projects consistent, predictable, and highly readable. While you’re free to choose not to follow these guidelines, they’ve been tried and tested over the years by many teams. These guidelines apply only to the Akka SDK and do not cover usage of the lower-level Akka libraries.

## General

- Use the Akka SDK version 3.4 or later.
- Use the [Project structure](../concepts/architecture-model.html) and follow the `[org.application-name].[module-name].[api|application|domain]` package structure. There should be no dependencies from `domain` to `application`, and no dependencies from `application` to `api`. For example, the following are some package structures that follow these guidelines:
  - `com.northwind.trading.api`
  - `com.northwind.trading.application.StockEntity`
  - `com.acme.petclinic.domain.VisitRecord`
- Wherever possible, favor functional and fluent styles over classical imperative styles. For example, to filter a list of objects, favor `.stream().filter` rather than a mutating `for` loop.
- Use Java records for the domain model, including events, commands, requests, and responses. Records are more concise and are automatically serializable.
- Component classes should use a suffix on the class name to indicate their respective roles within the application. Domain records should not be suffixed. Suffixes can include (but are not limited to):
  - Entity
  - View
  - Workflow
  - Consumer
  - Agent
  - Endpoint

## Domain Model

- Prefer Java `Optional` for nullable values.
- Domain records should contain their own business logic, which includes mutation and validation.
- Prefer a domain model prefix on classes defined within the `application` or `domain` packages to differentiate them. For example, an application that contains both a shopping cart and an order would be expected to have `ShoppingCartEvent` and `OrderEvent` records.
- Never use unqualified class names like `Event`, `Workflow`, or `Agent`, unless nested inside another descriptive class or interface.
- A domain record should not emit effects, which belong to the `application` package.
- A domain record should be immutable.
- Make a copy of collections when updating them.
- To make single-field mutation easier, prefer a functional builder-style method, e.g. `withLocation(x)`, that creates a new instance with the new field value rather than explicitly setting the location field. A sketch of these conventions follows this list.
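To make these conventions concrete, here is a minimal sketch of an immutable, self-validating domain record; `VisitRecord` and its fields are illustrative assumptions, not part of the Akka SDK:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain record following the guidelines above: immutable,
// self-validating, with functional builder-style copy methods.
public record VisitRecord(String petId, String location, List<String> notes) {

  public VisitRecord {
    if (petId == null || petId.isBlank()) {
      throw new IllegalArgumentException("petId must not be empty");
    }
    notes = List.copyOf(notes); // defensive copy keeps the record immutable
  }

  // builder-style mutation: returns a new instance instead of setting a field
  public VisitRecord withLocation(String newLocation) {
    return new VisitRecord(petId, newLocation, notes);
  }

  // copies the collection when "updating" it
  public VisitRecord addNote(String note) {
    var updated = new ArrayList<>(notes);
    updated.add(note);
    return new VisitRecord(petId, location, List.copyOf(updated));
  }
}
```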
## Entities

- Command handlers only ever accept a single parameter and return effects. If a command handler needs to accept more than one parameter, those parameters should be wrapped inside a single command message.
- The command handler parameter and reply should be defined as an inner record of the entity, but sometimes it makes sense to place them in the `domain` package together with other data transfer and state records. The parameter and reply can also be a plain String, Java primitive, or class.
- When a command handler takes a single String or primitive parameter, you can use that directly as the method parameter. There is no need to create a wrapper command record.
- Commands should be defined using an imperative naming convention, described as instructions of some task to be performed. For example, a command requesting a shopping cart be checked out would be `ShoppingCartEntity.Checkout`.
- Command handlers are to be implemented in the entity and not in the state or domain object. Command handlers should invoke the domain object’s validation methods and the methods that encapsulate the core business logic. The entity’s command handler is responsible for choosing the appropriate effect(s) to return. The domain object should not return effects or a list of events. This keeps the domain model clean of infrastructural concerns and makes it easy to unit test in isolation.
- Command handlers that make updates without returning any information should return `akka.Done` for successful responses and `effects().error()` for validation errors.
- The state of an entity should be in the `domain` package, and is also known as the domain object.
- The `applyEvent` override should be a pure function transferring data from the event to the state and nothing else. It should never fail, because validations should have been made before persisting the event.
- Entity events should:
  - Be defined as records that implement a common sealed interface, such as `ShoppingCartEvent` with concrete events such as `ItemAdded` and `CheckedOut`.
  - Be defined in the `domain` package. While an entity’s events are only ever emitted by that entity, large enough code bases will benefit from having all of the events in an easy-to-find place.
  - Have a `@TypeName` annotation to provide a serialization key, as in the sketch below.
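A minimal sketch of these event conventions, using the `ShoppingCartEvent` example named above; the event fields and type names are illustrative assumptions:

```java
import java.time.Instant;
import akka.javasdk.annotations.TypeName;

// Events implement a common sealed interface and carry a @TypeName
// serialization key; field shapes here are assumptions for illustration.
public sealed interface ShoppingCartEvent {

  @TypeName("item-added") // stable serialization key
  record ItemAdded(String productId, int quantity) implements ShoppingCartEvent {}

  @TypeName("checked-out")
  record CheckedOut(Instant when) implements ShoppingCartEvent {}
}
```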
## Workflows

- Command handlers only ever accept a single parameter and return effects. If a command handler needs to accept more than one parameter, those parameters should be wrapped inside a single command message.
- The command handler parameter and reply should be defined as an inner record of the workflow, but sometimes it makes sense to place them in the `domain` package together with other data transfer and state records. The parameter and reply can also be a plain String, Java primitive, or class.
- The state of a workflow can be defined as a record inside the workflow, but if it contains much business logic it should be in the `domain` package, also known as the domain object.
- Use methods returning `StepEffect` for each step of the workflow. Don’t use the old, deprecated `definition` method. Prefer adding a `@StepName` annotation to the step methods to give them a stable identifier.
- Workflows should be explicit about timeouts and retries whenever appropriate. When calling agents, the step timeout must be rather long (60 seconds) since LLM response times can be long. A workflow that orchestrates agents should define `defaultStepRecovery` in the `WorkflowSettings` with a limited number of retries, to avoid too many (costly) retries to the LLM.

## Agents

- An Agent can only have one single command handler method.
- Command handlers only ever accept a single parameter and return effects. If a command handler needs to accept more than one parameter, those parameters should be wrapped inside a single command message.
- The command handler parameter and reply should be defined as an inner record of the agent, but sometimes it makes sense to place them in the `domain` package together with other data transfer and state records. The parameter and reply can also be a plain String, Java primitive, or class.
- Agent classes should be stateless and should not hold mutable state themselves.
- The agent session id is typically a UUID to guarantee uniqueness for new agent interactions, but it can be shared between different agents that participate in the same session.
- Prefer to use a default model defined in config.

## Views

- View query methods only ever accept a single parameter. If a query needs more than one parameter, those parameters should be wrapped inside a single message.
- The query method parameter and reply should be defined as an inner record of the view. The parameter and reply can also be a plain String, Java primitive, or class.
- View row records should be public, and be named `Entry` or something more specific like `ForecastEntry`.
- A query that can potentially select many rows should return a record that has a single field containing the list of results, e.g. `record ForecastEntries(List<ForecastEntry> forecasts)` where `ForecastEntries` is the type of the query result.
- The query should in that case be `SELECT * AS forecasts FROM weather_forecast WHERE …`, as in the sketch below.
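A sketch of these view conventions, assuming the `weather_forecast` table and `ForecastEntry` row from the guideline’s own example; `ForecastEntity` and the table-to-updater naming are illustrative assumptions:

```java
import java.util.List;
import akka.javasdk.annotations.Component;
import akka.javasdk.annotations.Consume;
import akka.javasdk.annotations.Query;
import akka.javasdk.view.TableUpdater;
import akka.javasdk.view.View;

@Component(id = "weather-forecast-view")
public class WeatherForecastView extends View {

  // public row record with a specific name, per the guidelines
  public record ForecastEntry(String city, String forecast) {}

  // a many-row result wraps the list of rows in a single field
  public record ForecastEntries(List<ForecastEntry> forecasts) {}

  // updater populating the weather_forecast table; ForecastEntity is hypothetical
  @Consume.FromKeyValueEntity(ForecastEntity.class)
  public static class WeatherForecast extends TableUpdater<ForecastEntry> {}

  // SELECT * AS forecasts maps the selected rows into the single list field
  @Query("SELECT * AS forecasts FROM weather_forecast WHERE city = :city")
  public QueryEffect<ForecastEntries> forecastsByCity(String city) {
    return queryResult();
  }
}
```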
## HTTP Endpoints

- Implement HTTP endpoints with the `@HttpEndpoint` and path annotations.
- Omit the root path in the `@HttpEndpoint` annotation if there is no common path for the endpoint methods.
- HTTP endpoints should not have a `@Component` annotation.
- Endpoints should extend `AbstractHttpEndpoint` if they need additional request context such as headers and JWT.
- Endpoints that are only for internal use should be secured with the `service = *` attribute in the `@Acl` annotation, indicating that any service can call it but the Internet principal is not allowed.
- Request and response types should be defined as records inside the endpoint. Request and response types can also be a plain String or Java primitive.
- The endpoint should not directly return or expose domain or application types. Include a `toApi` conversion method if the response type includes many fields or nested records. `toApi` also makes for a good function to pass into a streamed mapper.
- Endpoints should return responses directly and not use `CompletionStage`, favoring a synchronous code style.
- Endpoints should prefer to return concrete, application API specific types and only use `akka.http.javadsl.model.HttpResponse` when needing to stream a response or return a specific status code.
- If an endpoint method creates or updates anything, it should return `HttpResponses.created()` or `HttpResponses.ok()` accordingly. Use the factory methods in `akka.javasdk.http.HttpResponses` to create the `HttpResponse`.
- Use the `ComponentClient`, which is constructor-injected via dependency injection, when communicating with other components. Use the combination of `method` followed by `invoke` to prefer a synchronous style rather than using `invokeAsync`.

## gRPC Endpoints

- The service and message definitions for the endpoint go in the `src/main/proto` folder in the project.
- gRPC endpoints should not have a `@Component` annotation.
- gRPC endpoints should extend `AbstractGrpcEndpoint` if they need additional request context such as metadata and JWT.
- The gRPC endpoint class should implement the interface generated by the protobuf definition. A service defined as `CustomerGrpcEndpoint` in the `.proto` file will be generated as a class with the same name, `CustomerGrpcEndpoint`. This class is then extended by the developer-created endpoint class. Add the `Impl` suffix to the end of the name of the developer’s class to distinguish it from the protobuf-generated class.
- gRPC endpoints that are only for internal use should be secured with the `service = *` attribute in the `@Acl` annotation, indicating that any service can call it but the Internet principal is not allowed.
- Return the types generated by the protobuf definition from the API methods, converting from the domain objects to the protobuf types with the appropriate private `toApi` methods on the endpoint class.

## Testing

- Extend `TestKitSupport` for integration tests or component interaction testing.
- Integration tests should have the `IntegrationTest` suffix.
- Use `EventSourcedTestKit` for unit tests of an event sourced entity. For key value entities, use `KeyValueEntityTestKit`, and for timed actions use `TimedActionTestKit`. Create a new instance of the test kit for each test method, as in the sketch below.
- Explicitly supply entity IDs for the new test kit instances to avoid confusion.
- Use JUnit 5+ annotations (`@Test` etc.).
- Use `assertThat` from `assertj`.
- Use `componentClient` from the `TestKitSupport` for testing components (there is no “mock” component client).
- Use `TestModelProvider` to mock responses from the AI model when testing agents.
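A sketch of a unit test following these conventions; the `ShoppingCartEntity` command and event shapes are illustrative assumptions, and the entity-ID overload of `EventSourcedTestKit.of` is assumed per the guideline above:

```java
import static org.assertj.core.api.Assertions.assertThat;

import akka.Done;
import akka.javasdk.testkit.EventSourcedTestKit;
import org.junit.jupiter.api.Test;

class ShoppingCartEntityTest {

  @Test
  void shouldAddItem() {
    // a new test kit instance per test method, with an explicit entity ID
    var testKit = EventSourcedTestKit.of("cart-1", ShoppingCartEntity::new);

    // invoke the command handler; AddItem is a hypothetical command record
    var result = testKit
      .method(ShoppingCartEntity::addItem)
      .invoke(new ShoppingCartEntity.AddItem("tv", 1));

    // update commands reply with akka.Done and persist an event
    assertThat(result.getReply()).isEqualTo(Done.getInstance());
    assertThat(result.getNextEventOfType(ShoppingCartEvent.ItemAdded.class).productId())
      .isEqualTo("tv");
  }
}
```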
# Developer best practices

## Reactive principles

Akka is ideally suited for the creation of *Microservices*. Microservices generally follow the Unix philosophy of "Do one thing and do it well." Akka allows developers to build systems that follow [the Reactive Principles](https://principles.reactive.foundation/) without having to become distributed data or distributed computing experts. As a best practice, following the Reactive Principles in your design makes it easier to build distributed systems. [Akkademy](https://akkademy.akka.io/learn/public/catalog/view/3) offers free courses on Reactive Architecture.

## Domain Driven Design

Domain-driven design (DDD) is the concept that the structure and language of software code (class names, class methods, class variables) should match the business domain. For example, if software processes loan applications, it might have classes such as LoanApplication and Customer, and methods such as AcceptOffer and Withdraw.

— [Wikipedia](https://en.wikipedia.org/wiki/Domain-driven_design)

Akka makes it easy and fast to build services using the concepts of Domain Driven Design (DDD). While it’s not necessary to understand all the ins and outs of Domain Driven Design, you’ll find below a few of the concepts that make building services even more straightforward. See [Project structure](../concepts/architecture-model.html) for more information on the role of your domain model in Akka. Akkademy provides a free course on [Domain Driven Design](https://akkademy.akka.io/learn/courses/6/reactive-architecture2-domain-driven-design).

### Bounded context

[Bounded context](https://martinfowler.com/bliki/BoundedContext.html) is a concept that divides large domain models into smaller groups that are explicit about their interrelationships. Normally, a microservice is a bounded context. You *may* choose to have multiple bounded contexts in a microservice. Each of these contexts will have autonomy to evolve the models it owns. Keeping each model within strict boundaries allows different modelling for entities that look similar but have slightly different meanings in each of the contexts. Each bounded context should have its own domain, application, and API layers as described in [Project structure](../concepts/architecture-model.html).

![Bounded Context](_images/bounded-context.svg)

### Events first

Defining your data structures first, and splitting them into bounded contexts, will also help you think about all the different interactions your data needs to have. These interactions, like `ItemAddedToShoppingCart` or `LightbulbTurnedOn`, are the events that are persisted and processed in Akka. Defining your data structures first makes it easier to think about which events are needed and where they fit into your design. These data structures and events will live in your domain model layer as described in [Project structure](../concepts/architecture-model.html).

### Message migration

Behind the scenes, everything in Akka is ultimately message driven. Plan for the evolution of your messages. Commands and events use data structures that often must evolve over time. Fields get added, fields get removed, and the meaning of fields may change. How do you handle this over time as your system grows and evolves? Akka will make this easier, but you are ultimately responsible for schema evolution.

## Right-sizing your services

Each Akka Service consists of one or more Components and is packaged and deployed as a unit. Akka services are deployed to Akka Projects. Thus, when you couple multiple business concerns by packaging them in the same service, even under separate bounded contexts, you limit the runtime’s ability to make the most efficient decisions to scale up or down.

### How big to make your services

Deciding how many components and concepts to fit into a single service can be complex. Generally, smaller is better, hence the name *microservices* often being used. When you design a series of small services that don’t share code and can be deployed independently, you reap these benefits:

- **Your development velocity is higher**. It is faster and less complex to write and debug them because they focus on a small set of operations, usually around a single business concern (be it with one or multiple types of *Entities*).
- **Your operating velocity is higher**. Using smaller independent services simplifies operational concerns and provides scalability because they can be deployed, stopped, and started independently of the rest of the system.
- **You can scale the services independently** to handle variations in load gracefully. If properly designed, multiple instances of the service can be started up when necessary to support more load: for example, if your system runs on Black Friday and the shopping cart service gets super busy, you can spin up more shopping carts to handle that load without also having to start up more copies of the catalog service. When the load decreases, these extra instances can be removed again, minimizing resource usage.
- **You reduce the failure domain / fault boundary**. Independent services handle failures gracefully. Components interact asynchronously with the rest of the world through messages and *commands*. If one instance, or even a whole service, fails, it is possible for the rest of the system to keep going with reduced capabilities.
This prevents cascading failures that take down entire systems.
- **Your development team is more productive**. A team can focus on features of a single service at a time, without worrying about what other services or teams are doing, or when they are releasing. This allows more parallel teams to focus on other services, letting your development efforts scale as needed.
- **You gain flexibility for upgrades**. You can upgrade services in a "rolling" fashion, where new instances are started before older instances are removed, allowing new versions to be deployed with no downtime or interruption.
- **You gain security**. Services serve as a security boundary both in your system overall and between teams.
- **You get granular visibility into costs**. Services are all billed separately, so it’s easier to see and understand costs and billing on a per-service basis if you break your services up in some way that matches your organizational needs overall.

## Message deduplication

In the realm of distributed systems, Akka embraces an at-least-once delivery guarantee for components like Consumers or Views (view updaters). Redeliveries occur in distributed systems due to their inherent uncertainty and failure characteristics. Network failures, process crashes, restarts, and temporary unavailability of nodes can all lead to situations where an acknowledgment for a delivered message is lost, even if the recipient successfully processed it. To ensure eventual consistency and guarantee delivery, the sender must retry messages when acknowledgments are missing.

| | When consuming from Akka components like an Event Sourced Entity, a Key Value Entity, or another Akka service, the Akka runtime not only guarantees at-least-once delivery, but also the order of messages. This means that a series of duplicated messages might be redelivered, but always in the same order as they were produced. |

To ensure system integrity, consumers must be capable of handling duplicate messages gracefully. Effective deduplication is not just an optimization — it’s a core architectural requirement, turning the challenge of message redeliveries into a structured and predictable system behavior. There is no one-size-fits-all solution to this challenge. Usually it’s a mix of business requirements and possible technical tricks in a given context.

### Idempotent updates

The most common approach to deduplication is to make the processing of messages idempotent. An idempotent operation is one that can be applied multiple times without changing the result beyond the initial application. This means that if the same message is processed multiple times, the result will be the same as if it were processed only once. To demonstrate this, let’s consider a simple example of a `CustomerStore` that persists customer data outside the Akka ecosystem.

[CustomerStore.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/application/CustomerStore.java)
```java
public class CustomerStore {

  public CompletionStage<Optional<Customer>> getById(String customerId) { }

  public CompletionStage<Done> save(String customerId, Customer customer) { }
}
```

A consumer implementation that updates such a store is written in an idempotent way.
[CustomerStoreUpdater.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/application/CustomerStoreUpdater.java)
```java
@Component(id = "customer-store-updater")
@Consume.FromEventSourcedEntity(CustomerEntity.class)
public class CustomerStoreUpdater extends Consumer {

  private final CustomerStore customerStore;

  public CustomerStoreUpdater(CustomerStore customerStore) {
    this.customerStore = customerStore;
  }

  public Effect onEvent(CustomerEvent event) { // (1)
    var customerId = messageContext().eventSubject().get();
    return switch (event) {
      case CustomerCreated created -> effects()
        .asyncDone(
          customerStore.save(
            customerId,
            new Customer(created.email(), created.name(), created.address())
          )
        );
      case NameChanged nameChanged -> effects()
        .asyncDone(
          customerStore
            .getById(customerId)
            .thenCompose(customer -> {
              if (customer.isPresent()) {
                return customerStore.save(
                  customerId,
                  customer.get().withName(nameChanged.newName())
                );
              } else {
                throw new IllegalStateException("Customer not found: " + customerId);
              }
            })
        );
      case AddressChanged addressChanged -> effects()
        .asyncDone(
          customerStore
            .getById(customerId)
            .thenCompose(customer -> {
              if (customer.isPresent()) {
                return customerStore.save(
                  customerId,
                  customer.get().withAddress(addressChanged.address())
                );
              } else {
                throw new IllegalStateException("Customer not found: " + customerId);
              }
            })
        );
    };
  }
}
```

| **1** | Processing each event is idempotent. Duplicated events will not change the state of the store. |

Remember to test your idempotent operations. The [CustomerStoreUpdaterTest](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/application/CustomerStoreUpdaterTest.java) demonstrates how it can be done with the `EventingTestKit`.

| | Consumers, Views, Workflows, and Entities are single writers for a given entity id. There are no concurrent updates. Messages (events or commands) are processed sequentially by a single instance for the entity id. Therefore, there is no need for things like optimistic locking. |

**Key Considerations**

- Ensure that operations like database inserts or state changes are idempotent.
- Evaluate the trade-off between complexity and storage requirements for maintaining idempotency.

### Events enrichment

Some updates are inherently not idempotent. A good example might be calculating and storing some value in a View based on a series of events. Processing a single event twice will corrupt the result.

[CounterByValueView.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterByValueView.java)
```java
@Component(id = "counter-by-value")
public class CounterByValueView extends View {

  public record CounterByValue(String name, int value) {}

  @Consume.FromEventSourcedEntity(CounterEntity.class)
  public static class CounterByValueUpdater extends TableUpdater<CounterByValue> {

    public Effect<CounterByValue> onEvent(CounterEvent counterEvent) {
      var name = updateContext().eventSubject().get();
      var currentRow = rowState();
      var currentValue = Optional.ofNullable(currentRow).map(CounterByValue::value).orElse(0);
      return switch (counterEvent) {
        case ValueIncreased increased -> effects()
          .updateRow(new CounterByValue(name, currentValue + increased.value())); // (1)
        case ValueMultiplied multiplied -> effects()
          .updateRow(new CounterByValue(name, currentValue * multiplied.multiplier())); // (2)
      };
    }
  }
}
```

| **1** | Handling `ValueIncreased` is not idempotent. |
| **2** | Handling `ValueMultiplied` is not idempotent. |

In such cases we can use a technique called *events enrichment*. The idea is to keep in the event not only delta information but also other pre-calculated values that are (or will be) necessary for downstream consumers. Event modelling is a key part of the system design. A consumer that has all the necessary information in the event can be more independent and less error-prone. Of course, a balance must be found between the size of the event and the simplicity of its processing.

[CounterEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/domain/CounterEvent.java)
```java
public sealed interface CounterEvent {
  record ValueMultiplied(int multiplier, int updatedValue) // (1)
    implements CounterEvent {}
}
```

| **1** | The `ValueMultiplied` event contains not only the delta information in the `multiplier` field but also the pre-calculated `updatedValue` of the counter. |

The updated version of the `CounterByValueUpdater` can again be idempotent.

[CounterByValueViewEnrichment.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterByValueViewEnrichment.java)
```java
@Consume.FromEventSourcedEntity(CounterEntity.class)
public static class CounterByValueUpdater extends TableUpdater<CounterByValueEntry> {

  public Effect<CounterByValueEntry> onEvent(CounterEvent counterEvent) {
    var name = updateContext().eventSubject().get();
    return switch (counterEvent) {
      case ValueIncreased increased -> effects()
        .updateRow(new CounterByValueEntry(name, increased.updatedValue())); // (1)
      case ValueMultiplied multiplied -> effects()
        .updateRow(new CounterByValueEntry(name, multiplied.updatedValue())); // (1)
    };
  }
}
```

| **1** | Using the pre-calculated `updatedValue` from the event. |

| | Overloading event payloads with excessive data, such as embedding entire entity state, can lead to bloated events, increased storage costs, and unnecessary data duplication. Instead, events should carry just enough context, maintaining a balance between enrichment and efficiency. |

**Benefits**

- Enables idempotent processing of enriched events.
- Reduces coupling between producers and consumers.

**Challenges**

- Increases event size, requiring a balance between richness and efficiency.
- Requires careful schema design, with the potential for schema evolution challenges.

### Sequence number tracking

For cases when events enrichment is not possible or not desired, sequence number tracking can be used. The idea is to keep track of the sequence number of the last processed event and ignore any event with a sequence number lower than or equal to the last processed one.

| | A monotonically increasing sequence number is available only when consuming updates from Akka components like an Event Sourced Entity, a Key Value Entity, or another Akka service. The sequence number is **not globally unique**, but unique per entity instance. |

| | The Akka View component has built-in support for sequence number tracking. [CounterByValueViewTest](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/test/java/counter/application/CounterByValueViewTest.java) demonstrates how it can be verified. |

Let’s assume that we want to populate a view storage outside the Akka ecosystem. To focus on the deduplication aspect, the following snippet shows an in-memory implementation of the `CounterStore`.
[CounterStore.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterStore.java)
```java
public class CounterStore {

  public record CounterEntry(String counterId, int value, long seqNum) {} // (1)

  private Map<String, CounterEntry> store = new ConcurrentHashMap<>();

  public CompletionStage<Optional<CounterEntry>> getById(String counterId) {
    return completedFuture(Optional.ofNullable(store.get(counterId)));
  }

  public CompletionStage<Done> save(CounterEntry counterEntry) {
    return completedFuture(store.put(counterEntry.counterId(), counterEntry)).thenApply(
      __ -> done()
    );
  }

  public CompletionStage<Collection<CounterEntry>> getAll() {
    return completedFuture(store.values());
  }
}
```

| **1** | The read model keeps track of the last processed sequence number. |

The Consumer component uses the sequence number to deduplicate events.

[CounterStoreUpdater.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterStoreUpdater.java)
```java
@Component(id = "counter-store-updater")
@Consume.FromEventSourcedEntity(CounterEntity.class)
public class CounterStoreUpdater extends Consumer {

  private final CounterStore counterStore;

  public CounterStoreUpdater(CounterStore counterStore) {
    this.counterStore = counterStore;
  }

  public Effect onEvent(CounterEvent counterEvent) {
    var counterId = messageContext().eventSubject().get();
    var newSeqNum = messageContext().metadata().asCloudEvent().sequence();
    return effects()
      .asyncEffect(
        counterStore
          .getById(counterId) // (1)
          .thenApply(counterEntry -> {
            var currentSeqNum = counterEntry.map(CounterEntry::seqNum).orElse(0L);
            if (!newSeqNum.isPresent()) { // (2)
              // missing sequence number, can't deduplicate
              return processEvent(counterEvent, counterEntry, 0L);
            } else {
              if (newSeqNum.get() <= currentSeqNum) {
                // duplicate, can be ignored
                return effects().ignore(); // (3)
              } else {
                // not a duplicate
                return processEvent(counterEvent, counterEntry, newSeqNum.get()); // (4)
              }
            }
          })
      );
  }

  private Effect processEvent(
    CounterEvent counterEvent,
    Optional<CounterEntry> currentEntry,
    Long seqNum
  ) {
    var counterId = messageContext().eventSubject().get();
    var currentValue = currentEntry.map(CounterEntry::value).orElse(0);
    return switch (counterEvent) {
      case ValueIncreased increased -> {
        var updatedEntry = new CounterEntry(
          counterId,
          currentValue + increased.value(),
          seqNum
        );
        yield effects().asyncDone(counterStore.save(updatedEntry)); // (5)
      }
      case ValueMultiplied multiplied -> {
        var updatedEntry = new CounterEntry(
          counterId,
          currentValue * multiplied.multiplier(),
          seqNum
        );
        yield effects().asyncDone(counterStore.save(updatedEntry)); // (5)
      }
    };
  }
}
```

| **1** | Loads the existing entry for a given entity ID. |
| **2** | When the sequence number is not available, deduplication is disabled. |
| **3** | When the sequence number is lower than or equal to the last processed one, the event is ignored. |
| **4** | Otherwise, the event is processed and the last processed sequence number is updated. |
| **5** | Updates are not idempotent, but the deduplication mechanism ensures that the view is correct in case of processing duplicates. |

Keep in mind that a `CounterEntry` corresponds to a single entity instance; that’s why we can use the sequence number as a deduplication token.

It’s important to test your deduplication mechanism. The [CounterStoreUpdaterTest](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/test/java/counter/application/CounterStoreUpdaterTest.java) demonstrates how it can be done with the `EventingTestKit`.
**Benefits**

- Tracking sequence numbers is very effective, and the additional storage overhead is minimal.

**Challenges**

- Only works per entity instance; it can’t be used globally.

### Deterministic hashing

When calling an external system or other Akka components/services from an Akka Consumer, deduplication might require sending the same token for the same request. Based on that token, the receiver can deduplicate the request. A potential candidate for such a token might be the sequence number of the event. Unfortunately, the sequence number is not globally unique, so the same token might be used for requests based on processing events from two different entity instances. To solve this problem, a technique called *deterministic hashing* can be used. The idea is to use a deterministic hash of the event data to generate stable and unique deduplication tokens. The hash might be calculated from the event payload, but very often payloads themselves are not globally unique and might be expensive to hash. The minimal set of fields that uniquely identify the event are the subject (entity ID) and the sequence number from the metadata.

```java
public Effect handle(Event event) {
  var entityId = messageContext().eventSubject().get();
  var sequenceNumber = messageContext().metadata().asCloudEvent().sequence().get();
  var token = UUID.nameUUIDFromBytes((entityId + sequenceNumber).getBytes()); // (1)
  someService.doSomething(event, token.toString());
  return effects().done();
}
```

| **1** | The deduplication token is calculated from the entity ID and the sequence number. |

The token might also be [precalculated](#events-enrichment) and stored in the event payload. In that case the consumer can use it directly.

**Benefits**

- Useful for cross-service or cross-component communication.

**Challenges**

- Choosing the right hashing algorithm (e.g., SHA-256 vs. MD5) for a balance between collision resistance and performance.
- The receiver must be able to deduplicate based on the token, which, in most cases, has the same limitations. See [request deduplication](#request-deduplication).

## Request deduplication

A different aspect of deduplication is how to deal with possibly duplicated incoming commands that mutate Akka stateful components. Let’s examine this based on a `WalletEntity` example.

[WalletEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/wallet/application/WalletEntity.java)
```java
@Component(id = "wallet")
public class WalletEntity extends EventSourcedEntity<Wallet, WalletEvent> {

  public Effect<WalletResult> deposit(Deposit deposit) { // (1)
    if (currentState().isEmpty()) {
      return effects().error("Wallet does not exist");
    } else {
      List<WalletEvent> events = currentState().handle(deposit);
      return effects().persistAll(events).thenReply(__ -> new WalletResult.Success());
    }
  }
}
```

| **1** | Processing the same `Deposit` command twice would corrupt the wallet state. |

To secure the entity from processing the same command multiple times, we must start by extending the command model with a deduplication token, called `commandId` in our case.
[WalletCommand.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/wallet/domain/WalletCommand.java)
```java
public sealed interface WalletCommand {
  String commandId();

  record Withdraw(String commandId, int amount) implements WalletCommand {} // (1)

  record Deposit(String commandId, int amount) implements WalletCommand {} // (1)
}
```

| **1** | All commands that require deduplication have a `commandId` field. |

The information about already processed commands must be stored in the entity state. The simplest way is to keep a collection of processed command IDs.

[Wallet.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/wallet/domain/Wallet.java)
```java
public record Wallet(String id, int balance, LinkedHashSet<String> commandIds) { // (1)

  public static final int COMMAND_IDS_MAX_SIZE = 1000;

  public List<WalletEvent> handle(WalletCommand command) {
    if (commandIds.contains(command.commandId())) { // (2)
      logger.info("Command already processed: [{}]", command.commandId());
      return List.of();
    }
    return switch (command) {
      case WalletCommand.Deposit deposit -> List.of(
        new WalletEvent.Deposited(command.commandId(), deposit.amount()) // (3)
      );
      case WalletCommand.Withdraw withdraw -> List.of(
        new WalletEvent.Withdrawn(command.commandId(), withdraw.amount()) // (3)
      );
    };
  }

  public Wallet applyEvent(WalletEvent event) {
    return switch (event) {
      case WalletEvent.Created created -> new Wallet(
        created.walletId(),
        created.initialBalance(),
        new LinkedHashSet<>()
      );
      case WalletEvent.Withdrawn withdrawn -> new Wallet(
        id,
        balance - withdrawn.amount(),
        addCommandId(withdrawn.commandId())
      );
      case WalletEvent.Deposited deposited -> new Wallet(
        id,
        balance + deposited.amount(),
        addCommandId(deposited.commandId())
      );
    };
  }

  private LinkedHashSet<String> addCommandId(String commandId) {
    if (commandIds.size() >= COMMAND_IDS_MAX_SIZE) { // (4)
      commandIds.removeFirst();
    }
    commandIds.add(commandId);
    return commandIds;
  }
}
```

| **1** | A collection of processed command IDs. |
| **2** | Before we process the command, we check whether it was already processed. |
| **3** | To rebuild the state, we need to store the command ID in the event. |
| **4** | To keep the collection size constrained, we can remove old command IDs. |

This simple solution reveals a few important limitations of deduplication that are common across many distributed technologies. It’s very expensive (and often not possible) to have total deduplication of all incoming requests/commands. There will always be some constraints, like:

- the size of the collection, e.g. keep only the last 1000 command IDs, like in the example above,
- the time window, e.g. keep only command IDs from the last 24 hours,
- both combined, e.g. keep only command IDs from the last 24 hours, but not more than 1000 (sketched below).

Production-ready deduplication should take these limitations into account in the context of the expected load. Also, the choice of collection type should be evaluated against more efficient data structures. Keep in mind that using collection types not supported by Jackson serialization will require custom serialization for that field. See the `@JsonSerialize` annotation for more details.

**Benefits**

- The solution doesn’t require additional infrastructure to store already processed command IDs.

**Challenges**

- Memory and storage requirements for keeping command IDs.
- Performance considerations for an effective data structure for command IDs.
- Custom serialization for non-standard collection types.
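A sketch (not from the sample) of the combined time-window and size constraint mentioned above, kept as an immutable domain record; all names here are illustrative:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.LinkedHashMap;

// Hypothetical dedup state: command IDs are kept with their arrival time and
// pruned both by a 24h window and by a maximum size, oldest entries first.
public record ProcessedCommands(LinkedHashMap<String, Instant> commandIds) {

  public static final Duration WINDOW = Duration.ofHours(24);
  public static final int MAX_SIZE = 1000;

  public boolean alreadyProcessed(String commandId) {
    return commandIds.containsKey(commandId);
  }

  public ProcessedCommands add(String commandId, Instant now) {
    var updated = new LinkedHashMap<>(commandIds); // copy, keep the record immutable
    updated.put(commandId, now);
    // prune entries older than the time window
    updated.entrySet().removeIf(e -> e.getValue().isBefore(now.minus(WINDOW)));
    // then prune by size, removing insertion-order-oldest entries
    while (updated.size() > MAX_SIZE) {
      updated.remove(updated.keySet().iterator().next());
    }
    return new ProcessedCommands(updated);
  }
}
```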
# Developing

| | **New to Akka? Start here:** Use the [Build your first agent](../getting-started/author-your-first-service.html) guide to get a simple agentic service running locally and interact with it. |

The Akka SDK provides you with proven design patterns that enable your apps to remain responsive to change. It frees you from infrastructure concerns and lets you focus on the application logic. With its few, concise components, the Akka SDK is easy to learn, and you can develop services in quick, iterative steps by running your code locally with full insight through Akka’s console.

Akka services let you build REST endpoints with flexible access control and multiple ways to expose these endpoints to their consuming systems or applications. Akka is secure by default, and you explicitly express the desired access through code and configuration.

Akka encapsulates data together with the logic to access and modify it. The data itself is expressed in regular Java records (plain old Java objects). The same goes for the events that change the data; these are expressed in pure Java to reflect business events that lead to data updates. Akka enables you to build fully event-driven services by combining logic and data into one thing: entities.

Data and changes to it are managed by Akka’s runtime without the need to manage database storage. Changes to your data can be automatically replicated to multiple places, not only within a single service, but also across applications and even cloud providers. An SQL-like language lets you design read access that ensures the data is properly indexed for your application needs.

Integrations with message systems like Kafka are already built in, and the Akka SDK enables message consumers to listen to topics and queues.

## Prerequisites

The following are required to develop services with the Akka SDK:

- Java 21, we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/)
- [Apache Maven](https://maven.apache.org/install.html) version 3.9 or later
- `curl` command-line tool
- [Docker Engine](https://docs.docker.com/get-started/get-docker/) 27 or later

## Getting Started

Follow [Build your first agent](../getting-started/author-your-first-service.html) to implement your first agentic service. If you prefer to first explore working example code, you can check out [A simple shopping cart service](../getting-started/shopping-cart/build-and-deploy-shopping-cart.html) or our other [samples](../getting-started/samples.html). On the other hand, if you would rather spend some time exploring our documentation, here are some main features you will find in this section:

- [Agents](agents.html)
- [Event Sourced Entities](event-sourced-entities.html)
- [Key Value Entities](key-value-entities.html)
- [HTTP Endpoints](http-endpoints.html)
- [gRPC Endpoints](grpc-endpoints.html)
- [MCP Endpoints](mcp-endpoints.html)
- [Views](views.html)
- [Workflows](workflows.html)
- [Timed Actions](timed-actions.html)
- [Consuming and Producing](consuming-producing.html)

# Components

Akka components form the backbone of the [application layer](../../concepts/architecture-model.html#_application), bridging your domain model with the Akka runtime.
They provide specialized ways to handle state, events, and interactions, enabling you to build scalable, event-driven applications while focusing on business logic. The following components are available:

![agent](../../_images/agent.png) Agents
![event sourced entity](../../_images/event-sourced-entity.png) Event Sourced Entities
![key value entity](../../_images/key-value-entity.png) Key Value Entities
![endpoint](../../_images/endpoint.png) HTTP Endpoints
![endpoint](../../_images/endpoint.png) gRPC Endpoints
![endpoint](../../_images/endpoint.png) MCP Endpoints
![view](../../_images/view.png) Views
![workflow](../../_images/workflow.png) Workflows
![timer](../../_images/timer.png) Timed Actions
![consumer](../../_images/consumer.png) Consumers

# Implementing agents

![Agent](../_images/agent.png)

An Agent interacts with an AI model to perform a specific task. It is typically backed by a large language model (LLM). It maintains contextual history in a session memory, which may be shared between multiple agents that are collaborating on the same goal. It may provide function tools and call them as requested by the model.

## Identify the agent

Every component in Akka needs to be identifiable by the rest of the system. This usually involves two different forms of identification: a **component ID** and an **instance ID**. We use component IDs as a way to identify the component *class* and distinguish it from others. Instance identifiers are, as the name implies, unique identifiers for an instance of a component.

As with all other components, we supply an identifier for the component class using the `@Component` annotation. In the case of agents, we don’t supply a unique identifier for the instance of the agent. Instead, we supply an identifier for the *session* to which the agent is bound. This lets you have multiple components with different component IDs all performing various agentic tasks within the same shared session.

## Agent’s effect API

Effects are declarative in nature. When components handle commands, they can return an `Effect`. Some components can produce only a few effects while others, such as the Agent, can produce a wide variety. The Agent’s Effect defines the operations that Akka should perform when an incoming command is handled by an Agent. These effects can be any of the following:

- declare which model will be used
- specify system messages, user messages and additional context (prompts)
- configure session memory
- define available tools
- fail a command by returning an error
- return an error message
- transform responses from a model and reply to incoming commands

For additional details, refer to [Declarative Effects](../concepts/declarative-effects.html).

## Skeleton

An agent implementation has the following code structure.

[MyAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/application/MyAgent.java)
```java
import akka.javasdk.agent.Agent;
import akka.javasdk.annotations.Component;

@Component(id = "my-agent") // (2)
public class MyAgent extends Agent { // (1)

  public Effect<String> query(String question) { // (3)
    return effects().systemMessage("You are a helpful...").userMessage(question).thenReply();
  }
}
```

| **1** | Create a class that extends `Agent`. |
| **2** | Make sure to annotate the class with `@Component` and pass a unique identifier for this agent type. |
| **3** | Define the command handler method. |

| | The `@Component` value `my-agent` is common for all instances of this agent and must be unique across the different components in the service. |

An agent must have one command handler method that is public and returns `Effect<T>`, where `T` is the type of the reply. Alternatively, it can return `StreamEffect` for [streaming responses](agents/streaming.html). Command handlers in Akka may take one or no parameters as input. If you need multiple parameters for a command, you can wrap them in a record class and pass an instance of that to the command handler as the sole parameter. There can only be one command handler, because the agent is supposed to perform one single well-defined task.

## Configuring the model

Akka provides integration with several backend AI models, and you have to select which model to use. You can define a default model in `application.conf`:

src/main/resources/application.conf
```conf
akka.javasdk {
  agent {
    model-provider = openai

    openai {
      model-name = "gpt-4o-mini"
      api-key = ${?OPENAI_API_KEY}
    }
  }
}
```

The `model-provider` property points to the name of another configuration section, in this case `akka.javasdk.agent.openai`. That configuration section contains the actual configuration for the model provider, according to the properties described in [model provider reference configurations](model-provider-details.html#_reference_configurations).

Another example, where we have selected `anthropic` with `claude-sonnet-4` as the default model provider:

src/main/resources/application.conf
```conf
akka.javasdk {
  agent {
    model-provider = anthropic

    anthropic {
      model-name = "claude-sonnet-4"
      api-key = ${?ANTHROPIC_API_KEY}
      max-tokens = 5000
    }
  }
}
```

The API key can be defined with an environment variable, `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` in the above examples.

The default model will be used if the agent doesn’t specify another model. Different agents can use different models by defining the `ModelProvider` in the Agent effect:

MyAgent.java
```java
public Effect<String> query(String question) {
  return effects()
    .model(
      ModelProvider.openAi() // (1)
        .withApiKey(System.getenv("OPENAI_API_KEY"))
        .withModelName("gpt-4o")
        .withTemperature(0.6)
        .withMaxTokens(10000)
    )
    .systemMessage("You are a helpful...")
    .userMessage(question)
    .thenReply();
}
```

| **1** | Define the model provider in code. |

| | With `ModelProvider.fromConfig` you can define several models in configuration and use different models in different agents, as sketched below. |

Available model providers for hosted models are:

| Provider | Site |
| --- | --- |
| Anthropic | [anthropic.com](https://www.anthropic.com/) |
| GoogleAIGemini | [gemini.google.com](https://gemini.google.com/) |
| Hugging Face | [huggingface.co](https://huggingface.co/) |
| OpenAi | [openai.com](https://openai.com/) |

Additionally, these model providers for locally running models are supported:

| Provider | Site |
| --- | --- |
| LocalAI | [localai.io](https://localai.io/) |
| Ollama | [ollama.com](https://ollama.com/) |

Each model provider may have different settings, and those are described in [AI model provider configuration](model-provider-details.html).

It is also possible to plug in a custom model by implementing the `ModelProvider.Custom` interface and using it with `ModelProvider.custom`. That involves the underlying implementations of the LangChain4j `ChatModel` and optionally `StreamingChatModel`. Refer to the [Langchain4j](https://docs.langchain4j.dev/) documentation or reference implementations for how to implement the `ChatModel` and `StreamingChatModel`.
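A minimal sketch of the `ModelProvider.fromConfig` approach mentioned above, assuming it takes the path of a configuration section such as the provider sections shown earlier; check the model provider reference configurations for the exact config layout:

```java
public Effect<String> query(String question) {
  return effects()
    // selects model settings from configuration; the config path used here
    // is an assumption, see the model provider reference configurations
    .model(ModelProvider.fromConfig("akka.javasdk.agent.anthropic"))
    .systemMessage("You are a helpful...")
    .userMessage(question)
    .thenReply();
}
```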
## Use ComponentClient in an agent

[Dependency injection](setup-and-dependency-injection.html#_dependency_injection) can be used in an Agent. For example, injecting the `ComponentClient` to be able to enrich the request to the AI model with information from entities or views may look like this:

ActivityAgent.java
```java
@Component(id = "activity-agent")
public class ActivityAgent extends Agent {

  public record Request(String userId, String message) {}

  private static final String SYSTEM_MESSAGE =
    """
    You are an activity agent. Your job is to suggest activities in the
    real world. Like for example, a team building activity, sports, an
    indoor or outdoor game, board games, a city trip, etc.
    """.stripIndent();

  private final ComponentClient componentClient;

  public ActivityAgent(ComponentClient componentClient) { // (1)
    this.componentClient = componentClient;
  }

  public Effect<String> query(Request request) {
    var profile = componentClient // (2)
      .forEventSourcedEntity(request.userId)
      .method(UserProfileEntity::getProfile)
      .invoke();

    var userMessage = request.message + "\nPreferences: " + profile.preferences; // (3)

    return effects().systemMessage(SYSTEM_MESSAGE).userMessage(userMessage).thenReply();
  }
}
```

| **1** | Inject the `ComponentClient` as a constructor parameter. |
| **2** | Retrieve preferences from an entity. |
| **3** | Enrich the user message with the preferences. |

This also illustrates the important point that the context of the request to the AI model can be built from additional information in the service and doesn’t only have to come from the session memory. The ability to reach into the rest of a distributed Akka application to *augment* requests makes behavior like Retrieval Augmented Generation (RAG) simple and less error-prone than doing things manually without Akka.

# Choosing the prompt

The prompt consists of essential instructions to the model.

- The system message provides system-level instructions to the AI model that define its behavior and context. The system message acts as a foundational prompt that establishes the AI’s role, constraints, and operational parameters. It is processed before user messages and helps maintain consistent behavior throughout the interactions.
- The user message represents the specific query, instruction, or input that will be processed by the model to generate a response.

An agent that suggests real-world activities may have a prompt like:

[ActivityAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/application/ActivityAgent.java)
```java
@Component(id = "activity-agent")
public class ActivityAgent extends Agent {

  private static final String SYSTEM_MESSAGE = // (1)
    """
    You are an activity agent. Your job is to suggest activities in the
    real world. Like for example, a team building activity, sports, an
    indoor or outdoor game, board games, a city trip, etc.
""".stripIndent(); public Effect query(String message) { return effects() .systemMessage(SYSTEM_MESSAGE) // (2) .userMessage(message) // (3) .thenReply(); } } ``` | **1** | Define the system message as a constant, but it could also be a method that adapts the system message based on the request. | | **2** | Use the system message in the effect builder. | | **3** | Define the user message for the specific request, and use in the effect builder. | Keep in mind that some models have preferences in how you wrap or label user input within the system prompt and you’ll need to take that into account when defining your system message. ## Using dynamic prompts with templates As an alternative to hard-coded prompts, there is a built-in prompt template entity. The advantage of using the prompt template entity is that you can change the prompts at runtime without restarting or redeploying the service. Because the prompt template is managed as an entity, you retain full change history. ActivityAgent.java ```java @Component(id = "activity-agent") public class ActivityAgentWithTemplate extends Agent { public Effect query(String message) { return effects() .systemMessageFromTemplate("activity-agent-prompt") // (1) .userMessage(message) // .thenReply(); } } ``` | **1** | Define the system message prompt template key. | In addition to the prompt template key you can optionally add parameters to `systemMessageFromTemplate`. Those will be used to format the template with `java.util.Formatter`. Prompts are stored in the `PromptTemplate` [Event Sourced Entity](../event-sourced-entities.html). This is a built-in entity, automatically registered at runtime if there are any Agent components in the service. To initialize the prompt or get the current value you can use component client the same way as for any other entity. [ActivityPromptEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ActivityPromptEndpoint.java) ```java @HttpEndpoint("/activity-prompts") public class ActivityPromptEndpoint { private final ComponentClient componentClient; public ActivityPromptEndpoint(ComponentClient componentClient) { this.componentClient = componentClient; } @Put public HttpResponse update(String prompt) { componentClient .forEventSourcedEntity("activity-agent-prompt") // (1) .method(PromptTemplate::update) // (2) .invoke(prompt); return HttpResponses.ok(); } @Get public String get() { return componentClient .forEventSourcedEntity("activity-agent-prompt") // (1) .method(PromptTemplate::get) // (3) .invoke(); } } ``` | **1** | Prompt key is used as entity id. | | **2** | `PromptTemplate::update` update the prompt value. | | **3** | `PromptTemplate::get` retrieves the current prompt value. | Keeping the prompt in the Event Sourced Entity lets you see the history of all changes. It’s also possible to subscribe to changes in the prompt template entity, so that you can build a [View](../views.html) or react to changes in the prompt. The following table describes all of the methods available for the `PromptTemplate` entity: | Method | Description | | --- | --- | | `init` | Initializes the prompt template with a given value. If the prompt template already exists, it will not change it. Useful for setting the initial value, e.g. in the `onStartup` method of the [ServiceSetup](../setup-and-dependency-injection.html#_service_lifecycle). | | `update` | Updates the prompt template with a new value. If the prompt template does not exist, it will create it. 
Prompts are stored in the `PromptTemplate` [Event Sourced Entity](../event-sourced-entities.html). This is a built-in entity, automatically registered at runtime if there are any Agent components in the service. To initialize the prompt or get the current value, you can use the component client the same way as for any other entity.

[ActivityPromptEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ActivityPromptEndpoint.java)
```java
@HttpEndpoint("/activity-prompts")
public class ActivityPromptEndpoint {

  private final ComponentClient componentClient;

  public ActivityPromptEndpoint(ComponentClient componentClient) {
    this.componentClient = componentClient;
  }

  @Put
  public HttpResponse update(String prompt) {
    componentClient
      .forEventSourcedEntity("activity-agent-prompt") // (1)
      .method(PromptTemplate::update) // (2)
      .invoke(prompt);
    return HttpResponses.ok();
  }

  @Get
  public String get() {
    return componentClient
      .forEventSourcedEntity("activity-agent-prompt") // (1)
      .method(PromptTemplate::get) // (3)
      .invoke();
  }
}
```

| **1** | The prompt key is used as the entity id. |
| **2** | `PromptTemplate::update` updates the prompt value. |
| **3** | `PromptTemplate::get` retrieves the current prompt value. |

Keeping the prompt in the Event Sourced Entity lets you see the history of all changes. It’s also possible to subscribe to changes in the prompt template entity, so that you can build a [View](../views.html) or react to changes in the prompt.

The following table describes all of the methods available for the `PromptTemplate` entity:

| Method | Description |
| --- | --- |
| `init` | Initializes the prompt template with a given value. If the prompt template already exists, it will not change it. Useful for setting the initial value, e.g. in the `onStartup` method of the [ServiceSetup](../setup-and-dependency-injection.html#_service_lifecycle). |
| `update` | Updates the prompt template with a new value. If the prompt template does not exist, it will create it. If the value is the same as the current value, it will not change it. |
| `get` | Retrieves the current value of the prompt template. If the prompt template does not exist, it will throw an exception. |
| `getOptional` | Retrieves the current value of the prompt template as an `Optional`. If the prompt template does not exist, it will return an empty `Optional`. |
| `delete` | Deletes the prompt template. |

Although the system message has a dedicated method to use the prompt template, you can also use it for the user message. In that case, you have to use the component client to retrieve the current value of the prompt template and pass it as the user message.

## Adding more context

[Retrieval-Augmented Generation (RAG)](../rag.html) is a technique to provide additional, relevant content in the user message.

# Calling agents

Use the `ComponentClient` to call the agent from a Workflow, Endpoint, or Consumer.

```java
var sessionId = UUID.randomUUID().toString();
String suggestion = componentClient
  .forAgent() // (1)
  .inSession(sessionId) // (2)
  .method(ActivityAgent::query)
  .invoke("Business colleagues meeting in London");
```

| **1** | Use `forAgent`. |
| **2** | Define the identifier of the session that the agent participates in. |

The session id is used by the [session memory](memory.html), but it is also important for observability tracking and AI evaluation. You can use a new random UUID for each call if the agent doesn’t collaborate with other agents nor have a multi-step interaction with the AI model. Deciding how you manage sessions will be an important part of designing the agentic parts of your application. For more details about the `ComponentClient`, see [Component and service calls](../component-and-service-calls.html).

## Drive the agent from a workflow

Agents make external calls to the AI model and possibly other services, and therefore it is important to have solid error handling and durable execution steps when calling agents. In many cases it is a good recommendation to call agents from a [Workflow](../workflows.html). The workflow will automatically execute the steps in a reliable and durable way. This means that if a call in a step fails, it will be retried until it succeeds or until the retry limit of the recovery strategy is reached, at which point separate error handling can be performed. The state machine of the workflow is durable, which means that if the workflow is restarted for some reason, it will continue from where it left off, i.e. execute the current non-completed step again.

A workflow will typically orchestrate several agents, which collaborate in achieving a common goal. Even if you only have a single agent, having a workflow manage retries, failures, and timeouts can be invaluable. We will look more at [multi-agent systems](orchestrating.html), but let’s start with a workflow for the single activities agent.
[ActivityAgentManager.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/application/ActivityAgentManager.java) ```java @Component(id = "activity-agent-manager") public class ActivityAgentManager extends Workflow { // (1) private static final Logger logger = LoggerFactory.getLogger(ActivityAgentManager.class); public record State(String userQuery, String answer) { // (2) State withAnswer(String a) { return new State(userQuery, a); } } private final ComponentClient componentClient; public ActivityAgentManager(ComponentClient componentClient) { // (3) this.componentClient = componentClient; } public Effect<Done> start(String query) { // (4) return effects() .updateState(new State(query, "")) .transitionTo(ActivityAgentManager::suggestActivities) .thenReply(Done.getInstance()); } public ReadOnlyEffect<String> getAnswer() { // (5) if (currentState() == null || currentState().answer.isEmpty()) { String workflowId = commandContext().workflowId(); return effects() .error("Workflow '" + workflowId + "' not started, or not completed"); } else { return effects().reply(currentState().answer); } } @Override public WorkflowSettings settings() { // (6) return WorkflowSettings.builder() .stepTimeout(ActivityAgentManager::suggestActivities, ofSeconds(60)) .defaultStepRecovery(maxRetries(2).failoverTo(ActivityAgentManager::error)) .build(); } @StepName("activities") private StepEffect suggestActivities() { // (7) var suggestion = componentClient .forAgent() .inSession(sessionId()) .method(ActivityAgent::query) // (8) .invoke(currentState().userQuery); logger.info("Activities: {}", suggestion); return stepEffects() .updateState(currentState().withAnswer(suggestion)) // (9) .thenEnd(); } private StepEffect error() { return stepEffects().thenEnd(); } private String sessionId() { // (10) // the workflow corresponds to the session return commandContext().workflowId(); } } ``` | **1** | Extend `Workflow`. | | **2** | The state can hold intermediate and final results, and it is durable. | | **3** | Inject the `ComponentClient`, which will be used when calling the agent. | | **4** | This workflow only has two command handler methods. One that starts the workflow with the initial user request, | | **5** | and one to retrieve the final answer. | | **6** | Define the workflow configuration. | | **7** | The step that calls the `ActivityAgent`. | | **8** | Call the agent with the `ComponentClient`. | | **9** | Store the result from the agent. | | **10** | The workflow corresponds to an agent session. | The workflow itself will be instantiated by making a call to the `start` method from an endpoint or a consumer. Keep in mind that AI requests are typically slow (many seconds), and you need to define the workflow timeouts accordingly. This is specified in the workflow settings with: ```java .stepTimeout(ActivityAgentManager::suggestActivities, ofSeconds(60)) ``` Additionally, you should define a workflow recovery strategy so that it doesn’t retry failing requests infinitely. This is specified in the workflow settings with: ```java .defaultStepRecovery(maxRetries(2).failoverTo(ActivityAgentManager::error)) ``` More details in [Workflow timeouts and recovery strategy](../workflows.html#_error_handling). ### Human in the loop You often need a human-in-the-loop to integrate human oversight into the AI’s decision-making process. A workflow can be paused, waiting for user input. When the approval command is received, the workflow can continue from where it left off and transition to the next step in the agentic process, as sketched below.
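As a hedged illustration of the pattern, this is what such an approval command handler could look like on a workflow that has paused (with `.thenPause()`) after producing a draft. The workflow name, state methods, and `publish` step are hypothetical, not from the official samples.

```java
// Hypothetical approval handler on a ReviewWorkflow that paused after a draft step.
public Effect<Done> approve(String reviewerComment) {
  if (currentState() == null || !currentState().awaitingApproval()) {
    return effects().error(
      "Workflow '" + commandContext().workflowId() + "' has nothing to approve");
  }
  return effects()
    .updateState(currentState().withReviewerComment(reviewerComment))
    .transitionTo(ReviewWorkflow::publish) // resume from where the workflow paused
    .thenReply(Done.getInstance());
}
```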
See [how to pause a workflow](../workflows.html#_pausing_workflow). # Managing session memory Session Memory provides a history mechanism that enables agents to maintain context across multiple interactions. This feature is essential for building agents that can remember previous exchanges with users, understand context, and provide coherent responses over time. When an agent interacts with an AI model, both the user message and the AI response are automatically stored in the session memory. These messages are then included as additional context in subsequent requests to the model, allowing it to reference previous parts of the interaction. The session memory is: - Identified by a session ID that links related interactions - Shared between multiple agents if they use the same session ID - Persisted as an event-sourced entity - Automatically managed by the Agent ## Session memory configuration By default, session memory is enabled for all agents. You can configure it globally in your `application.conf`: ```conf akka.javasdk.agent.memory { enabled = true limited-window { max-size = 156KiB # max history size before the oldest messages start being removed } } ``` Or you can configure memory behavior for specific agent interactions using the `MemoryProvider` API. Example with the `limitedWindow` memory provider: ```java public Effect<String> ask(String question) { return effects() .memory(MemoryProvider.limitedWindow().readLast(5)) .systemMessage("You are a helpful...") .userMessage(question) .thenReply(); } ``` Example disabling session memory for the agent: ```java public Effect<String> ask(String question) { return effects() .memory(MemoryProvider.none()) .systemMessage("You are a helpful...") .userMessage(question) .thenReply(); } ``` ## Different memory providers The `MemoryProvider` interface allows you to control how session memory behaves: - `MemoryProvider.none()` - Disables both reading from and writing to session memory - `MemoryProvider.limitedWindow()` - Configures memory with options to, e.g.: - Set up **read-only** memory, in which the agent reads the memory but does not write any interactions to it. This is ideal for multi-agent sessions where some agents can store memory and others can’t. - Set up **write-only** memory, in which the agent registers the interactions in the session memory but does not take them into consideration when processing the user message. - Limit the number of messages used as context in each interaction, i.e. use only the last N messages for context (good for token usage control). - `MemoryProvider.custom()` - Allows you to provide a custom implementation of the `SessionMemory` interface and store the session memory externally in a database or service of your choice. ## Accessing session memory The default implementation of Session Memory is backed by a regular [Event Sourced Entity](../event-sourced-entities.html) called `SessionMemoryEntity`, which allows you to interact directly with it as you would with any other entity in your application.
This includes the ability to directly modify or access it through the `ComponentClient`, as well as the ability to subscribe to changes in the session memory, as shown below: [SessionMemoryConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/application/SessionMemoryConsumer.java) ```java @Component(id = "session-memory-consumer") @Consume.FromEventSourcedEntity(SessionMemoryEntity.class) public class SessionMemoryConsumer extends Consumer { private final Logger logger = LoggerFactory.getLogger(getClass()); public Effect onSessionMemoryEvent(SessionMemoryEntity.Event event) { var sessionId = messageContext().eventSubject().get(); switch (event) { case SessionMemoryEntity.Event.UserMessageAdded userMsg -> logger.info( "User message added to session {}: {}", sessionId, userMsg.message() ); // ... default -> logger.debug("Unhandled session memory event: {}", event); } return effects().done(); } } ``` This can be useful for more granular control over token usage, as well as for external integrations and analytics over these details. ## Compaction You can update the session memory to reduce the size of the history. One technique is to let an LLM summarize the interaction history and use the new summary instead of the full history. Such an agent can look like this: [CompactionAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/application/CompactionAgent.java) ```java @Component(id = "compaction-agent") public class CompactionAgent extends Agent { private static final String SYSTEM_MESSAGE = """ You can compact an interaction history with an LLM. From the given USER, TOOL_CALL_RESPONSE and AI messages you create one single user message and one single ai message. The interaction history starts with USER: followed by the user message. For each user message there is a corresponding AI response that starts with AI: Keep the original style of user question and AI answer in the summary. Note that AI messages may contain TOOL_CALL_REQUEST(S) and be followed by TOOL_CALL_RESPONSE(S). Make sure to keep this information in the generated ai message. Do not keep it as structured tool calls, but make sure to extract the relevant context. Your response should follow a strict json schema as defined below. { "userMessage": "", "aiMessage": "" } Do not include any explanations or text outside of the JSON structure.
""".stripIndent(); // (1) public record Result(String userMessage, String aiMessage) {} private final ComponentClient componentClient; public CompactionAgent(ComponentClient componentClient) { this.componentClient = componentClient; } public Effect summarizeSessionHistory(SessionHistory history) { // (2) String concatenatedMessages = history .messages() .stream() .map(msg -> { return switch (msg) { case SessionMessage.UserMessage userMsg -> "\n\nUSER:\n" + userMsg.text(); // (3) case SessionMessage.AiMessage aiMessage -> { var aiText = "\n\nAI:\n" + aiMessage.text(); yield aiMessage .toolCallRequests() .stream() .reduce( aiText, // if there are tool requests, also append them to the aiText (acc, req) -> acc + "\n\tTOOL_CALL_REQUEST: id=" + req.id() + ", name=" + req.name() + ", args=" + req.arguments() + " \n", String::concat ); } case SessionMessage.ToolCallResponse toolRes -> "\n\nTOOL_CALL_RESPONSE:\n" + toolRes.text(); }; }) .collect(Collectors.joining()); // (3) return effects() .memory(MemoryProvider.none()) // (4) .model( ModelProvider.openAi() .withModelName("gpt-4o-mini") .withApiKey(System.getenv("OPENAI_API_KEY")) .withMaxTokens// (1000) ) .systemMessage(SYSTEM_MESSAGE) .userMessage(concatenatedMessages) .responseAs(Result.class) .thenReply(); } } ``` | **1** | Instructions to create the summary of user and AI messages and result as JSON. | | **2** | The full history from the `SessionMemoryEntity`. | | **3** | Format and concatenate the messages. | | **4** | The `CompactionAgent` itself doesn’t need any session memory. | One way to trigger compaction is to use a consumer of the session memory events and call the `CompactionAgent` from that consumer when a threshold is exceeded. [SessionMemoryConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/application/SessionMemoryConsumer.java) ```java @Component(id = "session-memory-consumer") @Consume.FromEventSourcedEntity(SessionMemoryEntity.class) public class SessionMemoryConsumer extends Consumer { private final Logger logger = LoggerFactory.getLogger(getClass()); private final ComponentClient componentClient; public SessionMemoryConsumer(ComponentClient componentClient) { this.componentClient = componentClient; } public Effect onSessionMemoryEvent(SessionMemoryEntity.Event event) { var sessionId = messageContext().eventSubject().get(); switch (event) { case SessionMemoryEntity.Event.UserMessageAdded userMsg -> logger.info( "User message added to session {}: {}", sessionId, userMsg.message() ); // ... case SessionMemoryEntity.Event.AiMessageAdded aiMsg -> { if (aiMsg.historySizeInBytes() > 100000) { // (1) var history = componentClient .forEventSourcedEntity(sessionId) .method(SessionMemoryEntity::getHistory) // (2) .invoke(new SessionMemoryEntity.GetHistoryCmd(Optional.empty())); var summary = componentClient .forAgent() .inSession(sessionId) .method(CompactionAgent::summarizeSessionHistory) // (3) .invoke(history); var now = Instant.now(); componentClient .forEventSourcedEntity(sessionId) .method(SessionMemoryEntity::compactHistory) // (4) .invoke( new SessionMemoryEntity.CompactionCmd( new SessionMessage.UserMessage(now, summary.userMessage(), ""), new SessionMessage.AiMessage(now, summary.aiMessage(), ""), history.sequenceNumber() // (5) ) ); } } default -> logger.debug("Unhandled session memory event: {}", event); } return effects().done(); } } ``` | **1** | The AiMessageAdded has the total size of the history. 
| | **2** | Retrieve the full history from the `SessionMemoryEntity`. | | **3** | Call the agent to make the summary. | | **4** | Store the summary as the new compacted history in the `SessionMemoryEntity`. | | **5** | To support concurrent updates, the `sequenceNumber` of the retrieved history is included in the `CompactionCmd`. | # Structured responses Many LLMs support generating outputs in a structured format, typically JSON. You can easily map such output to Java objects using the effect API. ```java @Component(id = "activity-agent") public class ActivityAgentStructuredResponse extends Agent { private static final String SYSTEM_MESSAGE = // (1) """ You are an activity agent. Your job is to suggest activities in the real world. Like for example, a team building activity, sports, an indoor or outdoor game, board games, a city trip, etc. Your response should be a JSON object with the following structure: { "name": "Name of the activity", "description": "Description of the activity" } Do not include any explanations or text outside of the JSON structure. """.stripIndent(); private static final Activity DEFAULT_ACTIVITY = new Activity( "running", "Running is a great way to stay fit " + "and healthy. You can do it anywhere, anytime, and it requires no special equipment." ); record Activity(String name, String description) {} // (2) public Effect<Activity> query(String message) { return effects() .systemMessage(SYSTEM_MESSAGE) .userMessage(message) .responseAs(Activity.class) // (3) .onFailure(throwable -> { // (4) if (throwable instanceof JsonParsingException) { return DEFAULT_ACTIVITY; } else { throw new RuntimeException(throwable); } }) .thenReply(); } } ``` | **1** | Instruct the model to return a structured response in JSON format. | | **2** | The `Activity` record is used to map the JSON response to a Java object. | | **3** | Use the `responseAs` method to specify the expected response type. | | **4** | Sometimes the model may not return valid JSON, so you can use `onFailure` to provide a fallback value in case of a parsing exception. | Some models, such as OpenAI and Google Gemini, have specific support for structured model responses according to a given JSON schema. To automatically include a JSON schema that corresponds to the response type you can use `responseConformsTo` instead of `responseAs`. ```java @Component(id = "activity-agent") public class ActivityAgentStructuredResponseSchema extends Agent { private static final String SYSTEM_MESSAGE = // (1) """ You are an activity agent. Your job is to suggest activities in the real world. Like for example, a team building activity, sports, an indoor or outdoor game, board games, a city trip, etc. """.stripIndent(); record Activity( @Description("Name of the activity") String name, @Description("Description of the activity") String description ) {} // (2) public Effect<Activity> query(String message) { return effects() .systemMessage(SYSTEM_MESSAGE) .userMessage(message) .responseConformsTo(Activity.class) // (3) .thenReply(); } } ``` | **1** | Instructions to the model don’t have to include details about the JSON response format. | | **2** | The `Activity` record is used to map the JSON response to a Java object. It can optionally have `akka.javasdk.annotations.Description` annotations on the fields, which will be included in the JSON schema. | | **3** | Use the `responseConformsTo` method to specify the expected response type, which is also used for creating the JSON schema. | If you still don’t get the expected JSON responses from the model, you can combine the two approaches, both including the JSON schema and giving instructions about the format in the system message.
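To give a sense of how a structured reply is consumed, here is a small, hypothetical usage sketch from an endpoint or workflow. Since the command handler is declared with `responseAs(Activity.class)`, the component client returns the mapped record directly; the call site below assumes the `Activity` record is visible to the caller.

```java
// Hypothetical call site; session id handling as described in "Calling agents".
var sessionId = UUID.randomUUID().toString();

var activity = componentClient
  .forAgent()
  .inSession(sessionId)
  .method(ActivityAgentStructuredResponse::query)
  .invoke("Suggest a team building activity in Stockholm");

// The JSON from the model is already mapped to the record fields.
System.out.println(activity.name() + ": " + activity.description());
```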
# Handling failures The `onFailure` method in the agent’s effect API provides comprehensive error handling capabilities for various types of failures that can occur during model processing. This allows you to implement robust fallback strategies and provide meaningful responses even when things go wrong. ## Types of exceptions handled The `onFailure` method can handle the following types of exceptions: - **Model-related exceptions:** - `ModelException` - General model processing failures - `RateLimitException` - API rate limiting exceeded - `ModelTimeoutException` - Model request timeout - `UnsupportedFeatureException` - Unsupported model features - `InternalServerException` - Internal service errors - **Tool execution exceptions:** - `ToolCallExecutionException` - Function tool execution errors - `McpToolCallExecutionException` - MCP tool execution errors - `ToolCallLimitReachedException` - Tool call limit exceeded - **Response processing exceptions:** - `JsonParsingException` - Response parsing failures (as shown in structured responses) - **Unknown exceptions:** - `RuntimeException` - For any unexpected errors that don’t fall into the above categories Apart from the listed specific exceptions, users can still encounter `RuntimeException` instances that wrap unexpected errors. Therefore, when handling errors in the `onFailure` method, it’s recommended to always include a `default` case to handle any unknown exception types gracefully. ## Implementing fallback strategies You can use the `onFailure` method to implement different recovery strategies based on the type of exception: ```java public Effect<String> query(String message) { return effects() .systemMessage(SYSTEM_MESSAGE) .userMessage(message) .onFailure(exception -> { // Handle different types of exceptions with appropriate fallback responses return switch (exception) { // (1) case RateLimitException exc -> "Rate limit exceeded, try again later"; // (2) case ModelTimeoutException exc -> "Request timeout, service is delayed"; case ToolCallExecutionException exc -> "Tool error: " + exc.getToolName(); default -> "Unexpected error occurred"; // (3) }; }) .thenReply(); } ``` | **1** | Use pattern matching to handle different exception types appropriately. | | **2** | Handle specific known exceptions with meaningful fallback responses. | | **3** | For unknown or unexpected exceptions, define a default matching branch providing a generic fallback response. | This approach ensures your agents remain resilient and can provide meaningful responses even when encountering various types of failures during model interaction.
# Extending agents with function tools You may frequently hear people say things like "the LLM can make a call" or "the LLM can use a tool". While these statements get the point across, they’re not entirely accurate. In truth, the agent will tell the LLM which *tools* are available for use. The LLM then determines from the prompt which tools it needs to call and with which parameters. The Agent will then in turn execute the tool requested by the LLM, incorporate the tool results into the session context, and then send a new prompt. This will continue in a loop until the LLM no longer indicates it needs to invoke a tool to perform its task. There are four ways to add function tools to your agent: 1. **Agent-defined function tools** — Define function tools directly within your agent class using the `@FunctionTool` annotation. These are automatically registered as available tools for the current Agent. 2. **Externally defined function tools** — Explicitly register external objects or classes containing function tools by passing them to the `effects().tools()` method in your agent’s command handler. Objects or classes passed to `effects().tools()` must have at least one public method annotated with `@FunctionTool`. 3. **Akka components as function tools** — Use Akka components from the same application as tools by annotating their command handlers with `@FunctionTool` and passing the component class to the `effects().tools()` method. This approach works with Event Sourced Entities, Key Value Entities, Workflows, and Views. 4. **Tools defined by remote MCP servers** — Register remote MCP servers to let the agent use tools they provide. | | A class (either the agent itself, Akka components, or an external tool class) can have multiple methods annotated with `@FunctionTool`. Each annotated method will be registered as a separate tool that the LLM can choose to invoke based on the task requirements. | You can use either approach independently or combine them based on your needs. Let’s look at a complete example showing both approaches: [WeatherAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/main/java/demo/multiagent/application/WeatherAgent.java) ```java public class WeatherAgent extends Agent { private final WeatherService weatherService; public WeatherAgent(WeatherService weatherService) { this.weatherService = weatherService; // (1) } public Effect<String> query(AgentRequest request) { return effects() .systemMessage(SYSTEM_MESSAGE) .tools(weatherService) // (2) .userMessage(request.message()) .thenReply(); } @FunctionTool(description = "Return current date in yyyy-MM-dd format") // (3) private String getCurrentDate() { return LocalDateTime.now().format(DateTimeFormatter.ISO_LOCAL_DATE); } } ``` | **1** | The `WeatherService` providing a function tool is injected into the agent (see [DependencyProvider](../setup-and-dependency-injection.html#_custom_dependency_injection)). | | **2** | We explicitly register the `weatherService` using the `tools()` method to make its method available as a tool for the current Agent. | | **3** | We define a simple tool directly in the agent class using the `@FunctionTool` annotation, which is implicitly registered.
Note that since this method is defined in the agent itself, it can even be a private method. | The `WeatherService` is an interface with a method annotated with `@FunctionTool`. A concrete implementation of this interface is provided by the `WeatherServiceImpl` class. This class is made available for injection in the service setup using a [DependencyProvider](../setup-and-dependency-injection.html#_custom_dependency_injection). [WeatherService.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/main/java/demo/multiagent/application/WeatherService.java) ```java public interface WeatherService { @FunctionTool(description = "Returns the weather forecast for a given city.") // (1) String getWeather( @Description("A location or city name.") String location, // (2) @Description("Forecast for a given date, in yyyy-MM-dd format.") Optional<String> date ); // (3) } ``` | **1** | Annotate the method with `@FunctionTool` and provide a clear description of what it does. | | **2** | Parameters can be documented with the `@Description` annotation to help the LLM understand how to use them. | | **3** | The date parameter is optional. The LLM may call `getCurrentDate` first or call this method without a date, depending on the user query. | | | LLMs are all about context. The more context you can provide, the better the results. Both the `@FunctionTool` and `@Description` annotations are used to provide context to the LLM about the tool function and its parameters. The better the context, the better the LLM can understand what the tool function does and how to use it. | In this example, the agent has access to both: - The `getCurrentDate()` method defined within the agent class (implicitly registered via annotation) - The `getWeather()` method defined in the `WeatherService` interface (explicitly registered via the `.tools()` method) ## Sharing function tools across agents Function tools defined in external classes can be shared and reused across multiple agents. This approach promotes code reusability and helps maintain consistent behavior for common functionality. When a tool like `WeatherService` is shared across multiple agents: - Each agent can register the same tool but use it in different contexts - The tool behavior remains consistent, but how and when agents invoke it may differ based on their specific tasks - Agents provide different system prompts that influence how the LLM decides to use the shared tool ## Lazy initialization of tool classes In the example above, we pass an instance of `WeatherService` to the `tools()` method. Alternatively, you can pass the `Class` object instead: ```java public Effect<AgentResponse> query(String message) { return effects() .systemMessage(SYSTEM_MESSAGE) .tools(WeatherService.class) // (1) .userMessage(message) .responseAs(AgentResponse.class) .thenReply(); } ``` | **1** | The `WeatherService` is passed as a `Class` instead of an instance. It will be instantiated when the agent needs to use it. | When you pass a `Class` instead of an instance, the class is only instantiated when the agent actually needs to use the tool. For this approach to work, you must register the class with a [DependencyProvider](../setup-and-dependency-injection.html#_custom_dependency_injection) in your service setup. The DependencyProvider is responsible for creating and managing instances of these classes when they’re needed. This gives you complete control over how tool dependencies are instantiated and managed throughout your application. A minimal sketch of such a setup follows below.
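The following is a minimal sketch of that registration, assuming the `ServiceSetup`/`DependencyProvider` API described in [setup and dependency injection](../setup-and-dependency-injection.html#_custom_dependency_injection) and a `WeatherServiceImpl` with a no-argument constructor. The setup class name is illustrative.

```java
import akka.javasdk.DependencyProvider;
import akka.javasdk.ServiceSetup;
import akka.javasdk.annotations.Setup;

@Setup
public class Bootstrap implements ServiceSetup {

  @Override
  public DependencyProvider createDependencyProvider() {
    return new DependencyProvider() {
      @Override
      public <T> T getDependency(Class<T> clazz) {
        // instantiate tool classes on demand; extend with more types as needed
        if (clazz == WeatherService.class) {
          return clazz.cast(new WeatherServiceImpl());
        }
        throw new IllegalArgumentException("Unknown dependency type: " + clazz);
      }
    };
  }
}
```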
## Using Akka components as function tools Akka components within the same application can be used as function tools for agents. This allows agents to interact with your domain model directly by invoking command handlers on Event Sourced Entities, Key Value Entities, Workflows, and Views. To use an Akka component as a tool: 1. Annotate the appropriate methods with `@FunctionTool` (just like with external tools) 2. Pass the component class to the agent using the `effects().tools()` method The following Akka component types can be used as function tools: - **Event Sourced Entities (ESE)** — Command handlers that return `Effect` or `ReadOnlyEffect` can be exposed as tools to create, update, or query entity state - **Key Value Entities (KVE)** — Command handlers that return `Effect` or `ReadOnlyEffect` can be exposed as tools to create, update, or query entity state - **Workflows** — Command handlers that return `Effect` or `ReadOnlyEffect` can be exposed as tools to trigger or interact with workflows - **Views** — Query methods that return `QueryEffect` can be exposed as tools to retrieve aggregated or transformed data | | **Agents cannot be used as tools for other agents.** While an agent can define its own tools by annotating methods with `@FunctionTool`, you cannot pass an agent class to another agent’s `effects().tools()` method. Agent chaining (where one agent calls another agent) is not a recommended pattern. Instead, use Workflows to orchestrate multiple agents. Workflows provide better control over the execution flow, error handling, and state management when coordinating between multiple agents. | | | When using Akka components as tools, the agent can directly modify your application state or trigger workflows. Ensure that your `@FunctionTool` descriptions clearly communicate the impact of these operations to help the LLM make appropriate decisions. | This approach is particularly useful when you want an agent to orchestrate operations across multiple components in your application, or when an agent needs to access and manipulate your domain model based on user requests. ## Using tools from remote MCP servers [Akka MCP endpoints](../mcp-endpoints.html) declared in other services, or third party MCP services, can be added to the agent. By default, all tools provided by each added remote MCP server are included, but it is possible to filter available tools from each server based on their name. It is also possible to intercept, modify, or deny MCP tool requests or their responses by defining a `RemoteMcpTools.ToolInterceptor`. [RemoteMcpWeatherAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/application/RemoteMcpWeatherAgent.java) ```java public Effect<AgentResponse> query(String message) { return effects() .systemMessage(SYSTEM_MESSAGE) .mcpTools( RemoteMcpTools.fromService("weather-service"), // (1) RemoteMcpTools.fromServer("https://weather.example.com/mcp") // (2) .addClientHeader(Authorization.oauth2(System.getenv("WEATHER_API_TOKEN"))) // (3) .withAllowedToolNames(Set.of("get_weather")) // (4) ) .userMessage(message) .responseAs(AgentResponse.class) .thenReply(); } ``` | **1** | For MCP endpoints in other Akka services, use HTTP and the deployed service name. | | **2** | For third party MCP servers, use the fully qualified host name and make sure to use HTTPS as the requests will go over the public internet. | | **3** | Custom headers to pass along can be defined, | | **4** | as well as filters of what tools to allow.
| When using MCP endpoints in other Akka services, the service ACLs apply just like for [HTTP endpoints](../http-endpoints.html) and [gRPC endpoints](../grpc-endpoints.html). ## Configuring tool call limits Inside a single request/response cycle, an LLM can successively request the agent to call function tools or MCP tools. After analyzing the result of a call, the LLM might decide to request another call to gather more context. The `akka.javasdk.agent.max-tool-call-steps` setting limits how many such steps may occur between a user request and the final AI response. By default, this value is set to 100. You can adjust this in your configuration: application.conf ```hocon # Increase the limit to allow more tool calls akka.javasdk.agent.max-tool-call-steps = 150 ``` # Streaming responses In AI chat applications, you’ve seen how responses are displayed word by word as they are generated. There are a few reasons for this. The first is that LLMs are *prediction* engines. Each time a token (usually a word) is streamed to the response, the LLM will attempt to *predict* the next word in the output. This causes the small delays between words. The other reason why responses are streamed is that it can take a very long time to generate the full response, so the user experience is much better when the answer arrives as a live stream of tokens. To support this real-time user experience, the agent can stream the model response tokens to an endpoint. These tokens can then be pushed to the client using server-sent events (SSE). ```java @Component(id = "streaming-activity-agent") public class StreamingActivityAgent extends Agent { private static final String SYSTEM_MESSAGE = """ You are an activity agent. Your job is to suggest activities in the real world. Like for example, a team building activity, sports, an indoor or outdoor game, board games, a city trip, etc. """.stripIndent(); public StreamEffect query(String message) { // (1) return streamEffects() // (2) .systemMessage(SYSTEM_MESSAGE) .userMessage(message) .thenReply(); } } ``` | **1** | The method returns `StreamEffect` instead of `Effect`. | | **2** | Use the `streamEffects()` builder. | Consuming the stream from an HTTP endpoint: ```java @Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET)) @HttpEndpoint("/api") public class ActivityHttpEndpoint { public record Request(String sessionId, String question) {} private final ComponentClient componentClient; public ActivityHttpEndpoint(ComponentClient componentClient) { this.componentClient = componentClient; } @Post("/ask") public HttpResponse ask(Request request) { var responseStream = componentClient .forAgent() .inSession(request.sessionId) .tokenStream(StreamingActivityAgent::query) // (1) .source(request.question); // (2) return HttpResponses.serverSentEvents(responseStream); // (3) } } ``` | **1** | Use `tokenStream` of the component client, instead of `method`, | | **2** | and invoke it with `source` to receive a stream of tokens. | | **3** | Return the stream of tokens as SSE. | The returned stream is a `Source` of `String` elements, i.e. the tokens are always text strings. The granularity of a token varies by AI model, often representing a word or a short sequence of characters.
To reduce the overhead of sending each token as a separate SSE, you can group multiple tokens together using the Akka streams `groupedWithin` operator. ```java @Post("/ask-grouped") public HttpResponse askGrouped(Request request) { var tokenStream = componentClient .forAgent() .inSession(request.sessionId) .tokenStream(StreamingActivityAgent::query) .source(request.question); var groupedTokenStream = tokenStream .groupedWithin(20, Duration.ofMillis(100)) // (1) .map(group -> String.join("", group)); // (2) return HttpResponses.serverSentEvents(groupedTokenStream); // (3) } ``` | **1** | Group at most 20 tokens or wait at most 100 milliseconds, whichever happens first. | | **2** | Concatenate the list of strings into a single string. | | **3** | Return the stream of grouped tokens as SSE. | | | Token streams are designed for direct agent calls from an endpoint. You can’t use a token stream when you have an intermediate workflow between the endpoint and the agent. | # Orchestrating multiple agents A single agent performs one well-defined task. Several agents can collaborate to achieve a common goal. The agents should be orchestrated from a predefined workflow or a dynamically created plan. ## Using a predefined workflow Let’s first look at how to define a workflow that orchestrates several agents in predefined steps. This is similar to the `ActivityAgentManager` that was illustrated earlier, but it uses both the `WeatherAgent` and the `ActivityAgent`. First it retrieves the weather forecast and then it finds suitable activities.
```java @Component(id = "agent-team") public class AgentTeamWorkflow extends Workflow { private static final Logger logger = LoggerFactory.getLogger(AgentTeamWorkflow.class); public record State(String userQuery, String weatherForecast, String answer) { State withWeatherForecast(String f) { return new State(userQuery, f, answer); } State withAnswer(String a) { return new State(userQuery, weatherForecast, a); } } private final ComponentClient componentClient; public AgentTeamWorkflow(ComponentClient componentClient) { this.componentClient = componentClient; } public Effect<Done> start(String query) { return effects() .updateState(new State(query, "", "")) .transitionTo(AgentTeamWorkflow::askWeather) // (1) .thenReply(Done.getInstance()); } public Effect<String> getAnswer() { if (currentState() == null || currentState().answer.isEmpty()) { String workflowId = commandContext().workflowId(); return effects().error("Workflow '" + workflowId + "' not started, or not completed"); } else { return effects().reply(currentState().answer); } } @Override public WorkflowSettings settings() { return WorkflowSettings.builder() .stepTimeout(AgentTeamWorkflow::askWeather, ofSeconds(60)) .stepTimeout(AgentTeamWorkflow::suggestActivities, ofSeconds(60)) .defaultStepRecovery(maxRetries(2).failoverTo(AgentTeamWorkflow::error)) .build(); } @StepName("weather") private StepEffect askWeather() { // (2) var forecast = componentClient .forAgent() .inSession(sessionId()) .method(WeatherAgent::query) .invoke(currentState().userQuery); logger.info("Weather forecast: {}", forecast); return stepEffects() .updateState(currentState().withWeatherForecast(forecast)) // (3) .thenTransitionTo(AgentTeamWorkflow::suggestActivities); } @StepName("activities") private StepEffect suggestActivities() { var request = // (4) currentState().userQuery + "\nWeather forecast: " + currentState().weatherForecast; var suggestion = componentClient .forAgent() .inSession(sessionId()) .method(ActivityAgent::query) .invoke(request); logger.info("Activities: {}", suggestion); return stepEffects() .updateState(currentState().withAnswer(suggestion)) // (5) .thenEnd(); } private StepEffect error() { return stepEffects().thenEnd(); } private String sessionId() { // the workflow corresponds to the session return commandContext().workflowId(); } } ``` | **1** | The workflow starts by asking for the weather forecast. | | **2** | The weather forecast is retrieved by the `WeatherAgent`, which must extract the location and date from the user query. | | **3** | The forecast is stored in the state of the workflow. | | **4** | The forecast is included in the request to the `ActivityAgent`. | | **5** | The final result is stored in the workflow state. | In step **4** we explicitly include the forecast in the request to the `ActivityAgent`. That is not strictly necessary, because the agents share the same session memory and thereby the `ActivityAgent` will already have the weather forecast in the context that is sent to the AI model. The workflow will automatically execute the steps in a reliable and durable way. This means that if a call in a step fails, it will be retried until it succeeds or the retry limit of the recovery strategy is reached, at which point separate error handling can be performed. The state machine of the workflow is durable, which means that if the workflow is restarted for some reason it will continue from where it left off, i.e. execute the current non-completed step again.
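To show how such a workflow is typically driven from the outside, here is a small, hypothetical HTTP endpoint (not part of the samples) that starts the `AgentTeamWorkflow` above and exposes the answer, following the `ComponentClient` patterns shown earlier:

```java
@HttpEndpoint("/trip")
public class TripPlannerEndpoint {

  public record Request(String sessionId, String query) {}

  private final ComponentClient componentClient;

  public TripPlannerEndpoint(ComponentClient componentClient) {
    this.componentClient = componentClient;
  }

  @Post("/plan")
  public HttpResponse plan(Request request) {
    componentClient
      .forWorkflow(request.sessionId()) // the workflow id doubles as the agent session id
      .method(AgentTeamWorkflow::start)
      .invoke(request.query());
    return HttpResponses.ok();
  }

  @Get("/plan/{sessionId}")
  public String answer(String sessionId) {
    // getAnswer returns an error effect until the workflow has completed
    return componentClient
      .forWorkflow(sessionId)
      .method(AgentTeamWorkflow::getAnswer)
      .invoke();
  }
}
```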
## Creating dynamic plans To create a more flexible and autonomous agentic system, you want to analyze the problem and dynamically come up with a plan. The agentic system should identify the tasks needed to achieve the goal by itself, decide which agents to use and in which order to execute them, coordinate input and output between agents, and adjust the plan along the way. There are several approaches to planning, such as deterministic algorithms or using AI for the planning itself. We will look at how we can use AI to analyze a request, select agents, and decide in which order to use them. We split the planning into two steps and use two separate agents for these tasks. It’s not always necessary to use several steps for the planning. You have to experiment with what works best for your problem domain. 1. Select agents that are useful for a certain problem. 2. Decide in which order to use the agents and give each agent precise instructions for its task. The `SelectorAgent` decides which agents to use: [SelectorAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/main/java/demo/multiagent/application/SelectorAgent.java) ```java @Component( id = "selector-agent", name = "Selector Agent", description = """ An agent that analyses the user request and selects useful agents for answering the request. """ ) public class SelectorAgent extends Agent { private final String systemMessage; public SelectorAgent(AgentRegistry agentsRegistry) { // (1) var agents = agentsRegistry.agentsWithRole("worker"); // (2) this.systemMessage = """ Your job is to analyse the user request and select the agents that should be used to answer the user. In order to do that, you will receive a list of available agents. Each agent has an id, a name and a description of its capabilities. For example, a user may be asking to book a trip. If you see that there is a weather agent, a city trip agent and a hotel booking agent, you should select those agents to complete the task. Note that this is just an example. The list of available agents may vary, so you need to use reasoning to dissect the original user request and using the list of available agents, decide which agents must be selected. You don't need to come up with an execution order. Your task is to analyze the user's request and select the agents. Your response should follow a strict json schema as defined below. It should contain a single field 'agents'. The field agents must be an array of strings containing the agent's IDs. If none of the existing agents are suitable for executing the task, you return an empty array. { "agents": [] } Do not include any explanations or text outside of the JSON structure. You can find the list of existing agents below (in JSON format): Also important, use the agent id to identify the agents. %s """.stripIndent() .formatted(JsonSupport.encodeToString(agents)); // (3) } public Effect<AgentSelection> selectAgents(String message) { return effects() .systemMessage(systemMessage) .userMessage(message) .responseConformsTo(AgentSelection.class) .thenReply(); } } ``` | **1** | The `AgentRegistry` contains information about all agents. | | **2** | Select the agents with the role `"worker"`. | | **3** | Detailed instructions, including descriptions (as JSON) of the agents. | The information about the agents in the `AgentRegistry` comes from the `@Component` and `@AgentRole` annotations. When using it for planning like this, it is important that the agents define those descriptions so the LLM can use them to come up with a good plan.
The `WeatherAgent` has: [WeatherAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/main/java/demo/multiagent/application/WeatherAgent.java) ```java @Component( id = "weather-agent", name = "Weather Agent", description = """ An agent that provides weather information. It can provide current weather, forecasts, and other related information. """ ) @AgentRole("worker") public class WeatherAgent extends Agent { ``` The `ActivityAgent` has: [ActivityAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/main/java/demo/multiagent/application/ActivityAgent.java) ```java @Component( id = "activity-agent", name = "Activity Agent", description = """ An agent that suggests activities in the real world. Like for example, a team building activity, sports, an indoor or outdoor game, board games, a city trip, etc. """ ) @AgentRole("worker") public class ActivityAgent extends Agent { ``` Note that in step **2** of the `SelectorAgent` we retrieve a subset of the agents with a certain role. This role is also defined in the `@AgentRole` annotation. The result from the `SelectorAgent` is a list of agent ids: [AgentSelection.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/main/java/demo/multiagent/domain/AgentSelection.java) ```java public record AgentSelection(List<String> agents) {} ``` After selecting agents, we use a `PlannerAgent` to decide in which order to use the agents and what precise request each agent should receive to perform its single task. [PlannerAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/main/java/demo/multiagent/application/PlannerAgent.java) ```java @Component( id = "planner-agent", name = "Planner", description = """ An agent that analyzes the user request and available agents to plan the tasks to produce a suitable answer. """ ) public class PlannerAgent extends Agent { public record Request(String message, AgentSelection agentSelection) {} private final AgentRegistry agentsRegistry; public PlannerAgent(AgentRegistry agentsRegistry) { this.agentsRegistry = agentsRegistry; } private String buildSystemMessage(AgentSelection agentSelection) { var agents = agentSelection.agents().stream().map(agentsRegistry::agentInfo).toList(); // (1) return """ Your job is to analyse the user request and the list of agents and devise the best order in which the agents should be called in order to produce a suitable answer to the user. You can find the list of existing agents below (in JSON format): %s Note that each agent has a description of its capabilities. Given the user request, you must define the right ordering. Moreover, you must generate a concise request to be sent to each agent. This agent request is of course based on the user's original request, but is tailored to the specific agent. Each individual agent should not receive requests or any text that is not related to its domain of expertise. Your response should follow a strict json schema as defined below. { "steps": [ { "agentId": "", "query": "" } ] } The 'agentId' should be filled with the agent id. The 'query' should contain the agent-tailored message. The order of the items inside the "steps" array should be the order of execution. Do not include any explanations or text outside of the JSON structure.
""".stripIndent() // note: here we are not using the full list of agents, but a pre-selection .formatted(JsonSupport.encodeToString(agents)); // (2) } public Effect createPlan(Request request) { if (request.agentSelection.agents().size() == 1) { // no need to call an LLM to make a plan where selection has a single agent var step = new PlanStep(request.agentSelection.agents().getFirst(), request.message()); return effects().reply(new Plan(List.of(step))); } else { return effects() .systemMessage(buildSystemMessage(request.agentSelection)) .userMessage(request.message()) .responseConformsTo(Plan.class) .thenReply(); } } } ``` | **1** | Lookup the agent information for the selected agents from the `AgentRegistry. | | **2** | Detailed instructions and include descriptions (as json) of the agents. | That’s the two agents that perform the planning, but we also need to connect them and execute the plan. This orchestration is the job of a workflow, called `AgentTeamWorkflow`. [AgentTeamWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/main/java/demo/multiagent/application/AgentTeamWorkflow.java) ```java @Component(id = "agent-team") public class AgentTeamWorkflow extends Workflow { // (1) public record Request(String userId, String message) {} @Override public WorkflowSettings settings() { return WorkflowSettings.builder() .defaultStepTimeout(ofSeconds// (30)) .defaultStepRecovery(maxRetries// (1).failoverTo(AgentTeamWorkflow::interruptStep)) .stepRecovery( AgentTeamWorkflow::selectAgentsStep, maxRetries// (1).failoverTo(AgentTeamWorkflow::summarizeStep) ) .build(); } public Effect start(Request request) { if (currentState() == null) { return effects() .updateState(State.init(request.userId(), request.message())) .transitionTo(AgentTeamWorkflow::selectAgentsStep) // (3) .thenReply(Done.getInstance()); } else { return effects() .error("Workflow '" + commandContext().workflowId() + "' already started"); } } @StepName("select-agents") private StepEffect selectAgentsStep() { // (2) var selection = componentClient .forAgent() .inSession(sessionId()) .method(SelectorAgent::selectAgents) .invoke(currentState().userQuery); // (4) logger.info("Selected agents: {}", selection.agents()); if (selection.agents().isEmpty()) { var newState = currentState() .withFinalAnswer("Couldn't find any agent(s) able to respond to the original query.") .failed(); return stepEffects().updateState(newState).thenEnd(); // terminate workflow } else { return stepEffects() .thenTransitionTo(AgentTeamWorkflow::createPlanStep) .withInput(selection); // (5) } } @StepName("create-plan") private StepEffect createPlanStep(AgentSelection agentSelection) { // (2) logger.info( "Calling planner with: '{}' / {}", currentState().userQuery, agentSelection.agents() ); var plan = componentClient .forAgent() .inSession(sessionId()) .method(PlannerAgent::createPlan) .invoke(new PlannerAgent.Request(currentState().userQuery, agentSelection)); // (6) logger.info("Execution plan: {}", plan); return stepEffects() .updateState(currentState().withPlan(plan)) .thenTransitionTo(AgentTeamWorkflow::executePlanStep); // (7) } @StepName("execute-plan") private StepEffect executePlanStep() { // (2) var stepPlan = currentState().nextStepPlan(); // (8) logger.info( "Executing plan step (agent:{}), asking {}", stepPlan.agentId(), stepPlan.query() ); var agentResponse = callAgent(stepPlan.agentId(), stepPlan.query()); // (9) if (agentResponse.startsWith("ERROR")) { throw new RuntimeException( "Agent '" + stepPlan.agentId() + "' 
responded with error: " + agentResponse ); } else { logger.info("Response from [agent:{}]: '{}'", stepPlan.agentId(), agentResponse); var newState = currentState().addAgentResponse(agentResponse); if (newState.hasMoreSteps()) { logger.info("Still {} steps to execute.", newState.plan().steps().size()); return stepEffects() .updateState(newState) .thenTransitionTo(AgentTeamWorkflow::executePlanStep); // (10) } else { logger.info("No further steps to execute."); return stepEffects() .updateState(newState) .thenTransitionTo(AgentTeamWorkflow::summarizeStep); } } } private String callAgent(String agentId, String query) { // We know the id of the agent to call, but not the agent class. // Could be WeatherAgent or ActivityAgent. // We can still invoke the agent based on its id, given that we know that it // takes an AgentRequest parameter and returns String. var request = new AgentRequest(currentState().userId(), query); DynamicMethodRef<AgentRequest, String> call = componentClient .forAgent() .inSession(sessionId()) .dynamicCall(agentId); // (9) return call.invoke(request); } @StepName("summarize") private StepEffect summarizeStep() { // (2) var agentsAnswers = currentState().agentResponses.values(); var finalAnswer = componentClient .forAgent() .inSession(sessionId()) .method(SummarizerAgent::summarize) .invoke(new SummarizerAgent.Request(currentState().userQuery, agentsAnswers)); return stepEffects() .updateState(currentState().withFinalAnswer(finalAnswer).complete()) .thenPause(); } } ``` | **1** | It’s a workflow, with reliable and durable execution. | | **2** | The steps are select - plan - execute - summarize. | | **3** | The workflow starts by selecting agents, | | **4** | which is performed by the `SelectorAgent`. | | **5** | Continue with making the actual plan, | | **6** | which is performed by the `PlannerAgent`, using the selection from the previous step. | | **7** | Continue with executing the plan. | | **8** | Take the next task in the plan. | | **9** | Call the agent for the task. | | **10** | Continue executing the plan until no tasks are remaining. | When executing the plan and calling the agents, we know the id of the agent to call, but not the agent class. It can be the `WeatherAgent` or `ActivityAgent`. Therefore, we can’t use the ordinary `method` of the `ComponentClient`. Instead, we use `dynamicCall` with the id of the agent. We don’t have compile-time safety for those dynamic calls, but we know that these agents take an `AgentRequest` parameter and return a `String`. If we used the wrong types, it would fail at runtime. ```java private String callAgent(String agentId, String query) { // We know the id of the agent to call, but not the agent class. // Could be WeatherAgent or ActivityAgent. // We can still invoke the agent based on its id, given that we know that it // takes an AgentRequest parameter and returns String. var request = new AgentRequest(currentState().userId(), query); DynamicMethodRef<AgentRequest, String> call = componentClient .forAgent() .inSession(sessionId()) .dynamicCall(agentId); // (9) return call.invoke(request); } ``` You can find the full source code for this multi-agent sample in the [akka-samples/multi-agent GitHub Repository](https://github.com/akka-samples/multi-agent). A more advanced sample illustrates [adaptive multi-agent orchestration](https://github.com/akka-samples/adaptive-multi-agent). It re-evaluates progress after each agent response and dynamically adjusts its strategy.
# Guardrails Guardrails can protect against harmful inputs, such as jailbreak attempts, and damaging output, such as mentions of a competitor’s product. | | For protecting sensitive information like PII, see [Sanitization](../sanitization.html). | A specific guardrail implements the `TextGuardrail` interface. It takes the input or output text as a parameter and returns a result indicating whether the text passed the validation or not, including an explanation of why the decision was made. These results are included in metrics and traces. A guardrail can abort the interaction with the model, or only report the problem and continue anyway. An example of a `TextGuardrail` implementation: [ToxicGuard.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/guardrail/ToxicGuard.java) ```java import akka.javasdk.agent.GuardrailContext; import akka.javasdk.agent.TextGuardrail; public class ToxicGuard implements TextGuardrail { private final String searchFor; public ToxicGuard(GuardrailContext context) { searchFor = context.config().getString("search-for"); } @Override public Result evaluate(String text) { // this would typically be more advanced in a real implementation if (text.contains(searchFor)) { return new Result(false, "Toxic response '%s' not allowed.".formatted(searchFor)); } else { return Result.OK; } } } ``` Guardrails are enabled through configuration, which makes it possible to enforce at deployment time that certain guardrails are always used. src/main/resources/application.conf ```conf akka.javasdk.agent.guardrails { "pii guard" { // (1) class = "com.example.guardrail.PiiGuard" // (2) agents = ["planner-agent"] // (3) agent-roles = ["worker"] // (4) category = PII // (5) use-for = ["model-request", "mcp-tool-request"] // (6) report-only = false // (7) } "toxic guard" { class = "com.example.guardrail.ToxicGuard" agent-roles = ["worker"] category = TOXIC use-for = ["model-response", "mcp-tool-response"] report-only = false search-for = "bad stuff" } } ``` | **1** | Each configured guardrail has a unique name. | | **2** | Implementation class of the guardrail. | | **3** | Enable this guardrail for agents with these component ids. | | **4** | Enable this guardrail for agents with these roles. | | **5** | The type of validation, such as PII and TOXIC. | | **6** | Where to use the guardrail, such as for the model request or model response. | | **7** | If it didn’t pass the evaluation criteria, the execution can either be aborted or continue anyway. In both cases, the result is tracked in logs, metrics and traces. | The implementation class of the guardrail is configured with the `class` property. The class must implement the `TextGuardrail` interface. The class may optionally have a constructor with a `GuardrailContext` parameter, which includes the name and the config section for the specific guardrail. In the above code example of the `ToxicGuard`, you can see how the configuration property `search-for` is read from the configuration of the `GuardrailContext` parameter. Agents are selected by matching the `agents` or `agent-roles` configuration.
- `agents`: enabled for agents with these component ids; if `agents` contains `"*"` the guardrail is enabled for all agents - `agent-roles`: enabled for agents with these roles; if `agent-roles` contains `"*"` the guardrail is enabled for all agents that have a role, but not for agents without a role If both `agents` and `agent-roles` are defined, it’s enough that one of them matches to enable the guardrail for an agent. This role is defined in the `@AgentRole` annotation. The name and the category are reported in logs, metrics and traces. The `category` should classify the type of validation. It can be any value, but a few recommended categories are JAILBREAK, PROMPT_INJECTION, PII, TOXIC, HALLUCINATED, NSFW, FORMAT. The guardrail can be enabled for certain inputs or outputs with the `use-for` property. The `use-for` property accepts the following values: `model-request`, `model-response`, `mcp-tool-request`, `mcp-tool-response`, and `*`. ## Guardrail of similar text The built-in `SimilarityGuard` evaluates the text by making a similarity search in a dataset of "bad examples". If the similarity exceeds a threshold, the result is flagged as blocked. This is how to configure the `SimilarityGuard`: src/main/resources/application.conf ```conf akka.javasdk.agent.guardrails { "jailbreak guard" { class = "akka.javasdk.agent.SimilarityGuard" agents = ["planner-agent", "weather-agent"] category = JAILBREAK use-for = ["model-request"] threshold = 0.75 bad-examples-resource-dir = "guardrail/jailbreak" } } ``` Here, it’s using predefined examples of jailbreak prompts in `guardrail/jailbreak`. Those have been incorporated from [https://github.com/verazuo/jailbreak_llms](https://github.com/verazuo/jailbreak_llms/), but you can define your own examples and place them in a subdirectory of `src/main/resources/`. All text files in the configured `bad-examples-resource-dir` are included in the similarity search. This can be used for more than jailbreak attempt detection.
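The `PiiGuard` class referenced in the configuration example above is not shown in the documentation; the following is a minimal, hypothetical sketch of what such an implementation could look like, using a naive email regex purely for illustration:

```java
import java.util.regex.Pattern;

import akka.javasdk.agent.TextGuardrail;

public class PiiGuard implements TextGuardrail {

  // naive illustration; real PII detection is considerably more involved
  private static final Pattern EMAIL = Pattern.compile(
    "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");

  @Override
  public Result evaluate(String text) {
    if (EMAIL.matcher(text).find()) {
      return new Result(false, "Text contains what looks like an email address.");
    }
    return Result.OK;
  }
}
```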

Not Found

The requested URL was not found on this server.


Apache/2.4.63 (Ubuntu) Server at doc.akka.io Port 443
- [Akka](../../index.html) - [Developing](../index.html) - [Components](../components/index.html) - [Agents](../agents.html) - [Testing](testing.html) # Testing the agent Testing agents built with Generative AI involves two complementary approaches: evaluating the quality of the non-deterministic model behavior and writing deterministic unit tests for the agent’s and surrounding components' logic. Evaluation is described in [LLM evaluation](llm_eval.html); here we cover the deterministic testing. ## Mocking responses from the model For predictable and repeatable tests of your agent’s business logic and component integrations, it’s essential to use deterministic responses. This allows you to verify that your agent behaves correctly when it receives a known model output. Use the `TestKitSupport` and the `ComponentClient` to call the components from the test. The `ModelProvider` of the agents can be replaced with [TestModelProvider](_attachments/testkit/akka/javasdk/testkit/TestModelProvider.html), which provides ways to mock the responses without using the real AI model. [AgentTeamWorkflowTest.java](https://github.com/akka/akka-sdk/blob/main/samples/multi-agent/src/test/java/demo/multiagent/application/AgentTeamWorkflowTest.java) ```java public class AgentTeamWorkflowTest extends TestKitSupport { // (1) private final TestModelProvider selectorModel = new TestModelProvider(); // (2) private final TestModelProvider plannerModel = new TestModelProvider(); private final TestModelProvider activitiesModel = new TestModelProvider(); private final TestModelProvider weatherModel = new TestModelProvider(); private final TestModelProvider summaryModel = new TestModelProvider(); private final TestModelProvider toxicityEvalModel = new TestModelProvider(); private final TestModelProvider summarizationEvalModel = new TestModelProvider(); @Override protected TestKit.Settings testKitSettings() { return TestKit.Settings.DEFAULT.withAdditionalConfig( "akka.javasdk.agent.openai.api-key = n/a" ) .withModelProvider(SelectorAgent.class, selectorModel) // (3) .withModelProvider(PlannerAgent.class, plannerModel) .withModelProvider(ActivityAgent.class, activitiesModel) .withModelProvider(WeatherAgent.class, weatherModel) .withModelProvider(SummarizerAgent.class, summaryModel) .withModelProvider(ToxicityEvaluator.class, toxicityEvalModel) .withModelProvider(SummarizationEvaluator.class, summarizationEvalModel); } @Test public void test() { var selection = new AgentSelection(List.of("activity-agent", "weather-agent")); selectorModel.fixedResponse(JsonSupport.encodeToString(selection)); // (4) var weatherQuery = "What is the current weather in Stockholm?"; var activityQuery = "Suggest activities to do in Stockholm considering the current weather."; var plan = new Plan( List.of( new PlanStep("weather-agent", weatherQuery), new PlanStep("activity-agent", activityQuery) ) ); plannerModel.fixedResponse(JsonSupport.encodeToString(plan)); weatherModel .whenMessage(req -> req.equals(weatherQuery)) // (5) .reply("The weather in Stockholm is sunny."); activitiesModel .whenMessage(req -> req.equals(activityQuery)) .reply( "You can take a bike tour around Djurgården Park, " + "visit the Vasa Museum, explore Gamla Stan (Old Town)..."
); summaryModel.fixedResponse( "The weather in Stockholm is sunny, so you can enjoy " + "outdoor activities like a bike tour around Djurgården Park, " + "visiting the Vasa Museum, exploring Gamla Stan (Old Town)" ); toxicityEvalModel.fixedResponse( """ { "label" : "non-toxic" } """.stripIndent() ); summarizationEvalModel.fixedResponse( """ { "label" : "good" } """.stripIndent() ); var query = "I am in Stockholm. What should I do? Beware of the weather"; var sessionId = UUID.randomUUID().toString(); var request = new AgentTeamWorkflow.Request("alice", query); componentClient .forWorkflow(sessionId) .method(AgentTeamWorkflow::start) // (6) .invoke(request); Awaitility.await() .ignoreExceptions() .atMost(10, SECONDS) .untilAsserted(() -> { var answer = componentClient .forWorkflow(sessionId) .method(AgentTeamWorkflow::getAnswer) .invoke(); assertThat(answer).isNotBlank(); assertThat(answer).contains("Stockholm"); assertThat(answer).contains("sunny"); assertThat(answer).contains("bike tour"); }); } } ``` | **1** | Extend `TestKitSupport` to gain access to testing utilities for Akka components. | | **2** | Create one or more `TestModelProvider`. Using one per agent allows for distinct mock behaviors, while sharing one is useful for testing general responses. | | **3** | Use the settings of the `TestKit` to replace the agent’s real `ModelProvider` with your test instance. | | **4** | For simple tests, define a single, fixed response that the mock model will always return. | | **5** | For more complex scenarios, define a response that is only returned if the user message matches a specific condition. This is useful for testing different logic paths within your agent. | | **6** | Call the components with the `componentClient`. | ## Mocked model in a deployed service In some scenarios it can be useful to run the deployed service without interacting with an actual AI model. For example, a load test that exercises the service with heavy load to verify scalability could quickly consume a large number of tokens. When the exact answer from the model is not very important, one or a few predefined responses, returned with a slight delay to simulate model processing time, can be good enough. It is possible to implement a custom model provider using `akka.javasdk.agent.ModelProvider.Custom`. Such a mock provider, however, sidesteps quite a bit of the infrastructure involved in agent interactions. A more realistic mock model can be implemented by building a separate Akka service with a single [HTTP endpoint](../http-endpoints.html) that mimics the model endpoint, and configuring the deployed agentic service to use that.
Here is an example endpoint returning a static response over the OpenAI protocol: [MockOpenAI.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/MockOpenAI.java) ```java @HttpEndpoint @Acl(allow = { @Acl.Matcher(service = "*") }) public class MockOpenAI extends AbstractHttpEndpoint { private static final long MIN_DELAY_MILLIS = 2000; private static final long MAX_DELAY_MILLIS = 3000; private static final long DELAY_SPAN = MAX_DELAY_MILLIS - MIN_DELAY_MILLIS; private static final HttpResponse staticResponse = HttpResponse.create() .withStatus(StatusCodes.OK) .withEntity( HttpEntities.create( ContentTypes.APPLICATION_JSON, """ { "id": "chatcmpl-Byz9msOuInWGiYmFJR8eH7ei2S3d0", "object": "chat.completion", "created": 1753874466, "model": "gpt-4o-mini-2024-07-18", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Some hardcoded result", "refusal": null, "annotations": [] }, "logprobs": null, "finish_reason": "stop" }], "usage": { "prompt_tokens": 29, "completion_tokens": 264, "total_tokens": 293, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0 }, "completion_tokens_details": { "reasoning_tokens": 0, "audio_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0 } }, "service_tier": "default", "system_fingerprint": "fp_197a02a720" }""" ) ) .withHeaders( Arrays.asList( RawHeader.create("x-request-id", "537dc248-255e-49eb-8799-fcc11a8b6cf0"), RawHeader.create("x-ratelimit-limit-tokens", "2000000"), RawHeader.create("openai-organization", "abc-123123"), RawHeader.create("openai-version", "20200-01"), RawHeader.create("openai-processing-ms", "5916"), RawHeader.create("openai-project", "proj_1234567abcdef") ) ); @Post("/chat/completions") public HttpResponse completion(HttpEntity.Strict ignoredRequestBody) throws Exception { var delay = MIN_DELAY_MILLIS + ThreadLocalRandom.current().nextLong(DELAY_SPAN); Thread.sleep(delay); return staticResponse; } } ``` For more elaborate scenarios, the mock model endpoint may have to parse the request to decide which of a few hardcoded answers to return, or to create a reply in a more dynamic fashion. Deploying this service as `mock-openai` allows other services containing agents in the same [Akka project](../../operations/projects/index.html) to use it. Using the deployed mock service from an agent in another service can be done with a config like this: application.conf ```hocon akka.javasdk { agent { model-provider = openai openai { model-name = "gpt-4o-mini" base-url = "http://mock-openai" // (1) } } } ``` 1. The service name the mock was deployed as. Note that you should use `http`, not `https`; the connection will still be encrypted with TLS, but that is handled by the platform. ## Log model request and response To see exactly what is sent to and received from the AI model, you can enable the following logger in `include-dev-loggers.xml`: ```none ``` [LLM evaluation](llm_eval.html) [Event Sourced Entities](../event-sourced-entities.html) - [Akka](../index.html) - [Developing](index.html) - [Components](components/index.html) - [Event Sourced Entities](event-sourced-entities.html) # Implementing Event Sourced Entities ![Event-sourced entity](../_images/event-sourced-entity.png) Event Sourced Entities are components that persist their state using the Event Sourcing Model. Instead of persisting the current state, they persist all the events that led to the current state. Akka stores these events in a [journal](../reference/glossary.html#journal).
Event Sourced Entities persist their state with [ACID semantics](https://en.wikipedia.org/wiki/ACID), scale horizontally, and isolate failures. An Event Sourced Entity must not update its in-memory state directly as a result of a *command*. The handling of a command, if it results in changes being required to state, should persist *events*. These events will then be processed by the entity, at which point the in-memory state can and should be changed in response. When you need to read state in your service, ask yourself *what events should I be listening to*? When you need to write state, ask yourself *what events should I be persisting*? ![console cart events](_images/console-cart-events.png) The image above is from the Akka console and illustrates how events for a shopping cart update the state of the cart entity. - 3 Akka T-shirts added. - 5 Akka socks added. - 4 more Akka T-shirts added, making a total of 7. - Cart is checked out. To load an Entity, Akka reads the journal and replays events to compute the Entity’s current state. As an optimization, by default, Event Sourced Entities persist state snapshots periodically. This allows Akka to recreate an Entity from the most recent snapshot plus any events saved after the snapshot. In contrast with typical create, read, update, delete (CRUD) systems, event sourcing allows the state of the Entity to be reliably replicated to other services. Event Sourced Entities use offset tracking in the journal to record which portions of the system have replicated which events. Entity and Workflow sharding [Stateful components](../reference/glossary.html#stateful_component), such as Entities and Workflows, offer strong consistency guarantees. Each stateful component can have many instances, identified by [ID](../reference/glossary.html#id). Akka distributes them across every service instance in the cluster. We guarantee that there is only one stateful component instance for a given id in the whole service cluster. If a command arrives to a service instance not hosting that stateful component instance, the command is forwarded by the Akka Runtime to the one that hosts that particular component instance. This forwarding is done transparently via [Component Client](../reference/glossary.html#component_client) logic. Because each stateful component instance lives on exactly one service instance, messages can be handled sequentially. Hence, there are no concurrency concerns: each Entity or Workflow instance handles one message at a time. The state of the stateful component instance is kept in memory as long as it is active. This means it can serve read requests, or validate commands before updating, without additional reads from the durable storage. There might not be room for all stateful component instances to be kept active in memory all the time, and therefore the least recently used instances can be passivated. When the stateful component is used again it recovers its state from durable storage and becomes active with its system of record in memory, backed by consistent durable storage. This recovery process is also used in cases of rolling updates, rebalance, and abnormal crashes. [Event Sourced Entities](../reference/glossary.html#event_sourced_entity) persist changes as events and snapshots. Akka needs to serialize that data to send it to the underlying data store. However, we recommend that you do not persist your service’s public API messages.
Persisting private API messages may introduce some overhead when converting from a public message to an internal one, but it allows the logic of the service public interface to evolve independently of the data storage format, which should be private. The steps necessary to implement an Event Sourced Entity include: 1. Model the entity’s state and its domain events. 2. Implement behavior in command and event handlers. The following sections walk through these steps using a shopping cart service as an example (a working sample can be downloaded from [GitHub](https://github.com/akka-samples/shopping-cart-quickstart)). ## Modeling the entity Through our "Shopping Cart" Event Sourced Entity we expect to manage our cart, adding and removing items as we please. Being event-sourced means it will represent changes to state as a series of domain events. Let’s have a look at what kind of model we expect to store and the events our entity might generate. [ShoppingCart.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/domain/ShoppingCart.java) ```java public record ShoppingCart(String cartId, List<LineItem> items, boolean checkedOut) { // (1) public record LineItem(String productId, String name, int quantity) { // (2) public LineItem withQuantity(int quantity) { return new LineItem(productId, name, quantity); } } } ``` | **1** | Our `ShoppingCart` is fairly simple, being composed only of a `cartId` and a list of line items. | | **2** | A `LineItem` represents a single product and the quantity we intend to buy. | | | Above we are taking advantage of the Java `record` to reduce the amount of boilerplate code, but you can use regular classes so long as they can be serialized to JSON (e.g. using Jackson annotations). | Another fundamental aspect of our entity will be its domain events. For now, we will have 3 different events `ItemAdded`, `ItemRemoved` and `CheckedOut`, defined as below: [ShoppingCartEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/domain/ShoppingCartEvent.java) ```java public sealed interface ShoppingCartEvent { // (1) @TypeName("item-added") // (2) record ItemAdded(ShoppingCart.LineItem item) implements ShoppingCartEvent {} @TypeName("item-removed") record ItemRemoved(String productId) implements ShoppingCartEvent {} @TypeName("checked-out") record CheckedOut() implements ShoppingCartEvent {} } ``` | **1** | The 3 types of event all derive from the same type `ShoppingCartEvent`. | | **2** | Includes the logical type name using the `@TypeName` annotation. | | | The use of logical names for subtypes is essential for maintainability purposes. Our recommendation is to use logical names (i.e. `@TypeName`) that are unique per Akka service. Check the [type name](serialization.html#_type_name) documentation for more details. | ## Identifying the Entity In order to interact with an Entity in Akka, we need to assign a **component id** and an instance **id**: - **component id** is a unique identifier for all entities of a given type. To define the component id, the entity class must be annotated with `@Component` and have a unique and stable identifier assigned. - **id**, on the other hand, is unique per instance. The entity id is used in the component client when calling the entity from, for example, an Endpoint.
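As a sketch of what that looks like from the calling side, e.g. inside an endpoint method with an injected `ComponentClient` (the instance id here is arbitrary; the `ShoppingCartEntity` and its `getCart` handler are defined later on this page):

```java
// The component id ("shopping-cart") is fixed by the @Component annotation on
// the entity class, while the instance id is chosen by the caller per request.
ShoppingCart cart = componentClient
  .forEventSourcedEntity("cart-abc") // instance id
  .method(ShoppingCartEntity::getCart)
  .invoke();
```

This is the same addressing pattern used by the integration test shown later.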
As an example, an entity representing a customer could have the **component id** `customer` and a customer entity for a specific customer could have the UUID instance **id** `8C59E488-B6A8-4E6D-92F3-760315283B6E`. | | The component id and entity id cannot contain the reserved character `\|`, because that is used internally by Akka as a separator. | ## Event sourced entity’s effect API The Event Sourced Entity’s Effect defines the operations that Akka should perform when an incoming command is handled by an Event Sourced Entity. An Event Sourced Entity Effect can either: - persist events and send a reply to the caller - directly reply to the caller if the command is not requesting any state change - instruct Akka to delete the entity and send a reply to the caller - return an error message For additional details, refer to [Declarative Effects](../concepts/declarative-effects.html). ## Implementing behavior Now that we have our Entity state defined along with its events, the remaining steps can be summarized as follows: - declare your entity and pick a component id (it needs to be unique as it will be used for sharding purposes); - implement how each command is handled and which event(s) it generates; - provide an event handler and how it updates the entity’s state. The class signature for our shopping cart entity will look like this: [ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java) ```java @Component(id = "shopping-cart") // (2) public class ShoppingCartEntity extends EventSourcedEntity<ShoppingCart, ShoppingCartEvent> { // (1) } ``` | **1** | Create a class that extends `EventSourcedEntity<S, E>`, where `S` is the state type this entity will store (i.e. `ShoppingCart`) and `E` is the top type for the events it persists (i.e. `ShoppingCartEvent`). | | **2** | Make sure to annotate the class with `@Component` and pass a stable unique identifier for this entity type. | | | The `@Component` value `shopping-cart` is common for all instances of this entity but must be stable - cannot be changed after a production deploy - and unique across the different entity types in the service. | ### Updating state Having created the basis of our entity, we will now define how each command is handled. In the example below, we define a method that will add a new line item to a given shopping cart. It returns an `Effect` to persist an event and then sends a reply once the event is stored successfully. The state is updated by the event handler. | | The **only** way for a command handler to modify the Entity’s state is by persisting an event. Any modifications made directly to the state (or instance variables) from the command handler are not persisted. When the Entity is passivated and reloaded, those modifications will not be present.
| [ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java) ```java public Effect<Done> addItem(LineItem item) { if (currentState().checkedOut()) { logger.info("Cart id={} is already checked out.", entityId); return effects().error("Cart is already checked out."); } if (item.quantity() <= 0) { // (1) logger.info("Quantity for item {} must be greater than zero.", item.productId()); return effects() .error("Quantity for item " + item.productId() + " must be greater than zero."); } var event = new ShoppingCartEvent.ItemAdded(item); // (2) return effects() .persist(event) // (3) .thenReply(newState -> Done.getInstance()); // (4) } @Override public ShoppingCart applyEvent(ShoppingCartEvent event) { return switch (event) { case ShoppingCartEvent.ItemAdded evt -> currentState().addItem(evt.item()); // (5) }; } ``` | **1** | The validation ensures the quantity of items added is greater than zero, and it fails calls with illegal values by returning an `Effect` with `effects().error`. | | **2** | From the current incoming `LineItem` we create a new `ItemAdded` event representing the change of the cart. | | **3** | We store the event by returning an `Effect` with `effects().persist`. | | **4** | The acknowledgment that the command was successfully processed is only sent if the event was successfully stored and applied; otherwise there will be an error reply. The lambda parameter `newState` gives us access to the new state returned by applying such event. | | **5** | The event handler returns the updated state after applying the event - the logic for updating the state is defined inside the `ShoppingCart` domain model. | As mentioned above, the business logic for updating the state was placed on the domain model as seen below: [ShoppingCart.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/domain/ShoppingCart.java) ```java public ShoppingCart addItem(LineItem item) { var lineItem = updateItem(item); // (1) List<LineItem> lineItems = removeItemByProductId(item.productId()); // (2) lineItems.add(lineItem); // (3) lineItems.sort(Comparator.comparing(LineItem::productId)); return new ShoppingCart(cartId, lineItems, checkedOut); // (4) } private LineItem updateItem(LineItem item) { return findItemByProductId(item.productId()) .map(li -> li.withQuantity(li.quantity() + item.quantity())) .orElse(item); } private List<LineItem> removeItemByProductId(String productId) { return items() .stream() .filter(lineItem -> !lineItem.productId().equals(productId)) .collect(Collectors.toList()); } public Optional<LineItem> findItemByProductId(String productId) { Predicate<LineItem> lineItemExists = lineItem -> lineItem.productId().equals(productId); return items.stream().filter(lineItemExists).findFirst(); } ``` | **1** | For an existing item, we will make sure to sum the existing quantity with the incoming one. | | **2** | Returns an updated list of items without the existing item. | | **3** | Adds the updated item to the shopping cart. | | **4** | Returns a new instance of the shopping cart with the updated line items. | ### Retrieving state To have access to the current state of the entity we can use `currentState()`, as you have probably noticed from the examples above. However, what if this is the first command we are receiving for this entity?
The following example shows the implementation of the read-only command handler `getCart`: [ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java) ```java private final String entityId; private static final Logger logger = LoggerFactory.getLogger(ShoppingCartEntity.class); public ShoppingCartEntity(EventSourcedEntityContext context) { this.entityId = context.entityId(); // (1) } @Override public ShoppingCart emptyState() { // (2) return new ShoppingCart(entityId, Collections.emptyList(), false); } public ReadOnlyEffect<ShoppingCart> getCart() { return effects().reply(currentState()); // (3) } ``` | **1** | Stores the `entityId` on an internal attribute so we can use it later. | | **2** | Provides the initial state - overriding `emptyState()` is optional, but if you don’t, be careful to deal with a `currentState()` with a `null` value when receiving the first command or event. | | **3** | Returns the current state as reply for the request. | | | We are returning the internal state directly back to the requester. In the endpoint, it’s usually best to convert this internal domain model into a public model so the internal representation is free to evolve without breaking client code. | ### Deleting an entity Normally, Event Sourced Entities are not deleted because the history of the events typically provides business value. For certain use cases or for regulatory reasons the entity can be deleted. [ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java) ```java public Effect<Done> checkout() { if (currentState().checkedOut()) return effects().reply(Done.getInstance()); return effects() .persist(new ShoppingCartEvent.CheckedOut()) // (1) .deleteEntity() // (2) .thenReply(newState -> Done.getInstance()); } ``` | **1** | Persist a final event before deletion, which is handled as any other event. | | **2** | Instruction to delete the entity. | When you give the instruction to delete the entity it will still exist for some time, including its events and snapshots. The actual removal of events and snapshots happens later, to give downstream consumers time to process all prior events, including the final event that was persisted together with the `deleteEntity` effect. By default, the existence of the entity is completely cleaned up after a week. It is not allowed to persist more events after the entity has been "marked" as deleted. You can still handle read requests to the entity until it has been completely removed. To check whether the entity has been deleted, you can use the `isDeleted` method inherited from the `EventSourcedEntity` class. It is best to not reuse the same entity id after deletion, but if that happens after the entity has been completely removed it will be instantiated as a completely new entity without any knowledge of previous state. Note that [deleting View state](views.html#ve_delete) must be handled explicitly. ## Snapshots Snapshots are an important optimization for Event Sourced Entities that persist many events. Rather than reading the entire journal upon loading or restart, Akka can recover them from a snapshot. Snapshots are stored and handled automatically by Akka without any specific code required.
Snapshots are stored after a configured number of events: [application.conf](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/resources/application.conf) ```conf akka.javasdk.event-sourced-entity.snapshot-every = 100 ``` When the Event Sourced Entity is loaded again, the snapshot will be loaded before any other events are received. ## Multi-region replication Stateful components like Event Sourced Entities, Key Value Entities or Workflows can be replicated to other regions. This is useful for several reasons: - resilience to tolerate failures in one location and still be operational, even multi-cloud redundancy - possibility to serve requests from a location near the user to provide better responsiveness - load balancing to be able to handle high throughput For each stateful component instance there is a primary region, which handles all write requests. Read requests can be served from any region. Read requests are defined by declaring the command handler method with `ReadOnlyEffect` as return type. A read-only handler cannot update the state, and that is enforced at compile time. [ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java) ```java public ReadOnlyEffect<ShoppingCart> getCart() { return effects().reply(currentState()); } ``` Write requests are defined by declaring the command handler method with `Effect` as return type, instead of `ReadOnlyEffect`. Write requests are routed to the primary region and handled by the stateful component instance in that region, even if the original call to the instance with the component client was made from another region. State changes (Workflow, Key Value Entity) or events (Event Sourced Entity) persisted by the instance in the primary region are replicated to other regions and processed by the corresponding instance there. This means that the state of the stateful components in all regions is updated from the primary. The replication is asynchronous, which means that read replicas are eventually updated. Normally this happens within a few milliseconds, but if there is, for example, a problem with the network between the regions, it can take longer for the read replicas to become up to date, but eventually they will. This also means that you might not see your own writes immediately. Consider the following: - you send a write request that is routed to the primary in another region - after receiving the response of the write request, you send a read request that is served by the non-primary region - the stateful component instance in the non-primary region might not have seen the replicated changes yet, and therefore replies with "stale" information If it’s important for some read requests to see the latest writes, you can use `Effect` as return type for such a command handler, even though it is not persisting any events. Then the request will be routed to the primary and use the latest fully consistent state. The operational aspects are described in [Regions](../operations/regions/index.html). ## Side effects An entity doesn’t perform any external side effects aside from persisting events and replying to the request. Side effects can be handled from the Workflow, Consumer, or Endpoint components that are calling the entity. ## Testing the entity There are two ways to test an Entity: - Unit test, which only runs the Entity component with a test kit.
- Integration test, running the entire service with a test kit and the test interacting with it using a component client or over HTTP requests. Each way has its benefits: unit tests are faster and provide more immediate feedback about success or failure, but can only test a single entity at a time and in isolation. Integration tests, on the other hand, are more realistic and allow many entities to interact with other components inside and outside the service. ### Unit tests The following snippet shows how the `EventSourcedTestKit` is used to test the `ShoppingCartEntity` implementation. Akka provides two main APIs for unit tests, the `EventSourcedTestKit` and the `EventSourcedResult`. The former gives us the overall state of the entity and all the events produced by all the calls to the Entity, while the latter only holds the effects produced for each individual call to the Entity. [ShoppingCartTest.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/test/java/shoppingcart/application/ShoppingCartTest.java) ```java package shoppingcart.application; import static org.junit.jupiter.api.Assertions.assertEquals; import akka.Done; import akka.javasdk.testkit.EventSourcedTestKit; import java.util.List; import org.junit.jupiter.api.Test; import shoppingcart.domain.ShoppingCart; import shoppingcart.domain.ShoppingCartEvent.ItemAdded; public class ShoppingCartTest { private final ShoppingCart.LineItem akkaTshirt = new ShoppingCart.LineItem("akka-tshirt", "Akka Tshirt", 10); @Test public void testAddLineItem() { var testKit = EventSourcedTestKit.of(ShoppingCartEntity::new); // (1) { var result = testKit.method(ShoppingCartEntity::addItem).invoke(akkaTshirt); // (2) assertEquals(Done.getInstance(), result.getReply()); // (3) var itemAdded = result.getNextEventOfType(ItemAdded.class); assertEquals(10, itemAdded.item().quantity()); // (4) } // actually we want more akka tshirts { var result = testKit .method(ShoppingCartEntity::addItem) .invoke(akkaTshirt.withQuantity(5)); // (5) assertEquals(Done.getInstance(), result.getReply()); var itemAdded = result.getNextEventOfType(ItemAdded.class); assertEquals(5, itemAdded.item().quantity()); } { assertEquals(testKit.getAllEvents().size(), 2); // (6) var result = testKit.method(ShoppingCartEntity::getCart).invoke(); // (7) assertEquals( new ShoppingCart("testkit-entity-id", List.of(akkaTshirt.withQuantity(15)), false), result.getReply() ); } } } ``` | **1** | Creates the TestKit passing the constructor of the Entity. | | **2** | Calls the method `addItem` from the Entity in the `EventSourcedTestKit` with quantity `10`. | | **3** | Asserts the reply is `Done`. | | **4** | Returns the next event of type `ItemAdded` and asserts on the quantity. | | **5** | Adds a new item with quantity `5`. | | **6** | Asserts that the total number of events should be 2. | | **7** | Calls the `getCart` method and asserts that the quantity should be `15`. | | | The `EventSourcedTestKit` is stateful, and it holds the state of a single entity instance in memory. If you want to test more than one entity in a test, you need to create multiple instances of `EventSourcedTestKit`. | **EventSourcedResult** Calling a command handler through the TestKit gives us back an `EventSourcedResult`. This class has methods that we can use to assert the handling of the command, such as: - `getReply()` - the response from the command handler if there was one; if not, an exception is thrown, failing the test.
- `getAllEvents()` - all the events persisted by handling the command. - `getState()` - the state of the entity after applying any events the command handler persisted. - `getNextEventOfType(ExpectedEvent.class)` - check the next of the persisted events against an event type, return it for inspection if it matches, or fail the test if it does not. The event gets consumed once it is inspected, and the next call will look for a subsequent event. **EventSourcedTestKit** For the above example, this class provides access to all the command handlers of the `ShoppingCart` entity for unit testing. In addition, it also has the following methods: - `getState()` - the current state of the entity; it is updated on each method call persisting events. - `getAllEvents()` - all events persisted since the creation of the testkit instance. ### Integration tests The skeleton of an Integration Test is included in the [getting started sample](../getting-started/author-your-first-service.html). Let’s see what it could look like to test our `ShoppingCartEntity`: [ShoppingCartIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/test/java/shoppingcart/ShoppingCartIntegrationTest.java) ```java public class ShoppingCartIntegrationTest extends TestKitSupport { // (1) @Test public void createAndManageCart() { String cartId = "cart-abc"; var item1 = new LineItem("tv", "Super TV 55'", 1); var response1 = componentClient // (2) .forEventSourcedEntity(cartId) // (3) .method(ShoppingCartEntity::addItem) // (4) .invoke(item1); Assertions.assertNotNull(response1); // confirming only one product remains ShoppingCart cartUpdated = componentClient .forEventSourcedEntity(cartId) .method(ShoppingCartEntity::getCart) // (5) .invoke(); Assertions.assertEquals(1, cartUpdated.items().size()); // (6) Assertions.assertEquals(item1, cartUpdated.items().get(0)); } } ``` | **1** | Note the test class must extend `TestKitSupport`. | | **2** | A built-in component client is provided to interact with the components. | | **3** | Request to create a new shopping cart with id `cart-abc`. | | **4** | Request to add an item to the cart. | | **5** | Request to retrieve current status of the shopping cart. | | **6** | Assert there should only be one item. | | | The integration tests in samples can be run using `mvn verify`. | [Testing](agents/testing.html) [Key Value Entities](key-value-entities.html) - [Akka](../index.html) - [Developing](index.html) - [Components](components/index.html) - [Key Value Entities](key-value-entities.html) # Implementing key value entities ![Key value entity](../_images/key-value-entity.png) [Key Value Entities](../reference/glossary.html#key_value_entity) are entities that persist the full state on every change. Only the latest state is stored, so we don’t have access to any of the history of changes, unlike the event sourced storage used by [Event Sourced Entities](event-sourced-entities.html). Entity and Workflow sharding [Stateful components](../reference/glossary.html#stateful_component), such as Entities and Workflows, offer strong consistency guarantees. Each stateful component can have many instances, identified by [ID](../reference/glossary.html#id). Akka distributes them across every service instance in the cluster. We guarantee that there is only one stateful component instance for a given id in the whole service cluster.
If a command arrives to a service instance not hosting that stateful component instance, the command is forwarded by the Akka Runtime to the one that hosts that particular component instance. This forwarding is done transparently via [Component Client](../reference/glossary.html#component_client) logic. Because each stateful component instance lives on exactly one service instance, messages can be handled sequentially. Hence, there are no concurrency concerns: each Entity or Workflow instance handles one message at a time. The state of the stateful component instance is kept in memory as long as it is active. This means it can serve read requests, or validate commands before updating, without additional reads from the durable storage. There might not be room for all stateful component instances to be kept active in memory all the time, and therefore the least recently used instances can be passivated. When the stateful component is used again it recovers its state from durable storage and becomes active with its system of record in memory, backed by consistent durable storage. This recovery process is also used in cases of rolling updates, rebalance, and abnormal crashes. Akka needs to serialize the persisted state to send it to the underlying data store. However, we recommend that you do not persist your service’s public API messages. Persisting private API messages may introduce some overhead when converting from a public message to an internal one, but it allows the logic of the service public interface to evolve independently of the data storage format, which should be private. The steps necessary to implement a Key Value Entity include: 1. Defining the API and modeling the entity’s state. 2. Creating and initializing the Entity. 3. Implementing behavior in command handlers. The following sections walk through these steps using a counter service as an example. ## Modeling the entity As mentioned above, to help us illustrate a Key Value Entity, you will be implementing a Counter service. For this service, you will want to be able to set the initial counter value, and also to increase the counter, modifying its state. The state will be a simple `Integer`, but you will use a wrapper class `Counter` as the domain model, as shown below: [Counter.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-counter/src/main/java/com/example/domain/Counter.java) ```java public record Counter(int value) { public Counter increment(int delta) { return new Counter(value + delta); } } ``` | | Above we are taking advantage of the Java `record` to reduce the amount of boilerplate code, but you can use regular classes so long as they can be serialized to JSON (e.g. using Jackson annotations). See [Serialization](serialization.html). | ## Identifying the Entity In order to interact with an Entity in Akka, we need to assign a **component id** and an instance **id**: - **component id** is a unique identifier for all entities of a given type. To define the component id, the entity class must be annotated with `@Component` and have a unique and stable identifier assigned. - **id**, on the other hand, is unique per instance. The entity id is used in the component client when calling the entity from, for example, an Endpoint. As an example, an entity representing a customer could have the **component id** `customer` and a customer entity for a specific customer could have the UUID instance **id** `8C59E488-B6A8-4E6D-92F3-760315283B6E`.
| | The component id and entity id cannot contain the reserved character `\|`, because that is used internally by Akka as a separator. | ## Key value entity’s effect API The Key Value Entity’s Effect defines the operations that Akka should perform when an incoming command is handled by a Key Value Entity. A Key Value Entity Effect can either: - update the entity state and send a reply to the caller - directly reply to the caller if the command is not requesting any state change - instruct Akka to delete the entity - return an error message For additional details, refer to [Declarative Effects](../concepts/declarative-effects.html). ## Implementing behavior Now that we have our Entity state defined, the remaining steps can be summarized as follows: - Declare your entity and pick a component id (it needs to be a unique identifier). - Initialize your entity state. - Implement how each command is handled. The class signature for our counter entity will look like this: [CounterEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-counter/src/main/java/com/example/application/CounterEntity.java) ```java @Component(id = "counter") // (1) public class CounterEntity extends KeyValueEntity<Counter> { // (2) private final String entityId; public CounterEntity(KeyValueEntityContext context) { this.entityId = context.entityId(); // (3) } @Override public Counter emptyState() { return new Counter(0); // (4) } } ``` | **1** | Every Entity must be annotated with `@Component` with a stable unique identifier for this entity type. | | **2** | The `CounterEntity` class should extend `akka.javasdk.keyvalueentity.KeyValueEntity`. | | **3** | Stores the `entityId` on an internal attribute so we can use it later. Alternatively, if inside a command handler, `commandContext().entityId()` also provides such information. | | **4** | The initial state of each counter is defined with value 0. | | | The `@Component` value `counter` is common for all instances of this entity but must be stable - cannot be changed after a production deploy - and unique across the different entity types in the service. | ### Updating state We will now show how to add the command handlers for supporting the two desired operations (`set` and `plusOne`). Command handlers are implemented as methods on the entity class but are also exposed for external interactions and always return an `Effect` of some type. [CounterEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-counter/src/main/java/com/example/application/CounterEntity.java) ```java public Effect<Counter> set(int number) { Counter newCounter = new Counter(number); return effects() .updateState(newCounter) // (1) .thenReply(newCounter); // (2) } public Effect<Counter> plusOne() { Counter newCounter = currentState().increment(1); // (3) return effects() .updateState(newCounter) // (4) .thenReply(newCounter); } ``` | **1** | Set the new counter value to the value received from the command request. | | **2** | Reply with the new counter value wrapped within a `Counter` object. | | **3** | `plusOne` increases the counter by adding 1 to the current state. | | **4** | Finally, using the Effect API, you instruct Akka to persist the new state, and build a reply with the wrapper object. | | | The **only** way for a command handler to modify the Entity’s state is using the `updateState` effect. Any modifications made directly to the state (or instance variables) from the command handler are not persisted.
When the Entity is passivated and reloaded, those modifications will not be present. | ### Retrieving state To have access to the current state of the entity we can use `currentState()`, as you have probably noticed from the examples above. The following example shows the implementation of the read-only command handler `get` to retrieve the value for a specific counter: [CounterEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-counter/src/main/java/com/example/application/CounterEntity.java) ```java public ReadOnlyEffect<Counter> get() { return effects().reply(currentState()); // (1) } ``` | **1** | Reply with the current state. | What if this is the first command we are receiving for this entity? The initial state is provided by overriding `emptyState()`. That is optional; if you don’t override it, be careful to deal with a `currentState()` with a `null` value when receiving the first command. | | We are returning the internal state directly back to the requester. In the endpoint, it’s usually best to convert this internal domain model into a public model so the internal representation is free to evolve without breaking client code. | ### Deleting state The next example shows how to delete a Key Value Entity state by returning the special `deleteEntity()` effect. [CounterEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-counter/src/main/java/com/example/application/CounterEntity.java) ```java public Effect<Done> delete() { return effects() .deleteEntity() // (1) .thenReply(done()); } ``` | **1** | We delete the state by returning an `Effect` with `effects().deleteEntity()`. | When you give the instruction to delete the entity it will still exist with an empty state for some time. The actual removal happens later, to give downstream consumers time to process the change. By default, the existence of the entity is completely cleaned up after a week. It is not allowed to make further changes after the entity has been "marked" as deleted. You can still handle read requests to the entity until it has been completely removed, but the current state will be empty. To check whether the entity has been deleted, you can use the `isDeleted` method inherited from the `KeyValueEntity` class. | | If you don’t want to permanently delete an entity, you can instead use the `updateState` effect with an empty state. This will work the same as resetting the entity to its initial state. | It is best to not reuse the same entity id after deletion, but if that happens after the entity has been completely removed it will be instantiated as a completely new entity without any knowledge of previous state. Note that [deleting View state](views.html#ve_delete) must be handled explicitly. ## Multi-region replication Stateful components like Event Sourced Entities, Key Value Entities or Workflows can be replicated to other regions. This is useful for several reasons: - resilience to tolerate failures in one location and still be operational, even multi-cloud redundancy - possibility to serve requests from a location near the user to provide better responsiveness - load balancing to be able to handle high throughput For each stateful component instance there is a primary region, which handles all write requests. Read requests can be served from any region. Read requests are defined by declaring the command handler method with `ReadOnlyEffect` as return type. A read-only handler cannot update the state, and that is enforced at compile time.
[ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java) ```java public ReadOnlyEffect<ShoppingCart> getCart() { return effects().reply(currentState()); } ``` Write requests are defined by declaring the command handler method with `Effect` as return type, instead of `ReadOnlyEffect`. Write requests are routed to the primary region and handled by the stateful component instance in that region, even if the original call to the instance with the component client was made from another region. State changes (Workflow, Key Value Entity) or events (Event Sourced Entity) persisted by the instance in the primary region are replicated to other regions and processed by the corresponding instance there. This means that the state of the stateful components in all regions is updated from the primary. The replication is asynchronous, which means that read replicas are eventually updated. Normally this happens within a few milliseconds, but if there is, for example, a problem with the network between the regions, it can take longer for the read replicas to become up to date, but eventually they will. This also means that you might not see your own writes immediately. Consider the following: - you send a write request that is routed to the primary in another region - after receiving the response of the write request, you send a read request that is served by the non-primary region - the stateful component instance in the non-primary region might not have seen the replicated changes yet, and therefore replies with "stale" information If it’s important for some read requests to see the latest writes, you can use `Effect` as return type for such a command handler, even though it is not persisting any events. Then the request will be routed to the primary and use the latest fully consistent state. The operational aspects are described in [Regions](../operations/regions/index.html). ## Side effects An entity doesn’t perform any external side effects aside from persisting state changes and replying to the request. Side effects can be handled from the Workflow, Consumer, or Endpoint components that are calling the entity. ## Testing the entity There are two ways to test an Entity: - Unit test, which only runs the Entity component with a test kit. - Integration test, running the entire service with a test kit and the test interacting with it using a component client or over HTTP requests. Each way has its benefits: unit tests are faster and provide more immediate feedback about success or failure, but can only test a single entity at a time and in isolation. Integration tests, on the other hand, are more realistic and allow many entities to interact with other components inside and outside the service. ### Unit tests The following snippet shows how the `KeyValueEntityTestKit` is used to test the `CounterEntity` implementation. Akka provides two main APIs for unit tests, the `KeyValueEntityTestKit` and the `KeyValueEntityResult`. The former gives us the overall state of the entity and the ability to call the command handlers, while the latter only holds the effects produced for each individual call to the Entity.
[CounterTest.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-counter/src/test/java/com/example/CounterTest.java) ```java @Test public void testSetAndIncrease() { var testKit = KeyValueEntityTestKit.of(CounterEntity::new); // (1) var resultSet = testKit.method(CounterEntity::set).invoke(10); // (2) assertTrue(resultSet.isReply()); assertEquals(10, resultSet.getReply().value()); // (3) var resultPlusOne = testKit.method(CounterEntity::plusOne).invoke(); // (4) assertTrue(resultPlusOne.isReply()); assertEquals(11, resultPlusOne.getReply().value()); assertEquals(11, testKit.getState().value()); // (5) } ``` | **1** | Creates the TestKit passing the constructor of the Entity. | | **2** | Calls the method `set` from the Entity in the `KeyValueEntityTestKit` with value `10`. | | **3** | Asserts the reply value is `10`. | | **4** | Calls the method `plusOne` from the Entity in the `KeyValueEntityTestKit` and asserts the reply value is `11`. | | **5** | Asserts the state value after both operations is `11`. | | | The `KeyValueEntityTestKit` is stateful, and it holds the state of a single entity instance in memory. If you want to test more than one entity in a test, you need to create multiple instances of `KeyValueEntityTestKit`. | ### Integration tests The skeleton of an Integration Test is included in the [getting started sample](../getting-started/author-your-first-service.html). Let’s see what it could look like to test our Counter Entity: [CounterIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-counter/src/test/java/com/example/CounterIntegrationTest.java) ```java public class CounterIntegrationTest extends TestKitSupport { // (1) @Test public void verifyCounterSetAndIncrease() { Counter counterGet = componentClient // (2) .forKeyValueEntity("bar") .method(CounterEntity::get) // (3) .invoke(); Assertions.assertEquals(0, counterGet.value()); Counter counterPlusOne = componentClient .forKeyValueEntity("bar") .method(CounterEntity::plusOne) // (4) .invoke(); Assertions.assertEquals(1, counterPlusOne.value()); Counter counterGetAfter = componentClient // (5) .forKeyValueEntity("bar") .method(CounterEntity::get) .invoke(); Assertions.assertEquals(1, counterGetAfter.value()); } } ``` | **1** | Note the test class must extend `TestKitSupport`. | | **2** | A built-in component client is provided to interact with the components. | | **3** | Get the current value of the counter named `bar`. Initial value of counter is expected to be `0`. | | **4** | Request to increase the value of counter `bar`. Response should have value `1`. | | **5** | Explicitly request current value of `bar`. It should be `1`. | | | The integration tests in samples can be run using `mvn verify`. | [Event Sourced Entities](event-sourced-entities.html) [HTTP Endpoints](http-endpoints.html) - [Akka](../index.html) - [Developing](index.html) - [Components](components/index.html) - [Workflows](workflows.html) # Implementing Workflows ![Workflow](../_images/workflow.png) Workflows implement long-running, multi-step business processes while allowing developers to focus exclusively on domain and business logic. Workflows provide durability, consistency and the ability to call other components and services. Business transactions can be modeled in one central place, and the Workflow will keep them running smoothly, or roll back if something goes wrong.
Users can see the workflow execution details in the console (both [locally](running-locally.html#_local_console) and in the [cloud](https://console.akka.io/)). ![workflow execution](_images/workflow-execution.png) Entity and Workflow sharding [Stateful components](../reference/glossary.html#stateful_component), such as Entities and Workflows, offer strong consistency guarantees. Each stateful component can have many instances, identified by [ID](../reference/glossary.html#id). Akka distributes them across every service instance in the cluster. We guarantee that there is only one stateful component instance for a given id in the whole service cluster. If a command arrives to a service instance not hosting that stateful component instance, the command is forwarded by the Akka Runtime to the one that hosts that particular component instance. This forwarding is done transparently via [Component Client](../reference/glossary.html#component_client) logic. Because each stateful component instance lives on exactly one service instance, messages can be handled sequentially. Hence, there are no concurrency concerns: each Entity or Workflow instance handles one message at a time. The state of the stateful component instance is kept in memory as long as it is active. This means it can serve read requests, or validate commands before updating, without additional reads from the durable storage. There might not be room for all stateful component instances to be kept active in memory all the time, and therefore the least recently used instances can be passivated. When the stateful component is used again it recovers its state from durable storage and becomes active with its system of record in memory, backed by consistent durable storage. This recovery process is also used in cases of rolling updates, rebalance, and abnormal crashes. ## Workflow’s Effect API Workflows have two distinct Effect APIs: - **Effect API**: Used for handling external commands. Methods that are exposed as public command handlers (invoked via the component client) must return an `Effect`. This API allows you to update the workflow state, transition to a step, pause, end, delete, fail, or reply to commands. - **StepEffect API**: Used for internal workflow steps. Methods that implement workflow steps (typically private) must return a `StepEffect`. This API is similar in spirit to `Effect`, but is specialized for guiding the internal execution and transitions between workflow steps. It provides methods to update state, transition to the next step, pause, end, or delete the workflow. Workflow is the only component that has both APIs: `Effect` for external commands and `StepEffect` for internal steps. This separation allows workflows to clearly distinguish between external interactions and internal orchestration logic. For additional details, refer to [Declarative Effects](../concepts/declarative-effects.html). ## Skeleton A Workflow implementation has the following code structure.
[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/application/TransferWorkflow.java) ```java @Component(id = "transfer") // (1) public class TransferWorkflow extends Workflow<TransferState> { // (2) public record Withdraw(String from, int amount) {} @Override public WorkflowSettings settings() { // (3) return WorkflowSettings.builder() .defaultStepTimeout(ofSeconds(2)) .build(); } private StepEffect withdrawStep() { // (4) // TODO: implement your step logic here return stepEffects() // (5) .updateState(currentState().withStatus(WITHDRAW_SUCCEEDED)) .thenTransitionTo(TransferWorkflow::depositStep); } private StepEffect depositStep() { // TODO: implement your step logic here return stepEffects() .updateState(currentState().withStatus(COMPLETED)) .thenEnd(); } public Effect<Done> startTransfer(Transfer transfer) { // (6) TransferState initialState = new TransferState(transfer); return effects() // (7) .updateState(initialState) .transitionTo(TransferWorkflow::withdrawStep) .thenReply(done()); } } ``` | **1** | Annotate the class with `@Component` and pass a unique identifier for this workflow type. | | **2** | Class that extends `Workflow`. | | **3** | Define optional configuration. | | **4** | A step is a method that returns a `StepEffect`. | | **5** | The step executes a given code and returns the effect that will instruct the runtime what to do next. For instance, you can update the state and transition to the next step. | | **6** | The workflow has methods that can be called with the component client. | | **7** | Those methods return an `Effect`, which can be instructions to update the state and transition to a certain step. | There must be at least one public command handler method, which returns `Effect`. It is the command handler methods that can be called with the component client from other components, such as an endpoint. The workflow is started by the first command, which will transition to the initial step. Step handler methods can be private and are used to implement the internal business logic of the workflow. Although the `StepEffect` looks similar to the `Effect`, it is not the same. The `StepEffect` has a different set of available methods that will guide the workflow execution. ## Modeling state We want to build a simple workflow that transfers funds between two wallets. Before that, we will create a wallet subdomain with some basic functionality that we can use later. A `WalletEntity` is implemented as an [Event Sourced Entity](event-sourced-entities.html), which is a better choice than a Key Value Entity for implementing a wallet, because a ledger of all transactions is usually required by the business. The `Wallet` record represents the domain object that holds the wallet balance. We can also withdraw or deposit funds to the wallet. [Wallet.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/wallet/domain/Wallet.java) ```java public record Wallet(String id, int balance) { public Wallet withdraw(int amount) { return new Wallet(id, balance - amount); } public Wallet deposit(int amount) { return new Wallet(id, balance + amount); } } ``` Domain events for creating and updating the wallet.
[WalletEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/wallet/domain/WalletEvent.java)

```java
public sealed interface WalletEvent {
  @TypeName("created")
  record Created(int initialBalance) implements WalletEvent {}

  @TypeName("withdrawn")
  record Withdrawn(int amount) implements WalletEvent {}

  @TypeName("deposited")
  record Deposited(int amount) implements WalletEvent {}
}
```

The domain object is wrapped with an Event Sourced Entity component.

[WalletEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/wallet/application/WalletEntity.java)

```java
@Component(id = "wallet")
public class WalletEntity extends EventSourcedEntity<Wallet, WalletEvent> {

  public Effect<Done> create(int initialBalance) { // (1)
    if (currentState() != null) {
      return effects().error("Wallet already exists");
    } else {
      return effects()
        .persist(new WalletEvent.Created(initialBalance))
        .thenReply(__ -> done());
    }
  }

  public Effect<Done> withdraw(int amount) { // (2)
    if (currentState() == null) {
      return effects().error("Wallet does not exist");
    } else if (currentState().balance() < amount) {
      return effects().error("Insufficient balance");
    } else {
      return effects().persist(new WalletEvent.Withdrawn(amount)).thenReply(__ -> done());
    }
  }

  public Effect<Done> deposit(int amount) { // (3)
    if (currentState() == null) {
      return effects().error("Wallet does not exist");
    } else {
      return effects().persist(new WalletEvent.Deposited(amount)).thenReply(__ -> done());
    }
  }

  public Effect<Integer> get() { // (4)
    if (currentState() == null) {
      return effects().error("Wallet does not exist");
    } else {
      return effects().reply(currentState().balance());
    }
  }
}
```

| **1** | Create a wallet with an initial balance. |
| **2** | Withdraw funds from the wallet. |
| **3** | Deposit funds to the wallet. |
| **4** | Get the current wallet balance. |

Now we can focus on the workflow implementation itself. A workflow has state, which can be updated in command handlers and step implementations. When modeling the state, consider the information that is required for validation, running the steps, collecting data from steps, and tracking the workflow progress.

[TransferState.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/transfer/domain/TransferState.java)

```java
public record TransferState(Transfer transfer, TransferStatus status) {
  public record Transfer(String from, String to, int amount) {} // (1)

  public enum TransferStatus { // (2)
    STARTED,
    WITHDRAW_SUCCEEDED,
    COMPLETED,
  }

  public TransferState(Transfer transfer) {
    this(transfer, STARTED);
  }

  public TransferState withStatus(TransferStatus newStatus) {
    return new TransferState(transfer, newStatus);
  }
}
```

| **1** | A `Transfer` record encapsulates data required to withdraw and deposit funds. |
| **2** | A `TransferStatus` is used to track workflow progress. |

## Implementing behavior

Now that we have our workflow state defined, the remaining tasks can be summarized as follows:

- declare your workflow and pick a workflow id (it needs to be unique as it will be used for sharding purposes);
- implement handler(s) to interact with the workflow (e.g. to start a workflow, or provide additional data) or retrieve its current state;
- provide a workflow definition with all possible steps and transitions between them.

## Starting workflow

Let’s look at how our transfer workflow covers the first two points from the list above.
We will now define a `startTransfer` command handler that returns an `Effect` to start the workflow, updating the state with an initial value and transitioning to the first step.

[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
@Component(id = "transfer") // (1)
public class TransferWorkflow extends Workflow<TransferState> { // (2)

  public Effect<Done> startTransfer(Transfer transfer) { // (3)
    if (transfer.amount() <= 0) { // (4)
      return effects().error("transfer amount should be greater than zero");
    } else if (currentState() != null) {
      return effects().error("transfer already started");
    } else {
      TransferState initialState = new TransferState(transfer); // (5)
      Withdraw withdrawInput = new Withdraw(transfer.from(), transfer.amount());
      return effects()
        .updateState(initialState) // (6)
        .transitionTo(TransferWorkflow::withdrawStep) // (7)
        .withInput(withdrawInput)
        .thenReply(done()); // (8)
    }
  }
```

| **1** | Annotate the class with `@Component` and pass a unique identifier for this workflow type. |
| **2** | Extend `Workflow<S>`, where `S` is the state type this workflow will store (i.e. `TransferState`). |
| **3** | Create a method to start the workflow that returns an `Effect` class. |
| **4** | The validation ensures the transfer amount is greater than zero and that the workflow is not already running. Otherwise, we might corrupt the existing workflow. |
| **5** | From the incoming data we create an initial `TransferState`. |
| **6** | We instruct Akka to persist the new state. |
| **7** | With the `transitionTo` method, we declare that the first step is `TransferWorkflow::withdrawStep` and that the input for this step is a `Withdraw` object. |
| **8** | The last instruction is to inform the caller that the workflow was successfully started. |

| | The `@Component` value `transfer` is common for all instances of this workflow but must be stable - it cannot be changed after a production deploy - and unique across the different workflow types. |

## Workflow steps

The missing piece of our transfer workflow is the implementation of the workflow steps. A workflow `Step` has an action to perform (e.g. a call to an Akka component, or a call to an external service) and a transition to select the next step (or the `end` transition to finish the workflow, in the case of the last step).
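Schematically, every step method has the same two-part shape: perform an action, then return a `StepEffect` describing what happens next. The following is a sketch only; `SomeInput`, `performTheAction`, `withOutcome`, and `nextStep` are placeholders, not names from the sample:

```java
private StepEffect someStep(SomeInput input) {
  // 1. perform the action, e.g. a component client call or an external service call
  var outcome = performTheAction(input);

  // 2. build the StepEffect: optionally update state, then transition
  return stepEffects()
    .updateState(currentState().withOutcome(outcome))
    .thenTransitionTo(TransferWorkflow::nextStep) // or .thenEnd() for the last step
    .withInput(outcome);
}
```

The concrete steps of the transfer workflow follow this shape: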
[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
public record Withdraw(String from, int amount) {} // (1)

public record Deposit(String to, int amount) {} // (1)

private final ComponentClient componentClient;

public TransferWorkflow(ComponentClient componentClient) {
  this.componentClient = componentClient;
}

@StepName("withdraw") // (2)
private StepEffect withdrawStep(Withdraw withdraw) {
  componentClient
    .forEventSourcedEntity(withdraw.from())
    .method(WalletEntity::withdraw)
    .invoke(withdraw.amount()); // (3)

  String to = currentState().transfer().to(); // (4)
  int amount = currentState().transfer().amount();
  Deposit depositInput = new Deposit(to, amount);

  return stepEffects()
    .updateState(currentState().withStatus(WITHDRAW_SUCCEEDED))
    .thenTransitionTo(TransferWorkflow::depositStep) // (5)
    .withInput(depositInput);
}

@StepName("deposit") // (2)
private StepEffect depositStep(Deposit deposit) { // (6)
  componentClient
    .forEventSourcedEntity(deposit.to())
    .method(WalletEntity::deposit)
    .invoke(deposit.amount());

  return stepEffects()
    .updateState(currentState().withStatus(COMPLETED))
    .thenEnd(); // (7)
}
```

| **1** | Step input definitions. |
| **2** | An optional `@StepName` can be used to specify the step name; otherwise the method name is used. |
| **3** | Using the [ComponentClient](component-and-service-calls.html#_component_client), which is injected in the constructor, we instruct Akka to run a given call to withdraw funds from a wallet. |
| **4** | The state of the workflow can be accessed with the `currentState()` method. |
| **5** | After a successful withdrawal, we return a `StepEffect` that will update the workflow state and move to the next step, defined in the `TransferWorkflow::depositStep` method. The input parameter for this step is a `Deposit` record. |
| **6** | Another workflow step implementation to deposit funds to a given wallet. |
| **7** | This time we return an effect that will stop workflow processing, by using the special `thenEnd` method. |

A step consists of two parts: executing the business logic and constructing the `StepEffect`, where you can update the state, decide (based on the business logic outcome) what the next step should be, and provide its input if required.

The workflow will automatically execute the steps in a reliable and durable way. This means that if a step fails, it will be retried until it succeeds or until the retry limit of the recovery strategy is reached, at which point separate error handling can be performed. The state machine of the workflow is durable, which means that if the workflow is restarted for some reason it will continue from where it left off, i.e. execute the current non-completed step again.

| | In the following example, none of the `WalletEntity` interactions are idempotent. This means that if a workflow step is retried, it will perform the deposit or withdrawal again. In a real-world scenario, you should consider making all interactions idempotent with a proper deduplication mechanism. A very basic example of handling retries for workflows can be found in [this](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/wallet/domain/Wallet.java) sample. |

## Retrieving state

To have access to the current state of the workflow we can use `currentState()`. However, if this is the first command we are receiving for this workflow, the state will be `null`.
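We can change that by overriding the `emptyState` method. A minimal sketch of such an override is shown below; the "empty" `TransferState` value is hypothetical and not part of the linked sample, which keeps the default of `null`:

```java
@Override
public TransferState emptyState() {
  // Hypothetical "no transfer yet" value, used instead of null before the first command.
  return new TransferState(null, TransferStatus.STARTED);
}
```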
The following example shows the implementation of the read-only command handler:

[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
public ReadOnlyEffect<TransferState> getTransferState() {
  if (currentState() == null) {
    return effects().error("transfer not started");
  } else {
    return effects().reply(currentState()); // (1)
  }
}
```

| **1** | Return the current state as the reply to the request. |

| | We are returning the internal state directly back to the requester. In the endpoint, it’s usually best to convert this internal domain model into a public model so the internal representation is free to evolve without breaking client code. |

A full transfer workflow source code sample can be downloaded as a [zip file](../java/_attachments/workflow-quickstart.zip). Follow the `README` file to run and test it.

## Deleting state

If you want to delete the workflow state, you can use the `effects().delete` method. This will remove the workflow from the system.

[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
public Effect<Done> delete() {
  return effects()
    .delete() // (1)
    .thenReply(done());
}
```

| **1** | Instruction to delete the workflow. |

Deleting a running workflow is equivalent to ending and then deleting it. Already finished workflows can also be deleted from a command handler. The actual removal of the workflow state is delayed to give downstream consumers time to process all prior updates, including the fact that the workflow has been deleted (via a method annotated with `@DeleteHandler`). By default, the existence of the workflow is completely cleaned up after a week.

You can still handle read requests to the workflow until it has been completely removed, but the current state will be empty (or null). To check whether the workflow has been deleted, you can use the `isDeleted` method inherited from the `Workflow` class.

It is best not to reuse the same workflow id after deletion, but if it is reused after the workflow has been completely removed, it will be instantiated as a completely new workflow without any knowledge of the previous state.

## Calling external services

The Workflow can be used not only to orchestrate Akka components, but also to call external services. The step implementation can invoke an [HTTP endpoint](component-and-service-calls.html#_external_http_services), a [gRPC service](component-and-service-calls.html#_external_grpc_services), or any other service that can be called from the Java code.
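For instance, a step could call a REST API through an HTTP client injected via the SDK’s `HttpClientProvider`. The following is a rough sketch only; the base URL, the `/check` path, and the `FraudResult` record are hypothetical, not part of the sample:

```java
public record FraudResult(boolean accepted) {} // hypothetical response payload

private final HttpClient fraudClient;

public TransferWorkflow(HttpClientProvider httpClientProvider) {
  // Hypothetical external fraud-check service
  this.fraudClient = httpClientProvider.httpClientFor("https://fraud.example.com");
}

private StepEffect checkFraudStep() {
  var response = fraudClient
    .POST("/check")
    .withRequestBody(currentState().transfer())
    .responseBodyAs(FraudResult.class)
    .invoke(); // a failed call is retried by the workflow's recovery strategy
  // ...choose the next transition based on response.body()...
  return stepEffects().thenEnd();
}
```

The transfer sample below instead hides the external call behind a dedicated `fraudDetectionService` interface: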
[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
private StepEffect detectFraudsStep() {
  FraudDetectionService.FraudDetectionResult result = fraudDetectionService.detectFrauds(
    currentState().transfer()
  ); // (1)

  var workflowId = commandContext().workflowId();
  var transfer = currentState().transfer();
  return switch (result) {
    case ACCEPTED -> { // (2)
      TransferState initialState = TransferState.create(workflowId, transfer);
      Withdraw withdrawInput = new Withdraw(initialState.withdrawId(), transfer.amount());
      yield stepEffects()
        .updateState(initialState)
        .thenTransitionTo(TransferWorkflow::withdrawStep)
        .withInput(withdrawInput);
    }
    case MANUAL_ACCEPTANCE_REQUIRED -> { // (3)
      TransferState waitingForAcceptanceState = TransferState.create(
        workflowId,
        transfer
      ).withStatus(WAITING_FOR_ACCEPTANCE);
      yield stepEffects()
        .updateState(waitingForAcceptanceState)
        .thenTransitionTo(TransferWorkflow::waitForAcceptanceStep);
    }
  };
}
```

| **1** | Calls an external service to detect frauds. |
| **2** | When the transfer is accepted, continues with the next step. |
| **3** | Otherwise, transitions to the `WAITING_FOR_ACCEPTANCE` step, which will [pause](about:blank#_pausing_workflow) the workflow and wait for human acceptance of the transfer. |

## Pausing workflow

A long-running workflow can be paused while waiting for some additional information to continue processing. A special `pause` transition can be used to inform Akka that the execution of the Workflow should be postponed. The processing can then be resumed by calling a Workflow command handler. Additionally, a Timer can be scheduled to automatically inform the Workflow that the expected time for the additional data has passed.

[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
private StepEffect waitForAcceptanceStep() {
  String transferId = currentState().transferId();
  timers()
    .createSingleTimer(
      "acceptanceTimeout-" + transferId,
      ofHours(8),
      componentClient
        .forWorkflow(transferId)
        .method(TransferWorkflow::acceptanceTimeout)
        .deferred()
    ); // (1)

  return stepEffects().thenPause(); // (2)
}
```

| **1** | Schedules a timer as a Workflow step action. Make sure that the timer name is unique for every Workflow instance. |
| **2** | Pauses the Workflow execution. |

| | Remember to cancel the timer once the Workflow is resumed. |

| | Exposing additional mutating methods from the Workflow implementation should be done with special caution. Accepting a call to such a method should only be possible when the Workflow is in the expected state. If the workflow is in the middle of a step execution, such a call will be queued and only handled once the step completes. |
[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
public Effect<String> accept() {
  if (currentState() == null) {
    return effects().error("transfer not started");
  } else if (currentState().status() == WAITING_FOR_ACCEPTANCE) { // (1)
    Transfer transfer = currentState().transfer();
    Withdraw withdrawInput = new Withdraw(currentState().withdrawId(), transfer.amount());
    return effects()
      .transitionTo(TransferWorkflow::withdrawStep)
      .withInput(withdrawInput)
      .thenReply("transfer accepted");
  } else { // (2)
    return effects()
      .error("Cannot accept transfer with status: " + currentState().status());
  }
}
```

| **1** | Accepts the request only when the status is `WAITING_FOR_ACCEPTANCE`. |
| **2** | Otherwise, rejects the request. |

## Error handling

Design for failure is a key attribute of all Akka components, and the Workflow has the richest set of failure-handling configurations of them all. These are essential for building robust and reliable solutions.

### Step Timeouts

Each workflow step has a default timeout of 5 seconds. You can override this default for all steps in the workflow `settings` method, or set a custom timeout for individual steps.

[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
@Override
public WorkflowSettings settings() {
  return WorkflowSettings.builder()
    .defaultStepTimeout(ofSeconds(2)) // (1)
    .stepTimeout(TransferWorkflow::failoverHandlerStep, ofSeconds(1)) // (2)
    .build();
}
```

| **1** | Sets a default timeout for all workflow steps. |
| **2** | Overrides the step timeout for a specific step. |

| | If you need a global workflow timeout, consider using durable timers as described in the [Pausing workflow](about:blank#_pausing_workflow) section. |

### Recover strategy

It’s time to define what should happen in case of a step timeout or any other unhandled error.

[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
@Override
public WorkflowSettings settings() {
  return WorkflowSettings.builder()
    .defaultStepRecovery(maxRetries(1).failoverTo(TransferWorkflow::failoverHandlerStep)) // (1)
    .stepRecovery(
      TransferWorkflow::depositStep,
      maxRetries(2).failoverTo(TransferWorkflow::compensateWithdrawStep)
    ) // (2)
    .build();
}
```

| **1** | Set a default failover transition for all steps with the maximum number of retries. |
| **2** | Override the step recovery strategy for the `deposit` step. |

### Compensation

The idea behind Workflow error handling is that workflows should only fail due to unknown errors during execution. In general, you should always write your workflows so that they do not fail on any known edge cases. If you expect an error, it’s better to be explicit about it, possibly with your domain types. Based on this information and the flexible Workflow API you can define a compensation for any workflow step.
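The compensation example below relies on such an explicit result type: the wallet entity replies with a `WalletResult` instead of failing the call. A sketch of what such a type could look like (the actual definition in the sample may differ):

```java
// Models the expected outcomes of a wallet operation as data instead of exceptions.
public sealed interface WalletResult {
  record Success() implements WalletResult {}

  record Failure(String errorMsg) implements WalletResult {}
}
```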
[TransferWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow-compensation/src/main/java/com/example/transfer/application/TransferWorkflow.java)

```java
private StepEffect depositStep(Deposit deposit) {
  String to = currentState().transfer().to();
  WalletResult result = componentClient
    .forEventSourcedEntity(to)
    .method(WalletEntity::deposit) // (1)
    .invoke(deposit);

  return switch (result) {
    case Success __ -> stepEffects() // (2)
      .updateState(currentState().withStatus(COMPLETED))
      .thenEnd();
    case Failure failure -> {
      yield stepEffects()
        .updateState(currentState().withStatus(DEPOSIT_FAILED))
        .thenTransitionTo(TransferWorkflow::compensateWithdrawStep); // (3)
    }
  };
}

private StepEffect compensateWithdrawStep() { // (4)
  var transfer = currentState().transfer();
  String commandId = currentState().depositId();
  WalletResult result = componentClient
    .forEventSourcedEntity(transfer.from())
    .method(WalletEntity::deposit)
    .invoke(new Deposit(commandId, transfer.amount()));

  return switch (result) {
    case Success __ -> stepEffects() // (5)
      .updateState(currentState().withStatus(COMPENSATION_COMPLETED))
      .thenEnd();
    case Failure __ -> throw new IllegalStateException( // (6)
      "Expecting succeed operation but received: " + result
    );
  };
}
```

| **1** | Explicit deposit call result type `WalletResult`. |
| **2** | Finish the workflow as completed, in the case of a successful deposit. |
| **3** | Launch the compensation step to handle deposit failure. The `withdraw` step must be reversed. |
| **4** | The compensation step is like any other step, with the same set of functionalities. |
| **5** | Correct compensation can finish the workflow. |
| **6** | Any other result might be handled by a default recovery strategy. |

Compensating a workflow step might involve multiple logical steps and is thus part of the overall business logic that must be defined within the workflow itself. For simplicity, the compensation in the example above is applied only to the `withdraw` step, whereas the `deposit` step itself might also require compensation. In the case of a step timeout we cannot be certain whether the step succeeded or failed.

A full error handling and compensation sample can be downloaded as a [zip file](../java/_attachments/workflow-quickstart.zip). Run `TransferWorkflowIntegrationTest` and examine the logs from the application.

## Multi-region replication

Stateful components like Event Sourced Entities, Key Value Entities, or Workflows can be replicated to other regions. This is useful for several reasons:

- resilience to tolerate failures in one location and still be operational, even multi-cloud redundancy
- possibility to serve requests from a location near the user to provide better responsiveness
- load balancing to be able to handle high throughput

For each stateful component instance there is a primary region, which handles all write requests. Read requests can be served from any region.

Read requests are defined by declaring the command handler method with `ReadOnlyEffect` as return type. A read-only handler cannot update the state, and that is enforced at compile time.

[ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java)

```java
public ReadOnlyEffect<ShoppingCart> getCart() {
  return effects().reply(currentState());
}
```

Write requests are defined by declaring the command handler method with `Effect` as return type, instead of `ReadOnlyEffect`.
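For contrast, a write handler on the same shopping cart might look like the following sketch; the `addItem` command, `LineItem` type, and `ItemAdded` event are assumptions for illustration, not part of the excerpt above:

```java
public Effect<Done> addItem(LineItem item) {
  if (item.quantity() <= 0) {
    return effects().error("Quantity must be greater than zero.");
  }
  return effects()
    .persist(new ShoppingCartEvent.ItemAdded(item)) // hypothetical event type
    .thenReply(__ -> done());
}
```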
Write requests are routed to the primary region and handled by the stateful component instance in that region, even if the original call to the instance with the component client was made from another region.

State changes (Workflow, Key Value Entity) or events (Event Sourced Entity) persisted by the instance in the primary region are replicated to other regions and processed by the corresponding instance there. This means that the state of the stateful components in all regions is updated from the primary.

The replication is asynchronous, which means that read replicas are eventually updated. Normally this happens within a few milliseconds, but if there is, for example, a network problem between the regions, it can take longer for the read replicas to become up to date; eventually they will.

This also means that you might not immediately see your own writes. Consider the following:

- you send a write request, which is routed to the primary in another region;
- after receiving the response to the write request, you send a read request that is served by the non-primary region;
- the stateful component instance in the non-primary region might not have seen the replicated changes yet, and therefore replies with "stale" information.

If it is important for some read requests to see the latest writes, you can use `Effect` as the return type for that command handler, even though it does not persist any events. The request will then be routed to the primary and use the latest, fully consistent state.

The operational aspects are described in [Regions](../operations/regions/index.html).

# Consuming and producing

![Consumer](../_images/consumer.png) Consuming or producing a stream of events is a common Microservices pattern. It allows stream-based interaction between Akka services and other systems. The source of events can be the journal of an Event Sourced Entity, state changes in a Key Value Entity, or a message broker topic.

In this section, we will explore how you can use a Consumer component to:

- Consume events emitted by an Event Sourced Entity within the same service
- Consume state changes emitted by a Key Value Entity within the same service
- Consume state changes emitted by a Workflow within the same service
- Consume events or state from an Entity or Workflow in another service using service to service eventing
- Consume messages from topics of Google Cloud Pub/Sub or Apache Kafka
- Produce messages to a Google Cloud Pub/Sub or Apache Kafka topic

| | Events and messages are guaranteed to be delivered at least once. This means that Consumers must be able to handle duplicated messages. |

## Consumer’s Effect API

The Consumer’s Effect defines the operations that Akka should perform when an incoming message is delivered to the Consumer. A Consumer Effect can either:

- return a message to be published to a Topic (in case the method is a publisher)
- return Done to indicate that the message was processed successfully
- ignore the incoming message

For additional details, refer to [Declarative Effects](../concepts/declarative-effects.html).

## Consume from Event Sourced Entity

You can consume events from an Event Sourced Entity by adding `@Consume.FromEventSourcedEntity` as a type-level annotation of your Consumer implementation.
[CounterEventsConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterEventsConsumer.java)

```java
@Component(id = "counter-events-consumer") // (1)
@Consume.FromEventSourcedEntity(CounterEntity.class) // (2)
public class CounterEventsConsumer extends Consumer { // (3)

  public Effect onEvent(CounterEvent event) { // (4)
    return switch (event) {
      case ValueIncreased valueIncreased -> effects().done(); // (5)
      case ValueMultiplied valueMultiplied -> effects().ignore(); // (6)
    };
  }
}
```

| **1** | Set the component id, like for any other component. |
| **2** | Set the source of the events to the Event Sourced Entity `CounterEntity`. |
| **3** | Extend the `Consumer` component class. |
| **4** | Add a handler for `CounterEvent` events. |
| **5** | Return `effects().done()` when processing is completed. |
| **6** | Return `effects().ignore()` to ignore the event and continue the processing. |

If an exception is raised during event processing, the Akka runtime will redeliver the event until the application processes it without failure.

When deleting an Event Sourced Entity and you want to act on the deletion in a consumer, make sure to persist a final event representing the deletion before triggering the delete.

## Consume from Key Value Entity

You can consume state changes from a Key Value Entity. To receive messages with the entity state changes, annotate the Consumer with `@Consume.FromKeyValueEntity` and specify the class of the entity. Although it looks similar to an Event Sourced Entity Consumer, the semantics are slightly different. The Key Value Entity Consumer is guaranteed to receive the most recent state change, but not necessarily all changes. Normally it will receive all changes, but changes may be omitted in case of a very high update pace, and a new consumer will not see all historical changes.

[ShoppingCartConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-shopping-cart/src/main/java/com/example/application/ShoppingCartConsumer.java)

```java
@Component(id = "shopping-cart-consumer")
@Consume.FromKeyValueEntity(ShoppingCartEntity.class) // (1)
public class ShoppingCartConsumer extends Consumer {

  public Effect onChange(ShoppingCart shoppingCart) { // (2)
    //processing shopping cart change
    return effects().done();
  }

  @DeleteHandler
  public Effect onDelete() { // (3)
    //processing shopping cart delete
    return effects().done();
  }
}
```

| **1** | Set the source to the Key Value Entity `ShoppingCartEntity`. |
| **2** | Add a handler for `ShoppingCart` state updates. |
| **3** | Add a handler for when the entity is deleted. This handler is optional if the entity is never deleted. |

## Consume from Workflow

You can consume state changes from a Workflow. To receive messages with the state changes, annotate the Consumer with `@Consume.FromWorkflow` and specify the class of the workflow.

[TransferStateConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/transfer/application/TransferStateConsumer.java)

```java
@Component(id = "transfer-state-consumer")
@Consume.FromWorkflow(TransferWorkflow.class) // (1)
public class TransferStateConsumer extends Consumer {

  public Effect onUpdate(TransferState transferState) { // (2)
    // processing transfer state change
    return effects().done();
  }

  @DeleteHandler
  public Effect onDelete() { // (3)
    // processing transfer state delete
    return effects().done();
  }
}
```

| **1** | Set the source to the `TransferWorkflow`. |
| **2** | Add a handler for `TransferState` updates. |
| **3** | Add a handler for when the workflow state is deleted. This handler is optional if the workflow is never deleted. |

If you need additional information about the change origin, executed steps, etc., you can encode it in the state class.

## Service to Service Eventing

An Akka application can be composed of multiple services working to support specific business requirements. Although each service is an independently deployable unit, often information needs to flow between those services.

Akka provides brokerless at-least-once event delivery across Akka services in the same project through Service to Service eventing. The source of the events is an [Event Sourced Entity](event-sourced-entities.html) or a [Key Value Entity](key-value-entities.html). Its events/changes can be published as a stream and consumed by another Akka service without the need to set up a message broker.

| | For eventing from an entity inside the same Akka service as the consuming component, use regular [@Consume.FromEventSourcedEntity](about:blank#consume-from-event-sourced-entity) instead of Service to Service eventing. |

### Event Producer

The event producer is a Consumer that consumes the events from a local source and makes them available for consumption from another service. This is done with an additional `@Produce.ServiceStream` annotation; the stream `id` is what identifies this specific stream of events to the consuming services. The ACL configuration is set to allow access from specific (or all) Akka services.

To illustrate how to publish entity events, let’s assume the existence of a `CustomerEntity` that emits events of types `CustomerCreated`, `NameChanged`, and `AddressChanged`. The events are delivered to a Consumer, transformed into a public set of event types, and published to a stream.

[CustomerEvents.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/api/CustomerEvents.java)

```java
@Component(id = "customer-events-service")
@Consume.FromEventSourcedEntity(CustomerEntity.class) // (1)
@Produce.ServiceStream(id = "customer_events") // (2)
@Acl(allow = @Acl.Matcher(service = "*")) // (3)
public class CustomerEvents extends Consumer {

  public Effect onEvent(CustomerEvent event) { // (4)
    return switch (event) {
      case CustomerCreated created -> effects()
        .produce(new CustomerPublicEvent.Created(created.email(), created.name()));
      case NameChanged nameChanged -> effects()
        .produce(new CustomerPublicEvent.NameChanged(nameChanged.newName()));
      case AddressChanged __ -> effects().ignore(); // (5)
    };
  }
}
```

| **1** | Identify which Event Sourced Entity to publish events for. |
| **2** | Set the public stream identifier for Consumers. |
| **3** | Allow access from other Akka services (in the same project), but not from the public internet. |
| **4** | The event handler transforms the service-internal event model into public API types. |
| **5** | Filter out event types that should not be available to consuming services using `ignore()`. |

### Event Consumer

The consuming side can be a Consumer or a View, annotated with `@Consume.FromServiceStream` with a `service` identifying the publishing service, and the `id` of the stream to subscribe to.
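A plain Consumer subscribes in much the same way as a View updater. A minimal sketch (the consumer class itself is hypothetical, reusing the public event types from the producer above):

```java
@Component(id = "customer-public-events-consumer")
@Consume.FromServiceStream(service = "customer-registry", id = "customer_events") // subscribe to the published stream
public class CustomerPublicEventsConsumer extends Consumer {

  public Effect onEvent(CustomerPublicEvent.Created created) {
    // ...react to the newly created customer...
    return effects().done();
  }
}
```

A View consuming the same stream is shown below.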
[CustomersByNameView.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry-subscriber/src/main/java/customer/application/CustomersByNameView.java)

```java
@Component(id = "customers-by-name")
public class CustomersByNameView extends View {

  @Consume.FromServiceStream( // (1)
    service = "customer-registry", // (2)
    id = "customer_events" // (3)
  )
  public static class CustomersByNameUpdater extends TableUpdater<CustomerEntry> {

    public Effect onEvent( // (4)
      CustomerPublicEvent.Created created
    ) {
      var id = updateContext().eventSubject().get();
      return effects().updateRow(new CustomerEntry(id, created.email(), created.name()));
    }

    public Effect onEvent(CustomerPublicEvent.NameChanged nameChanged) {
      var updated = rowState().withName(nameChanged.newName());
      return effects().updateRow(updated);
    }
  }

  @Query("SELECT * as customers FROM customers_by_name WHERE name = :name")
  public QueryEffect findByName(String name) {
    return queryResult();
  }
}
```

| **1** | Annotate the Table Updater with `@Consume.FromServiceStream` to subscribe to an event stream from another Akka service. |
| **2** | The name of the Akka service publishing the event stream. |
| **3** | The public identifier of the specific stream. Corresponds to the `@Produce.ServiceStream id` in the service publishing the event stream. |
| **4** | One handler method per message type that the stream may contain. |

| | If you’re looking to test this locally, you will likely need to run the two services on different ports. For more details, consult [Running multiple services](running-locally.html#multiple_services). |

## Consume from a message broker Topic

To receive messages from a Google Cloud Pub/Sub or Apache Kafka topic, annotate the Consumer class with `@Consume.FromTopic` and specify the topic name.

| | Only topic names are referenced and no additional details about how to connect to the topics are needed. When deploying the application there must be a broker configuration in the Akka project, with credentials and details on how to connect to the broker. For details about configuring a broker see [Configure message brokers](../operations/projects/message-brokers.html). |

In the following example the events from the topic are delivered to the Consumer and logged.

[CounterEventsTopicConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterEventsTopicConsumer.java)

```java
@Component(id = "counter-events-topic-consumer")
@Consume.FromTopic(value = "counter-events") // (1)
public class CounterEventsTopicConsumer extends Consumer {

  private Logger logger = LoggerFactory.getLogger(CounterEventsTopicConsumer.class);

  public Effect onValueIncreased(ValueIncreased event) { // (2)
    logger.info("Received increased event: " + event.toString());
    return effects().done(); // (3)
  }

  public Effect onValueMultiplied(ValueMultiplied event) { // (2)
    logger.info("Received multiplied event: " + event.toString());
    return effects().done();
  }
}
```

| **1** | Consume from topic 'counter-events'. |
| **2** | Add a handler for a given message type. |
| **3** | Mark processing as completed. |

| | By default, Akka assumes the messages in the topic were serialized as JSON and, as such, deserializes them into the input type of your handlers by taking advantage of the CloudEvents standard. |

### Receiving CloudEvents

This time, instead of a single event handler, we have a handler for each message type.
The Consumer is able to match the payload type to the handler method based on the `ce-type` attribute of the CloudEvent message.

Akka uses the [CloudEvents](https://cloudevents.io/) standard when receiving from and publishing to topics. The CloudEvents specification standardizes message metadata so that systems can integrate more easily. Describing the structure of the message payload is the CloudEvents feature most important to Akka. An example of that is the capability to send serialized JSON messages and have Akka deserialize them accordingly.

To allow proper reading of JSON messages from external topics, the messages need to specify the following message attributes:

- `Content-Type` = `application/json`
- `ce-specversion` = `1.0`
- `ce-type` = fully qualified name (e.g. `com.example.ValueIncreased`)

(The `ce-` prefixed attributes are part of the CloudEvents specification.)

### Receiving Bytes

If the content type is `application/octet-stream`, no content type is present, or the type is unknown to Akka, the message is treated as a binary message. The topic subscriber method must accept the `byte[]` message.

[RawBytesConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/RawBytesConsumer.java)

```java
public class RawBytesConsumer extends Consumer {

  public Effect onMessage(byte[] bytes) { // (1)
    // deserialization logic here
    return effects().done();
  }
}
```

| **1** | When consuming raw bytes messages from a topic the input type must be `byte[]`. |

If a Consumer produces messages of type `byte[]` to a topic, the messages published to the topic will have the content type `application/octet-stream`.

## Producing to a message broker Topic

Producing to a topic is the same as producing to a stream in service to service eventing. The only difference is the `@Produce.ToTopic` annotation, which sets the destination topic name.

| | To guarantee that events for each entity can be read from the message broker in the same order they were written, the cloud event subject id must be specified in metadata along with the event. See how to do this in [Metadata](about:blank#_metadata) below. |

[CounterJournalToTopicConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterJournalToTopicConsumer.java)

```java
@Component(id = "counter-journal-to-topic")
@Consume.FromEventSourcedEntity(CounterEntity.class) // (1)
@Produce.ToTopic("counter-events") // (2)
public class CounterJournalToTopicConsumer extends Consumer {

  public Effect onEvent(CounterEvent event) { // (3)
    return effects().produce(event); // (4)
  }
}
```

| **1** | Set the source to events from the `CounterEntity`. |
| **2** | Set the destination to the topic name 'counter-events'. |
| **3** | Add a handler for the counter events. |
| **4** | Return `Effect.produce` to produce events to the topic. |

| | Only topic names are referenced and no additional details about how to connect to the topics are needed. When deploying the application there must be a broker configuration in the Akka project, with credentials and details on how to connect to the broker. For details about configuring a broker see [Configure message brokers](../operations/projects/message-brokers.html). |

## Handling Serialization

Check the [serialization](serialization.html) documentation for more details.
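In practice this mostly means modeling payloads as Java records, which Akka serializes as JSON, with a `@TypeName` giving each persisted event a stable serialization key. A minimal sketch (the `OrderEvent` type is hypothetical):

```java
import akka.javasdk.annotations.TypeName;

public sealed interface OrderEvent {
  @TypeName("order-placed") // stable logical name used as the serialization key
  record OrderPlaced(String orderId, int quantity) implements OrderEvent {}
}
```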
## Deployment-dependent sources

It is possible to use environment variables to control the name of the service or topic that a consumer consumes from. This is useful, for example, for using the same image in staging and production deployments while having them consume from different source services. Referencing environment variables is done with the syntax `${VAR_NAME}` in the `service` parameter of the `@Consume.FromServiceStream` annotation or the `value` parameter of the `@Consume.FromTopic` annotation.

| | Changing the service or topic name after it has once been deployed means the consumer will start over from the beginning of the event stream. |

See `akka service deploy -h` for details on how to set environment variables when deploying a service.

## Metadata

For many use cases, a Consumer of an Event Sourced or Key Value Entity will trigger other services and needs to pass the entity ID to the receiver. You can include this information in the event payload (or Key Value Entity state) or use built-in metadata to get this information. It is made available to the consumer via `messageContext().eventSubject()`.

Using metadata is also possible when producing messages to a topic or a stream. You can pass some additional information which will be available to the consumer.

[CounterJournalToTopicWithMetaConsumer.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterJournalToTopicWithMetaConsumer.java)

```java
@Component(id = "counter-journal-to-topic-with-meta")
@Consume.FromEventSourcedEntity(CounterEntity.class)
@Produce.ToTopic("counter-events-with-meta") // (1)
public class CounterJournalToTopicWithMetaConsumer extends Consumer {

  private static final Logger logger = LoggerFactory.getLogger(
    CounterJournalToTopicWithMetaConsumer.class
  );

  public Effect onEvent(CounterEvent event) {
    String counterId = messageContext().eventSubject().get(); // (2)
    Metadata metadata = Metadata.EMPTY.add("ce-subject", counterId);
    logger.info("Received event for counter id {}: {}", counterId, event);
    return effects().produce(event, metadata); // (3)
  }
}
```

| **1** | Set the destination topic 'counter-events-with-meta'. |
| **2** | Get the counter ID from the message context metadata. |
| **3** | Publish the event to the topic along with the custom metadata. |

## Testing the Integration

When an Akka service relies on a broker, it might be useful to use integration tests to assert that those boundaries work as intended. For such scenarios, you can either:

- Use TestKit’s mocked topic:
  - this offers a general API to inject messages into topics or read the messages written to another topic, regardless of the specific broker integration you have configured.
- Run an external broker instance:
  - if you’re interested in running your integration tests against a real instance, you need to provide the broker instance yourself by running it in a separate process in your local setup and make sure to disable the use of TestKit’s test broker. Currently, the only external broker supported in integration tests is the Google PubSub Emulator.

### TestKit Mocked Incoming Messages

Following up on the counter entity example used above, let’s consider an example (composed of two Consumers and one Event Sourced Entity) as pictured below: ![eventing testkit sample](_images/eventing-testkit-sample.svg)

In this example:

- commands are consumed from an external topic `event-commands` and forwarded to a `Counter` entity;
- the `Counter` entity is an Event Sourced Entity and has its events published to another topic `counter-events`.
To test this flow, we will take advantage of the TestKit to be able to push commands into the `event-commands` topic and check what messages are produced to the `counter-events` topic.

[CounterIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/test/java/counter/application/CounterIntegrationTest.java)

```java
public class CounterIntegrationTest extends TestKitSupport { // (1)

  private EventingTestKit.IncomingMessages commandsTopic;
  private EventingTestKit.OutgoingMessages eventsTopic;

  @BeforeAll
  public void beforeAll() {
    super.beforeAll();
    commandsTopic = testKit.getTopicIncomingMessages("counter-commands"); // (2)
    eventsTopic = testKit.getTopicOutgoingMessages("counter-events");
  }

  @Test
  public void verifyCounterEventSourcedPublishToTopic() {
    var counterId = "test-topic";
    var increaseCmd = new IncreaseCounter(counterId, 3);
    var multipleCmd = new MultiplyCounter(counterId, 4);

    commandsTopic.publish(increaseCmd, counterId); // (3)
    commandsTopic.publish(multipleCmd, counterId);

    var eventIncreased = eventsTopic.expectOneTyped(ValueIncreased.class, ofSeconds(20)); // (4)
    var eventMultiplied = eventsTopic.expectOneTyped(ValueMultiplied.class);

    assertEquals(increaseCmd.value(), eventIncreased.getPayload().value()); // (5)
    assertEquals(multipleCmd.value(), eventMultiplied.getPayload().multiplier());
  }
}
```

| **1** | Use the TestKitSupport class. |
| **2** | Get an `IncomingMessages` for the topic named `counter-commands` and an `OutgoingMessages` for `counter-events` from the TestKit. |
| **3** | Build two commands and publish both to the topic. Note the `counterId` is passed as the subject id of the message. |
| **4** | Read two messages, one at a time. We pass the expected class type for the next message. |
| **5** | Assert the received messages have the same values as the commands sent. |

| | In the example above we take advantage of the TestKit to serialize and deserialize the messages and pass all the required metadata automatically for us. However, the API also offers the possibility to read and write raw bytes, construct your own metadata, or read multiple messages at once. |

#### Configuration

Before running your test, make sure to configure the TestKit correctly.

[CounterIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/test/java/counter/application/CounterIntegrationTest.java)

```java
@Override
protected TestKit.Settings testKitSettings() {
  return TestKit.Settings.DEFAULT.withTopicIncomingMessages("counter-commands") // (1)
    .withTopicOutgoingMessages("counter-events"); // (2)
}
```

| **1** | Mock incoming messages from the `counter-commands` topic. |
| **2** | Mock outgoing messages to the `counter-events` topic. |

#### Testing with metadata

Typically, messages are published with associated metadata. If you want to construct your own `Metadata` to be consumed by a service, or make sure the messages published out of your service have specific metadata attached, you can do so using the TestKit, as shown below.
[CounterIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/test/java/counter/application/CounterIntegrationTest.java)

```java
@Test
public void verifyCounterCommandsAndPublishWithMetadata() {
  var counterId = "test-topic-metadata";
  var increaseCmd = new IncreaseCounter(counterId, 10);

  var metadata = CloudEvent.of( // (1)
    "cmd1",
    URI.create("CounterTopicIntegrationTest"),
    increaseCmd.getClass().getName()
  )
    .withSubject(counterId) // (2)
    .asMetadata()
    .add("Content-Type", "application/json"); // (3)

  commandsTopic.publish(testKit.getMessageBuilder().of(increaseCmd, metadata)); // (4)

  var increasedEvent = eventsTopicWithMeta.expectOneTyped(IncreaseCounter.class);
  var actualMd = increasedEvent.getMetadata(); // (5)
  assertEquals(counterId, actualMd.asCloudEvent().subject().get()); // (6)
  assertEquals("application/json", actualMd.get("Content-Type").get());
}
```

| **1** | Build a `CloudEvent` object with the 3 required attributes, respectively: `id`, `source` and `type`. |
| **2** | Add the subject to which the message is related, that is the `counterId`. |
| **3** | Set the mandatory header "Content-Type" accordingly. |
| **4** | Publish the message along with its metadata to the `commandsTopic`. |
| **5** | Upon receiving the message, access the metadata. |
| **6** | Assert the headers `Content-Type` and `ce-subject` (every CloudEvent header is prefixed with "ce-") have the expected values. |

#### One Suite, Multiple Tests

When running multiple test cases under the same test suite, and thus using a common TestKit instance, you might face some issues if unconsumed messages from previous tests interfere with the current one. To avoid this, be sure to:

- have the tests run in sequence, not in parallel;
- clear the contents of the topics in use before the test.

As an alternative, you can consider using different test suites, which will use independent TestKit instances.

[CounterIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/test/java/counter/application/CounterIntegrationTest.java)

```java
@BeforeEach // (1)
public void clearTopics() {
  eventsTopic.clear(); // (2)
  eventsTopicWithMeta.clear();
}
```

| **1** | Run this before each test. |
| **2** | Clear the topic, ignoring any unread messages. |

| | Despite the example, you are neither forced to clear all topics nor to do it before each test. You can do it selectively, or you might not even need it depending on your tests and the flows they test. |

### External Broker

To run an integration test against a real instance of Google PubSub (or its Emulator) or Kafka, use the TestKit settings to override the default eventing support, as shown below:

[CounterWithRealKafkaIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/test/java/counter/application/CounterWithRealKafkaIntegrationTest.java)

```java
@Override
protected TestKit.Settings testKitSettings() {
  return TestKit.Settings.DEFAULT.withEventingSupport(
    TestKit.Settings.EventingSupport.KAFKA
  );
}
```

## Multi-region replication

Consumers are not replicated directly in the same way as, for example, [Event Sourced Entity replication](event-sourced-entities.html#_replication). A Consumer receives events from entities in the same service, or another service, in the same region. The entities will replicate all events across regions, and identical processing can occur in the consumers of each region.
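If identical processing in every region is not what you want, a Consumer can filter on the origin of each message. A minimal sketch (the consumer class is hypothetical, reusing the `CounterEntity` from the earlier examples):

```java
@Component(id = "local-origin-consumer")
@Consume.FromEventSourcedEntity(CounterEntity.class)
public class LocalOriginConsumer extends Consumer {

  public Effect onEvent(CounterEvent event) {
    if (messageContext().hasLocalOrigin()) {
      // ...process only events that were first created in this region...
      return effects().done();
    } else {
      return effects().ignore(); // skip events replicated from other regions
    }
  }
}
```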
The origin of an event is the region where the message was first created. You can see the origin from `messageContext().hasLocalOrigin()` or `messageContext().originRegion()` and perform conditional processing of the event depending on the origin, such as ignoring events from regions other than the local region where the Consumer is running. The local region can be retrieved with `messageContext().selfRegion()`.

A Consumer can also receive messages from a broker topic, and that could be regional or global depending on how the message broker is configured.

# Implementing Views

![View](../_images/view.png) Views allow you to access multiple entities or retrieve entities by attributes other than their *entity id*. You can create Views for different access patterns, optimized by specific queries, or combine multiple queries into a single View.

Views can be defined from any of the following:

- [Key Value Entity state changes](about:blank#value-entity)
- [Event Sourced Entity events](about:blank#event-sourced-entity)
- [Workflow state changes](about:blank#workflow)
- [Messages received from subscribing to topics on a broker](about:blank#topic-view)
- [Events consumed from a different Akka service](consuming-producing.html#s2s-eventing)

Reference documentation covering the view query language syntax, query capabilities, and how query results are mapped to Java types can be found in [View reference](../reference/views/index.html). The remainder of this page describes:

- [How to transform results](about:blank#results-projection)
- [How to modify a View](about:blank#changing)

| | Be aware that Views are not updated immediately when the Entity state changes. The update is not instantaneous, but eventually all changes will become visible in the query results. View updates might also take more time during failure scenarios (e.g. network instability) than during normal operation. |

## View’s Effect API

The View’s Effect defines the operations to be performed when an event, a message, or a state change is handled by a View. A View Effect can either:

- update the view state
- delete the view state
- ignore the event or state change notification (and not update the view state)

For additional details, refer to [Declarative Effects](../concepts/declarative-effects.html).

## Creating a View from a Key Value Entity

Consider an example of a Customer Registry service with a `Customer` Key Value Entity. When the customer state changes, the entire state is emitted as a value change. Those value changes update any associated Views. To create a View that lists customers by their name, [define the view](about:blank#_define_the_view) for a service that selects customers by name and associates a table name with the View. The table is created and used to store the View.

This example assumes the following `Customer` exists:

[Customer.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/domain/Customer.java)

```java
public record Customer(String email, String name, Address address) { // (1)
  public Customer withName(String newName) { // (2)
    return new Customer(email, newName, address);
  }

  public Customer withAddress(Address newAddress) { // (2)
    return new Customer(email, name, newAddress);
  }
}
```

The example also assumes a Key Value Entity component, `CustomerEntity.java`, that produces the state changes consumed by the View.
You can consult the [Key Value Entity](key-value-entities.html#entity-behavior) documentation on how to create such an entity if needed.

### Define the View

You implement a View by extending `akka.javasdk.view.View` and subscribing to changes from an entity. You specify how to query it by providing one or more methods annotated with `@Query`, which can then be made accessible via an [HTTP Endpoint](http-endpoints.html).

[CustomersByEmail.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/application/CustomersByEmail.java)

```java
import akka.javasdk.annotations.Component;
import akka.javasdk.annotations.Consume;
import akka.javasdk.annotations.Query;
import akka.javasdk.view.TableUpdater;
import akka.javasdk.view.View;
import customer.domain.Customer;
import java.util.List;

@Component(id = "customers-by-email") // (1)
public class CustomersByEmail extends View { // (2)

  public record Customers(List<Customer> customers) {}

  @Consume.FromKeyValueEntity(CustomerEntity.class) // (3)
  public static class CustomersByEmailUpdater extends TableUpdater<Customer> {} // (4)

  @Query("SELECT * AS customers FROM customers_by_email WHERE email = :email") // (5)
  public QueryEffect<Customers> getCustomer(String email) {
    return queryResult(); // (6)
  }
}
```

| **1** | Define a component id for the view. |
| **2** | Extend from `View`. |
| **3** | Subscribe to updates from the Key Value Entity `CustomerEntity`. |
| **4** | Declare a `TableUpdater` of type `Customer` (the entity’s state type). |
| **5** | Define the query, including a table name (i.e. `customers_by_email`) of our choice. |
| **6** | Use method `queryResult()` to return the result of the query. |

| | Assigning a component identifier (i.e. `@Component`) to your View is mandatory; it must be unique, and it should be stable. This allows you to refactor the name of the class later on without the risk of losing the view. If you change this identifier later, Akka will not recognize this component as the same view and will create a brand-new view. For a view consuming from an Event Sourced Entity this becomes very resource-consuming because it will reprocess all the events of that entity to rebuild it. For a view built from a topic, you can lose all the previous events because, depending on the topic configuration, you may only process events from the current time forward. Last but not least, it’s also a problem for Key Value Entities because they will need to be indexed again when grouping them by some value. |

### Using a transformed model

Often, you will want to transform the entity model to which the view is subscribing into a different representation.
To do that, let’s have a look at an example in which we store a summary of the `Customer` used in the previous section instead of the original one:

[CustomersByName.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/application/CustomersByName.java)

```java
public record CustomerSummary(String customerId, String name, String email) {}
```

In this scenario, the view state should be of type `CustomerSummary` and you will need to handle and transform the incoming state changes into it, as shown below:

[CustomersByName.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/application/CustomersByName.java)

```java
import akka.javasdk.annotations.Component;
import akka.javasdk.annotations.Consume;
import akka.javasdk.annotations.Query;
import akka.javasdk.view.TableUpdater;
import akka.javasdk.view.View;
import customer.domain.Customer;
import java.util.Collection;

@Component(id = "customers-by-name")
public class CustomersByName extends View {

  public record CustomerSummary(String customerId, String name, String email) {}

  @Consume.FromKeyValueEntity(CustomerEntity.class)
  public static class CustomersByNameUpdater extends TableUpdater<CustomerSummary> { // (1)

    public Effect onUpdate(Customer customer) { // (2)
      return effects()
        .updateRow(
          new CustomerSummary(
            updateContext().eventSubject().get(),
            customer.name(),
            customer.email()
          )
        ); // (3)
    }
  }

  @Query("SELECT * FROM customers_by_name WHERE name = :name") // (4)
  public QueryEffect<CustomerSummary> getFirstCustomerSummary(String name) { // (5)
    return queryResult();
  }
}
```

| **1** | Declares a `TableUpdater` of type `CustomerSummary`. This type represents each stored row. |
| **2** | Implements a handler method `onUpdate` that receives the latest state of the entity `Customer` and returns an `Effect` with the updated row. |
| **3** | The id of the entity that was updated is available through the update context as `eventSubject`. |
| **4** | Defines the query. |
| **5** | Uses the new type `CustomerSummary` to return the result of the query. |

| | Some `TableUpdater` implementations might update the view model in a non-idempotent way, for example when the view model adds an element to a list. When the source of the changes is an Event Sourced Entity, a Key Value Entity, or another Akka service, the View component has a built-in deduplication mechanism to ensure that the same event is not processed twice. In other cases, you should add the deduplication mechanism in the `TableUpdater` implementation. See [message deduplication](dev-best-practices.html#message-deduplication) for some suggested solutions. |

### Handling Key Value Entity deletes

The View state corresponding to an Entity is not automatically deleted when the Entity is deleted. We can update our table updater with an additional handler marked with `@DeleteHandler`, to handle a Key Value Entity [delete](key-value-entities.html#deleting-state) operation.

[CustomerSummaryByName.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/application/CustomerSummaryByName.java)

```java
@Consume.FromKeyValueEntity(value = CustomerEntity.class)
public static class CustomersUpdater extends TableUpdater<CustomerSummary> { // (1)

  public Effect onUpdate(Customer customer) {
    return effects()
      .updateRow(
        new CustomerSummary(updateContext().eventSubject().get(), customer.name())
      );
  }

  // ...
  @DeleteHandler // (2)
  public Effect<CustomerSummary> onDelete() {
    return effects().deleteRow(); // (3)
  }
}
```

| **1** | Note we are adding a new handler to the existing table updater. |
| **2** | Marks the method as a delete handler. |
| **3** | An effect to delete the view row, `effects().deleteRow()`. It could also be an update of a special column that marks the view row as deleted. |

## Creating a View from an Event Sourced Entity

You can create a View from an Event Sourced Entity by using events that the Entity emits to build a state representation. Using our Customer Registry service example, to create a View for querying customers by name, you have to [define the view to consume events](about:blank#_define_the_view_to_consume_events). This example assumes the same `Customer` as in the previous example and an Event Sourced Entity that uses this `Customer`. The Event Sourced Entity is in charge of producing the events that update the View. These events are defined as subtypes of the class `CustomerEvent` using a sealed interface:

[CustomerEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/domain/CustomerEvent.java)
```java
import akka.javasdk.annotations.Migration;
import akka.javasdk.annotations.TypeName;

public sealed interface CustomerEvent {
  @TypeName("internal-customer-created") // (1)
  record CustomerCreated(String email, String name, Address address) implements CustomerEvent {}

  @TypeName("internal-name-changed")
  record NameChanged(String newName) implements CustomerEvent {}

  @TypeName("internal-address-changed")
  record AddressChanged(Address address) implements CustomerEvent {}
}
```

| **1** | Includes the logical type name using the `@TypeName` annotation. |

| | It's highly recommended to add a `@TypeName` to your persisted events. Akka needs to identify each event in order to deliver them to the right event handlers. If no logical type name is specified, Akka uses the FQCN; check the [type name](serialization.html#_type_name) documentation for more details. |

### Define the View to consume events

Defining a view that consumes from an Event Sourced Entity is very similar to defining one that consumes from a Key Value Entity. In this case, the handler method will be called for each event emitted by the Entity. Every time an event is processed by the view, the state of the view can be updated. You can do this with the `updateRow` method, which is available through the `effects()` API.
Below you can see how the View is updated:

[CustomersByNameView.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/application/CustomersByNameView.java)
```java
import akka.javasdk.annotations.Component;
import akka.javasdk.annotations.Consume;
import akka.javasdk.annotations.Query;
import akka.javasdk.view.TableUpdater;
import akka.javasdk.view.View;
import customer.domain.CustomerEntries;
import customer.domain.CustomerEntry;
import customer.domain.CustomerEvent;

@Component(id = "customers-by-name") // (1)
public class CustomersByNameView extends View {

  @Consume.FromEventSourcedEntity(CustomerEntity.class)
  public static class CustomersByNameUpdater extends TableUpdater<CustomerEntry> { // (2)

    public Effect<CustomerEntry> onEvent(CustomerEvent event) { // (3)
      return switch (event) {
        case CustomerEvent.CustomerCreated created -> effects()
          .updateRow(new CustomerEntry(created.email(), created.name(), created.address()));
        case CustomerEvent.NameChanged nameChanged -> effects()
          .updateRow(rowState().withName(nameChanged.newName()));
        case CustomerEvent.AddressChanged addressChanged -> effects()
          .updateRow(rowState().withAddress(addressChanged.address()));
      };
    }
  }

  @Query("SELECT * as customers FROM customers_by_name WHERE name = :name")
  public QueryEffect<CustomerEntries> getCustomers(String name) {
    return queryResult();
  }
}
```

| **1** | Defines a component id for the view. |
| **2** | Declares a `TableUpdater` of type `CustomerEntry`. |
| **3** | Handles the super type `CustomerEvent` and defines the proper update row method for each subtype. |

### Ignoring events

You can ignore events by returning `Effect.ignore` for those you are not interested in. Using a `sealed interface` for the events is a good practice to ensure that all event types are handled.

### Handling Event Sourced Entity deletes

The View row corresponding to an Entity is not automatically deleted when the Entity is deleted. To delete from the View you can use the `deleteRow()` effect from an event transformation method, similarly to the example shown above for a Key Value Entity.

## Creating a View from a Workflow

A View can also use the state changes of a Workflow as its source. It works the same way as shown in [Creating a View from an Event Sourced Entity](about:blank#event-sourced-entity) or [Creating a View from a Key Value Entity](about:blank#value-entity), but you define it with `@Consume.FromWorkflow` instead.

[TransfersView.java](https://github.com/akka/akka-sdk/blob/main/samples/transfer-workflow/src/main/java/com/example/transfer/application/TransfersView.java)
```java
@Component(id = "transfer-view")
public class TransfersView extends View {

  public record TransferEntry(String id, String status) {}

  public record TransferEntries(Collection<TransferEntry> entries) {}

  @Query("SELECT * as entries FROM transfers WHERE status = 'COMPLETED'")
  public QueryEffect<TransferEntries> getAllCompleted() {
    return queryResult();
  }

  @Consume.FromWorkflow(TransferWorkflow.class) // (1)
  public static class TransfersUpdater extends TableUpdater<TransferEntry> {

    public Effect<TransferEntry> onUpdate(TransferState transferState) { // (2)
      var id = updateContext().eventSubject().orElse("");
      return effects().updateRow(new TransferEntry(id, transferState.status().name()));
    }
  }
}
```

| **1** | Uses the `@Consume.FromWorkflow` annotation to set the source Workflow. |
| **2** | Transforms the Workflow state `TransferState` into a View `TransferEntry`. |

## Creating a View from a topic

The source of a View can be a topic.
It works the same way as shown in [Creating a View from an Event Sourced Entity](about:blank#event-sourced-entity) or [Creating a View from a Key Value Entity](about:blank#value-entity), but you define it with `@Consume.FromTopic` instead.

| | For the messages to be correctly consumed in the view, there must be `ce-subject` metadata associated with each message. This is required because for each message consumed from the topic there will be a corresponding view row, and that view row is selected based on the `ce-subject`. For an example of how to pass such metadata when producing to a topic, see [Metadata](consuming-producing.html#_metadata). |

[CounterTopicView.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterTopicView.java)
```java
@Component(id = "counter-topic-view")
public class CounterTopicView extends View {

  private static final Logger logger = LoggerFactory.getLogger(CounterTopicView.class);

  public record CounterRow(String counterId, int value, Instant lastChange) {}

  public record CountersResult(List<CounterRow> foundCounters) {}

  @Consume.FromTopic("counter-events-with-meta") // (1)
  public static class CounterUpdater extends TableUpdater<CounterRow> {

    public Effect<CounterRow> onEvent(CounterEvent event) {
      String counterId = updateContext().eventSubject().get(); // (2)

      var newValue = switch (event) {
        case ValueIncreased increased -> increased.updatedValue();
        case ValueMultiplied multiplied -> multiplied.updatedValue();
      };

      logger.info("Received new value for counter id {}: {}", counterId, event);

      return effects().updateRow(new CounterRow(counterId, newValue, Instant.now())); // (3)
    }
  }

  @Query("SELECT * AS foundCounters FROM counters WHERE value >= :minimum")
  public View.QueryEffect<CountersResult> countersHigherThan(int minimum) {
    return queryResult();
  }
}
```

| **1** | Uses the `@Consume.FromTopic` annotation to set the source topic. |
| **2** | Extracts the `ce-subject` attribute from the topic event metadata to include in the view row. |
| **3** | Returns an updating effect with the new table row state. |

## View query results

### How to transform results

When creating a View, you can transform the results as a projection for constructing a new type instead of returning the view row type directly. For details, see [Result Mapping](../reference/views/concepts/result-mapping.html).

### Streaming the result

Instead of collecting the query result in memory as a collection before returning it, the entries can be streamed. To return the result as a stream, modify the returned type to be `QueryStreamEffect` and use `queryStreamResult()` to return the stream.

[CustomersByCity.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/application/CustomersByCity.java)
```java
@Query(value = "SELECT * FROM customers_by_city WHERE address.city = :city")
public QueryStreamEffect<Customer> streamCustomersInCity(String city) {
  return queryStreamResult();
}
```

### Streaming view updates

A query can provide a near real-time stream of results, emitting new entries matching the query as they are added or updated in the view. This will first emit the complete result of the query and then keep the response stream open, emitting new or updated entries matching the query as they are added to the view. The stream does not complete until the client closes it.

To use streaming updates, add `streamUpdates = true` to the `Query` annotation. The returned type of the query method must be `QueryStreamEffect`.
[CustomersByCity.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/application/CustomersByCity.java)
```java
@Query(
  value = "SELECT * FROM customers_by_city WHERE address.city = :city",
  streamUpdates = true
)
public QueryStreamEffect<Customer> continuousCustomersInCity(String city) {
  return queryStreamResult();
}
```

This example first returns the customers living in the given city, and then emits an entry every time a customer already in the city is changed or a new customer with the given city is added to the view.

Streaming updates can be streamed all the way to a gRPC or HTTP client via a [gRPC Endpoint](grpc-endpoints.html) or an [HTTP endpoint using SSE](http-endpoints.html#sse).

| | This is not intended as a transport for [service to service](consuming-producing.html#s2s-eventing) propagation of updates, and it does not guarantee delivery. For such use cases you should instead publish events to a topic, see [Consuming and producing](consuming-producing.html). |

## How to modify a View

Akka creates indexes for the View based on the queries. For example, the following query will result in a View with an index on the `name` column:

```sql
SELECT * FROM customers WHERE name = :customer_name
```

You may realize after a deployment that the query needs additional parameters, even ones that aren't exposed through the endpoint of the View. After adding these parameters the query changes, and Akka will therefore add indexes for the new columns. For example, changing the above query to filter by active users would mean a new index on the `is-active` column. This is handled automatically behind the scenes.

```sql
SELECT * FROM customers WHERE name = :customer_name AND is-active = true
```

### Incompatible changes

Some specific scenarios might require a complete rebuild of the View, for example:

- adding or removing tables for multi-table views;
- changing the data type of a column that is part of an index.

Such changes require you to define a new View. Akka will then rebuild it from the source event log or value changes.

| | You should be able to test if a change is compatible locally by running the service with [persistence mode enabled](running-locally.html#persistence-enabled), producing some data, and then changing the View query and re-running the service. If the service boots up correctly and is able to serve the new query, the change is compatible. |

Rebuilding a new View may take some time if there are many events that have to be processed. The recommended way when changing a View is multi-step, with two deployments:

1. Define the new View with a new `@Component` id, and keep the old View intact.
2. Deploy the new View, and let it rebuild. Verify that the new query works as expected. The old View can still be used.
3. Remove the old View and redirect the endpoint calls to the new View.
4. Deploy the second change.

The View definitions are stored and validated when a new version is deployed. There will be an error message if the changes are not compatible.

| | Views from topics cannot be rebuilt from the source messages, because it might not be possible to consume all events from the topic again. The new View is built from new messages published to the topic. |

## Testing the View

Testing Views is very similar to testing other [subscription integrations](consuming-producing.html#_testkit_mocked_incoming_messages). Consider a View definition that subscribes to changes from the `customer` Key Value Entity:
[CustomersByCity.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/application/CustomersByCity.java)
```java
public class CustomersByCity extends View {

  @Consume.FromKeyValueEntity(CustomerEntity.class)
  public static class CustomerUpdater extends TableUpdater<Customer> {}

  @Query(
    """
    SELECT * AS customers FROM customers_by_city
    WHERE address.city = ANY(:cities)
    """
  )
  public QueryEffect<CustomerList> getCustomers(List<String> cities) {
    return queryResult();
  }

  @Query(value = "SELECT * FROM customers_by_city WHERE address.city = :city")
  public QueryStreamEffect<Customer> streamCustomersInCity(String city) {
    return queryStreamResult();
  }

  @Query(
    value = "SELECT * FROM customers_by_city WHERE address.city = :city",
    streamUpdates = true
  )
  public QueryStreamEffect<Customer> continuousCustomersInCity(String city) {
    return queryStreamResult();
  }

  public record QueryParams(String customerName, String city) {} // (1)

  @Query(
    """
    SELECT * FROM customers_by_city
    WHERE name = :customerName AND address.city = :city"""
  ) // (2)
  public QueryEffect<Customer> getCustomersByCityAndName(QueryParams queryParams) {
    return queryResult();
  }
}
```

An integration test can be implemented as below.

[CustomersByCityIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/test/java/customer/application/CustomersByCityIntegrationTest.java)
```java
class CustomersByCityIntegrationTest extends TestKitSupport {

  @Override
  protected TestKit.Settings testKitSettings() { // (1)
    return TestKit.Settings.DEFAULT.withKeyValueEntityIncomingMessages(CustomerEntity.class);
  }

  @Test
  public void shouldGetCustomerByCity() {
    IncomingMessages customerEvents = // (2)
      testKit.getKeyValueEntityIncomingMessages(CustomerEntity.class);

    Customer johanna = new Customer(
      "johanna@example.com",
      "Johanna",
      new Address("Cool Street", "Porto")
    );
    Customer bob = new Customer(
      "boc@example.com",
      "Bob",
      new Address("Baker Street", "London")
    );
    Customer alice = new Customer(
      "alice@example.com",
      "Alice",
      new Address("Long Street", "Wroclaw")
    );

    customerEvents.publish(johanna, "1"); // (3)
    customerEvents.publish(bob, "2");
    customerEvents.publish(alice, "3");

    Awaitility.await()
      .ignoreExceptions()
      .atMost(10, TimeUnit.SECONDS)
      .untilAsserted(() -> {
        CustomerList customersResponse = componentClient
          .forView()
          .method(CustomersByCity::getCustomers) // (4)
          .invoke(List.of("Porto", "London"));

        assertThat(customersResponse.customers()).containsOnly(johanna, bob);
      });
  }
}
```

| **1** | Mocks incoming messages from the `customer` Key Value Entity. |
| **2** | Gets an `IncomingMessages` from the `CustomerEntity`. |
| **3** | Publishes test data. |
| **4** | Queries the view and asserts the results. |

## Multi-region replication

Views are not replicated directly in the same way as, for example, [Event Sourced Entity replication](event-sourced-entities.html#_replication). A View is built from entities in the same service, or another service, in the same region. The entities will replicate all events across regions and identical Views are built in each region.

The origin of an event is the region where a message was first created. You can see the origin from `updateContext().hasLocalOrigin()` or `updateContext().originRegion()` and perform conditional processing of the event depending on the origin, such as ignoring events from regions other than the local region where the View is running. The local region can be retrieved with `messageContext().selfRegion()`.
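As a sketch, origin-based filtering inside a table updater could look like the following. It reuses the `CustomerEntry` row type and the events from the earlier examples; the decision to ignore non-local events is an illustrative policy, not a requirement.

```java
@Consume.FromEventSourcedEntity(CustomerEntity.class)
public static class LocalOnlyCustomersUpdater extends TableUpdater<CustomerEntry> {

  public Effect<CustomerEntry> onEvent(CustomerEvent event) {
    // Hypothetical policy: only build rows from events that originated
    // in the region where this View is running, ignore the rest.
    if (!updateContext().hasLocalOrigin()) {
      return effects().ignore();
    }
    return switch (event) {
      case CustomerEvent.CustomerCreated created -> effects()
        .updateRow(new CustomerEntry(created.email(), created.name(), created.address()));
      case CustomerEvent.NameChanged nameChanged -> effects()
        .updateRow(rowState().withName(nameChanged.newName()));
      case CustomerEvent.AddressChanged addressChanged -> effects()
        .updateRow(rowState().withAddress(addressChanged.address()));
    };
  }
}
```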
A View can also be built from a message broker topic, and that could be regional or global depending on how the message broker is configured.

# Designing HTTP endpoints

![Endpoint](../_images/endpoint.png)

An Endpoint is a component that creates an externally accessible API. Endpoints are how you expose your services to the outside world. Two different types of endpoints are available: HTTP endpoints and gRPC endpoints. In this page, we will focus on HTTP endpoints.

HTTP Endpoint components make it possible to conveniently define APIs accepting and responding in JSON, or to drop down to lower-level APIs for full flexibility in what types of data are accepted and returned.

## Basics

To define an HTTP Endpoint component, create a public class and annotate it with `@HttpEndpoint("/path-prefix")`. Each public method on the endpoint that is annotated with `@Get`, `@Post`, `@Put`, `@Patch` or `@Delete` will handle incoming requests matching the `/path-prefix` combined with the method-specific path given as value to the annotation.

The most basic example:

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
import akka.javasdk.annotations.Acl;
import akka.javasdk.annotations.http.Get;
import akka.javasdk.annotations.http.HttpEndpoint;
import akka.javasdk.http.AbstractHttpEndpoint;

@HttpEndpoint("/example") // (1)
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.ALL)) // (2)
public class ExampleEndpoint extends AbstractHttpEndpoint {

  @Get("/hello") // (3)
  public String hello() {
    return "Hello World"; // (4)
  }
}
```

| **1** | Common path prefix for all methods in the same class, `/example`. |
| **2** | ACL configuration allowing any client to access the endpoint. |
| **3** | `GET` endpoint path is combined with the prefix and becomes available at `/example/hello` |
| **4** | The return value is turned into a `200 Ok` response, with content type `text/plain` and the specified string as body. |

| | Without an ACL annotation no client is allowed to access the endpoint. For more details on how ACLs can be configured, see [Access Control Lists (ACLs)](access-control.html) |

### Path parameters

The path can also contain one or more parameters, which are extracted and passed to the method:

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
@Get("/hello/{name}") // (1)
public String hello(String name) { // (2)
  return "Hello " + name;
}

@Get("/hello/{name}/{age}") // (3)
public String hello(String name, int age) { // (4)
  return "Hello " + name + "! You are " + age + " years old";
}
```

| **1** | Path parameter `name` in expression. |
| **2** | A method parameter with the same name as the path parameter. |
| **3** | When there are multiple parameters, |
| **4** | the method must accept parameters with the same names, in the same order, as in the path expression. |

Path parameters can be of type `String`, `int`, `long`, `boolean`, `float`, `double`, `short` or `char`, as well as their `java.lang` class counterparts.
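As a small sketch, other parameter types work the same way. The path and method below are hypothetical, not part of the sample:

```java
// Hypothetical method illustrating non-String path parameters.
@Get("/orders/{orderId}/paid/{paid}")
public String orderPaid(long orderId, boolean paid) {
  return "Order " + orderId + (paid ? " is paid" : " is not paid");
}
```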
### Request body

To accept an HTTP JSON body, specify a parameter that is a class that [Jackson](https://github.com/FasterXML/jackson?tab=readme-ov-file#what-is-jackson) can deserialize:

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
public record GreetingRequest(String name, int age) {} // (1)

@Post("/hello")
public String hello(GreetingRequest greetingRequest) { // (2)
  return "Hello " + greetingRequest.name() + "! You are " + greetingRequest.age() + " years old";
}

@Post("/hello/{number}") // (3)
public String hello(int number, GreetingRequest greetingRequest) { // (4)
  return number + " Hello " + greetingRequest.name() + "! You are " + greetingRequest.age() + " years old";
}
```

| **1** | A class that Jackson can serialize and deserialize to JSON |
| **2** | A parameter of the request body type |
| **3** | When combining request body with path variables, |
| **4** | the body must come last in the parameter list |

Additionally, the request body parameter can be of the following types:

- `String` for any request with a text content type, the body decoded into a string
- `java.util.List<T>` where `T` is a type Jackson can deserialize, accepts a JSON array (a short sketch follows at the end of this section)
- `akka.http.javadsl.model.HttpEntity.Strict` for the entire request body as bytes together with the content type, for arbitrary payload handling
- `akka.http.javadsl.model.HttpRequest` for a low level, streaming representation of the entire request including headers. See [Low level requests](about:blank#_low_level_requests) below for more details

### Request headers

Accessing request headers is done through the [RequestContext](_attachments/api/akka/javasdk/http/RequestContext.html) methods `requestHeader(String headerName)` and `allRequestHeaders()`. By letting the endpoint extend [AbstractHttpEndpoint](_attachments/api/akka/javasdk/http/AbstractHttpEndpoint.html), the request context is available through the method `requestContext()`.

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
public class ExampleEndpoint extends AbstractHttpEndpoint { // (1)

  @Get("/hello-request-header-from-context")
  public String requestHeaderFromContext() {
    var name = requestContext()
      .requestHeader("X-my-special-header") // (2)
      .map(HttpHeader::value)
      .orElseThrow(
        () -> new IllegalArgumentException("Request is missing my special header")
      );

    return "Hello " + name + "!";
  }
}
```

| **1** | Extend the `AbstractHttpEndpoint` class. |
| **2** | `requestHeader(headerName)` returns an `Optional<HttpHeader>` which is empty if the header was not present. |

### Query parameters

Accessing query parameters is done through the `requestContext()`, inherited from [AbstractHttpEndpoint](_attachments/api/akka/javasdk/http/AbstractHttpEndpoint.html).

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
@Get("/hello-query-params-from-context")
public String queryParamsFromContext() {
  var name = requestContext().queryParams().getString("name").orElse(""); // (1)
  return "Hello " + name + "!";
}
```

| **1** | `queryParams().getString("name")` returns an `Optional<String>` which is empty if the query parameter is not present. |
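Referring back to the request body types listed above, here is a minimal sketch of accepting a JSON array. The path and method are hypothetical; the `GreetingRequest` record is the one defined earlier:

```java
import java.util.List;

// Hypothetical method: accepts a JSON array such as
// [{"name":"Alice","age":30},{"name":"Bob","age":25}]
@Post("/hello-many")
public String helloMany(List<GreetingRequest> greetingRequests) {
  return "Hello to " + greetingRequests.size() + " people";
}
```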
### Response body

To return a response with JSON, the return value can be a class that Jackson can serialize:

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
public record MyResponse(String name, int age) {}

@Get("/hello-response/{name}/{age}")
public MyResponse helloJson(String name, int age) {
  return new MyResponse(name, age); // (1)
}
```

| **1** | Returning an object that Jackson can serialize into JSON |

In addition to an object that can be turned to JSON, a request handler can return the following:

- `null` or `void` to return an empty body.
- `String` to return a UTF-8 encoded `text/plain` HTTP response.
- A record or other class that can be serialized to JSON.
- `CompletionStage<T>` to respond based on an asynchronous result.
  - When the completion stage is completed with a `T` it is turned into a response.
  - If it is instead failed, the failure leads to an error response according to the error handling explained in [error responses](about:blank#_error_responses).
- `akka.http.javadsl.model.HttpResponse` for complete control over the response, see [Low level responses](about:blank#_low_level_responses) below.

### Error responses

The HTTP protocol has several status codes to signal that something went wrong with a request, for example HTTP `400 Bad request` to signal that the incoming request was not valid. Responding with an error can be done by throwing one of the exceptions available through static factory methods in `akka.javasdk.http.HttpException`.

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
@Get("/hello-code/{name}/{age}")
public String helloWithValidation(String name, int age) {
  if (age > 130) throw HttpException.badRequest(
    "It is unlikely that you are " + age + " years old"
  ); // (1)
  else return " Hello " + name + "!"; // (2)
}
```

| **1** | Throw one of the exceptions created through factory methods provided by `HttpException` to respond with an HTTP error |
| **2** | Return non-error |

In addition to the special `HttpException`s, exceptions are handled like this:

- `IllegalArgumentException` is turned into a `400 Bad request`
- Any other exception is turned into a `500 Internal server error`.
  - In production the error is logged together with a correlation id, and the response message only includes the correlation id to not leak service internals to an untrusted client.
  - In local development and integration tests the full exception is returned as response body.

## Securing HTTP endpoints

Akka's HTTP endpoints can be secured by multiple approaches:

1. [Access Control Lists (ACLs)](access-control.html)
2. [JSON Web Tokens (JWTs)](../reference/jwts.html)
3. [TLS certificates](../operations/tls-certificates.html)
4. [HTTP Basic authentication](../operations/services/invoke-service.html#_http_basic_authentication)

## Interacting with other components

The most common use case for endpoints is to interact with other components in a service. This is done through the `akka.javasdk.client.ComponentClient`. If the constructor of the endpoint class has a parameter of this type, it will be injected by the SDK.
[ShoppingCartEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/api/ShoppingCartEndpoint.java)
```java
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET))
@HttpEndpoint("/carts") // (1)
public class ShoppingCartEndpoint {

  private final ComponentClient componentClient;

  private static final Logger logger = LoggerFactory.getLogger(ShoppingCartEndpoint.class);

  public ShoppingCartEndpoint(ComponentClient componentClient) { // (2)
    this.componentClient = componentClient;
  }

  @Get("/{cartId}") // (3)
  public ShoppingCart get(String cartId) {
    logger.info("Get cart id={}", cartId);
    return componentClient
      .forEventSourcedEntity(cartId) // (4)
      .method(ShoppingCartEntity::getCart)
      .invoke(); // (5)
  }

  @Put("/{cartId}/item") // (6)
  public HttpResponse addItem(String cartId, ShoppingCart.LineItem item) {
    logger.info("Adding item to cart id={} item={}", cartId, item);
    componentClient
      .forEventSourcedEntity(cartId)
      .method(ShoppingCartEntity::addItem)
      .invoke(item);
    return HttpResponses.ok(); // (7)
  }
}
```

| **1** | Common path prefix for all methods in the same class, `/carts`. |
| **2** | Accept the `ComponentClient` and keep it in a field. |
| **3** | GET endpoint path is combined with a path parameter name, e.g. `/carts/123`. |
| **4** | The component client can be used to interact with other components. |
| **5** | Result of a request to a component is the Effect's reply type. |
| **6** | Use path parameter `{cartId}` in combination with request body `ShoppingCart.LineItem`. |
| **7** | Result of request mapped to a more suitable response, in this case, `200 Ok` with an empty body. |

For more details see [Component and service calls](component-and-service-calls.html).

## Interacting with other HTTP services

It is also possible to interact with other services over HTTP. This is done through the `akka.javasdk.http.HttpClientProvider`.
When the other service is also an Akka service deployed in the same project, it can be looked up via the deployed name of the service:

[CustomerRegistryEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry-subscriber/src/main/java/customer/api/CustomerRegistryEndpoint.java)
```java
@HttpEndpoint("/customer")
public class CustomerRegistryEndpoint {

  private final Logger log = LoggerFactory.getLogger(getClass());
  private final HttpClient httpClient;
  private final ComponentClient componentClient;

  public record Address(String street, String city) {}

  public record CreateCustomerRequest(String email, String name, Address address) {}

  public CustomerRegistryEndpoint(
    HttpClientProvider webClientProvider, // (1)
    ComponentClient componentClient
  ) {
    this.httpClient = webClientProvider.httpClientFor("customer-registry"); // (2)
    this.componentClient = componentClient;
  }

  @Post("/{id}")
  public HttpResponse create(String id, CreateCustomerRequest createRequest) {
    log.info("Delegating customer creation to upstream service: {}", createRequest);
    if (id == null || id.isBlank()) throw HttpException.badRequest("No id specified");

    // make call to customer-registry service
    var response = httpClient
      .POST("/customer/" + id) // (3)
      .withRequestBody(createRequest)
      .invoke();

    if (response.httpResponse().status() == StatusCodes.CREATED) {
      return HttpResponses.created(); // (4)
    } else {
      throw new RuntimeException(
        "Delegate call to create upstream customer failed, response status: " +
        response.httpResponse().status()
      );
    }
  }
}
```

| **1** | Accept the `HttpClientProvider` |
| **2** | Use it to create a client for the service `customer-registry` |
| **3** | Issue an HTTP POST request to the service |
| **4** | Turn the response into our own response |

| | If you're looking to test this locally, you will likely need to run the two services on different ports. For more details, consult [Running multiple services](running-locally.html#multiple_services). |

It is also possible to interact with arbitrary non-Akka services using the `HttpClientProvider`; to do so, pass a string with `https://example.com` or `http://example.com` instead of a service name.

For more details see [Component and service calls](component-and-service-calls.html).

## Advanced HTTP requests and responses

For more control over requests and responses it is also possible to use the lower-level Akka HTTP model APIs.

### Low level responses

Returning `akka.http.javadsl.model.HttpResponse` makes it possible to create more flexible and advanced responses. For example, it allows returning custom headers, custom response body encodings and even streaming responses.
As a convenience, `akka.javasdk.http.HttpResponses` provides factories for common response scenarios without having to reach for the Akka HTTP model APIs directly:

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
record HelloResponse(String greeting) {}

@Get("/hello-low-level-response/{name}/{age}")
public HttpResponse lowLevelResponseHello(String name, int age) { // (1)
  if (age > 130) return HttpResponses.badRequest(
    "It is unlikely that you are " + age + " years old"
  ); // (2)
  else return HttpResponses.ok(new HelloResponse("Hello " + name + "!")); // (3)
}
```

| **1** | Declare the return type as `akka.http.javadsl.model.HttpResponse` |
| **2** | Return a bad request response |
| **3** | Return an ok response, you can still use arbitrary objects and get them serialized to JSON |

Dropping all the way down to the Akka HTTP API:

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
@Get("/hello-lower-level-response/{name}/{age}")
public HttpResponse lowerLevelResponseHello(String name, int age) {
  if (age > 130) return HttpResponse.create()
    .withStatus(StatusCodes.BAD_REQUEST)
    .withEntity("It is unlikely that you are " + age + " years old");
  else {
    var jsonBytes = JsonSupport.encodeToAkkaByteString(
      new HelloResponse("Hello " + name + "!")
    ); // (1)
    return HttpResponse.create() // (2)
      .withEntity(ContentTypes.APPLICATION_JSON, jsonBytes); // (3)
  }
}
```

| **1** | At this level there are no conveniences; the response object must be manually rendered into JSON bytes |
| **2** | The response returned by `HttpResponse.create` is `200 Ok` |
| **3** | Pass the response body bytes and the `ContentType` to describe what they contain |

### Low level requests

Accepting `HttpEntity.Strict` will collect all request entity bytes into memory for processing (up to 8 MB), for example to handle uploads of a custom media type:

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
private static final ContentType IMAGE_JPEG = ContentTypes.create(MediaTypes.IMAGE_JPEG);

@Post("/post-image/{name}")
public String lowLevelRequestHello(String name, HttpEntity.Strict strictRequestBody) {
  if (!strictRequestBody.getContentType().equals(IMAGE_JPEG)) {
    throw HttpException.badRequest("This service only accepts " + IMAGE_JPEG);
  } else { // (1)
    return "Got " + strictRequestBody.getData().size() + " bytes for image name " + name; // (2)
  }
}
```

| **1** | `HttpEntity.Strict` gives access to the request body content type |
| **2** | as well as the actual bytes, in an `akka.util.ByteString` |
Accepting `akka.http.javadsl.model.HttpRequest` makes it possible to do more flexible and advanced request handling, but at the cost of significantly more complexity. This way of handling requests should only be used for advanced use cases when there is no other option.

In such a method it is paramount that the streaming request body is always handled, for example by discarding it or collecting it all into memory; if not, it will stall the incoming HTTP connection.

Handling the streaming request requires an `akka.stream.Materializer`; to get access to one, define a constructor parameter of this type to have it injected by the SDK.

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
public class ExampleEndpoint extends AbstractHttpEndpoint {

  private final Materializer materializer;

  public ExampleEndpoint(Materializer materializer) { // (1)
    this.materializer = materializer;
  }

  @Get("/hello-request-header/{name}")
  public CompletionStage<String> lowerLevelRequestHello(String name, HttpRequest request) {
    if (request.getHeader("X-my-special-header").isEmpty()) {
      return request
        .discardEntityBytes(materializer)
        .completionStage()
        .thenApply(__ -> { // (2)
          throw HttpException.forbidden("Missing the special header");
        });
    } else {
      return request
        .entity()
        .toStrict(1000, materializer)
        .thenApply(strictRequestBody -> // (3)
          " Hello " + name + "! " +
          "We got your " + strictRequestBody.getData().size() + " bytes " +
          "of type " + strictRequestBody.getContentType()
        );
    }
  }
}
```

| **1** | Accept the materializer and keep it in a field |
| **2** | Make sure to discard the request body when failing |
| **3** | Or collect the bytes into memory |

### Serving static content

Static resources such as HTML and CSS files can be packaged together with the service. This is done by placing the resource files in `src/main/resources/static-resources` and returning them from an endpoint method using [HttpResponses.staticResource](_attachments/api/akka/javasdk/http/HttpResponses.html#staticResource).

This can be done for a single filename:

[StaticResourcesEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/StaticResourcesEndpoint.java)
```java
@Get("/") // (1)
public HttpResponse index() {
  return HttpResponses.staticResource("index.html"); // (2)
}

@Get("/favicon.ico") // (3)
public HttpResponse favicon() {
  return HttpResponses.staticResource("favicon.ico"); // (4)
}
```

| **1** | The specific path `/` |
| **2** | Load a specific file placed in `src/main/resources/static-resources/index.html` |
| **3** | Another specific path `/favicon.ico` |
| **4** | The specific resource to serve |

It is also possible to map an entire path subtree using `**` as a wildcard at the end of the path:

[StaticResourcesEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/StaticResourcesEndpoint.java)
```java
@Get("/static/**") // (1)
public HttpResponse webPageResources(HttpRequest request) { // (2)
  return HttpResponses.staticResource(request, "/static/"); // (3)
}
```

| **1** | Endpoint method for any path under `/static/` |
| **2** | Accept `akka.http.javadsl.model.HttpRequest` for further inspection of the actual path. |
| **3** | Load any available file under `static-resources` after first removing `/static` from the request path. The request path `/static/images/example.png` is resolved to the file `src/main/resources/static-resources/images/example.png` from the project. |
| | This is convenient for service documentation or small self-contained services with a web user interface, but it is not intended for production, where coupling the service lifecycle with the user interface would mean that a new service version would need to be deployed for any change in the user interface. |

### Streaming responses with server-sent events

[Server-sent events (SSE)](https://html.spec.whatwg.org/multipage/server-sent-events.html#server-sent-events) is a way to push a stream of elements through a single HTTP response that the client can consume one by one, rather than having to wait for the entire response to complete.

Any Akka stream `Source` of elements, where the elements can be serialized to JSON using Jackson, can be turned into an SSE endpoint method. If the stream is idle, a heartbeat is emitted every 5 seconds to make sure the response stream is kept alive through proxies and firewalls.

[ExampleEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleEndpoint.java)
```java
@Get("/current-time")
public HttpResponse streamCurrentTime() {
  Source<Long, Cancellable> timeSource = Source.tick( // (1)
    Duration.ZERO,
    Duration.ofSeconds(5),
    "tick"
  ).map(__ -> System.currentTimeMillis()); // (2)

  return HttpResponses.serverSentEvents(timeSource); // (3)
}
```

| **1** | `Source.tick` emits the element `"tick"` immediately (after `Duration.ZERO`) and then every 5 seconds |
| **2** | Every time a tick is seen, we turn it into a system clock timestamp |
| **3** | Passing the `Source` to `serverSentEvents` returns an `HttpResponse` for the endpoint method. |

A more realistic use case would be to stream the changes from a view, using the [streaming view updates feature](views.html#_streaming_view_updates).

[CustomerEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/api/CustomerEndpoint.java)
```java
@Get("/by-city-sse/{cityName}")
public HttpResponse continousByCityNameServerSentEvents(String cityName) {
  // view will keep stream going, toggled with streamUpdates = true on the query
  var customerSummarySource = componentClient
    .forView() // (1)
    .stream(CustomersByCity::continuousCustomersInCity)
    .source(cityName);

  return HttpResponses.serverSentEvents(customerSummarySource); // (2)
}
```

| **1** | The view is annotated with `@Query(value = [a query], streamUpdates = true)` to keep polling the database after the initial result is returned, and to return updates matching the query filter |
| **2** | The stream of view entries and subsequent updates is turned into an SSE response. |
Another realistic example is to periodically poll an entity for its state, but only emit an element over SSE when the state changes:

[CustomerEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/key-value-customer-registry/src/main/java/customer/api/CustomerEndpoint.java)
```java
private record CustomerStreamState(Optional<Customer> customer, boolean isSame) {}

@Get("/stream-customer-changes/{customerId}")
public HttpResponse streamCustomerChanges(String customerId) {
  Source<Customer, Cancellable> stateEvery5Seconds =
    // stream of ticks, one immediately, then one every five seconds
    Source.tick(Duration.ZERO, Duration.ofSeconds(5), "tick") // (1)
      // for each tick, request the entity state
      .mapAsync(
        1,
        __ ->
          // Note: not safe to turn this into `.invoke()` in a stream `.map()`
          componentClient
            .forKeyValueEntity(customerId)
            .method(CustomerEntity::getCustomer)
            .invokeAsync()
            .handle((Customer customer, Throwable error) -> {
              if (error == null) {
                return Optional.of(customer);
              } else if (error instanceof IllegalArgumentException) {
                // calling getCustomer throws IllegalArgument if the customer does not exist
                // we want the stream to continue polling in that case,
                // so turn it into an empty optional
                return Optional.empty();
              } else {
                throw new RuntimeException(
                  "Unexpected error polling customer state",
                  error
                );
              }
            })
      )
      // then filter out the empty optionals and return the actual
      // customer states for nonempty so that the stream contains only Customer elements
      .filter(Optional::isPresent)
      .map(Optional::get);

  // deduplicate, so that we don't emit if the state did not change from last time
  Source<Customer, Cancellable> streamOfChanges = stateEvery5Seconds // (2)
    .scan(
      new CustomerStreamState(Optional.empty(), true),
      (state, newCustomer) ->
        new CustomerStreamState(
          Optional.of(newCustomer),
          state.customer().isPresent() && state.customer().get().equals(newCustomer)
        )
    )
    .filterNot(state -> state.isSame() || state.customer().isEmpty())
    .map(state -> state.customer().get());

  // now turn each changed internal state representation into public API representation,
  // just like get endpoint above
  var streamOfChangesAsApiType = // (3)
    streamOfChanges.map(customer -> toApiCustomer(customerId, customer));

  // turn into server sent event response
  return HttpResponses.serverSentEvents(streamOfChangesAsApiType); // (4)
}
```

| **1** | Right away, and then every 5 seconds, use the `ComponentClient` to call `CustomerEntity#getCustomer` to get the current state. |
| **2** | Use `scan` and `filterNot` to drop updates where the state did not change |
| **3** | Transform the internal customer domain type to a public API representation |
| **4** | Turn the stream into an SSE response |

This uses more advanced Akka stream operators; you can find more details about those in the [Akka libraries documentation](https://doc.akka.io/libraries/akka-core/current/stream/operators/index.html).

## See also

- [Access Control Lists (ACLs)](access-control.html)
- [JSON Web Tokens (JWTs)](../reference/jwts.html)
- [TLS certificates](../operations/tls-certificates.html)
- [HTTP Basic authentication](../operations/services/invoke-service.html#_http_basic_authentication)

# Designing gRPC Endpoints

![Endpoint](../_images/endpoint.png)

An Endpoint is a component that creates an externally accessible API. Endpoints are how you expose your services to the outside world.
Two different types of endpoints are available: HTTP and gRPC endpoints. In this page, we will focus on gRPC endpoints.

gRPC was designed to support service evolution and decoupling by enforcing a protocol-first design through `.proto` files. This ensures that service contracts are explicitly defined, providing a clear structure for communication. Protobuf, the underlying serialization format, supports backward and forward compatibility, avoiding tight coupling by making it easier to evolve services without breaking existing clients. Additionally, gRPC's efficient binary serialization and support for both unary and streaming calls make it a good choice for high-performance, scalable service-to-service communication. For more information on gRPC and Protobuf, see [https://grpc.io](https://grpc.io/) and [the Protobuf 3 guide](https://protobuf.dev/programming-guides/proto3/).

gRPC Endpoint components make it possible to conveniently define APIs accepting and responding in protobuf — the binary, typed protocol used by gRPC which is designed to handle evolution of a service over time.

| | Our recommendation is to use gRPC Endpoints for cross-service calls (be it with another Akka service or other backend services) and HTTP Endpoints for APIs consumed directly by client-facing / frontend applications — for which the use of gRPC comes at a greater cost. For a deeper dive into the differences between gRPC and HTTP Endpoints, see [Endpoints](../concepts/grpc-vs-http-endpoints.html). |

## Basics

To define a gRPC Endpoint component, you start by defining a `.proto` file that defines the service and its messages in `src/main/proto` of the project.

[customer_grpc_endpoint.proto](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/proto/com/example/customer_grpc_endpoint.proto)
```protobuf
syntax = "proto3";

option java_multiple_files = true;
option java_package = "customer.api.proto"; // (1)

package customer.api;

message Address {
  string street = 1;
  string city = 2;
}

message Customer {
  string email = 1;
  string name = 2;
  Address address = 3;
}

message GetCustomerRequest {
  string customer_id = 1;
}

service CustomerGrpcEndpoint {
  rpc GetCustomer (GetCustomerRequest) returns (Customer) {} // (2)
}
```

| **1** | Define the java package in which the generated classes will be placed. |
| **2** | Declare the method along with its input and output types. |

| | For a reference on how to format your protobuf files, check the [protobuf.dev style guide](https://protobuf.dev/programming-guides/style/). |

When compiling the project, a Java interface for the service is generated at `customer.api.proto.CustomerGrpcEndpoint`.
Define a class implementing this interface in the `api` package of your project and annotate the class with `@GrpcEndpoint`:

[CustomerGrpcEndpointImpl.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/CustomerGrpcEndpointImpl.java)
```java
package com.example.api;

import akka.javasdk.annotations.GrpcEndpoint;
import customer.api.proto.Customer;
import customer.api.proto.CustomerGrpcEndpoint;
import customer.api.proto.GetCustomerRequest;

@GrpcEndpoint // (1)
public class CustomerGrpcEndpointImpl implements CustomerGrpcEndpoint {

  @Override
  public Customer getCustomer(GetCustomerRequest in) {
    // dummy implementation with hardcoded values
    var customer = Customer.newBuilder() // (2)
      .setName("Alice")
      .setEmail("alice@email.com")
      .build();
    return customer; // (3)
  }
}
```

| **1** | Mark the class as a gRPC endpoint, implementing the generated interface `CustomerGrpcEndpoint`. |
| **2** | Create a new `Customer` protobuf message and set the `name` and `email` fields. |
| **3** | Respond with the `Customer` protobuf message to the client. |

| | This implementation does not interact with any other components and returns a hard-coded response for simplicity. Interacting with other components is covered in the next section. |

### Error responses

The gRPC protocol has different status codes to signal that something went wrong with a request, for example `INVALID_ARGUMENT` to signal that the request was malformed. To signal an error in the response, throw a `GrpcServiceException` as shown in the example below:

[CustomerGrpcEndpointImpl.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/api/CustomerGrpcEndpointImpl.java)
```java
if (in.getCustomerId().isBlank()) throw new GrpcServiceException(
  Status.INVALID_ARGUMENT.augmentDescription("Customer id must not be empty")
);
```

In addition to the special `GrpcServiceException` and `StatusRuntimeException`, exceptions are handled like this:

- `IllegalArgumentException` is turned into an `INVALID_ARGUMENT`
- Any other exception is turned into an `INTERNAL` error.
  - In production the error is logged together with a correlation id, and the response message only includes the correlation id to not leak service internals to an untrusted client.
  - In local development and integration tests the full exception is returned as response body.

## Securing gRPC endpoints

Akka's gRPC endpoints can be secured by multiple approaches:

1. [Access Control Lists (ACLs)](access-control.html)
2. [JSON Web Tokens (JWTs)](../reference/jwts.html)
3. [TLS certificates](../operations/tls-certificates.html)

## Interacting with other components

Endpoints are commonly used to interact with other components in a service. This is done through the `akka.javasdk.client.ComponentClient`. If the constructor of the endpoint class has a parameter of this type, it will be injected by the SDK and can then be used when processing requests.
Let's see how this is done:

[CustomerGrpcEndpointImpl.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/api/CustomerGrpcEndpointImpl.java)
```java
@GrpcEndpoint // (1)
public class CustomerGrpcEndpointImpl implements CustomerGrpcEndpoint {

  private static final Logger log = LoggerFactory.getLogger(CustomerGrpcEndpointImpl.class);

  private final ComponentClient componentClient;

  public CustomerGrpcEndpointImpl(ComponentClient componentClient) { // (2)
    this.componentClient = componentClient;
  }

  @Override
  public Customer getCustomer(GetCustomerRequest in) {
    if (in.getCustomerId().isBlank()) throw new GrpcServiceException(
      Status.INVALID_ARGUMENT.augmentDescription("Customer id must not be empty")
    );
    try {
      var customer = componentClient
        .forEventSourcedEntity(in.getCustomerId()) // (3)
        .method(CustomerEntity::getCustomer)
        .invoke();
      return domainToApi(customer); // (4)
    } catch (Exception ex) {
      if (ex.getMessage().contains("No customer found for id")) {
        throw new GrpcServiceException(Status.NOT_FOUND);
      } else {
        throw new RuntimeException(ex);
      }
    }
  }

  private Customer domainToApi(customer.domain.Customer domainCustomer) {
    return Customer.newBuilder()
      .setName(domainCustomer.name())
      .setEmail(domainCustomer.email())
      .setAddress(domainToApi(domainCustomer.address()))
      .build();
  }

  private Address domainToApi(customer.domain.Address domainAddress) {
    if (domainAddress == null) return null;
    else {
      return Address.newBuilder()
        .setCity(domainAddress.city())
        .setStreet(domainAddress.street())
        .build();
    }
  }
}
```

| **1** | Mark the class as a gRPC endpoint and implement the generated interface `CustomerGrpcEndpoint`. |
| **2** | Accept the `ComponentClient` and keep it in a field. |
| **3** | Use the component client to interact with an Event Sourced Entity that holds the customers, identified by `customerId`. |
| **4** | Transform the result from the component client to the external response. |

For more details see [Component and service calls](component-and-service-calls.html).

## Streaming

gRPC supports streaming requests and responses, with which either the client or the server (or both) can send multiple messages. In this section, we will show how to stream the results of a request, but the remaining combinations are similar.

To stream the results of a request, mark the return type of the method as `stream` in the `.proto` file:

[customer_grpc_endpoint.proto](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/proto/customer/api/customer_grpc_endpoint.proto)
```protobuf
message CustomerSummary {
  string email = 1;
  string name = 2;
}

service CustomerGrpcEndpoint {
  // ...
  rpc CustomerByEmailStream (CustomerByEmailRequest) returns (stream CustomerSummary) {}
}
```

Then, the method in the endpoint implementation will need to construct and return an Akka stream `Source` of the response messages:

[CustomerGrpcEndpointImpl.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/api/CustomerGrpcEndpointImpl.java)
```java
@Override
public Source<CustomerSummary, NotUsed> customerByEmailStream(CustomerByEmailRequest in) {
  // Shows streaming consumption of a view, transforming
  // each element and passing along to a streamed response
  var customerSummarySource = componentClient
    .forView()
    .stream(CustomersByEmailView::getCustomersStream)
    .source(in.getEmail());

  return customerSummarySource.map(
    c -> CustomerSummary.newBuilder().setName(c.name()).setEmail(c.email()).build()
  );
}
```

| | The above example depends on an existing View component that returns a stream of `Customer` rows. See [Streaming the result](views.html#_streaming_the_result) for more details. |

## Testing the Endpoint

To exercise a gRPC endpoint, the testkit contains methods to get a gRPC client for calling the methods of the endpoint:

[CustomerGrpcIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/api/CustomerGrpcIntegrationTest.java)
```java
public class CustomerGrpcIntegrationTest extends TestKitSupport {

  @Test
  public void createCustomerCart() {
    var client = getGrpcEndpointClient(CustomerGrpcEndpointClient.class);

    var customerRequest = customer.api.proto.CreateCustomerRequest.newBuilder()
      .setCustomerId("customer-abc")
      .setCustomer(
        customer.api.proto.Customer.newBuilder()
          .setEmail("abc@email.com")
          .setName("John Doe")
          .build()
      )
      .build();

    client.createCustomer(customerRequest);

    var getCustomer = client.getCustomer(
      customer.api.proto.GetCustomerRequest.newBuilder().setCustomerId("customer-abc").build()
    );
    Assertions.assertEquals("John Doe", getCustomer.getName());
  }
}
```

By default, if ACLs are defined, the testkit client is authenticated as if it was the service itself calling the endpoint, but there is also an overload to `getGrpcEndpointClient` that takes a `Principal` parameter for specifying what principal the client should appear as from the endpoint's point of view, for example to simulate another service (`Principal.localService("other-service")`) or a request coming from the public internet (`Principal.INTERNET`).

## Schema evolution

Protobuf is designed to allow evolving the messages while keeping wire compatibility. Following are the most common aspects of a message protocol you would want to change. For more details on what other changes can be compatible, see the [Protobuf documentation](https://protobuf.dev/programming-guides/proto3/#updating).

### gRPC services and their methods

If a gRPC service package, service name or RPC method name is changed, or if an RPC method is changed to accept or return streaming data, clients that only know the old service description will no longer be able to call the new service without recompiling and changing the consuming code.

### Renaming messages or their fields

Fields, message names, and protobuf package names are not encoded in the wire protocol; instead the *tag number*, the number assigned to each field, is used. This means the names can be changed as long as the message structure is intact. A client consuming messages with an old version of the protobuf messages will still be able to communicate with a service that has name changes.
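For example, the following rename is wire compatible because the tag number is unchanged (a sketch, not from the samples):

```protobuf
message Customer {
  // renamed from `name`; wire compatible because tag number 1 is unchanged
  string customer_name = 1;
}
```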
Changing names will however not be *source compatible*, since the generated Java class and field names will change along with the protobuf name change. Once a protocol file with name changes is introduced in a service, the code will need updates wherever it uses the old names.

### Adding fields

To allow adding new fields without breaking the wire protocol, all fields are optional in protobuf; for primitive fields this means that they will have a default value when not present over the wire. For nested messages a missing value leads to a Java class instance with defaults for all values, but it is also possible to observe that the value is missing through the generated `has[FieldName]` methods for each field that is a message.

When deserializing, if there are any unknown fields in the message, the message will deserialize without problems, and the unknown fields can be inspected through `getUnknownFields()`.

### Removing fields

The most important aspect to understand about evolution of protobuf messages is that the *tag number*, the number for each field, must never be re-used. A field can just be dropped, but it is good practice to mark the original field number as `reserved` to not accidentally re-use it in the future. It is possible to mark both the tag number and the old field name as reserved:

```protobuf
message Example {
  string first_still_used = 1;
  // used to be here:
  // string old_field = 2;
  string another_used = 3;
  reserved 2;
  reserved "old_field";
}
```

It is also possible to mark a field as deprecated, which leads to the field still being in the protocol but adds a `@Deprecated` annotation to the generated code to advise consumers not to use the field:

```protobuf
message Example {
  string first_still_used = 1;
  int32 old_field = 2 [deprecated = true];
  string another_used = 3;
}
```

Dropping fields will not be *source compatible*, since the generated Java class and its set of fields will change along with the protobuf message change. Once a protocol file with removed fields is introduced in a service, the code will need updates wherever it accesses the old fields.

## See also

- [Access Control Lists (ACLs)](access-control.html)
- [JSON Web Tokens (JWTs)](../reference/jwts.html)
- [TLS certificates](../operations/tls-certificates.html)

# Designing MCP Endpoints

![Endpoint](../_images/endpoint.png)

An Endpoint is a component that creates an externally accessible API. MCP Endpoints allow you to expose services to MCP clients such as LLM chat agent desktop applications and agents running on other services.

MCP endpoints in Akka can provide:

- "tools" – functions/logic the MCP client can call on behalf of the LLM
- "resources" – static resources or dynamic resource templates the MCP client can fetch for the LLM
- "prompts" – template prompts created from input parameters

Endpoints are made available using a stateless Streamable HTTP transport defined by the [MCP specification 2025-03-26](https://modelcontextprotocol.io/specification/2025-03-26).

## MCP endpoint class

To create an MCP endpoint, a class is annotated with `@McpEndpoint`, plus an ACL annotation configuring where it can be accessed from.
[ExampleMcpEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleMcpEndpoint.java)
```java
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.ALL))
@McpEndpoint(serverName = "doc-snippets-mcp-sample", serverVersion = "0.0.1")
public class ExampleMcpEndpoint {

  private ComponentClient componentClient;

  public ExampleMcpEndpoint(ComponentClient componentClient) {
    this.componentClient = componentClient;
  }
}
```

The service is available under the path `/mcp` by default, but it is possible to have multiple MCP endpoints in the same Akka service by specifying a different path for each endpoint.

### Tools

A tool is a public method made available to MCP clients. It is important to give a clear description of what the tool does using the description value, as well as using the `@Description` annotation on parameters and fields, since this is how the calling LLM gains an understanding of what the tool does.

By default, the input schema for the tool is reflectively created based on the input parameter type. The input class may require additional information per field to help the LLM understand what each parameter means. Only simple input parameter classes are supported: fields must be of primitive type, the boxed Java primitive types, or strings. All fields are marked as required in the schema by default; any non-required parameter should be of type `Optional`.

[ExampleMcpEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleMcpEndpoint.java)
```java
@McpTool(
  name = "add", // (1)
  description = "Adds the two given numbers and returns the result" // (2)
)
public String add(
  @Description("The first number") int n1,
  @Description("The second number") int n2
) { // (3)
  var result = n1 + n2;
  return Integer.toString(result);
}
```

| **1** | An optional tool name. If not defined, the method name is used. Must be unique in the MCP service if defined. |
| **2** | A description of what the tool does. |
| **3** | The `@Description` annotations describing each input for the tool. |

For full flexibility and more complex input types, it is also possible to specify the JSON Schema of the input manually in the annotation:

[ExampleMcpEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleMcpEndpoint.java)
```java
public record EchoToolRequest(String message) {}

@McpTool(
  name = "echo",
  description = "Echoes back whatever string is thrown at it",
  inputSchema = """
    {
      "type":"object",
      "properties": {
        "input": {
          "type": "object",
          "properties": {
            "message": {"type":"string", "description":"A string to echo"}
          },
          "required": ["message"]
        }
      },
      "required": ["input"]
    }
    """ // (1)
)
public String echo(EchoToolRequest input) {
  return input.message;
}
```

| **1** | The entire JSON Schema string for the input. |

When using a manual schema, it is crucial to make sure that the schema describes a JSON structure that is actually what Jackson accepts when parsing it into the input parameter type.

### Resources

A static resource is a public zero-parameter method returning text or bytes. The resource is identified by a unique URI.
[ExampleMcpEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleMcpEndpoint.java)
```java
@McpResource(
  uri = "file:///background.png", // (1)
  name = "Background image",
  description = "A background image for Akka sites",
  mimeType = "image/png"
)
public byte[] backgroundImage() { // (2)
  try (
    InputStream in = this.getClass()
      .getResourceAsStream("/static-resources/images/background.png")
  ) {
    if (in == null) throw new RuntimeException("Could not find background image");
    return in.readAllBytes();
  } catch (IOException e) {
    throw new RuntimeException(e);
  }
}
```

| **1** | A URI identifying the specific resource returned by this method. |
| **2** | An empty parameter list, and a return type that is `String` for raw text content or `byte[]` for byte contents. Other return types are turned into JSON. |

A dynamic resource instead defines a URI template with placeholders for sections; the method accepts `String` parameters with the same names.

[ExampleMcpEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleMcpEndpoint.java)
```java
@McpResource(
  uriTemplate = "file:///images/{fileName}", // (1)
  name = "Dynamic file",
  description = "Fetch a specific image file",
  // Note: there is no way to dynamically return a mime type,
  // it has to be the same for all files
  mimeType = "image/png"
)
public byte[] dynamicResource(String fileName) { // (2)
  if (fileName.contains("..")) {
    // Important to validate input
    throw new RuntimeException("Invalid image file: " + fileName);
  }
  try (
    InputStream in = this.getClass()
      .getResourceAsStream("/static-resources/images/" + fileName)
  ) {
    if (in == null) throw new RuntimeException("Could not find image file: " + fileName);
    return in.readAllBytes();
  } catch (IOException e) {
    throw new RuntimeException(e);
  }
}
```

| **1** | A URI template with placeholders. |
| **2** | A parameter list matching the placeholders. |

### Prompts

Prompts are a way to provide example prompts to the MCP client given some input parameters.

[ExampleMcpEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/api/ExampleMcpEndpoint.java)
```java
@McpPrompt(description = "Java code review prompt")
public String javaCodeReview(@Description("The Java code to review") String code) { // (1)
  return "Please review this Java code:\n" + code; // (2)
}
```

| **1** | Zero or more string parameters to use in the prompt, annotated with `@Description` to describe the purpose of each. |
| **2** | Logic to use the input to construct a prompt. |

## Interacting with other components

The most common use case for endpoints is to interact with other components in a service. This is done through the `akka.javasdk.client.ComponentClient`. If the constructor of the endpoint class has a parameter of this type, it will be injected by the SDK. For more details see [Component and service calls](component-and-service-calls.html).

## Interacting with HTTP services

It is possible for an MCP endpoint to interact with other services over HTTP. This is done through the `akka.javasdk.http.HttpClientProvider`. For more details see [Component and service calls](component-and-service-calls.html).

## Authentication and authorization

The Akka MCP endpoints do not support the OAuth 2.1 flows in the MCP spec. Endpoint classes can be annotated using the `@Acl` and `@JWT` annotations to control access (individual method annotations are not supported).
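For instance, a class-level combination could look like the following sketch. The annotation values here are illustrative assumptions; see [Access Control Lists (ACLs)](access-control.html) and [JSON Web Tokens (JWTs)](../reference/jwts.html) for the exact options.

```java
// Sketch only: restrict an MCP endpoint to requests carrying a valid bearer token.
// The matcher and validation mode below are assumptions for illustration.
@McpEndpoint(serverName = "secured-mcp-sample", serverVersion = "0.0.1")
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET))
@JWT(validate = JWT.JwtMethodMode.BEARER_TOKEN)
public class SecuredMcpEndpoint {
  // tools, resources, and prompts as shown above
}
```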
It is also possible to access the endpoint request headers and implement custom authorization based on them.

## Testing MCP endpoints

There are no specific testkit utilities for MCP. However, it is possible to manually construct endpoints and directly call their methods, as well as to use the testkit HTTP client together with handcrafted JSON-RPC MCP payloads to exercise MCP tools, prompts, and resources.

# Timers

![Timer](../_images/timer.png)

Timers enable the scheduling of calls for future execution, making them particularly useful for verifying the completion status of processes at a later time. Timers are stored by the [Akka Runtime](../reference/glossary.html#runtime) and are guaranteed to run at least once.

When a timer is triggered, it initiates the scheduled call. If the call succeeds, the timer completes and is automatically removed. In case of a failure, the timer is rescheduled, continuing until the call is successful.

**Key features of timers**:

- Guaranteed to run at least once.
- Can be scheduled to run at any future time.
- Can be deleted if no longer needed.
- Automatically removed upon successful completion.
- Rescheduled after failures, with a configurable limit on retry attempts.

**Limitations of timers**:

- Maximum allowed payload size is 1024 bytes.
- Each service can have up to 50,000 active timers.
- Scheduled calls identify the component by component id and the method by its name. Changes to these may prevent the scheduled call from executing.
- Method parameter types must remain consistent after scheduling a call.

You can schedule calls to any method accessible through the `ComponentClient`, including command handlers on Event Sourced Entities, Key-Value Entities, Workflows, and Timed Actions. To schedule a call, inject both `TimerScheduler` and `ComponentClient` into your component. These dependencies are available for injection in Service Setup, Endpoints, Consumers, Timed Actions, and Workflows. For more details, see [dependency injection](setup-and-dependency-injection.html#_dependency_injection).

## Timed Actions

Timed Actions are stateless components designed for scheduling functions to execute at future times. They serve as integration points for coordinating scheduled calls: unlike Entities and Workflows they store no state, and unlike Consumers and Views they have no direct data access. Within a Timed Action, you can access the `ComponentClient` and compose calls to other components like Event Sourced Entities, Key-Value Entities, Workflows, and Views.

### Timed Action's Effect API

The Timed Action's Effect API defines actions that Akka should execute when a Timed Action method is invoked. A Timed Action Effect can either:

- return `Done`, confirming the scheduled call completed successfully
- return an error message if the operation failed

For additional details, refer to [Declarative Effects](../concepts/declarative-effects.html).

## Scheduling a timer

To illustrate the usage of timers, consider an Ordering Service composed of a [Key-Value Entity](key-value-entities.html) and a Timed Action component, where the Timed Action manages unconfirmed order cancellations. In this scenario, users place an order that requires confirmation within a set timeframe, similar to a food ordering app where a restaurant confirms or rejects an order.
If confirmation is not received within the specified period, the order is automatically canceled.

The `OrderEndpoint` acts as a controller for the Order Entity, creating a timer before passing the request on to the entity. The timer is scheduled using `akka.javasdk.timer.TimerScheduler`, which you can inject into your component's constructor.

[OrderEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/reliable-timers/src/main/java/com/example/api/OrderEndpoint.java)
```java
@HttpEndpoint("/orders")
public class OrderEndpoint {

  private final TimerScheduler timerScheduler;
  private final ComponentClient componentClient;

  public OrderEndpoint(TimerScheduler timerScheduler, ComponentClient componentClient) { // (1)
    this.timerScheduler = timerScheduler;
    this.componentClient = componentClient;
  }

  private String timerName(String orderId) {
    return "order-expiration-timer-" + orderId;
  }

  @Post
  public Order placeOrder(OrderRequest orderRequest) {
    var orderId = UUID.randomUUID().toString(); // (2)

    timerScheduler.createSingleTimer( // (3)
      timerName(orderId), // (4)
      Duration.ofSeconds(10), // (5)
      componentClient
        .forTimedAction()
        .method(OrderTimedAction::expireOrder) // (6)
        .deferred(orderId)
    );

    var order = componentClient
      .forKeyValueEntity(orderId)
      .method(OrderEntity::placeOrder)
      .invoke(orderRequest); // (7)

    return order;
  }
}
```

| **1** | Declares `TimerScheduler` alongside `ComponentClient`, both provided by Akka. |
| **2** | Generates a unique identifier for the order and timer. |
| **3** | Calls the `TimerScheduler` API to register a new timer. |
| **4** | Uses the order id to generate a unique timer name. |
| **5** | Sets the timer delay. |
| **6** | Schedules a deferred call to the Timed Action component, covered next. |
| **7** | Calls `OrderEntity` to place the order. |

Akka registers the timer before the order is placed. This ensures that, if timer registration fails due to network issues, no untracked order remains. The inverse failure scenario, registering the timer but failing to place the order, is mitigated by handling potential failures in the `OrderEntity.cancel` method ([see further](#_cancel_order_impl)).

For reference, here is the `OrderEntity.placeOrder` method implementation.

[OrderEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/reliable-timers/src/main/java/com/example/application/OrderEntity.java)
```java
@Component(id = "order")
public class OrderEntity extends KeyValueEntity<Order> {

  //...

  public Effect<Order> placeOrder(OrderRequest orderRequest) { // (1)
    var orderId = commandContext().entityId();
    boolean placed = true;
    boolean confirmed = false;
    var newOrder = new Order(
      orderId,
      confirmed,
      placed, // (2)
      orderRequest.item(),
      orderRequest.quantity()
    );
    return effects().updateState(newOrder).thenReply(newOrder);
  }
}
```

| **1** | The `placeOrder` method initiates an order. |
| **2** | Sets the `placed` field to `true`. |

| | Timers are unique by name across the entire cluster, so scheduling another timer will replace an existing one of the same name. |

## Handling the timer call

Now let's examine the `OrderTimedAction.expireOrder` method.
[OrderTimedAction.java](https://github.com/akka/akka-sdk/blob/main/samples/reliable-timers/src/main/java/com/example/application/OrderTimedAction.java)
```java
@Component(id = "order-timed-action") // (1)
public class OrderTimedAction extends TimedAction { // (2)

  private final ComponentClient componentClient;

  public OrderTimedAction(ComponentClient componentClient) {
    this.componentClient = componentClient;
  }

  public Effect expireOrder(String orderId) {
    var result = componentClient
      .forKeyValueEntity(orderId)
      .method(OrderEntity::cancel) // (3)
      .invoke();
    return switch (result) { // (4)
      case OrderEntity.Result.Invalid ignored -> effects().done();
      case OrderEntity.Result.NotFound ignored -> effects().done();
      case OrderEntity.Result.Ok ignored -> effects().done();
    };
  }
}
```

| **1** | Uses the `@Component` annotation to identify the component. |
| **2** | Extends the `TimedAction` class. |
| **3** | Calls `OrderEntity` to cancel the order. |
| **4** | Determines if the call should recover or fail. If `NotFound` or `Invalid` is returned, the timer is marked obsolete and is not rescheduled. Other errors cause `expireOrder` to fail, and the timer is rescheduled. |

| | Any method executed by a timer must handle errors carefully. Unhandled errors may lead to continuous re-scheduling. Ensure failures are propagated only when retrying the call is intended. |

Here is the `OrderEntity.cancel` method for reference.

[OrderEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/reliable-timers/src/main/java/com/example/application/OrderEntity.java)
```java
@Component(id = "order")
public class OrderEntity extends KeyValueEntity<Order> {

  //...

  public Effect<Result> cancel() {
    var orderId = commandContext().entityId();
    if (!currentState().placed()) {
      return effects().reply(Result.NotFound.of("No order found for " + orderId)); // (1)
    } else if (currentState().confirmed()) {
      return effects()
        .reply(Result.Invalid.of("Cannot cancel an already confirmed order")); // (2)
    } else {
      return effects().updateState(emptyState()).thenReply(ok); // (3)
    }
  }
}
```

| **1** | Returns `NotFound` if the order was never placed. |
| **2** | Returns `Invalid` if the order is confirmed. |
| **3** | Otherwise, clears the entity state and returns `Ok`. |

Since this method is intended to be called by a timer, it must not fail. The `OrderEntity.cancel` method always returns a successful result, even when returning `NotFound` or `Invalid`, ensuring that the timer considers the call successful and does not re-schedule it. If the command handler were to throw an exception or return an `effects().error()`, the timer would interpret this as a failure and would re-schedule the call.

## Failures and retries

If a scheduled call fails, it is retried with an exponential backoff, starting at 3 seconds and maxing out at 30 seconds after successive failures. Retries continue indefinitely by default. To limit retries, set the `maxRetries` parameter in the `createSingleTimer` method.

## Deleting a timer

Let's review the implementation of the confirmation endpoint.

[OrderEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/reliable-timers/src/main/java/com/example/api/OrderEndpoint.java)
```java
@HttpEndpoint("/orders")
public class OrderEndpoint {

  // ...
@Post("/{orderId}/confirm") public HttpResponse confirm(String orderId) { var confirmResult = componentClient .forKeyValueEntity(orderId) .method(OrderEntity::confirm) .invoke(); // (1) return switch (confirmResult) { case OrderEntity.Result.Ok ignored -> { timerScheduler.delete(timerName(orderId)); // (2) yield HttpResponses.ok(); } case OrderEntity.Result.NotFound notFound -> HttpResponses.notFound(notFound.message()); case OrderEntity.Result.Invalid invalid -> HttpResponses.badRequest(invalid.message()); }; } } ``` | **1** | Confirms the order via `OrderEntity`. | | **2** | Removes the timer upon successful confirmation. | Once `OrderEntity` completes the operation, the timer is deleted. This sequence is important. Even if deleting the timer fails, the `OrderEntity.cancel` method, as seen earlier, ensures proper handling for obsolete timers, signaling Akka that they can be removed. | | You could entirely skip timer deletion when handling confirmation. In this case, the registered timer would be triggered later, and `OrderEntity.cancel` would handle this case gracefully. However, it’s always good practice to perform housekeeping to save resources. | ## Best practices When a timer is scheduled, the component method call is serialized and stored. The serialized data includes the component id, method name, and method parameter. Therefore, method signatures must remain stable across deployments. A timer will fail to execute if any of the following conditions occur: - The component id changes, preventing the timer from locating the component. - The method name changes, causing the timer to miss the correct method to call. - The payload format changes, leading to deserialization errors for the payload. If any of these changes happen in a new deployment, the timer becomes broken. This means the timer will repeatedly fail to execute and will be rescheduled indefinitely. Only a compatible deployment restoring the component will allow the timer to function correctly. If you need to refactor a method used by a timer, it’s recommended to keep the old method and delegate calls to the updated method. For example, suppose `OrderTimedAction` had a legacy method called `expire` that took `ExpireOrder` as a parameter. [OrderTimedAction.java](https://github.com/akka/akka-sdk/blob/main/samples/reliable-timers/src/main/java/com/example/application/OrderTimedAction.java) ```java public record ExpireOrder(String orderId) {} public Effect expire(ExpireOrder orderId) { return expireOrder(orderId.orderId()); } ``` In this case, keeping the legacy method and delegating to the new `OrderTimedAction.expireOrder` method ensures compatibility. Alternatively, if the legacy method is no longer needed, you can implement a no-operation method by returning `effects.done()`. [OrderTimedAction.java](https://github.com/akka/akka-sdk/blob/main/samples/reliable-timers/src/main/java/com/example/application/OrderTimedAction.java) ```java public Effect expire(ExpireOrder orderId) { return effects().done(); } ``` Retain the legacy method for as long as you have scheduled calls referring to it. To view scheduled timers in your service, use the following CLI command: ```command akka services components list-timers reliable-timers -o json // (1) ``` | **1** | Replace 'reliable-timers' with your service name. | This command outputs a list of scheduled timers in JSON format. 
# Component and service calls

An Akka service comprises many components. These components might depend on one another, on other Akka services, or even on external services. This section describes how to call other components and services from within an Akka service.

## Calling Akka components

Since Akka is an auto-scaling solution, components run distributed across many nodes within the same service. That's why calls between Akka components are done via a client rather than through regular method calls. The receiving component instance may be on the same node, but it may also be on a different node. Requests and responses are always serialized to JSON during transmission between the client and the component.

### Component Client

The `akka.javasdk.client.ComponentClient` is a utility for making type-safe calls between components within an Akka service. To use the `ComponentClient`, you need to inject it into your component via the constructor:

[CounterEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/api/CounterEndpoint.java)
```java
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET))
@HttpEndpoint("/counter")
public class CounterEndpoint {

  private final ComponentClient componentClient;

  public CounterEndpoint(ComponentClient componentClient) { // (1)
    this.componentClient = componentClient;
  }

  @Get("/{counterId}")
  public Integer get(String counterId) {
    return componentClient
      .forEventSourcedEntity(counterId) // (2)
      .method(CounterEntity::get)
      .invoke(); // (3)
  }

  @Post("/{counterId}/increase/{value}")
  public HttpResponse increase(String counterId, Integer value) {
    componentClient
      .forEventSourcedEntity(counterId)
      .method(CounterEntity::increase)
      .invoke(value);
    return ok(); // (4)
  }
}
```

| **1** | Accept the `ComponentClient` as a constructor argument and keep it in a field. |
| **2** | Use a specific request builder for the component you want to call. |
| **3** | Invoking the method returns the `T` that the component eventually returns. |
| **4** | Adapt the response rather than returning it as is. In this case, you discard the response value and respond OK without a response body. |

The component client can call command handlers on Event Sourced Entities, Key Value Entities, Workflows, Timed Actions, and query methods on Views.

The component client is available for injection only in Service Setup, Agents, Endpoints, Consumers, Timed Actions, and Workflows. For more information, see [dependency injection](setup-and-dependency-injection.html#_dependency_injection).

NOTE For component client error handling, see the [Errors and failures](errors-and-failures.html) section.

#### **Asynchronous execution**

For the vast majority of your Akka programming tasks, writing clean and simple synchronous code is all you need. One way to think about your synchronous code is that it returns already completed futures. For example, you could rewrite the first "hello world" sample to return a Java future, e.g. a `CompletionStage`, as follows:

```java
@Get("/hello")
public CompletionStage<String> hello() {
  return CompletableFuture.completedFuture("Hello world");
}
```

Obviously, just writing simple synchronous code is easier to read and maintain.
If you need to make a component client call that is explicitly asynchronous, you can use the component client's `invokeAsync()` method, which returns a `CompletionStage`. This allows you to trigger multiple calls concurrently, enabling parallel processing.

[CounterEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/api/CounterEndpoint.java)
```java
public record IncreaseAllThese(List<String> counterIds, Integer value) {}

@Post("/increase-multiple")
public HttpResponse increaseMultiple(IncreaseAllThese increaseAllThese) throws Exception {
  var triggeredTasks = increaseAllThese
    .counterIds()
    .stream()
    .map(
      counterId ->
        componentClient
          .forEventSourcedEntity(counterId)
          .method(CounterEntity::increase)
          .invokeAsync(increaseAllThese.value)
    ) // (1)
    .toList();

  for (var task : triggeredTasks) {
    task.toCompletableFuture().get(); // (2)
  }

  return ok(); // (3)
}
```

| **1** | Call `invokeAsync()` and collect each `CompletionStage`. |
| **2** | When all tasks have been started, wait for all tasks to complete. |
| **3** | When all tasks have successfully completed, we can respond. |

#### **Synchronous vs asynchronous component invocation**

You decide how the [component client](../reference/glossary.html#component_client) invokes the component, and the Akka runtime handles the request in the background. The following table summarizes the key differences between synchronous and asynchronous component invocation.

| | Synchronous | Asynchronous |
| --- | --- | --- |
| When the component method returns | After the method finishes | Immediately |
| Client behavior | Waits for the result before continuing | Continues immediately, must handle the result later |
| Return type | Whatever the component method returns directly | A `CompletionStage` representing the result at a later time |
| Component execution | Always runs in the background | Always runs in the background |
| Common use case | Calling a method and using the result in the next line of code | Starting multiple async tasks or implementing background, always-on processes (Ambient AI) |
| Ideal for | Simple flows where the result is needed immediately | Parallel task execution, deferred response handling, or long-running background logic |

#### **When in doubt, write synchronous code**

Trust that Akka will do the right thing and that the runtime makes the necessary optimizations. If you do need explicit control over futures, such as creating streams from asynchronous sources or explicitly performing parallel work, then the component client's `invokeAsync()` and the full power of Java concurrency are there for you when you need them.

## Calling Akka services

Calling other Akka services within the same project is done by invoking them using an HTTP or a gRPC client, depending on what type of endpoints the service provides.

### Calling Akka services over HTTP

The service is identified by the name it has been deployed with. Akka takes care of routing requests to the service and keeping the data safe by encrypting the connection and handling authentication for you.

In the following snippet, we have an endpoint component that calls another service named `counter`. It makes use of the SDK-provided `akka.javasdk.http.HttpClientProvider`, which returns HTTP client instances for calling other Akka services.
In our delegating service implementation:

[DelegatingServiceEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/callanotherservice/DelegatingServiceEndpoint.java)
```java
@Acl(allow = @Acl.Matcher(service = "*"))
@HttpEndpoint
public class DelegatingServiceEndpoint {

  private final HttpClient httpClient;

  public DelegatingServiceEndpoint(HttpClientProvider httpClient) { // (1)
    this.httpClient = httpClient.httpClientFor("counter"); // (2)
  }

  // model for the JSON we accept
  record IncreaseRequest(int increaseBy) {}

  // model for the JSON the upstream service responds with
  record Counter(int value) {}

  @Post("/delegate/counter/{counterId}/increase")
  public String addAndReturn(String counterId, IncreaseRequest request) {
    var response = httpClient
      .POST("/counter/" + counterId + "/increase") // (3)
      .withRequestBody(request)
      .responseBodyAs(Counter.class)
      .invoke(); // (4)

    if (response.status().isSuccess()) { // (5)
      return "New counter value: " + response.body().value;
    } else {
      throw new RuntimeException("Counter returned unexpected status: " + response.status());
    }
  }
}
```

| **1** | Accept an `HttpClientProvider` parameter in the constructor. |
| **2** | Use it to look up a client for the `counter` service. |
| **3** | Use the `HttpClient` to prepare a REST call to the `counter` service endpoint. |
| **4** | Invoking the call will return a `StrictResponse` with details about the result as well as the deserialized response body. |
| **5** | Handle the response, which may be successful or an error. |

| | The HTTP client provider is only available for injection in the following types of components: HTTP Endpoints, gRPC Endpoints, Workflows, Consumers and Timed Actions. |

### Calling external HTTP services

Calling external services deployed in **different** Akka projects, or any other external HTTP server, is also done with the `HttpClientProvider`. Instead of a service name, the protocol and full server name are used when calling `httpClientFor`. For example, `https://example.com` or `http://example.com`.
[CallExternalServiceEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/callanotherservice/CallExternalServiceEndpoint.java)
```java
package com.example.callanotherservice;

import akka.javasdk.annotations.Acl;
import akka.javasdk.annotations.http.Get;
import akka.javasdk.annotations.http.HttpEndpoint;
import akka.javasdk.http.HttpClient;
import akka.javasdk.http.HttpClientProvider;
import akka.javasdk.http.StrictResponse;
import java.util.List;
import java.util.stream.Collectors;

@HttpEndpoint
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.ALL))
public class CallExternalServiceEndpoint {

  private final HttpClient httpClient;

  public record PeopleInSpace(List<Astronaut> people, int number, String message) {}

  public record Astronaut(String craft, String name) {}

  public record AstronautsResponse(List<String> astronautNames) {}

  public CallExternalServiceEndpoint(HttpClientProvider httpClient) { // (1)
    this.httpClient = httpClient.httpClientFor("http://api.open-notify.org"); // (2)
  }

  @Get("/iss-astronauts")
  public AstronautsResponse issAstronauts() {
    StrictResponse<PeopleInSpace> peopleInSpaceResponse = httpClient
      .GET("/astros.json") // (3)
      .responseBodyAs(PeopleInSpace.class) // (4)
      .invoke();

    var astronautNames = peopleInSpaceResponse
      .body()
      .people.stream() // (5)
      .filter(astronaut -> astronaut.craft.equals("ISS"))
      .map(astronaut -> astronaut.name)
      .collect(Collectors.toList());
    return new AstronautsResponse(astronautNames); // (6)
  }
}
```

| **1** | Accept an `HttpClientProvider` parameter in the constructor. |
| **2** | Look up an `HttpClient` for a service using the `http` protocol and server name. |
| **3** | Issue a GET call to the path `/astros.json` on the server. |
| **4** | Specify a class to parse the response body into. |
| **5** | Once the call completes, handle the response. |
| **6** | Return an adapted result object which will be turned into a JSON response. |

### Calling Akka services over gRPC

The service is identified by the name it has been deployed with. Akka takes care of routing requests to the service and keeping the data safe by encrypting the connection and handling authentication for you.

In this sample we will implement a gRPC endpoint that delegates a call to a [gRPC endpoint](grpc-endpoints.html) of a customer registry service, deployed with the service name `customer-registry`.

The SDK provides `akka.javasdk.grpc.GrpcClientProvider`, which provides gRPC client instances for calling other services. To consume an external gRPC service, that service's protobuf descriptor must be added to the `src/proto` directory of the project. This triggers generation of a client interface and Java classes for all the message types used as requests and responses for methods in that service.

| | Since the service protobuf descriptors need to be shared between the provider service and the consuming service, one simple option is to copy the service descriptions to each service that needs them. It is also possible to use a shared library with the protobuf descriptors. |
In our delegating service implementation:

[DelegateCustomerGrpcEndpointImpl.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry-subscriber/src/main/java/customer/api/DelegateCustomerGrpcEndpointImpl.java)
```java
@GrpcEndpoint
public class DelegateCustomerGrpcEndpointImpl implements DelegateCustomerGrpcEndpoint {

  private final Logger log = LoggerFactory.getLogger(getClass());

  private CustomerGrpcEndpointClient customerService;

  public DelegateCustomerGrpcEndpointImpl(GrpcClientProvider clientProvider) { // (1)
    customerService = clientProvider.grpcClientFor(
      CustomerGrpcEndpointClient.class,
      "customer-registry"
    ); // (2)
  }

  @Override
  public CreateCustomerResponse createCustomer(CreateCustomerRequest in) {
    log.info("Delegating customer creation to upstream gRPC service: {}", in);
    if (in.getCustomerId().isEmpty()) throw new GrpcServiceException(
      Status.INVALID_ARGUMENT.augmentDescription("No id specified")
    );

    try {
      return customerService.createCustomer(in); // (3)
    } catch (Exception ex) {
      throw new RuntimeException("Delegate call to create upstream customer failed", ex);
    }
  }
}
```

| **1** | Accept a `GrpcClientProvider` parameter in the constructor. |
| **2** | Use the generated gRPC client interface for the service (`CustomerGrpcEndpointClient.class`) and the service name (`customer-registry`) to look up a client. |
| **3** | Use the client to call the other service. |

Since the called service and the `DelegateCustomerGrpcEndpoint` share the same request and response protocol, no further transformation of the request or response is needed here.

For dev mode and in tests, it is possible to provide a config override in `application.conf` like for external calls; however, when deployed, such configuration is ignored.

| | The gRPC client provider is only available for injection in the following types of components: HTTP Endpoints, gRPC Endpoints, Workflows, Consumers and Timed Actions. |

### Calling external gRPC services

Calling gRPC services deployed in **different** Akka projects, or any other external gRPC server, is also done with the `GrpcClientProvider`. Instead of a service name, the fully qualified DNS name of the service is used when calling `grpcClientFor`. For example, `hellogrpc.example.com`.

[CallExternalGrpcEndpointImpl.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/callanotherservice/CallExternalGrpcEndpointImpl.java)
```java
@GrpcEndpoint
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.ALL))
public class CallExternalGrpcEndpointImpl implements CallExternalGrpcEndpoint {

  private final ExampleGrpcEndpointClient external;

  public CallExternalGrpcEndpointImpl(GrpcClientProvider clientProvider) { // (1)
    external = clientProvider.grpcClientFor(
      ExampleGrpcEndpointClient.class,
      "hellogrpc.example.com"
    ); // (2)
  }

  @Override
  public HelloReply callExternalService(HelloRequest in) {
    return external.sayHello(in); // (3)
  }
}
```

| **1** | Accept a `GrpcClientProvider` parameter in the constructor. |
| **2** | Use the generated gRPC client interface for the service (`ExampleGrpcEndpointClient.class`) and the server name (`hellogrpc.example.com`) to look up a client. |
| **3** | Use the client to call the other service. |

Since the called service and the `CallExternalGrpcEndpoint` share the same request and response protocol, no further transformation of the request or response is needed here.
The service is expected to accept HTTPS connections and run on the standard HTTPS port (443). For calling a service on a nonstandard port, or one served unencrypted (not recommended), it is possible to define configuration overrides in `application.conf` (or `application-test.conf` specifically for tests):

[application.conf](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/resources/application.conf)
```json
akka.javasdk.grpc.client."hellogrpc.example.com" {
  # configure external call, to call back to self
  host = "localhost"
  port = 9000
  use-tls = false
}
```

# Errors and failures

The Akka SDK provides several mechanisms for dealing with validation errors and with things going wrong.

## Errors

The first line of defense is validation of the incoming data at the `Endpoint` level, already described in detail in the [request and response](http-endpoints.html#_advanced_http_requests_and_responses) section. This is basic request validation, which doesn't require domain state. It's better to handle it as soon as possible, since it reduces the load on the system: the logic can reject the request before it reaches the entity.

The next phase is domain validation errors: an incoming command doesn't fulfil the requirements, or the current state doesn't allow the command to be handled. Such errors can be signalled back to the client as an error effect using the `effects().error(message)` function.

[CounterEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterEntity.java)
```java
public Effect<Integer> increaseWithError(Integer value) {
  if (currentState() + value > 10000) {
    return effects().error("Increasing the counter above 10000 is blocked"); // (1)
  }
  return effects()
    .persist(new ValueIncreased(value, currentState() + value))
    .thenReply(identity());
}
```

| **1** | Return an error effect with a message if the validation fails. |

The `effects().error` is later transformed into a `CommandException`. The default behavior is to return an HTTP 400 error with the error message as the response body.

[CounterEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/api/CounterEndpoint.java)
```java
@Post("/{counterId}/increase-with-error/{value}")
public Integer increaseWithError(String counterId, Integer value) {
  return componentClient
    .forEventSourcedEntity(counterId)
    .method(CounterEntity::increaseWithError)
    .invoke(value); // (1)
}
```

| **1** | Calling the component method without additional exception handling. |

Calling such an endpoint with an invalid request will return:

```none
HTTP/1.1 400 Bad Request
Access-Control-Allow-Origin: *
Server: akka-http/10.6.3
Date: Wed, 25 Sep 2024 10:44:22 GMT
Content-Type: text/plain; charset=UTF-8
Content-Length: 28

Increasing counter above 10000 is blocked
```

For more fine-tuned control over the error handling it's possible to catch the `CommandException` and transform it into a proper HTTP error.
[CounterEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/api/CounterEndpoint.java)
```java
@Post("/{counterId}/increase-with-error-handling/{value}")
public HttpResponse increaseWithErrorHandling(String counterId, Integer value) {
  try {
    var result = componentClient
      .forEventSourcedEntity(counterId)
      .method(CounterEntity::increaseWithError)
      .invoke(value);
    return ok(result);
  } catch (CommandException e) { // (1)
    return badRequest("rejected: " + value);
  }
}
```

| **1** | Catching the `CommandException` and transforming it into a 400 Bad Request error. |

Remember that the [called component](component-and-service-calls.html#_akka_components) can be on a different node. Only the `CommandException` and its subtypes are serialized and sent over the network. The [Jackson](serialization.html) serialization is configured to ignore fields like the stack trace or cause from the Java `java.lang.Throwable` class. Other exceptions are not serializable and will be transformed into a generic HTTP 500 error.

Using the `effects().error(commandException)` method or simply throwing a `CommandException` will have the same effect.

It's possible to have more dedicated exceptions to signal different situations. For example, you can create a `CounterLimitExceededException` that extends `CommandException` and use it in the command handler.

[CounterEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/application/CounterEntity.java)
```java
public static class CounterLimitExceededException extends CommandException { // (1)

  private final Integer value;

  public CounterLimitExceededException(Integer value) {
    super("Increasing the counter above 10000 is blocked");
    this.value = value;
  }

  public Integer getValue() {
    return value;
  }
}

public Effect<Integer> increaseWithException(Integer value) {
  if (currentState() + value > 10000) {
    throw new CounterLimitExceededException(value); // (2)
  }
  return effects()
    .persist(new ValueIncreased(value, currentState() + value))
    .thenReply(identity());
}
```

| **1** | Define a custom exception that extends `CommandException`. Inner classes must be static to be serializable. |
| **2** | Throw the `CounterLimitExceededException`. |

This way the client can match exceptions by type instead of by message.

[CounterEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-counter-brokers/src/main/java/counter/api/CounterEndpoint.java)
```java
@Post("/{counterId}/increase-with-exception/{value}")
public HttpResponse increaseWithException(String counterId, Integer value) {
  try {
    var result = componentClient
      .forEventSourcedEntity(counterId)
      .method(CounterEntity::increaseWithException)
      .invoke(value);
    return ok(result);
  } catch (CounterLimitExceededException e) { // (1)
    return badRequest("rejected: " + e.getValue());
  }
}
```

| **1** | Catching the `CounterLimitExceededException` and transforming it into a 400 Bad Request error. |

Custom exceptions are not the only option for dealing with errors in a more structured way. Another approach is to encode them as part of the reply protocol. Make sure that you are familiar with the Jackson serialization library and how to use it with sealed interfaces and generic types; see [@JsonSubTypes](https://javadoc.io/doc/com.fasterxml.jackson.core/jackson-annotations/2.19.1/com/fasterxml/jackson/annotation/JsonSubTypes.html) for details.
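As a sketch of that alternative (the names below are hypothetical, and the Jackson annotations required for the sealed hierarchy are omitted for brevity), the command handler replies with a result type instead of throwing:

```java
// Hypothetical reply protocol: both outcomes are ordinary serializable replies,
// so the caller can switch over them exhaustively instead of catching exceptions.
public sealed interface IncreaseResult {
  record Increased(Integer newValue) implements IncreaseResult {}
  record LimitExceeded(Integer rejectedValue) implements IncreaseResult {}
}

public Effect<IncreaseResult> increaseWithResult(Integer value) {
  if (currentState() + value > 10000) {
    // validation failure is a normal reply, not an error effect
    return effects().reply(new IncreaseResult.LimitExceeded(value));
  }
  return effects()
    .persist(new ValueIncreased(value, currentState() + value))
    .thenReply(IncreaseResult.Increased::new);
}
```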
## Failures

All unexpected exceptions (those that don't extend `CommandException`) thrown by user code are transformed into an HTTP 500 error. When running the service locally in dev mode, a stack trace will be part of the HTTP response. In production, this information is hidden, so as not to leak internal details about the service to a client. The client will receive a non-descriptive message with a correlation ID, like below.

```none
Unexpected error [2c74bdfb-3130-464c-8852-cf9c3c2180ad]
```

That same correlation ID `2c74bdfb-3130-464c-8852-cf9c3c2180ad` is included in the log entry for the error as an MDC value with the key `correlationID`. This makes it possible to find the specific error in the logs using `akka logs` or by querying your configured logging backend for the service.

# Setup and configuration

Akka provides several built-in features for dependency injection, data serialization, error handling, access control, authentication, and local development configuration, making development more convenient and efficient.

- [Setup and Dependency Injection](../setup-and-dependency-injection.html)
- [Serialization](../serialization.html)
- [Errors and Failures](../errors-and-failures.html)
- [Access Control](../access-control.html)
- [Authentication with JWTs](../auth-with-jwts.html)
- [Running Locally](../running-locally.html)
- [Configuring AI Agent Model Providers](../model-provider-details.html)
- [Sanitization of sensitive data](../sanitization.html)

# Setup and dependency injection

## Service lifecycle

It is possible to define logic that runs on service instance start up. This is done by creating a class implementing `akka.javasdk.ServiceSetup` and annotating it with `akka.javasdk.annotations.Setup`. Only one such class may exist in the same service.

[Bootstrap.java](https://github.com/akka/akka-sdk/blob/main/samples/spring-dependency-injection/src/main/java/com/example/Bootstrap.java)
```java
@Setup // (1)
public class Bootstrap implements ServiceSetup {

  private final Logger logger = LoggerFactory.getLogger(getClass());
  private final ComponentClient componentClient;

  public Bootstrap(ComponentClient componentClient) { // (2)
    this.componentClient = componentClient;
  }

  @Override
  public void onStartup() { // (3)
    logger.info("Service starting up");
    var result = componentClient.forEventSourcedEntity("123").method(Counter::get).invoke();
    logger.info("Initial value for entity 123 is [{}]", result);
  }
}
```

| **1** | One annotated implementation of `ServiceSetup`. |
| **2** | A few different objects can be dependency injected, see below. |
| **3** | `onStartup` is invoked at service start, but before the service is completely started up. |

It is important to remember that an Akka service consists of one to many distributed instances that can be restarted individually and independently, for example during a rolling upgrade. Each instance starting up will invoke `onStartup`, even if other instances have run it before.
## Disabling components

You can use `ServiceSetup` to disable components by overriding `disabledComponents` and returning a set of component classes to disable.

[Bootstrap.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/Bootstrap.java)
```java
@Setup
public class Bootstrap implements ServiceSetup {

  private final Config appConfig;

  public Bootstrap(Config appConfig) {
    this.appConfig = appConfig;
  }

  @Override
  public Set<Class<?>> disabledComponents() { // (1)
    if (appConfig.getString("my-app.environment").equals("prod")) {
      return Set.of(MyComponent.class); // (2)
    } else {
      return Set.of(); // (2)
    }
  }
}
```

| **1** | Override `disabledComponents`. |
| **2** | Provide a set of component classes to disable depending on the configuration. |

## Dependency injection

The Akka SDK provides injection of types related to capabilities the SDK provides to components. Injection is done as constructor parameters for the component implementation class.

The following types can be injected in Service Setup, HTTP Endpoints, gRPC Endpoints, Agents, Consumers, Timed Actions, and Workflows:

| Injectable class | Description |
| --- | --- |
| `akka.javasdk.client.ComponentClient` | For interaction between components, see [Component and service calls](component-and-service-calls.html) |
| `akka.javasdk.agent.AgentRegistry` | Contains information about all agents, see [Implementing agents](agents.html) |
| `akka.javasdk.http.HttpClientProvider` | For creating clients to make calls between Akka services and also to other HTTP servers, see [Component and service calls](component-and-service-calls.html) |
| `akka.javasdk.grpc.GrpcClientProvider` | For creating clients to make calls between Akka services and also to other gRPC servers, see [Component and service calls](component-and-service-calls.html) |
| `akka.javasdk.timer.TimerScheduler` | For scheduling timed actions, see [Timers](timed-actions.html) |
| `akka.stream.Materializer` | Used for running Akka streams |
| `com.typesafe.config.Config` | Access the user-defined configuration picked up from `application.conf` |
| `akka.javasdk.Retries` | Utility for retrying calls |
| `java.util.concurrent.Executor` | An executor which runs each task in a virtual thread, and is safe to use for blocking async work, for example with `CompletableFuture.supplyAsync(() -> blocking, executor)` |

The `akka.javasdk.Sanitizer` can be injected in all component types, see [Sanitization](sanitization.html).

Furthermore, the following component-specific types can also be injected:

| Component Type | Injectable classes |
| --- | --- |
| Agent | `akka.javasdk.agent.AgentContext` for access to the session id that the agent participates in |
| Endpoint | `akka.javasdk.http.RequestContext` with access to request-related things |
| Workflow | `akka.javasdk.workflow.WorkflowContext` for access to the workflow id |
| Event Sourced Entity | `akka.javasdk.eventsourcedentity.EventSourcedEntityContext` for access to the entity id |
| Key Value Entity | `akka.javasdk.keyvalueentity.KeyValueEntityContext` for access to the entity id |

## Custom dependency injection

In addition to the predefined objects, a service can also provide its own objects for injection. Any unknown types in component constructor parameter lists will be looked up using a `DependencyProvider`.

Providing custom objects for injection is done by implementing a service setup class with an overridden `createDependencyProvider` that returns a custom instance of `akka.javasdk.DependencyProvider`.
A single instance of the provider is used for the entire service instance. Note that the objects returned from a custom `DependencyProvider` must either be a new instance for every call to the dependency provider or be thread safe, since they will be shared by any component instance accepting them, potentially each running in parallel. This is best done by using immutable objects, which is completely safe.

| | Injecting shared objects that use regular JVM concurrency primitives such as locks can easily block individual component instances from running in parallel and cause throughput issues or, even worse, deadlocks, so this should be avoided. |

The implementation can be pure Java without any dependencies:

[Bootstrap.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/main/java/com/example/Bootstrap.java)
```java
@Setup
public class Bootstrap implements ServiceSetup {

  private final Config appConfig;

  public Bootstrap(Config appConfig) {
    this.appConfig = appConfig;
  }

  @Override
  public DependencyProvider createDependencyProvider() { // (1)
    final var myAppSettings = new MyAppSettings(
      appConfig.getBoolean("my-app.some-feature-flag")
    ); // (2)

    return new DependencyProvider() { // (3)
      @Override
      public <T> T getDependency(Class<T> clazz) {
        if (clazz == MyAppSettings.class) {
          return (T) myAppSettings;
        } else {
          throw new RuntimeException("No such dependency found: " + clazz);
        }
      }
    };
  }
}
```

| **1** | Override `createDependencyProvider`. |
| **2** | Create an object for injection, in this case an immutable settings class built from config defined in the `application.conf` file of the service. |
| **3** | Return an implementation of `DependencyProvider` that will return the instance if called with its class. |

It is now possible to declare a constructor parameter in any component accepting `MyAppSettings`. The SDK will inject the instance provided by the `DependencyProvider`.

Or make use of an existing dependency injection framework, like this example leveraging Spring:

[Bootstrap.java](https://github.com/akka/akka-sdk/blob/main/samples/spring-dependency-injection/src/main/java/com/example/Bootstrap.java)
```java
public class Bootstrap implements ServiceSetup {

  @Override
  public DependencyProvider createDependencyProvider() {
    try {
      AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(); // (1)
      ResourcePropertySource resourcePropertySource = new ResourcePropertySource(
        new ClassPathResource("application.properties")
      );
      context.getEnvironment().getPropertySources().addFirst(resourcePropertySource);
      context.registerBean(ComponentClient.class, () -> componentClient);
      context.scan("com.example");
      context.refresh();
      return context::getBean; // (2)
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
```

| **1** | Set up a Spring `AnnotationConfigApplicationContext`. |
| **2** | `DependencyProvider` is a SAM (single abstract method) type with the signature `Class<T> -> T`; the method reference `AnnotationConfigApplicationContext#getBean` matches it. |

## Custom dependency injection in tests

The TestKit allows providing a custom `DependencyProvider` through `TestKit.Settings#withDependencyProvider(provider)` so that mock instances of dependencies can be used in tests.
[MyIntegrationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/doc-snippets/src/test/java/com/example/MyIntegrationTest.java)
```java
public class MyIntegrationTest extends TestKitSupport {

  private static final DependencyProvider mockDependencyProvider = new DependencyProvider() { // (1)
    @SuppressWarnings("unchecked")
    @Override
    public <T> T getDependency(Class<T> clazz) {
      if (clazz.equals(MyAppSettings.class)) {
        return (T) new MyAppSettings(true);
      } else {
        throw new IllegalArgumentException("Unknown dependency type: " + clazz);
      }
    }
  };

  @Override
  protected TestKit.Settings testKitSettings() {
    return TestKit.Settings.DEFAULT.withDependencyProvider(mockDependencyProvider); // (2)
  }
}
```

| **1** | Implement a test-specific `DependencyProvider`. |
| **2** | Configure the TestKit to use it. |

Any component injection happening during the test will now use the custom `DependencyProvider`. The test-specific `DependencyProvider` must be able to provide all custom dependencies used by all components that the test interacts with.

# Serialization

## Jackson serialization

You need to make the messages, events, or the state of Akka components serializable with [Jackson](https://github.com/FasterXML/jackson). The same is true for inputs and outputs of HTTP Endpoints. There are two ways to do this.

1. If you are using Java [records](https://openjdk.org/jeps/395) then no annotation is needed. It just works. It's as simple as using `record` instead of `class`. Akka leverages [Jackson](https://github.com/FasterXML/) under the hood and makes these records serializable for you.
2. If you are using Java `class` then you need to annotate them with the [proper Jackson annotations](https://github.com/FasterXML/jackson-annotations#usage-general).

Akka uses a predefined `Jackson` configuration for serialization. Use the `JsonSupport` utility to update the `ObjectMapper` with your custom requirements. To minimize the number of `Jackson` annotations, Java classes are compiled with the `-parameters` flag.

[Bootstrap.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/Bootstrap.java)
```java
@Setup
public class Bootstrap implements ServiceSetup {

  @Override
  public void onStartup() {
    JsonSupport.getObjectMapper().configure(FAIL_ON_NULL_CREATOR_PROPERTIES, true); // (1)
  }
}
```

| **1** | Sets custom `ObjectMapper` configuration. |

## Type name

It's **highly recommended** to add a `@TypeName` annotation to all persistent classes: entity states, events, and Workflow step inputs/results. Information about the type, persisted together with the JSON payload, is used to deserialize the payload and to route it to the appropriate `Subscription` or `View` handler. By default, a FQCN is used, which requires extra attention in case of renaming or repackaging. Therefore, we recommend using a logical type name to simplify refactoring tasks. Migration from the old name is also possible, see [renaming class](#_renaming_class).

## Schema evolution

When using Event Sourcing, but also for rolling updates, schema evolution becomes an important aspect of your application development. A production-ready solution should be able to update any persisted models.
The requirements, as well as our own understanding of the business domain, may (and will) change over time.

### Removing a field

Removing a field can be done without any migration code. The Jackson serializer will ignore properties that do not exist in the class.

### Adding an optional field

Adding an optional field can be done without any migration code. The default value will be `Optional.empty` for fields of `Optional` type, or `null` for fields that are not wrapped in an `Optional`.

Old class:

[CustomerEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/domain/CustomerEvent.java)
```java
record NameChanged(String newName) implements CustomerEvent {}
```

New class with optional `oldName` and nullable `reason`:

[CustomerEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/schemaevolution/CustomerEvent.java)
```java
record NameChanged(String newName, Optional<String> oldName, String reason)
  implements CustomerEvent {}
```

### Adding a mandatory field

Let's say we want to have a mandatory `reason` field, always set to some (non-null) value. One solution could be to override the constructor, but with more complex and nested types, this might quickly become a hard-to-follow solution.

Another approach is to use the `JsonMigration` extension, which allows you to create complex migration logic based on the payload version number.

[NameChangedMigration.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/schemaevolution/NameChangedMigration.java)
```java
public class NameChangedMigration extends JsonMigration { // (1)

  @Override
  public int currentVersion() {
    return 1; // (2)
  }

  @Override
  public JsonNode transform(int fromVersion, JsonNode json) {
    if (fromVersion < 1) { // (3)
      ObjectNode objectNode = ((ObjectNode) json);
      objectNode.set("reason", TextNode.valueOf("default reason")); // (4)
    }
    return json; // (5)
  }
}
```

| **1** | The migration must extend the `JsonMigration` class. |
| **2** | Sets the current version number. The first version, when no migration was used, is always 0. Increase this version number whenever you perform a change that is not backwards compatible without migration code. |
| **3** | Implements the transformation of the old JSON structure to the new JSON structure. |
| **4** | The `JsonNode` is mutable, so you can add and remove fields, or change values. Note that you have to cast to specific subclasses such as `ObjectNode` and `ArrayNode` to get access to mutators. |
| **5** | Returns the updated JSON matching the new class structure. |

The migration class must be linked to the updated model with the `@Migration` annotation.

[CustomerEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/schemaevolution/CustomerEvent.java)
```java
@Migration(NameChangedMigration.class) // (1)
record NameChanged(String newName, Optional<String> oldName, String reason)
  implements CustomerEvent {}
```

| **1** | Links the migration implementation with the updated event. |

### Renaming a field

Renaming a field is a very similar migration.
### Renaming a field

Renaming a field is a very similar migration.

Old class:

[CustomerEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/domain/CustomerEvent.java)

```java
record AddressChanged(Address address) implements CustomerEvent {}
```

New class:

[CustomerEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/schemaevolution/CustomerEvent.java)

```java
@Migration(AddressChangedMigration.class)
record AddressChanged(Address newAddress) implements CustomerEvent {}
```

The migration implementation:

[AddressChangedMigration.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/schemaevolution/AddressChangedMigration.java)

```java
public class AddressChangedMigration extends JsonMigration {

  @Override
  public int currentVersion() {
    return 1;
  }

  @Override
  public JsonNode transform(int fromVersion, JsonNode json) {
    if (fromVersion < 1) {
      ObjectNode objectNode = ((ObjectNode) json);
      JsonNode oldField = json.get("address"); // (1)
      objectNode.set("newAddress", oldField); // (2)
      objectNode.remove("address"); // (3)
    }
    return json;
  }
}
```

| **1** | Finds the old `address` field. |
| **2** | Updates the JSON tree with the `newAddress` field name. |
| **3** | Removes the old field. |

### Changing the structure

Old class:

[CustomerEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/schemaevolution/CustomerEvent.java)

```java
record CustomerCreatedOld(String email, String name, String street, String city) implements CustomerEvent {}
```

New class with the `Address` type:

[CustomerEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/domain/CustomerEvent.java)

```java
@Migration(CustomerCreatedMigration.class)
record CustomerCreated(String email, String name, Address address) implements CustomerEvent {}
```

The migration implementation:

[CustomerCreatedMigration.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/main/java/customer/domain/CustomerCreatedMigration.java)

```java
public class CustomerCreatedMigration extends JsonMigration {

  @Override
  public int currentVersion() {
    return 1;
  }

  @Override
  public JsonNode transform(int fromVersion, JsonNode json) {
    if (fromVersion == 0) {
      ObjectNode root = ((ObjectNode) json);
      ObjectNode address = root.with("address"); // (1)
      address.set("street", root.get("street"));
      address.set("city", root.get("city"));
      root.remove("city");
      root.remove("street");
    }
    return json;
  }
}
```

| **1** | Creates a new nested JSON object with the data from the old schema. |

### Renaming class

Renaming the class doesn’t require any additional work when the `@TypeName` annotation is used. For other cases, the `JsonMigration` implementation can specify all old class names.

[AddressChangedMigration.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/schemaevolution/AddressChangedMigration.java)

```java
public class AddressChangedMigration extends JsonMigration {

  @Override
  public int currentVersion() {
    return 1;
  }

  @Override
  public List<String> supportedClassNames() {
    return List.of("customer.domain.CustomerEvent$OldAddressChanged"); // (1)
  }
}
```

| **1** | Specifies the old event name. |

### Testing

It’s highly recommended to cover all schema changes with unit tests.
In most cases it won’t be possible to reuse the same class for serialization and deserialization, since the model is different from version 0 to version N. One solution could be to create a byte array snapshot of each version and save it to a file. To generate the snapshot, use the `SerializationTestkit` utility.

[CustomerEventSerializationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/CustomerEventSerializationTest.java)

```java
byte[] serialized = SerializationTestkit.serialize(
  new CustomerCreatedOld("bob@lightbend.com", "bob", "Wall Street", "New York")
);
var tmpDir = Files.createTempFile("customer-created-old", ".json");
// save serialized to a file and remove `CustomerCreatedOld`
Files.write(tmpDir.toAbsolutePath(), serialized); // (1)
```

| **1** | Save old class payload to a file. |

Test example:

[CustomerEventSerializationTest.java](https://github.com/akka/akka-sdk/blob/main/samples/event-sourced-customer-registry/src/test/java/customer/domain/CustomerEventSerializationTest.java)

```java
@Test
public void shouldDeserializeCustomerCreated_V0() throws IOException {
  // load serialized bytes and deserialize with the new schema
  var serialized = getClass()
    .getResourceAsStream("/customer-created-old.json")
    .readAllBytes(); // (1)
  CustomerCreated deserialized = SerializationTestkit.deserialize(
    CustomerCreated.class,
    serialized
  ); // (2)
  assertEquals("Wall Street", deserialized.address().street());
  assertEquals("New York", deserialized.address().city());
}
```

| **1** | Loading old payload from a file. |
| **2** | Deserializing with the latest schema. |

[Setup and dependency injection](setup-and-dependency-injection.html) [Errors and failures](errors-and-failures.html)

- [Akka](../index.html)
- [Developing](index.html)
- [Integrations](integrations/index.html)
- [Retrieval-Augmented Generation (RAG)](rag.html)

# Retrieval-Augmented Generation (RAG)

An AI model only knows about information that it was trained with. Domain-specific knowledge or the latest documentation must be given as input to the AI model as additional context. It would be inefficient, costly, or not even possible to provide all content in the request to the AI. A technique to provide relevant content is called Retrieval-Augmented Generation (RAG). This is typically implemented by performing a semantic search on a vector database to find relevant content, which is then added to the user message in the request to the AI model.

Implementing RAG involves two main stages:

- Data Ingestion: The source documents (e.g., product documentation, articles) are loaded, split into manageable chunks, converted into numerical representations (embeddings) using an embedding model, and then stored in a vector database.
- Retrieval and Generation: When a user asks a question, the system first retrieves the most relevant chunks from the vector database and then passes them to the language model along with the original question to generate an answer.

## Using Langchain4J

There are many libraries that can be used for integrating with vector databases. Here is one concrete example using [Langchain4J](https://docs.langchain4j.dev/tutorials/rag).
[Knowledge.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/agent/application/Knowledge.java)

```java
import akka.ask.common.MongoDbUtils;
import akka.ask.common.OpenAiUtils;
import com.mongodb.client.MongoClient;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.rag.AugmentationRequest;
import dev.langchain4j.rag.DefaultRetrievalAugmentor;
import dev.langchain4j.rag.RetrievalAugmentor;
import dev.langchain4j.rag.content.injector.ContentInjector;
import dev.langchain4j.rag.content.injector.DefaultContentInjector;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.rag.query.Metadata;

public class Knowledge {

  private final RetrievalAugmentor retrievalAugmentor;
  private final ContentInjector contentInjector = new DefaultContentInjector();

  public Knowledge(MongoClient mongoClient) {
    var contentRetriever = EmbeddingStoreContentRetriever.builder() // (1)
      .embeddingStore(MongoDbUtils.embeddingStore(mongoClient))
      .embeddingModel(OpenAiUtils.embeddingModel())
      .maxResults(10)
      .minScore(0.1)
      .build();

    this.retrievalAugmentor = DefaultRetrievalAugmentor.builder() // (2)
      .contentRetriever(contentRetriever)
      .build();
  }

  public String addKnowledge(String question) {
    var chatMessage = new UserMessage(question); // (3)
    var metadata = Metadata.from(chatMessage, null, null);
    var augmentationRequest = new AugmentationRequest(chatMessage, metadata);

    var result = retrievalAugmentor.augment(augmentationRequest); // (4)

    UserMessage augmented = (UserMessage) contentInjector.inject(
      result.contents(),
      chatMessage
    ); // (5)
    return augmented.singleText();
  }
}
```

| **1** | We use the RAG support from Langchain4j, which consists of a `ContentRetriever` |
| **2** | and a `RetrievalAugmentor`. |
| **3** | Create a request from the user question. |
| **4** | Augment the request with relevant content. |
| **5** | Construct the new user message that includes the retrieved content. |

This `Knowledge` class would then be used in an agent to enrich the user message. The guide [AI agent that performs a RAG workflow](../getting-started/ask-akka-agent/index.html) illustrates how to create embeddings for vector databases, and how to add knowledge to fixed LLMs.

## Enrich the context from other components

Sometimes a similar retrieval-and-augment approach can be used without a vector database, especially when the required context is structured and can be fetched directly. This follows the same RAG pattern but targets specific data sources like entities or views. It may look like this:

ActivityAgent.java

```java
@Component(id = "activity-agent")
public class ActivityAgent extends Agent {

  public record Request(String userId, String message) {}

  private static final String SYSTEM_MESSAGE =
    """
    You are an activity agent. Your job is to suggest activities in the
    real world. Like for example, a team building activity, sports, an
    indoor or outdoor game, board games, a city trip, etc.
""".stripIndent(); private final ComponentClient componentClient; public ActivityAgent(ComponentClient componentClient) { // (1) this.componentClient = componentClient; } public Effect query(Request request) { var profile = componentClient // (2) .forEventSourcedEntity(request.userId) .method(UserProfileEntity::getProfile) .invoke(); var userMessage = request.message + "\nPreferences: " + profile.preferences; // (3) return effects().systemMessage(SYSTEM_MESSAGE).userMessage(userMessage).thenReply(); } } ``` | **1** | Inject the `ComponentClient` as a constructor parameter. | | **2** | Retrieve preferences from an entity. | | **3** | Enrich the user message with the preferences. | [Streaming](streaming.html) [Setup and configuration](setup-and-configuration/index.html) - [Akka](../index.html) - [Understanding](index.html) - [Project structure](architecture-model.html) # Project structure Akka encourages a project structure that separates your system’s Application Programming Interfaces (APIs), Akka component logic, and business logic into different directories. This structure supports a clear separation of concerns. It helps enable iterative development, testing in isolation, predictable packaging, and the ability to externalize configuration and static assets. ## Akka project structure A typical Akka project might have a layout like the following: ```txt src/ ├── main/ │ ├── java/acme/planningagent/ │ │ ├── api/ # External MCP, HTTP, gRPC endpoints │ │ ├── application/ # Akka components: Agents, Workflows, Entities, etc. │ │ └── domain/ # Business logic │ └── resources/ └── test/ ``` - The `api` directory exposes functionality to the outside world. This includes HTTP, gRPC, or MCP interfaces that forward requests to the application layer. - The `application` directory contains the building blocks provided by Akka, implemented by you. It includes components such as `Agent`, `Entity`, `View`, `Workflow`, `Timer`, and `Consumer`. - The `domain` directory holds plain Java classes that describe business rules and domain models. These are not tied to Akka or the runtime. Many use `record` to reduce boilerplate. You can test this logic without starting Akka or the runtime. This keeps the code focused and easier to maintain. - The `resources` directory includes configuration files and other static content. - The `test` directory contains unit and integration tests. Its structure mirrors `main` to make it easier to relate tests to the code they verify. Keeping these areas distinct can help improve clarity and long-term maintainability. It also encourages testing and runtime separation. ## Conceptual layers The structure above also reflects a conceptual separation of responsibilities. These responsibilities can be thought of as layers. Business logic is central, with supporting code around it to enable runtime behavior and external interaction. To maintain modularity: - Avoid exposing domain types directly to the outside world. - The API layer should not call the domain layer directly. - Inner layers should not depend on or be aware of outer layers. For more on coding structure and practical considerations, see the [coding guidelines](../sdk/ai-coding-assistant-guidelines.html). ### Domain This layer contains business rules and domain concepts. It does not depend on Akka or other runtime concerns. These are plain Java classes, often using `record` to reduce boilerplate. Examples include logic to enforce limits, compute totals, or apply rules. 
You can write unit tests for this layer without needing to start Akka or the runtime. The domain package remains isolated, focused, and easy to change.

### Application

This layer connects the domain model to the Akka runtime. It contains the components that handle persistence, coordination, and external interaction. These components follow event-driven patterns and manage state in a way that supports consistency and responsiveness. Most classes in this layer are based on Akka-provided building blocks. The domain logic remains in the inner layer. This layer makes it operational.

### API

This layer connects your service to the outside world. It defines endpoints that expose application functionality over HTTP or gRPC. Requests are handled here and passed on to the application layer. Endpoints use the `ComponentClient` to call Akka components in the application layer. This maintains separation of concerns and ensures runtime boundaries are respected. The API layer may also expose public event models over Kafka or other channels. External systems should interact with your service only through this layer. Access control and request validation also belong here. For HTTP-specific guidance, see [Designing HTTP Endpoints](../sdk/http-endpoints.html).

## Akka Services

![Services](../_images/service.png)

A *Project* may contain multiple *Services*. Projects can be deployed to one or more regions to achieve geographic resilience. For details, see [Multi-region operations](multi-region.html).

## Next steps

Once familiar with the project structure, continue with:

- [Akka Deployment Model](deployment-model.html)
- [Development process](development-process.html)
- [Memory models](state-model.html)
- [Development best practices](../sdk/dev-best-practices.html)

You may also begin development right away using the [Akka SDK](../sdk/index.html).

[Distributed systems](distributed-systems.html) [Deployment model](deployment-model.html)

- [Akka](../index.html)
- [Understanding](index.html)
- [Delegation with Effects](declarative-effects.html)

# Delegation with Effects

In Akka, the behavior of your services is decoupled from their execution. This decoupling allows Akka to determine how a service is executed without being constrained by how your system’s behavior is defined. Delegation frees you from worrying about distributed systems, persistence, elasticity, and networking. With Akka’s hosted services, we use delegation to swap in new, improved runtimes while your services are running, without recompilation or redeployment!

In Akka, you specify *what* the system should do, while the Akka runtime decides *how* it should be executed. For example, you define an agent by specifying the model it uses, its session memory, and the user prompt. This represents the *what*. The Akka runtime then determines the *how* by managing processes, virtual threads, persistence, and actor-based concurrency.

![Akka Agentic Platform](_images/component-effects.png)

Your services define the *what* using `Effects`, which are Application Programming Interfaces (APIs) provided by each Akka component. When you write a component method, you return an `Effect<…>` object that describes, in a declarative way, what you want Akka to do.
For example, when using Akka’s [Agent](../sdk/agents.html) component, you might return an `Effect` that tells the runtime to execute the agent with a system message, a user message, and then send the AI model’s response back to the requester:

```java
public Effect<String> query(String question) {
  return effects()
    .systemMessage("You are a helpful...")
    .userMessage(question)
    .thenReply();
}
```

Each component defines its own Effect API offering predefined operations tailored to the component’s specific semantics. For example, [Event Sourced Entities](../sdk/event-sourced-entities.html) provide an Effect for persisting events, while a [Workflow](../sdk/workflows.html) Effect defines both what needs to be executed and how to handle the result to transition to the next step.

This model simplifies development by removing the need to handle persistence, distribution, serialization, cache management, replication, and other distributed system concerns. Developers can focus on business logic (defining what needs to be persisted, how to respond to the caller, transitioning to different steps, rejecting commands, and more) while the Akka runtime takes care of the rest.

For example, with our Workflow component, at the end of each step you return an Effect that indicates how the Workflow should persist the call stack and which step it should transition to next.

```java
return effects()
  .updateState(currentState().withStatus(WITHDRAW_SUCCEED))
  .transitionTo("deposit", depositInput);
```

For details on the specific Effect types, refer to the documentation for each component.

| Component | Available Effects |
| --- | --- |
| [Agents](../sdk/agents.html#_effect_api) | Model, Memory, Tools, System and User Message, Reply, Error |
| [Event Sourced Entities](../sdk/event-sourced-entities.html#_effect_api) | Persist Events, Reply, Delete Entity, Error |
| [Key Value Entities](../sdk/key-value-entities.html#_effect_api) | Update State, Reply, Delete State, Error |
| [Views](../sdk/views.html#_effect_api) | Update State, Delete State, Ignore |
| [Workflows](../sdk/workflows.html#_effect_api) | Update State, Transition, Pause, End, Reject Command, Reply |
| [Timers](../sdk/timed-actions.html#_effect_api) | Confirm Scheduled Call, Error |
| [Consumers](../sdk/consuming-producing.html#_effect_api) | Publish to Topic, Confirm Message, Ignore |
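To make the table more concrete, here is a minimal sketch of an Event Sourced Entity command handler (the entity and its command are hypothetical) that uses the Persist Events, Reply, and Error effects:

```java
// Sketch (hypothetical entity): validate the command, persist an event,
// and reply with Done after the event has been applied to the state.
public Effect<Done> addItem(AddItem command) {
  if (command.quantity() <= 0) {
    return effects().error("Quantity must be positive");
  }
  return effects()
    .persist(new ShoppingCartEvent.ItemAdded(command.itemId(), command.quantity()))
    .thenReply(newState -> Done.getInstance());
}
```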
[Background execution](background-execution.html) [Memory models](state-model.html)

- [Akka](../index.html)
- [Understanding](index.html)
- [AI orchestration patterns](ms-agent-patterns.html)

# AI orchestration patterns

When we talk about AI orchestration, most of the time what we’re really referring to is *agent orchestration*; composing agentic applications where the agents are part of a larger unit trying to achieve some goal. In many AI frameworks and libraries, it’s common to see agents make calls directly to other agents. There are new protocols popping up every day to facilitate agent communications, like the [Agent-to-Agent protocol](https://github.com/a2aproject/A2A) (A2A), the [Agent Communication Protocol](https://agentcommunicationprotocol.dev/introduction/welcome) (ACP), and the [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) (MCP), which supports tools that might be involved in a larger agentic orchestration.

It can be tempting for agents to make calls directly to other agents. It certainly makes it easier to build sample applications. The problem is that sample applications that make design compromises aren’t really ready for production.

## Flexible composition is key

Let’s assume that we’re making an activity recommendation application that has multiple agents, some of which may be called concurrently while others are called sequentially. In this application, we have a weather agent and an activity agent. The weather agent retrieves the weather forecast for when the user wants to plan an activity, and then supplies that forecast to the activity agent.

In most samples, including the Microsoft pattern designs discussed later in this document, the weather agent will make a direct call to the activity agent. This may seem like an obvious choice, but it has lasting consequences for long-term development. If one agent is coded to directly call another agent, then that agent must always call that other agent. In the scenario just described, the weather agent will always call the activity agent, making it difficult (if not impossible) to reuse the weather agent in other flows within the same application.

The way Akka approaches this problem is through workflows and best practices in building composable systems. Agents built with Akka typically do exactly one thing, and ideally this one thing is small. These small building-block agents lend themselves well to being composed in different ways to support multiple patterns.

The key difference between direct agent communication and Akka’s approach is that in Akka the workflow makes the decision as to which agents are called, when they’re called, and whether they’re called concurrently. Agents then become small, easily managed pieces of code that handle discrete interactions with a model. The results of those interactions can be reused in many different ways by the guiding workflows.

The rest of this document goes through all of [Microsoft’s agent design patterns](https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns) and illustrates how each pattern can be accomplished using Akka’s components.

## Sequential orchestration

In the [sequential orchestration](https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns#sequential-orchestration) pattern, AI agents are assembled in linear chains (also frequently referred to as “pipelines”) in a well-known, fixed order at development time. Each agent in the chain passes the output of its work to the input of the next agent in the chain. Akka moves the responsibility of direct agent calls up to an orchestrating workflow, as shown in this adaptation of the original pattern:

![Image of sequential orchestration diagram](_images/ai_orch_sequential.jpg)

This pattern is used in step-by-step processing, where each step builds on the results of the previous step. Sequential orchestration is ideal for:

- Multi-step processes with clear linear dependencies and a workflow progression that doesn’t change between runs
- Data transformation pipelines (though if the only thing you’re doing is data transformation, agents and LLMs may not be necessary)
- Steps that cannot be executed concurrently

You should avoid sequential orchestration when:

- Steps are embarrassingly parallel. When it’s clear that these things can be run without downstream dependencies, you should instead use concurrent orchestration.
- You might need to branch or short-circuit the workflow based on results from individual steps
- Agent interaction is more like collaboration than sequential hand-offs

### Examples

- In [this example](../getting-started/planner-agent/team.html), this pattern is illustrated well with a workflow with deterministic steps (no dynamic planning) that calls agents in sequence

## Concurrent orchestration

[Concurrent orchestration](https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns#concurrent-orchestration) refers to running multiple AI agents simultaneously, working on the same task. The outputs of all concurrent agents are then collected and processed. This is ideal when you have a number of agentic tasks that do not rely on the outputs of others.

In Microsoft’s original diagram, there is an “Initiator and collector agent.” In the Akka version, we’ve simply replaced the collector agent with a workflow. Nearly everything else remains the same.

![Diagram showing concurrent workflow execution in agentic app](_images/ai_arch_concurrent.jpg)

Also note that in this diagram, the workflow is responsible for controlling `agent 1.1` and `agent 1.2`, which appear as sub-agents in Microsoft’s pattern. In keeping with the golden rule of agentic composition, Akka agents don’t spawn sub-agents: workflows decide which agents are needed, and the Akka runtime takes care of provisioning.

As you’ll see later in this document, Akka workflows can easily spawn concurrent agents or even sub-workflows as needed. This reinforces the notion that the only real difference between these patterns in Akka and elsewhere is that Akka separates the roles of orchestration and model communication, while most other frameworks choose to combine them.

More advanced concurrent orchestration could be implemented by a parent workflow spawning child workflows. In this pattern, each child workflow performs a multi-step task and then delivers the result back to the parent workflow as a message (i.e. a method call). The parent workflow pauses while waiting for the results from children. The results would be stored in the state, and when the parent workflow is satisfied with all of the collected results, it transitions to another step. In this kind of advanced scenario, Akka takes care of all the hard parts, like managing distributed state, distributed long-running timers, workflow resiliency, and much more.
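The parent side of that advanced pattern might look roughly like the following sketch (names and state helper methods are hypothetical; the exact Effect builder methods are described in the Workflow documentation):

```java
// Rough sketch: the parent workflow exposes a command handler that each child
// workflow calls when it finishes. Results accumulate in the parent's state;
// the parent only transitions onward once all children have reported.
public Effect<Done> childResult(ChildResult result) {
  var updated = currentState().withChildResult(result);
  if (updated.allChildrenDone()) {
    return effects()
      .updateState(updated)
      .transitionTo(ParentWorkflow::summarize) // continue with the next step
      .thenReply(Done.getInstance());
  } else {
    // keep waiting for the remaining children
    return effects().updateState(updated).pause().thenReply(Done.getInstance());
  }
}
```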
### Examples

The following code snippet shows how easily we can create a workflow step that calls two agents concurrently and gathers their results to be passed to the next step in the workflow:

ConcurrentWorkflow.java

```java
private StepEffect askWeatherAndTraffic() throws Exception {
  // call WeatherAgent and TrafficAgent in parallel
  var forecastCall = componentClient
    .forAgent()
    .inSession(sessionId())
    .method(WeatherAgent::query)
    .invokeAsync(currentState().userQuery); // (1)

  var trafficAlertsCall = componentClient
    .forAgent()
    .inSession(sessionId())
    .method(TrafficAgent::query)
    .invokeAsync(currentState().userQuery); // (1)

  // collect the results
  var forecast = forecastCall.toCompletableFuture().get(30, TimeUnit.SECONDS); // (2)
  var trafficAlerts = trafficAlertsCall.toCompletableFuture().get(30, TimeUnit.SECONDS); // (2)

  return stepEffects()
    .updateState(currentState().withWeather(forecast).withTraffic(trafficAlerts))
    .thenTransitionTo(ConcurrentWorkflow::suggestActivities); // (3)
}
```

| **1** | Call the `WeatherAgent` and `TrafficAgent` with `invokeAsync`, which immediately returns a `CompletionStage` of the future result. |
| **2** | Collect the results from the two agents. |
| **3** | Update the workflow state and continue with the next step. |

## Group chat orchestration

[Group chat orchestration](https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns#group-chat-orchestration) is when multiple agents collaborate to solve problems, make decisions, or judge work products. This collaboration between agents is facilitated by a shared discussion and a chat manager to coordinate all of the activities.

![Diagram illustrating orchestration of AI components in a workflow](_images/ai_orch_chat.jpg)

Calling this pattern a group “chat” can be misleading. We prefer a more generalized pattern name, such as shared sessions, where multiple agents have different levels of access to a common conversation history during the task. Group chat is just one of many possible implementations of this pattern.

In this example, the parent workflow calls a planner agent. The planner agent’s job is to interact with a model to determine an execution plan and then return this plan as some well-typed, structured data. This plan is then interpreted and followed by the parent workflow, which then delegates to agents and even child workflows. Throughout all of these agent and workflow interactions, a common shared session is used by all of the agents when building context for LLMs. This concept of a shared session in Akka is flexible enough that it can be applied to any of the patterns outlined in this document.

Group chat (session) orchestration is ideal for:

- Collaborative scenarios between agents, workflows, and sub-workflows
- Validation and quality control, where evaluation and quality checks can be done based on the session history

Group chat (session) orchestration should be avoided when:

- A sequential pipeline is enough to accomplish the goal
- Conversations grow rapidly without upper limits, which can tax applications and infrastructure, or there are extreme numbers of chat sessions within short periods of time
- There is no objective way to examine data and determine when a conversation is complete

### Examples

The main piece of functionality that makes group chat style patterns work is the ability for agents to share *sessions*. In Akka, session access is incredibly robust, allowing some agents read-only, others write-only, and yet others read-and-write access.
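For example, two agents can be joined to the same session simply by using the same session identifier with the component client (a sketch with hypothetical agents):

```java
// Sketch (hypothetical agents): both agents use the same session id, so they
// share conversation history through the session memory.
var draft = componentClient
  .forAgent()
  .inSession(sessionId)
  .method(WriterAgent::draft)
  .invoke(task);

var review = componentClient
  .forAgent()
  .inSession(sessionId) // same session: the reviewer sees the writer's interactions
  .method(ReviewerAgent::review)
  .invoke(draft);
```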
Here are just a few sample applications that make use of explicit sessions via the `inSession` function on the agent client builder:

- [ask-akka-agent](https://github.com/akka-samples/ask-akka-agent) - An agentic conversation sample
- [multi-agent](https://github.com/akka-samples/multi-agent) - Our classic multi-agent sample
- [trip-agent](https://github.com/akka-samples/trip-agent) - A trip planning agent

## Handoff orchestration

[Handoff orchestration](https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns#handoff-orchestration) refers to empowering agents to defer or hand off work to some other part of the process. In this pattern, the plan and tasks are not completely known until the initial input is received. Part of the dynamic planning process involves choosing which agents will be involved and which will not.

![Diagram illustrating handoff design patterns for AI agents](_images/ai_orch_handoff.jpg)

Akka views “handoffs” as a specialization of the standard orchestration workflow available within Akka, supporting dynamic, runtime planning and execution with dynamically chosen agents and tools. If the individual agents are responsible for deciding whether they’re going to handle a given input or instruction, then those agents can’t be recomposed for any other purpose.

With Akka, you can use a combination of workflows, optional sub-workflows, and specialized planning agents. Because agents can get structured responses from LLMs, it is possible to instruct the LLM to judge which agent might be best suited to handle a request. The planning response is then handled by the workflow, which calls the selected agent. Tool calls (e.g. MCP) can be used to add more deterministic logic to planning and routing when pure LLM-based judgment might not be predictable enough. This plan-and-execute loop can be extended by combining it with any of the other patterns outlined in this document.

### Examples

The PlannerAgent in [this example](../getting-started/planner-agent/dynamic-team.html) illustrates this pattern.

## Magentic orchestration

Magentic orchestration is a pattern for open-ended, complex problems that don’t have a predetermined plan. This dynamic planning aspect often overlaps with other patterns in this group. In this pattern, agents frequently have access to tools.

![Diagram illustrating magentic orchestration for AI applications](_images/ai_orch_magentic.jpg)

While the origin of the term `magentic` is open for debate, our best guess is that it’s a portmanteau of *“multi-agent agentic”* or *“multi-agentic”*.

When we use workflows as ubiquitous coordinators and allow agents to be small, purpose-built model interaction components, the need for individual, concrete patterns becomes less explicit. We don’t need to rewrite agents if we want to use them in different ways; we can either change how planning agents work or modify small bits of logic in the workflow.

### Examples

In our [Akka dynamic orchestration](../getting-started/planner-agent/dynamic-team.html) example, we illustrate a planning and evaluation loop, as well as re-planning. Additionally, the [Akka adaptive multi-agent sample](https://github.com/akka-samples/adaptive-multi-agent) implements the [MagenticOne pattern](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/magentic-one.html).
[Inter-agent communications](inter-agent-comms.html) [Access control lists](acls.html)

- [Akka](../index.html)
- [Understanding](index.html)
- [Inter-agent communications](inter-agent-comms.html)

# Inter-agent communications

Agents can collaborate in many ways, from orchestration, to direct communication, to other patterns like swarms and teams. Many of these communication patterns make for great demos, but don’t scale well or run safely and securely in production environments. Further, the emergence of [MCP](https://modelcontextprotocol.io/introduction), [A2A](https://a2aprotocol.ai/), and [ACP](https://agentcommunicationprotocol.dev/introduction/welcome) has made it (perhaps too) easy to enable direct communication between agentic services.

This has led to many techniques for agent-to-agent communications: some extremely valuable and some deceptively dangerous. While Akka makes it simple to implement any of these techniques, it is worth exploring each of these patterns and anti-patterns in detail. The following table provides a quick reference as to which communication paths are preferred patterns versus ones you should avoid. Additional details are provided thereafter.

| Paths | Description |
| --- | --- |
| Workflow → Agent(s) Workflow → A2A/ACP → Agents Workflow → Endpoint (API) | Workflows are engines of orchestration that can ensure the successful outcome of many invocations across agents that are local or remote. Workflows have built-in logic for handling retries and failure for long-running processes. ✅ **Adopt** |
| Agent → Broker Workflow → Broker Entity → Broker | Leverage an event-driven architecture that completely disconnects the awareness of one service that is producing information from those that consume it. However, for agent-to-agent communications where agents are part of a single system working towards a common goal, broker-based architectures eliminate group-based goals, shared state, and system-wide guardians. Event-driven architectures are promoted for their scale and autonomy, but create separation that makes it harder for autonomous systems to collaborate towards a common goal. ✅ **Adopt** |
| Broker → Consumer → Agent | Agents can be tasked to execute goals triggered by events that are sent through a broker. Provides independent intelligence and autonomy. Suffers from the same “unable to work towards a broader goal” problem. ✅ **Adopt** |
| Agent → Agent Endpoint (API) → Agent | Avoid leveraging protocols such as HTTP, JSON-RPC, gRPC, A2A, and ACP that enable direct communication and invocation of one agent by another. ⚠️ **Avoid** |
| Agent → MCP Server → Agent | It is best to avoid using an agent’s ability to call tools or services as a way to reach other agents. This creates indirect connections that reduce clarity and make the system harder to reason about. ⚠️ **Avoid** |
| Endpoint (API) → Endpoint (API) | Avoid having an endpoint call functions directly on another endpoint. This can create side effects that aren’t captured by any of the durable event flows and can bypass the resiliency and scalability provided by Workflows. ⚠️ **Avoid** |

## Why avoid certain inter-agent patterns

Well-architected agents are built to do one thing: accomplish one overarching goal. Most agent frameworks, including Akka, make it simple to create multi-agent systems where one agent invokes another directly. These approaches often violate the “do one thing” rule. Consider the example of a travel planning agent.
This agent makes travel recommendations and might utilize other agents, like an airline reservation agent, a weather agent, and a traffic agent, and may optionally make use of half a dozen more. On the surface this seems like a good idea, and that’s what makes it so dangerous. If the top-level planning agent makes calls directly to the other agents, then that supervision structure is permanently fixed that way. The planning agent cannot be composed and used by other workflows. The simple weather service can be used by any composition or arrangement of agents because it does one thing, and it does not call any other agents.

If one agent relies on communicating with other specific agents, it has more consequences than just enforcing a rigid hierarchy. Large language models (LLMs) are unpredictable and unreliable, as are network communication and practically everything else in enterprise and cloud software. Real-world scenarios need to deal with low-level failures at the protocol level and high-level failures that might require retries, goal adjustments, context queries, and more. Putting all of this logic inside an agent can make it nearly impossible to use, reuse, or compose in different flows. Agentic flows need to deal with things like timeouts and network partition events, especially when talking to relatively slow, token-streaming LLMs.

The real solution to agent-to-agent communication may seem extreme: *never allow direct agent-to-agent communications*. Agents need to do their one thing, and the work of composition, orchestration, and resilience should be left to a supervising workflow. How do we actually communicate with other agents and do it the right way?

## Workflow orchestration

The structure, sequence, scale, and resilience of multi-agent collaborations should be handled through the Akka [Workflow](https://doc.akka.io/sdk/workflows.html) component. See the [official documentation](https://doc.akka.io/sdk/workflows.html) for guides on general Workflow use as well as how to build agentic workflows with sequential, parallel, state-transition, and event-driven logic. Implement Workflows for long-running processes with an unknown stop time by incorporating retries and failure handling within the orchestration.

## Component client

There is an Akka component client for all Akka components, including Agents. While an Agent should not use this to talk to other Agents, Endpoints and Workflows will use this to trigger and orchestrate Agents. The component client only works to communicate with components within the same service. Outside the service you’ll need to use service-to-service eventing or standard protocols like HTTP, gRPC, or even MCP.
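For example, an endpoint might use the component client to kick off an orchestrating workflow rather than calling agents itself (a sketch with hypothetical workflow and request types):

```java
// Sketch (hypothetical names): the endpoint only triggers the workflow; the
// workflow decides which agents to call and in what order.
@Post("/trips/{tripId}/plan")
public Done plan(String tripId, PlanRequest request) {
  return componentClient
    .forWorkflow(tripId)
    .method(TripPlannerWorkflow::start)
    .invoke(request);
}
```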
## Message brokers

Communication between Akka components and external entities can be done with message brokers. The Akka SDK currently supports Google Cloud PubSub and Kafka. We typically use message brokers like this to provide input and output integration with other services on other infrastructure that might not be running Akka. Consumers and Workflows within the agent’s service can react to messages as well as publish them to external brokers.

## Endpoint client (intra-project)

If you need to communicate with an endpoint that is defined *within* the same project, then Akka has a component client to make this easy. This saves you from having to write code that creates a client that understands the protocol of that endpoint, whether it’s HTTP, gRPC, or MCP.

## HTTP or gRPC client (external)

If your orchestration workflow needs to communicate with external services that expose HTTP or gRPC APIs, then you can use Akka’s `HttpClientProvider` and `GrpcClientProvider` clients to communicate with those. As with all other communication methods, this shouldn’t be done from inside an agent, but from within consumers or orchestration workflows.

## MCP

MCP, the Model Context Protocol, is a protocol that allows agents to use LLMs to convert intent described in a prompt into structured tool calls. These tools can be used to do anything from manipulating a file system to querying customer-specific data from a private database. AI assistants modify their system prompts so that they include a list of tool calls exposed by configured MCP servers.

It’s very tempting to put calls to other agents inside these tool callbacks, but that too is dangerous. There is no automatic failover, retry, or timeout system for when an agent invokes a tool via MCP. This means any agent-to-agent communication done inside of an MCP tool is inherently unreliable and brittle. Be mindful about using MCP for agent-to-agent calls. Typically the LLM will identify the MCP servers to use from the prompt and context and will, if possible, gather the appropriate tool call parameters from that same prompt.

## A2A, ACP, etc

While MCP is a “tool” protocol, there are other protocols designed specifically for direct communication between agents, such as A2A, ACP, and others. These protocols are misleading in that they seem to promote direct point-to-point communication between agents. It might seem like trivial semantics, but when you see A2A and ACP, you should think of leveraging an Akka Workflow to make use of those protocols on behalf of the system, which is composed of two or more agents. It’s agent-to-agent communication through a responsible, reliable orchestrating partner defined as a `Workflow`.

[Building AI agents](ai-agents.html) [AI orchestration patterns](ms-agent-patterns.html)

- [Akka](../index.html)
- [Sanitization](sanitization.html)

# Sanitization

## Overview

When services process user-generated content, protecting personally identifiable information (PII) is both a legal requirement and a trust imperative. Regulations like GDPR, CCPA, and HIPAA mandate careful handling of personal data, while users expect their private information won’t be exposed to support staff, analysts, or third parties unnecessarily. Text anonymization (detecting and masking sensitive details like names, emails, and phone numbers) enables legitimate use cases such as logging, analytics, and model training while minimizing privacy risks. It reduces the attack surface in case of breaches and demonstrates a privacy-respecting approach to data handling.

Akka supports this through service-wide sanitization. Sanitization is disabled by default and can be selectively enabled through configuration. When enabled, sanitization is automatically applied to text that is:

- written to logs
- passed to agent models from agent requests
- passed to agent models from local tool or MCP tool output

Text matched by a sanitizer is replaced by a mask of `*` containing the same number of characters as the original matched string. For example, with a credit card sanitizer enabled, the following text:

I'm having problems using my credit card 5204 46025 0000 006

will be masked to:

I'm having problems using my credit card *******************

before being written to logs or passed to agent models.

### Ad hoc sanitization

Sanitization can also be programmatically applied to text in any component where it makes sense for a specific business case, for example before sending some text to a third-party API or before writing text into the state of an entity. This is done by injecting an `akka.javasdk.Sanitizer` in the component constructor and then using `akka.javasdk.Sanitizer#sanitize` on the text.
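A minimal sketch of such a component (the class and method names are hypothetical; only the `Sanitizer` injection and the `sanitize` call are from the API described above):

```java
import akka.javasdk.Sanitizer;

// Sketch: the Sanitizer is injected through the constructor and applied
// before the text leaves the service.
public class FeedbackForwarder {

  private final Sanitizer sanitizer;

  public FeedbackForwarder(Sanitizer sanitizer) {
    this.sanitizer = sanitizer;
  }

  public String prepareForThirdParty(String userText) {
    return sanitizer.sanitize(userText); // masked according to the service configuration
  }
}
```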
## Sanitizer types

There are two types of sanitizers available; it is possible to combine predefined and custom sanitizers in the same service.

### Predefined

A small set of common sanitizers is built into the Akka runtime and can be enabled by name in configuration:

| Name | Description |
| --- | --- |
| `EMAIL` | Email addresses |
| `PHONE` | International and national phone numbers |
| `CREDIT_CARD` | VISA, Mastercard, American Express, Diners, Discover, JCB, and generic credit card numbers |
| `IBAN` | International bank account numbers |
| `IP_ADDRESS` | IPv4 and IPv6 network addresses |

One or more of these are enabled in the service `application.conf` file like this:

```hocon
akka.javasdk.sanitization {
  predefined-sanitizers = ["IBAN", "CREDIT_CARD"]
}
```

### Custom

In many cases, more application- and business-domain-specific sanitizers are useful. Custom sanitizers allow defining regular expressions that match character sequences that should be masked.

Custom, application-specific sanitizers can be defined by adding a config block `akka.javasdk.sanitization.regex-sanitizers` with a name for each custom sanitizer, followed by a config block with a single `pattern` key whose value is a valid Java regular expression matching the text that should be masked. This example masks a hypothetical customer ID in the form S0123456789:

```hocon
akka.javasdk.sanitization.regex-sanitizers = {
  "CUSTOMER_IDS" = {
    pattern = "S\\d{10}"
  }
}
```

This would lead to text like:

Customer S0847362951 reported an issue with their order

being masked to:

Customer *********** reported an issue with their order

before being written to logs or passed to agent models.

## Performance considerations

Sanitization is applied to every log entry. In high-throughput applications, numerous sanitization rules or complex regular expressions may impact performance. Consider monitoring application performance and optimizing regex patterns if necessary.

## Testing sanitization

In tests, the sanitizer can be accessed directly via the `getSanitizer` method in `TestKit` or `TestKitSupport` to assert that expected texts are masked given the service sanitizer configuration.
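For example, a unit test might look like this sketch (assuming the service enables the `CREDIT_CARD` sanitizer, and that the test extends `TestKitSupport`):

```java
// Sketch: assert that the configured sanitizers mask the sensitive parts.
@Test
public void masksCreditCardNumbers() {
  var sanitizer = testKit.getSanitizer();
  var masked = sanitizer.sanitize("Problems with my card 5204 46025 0000 006");
  assertFalse(masked.contains("5204")); // the number is replaced by '*' characters
}
```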
- [Akka](../index.html)
- [Tutorials](index.html)
- [Hello world agent](author-your-first-service.html)

# Build your first agent

## Introduction

Akka is a framework, runtime, and memory store for autonomous, adaptive agentic systems. Akka is delivered as an SDK and platform that can execute on any infrastructure, anywhere.

![Akka Agentic Platform](../concepts/_images/akka-agentic-platform.png)

Developers create services built with *Akka components* that, when deployed, become agentic systems. Services can be tested locally or within a Continuous Integration/Continuous Delivery (CI/CD) practice using a *Testkit* that is available with each Akka component. Your services are compiled into a binary that includes the *Akka Runtime*, which enables your services to self-cluster for scale and resilience. *Akka clusters* are able to execute on any infrastructure, whether bare metal, Kubernetes, Docker, or edge.

Optionally, add *Akka Automated Operations* to gain multi-region failover, auto-elasticity, persistence oversight, multi-tenant services, and certificate rotation. *Akka Automated Operations* has two deployment options: our serverless cloud or your virtual private cloud (VPC).

| Product | Where To Start |
| --- | --- |
| Akka Orchestration | Akka provides a durable execution engine which automatically captures state at every step; in the event of failure, execution can pick up exactly where it left off. No lost progress, no orphaned processes, and no manual recovery required. You implement orchestration by creating an Akka service with the [Workflow](../sdk/workflows.html) component. |
| Akka Agents | Akka provides a development framework and runtime for agents. Agents can be stateful (durable memory included) or stateless. Agents can be invoked by other Akka components or run autonomously. Agents can transact with embedded tools, MCP servers, or any third-party data source with hundreds of Akka connectors. You implement an agent by creating an Akka service with the [Agent](../sdk/agents.html) component. You implement a tool in a regular Java class or embedded within the [Agent](../sdk/agents.html) component. You implement an MCP server with the [MCP Endpoint](../sdk/mcp-endpoints.html) component. You implement APIs that can front an agent with the [HTTP Endpoint](../sdk/http-endpoints.html) and [gRPC Endpoint](../sdk/grpc-endpoints.html) components. |
| Akka Memory | Akka provides an in-memory, durable store for stateful data. Stateful data can be scoped to a single agent, or made available system-wide. Stateful data is persisted in an embedded event store that tracks incremental state changes, which enables recovery of system state (resilience) to its last known modification. State is automatically sharded and rebalanced across Akka nodes running in a cluster to support elastic scaling to terabytes of memory. State can also be replicated across regions for failover and disaster recovery. Short-term (traced and episodic) memory is included transparently within the [Agent](../sdk/agents.html) component. You implement long-term memory with the [Event Sourced Entity](../sdk/event-sourced-entities.html) and [Key Value Entity](../sdk/key-value-entities.html) components. You implement propagation of cross-system state with the [View](../sdk/views.html) component. Views implement the Command Query Responsibility Segregation (CQRS) pattern. |
| Akka Streaming | Akka provides a continuous stream processing engine which can synthesize, aggregate, and analyze windows of data without receiving a terminating event. Data streams can be sourced from other Akka services, a third-party messaging broker, or an Akka Endpoint. Your services can either store intermediate processing results into *Akka Memory* or trigger commands to other Akka components that take action on the data. You produce events to a message broker with the [Producer](../sdk/consuming-producing.html#_event_producer) annotation. You create a continuous incoming stream of events with the [HTTP Endpoint](../sdk/http-endpoints.html) or the [gRPC Endpoint](../sdk/grpc-endpoints.html) components. You create a stream processor to analyze and act against a stream of data with the [Consumer](../sdk/consuming-producing.html) component. |

In this guide, you will:

- Set up your development environment.
- Clone a simple project that follows the [recommended structure](../concepts/architecture-model.html).
- Explore a basic AI Agent that acts as a creative greeter.
- Explore a basic HTTP Endpoint to interact with the agent.
- Add a request body to the Endpoint.
- Run your service locally.
- Explore the local console to observe your running service.

## Prerequisites

- Java 21; we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/)
- [Apache Maven](https://maven.apache.org/install.html) version 3.9 or later
- `curl` command-line tool
- Git
- [OpenAI API key](https://platform.openai.com/api-keys)

Akka has support for many AI providers, and this sample uses OpenAI. Sign up for free at [platform.openai.com/api-keys](https://platform.openai.com/api-keys).

## Clone the sample project

1. From a command line, use the Akka CLI to create a new project. See [installation instructions](quick-install-cli.html) if you haven’t installed the CLI yet.

```command
akka code init --name helloworld-agent --repo akka-samples/helloworld-agent.git
```

2. Navigate to the new project directory.
3. Open it in your preferred IDE / Editor.

Alternatively, you can clone the [GitHub Repository](https://github.com/akka-samples/helloworld-agent) directly:

```command
git clone https://github.com/akka-samples/helloworld-agent.git --depth 1
```

Then navigate to the new project directory and open it in your preferred IDE / Editor.

## Explore the Agent

An *Agent* interacts with an AI model and maintains contextual history in a session memory.

1. Open the `src/main/java/com/example/application/HelloWorldAgent.java` file.

The *Agent* is implemented with:

HelloWorldAgent.java

```java
@Component(id = "hello-world-agent")
public class HelloWorldAgent extends Agent {

  private static final String SYSTEM_MESSAGE =
    """
    You are a cheerful AI assistant with a passion for teaching greetings in new languages.

    Guidelines for your responses:
    - Start the response with a greeting in a specific language
    - Always append the language you're using in parenthesis in English. E.g.
"Hola (Spanish)" - The first greeting should be in English - In subsequent interactions the greeting should be in a different language than the ones used before - After the greeting phrase, add one or a few sentences in English - Try to relate the response to previous interactions to make it a meaningful conversation - Always respond with enthusiasm and warmth - Add a touch of humor or wordplay when appropriate - At the end, append a list of previous greetings """.stripIndent(); public Effect greet(String userGreeting) { return effects() .systemMessage(SYSTEM_MESSAGE) .userMessage(userGreeting) .thenReply(); } } ``` The system message provides system-level instructions to the AI model that defines its behavior and context. The system message acts as a foundational prompt that establishes the AI’s role, constraints, and operational parameters. It is processed before user messages and helps maintain consistent behavior throughout the interactions. The user message represents the specific query, instruction, or input that will be processed by the model to generate a response. ## Explore the HTTP Endpoint An *Endpoint* is a component that creates an externally accessible API. Endpoints are how you expose your services to the outside world. Endpoints can have different protocols, such as HTTP and gRPC. HTTP Endpoint components make it possible to conveniently define such APIs accepting and responding in JSON, or dropping down to lower level APIs for ultimate flexibility in what types of data is accepted and returned. 1. Open the `src/main/java/com/example/api/HelloWorldEndpoint.java` file. The *Endpoint* is implemented with: HelloWorldEndpoint.java ```java /** * This is a simple Akka Endpoint that uses an agent and LLM to generate * greetings in different languages. */ // Opened up for access from the public internet to make the service easy to try out. // For actual services meant for production this must be carefully considered, // and often set more limited @Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET)) @HttpEndpoint public class HelloWorldEndpoint { public record Request(String user, String text) {} private final ComponentClient componentClient; public HelloWorldEndpoint(ComponentClient componentClient) { this.componentClient = componentClient; } @Post("/hello") public String hello(Request request) { return componentClient .forAgent() .inSession(request.user) .method(HelloWorldAgent::greet) .invoke(request.text); } } ``` The `ComponentClient` is the way to call the agent or other components. The agent may participate in a session, which is used for the agent’s memory and can also be shared between multiple agents that are collaborating on the same goal. This Endpoint exposes an HTTP POST operation on `/hello`. You can also see that there is an *Access Control List* (ACL) on this Endpoint that allows all traffic from the Internet. Without this ACL the service would be unreachable, but you can be very expressive with these ACLs. ## Run locally Set your [OpenAI API key](https://platform.openai.com/api-keys) as an environment variable: Linux or macOS ```command export OPENAI_API_KEY=your-openai-api-key ``` Windows 10+ ```command set OPENAI_API_KEY=your-openai-api-key ``` Start your service locally: ```command mvn compile exec:java ``` Once successfully started, any defined Endpoints become available at `localhost:9000` and you will see an INFO message that the Akka Runtime has started. Your "Hello World" service is now running. 
In another shell, you can now use `curl` to send requests to this Endpoint.

```command
curl -i -XPOST --location "http://localhost:9000/hello" \
  --header "Content-Type: application/json" \
  --data '{"user": "alice", "text": "Hello, I am Alice"}'
```

It will reply with an AI-generated greeting, such as:

```none
Hello (English)! So great to meet you, Alice! I'm here to add some zest to our conversation
with greetings from around the world. Let's have some fun learning them together! Feel free
to ask about anything else too!

Previous greetings:
- Hello (English)
```

Try it a few more times with different text messages, for example:

```command
curl -i -XPOST --location "http://localhost:9000/hello" \
  --header "Content-Type: application/json" \
  --data '{"user": "alice", "text": "I live in New York"}'
```

The AI-generated reply might be:

```none
Bonjour (French)! Ah, New York, the city that never sleeps! It's almost like you need a coffee
the size of the Eiffel Tower to keep up with it. What's your favorite thing about living in
such a vibrant city?

Previous greetings:
- Hello (English)
- Bonjour (French)
```

```command
curl -i -XPOST --location "http://localhost:9000/hello" \
  --header "Content-Type: application/json" \
  --data '{"user": "alice", "text": "I like the botanical garden"}'
```

```none
¡Hola (Spanish)! The botanical garden in New York must be a refreshing oasis amidst the hustle
and bustle of the city. It's like taking a nature-themed vacation with just a subway ride! Do
you have a favorite plant or flower that you like to see there?

Previous greetings:
- Hello (English)
- Bonjour (French)
- ¡Hola (Spanish)
```

| | What just happened? The greetings will be in different languages each time. The AI model itself is stateless, so it wouldn’t know what languages it had used previously unless we included that information in each request to the model. Akka Agents automatically track context using **session memory**. In this case, the Agent is able to remember all the past messages and languages that were used in this session. Here we use the user `alice` as the session identifier. Try changing the user field in the HTTP request, and you will see that it starts over without previous knowledge about Alice or the languages already used. |

## Change the agent prompt

In this section, you will modify the instructions for the agent and see how its behavior changes. Open the `HelloWorldAgent.java` file and edit the `SYSTEM_MESSAGE`. For example, you can add the following to the guidelines:

HelloWorldAgent.java

```java
- Include some interesting facts
```

Restart the service and use curl again:

```command
curl -i -XPOST --location "http://localhost:9000/hello" \
  --header "Content-Type: application/json" \
  --data '{"user": "blackbeard", "text": "Ahoy there, matey! My name is Blackbeard"}'
```

Does it recognize the pirate greeting and include some facts about Blackbeard? Something like:

```none
Hello, Blackbeard! (English) What a fantastic name you have! It’s not every day I get to chat
with a legendary pirate. So tell me, do you sail the high seas or do you prefer to dock at the
local coffee shop for a pirate-themed chai latte?

Previous greetings:
1. Hello (English)

Did you know that the famous pirate Blackbeard has a fascinating history? He was known for his
fearsome appearance, often lighting slow-burning fuses in his beard during battles to create an
intimidating aura! Arrr!
```

## Explore the local console

The Akka local console is a web-based tool that comes bundled with the Akka CLI.
## Explore the local console

The Akka local console is a web-based tool that comes bundled with the Akka CLI. It provides a convenient way to view and interact with your running service.

### Install the Akka CLI

Starting the local console requires using the Akka CLI.

| | In case there is any trouble with installing the CLI when following these instructions, please check the [detailed CLI installation instructions](../operations/cli/installation.html). |

Linux

Download and install the latest version of `akka`:

```bash
curl -sL https://doc.akka.io/install-cli.sh | bash
```

macOS

The recommended approach to install `akka` on macOS is using [brew](https://brew.sh/):

```bash
brew install akka/brew/akka
```

Windows

1. Download the latest version of `akka` from [https://downloads.akka.io/latest/akka_windows_amd64.zip](https://downloads.akka.io/latest/akka_windows_amd64.zip)
2. Extract the zip file and move `akka.exe` to a location on your `%PATH%`.

Verify that the Akka CLI has been installed successfully by running the following to list all available commands:

```command
akka help
```

### Start the local console

1. Start the local console.

```bash
akka local console
```

```bash
⠸ Waiting for services to come online...

────────────────────────────────────────────────────────────
Local console: http://localhost:9889
(use Ctrl+C to quit)
```

2. Once the console and service are running, you will see a message like this:

```bash
───────────────────────────────────────────────────────────────────────
│ SERVICE          │ STATE   │ ADDRESS        │
───────────────────────────────────────────────────────────────────────
│ helloworld-agent │ Running │ localhost:9000 │
───────────────────────────────────────────────────────────────────────
Local console: http://localhost:9889
(use Ctrl+C to quit)
```

3. You can then access the local console in your browser at: [http://localhost:9889](http://localhost:9889/)

4. Navigate to your service's Endpoint, which will be available [here](http://localhost:9889/services/helloworld-agent/components/com.example.api.HelloWorldEndpoint).

![hello world local console](_images/hello-world-local-console.png)

You can also see the details of the session in the `SessionMemoryEntity`.

![hello world session memory](_images/hello-world-session-memory.png)

This is a simple Hello World service, so there isn't much to see here yet. However, as you build more complex services, the console will become a more valuable tool for monitoring and debugging.

## Next steps

Now that you have a basic service running, it's time to learn more about building real services in Akka.

- See [multi-agent planner](planner-agent/index.html) to build a more realistic application.
- [Deploy to akka.io](quick-deploy.html)

[Tutorials](index.html) [Multi-agent planner](planner-agent/index.html)

- [Akka](../../index.html)
- [Tutorials](../index.html)
- [Multi-agent planner](index.html)

# Build an AI multi-agent planner

| | **New to Akka? Start here:** Use the [Build your first agent](../author-your-first-service.html) guide to get a simple agentic service running locally and interact with it. |

This guide starts with creating an agent that suggests real-world activities. We will incorporate more components in separate parts of the guide, and at the end we will have a multi-agent system with dynamic planning and orchestration capabilities.

1. [Activity agent](activity.html) — An Agent (with session memory) that suggests real-world activities using an LLM.
2. [User preferences](preferences.html) — An Entity (long-term memory) to personalize the suggestions.
3. [Weather agent](weather.html) — A weather forecasting Agent that uses an external service as an agent tool.
4. [Orchestrate the agents](team.html) — A Workflow that coordinates long-running calls across the agents.
5. [List by user](list.html) — A View that creates a read-only projection (i.e. a query) of all activity suggestions for a user.
6. [Dynamic orchestration](dynamic-team.html) — An Agent that creates a dynamic plan using an LLM, and a Workflow that executes the plan.
7. [Evaluation on changes](eval.html) — A Consumer that streams user preference changes to trigger an Agent.

[Hello world agent](../author-your-first-service.html) [Activity agent](activity.html)

- [Akka](../../index.html)
- [Tutorials](../index.html)
- [RAG chat agent](index.html)

# Build a RAG chat agent

| | **New to Akka? Start here:** Use the [Build your first agent](../author-your-first-service.html) guide to get a simple agentic service running locally and interact with it. |

This tutorial walks through building a Retrieval-Augmented Generation (RAG) chat agent. We start with a simple agent that streams responses from a large language model (LLM), and add retrieval functionality in separate parts of the tutorial. By the end, we will have an agent that uses the latest Akka documentation as its knowledge base, accessible through a web UI.

1. [Creating the agent](the-agent.html) — A streaming Agent that answers questions using an LLM and session memory.
2. [Knowledge indexing with a workflow](indexer.html) — A Workflow that indexes local documentation into a vector database.
3. [Executing RAG queries](rag.html) — A helper class that performs RAG queries by combining vector search with the LLM.
4. [Adding UI endpoints](endpoints.html) — Endpoints that expose a UI, support multiple sessions, and allow users to query the system.

[Evaluation on changes](../planner-agent/eval.html) [Creating the agent](the-agent.html)

- [Akka](../../index.html)
- [Tutorials](../index.html)
- [RAG chat agent](index.html)
- [Knowledge indexing with a workflow](indexer.html)

# Knowledge indexing with a workflow

| | **New to Akka? Start here:** Use the [Build your first agent](../author-your-first-service.html) guide to get a simple agentic service running locally and interact with it. |

## Overview

The first step in building a RAG agent is *indexing*. Each time a user submits a query or prompt to the agent, the agent *retrieves* relevant documents by performing a semantic search on a vector database. Before we can perform that search, we need to populate the vector database with all the knowledge that we want to make available to the agent. Populating the vector database by creating embeddings is the *indexing* step.

In this guide we're going to use an Akka workflow to manage the indexing of a large number of documents as a long-running process.

## Prerequisites

- Java 21, we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/)
- [Apache Maven](https://maven.apache.org/install.html) version 3.9 or later
- `curl` command-line tool
- [OpenAI API key](https://platform.openai.com/api-keys)

You can either create a [MongoDB Atlas](https://www.mongodb.com/atlas) account or run MongoDB locally using Docker. We'll be using the vector indexing capability of this database for the retrieval portion of the RAG flow. A free account is enough to do all of the indexing necessary for this sample. Once you've created the account, make note of the secure connection string as you'll need it later.
If you choose to run a local instance, further instructions are provided in [Running the service](about:blank#_running_the_service).

If you are following along with each step rather than using the completed solution, then you'll need the code you wrote in the previous step.

## Unfamiliar with concepts like vectors, embeddings or RAG?

We recommend reviewing our [foundational explainer on AI concepts](../../concepts/ai-agents.html#_foundational_ai_concepts_video). It offers helpful background that will deepen your understanding of the technologies and patterns used throughout this tutorial.

## Updating the pom

We're going to use `langchain4j` for this sample, so add those dependencies to your Maven pom file. The full file should look like this when done:

[pom.xml](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/pom.xml)
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <parent>
    <groupId>io.akka</groupId>
    <artifactId>akka-javasdk-parent</artifactId>
    <version>3.5.6</version>
  </parent>

  <groupId>akka.ask</groupId>
  <artifactId>ask-akka</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>ask-akka</name>

  <properties>
    <langchain4j.version>1.5.0</langchain4j.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>dev.langchain4j</groupId>
      <artifactId>langchain4j-open-ai</artifactId>
      <version>${langchain4j.version}</version>
    </dependency>
    <dependency>
      <groupId>dev.langchain4j</groupId>
      <artifactId>langchain4j-mongodb-atlas</artifactId>
      <version>1.1.0-beta7</version>
    </dependency>
  </dependencies>
</project>
```

## Adding a workflow

In your code, add a new empty Java file at `src/main/java/akka/ask/indexer/application/RagIndexingWorkflow.java`. The imports section is large enough that we won't show it here (you can see it in the source code link).

Let's start with the outer shell of the workflow class (this won't compile yet as we haven't included the workflow definition).

[RagIndexingWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/indexer/application/RagIndexingWorkflow.java)
```java
@Component(id = "rag-indexing-workflow")
public class RagIndexingWorkflow extends Workflow<RagIndexingWorkflow.State> {

  private final Logger logger = LoggerFactory.getLogger(getClass());

  private final OpenAiEmbeddingModel embeddingModel;
  private final MongoDbEmbeddingStore embeddingStore;
  private final DocumentSplitter splitter;

  // metadata key used to store file name
  private final String srcKey = "src";

  public record State(List<Path> toProcess, List<Path> processed) { // (1)

    public static State of(List<Path> toProcess) {
      return new State(toProcess, new ArrayList<>());
    }

    public Optional<Path> head() { // (2)
      if (toProcess.isEmpty()) return Optional.empty();
      else return Optional.of(toProcess.getFirst());
    }

    public State headProcessed() {
      if (!toProcess.isEmpty()) {
        // moves the head element from toProcess to processed
        processed.add(toProcess.removeFirst());
      }
      return new State(toProcess, processed);
    }

    /**
     * @return true if workflow has one or more documents to process, false
     * otherwise.
     */
    public boolean hasFilesToProcess() {
      return !toProcess.isEmpty();
    }

    public int totalFiles() {
      return processed.size() + toProcess.size();
    }

    public int totalProcessed() {
      return processed.size();
    }
  }

  @Override
  public State emptyState() {
    return State.of(new ArrayList<>());
  }
}
```

| **1** | The workflow will maintain a list of files to process and a list of files already processed |
| **2** | We treat the list of files as a queue |

The workflow definition for the document indexer is surprisingly simple:

[RagIndexingWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/indexer/application/RagIndexingWorkflow.java)
```java
@Override
public WorkflowSettings settings() {
  return WorkflowSettings.builder()
    .defaultStepTimeout(ofMinutes(1))
    .build();
}

@StepName("processing-file")
private StepEffect processingFileStep() { // (1)
  if (currentState().hasFilesToProcess()) {
    indexFile(currentState().head().get());
  }

  if (currentState().hasFilesToProcess()) { // (2)
    var newState = currentState().headProcessed();
    logger.debug("Processed {}/{}", newState.totalProcessed(), newState.totalFiles());
    return stepEffects()
      .updateState(newState)
      .thenTransitionTo(RagIndexingWorkflow::processingFileStep); // (3)
  } else {
    return stepEffects().thenPause(); // (4)
  }
}
```

| **1** | Define the only step in the workflow, `processing-file` |
| **2** | Check if we have more work to do |
| **3** | If there is more work, transition to `processing-file` again |
| **4** | If there are no files pending, the workflow will *pause* |

Because this workflow only ever transitions to and from the same step, it might help to think of it as a *recursive* workflow. An interesting aspect of this workflow is that it never ends. If it runs out of files to process, then it simply pauses itself. We haven't coded it in this sample, but it would be fairly easy to add an endpoint that allowed a user to enqueue more files for the indexer and wake/unpause it.
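As a rough sketch of that idea (this is not in the sample; the command name and wiring are hypothetical), such a command handler could reuse the same building blocks the workflow already uses:

```java
// Hypothetical command handler, not part of the sample: enqueue more files
// and transition back to the processing step to wake the paused workflow.
public Effect<Done> enqueue(List<String> fileNames) {
  var toProcess = new ArrayList<>(currentState().toProcess());
  fileNames.stream().map(Path::of).forEach(toProcess::add);
  return effects()
    .updateState(new State(toProcess, currentState().processed()))
    .transitionTo(RagIndexingWorkflow::processingFileStep)
    .thenReply(done());
}
```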
The actual work of doing the indexing is in the `indexFile` function:

[RagIndexingWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/indexer/application/RagIndexingWorkflow.java)
```java
private void indexFile(Path path) {
  try (InputStream input = Files.newInputStream(path)) {
    // read file as input stream
    Document doc = new TextDocumentParser().parse(input);
    var docWithMetadata = new DefaultDocument(
      doc.text(),
      Metadata.metadata(srcKey, path.getFileName().toString())
    );

    var segments = splitter.split(docWithMetadata);
    logger.debug(
      "Created {} segments for document {}",
      segments.size(),
      path.getFileName()
    );

    segments.forEach(this::addSegment);
  } catch (BlankDocumentException e) {
    // some documents are blank, we need to skip them
  } catch (Exception e) {
    logger.error("Error reading file: {} - {}", path, e.getMessage());
  }
}
```

and the `addSegment` function, which calls `add` on the embedding store, committing the segment (aka *chunk*) to MongoDB Atlas:

[RagIndexingWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/indexer/application/RagIndexingWorkflow.java)
```java
private void addSegment(TextSegment seg) {
  var fileName = seg.metadata().getString(srcKey);
  var res = embeddingModel.embed(seg);
  logger.debug(
    "Segment embedded. Source file '{}'. Tokens usage: in {}, out {}",
    fileName,
    res.tokenUsage().inputTokenCount(),
    res.tokenUsage().outputTokenCount()
  );
  embeddingStore.add(res.content(), seg); // (1)
}
```

| **1** | Send the embedding segment to the vector database |

Everything that we've done so far has been completely asynchronous. When the workflow starts (shown below), it builds the list of pending documents by walking the documents directory and adding each markdown (`*.md`) file it finds. You can find all of these documents in the sample folder `src/main/resources/md-docs`.

[RagIndexingWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/indexer/application/RagIndexingWorkflow.java)
```java
public Effect<Done> start() {
  if (currentState().hasFilesToProcess()) {
    return effects().error("Workflow is currently processing documents");
  } else {
    List<Path> documents;
    var documentsDirectoryPath = getClass()
      .getClassLoader()
      .getResource("md-docs")
      .getPath();

    try (Stream<Path> paths = Files.walk(Paths.get(documentsDirectoryPath))) {
      documents = paths
        .filter(Files::isRegularFile)
        .filter(path -> path.toString().endsWith(".md"))
        .toList();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }

    return effects()
      .updateState(State.of(documents))
      .transitionTo(RagIndexingWorkflow::processingFileStep) // (1)
      .thenReply(done());
  }
}
```

| **1** | Transition to the file-processing step when the workflow starts |

## Injecting the MongoDB client

If you've been following along, then you might be wondering how we inject an `embeddingStore` field into this workflow. This field is of type `MongoDbEmbeddingStore`, and to create an instance of that we need to inject a `MongoClient` into the workflow's constructor:

[RagIndexingWorkflow.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/indexer/application/RagIndexingWorkflow.java)
```java
public RagIndexingWorkflow(MongoClient mongoClient) {
  this.embeddingModel = OpenAiUtils.embeddingModel();
  this.embeddingStore = MongoDbEmbeddingStore.builder()
    .fromClient(mongoClient)
    .databaseName("akka-docs")
    .collectionName("embeddings")
    .indexName("default")
    .createIndex(true)
    .build();

  this.splitter = new DocumentByCharacterSplitter(500, 50); // (1)
}
```

| **1** | Tweaking the parameters to the document splitter can affect the quality of semantic search results |

The API endpoint to start the indexer creates an instance of the workflow through the standard `ComponentClient` function `forWorkflow`. To make the `MongoClient` instance available, we can use a bootstrap class that uses Akka's `@Setup` annotation:

[Bootstrap.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/Bootstrap.java)
```java
@Setup
public class Bootstrap implements ServiceSetup {

  private Config config;

  public Bootstrap(Config config) {
    this.config = config;
  }

  @Override
  public DependencyProvider createDependencyProvider() {
    MongoClient mongoClient = MongoClients.create(config.getString("mongodb.uri"));

    return new DependencyProvider() {
      @Override
      public <T> T getDependency(Class<T> cls) {
        if (cls.equals(MongoClient.class)) {
          return (T) mongoClient;
        }
        return null;
      }
    };
  }
}
```

## Add an Endpoint

Add an HTTP endpoint to be able to start the indexing workflow.
[IndexerEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/indexer/api/IndexerEndpoint.java)
```java
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET))
@HttpEndpoint("/api/index")
public class IndexerEndpoint {

  private final Logger logger = LoggerFactory.getLogger(getClass());
  private final ComponentClient componentClient;

  public IndexerEndpoint(ComponentClient componentClient) {
    this.componentClient = componentClient;
  }

  @Post("/start")
  public HttpResponse startIndexation() {
    componentClient.forWorkflow("indexing").method(RagIndexingWorkflow::start).invoke();
    return HttpResponses.accepted();
  }

  @Post("/abort")
  public HttpResponse abortIndexation() {
    componentClient.forWorkflow("indexing").method(RagIndexingWorkflow::abort).invoke();
    return HttpResponses.accepted();
  }
}
```

## Running the service

As you'll see in the next step in this guide, we'll add the indexed knowledge to the agent. For now, we suggest that you play around with indexing and the kind of results you see in MongoDB. Parameters like the size of chunks can sometimes impact the reliability or quality of the semantic search results. There are also several other types of document splitters. Explore those and see how they impact the index; one possible variation is sketched below.
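For example, assuming the langchain4j version from the pom above, you could swap the character-based splitter in the workflow's constructor for the recursive splitter, which tries to split along paragraph and sentence boundaries before falling back to a hard character limit:

```java
// Sketch of a splitter experiment in RagIndexingWorkflow's constructor,
// using dev.langchain4j.data.document.splitter.DocumentSplitters.
// recursive(...) takes a max segment size and an overlap, both in characters,
// mirroring the (500, 50) arguments used above.
this.splitter = DocumentSplitters.recursive(500, 50);
```

Re-run the indexer after each change and compare the retrieved results side by side.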
Use the connection URL provided to you by MongoDB Atlas and set the `MONGODB_ATLAS_URI` environment variable to that connection string.

| | As an alternative, you can run MongoDB locally using Docker with `docker compose up -d`. See example [docker-compose.yml](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/docker-compose.yml). |

Start your service locally:

```command
mvn compile exec:java
```

In another shell, you can now use `curl` to start the indexing.

```command
curl -XPOST localhost:9000/api/index/start
```

This call will take an extract of the Akka SDK documentation and create a vectorized index in MongoDB. The documentation files are located in `src/main/resources/md-docs/`. You can also add your own documentation files to this directory.

## Next steps

Next we'll [add the indexed knowledge to the agent](rag.html) to be able to run meaningful queries against the *Ask Akka* AI assistant!

[Creating the agent](the-agent.html) [Executing RAG queries](rag.html)

- [Akka](../../index.html)
- [Tutorials](../index.html)
- [RAG chat agent](index.html)
- [Executing RAG queries](rag.html)

# Executing RAG queries

| | **New to Akka? Start here:** Use the [Build your first agent](../author-your-first-service.html) guide to get a simple agentic service running locally and interact with it. |

## Overview

In this step of the guide to building the *Ask Akka* application, you'll be creating a class that wraps the OpenAI API and the MongoDB client API. It's this class that will provide the abstraction for the rest of the application to use when making RAG queries. You'll use Akka's `@Setup` to configure the dependency injection for this class.

## Prerequisites

- Java 21, we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/)
- [Apache Maven](https://maven.apache.org/install.html) version 3.9 or later
- `curl` command-line tool
- [OpenAI API key](https://platform.openai.com/api-keys)

## Unfamiliar with concepts like vectors, embeddings or RAG?

We recommend reviewing our [foundational explainer on AI concepts](../../concepts/ai-agents.html#_foundational_ai_concepts_video). It offers helpful background that will deepen your understanding of the technologies and patterns used throughout this tutorial.

## Creating the Knowledge class

We're going to add a utility that will retrieve content from MongoDB that is related to the user's query. The following is the basic RAG-specific code that you can add to a new file `Knowledge.java` in `src/main/java/akka/ask/agent/application/`.

[Knowledge.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/agent/application/Knowledge.java)
```java
import akka.ask.common.MongoDbUtils;
import akka.ask.common.OpenAiUtils;
import com.mongodb.client.MongoClient;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.rag.AugmentationRequest;
import dev.langchain4j.rag.DefaultRetrievalAugmentor;
import dev.langchain4j.rag.RetrievalAugmentor;
import dev.langchain4j.rag.content.injector.ContentInjector;
import dev.langchain4j.rag.content.injector.DefaultContentInjector;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.rag.query.Metadata;

public class Knowledge {

  private final RetrievalAugmentor retrievalAugmentor;
  private final ContentInjector contentInjector = new DefaultContentInjector();

  public Knowledge(MongoClient mongoClient) {
    var contentRetriever = EmbeddingStoreContentRetriever.builder() // (1)
      .embeddingStore(MongoDbUtils.embeddingStore(mongoClient))
      .embeddingModel(OpenAiUtils.embeddingModel())
      .maxResults(10)
      .minScore(0.1)
      .build();

    this.retrievalAugmentor = DefaultRetrievalAugmentor.builder() // (2)
      .contentRetriever(contentRetriever)
      .build();
  }

  public String addKnowledge(String question) {
    var chatMessage = new UserMessage(question); // (3)
    var metadata = Metadata.from(chatMessage, null, null);
    var augmentationRequest = new AugmentationRequest(chatMessage, metadata);

    var result = retrievalAugmentor.augment(augmentationRequest); // (4)
    UserMessage augmented = (UserMessage) contentInjector.inject(
      result.contents(),
      chatMessage
    ); // (5)
    return augmented.singleText();
  }
}
```

| **1** | We use the RAG support from Langchain4j, which consists of a `ContentRetriever` |
| **2** | and a `RetrievalAugmentor`. |
| **3** | Create a request from the user question. |
| **4** | Augment the request with relevant content. |
| **5** | Construct the new user message that includes the retrieved content. |

## Use the knowledge in the agent

[AskAkkaAgent.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/agent/application/AskAkkaAgent.java)
```java
@Component(id = "ask-akka-agent", name = "Ask Akka", description = "Expert in Akka")
public class AskAkkaAgent extends Agent {

  private final Knowledge knowledge;

  private static final String SYSTEM_MESSAGE =
    """
    You are a very enthusiastic Akka representative who loves to help people!
    Given the following sections from the Akka SDK documentation, answer the question
    using only that information, outputted in markdown format.
    If you are unsure and the text is not explicitly written in the documentation, say:
    Sorry, I don't know how to help with that.
    """.stripIndent(); // (1)

  public AskAkkaAgent(Knowledge knowledge) { // (2)
    this.knowledge = knowledge;
  }

  public StreamEffect ask(String question) {
    var enrichedQuestion = knowledge.addKnowledge(question); // (3)
    return streamEffects()
      .systemMessage(SYSTEM_MESSAGE)
      .userMessage(enrichedQuestion) // (4)
      .thenReply();
  }
}
```

| **1** | System message including instructions about the included Akka documentation. |
| **2** | Inject the `Knowledge`. |
| **3** | Retrieve relevant content and augment the question. |
| **4** | Use the question and retrieved content in the request to the LLM. |
To be able to inject the `Knowledge` we need to add it to the `Bootstrap`:

[Bootstrap.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/Bootstrap.java)
```java
@Setup
public class Bootstrap implements ServiceSetup {

  private Config config;

  public Bootstrap(Config config) {
    this.config = config;
  }

  @Override
  public DependencyProvider createDependencyProvider() {
    MongoClient mongoClient = MongoClients.create(config.getString("mongodb.uri"));
    Knowledge knowledge = new Knowledge(mongoClient);

    return new DependencyProvider() {
      @Override
      public <T> T getDependency(Class<T> cls) {
        if (cls.equals(MongoClient.class)) {
          return (T) mongoClient;
        }
        if (cls.equals(Knowledge.class)) {
          return (T) knowledge;
        }
        return null;
      }
    };
  }
}
```

## Running the service

Start your service locally:

```command
mvn compile exec:java
```

In another shell, you can now use `curl` to send requests to this Endpoint.

```command
curl localhost:9000/api/ask --header "Content-Type: application/json" -XPOST \
  --data '{ "userId": "001", "sessionId": "foo", "question": "What are the core components of Akka?"}'
```

In the first part of this guide, the AI model couldn't answer that question meaningfully, but now it will answer something like:

```none
1. Event Sourced Entities ...
2. Key Value Entities ...
3. HTTP Endpoints ...
...
```

## Next steps

Next we'll create [UI endpoints](endpoints.html).

[Knowledge indexing with a workflow](indexer.html) [Adding UI endpoints](endpoints.html)

- [Akka](../../index.html)
- [Tutorials](../index.html)
- [RAG chat agent](index.html)
- [Adding UI endpoints](endpoints.html)

# Adding UI endpoints

| | **New to Akka? Start here:** Use the [Build your first agent](../author-your-first-service.html) guide to get a simple agentic service running locally and interact with it. |

## Overview

In this step of the guide, you'll add some endpoints to provide a client-friendly API in front of all of the RAG components you've been building. You'll create an API for submitting your "Ask Akka" questions (prompts), and one that serves up a self-hosted, static asset web UI.

## Prerequisites

- Java 21, we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/)
- [Apache Maven](https://maven.apache.org/install.html) version 3.9 or later
- `curl` command-line tool
- [OpenAI API key](https://platform.openai.com/api-keys)

You will need to have your MongoDB Atlas database URL and your OpenAI API key available, as they are required to run the Ask Akka service.

If you are following along with each step rather than using the completed solution, then you'll need the code you wrote in the previous step.

## Unfamiliar with concepts like vectors, embeddings or RAG?

We recommend reviewing our [foundational explainer on AI concepts](../../concepts/ai-agents.html#_foundational_ai_concepts_video). It offers helpful background that will deepen your understanding of the technologies and patterns used throughout this tutorial.

## Add a session history view

You probably noticed that in the endpoint and in the agent, we're tracking both session IDs and user IDs. If you've ever used the ChatGPT web interface, then you're familiar with the layout where a user's conversation history is shown on the left and you can click on each to view and continue that conversation.

Communication with an LLM is *stateless*. Everything that you get back from a model like ChatGPT is directly related to the prompt you submit.
The Agent component in Akka has a built-in session memory, which enables agents to maintain context across multiple interactions. When an agent interacts with an AI model, both the user message and the AI response are automatically stored in the session memory. These messages are then included as additional context in subsequent requests to the model, allowing it to reference previous parts of the interaction.

For the user interface we need a way to pull a conversation history for a given user. We can do this with a view that is built from the events of the built-in `SessionMemoryEntity`.

Add a new file `ConversationHistoryView.java` to `src/main/java/akka/ask/agent/application/`

[ConversationHistoryView.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/agent/application/ConversationHistoryView.java)
```java
@Component(id = "view_chat_log")
public class ConversationHistoryView extends View {

  public record ConversationHistory(List<Session> sessions) {}

  public record Message(String message, String origin, long timestamp) {} // (1)

  public record Session(
    String userId,
    String sessionId,
    long creationDate,
    List<Message> messages
  ) {
    public Session add(Message message) {
      messages.add(message);
      return this;
    }
  }

  @Query(
    "SELECT collect(*) as sessions FROM view_chat_log " +
    "WHERE userId = :userId ORDER by creationDate DESC"
  )
  public QueryEffect<ConversationHistory> getSessionsByUser(String userId) { // (2)
    return queryResult();
  }

  @Consume.FromEventSourcedEntity(SessionMemoryEntity.class)
  public static class ChatMessageUpdater extends TableUpdater<Session> {

    public Effect<Session> onEvent(SessionMemoryEntity.Event event) {
      return switch (event) {
        case SessionMemoryEntity.Event.AiMessageAdded added -> aiMessage(added);
        case SessionMemoryEntity.Event.UserMessageAdded added -> userMessage(added);
        default -> effects().ignore();
      };
    }

    private Effect<Session> aiMessage(SessionMemoryEntity.Event.AiMessageAdded added) {
      Message newMessage = new Message(
        added.message(),
        "ai",
        added.timestamp().toEpochMilli()
      );
      var rowState = rowStateOrNew(userId(), sessionId());
      return effects().updateRow(rowState.add(newMessage));
    }

    private Effect<Session> userMessage(SessionMemoryEntity.Event.UserMessageAdded added) {
      Message newMessage = new Message(
        added.message(),
        "user",
        added.timestamp().toEpochMilli()
      );
      var rowState = rowStateOrNew(userId(), sessionId());
      return effects().updateRow(rowState.add(newMessage));
    }

    private String userId() {
      var agentSessionId = updateContext().eventSubject().get();
      int i = agentSessionId.indexOf("-");
      return agentSessionId.substring(0, i);
    }

    private String sessionId() {
      var agentSessionId = updateContext().eventSubject().get();
      int i = agentSessionId.indexOf("-");
      return agentSessionId.substring(i + 1);
    }

    private Session rowStateOrNew(String userId, String sessionId) { // (3)
      if (rowState() != null) return rowState();
      else return new Session(
        userId,
        sessionId,
        Instant.now().toEpochMilli(),
        new ArrayList<>()
      );
    }
  }
}
```

| **1** | We're using a view-specific message type here to avoid bleeding logic across tiers |
| **2** | Retrieves a full history of all sessions for a given user |
| **3** | Convenience method to either get the current row state or make a new one |
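The `userId()` and `sessionId()` helpers above split the event subject at the first `-`, so they assume the agent session id was composed as `userId + "-" + sessionId` (and, by implication, that user ids themselves contain no `-`). A sketch of the corresponding agent invocation, using the `ComponentClient` pattern from earlier in this guide (the agent method name is hypothetical, and a non-streaming call is shown for brevity):

```java
// Sketch: invoke the agent with a composed session id so that this view's
// updater can split the event subject back into user id and session id.
// ChatAgent::ask stands in for whatever agent method your service exposes.
var answer = componentClient
  .forAgent()
  .inSession(userId + "-" + sessionId)
  .method(ChatAgent::ask)
  .invoke(question);
```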
## Adding the users API

There is a convenience endpoint that you can use to query the list of sessions for a given user:

[UsersEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/agent/api/UsersEndpoint.java)
```java
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET))
@HttpEndpoint("/api")
public class UsersEndpoint {

  private final ComponentClient componentClient;

  public UsersEndpoint(ComponentClient componentClient) {
    this.componentClient = componentClient;
  }

  @Get("/users/{userId}/sessions/")
  public ConversationHistoryView.ConversationHistory getSession(String userId) {
    return componentClient
      .forView()
      .method(ConversationHistoryView::getSessionsByUser)
      .invoke(userId);
  }
}
```

One subtle thing worth pointing out here is that both the streaming RAG endpoint and the user view query endpoint share the same route prefix, as defined in `@HttpEndpoint("/api")`.

## Adding the static UI endpoint

You can now add an endpoint that serves up the static UI. This is surprisingly simple in Akka, as the HTTP endpoint class has built-in support for serving these kinds of assets.

[UiEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/java/akka/ask/agent/api/UiEndpoint.java)
```java
@HttpEndpoint
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.ALL))
public class UiEndpoint {

  @Get("/")
  public HttpResponse index() {
    return HttpResponses.staticResource("index.html"); // (1)
  }
}
```

| **1** | The `staticResource` function serves up a file from `main/resources/static-resources` |

And lastly, we just need to fill out the [index.html](https://github.com/akka/akka-sdk/blob/main/samples/ask-akka-agent/src/main/resources/static-resources/index.html) file to provide the static UI for Ask Akka.

There is far too much code in the HTML file to list out here. If you want to run the UI with the Ask Akka service, you may want to switch to the version that is in the repository so you can get all of the single-file React code.

## Running the service

Running the service should now just be a matter of running `mvn compile exec:java`. Make sure that you have set both the `OPENAI_API_KEY` and `MONGODB_ATLAS_URI` environment variables before running `exec:java`.

If you haven't run the indexer yet, do so with:

```command
curl -XPOST localhost:9000/api/index/start
```

Once you've made sure that your MongoDB Atlas database has a functioning and properly named vector index, you can open the Ask Akka UI in the browser: [localhost:9000](http://localhost:9000/).

## Next steps

Now that you've gone through the process of building the Ask Akka sample, you should start playing with it and even breaking it. Change the indexing parameters like chunk size and see if that affects how the LLM performs. The key is to roll up your sleeves and get dirty, as that's the best way to extend your learning beyond what's covered in this guide.

Make sure you check out our thorough discussion of [agentic AI](https://akka.io/what-is-agentic-ai) and where Akka fits in the ecosystem.

[Executing RAG queries](rag.html) [Shopping cart](../shopping-cart/index.html)

- [Akka](../index.html)
- [Shopping cart part 1: Build a simple cart service](build-and-deploy-shopping-cart.html)

# Shopping cart part 1: Build a simple cart service

| | **New to Akka? Start here:** Use the [Author your first agentic service](author-your-first-service.html) guide to get a simple agentic service running locally and interact with it. |

This guide walks you through the design and implementation of a shopping cart service, illustrating the use of some of Akka's components.

## Overview

This quickstart implements an HTTP endpoint and a shopping cart that supports adding, removing, querying, and cart checkout.

In this guide you will:

- Clone a completed shopping cart service repository to examine and run it locally.
- Be introduced to key Akka concepts including [Event Sourced Entities](../java/event-sourced-entities.html).
- See how the [project structure](../concepts/architecture-model.html) provides a clear separation of concerns in your microservices.
- Run the service locally and explore it with the local Akka console.
- Deploy the service to [akka.io](https://console.akka.io/).

## Prerequisites

- Java 21, we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/)
- [Apache Maven](https://maven.apache.org/install.html) version 3.9 or later
- `curl` command-line tool
- An [Akka account](https://console.akka.io/register)
- [Docker Engine](https://docs.docker.com/get-started/get-docker/) 27 or later

## Clone the sample

1. Clone the full source code of the Shopping Cart sample from [Github](https://github.com/akka-samples/shopping-cart-quickstart), or use the Akka CLI command below to download and unzip this quickstart project.

```bash
akka quickstart download shopping-cart
```

2. Open the project in your favorite editor.

## Understanding the shopping cart

Shopping carts are excellent examples of agentic [[1](about:blank#_footnotedef_1)] behavior (with or without AI). In real-world implementations, they can automatically apply discounts or coupons, monitor inventory levels, detect abandoned carts, maximize conversion rates, and make recommendations based on shopping history and cart contents.

For the purposes of this initial quickstart, we will just have a single endpoint and a single event-sourced entity to keep things easy to digest.

Through our entity we expect to manage our cart, adding and removing items as we please. Being event-sourced means it will represent changes to state as a series of domain events. Let's have a look at what kind of model we use to store the events our entity generates.

### The domain model

The first concept to note is the domain class `ShoppingCart` in package `shoppingcart.domain`. This class is located in the `src/main/java/shoppingcart/domain/` directory and named `ShoppingCart.java`.

[ShoppingCart.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/domain/ShoppingCart.java)
```java
package shoppingcart.domain;

import java.util.List;

public record ShoppingCart(String cartId, List<LineItem> items, boolean checkedOut) { // (1)

  public record LineItem(String productId, String name, int quantity) { // (2)
    public LineItem withQuantity(int quantity) {
      return new LineItem(productId, name, quantity);
    }
  }
}
```

| **1** | Our `ShoppingCart` is fairly simple, being composed only of a `cartId` and a list of line items. |
| **2** | A `LineItem` represents a single product and the quantity we intend to buy. |

Next, we have a domain event for adding items to the cart. Here we've created an interface `ShoppingCartEvent` with the `ItemAdded` domain event in package `shoppingcart.domain`. This file is in the `src/main/java/shoppingcart/domain/` directory and named `ShoppingCartEvent.java`:

[ShoppingCartEvent.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/domain/ShoppingCartEvent.java)
```java
package shoppingcart.domain;

import akka.javasdk.annotations.TypeName;

public sealed interface ShoppingCartEvent { // (1)

  @TypeName("item-added") // (2)
  record ItemAdded(ShoppingCart.LineItem item) implements ShoppingCartEvent {
  }
}
```

| **1** | `ItemAdded` event derives from the `ShoppingCartEvent` interface. |
| **2** | Specifying a logical type name, required for serialization. |

You may notice that there are other events defined as well.
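The snippet above is abbreviated; the complete sample file declares the remaining cart events on the same sealed interface. As a rough sketch of the pattern (the exact record components are best checked in the repository):

```java
// Sketch only: additional events following the same pattern as ItemAdded.
// Check the sample repository for the exact record components.
@TypeName("item-removed")
record ItemRemoved(String productId) implements ShoppingCartEvent {
}

@TypeName("checked-out")
record CheckedOut() implements ShoppingCartEvent {
}
```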
Events like these are how the application passes state changes between the Akka runtime and the domain object.

Jumping back to the `ShoppingCart` domain class, there is also business logic for adding items to the cart:

[ShoppingCart.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/domain/ShoppingCart.java)
```java
public ShoppingCart addItem(LineItem item) {
  var lineItem = updateItem(item); // (1)
  List<LineItem> lineItems = removeItemByProductId(item.productId()); // (2)
  lineItems.add(lineItem); // (3)
  lineItems.sort(Comparator.comparing(LineItem::productId));
  return new ShoppingCart(cartId, lineItems, checkedOut); // (4)
}

private LineItem updateItem(LineItem item) {
  return findItemByProductId(item.productId())
    .map(li -> li.withQuantity(li.quantity() + item.quantity()))
    .orElse(item);
}

private List<LineItem> removeItemByProductId(String productId) {
  return items().stream()
    .filter(lineItem -> !lineItem.productId().equals(productId))
    .collect(Collectors.toList());
}

public Optional<LineItem> findItemByProductId(String productId) {
  Predicate<LineItem> lineItemExists =
    lineItem -> lineItem.productId().equals(productId);
  return items.stream().filter(lineItemExists).findFirst();
}
```

| **1** | For an existing item, we make sure to sum the existing quantity with the incoming one. |
| **2** | Returns an updated list of items without the existing item. |
| **3** | Adds the updated item to the shopping cart. |
| **4** | Returns a new instance of the shopping cart with the updated line items. |
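Because this logic lives in a plain record with no infrastructure concerns, it can be unit tested without any Akka machinery. A minimal test sketch (not from the sample repo), assuming JUnit 5 on the test classpath:

```java
// Minimal unit-test sketch for the pure domain logic above; JUnit 5 assumed.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;
import shoppingcart.domain.ShoppingCart;
import shoppingcart.domain.ShoppingCart.LineItem;

class ShoppingCartTest {

  @Test
  void addingTheSameProductTwiceSumsTheQuantities() {
    var cart = new ShoppingCart("cart-1", List.of(), false);
    var updated = cart
      .addItem(new LineItem("akka-tshirt", "Akka Tshirt", 3))
      .addItem(new LineItem("akka-tshirt", "Akka Tshirt", 3));
    assertEquals(6, updated.findItemByProductId("akka-tshirt").get().quantity());
  }
}
```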
### The ShoppingCart entity

To go with our domain data and our events, we have an entity named `ShoppingCartEntity` in package `shoppingcart.application`. This class is located in the `src/main/java/shoppingcart/application/` directory and named `ShoppingCartEntity.java`. The class signature looks like this:

[ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java)
```java
package shoppingcart.application;

import akka.Done;
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.eventsourcedentity.EventSourcedEntity;
import akka.javasdk.eventsourcedentity.EventSourcedEntityContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import shoppingcart.domain.ShoppingCart;
import shoppingcart.domain.ShoppingCart.LineItem;
import shoppingcart.domain.ShoppingCartEvent;

import java.util.Collections;

@ComponentId("shopping-cart") // (2)
public class ShoppingCartEntity
    extends EventSourcedEntity<ShoppingCart, ShoppingCartEvent> { // (1)
}
```

| **1** | Extend `EventSourcedEntity`, where `ShoppingCart` is the type of state this entity stores, and `ShoppingCartEvent` is the interface for the events it persists. |
| **2** | Annotate the class so Akka can identify it as an event-sourced entity. |

Inside `ShoppingCartEntity`, we define an `addItem` command handler to generate an `ItemAdded` event, and an event handler to process the event and update internal state:

[ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java)
```java
public Effect<Done> addItem(LineItem item) {
  if (currentState().checkedOut()) {
    logger.info("Cart id={} is already checked out.", entityId);
    return effects().error("Cart is already checked out.");
  }

  if (item.quantity() <= 0) { // (1)
    logger.info("Quantity for item {} must be greater than zero.", item.productId());
    return effects()
      .error("Quantity for item " + item.productId() + " must be greater than zero.");
  }

  var event = new ShoppingCartEvent.ItemAdded(item); // (2)

  return effects()
    .persist(event) // (3)
    .thenReply(newState -> Done.getInstance()); // (4)
}

@Override
public ShoppingCart applyEvent(ShoppingCartEvent event) {
  return switch (event) {
    case ShoppingCartEvent.ItemAdded evt -> currentState().addItem(evt.item()); // (5)
  };
}
```

| **1** | Validate the quantity of items added is greater than zero. |
| **2** | Create a new `ItemAdded` event representing the change to the state of the cart. |
| **3** | Persist the event by returning an `Effect` with `effects().persist`. |
| **4** | Acknowledge that the event was successfully persisted. |
| **5** | Event handler to process the `ItemAdded` event and return the updated state. |

Inside `ShoppingCartEntity`, we also define a `getCart` command handler to retrieve the state of the shopping cart:

[ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/application/ShoppingCartEntity.java)
```java
private final String entityId;

private static final Logger logger = LoggerFactory.getLogger(ShoppingCartEntity.class);

public ShoppingCartEntity(EventSourcedEntityContext context) {
  this.entityId = context.entityId(); // (1)
}

@Override
public ShoppingCart emptyState() { // (2)
  return new ShoppingCart(entityId, Collections.emptyList(), false);
}

public ReadOnlyEffect<ShoppingCart> getCart() {
  return effects().reply(currentState()); // (3)
}
```

| **1** | Store the `entityId` in an internal attribute, to be used elsewhere. |
| **2** | Define the initial state. |
| **3** | Return the current state as a reply to the request. |

You should be intentional about when you query views versus when you query entity state. Views are *eventually* consistent while entities are *strongly* consistent. You don't have to worry about views yet, though, as there are none in this quickstart.
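The quickstart also supports checkout, and although that handler isn't reproduced in this guide, it follows the same validate-persist-reply shape as `addItem`. A sketch of what it might look like (the `CheckedOut` event and a matching `applyEvent` case are assumed; see the repository for the real version):

```java
// Sketch of a checkout handler following the same validate-persist-reply
// pattern as addItem. Assumes a CheckedOut event and a corresponding
// case in applyEvent; the sample repository has the actual implementation.
public Effect<Done> checkout() {
  if (currentState().checkedOut()) {
    return effects().error("Cart is already checked out.");
  }
  return effects()
    .persist(new ShoppingCartEvent.CheckedOut())
    .thenReply(newState -> Done.getInstance());
}
```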
### The external API

The shopping cart API is defined by the `ShoppingCartEndpoint`, a class in package `shoppingcart.api`. It is located in the `src/main/java/shoppingcart/api/` directory and named `ShoppingCartEndpoint.java`:

[ShoppingCartEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-quickstart/src/main/java/shoppingcart/api/ShoppingCartEndpoint.java)
```java
package shoppingcart.api;

import akka.http.javadsl.model.HttpResponse;
import akka.javasdk.annotations.Acl;
import akka.javasdk.annotations.http.HttpEndpoint;
import akka.javasdk.annotations.http.Get;
import akka.javasdk.annotations.http.Put;
import akka.javasdk.client.ComponentClient;
import akka.javasdk.http.HttpResponses;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import shoppingcart.application.ShoppingCartEntity;
import shoppingcart.domain.ShoppingCart;

// Opened up for access from the public internet to make the sample service easy to try out.
// For actual services meant for production this must be carefully considered, and often set more limited
@Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET))
@HttpEndpoint("/carts") // (1)
public class ShoppingCartEndpoint {

  private final ComponentClient componentClient;

  private static final Logger logger = LoggerFactory.getLogger(ShoppingCartEndpoint.class);

  public ShoppingCartEndpoint(ComponentClient componentClient) { // (2)
    this.componentClient = componentClient;
  }

  @Get("/{cartId}") // (3)
  public ShoppingCart get(String cartId) {
    logger.info("Get cart id={}", cartId);
    return componentClient.forEventSourcedEntity(cartId) // (4)
      .method(ShoppingCartEntity::getCart)
      .invoke(); // (5)
  }

  @Put("/{cartId}/item") // (6)
  public HttpResponse addItem(String cartId, ShoppingCart.LineItem item) {
    logger.info("Adding item to cart id={} item={}", cartId, item);
    componentClient.forEventSourcedEntity(cartId)
      .method(ShoppingCartEntity::addItem)
      .invoke(item);
    return HttpResponses.ok(); // (7)
  }
}
```

| **1** | Annotate the class so Akka can identify it as an endpoint with a common path prefix for all methods `/carts`. |
| **2** | The `ComponentClient` utility enables components to interact with each other. |
| **3** | Combine the `cartId` parameter with the endpoint root, e.g. `/carts/123`. |
| **4** | `ComponentClient` calls a command handler on an event-sourced entity from inside an endpoint. |
| **5** | The result of a request to `ShoppingCartEntity.getCart` is a `ShoppingCart`. |
| **6** | Use path parameter `{cartId}` in combination with request body `ShoppingCart.LineItem`. |
| **7** | On success, respond with a suitable response, in this case an `HTTP 200 OK`. |

## Run locally

1. Start your service locally:

```command
mvn compile exec:java
```

Once successfully started, any defined endpoints become available at `localhost:9000`.

2. Let's send some requests using `curl`. Add some T-shirts to a shopping cart:

```command
curl -i -XPUT -H "Content-Type: application/json" localhost:9000/carts/123/item -d '
{"productId":"akka-tshirt", "name":"Akka Tshirt", "quantity": 3}'
```

3. Add some blue jeans to the shopping cart:

```command
curl -i -XPUT -H "Content-Type: application/json" localhost:9000/carts/123/item -d '
{"productId":"blue-jeans", "name":"Blue Jeans", "quantity": 2}'
```

4. Add a few more T-shirts to the shopping cart:

```command
curl -i -XPUT -H "Content-Type: application/json" localhost:9000/carts/123/item -d '
{"productId":"akka-tshirt", "name":"Akka Tshirt", "quantity": 3}'
```

5. Request all of the items in the cart:

```command
curl localhost:9000/carts/123
```
6. Observe all the items in the cart:

```none
{"cartId":"123","items":[{"productId":"akka-tshirt","name":"Akka Tshirt","quantity":6},
{"productId":"blue-jeans","name":"Blue Jeans","quantity":2}],"checkedOut":false}
```

## Explore the local console

1. To get a clear view of your locally running service, [install the Akka CLI](../operations/cli/installation.html). It provides a local web-based management console.

2. Start the local console:

```command
akka local console
```

```none
───────────────────────────────────────────────────────────────────────
│ SERVICE                  │ STATE   │ ADDRESS        │
───────────────────────────────────────────────────────────────────────
│ shopping-cart-quickstart │ Running │ localhost:9000 │
───────────────────────────────────────────────────────────────────────
Local console: http://localhost:9889
(use Ctrl+C to quit)
```

3. Open [http://localhost:9889/](http://localhost:9889/) to see your local service in action:

![local console first app events](_images/local-console-first-app-events.png)

Here, you can view not only the [current state of the cart](http://localhost:9889/services/shopping-cart-quickstart/components/shoppingcart.application.ShoppingCartEntity), but also **a detailed log of events**, and the corresponding state changes that occurred along the way.

## Deploy to akka.io

Now that you've built and started the shopping cart service locally, it's time to run it on the Akka Agentic Platform without having to change any code.

1. If you have not already done so, [install the Akka CLI](../operations/cli/installation.html).

2. Authenticate the CLI with your Akka account:

```command
akka auth login
```

3. Build a container image of your shopping cart service:

```command
mvn clean install -DskipTests
```

4. Take note of the container name and tag from the last line in the output, for example:

```command
DOCKER> Tagging image shoppingcart:1.0-SNAPSHOT-20241028102843 successful!
```

5. Deploy your service, replacing:
   - `container-name` with the container name from the `mvn install` output in the previous step
   - `tag-name` with the tag name from the `mvn install` output in the previous step

```command
akka service deploy cart-service shoppingcart:tag-name --push
```

Your service named `cart-service` will now begin deploying.

6. Verify the deployment status of your service:

```command
akka service list
```

A service status can be one of the following:

- **Ready**: All service instances are up-to-date and fully available.
- **UpdateInProgress**: Service is updating.
- **Unavailable**: No service instances are available.
- **PartiallyReady**: Some, but not all, service instances are available.

Approximately one minute after deploying, your service status should become **Ready**.

7. Expose your service to the internet:

```command
akka service expose cart-service
```

Should respond with something similar to (the exact address will be unique to your service):

```command
Service 'cart-service' was successfully exposed at: spring-tooth-3406.gcp-us-east1.akka.services
```

Congratulations! You have successfully deployed your service. You can now access it using the hostname described in the output of the command above.

## Invoke your deployed service

You can use [cURL](https://curl.se/) to invoke your service, replacing URL with the hostname from the previous step.
1. Add an item to the shopping cart:

```command
curl -i -XPUT -H "Content-Type: application/json" \
  https://spring-tooth-3406.gcp-us-east1.akka.services/carts/123/item -d '
{"productId":"akka-tshirt", "name":"Akka Tshirt", "quantity": 10}'
```

2. Get cart state:

```command
curl https://spring-tooth-3406.gcp-us-east1.akka.services/carts/123
```

## Explore the console

1. Open the **Akka Console**.
2. Navigate to the **Project** where the Service is deployed.
3. Click on the **Service** card of the Service. It shows detailed information about the running service.

![console first app service](_images/console-first-app-service.png)

## Next steps

Now that you've built and deployed a shopping cart service, take your Akka skills to the next level:

1. **Add a view**: Continue to the [next step](addview.html) in the tour.
2. **Expand on your own**: Explore [other Akka components](../java/components/index.html) to enhance your application with additional features.
3. **Explore other Akka samples**: Discover more about Akka by exploring [different use cases](samples.html) for inspiration.

[1](about:blank#_footnoteref_1). Defined as a stateful behavior capable of acting on its own.

- [Akka](../index.html)
- [Shopping cart part 2: Authenticated user-specific lookup](addview.html)

# Shopping cart part 2: Authenticated user-specific lookup

| | **New to Akka? Start here:** Use the [Author your first agentic service](author-your-first-service.html) guide to get a simple agentic service running locally and interact with it. |

This guide walks you through the design and implementation of an enhancement to the shopping cart service example, illustrating model refactoring, the use of Views, and user authentication.

## Overview

In this step in the shopping cart sample tour, we'll be taking a look at the event and domain models created in the previous step. We'll decide on what we'd like to change and then implement that change in the form of a few refactors and adding a new `View`.

## Prerequisites

- Java 21, we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/)
- [Apache Maven](https://maven.apache.org/install.html) version 3.9 or later
- `curl` command-line tool
- An [Akka account](https://console.akka.io/register)
- [Docker Engine](https://docs.docker.com/get-started/get-docker/) 27 or later

## Clone the sample

1. Clone the full source code of the Shopping Cart (with View) sample from [Github](https://github.com/akka-samples/shopping-cart-with-view).

## Re-evaluating the shopping cart structure

The first version of the shopping cart blurred the lines between tiers and responsibilities. The data type used to represent the `LineItem` in the `POST` request to the HTTP endpoint is the same data type sent to the `ShoppingCartEntity` as a command. This is *also* the exact same data type used by the entity for its own internal state storage.

For small applications or prototypes, this isn't that big of a problem. But this kind of pattern might create headaches in the future. Any change to the way the entity stores its internal state will potentially break the API endpoint and maybe even cause migration issues with events. It might seem like overkill at first, but separating these data types is one of the first steps toward supporting evolutionary architecture and clean, easily-maintained code.
If we adopt the rule that we can't reuse the same data type across roles, then we end up with three distinct types:

- The data used to shape the body of the `POST` request to add an item to a cart
- The data used as a *command* to be sent to the entity to handle that request
- The data used *internally* by the entity to represent its own state

We want to make sure that the data the entity is storing for its state contains *only* the information the entity needs in order to validate incoming commands.

The other change we want to make is adding a `userId` attribute to the shopping cart. While the first version using just `cartId` is fine, on the road to production this isn't good enough. We need to be able to ensure that one user can't read, modify, or delete a cart owned by another user. Further, we want the option to retrieve a cart by user ID from the new view we're adding.

### Managing cart and user IDs

With the addition of a `userId` to the shopping cart, we've now got a bit of a gray area in the model. How do we create new cart IDs? The last version of the sample relied on the clients to generate and remember the cart IDs, which isn't an ideal experience. Further, how do we ensure that users only have one active cart while we can potentially retrieve old carts for historical data?

The solution used in this sample is to create another entity, the `UserEntity`. This entity will manage just one piece of information: the user's currently active shopping cart ID. When a cart gets checked out, we'll "close" the old ID and generate a new one. This ensures the right ratio of users to carts while also alleviating the burden of ID maintenance from the clients.

| | Avoid random numbers in emptyState. It might be tempting to try and generate a random number or UUID in the user entity's `emptyState()` function. The consequences of this are far-reaching and subtle. If the `emptyState()` function generates a new cart UUID every time it is called, then whenever a client asks for a user entity *that has not yet received any events*, we get a new UUID. This means that if we add three items to the cart for the same not-yet-persisted user, we'll actually create three different carts. To mitigate this, we instead used a simple monotonically increasing integer for each user. This way, not-yet-persisted users will always use cart ID `1`. |

## Creating isolated data types

To work on the separation of concerns, we'll work our way in from the outermost edge, which in this case is the HTTP endpoint. This one new record represents the line items that can be added via `POST` to `/my/item`.

[ShoppingCartEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/api/ShoppingCartEndpoint.java)
```java
public record LineItemRequest(String productId, String name, int quantity, String description) {
}
```

From the client standpoint, they're supplying both the name and description of the product in the request. In subsequent tutorials, we might illustrate a better place to put product metadata like that.

Next, we need a *command* type for the entity to handle. Remember that calling `invoke` from the endpoint will only take a single parameter, so our command has to hold all of the necessary data.
[ShoppingCartEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/application/ShoppingCartEntity.java)
```java
public record AddLineItemCommand(String userId, String productId, String name, int quantity, String description) {
}
```

Next we modify the shape of the internal state used by the entity. To illustrate the different roles of entities and views, we've modified the state so that it doesn't store the `name` or `description` fields, since those aren't needed for decision making during command processing.

[ShoppingCart.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/domain/ShoppingCart.java)
```java
public record ShoppingCart(String cartId, List<LineItem> items, boolean checkedOut) {

  public record LineItem(String productId, int quantity) {
    public LineItem withQuantity(int quantity) {
      return new LineItem(productId, quantity);
    }
  }
}
```

## Adding a shopping cart view

Now that we've improved the separation of concerns/layers with the various data types being used in the application, we can create the `View`. A view can contain multiple tables, and each one of those tables can be thought of as roughly equivalent to a table in a traditional RDBMS, except you don't have to worry about where or how that data is stored.

For our new view, we want all of the information on the shopping cart contents, including the name and description (which have also been added to the appropriate `ShoppingCartEvent`).

[ShoppingCartView.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/application/ShoppingCartView.java)
```java
@ComponentId("shopping-cart-view")
public class ShoppingCartView extends View {

  @Query("SELECT * FROM shopping_carts WHERE cartId = :cartId") // (1)
  public QueryEffect<Cart> getCart(String cartId) {
    return queryResult();
  }

  @Query("SELECT * FROM shopping_carts WHERE " +
    "userId = :userId AND checkedout = false") // (2)
  public QueryEffect<Optional<Cart>> getUserCart(String userId) {
    return queryResult();
  }

  public record Cart(String cartId, String userId, List<Item> items, boolean checkedout) { // (3)
  }

  @Consume.FromEventSourcedEntity(ShoppingCartEntity.class) // (4)
  public static class ShoppingCartsUpdater extends TableUpdater<Cart> {

    public Effect<Cart> onEvent(ShoppingCartEvent event) {
      return switch (event) {
        case ShoppingCartEvent.ItemAdded added -> addItem(added);
        case ShoppingCartEvent.ItemRemoved removed -> removeItem(removed);
        case ShoppingCartEvent.CheckedOut checkedOut -> checkout(checkedOut);
      };
    }

    Cart rowStateOrNew(String userId) {
      if (rowState() == null) {
        var cartId = updateContext().eventSubject().get();
        return new Cart(
          cartId,
          userId,
          new ArrayList<>(),
          false);
      } else {
        return rowState();
      }
    }

    private Effect<Cart> addItem(ShoppingCartEvent.ItemAdded added) {
      return effects().updateRow(
        rowStateOrNew(added.userId()) // (5)
          .addItem(added.productId(), added.name(), added.quantity(), added.description()));
    }

    private Effect<Cart> removeItem(ShoppingCartEvent.ItemRemoved removed) {
      return effects().updateRow(rowState().removeItem(removed.productId()));
    }

    private Effect<Cart> checkout(ShoppingCartEvent.CheckedOut checkedOut) {
      return effects().updateRow(rowState().checkout());
    }
  }
}
```

| **1** | Return a single shopping cart based on its unique ID. |
| **2** | Return a single shopping cart based on its user ID. |
| **3** | The data type for a single row of the table. |
| **4** | This view gets its data from events emitted by `ShoppingCartEntity`. |
| **5** | Either reusing the existing row state or creating a new `Cart`. |
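The row-mutation helpers called above (`addItem`, `removeItem`, `checkout`) live on the `Cart` record but are not shown in this guide. A sketch of roughly what they look like, assuming a view-local `Item` record (check the sample repository for the exact shapes):

```java
// Sketch of the Cart row helpers referenced by the updater above.
// The Item record is assumed; the sample repository has the real versions.
public record Item(String productId, String name, int quantity, String description) {}

public Cart addItem(String productId, String name, int quantity, String description) {
  var newItems = new ArrayList<>(items);
  newItems.add(new Item(productId, name, quantity, description));
  return new Cart(cartId, userId, newItems, checkedout);
}

public Cart removeItem(String productId) {
  var newItems = items.stream()
    .filter(item -> !item.productId().equals(productId))
    .toList();
  return new Cart(cartId, userId, newItems, checkedout);
}

public Cart checkout() {
  return new Cart(cartId, userId, items, true);
}
```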
| | **5** | Either reusing the existing row state or creating a new `Cart`. | With a newly refactored set of data types, clear boundaries between the various components, and a view in hand, there’s one more thing to do: add the concept of a user. ## Adding users to the app There are a couple of things that need to be done in order to add users to the application. We’ll need a `UserEntity` that manages the current shopping cart IDs, and we’ll need to add user authentication and context to the API endpoint. ### Creating a user entity The user entity in this sample is quite small (but easily enhanced later). It maintains a `currentCartId` on behalf of a user, and whenever a cart is "closed" (as a result of a checkout), we increment the cart ID. [UserEntity.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/application/UserEntity.java) ```java @ComponentId("user") public class UserEntity extends EventSourcedEntity<User, UserEvent> { private final String entityId; private static final Logger logger = LoggerFactory.getLogger(UserEntity.class); public record CloseCartCommand(String cartId) { } public UserEntity(EventSourcedEntityContext context) { this.entityId = context.entityId(); } public ReadOnlyEffect<String> currentCartId() { return effects().reply(entityId + "-" + currentState().currentCartId()); } public Effect<Done> closeCart(CloseCartCommand command) { return effects() .persist(new UserEvent.UserCartClosed(entityId, command.cartId())) .thenReply(__ -> Done.done()); } @Override public User emptyState() { int newCartId = 1; return new User(entityId, newCartId); } @Override public User applyEvent(UserEvent event) { logger.debug("Applying user event to user id={}", entityId); return switch (event) { case UserEvent.UserCartClosed closed -> currentState().closeCart(); }; } } ``` Incrementing the cart ID is done simply in the `closeCart` function of the `User`: ```java public User closeCart() { return new User(userId, currentCartId + 1); } ``` ### Adding a cart consumer Given the preceding entity, we still need *something* to call the `closeCart` function. Since we want to close carts and bump IDs whenever a cart is checked out, we’ll create a consumer that receives `ShoppingCartEvent` events and calls the appropriate user entity method. [CartCloser.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/application/CartCloser.java) ```java @ComponentId("cart-closer-consumer") @Consume.FromEventSourcedEntity(value = ShoppingCartEntity.class, ignoreUnknown = true) public class CartCloser extends Consumer { private Logger logger = LoggerFactory.getLogger(CartCloser.class); protected final ComponentClient componentClient; public CartCloser(ComponentClient componentClient) { this.componentClient = componentClient; } public Effect onCheckedOut(ShoppingCartEvent.CheckedOut event) { logger.debug("Closing cart for user {} due to checkout", event.userId()); componentClient.forEventSourcedEntity(event.userId()) .method(UserEntity::closeCart) .invoke(new CloseCartCommand(event.cartId())); return effects().done(); } } ``` ### Securing the HTTP endpoint Adding the concept of a user context to an endpoint in traditional applications can be a nightmare. The refactoring can bleed into all sorts of unexpected places, and building or buying (or both) authentication and authorization solutions can bog down entire teams.
In the following code, we add support for **[JWT](../java/auth-with-jwts.html)**-based bearer tokens to the HTTP endpoint with just a single line. While not shown here, you can define all kinds of rules based on the claims supplied in a token. [ShoppingCartEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/api/ShoppingCartEndpoint.java) ```java @Acl(allow = @Acl.Matcher(principal = Acl.Principal.INTERNET)) @JWT(validate = JWT.JwtMethodMode.BEARER_TOKEN) @HttpEndpoint("/carts") public class ShoppingCartEndpoint extends AbstractHttpEndpoint { ``` Extracting the user ID from context is quite easy. Let’s modify the `get` function so that it rejects attempts to query a shopping cart that doesn’t belong to the caller. [ShoppingCartEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/api/ShoppingCartEndpoint.java) ```java @Get("/{cartId}") public ShoppingCartView.Cart get(String cartId) { logger.info("Get cart id={}", cartId); var userId = requestContext().getJwtClaims().subject().get(); var cart = componentClient.forView() .method(ShoppingCartView::getCart) // (1) .invoke(cartId); if (cart.userId().trim().equals(userId)) { return cart; } else { throw HttpException.error(StatusCodes.NOT_FOUND, "no such cart"); } } ``` | **1** | Invoke the view’s `getCart` function to retrieve by cart ID | We return `404 Not Found` here when there’s a cart ownership mismatch rather than returning an authorization-related code such as `401` or `403`. This is to prevent malicious intruders from being able to discover the IDs of other people’s carts. We can also add a new convenience route, `/my`, which will retrieve the cart for the currently authenticated user. This eases the burden on the UI a bit since it won’t have to do a pre-fetch to convert a user ID into a cart ID. [ShoppingCartEndpoint.java](https://github.com/akka/akka-sdk/blob/main/samples/shopping-cart-with-view/src/main/java/shoppingcart/api/ShoppingCartEndpoint.java) ```java @Get("/my") public ShoppingCartView.Cart getByUser() { var userId = requestContext().getJwtClaims().subject().get(); logger.info("Get cart userId={}", userId); var result = componentClient.forView() .method(ShoppingCartView::getUserCart) // (1) .invoke(userId); return result.orElseThrow(() -> HttpException.error(StatusCodes.NOT_FOUND, "no such cart")); } ``` | **1** | Invoke the view’s `getUserCart` function to retrieve the cart by user ID | Now we can modify all of the other data-mutating routes to use the special token `my` rather than accepting an arbitrary cart ID. This has the net effect of preventing any client from making changes to anything other than the currently active cart for the current user. This table reflects the new status of the shopping cart service’s routes: | Path | Method | Description | | --- | --- | --- | | `/carts/{cartId}` | `GET` | Retrieves the cart corresponding to the supplied ID.
Returns `404` if the calling user does not own the shopping cart in question | | `/carts/my` | `GET` | Retrieves the currently active shopping cart for the current user, or `404` if the cart hasn’t been created | | `/carts/my/item` | `PUT` | Adds a line item to the user’s current shopping cart | | `/carts/my/item/{productId}` | `DELETE` | Removes a line item from the user’s current shopping cart | | `/carts/my/checkout` | `POST` | Checks out the user’s current shopping cart | ### Exercising the service With JWT authentication in place, it’s now slightly more difficult to invoke the service via `curl`, but only because we have to generate a valid token. Since this sample doesn’t validate for specific issuers, any valid token will be fine. You can create your own tokens on [JWT.io](https://jwt.io/), or you can use the one from the following `curl` example, which interrogates the user’s current shopping cart. ```command curl http://localhost:9000/carts/my -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJib2IiLCJuYW1lIjoiQm9iIEJvYmJlcnNvbiIsImlzcyI6ImFsaWNlIiwiaWF0IjoxNTE2MjM5MDIyfQ.wIxafOw2k4TgdCm2pH4abupetKRKS4ItOKlsNTY-pzc' ``` ## Next steps Now that you’ve added a view *and* user authentication to the shopping cart sample, take your Akka skills to the next level: 1. **Install and build**: Before moving on, download the code for this sample, compile it, and make sure you can run and utilize the new service. 2. **Expand on your own**: Explore [other Akka components](../java/components/index.html) to enhance your application with additional features. 3. **Explore other Akka samples**: Discover more about Akka by exploring [different use cases](samples.html) for inspiration.

# Hello world agent This sample uses an agent and LLM to generate greetings in different languages. It illustrates how the agent maintains contextual history in a session memory. This sample is explained in [Author your first agentic service](https://doc.akka.io/getting-started/author-your-first-service.html). To understand the Akka concepts that are the basis for this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation. This project contains the skeleton to create an Akka service. To understand more about these components, see [Developing services](https://doc.akka.io/sdk/index.html). Use Maven to build your project: ```shell mvn compile ``` When running the Akka service locally, this sample uses OpenAI. Other AI models can be configured; see [Agent model provider](https://doc.akka.io/sdk/agents.html#_model). Set your [OpenAI API key](https://platform.openai.com/api-keys) as an environment variable: - On Linux or macOS: ```shell export OPENAI_API_KEY=your-openai-api-key ``` - On Windows (command prompt): ```shell set OPENAI_API_KEY=your-openai-api-key ``` Or change the `application.conf` file to use a different model provider. To start your service locally, run: ```shell mvn compile exec:java ``` This command will start your Akka service. With your Akka service running, you can call the endpoint: ```shell curl -i -XPOST --location "http://localhost:9000/hello" \ --header "Content-Type: application/json" \ --data '{"user": "alice", "text": "Hello, I am Alice"}' ``` You can use the [Akka Console](https://console.akka.io) to create a project and see the status of your service.
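Before moving on to deployment, it may help to see roughly what such a greeting agent looks like in code. The following is a minimal sketch assuming the Akka SDK `Agent` component; the class name, component id, and prompt are illustrative, not the sample's actual source:

```java
import akka.javasdk.agent.Agent;
import akka.javasdk.annotations.ComponentId;

// A minimal sketch of a greeting agent (names and prompt are illustrative).
@ComponentId("hello-world-agent")
public class HelloWorldAgent extends Agent {

  private static final String SYSTEM_MESSAGE =
    "You are a cheerful greeter. Greet the user in a language they have " +
    "not been greeted in before, and include an English translation.";

  // An agent has a single command handler; the incoming text becomes the LLM user message.
  public Effect<String> greet(String userMessage) {
    return effects()
      .systemMessage(SYSTEM_MESSAGE)
      .userMessage(userMessage)
      .thenReply();
  }
}
```

The contextual history mentioned above lives in the agent's session memory, managed by the runtime, which is what lets each subsequent greeting pick a language that hasn't been used yet.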
Build container image: ```shell mvn clean install -DskipTests ``` Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html). Set up a secret containing the OpenAI API key: ```shell akka secret create generic openai-api --literal key=$OPENAI_API_KEY ``` Deploy the service using the image tag from above `mvn install` and the secret: ```shell akka service deploy helloworld-agent helloworld-agent:tag-name --push \ --secret-env OPENAI_API_KEY=openai-api/key ``` Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.

# Multi-Agent System A sample application demonstrating how to build a multi-agent system using Akka and an LLM model. ## Overview This project illustrates an Agentic workflow for a multi-agent system using Akka. The system: - Processes user queries and determines the appropriate agents to handle the request - Plans the execution order of the selected agents - Executes the plan by interacting with the agents in sequence - Summarizes the results from all agents into a final response ### Akka components This sample leverages specific Akka components: - **Workflow**: Manages the user query process, handling the sequential steps of agent selection, plan creation, execution, and summarization. - **EventSourced Entity**: Maintains the session memory, storing the sequence of interactions between the user and the system. - **HTTP Endpoint**: Serves the application endpoints for interacting with the multi-agent system (`/activities`). ### Other - **LLM model**: The system uses an LLM model to assist in agent selection, plan creation, and summarization. The LLM is integrated with tools specific to each agent's domain of expertise. ## Example flow ```mermaid sequenceDiagram participant User participant HTTPEndpoint as HTTP Endpoint participant Workflow as Akka Workflow participant Selector as Agent Selector participant Planner as Execution Planner participant WeatherAgent as Weather Agent participant ActivityAgent as Activity Agent participant Summarizer as Summarizer User->>HTTPEndpoint: "I do not work today. I am in Madrid. What should I do? Beware of the weather" HTTPEndpoint->>Workflow: Create new workflow instance Note over Workflow: Initialize multi-agent query process HTTPEndpoint-->>User: Return activity id for follow up Workflow->>Selector: Select appropriate agents Selector-->>Workflow: Return selected agents (e.g., WeatherAgent, ActivityAgent) Workflow->>Planner: Create execution plan Planner-->>Workflow: Return ordered plan (e.g., WeatherAgent -> ActivityAgent) loop Execute plan Workflow->>WeatherAgent: "What is the weather in Madrid?" WeatherAgent-->>Workflow: "Rainy in Madrid" Workflow->>ActivityAgent: "Suggest activities for a day in Madrid" ActivityAgent-->>Workflow: "Visit the Prado Museum or enjoy local cafes" end Workflow->>Summarizer: Summarize responses Summarizer-->>Workflow: "The weather in Madrid is rainy today, so you might want to explore indoor attractions..." User->>HTTPEndpoint: Send query by session id HTTPEndpoint->>Workflow: Request result from workflow Workflow-->>HTTPEndpoint: Return result HTTPEndpoint-->>User: Deliver response ``` Note that the agents selected depend on the user's query and the available agents. Thus, the flow varies depending on the Planner agent's reasoning and the steps selected to respond to the query.
Another important aspect is that all agents share the same context; thus, when answering a question, an agent can take into consideration the previous interactions with other agents. ## Running the application ### Prerequisites - Java 21 or higher - Maven 3.6 or higher ### Build and run To run the application, you need to provide the following environment variables: - `OPENAI_API_KEY`: Your OpenAI API key. If you prefer to use a different LLM model, follow the instructions in `application.conf` to change it. - `WEATHER_API_KEY`: (Optional) API key for the weather service Set the environment variables: - On Linux or macOS: ```shell export OPENAI_API_KEY=your-openai-api-key export WEATHER_API_KEY=your-weather-api-key ``` - On Windows (command prompt): ```shell set OPENAI_API_KEY=your-openai-api-key set WEATHER_API_KEY=your-weather-api-key ``` Build and run the application: ```shell # Run the application mvn compile exec:java ``` ### Testing the agents With the application running, you can test the system using the following endpoints: * Start a new session: ```shell curl -i -XPOST --location "http://localhost:9000/activities/alice" \ --header "Content-Type: application/json" \ --data '{"message": "I do not work tomorrow. I am in Madrid. What should I do? Beware of the weather"}' ``` The system will process the query, select the appropriate agents, and return a response. * Retrieve the response for a specific session: ```shell curl -i -XGET --location "http://localhost:9000/activities/alice/{sessionId}" ``` Replace `{sessionId}` with the ID returned when the session was created. Example: ```shell $ curl "http://localhost:9000/activities/alice/c1219e5a-abae-44c0-959b-ff76aa22cb2e" The weather in Madrid is rainy tomorrow, so you might want to explore indoor attractions like the Prado Museum or Reina Sofia Museum. Alternatively, you can visit local cafes and food markets, such as Mercado de San Miguel, to enjoy some culinary delights without getting wet. If you're up for something more active, you could also consider visiting an escape room or an indoor sports facility. ``` You can also retrieve all previous suggestions for a user: ```shell curl -i -XGET --location "http://localhost:9000/activities/alice" ``` Preferences can be added with: ```shell curl -i localhost:9000/preferences/alice \ --header "Content-Type: application/json" \ -XPOST \ --data '{ "preference": "I like outdoor activities." }' ``` ## Deployment You can use the [Akka Console](https://console.akka.io) to create a project and deploy this service. Build container image: ```shell mvn clean install -DskipTests ``` Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html). Set up a secret containing the OpenAI API key: ```shell akka secret create generic agent-secrets \ --literal openai-key=$OPENAI_API_KEY \ --literal weather-key=$WEATHER_API_KEY ``` Deploy the service using the image tag from above `mvn install` and the secrets: ```shell akka service deploy multi-agent multi-agent: --push \ --secret-env OPENAI_API_KEY=agent-secrets/openai-key \ --secret-env WEATHER_API_KEY=agent-secrets/weather-key ``` Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.
To understand the Akka concepts that are the basis for this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

# Adaptive Multi-Agent System A sample application demonstrating how to build a multi-agent system using Akka and an LLM model. ## Overview This project illustrates an Agentic workflow for a multi-agent system using Akka. The system: - Processes user queries and determines the appropriate agents to handle the request - Plans the execution order of the selected agents - Executes the plan by interacting with the agents in sequence - Summarizes the results from all agents into a final response ### Akka components This sample leverages specific Akka components: - **Workflow**: Manages the user query process, handling the sequential steps of agent selection, plan creation, execution, and summarization. - **EventSourced Entity**: Maintains the session memory, storing the sequence of interactions between the user and the system. - **HTTP Endpoint**: Serves the application endpoints for interacting with the multi-agent system (`/activities`). ### Other - **LLM model**: The system uses an LLM model to assist in agent selection, plan creation, and summarization. The LLM is integrated with tools specific to each agent's domain of expertise. ## Adaptive Orchestration This sample also includes an implementation of the **MagenticOne** orchestration pattern - an adaptive multi-agent system that re-evaluates progress after each agent response and dynamically adjusts its strategy. ### Key Features - **Adaptive Planning**: Unlike a static workflow, MagenticOne continuously evaluates progress and can change direction - **Stall Detection**: Automatically detects when agents are stuck in loops or not making progress - **Dynamic Replanning**: When stalled, the system updates its understanding and creates a new plan - **Fact Sheet Evolution**: Maintains an evolving understanding of the problem that improves over time ### How It Works MagenticOne uses two loops: **Outer Loop** (Planning): 1. Gather facts about the task 2. Create an execution plan based on available agents **Inner Loop** (Execution): 1. Evaluate current progress (is task complete? making progress? stuck?) 2. Select next agent to execute based on evaluation 3. Execute selected agent 4. Repeat until task complete or replanning needed If the system detects stalling (e.g., repeated actions, lack of progress), it re-enters the outer loop to replan with updated knowledge.
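To make the two loops concrete, here is a compact, self-contained sketch of the control flow; every type and method name below is an illustrative stand-in, not the sample's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the MagenticOne-style control flow (not the sample's actual code).
public class OrchestrationSketch {

  enum Decision { CONTINUE, REPLAN, FINISHED }

  record Evaluation(Decision decision, String nextAgent) {}

  interface Ledger {
    String gatherFactsAndPlan(String task);                        // outer loop: facts + plan
    String replanWithUpdatedFacts(String task, List<String> history);
  }

  interface Orchestrator {
    Evaluation evaluate(List<String> history);                     // inner loop: progress check
    String execute(String agentName, String plan);                 // run the selected worker agent
  }

  static String run(String task, Ledger ledger, Orchestrator orchestrator) {
    var history = new ArrayList<String>();
    var plan = ledger.gatherFactsAndPlan(task);
    while (true) {
      var eval = orchestrator.evaluate(history);
      switch (eval.decision()) {
        case FINISHED -> { return String.join("\n", history); }                // hand off to summarization
        case REPLAN -> plan = ledger.replanWithUpdatedFacts(task, history);    // stall: back to outer loop
        case CONTINUE -> history.add(orchestrator.execute(eval.nextAgent(), plan));
      }
    }
  }
}
```

In the sample, this control flow is coordinated by an Akka Workflow (`AgentTeamWorkflow`, described below), which makes each round durable rather than a single in-memory loop.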
### Call Flow Diagram The following sequence diagram illustrates the adaptive orchestration flow: ```mermaid sequenceDiagram participant User participant Endpoint as ActivityEndpoint participant Workflow as AgentTeamWorkflow participant Ledger as LedgerAgent participant Orch as OrchestratorAgent participant Worker as Worker Agents participant Summary as SummarizerAgent User->>Endpoint: POST /activities/{userId} Endpoint->>Workflow: start(message) rect rgb(240, 248, 255) Note over Workflow,Ledger: Outer Loop: Planning Workflow->>Ledger: Gather facts about task Ledger-->>Workflow: Facts (given, to lookup, to derive) Workflow->>Ledger: Create execution plan Ledger-->>Workflow: Initial plan end rect rgb(255, 250, 240) Note over Workflow,Worker: Inner Loop: Execution (Round 1) Workflow->>Orch: Evaluate progress (round=1) Orch-->>Workflow: Select WeatherAgent Workflow->>Worker: Execute WeatherAgent Worker-->>Workflow: Weather info end rect rgb(255, 250, 240) Note over Workflow,Worker: Inner Loop: Execution (Round 2) Workflow->>Orch: Evaluate progress (round=2) Orch-->>Workflow: Select ActivityAgent Workflow->>Worker: Execute ActivityAgent Worker-->>Workflow: Activity suggestions end rect rgb(255, 250, 240) Note over Workflow,Worker: Inner Loop: Execution (Round 3) Workflow->>Orch: Evaluate progress (round=3) Orch-->>Workflow: STALL DETECTED - need replanning end rect rgb(255, 240, 245) Note over Workflow,Ledger: Replanning: Update Facts & Plan Workflow->>Ledger: Update facts based on learnings Ledger-->>Workflow: Updated facts Workflow->>Ledger: Create revised plan Ledger-->>Workflow: Revised plan end rect rgb(255, 250, 240) Note over Workflow,Worker: Inner Loop: Execution (Round 4) Workflow->>Orch: Evaluate progress (round=4) Orch-->>Workflow: Select BudgetAgent Workflow->>Worker: Execute BudgetAgent Worker-->>Workflow: Budget analysis end rect rgb(255, 250, 240) Note over Workflow,Worker: Inner Loop: Execution (Round 5) Workflow->>Orch: Evaluate progress (round=5) Orch-->>Workflow: Task satisfied end rect rgb(240, 255, 240) Note over Workflow,Summary: Finalization Workflow->>Summary: Summarize all agent responses Summary-->>Workflow: Final answer Workflow->>Workflow: Mark as COMPLETED end Endpoint-->>User: 201 Created (Location header) User->>Endpoint: GET /activities/{userId}/{sessionId} Endpoint->>Workflow: getAnswer() Workflow-->>Endpoint: Final answer Endpoint-->>User: Response ``` **Key aspects illustrated:** - **Outer Loop (Blue)**: Initial fact gathering and plan creation by LedgerAgent - **Inner Loop (Orange)**: Iterative evaluation and agent execution coordinated by OrchestratorAgent - **Replanning (Pink)**: Triggered when stalls are detected, updates facts and creates new plan - **Finalization (Green)**: SummarizerAgent combines all responses into final answer Each round, the OrchestratorAgent evaluates progress by checking the session memory (conversation history) and decides whether to continue with another agent, replan, or finish. 
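One way to picture that per-round evaluation is as a small structured result that the orchestrator extracts from the LLM's reply. The record below is a hypothetical shape, not the sample's actual code:

```java
// Hypothetical shape of the orchestrator's per-round evaluation (illustrative names).
public record ProgressEvaluation(
    boolean isRequestSatisfied,   // true -> finalize and summarize
    boolean isInLoop,             // repeated actions detected in the session memory
    boolean isProgressBeingMade,  // false over several rounds indicates a stall
    String nextAgent,             // which worker agent should act next
    String instruction            // the instruction to hand to that agent
) {
  // A stall triggers re-entry into the outer (replanning) loop.
  public boolean stalled() {
    return isInLoop || !isProgressBeingMade;
  }
}
```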
### MagenticOne Components - **LedgerAgent**: Manages the task ledger (facts and execution plan) - **OrchestratorAgent**: Evaluates progress after each turn and selects the next agent - **AgentTeamWorkflow**: Coordinates the outer and inner loops - **Worker Agents**: - **WeatherAgent**: Provides weather forecasts and conditions - **ActivityAgent**: Suggests real-world activities (sports, games, attractions) - **RestaurantAgent**: Recommends dining options based on location, cuisine, and budget - **TransportAgent**: Provides navigation and public transport information - **BudgetAgent**: Tracks costs and manages budget constraints - **EventAgent**: Finds local events, exhibitions, and cultural happenings ### Benefits Over Static Planning | Feature | Static Planning | MagenticOne | |---------|----------------|-------------| | **Adaptability** | Fixed plan upfront | Adjusts based on results | | **Error Recovery** | Retries same plan | Detects issues and replans | | **Complex Tasks** | Limited to predefined steps | Handles open-ended problems | | **Agent Selection** | All agents planned initially | Selects agents as needed | The MagenticOne pattern is ideal for complex, unpredictable tasks where the optimal path isn't known upfront. ### Example: Adaptive Planning in Action Consider this complex request: > "Plan a weekend in Barcelona with my family (2 adults, 2 kids aged 8 and 10). Budget is 600 euros total. > Find kid-friendly activities, family restaurants, and check if there are any special events this weekend. > We're staying near Sagrada Familia." **MagenticOne's Adaptive Flow:** **Round 1** - Orchestrator selects WeatherAgent and EventAgent - Weather: Sunny Saturday, rainy Sunday afternoon - Events: Children's festival at Parc de la Ciutadella on Saturday **Round 2** - Based on weather + events, selects ActivityAgent - Saturday: Outdoor activities + festival - Sunday morning: Indoor museums (Aquarium, CosmoCaixa) **Round 3** - Orchestrator notices no dining recommendations yet, selects RestaurantAgent - Family-friendly restaurants near activities - Estimated 30-40 euros per meal for family **Round 4** - Budget concern detected, orchestrator selects BudgetAgent - **STALL DETECTED**: Budget shows 650 euros (over limit) - System enters replanning phase **Replanning** - Ledger updates facts with budget overage - Updated plan: Replace one restaurant with picnic, skip paid festival activities - New budget: 580 euros (under budget with buffer) **Round 5** - Verify transport feasibility with TransportAgent - Metro routes from Sagrada Familia to each location - Total transport cost: 48 euros for weekend passes **Round 6** - Final budget check shows 628 euros - **STALL DETECTED AGAIN**: Still over budget - Replanning: Switch one museum to free alternative (Parc Güell free area) **Round 7** - Orchestrator confirms all constraints met - Task complete: Final itinerary with budget at 590 euros **Why This Showcases MagenticOne:** - **Dynamic Adaptation**: Weather influenced activity selection - **Iterative Refinement**: Multiple budget adjustments based on actual totals - **Stall Detection**: Automatically caught budget violations twice - **Context-Aware**: Transport recommendations considered family base location - **Replanning**: Created new plan when original exceeded constraints **Static Planning Would:** - Call all agents upfront in fixed order - Miss the budget violation until the end - Not adapt activities based on weather findings - Require manual intervention to fix budget issues ## Running the 
application ### Prerequisites - Java 21 or higher - Maven 3.6 or higher ### Build and run To run the application, you need to provide the following environment variable: - `OPENAI_API_KEY`: Your OpenAI API key. If you prefer to use a different LLM model, follow the instructions in `application.conf` to change it. **Note:** Weather information is simulated using randomized data. No external weather API is required. Set the environment variable: - On Linux or macOS: ```shell export OPENAI_API_KEY=your-openai-api-key ``` - On Windows (command prompt): ```shell set OPENAI_API_KEY=your-openai-api-key ``` Build and run the application: ```shell # Run the application mvn compile exec:java ``` ### Testing the agents With the application running, you can test the system using the following endpoints: #### Simple Query Start a new session with a simple query: ```shell curl -i -XPOST --location "http://localhost:9000/activities/alice" \ --header "Content-Type: application/json" \ --data '{"message": "I do not work tomorrow. I am in Madrid. What should I do? Beware of the weather"}' ``` #### Complex Query (Shows Adaptive Orchestration) Test MagenticOne's adaptive capabilities with a complex, multi-constraint request: ```shell curl -i -XPOST --location "http://localhost:9000/activities/bob" \ --header "Content-Type: application/json" \ --data '{ "message": "Plan a weekend in Barcelona with my family (2 adults, 2 kids aged 8 and 10). Budget is 600 euros total. Find kid-friendly activities, family restaurants, and check if there are any special events this weekend. We are staying near Sagrada Familia." }' ``` This complex query will trigger: - Multiple agent interactions (weather, events, activities, restaurants, budget, transport) - Adaptive planning based on weather conditions - Budget constraint validation and replanning if needed - Transport feasibility checks based on location #### Retrieve Results Get the response for a specific session: ```shell curl -i -XGET --location "http://localhost:9000/activities/{userId}/{sessionId}" ``` Replace `{userId}` and `{sessionId}` with the values from your request. Example: ```shell $ curl "http://localhost:9000/activities/alice/c1219e5a-abae-44c0-959b-ff76aa22cb2e" The weather in Madrid is rainy tomorrow, so you might want to explore indoor attractions like the Prado Museum or Reina Sofia Museum. Alternatively, you can visit local cafes and food markets, such as Mercado de San Miguel, to enjoy some culinary delights without getting wet. If you're up for something more active, you could also consider visiting an escape room or an indoor sports facility. ``` #### Inspect Workflow State See the full orchestration details (facts gathered, plan created, agent interactions, replanning events): ```shell curl -i -XGET --location "http://localhost:9000/activities/{userId}/{sessionId}/state" ``` The state response includes: - Initial facts gathered about the task - Execution plan created by the LedgerAgent - Complete message history showing agent interactions - Progress evaluation decisions by the OrchestratorAgent - Replanning events when stalls are detected - Final status and completion details ## Deployment You can use the [Akka Console](https://console.akka.io) to create a project and deploy this service. Build container image: ```shell mvn clean install -DskipTests ``` Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html). 
Set up a secret containing the OpenAI API key: ```shell akka secret create generic agent-secrets \ --literal openai-key=$OPENAI_API_KEY ``` Deploy the service using the image tag from above `mvn install` and the secret: ```shell akka service deploy multi-agent multi-agent: --push \ --secret-env OPENAI_API_KEY=agent-secrets/openai-key ``` Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information. To understand the Akka concepts that are the basis for this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

# Ask Akka Agentic AI Example This sample illustrates how to build an AI agent that performs a RAG workflow. ## Running the app This sample requires an OpenAI API key and a MongoDB Atlas URI. ### OpenAI API To get the OpenAI API key, sign up/log in to find the key at https://platform.openai.com/api-keys ### MongoDB Atlas This sample includes a docker compose file with a pre-configured MongoDB instance. You can start it by running: ```shell docker-compose up -d ``` Alternatively, you can create an account on MongoDB Cloud. See the `Deploying` section below for details on configuration. ### Start the app The key needs to be exported as an environment variable: `OPENAI_API_KEY`. If you prefer to use a different LLM model, follow the instructions in `application.conf` to change it. Then, start the application locally: ```shell mvn compile exec:java ``` ### Indexing documentation To create the vectorized index, call: ```shell curl -XPOST localhost:9000/api/index/start ``` This call will take an extract of the Akka SDK documentation and create a vectorized index in MongoDB. The documentation files are located in `src/main/resources/md-docs/`. You can also add your own documentation files to this directory. ### Query the AI Use the Web UI at http://localhost:9000/ to make calls. Alternatively, call the API directly using curl. ```shell curl localhost:9000/api/ask --header "Content-Type: application/json" -XPOST \ --data '{ "userId": "001", "sessionId": "foo", "question":"How many components exist in the Akka SDK?"}' ``` This will run a query and save the conversational history in a `SessionEntity` identified by 'foo'. Results are streamed using SSE. ## Deploying You can use the [Akka Console](https://console.akka.io) to create a project and see the status of your service. ### MongoDB Atlas You get the MongoDB Atlas URI by signing up or logging in at https://cloud.mongodb.com. Create an empty database and add a database user with a password. The MongoDB console will give you a URI/connection string to copy. Note that you need to insert the database user password into the generated URI. You can export that as an environment variable: ```shell export MONGODB_ATLAS_URI="your generated URI goes here" ``` Before deploying the service, we need to modify the MongoDB configuration to allow external connections from Akka Automated Operations. For experimentation purposes, go to "Network Access" and allow access from anywhere. For production use cases, you should restrict access to only trusted IP addresses. Contact support to find out which IPs to allow. ### Deploy service 1. Build container image: ```shell mvn clean install -DskipTests ``` 2. Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html). 3. Set up a secret containing both the OpenAI API key and the MongoDB Atlas URI.
```shell akka secret create generic ask-akka-secrets \ --literal mongodb-uri=$MONGODB_ATLAS_URI \ --literal openai-key=$OPENAI_API_KEY ``` Note: this assumes you have your `$OPENAI_API_KEY` and `$MONGODB_ATLAS_URI` exported as required to run the project; otherwise, just pass the values directly. 4. Deploy the service using the image tag from above `mvn install`: ```shell akka service deploy ask-akka-agent ask-akka: \ --secret-env OPENAI_API_KEY=ask-akka-secrets/openai-key \ --secret-env MONGODB_ATLAS_URI=ask-akka-secrets/mongodb-uri \ --push ``` Note: the value of both ENV vars is set to `secret-name/key-name`, as defined in the previous command. Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.

# Travel Planner Agent The Travel Planner Agent is an AI-powered application built on the Akka SDK that helps users create personalized travel itineraries. The service leverages event sourcing and large language models (LLMs) to generate detailed travel plans based on user preferences and trip parameters. This example illustrates how to interact with an LLM using the Agent component. An Event Sourced Entity is used for durable state of user preferences and generated trips, and an Akka Workflow provides reliable orchestration of the interactions. To understand the Akka concepts that are the basis for this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation. This project contains a skeleton to create an agentic AI service. To understand more about these components, see [Developing services](https://doc.akka.io/java/index.html). Other examples can be found [here](https://doc.akka.io/java/samples.html). # Run locally Provide your OpenAI API key in the environment variable `OPENAI_API_KEY`. ```shell export OPENAI_API_KEY="your_openai_api_key" ``` Or change the `application.conf` file to use a different model provider. To start your service locally, run: ```shell mvn compile exec:java ``` ## User Profile API The Travel Planner service provides a REST API for managing user profiles. Below are examples of how to interact with the API using curl. Create a new user profile with name and email: ```shell curl -i localhost:9000/users/user-123 \ --header "Content-Type: application/json" \ -XPOST \ --data '{"name":"John Traveler","email":"john@example.com"}' ``` Retrieve a user profile by ID: ```shell curl -i localhost:9000/users/user-123 ``` Update a user's name and email: ```shell curl -i localhost:9000/users/user-123 \ --header "Content-Type: application/json" \ -XPATCH \ --data '{"name":"John Updated","email":"john.updated@example.com"}' ``` Add a travel preference to a user's profile: ```shell curl -i localhost:9000/users/user-123/preferences \ --header "Content-Type: application/json" \ -XPOST \ --data '{ "type": "ACCOMMODATION_TYPE", "value": "hotel", "priority": 5 }' ``` Available preference types: * ACCOMMODATION_TYPE (e.g., "hotel", "hostel", "apartment") * TRANSPORTATION_TYPE (e.g., "flight", "train", "car") * CUISINE (e.g., "italian", "vegetarian") * ACTIVITY (e.g., "hiking", "museums", "beaches") * CLIMATE (e.g., "warm", "cold", "moderate") * BUDGET_RANGE (e.g., "budget", "mid-range", "luxury") ## Travel Planner API Create a travel plan using the workflow, which will generate the plan, store it, and update the user's profile. The response includes the trip ID.
```shell curl -i localhost:9000/travel-planner/create \ --header "Content-Type: application/json" \ -XPOST \ --data '{ "userId": "user-123", "destination": "Paris, France", "startDate": "2025-06-15", "endDate": "2025-06-22", "budget": 2000.00 }' ``` Get a trip by ID: ```shell curl -i localhost:9000/travel-planner/trips/ ``` or as human-readable text: ```shell curl -i localhost:9000/travel-planner/trips//as-text ``` ### Run tests Run unit tests with: ```shell mvn test ``` Integration tests require that you have defined the OpenAI API key in the environment variable `OPENAI_API_KEY`. Then run integration tests with: ```shell mvn verify ``` ## Deployment Build container image: ```shell mvn clean install -DskipTests ``` Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/reference/cli/index.html). Set up a secret containing the OpenAI API key: ```shell akka secret create generic openai-api --literal key=$OPENAI_API_KEY ``` Deploy the service using the image tag from above `mvn install` and the secret: ```shell akka service deploy travel-agent travel-agent:tag-name --push \ --secret-env OPENAI_API_KEY=openai-api/key ``` Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information. ## Architecture Overview ### System Components The system consists of the following key components: 1. **Domain Models**: Core business entities like `UserProfile`, `TravelPreference`, and `TravelPlan` 2. **Entities**: Event-sourced entities that manage state changes (`UserProfileEntity`, `TripEntity`) 3. **Workflow**: Orchestrates the travel planning process (`TravelPlannerWorkflow`) 4. **Agent**: Interfaces with LLMs to generate travel plans (`TravelPlannerAgent`) 5. **HTTP Endpoints**: REST APIs for client interaction (`UserProfileEndpoint`, `TravelPlannerEndpoint`) ### Component Interactions #### Travel Plan Creation Flow ```mermaid sequenceDiagram participant Client participant TravelAPI as TravelPlannerEndpoint participant Workflow as TravelPlannerWorkflow participant Agent as TravelPlannerAgent participant UserEntity as UserProfileEntity participant TripEntity participant LLM as OpenAI GPT-4o-mini Client->>TravelAPI: POST /travel-planner/create TravelAPI->>Workflow: createTravelPlan(command) Workflow->>Agent: generateTravelPlan(userId, destination, dates, budget) Agent->>UserEntity: getUserProfile() UserEntity-->>Agent: UserProfile (with preferences) Agent->>LLM: createTravelPlanJson(tripDetails) LLM-->>Agent: JSON response Agent-->>Workflow: TravelPlan Workflow->>TripEntity: createTrip(tripId, plan, etc.)
TripEntity-->>Workflow: Done Workflow->>UserEntity: addCompletedTrip(tripId) UserEntity-->>Workflow: Done Workflow-->>TravelAPI: CreateTravelPlanResponse TravelAPI-->>Client: {tripId: "..."} ``` #### User Profile Management Flow ```mermaid sequenceDiagram participant Client participant UserAPI as UserProfileEndpoint participant UserEntity as UserProfileEntity Client->>UserAPI: POST /users/{userId} UserAPI->>UserEntity: createUserProfile(userId, name, email) UserEntity-->>UserAPI: Done UserAPI-->>Client: Success Client->>UserAPI: POST /users/{userId}/preferences UserAPI->>UserEntity: addTravelPreference(preference) UserEntity-->>UserAPI: Done UserAPI-->>Client: Success Client->>UserAPI: GET /users/{userId} UserAPI->>UserEntity: getUserProfile() UserEntity-->>UserAPI: UserProfile UserAPI-->>Client: UserProfileResponse ``` ### Key Technologies - **Akka SDK**: Provides the foundation for building agentic, event-sourced, stateful services - **Event Sourcing**: Used to track all state changes in the system - **OpenAI GPT-4o-mini**: The LLM used to generate travel plans ### Data Flow 1. **User Profile Creation**: - User data and preferences are stored in the `UserProfileEntity` - Each preference change is tracked as an event 2. **Travel Plan Generation**: - User requests a travel plan with destination, dates, and budget - The workflow coordinates the process - The agent retrieves user preferences and generates a prompt for the LLM - The LLM returns a structured JSON response - The JSON is parsed into a `TravelPlan` domain object - The plan is stored in the `TripEntity` - The trip ID is added to the user's profile 3. **Trip Retrieval**: - Clients can retrieve trip details using the trip ID - The response includes the structured travel plan ## Future Enhancements 1. **Feedback Loop**: Incorporate user feedback to improve future recommendations 2. **Real-time Updates**: Integrate with external APIs for real-time pricing, availability, weather information, etc. 3. **Multi-modal Responses**: Support for maps, images, and other rich content 4. **Collaborative Planning**: Allow multiple users to collaborate on a trip 5. **Personalization**: Deeper personalization based on past trips and preferences

# Temperature Monitoring Agent A temperature monitoring system that collects, aggregates, and analyzes temperature data from IoT sensors. The system uses AI to generate insights about temperature trends and anomalies across different locations. It is built using Akka components such as: - HTTP Endpoint, - Key Value Entity, - Timed Action, - View, - and the most recent addition, the Agent component, which streamlines the interaction with LLMs. ## Features - Simulates IoT temperature sensors in three locations (Boiler Room A, Server Room B, Warehouse C) - Collects and aggregates temperature data in minute-based time windows - Calculates average, minimum, and maximum temperatures for each location - Uses OpenAI's GPT-4o model to analyze temperature patterns and detect anomalies - Provides REST API endpoints to access current and historical temperature data ## Getting Started ### Prerequisites - Java 21 installed - Apache Maven - Docker (for deployment) - OpenAI API key ### Building the Project Use Maven to build the project: ```shell mvn compile ``` ### Running Locally Export the necessary environment variables for the Akka service: ```shell export OPENAI_API_KEY="your_openai_api_key" ``` Or change the `application.conf` file to use a different model provider.
To start the service locally, run: ```shell mvn compile exec:java ``` Open your browser and navigate to `http://localhost:9000` to access the service. ## Deployment You can use the [Akka Console](https://console.akka.io) to create a project and deploy this service. Once you have a project created, follow these steps. For more information on deployment, refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html). To understand the Akka concepts that are the basis for this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

# Medical Discharge Summary Tagging System ## Problem Statement Scientific research at medical universities often involves processing large volumes of unstructured data—a task that is both time-consuming and largely manual, typically performed by students or researchers. A prime example of such data is **medical discharge summaries**. These documents detail a patient’s condition, treatment, and post-discharge care. They are typically lengthy and packed with important information that requires analysis. There is no common template; each physician has their own style, and the summaries can vary significantly in length and content. ## Experiment Overview This experiment investigates whether an **AI Agentic System** can automatically tag medical discharge summaries based on the necessity of hospitalization. The system leverages **LLMs** (Large Language Models) to process summaries and assign tags, while also enabling human verification and comparative analysis. The approach involves importing a dataset of summaries already tagged by human experts into the system. These same summaries are then processed by the AI, allowing researchers to compare AI-generated tags with expert assessments. The system tracks each tagging agent and provides statistics on how well the AI aligns with expert judgments. Researchers can iterate on the tagging process—modifying prompts and LLM configurations—until satisfactory results are achieved or the AI proves insufficient for the task. Once the optimal configuration (prompt and LLM model) is identified, the system can autonomously tag new summaries without requiring expert input. ## Core Functionality ### Tagging Process The application processes hospital discharge summaries and assigns one of the following tags: - **HOSPITALIZATION_REQUIRED**: Hospitalization was clearly necessary. - **HOSPITALIZATION_NOT_REQUIRED**: Hospitalization was clearly unnecessary. - **UNCERTAIN**: Unclear whether hospitalization was necessary. - **ERROR**: An issue with the summary or inability to assess necessity. ### Key Features - **AI-Powered Analysis**: Automatically analyzes and tags summaries. - **Confidence Scoring**: Provides confidence levels for each tag. - **Expert Verification**: Allows experts to review and validate tags. - **Comparative Analysis**: Compares AI results with expert decisions. - **Progress Tracking**: Tracks workflow progress and completion. - **Correctness Evaluation**: Measures AI alignment with expert tagging. ## Data Flow 1. **Data Import** Discharge summaries are imported into the system as Akka Key Value Entities. ![discharge_summaries.png](docs/discharge_summaries.png) 2. **Tagging Initiation** Users start tagging agents using configurable prompts (via Akka Workflow). Multiple agents can process data concurrently. ![tagging_workflows.png](docs/tagging_workflows.png) 3.
**AI Tagging** The AI processes each summary and assigns a tag with a confidence score. 4. **Storage and Review** Tagged summaries are saved and made available for expert review. 5. **Comparison** Experts compare their own tags with those assigned by the AI. ![tagged_discharge_summaries.png](docs/tagged_discharge_summaries.png) 6. **Statistics Collection** System tracks accuracy and agreement between AI and expert tagging. ## Getting Started ### Prerequisites - Java 21 installed - Apache Maven - Docker (for deployment) ### Building the Project Use Maven to build the project: ```shell mvn compile ``` ### Running Locally Export the necessary environment variables for the Akka service: ```shell export OPENAI_API_KEY="your_openai_api_key" ``` Or change the `application.conf` file to use a different model provider. To start the service locally, run: ```shell mvn compile exec:java ``` The service will be available at `http://localhost:9000`. ### SCSS and Styling The project uses SCSS for styling with Bootstrap 5 dark theme. The SCSS files are located in `src/main/resources/scss/` and compiled to CSS in `src/main/resources/static-resources/`. #### Available Commands To compile SCSS to CSS once: ```shell npm run build-css ``` To watch SCSS files for changes and automatically compile: ```shell npm run watch-css ``` #### Bootstrap Dark Theme Configuration The project uses Bootstrap dark theme which requires: 1. HTML files with `data-bs-theme="dark"` attribute on the html tag 2. SCSS configuration with: - `$theme: "dark"` variable - Using `@import` instead of `@use` for Bootstrap - Dark theme variables (e.g., `$body-bg: #000000`) After making changes to SCSS files: 1. Compile with `npm run build-css` 2. Rebuild the application with `mvn clean compile` #### Serving Static Resources Static resources like CSS files must be explicitly mapped in the `StaticResourceEndpoint.java` class for the Akka framework to properly serve these files. ### Launch the tagging agent Go to http://localhost:9000 and click "Start Tagging Process". ### API Usage The application exposes several RESTful endpoints: #### Tagging Workflows - `GET /taggings`: List all tagging agents - `GET /taggings/{id}`: Get a specific tagging agent - `POST /taggings`: Start a new tagging agent - `GET /taggings/last-prompt`: Get the prompt from the most recent tagging #### Tagged Discharge Summaries - `GET /tagged-summaries/{taggingId}`: Get all tagged summaries for a specific agent - `GET /tagged-summaries`: Get all tagged summaries ### Web Interface The web interface provides several views: 1. **Main Dashboard**: Lists all tagging workflows with progress information 2. **Tagging Details**: Shows detailed information about a specific tagging workflow 3. **Tagged Summaries**: Displays all discharge summaries processed in a workflow with their tags ## Testing Run the test suite using Maven: ```shell mvn test ``` To run a specific test: ```shell mvn test -Dtest=IntegrationTest ``` ## Deployment Build a container image: ```shell mvn clean install -DskipTests ``` Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/reference/cli/index.html). Deploy the service: ```shell akka service deploy medi-tag medi-tag:tag-name --push ``` Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.
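Tying the pieces above together, the result that flows through the tagging, review, and statistics steps could be shaped roughly like the record below; all names are hypothetical, not the sample's actual domain model:

```java
// A sketch of a tagged result (hypothetical names, not the sample's actual code).
public record TaggedSummary(String summaryId, Tag aiTag, double confidence, Tag expertTag) {

  public enum Tag {
    HOSPITALIZATION_REQUIRED,
    HOSPITALIZATION_NOT_REQUIRED,
    UNCERTAIN,
    ERROR
  }

  // Agreement between the AI and the expert, aggregated into the correctness statistics.
  public boolean matchesExpert() {
    return aiTag == expertTag;
  }
}
```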
## Documentation For more information about the underlying Akka platform, see: - [Development Process](https://doc.akka.io/concepts/development-process.html) - [Developing services](https://doc.akka.io/java/index.html) - [Examples](https://doc.akka.io/java/samples.html)

# GitHub changelog summarizer ## What the sample is/does This sample showcases an AI assistant implemented with the Akka SDK that observes GitHub releases and automatically summarizes the release contents. ## Design overview When a repository is added to the service, it creates an event sourced entity representing the repository. A Timed Action is set up to poll the GitHub release notes periodically. If it detects a release that it has not seen before, a summarization session is started using the Anthropic Claude LLM with tools, allowing Claude to request more details for issue and pull request IDs mentioned in the release notes. Once the LLM is done creating a summary, the summary is stored in the repository entity. A consumer listens for summaries and can act by publishing the summary somewhere (currently it logs it). The interactions with Anthropic Claude use the [Anthropic Java SDK](https://github.com/anthropics/anthropic-sdk-java). ## Running the sample To interact with the Anthropic LLM, the environment variable `ANTHROPIC_API_KEY` must be set to a key obtained from the [Anthropic console](https://console.anthropic.com/). Alternatively, change the `application.conf` file to use a different model provider. The GitHub API interactions are done anonymously by default, but a default API key can be defined using the environment variable `GITHUB_API_TOKEN` or the config `github-api-token` in `application.conf`. Specific API keys can also be specified when a GitHub repository is added, to be used for interactions with that repository. ### Interacting with the sample Set up a repository for tracking; this will trigger a summary fairly quickly. A GitHub API access token for the specific repository can be provided through the body JSON field "gitHubApiToken". Note, however, that providing a key that allows access to a private repository will pass potentially private information from that project's release notes and issues to Anthropic Claude. ```shell curl http://localhost:9000/repo/akka/akka \ --header "Content-Type: application/json" \ -XPOST \ --data '{}' ``` Fetch generated and stored summaries for a given project: ```shell curl http://localhost:9000/repo/akka/akka/summaries ``` Trigger a one-off summary of the latest release (can be run without any prior calls). This is useful for playing around with changes to the prompt: ```shell curl http://localhost:9000/testing/repo/akka/akka-sdk/summarize-latest \ -XPOST ``` ## System Architecture ### Core Components #### 1. Repository Entity - **Event-sourced entity** representing a GitHub repository - Stores repository metadata and generated summaries - Manages the state of tracked releases #### 2. Timed Action Scheduler - Periodically polls GitHub for new releases - Triggers summarization when new releases are detected #### 3. Summarization Session - Coordinates the interaction with Anthropic Claude LLM - Provides tools for the LLM to fetch additional information about issues and PRs - Processes the final summary #### 4. GitHub API Client - Handles communication with GitHub's API - Fetches release notes, issue details, and PR information - Supports both anonymous and authenticated access #### 5.
Anthropic Client - Manages communication with Anthropic's Claude LLM - Sends prompts and processes responses - Handles tool usage requests from the LLM #### 6. Summary Consumer - Listens for newly generated summaries - Currently logs the summaries (could be extended to publish to other channels) #### 7. HTTP API - Exposes endpoints for managing repositories and retrieving summaries - Includes testing endpoints for triggering one-off summarizations ### LLM Tools Integration The system leverages Anthropic Claude's function calling capabilities through the following tools: #### fetchIssueDetails Tool - **Purpose**: Allows the LLM to request additional context about GitHub issues mentioned in release notes - **Input**: GitHub issue ID (integer) - **Output**: Detailed information about the issue, including title, description, labels, and comments - **Usage**: When release notes contain minimal information about an issue (e.g., just "Fixed #123"), the LLM can use this tool to gather more context for creating a comprehensive summary The tool integration follows a request-response pattern: 1. Claude identifies an issue needing more context 2. Claude calls the fetchIssueDetails tool with the issue ID 3. The SummarizerSession fetches the details from GitHub 4. The details are returned to Claude for incorporation into the summary This agentic approach allows the LLM to autonomously decide when additional information is needed, making the summaries more informative and accurate. ### Component Interactions #### Repository Registration Flow ```mermaid sequenceDiagram participant User participant API as HTTP API participant Entity as Repository Entity participant Timer as Timed Action User->>API: POST /repo/{owner}/{repo} API->>Entity: Create entity if not exists Entity->>Entity: Initialize state Entity->>Timer: Schedule periodic polling API->>User: 200 OK Response ``` #### Release Detection and Summarization Flow ```mermaid sequenceDiagram participant Timer as Timed Action participant Entity as Repository Entity participant GitHub as GitHub API participant Session as Summarizer Session participant Claude as Anthropic Claude participant Consumer as Summary Consumer Timer->>Entity: Trigger check for new releases Entity->>GitHub: Get latest release GitHub->>Entity: Return release details alt New release detected Entity->>Session: Start summarization Session->>Claude: Send release notes with tools loop Tool Usage Claude->>Session: Request issue/PR details Session->>GitHub: Fetch requested details GitHub->>Session: Return details Session->>Claude: Provide requested information end Claude->>Session: Return final summary Session->>Entity: Store summary Entity->>Consumer: Notify of new summary end ``` ### Data Flow ```mermaid graph TD A[GitHub API] -->|Release Notes| B[Repository Entity] A -->|Issue/PR Details| C[Summarizer Session] B -->|Trigger Summarization| C C -->|Prompt with Tools| D[Anthropic Claude] D -->|Tool Requests| C D -->|Final Summary| C C -->|Store Summary| B B -->|New Summary| E[Summary Consumer] E -->|Log/Publish| F[Output Channel] ```

# Real Estate Customer Service Agent A sample application demonstrating how to build a customer service agent for real estate inquiries by combining Akka features and an LLM model. ## Overview This project illustrates an Agentic workflow for real estate customer service using Akka.
The agent: - Processes incoming email inquiries about real estate properties - Analyzes the content of each email to extract details on what the customer is looking for - Provides in-context follow-up responses when needed - Saves the collected information for further processing ### Akka components This sample leverages specific Akka components: - **Agent**: abstracts the interaction with the LLM model and provides out-of-the-box memory for the interactions with the customer. It also provides access to two tools: one to send customer emails and one to save customer information. Tool executions requested by the LLM will be automatically invoked. - **Workflow**: manages the customer inquiry process from the first interaction until all the information is collected, serving as temporary storage for emails yet to be processed. - **EventSourced Entity**: used to maintain the customer information details, registering all updates as a sequence of events. - **Timers**: used to schedule tasks that need to be executed at a later time. In this case, a timer is scheduled to send a follow-up to the client if there is no reply within a default time span. - **HTTP Endpoint**: used to serve the application endpoints for receiving email inquiries (`/emails`) ### Typical flow The following diagram illustrates the typical interaction flow of the Real Estate Customer Service Agent and the way the different Akka Components interact: ```mermaid sequenceDiagram participant Client participant HTTPEndpoint as HTTP Endpoint participant Workflow as Workflow participant Agent as Agent participant Entity as EventSourced Entity participant Timer as Timers Client->>HTTPEndpoint: Send initial inquiry email HTTPEndpoint->>Workflow: Create new workflow instance Note over Workflow: Initialize customer inquiry process Workflow->>Agent: Process email content Agent->>Agent: Analyze with AI model alt Information incomplete Agent->>Client: Send follow-up email (using tool) Agent-->>Workflow: Return WAIT_REPLY status Workflow->>Timer: Schedule follow-up reminder Note over Workflow: Wait for response Client->>HTTPEndpoint: Send email with additional information HTTPEndpoint->>Workflow: Forward to existing workflow Workflow->>Timer: Cancel reminder Workflow->>Agent: Process updated email Agent->>Agent: Analyze updated information else All information collected Agent->>Entity: Save customer information (using tool) Entity->>Entity: Persist customer data as events Entity-->>Agent: Confirmation Agent-->>Workflow: Return ALL_INFO_COLLECTED status Note over Workflow: Mark inquiry as complete end ``` ## Running the application ### Prerequisites - Java 21 or higher - Maven 3.6 or higher ### Build and run To run the sample, you need to provide an OpenAI API key as an environment variable: ```shell export OPENAI_API_KEY=your-openai-api-key ``` Or change the `application.conf` file to use a different model provider. Build and run the application: ```shell # Run the application mvn compile exec:java ``` ### Testing the agent With the application running, you can test the agent using the following endpoints: Submit an email inquiry: ```shell curl -i -XPOST --location "http://localhost:9000/emails" \ --header "Content-Type: application/json" \ --data '{"sender": "john@example.com", "subject":"Looking to rent T2 in Porto", "content": "Hello, I am looking to rent a T2 in Porto. Can you help me?"}' ``` The agent will likely decide to follow up with an email to ask for more information (your full name, phone number, etc.).
## Running the application

### Prerequisites

- Java 21 or higher
- Maven 3.6 or higher

### Build and run

To run the sample, you need an OpenAI API key, provided as an environment variable:

```shell
export OPENAI_API_KEY=your-openai-api-key
```

Or change the `application.conf` file to use a different model provider.

Build and run the application:

```shell
# Run the application
mvn compile exec:java
```

### Testing the agent

With the application running, you can test the agent using the following endpoints:

Submit an email inquiry:

```shell
curl -i -XPOST --location "http://localhost:9000/emails" \
  --header "Content-Type: application/json" \
  --data '{"sender": "john@example.com", "subject":"Looking to rent T2 in Porto", "content": "Hello, I am looking to rent a T2 in Porto. Can you help me?"}'
```

The agent will likely decide to follow up with an email asking for more information (your full name, phone number, etc.).

Once you provide the missing information:

```shell
curl -i -XPOST --location "http://localhost:9000/emails" \
  --header "Content-Type: application/json" \
  --data '{"sender": "john@example.com", "subject":"Looking to rent T2 in Porto", "content": "My name is John Doe. My number is 911111111. Looking for an apartment."}'
```

The information collection process should then be marked as completed and ready for human follow-up.

## Deployment

You can use the [Akka Console](https://console.akka.io) to create a project and deploy this service. Once you have a project created, follow these steps.

#### Build docker image

```shell
mvn clean install -DskipTests
```

#### Setup OpenAI API key

```shell
akka secret create generic openai-api --literal key=$OPENAI_API_KEY
```

NOTE: this assumes you have `$OPENAI_API_KEY` exported as required to run the project; otherwise, pass the value directly.

#### Push image and deploy the service

```shell
akka service deploy real-estate-cs-agent real-estate-cs-agent: \
  --secret-env OPENAI_API_KEY=openai-api/key --push
```

NOTE: the value of `OPENAI_API_KEY` is set to `secret-name/key-name` as defined in the previous command: secret-name=openai-api and key-name=key.

For more information on deployment, refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html).

To understand the Akka concepts that are the basis for this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

# trip-agent

This app represents an agency that searches for flights and accommodations based on a prompt submitted by the user through an HTTP call. It is composed of an LLM model and tools for finding flights, finding accommodations, and sending emails. Once a search is requested, the app will look for flights and accommodations and email the requester with some options and the best-value offer.

## Prerequisites

## Running locally

Start a local email service:

```shell
docker run -d -p 1025:1025 -p 8025:8025 mailhog/mailhog
```

Add your model API key. Here 'anthropic' is used, but it could be another model provider. See `application.conf`:

```shell
export ANTHROPIC_API_KEY=[your-key-here]
```

### Start the application

```shell
mvn compile exec:java
```

### Call the service

```shell
curl http://localhost:9000/trip/search \
  -H "Content-Type: application/json" \
  -d '{"question": "find a bookingTripRequest from seoul to tokyo and back, from 2026-05-07 to 2026-05-14 The flight price not higher than 300 total and the total accommodation for the week not higher than 600. Send the suggestion to 'test.user@gmail.com'" }'
```

This will return a `uuid` that you can later use to check the state of the workflow.

The email sent by the agent (which you can find at `localhost:8025`) should look something like:

![mail_header.png](mail_header.png)
![mail_center.png](mail_center.png)
![mail_bottom.png](mail_bottom.png)

### Check the state

The workflow is in charge of leveraging the AI agent to look for flights and accommodations and to send the email. When the workflow state is `RequestStatus[tag=SUCCESSFULLY_FINISHED, ...]`, it should also contain the lists of flights and accommodations. You can check it with the following:

```shell
curl http://localhost:9000/trip/workflow/[uuid-here]
```

If all went well, flights have been created in the system and you should be able to access them like this:

```shell
curl http://localhost:9000/trip/flight/12
```

Same with the accommodations.
```shell
curl http://localhost:9000/trip/accommodation/117
```

### Book a trip

Since you can see which flights and accommodations the workflow state holds (mentioned above), you can now book a trip:

```shell
curl http://localhost:9000/trip/book \
  -d '{"flightRef":"12", "accommodationRef":"117"}' \
  --header "Content-type: application/json"
```

You might want to check the state again to verify that the flight and accommodation have been booked:

```shell
curl http://localhost:9000/trip/flight/12
```

```shell
curl http://localhost:9000/trip/accommodation/117
```

## Running in Akka infra

First you need to [deploy](https://doc.akka.io/operations/services/deploy-service.html) the service.

To send emails after deploying this app in the Akka infrastructure, you need to [set the env vars](https://doc.akka.io/operations/projects/secrets.html#_set_secrets_as_environment_variables_for_a_service):

```shell
SMTP_HOST
SMTP_PORT
SMTP_AUTH
SMTP_START_TLS
```

To use the Anthropic LLM, set the following env var:

```shell
ANTHROPIC_API_KEY
```

> **Note**: Sending emails has only been tested when running locally.

---------

Possible routes to extend the example:

- improve the prompt (https://docs.spring.io/spring-ai/reference/api/prompt.html) and include Patrik's suggestions
- pre-reserve the best-value trip after the request and send an email with that reservation to the user
- use timers to check whether the flights/accommodations are still available on the market (the fake implementation must then be refactored to use a real/external endpoint)
- use views to check whether flights already exist for the requested dates/constraints

# Agentic Haiku - AI-Powered Conference Content Generation

Agentic Haiku is an intelligent system that automatically generates haiku poetry and accompanying images. The application demonstrates workflow orchestration using AI agents for content creation and quality assurance.

## The Haiku Workflow and Its Agents

The system employs multiple specialized AI agents working together in coordinated workflows:

### Core Haiku Generation Workflow

1. **Content Quality Check** - `ToxicityDetectorAgent`
   - Analyzes input text for toxic, hateful, or inappropriate content
   - Blocks harmful content from proceeding through the pipeline
2. **Sentiment Analysis** - `SentimentDetectorAgent` (see the sketch after this list)
   - Evaluates the emotional tone of input messages
   - Filters out negative sentiment to maintain positive brand messaging
   - Only allows positive or neutral content to proceed
3. **Haiku Generation** - `HaikuGenAgent`
   - Creates original haiku poetry following the traditional 5-7-5 syllable structure
   - Captures the essence and mood of the source material
4. **Image Generation**
   - Integrated with external image generation services
   - Creates visual artwork to accompany each haiku
   - Supports both Google Gemini and fixed image fallback options
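As an illustration of the gating agents, here is a minimal, hypothetical sketch of a sentiment gate. The prompt, names, and String-based reply are assumptions; the real `SentimentDetectorAgent` may differ:

```java
import akka.javasdk.agent.Agent;
import akka.javasdk.annotations.ComponentId;

// Hypothetical sketch only: prompt and signature are assumed.
@ComponentId("sentiment-detector-agent")
public class SentimentDetectorAgent extends Agent {

  private static final String SYSTEM_MESSAGE =
      """
      Classify the sentiment of the given text.
      Answer with exactly one word: POSITIVE, NEUTRAL or NEGATIVE.
      """;

  public Effect<String> detectSentiment(String text) {
    return effects()
        .systemMessage(SYSTEM_MESSAGE)
        .userMessage(text)
        .thenReply();
  }
}
```

The orchestrating workflow can then branch on the reply, passing only POSITIVE or NEUTRAL content on to `HaikuGenAgent`.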
## Running Locally

### Prerequisites

To use Google Gemini for AI agents and image generation, you'll need to configure Google Cloud credentials:

1. **Set up Google Cloud Project**: Ensure you have a Google Cloud project with the AI Platform API enabled
2. **Configure Service Account**: Create a service account with appropriate permissions for AI Platform and Cloud Storage
3. **Set Environment Variables**:

```shell
# For Google Gemini AI agents
export GOOGLE_AI_GEMINI_API_KEY="your-gemini-api-key"

# For Google Cloud services (image generation and storage)
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json"
```

**Note**: Without these credentials, the application will fall back to:

- Fixed/mock image generation instead of Gemini image generation
- Local file storage instead of Google Cloud Storage

### Running the Application

Start the required infrastructure (Postgres and Jaeger):

```shell
docker compose up
```

Start the service with tracing enabled:

```shell
TRACING_ENABLED=true COLLECTOR_ENDPOINT="http://localhost:4317" mvn compile exec:java
```

## Using the Application

Once the application is running (default port: 9000), you can interact with it through multiple interfaces:

### Web Interface

Access the main application through your browser: http://localhost:9000/

#### Haiku API

Generate a haiku:

```shell
curl -X POST http://localhost:9000/haikus/123 \
  -H "Content-Type: application/json" \
  -d '{ "text": "Your creative text here" }'
```

Get the haiku result:

```shell
curl http://localhost:9000/haikus/123
```

# Healthcare Agent

## Description

A personal health agent with data from fitness trackers, medical records, and other sensors.

## Architecture

```mermaid
sequenceDiagram
    %% Actors & Stores
    participant Chat UI
    participant AgentEndpoint
    participant HealthAgent
    participant SensorTool
    participant FitbitTool
    participant MedicalRecordRAG
    participant LLM
    participant IngestionEndpoint
    participant SensorEntity
    participant MongoDB
    participant Fitbit API

    %% Data-ingestion flows (out of band)
    IngestionEndpoint->>SensorEntity: Store incoming sensor data
    IngestionEndpoint->>MongoDB: Index & save embeddings for medical records

    %% Interactive query flow
    Chat UI->>+AgentEndpoint: User message / question
    AgentEndpoint->>+HealthAgent: Invoke agent with user request

    loop 1..N tool calls (determined by agent planning)
        par Sensor data (if required)
            HealthAgent->>SensorTool: Retrieve sensor data
            SensorTool->>SensorEntity: Read stored sensor values
            SensorTool-->>HealthAgent: Sensor results
        and Fitbit data (if required)
            HealthAgent->>FitbitTool: Retrieve Fitbit metrics
            FitbitTool->>Fitbit API: /activities, /sleep, /heartrate …
            FitbitTool-->>HealthAgent: Fitbit results
        and Medical record info (if required)
            HealthAgent->>MedicalRecordRAG: Query medical history
            MedicalRecordRAG->>MongoDB: Vector search over embeddings
            MedicalRecordRAG-->>HealthAgent: Relevant medical info
        end
    end

    %% New step: compose prompt for LLM
    HealthAgent->>LLM: Prompt with (original question + tool results)
    LLM-->>HealthAgent: Draft answer
    HealthAgent-->>-AgentEndpoint: Final answer
    AgentEndpoint-->>Chat UI: Stream / return response
```

### Streaming Endpoints

Respond quickly: process inputs and respond partially without waiting for the entire input to be processed.

- **Ingestion Endpoint**:
  - Stores sensor data both in memory and in persistent storage
  - Indexes medical records and stores them in a vector database
- **Agent Endpoint**: Handles incoming requests from users

### Agent

#### Agent Context Database

LLMs are stateless, so we store and retrieve the context of each session.
- **Session Entity**: Represents a user session and stores context data

#### Agent Connectors

Talk to LLMs, vector DBs, MCP servers, enterprise APIs, and other systems.

- **Fitbit**: Access health data from Fitbit devices via an API
- **Sensor Data**: Access data from other sensors by reading our persistent storage
- **Medical Records**: Access medical history and lab results through RAG (Retrieval-Augmented Generation)

#### Agent Orchestration

Execute reliably: durable workflows ensure that agent actions and LLM calls execute reliably, even in the face of failures, timeouts, hallucinations, or restarts.

> Currently, the agent does not use Akka's Orchestration Workflow.
> When the agent receives a request, the agent determines if zero, one or more tools are needed to answer the question.
> The agent then calls the tools (SensorTool, FitbitTool, as well as doing RAG on the medical records) and waits for the results.
> After that, the agent calls the LLM with the results from the tools and the original question.
> Then the agent returns the answer to the user.
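To illustrate this tool calling, here is a minimal, hypothetical sketch of how a sensor lookup could be exposed to the agent as a function tool. The store interface and all names are assumptions, not the sample's actual `SensorTool`:

```java
import akka.javasdk.annotations.FunctionTool;

// Hypothetical sketch only: the real SensorTool reads the persisted
// sensor entity; here a minimal store interface stands in for it.
public class SensorTool {

  public interface SensorStore {
    String latestReadings(String userId);
  }

  private final SensorStore store;

  public SensorTool(SensorStore store) {
    this.store = store;
  }

  @FunctionTool(description = "Read the stored sensor data for a user")
  public String getSensorData(String userId) {
    // the agent calls this when it decides sensor data is needed
    return store.latestReadings(userId);
  }
}
```

When registered with the agent, the LLM can decide per request whether to call this tool, the Fitbit tool, the medical-record RAG lookup, several of them, or none at all.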
## Usage

Set up the environment variables:

```shell
export OPENAI_API_KEY=YOUR-OPENAI-API-KEY-HERE
export MONGODB_ATLAS_URI=YOUR-CONNECTION-STRING-HERE

# Fitbit Access Token
# (Fitbit's API does not support the client credentials flow, so you need to use the
# authorization code flow, which requires user interaction.)
# Retrieve your tokens manually via https://dev.fitbit.com/build/reference/web-api/troubleshooting-guide/oauth2-tutorial/
export FITBIT_ACCESS_TOKEN=YOUR-FITBIT-ACCESS-TOKEN
```

Alternatively, change the `application.conf` file to use a different model provider.

Start the service locally:

```shell
mvn compile exec:java
```

The chat is available at:

```shell
http://localhost:9000/
```

Open the Akka Local Console for monitoring:

```shell
akka local console
```

## Testing

Visit:

```shell
http://localhost:9000/
```

### Sensor Data

Ask the agent:

```text
Can you tell me my heart rate by searching the sensor data?
```

Example response:

```text
It seems that there is no heart rate data available in your sensor data.
```

Ingest sensor data:

```shell
curl -X POST http://localhost:9000/ingest/sensor -H "Content-Type: application/json" -d '{
  "userId": "demo-user",
  "data": {
    "userId": "demo-user",
    "source": "smartwatch",
    "description": "heart rate",
    "value": "90 bpm"
  }
}'
```

Ask the agent:

```text
and now?
```

Example response:

```text
Your heart rate is 90 bpm.
```

### Medical Record (RAG)

Ask the agent:

```text
Can you tell me the reason for my last visit to the doctor?
```

Example response:

```text
I don't have specific details of your medical records.
```

Ingest medical records:

```shell
curl -X POST http://localhost:9000/ingest/medical-record -H "Content-Type: application/json" -d '{
  "userId": "demo-user",
  "data": {
    "patientId": "demo-user",
    "reasonForVisit": "Severe lower back pain",
    "diagnosis": "Pinched nerve",
    "prescribedMedication": "Ibuprofen and massage therapy",
    "notes": "Has an office job. Sits for long hours. Doesn't do any exercise."
  }
}'
```

Ask the agent:

```text
and now?
```

Example response:

```text
The reason for your last visit to the doctor was severe lower back pain. You were diagnosed with a pinched nerve, and prescribed medication included ibuprofen along with massage therapy. The notes indicated that you have an office job, sit for long hours, and do not engage in any exercise.
```

### Fitbit Data

Ask the agent:

```text
How many steps did I take on the 26th April 2025?
```

(The agent uses the Fitbit Tool to get the data from the Fitbit API.)

Example response:

```text
On April 26, 2025, you took 17,243 steps.
```

Ask the agent:

```text
How much REM sleep did I get in the week from 21st April to 27th April 2025, on each day?
```

Example response:

```text
Here is the REM sleep you recorded for each day from April 21 to April 27, 2025:

April 21: 95 minutes
April 22: 121 minutes
April 23: 113 minutes
April 24: 105 minutes
April 25: 82 minutes
April 26: 121 minutes
April 27: 29 minutes
```

Ask the agent:

```text
What other health metrics do you have for the 27th April 2025?
```

Example response:

```text
On April 27, 2025, here are the health metrics for you:

Sleep Hours: Approximately 5.43 hours
Steps Walked: 7,107 steps
Resting Heart Rate: 63 bpm
Active Minutes for the Week: 510 minutes
REM Sleep Minutes: 29 minutes
Heart Rate Outside Safe Range: 0 (your heart rate was within the safe range)

Please remember to consider your back pain and the importance of incorporating exercise and movement into your daily routine, especially given the sedentary nature of your job.
```

# Build a Shopping Cart

The section [Build your first application](https://doc.akka.io/sdk/build-your-first-application.html) shows how to build this Shopping Cart service step by step.

## Prerequisites

- An [Akka account](https://console.akka.io/register)
- Java 21 installed (recommended: [Eclipse Adoptium](https://adoptium.net/marketplace/))
- [Apache Maven](https://maven.apache.org/install.html)
- [Docker Engine](https://docs.docker.com/get-started/get-docker/)
- [`curl` command-line tool](https://curl.se/download.html)

## Concepts

### Designing

To understand the Akka concepts behind this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

### Developing

Please follow [Build your first application](https://doc.akka.io/sdk/build-your-first-application.html) to understand the details of this service implementation.

## Building

Use Maven to build your project:

```shell
mvn compile
```

## Running Locally

To start your Akka service locally, run:

```shell
mvn compile exec:java
```

## Exercising the service

With your Akka service running, any defined endpoints should be available at `http://localhost:9000`.

* Add an item

```shell
curl -i -XPUT -H "Content-Type: application/json" localhost:9000/carts/123/item -d '
{"productId":"akka-tshirt", "name":"Akka Tshirt", "quantity": 10}'
```

* Get cart state

```shell
curl localhost:9000/carts/123
```

* Remove an item

```shell
curl -i -XDELETE -H "Content-Type: application/json" localhost:9000/carts/123/item/akka-tshirt
```

* Checkout the cart

```shell
curl -i -XPOST localhost:9000/carts/123/checkout
```
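The endpoints exercised above are backed by an event-sourced entity. Below is a simplified sketch of what its command handlers and event handler can look like; the record and event names are assumptions here, and the linked guide shows the real implementation:

```java
import akka.Done;
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.annotations.TypeName;
import akka.javasdk.eventsourcedentity.EventSourcedEntity;

import java.util.ArrayList;
import java.util.List;

// Simplified sketch: names and records are assumed, not the guide's actual code.
@ComponentId("shopping-cart")
public class ShoppingCartEntity
    extends EventSourcedEntity<ShoppingCartEntity.Cart, ShoppingCartEntity.CartEvent> {

  public record LineItem(String productId, String name, int quantity) {}

  public record Cart(List<LineItem> items, boolean checkedOut) {}

  public sealed interface CartEvent {
    @TypeName("item-added")
    record ItemAdded(LineItem item) implements CartEvent {}

    @TypeName("checked-out")
    record CheckedOut() implements CartEvent {}
  }

  @Override
  public Cart emptyState() {
    return new Cart(List.of(), false);
  }

  public Effect<Done> addItem(LineItem item) {
    if (currentState().checkedOut())
      return effects().error("Cart is already checked out");
    return effects()
        .persist(new CartEvent.ItemAdded(item))
        .thenReply(cart -> Done.getInstance());
  }

  public Effect<Done> checkout() {
    return effects()
        .persist(new CartEvent.CheckedOut())
        .thenReply(cart -> Done.getInstance());
  }

  // Pure state transition: validations happened before the event was persisted.
  @Override
  public Cart applyEvent(CartEvent event) {
    return switch (event) {
      case CartEvent.ItemAdded added -> {
        var items = new ArrayList<>(currentState().items());
        items.add(added.item());
        yield new Cart(List.copyOf(items), false);
      }
      case CartEvent.CheckedOut ignored -> new Cart(currentState().items(), true);
    };
  }
}
```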
## Explore the local console

To get a clear view of your locally running service, [install the Akka CLI](https://doc.akka.io/operations/cli/installation.html). It provides a local web-based management console.

After you have installed the CLI, start the local console:

```shell
akka local console
```

This will start a Docker container running the local console:

```
───────────────────────────────────────────────────────────────────────
│ SERVICE                  │ STATE   │ ADDRESS                        │
───────────────────────────────────────────────────────────────────────
│ shopping-cart-quickstart │ Running │ localhost:9000                 │
───────────────────────────────────────────────────────────────────────

Local console: http://localhost:9889
(use Ctrl+C to quit)
```

You can open http://localhost:9889/ to see your local service in action.

## Deploy to akka.io

You can use the [Akka Console](https://console.akka.io) to create a project and see the status of your service.

Build container image:

```shell
mvn clean install -DskipTests
```

Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html).

Deploy the service using the image name and tag from the `mvn install` above:

```shell
akka service deploy shopping-cart shopping-cart-quickstart:tag-name --push
```

Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.

## Next steps

Now that you've built and deployed a shopping cart service, take your Akka skills to the next level:

- **Expand the service**: Explore [other Akka components](https://doc.akka.io/concepts/architecture-model.html) to enhance your application with additional features.
- **Explore other Akka samples**: Discover more about Akka by exploring [different use cases](https://doc.akka.io/getting-started/samples.html) for inspiration.

# Build a Customer Registry with Query Capabilities

This guide will walk you through the process of creating, retrieving, and updating information from a customer registry service.

## Prerequisites

- An [Akka account](https://console.akka.io/register)
- Java 21 (we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/))
- [Apache Maven](https://maven.apache.org/install.html)
- [Docker Engine](https://docs.docker.com/get-started/get-docker/)
- [`curl` command-line tool](https://curl.se/download.html)

## Concepts

### Designing

To understand the Akka concepts behind this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

### Developing

This project demonstrates the use of Event Sourced Entity and View components. For more information, see [Developing Services](https://doc.akka.io/java/index.html).

## Build

Use Maven to build your project:

```shell
mvn compile
```

## Run Locally

To start your service locally, run:

```shell
mvn compile exec:java
```

## Steps to interact with the HTTP endpoint

### 1. Create a new customer

To add a new customer to the registry, use the following command:

```shell
curl -i localhost:9000/customer/one \
  --header "Content-Type: application/json" \
  -XPOST \
  --data '{"email":"test@example.com","name":"Testsson","address":{"street":"Teststreet 25", "city":"Testcity"}}'
```

### 2. Retrieve customer information

To retrieve details of a specific customer:

```shell
curl localhost:9000/customer/one
```

### 3. Query customers by email

To find a customer using their email address:

```shell
curl localhost:9000/customer/by-email/test%40example.com
```

### 4. Query customers by name

To search for a customer by their name:

```shell
curl localhost:9000/customer/by-name/Testsson
```

### 5. Update customer name

To change a customer's name:

```shell
curl -i -XPATCH --header "Content-Type: application/json" localhost:9000/customer/one/name/joe
```

### 6. Update customer address

To modify a customer's address:

```shell
curl -i localhost:9000/customer/one/address \
  --header "Content-Type: application/json" \
  -XPATCH \
  --data '{"street":"Newstreet 25","city":"Newcity"}'
```
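The by-email and by-name queries above are served by View components that project the customer entity's events into a queryable table. Here is a minimal, hypothetical sketch of a by-name view; the row type, query, and the referenced `CustomerEntity`/`CustomerEvent` names are assumptions:

```java
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.annotations.Consume;
import akka.javasdk.annotations.Query;
import akka.javasdk.view.TableUpdater;
import akka.javasdk.view.View;

import java.util.List;

// Hypothetical sketch: the sample's actual view and event types may differ.
@ComponentId("customers-by-name")
public class CustomersByNameView extends View {

  public record CustomerRow(String customerId, String name, String email) {}

  public record Customers(List<CustomerRow> customers) {}

  @Consume.FromEventSourcedEntity(CustomerEntity.class) // entity name assumed
  public static class CustomersUpdater extends TableUpdater<CustomerRow> {
    public Effect<CustomerRow> onEvent(CustomerEvent event) { // event type assumed
      // project creation events into a row; other events elided in this sketch
      return switch (event) {
        case CustomerEvent.CustomerCreated created ->
            effects().updateRow(new CustomerRow(
                updateContext().eventSubject().orElse(""),
                created.name(),
                created.email()));
        default -> effects().ignore();
      };
    }
  }

  @Query("SELECT * AS customers FROM customers_by_name WHERE name = :name")
  public QueryEffect<Customers> findByName(String name) {
    return queryResult();
  }
}
```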
## Steps to interact with the gRPC endpoint

Requires the command-line tool [grpcurl](https://github.com/fullstorydev/grpcurl).

### 1. Inspect what services are available using gRPC reflection

```shell
grpcurl --plaintext localhost:9000 list
```

Or a more detailed listing of each service and its methods:

```shell
grpcurl --plaintext localhost:9000 describe
```

### 2. Create a new customer

```shell
grpcurl --plaintext \
  -d '{"customer_id": "one", "customer": {"name": "Grpc Testsson", "email":"grpc@example.com", "address": {"street":"Example Street", "city": "Sample Town"}}}' \
  localhost:9000 customer.api.CustomerGrpcEndpoint/CreateCustomer
```

### 3. Retrieve customer information

To retrieve details of a specific customer:

```shell
grpcurl --plaintext \
  -d '{"customer_id": "one"}' \
  localhost:9000 customer.api.CustomerGrpcEndpoint/GetCustomer
```

### 4. Query customers by email

To find a customer using their email address:

```shell
grpcurl --plaintext \
  -d '{"email": "grpc@example.com"}' \
  localhost:9000 customer.api.CustomerGrpcEndpoint/CustomerByEmail
```

### 5. Query customers by name

To search for a customer by their name:

```shell
grpcurl --plaintext \
  -d '{"name": "Grpc Testsson"}' \
  localhost:9000 customer.api.CustomerGrpcEndpoint/CustomerByName
```

### 6. Update customer name

To change a customer's name:

```shell
grpcurl --plaintext \
  -d '{"customer_id": "one", "new_name": "Grpc Testsson 2"}' \
  localhost:9000 customer.api.CustomerGrpcEndpoint/ChangeName
```

### 7. Update customer address

To modify a customer's address:

```shell
grpcurl --plaintext \
  -d '{"customer_id": "one", "new_address": {"street":"Upper Example Lane", "city": "Sample Town"}}' \
  localhost:9000 customer.api.CustomerGrpcEndpoint/ChangeAddress
```

## Troubleshooting

If you encounter issues, ensure that:

- The Customer Registry service is running and accessible on port 9000.
- Your `curl` commands are formatted correctly.
- The customer ID (e.g., "one") matches an existing customer in the registry.

## Need help?

For questions or assistance, please refer to our [online support resources](https://doc.akka.io/support/index.html).

## Deploying

You can use the [Akka Console](https://console.akka.io) to create a project and see the status of your service.

Build container image:

```shell
mvn clean install -DskipTests
```

Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html).

Deploy the service using the image tag from the `mvn install` above:

```shell
akka service deploy customer-registry customer-registry:tag-name --push
```

Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.

## Conclusion

Congratulations, you've successfully built and interacted with a customer registry service using Akka. This project demonstrates the power of Event Sourced Entity and View components in responsive applications. You've learned how to create, retrieve, update, and query customer information.

## Next steps

Now that you've built a basic customer registry, take your Akka skills to the next level:

1. **Expand the service**: Add features such as deleting customers or more complex query capabilities.
2. **Explore other Akka components**: Dive deeper into Akka's ecosystem to enhance your application.
3. **Join the community**: Visit the [Support page](https://doc.akka.io/support/index.html) to find resources where you can connect with other Akka developers and expand your knowledge.

# Build a Funds Transfer Workflow Between Two Wallets

This guide demonstrates how to create a simple workflow for transferring funds between two wallets.
It includes a compensation mechanism that handles scenarios where a deposit fails during a transfer, ensuring the system maintains consistency.

## Prerequisites

- An [Akka account](https://console.akka.io/register)
- Java 21 (we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/))
- [Apache Maven](https://maven.apache.org/install.html)
- [Docker Engine](https://docs.docker.com/get-started/get-docker/)
- [`curl` command-line tool](https://curl.se/download.html)

## Concepts

### Designing

To understand the Akka concepts behind this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

### Developing

This project demonstrates the use of Workflow and Event Sourced Entity components. For more information, see [Developing Services](https://doc.akka.io/sdk/index.html).

## Build

Use Maven to build your project:

```shell
mvn compile
```

## Run Locally

To start your service locally, run:

```shell
mvn compile exec:java
```

This command will start your Akka service and a companion Akka Runtime.

## Steps

### 1. Create wallet 'a'

Create wallet 'a' with an initial balance of 100:

```shell
curl -i -X POST http://localhost:9000/wallet/a/create/100
```

### 2. Create wallet 'b'

Create wallet 'b' with an initial balance of 100:

```shell
curl -i -X POST http://localhost:9000/wallet/b/create/100
```

### 3. Withdraw from wallet 'a'

Attempt to withdraw 110 from wallet 'a':

```shell
curl -i -X POST http://localhost:9000/wallet/a/withdraw/110
```

**Note**: This request results in an HTTP 400 Bad Request response due to insufficient balance in wallet 'a'.

### 4. Check wallet balances

Get wallet 'a' current balance:

```shell
curl http://localhost:9000/wallet/a
```

Get wallet 'b' current balance:

```shell
curl http://localhost:9000/wallet/b
```

### 5. Initiate transfer

Start a transfer of 10 from wallet 'a' to wallet 'b':

```shell
curl http://localhost:9000/transfer/1 \
  -X POST \
  --header "Content-Type: application/json" \
  --data '{"from": "a", "to": "b", "amount": 10}'
```

### 6. Check transfer status

Get the current state of the transfer:

```shell
curl http://localhost:9000/transfer/1
```
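The 400 response in step 3 comes from the wallet entity rejecting the command. Here is a minimal sketch of how such a command handler can guard the balance; the names and events are assumptions, and the sample sources contain the real implementation:

```java
import akka.Done;
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.eventsourcedentity.EventSourcedEntity;

// Minimal sketch: names and events are assumed, not the sample's actual code.
@ComponentId("wallet")
public class WalletEntity
    extends EventSourcedEntity<WalletEntity.Wallet, WalletEntity.WalletEvent> {

  public record Wallet(int balance) {}

  public sealed interface WalletEvent {
    record Withdrawn(int amount) implements WalletEvent {}
    record Deposited(int amount) implements WalletEvent {}
  }

  @Override
  public Wallet emptyState() {
    return new Wallet(0);
  }

  public Effect<Done> withdraw(int amount) {
    if (amount > currentState().balance())
      // the endpoint surfaces this error as an HTTP 400 Bad Request
      return effects().error("Insufficient balance");
    return effects()
        .persist(new WalletEvent.Withdrawn(amount))
        .thenReply(wallet -> Done.getInstance());
  }

  public Effect<Done> deposit(int amount) {
    return effects()
        .persist(new WalletEvent.Deposited(amount))
        .thenReply(wallet -> Done.getInstance());
  }

  @Override
  public Wallet applyEvent(WalletEvent event) {
    return switch (event) {
      case WalletEvent.Withdrawn w -> new Wallet(currentState().balance() - w.amount());
      case WalletEvent.Deposited d -> new Wallet(currentState().balance() + d.amount());
    };
  }
}
```

The workflow's compensation builds on the same handlers: if the deposit to 'b' fails after the withdrawal from 'a' succeeded, a compensating deposit back into 'a' restores consistency.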
## Run integration tests

To run the integration tests located in `src/test/java`:

```shell
mvn verify
```

## Troubleshooting

If you encounter issues, ensure that:

- The Akka service is running and accessible on port 9000.
- Your `curl` commands are formatted correctly.
- The wallet IDs ('a' and 'b') match the ones you created.

## Deploying

You can use the [Akka Console](https://console.akka.io) to create a project and see the status of your service.

Build container image:

```shell
mvn clean install -DskipTests
```

Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html).

Deploy the service using the image tag from the `mvn install` above:

```shell
akka service deploy transfer-workflow transfer-workflow-compensation:tag-name --push
```

Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.

## Conclusion

Congratulations, you've successfully implemented a workflow between two wallets using Akka. This project demonstrates the power of Workflow and Event Sourced Entity components in managing complex transactions.

## Next steps

Now that you've built a basic transfer workflow, consider these next steps:

1. **Study the compensation mechanism**: Examine `TransferWorkflow.java` and `TransferWorkflowIntegrationTest.java` to understand how compensating actions are implemented when the deposit step fails after a successful withdrawal.
2. **Explore other Akka components**: Dive deeper into Akka's ecosystem to enhance your application.
3. **Explore other Akka samples**: Discover more about Akka by exploring [different use cases](https://doc.akka.io/getting-started/samples.html) for inspiration.

# Build a Funds Transfer Workflow Between Two Wallets

This guide demonstrates how to create a simple workflow for transferring funds between two wallets. The `WalletService` is an external service that manages wallets. The main focus of this sample is to demonstrate the orchestration part of the `Workflow` component and the human-in-the-loop aspect (accepting the transfer when the amount is high). Other aspects like error handling, retry strategies, and compensation are covered in the [documentation](https://doc.akka.io/sdk/workflows.html#_error_handling).

## Prerequisites

- An [Akka account](https://console.akka.io/register)
- Java 21 (we recommend [Eclipse Adoptium](https://adoptium.net/marketplace/))
- [Apache Maven](https://maven.apache.org/install.html)
- [Docker Engine](https://docs.docker.com/get-started/get-docker/)
- [`curl` command-line tool](https://curl.se/download.html)

## Concepts

### Designing

To understand the Akka concepts behind this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

### Developing

This project demonstrates the use of the Workflow component. For more information, see [Developing Services](https://doc.akka.io/java/index.html).

## Build

Use Maven to build your project:

```shell
mvn compile
```

## Run Locally

To start your service locally, run:

```shell
mvn compile exec:java
```

This command will start the service and create two wallets, 'a' and 'b', with an initial balance of 10000 each. The service will listen on port 9000.

## Steps

### 1. Initiate transfer

Start a transfer of 10 from wallet 'a' to wallet 'b':

```shell
curl http://localhost:9000/transfer/1 \
  -X POST \
  --header "Content-Type: application/json" \
  --data '{"from": "a", "to": "b", "amount": 10}'
```

### 2. Check transfer status

Get the current state of the transfer:

```shell
curl http://localhost:9000/transfer/1
```

### 3. Initiate transfer that requires human intervention

Start a transfer of 1001 from wallet 'a' to wallet 'b':

```shell
curl http://localhost:9000/transfer/2 \
  -X POST \
  --header "Content-Type: application/json" \
  --data '{"from": "a", "to": "b", "amount": 1001}'
```

### 4. Check transfer status

```shell
curl http://localhost:9000/transfer/2
```

### 5. Accept transfer

```shell
curl -X POST http://localhost:9000/transfer/2/accept
```

### 6. Check transfer status

```shell
curl http://localhost:9000/transfer/2
```

## Run integration tests

To run the integration tests located in `src/test/java`:

```shell
mvn verify
```

## Troubleshooting

If you encounter issues, ensure that:

- The Akka service is running and accessible on port 9000.
- Your `curl` commands are formatted correctly.
- The wallet IDs ('a' and 'b') match the ones you created.

## Deploying

You can use the [Akka Console](https://console.akka.io) to create a project and see the status of your service.

Build container image:

```shell
mvn clean install -DskipTests
```

Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html).
Deploy the service using the image tag from the `mvn install` above:

```shell
akka service deploy transfer-workflow transfer-workflow-compensation:tag-name --push
```

Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.

## Conclusion

Congratulations, you've successfully implemented a workflow between two wallets using Akka. This project demonstrates the power of the Workflow component in orchestrating transfers that may require human approval.

## Next steps

Now that you've built a basic transfer workflow, consider these next steps:

1. **Study the acceptance flow**: Examine `TransferWorkflow.java` and `TransferWorkflowIntegrationTest.java` to understand how the workflow pauses until a high-amount transfer is accepted.
2. **Explore other Akka components**: Dive deeper into Akka's ecosystem to enhance your application.
3. **Explore other Akka samples**: Discover more about Akka by exploring [different use cases](https://doc.akka.io/getting-started/samples.html) for inspiration.

# Build a User Registration Service Using a Choreography Saga in Akka

This guide demonstrates how to implement a choreography Saga in Akka to ensure unique email addresses across user entities. You'll learn how to manage cross-entity field uniqueness in a distributed system using Akka.

## Prerequisites

- An [Akka account](https://console.akka.io/register)
- Java 21 installed (recommended: [Eclipse Adoptium](https://adoptium.net/marketplace/))
- [Apache Maven](https://maven.apache.org/install.html)
- [Docker Engine](https://docs.docker.com/get-started/get-docker/)
- [`curl` command-line tool](https://curl.se/download.html)

## Concepts

### Designing

To understand the Akka concepts behind this example, see [Development Process](https://doc.akka.io/concepts/development-process.html) in the documentation.

### Developing

In the steps below, you will see how this project demonstrates the use of many different Akka components. For more information, see [Developing Services](https://doc.akka.io/sdk/index.html). You may also wish to review the [Saga pattern](https://doc.akka.io/concepts/saga-patterns.html) concept.

## The Set-Based Consistency Validation problem

Before diving into the implementation, it's important to understand a common challenge in event-sourced applications called the _Set-Based Consistency Validation_ problem. This issue arises when we need to ensure that a particular field is unique across all entities in the system.

In our user registration service, we need to ensure that email addresses are unique across all users. While a user may have a unique identifier (e.g., a user ID), they also have an email address that needs to be unique across the entire system. This requirement introduces complexity in maintaining consistency across different entities.

## Step 1: Understand Entity implementation

Examine the following files to understand how Event Sourced Entities and Key Value Entities are implemented:

- `src/main/java/user/domain/UserEntity.java`
- `src/main/java/user/domain/UniqueEmailEntity.java`

## Step 2: Understand Consumer implementation

Review the following files to see how Consumers react to events and state changes in the Entities:

- `src/main/java/user/domain/UserEventsConsumer.java`
- `src/main/java/user/domain/UniqueEmailConsumer.java`

## Step 3: Configure timeout duration

1. Open `src/main/resources/application.conf`.
2. Note the configuration for the email confirmation timeout:

```
email.confirmation.timeout = 10s
```

## Step 4: Understand Endpoint implementation

Examine the following files to see how the functionality is exposed to the outside world:

- `src/main/java/user/registry/api/EmailEndpoint.java`
- `src/main/java/user/registry/api/UserEndpoint.java`

## Step 5: Run the Application

Start the service locally:

```shell
mvn compile exec:java -Demail.confirmation.timeout=10s
```

## Step 6: Test the Saga

Use the following curl commands to test different scenarios:

### Create a user

```shell
curl localhost:9000/api/users/001 \
  --header "Content-Type: application/json" \
  -XPOST \
  --data '{ "name":"John Doe","country":"Belgium", "email":"doe@acme.com" }'
```

### Check email status

```shell
curl localhost:9000/api/emails/doe@acme.com
```

### Test failure scenario

```shell
curl localhost:9000/api/users/003 \
  --header "Content-Type: application/json" \
  -XPOST \
  --data '{ "country":"Belgium", "email":"invalid@acme.com" }'
```

### Change email address

```shell
curl localhost:9000/api/users/001/email \
  --header "Content-Type: application/json" \
  -XPUT \
  --data '{ "newEmail": "john.doe@acme.com" }'
```

## Step 7: Monitor the Saga

1. Watch the console output for logs from `UniqueEmailConsumer.java` and `UserEventsConsumer.java`.
2. Use the Endpoints to check email statuses.

## Step 8: Understand failure handling

Review the code in `UniqueEmailConsumer.java` and `UserEventsConsumer.java` to see how potential failures are handled:

1. A timer is set to release the email if user creation fails.
2. The timer is cancelled if the user is successfully created.

This approach helps maintain consistency even in the face of failures.
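As a skeletal illustration of that failure handling, a consumer of the email entity's state changes could look roughly like this. The handler name, state type, and `isReserved` check are assumptions; `UniqueEmailConsumer.java` contains the real logic:

```java
import akka.javasdk.annotations.ComponentId;
import akka.javasdk.annotations.Consume;
import akka.javasdk.consumer.Consumer;

// Skeletal sketch only: state type, method name and timer wiring are assumed.
@ComponentId("unique-email-consumer-sketch")
@Consume.FromKeyValueEntity(UniqueEmailEntity.class)
public class UniqueEmailConsumerSketch extends Consumer {

  public Effect onStateChanged(UniqueEmail email) {
    if (email.isReserved()) {
      // here the real consumer schedules a timer (via the SDK's TimerScheduler)
      // that releases the email if no user is created within
      // email.confirmation.timeout; UserEventsConsumer cancels it on success
    }
    return effects().done();
  }
}
```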
## Troubleshooting

If you encounter issues:

- Ensure the Akka service is running on port 9000.
- Verify your curl commands are correctly formatted.
- Check that the data in your curl commands matches the intended input.

## Deploying

You can use the [Akka Console](https://console.akka.io) to create a project and see the status of your service.

Build container image:

```shell
mvn clean install -DskipTests
```

Install the `akka` CLI as documented in [Install Akka CLI](https://doc.akka.io/operations/cli/installation.html).

Deploy the service using the image tag from the `mvn install` above:

```shell
akka service deploy choreography-saga choreography-saga-quickstart:tag-name --push
```

Refer to [Deploy and manage services](https://doc.akka.io/operations/services/deploy-service.html) for more information.

## Conclusion

Congratulations, you've learned how to implement a Choreography Saga in Akka for managing cross-entity field uniqueness. This pattern ensures consistency across entities and gracefully handles failure scenarios, effectively addressing the Set-Based Consistency Validation problem.

## Next Steps

1. **Extend the Saga**: Add additional steps, such as sending a confirmation email to the user after successful registration.
2. **Explore Akka's Workflow component**: Workflows offer an alternative, orchestrator-based approach to implementing Sagas.
3. **Explore other Akka samples**: Discover more about Akka by exploring [different use cases](https://doc.akka.io/getting-started/samples.html) for inspiration.

# Akka Chess

Akka Chess is a sample application that illustrates building a complete, resilient, automatically scalable, event-sourced application using the Akka SDK.

## Feature Progress

The following is a list of planned features/components and their status:

### Backend

- [X] Match entity
- [X] Chess rule enforcement
- [X] Match list view
- [X] Match archive view
- [X] HTTP service endpoint
- [X] Google Pub/Sub event publisher
- [X] Secure HTTP service endpoint with JWT
- [ ] Reject out-of-turn moves before submitting the command

### Web UI

- [X] Render chessboard
- [X] React to move updates from Google Pub/Sub
- [X] Add GitHub OAuth authentication to the site
- [ ] Finish theming/layout for the site
- [X] Mint JWT to be used as bearer token for service auth
- [ ] Display move log when looking at an active chessboard
- [ ] Display match history on landing/dashboard
- [ ] Facilitate player invites to play (before match starts)