AWS Lambda Functions

Enable Hummingbird AI Analysis for Pipelines

Pipelines are powered by Opsera’s Hummingbird AI, a cutting-edge AI technology introduced by Opsera to leverage the power of generative AI across your software delivery lifecycle. The AI analysis summarizes executed CI/CD pipelines, enabling users to step back and assess deployment pipelines holistically rather than getting caught up in isolated step-by-step evaluations when issues arise. To learn more, read here. This feature is enabled on request. To have it enabled for your pipelines, please get in touch with us at [email protected]

Users can create templates of AWS Lambda Functions through Opsera Tasks and deploy them live via the Opsera Pipeline. The AWS Lambda Service workflow requires setup in both Tasks and the Pipeline. Once the user enters the required static information for the task, the task must be linked to the respective step in the Pipeline.

Setup AWS Lambda Function Task

  1. Log in to Opsera and navigate to Tasks.

  2. Click + New Task.

  3. In the Type drop down, select AWS Lambda Function Creation.

  4. Enter the values required for task template creation:

  • AWS Tool: Select an AWS Tool.

  • Function Name: Create a unique name for the function. Click Validate to confirm that the name is unique and does not exist in AWS yet. If the name already exists, an error will be displayed.

  • Handler: Enter the handler in <package>.<class>::<method> syntax. Example for Java 8: example.Hello::handleRequest (see the handler sketch after these steps).

  • IAM Role: Select a role fetched from AWS.

  • Runtime: Select the language runtime for the function. Java 8 is supported.

5. Click Save.
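
For reference, the handler string follows the <package>.<class>::<method> pattern. A minimal Java 8 handler matching the example.Hello::handleRequest example above might look like the following sketch (the package, class, and greeting logic are illustrative, not part of Opsera's setup):

```java
package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Matches the handler string "example.Hello::handleRequest":
// package "example", class "Hello", method "handleRequest".
public class Hello implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        // Illustrative body; any request/response types supported by
        // aws-lambda-java-core can be used instead of String.
        return "Hello, " + input;
    }
}
```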

Once the templates are created, the user can create and deploy the functions via the Opsera pipeline.

Setup AWS Lambda Pipeline Step

Three steps are required to set up a Lambda pipeline workflow:

  • Maven Build - Uses Jenkins Tool.

  • S3 Push - Uses Publish to S3 Tool.

  • Publish AWS Lambda - Uses Publish AWS Lambda Tool.

Configure the steps for Maven Build and S3 Push first, and then create the step for Publish AWS Lambda.

To configure the step for Publish AWS Lambda

  1. In the pipeline, click + to create a new step.

  2. In the step setup, enter a step name, select Publish AWS Lambda as the Tool, and click Save.

  3. In the Step Settings, enter details for the following:

  • Action: Select Create from the drop down to trigger function creation.

  • AWS Tool: Select the AWS Tool that matches the tool used in template creation in Tasks.

  • Lambda Function ↔︎ S3 Push Mapping: Select the Lambda function templates and map them to the respective S3 Push step(s).

    • Select Lambda Function - Select the Lambda function templates created in Tasks.

    • Select S3 Push Step - Select the S3 Push Step to map the function to. If the pipeline has multiple S3 Push steps, the user can map individual functions to different S3 Push steps, making it possible to create multiple functions as part of one pipeline step (see the sketch after these steps).

4. Click Save.
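
Conceptually, publishing a function from an S3 artifact corresponds to a CreateFunction call against AWS, combining the task template fields (function name, handler, IAM role, runtime) with the bucket and key produced by the mapped S3 Push step. The sketch below uses the AWS SDK for Java v2 to illustrate that call; all names, ARNs, and keys are placeholder values, and this is not Opsera's internal implementation:

```java
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.CreateFunctionRequest;
import software.amazon.awssdk.services.lambda.model.CreateFunctionResponse;
import software.amazon.awssdk.services.lambda.model.FunctionCode;
import software.amazon.awssdk.services.lambda.model.Runtime;

public class PublishLambdaSketch {
    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            CreateFunctionRequest request = CreateFunctionRequest.builder()
                    .functionName("my-unique-function")                  // Function Name (placeholder)
                    .runtime(Runtime.JAVA8)                              // Runtime
                    .role("arn:aws:iam::123456789012:role/lambda-exec")  // IAM Role (placeholder ARN)
                    .handler("example.Hello::handleRequest")             // Handler
                    .code(FunctionCode.builder()
                            .s3Bucket("my-artifact-bucket")              // from the mapped S3 Push step
                            .s3Key("builds/my-app-1.0.jar")              // placeholder key
                            .build())
                    .build();
            CreateFunctionResponse response = lambda.createFunction(request);
            System.out.println("Created: " + response.functionArn());
        }
    }
}
```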

View Pipeline Logs for AWS Lambda

Once the pipeline is executed, you can view the logs regarding the success or failure of each function that you have created.

  • To view logs, navigate to the Summary tab of the Pipeline, and scroll down to view Pipeline Logs.

  • To view console logs, click Console Output in the Action column of a step. The logs include a message for each function that was created.

Frequently Asked Questions

A handler is a function or method, specified with its complete path, for a given runtime (e.g., Java).

  1. When would the handler of a given Lambda function get updated? There are two possibilities: (1) the user created a function with a typo in the handler and later corrects it; (2) the user wants to map the Lambda function to a different handler. (A sketch of a handler update follows this list.)

  2. How many handlers can be mapped to one function? A single function maps to a single handler (1:1 mapping).

  3. How many Lambda functions can be created from one source (e.g., a JAR file)? Any number of functions can be created from a single source.

  4. How does the data flow in the pipeline? The data flow happens in the following steps:

    Step 1: Connect to the Maven repo and build a JAR file.

    Step 2: Push the JAR file to the S3 location.

    • Pass the bucket and artifact details from Step 2 to Step 3.

    Step 3: Use the S3 location and fill in the data in the step form.

    • Persist the data in MongoDB and stream it to a Kafka topic during runtime.

    • The application listens on the Kafka topic, creates the function through an async process, and posts the status back to the Kafka topic.
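
As a sketch of the handler update mentioned in Q1, AWS exposes an UpdateFunctionConfiguration call that changes a function's handler without re-uploading the source; because of the 1:1 mapping in Q2, the new handler simply replaces the old one. The function name and handler below are placeholders, shown with the AWS SDK for Java v2:

```java
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.UpdateFunctionConfigurationRequest;

public class HandlerUpdateSketch {
    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            // Replace the existing handler (e.g., one with a typo) with the
            // corrected <package>.<class>::<method> string.
            lambda.updateFunctionConfiguration(UpdateFunctionConfigurationRequest.builder()
                    .functionName("my-unique-function")         // placeholder name
                    .handler("example.Hello::handleRequest")    // corrected handler
                    .build());
        }
    }
}
```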
