OpenResty Inc.


IT Services and IT Consulting

San Francisco, California 457 followers

A Machine Coding Company providing enterprise-grade diagnostic tools for your business-critical software applications

About us

OpenResty Inc., founded by the team behind the No. 1 fastest-growing open-source application server, is an intelligent software company that provides enterprise-grade development and production tools to manage and optimize the performance, reliability, and security of business-critical software applications. OpenResty Inc. strives to become the world’s first automated programming company. OpenResty is deeply rooted in open-source DNA, driven by developers and loved by businesses.

• 4,000,000+ downloads
• 1,200,000+ companies
• 35,000+ GitHub stars

OpenResty XRay is the company’s SaaS platform that diagnoses and optimizes various software systems in real time. Built on the technical learnings of OpenResty’s rapidly growing open-source community, and on a clear grasp of the pain points the team accumulated while helping large Internet companies with operations and maintenance, OpenResty XRay helps enterprise users of open-source software monitor, track, and improve the performance, reliability, and security of their software systems.

Apply to join the beta test community today: https://openresty.com/en/contact/

Website
http://openresty.com/
Industry
IT Services and IT Consulting
Company size
11-50 employees
Headquarters
San Francisco, California
Type
Privately Held
Founded
2017
Specialties
Consulting, OpenResty, Nginx, LuaJIT, and APM


Updates

  • OpenResty 1.29.2.3 is now available as a new open-source release! Here are the highlights:
1. We have backported a comprehensive set of official Nginx security patches, addressing several high-severity vulnerabilities, including DAV module issues, an MP4 buffer overflow, a CRAM-MD5/APOP null-pointer dereference, auth_http injection, an OCSP bypass, and SSL upstream injection, significantly strengthening the overall security posture.
2. lua-nginx-module has been upgraded to v0.10.30rc2, introducing powerful new directives such as `precontent_by_lua` and `proxy_ssl_verify_by_lua*`, along with useful APIs including `tcpsock:getsslsession` and the TCP socket options `keepintvl`/`keepcnt`. This release also fixes critical bugs, including a SIGSEGV and a use-after-free in the QUIC connection close path.
3. stream-lua-nginx-module, lua-resty-core, and LuaJIT have all received major upgrades. New SSL capabilities include `proxy_ssl_certificate_by_lua` and `serversslhandshake`, while LuaJIT addresses multiple issues across ARM64, FFI, and the JIT compiler, delivering notable improvements in cross-platform stability.
#OpenResty

  • After deploying a CDN or a Layer 4 load balancer, many teams hit a common problem: the client IP address the gateway sees is actually the address of the proxy server. At that layer, source-IP-based rate-limiting rules, access-control policies, and audit logs all become ineffective, because the original client IP is not properly propagated along the network path.

There are several common solutions: Proxy Protocol, X-Forwarded-For, X-Real-IP, and custom headers. Each is suited to a different proxy topology and has distinct security implications; choosing the wrong one, or configuring a lax trust chain, can lead to either information loss or IP-spoofing vulnerabilities.

For a truly complete solution, the upstream direction matters too: is the real client IP also consistently passed to the backend origin server? The problem is only genuinely resolved when both inbound and outbound directions are correctly configured.

We have compiled these configuration principles into a comprehensive guide for OpenResty Edge, covering solution selection, configuration steps, and security considerations: https://lnkd.in/gzkdBNjQ

#OpenRestyEdge #OpenResty #proxyprotocol
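As a rough illustration of the X-Forwarded-For approach in stock Nginx, the following sketch restores the client IP from a trusted proxy layer (inbound) and forwards it to the origin (outbound). The CIDR ranges, upstream name, and header choices are illustrative; adapt them to your own proxy topology.

```nginx
http {
    # Inbound direction: only trust X-Forwarded-For values set by
    # these proxy addresses (the trust chain), so clients cannot
    # spoof their IP by sending the header themselves.
    set_real_ip_from  10.0.0.0/8;
    set_real_ip_from  192.168.0.0/16;
    real_ip_header    X-Forwarded-For;
    real_ip_recursive on;

    server {
        listen 80;

        location / {
            # Outbound direction: pass the restored client IP on to
            # the backend origin server as well.
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://origin_backend;
        }
    }
}
```

For a Layer 4 balancer that strips HTTP context entirely, the Proxy Protocol variant (`listen 80 proxy_protocol;` plus `real_ip_header proxy_protocol;`) plays the same role at the TCP layer.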

  • After months of dedicated work by the team, we are thrilled to announce a major new open-source release, OpenResty 1.29.2.1! Key highlights include:
• Core upgraded to Nginx 1.29.2
• Upgraded to OpenSSL 3.5.5
• Upgraded to PCRE 10.47
• New `proxy_ssl_verify_by_lua` directive for enhanced SSL proxy verification
• New `lua_ssl_key_log` directive for Wireshark-compatible SSL key logging
• Added support for TCP/UDP binding and socket file descriptor retrieval
Full announcement: https://lnkd.in/gpZdrC-W
Downloads: https://lnkd.in/g694aEuC
Binary packages: https://lnkd.in/g3iKQAC
Thank you for your continued support and interest in OpenResty!
#OpenResty #WebServer #Nginx #LuaJIT
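A minimal sketch of how the SSL key logging highlight might be used, assuming `lua_ssl_key_log` takes a file path in `http` context like the other `lua_ssl_*` directives (the path, port, and target host below are illustrative):

```nginx
http {
    # Write TLS session keys for Lua cosocket handshakes in the NSS
    # key-log format, which Wireshark can load via
    # "(Pre)-Master-Secret log filename" to decrypt captured traffic.
    lua_ssl_key_log /var/log/nginx/ssl_keys.log;

    server {
        listen 8080;

        location /t {
            content_by_lua_block {
                local sock = ngx.socket.tcp()
                assert(sock:connect("example.com", 443))
                -- Keys for this handshake are appended to the key log.
                assert(sock:sslhandshake(nil, "example.com", true))
                sock:close()
                ngx.say("handshake done")
            }
        }
    }
}
```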

  • Multi-cloud and microservices didn't eliminate complexity. They moved it from the application layer to the traffic management layer. Teams now wrestle with provisioning SSL certificates across thousands of domains, waiting through slow CDN cache purges, absorbing WAF performance penalties, and wiring up routing logic across container clusters by hand. OpenResty Edge was built to solve this directly: one control plane for ingress, routing, security, caching, and observability.
- Deep integration with ACME services such as Let's Encrypt and ZeroSSL: automates the provisioning and renewal of certificates at scale, letting machines handle tedious configuration maintenance.
- Native integration with Kubernetes: automatically discovers service changes, eliminating the need for manual configuration synchronization.
- Native support for complex container clusters: allows sophisticated traffic management and intelligent routing. Cross-cluster load balancing, canary releases, and more can be configured effortlessly within the control plane.
- Build your own private CDN, deeply integrating compute and cache: enables "edge computing" by executing business logic closest to your users for millisecond response times. Purge your entire network cache in seconds and manage edge nodes like microservices, all while balancing security, compliance, and cost.
- A built-in, high-performance WAF engine: reduces performance overhead by an order of magnitude, proving that robust security protection doesn't have to come at the expense of application latency.
- Request-level hot updates for both rule adjustments and logic changes: ensures zero-downtime deployments with no service interruptions.
OpenResty Edge is more than a universal gateway; it represents a modern architectural approach to traffic management. It integrates the core capabilities of the traffic ingress into a unified platform to address the complexities of distributed applications and containerized deployments, ultimately delivering holistic optimization of performance, security, cost, and efficiency.
Link: https://lnkd.in/gvSRfdmF
#OpenRestyEdge #UniversalGateway #OpenResty #Kubernetes

  • APM budgets grow. MTTR doesn't. The problem isn't spending, it's strategy. Cloud-native systems fail across distributed processes in seconds, while traditional tools respond with agent sprawl and blanket data collection. This mismatch of static tools chasing dynamic failures creates data floods that bury the critical context you need. OpenResty XRay flips the model with dynamic post-mortem analysis: instead of collecting everything, it automatically captures high-fidelity snapshots when failures occur.
1. Fully dynamic post-mortem analysis: abandons the resource cost of 24/7 full data collection. Analysis is triggered automatically only when abnormal fluctuations occur in CPU, memory, or I/O, precisely recording the stack and context at the moment of failure. Developers can then conduct reviews based on real "black box" data, rather than reconstructing the scene by guesswork.
2. Thoroughly zero-intrusion: using dynamic-tracing technology, there is no need to modify a single line of code, no need for restarts, and not even a need for debug symbols to see through the operational details of the production environment.
3. De-sidecar-ization: in a Kubernetes environment, XRay works directly from the host side, avoiding the maintenance cost of complex sidecars and any intrusion into container images.
4. Binary-level security compliance: vulnerability scanning driven by binary evidence from running processes, eliminating the false-positive traps of version-number matching. Whether it is an LTS backported patch or compiler-optimization trimming, it can be accurately identified, letting security teams say goodbye to "interpretive compliance" and return to high-value work.
The future of observability should not be more expensive storage bills, but deeper, lower-level system insight. For how OpenResty XRay achieves fully dynamic post-mortem diagnosis, see this in-depth analysis: https://lnkd.in/gu8hWAHy
#OpenResty #Observability #DynamicTracing

  • In many enterprises' high-concurrency gateway scenarios, whether based on OpenResty or built directly on the Nginx ecosystem, we consistently encounter a common challenge: business events need to enter the message system as early as possible, yet the request ingress path has extremely low tolerance for latency jitter and stability issues.

Many teams aim to complete high-frequency data collection or real-time decision event reporting right at the request entry point. On this critical path, however, if Kafka writes lack clear timing constraints, their overhead can escalate under high concurrency and become a significant source of instability for the entire processing chain. In practice, while some general open-source implementations perform well under normal load or in background processing, when deployed at the gateway ingress under sustained high concurrency or sudden traffic, they are prone to magnified latency fluctuations and unpredictable behavior, limiting their applicability.

Against this backdrop, our team developed a proprietary library, `lua-resty-kafka-fast`. Its primary goal is not peak single-node performance, but stable and predictable Kafka write capability at the OpenResty/Nginx-based gateway ingress. By decoupling the write operation from the critical request-processing path, it maintains clear time boundaries even under high pressure, preventing adverse amplification of the main processing flow. In our tests and actual usage, the additional overhead introduced by Kafka writes has consistently remained within a controllable and predictable range. This lets the gateway capture event data earlier, without disrupting its original processing cadence, for real-time decision-making, behavior analysis, and comprehensive observability.

For developers, it still presents a clear, straightforward Lua API while offering more explicit system-level stability guarantees. If you are also grappling with the trade-offs between stability and controllability at the ingress layer, we hope this article offers valuable engineering insights: https://lnkd.in/gzZjHmYz

#Kafka #APIGateway #OpenResty #Nginx
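The decoupling pattern described above can be sketched with the open-source lua-resty-kafka library (`lua-resty-kafka-fast` is proprietary, so its exact API is not shown here); the broker address, topic, and upstream names are illustrative. The idea is to enqueue events in the log phase, after the response has been sent, with an async producer that buffers messages and flushes them from a background timer, keeping Kafka I/O off the request's critical path.

```nginx
http {
    lua_package_path "/path/to/lua-resty-kafka/lib/?.lua;;";

    server {
        listen 80;

        location / {
            proxy_pass http://origin_backend;

            # log_by_lua runs after the response is delivered, so the
            # enqueue cost is never observed as request latency.
            log_by_lua_block {
                local producer = require "resty.kafka.producer"
                local p = producer:new(
                    { { host = "10.0.0.5", port = 9092 } },
                    { producer_type = "async" }  -- buffered, non-blocking
                )
                local msg = string.format("%s %s %s",
                    ngx.var.remote_addr, ngx.var.request_uri, ngx.var.status)
                local ok, err = p:send("gateway-events", nil, msg)
                if not ok then
                    ngx.log(ngx.ERR, "kafka enqueue failed: ", err)
                end
            }
        }
    }
}
```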

  • GSLB (Global Server Load Balancing), which dynamically resolves DNS requests to distribute user traffic to the optimal node, is a critical technology for large-scale services to ensure an optimal user experience.

When addressing high-concurrency failures, we frequently encounter a paradoxical situation: the GSLB monitoring dashboard shows healthy network connectivity (Ping/TCP checks pass) and DNS resolution works correctly, yet the backend services are actually degraded or partially unresponsive due to overload. This reveals a fundamental misalignment in current traffic-scheduling architectures: business-layer complexity has long since grown exponentially, yet the core logic for DNS scheduling decisions still relies on network-layer probes. In today's landscape of dynamic content and large disparities in computing resources, relying solely on ICMP or TCP handshakes to assess a node's capacity is clearly too coarse-grained. When the network layer appears "healthy" but the business layer is "underperforming," traditional GSLB cannot detect it; DNS keeps directing traffic to the struggling nodes, forcing manual intervention for traffic failover.

The core design philosophy of OpenResty Edge's GSLB is to base DNS scheduling decisions on real business metrics. Instead of merely checking "whether the network path is available," we focus on "whether the service can sustain the load." By collecting business-layer metrics such as per-node RPS (requests per second), active connections, and system load (1/5/15-minute averages), we dynamically adjust DNS resolution results. Combined with a high/low watermark-based damping mechanism, this simulates the decision-making of experienced engineers facing fluctuations: it both prevents DNS resolution oscillation around a single threshold and ensures decisive traffic shifts.

The granularity of traffic governance often determines the upper limit of system stability. For more details, please refer to this article: https://lnkd.in/gPggEr6x

#gslb #DNS #apigateway #nginx
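The high/low watermark damping idea can be sketched as simple hysteresis logic in Lua; the thresholds and the `draining` state field below are illustrative assumptions, not OpenResty Edge's actual implementation, and the load metric is assumed to be already collected per node.

```lua
-- Sketch: high/low watermark damping (hysteresis) for node scheduling.
local HIGH_WATERMARK = 0.85  -- drain a node only when load rises above this
local LOW_WATERMARK  = 0.60  -- restore it only after load falls below this

-- node.draining is the node's current scheduling state; the function
-- returns the updated state.
local function update_node_state(node, load)
    if not node.draining and load > HIGH_WATERMARK then
        node.draining = true   -- decisively shift traffic away
    elseif node.draining and load < LOW_WATERMARK then
        node.draining = false  -- restore only once clearly recovered
    end
    -- Between the two watermarks the previous decision is kept, which
    -- prevents DNS resolution oscillation around a single threshold.
    return node.draining
end

local node = { draining = false }
update_node_state(node, 0.90)  -- load spikes: node enters draining
update_node_state(node, 0.75)  -- between watermarks: stays draining
update_node_state(node, 0.50)  -- below the low watermark: restored
```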

  • We have just released OpenResty Edge version 25.12.5-1. Key highlights of this new version include:
* IPv6 support: IP lists now fully support IPv6 addresses, enabling better compatibility with modern network architectures.
* New page rule action: the `proxy-pass-request-headers/body` action offers more flexible control over request headers and bodies.
* Enhanced system monitoring: proactive alerts are now triggered when there is significant time drift between the gateway and the console, helping you identify potential issues early.
* Edgelang enhancement: supports direct use of application-defined user variables by name, simplifying rule creation.
Additionally, this version unifies and optimizes a significant amount of Chinese text and UI display, and addresses several issues related to DNS, cache preheating, Kubernetes upstreams, and more. For a complete list of updates, please refer to the full release announcement: https://lnkd.in/gfe-J_KF
#OpenRestyEdge #APIGateway

  • In complex production systems, the key to resolving performance issues is to establish a deterministic methodology, rather than relying on guesswork or the "reboot as a quick fix" approach. Consider a common Nginx memory-leak scenario: what happens when a single worker's memory grows continuously from hundreds of MB to over 1 GB, an out-of-memory (OOM) kill is imminent, and the service cannot be restarted? The true challenge lies in whether one can safely perform correlated analysis across three distinct layers, the C runtime library (libc), the Nginx memory pool, and the application's C code, within a production environment. Through a series of rigorous observations using OpenResty XRay, we ultimately obtained verifiable and deterministic evidence. We have thoroughly analyzed and documented this process in an article: https://lnkd.in/gdiNp6XK #PerformanceEngineering #Nginx #OpenRestyXRay #MemoryLeak
