
CN117560373A - A multi-tenant cloud IDE management system based on cloud native

Info

Publication number
CN117560373A
Authority
CN
China
Prior art keywords
tenant
ide
workspace
cloud
working space
Prior art date
Legal status
Pending
Application number
CN202311510625.8A
Other languages
Chinese (zh)
Inventor
马国浩
林菲
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date: 2023-11-14
Filing date: 2023-11-14
Publication date: 2024-02-13
Application filed by Hangzhou Dianzi University
Priority to CN202311510625.8A
Publication of CN117560373A
Legal status: Pending

Classifications

    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1097: Distributed storage of data in networks, e.g. network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/306: User profiles
    • H04L 67/5682: Policies or rules for updating, deleting or replacing the stored data (caching)
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cloud-native multi-tenant cloud IDE management system comprising data storage, an IDE management service, a control plane, tenant workspaces and a gateway. The data storage module provides data persistence, supporting the IDE management service and the tenant workspaces. The IDE management service offers workspace management to tenants through an open HTTP interface and implements the workspace management functions by invoking remote procedure calls (gRPC) provided by the control plane. The tenant workspace is composed of a plurality of isolated workspaces. The gateway performs load balancing for the IDE management service, provides service discovery of workspaces for the control plane, and provides a reverse proxy for tenant workspaces. The invention occupies few local resources, lightens the burden on the local computer, protects the privacy and safety of user code and data, and significantly improves developers' working efficiency and user experience.

Description

Multi-tenant cloud IDE management system based on cloud native
Technical Field
The invention belongs to the field of cloud native computing, and particularly relates to a cloud-native multi-tenant cloud IDE management system.
Background
Traditional software development typically relies on a local Integrated Development Environment (IDE) and a corresponding compilation and runtime environment, such as Visual Studio or IntelliJ IDEA. Before using these tools, the user needs to download and install the client software and perform a series of configuration tasks, such as installing a Java Development Kit (JDK), a Tomcat server, IDEA, and so on.
However, local development has several problems. First, installing client software and configuring a complex compilation and runtime environment is difficult for many users and requires significant time and effort; environment conflicts may occur, particularly when installing multiple versions of a development suite or environments for multiple programming languages. Second, most local IDEs consume considerable resources, which can cause sluggishness on lower-spec computers, and compilation and program execution may be slow.
Development on the cloud has attracted wide attention as an emerging way of working. A cloud Integrated Development Environment (IDE) is deployed on the server side and provides users with an always-available cloud workspace. Backed by the computing power of the server, a cloud IDE can offer a highly configurable and flexible development environment. A user can start a complete cloud IDE simply by opening a browser, with no complicated installation process; the cloud IDE includes various development tools and supports plug-in extension, compilation, debugging and other functions.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention provides a cloud-native multi-tenant cloud IDE management system. It aims to give users a cloud Integrated Development Environment (IDE) management system through which they can quickly create an IDE in the cloud, with support for multi-tenant cloud IDE management, diversified development environments, inter-tenant environment isolation, on-demand resource configuration, and instant start-up. Workspaces are isolated between tenants, so each tenant obtains a stable and reliable development environment. Relying on the powerful configuration of the server, the system provides users with a smoothly running cloud integrated development environment whose hardware resources are dynamically allocated, reducing the difficulty and time spent on environment configuration in local development, easing users' anxiety, letting them concentrate on software development, and improving working efficiency.
A cloud-native multi-tenant cloud IDE management system comprises five modules, namely a data storage module, an IDE management service module, a control plane module, a tenant workspace module and a gateway module, and the deployment of these modules is based on the cloud-native container orchestration system Kubernetes.
The data storage module provides data persistence functionality, providing support for IDE management services and tenant workspaces.
The IDE management service provides a workspace management service to tenants through an open HTTP interface, and implements the relevant workspace management functions by invoking remote procedure calls (gRPC) provided by the control plane.
The control plane has three functions: controlling tenant workspaces through Kubernetes; providing a management interface for the IDE management service; and registering tenant workspaces with the gateway through the service discovery interface provided by the gateway.
The tenant workspace is made up of a plurality of isolated workspaces, each workspace containing an integrated development environment server (Code-Server) running in a Kubernetes container group (Pod), a persistent volume claim (PVC) and a persistent volume (PV).
The gateway is used for load balancing of IDE management services, providing service discovery of workspaces for the control plane, and providing reverse proxy of tenant workspaces.
Optionally, the data storage module includes a database component, a cache component and a network file system component. The database component stores data such as workspace templates, tenant workspaces and tenant accounts for the IDE management service; the cache component stores information about running workspaces; and the network file system component stores data such as plug-ins and code generated in the tenant workspaces.
Preferably, the IDE management service module comprises a Web server cluster, and the Web servers provide workspace template browsing, tenant workspace management and workspace access; a workspace template is a pre-built container image, different workspace templates contain development environments for different languages (such as a Java development environment, a C++ development environment and the like), and a tenant can select a specific template to create a workspace with the corresponding capabilities; the tenant workspace management functions include creating, starting, stopping and deleting workspaces.
Further, the IDE management service module uses the database provided by the data storage module to store data about tenants and workspaces; a workspace belongs to exactly one tenant, a tenant may own multiple workspaces, but at any given time a tenant can have at most one workspace running; the IDE management module implements the corresponding workspace management functions by calling the gRPC interface provided by the control plane.
Preferably, the control plane contains a controller that controls and coordinates the state of the workspaces through Kubernetes; a Custom Resource Definition (CRD) of Kubernetes is used to define the workspace, so that the resource can be created and deleted through Kubernetes and its desired and actual states are described by a configuration file; the controller controls and coordinates the workspace according to its desired state, so that after a certain time the actual state of the workspace matches the desired state.
Further, the description fields in the configuration file of a workspace comprise a unique number, a container image, hardware resources, a listening port, a data volume mount directory and an execution command, where the hardware resources comprise the number of CPU cores, the memory and the persistent storage size, and the execution command comprises starting and stopping the workspace.
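For illustration only, the workspace custom resource described above might be modeled with Go types in the kubebuilder style roughly as follows. The field names mirror the description in this document, while the package layout and JSON tags are assumptions of this sketch, not the patented implementation.

```go
// workspace_types.go: a minimal sketch of the Workspace custom resource,
// assuming kubebuilder-style CRD types. Field names mirror the description
// above; package layout and JSON tags are illustrative assumptions.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// HardwareSpec describes the resources allocated to a workspace.
type HardwareSpec struct {
	CPU     string `json:"cpu"`     // number of CPU cores, e.g. "2"
	Memory  string `json:"memory"`  // e.g. "4Gi"
	Storage string `json:"storage"` // persistent storage size, e.g. "16Gi"
}

// WorkspaceSpec is the desired state, stored under the ".spec" JSON path.
type WorkspaceSpec struct {
	SID       string       `json:"sid"`       // unique workspace number
	Image     string       `json:"image"`     // container image of the IDE server
	Port      int32        `json:"port"`      // port the IDE server listens on
	MountPath string       `json:"mountPath"` // where the data volume is mounted
	Command   string       `json:"command"`   // "Start" or "Stop"
	Hardware  HardwareSpec `json:"hardware"`
}

// WorkspaceStatus is the actual state, stored under the ".status" JSON path.
type WorkspaceStatus struct {
	Phase string `json:"phase"` // "Running" or "Stopped"
}

// Workspace is the custom resource defined via a CRD.
type Workspace struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WorkspaceSpec   `json:"spec,omitempty"`
	Status WorkspaceStatus `json:"status,omitempty"`
}
```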
The controller obtains workspace event changes from Kubernetes through the List-Watch mechanism; the event changes include add, update and delete events. According to the received event, the controller reconciles the actual state of the workspace as follows:
add event: the controller only updates its local cache;
update event: the controller performs different operations according to the execution command; for example, when starting a workspace, it creates a persistent volume claim (PVC) sized according to the storage space in the configuration file (reusing the PVC if it already exists), then creates a Pod from the container image in the configuration file and mounts the PVC into the data volume mount directory; when stopping the workspace, the controller deletes the corresponding Pod but retains the PVC;
delete event: the controller deletes the Pod and PVC corresponding to the workspace.
Further, when creating the Pod, the controller limits the resources available to it according to the hardware resources in the workspace configuration file, which includes defining the minimum resources the Pod needs to run (resource requests) and the maximum resources it may use (resource limits).
The controller provides a set of gRPC interfaces including functions to create, start, stop, delete and query workspaces.
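Purely as an illustration, the controller's gRPC surface could be expressed as the Go interface below. In practice it would be generated from a .proto definition; all type and method names here are assumptions of this sketch, not the patented API.

```go
// workspace_api.go: a hand-written sketch of the workspace management API the
// controller exposes over gRPC. In a real system these types would be
// generated from a .proto file; all names here are illustrative assumptions.
package api

import "context"

// WorkspaceRequest identifies a workspace and, for creation/start, its settings.
type WorkspaceRequest struct {
	SID       string // unique workspace number
	Image     string // container image from the chosen workspace template
	MountPath string // data volume mount directory
	CPU       string // e.g. "2"
	Memory    string // e.g. "4Gi"
	Storage   string // e.g. "16Gi"
}

// WorkspaceInfo is returned by queries.
type WorkspaceInfo struct {
	SID   string
	Phase string // "Running" or "Stopped"
	IP    string
	Port  int32
}

// WorkspaceService mirrors the functions listed above: create, start, stop,
// delete and query workspaces.
type WorkspaceService interface {
	Create(ctx context.Context, req *WorkspaceRequest) error
	Start(ctx context.Context, req *WorkspaceRequest) error
	Stop(ctx context.Context, sid string) error
	Delete(ctx context.Context, sid string) error
	Get(ctx context.Context, sid string) (*WorkspaceInfo, error)
}
```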
Further, after the controller starts a workspace, it registers the workspace's IP address and port with the gateway through the service discovery interface provided by the gateway, so that the gateway can reverse-proxy tenant requests into the workspace; after the workspace has stopped, its IP address and port are removed from the gateway. By registering the address information of workspaces, the gateway can route requests effectively, ensuring that tenant requests reach the corresponding workspace.
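A minimal sketch of the registration call the controller might make after a workspace becomes ready is shown below. The endpoint path "/discovery/register" and the JSON payload shape are assumptions; the document only states that the gateway exposes a service discovery interface.

```go
// register.go: a sketch of registering a started workspace with the gateway's
// service discovery interface. The endpoint and payload are assumptions.
package discovery

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type registration struct {
	SID  string `json:"sid"`
	IP   string `json:"ip"`
	Port int32  `json:"port"`
}

// RegisterWorkspace tells the gateway where a workspace can be reached so that
// it can reverse-proxy "/ws/{SID}" requests to that address.
func RegisterWorkspace(gatewayURL, sid, ip string, port int32) error {
	body, err := json.Marshal(registration{SID: sid, IP: ip, Port: port})
	if err != nil {
		return err
	}
	resp, err := http.Post(gatewayURL+"/discovery/register", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("gateway returned status %d", resp.StatusCode)
	}
	return nil
}
```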
Preferably, the workspaces are associated with tenant IDs through unique numbers so as to realize the distinction of tenant workspaces; the workspaces of each tenant run in an isolated container, so that workspaces between tenants do not affect each other, providing isolation of data resources and higher security.
Preferably, the gateway is composed of gateway components that provide the functions of load balancing, reverse proxy and workspace service discovery; the load balancing function ensures that the requests can be uniformly distributed to IDE management services, and the performance and usability of the system are improved; the workspace service discovery and reverse proxy function allows the gateway to dynamically acquire available workspaces and forward requests as a middle tier from external clients to the corresponding workspaces, so that the control module can timely update the state and address information of the workspaces, ensuring that the system can correctly route requests to the corresponding workspaces.
Preferably, when the tenant accesses the IDE management service and the workspace, the gateway serves as a traffic portal, all requests will flow in, and the processing steps are as follows:
accessing an IDE management service: the gateway distributes the request to a certain Web server of the IDE management service through load balancing, the Web server queries and stores data through a database of the data storage module, and management operation of the tenant working space is completed through a GRPC interface provided by the controller;
accessing a workspace: the tenant requests carry the unique number (SID) of the workspace, the gateway extracts the SID from the request, then queries the set of workspaces registered by the controller, queries the IP address and port of the workspace from it, and then reverse-proxies the request into the workspace.
In general, the above technical solutions conceived by the present invention have the following advantages compared with the prior art:
firstly, the invention provides development environments of different programming languages by utilizing a plurality of pre-constructed container images, and provides diversified choices for users. The user can select a proper development environment according to own requirements without self configuration, and greater flexibility and convenience are provided.
Second, based on the pre-built image, the invention can quickly create and launch an IDE in just a few seconds when launching the workspace. The user does not need to pay attention to the complicated development environment configuration, and can access the IDE only through the browser, so that the convenience of use is greatly improved. Meanwhile, as the development environment runs on the cloud, the occupation of local resources is small, and the burden of a local computer is reduced.
Thirdly, in the invention the workspaces of multiple tenants run in isolated containers, so tenants do not interfere with one another and higher security is provided. Each tenant obtains an independent development environment, protecting the privacy and safety of the users' code and data.
Fourth, the present invention can dynamically configure the hardware resources available to a workspace through the resource quota function provided by Kubernetes. Relying on the powerful configuration of the server, the IDE runs more smoothly and programs compile and build faster, improving working efficiency. Users can configure resources flexibly according to their own needs without worrying about insufficient resources.
In summary, compared with the prior art, the technical scheme provided by the invention has various advantages, including various development environment selections, quick starting of working space, resource isolation and security, dynamic configuration of hardware resources and the like, and the advantages can obviously improve the working efficiency and the use experience of developers.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a multi-tenant cloud IDE system based on cloud native provided in an embodiment of the present invention;
FIG. 2 is a timing diagram of the interactions between the tenant and the IDE management service, and between the IDE management service and the controller;
fig. 3 is a timing diagram of tenant and gateway and workspace interactions.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
First, several terms referred to in this application are described:
custom resource definition (CRD, Custom Resource Definition): a CRD is an extension mechanism in Kubernetes with which a Custom Resource (CR) can be defined and registered in Kubernetes. Custom resources are similar to built-in resources such as Pod; users can manage them through Kubernetes with the same imperative commands or declarative configuration used for built-in resources.
Workspace (Workspace): a workspace is a custom resource defined by a CRD in the present invention; it abstracts a workspace resource and is described by a series of fields, including the unique number (SID) of the workspace, the container image, the open port, the data volume mount location, the operation to execute, and the allocated hardware resources, where the hardware resources include CPU, memory and persistent storage. When the workspace runs, an IDE server program is started in a Pod; the IDE server uses Code-Server, a storage volume is claimed through a PVC, and the volume is mounted into the Pod to provide persistent storage of data.
IDE Server (Code-Server): an open-source IDE server program that runs Microsoft VS Code on a remote server; it can be accessed remotely through a browser and offers most of the functionality of VS Code. Through containerized deployment, Code-Server can provide an isolated cloud development environment.
Kubernetes extension mechanism (Operator): an Operator is an extension mechanism in Kubernetes for managing and automating applications; it is a combination of Custom Resources (CRs) and controllers, and aims to simplify and automate the deployment, configuration, scaling and management of a particular application or service. The custom resource describes the attributes of a resource, and the controller is responsible for monitoring and handling events and state changes related to the custom resource objects; the controller automatically performs operations to manage the life cycle of the application according to defined rules and logic.
Container group (Pod): a Pod is a collection of related containers and the basic unit for running applications; it is a logical entity for organizing and managing these containers. The containers in a Pod share the same network namespace and storage volumes; they can work cooperatively, share resources, communicate, and together form a complete application.
Persistent Volume (PV): in Kubernetes, a PV is an abstract storage volume that represents an actual storage resource in the cluster, which may be a physical storage device, a network storage volume or a cloud storage service. A PV is independent of the Pod life cycle and can be dynamically created, deleted and managed; it defines the storage capacity, the access mode (e.g. read-write permissions), the type of persistent storage (e.g. local storage, network storage), and so on.
Persistent Volume Claim (PVC): in Kubernetes, a PVC is a request for a PV that declares a Pod's need for persistent storage. Kubernetes automatically selects and binds a suitable PV to the PVC based on the PVC's request and the PVs available in the cluster.
Gateway component (OpenResty): OpenResty is a full-featured web platform based on Nginx; by integrating Nginx with a set of powerful Lua modules and third-party libraries, it provides a high-performance, extensible web application development platform.
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Example 1:
Fig. 1 illustrates the architecture of the cloud-native multi-tenant cloud IDE system. The cloud IDE system includes a control plane, the IDE management service, data storage, a gateway, and tenant workspaces. The whole cloud IDE system is built on a cloud-native container orchestration system, a Kubernetes cluster, which provides the infrastructure for deploying and managing all applications and tenant workspaces.
The structure and function of each module are described in detail below:
1) Control plane:
The control plane contains a custom controller that is used to control and coordinate the tenant workspaces. The workspace is a custom resource defined with a Kubernetes CRD; it describes the basic properties of a workspace, including the container image name (Image), the open port (Port), the data volume mount location (Mount-Path), the command to execute (Command), and the hardware configuration of the workspace (Hardware) used at startup.
If the JSON data format is used to describe the workspace, the above description information is stored under the JSON path ".spec", representing the desired state of the workspace. The command to execute is one of two values, "Start" and "Stop", used to start and stop the workspace, and the hardware resources of the workspace include CPU, memory (Memory) and persistent storage (Storage).
In addition, each workspace has a status sub-resource stored under the JSON path ".status" that describes the actual state the workspace instance is currently in; the actual state may be "Running" or "Stopped". When a workspace has just been created it is in the stopped state; switching the actual state of the workspace is accomplished by updating its status, which is done by the controller.
The controller's main task is to reconcile the actual state of the workspace toward its desired state. It watches workspace change events from Kubernetes through the List-Watch mechanism; when a workspace is created, updated or deleted, Kubernetes pushes the corresponding type of event together with the description of the workspace object to the controller, which caches it locally and processes the event, controlling and coordinating the workspace state according to the description of the workspace object so that the resource reaches the state the user expects after reconciliation.
The control and coordination steps are as follows (a Go sketch of this reconciliation logic follows the list):
new workspace addition event: the controller only updates the local cache, and delays creation of related resources to improve the resource utilization of the system.
Workspace update event: when the Command field becomes Start, indicating that the workspace is expected to Start, creating a PVC by the controller, wherein the PVC is used for declaring the information such as the size and the type of a storage volume required by the workspace, if the PVC exists, the PVC does not need to be created, and then creating a Pod according to the description information of the workspace object by the controller, wherein the running container mirror image and an open port are designated, the minimum hardware Resource (Request Resource) required by the Pod and the maximum hardware Resource (Limittresource) which can be used are designated, and finally mounting the PVC into the Pod to provide the persistent storage of the data such as codes in the workspace; when the Command field is changed into Stop, the expected working space stops running, the controller deletes the Pod corresponding to the working space, but reserves PVC (polyvinyl chloride) so as to reserve the data of codes, programs and the like of the tenant, and the file data state can be restored to the last running state when the working space is restarted.
In addition, the controller updates the actual state of the working space according to the created state of the Pod, and when the Pod is in a Ready state, the actual state of the working space is updated to be Running (Running); when Pod is deleted, then the actual state of the update workspace is Stopped (Stopped).
Workspace deletion event: the workspace will then be unavailable and therefore all resources corresponding to the workspace, including Pod, PVC and PV, need to be cleaned, the controller deletes Pod and PVC corresponding to the workspace, and when PVC is deleted, the data in PV will be automatically deleted and reclaimed for the next reuse.
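The event handling above can be summarized in the following hedged Go sketch of a reconcile function. The kubeClient interface and the Workspace fields are stand-ins defined only for this illustration, not the actual controller code or the real client-go machinery.

```go
// reconcile.go: a simplified sketch of the controller's coordination logic for
// workspace events. kubeClient and Workspace are illustrative stand-ins.
package controller

import "context"

type Workspace struct {
	SID       string
	Image     string
	MountPath string
	Command   string // "Start" or "Stop"
	Storage   string // requested PVC size, e.g. "16Gi"
}

// kubeClient abstracts the few Kubernetes operations the controller needs.
type kubeClient interface {
	EnsurePVC(ctx context.Context, sid, size string) error // create if absent, reuse otherwise
	CreatePod(ctx context.Context, ws *Workspace) error    // run the IDE server image, mount the PVC
	DeletePod(ctx context.Context, sid string) error
	DeletePVC(ctx context.Context, sid string) error
}

type EventType int

const (
	Added EventType = iota
	Updated
	Deleted
)

// reconcile drives the actual state of a workspace toward its desired state.
func reconcile(ctx context.Context, k kubeClient, ev EventType, ws *Workspace) error {
	switch ev {
	case Added:
		// Only the local cache is updated; resource creation is deferred until start.
		return nil
	case Updated:
		if ws.Command == "Start" {
			// Create (or reuse) the PVC, then create the Pod and mount the PVC.
			if err := k.EnsurePVC(ctx, ws.SID, ws.Storage); err != nil {
				return err
			}
			return k.CreatePod(ctx, ws)
		}
		// "Stop": delete the Pod but keep the PVC so code and data survive.
		return k.DeletePod(ctx, ws.SID)
	case Deleted:
		// Clean up everything belonging to the workspace.
		if err := k.DeletePod(ctx, ws.SID); err != nil {
			return err
		}
		return k.DeletePVC(ctx, ws.SID)
	}
	return nil
}
```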
The container images used above are images built in advance from Code-Server; they contain various programming languages and development tools such as C++, Java and Go, and besides the specific compilation tools they include common auxiliary tools such as Make and Git. After a Pod is created from such an image, the tenant can access the Code-Server in it through a browser, obtaining an isolated integrated development environment and Linux environment.
The PVC described above is automatically bound by Kubernetes to a matching PV; in the present invention the underlying storage used by the PV is provided by the Network File System (NFS). Because all data in a Pod is deleted together with the Pod, persistent storage volumes are needed if data must be retained. Persistent storage types include local storage and network storage, but local storage has a problem: when a Pod drifts, i.e. it is deleted and restarted on another node, the storage created earlier cannot be accessed, making the data unreachable. Network storage solves this problem. The invention therefore uses NFS to persist the data a tenant generates while using a workspace: the corresponding storage is created as a PV, and data such as the tenant's code is persisted by binding the PV through a PVC and mounting it into the Pod.
Managing workspaces requires communication with Kubernetes, and direct communication from too many components would put excessive load on it. The controller therefore also provides a set of interfaces, exposed over the gRPC protocol, through which the IDE management service can manage workspaces, including querying, creating, starting, stopping and deleting them. In this way the load on Kubernetes is reduced and the IDE management module is decoupled from it.
2) IDE management service:
The IDE management service is an important component of the cloud IDE system. It contains a Web server that handles tenant login authentication and the management of workspaces and workspace templates. Workspace templates are pre-built images of various programming environments, and the related information is stored in a database.
Through Web access, the tenant may perform the following operations:
1. login authentication: the tenant can log into the cloud IDE system by providing credentials to ensure that only authorized tenants can access the system.
2. Workspace management: the tenant can browse the workspaces already created and view their state and other information. The tenant can also create new workspaces, select a workspace template suited to their needs, and specify the workspace name and other configuration options. Once the workspace is created successfully, the tenant can start it and access it for development work. The tenant can also stop the workspace to release resources and pause work.
3. Limits and controls: the system sets restrictions for each tenant. Each tenant is allowed to create and keep at most 10 workspace instances, to prevent resource abuse. Furthermore, the system limits each tenant to running only one workspace instance at a time to ensure efficient use of resources.
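The quota rules in item 3 could be enforced with a simple check like the sketch below. The constants follow the limits stated above, while the WorkspaceRecord shape and function names are assumptions of this example.

```go
// quota.go: a sketch of enforcing the per-tenant limits described above
// (at most 10 workspaces in total, at most one running at a time).
package quota

import "errors"

const (
	maxWorkspacesPerTenant = 10
	maxRunningPerTenant    = 1
)

type WorkspaceRecord struct {
	ID      string
	Running bool
}

// CheckCreate returns an error if the tenant already owns the maximum number
// of workspaces.
func CheckCreate(existing []WorkspaceRecord) error {
	if len(existing) >= maxWorkspacesPerTenant {
		return errors.New("workspace limit reached: at most 10 workspaces per tenant")
	}
	return nil
}

// CheckStart returns an error if the tenant already has a running workspace.
func CheckStart(existing []WorkspaceRecord) error {
	running := 0
	for _, ws := range existing {
		if ws.Running {
			running++
		}
	}
	if running >= maxRunningPerTenant {
		return errors.New("only one workspace may run at a time per tenant")
	}
	return nil
}
```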
The IDE management service functions rely on the database and cache in the data storage module and on the workspace management interface provided by the controller in the control plane. The Web server stores tenant accounts, workspace templates and workspace information in the database, and stores information about running workspaces in the cache component; it communicates with the controller over the gRPC protocol and calls the workspace management interface the controller provides to manage workspaces.
3) And a data storage module:
the data storage module comprises three components: database, cache, and Network File System (NFS), each of which functions as follows:
1. Database: used to store tenant accounts, workspace template information and the workspaces created by tenants. A tenant account contains authentication information such as user name and password and is used for login and authentication. Workspace template information contains data about the various pre-built programming environment images, such as name, description and workspace image name. The workspace information created by a tenant includes the workspace configuration, such as workspace name, ID, image name and hardware configuration. Through the database, the system can persist and manage these data so that tenants can access and manage their workspaces at different points in time.
2. Cache: used to store the state information of running workspaces. The state of a workspace includes the tenant ID, workspace ID, running state, hardware configuration, start-up time, and so on. By keeping this information in a cache, the system can quickly access and update workspace state so that it can be displayed in real time in the user interface (a sketch of such a cache entry follows this list).
3. Network File System (NFS): used to store data such as code and programs generated by tenants while using their workspaces. NFS provides a network file system solution; by storing data in NFS, persistence is guaranteed and tenant data remains available even when the workspace is stopped or restarted.
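A sketch of the running-workspace state kept in the cache component follows. The fields mirror the list above; the mutex-protected in-memory map is only a stand-in, since the document does not name the cache technology actually used.

```go
// runtimecache.go: a sketch of the state stored for each running workspace.
// The map-based Store stands in for the real cache component.
package runtimecache

import (
	"sync"
	"time"
)

// RunningWorkspace is the per-workspace state kept while it is running.
type RunningWorkspace struct {
	TenantID  string
	SID       string
	State     string // e.g. "Running"
	CPU       string
	Memory    string
	StartedAt time.Time
}

// Store is a minimal in-memory stand-in for the cache component.
type Store struct {
	mu sync.RWMutex
	m  map[string]RunningWorkspace // keyed by workspace SID
}

func NewStore() *Store {
	return &Store{m: make(map[string]RunningWorkspace)}
}

func (s *Store) Put(ws RunningWorkspace) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[ws.SID] = ws
}

func (s *Store) Get(sid string) (RunningWorkspace, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	ws, ok := s.m[sid]
	return ws, ok
}

func (s *Store) Delete(sid string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.m, sid)
}
```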
In summary, the data storage module provides persistent storage and management capabilities for tenant information, workspace template information, workspace status, and data in the tenant workspace through components such as databases, caches, NFS, etc., to support various functions and requirements of the cloud IDE system.
4) Gateway:
The gateway contains an OpenResty component, which, building on the capabilities of Nginx, can act as a layer-7 load balancer and reverse proxy. In this system the gateway plays the following three roles:
1. load balancing: as a load balancing component, the gateway distributes tenant requests to individual nodes in the IDE management service cluster. Through the configuration file, the upstream server cluster can be defined, so that the load balance of the request is realized, each node can process proper request load, and the performance and the expandability of the system are improved.
2. Workspace service discovery: an interface for workspace service discovery is provided, allowing the controller to register a workspace with the gateway after it has started. In this way the gateway can dynamically obtain the address and state information of workspaces and use it for request routing, ensuring that tenant requests are correctly proxied into the corresponding workspace.
3. Reverse proxy: the gateway resolves the tenant's request when a workspace is accessed, looks up the address of the corresponding workspace in the set of workspaces registered by the controller, and reverse-proxies the request to that workspace. The IDE server of a tenant workspace runs in a Pod, and each Pod has an IP address reachable within the cluster; a reverse proxy component is therefore required to forward the tenant's request into the appropriate workspace.
The implementation method is as follows: the HTTP interface is written by using the Lua script, and the controller uses the interface to complete the registration and the de-registration of the working space. When the workspace is started, registering it in the gateway; when the workspace stops, it is logged off the gateway.
Each workspace is assigned a unique number (SID). During registration, the SID and "IP address:port" are saved as a key-value pair in OpenResty's shared memory. When the tenant access path is "/ws/${SID}", the gateway parses the SID out of the access path (URL), looks up the IP address and port of the workspace in the shared memory, and then completes the reverse proxy using that IP address and port. When the workspace stops, the corresponding key-value pair is deleted from the shared memory.
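The patent implements this routing with Lua scripts and OpenResty shared memory. Purely as an illustration, equivalent logic might look like the Go sketch below, where the registry map stands in for the shared-memory key-value store and path rewriting of the "/ws/{SID}" prefix is omitted for brevity.

```go
// proxy.go: an illustrative Go equivalent of the gateway routing described
// above (the patent uses OpenResty/Lua). Requests of the form /ws/{SID} are
// resolved to a registered "ip:port" and reverse-proxied.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
	"sync"
)

var (
	mu       sync.RWMutex
	registry = map[string]string{} // SID -> "ip:port", filled on registration
)

func proxyWorkspace(w http.ResponseWriter, r *http.Request) {
	// Extract the SID from a path like /ws/{SID}/... .
	rest := strings.TrimPrefix(r.URL.Path, "/ws/")
	sid := strings.SplitN(rest, "/", 2)[0]

	mu.RLock()
	addr, ok := registry[sid]
	mu.RUnlock()
	if !ok {
		http.Error(w, "workspace not found or not running", http.StatusNotFound)
		return
	}

	// Forward the request to the workspace Pod's cluster-internal address.
	target := &url.URL{Scheme: "http", Host: addr}
	httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
}

func main() {
	// Example registration; in the real system the controller registers and
	// deregisters workspaces through the service discovery interface.
	registry["abc123"] = "10.42.0.15:8080"

	http.HandleFunc("/ws/", proxyWorkspace)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```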
Through the implementation, the gateway can effectively load balance tenant requests and dynamically proxy the tenant requests to corresponding workspaces, so that the functional requirements of the cloud IDE system are met.
In summary, the highly extensible architecture implemented by the cloud-native-based multi-tenant cloud IDE system provided by the embodiment of the present invention can provide a stable, isolated, safe and flexible development environment for users, and is specifically as follows:
1. the development environments of different programming languages are contained by utilizing a plurality of pre-constructed container images, so that diversified choices are provided for users. The user can select a proper development environment according to own requirements without self configuration, and greater flexibility and convenience are provided.
2. Based on the pre-built image, the invention can quickly create and start an IDE in a few seconds when starting the workspace. The user does not need to pay attention to the complicated development environment configuration, and can access the IDE only through the browser, so that the convenience of use is greatly improved. Meanwhile, as the development environment runs on the cloud, the occupation of local resources is small, and the burden of a local computer is reduced.
3. In the invention, the workspaces of multiple tenants run in isolated containers, so tenants do not interfere with one another and higher security is provided. Each tenant obtains an independent development environment, protecting the privacy and safety of the users' code and data.
4. The invention can dynamically configure the hardware configuration of a workspace through the resource quota function provided by Kubernetes. Relying on the powerful configuration of the server, the IDE runs more smoothly and programs compile and build faster, improving working efficiency. Users can configure resources flexibly according to their own needs without worrying about insufficient resources.
Example 2:
FIG. 2 is a timing diagram of the interactions between the tenant and the IDE management service and between the IDE management service and the controller; FIG. 3 is a timing diagram of the interactions between the tenant, the gateway and the workspace. The flow of a tenant requesting the IDE management service and a workspace is described below in conjunction with FIG. 2 and FIG. 3.
1) Creating a workspace
When a tenant creates a workspace, a workspace template and the hardware configuration of the workspace must be selected. The IDE management service is accessed via HTTP requests; the workspace templates include C++, Java, Go and other development environments, and several preset hardware specifications are provided, for example (the same presets are shown as data after the list):
professional type (8 CPU, 16G memory, 32G storage)
Computing type (4 CPU, 4G memory, 16G storage)
Standard (2 CPU, 2G memory, 4G storage)
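Expressed as data, the presets listed above might look like the following; the struct shape and Go identifiers are assumptions of this sketch, not part of the patent.

```go
// presets.go: the preset hardware specifications listed above, expressed as
// data. The struct shape and identifiers are illustrative assumptions.
package presets

type HardwareSpec struct {
	Name    string
	CPU     int    // number of CPU cores
	Memory  string // as listed in the document
	Storage string // persistent storage size
}

var Presets = []HardwareSpec{
	{Name: "Professional", CPU: 8, Memory: "16G", Storage: "32G"},
	{Name: "Computing", CPU: 4, Memory: "4G", Storage: "16G"},
	{Name: "Standard", CPU: 2, Memory: "2G", Storage: "4G"},
}
```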
The tenant's request contains a workspace template ID and a hardware specification ID. The Web server processes the request and creates a record in the database; the workspace record includes the tenant ID, workspace ID, the container image used, the hardware specification, the creation time and other information.
2) Starting up a workspace
When the tenant starts the workspace, the IDE management service and the controller perform the following processing steps:
(1) The database is queried for detailed information about the workspace the tenant wants to start;
(2) The workspace start interface provided by the controller is called via gRPC; the request contains the container image used by the workspace template, the data volume mount directory and the hardware specification;
(3) The controller detects that the workspace is to be started and creates a PVC (created on the first start, reused afterwards);
(4) A Pod is created from the container image in the request, the PVC is mounted into the data volume mount directory, and the Pod's minimum required resources and maximum usable resources are set (a client-go sketch of this step appears after these steps);
(5) After the Pod has been created and is in the Ready state, the workspace is registered through the gateway's service discovery interface, so that the gateway can dynamically reverse-proxy tenant requests to the corresponding workspace;
(6) Finally, the information of the started working space instance is stored in a cache component;
Through the above steps, the workspace is started and registered with the gateway. The tenant can then access the started workspace by carrying the workspace ID in the access path.
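Step (4), creating the Pod with resource requests/limits and the PVC mount, could look roughly like the client-go sketch below. The function name, namespace and labels are assumptions of this illustration; for brevity the requests and limits are set to the same values, although the real system may set them separately.

```go
// pod.go: a sketch of building the workspace Pod of step (4), with resource
// requests/limits and the PVC mounted at the data volume directory.
package controller

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func buildWorkspacePod(sid, image, mountPath, cpu, memory string) *corev1.Pod {
	// Requests are the minimum guaranteed resources; limits are the maximum usable.
	res := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse(cpu),
			corev1.ResourceMemory: resource.MustParse(memory),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse(cpu),
			corev1.ResourceMemory: resource.MustParse(memory),
		},
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "workspace-" + sid,
			Namespace: "workspaces",
			Labels:    map[string]string{"workspace-sid": sid},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "code-server",
				Image:     image,
				Resources: res,
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workspace-data",
					MountPath: mountPath, // data volume mount directory
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "workspace-data",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "workspace-pvc-" + sid,
					},
				},
			}},
		},
	}
}
```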
3) Stopping working space
When the workspace is stopped, the IDE management service and the controller perform the following processing steps:
(1) Deleting corresponding working space operation instance information from the cache;
(2) Calling a GRPC service interface provided by the controller to stop the working space, wherein the request carries the name and ID of the working space;
(3) The controller detects that the working space is stopped, firstly queries the local cache, checks whether the corresponding Pod exists, and deletes the corresponding Pod if the corresponding Pod exists;
(4) Logging out the working space from the gateway;
Through the above steps, the workspace is stopped and deregistered from the gateway, preventing the tenant from continuing to access it after it has stopped, which would otherwise cause errors and security problems.
4) Deleting workspaces
When a tenant deletes a working space, related resources need to be cleaned, which comprises the following steps:
(1) Checking whether the working space is running, and if so, reminding the tenant to stop the running of the working space before continuing to delete;
(2) Deleting the record corresponding to the working space from the database;
(3) Calling a GRPC service interface provided by the controller to delete the working space;
(4) The controller discovers that the working space is deleted, then deletes the corresponding PVC, and the data in the tenant working space can be automatically cleaned;
Through the above steps, the cleanup of workspace resources is completed so that the resources can be recycled.
5) Accessing a workspace
When a tenant accesses a workspace, the access path (URL) first passes through the gateway and has the form "/ws/${SID}", where SID is the ID of the workspace to be accessed; the workspaces of different tenants are distinguished by this ID.
First, the gateway component parses the SID out of the URL and looks up the IP address and port in the shared memory according to the SID;
Then the gateway reverse-proxies the tenant's request into the corresponding workspace.
it should be noted that in this application, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. The multi-tenant cloud IDE management system based on cloud native is characterized by comprising five modules, namely data storage, IDE management service, a control plane, tenant workspaces and a gateway;
the deployment of the five modules is based on the cloud-native container orchestration system Kubernetes;
the data storage module provides a data persistence function and supports the IDE management service and the tenant workspaces;
the IDE management service provides a workspace management service for tenants through an open HTTP interface, and implements the workspace management functions by invoking remote procedure calls (gRPC) provided by the control plane;
the tenant workspace consists of a plurality of isolated workspaces, and each workspace comprises an integrated development environment server Code-Server running in a Kubernetes container Pod, a persistent volume claim PVC and a persistent volume PV;
the gateway is used for load balancing of IDE management services, providing service discovery of workspaces for a control plane, and providing reverse proxy of tenant workspaces.
2. The cloud-native based multi-tenant cloud IDE management system of claim 1, wherein the control plane has three functions: controlling tenant workspaces through Kubernetes; providing a management interface for the IDE management service; and registering tenant workspaces with the gateway through the service discovery interface provided by the gateway.
3. The cloud-native based multi-tenant cloud IDE management system of claim 1, wherein the data storage module comprises a database component, a cache component, and a network file system component;
the database component is used for storing workspace templates, tenant workspaces and tenant accounts for the IDE management service; the cache component is used for storing information about running workspaces; and the network file system component is used for storing plug-ins and code generated in the tenant workspaces.
4. The cloud-native-based multi-tenant cloud IDE management system of claim 1, wherein the IDE management service module comprises a Web server cluster, the Web servers providing workspace template browsing, tenant workspace management and workspace access; a workspace template is a pre-built container image, different workspace templates contain development environments for different languages, and tenants select a template to create a workspace with the corresponding capabilities; the tenant workspace management functions include creating, starting, stopping and deleting workspaces.
5. The cloud-native based multi-tenant cloud IDE management system of claim 1, wherein the IDE management service module stores data about tenants and workspaces using the database provided by the data storage module; a workspace belongs to exactly one tenant, a tenant can own a plurality of workspaces, but at any given time a tenant can have at most one workspace in a running state; the IDE management module implements the corresponding workspace management functions by calling the gRPC interface provided by the control plane.
6. The cloud-native based multi-tenant cloud IDE management system of claim 1, wherein the control plane comprises a controller that controls and coordinates the state of the workspaces through Kubernetes; the workspace is defined using a Kubernetes Custom Resource Definition (CRD), so that the resource can be created and deleted through Kubernetes, and its desired state and actual state are described by a configuration file; the controller controls and coordinates the workspace according to its desired state.
7. The cloud-native based multi-tenant cloud IDE management system of claim 1, wherein the description fields in the configuration file of the workspace comprise a unique number, a container image, hardware resources, a listening port, a data volume mount directory and an execution command; the hardware resources comprise the number of CPU cores, the memory and the persistent storage size, and the execution command comprises starting and stopping the workspace.
8. The cloud-native based multi-tenant cloud IDE management system of claim 1, wherein the controller obtains workspace event changes from Kubernetes through a List-Watch mechanism, the event changes including add, update and delete events.
9. The cloud-native based multi-tenant cloud IDE management system of claim 8, wherein the controller coordinates the actual state of the workspace according to the acquired events by:
add event: the controller only updates its local cache;
update event: the controller performs different operations according to the execution command;
delete event: the controller deletes the Pod and PVC corresponding to the workspace.
10. The cloud-native based multi-tenant cloud IDE management system of claim 9, wherein the controller, when creating the Pod, limits the resources available to the Pod according to the hardware resources in the workspace configuration file, including defining the minimum resources required for the Pod to run (resource requests) and the maximum resources it may use (resource limits);
the controller provides a set of gRPC interfaces including functions to create, start, stop, delete and query workspaces.
11. The cloud-native based multi-tenant cloud IDE management system of claim 10, wherein the controller, upon launching the workspace, registers an IP address and port of the workspace into the gateway through a service discovery interface provided by the gateway such that the gateway reverse proxies the tenant's request into the workspace; after the working space is stopped, the IP address and the port of the working space are deleted from the gateway;
the workspaces are associated with tenant IDs through unique numbers, so that the tenant workspaces are distinguished; the workspaces of each tenant run in an isolated container, the workspaces between tenants do not affect each other.
12. The cloud-native based multi-tenant cloud IDE management system of claim 11, wherein the gateway is comprised of gateway components that provide load balancing, reverse proxy, and workspace service discovery functions; the load balancing function ensures that requests can be evenly distributed to IDE management services; the workspace service discovery and reverse proxy function allows the gateway to dynamically acquire the available workspaces and forward requests as a middle tier from external clients to the corresponding workspaces.
13. The cloud-native based multi-tenant cloud IDE management system of claim 12, wherein when a tenant accesses the IDE management service and the workspace, the gateway serves as an entry for traffic, and all requests will flow in, implemented as follows:
accessing an IDE management service: the gateway distributes the request to a Web server of the IDE management service through load balancing, the Web server queries and stores data through a database of the data storage module, and management operation of the tenant working space is completed through a GRPC interface provided by the controller;
accessing a workspace: the tenant request carries the unique number SID of the workspace; the gateway extracts the SID from the request, queries the set of workspaces registered by the controller, obtains the IP address and port of the workspace from it, and then reverse-proxies the request into the workspace.
CN202311510625.8A 2023-11-14 2023-11-14 A multi-tenant cloud IDE management system based on cloud native Pending CN117560373A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311510625.8A CN117560373A (en) 2023-11-14 2023-11-14 A multi-tenant cloud IDE management system based on cloud native

Publications (1)

Publication Number Publication Date
CN117560373A true CN117560373A (en) 2024-02-13

Family

ID=89814015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311510625.8A Pending CN117560373A (en) 2023-11-14 2023-11-14 A multi-tenant cloud IDE management system based on cloud native

Country Status (1)

Country Link
CN (1) CN117560373A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118132200A (en) * 2024-03-11 2024-06-04 上海和今信息科技有限公司 Project caching method, system, device and medium based on cloud native development environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination