Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The term "and/or" is used herein to describe an association relationship of associated objects, and means that there may be three relationships, for example, a and/or B, and that there may be three cases where a exists alone, while a and B exist together, and B exists alone. The symbol "/" herein indicates that the associated object is or is a relationship, e.g., A/B indicates A or B.
The terms "first" and "second" and the like in the description and in the claims are used for distinguishing between different objects and not for describing a particular sequential order of objects. For example, the first response message and the second response message, etc. are used to distinguish between different response messages, and are not used to describe a particular order of response messages.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "such as" should not be construed as being preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise specified, the meaning of "plurality" means two or more, for example, a plurality of processing units means two or more processing units and the like, and a plurality of elements means two or more elements and the like.
Before the technical scheme of the application is introduced, several technical terms related to the technical scheme are explained as follows:
A software system is a complex aggregate that encompasses multiple components and layers in order to provide specific functions and services. Software systems typically contain elements such as applications, databases, middleware, operating systems, network protocols and communications, security, user interfaces, dependency libraries and frameworks, configuration files, test code, and the like. The delivery process of a software system is divided into a plurality of stages, including project initiation, requirement investigation, system development, system trial operation, system popularization, continuous integration and continuous deployment, monitoring and feedback, blue-green deployment and gray release, containerization and infrastructure as code, project closure, and the like. The continuous integration and continuous deployment stage is a key link in realizing rapid software delivery; it accelerates the whole software delivery flow from code writing to deployment through automation and process optimization. The practice of this stage mainly includes automated construction, automated testing, and automated deployment.
In software delivery, "local" refers to the specific customer site where the software is deployed, including the customer's data centers or branch offices, as well as the software and hardware environments deployed at these locations.
An execution plan specifies the change operations that need to be executed to transform the current state into the target state. The change operations may include CREATE, DELETE, and UPDATE operations. A CREATE operation is an operation that requires creation of a new resource. A DELETE operation is an operation that requires removal of an old resource. An UPDATE operation is an operation that requires modification of an existing resource. These operations ensure that changes to the resources can be accurately performed as planned.
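For illustration, a minimal Python sketch of such an execution plan is given below; it represents the plan as an ordered list of CREATE/DELETE/UPDATE operations applied to a resource registry. The names (OpType, ChangeOp, apply_plan, registry) are illustrative assumptions and not part of the claimed implementation.

```python
# Minimal sketch (not the claimed implementation): an execution plan as an
# ordered list of CREATE / DELETE / UPDATE operations. All names are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List, Optional


class OpType(Enum):
    CREATE = "CREATE"   # create a new resource
    DELETE = "DELETE"   # remove an old resource
    UPDATE = "UPDATE"   # modify an existing resource


@dataclass
class ChangeOp:
    op: OpType
    resource_id: str
    desired_state: Optional[dict] = None  # None for DELETE


def apply_plan(plan: List[ChangeOp], registry: Dict[str, dict]) -> None:
    """Apply each change operation so the current state converges to the target state."""
    for step in plan:
        if step.op is OpType.CREATE:
            registry[step.resource_id] = dict(step.desired_state or {})
        elif step.op is OpType.DELETE:
            registry.pop(step.resource_id, None)
        else:  # UPDATE: modify the existing resource in place
            registry.setdefault(step.resource_id, {}).update(step.desired_state or {})
```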
Next, the technical scheme provided by the application is described.
Generally, the flow design for delivery of a complex software system is very complex. In the continuous integration and continuous deployment phase, designers must develop a deep understanding of a number of key factors, such as compatibility among the subsystems of the software system, dependencies among steps, the operating time of each step, the performance criteria of the delivery tool, and the technical limitations of each subsystem. In addition, they need to combine the actual deployment configuration information of the customer to accurately design the delivery flow. This process is time consuming and laborious and typically requires the involvement of experienced experts. In the face of numerous customer office points with disconnected networks and geographic isolation, expert resources become a bottleneck, which seriously affects the efficiency of delivery operations.
In view of this, an embodiment of the present application provides an execution plan generation method, which may utilize a change flow template designed by a technical expert according to version baseline data and the expert's own working experience, and then render the change flow template with the received office point information to generate a change flow for a specific office point. The method helps reduce the technical threshold of delivery flow design for complex software systems, relieve the shortage of expert resources, and reduce the delivery cost.
The method also utilizes version baseline data to automatically schedule a plurality of change steps in the change flow of each office point, and creates a change plan which comprehensively considers factors such as compatibility among subsystems, dependency among steps, operation time of each step, delivery tool performance standard, technical limits of each subsystem and the like. Such a change plan can ensure more efficient and accurate execution of the delivery flow.
Fig. 1 is a schematic structural diagram of a software management system according to an embodiment of the present application. As shown in fig. 1, the software management system 100 may include an information gathering tool 110, a design tool 120, and an implementation tool 130.
The information collection tool 110 is used to collect the office point information required for delivery design. The office point information can be information related to the software system deployed at the customer's office point, such as the inventory, version numbers, deployment scale, deployment form, and nonstandard configurations of the software system. The office point information may be stored in a configuration management database (CMDB), in some configuration files, or in various other systems. The inventory of the software system lists all software products and components that have been deployed at the customer site, ensuring that the delivery team knows about all software entities present in the environment. The version number records the version information of each software component in the software system. The deployment scale describes the scale of the software system, including the number of users, the concurrency, the data volume, etc. The deployment form specifies the deployment architecture of the software system, such as a monolithic application, micro-services, containerized deployment, or cloud services. Nonstandard configuration records the nonstandard configurations of the software system at the customer site, which may be customized to meet specific business needs.
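For illustration only, the following sketch shows one possible shape of office point information as a Python record; the field names (inventory, versions, deployment_scale, deployment_form, nonstandard_config) are assumptions and not the actual CMDB or configuration file format.

```python
# Illustrative sketch only: one possible structure for office point information.
office_point_info = {
    "office_point": "site-A",                               # customer office point identifier
    "inventory": ["database", "storage_pool", "network"],   # deployed subsystems
    "versions": {"database": "2.3.1", "storage_pool": "1.8.0"},
    "deployment_scale": {"users": 12000, "concurrency": 800, "data_volume_gb": 5120},
    "deployment_form": "containerized",                     # e.g. monolith / micro-service / cloud
    "nonstandard_config": {"database": {"custom_port": 3307}},
}
```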
Illustratively, as shown in FIG. 1, the information collection tool 110 may include a data collection framework 111, a data collection script 112, and a data desensitization module 113.
The data collection framework 111 may provide a systematic set of methods to collect, process, and store data. It can integrate a variety of data sources, such as databases, file systems, and network interfaces, and provide a unified access point for data collection. The data collection framework 111 is suited to tools that handle large-scale data streams, such as Apache Flume and Logstash, ensuring stable transmission of data. Developers can add or modify data collection and processing components as needed to accommodate different data collection scenarios. In addition, the data collection framework 111 has a fault tolerance mechanism to ensure recovery in the event of a failure during data collection, thereby ensuring data integrity. The data collection framework 111 can perform preprocessing operations, including filtering, conversion, aggregation, etc., before the data enters the storage system. Meanwhile, the data collection framework 111 also includes functions for monitoring the data collection status and logging, to facilitate problem investigation and analysis.
In the embodiment of the present application, the software management system 100 uses a customized data collection framework 111 to instruct each subsystem in the software system to provide, as required, the data sources, such as office point information, that need to be collected from that subsystem, and then performs unified scheduling.
The data collection script 112 is custom written to meet specific data collection requirements and enables personalized collection from specific data sources. The data collection script 112 may be configured to run automatically, thereby reducing manual operations and improving work efficiency. The data collection script 112 can contain complex logic to handle various data collection scenarios, including condition judgment, loop processing, and the like. It can also parse and process data in different formats, such as JavaScript object notation (JSON), extensible markup language (XML), comma-separated values (CSV), etc. In addition, the data collection script 112 can contain error handling mechanisms, such as retry logic, to address temporary failures that may occur during the data collection process. The data collection script 112 may also be used to test and verify the data collection process, ensuring that the collected data is both accurate and valid.
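A hedged sketch of such a script is given below: it parses JSON or CSV data sources and retries on temporary failures. The function names and retry policy are illustrative assumptions, not the actual data collection script 112.

```python
# Illustrative sketch: parse JSON or CSV data sources with simple retry logic.
import csv
import io
import json
import time


def parse_source(raw: str, fmt: str) -> list:
    """Parse raw text in a known format into a list of records."""
    if fmt == "json":
        data = json.loads(raw)
        return data if isinstance(data, list) else [data]
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(raw)))
    raise ValueError(f"unsupported format: {fmt}")


def collect_with_retry(read_fn, fmt: str, retries: int = 3, delay_s: float = 2.0) -> list:
    """Call read_fn() and parse its output, retrying on temporary failures."""
    for attempt in range(1, retries + 1):
        try:
            return parse_source(read_fn(), fmt)
        except (OSError, ValueError):
            if attempt == retries:
                raise
            time.sleep(delay_s)  # back off before the next attempt
    return []
```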
In embodiments of the present application, the data collection framework 111 provides a stable, extensible platform to support complex data collection tasks, while the data collection script 112 provides flexibility and customization to accommodate specific data collection requirements. The two are combined for use, so that the collection work of local point information can be effectively completed.
The data desensitization module 113 performs desensitization processing on the office point information after receiving it. For example, the data desensitization module 113 filters out sensitive information in the office point information to prevent leakage of the customer's sensitive information. The sensitive information may be information such as the customer's internet protocol (IP) address, passwords, keys, etc.
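The following minimal sketch illustrates one way such desensitization could work, assuming sensitive values follow common patterns (IPv4 addresses, fields named "password"/"key"); the patterns and masking rules are assumptions, not the actual data desensitization module 113.

```python
# Minimal desensitization sketch under the stated assumptions.
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SENSITIVE_KEYS = {"password", "key", "secret", "token"}


def desensitize(record: dict) -> dict:
    """Return a copy of the record with sensitive values masked."""
    cleaned = {}
    for name, value in record.items():
        if name.lower() in SENSITIVE_KEYS:
            cleaned[name] = "***"                       # drop credentials entirely
        elif isinstance(value, str):
            cleaned[name] = IPV4.sub("x.x.x.x", value)  # mask embedded IP addresses
        else:
            cleaned[name] = value
    return cleaned
```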
The design tool 120 is used for generating a change plan of the office point according to the office point information, the change flow template and the version baseline data.
Illustratively, as shown in FIG. 1, the design tool 120 may include a template management module 121, a data management module 122, a flow orchestration module 123, and a plan generation module 124.
The template management module 121 is used for managing the change flow template and providing the change flow template to the flow orchestration module 123. The change flow template is a standardized document that aims to guide each step of change management in a project or organization. It helps ensure the transparency, consistency, and normalization of the change process, thereby minimizing the impact of changes on organization and business operations. The change flow templates may include change application forms, change management flows, change management system templates, project change management form templates, project plan change flow templates, change management plan templates, change management roadmap templates, change request templates, change control flow templates, and the like. By using these templates, the design tool 120 can more effectively manage and control changes in the project, thereby ensuring achievement of project goals.
In the embodiment of the application, the change flow template is designed by a technical expert according to version baseline data and the expert's own working experience. The change flow template supports determining, based on the office point information, whether a certain flow node is expanded and whether a certain flow node is generated. A flow node represents a delivery step of the software system at the time of delivery. For example, the change flow template may support determining whether a corresponding flow node is expanded based on fields in the office point information and the version baseline data. For another example, the change flow template may support determining whether to generate multiple flow nodes for a subsystem deployed in multiple copies based on the office point information.
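For illustration, the sketch below shows how a change flow template might be rendered against office point information: each template node carries a condition controlling whether it is expanded, and an optional per-copy flag controlling whether multiple flow nodes are generated. The template schema shown is an assumption for illustration only.

```python
# Illustrative rendering of a change flow template (schema assumed, not actual).
def render_template(template: list, info: dict) -> list:
    """Render a change flow template against office point information."""
    flow = []
    for node in template:
        # Expand the node only if its condition on the office point info holds.
        if not node["condition"](info):
            continue
        if node.get("per_copy_of"):
            # Generate one flow node per deployed copy of the subsystem.
            copies = info.get("copies", {}).get(node["per_copy_of"], 1)
            flow += [{"step": f'{node["step"]}#{i}'} for i in range(1, copies + 1)]
        else:
            flow.append({"step": node["step"]})
    return flow


# Example template: the database backup node is expanded only when a database
# is deployed; the storage upgrade node is generated once per deployed copy.
template = [
    {"step": "backup-database", "condition": lambda info: "database" in info["inventory"]},
    {"step": "upgrade-storage", "condition": lambda info: True, "per_copy_of": "storage_pool"},
]
```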
The data management module 122 is configured to manage the preset version baseline data, for example, dynamically and adaptively adjusting the version baseline data according to version changes, and may provide the version baseline data to the plan generation module 124. The version baseline data refers to the stable state of a project at a specific point in time in project management and software development, and this state covers various aspects of the software, such as documents, designs, code, configurations, and test cases. A version baseline is a key tool for ensuring project consistency and stability; it provides a reference point and baseline for the project, helping to control project variations and risks.
In the embodiment of the application, the version baseline data is used for instructing how the flow nodes of the change flow are scheduled. The version baseline data may include data such as upgrade paths (describing which source versions can be upgraded to which target versions), compatibility baselines between subsystems, the estimated delivery duration of each subsystem, performance baselines of delivery tools, dependencies between subsystems, the change order among multiple logic modules or subsystems, product materials, and the like. Version baseline data is typically carried in unstructured documents and cannot be consumed directly by code.
The version baseline data may also include information such as change time window plan values, concurrency, etc. The change time window plan value generally refers to the time window reserved for change operations in project management and system maintenance. This time window is used to plan and implement system changes so as to reduce the impact on the business. For example, in the maintenance of an information technology (IT) system or software, the change time window may be a low-traffic period, so that system upgrades or configuration changes can be made without affecting users. Planning the change time window needs to consider factors such as business requirements, system stability, and the work arrangement of the maintenance team.
Concurrency refers to the number of requests that a system can handle at the same time, reflecting the load capacity and performance of the system. In multi-threaded programming and system design, high concurrency means that more threads or processes can be managed and scheduled for execution by system resources (e.g., central processing units (CPUs), memory, etc.) simultaneously. The degree of concurrency directly affects the performance and response capability of the system. For example, in performance testing, the performance of a system under high load, including response time and throughput, can be evaluated by simulating a high-concurrency scenario. Concurrency may be optimized by adding hardware resources, optimizing code logic, using concurrency control mechanisms (e.g., mutexes, semaphores), etc.
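For illustration, the sketch below consolidates the version baseline fields described above into a structured Python record; the field names and example values are assumptions rather than the actual baseline format.

```python
# Illustrative sketch: version baseline data expressed as structured data
# instead of an unstructured document. All field names are assumptions.
version_baseline = {
    "upgrade_paths": {"database": {"2.3.1": ["3.0.0", "3.1.0"]}},   # source -> target versions
    "compatibility": [("database>=3.0.0", "storage_pool>=2.0.0")],  # compatibility baselines
    "estimated_duration_min": {"database": 90, "storage_pool": 45}, # estimated delivery duration
    "tool_performance": {"max_parallel_steps": 4},                  # delivery tool performance baseline
    "dependencies": {"storage_pool": ["database"]},                 # storage_pool changes after database
    "change_window_min": 240,                                       # change time window plan value
    "concurrency": 2,
}
```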
After receiving the user requirements input by the user and the office point information collected by the information collection tool 110, the flow orchestration module 123 may input the user requirements and the office point information into the change flow template, render the change flow template, and determine which flow nodes are expanded and which flow nodes are generated. The flow orchestration module 123 may construct a change flow for the office point based on the expanded flow nodes and/or the generated flow nodes.
The change flow may exist in the form of a deliverable, so that the change flow can be imported into the implementation tool at the customer site to make the change. In some cloud service scenarios, the customer's network environment and the service that generates the change plan are not in the same network environment, so the change flow needs to be transmitted as an offline package.
If the design tool 120 receives a user requirement entered by a user, the user requirement may be passed to the flow orchestration module 123. The flow orchestration module 123 may input the user requirement and the office point information into the change flow template, thereby obtaining the change flow of the office point. Optionally, after the change flow is orchestrated, the flow orchestration module 123 may integrate the operation node corresponding to the user requirement into the change flow, so as to obtain the integrated change flow.
The plan generation module 124 is configured to automatically schedule the plurality of flow nodes (i.e., delivery steps) in the change flow of the office point using the version baseline data to generate a change plan. The specific implementation process is as follows:
The plan generation module 124 may intersect the subsystem list in the version baseline data with the subsystem list of the office point in the change flow to obtain the subsystem list involved in the current delivery. The plan generation module 124 may compare the source versions of the upgrade paths in the version baseline data with the actual versions at the office point in the change flow to determine whether the software system deployed at the office point is to be upgraded or changed. The plan generation module 124 may generate a change order for the subsystems based on the compatibility baselines and dependencies between subsystems in the version baseline data. The plan generation module 124 may generate parallel or serial rules between steps based on the performance baselines of the delivery tool in the version baseline data. The plan generation module 124 may calculate how many time windows are needed and the delivery steps to be performed in each time window based on the estimated delivery duration of each subsystem in the version baseline data and the change time window plan value.
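A hedged sketch of these scheduling inputs is shown below, reusing the illustrative baseline and office point fields from the earlier sketches: it intersects the subsystem lists, decides upgrade versus change from the upgrade paths, and estimates the number of change time windows. The helper name plan_inputs is hypothetical.

```python
# Illustrative only: derive scheduling inputs from the assumed baseline and
# office point records sketched earlier.
def plan_inputs(baseline: dict, office_point: dict) -> dict:
    # Subsystems involved in this delivery = baseline list ∩ office point list.
    involved = [s for s in baseline["estimated_duration_min"] if s in office_point["inventory"]]

    # Upgrade if the deployed version is a known source version of an upgrade path.
    mode = {
        s: "upgrade" if office_point["versions"].get(s) in baseline["upgrade_paths"].get(s, {})
        else "change"
        for s in involved
    }

    # Number of change time windows needed for the total estimated duration.
    total = sum(baseline["estimated_duration_min"][s] for s in involved)
    windows = -(-total // baseline["change_window_min"])  # ceiling division

    return {"involved": involved, "mode": mode, "windows": windows}
```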
The plan generation module 124 may assemble information such as the subsystem list involved in the current delivery, whether an upgrade or a change is performed, the change order of the subsystems, the parallel or serial rules between steps, and how many time windows are needed along with the delivery steps to be executed in each time window into an execution plan in the form of a directed acyclic graph (DAG) according to the concurrency, leveraging the DAG's acyclic property and support for topological ordering to effectively represent and process complex dependency relationships.
A DAG is a special graph structure consisting of vertices (nodes) and directed edges that contains no loops. Vertices (nodes) are the basic elements in the graph, representing data, tasks, states, etc. A directed edge is an arrow connecting two vertices, representing a one-way relationship from one vertex to the other. Loop-free means that there is no path in the graph that starts from a vertex and, after passing through several edges, returns to that same vertex. An important feature of DAGs is that topological ordering is possible. Topological ordering arranges all vertices of the DAG into a linear sequence such that, for every directed edge, the start vertex appears before the end vertex. Topological ordering may be used for task scheduling, dependency resolution, etc. In the present application, each node of the DAG represents a delivery step, and each edge of the DAG represents the order between steps. The delivery steps to be performed in each time window are marked by a frame.
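The following minimal sketch shows delivery steps assembled into a DAG and topologically sorted using Python's standard-library graphlib module; the step names and edges are illustrative only.

```python
# Minimal sketch: delivery steps as a DAG, ordered by topological sorting.
from graphlib import TopologicalSorter

# Each entry maps a delivery step to the set of steps that must precede it.
dag = {
    "upgrade-database": set(),
    "upgrade-storage": {"upgrade-database"},               # storage depends on database
    "post-check": {"upgrade-database", "upgrade-storage"},
}

order = list(TopologicalSorter(dag).static_order())
# e.g. ['upgrade-database', 'upgrade-storage', 'post-check']
```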
The implementation tool 130 is used to guide the software system deployed at the local site to upgrade or change according to the change plan.
Implementation tool 130 may pre-build a structured data model that defines the various types of metadata required at the time of delivery of the complex software system. These metadata may be used by the software management system 100 to generate a wizard-type delivery process, as well as to automatically create documents for reference by a technician. In this way, a consistent source of data across multiple systems is achieved.
In the present application, the implementation tool 130 may input the change plan into the structured data model, and may convert the change plan into a guided change flowchart, so that an implementation person may automatically upgrade or change a complex software system at low cost according to the direction on the change flowchart.
It should be understood that the functional modules, functional devices, and the like involved in the software management system 100 may be implemented by software or hardware, which may be determined according to the actual situation and is not limited herein. In addition, the functional modules, functional devices, and the like involved in the software management system 100 may be arranged separately or in an integrated manner, which is not limited herein.
The foregoing is a description of the software management system 100 provided by an embodiment of the present application. It will be appreciated that the software management system 100 described above may be configured on a cloud management platform, for example, deployed on at least one instance of a virtual machine or container, etc., such that the cloud management platform may provide software management delivery services. Of course, the software management system 100 may also be configured on a node other than the cloud management platform, for example, may be disposed in at least one data center, or disposed on at least one server, where the specific situation may be determined, and is not limited herein. The cloud management platform can provide pages related to public cloud services for users to remotely access the public cloud services. In this embodiment, the user may purchase the software management service that the software management system 100 can provide in advance on the cloud management platform. For ease of understanding, the following describes the form of interaction between the user and the cloud management platform.
As shown in fig. 2, the interaction between the user and the cloud management platform mainly includes the following: the user logs in to the cloud management platform 200 through a webpage of the client, and selects and purchases the cloud service (i.e., the software management service) related to the software management system 100 in the cloud management platform 200; after the user purchases the cloud service, the software management system 100 can be generated on the cloud management platform 200 based on the functions provided by the software management service. The cloud management platform 200 is mainly used for managing the infrastructure running the software management service. For example, the infrastructure of the software management service may include a plurality of data centers disposed in different areas, each data center including a plurality of servers. The data centers may provide underlying resources, such as computing resources and storage resources, for the software management service. Thus, users pay for the resources used when purchasing and using the software management service. When a user uses the software management service, after the user inputs his or her requirement for the software management service through a configuration interface, an application program interface (API), or another user-interaction interface provided by the cloud management platform 200, the cloud management platform 200 can generate a software management service matching the requirement according to the requirement input by the user (or by other software/hardware, etc.).
In addition, a part of the modules in the software management system 100 may be configured on the cloud side, and another part may be configured on the end side, so that the software management service is implemented by the end-cloud collaboration. In addition, the software management system 100 may be configured on the end side, and may be specific to the actual situation, which is not limited herein.
The cloud management platform 200 may include a plurality of cloud services. Taking different cloud services as an example, a large number of office points exist on the live network, in different modalities, and the cloud service deployment conditions and versions of each office point are different. Illustratively, as shown in fig. 3, the cloud services of an office point may include databases, disaster recovery, storage pools, artificial intelligence (AI) cloud services, network components, computing components, public components, and cloud resource pools.
The database is a core component in the cloud management platform 200 for storing, managing, and processing data.
Disaster tolerant services ensure high availability of data and services in the event of a disaster.
Storage pools are an approach to integrating and sharing storage resources in cloud computing, allowing cloud service providers to dynamically and efficiently allocate resources according to demand.
The AI cloud service provides AI-related computing power and services, including machine learning, data analysis, natural language processing, etc., to help enterprises achieve intelligent transformation.
Network components include virtual networks, load balancers, virtual private clouds, etc., which ensure network connectivity, security, and reliability in the office point cloud services.
The computing components include a CPU, a graphics processing unit (GPU), etc., which provide computing resources for the office point cloud services.
Public components, such as Linux server cluster systems, Nginx, and HAProxy, provide load balancing and traffic distribution for the office point cloud services and ensure high availability and performance of the services.
In the cloud resource pool, the cloud service provider groups computing resources such as server time, network storage, and other IT resources together and provides services to multiple customers according to a multi-tenant model. Pooling the resources enables the cloud service provider to optimize resource utilization and provide measurable services.
As shown in fig. 4, when the cloud management platform 200 performs the functions of the software management system 100, it may be divided into pre-delivery pre-operations, pre-delivery preparation, delivery implementation, and post-implementation inspection phases.
In the pre-delivery pre-operation stage, after a new version of the software system is released, the cloud management platform 200 may receive cloud service information input by a developer, such as a cloud service name, a cloud service version, the paths supported by the cloud service, and the like, and may import the cloud service information into the data management module 122 in the design tool 120, so that the data management module 122 dynamically adjusts the version baseline data.
The cloud management platform 200 may receive node information of the upgrade process entered by an expert. Taking upgrading the cloud resource pool base as an example, the node information may include execution conditions, operation descriptions, operation impacts, related cloud services, upgrade dependency relationships, and the like. The cloud management platform 200 may import the node information into the template management module 121 in the design tool 120. The template management module 121 may render the node information entered by the expert as flow nodes and arrange the nodes into a standard change flow template according to the upgrade scenario.
The cloud management platform 200 may instruct the information collection tool 110 to collect the office point information of a live-network office point and import the desensitized office point information into the design tool 120. The design tool 120 renders the change flow template according to the office point information, producing the office point's personalized delivery flow, personalized implementation guidelines, personalized software package, and personalized construction period plan.
The cloud management platform 200 may instruct the design tool 120 to generate an execution plan based on the change flow of the selected office point and the version baseline data, and to import the execution plan into the implementation tool 130.
In a pre-delivery preparation phase, cloud management platform 200 may perform pre-delivery risk closed loops (e.g., risk identification, risk assessment, risk planning, risk monitoring, risk communication, etc.), software package and file preparation (e.g., software package construction, versioning, documentation, license and compliance files, backup and recovery planning, etc.), delivery tool preparation (e.g., deployment tools, configuration management tools, monitoring and logging tools, testing tools, containerization and virtualization tools, backup tools, disaster recovery tools, etc.).
In the delivery implementation phase, the cloud management platform 200 may instruct the implementation tool 130 to allow live-network implementation personnel to perform implementation operations in a guided manner, visually view the progress and details of each implementation node, and view the personalized implementation guidelines and construction period plan. The implementation tool 130 may perform guided change creation, project creation, project execution, post-execution inspection, and the like.
In the post-implementation inspection phase, the cloud management platform 200 may export a report on the implementation tool 130 side after implementation is completed, and post-delivery inspection is performed based on the report.
The foregoing is a description of the software management system provided by an embodiment of the present application. Next, based on the above, an execution plan generation method provided by an embodiment of the present application will be described.
Fig. 5 is a schematic flow chart of an execution plan generating method according to an embodiment of the present application. It can be appreciated that the execution plan generation method can be executed by the design tool 120 in the software management system 100, and the specific implementation procedure is as follows:
Step S501, acquiring office point information of a target office point.
Upon receiving the office point information, the design tool 120 desensitizes the office point information. For example, the design tool 120 filters out sensitive information in the office point information to prevent leakage of the customer's sensitive information. The sensitive information may be information such as the customer's IP address, passwords, keys, etc.
Step S502, obtaining a change flow of the target office point according to the office point information and the change flow template.
After receiving the user requirement input by the user and the office point information, the design tool 120 may input the user requirement and the office point information into the change flow template, render the change flow template, and determine which flow nodes are expanded and which flow nodes are generated. The design tool 120 may construct the change flow of the office point based on the expanded flow nodes and/or the generated flow nodes. The change flow may exist in the form of a deliverable, so that the change flow can be imported into the implementation tool at the customer site to effect the change. In some cloud service scenarios, the customer's network environment and the service that generates the change plan are not in the same network environment, so the change flow needs to be transmitted as an offline package.
If the design tool 120 receives a user requirement entered by a user, the user requirement and the office point information may be input into the change flow template, thereby obtaining the change flow of the office point. Optionally, the design tool 120 may integrate the operation node corresponding to the user requirement into the change flow, so as to obtain the integrated change flow.
Step S503, obtaining the execution plan of the target office point according to the change flow and the version baseline data.
The design tool 120 automatically schedules the plurality of flow nodes (i.e., delivery steps) in the change flow of the office point using the version baseline data to generate the execution plan. The specific implementation process is as follows:
The design tool 120 may intersect the subsystem list in the version baseline data with the subsystem list of the office point in the change flow to obtain the subsystem list involved in the current delivery. The design tool 120 may compare the source versions of the upgrade paths in the version baseline data with the actual versions at the office point in the change flow to determine whether the software system deployed at the office point is to be upgraded or changed. The design tool 120 may generate a change order for the subsystems based on the compatibility baselines and dependencies between subsystems in the version baseline data. The design tool 120 may generate parallel or serial rules between steps based on the performance baselines of the delivery tool in the version baseline data. The design tool 120 may calculate how many time windows are needed and the delivery steps to be performed in each time window based on the estimated delivery duration of each subsystem in the version baseline data and the change time window plan value.
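As a further illustration (under the same assumptions as the earlier sketches), the snippet below shows one simple way the delivery steps could be packed into change time windows: each window is filled greedily up to the planned window duration. The greedy policy is an illustrative choice, not necessarily the scheduling used by the design tool 120.

```python
# Illustrative sketch: greedily pack scheduled delivery steps into time windows.
def pack_into_windows(steps: list, window_min: int) -> list:
    """steps: (step name, estimated duration in minutes) tuples, in scheduled order."""
    windows, current, used = [], [], 0
    for name, duration in steps:
        if current and used + duration > window_min:
            windows.append(current)          # close the current window
            current, used = [], 0
        current.append(name)
        used += duration
    if current:
        windows.append(current)
    return windows


# Example with a 240-minute change time window plan value:
# pack_into_windows([("upgrade-database", 90), ("upgrade-storage", 45), ("post-check", 150)], 240)
# -> [['upgrade-database', 'upgrade-storage'], ['post-check']]
```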
The design tool 120 may assemble information such as the subsystem list involved in the current delivery, whether an upgrade or a change is performed, the change order of the subsystems, the parallel or serial rules between steps, and how many time windows are needed along with the delivery steps to be executed in each time window into an execution plan in the form of a DAG according to the concurrency, leveraging the DAG's acyclic property and support for topological ordering to effectively represent and process complex dependency relationships.
In the embodiment of the present application, the design tool 120 may render the change flow template, designed by a technical expert according to the version baseline data and the expert's own working experience, with the received office point information, so as to generate a change flow for a specific office point. This helps reduce the technical threshold of delivery flow design for complex software systems, relieve the shortage of expert resources, and reduce the delivery cost. The design tool 120 can also utilize the version baseline data to automatically schedule the plurality of change steps in the change flow of each office point, creating a change plan that comprehensively considers factors such as inter-subsystem compatibility, inter-step dependencies, the operation time of each step, delivery tool performance criteria, and subsystem technical limitations. Such a change plan can ensure more efficient and accurate execution of the delivery flow.
Based on the above description, an embodiment of the present application provides an execution plan generation apparatus 600. As shown in fig. 6, the apparatus 600 includes:
The first processing module 610 is configured to obtain office point information of a target office point, where the office point information includes information related to the software system deployed at the target office point, and at least one office point includes the target office point. The second processing module 620 is configured to obtain a change flow of the target office point according to the office point information and a change flow template, where the change flow template supports determining, based on the office point information, whether to expand a flow node and whether to generate a flow node, a flow node is a delivery step of the software system at the time of delivery, and the change flow includes the expanded flow nodes and/or the generated flow nodes. The third processing module 630 is configured to obtain an execution plan of the target office point according to the change flow and version baseline data, where the version baseline data is used for instructing how the expanded flow nodes and/or the generated flow nodes in the change flow are scheduled, and the execution plan is used for instructing the software system deployed at the target office point to be upgraded or changed.
In one embodiment, the first processing module 610 is further configured to perform desensitization processing on the office point information after obtaining the office point information of the target office point.
In one embodiment, the second processing module 620 is further configured to obtain a user requirement input by the user before obtaining the change flow of the target office point according to the office point information and the change flow template, and to input the user requirement and the office point information into the change flow template to obtain the change flow of the target office point.
In one embodiment, the third processing module 630 is further configured to input the execution plan of the target office point into a data model to obtain a guided change flowchart, where the data model includes various types of metadata required when delivering the software system.
In one embodiment, the execution plan of the target office point exists in the form of a directed acyclic graph (DAG), each node in the DAG represents one delivery step, and each edge in the DAG represents the order of precedence between delivery steps.
The first processing module 610, the second processing module 620, and the third processing module 630 may each be implemented by software or by hardware. Illustratively, the first processing module 610 is taken as an example to describe an implementation of a processing module. Similarly, the implementations of the second processing module 620 and the third processing module 630 may refer to the implementation of the first processing module 610.
Taking a module as an example of a software functional unit, the first processing module 610 may include code that runs on a computing instance. The computing instance may include at least one of a physical host (computing device), a virtual machine, a container, and the like. Further, there may be one or more of the above computing instances. For example, the first processing module 610 may include code running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers used to run the code may be distributed in the same region or in different regions. Further, the multiple hosts/virtual machines/containers used to run the code may be distributed in the same availability zone (AZ) or in different AZs, where each AZ comprises one data center or multiple geographically close data centers. Typically, a region may comprise a plurality of AZs.
Also, the multiple hosts/virtual machines/containers used to run the code may be distributed in the same virtual private cloud (VPC) or in multiple VPCs. In general, one VPC is disposed in one region, and a communication gateway is disposed in each VPC for implementing interconnection between VPCs in the same region and between VPCs in different regions.
Taking a module as an example of a hardware functional unit, the first processing module 610 may include at least one computing device, such as a server. Alternatively, the first processing module 610 may be a device implemented using an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or the like. The PLD may be implemented as a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The plurality of computing devices included in the first processing module 610 may be distributed in the same region or may be distributed in different regions. The plurality of computing devices included in the first processing module 610 may be distributed in the same AZ or may be distributed in different AZ. Likewise, the plurality of computing devices included in the first processing module 610 may be distributed in the same VPC or may be distributed in a plurality of VPCs. Wherein the plurality of computing devices may be any combination of computing devices such as servers, ASIC, PLD, CPLD, FPGA, and GAL.
It should be noted that, in other embodiments, the first processing module 610 may be configured to perform any step in the method shown in fig. 5, the second processing module 620 may be configured to perform any step in the method shown in fig. 5, the third processing module 630 may be configured to perform any step in the method shown in fig. 5, and the steps that the first processing module 610, the second processing module 620, and the third processing module 630 are responsible for implementing may be specified as needed, and the first processing module 610, the second processing module 620, and the third processing module 630 implement different steps in the method shown in fig. 5, respectively, to implement the overall functions of the apparatus 600.
Fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application. As shown in fig. 7, the computing device 700 includes a bus 710, a processor 720, a memory 730, and a communication interface 740. Communication between processor 720, memory 730, and communication interface 740 is via bus 710. The computing device 700 may be a server, computer, portable notebook, cabinet, or the like. It should be understood that the present application is not limited to the number of processors, memories in computing device 700.
The bus 710 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one line is shown in fig. 7, but this does not mean that there is only one bus or only one type of bus. The bus 710 may include a path for transferring information between the various components of the computing device 700 (e.g., the processor 720, the memory 730, and the communication interface 740).
The processor 720 may be any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
The memory 730 may include volatile memory, such as random access memory (RAM). The memory 730 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
The memory 730 stores executable program codes, and the processor 720 executes the executable program codes to implement the functions of the aforementioned modules, such as the first processing module 610, the second processing module 620, the third processing module 630, and the like, respectively, thereby implementing the method as shown in fig. 5. That is, the memory 730 has instructions stored thereon for performing the method shown in fig. 5.
Communication interface 740 enables communication between computing device 700 and other devices or communication networks using transceiver modules such as, but not limited to, network interface cards, transceivers, and the like.
The embodiment of the application also provides a computing device cluster. The cluster of computing devices includes at least one computing device. The computing device may be a server, such as a central server, an edge server, or a local server in a local data center. In some embodiments, the computing device may also be a terminal device such as a desktop, notebook, or smart phone.
As shown in fig. 8, a cluster of computing devices includes at least one computing device 700. The same instructions for performing the method shown in fig. 5 may be stored in memory 730 in one or more computing devices 700 in the computing device cluster.
In some possible implementations, portions of the instructions for performing the method shown in fig. 5 may also be stored separately in memory 730 of one or more computing devices 700 in the computing device cluster. In other words, a combination of one or more computing devices 700 may collectively execute instructions for performing a method as shown in fig. 5.
It should be noted that the memory 730 in different computing devices 700 in the computing device cluster may store different instructions for performing part of the functions of the execution plan generation apparatus 600. That is, the instructions stored in the memory 730 of different computing devices 700 may implement the functions of one or more of the first processing module 610, the second processing module 620, and the third processing module 630 described above.
In some possible implementations, the computing devices in the computing device cluster may be connected through a network. The network may be a wide area network, a local area network, or the like. Fig. 9 shows one possible implementation. As shown in fig. 9, the computing device 700A and the computing device 700B are connected via the network. Specifically, each computing device is connected to the network through its communication interface. In this possible implementation, the memory 730 in the computing device 700A stores instructions for performing the functions of some of the first processing module 610, the second processing module 620, and the third processing module 630. Meanwhile, the memory 730 in the computing device 700B stores instructions for performing the functions of the remaining ones of the first processing module 610, the second processing module 620, and the third processing module 630.
The connection manner of the computing device cluster shown in fig. 9 may take into account that the method shown in fig. 5 provided by the present application may require storing a large amount of data; therefore, the functions implemented by some of the first processing module 610, the second processing module 620, and the third processing module 630 may be performed by the computing device 700A, while the functions implemented by the remaining modules may be performed by the computing device 700B.
It should be appreciated that the functionality of computing device 700A shown in fig. 9 may also be performed by multiple computing devices 700. Likewise, the functionality of computing device 700B may also be performed by multiple computing devices 700.
The embodiment of the application also provides another computing device cluster. The connection relationship between the computing devices in this computing device cluster may be similar to the connection manner of the computing device cluster shown in fig. 8 and fig. 9. The difference is that the memory 730 in one or more computing devices 700 in this computing device cluster may store the same instructions for performing the method shown in fig. 5.
In some possible implementations, portions of the instructions for performing the method shown in fig. 5 may also be stored separately in memory 730 of one or more computing devices 700 in the computing device cluster. In other words, a combination of one or more computing devices 700 may collectively execute instructions for performing a method as shown in fig. 5.
It should be noted that the memory 730 in different computing devices 700 in this computing device cluster may store different instructions for performing part of the functions of the execution plan generation apparatus 600. That is, the instructions stored in the memory 730 of different computing devices 700 may implement the functions of one or more of the first processing module 610, the second processing module 620, and the third processing module 630 described above.
Embodiments of the present application also provide a computer program product comprising instructions. The computer program product may be a software or program product containing instructions capable of running on a computing device or stored in any useful medium. The computer program product, when run on at least one computing device, causes the at least one computing device to perform the method as shown in fig. 5.
The embodiment of the application also provides a computer readable storage medium. Computer readable storage media can be any available media that can be stored by a computing device or data storage device such as a data center containing one or more available media. Usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk), among others. The computer-readable storage medium includes instructions that instruct a computing device to perform the method as shown in fig. 5.
It should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same, and although the present application has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that the technical solution described in the above-mentioned embodiments may be modified or some technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the protection scope of the technical solution of the embodiments of the present application.