Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present application, a method embodiment of a data processing method is provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described herein.
FIG. 1 is a flowchart of a data processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step S101, target authentication information of the distributed storage system is acquired, and the target authentication information is stored in the container management system.
Here, a distributed storage system such as Ceph provides highly available object storage, block storage, and file system storage services, and is aimed at building large-scale, high-performance, reliable, and scalable storage solutions. Ceph's design goal is to achieve data-center-level storage resource pooling while ensuring data security and consistency. It uses an algorithm called CRUSH (Controlled Replication Under Scalable Hashing) to distribute data and replicas, ensures balanced distribution of data among the nodes of the storage cluster, and is able to tolerate node failures without losing data. The container management system, for example Kubernetes (abbreviated as "k8s"), is an open-source container orchestration platform for automated deployment, scaling, and management of containerized applications. Kubernetes provides an efficient, scalable way to run containerized applications by breaking applications down into smaller, manageable services or microservices and wrapping those services in containers. Through Kubernetes, a user can define the resource requirements of a container cluster, the network connections of services, the storage requirements, and how to automatically scale the application to accommodate different workloads. The Kubernetes automatic scheduling mechanism ensures that the containers of an application run on the most suitable nodes in the cluster while providing health checking, self-healing, and load-balancing functions.
In step S101, a Kubernetes Secret is created for storing the authentication information of Ceph. In Kubernetes, a Secret is an object type for storing sensitive information such as passwords, SSH keys, TLS certificates, or database credentials. The Kubernetes Secret includes the user name, key, and other necessary information required to access the Ceph storage. The Secret may be named "userName-libvirt" and stored in the "kubevirt-cdi" namespace of Kubernetes. This Secret is added to the CDI Config through the Kubernetes API, ensuring that the Containerized Data Importer (CDI) has access to this authentication information.
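As an illustrative sketch of the Secret described above, a manifest might look as follows. The data key names (userID, userKey) and all values are assumptions for illustration; only the Secret name and namespace come from the text.

```yaml
# Hypothetical Secret holding Ceph credentials for CDI.
# Key names and values below are illustrative assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: userName-libvirt      # name format described in step S101
  namespace: kubevirt-cdi     # namespace described in step S101
type: Opaque
stringData:
  userID: admin               # Ceph user name (example value)
  userKey: AQB0ZXhhbXBsZQ==   # Ceph auth key (example value)
```

Once created, such a Secret would be referenced from the CDI configuration through the Kubernetes API, as the step describes.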
Step S102, when the container management system creates a storage volume of a block type, if the persistent volume claim corresponding to the storage volume is detected to include a preset label, a target file including distributed object storage block device information is generated.
In step S102, when a DataVolume of block type is created in Kubernetes, the Ceph CSI driver checks the label "cdi.kubevirt.io/storage.direction". If the label is set to true, the CSI driver does not perform the conventional RBD mapping operation, but instead generates a file containing the RBD name and related RBD information. This file contains all the RBD mapping information needed to start the virtual machine, without mapping through the host kernel.
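The branch described in this step can be sketched in a few lines. This is a minimal illustration under stated assumptions: the function name, the returned record format, and the fallback device path are hypothetical, not the actual CSI driver code.

```python
import json

# Label normalized from the text; treated here as a plain PVC label.
DIRECT_LABEL = "cdi.kubevirt.io/storage.direction"

def prepare_block_volume(pvc_labels: dict, rbd_pool: str, rbd_image: str) -> dict:
    """Sketch of the step-S102 branch: if the preset label is true, emit a
    record carrying the raw RBD information instead of performing the
    conventional host-kernel RBD mapping."""
    if pvc_labels.get(DIRECT_LABEL) == "true":
        # Direct mode: hand the RBD info straight to QEMU/librbd later on.
        return {"mode": "direct", "rbd": {"pool": rbd_pool, "image": rbd_image}}
    # Conventional mode: fall back to a kernel-mapped block device path.
    return {"mode": "kernel-map", "device": f"/dev/rbd/{rbd_pool}/{rbd_image}"}

print(json.dumps(prepare_block_volume({DIRECT_LABEL: "true"}, "vm-pool", "disk-1")))
```

The same check recurs in steps S103 and S104; only the component performing it differs.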
Step S103, a deployment operation of the containerized data import related component is performed by using the target authentication information, and after the deployment operation is completed, if the value corresponding to direct input/output in the protocol description information corresponding to the data storage volume is detected to be true, a preset label is added to the persistent volume claim corresponding to the data storage volume.
In step S103, during CDI deployment, the CDI component is started using the authentication information stored in step S101. Once the CDI completes the preparation and storage of the data into the DataVolume, it checks the direct IO attribute (directio) in the spec description of the DataVolume. If the attribute is set to true, CDI adds the preset label "cdi.kubevirt.io/storage.direction=true" to the corresponding PVC. This indicates that the data on the PVC can be accessed directly by the virtual machine, without the need for traditional kernel-layer mapping.
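An illustrative view of a PVC after CDI has applied the preset label might look as follows. The PVC name, size, and access mode are hypothetical; only the label and the Block volume mode come from the text.

```yaml
# Hypothetical PVC as it would appear after step S103.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-datavolume-pvc                  # hypothetical name
  labels:
    cdi.kubevirt.io/storage.direction: "true"   # preset label added by CDI
spec:
  volumeMode: Block                        # block-type storage volume
  accessModes: ["ReadWriteOnce"]           # example value
  resources:
    requests:
      storage: 10Gi                        # example value
```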
Step S104, in the process of starting the target container set by the virtual machine instance in the virtual machine management platform, if the persistent volume claim corresponding to the data storage volume is detected to include the preset label, target encryption information required for mapping the distributed object storage block device is obtained from the data storage volume, and the target encryption information is mounted in a preset directory.
Here, the virtual machine management platform, for example KubeVirt, is an open-source project that provides a set of extensions to Kubernetes, enabling Kubernetes to manage and run virtual machines just like containers. KubeVirt aims to combine virtualization technology with containerization technology to run and manage virtualized workloads in Kubernetes clusters. Through KubeVirt, a user can use the APIs and CLI tools of Kubernetes (e.g., kubectl) to operate virtual machines, including creating, starting, stopping, and deleting them. KubeVirt also provides high-level functions such as virtual machine lifecycle management, virtual machine image management, virtual machine network configuration, and management of storage volumes.
In step S104, when KubeVirt starts a virtual machine instance (VirtualMachineInstance, VMI), it checks whether the preset label "cdi.kubevirt.io/storage.direction=true" exists on the PVC corresponding to the DataVolume. Once this label is detected, the virt-launcher component reads the encryption information needed for the complete RBD mapping from the DataVolume and mounts it read-only into the preset directory "/etc/ceph" for use by the virtual machine instance.
It should be noted that the virt-launcher component is a key component in the KubeVirt project, responsible for starting and managing virtual machine instances (VirtualMachineInstance, VMI) on the nodes of the Kubernetes cluster. Specifically, the primary responsibilities of virt-launcher include: 1. Lifecycle management of virtual machine instances: virt-launcher monitors events and state changes related to virtual machine instances. When the KubeVirt API server receives a request to create a VMI, it informs virt-launcher to start the virtual machine on the appropriate node. virt-launcher also handles stopping, restarting, and deleting operations of the virtual machine. 2. Preparation before starting the virtual machine: virt-launcher performs a series of preparation operations, such as setting the network configuration, disk mapping, and memory and CPU resources of the virtual machine, before starting it. It also handles the storage volumes on which the virtual machine depends, e.g., loading data from storage systems such as Ceph RBD, NFS, and iSCSI and mapping them into the virtual machine. 3. QEMU process management: virt-launcher launches QEMU processes to perform hardware emulation and run the virtual machines. The configuration information of the virtual machine, which comprises parameters such as disk mapping, network interfaces, CPU, and memory, is transmitted to QEMU. virt-launcher also monitors the state of the QEMU process, ensuring normal operation of the virtual machine.
QEMU (Quick Emulator) is an open-source machine emulator and virtualization technology that enables the creation and operation of virtual machines. QEMU supports a variety of processor architectures and operating systems and may function as a complete system emulator or emulate only part of the behavior of a particular hardware device. In a virtualized scenario, QEMU acts as a software-level CPU and hardware device emulator, allowing multiple independent virtual machines to run on one physical host. A QEMU process refers to a program instance that runs the QEMU software to emulate a hardware environment. When KubeVirt creates a virtual machine instance, it starts a QEMU process to perform hardware emulation, which includes: 1. Emulating the CPU and memory of the virtual machine, allowing the virtual machine to run different operating systems and applications. 2. Emulating various storage devices such as hard disks and CD-ROMs, and supporting various storage back ends including local files and network storage (such as NFS, iSCSI, and Ceph RBD). 3. Emulating virtual network adapters, allowing the virtual machine to communicate with an external network through a virtual network interface. 4. For virtual machines requiring graphical user interfaces, emulating a graphics adapter and display, supporting remote display protocols such as Spice and VNC. The QEMU process is not directly connected to the physical operating system and hardware, but runs on the physical host in software-emulation mode. In KubeVirt, the QEMU process can be started and managed through the libvirt interface, a generic API for managing and accessing virtualized resources that provides support for multiple virtualization technologies, including QEMU/KVM.
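As a rough illustration of what such direct access looks like at the QEMU level, QEMU's built-in rbd block driver can attach an image through librbd from the command line. The pool, image, user, and configuration path below are example values, not taken from the text.

```shell
# Attach an RBD image directly via librbd, bypassing host-kernel mapping.
# vm-pool/disk-1, the admin user, and the conf path are illustrative values.
qemu-system-x86_64 \
  -m 2048 -smp 2 \
  -drive file=rbd:vm-pool/disk-1:id=admin:conf=/etc/ceph/ceph.conf,format=raw,if=virtio
```

In the KubeVirt flow described here, an equivalent configuration is produced through libvirt rather than typed by hand.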
Step S105, if the target authentication information exists in the preset directory, the target authentication information is added to the configuration information of the target virtual machine, and the target virtual machine is started.
In step S105, virt-launcher checks whether authentication information exists under the "/etc/ceph" directory before the virtual machine is started. If so, it generates the corresponding secret XML file and defines these secrets through the virsh secret-define interface of libvirt, so that the virtual machine can access the Ceph storage. When defining the virtual machine, the authentication information and the RBD information are integrated into the virtual machine configuration. When virt-launcher traverses the disk information to be added to the virtual machine, if the disk to be mounted is found to be RBD information of a block type, the disk information is added to the virtual machine configuration in network disk mode. Finally, QEMU is called through libvirt to start the virtual machine, so that the virtual machine can directly access the Ceph RBD, and efficient data reading and writing are realized.
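The secret definition described in this step can be sketched with standard libvirt artifacts; the usage name below is an illustrative assumption, not taken from the text.

```xml
<!-- ceph-secret.xml: illustrative libvirt secret for Ceph authentication -->
<secret ephemeral='no' private='yes'>
  <usage type='ceph'>
    <name>client.admin secret</name>
  </usage>
</secret>
```

After `virsh secret-define ceph-secret.xml`, the key itself would be attached with `virsh secret-set-value --secret <uuid> --base64 <key>`, and the resulting secret can then be referenced from the disk definition of the virtual machine.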
According to the above steps, the method has the following advantages. 1. The access path is shortened: the traditional path is virtual machine -> QEMU -> path mapping in the container -> host kernel -> Ceph RBD, while the optimized path is virtual machine -> QEMU -> Ceph RBD; the links of path mapping in the container and the host kernel are omitted, so that latency and CPU utilization are reduced. 2. A specified librbd access library is used: QEMU directly calls a librbd library of a specific version, which ensures the stability and performance of interaction with the Ceph RBD and avoids the problem of librbd library version differences caused by different host kernel versions. 3. Compatibility and stability are improved: QEMU communicates directly with the Ceph RBD and is not affected by the kernel version of the host, so the compatibility and stability of the whole system can be improved. 4. Read/write performance is optimized: by shortening the access path and specifying the librbd library version, read/write performance can be further optimized, data transmission latency is reduced, and the response speed and overall performance of the virtual machine are improved.
The steps shown in FIG. 1 are exemplarily illustrated and explained below.
According to some optional embodiments of the present application, if the target authentication information is detected to exist in the preset directory, the steps of generating an encrypted file corresponding to the target authentication information and calling the preset interface to access the storage portion in the distributed storage system may be further performed.
In the above embodiment, in a KubeVirt environment, if the presence of the target authentication information (the authentication information of Ceph) in the preset directory (e.g., /etc/ceph) is detected, a step of generating an encrypted file containing the authentication information using a tool or library provided by Ceph (e.g., ceph-authtool or the librbd library) can be performed to securely access the storage portion of the distributed storage system (e.g., Ceph). The keyring is handled, and the storage is accessed using an interface of libvirt or QEMU. For Ceph RBD, the virt-launcher component invokes the secret-define interface of libvirt to define a secret that stores the authentication information of Ceph. Subsequently, this secret is referenced in the virtual machine definition, ensuring that the virtual machine can access the Ceph storage in a secure manner.
According to other optional embodiments of the application, starting the target virtual machine can be achieved by traversing the disks to be added to the target virtual machine. When traversing the disks to be added, if a block to be mounted is the target file and the distributed object storage block device information in the target file is readable information, monitor information is added to the distributed object storage block device information, the disk information is added to the target virtual machine in the form of a network disk, and the target virtual machine is started after the disk information is added.
In the above embodiment, in the KubeVirt environment, the process of traversing the disks to be added to the target virtual machine and determining how to mount them according to the disk type and the storage information is a key step in implementing efficient access of the virtual machine to the Ceph storage. This process is explained in detail as follows. When virt-launcher prepares to start a target virtual machine, it traverses all the disk information contained in the definition of the target virtual machine. The disk information includes configuration parameters such as the type, size, and storage source of each disk. During the traversal, virt-launcher checks the storage source of each disk. If the disk source is the target file and the file contains readable distributed object storage block device information, i.e., RBD information, virt-launcher processes the disk further. For files marked as directly accessible RBD, virt-launcher reads the RBD information from the file, including at least the pool name, image name, and authentication information of the RBD. virt-launcher then adds monitor (mon) information to the RBD information. The monitor is the component in the Ceph cluster responsible for managing the block device image metadata, and the mon information is added to ensure that QEMU can directly find and access the correct Ceph cluster. Next, virt-launcher adds the disk information to the target virtual machine in the form of a network disk, using a configuration containing the RBD and mon information. This means that the disk of the virtual machine is mounted directly in the form of a Ceph RBD, rather than in the traditional way of mapping through a file system or the host kernel. After all disk information is properly added to the virtual machine configuration, virt-launcher passes the configuration information to QEMU and starts the target virtual machine.
The virtual machine can now directly access the data in the Ceph RBD without going through an additional middle tier.
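A libvirt disk element of the kind virt-launcher would produce in network disk mode might look as follows. The pool/image name, user, secret usage, and monitor addresses are example values, not taken from the text.

```xml
<!-- Illustrative network-type disk: QEMU reaches Ceph RBD via librbd directly. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='admin'>
    <secret type='ceph' usage='client.admin secret'/>
  </auth>
  <source protocol='rbd' name='vm-pool/disk-1'>
    <!-- mon information added so QEMU can locate the correct Ceph cluster -->
    <host name='10.0.0.1' port='6789'/>
    <host name='10.0.0.2' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```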
In some alternative embodiments of the present application, storing the target authentication information in the container management system may be accomplished by storing the target authentication information in the form of a first encrypted object in a namespace of a containerized data import-related component of a virtual machine management platform in the container management system and adding the target authentication information to configuration information of the containerized data import-related component.
In the above embodiment, first, the target authentication information (such as the authentication key and ID of Ceph) is converted into an encrypted Secret object. In Kubernetes, a Secret is a type of object used to store sensitive data, such as passwords and keys, which are stored in the cluster and can be mounted in encrypted form to a Pod. By creating a Secret object, the authentication information can be securely stored in the Kubernetes cluster, rather than being hard-coded in a configuration file or source code. The created Secret object may be stored in a namespace of the KubeVirt CDI related components, such as the kubevirt-cdi namespace. Namespaces are used in Kubernetes to logically isolate resources, which helps organize and manage resources and improves security. This Secret object containing the authentication information is then added to the configuration of the CDI-related component (e.g., cdi-discovery). Thus, when the component is running, it has the right to access the Ceph storage. The CDI component can read the authentication information in the Secret for subsequent storage operations, such as importing virtual machine images to the Ceph RBD.
As some optional embodiments of the present application, after the preset label is added to the persistent volume claim corresponding to the data storage volume, the steps of reading the target authentication information, storing the target authentication information in the form of a second encrypted object, and accessing the storage portion in the distributed storage system through the target authentication information in the second encrypted object may be further performed.
It will be appreciated that to further increase security, the authentication information should be in an encrypted state even during storage. Thus, the read authentication information is encrypted as a second encryption object. In the Kubernetes environment, this means that a new Secret object is created to ensure that the information is encrypted and cannot be accessed by unauthorized users or services, even when stored.
FIG. 2 is a schematic diagram of an optimization objective of a data processing method according to an embodiment of the present application, i.e., the optimization objective of the method shown in FIG. 1. As shown in FIG. 2, clients issue virtual machine management requests to KubeVirt (VM) through the Kubernetes API server. KubeVirt manages the lifecycle of the virtual machine through the virt-launcher component and the libvirt interface. virt-launcher detects the storage information which the virtual machine needs to access and directly calls the librbd library to interact with the Ceph cluster, without performing RBD mapping through the host kernel, so that the number of levels in the data access path is reduced. The hypervisor directly uses the RBD information provided by librbd to start the virtual machine, so that the virtual machine can directly access the data in the Ceph storage, and the I/O efficiency is improved. The Ceph cluster provides data storage services, with which the virtual machines communicate directly through the improved path.
Here, a client may be any entity requesting KubeVirt services, such as an end user, an automation script, or an application. The client interacts with KubeVirt through the Kubernetes API server, requesting the creation, starting, stopping, or deletion of virtual machine instances (VMIs). The virtual machine (VM) is the primary resource provided by KubeVirt; it is created by the client through the Kubernetes API, can run on the nodes managed by KubeVirt, and accesses the block devices in the Ceph storage system. In the KubeVirt architecture, the hypervisor may be QEMU or KVM, which is a virtualization layer running directly on the physical host, responsible for managing the running environment of the virtual machine, including the virtualized CPU, memory, storage, network, etc. librbd is a library for managing block devices (RBDs) in a Ceph storage system. In the modified architecture of FIG. 2, librbd is invoked directly by QEMU to bypass the traditional host kernel path, enabling direct communication between the virtual machine and the Ceph storage. The Ceph cluster is a distributed storage system that can provide block storage, object storage, and file system storage services. In the KubeVirt environment, Ceph block storage (RBD) is used as the back-end storage for virtual machine disks.
As shown in FIG. 1 and FIG. 2, the application can effectively shorten the data access path and improve data transmission efficiency by directly connecting the QEMU process in KubeVirt to the Ceph RBD. The method avoids the additional overhead caused by RBD mapping through the host kernel, because the data does not need to pass through the host kernel layer; instead, it is transferred directly from the virtual machine to QEMU, and QEMU then interacts directly with the Ceph RBD.
The application provides a detailed scheme enabling KubeVirt virtual machines to use Ceph directly, which differs greatly from the kernel mapping form. In addition, the application has the following advantages. 1. Through a direct IO mechanism, the virtual machine can directly access the Ceph block device, thereby avoiding the overhead of the traditional file system layer and remarkably improving the data read/write speed and I/O performance. 2. The number of intermediate links is reduced: in the traditional RBD mapping mode, the virtual machine can access storage only through conversion across several levels, whereas the present technology reduces the number of intermediate links, so that the data path is shorter and latency is reduced. 3. By dynamically detecting and applying specific labels, the system can automatically select the optimal data access mode according to different workload demands, which enhances the flexibility and adaptability of the system, and whether to connect directly to Ceph can be freely selected. 4. Because traditional kernel mounting depends on kernel compatibility, after direct connection by QEMU only the compatibility of the library carried by the application needs to be considered, so compatibility is enhanced. 5. Because unnecessary data copying and conversion are reduced, the system can utilize storage resources more efficiently, the total cost of ownership is reduced, and resource utilization is improved.
FIG. 3 is a flowchart of a method for starting a virtual machine according to an embodiment of the present application. As shown in FIG. 3, the method comprises the following steps:
Step S301, start virt-launcher. That is, virt-launcher is started, ready for the initialization and configuration of the virtual machine instance.
Step S302, judging whether the directory /etc/ceph exists. virt-launcher checks whether the preset directory /etc/ceph exists; this directory is used to store the authentication information and configuration for accessing the Ceph storage system. If it exists, the flow proceeds to step S303.
In step S303, if the /etc/ceph directory exists, virt-launcher uses the secret-define command of the libvirt tool virsh to define a secret that contains the authentication information of Ceph. This ensures that the virtual machine can securely access the Ceph storage, rather than directly exposing sensitive information in the Pod.
Step S304, after step S303, or in the absence of the /etc/ceph directory, the disk information is traversed. Specifically, virt-launcher begins traversing all the disk information of the target virtual machine, checking the source and type of each disk.
In step S305, in the process of traversing the disk information, it is determined whether each disk is an RBD-type file block, that is, whether it is derived from a Ceph RBD. If so, the flow goes to step S306; if not, the process continues to traverse the other disk information.
In step S306, for a disk labeled as RBD type, virt-launcher mounts the disk in network disk mode, which means that the virtual machine can communicate directly with the Ceph RBD to access the storage resources, without mapping through the host kernel.
In step S307, after the disk information processing and mounting are completed, virt-launcher continues to execute the start-up procedure of the virtual machine. The virtual machine can now access the storage resources provided by the Ceph RBD and exchange data in a direct IO manner.
FIG. 4 is a block diagram of a data processing apparatus according to an embodiment of the present application. As shown in FIG. 4, the apparatus includes:
the obtaining module 41 is configured to obtain target authentication information of the distributed storage system, and store the target authentication information in the container management system.
The generating module 42 is configured to, when the container management system creates a storage volume of a block type, generate a target file including the distributed object storage block device information if it is detected that the persistent volume claim corresponding to the storage volume includes a preset label.
The adding module 43 is configured to perform a deployment operation of the containerized data import related component by using the target authentication information, and, after the deployment operation is completed, add a preset label to the persistent volume claim corresponding to the data storage volume if it is detected that the value corresponding to direct input/output in the protocol description information corresponding to the data storage volume is true.
The processing module 44 is configured to, when it is detected in the process of starting the target container set by the virtual machine instance that the persistent volume claim corresponding to the data storage volume includes the preset label, obtain the target encryption information required for mapping the distributed object storage block device from the data storage volume, and mount the target encryption information in the preset directory.
The starting module 45 is configured to add the target authentication information to the configuration information of the target virtual machine and start the target virtual machine if it is detected that the target authentication information exists in the preset directory.
Optionally, if the target authentication information is detected to exist in the preset directory, the method further includes generating an encrypted file corresponding to the target authentication information, and calling a preset interface to access a storage part in the distributed storage system.
Optionally, starting the target virtual machine, including traversing a disk to be added by the target virtual machine; when traversing a disk to be added of a target virtual machine, if a block to be mounted is a target file and distributed object storage block equipment information in the target file is readable information, adding monitor information into the distributed object storage block equipment information, adding disk information to the target virtual machine in a network disk form, and starting the target virtual machine after adding the disk information to the target virtual machine.
Optionally, storing the target authentication information in the container management system includes storing the target authentication information in the form of a first encrypted object in a namespace of a containerized data import related component of a virtual machine management platform in the container management system and adding the target authentication information to configuration information of the containerized data import related component.
Optionally, after the preset label is added to the persistent volume claim corresponding to the data storage volume, the method further comprises the steps of reading the target authentication information, storing the target authentication information in the form of a second encrypted object, and accessing the storage portion in the distributed storage system through the target authentication information in the second encrypted object.
Optionally, the preset label comprises cdi.kubevirt.io/storage.direction=true.
Optionally, the target encryption information is mounted in a preset directory, wherein the target encryption information is mounted in the preset directory in a read-only mode, and the preset directory comprises /etc/ceph.
It should be noted that each module in FIG. 4 may be a program module (for example, a set of program instructions for implementing a specific function) or a hardware module; in the latter case, each module may be embodied, for example but not limited to, in the form of one processor per module, or the functions of the modules may be implemented by a single processor.
It should be noted that, for the preferred implementation of the embodiment shown in FIG. 4, reference may be made to the related description of the embodiment shown in FIG. 1, which is not repeated here.
FIG. 5 shows a block diagram of a hardware structure of a computer terminal for implementing the data processing method. As shown in FIG. 5, the computer terminal 50 may include one or more processors 502 (shown in the figure as 502a, 502b, ..., 502n), which may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA), a memory 504 for storing data, and a transmission module 506 for communication functions. The computer terminal 50 may further include a display, an input/output interface (I/O interface), a universal serial bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in FIG. 5 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the computer terminal 50 may also include more or fewer components than shown in FIG. 5, or have a different configuration than shown in FIG. 5.
It should be noted that the one or more processors 502 and/or other data processing circuits described above may be referred to herein generally as "data processing circuits". The data processing circuit may be embodied, in whole or in part, in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or may be incorporated, in whole or in part, into any of the other elements in the computer terminal 50. As referred to in the embodiments of the application, the data processing circuit acts as a kind of processor control (for example, selection of a variable resistance terminal path connected to an interface).
The memory 504 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the data processing methods in the embodiments of the present application, and the processor 502 executes the software programs and modules stored in the memory 504 to perform various functional applications and data processing, that is, implement the data processing methods described above. Memory 504 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 504 may further comprise memory located remotely from the processor 502, which may be connected to the computer terminal 50 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 506 is used to receive or transmit data via a network. The specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 50. In one example, the transmission module 506 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission module 506 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 50.
It should be noted here that, in some alternative embodiments, the computer terminal shown in fig. 5 may include hardware elements (including circuits), software elements (including computer code stored on a computer readable medium), or a combination of both hardware and software elements. It should be noted that fig. 5 is only one example of a specific example, and is intended to illustrate the types of components that may be present in the computer terminal described above.
It should be noted that the computer terminal shown in fig. 5 is configured to execute the data processing method shown in fig. 1; therefore, the above explanation of the method is also applicable to the electronic device and is not repeated herein.
The embodiment of the present application also provides a non-volatile storage medium comprising a stored program, wherein, when the program runs, it controls a device in which the storage medium is located to execute the data processing method.
The non-volatile storage medium executes a program that performs the following functions: acquiring target authentication information of a distributed storage system and storing the target authentication information in a container management system; when the container management system creates a storage volume of a block type, if it is detected that a persistent volume claim corresponding to the storage volume comprises a preset label, generating a target file comprising distributed object storage block device information; executing a deployment operation of a containerized data import related component by using the target authentication information; after the deployment operation is completed, if it is detected that a direct input/output corresponding value type in protocol description information corresponding to the data storage volume is true, adding the preset label to the persistent volume claim corresponding to the data storage volume; when a virtual machine instance in a virtual machine management platform starts a target container set, if it is detected that the persistent volume claim corresponding to the data storage volume comprises the preset label, acquiring, from the data storage volume, target encryption information required for mapping the distributed object storage block device, and mounting the target encryption information in a preset directory; and if it is detected that the preset directory comprises the target authentication information, adding the target authentication information to configuration information of a target virtual machine and starting the target virtual machine.
The embodiment of the present application also provides an electronic device comprising a memory and a processor, wherein the processor is configured to run a program stored in the memory, and the data processing method is executed when the program runs.
The processor is configured to run a program that performs the following functions: acquiring target authentication information of a distributed storage system and storing the target authentication information in a container management system; when the container management system creates a storage volume of a block type, if it is detected that a persistent volume claim corresponding to the storage volume comprises a preset label, generating a target file comprising distributed object storage block device information; executing a deployment operation of a containerized data import related component by using the target authentication information; after the deployment operation is completed, if it is detected that a direct input/output corresponding value type in protocol description information corresponding to the data storage volume is true, adding the preset label to the persistent volume claim corresponding to the data storage volume; when a virtual machine instance in a virtual machine management platform starts a target container set, if it is detected that the persistent volume claim corresponding to the data storage volume comprises the preset label, acquiring, from the data storage volume, target encryption information required for mapping the distributed object storage block device, and mounting the target encryption information in a preset directory; and if it is detected that the preset directory comprises the target authentication information, adding the target authentication information to configuration information of a target virtual machine and starting the target virtual machine.
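For illustration only, the control flow described above might be sketched as follows. This is a minimal sketch in which plain dictionaries stand in for the persistent volume claim, the protocol description information, and the virtual machine configuration; the function names and the "directIO" field name are hypothetical assumptions, not an actual API of the container management system or virtual machine management platform.

```python
# Hypothetical end-to-end sketch of the labeling and mounting flow
# described above. All object shapes and names are illustrative.

PRESET_LABEL = "cdi.kubevirt.io/storage.direction"
PRESET_DIRECTORY = "/etc/ceph"

def should_add_label(protocol_description: dict) -> bool:
    """Check whether the direct input/output corresponding value in
    the protocol description information is true."""
    return protocol_description.get("directIO") is True

def add_preset_label(pvc: dict) -> dict:
    """Add the preset label to the persistent volume claim."""
    pvc.setdefault("metadata", {}).setdefault("labels", {})[PRESET_LABEL] = "true"
    return pvc

def start_target_pod(pvc: dict, volume: dict, vm_config: dict) -> dict:
    """When the virtual machine instance starts the target container
    set: if the preset label is present, mount the target encryption
    information read-only in the preset directory and add the target
    authentication information to the virtual machine configuration."""
    labels = pvc.get("metadata", {}).get("labels", {})
    if labels.get(PRESET_LABEL) == "true":
        vm_config["mounts"] = [{
            "path": PRESET_DIRECTORY,
            "data": volume.get("encryption_info"),
            "readOnly": True,
        }]
        vm_config["auth"] = volume.get("auth_info")
    return vm_config
```

Under these assumptions, a volume whose protocol description reports direct input/output as true gets the preset label on its claim, and a target container set started against such a claim receives both the read-only encryption-information mount and the authentication information in its virtual machine configuration.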
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis; for any portion not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the above embodiment of the present application, the collected information is information and data authorized by the user or sufficiently authorized by each party, and the processes of collection, storage, use, processing, transmission, provision, disclosure, application, etc. of the related data all comply with the related laws and regulations and standards, necessary protection measures are taken without violating the public welfare, and corresponding operation entries are provided for the user to select authorization or rejection.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.