
CN121166596A - Network access method and system for native protocol I/O equipment in virtualized environment - Google Patents

Network access method and system for native protocol I/O equipment in virtualized environment

Info

Publication number
CN121166596A
CN121166596A (application CN202511369170.1A)
Authority
CN
China
Prior art keywords
protocol
network
virtual machine
native
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511369170.1A
Other languages
Chinese (zh)
Inventor
梁琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202511369170.1A
Publication of CN121166596A
Legal status: Pending

Landscapes

  • Computer And Data Communications (AREA)

Abstract

This invention relates to the field of virtualized network communication technology, specifically to a network access method and system for native protocol I/O devices in a virtualized environment, comprising the following steps: S1: deploying an I/O virtualization service on a physical server for connecting physical I/O devices; S2: installing a virtual machine I/O client in the virtual machine to provide standard I/O device interfaces to upper-layer applications and to establish a network connection with the I/O virtualization service; S3: establishing a native I/O protocol pipeline between the I/O virtualization service and the virtual machine I/O client; S4: transmitting I/O device data through the native I/O protocol pipeline; S5: maintaining the I/O session state and rebuilding the network connection during virtual machine migration. Through an innovative session persistence and state synchronization mechanism, the invention seamlessly maintains I/O connections during virtual machine migration, ensuring uninterrupted business operations and greatly improving system reliability and user experience.

Description

Network access method and system for native protocol I/O equipment in virtualized environment
Technical Field
The invention relates to the technical field of virtualized network communication, in particular to a network access method and system of a native protocol I/O device in a virtualized environment.
Background
With the development of virtualization technology, more and more business applications run in virtual machine environments. However, virtualization platforms typically access local I/O hardware (e.g., serial ports, USB interfaces) by mapping the physical device directly into the virtual machine. This approach causes the original I/O mapping to fail during virtual machine migration (VM migration), interrupting the I/O connection and adversely affecting the continuity and reliability of upper-layer applications.
The prior art mainly has the following solutions:
In these schemes, the I/O data is parsed and remote I/O access is performed over a generic network transport. The device is therefore presented in a compromised form: the guest does not see the original device, and data is obtained through a compatibility layer. Although this mitigates local I/O mapping failures to some extent, the I/O data must be re-encapsulated or converted, so the characteristics of the native I/O protocol cannot be maintained, which may cause the following problems:
1. Limited application-layer compatibility: applications that depend on specific I/O protocol features cannot work normally;
2. Reduced data transmission efficiency: protocol conversion and extra encapsulation cause performance loss;
3. Loss of special control commands: device-specific control commands cannot be preserved during protocol conversion;
4. Unmet timing requirements: timing-sensitive I/O devices cannot maintain their original precise timing;
5. Session interruption during migration: lacking a sound session-keeping mechanism, I/O operations are interrupted while the virtual machine migrates.
Disclosure of Invention
(I) Technical problems to be solved
Aiming at the defects of the prior art, the invention provides a network access method and system for a native protocol I/O device in a virtualized environment, to solve the technical problems of I/O device connection interruption during virtual machine migration and loss of native I/O protocol characteristics in prior solutions.
(II) Technical solution
In order to achieve the above purpose, the invention provides a network I/O device access method based on a native protocol in a virtualized environment, comprising the following steps:
S1, deploying an I/O virtualization service on a physical server, the service being used for connecting physical I/O devices and providing device registration, session management and protocol adaptation services;
S2, installing a virtual machine I/O client in the virtual machine, providing a standard I/O device interface to upper-layer applications, and establishing a network connection with the I/O virtualization service;
S3, establishing a native I/O protocol pipeline between the I/O virtualization service and the virtual machine I/O client, the pipeline maintaining the complete characteristics of the original I/O protocol;
S4, transmitting the I/O device data through the native I/O protocol pipeline, realizing transparent access of the virtual machine to the physical I/O device;
S5, during virtual machine migration, maintaining the I/O session state and re-establishing the network connection to ensure service continuity.
The I/O virtualization service provides multiplexed access to the same physical I/O device, supporting multiple virtual machines accessing the same physical I/O device simultaneously.
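The S1–S5 flow above can be sketched in Python. All class and method names here are illustrative assumptions for exposition, not the patent's actual implementation; the sketch only shows how the service, the client, and the multiplexed sessions relate:

```python
# Hypothetical sketch of steps S1-S3; names are assumptions, not the patent's code.

class IOVirtualizationService:
    """S1: runs on the physical server and owns the physical I/O devices."""
    def __init__(self):
        self.devices = {}    # device_id -> {"protocol": ..., "clients": set()}
        self.sessions = {}   # session_id -> session record

    def register_device(self, device_id, protocol):
        self.devices[device_id] = {"protocol": protocol, "clients": set()}

    def open_session(self, session_id, device_id, client_id):
        # Multiplexed access: several VMs may attach to one physical device.
        self.devices[device_id]["clients"].add(client_id)
        self.sessions[session_id] = {"device": device_id, "state": "open"}


class VMIOClient:
    """S2: runs inside the VM and exposes a standard device interface."""
    def __init__(self, client_id, service):
        self.client_id = client_id
        self.service = service

    def connect(self, session_id, device_id):
        # S3: establish the native-protocol pipeline to the service.
        self.service.open_session(session_id, device_id, self.client_id)
        return session_id


svc = IOVirtualizationService()
svc.register_device("usb0", protocol="USB")
client = VMIOClient("vm-1", svc)
client.connect("sess-1", "usb0")
assert "vm-1" in svc.devices["usb0"]["clients"]
```

A second client calling `connect` against the same `"usb0"` would simply join the device's client set, which is the multiplexing property the paragraph above claims.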
Further, in the step S3, the method for establishing the native I/O protocol pipeline includes:
identifying the protocol features of the physical I/O device, including command set, timing requirements and data format;
encapsulating the original I/O protocol into a network transmission protocol while keeping the complete characteristics of the original protocol;
unpacking the encapsulated data at the receiving end to restore the original I/O protocol data;
and executing the corresponding I/O operation, or returning the operation result, according to the unpacked protocol data.
Further, in the step S5, the method for maintaining the I/O connection during the migration process of the virtual machine includes:
before virtual machine migration, sending the current session state information to the I/O virtualization service and storing it persistently;
during virtual machine migration, suspending I/O operations and freezing the current session;
after migration completes, the virtual machine I/O client initiating a reconnection request to the I/O virtualization service;
the I/O virtualization service verifying the session information and restoring the session state;
the client and the server performing state synchronization to ensure consistency of the I/O device state;
and resuming normal I/O operation, ensuring service continuity.
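The persist/freeze/verify/restore cycle above can be sketched as a small session store. The class, field names, and token check are assumptions for illustration; the patent does not specify a concrete data layout:

```python
# Illustrative sketch of the migration-time session flow; names are assumptions.

class SessionStore:
    def __init__(self):
        self._store = {}

    def persist(self, session_id, state):
        # Before migration: durably save and freeze the session state.
        self._store[session_id] = dict(state, frozen=True)

    def restore(self, session_id, token):
        # After migration: verify the session, then thaw it.
        saved = self._store.get(session_id)
        if saved is None or saved.get("token") != token:
            raise KeyError("session verification failed")
        saved["frozen"] = False   # resume normal I/O
        return saved


store = SessionStore()
store.persist("sess-1", {"device": "ttyS0", "offset": 4096, "token": "abc"})
# ... VM migrates, client reconnects with the same session id and token ...
state = store.restore("sess-1", token="abc")
assert state["offset"] == 4096 and not state["frozen"]
```

A reconnect with a wrong token raises, which corresponds to the verification step before the session state is handed back.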
Further, the I/O virtualization service implements the following functions:
automatic discovery, registration and status monitoring of physical I/O devices;
adaptation and native-property preservation for multiple I/O protocols;
establishment and maintenance of connection sessions between virtual machines and I/O devices;
persistent storage and state synchronization of session information;
and security control and authentication of device access.
Further, the virtual machine I/O client implements the following functions:
providing a standard I/O device interface to upper-layer applications;
converting I/O requests of the application into a network transmission format;
establishing and maintaining a network connection with the I/O virtualization service;
reducing network accesses through a local cache, improving response speed;
and detecting network connection faults and implementing an automatic reconnection mechanism.
A network I/O device access system based on a native protocol in a virtualized environment, comprising:
an I/O virtualization service, deployed on the physical server, for connecting the physical I/O devices and providing a networked I/O service;
a virtual machine I/O client, deployed within the virtual machine, for providing a standard I/O interface to applications and communicating with the I/O virtualization service;
a native I/O protocol pipeline, a network transmission channel established between the server and the client that keeps the original I/O protocol characteristics;
and a session management module for maintaining the device connection state and supporting connection recovery after virtual machine migration.
Further, the I/O virtualization service includes:
a device management module, responsible for discovery, registration and state monitoring of the physical I/O devices;
a protocol adaptation engine, responsible for identifying and processing the protocol characteristics of different types of I/O devices;
a session manager, responsible for establishing and maintaining connection sessions between the virtual machines and the I/O devices;
a network transmission module, responsible for reliable transmission of data over the network;
a security control module, responsible for connection authentication and access control;
a device resource scheduling module, responsible for reasonably allocating device resources in a multi-virtual-machine environment;
and a cluster management module, responsible for cooperative work and state synchronization of nodes in a multi-server environment.
Further, the virtual machine I/O client includes:
a device interface layer, providing a standard I/O device interface to upper-layer applications;
a protocol conversion layer, responsible for converting I/O requests to and from the network format;
a session connection manager, responsible for establishing and maintaining network connections;
a local caching module, which reduces network accesses through caching;
a fault detection and recovery module, responsible for detecting connection faults and implementing a recovery mechanism;
a performance optimization module, which improves system efficiency through request batching and operation merging;
and a monitoring and diagnosis module, which collects statistics and performance indicators of I/O operations.
Further, the I/O device types supported by the native I/O protocol pipeline include, but are not limited to, USB devices, serial devices, parallel devices, encryption card devices, special instrument devices, SCSI devices, and industrial bus devices.
Further, the native I/O protocol pipeline performs the following functions:
maintaining the timing characteristics and control commands of the original I/O protocol;
providing the reliability guarantees required for network transmission;
supporting data compression and optimization to improve transmission efficiency;
implementing data encryption and integrity verification to ensure security;
and supporting unified encapsulation and transport of multiple I/O protocol types.
Further, the system also comprises a device resource scheduling module, which is used for distributing device resources according to a preset strategy or priority rule when a plurality of virtual machines request to access the same I/O device.
Further, the system also comprises a device simulation module, which is used for maintaining the normal running of the application program by simulating the device behavior when the physical I/O device is temporarily unavailable.
Furthermore, the system supports multipath transmission at the network transport layer, improving system reliability by establishing redundant network connections.
Further, the method also comprises the following steps:
performing audit recording of operations on the physical I/O device, the audit records including operation time, operation type, operation content and operation result;
and implementing an operation playback function based on the audit records, for fault analysis and system debugging.
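The audit-and-playback idea can be sketched as follows. The record fields follow the four listed above (time, type, content, result); the function names and log shape are assumptions, not the patent's implementation:

```python
import time

# Hedged sketch of audit recording and operation playback; names are assumptions.

audit_log = []

def audited(op_type, content, operation):
    """Run an I/O operation and append an audit record for it."""
    result = operation()
    audit_log.append({"time": time.time(), "type": op_type,
                      "content": content, "result": result})
    return result

def replay(log):
    """Reconstruct the operation sequence from the audit trail,
    e.g. for fault analysis or system debugging."""
    return [(entry["type"], entry["content"]) for entry in log]


audited("write", b"\x01\x02", lambda: "ok")
audited("read", 2, lambda: b"\x01\x02")
assert replay(audit_log) == [("write", b"\x01\x02"), ("read", 2)]
```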
Further, the I/O virtualization service also has a load balancing function, and when the load of a single server is too high, part of I/O equipment access tasks can be automatically transferred to other I/O virtualization services.
Further, the system also includes a cluster management mechanism implemented in a multi-server deployment environment:
automatic discovery and registration of server nodes;
state synchronization and consistency maintenance among cluster nodes;
task dynamic distribution based on load conditions;
node fault detection and automatic fault transfer;
Centralized management and distribution of cluster configurations.
Further, the system also comprises a multi-tenant support mechanism that realizes: resource isolation and security boundaries between different tenants; tenant-level resource quotas and priority management; tenant-specific device allocation and sharing modes; and tenant-level performance monitoring and reporting.
Further, the encapsulation format of the original I/O protocol into the network transmission protocol includes:
a header identifier, marking the start of the data packet;
a protocol type, identifying the type of the original I/O protocol;
a session ID, uniquely identifying a session between the virtual machine and the I/O device;
a sequence number, identifying the order of the data packets;
a timestamp, used for maintaining the timing characteristics;
flag bits, comprising various control flags;
a data length, identifying the length of the valid data;
the original data, comprising the complete contents of the original I/O protocol;
and a check code, used for verifying data integrity.
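The field list above maps naturally onto a fixed header plus payload plus trailing check code. The field widths, the magic value, and the choice of CRC32 as the check code in this sketch are assumptions for illustration; the patent does not fix a concrete wire layout:

```python
import struct
import zlib

# Sketch of the listed encapsulation fields; widths and CRC32 are assumptions.
MAGIC = 0xA55A                      # header identifier
HDR = struct.Struct("!HBIIQBI")     # magic, proto type, session ID, sequence
                                    # number, timestamp, flags, data length

def encapsulate(proto, session, seq, ts, flags, payload):
    header = HDR.pack(MAGIC, proto, session, seq, ts, flags, len(payload))
    body = header + payload
    return body + struct.pack("!I", zlib.crc32(body))   # trailing check code

def decapsulate(packet):
    body, crc = packet[:-4], struct.unpack("!I", packet[-4:])[0]
    if zlib.crc32(body) != crc:
        raise ValueError("integrity check failed")
    magic, proto, session, seq, ts, flags, length = HDR.unpack(body[:HDR.size])
    assert magic == MAGIC
    return proto, session, seq, ts, flags, body[HDR.size:HDR.size + length]


pkt = encapsulate(proto=1, session=42, seq=7, ts=1700000000, flags=0,
                  payload=b"ATZ\r")
assert decapsulate(pkt)[-1] == b"ATZ\r"   # original protocol bytes survive intact
```

The round trip preserves the original protocol bytes verbatim, which is the "complete characteristics of the original protocol" property the encapsulation is meant to guarantee.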
Further, the method comprises the following steps:
collecting performance indicators of the system, including latency, throughput and resource utilization;
analyzing system bottlenecks based on the performance indicators;
dynamically adjusting system parameters to optimize system performance;
and providing targeted optimization strategies for different application scenarios and device characteristics.
Further, the system supports the following application scenarios:
remote access of the financial industry security encryption card equipment;
collecting and analyzing data of medical equipment;
remotely monitoring industrial control equipment;
remote sharing of scientific research experimental equipment;
virtualized access and sharing of special peripherals.
(III) Advantageous effects
Compared with the prior art, the invention provides a network access method and system for a native protocol I/O device in a virtualized environment, with the following beneficial effects:
1. Through an innovative session persistence and state synchronization mechanism, the invention seamlessly maintains I/O connections during virtual machine migration, ensuring that business operations experience no perceptible interruption. This provides continuity guarantees for core enterprise services and greatly improves system reliability and user experience.
2. The invention implements a protocol encapsulation technique that fully preserves the original protocol fields, so that applications depending on specific I/O protocol characteristics run normally in the virtualized environment, timing-sensitive I/O devices operate precisely, and networked I/O devices are functionally equivalent to local devices, improving application compatibility and stability.
3. The system is applicable not only to common devices such as USB, serial and parallel ports, but also supports various special devices such as industrial bus devices and encryption card devices, meeting the needs of different industries and enhancing universality.
4. The plug-in mechanism provided by the system facilitates future extension to new protocol types, enhancing flexibility.
5. The system is divided into multiple modules, each responsible for an independent function, making it easy to manage and maintain; the layered design reduces system complexity and maintenance cost.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a network I/O device access method based on native protocol in a virtualized environment.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Specific examples are given below.
Example 1:
Referring to FIG. 1, the present invention provides a native protocol I/O device network access system in a virtualized environment, comprising an I/O virtualization service, a virtual machine I/O client, a native I/O protocol pipeline, and a session management module.
The I/O virtualization service is deployed on a physical server and is responsible for connection and management of physical I/O equipment;
The physical I/O device may be various types of external devices, such as a serial device, a USB device, a parallel device, etc.;
The I/O virtualization service communicates directly with the physical I/O device through a device driver to acquire device information and data.
The virtual machine I/O client is deployed inside the virtual machine and provides a standard I/O device interface for an application program running in the virtual machine;
applications may access remote I/O devices as local devices without having to perceive the actual location of the physical device.
The native I/O protocol pipeline is a network transmission channel established between the I/O virtualization service and the virtual machine I/O client;
the channel uses a dedicated encapsulation scheme that maintains the complete characteristics of the original I/O protocol, including timing requirements and control commands, while providing the reliability and security guarantees required for network transmission.
The session management module is responsible for maintaining the equipment connection state, recording session information and supporting the quick recovery of connection after the migration of the virtual machine;
The session information includes device identification, connection parameters, status records, etc.
Example 2:
The I/O virtualization service includes the following functional modules:
(I) A device management module, responsible for the discovery, registration and state monitoring of physical I/O devices. When a new device is connected, it automatically identifies the device type and characteristics and registers the device into the system; it also supports hot-plug detection and dynamic resource allocation. It comprises the following submodules:
1. a device discovery submodule, which discovers newly connected I/O devices in real time by polling physical interfaces or monitoring system events;
2. a device identification submodule, which identifies the device type and function by reading the device descriptor or feature code;
3. a driver loading submodule, which automatically loads the corresponding device driver according to the device type;
4. a resource allocation submodule, which allocates system resources such as memory space and interrupt numbers for the device;
5. a state monitoring submodule, which periodically checks the device state and promptly detects device anomalies.
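The discovery and state-monitoring submodules above can be sketched as one polling pass over a scan interface. The function and registry shape are assumptions; a real service would read OS device events (e.g. udev) rather than this stub:

```python
# Minimal polling-based device discovery sketch; names are assumptions.

def poll_devices(scan_interface, registry):
    """Register newly seen devices and mark vanished ones offline."""
    present = set(scan_interface())
    for dev in present - registry.keys():
        registry[dev] = "online"        # discovery + registration
    for dev in set(registry) - present:
        registry[dev] = "offline"       # state monitoring notices removal
    return registry


registry = {}
poll_devices(lambda: ["usb0", "ttyS0"], registry)
poll_devices(lambda: ["usb0"], registry)   # ttyS0 was unplugged between polls
assert registry == {"usb0": "online", "ttyS0": "offline"}
```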
(II) A protocol adaptation engine, responsible for identifying and processing the protocol characteristics of different types of I/O devices. The engine implements adaptation layers for various I/O protocols, keeping the complete characteristics of the original protocol while providing the encapsulation required for network transmission. Key technical elements of the protocol adaptation engine include:
1. a protocol identification mechanism, which identifies the specific protocol type and version used by the device through feature matching;
2. a protocol adaptation layer, which implements a dedicated adaptation layer for each I/O protocol and preserves the protocol semantics;
3. a protocol conversion engine, which realizes lossless conversion between different protocols for special application scenarios;
4. a protocol extension interface, which supports adding new protocol types through a plug-in mechanism;
5. protocol optimization processing, which implements performance optimization strategies for specific protocols.
(III) A session manager, responsible for establishing and maintaining connection sessions between the virtual machines and the I/O devices. Session information is stored persistently to support connection recovery after virtual machine migration; the session manager also implements a synchronization mechanism for session state, ensuring consistency of the connection state. Its implementation comprises:
1. a session establishment module, which processes connection requests from virtual machines and creates new sessions;
2. a session state machine, which manages the life cycle and state transitions of a session;
3. a session persistence module, which saves session information to persistent storage and supports recovery after a system restart;
4. a state synchronization module, which ensures session-state consistency in a multi-server deployment environment;
5. a session recovery module, which supports rapid session recovery after virtual machine migration or network interruption.
(IV) A network transmission module, responsible for transmitting data over the network and realizing reliable data transmission. It adopts low-latency network protocols, supports QoS guarantees, and implements data compression and optimization algorithms to improve transmission efficiency. Specific implementation techniques include:
1. multi-protocol support: simultaneously supporting TCP, UDP, RDMA and other transmission protocols;
2. an adaptive selection algorithm: automatically selecting the optimal transmission protocol according to network conditions and data characteristics;
3. QoS implementation: realizing differentiated service through traffic marking and priority queues;
4. congestion control: implementing an adaptive congestion-control algorithm to avoid network congestion;
5. multipath transmission: supporting simultaneous use of multiple network paths to improve throughput and reliability;
6. bandwidth management: realizing dynamic bandwidth allocation to guarantee the performance of critical I/O operations.
(V) A security control module, responsible for connection authentication and access control, ensuring that only authorized virtual machines can access the designated I/O devices. It realizes end-to-end encrypted communication and provides an integrity-check mechanism to prevent data tampering. Its technical implementation comprises:
1. identity authentication: supporting a multi-factor authentication mechanism to confirm the legitimacy of connection requests;
2. access control: a fine-grained access control system based on roles and policies;
3. data encryption: using TLS 1.3 or a higher version to encrypt data in transit;
4. integrity protection: ensuring data is not tampered with through HMAC and similar algorithms;
5. security auditing: recording all security-related events to support post-hoc auditing.
(VI) A device resource scheduling module, which reasonably allocates device resources when multiple virtual machines simultaneously request access to the same I/O device. The module implements a resource scheduling algorithm based on priority and fairness, ensuring efficient utilization of system resources. The specific implementation comprises:
1. resource pool management: maintaining a pool of available I/O device resources and tracking resource usage state;
2. scheduling algorithms: implementing priority scheduling, time-slice round-robin, fair sharing and other algorithms;
3. QoS guarantees: ensuring critical business obtains the device resources it requires;
4. load balancing: realizing balanced distribution of device load in a multi-server environment;
5. resource isolation: ensuring resource usage of different tenants is mutually isolated.
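Priority scheduling, one of the algorithms listed above, can be sketched with a heap; the class name and the lower-number-is-higher-priority convention are assumptions. The monotonic counter breaks ties so that equal-priority requests are granted in FIFO order:

```python
import heapq
import itertools

# Sketch of priority-based device scheduling; names/conventions are assumptions.

class DeviceScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()   # FIFO tie-breaker

    def request(self, vm_id, priority):
        # Lower number = higher priority.
        heapq.heappush(self._queue, (priority, next(self._counter), vm_id))

    def grant_next(self):
        _, _, vm_id = heapq.heappop(self._queue)
        return vm_id


sched = DeviceScheduler()
sched.request("vm-batch", priority=5)
sched.request("vm-critical", priority=1)
sched.request("vm-normal", priority=3)
grants = [sched.grant_next() for _ in range(3)]
assert grants == ["vm-critical", "vm-normal", "vm-batch"]
```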
(VII) A cluster management module, which realizes cooperative work and state synchronization between servers in a multi-server deployment environment. The module supports load balancing, failover and centralized management. Its technical implementation comprises:
1. node discovery: automatically discovering other server nodes in the cluster;
2. state synchronization: realizing state consistency between cluster nodes;
3. task distribution: distributing I/O processing tasks according to load conditions;
4. fault detection: monitoring node states in real time to detect node failures;
5. automatic switchover: automatically switching to a standby node when a node fails.
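A common way to implement the fault-detection step above is a heartbeat timeout; the function, timestamps, and 3-second threshold here are illustrative assumptions:

```python
# Heartbeat-timeout failure detection sketch; names and threshold are assumptions.

def detect_failed(heartbeats, now, timeout=3.0):
    """Return the nodes whose last heartbeat is older than the timeout."""
    return [node for node, last in heartbeats.items() if now - last > timeout]


beats = {"node-a": 100.0, "node-b": 96.0, "node-c": 99.5}
failed = detect_failed(beats, now=100.5)
assert failed == ["node-b"]   # only node-b exceeded the 3 s timeout
```

Nodes flagged here would then be subject to the automatic switchover step, with their I/O tasks redistributed to healthy peers.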
Example 3:
The virtual machine I/O client comprises the following functional modules:
(I) A device interface layer, providing standard I/O device interfaces to upper-layer applications. Applications access the I/O device through this interface without needing to perceive the device's actual location; the device interface layer emulates local device behavior, ensuring application compatibility. The specific implementation comprises:
1. standard device interface implementation: supporting various common device interfaces, such as Windows device driver interfaces and Linux device file interfaces;
2. device characteristic emulation: fully emulating the behavioral characteristics of the physical device, including interrupt and DMA mechanisms;
3. interface adaptation conversion: providing adaptation layers for different operating systems to ensure cross-platform compatibility;
4. a transparent access mechanism: allowing applications to use the remote device without modification;
5. device state management: maintaining device state information and promptly feeding it back to the application.
(II) A protocol conversion layer, responsible for converting application I/O requests into a network transmission format, and for restoring received network data into I/O responses the application can recognize. This layer preserves the characteristics of the original I/O protocol, ensuring that the protocol semantics remain unchanged. Its technical implementation comprises:
1. a request parsing module, which parses I/O requests initiated by the application;
2. a protocol mapping module, which maps I/O requests to the native I/O protocol format;
3. a protocol encapsulation module, which encapsulates the native I/O protocol into the network transmission format;
4. a response processing module, which processes I/O operation results returned from the server;
5. an asynchronous processing mechanism, which supports asynchronous I/O operations and improves system concurrency.
(III) A session connection manager, responsible for establishing and maintaining the network connection with the I/O virtualization service; when network interruption or virtual machine migration is detected, it automatically reconnects and restores the session state. The specific implementation comprises:
1. a connection establishment module, which establishes a secure connection with the I/O virtualization service;
2. a session authentication module, which completes identity authentication and permission verification;
3. a connection monitoring module, which monitors the network connection state and promptly detects connection anomalies;
4. an automatic reconnection module, which automatically attempts reconnection after the connection is interrupted;
5. a session recovery module, which rebuilds the session state after the connection is restored;
6. multipath management: supporting multiple network paths with path selection and switching.
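The automatic reconnection submodule is typically implemented with capped exponential backoff; the delays, cap, and retry count in this sketch are assumptions, not values from the patent:

```python
# Reconnect-with-exponential-backoff sketch; delays and cap are assumptions.

def reconnect(attempt_connect, max_tries=5, base_delay=0.5):
    """Try to reconnect, doubling the wait after each failure (capped at 8 s).
    Returns (succeeded, list of delays that would have been slept)."""
    delays = []
    for attempt in range(max_tries):
        if attempt_connect():
            return True, delays
        delays.append(min(base_delay * 2 ** attempt, 8.0))  # would sleep here
    return False, delays


tries = iter([False, False, True])      # fails twice, then succeeds
ok, delays = reconnect(lambda: next(tries))
assert ok and delays == [0.5, 1.0]
```

After a successful reconnect, the session recovery module would present the saved session ID so the service can restore state rather than start a fresh session.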
And (IV) the local caching module reduces network access through the local caching, improves the response speed of the I/O operation, saves the copy of the frequently accessed data or configuration information, and reduces network transmission delay, wherein the technology implementation comprises the following steps:
1. The cache policy management is to realize various cache policies such as LRU, LFU, etc.;
2. maintaining cache consistency, namely ensuring the consistency of cache data and server-side data;
3. a prefetching mechanism for acquiring data possibly needed in advance according to the access mode;
4. write-back strategy, which is to support different write operation strategies such as write-through, write-back and the like;
5. buffer space management, namely dynamically adjusting the buffer size and optimizing the use of the memory;
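As an illustration of item 1, an LRU policy for the local caching module can be sketched in a few lines of Python (capacity and key/value types are assumptions; LFU and the consistency, prefetch and write-back mechanisms of items 2 to 5 would layer on top of this):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache for frequently accessed data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)        # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```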
And (V) a fault detection and recovery module, which is responsible for detecting network connection faults and I/O operation anomalies and realizing a fault recovery mechanism; the module realizes functions such as operation retry and resumable transfer (breakpoint resume), improving the reliability of the system, and specifically comprises the following steps:
1. the fault monitoring mechanism is used for monitoring the I/O operation state and network connection in real time;
2. fault classification and identification, namely distinguishing different types of faults and taking corresponding measures;
3. An operation retry strategy, namely performing intelligent retry on failed I/O operation;
4. breakpoint resume mechanism, which supports breakpoint recovery of large data volume transmission;
5. a degradation service strategy, namely providing degradation service under serious fault condition;
6. A state recovery mechanism for recovering the normal operation state after the fault is resolved;
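Items 2 and 3 above, fault classification followed by an intelligent retry, can be sketched as follows; the fault taxonomy (`timeout`, `conn_reset`, etc.) and class names are illustrative, not defined by this embodiment:

```python
TRANSIENT = {"timeout", "conn_reset"}   # retryable fault classes (assumed taxonomy)

class IOFault(Exception):
    """I/O failure carrying a fault-class label."""
    def __init__(self, kind):
        super().__init__(kind)
        self.kind = kind

def retry_io(op, max_retries=3):
    """Retry `op` only for transient fault classes; permanent faults
    (anything not in TRANSIENT) are surfaced immediately."""
    for attempt in range(max_retries + 1):
        try:
            return op()
        except IOFault as fault:
            if fault.kind not in TRANSIENT or attempt == max_retries:
                raise
```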
And (VI) a performance optimization module, which improves the performance and efficiency of I/O operations through various optimization technologies; the module realizes functions such as request batch processing and operation merging, reduces the number of network interactions and improves throughput, and the key technology comprises the following steps:
1. Request batch processing, namely merging a plurality of small I/O requests into a batch processing request;
2. operation merging, namely merging continuous read-write operations, and reducing network interaction;
3. I/O priority management, namely setting different processing priorities according to operation importance;
4. asynchronous parallel processing, namely supporting parallel execution of a plurality of I/O operations;
5. Optimizing the use efficiency of CPU, memory and network resources;
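Item 2, operation merging, amounts to coalescing adjacent same-direction requests into larger ones. A simplified sketch, modeling each request as a `(kind, offset, length)` tuple (this representation is an assumption made for illustration):

```python
def merge_requests(reqs):
    """Coalesce contiguous same-direction requests into one larger request,
    reducing the number of network round trips."""
    merged = []
    for kind, off, length in sorted(reqs, key=lambda r: (r[0], r[1])):
        if merged:
            k, o, l = merged[-1]
            if k == kind and o + l == off:       # adjacent: extend previous
                merged[-1] = (k, o, l + length)
                continue
        merged.append((kind, off, length))
    return merged
```

The merged list carries the same data as the originals, so protocol semantics are preserved while the interaction count drops.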
And (VII) a monitoring and diagnosing module, which collects statistical information and performance indexes of I/O operations, provides data support for performance tuning and fault diagnosis, and realizes functions such as log recording and performance analysis, comprising:
1. Performance index collection, namely recording key performance indexes such as response time, throughput and the like;
2. recording operation log, namely recording I/O operation process and result in detail;
3. recording and reporting abnormal events and error conditions;
4. performance bottleneck analysis, namely identifying the performance bottleneck in the system;
5. diagnostic tool support, namely providing a diagnostic interface for fault analysis tools.
Example 4:
The workflow of the native I/O protocol pipeline includes the steps of:
(I) Protocol analysis, namely the I/O virtualization service analyzes the protocol characteristics of the physical I/O device, including command set, timing requirements, data format, etc.; specifically, the system maintains a protocol characteristic library containing characteristic descriptions of common I/O device protocols, and when a new device is connected, the system automatically matches against the library to identify the protocol type and characteristics of the device.
And (II) protocol encapsulation, namely encapsulating the original I/O protocol into a network transmission protocol, and adopting a special encapsulation format to maintain the complete characteristics of the original protocol, wherein the semantic and time sequence characteristics of the original protocol are not changed in the encapsulation process, and the encapsulation format comprises the following fields:
1. header identification (4 bytes) identifying the start of the encapsulated packet;
2. Protocol type (2 bytes) identifies the type of original I/O protocol;
3. A time stamp (8 bytes) recording the encapsulation time for maintaining the timing characteristics;
4. Data length (4 bytes) identifying the length of valid data;
5. original data, namely maintaining the complete content of an original I/O protocol;
6. Check code (4 bytes) for data integrity check;
And (III) network transmission, namely transmitting the encapsulated protocol data from the I/O virtualization service to the virtual machine I/O client or from the client to the server through the network, wherein the integrity and the security of the data are ensured in the transmission process, and the network transmission adopts the following optimization strategies:
1. for time sensitive I/O operation, UDP protocol is used and a reliability guarantee mechanism is combined;
2. for large data volume transmission, the TCP protocol is used and data compression is realized;
3. multipath transmission is supported, and the network utilization rate and reliability are improved;
4. Realizing network QoS, ensuring the preferential transmission of key I/O operation;
And (IV) protocol decapsulation, namely decapsulating the received data by a receiving party, restoring the original I/O protocol data, wherein the decapsulation process keeps the time sequence and semantic characteristics of the original protocol, and the decapsulation step comprises the following steps of:
1. Checking the integrity of the data packet;
2. extracting time stamp information for controlling the timing of the I/O operation;
3. analyzing the protocol type and selecting corresponding processing logic;
4. Restoring original I/O protocol data;
And (V) protocol processing, namely executing corresponding I/O operation or returning operation results according to the unpacked protocol data, wherein the processing procedure comprises the following steps:
1. For input operations, passing data to an upper layer application or device;
2. For output operations, collecting operation results and preparing return data;
3. processing special control commands and status queries;
4. Recording an operation log for fault detection and performance analysis;
the native I/O protocol pipeline supports a variety of I/O protocol types including, but not limited to:
1. USB protocol, supporting the USB 1.0/2.0/3.0 specifications, including control, bulk, interrupt and isochronous transfers;
2. The serial port protocol supports RS-232/422/485 standard, including baud rate setting, parity check, flow control, etc.;
3. parallel port protocol supporting IEEE 1284 standard including SPP, EPP and ECP modes;
4. SCSI protocol, supporting the SCSI-2/3 standards for storage device access;
5. a special instrument protocol supporting instrument control protocols such as GPIB/IEEE-488;
6. the encryption card protocol supports the access standard of the security devices such as PKCS#11;
For different types of protocols, the system adopts different encapsulation modes to ensure the reservation of protocol characteristics, and meanwhile, the system provides a plug-in mechanism to support the expansion of new protocol types and improve the universality and expansibility of the system.
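The plug-in mechanism for extending protocol support can be sketched as a registry keyed by protocol type codes like those used elsewhere in this description; the decorator, handler classes, and `name` attribute below are illustrative assumptions:

```python
_PROTOCOL_HANDLERS = {}

def register_protocol(type_id):
    """Decorator registering a handler class for a protocol type ID."""
    def wrap(cls):
        _PROTOCOL_HANDLERS[type_id] = cls
        return cls
    return wrap

def handler_for(type_id):
    """Instantiate the handler for a type ID, or fail for unknown types."""
    try:
        return _PROTOCOL_HANDLERS[type_id]()
    except KeyError:
        raise ValueError(f"unsupported protocol type 0x{type_id:04X}")

@register_protocol(0x0001)
class UsbHandler:
    name = "usb"

@register_protocol(0x0002)
class SerialHandler:
    name = "serial"
```

A new device protocol is then supported by shipping one additional handler class, without touching the dispatch logic.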
Example 5:
the method for maintaining the I/O connection in the migration process of the virtual machine comprises the following steps:
The method comprises the steps of (a) migration preparation, namely when the virtual machine is detected to be migrated, the virtual machine I/O client sends current session state information to an I/O virtualization service, and a server persists the session information, wherein the specific implementation comprises the following steps:
1. detecting migration event, namely identifying the beginning of migration by monitoring a virtualization platform event or a system signal;
2. Session state collection, namely collecting detailed state information of all current active I/O connections;
3. The state persistence, namely, a server receives the state information and stores the state information into a persistent storage system;
4. session marking, namely, specially marking the session to be migrated for subsequent recovery;
5. Reserving resources, namely reserving necessary resources on a target server to prepare for the session recovery after migration;
and (II) connection freezing, namely suspending I/O operation by the I/O client of the virtual machine in the actual migration process of the virtual machine, notifying a server to freeze the current session, and keeping equipment connection by the server but suspending data transmission, wherein the technical implementation comprises the following steps:
1. Operation queuing, namely placing a new I/O request into a waiting queue, and temporarily not executing the new I/O request;
2. Performing in-process operation processing, namely performing proper processing on the I/O operation being executed to ensure data consistency;
3. determining a safe pause point of the I/O operation, and avoiding data damage;
4. A session freeze notification, namely sending a freeze command to a server, and suspending a related session by the server;
5. The server keeps the equipment connection and state and waits for the migration of the virtual machine to be completed;
and thirdly, after the migration of the virtual machine is completed, restarting the I/O client of the virtual machine on a new physical host, and detecting the change of the network environment, wherein the specific implementation comprises the following steps:
1. Identifying the network environment and system configuration on the new physical host;
2. Service discovery, namely rediscovering the network location of the I/O virtualization service;
3. Analyzing the network paths and performance characteristics between the client and the server;
4. Confirming that the virtual machine migration is fully complete and the system is in a stable state;
5. Reconnection preparation, namely preparing to reestablish the connection and loading the necessary state information;
and (IV) connection reestablishment, namely, the virtual machine I/O client initiates a reconnection request to the I/O virtualization service by using the saved session information, and the server verifies the session information and restores the session state, wherein the technical implementation comprises the following steps:
1. The client sends a reconnection request to the server, wherein the reconnection request comprises a session identifier and authentication information;
2. The server verifies the validity of the request and the validity of the session information;
3. the server reallocates necessary system resources for recovering the session;
4. negotiating connection parameters, namely negotiating connection parameters, such as transmission protocol, security options and the like, between the client and the server;
5. the server loads the session state from the persistent storage and rebuilds the session environment;
And fifthly, after reconnection is successful, the client and the server perform state synchronization to ensure the consistency of the states of the I/O equipment, and the method specifically comprises the following steps:
1. Device state query, namely the client acquires the current device state from the server;
2. State comparison, namely comparing the acquired state with the locally cached state;
3. Difference resolution, namely processing state differences to ensure consistency;
4. Cache update, namely updating the device state information in the local cache;
5. Confirmation mechanism, namely the client confirms to the server that state synchronization is complete;
and (six) restoring the I/O operation, namely restoring the normal I/O operation after the state synchronization is completed, and enabling the application program to continuously access the I/O device, wherein the technical implementation comprises the following steps:
1. queue processing, namely processing I/O requests queued during migration;
2. operation priority adjustment, namely adjusting the execution sequence of the I/O operation according to the importance;
3. Application notification, notifying the application that the I/O connection has been restored if necessary;
4. Performance monitoring, namely closely monitoring the I/O operation performance after recovery;
5. Exception handling, namely handling possible abnormal conditions in the recovery process;
key technical characteristics of the whole migration process:
1. Transparency, namely completely transparent to upper layer application, and an application program does not need to sense virtual machine migration events;
2. data consistency, namely ensuring that data is not lost, repeated and disordered in the migration process;
3. Fast recovery, namely minimizing the I/O interruption time through state pre-preservation and parallel processing;
4. Automation, namely automatically completing the whole process without manual intervention;
5. reliability, namely, a perfect exception handling mechanism is provided, so that correct recovery can be ensured under various exception conditions;
In addition, the system achieves the following advanced features:
1. Delay migration, namely for key I/O operation, delaying virtual machine migration time and ensuring operation completion;
2. The connection of key equipment is recovered preferentially, so that the core service is ensured to be recovered rapidly;
3. the batch migration support is used for supporting the I/O connection maintenance when a plurality of virtual machines migrate simultaneously;
4. And (3) migrating across network domains, namely supporting I/O connection recovery when the virtual machine migrates between different network domains.
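The six migration phases above can be modeled as a small state machine in which I/O submitted while a session is frozen is queued rather than executed; a minimal sketch (the state and event names are paraphrases of the steps, not terms from this embodiment):

```python
# Session states during virtual machine migration, following the
# prepare / freeze / re-establish / synchronize / resume flow.
TRANSITIONS = {
    ("active", "migration_detected"): "persisted",
    ("persisted", "freeze"): "frozen",
    ("frozen", "reconnect_ok"): "synchronizing",
    ("synchronizing", "state_synced"): "active",
    ("frozen", "reconnect_failed"): "frozen",   # stay frozen, retry later
}

class MigrationSession:
    def __init__(self):
        self.state = "active"
        self.pending = []            # I/O queued while frozen

    def handle(self, event):
        """Advance the session on a migration event; unknown events are ignored."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

    def submit(self, io_req):
        """Execute immediately when active, otherwise queue for later replay."""
        if self.state == "active":
            return f"executed:{io_req}"
        self.pending.append(io_req)
        return "queued"
```

Queued requests correspond to the "queue processing" of step (VI): they are replayed once the session returns to the active state.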
Example 6:
the optimization technique of data transmission in the native I/O protocol pipeline:
the encapsulation format of the native I/O protocol data contains the following fields:
1. Magic word (Magic Number), 4 bytes, fixed value 0x4E494F50 (ASCII for "NIOP"), used to identify the start of the data packet;
2. Version number 2 bytes, identifying protocol version, currently 0x0100 (representing version 1.0);
3. Protocol type 2 bytes, identifying the type of original I/O protocol, such as:
0x0001: USB protocol;
0x0002: serial port protocol;
0x0003: parallel port protocol;
0x0004: SCSI protocol;
0x0005: encryption card protocol;
4. a session ID of 8 bytes, uniquely identifying a session between the virtual machine and the I/O device;
5. The serial number is 4 bytes, and the unique serial number of each data packet is used for sequencing and de-duplication;
6. 8 bytes of time stamp, recording the time of data packet generation, and using the time stamp for time sequence control and overtime processing;
7. 4 bytes of flag bit, including various control flags:
bit 0: whether an acknowledgement (ACK) is required;
bit 1: whether this is a control packet;
bit 2: whether this is the last fragment;
bit 3: whether the payload is compressed;
bit 4: whether this is a retransmitted packet;
the remaining bits are reserved;
8. The data length is 4 bytes, and the length of the effective data is marked;
9. Original data, variable length, containing the complete content of the original I/O protocol;
10. Check code 4 bytes, and the CRC32 algorithm is used to calculate the check value of the whole data packet.
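The field layout above maps directly onto a fixed header plus a variable payload plus a trailing CRC32. A sketch of encapsulation and decapsulation in Python; big-endian byte order and a nanosecond timestamp are assumptions, since the text does not specify them:

```python
import struct
import time
import zlib

MAGIC = 0x4E494F50                       # ASCII "NIOP"
VERSION = 0x0100                         # version 1.0
# Fixed-size header: magic, version, protocol type, session ID,
# sequence number, timestamp, flags, data length.
HEADER = struct.Struct(">IHHQIQII")

def encapsulate(proto_type, session_id, seq, payload, flags=0, ts=None):
    """Wrap native I/O protocol bytes in the encapsulation format."""
    ts = ts if ts is not None else time.time_ns()
    header = HEADER.pack(MAGIC, VERSION, proto_type, session_id,
                         seq, ts, flags, len(payload))
    body = header + payload
    return body + struct.pack(">I", zlib.crc32(body))

def decapsulate(packet):
    """Validate the check code and magic word, then return (fields, payload)."""
    body, crc = packet[:-4], struct.unpack(">I", packet[-4:])[0]
    if zlib.crc32(body) != crc:
        raise ValueError("corrupt packet: CRC mismatch")
    magic, ver, ptype, sid, seq, ts, flags, length = HEADER.unpack(
        body[:HEADER.size])
    if magic != MAGIC:
        raise ValueError("not an encapsulated packet")
    return ({"type": ptype, "session": sid, "seq": seq,
             "ts": ts, "flags": flags},
            body[HEADER.size:HEADER.size + length])
```

Note that `decapsulate` verifies the check code before trusting any header field, matching design factor 1 (integrity) below.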
The design of this package format takes into account the following factors:
1. Integrity, namely ensuring the integrity and the correctness of the data packet through the magic word and the check code;
2. Protocol identification, namely identifying I/O protocols of different types through a protocol type field;
3. Session management, namely associating the connection between the virtual machine and the I/O equipment through a session ID;
4. timing control, namely maintaining the timing characteristics of the original I/O protocol through the time stamp and the serial number;
5. And the expandability is that enough fields are reserved to support future function expansion.
Multipath network transmission:
to improve transmission reliability and performance, the system implements a multipath network transmission mechanism:
1. the system automatically discovers a plurality of network paths between the client and the server, including different network interfaces, different routing paths and the like;
2. Performing performance evaluation on each path, measuring key indexes such as delay, bandwidth, packet loss rate and the like, wherein the evaluation result is used for path selection and load distribution;
3. transmission strategies-based on data characteristics and path performance, the system supports multiple transmission strategies:
a. a single path mode, namely transmitting by using only a path with optimal performance;
b. the copy mode is that important data is transmitted through a plurality of paths at the same time, so that the reliability is improved;
c. A split mode, namely dispersing data to a plurality of paths for transmission, and improving the overall throughput;
d. Backup mode, namely using a primary path and automatically switching to a backup path when the primary path fails;
4. self-adaptive adjustment, namely continuously monitoring the performance of each path by the system, and dynamically adjusting the transmission strategy and load distribution:
a. when network congestion is detected, the use proportion of the path is automatically reduced;
b. when the path performance fluctuates, the weight of each path is adjusted;
c. when the path is completely interrupted, immediately switching the flow to an available path;
5. Data reassembly, namely the receiving end collects data fragments from different paths, reorganizes them into a complete data stream according to sequence numbers, and handles possible duplicate and out-of-order packets;
This multipath transmission mechanism has the following advantages:
1. The reliability of the system is improved, and the system can still work normally even if part of network paths are failed;
2. The overall throughput is improved, and bandwidth resources of a plurality of network paths are fully utilized;
3. end-to-end delay is reduced, and the path with the lowest delay can be selected for time sensitive I/O operation;
4. the adaptability of the system is enhanced, and the system can cope with the change and fluctuation of the network environment.
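Path evaluation and the single-path/replicate strategies can be sketched with a composite score over delay, bandwidth and packet loss rate; the weighting below is illustrative, not prescribed by the embodiment:

```python
def score_path(latency_ms, bandwidth_mbps, loss_rate):
    """Composite path score: lower latency and loss, higher bandwidth win."""
    return bandwidth_mbps / ((1 + latency_ms) * (1 + 100 * loss_rate))

def choose_path(paths, mode="single"):
    """paths: {name: (latency_ms, bandwidth_mbps, loss_rate)}.
    'single' picks only the best-scoring path; 'replicate' returns every
    path so important data can be sent on all of them simultaneously."""
    ranked = sorted(paths, key=lambda p: score_path(*paths[p]), reverse=True)
    return ranked[:1] if mode == "single" else ranked
```

Re-running the scoring as measurements update gives the adaptive adjustment of item 4: a congested path's score drops and traffic shifts away from it.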
Data transmission optimization technology:
in addition to the above-described encapsulation formats and multipath transmission, the system implements the following data transmission optimization techniques:
1. and data compression, namely compressing the original I/O data and reducing the network transmission quantity. The system supports a plurality of compression algorithms, and automatically selects an optimal algorithm according to the data characteristics:
For text-like data, an LZ77 or DEFLATE algorithm is used;
for binary data, huffman coding or LZO algorithm is used;
for real-time data streams, lightweight compression algorithms such as Snappy are used;
2. Flow control, namely realizing a flow control mechanism based on a sliding window, and avoiding overflow of a buffer area of a receiving party caused by too fast data of a transmitting party:
the receiving side informs the sending side of the current receivable data quantity through a feedback message;
The sender dynamically adjusts the sending rate according to the window size of the receiver;
automatically reducing the window size when the network is congested, and gradually increasing the window size when the network is good;
3. batch processing and merging, namely merging a plurality of small I/O operations into a batch processing request, and reducing the network interaction times:
Combining continuous small data block read-write operation into a large block operation;
For frequent state query operation, a batch query mechanism is realized;
For command sequences, merge into a single compound command;
4. Prefetching and caching, namely, pre-acquiring data possibly needed according to an I/O access mode:
Sequential access prefetching, namely acquiring a subsequent data block in advance when a sequential reading mode is detected;
pattern recognition, namely learning an access pattern of an application program, and predicting future I/O operation;
Hot spot data caching, namely, maintaining local caching for frequently accessed data;
5. Priority and QoS, realizing a differentiated service quality guarantee mechanism:
Assigning different priorities to different types of I/O operations;
low latency and high reliability of high priority operation is guaranteed;
when network resources are limited, the performance requirements of key services are preferentially ensured;
Applied together, these optimization techniques significantly improve the performance and efficiency of the native I/O protocol pipeline, making networked I/O device access comparable in performance to local device access.
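The sliding-window flow control of item 2, where the receiver advertises capacity, the sender bounds in-flight data, and the window shrinks under congestion, can be modeled minimally as follows (the feedback shape is an assumption):

```python
class SlidingWindow:
    """Sender-side sliding window: never keep more unacknowledged data in
    flight than the receiver has advertised it can accept."""

    def __init__(self, window):
        self.window = window        # bytes advertised by the receiver
        self.in_flight = 0

    def can_send(self, size):
        return self.in_flight + size <= self.window

    def send(self, size):
        if not self.can_send(size):
            raise RuntimeError("window full: back off")
        self.in_flight += size

    def ack(self, size, new_window=None):
        self.in_flight -= size
        if new_window is not None:   # receiver feedback may resize the window
            self.window = new_window
```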
Example 7:
the system safety guarantee mechanism and the multi-tenant supporting technology are as follows:
security mechanism
The system realizes a comprehensive security guarantee mechanism and ensures the security of the access of the I/O equipment and the confidentiality of data:
1. Identity authentication and authorization:
Multi-factor authentication, namely supporting a plurality of authentication modes based on certificates, passwords, tokens and the like;
Fine-grained authorization, namely assigning different access rights to different users based on role-based access control (RBAC);
Centralized authentication, namely supporting integration with the existing authentication systems (such as LDAP and Active Directory) of enterprises;
session management, namely realizing security measures such as session timeout, concurrency limit and the like;
2. And the data transmission is safe:
Transport encryption, namely encrypting data in transit using TLS 1.3 or higher;
Integrity protection, namely ensuring that data is not tampered through HMAC and other algorithms;
The replay attack is resisted, namely, the time stamp and the serial number are used for preventing the replay of the data packet;
Secure key management, automatic key negotiation and periodic key rotation;
3. Data security:
Encrypting the sensitive data stored in a lasting way;
Memory protection, namely a protection mechanism for preventing memory data from being leaked;
secure erase-a secure erase mechanism after sensitive data use;
data leakage prevention, namely realizing a Data Leakage Prevention (DLP) strategy and preventing sensitive data from being leaked;
4. Safety audit and compliance:
Audit logging, namely recording all security-related events, including authentication, authorization decisions, system configuration changes, etc.;
log integrity, namely ensuring the integrity and non-tamper resistance of an audit log;
compliance reports, namely generating compliance reports meeting industry standards (such as PCI DSS and HIPAA);
Abnormality detection, namely identifying potential security threats and abnormal access modes;
5. hardware security support:
the trusted execution environment supports trusted execution environments such as Intel SGX, ARM TrustZone and the like;
The hardware security module supports hardware security modules such as TPM and the like, and enhances key protection;
Secure start-up, ensuring the integrity and authenticity of system components;
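The integrity protection and replay resistance of security item 2 combine an HMAC tag with timestamp and sequence-number checks. A simplified sketch; a real deployment would negotiate keys over TLS and bound the `seen` set, which this illustration omits:

```python
import hmac
import hashlib
import time

class ReplayGuard:
    """Reject packets whose tag fails verification, that are older than
    max_age_s, or whose sequence number was already accepted."""

    def __init__(self, key, max_age_s=30):
        self.key = key
        self.max_age_s = max_age_s
        self.seen = set()

    def tag(self, seq, ts, payload):
        msg = seq.to_bytes(4, "big") + ts.to_bytes(8, "big") + payload
        return hmac.new(self.key, msg, hashlib.sha256).digest()

    def accept(self, seq, ts, payload, tag, now=None):
        now = now if now is not None else int(time.time())
        # Constant-time comparison avoids leaking tag bytes via timing.
        if not hmac.compare_digest(tag, self.tag(seq, ts, payload)):
            return False                       # tampered
        if now - ts > self.max_age_s or seq in self.seen:
            return False                       # stale or replayed
        self.seen.add(seq)
        return True
```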
Multi-tenant support technology:
the system realizes a perfect multi-tenant supporting mechanism, so that a plurality of tenants (such as different business departments or different clients) can safely share the I/O virtualization infrastructure:
1. Tenant isolation:
Network isolation, namely providing independent network namespaces for different tenants;
resource isolation, namely ensuring that the activity of one tenant does not affect the performance of other tenants;
data isolation, namely preventing cross-tenant data access and information leakage;
control plane isolation, namely providing independent management view and control interface for each tenant;
2. resource allocation and scheduling:
Allocating explicit resource quota for each tenant, including equipment number, bandwidth limitation and the like;
a priority policy, which is to support resource priority setting among tenants and ensure that key business is preferentially served;
Dynamic resource adjustment, namely dynamically adjusting resource allocation according to actual demands of tenants and system loads;
Resource reservation, namely supporting reservation of necessary resources for key tenants and ensuring performance guarantee;
3. device sharing model:
Exclusive mode, namely some devices are used exclusively by a specific tenant, ensuring the highest performance and security;
Time-sliced sharing, namely the same device is used by multiple tenants in time-slice rotation, suitable for low-frequency access scenarios;
Concurrent sharing, namely multiple tenants access the same device simultaneously, with the system responsible for scheduling and isolating requests;
Partitioned sharing, namely device resources are logically partitioned and different tenants use different partitions, such as different LUNs of a storage device;
4. tenant management:
Tenant life cycle, namely supporting the whole life cycle management such as creation, configuration, suspension, deletion and the like of the tenant;
providing a predefined tenant template, and simplifying the creation process of a new tenant;
tenant migration, namely supporting the migration of tenants and resources thereof from one physical environment to another;
Tenant monitoring, namely providing tenant-level performance monitoring and resource use reports;
5. multi-tenant performance guarantee:
performance isolation, namely ensuring that the high load of one tenant does not influence the performance of other tenants;
QoS guarantee, namely providing differentiated service quality guarantee for different tenants;
predicting resource demand based on historical data, and adjusting resources in advance;
elastic expansion, namely supporting the automatic expansion and contraction of tenant resources;
example 8:
cluster management and high availability guarantee technology in a multi-server deployment scenario;
Cluster architecture:
the system supports multi-server cluster deployment, and the expandability and reliability of the system are improved:
1. cluster topology:
Hub-and-spoke, namely one master node is responsible for coordination, and multiple slave nodes are responsible for specific I/O device connections and processing;
Peer-to-peer network, wherein all nodes are equal in status, maintain global state together, and any node can accept client requests;
Hybrid, combining the advantages of the two modes, wherein part of functions are managed in a centralized way and part of functions are processed in a distributed way;
2. Node roles:
the control node is responsible for cluster management, resource scheduling and global strategy formulation;
the data node is responsible for specific I/O equipment connection and data processing;
A storage node is responsible for the persistent storage of session information, configuration data and the like;
The monitoring node is responsible for cluster monitoring, performance analysis and alarm management;
3. Cluster scale:
Supporting linear expansion from a few nodes to hundreds of nodes;
Dynamic node join and leave without service downtime;
Heterogeneous node support, allowing servers with different hardware configurations to be integrated;
Cluster management:
The system realizes a perfect cluster management mechanism and ensures the cooperative work of multiple nodes:
1. node discovery and registration:
Automatically discovering and joining the new node into the existing cluster after starting;
Manual registration, namely supporting an administrator to manually add new nodes;
Health examination, namely performing comprehensive health examination on the new node to ensure that the new node meets the joining condition;
A resource list, which is to report the available resources when a new node is added, and is used for global resource scheduling;
2. State synchronization and consistency:
state replication, namely, the key state information is replicated among a plurality of nodes, so that consistency is ensured;
A consistency protocol, namely using the Raft or Paxos consensus algorithm to process distributed state updates;
Incremental synchronization, namely supporting incremental state synchronization and reducing network transmission quantity;
conflict resolution, when a status conflict occurs, resolving through predefined rules or arbitration mechanisms;
3. task distribution and load balancing:
Load monitoring, namely monitoring the load of all nodes in real time, including CPU, memory and network metrics;
Task distribution, namely distributing a new I/O equipment connection request to a proper node according to the load condition of the node;
Dynamic adjustment, namely dynamically adjusting task allocation in the running process and optimizing the load of the whole system;
Locality awareness, namely considering the position relation between the virtual machine and the I/O device, and preferentially selecting the node with the shortest network path;
4. Cluster configuration management:
centralized configuration, namely centralized management and distribution of global configuration information;
version control, namely configuring version control and rollback capability of the change;
dynamic reconfiguration, supporting updating cluster configuration without interrupting service;
supporting node level differential configuration, and adapting to the characteristics of different nodes;
High availability mechanism:
the system realizes a multi-level high availability guarantee mechanism and ensures the continuity of the service:
1. node level fault handling:
heartbeat detection, namely detecting the health state of the node through periodic heartbeat;
the fault detection, namely accurately identifying node faults by adopting a multiple detection mechanism;
automatic recovery, namely, for recoverable faults, the system automatically executes recovery operation;
The isolation mechanism is used for isolating the fault node and preventing other nodes from being influenced;
2. service level failover:
A step of hot backup, in which a key service maintains a hot backup state on a plurality of nodes;
automatic switching, namely automatically switching to a standby node when the main node fails;
stateless design, namely a service design adopts a stateless mode, so that the rapid switching is facilitated;
session maintenance, namely maintaining user session continuity in the fault switching process;
3. Data redundancy and protection:
Multi-copy storage, namely maintaining multiple copies of important data on multiple nodes;
Data checking, namely periodically executing data consistency checking, and finding and repairing damage;
incremental backup, namely periodically executing the incremental backup to ensure the data restorability;
Disaster recovery, supporting disaster recovery capabilities across data centers;
4. smooth expansion and contraction volume:
online capacity expansion, namely supporting the addition of new nodes under the condition of not interrupting service;
Online capacity reduction, namely supporting safe node removal, including task migration and data transfer;
automatic rebalancing, namely automatically rebalancing the system load after the node is changed;
the progressive migration is that large-scale data migration adopts a progressive mode, so that system impact is reduced;
5. fault self-healing capability:
fault diagnosis, namely the system can automatically diagnose common fault types;
automatic repair, namely for faults that can be repaired automatically, the system executes the repair operation;
self-adjustment, namely automatically adjusting system parameters according to running conditions to improve stability;
preventive maintenance, namely performing preventive maintenance before a fault occurs, based on predictive analysis;
Through the above cluster management and high availability technologies, the system ensures service reliability and continuity while providing high-performance I/O virtualization, meeting the requirements of key enterprise business;
Through the multi-tenant support technology, the system can safely and efficiently serve the I/O device access requirements of multiple business departments or customers on a single physical infrastructure, significantly improving resource utilization and management efficiency.
Example 9:
Performance evaluation index:
the system defines a series of key performance indicators for evaluating the performance and efficiency of the system:
1. time delay index:
end-to-end delay, the total time from the initiation of an I/O request by a client to the receipt of a response;
Protocol processing delay, namely time required by conversion and processing of a native I/O protocol;
Network transmission delay, which is the time required for data transmission in the network;
device operation delay, which is the time required for a physical I/O device to perform an operation;
2. Throughput index:
Single device throughput, data transfer rate of a single I/O device;
system total throughput, namely the data processing capacity of the whole system;
concurrent connections, namely the number of virtual machine connections the system can handle simultaneously;
concurrent operations, namely the number of I/O operations the system can process simultaneously;
3. resource utilization rate:
CPU utilization rate, which is the CPU resource use condition of each component of the system;
memory utilization rate, which is the use condition of memory resources of each component of the system;
Network bandwidth utilization, namely network resource use condition of the system;
I/O device utilization, namely the utilization efficiency of physical I/O devices;
4. Reliability index:
Mean Time Between Failures (MTBF), namely the average time the system runs continuously without failure;
Mean Time To Repair (MTTR), namely the average time for the system to recover from a failed state to the normal state;
failure rate, namely the frequency of system failures within a specific time period;
Data integrity, namely error rate and loss rate in the data transmission process;
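The first two indicators combine into the usual steady-state availability figure, A = MTBF / (MTBF + MTTR); a one-line helper makes the arithmetic concrete:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from the reliability indicators above:
    A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. a system that fails once every 10,000 h and recovers in 1 h
a = availability(10_000, 1)
print(f"{a:.6f}")  # 0.999900 -> roughly "four nines"
```

This is why reducing MTTR (faster failover and recovery) improves availability just as effectively as extending MTBF.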
5. Scalability index:
linear scaling coefficient, namely how linearly performance improves as system resources are added;
maximum device count, namely the maximum number of physical I/O devices the system can support;
maximum virtual machine count, namely the maximum number of virtual machine connections the system can support;
resource scaling benefit, namely the degree to which adding a specific resource improves system performance;
Performance optimization technology:
the system realizes a multi-level performance optimization technology, and improves the overall performance of the system:
1. Protocol layer optimization:
protocol streamlining, namely removing unnecessary protocol fields and processing steps;
batch processing, namely merging multiple small operations into a single batched request;
asynchronous processing, namely supporting asynchronous I/O operations to improve concurrent processing capacity;
precomputation, namely preprocessing and optimizing common operation sequences;
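The batch processing mechanism can be sketched as a queue that coalesces small operations until a count threshold is reached (the threshold and the stand-in transport are assumptions for the sketch, not the patented mechanism):

```python
class BatchQueue:
    """Illustrative batching: small I/O operations are coalesced and
    submitted as one request once a count threshold is hit."""

    def __init__(self, max_ops=8, submit=None):
        self.max_ops = max_ops
        self.pending = []
        self.submit = submit or (lambda batch: batch)  # stand-in transport
        self.submitted = []                            # record of sent batches

    def add(self, op):
        self.pending.append(op)
        if len(self.pending) >= self.max_ops:
            self.flush()                               # threshold reached

    def flush(self):
        """Send whatever is pending as one batched request."""
        if self.pending:
            self.submitted.append(self.submit(list(self.pending)))
            self.pending.clear()

q = BatchQueue(max_ops=3)
for i in range(7):
    q.add(("write", i))
q.flush()  # flush the partial tail batch
print([len(batch) for batch in q.submitted])  # [3, 3, 1]
```

A real implementation would typically also flush on a timer so a lone request is never stranded waiting for the count threshold.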
2. Data transmission optimization:
zero-copy technology, namely reducing the number of times data is copied in memory;
vectored I/O, namely supporting scatter-gather data transfer to reduce the number of system calls;
data alignment, namely ensuring data is aligned in memory to improve access efficiency;
buffer management, namely optimizing buffer sizes and allocation strategies;
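Vectored I/O as described above maps directly onto the POSIX writev(2) call, exposed in Python as os.writev; the sketch below sends three separate buffers to the kernel in one system call, avoiding a user-space concatenation copy (POSIX-only; the data is purely illustrative):

```python
import os
import tempfile

# Scatter-gather write: three in-memory buffers, one writev(2) syscall.
header, payload, trailer = b"HDR|", b"device-data", b"|END"

fd, path = tempfile.mkstemp()
try:
    written = os.writev(fd, [header, payload, trailer])  # one syscall, three buffers
    os.close(fd)
    with open(path, "rb") as f:
        data = f.read()
finally:
    os.remove(path)

print(data, written)  # b'HDR|device-data|END' 19
```

The counterpart os.readv fills several caller-supplied buffers from one read, which is how a protocol header and payload can land in separate structures without an intermediate copy.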
3. memory management optimization:
memory pool technology, namely using pre-allocated memory pools to reduce dynamic allocation overhead;
NUMA awareness, namely taking NUMA architecture characteristics into account on multiprocessor systems;
cache optimization, namely making proper use of CPU caches to improve data access speed;
memory pre-allocation, namely pre-allocating memory resources for key operations;
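A memory pool in the sense above can be reduced to recycling pre-allocated buffers, with a counted slow path for when the pool runs dry (sizes and API are illustrative assumptions):

```python
from collections import deque

class BufferPool:
    """Pre-allocated buffer pool: fixed-size bytearrays are recycled
    instead of being reallocated per I/O operation."""

    def __init__(self, buf_size=4096, count=4):
        self.buf_size = buf_size
        self.free = deque(bytearray(buf_size) for _ in range(count))
        self.allocated_extra = 0   # slow-path allocations when pool is empty

    def acquire(self) -> bytearray:
        if self.free:
            return self.free.popleft()       # fast path: reuse a pooled buffer
        self.allocated_extra += 1            # slow path: allocate on demand
        return bytearray(self.buf_size)

    def release(self, buf: bytearray):
        self.free.append(buf)                # return the buffer for reuse

pool = BufferPool(buf_size=1024, count=2)
a, b, c = pool.acquire(), pool.acquire(), pool.acquire()
pool.release(a)
print(pool.allocated_extra, len(pool.free))  # 1 1
```

The slow-path counter doubles as a sizing signal: a persistently non-zero value suggests the pool's initial count should be raised.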
4. concurrent processing optimization:
multithreading architecture, namely adopting an efficient multithreaded processing model;
task scheduling, namely optimizing task allocation and scheduling strategies;
lock optimization, namely reducing lock contention and using lock-free data structures;
event driven, namely adopting an event-driven model to process I/O events;
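The event-driven model can be demonstrated with the standard-library selectors module: readiness events dispatch registered callbacks instead of dedicating a blocking thread to each connection (a sketch of the model, not the claimed engine):

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()          # stand-in for a client connection
received = []

def on_readable(sock):
    # callback invoked when the registered socket becomes readable
    received.append(sock.recv(1024))

sel.register(b, selectors.EVENT_READ, on_readable)
a.sendall(b"io-event")

# one iteration of the event loop: wait for readiness, then dispatch
for key, _mask in sel.select(timeout=1):
    key.data(key.fileobj)           # key.data holds the callback

a.close(); b.close(); sel.close()
print(received)  # [b'io-event']
```

One such loop can multiplex thousands of connections, which is why event-driven designs scale better than thread-per-connection for I/O-bound workloads.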
5. algorithm optimization:
fast path, namely providing an optimized processing path for common operations;
indexing, namely using efficient index structures to accelerate lookups;
heuristic prediction, namely predicting future operations based on historical data;
complexity control, namely bounding algorithmic complexity on critical paths;
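A fast path for a common operation can be as simple as memoizing a hot lookup so that repeated calls skip the full resolution logic; the handler table and names below are hypothetical:

```python
from functools import lru_cache

# Illustrative mapping from device type to protocol handler name.
HANDLERS = {"usb": "usb-handler", "scsi": "scsi-handler"}

calls = {"slow": 0}

@lru_cache(maxsize=128)
def resolve_handler(device_type: str) -> str:
    calls["slow"] += 1            # full resolution: runs once per device type
    return HANDLERS.get(device_type, "generic-handler")

for _ in range(1000):
    resolve_handler("usb")        # 999 of these hit the cached fast path
print(resolve_handler("usb"), calls["slow"])  # usb-handler 1
```

The same pattern applies wherever the common-case input distribution is highly skewed: pay the expensive path once, then serve the hot case from a cache.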
Performance tuning method:
the system provides a complete set of performance tuning methods for optimizing system performance in different deployment environments:
1. performance analysis tool:
built-in performance monitoring, namely the system has a comprehensive built-in performance monitoring function;
performance dashboard, namely visually displaying the performance indicators of each system component;
bottleneck identification, namely automatically identifying performance bottlenecks in the system;
historical trend analysis, namely supporting trend analysis of historical performance data;
2. adaptive optimization:
parameter self-tuning, namely the system automatically adjusts key parameters according to running conditions;
load awareness, namely dynamically adjusting resource allocation according to the system load;
behavior learning, namely learning the application's access patterns and optimizing the processing flow;
feedback mechanism, namely continuously optimizing the system configuration according to performance feedback;
3. targeted optimization:
device specialization, namely providing dedicated optimizations for particular types of I/O device;
application scenario optimization, namely customizing optimization strategies for different application scenarios;
hardware adaptation, namely optimizing system performance for different hardware platforms;
network environment adaptation, namely adjusting the transmission strategy according to the characteristics of the network environment;
4. benchmark test and comparison:
standard test suite, namely defining standard test scenarios and workloads;
performance baseline, namely establishing a system performance baseline for comparing different versions and configurations;
comparative analysis, namely comparing performance against a direct physical connection;
regression testing, namely ensuring that optimizations do not introduce new performance problems;
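A tiny harness in the spirit of the baseline-and-regression steps might time an operation repeatedly and report mean and p99 latency; the workload below is a stand-in, not the patent's test suite:

```python
import statistics
import time

def benchmark(op, n=1000):
    """Run `op` n times and report mean and p99 latency in microseconds,
    suitable for comparing runs against a stored baseline."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        op()
        samples.append((time.perf_counter() - t0) * 1e6)
    samples.sort()
    return {
        "mean_us": statistics.fmean(samples),
        "p99_us": samples[int(0.99 * (n - 1))],   # 99th-percentile sample
    }

result = benchmark(lambda: bytes(4096), n=1000)   # dummy 4 KiB allocation as the op
print(sorted(result))  # ['mean_us', 'p99_us']
```

Comparing p99 rather than only the mean is what catches tail-latency regressions, which matter most to the interactive I/O paths this system targets.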
through these performance assessment and optimization techniques, the system is able to provide a performance experience that approaches a direct physical connection while maintaining native I/O protocol characteristics, meeting the needs for performance-sensitive applications.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined in the appended claims.

Claims (10)

1. A network I/O device access method based on a native protocol in a virtualized environment, characterized by comprising the following steps:
S1, deploying an I/O virtualization service on a physical server, wherein the I/O virtualization service is used for connecting physical I/O devices and providing device registration, session management and protocol adaptation services;
S2, installing a virtual machine I/O client in the virtual machine, providing a standard I/O device interface for upper-layer application, and establishing network connection with an I/O virtualization service;
S3, a native I/O protocol pipeline is established between the I/O virtualization service and the virtual machine I/O client, and the pipeline maintains the complete characteristics of the original I/O protocol;
S4, transmitting the I/O equipment data through the native I/O protocol pipeline to realize transparent access of the virtual machine to the physical I/O equipment;
and S5, in the migration process of the virtual machine, the I/O session state is maintained, network connection is reestablished, and service continuity is ensured.
2. The method for accessing network I/O devices based on native protocol in virtualized environment according to claim 1, wherein the method for establishing a native I/O protocol pipeline comprises identifying protocol characteristics of a physical I/O device, encapsulating the native I/O protocol into a network transport protocol, maintaining integrity of the native protocol, and decapsulating the encapsulated data at a receiving end to restore the native I/O protocol data.
3. The method for accessing a network I/O device based on a native protocol in a virtualized environment according to claim 1, wherein the method for maintaining an I/O connection during migration of a virtual machine comprises:
before migration, the current session state information is sent to the I/O virtualization service and persistently stored;
suspending I/O operation and freezing the current session in the migration process;
after the migration is completed, the virtual machine I/O client initiates a reconnection request to the I/O virtualization service;
The server verifies the session information and restores the session state;
The client and the server perform state synchronization to ensure the consistency of the states of the I/O devices;
Normal I/O operation is resumed, and service continuity is ensured.
4. A network I/O device access method based on native protocols in a virtualized environment according to any of claims 1-3, wherein the I/O virtualization service implements the following functions:
Automatic discovery, registration, and status monitoring of physical I/O devices;
adaptation and native property preservation of multiple I/O protocols;
establishing and maintaining a connection session between the virtual machine and the I/O device;
The persistence storage and state synchronization of session information;
security control and authentication of device access.
5. A network I/O device access method based on native protocols in a virtualized environment according to any of claims 1-3, wherein the virtual machine I/O client implements the following functions:
providing a standard I/O device interface to the upper layer application;
Converting the I/O request of the application program into a network transmission format;
Establishing and maintaining a network connection with the I/O virtualization service;
reducing network accesses through a local cache to improve response speed;
Detecting network connection faults and realizing an automatic reconnection mechanism.
6. A native protocol I/O device network access system in a virtualized environment, comprising:
an I/O virtualization service deployed on the physical server for connecting the physical I/O devices and providing a networked I/O service;
A virtual machine I/O client deployed within the virtual machine for providing a standard I/O interface to an application and communicating with the I/O virtualization service;
a native I/O protocol pipeline, a network transmission channel which is established between the server and the client and keeps the original I/O protocol characteristics;
and the session management module is used for maintaining the equipment connection state and supporting connection recovery after the migration of the virtual machine.
7. The native protocol I/O device network access system in a virtualized environment according to claim 6, wherein the I/O virtualization service comprises a device management module, a protocol adaptation engine, a session manager, a network transmission module, a security control module, a device resource scheduling module, and a cluster management module.
8. The native protocol I/O device network access system in a virtualized environment according to claim 7, wherein the virtual machine I/O client comprises a device interface layer, a protocol conversion layer, a session connection manager, a local cache module, a failure detection and recovery module, a performance optimization module, and a monitoring and diagnosis module.
9. The native protocol I/O device network access system in a virtualized environment according to claim 8, further comprising a device emulation module for keeping applications operating correctly by emulating device behavior when the physical I/O device is temporarily unavailable.
10. The native protocol I/O device network access system in a virtualized environment according to claim 9, wherein the system supports multi-path transmission at the network transport layer, improving system reliability by establishing redundant network connections.

Publications (1)

Publication Number Publication Date
CN121166596A true CN121166596A (en) 2025-12-19

Family

ID=98033806



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination