
CN120216211B - System and method for dynamically combining memory resources and allocating multi-level switches - Google Patents

System and method for dynamically combining memory resources and allocating multi-level switches

Info

Publication number
CN120216211B
CN120216211B
Authority
CN
China
Prior art keywords
engine
resource
memory
allocation
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510703359.3A
Other languages
Chinese (zh)
Other versions
CN120216211A
Inventor
叶丰华
孙秀强
林楷智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd filed Critical Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202510703359.3A priority Critical patent/CN120216211B/en
Publication of CN120216211A publication Critical patent/CN120216211A/en
Application granted granted Critical
Publication of CN120216211B publication Critical patent/CN120216211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/40Bus structure
    • G06F13/4063Device-to-bus coupling
    • G06F13/4068Electrical coupling
    • G06F13/4081Live connection to bus, e.g. hot-plugging
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Multi Processors (AREA)

Abstract


The present invention discloses a system and method for dynamically combining memory resources for multi-level switches, relating to the field of computer technology. The system comprises: dynamically acquiring and analyzing type data of a target configuration space when a server starts; an identification engine identifying a first hardware resource connected to the current peripheral interconnect link according to the type data and, when the first hardware resource includes a multi-level switch, identifying a second hardware resource connected to the multi-level switch; determining which devices require resource reservation according to the identification result of the second hardware resource, without checking every possible connection one by one; and allocating a first memory resource to the multi-level switch and the second hardware resource based on the second identification result. This avoids spending excessive time on resource allocation at server startup when the peripheral interconnect link contains many switches, thereby reducing startup time, improving the startup efficiency of the server, and further ensuring the fixedness and stability of memory resource allocation.

Description

System and method for dynamic combined memory resource allocation of multi-level switch
Technical Field
The invention relates to the technical field of computers, in particular to a system and a method for dynamically allocating combined memory resources of a multi-level switch.
Background
An intelligent network card contains many internal devices, such as network port devices, virtual network cards, storage devices and other physical or virtual devices. All of these devices need memory resources, and some of those resources carry priority; for example, 32-bit memory resources must be satisfied with high priority. Some DPU (Data Processing Unit) devices also contain Switch chips, with further PCI devices linked below the Switch chips, and memory resources must be allocated for those as well. Meanwhile, in a server, besides the DPU device containing a Switch chip, the motherboard or a PCI device card of the server may also contain similar Switch chips, and those Switch chips and the PCI devices beneath them likewise need resources. However, 32-bit memory resources are limited to at most 4GB; if the total demand exceeds 4GB, some devices cannot be used, for example an interface with a display function (a DPU display interface or an on-board VGA interface) cannot output a display.
The related art performs resource reservation when the server starts. However, the Switch chips may include multiple levels; some levels need memory resource reservation while others do not, and whether to reserve depends on the purpose of the port and the equipment attached to it. Since the server may involve many Switch chips, and the level of each Switch chip is not fixed, it cannot be predicted which Switch chips need reserved resources, so the startup time of the server is significantly prolonged.
Disclosure of Invention
The invention provides a memory resource allocation method, electronic equipment, a storage medium and a program product, which at least solve the problem of long starting time of a server caused by a memory resource allocation mode in the related technology.
The invention provides a system for dynamically combining memory resource allocation of a multi-level switch, which comprises at least one processing circuit of a server connected with at least one peripheral interconnect link. The at least one processing circuit is used for executing a reading engine, which reads type data of a target configuration space when the server starts and transmits the type data to an identification engine; the identification engine identifies a first hardware resource connected to the current peripheral interconnect link according to the type data and, if the first hardware resource includes a multi-level switch, identifies a second hardware resource connected to the multi-level switch; the identification result of the identification engine is transmitted to an allocation engine; and when the allocation engine determines from the first identification result of the first hardware resource that the current peripheral interconnect link is connected with a multi-level switch, it allocates a first memory resource to the multi-level switch and the second hardware resource according to the second identification result of the second hardware resource.
The invention also provides a server, which comprises the system for dynamically allocating the combined memory resources of the multi-level switch.
The invention also provides a method for dynamically combining memory resource allocation of a multi-level switch using at least one processing circuit connected with at least one peripheral interconnect link. The at least one processing circuit is used for executing a reading engine, which reads type data of a target configuration space when a server starts and transmits the type data to an identification engine; the identification engine identifies a first hardware resource connected to the current peripheral interconnect link according to the type data and, if the first hardware resource includes a multi-level switch, identifies a second hardware resource connected to the multi-level switch; the identification result of the identification engine is transmitted to an allocation engine; and when the allocation engine determines from the first identification result of the first hardware resource that the current peripheral interconnect link is connected with a multi-level switch, it allocates the memory resource to the multi-level switch and the second hardware resource according to the second identification result of the second hardware resource.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above methods for dynamically combining memory resource allocation of a multi-level switch.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method for dynamically combining memory resource allocation of any of the above-mentioned multi-level switches.
According to the invention, the type data of the target configuration space is dynamically acquired and analyzed when the server starts; the identification engine identifies the first hardware resource connected to the current peripheral interconnect link according to the type data and, when the first hardware resource includes a multi-level switch, identifies the second hardware resource connected to the multi-level switch; which devices need resource reservation is determined according to the second identification result of the second hardware resource, without checking every possible connection one by one; and the first memory resource is allocated to the multi-level switch and the second hardware resource based on the second identification result. This avoids spending excessive time on resource allocation at server startup when the peripheral interconnect link contains too many switches, thereby reducing startup time, improving the startup efficiency of the server, and further ensuring the fixedness and stability of memory resource allocation. Therefore, the technical problem of long server startup time caused by the memory resource allocation approach in the related art can be solved.
Drawings
For a clearer description of embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described, it being apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
FIG. 1 is a schematic diagram of a system for dynamic combined memory resource allocation of a multistage switch according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a link of a Switch chip according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an example of a link of a multi-level Switch chip according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of physical links of a server motherboard according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a DPU device provided in one embodiment of the present invention;
FIG. 6 is a flow chart of a method for dynamically allocating combined memory resources of a multistage switch according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating connection relationships between components of a multi-level switch combined memory resource allocation system during a startup phase of a server according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating memory resource allocation of a server in a startup phase according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating memory resource allocation of a server in a startup phase according to another embodiment of the present invention;
FIG. 10 is a diagram illustrating the connection between components of the multi-level switch dynamic combined memory resource allocation system when the server enters the operating system phase according to an embodiment of the present invention;
FIG. 11 is a diagram illustrating an example of memory resource allocation of a server into an operating system according to one embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present invention.
It should be noted that in the description of the present invention, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "first," "second," and the like in this specification are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The present invention will be further described in detail below with reference to the drawings and detailed description for the purpose of enabling those skilled in the art to better understand the aspects of the present invention.
The embodiment of the invention provides a system for dynamic combined memory resource allocation of a multi-level switch.
As shown in fig. 1, the system 10 for dynamic combined memory resource allocation of the multi-level switch includes:
at least one processing circuit 11 of a server, the at least one processing circuit 11 being connected with at least one peripheral interconnect link. The at least one processing circuit 11 is used for executing a reading engine, which reads type data of a target configuration space when the server starts and transmits the type data to an identification engine; the identification engine identifies a first hardware resource connected to the current peripheral interconnect link according to the type data and, if the first hardware resource includes a multi-level switch, identifies a second hardware resource connected to the multi-level switch; the identification result of the identification engine is transmitted to an allocation engine; and when the allocation engine determines from the first identification result of the first hardware resource that the current peripheral interconnect link is connected with a multi-level switch, it allocates a first memory resource to the multi-level switch and the second hardware resource according to the second identification result of the second hardware resource.
The target configuration space may be a PCI configuration space. When the server starts, the BIOS (Basic Input/Output System) of the embodiment of the present invention reads the type and subtype of the PCI configuration space in the PCI (Peripheral Component Interconnect) enumeration stage of the DXE stage. The peripheral interconnect link in the embodiment of the invention may be a PCI link. The first hardware resource may include a peripheral interconnect device (PCI device), a Switch, and the like, where the Switch may be a Switch chip. A single-level Switch has other devices connected below it but no further Switch, whereas a multi-level Switch has at least one other Switch connected below it, forming a hierarchical structure. The second hardware resource may include PCI devices, peripheral interconnect ports (PCI ports), and the like. The link connection of a single-level Switch is shown in fig. 2, and the link connection of a multi-level Switch is shown in fig. 3.
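For illustration, the following minimal C sketch (not taken from the patent) shows one way the type data of the PCI configuration space could be read and classified during enumeration. It relies on the standard PCI class-code layout; the pci_cfg_read8 helper is a hypothetical firmware service.

#include <stdint.h>

/* Hypothetical helper assumed to be supplied by the firmware environment:
 * reads one byte from the configuration space of bus/device/function. */
uint8_t pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

#define PCI_SUBCLASS_OFF  0x0A   /* standard PCI configuration space offsets */
#define PCI_BASECLASS_OFF 0x0B

typedef enum { DEV_ENDPOINT, DEV_BRIDGE_PORT, DEV_OTHER_BRIDGE } dev_kind_t;

/* Classify a PCI function from its class code: base class 0x06 with
 * sub-class 0x04 is a PCI-to-PCI bridge, which is how the up- and
 * downstream ports of a Switch chip appear during enumeration. */
dev_kind_t classify_function(uint8_t bus, uint8_t dev, uint8_t fn)
{
    uint8_t base = pci_cfg_read8(bus, dev, fn, PCI_BASECLASS_OFF);
    uint8_t sub  = pci_cfg_read8(bus, dev, fn, PCI_SUBCLASS_OFF);

    if (base == 0x06 && sub == 0x04)
        return DEV_BRIDGE_PORT;       /* root port or Switch up/down port */
    if (base != 0x06)
        return DEV_ENDPOINT;          /* ordinary PCI device */
    return DEV_OTHER_BRIDGE;          /* host bridge or other bridge type */
}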
In the embodiment of the invention, when the current server has too many devices, part of the device resources cannot be allocated effectively: the 4G of 32-bit resources cannot cover a total device demand that exceeds 4G, so some devices become unusable. If the PCI link devices of the current server are all PCI end bridge ports directly connected to the CPU (Central Processing Unit), the 4G resource shortage is not an issue; but if the PCI links reach multiple PCI switches through multi-level Switch chips or multiple PCI root bridges, insufficient Switch resource reservation will occur, causing the devices under the system to fail to work normally. As shown in fig. 4, memory resources must be reserved on all physical links of the server motherboard, so excessive time is consumed for resource allocation when the server starts, which prolongs the startup time and lowers the startup efficiency of the server.
Specifically, in the embodiment of the invention, the reading engine can read the type data of the PCI configuration space when the server starts, and the identification engine identifies the first hardware resource connected to the current peripheral interconnect link according to the type data and judges whether the current PCI link includes multi-level Switch chips. When the PCI link has too many Switch chips, the BIOS consumes excessive time for resource allocation at server startup. Therefore, in order to optimize the startup time of the server and keep the resource allocation of a specific Switch chip fixed, the embodiment of the invention identifies the second hardware resource connected to the multi-level switch, determines which devices need resource reservation according to the second identification result of the second hardware resource without checking every possible connection one by one, and allocates the first memory resource to the multi-level switch and the second hardware resource, thereby reducing the time consumed at server startup and improving the startup efficiency of the server.
In the embodiment of the invention, the allocation engine is used for allocating the first memory resource to the peripheral interconnection equipment when the current peripheral interconnection link is determined to be connected with the peripheral interconnection equipment according to the first identification result of the first hardware resource.
It can be understood that, according to the embodiment of the invention, when the current peripheral interconnection link is determined to be connected with the peripheral interconnection device according to the first identification result of the first hardware resource, the allocation engine allocates the first memory resource to the peripheral interconnection device, and normally allocates 32-bit resources to the peripheral interconnection device.
In the embodiment of the invention, the allocation engine is used for allocating the first memory resource according to the resource size of the data processor if the peripheral interconnection equipment is the data processor before allocating the first memory resource to the peripheral interconnection equipment.
Because the DPU (Data Processing Unit, data processor) generally has specific memory resource requirements, the embodiment of the invention can read the identifier of the peripheral interconnection equipment, identify whether the peripheral interconnection equipment is the DPU through the identifier so as to adopt different memory allocation strategies for the peripheral interconnection equipment, normally allocate memory resources for the peripheral interconnection equipment when the peripheral interconnection equipment is not the DPU, and allocate memory resources for the peripheral interconnection equipment according to the resource size of the DPU when the peripheral interconnection equipment is the DPU equipment so as to ensure that the DPU equipment can normally operate.
It should be noted that the DPU is a special processor for providing network, storage, security, management and other data center infrastructure virtualization services around data processing. It forms a computing architecture based on a CPU of an ARM/X86 architecture together with a dedicated hardware acceleration engine such as an ASIC (Application Specific Integrated Circuit), NP (Network Processor) or FPGA (Field Programmable Gate Array), which together constitute an entity providing virtualization functions and need enough resources to support complex service operation, so resources must be reserved. The structure of the DPU is shown in fig. 5. DPU products are widely used in data center servers of various architectures (usually in the form of an intelligent network card), including but not limited to the X86 architecture and the ARM architecture. Whatever the architecture, the server needs to reserve resources for the intelligent network card in advance, because the intelligent network card contains many devices, such as network port devices, virtual network cards, storage devices (SSDs) and other physical or virtual devices, all of which need memory resources, and some of those resources carry priority, for example 32-bit memory resources must be satisfied with high priority; therefore, resources must be reserved for the DPU in advance to ensure that each device can obtain the memory resources it requires and operate normally.
Whether the peripheral interconnect device is a data processor is judged based on the identifier. Specifically, the DID and the VID of the peripheral interconnect device are identified; if the DID and the VID exist in a preset identification information list of data processors, the peripheral interconnect device is judged to be a data processor, otherwise it is judged not to be a data processor.
Specifically, when the server starts, the BIOS judges whether the current PCI link includes a Switch chip by reading the type and subtype of the PCI configuration space in the PCI enumeration stage of the DXE stage. If not, the BIOS continues to confirm whether the current device is a DPU device by reading the DID and VID of each PCI device. If it is not a DPU device, 32-bit memory resources are allocated normally; if it is, 32-bit resources are reserved for the DPU device according to its known 32-bit resource size. Meanwhile, the PCI link physically connected with the current DPU adds the MMIO 32-bit resources reserved for the DPU when counting the 32-bit resources required by the other PCI devices, and the BIOS enables the PCI configuration space hot plug (hotplug) function of the bridge of the PCI link physically linked with the DPU.
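As a hedged illustration of the DID/VID check described above, the following C sketch compares the identifiers read from the configuration space against a preset list. The pci_id_t type, the is_dpu_device helper and the ID values are placeholders, not identifiers of any real DPU product.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Placeholder list of supported DPU identifiers. */
typedef struct {
    uint16_t vid;   /* Vendor ID, PCI config space offset 0x00 */
    uint16_t did;   /* Device ID, PCI config space offset 0x02 */
} pci_id_t;

static const pci_id_t dpu_id_list[] = {
    { 0x1234, 0x0001 },   /* placeholder entries */
    { 0x1234, 0x0002 },
};

/* Return true if the read VID/DID pair is found in the preset DPU list. */
static bool is_dpu_device(uint16_t vid, uint16_t did)
{
    for (size_t i = 0; i < sizeof(dpu_id_list) / sizeof(dpu_id_list[0]); i++) {
        if (dpu_id_list[i].vid == vid && dpu_id_list[i].did == did)
            return true;
    }
    return false;
}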
For example, the CPU of the server has multiple PCI links. One DPU device is connected to PCI link A, several other ordinary PCI devices (PCI device A and PCI device B) are also connected to PCI link A, and one PCI link bridge is connected to PCI link A. The known 32-bit memory resource of the DPU device, i.e. the MMIO32 reserved for it (200 MB in this example), is added to the 32-bit memory resources required by the other PCI devices, say 30 MB for PCI device A and 15 MB for PCI device B, so the total 32-bit memory resource required for PCI link A is 200 + 30 + 15 = 245 MB.
In the embodiment of the invention, the allocation engine is used for allocating the first memory resource to the peripheral interconnection port and the switch if the peripheral interconnection port is the target port when the current peripheral interconnection link is determined to be connected with the peripheral interconnection port and the switch according to the first identification result of the first hardware resource.
It can be understood that, in the PCI enumeration stage, the BIOS of the present invention confirms whether the current PCI link has a Switch chip and whether it is a specific PCI port (target port) by reading the type and subtype of the PCI configuration space. If so, it reserves 32-bit memory resources and enables the PCI configuration space Hotplug function for the uplink and downlink ports of the Switch chip and for the PCI link bridge where the Switch chip is located. If the port is not a specific PCI port and a PCI Switch chip exists, the BIOS neither enables the Hotplug function nor reserves PCI port resources for the uplink and downlink ports of the Switch chip. Thus, by reserving resources only for specific PCI ports and enabling the hotplug function there, critical devices are guaranteed the support they need while resource allocation to unnecessary ports is reduced, unnecessary resource occupation is avoided, and resource management is simplified.
In the embodiment of the invention, the allocation engine is used for allocating the first memory resources to the uplink port, the downlink port and the peripheral interconnection equipment of the current level switch if the downlink port of the current level switch is connected with the peripheral interconnection equipment, and not allocating the memory resources to the uplink port and the downlink port of the current level switch if the downlink port of the current level switch is idle.
It will be appreciated that when a PCI device is detected to be connected to a downstream port of a Switch chip, the required memory resources are allocated to the devices, and the associated hotplug functions are ensured to be enabled. If a downstream port of a certain hierarchy is not connected to any device (i.e., is idle), memory resources are not allocated for that port and hotplug functions are turned off to save resources.
Specifically, when the PCI link contains a physical multi-level Switch chip, that is, when a Switch chip is connected to the downlink port of another Switch chip, resources need to be reserved for the downlink ports of the multi-level Switch chip and for the uplink and downlink ports of every Switch level the PCI link passes through, and the Hotplug function is set in the PCI configuration space of the PCI link where the PCI device is located. By reserving resources only where they are actually needed, unnecessary waste can be avoided. If there is no PCI device on the terminal PCI port of the multi-level Switch chip, no resources need to be reserved, and the Hotplug function is closed on the uplink and downlink ports of the entire PCI link and of every Switch level. If the two cases overlap, memory resources still need to be reserved on the Switch ports that require them.
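The multi-level reservation rule described above can be pictured with the following C sketch, which walks a hypothetical tree of switch ports and reserves along a branch only when a terminal device exists somewhere below it. The port_t type and the reserve_mmio32, enable_hotplug and disable_hotplug helpers are assumed names, not part of the patent.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct port {
    struct port *children;        /* downstream ports of the switch behind this port */
    struct port *next;            /* sibling downstream port at the same level */
    bool         has_endpoint;    /* a terminal PCI device sits on this port */
    uint32_t     endpoint_mmio32; /* 32-bit memory the endpoint requires, in bytes */
} port_t;

void reserve_mmio32(port_t *p, uint32_t bytes);  /* assumed firmware helpers */
void enable_hotplug(port_t *p);
void disable_hotplug(port_t *p);

/* Walk one switch level; reserve on a port only when a terminal device
 * exists somewhere below it. Returns the bytes reserved under this port. */
uint32_t reserve_level(port_t *p)
{
    uint32_t total = 0;

    if (p->has_endpoint)
        total += p->endpoint_mmio32;

    for (port_t *c = p->children; c != NULL; c = c->next)
        total += reserve_level(c);          /* recurse into the next switch level */

    if (total > 0) {                        /* something below needs resources */
        reserve_mmio32(p, total);           /* reserve on this up/downstream port */
        enable_hotplug(p);
    } else {                                /* idle branch: skip reservation */
        disable_hotplug(p);
    }
    return total;
}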
In the embodiment of the invention, the at least one processing circuit 11 is configured to execute a reading engine, the reading engine reads an identifier of the peripheral interconnection device, and the allocation engine is configured to determine a first memory resource allocation requirement according to the identifier of the peripheral interconnection device, and if it is determined that the peripheral interconnection device has the first memory resource allocation requirement, allocate the first memory resource to the uplink port, the downlink port and the peripheral interconnection device of the current level switch.
The identifier is used for identifying the type of the peripheral interconnection Device, and the identifier may be DID (Device ID) and VID (Vendor ID). In the actual execution process, the BIOS of the embodiment of the invention reads the DID and the VID of each PCI device in the PCI enumeration stage of the DXE stage.
Based on the identification information acquired by the reading engine, the distribution engine analyzes the information to determine whether the first memory resource needs to be reserved for the peripheral interconnection equipment. If it is determined that the device does need the first memory resource allocation according to the identification information of the device, the allocation engine allocates the required memory resources to the uplink port and the downlink port of the current level Switch (Switch chip) and the peripheral interconnection device itself.
In the embodiment of the invention, the allocation engine is used for allocating non-memory resources to the downlink port of the current-level switch if it is determined that the peripheral interconnect device has no allocation requirement for the first memory resource.
Specifically, in the embodiment of the present invention, the BIOS judges the PCI devices under the multi-level Switch chip. When a PCI device is confirmed, by its VID, DID, type, subtype and the like, to be a PCI device requiring memory resource reservation, the BIOS reserves resources and enables the hotplug function on the PCI configuration space of the Switch link and of the PCI bridge link where the terminal PCI device is located. If not, the BIOS reserves non-memory resources for the downlink port of the Switch where the PCI terminal device is located and closes the hotplug function of that downlink port. For the Switch uplink port, it must be determined whether other devices exist on the downlink ports of the current Switch chip; if not, the uplink port is closed and the hotplug functions are closed upward, by polling, as far as the PCI link bridge. If other downlink ports exist and require resource reservation, the hotplug function of the Switch chip's uplink port cannot be closed.
Therefore, the embodiment of the invention scans all downlink ports of the Switch chip and checks whether other devices are connected. If no downlink port of a certain Switch chip is connected, the Switch chip is considered to carry no actual workload at present, so its uplink port can be closed, polling upward until the hotplug function of the PCI link bridge is closed. If other devices exist on the downlink ports of the current Switch chip, the uplink port of the Switch chip must remain enabled, memory resources are reserved for those devices, and the hot plug function is enabled. In this way, by dynamically judging the connection state of the Switch chip's downlink ports, unnecessary resource waste can be avoided.
In the embodiment of the present invention, the at least one processing circuit 11 is configured to execute a setting engine, which sets a hot plug function for the peripheral interconnect link where hardware resources that have already been allocated the first memory resource are located.
In the above embodiment, the embodiment of the present invention may enable a hot plug (Hotplug) function for the peripheral interconnect link (for example, a PCI link) where hardware resources that have already been allocated the first memory resource are located. By enabling the hot plug function, the needed memory resources are automatically allocated to a device when it is plugged in and automatically recovered when it is unplugged, realizing dynamic management of resources, improving resource utilization and avoiding waste.
In an embodiment of the invention, the at least one processing circuit 11 is arranged to execute a start engine when the identification engine has finished polling all peripheral interconnect links, first hardware resources and second hardware resources; the start engine boots the operating system of the server.
It can be understood that when the recognition engine finishes polling all the peripheral interconnection links, the first hardware resources and the second hardware resources, the embodiment of the invention executes the starting engine, starts the operating system of the server, and enters the next stage of memory resource allocation, namely the operating system stage of the server.
In summary, the process of performing the first memory resource allocation at the startup stage of the server in the embodiment of the present invention is as follows (a sketch of this startup flow is given after the list).
(1) When the server starts, the BIOS judges whether the current PCI link includes a Switch chip by reading the type and subtype of the PCI configuration space in the PCI enumeration stage of the DXE stage. If not, the BIOS continues to confirm whether the current device is a DPU device by reading the DID and VID of each PCI device; if it is not a DPU device, 32-bit memory resources are allocated normally, and if it is, 32-bit resources are reserved for the DPU device according to its known 32-bit resource size. Meanwhile, the PCI link physically connected with the current DPU adds the MMIO 32-bit resources reserved for the DPU when counting the 32-bit resources required by other PCI devices, and the BIOS enables the PCI configuration space hot plug (hotplug) function of the bridge of the PCI link physically connected with the DPU;
(2) In the PCI enumeration stage, the BIOS confirms whether the current PCI link has a Switch chip and whether it is a specific PCI port by reading the type and subtype of the PCI configuration space. If so, it reserves 32-bit memory resources and enables the PCI configuration space Hotplug function for the uplink and downlink ports of the Switch chip and for the PCI link bridge where the Switch chip is located; if the port is not a specific PCI port and a PCI Switch chip exists, the BIOS does not enable the Hotplug function or reserve PCI port resources. When a further Switch chip is connected to the downlink port of the Switch chip, i.e. the PCI link contains a physical multi-level Switch chip, resources must be reserved for the PCI devices on the downlink ports of the multi-level Switch chip and for the uplink and downlink ports of every Switch level the PCI link passes through, and the Hotplug function is set in the PCI configuration space of the PCI link. If the terminal PCI port of the multi-level Switch chip has no PCI device, no resources need to be reserved, and the Hotplug function is closed on the uplink and downlink ports of the entire PCI link and of every Switch level; if the two cases overlap, memory resources must still be reserved where required. Meanwhile, the BIOS judges the terminal device of the multi-level Switch chip: if the VID, DID, type and subtype confirm that it is a PCI device requiring memory resource reservation, resources are reserved and the Hotplug function is enabled for the Switch link and the PCI bridge link where the terminal device is located; otherwise, no memory resources are reserved for the downlink port where the terminal device is located and its Hotplug function is closed. For the Switch uplink port, it is determined whether other devices exist on the downlink ports of the current Switch chip; if not, the uplink port is closed and the hotplug functions are closed upward, by polling, as far as the PCI link bridge; if other downlink ports exist and require resource reservation, the hotplug function of the Switch chip's uplink port cannot be closed;
(3) All PCI link bridges, the Switch chips under each PCI link bridge and their devices are polled in turn according to steps (1) and (2); after all PCI links and Switch chips of the CPU have been polled, booting continues and the operating system is entered.
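The following minimal C sketch summarizes the overall startup-stage flow of steps (1) to (3); enumerate_root_bridges, handle_link and boot_operating_system are hypothetical stand-ins for the firmware's real services, not functions defined by the patent.

#include <stddef.h>

typedef struct root_bridge root_bridge_t;

root_bridge_t **enumerate_root_bridges(size_t *count); /* assumed: lists the CPU's PCI link bridges */
void handle_link(root_bridge_t *rb);                    /* assumed: applies steps (1) and (2) to one link */
void boot_operating_system(void);                       /* assumed: hands control to the start engine */

void startup_stage_allocation(void)
{
    size_t n = 0;
    root_bridge_t **bridges = enumerate_root_bridges(&n);

    for (size_t i = 0; i < n; i++)   /* poll each PCI link bridge and its Switch chips in turn */
        handle_link(bridges[i]);

    boot_operating_system();         /* all links polled: continue booting into the OS */
}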
In summary, the embodiment of the invention can flexibly adapt to the requirements of different types of peripheral interconnection equipment, switches and peripheral interconnection ports on memory resources, avoid the problem that the equipment cannot be normally used due to insufficient memory resource allocation of the equipment, improve the utilization rate of the memory resources, avoid the waste of the memory resources and further improve the overall performance and stability of the server.
In one embodiment of the invention, the at least one processing circuit is configured to execute a read engine after the server enters the operating system, the read engine reads a memory allocation function of the server, and the allocation engine reallocates the first memory resource of the server according to the memory allocation function.
The memory allocation function is used for determining whether resource allocation needs to be performed again, and the memory allocation function can be identified from Grub parameters of the operating system, and is pci=realloc.
Because the resource requirements may change during the stage of the server entering the operating system, the embodiment of the invention reads the memory allocation function of the server after the server enters the operating system to determine whether the allocation of the first memory resource needs to be performed again.
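As a small illustration, the check for the memory allocation function could look like the following C sketch, assuming the kernel command line assembled from the Grub parameters is available to the caller as a string; wants_pci_realloc is a hypothetical helper name.

#include <stdbool.h>
#include <string.h>

/* Return true when the pci=realloc memory allocation function is present
 * in the command line. A simple substring check is used here; a production
 * parser would also honor token boundaries and variants such as
 * pci=realloc=off. */
static bool wants_pci_realloc(const char *cmdline)
{
    return cmdline != NULL && strstr(cmdline, "pci=realloc") != NULL;
}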
In one embodiment of the invention, the at least one processing circuit is configured to execute the read engine to read a reallocation requirement of the server before reallocating the first memory resources of the server according to the memory allocation function, and the allocation engine reallocates the first memory resources of the server according to the reallocation requirement and the memory allocation function.
It can be appreciated that the embodiment of the present invention may determine whether the first memory resource of the server needs to be reallocated according to the reallocation requirement and the memory allocation function.
In the embodiment of the invention, if the reading engine does not read a reallocation requirement of the server, the allocation engine deletes the memory allocation function and retains the allocation parameters of the first memory resource from the startup stage of the server.
It can be understood that when the reallocation requirement of the server is not read, the embodiment of the invention deletes the memory allocation function so as to reserve the allocation parameter of the first memory resource in the starting stage of the server when the server enters the operating system without reallocation of the memory resource.
Specifically, when the server enters the operating system, the Grub interface of the operating system adds the pci=realloc parameter to the Grub parameters by default. If the memory resources allocated by the BIOS do not need to be reallocated under the system at this time, the pci=realloc parameter is removed from the Grub file; if they do need to be reallocated, the pci=realloc parameter is retained, and the operating system then reallocates all the PCI resources of the server to ensure reasonable allocation and utilization of the memory resources.
In the embodiment of the present invention, the at least one processing circuit 11 is configured to execute a scan engine that scans all peripheral interconnect links of the server and transmits its scan data to the identification engine. The identification engine identifies the scan data; if it identifies that the hot plug function is turned on for the current peripheral interconnect link, the first hardware resource connected to it and the switch connected to it, the first memory resource allocated to the current peripheral interconnect link in the server startup phase is retained.
It can be understood that the embodiment of the invention can execute the scan engine, transmit its scan data to the identification engine, and have the identification engine identify the scan data. When the hot plug function is found to be turned on for the current peripheral interconnect link, the first hardware resource connected to it and the switch connected to it, this indicates that the devices may change dynamically, so the first memory resource allocated to the current peripheral interconnect link at the startup stage of the server is retained, ensuring that devices work normally when inserted or removed and that memory resources are allocated reasonably.
In the embodiment of the invention, the identification engine is used for, if it identifies that the hot plug function is closed for the current peripheral interconnect link, the first hardware resource connected to it and the switch connected to it, triggering the allocation engine to reallocate the first memory resource of the server according to the memory allocation function.
It can be understood that when the hot plug function of the current peripheral interconnect link, of the first hardware resource connected to it and of the switch connected to it is identified as closed, the first memory resources that were already allocated, or the devices that could not be allocated the first memory resource when the server started, are reallocated, thereby achieving flexible allocation of memory resources and meeting new memory resource allocation requirements.
In the embodiment of the invention, the identification engine is used for stopping distributing the first memory resource of the server if the current peripheral interconnection link is identified to be idle or a switch connected with the current peripheral interconnection link is identified to be idle.
It can be understood that when the current peripheral interconnect link is idle, or the switch connected to the current peripheral interconnect link is idle, the embodiment of the invention stops allocating the first memory resources of the server and retains those resources for other tasks or processes that need them more, thereby improving the overall utilization of memory resources and enabling the server to operate more efficiently.
In the embodiment of the present invention, the at least one processing circuit 11 is configured to execute a scan engine that scans the current peripheral interconnect link hierarchy and transmits its scan data to the identification engine; the identification engine identifies the scan data, and if the current peripheral interconnect link hierarchy has N levels, the number of reallocations of the first memory resource is N+1, and the allocation engine allocates the first memory resource according to the number of reallocations.
Because the peripheral interconnect link of the server may have a multi-level structure, the embodiment of the invention can scan the current peripheral interconnect link hierarchy. If a switch is physically connected during scanning, scanning continues until the switches of all levels have been scanned and the final number of levels is counted; if the number of levels is N, the number of resource reallocation passes is counted as N+1. After the hierarchy has been scanned, resources are reallocated to the scanned levels. Counting the levels and the number of reallocations provides the basis for subsequent resource reallocation and ensures its accuracy.
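The level-counting rule (N levels of switches, N+1 reallocation passes) can be sketched as follows in C, using a hypothetical sw_node_t tree; the names are illustrative only.

#include <stddef.h>

typedef struct sw_node {
    struct sw_node *children;   /* switches connected below this one */
    struct sw_node *next;       /* sibling switch at the same level */
} sw_node_t;

/* Depth of the switch hierarchy rooted at sw (0 when no switch is present). */
static unsigned switch_depth(const sw_node_t *sw)
{
    if (sw == NULL)
        return 0;
    unsigned deepest = 0;
    for (const sw_node_t *c = sw->children; c != NULL; c = c->next) {
        unsigned d = switch_depth(c);
        if (d > deepest)
            deepest = d;
    }
    return 1 + deepest;
}

/* Hierarchy level N gives N + 1 reallocation passes over the link. */
static unsigned realloc_passes(const sw_node_t *link_root)
{
    return switch_depth(link_root) + 1;
}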
In the embodiment of the invention, the at least one processing circuit is configured to execute the reading engine, which reads a second memory resource of the server and transmits the remaining amount of the second memory resource to the identification engine; the identification engine identifies whether the remaining resource is smaller than a resource threshold and transmits its identification result to the allocation engine; when the remaining resource is smaller than the resource threshold, the allocation engine stops allocating the first memory resource of the server and reallocates the first memory resources already allocated to hardware resources and peripheral interconnect links that do not support the hot plug function.
The resource threshold may be set according to a specific situation, for example, 4G.
It can be appreciated that, in the embodiment of the present invention, the second memory resource may be identified to optimize the allocation of the first memory resource, and when the remaining resources of the second memory resource are smaller than the resource threshold, the current memory resource is indicated to be insufficient, and the allocation of the first memory resource of the hardware resource of the current level or the next level is stopped, so as to improve the rationality and the effectiveness of the allocation of the memory resource.
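A minimal sketch of the remaining-resource check follows, with the threshold passed in as a parameter; the 4G figure above is only an example value, and should_stop_allocation is a hypothetical helper name.

#include <stdbool.h>
#include <stdint.h>

/* Stop allocating first memory resources for the current (or next) level
 * when the remaining amount of the second memory resource falls below the
 * configured threshold. Values are in bytes. */
static bool should_stop_allocation(uint64_t remaining_bytes, uint64_t threshold_bytes)
{
    return remaining_bytes < threshold_bytes;
}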
In the embodiment of the invention, the allocation engine is further used for reallocating the first memory resources of the peripheral interconnect link where a switch that does not support the hot plug function is located if any switch of the multi-level switch does not support the hot plug function, and for retaining the allocation parameters of the first memory resource from the startup stage of the server if all switches of the multi-level switch support the hot plug function.
It can be understood that, in the embodiment of the invention, when any switch of the multi-level switch does not support the hot plug function, the first memory resources of the peripheral interconnect link where that switch is located are reallocated, so that the memory resources better fit a link that does not support hot plug, problems caused by a mismatch between hot-plug-related resource allocation and the actual hardware capability are avoided, memory resource allocation becomes more reasonable, and part of the memory resources originally reserved for the hot plug function are released for other places that need them more, improving overall memory resource utilization. When all switches of the multi-level switch support the hot plug function, the allocation parameters of the first memory resource from the server startup stage are retained, which adapts more flexibly to dynamic changes of devices, avoids the performance fluctuation or configuration errors that frequent re-adjustment of memory allocation could cause, and maintains the overall performance and stability of the server.
Specifically, the current PCI link hierarchy is scanned; if a Switch is found to be physically connected during scanning, scanning continues until the Switch chips of all levels have been scanned and the final number of levels is counted; if the number of levels is N, the number of reallocation passes is counted as N+1, and after the hierarchy has been scanned the system reallocates resources to the scanned levels. In this way, the embodiment of the invention can perform careful resource reallocation according to the device situation and the hot plug function state of each level, improving the rationality and effectiveness of resource allocation and avoiding resource conflicts and uneven allocation across multiple levels. The process specifically comprises the following steps (a sketch of the per-level decision logic follows the list):
The purpose of resource allocation is to provide memory resources for the connected peripheral interconnect devices to support their operation. Therefore, if no peripheral interconnect device exists below the peripheral interconnect link of the current level, allocating memory resources to that link would waste resources, so no resource allocation is performed for the bridge of the current level. Likewise, if the hot plug function of the switch is closed, indicating that no device will be inserted into or removed from the current switch, the first memory resources reserved at the startup stage will not be used by newly attached or removed devices, so those reserved first memory resources are released for subsequent allocation to other devices, improving resource utilization;
If peripheral interconnect devices exist below the peripheral interconnect link of the current level, resources are allocated to them so that they can operate normally. If the hot plug function of the switch bridge is enabled, devices may be inserted or removed, so the peripheral interconnect devices are not reallocated and are used with the resource sizes set at the startup stage of the server, preventing device failure or data loss caused by resource reallocation;
if the peripheral interconnection equipment and the equipment of the next level exist below the peripheral interconnection link of the current level, performing resource allocation on the peripheral interconnection equipment to ensure normal operation of the equipment, and judging whether the peripheral interconnection equipment exists in the peripheral interconnection link of the next level or not so as to further determine whether resources need to be allocated for the equipment of the next level or not, thereby realizing reasonable allocation of the resources of the whole level structure;
If the peripheral interconnect link of the next level has no peripheral interconnect device connected and its hot plug function is closed, no new device will be attached or removed, so no resources need to be allocated to that bridge, avoiding waste. If peripheral interconnect devices exist on the peripheral interconnect link of the next level, memory resources must be allocated to that link to keep those devices operating and meet their requirements; and if the hot plug function of the next level is open, devices may be inserted or removed, so the first memory resources do not need to be reallocated;
After resource allocation and judgment are carried out on peripheral interconnection links and equipment of all levels, when the re-allocation of the resources of the (n+1) th layer is completed, the resource allocation work of the whole server system is indicated to be completed, and the system can enter a stable running state after the resource allocation is finished, so that unnecessary resource allocation operation is avoided, and the efficiency and stability of the system are improved.
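The per-level decision logic in the list above can be condensed into the following C sketch; the flags and the bridge_action_t outcomes are hypothetical names summarizing the three cases the text describes: release an idle bridge's reservation, keep the BIOS-time sizes when hotplug is enabled, and reallocate otherwise.

#include <stdbool.h>

typedef enum {
    RELEASE_RESERVATION,    /* no device below and hotplug off: free the reservation */
    KEEP_BIOS_ALLOCATION,   /* hotplug on: keep the sizes set at server startup */
    REALLOCATE,             /* device present, hotplug off: the OS reallocates */
} bridge_action_t;

/* Decide what the OS stage does with one bridge or switch port. */
static bridge_action_t decide_bridge_action(bool has_device_below, bool hotplug_enabled)
{
    if (!has_device_below && !hotplug_enabled)
        return RELEASE_RESERVATION;
    if (hotplug_enabled)
        return KEEP_BIOS_ALLOCATION;
    return REALLOCATE;
}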
Specifically, the specific operation of the embodiment of the invention after the server enters the operating system is as follows:
(1) When the server enters the operating system, the Grub interface of the operating system adds the pci=realloc parameter to the Grub parameters by default; if the memory resources allocated by the BIOS do not need to be reallocated under the system at this time, the parameter is removed from the Grub file. When the system driver scans a PCI bridge, the Switch chips under it and the downlink ports of those Switch chips and finds their PCI configuration space Hotplug functions enabled, the resources reserved for those devices at BIOS startup are retained. If the PCI configuration space Hotplug functions of the PCI bridge, of the Switch chips under it and of the Switch chips' downlink ports are closed, the operating system reallocates the resources already allocated by the BIOS in the DXE stage, or the devices that could not be allocated 32-bit memory resources. If the PCI bridge of a PCI link, or the downlink ports of its Switch chips, have no devices and the Hotplug function is closed, no 32-bit memory resources need to be allocated, so that the PCI devices under the system can obtain 32-bit memory resources normally under special requirements and meet the conditions of use;
(2) When resources are reallocated under the system, the PCI bridge devices of each PCI link must be scanned; one PCI link is taken as an example. First the hierarchy of the current PCI link is scanned; if a Switch is physically connected during scanning, scanning continues until the Switch chips of all levels have been scanned and the final number of levels is counted; if the number of levels is N, the number of reallocation passes is counted as N+1. After the hierarchy has been scanned, the system reallocates resources to the scanned levels. If no PCI terminal device exists under the first-level bridge, the bridge is not allocated resources, or, if its Hotplug function is not enabled, the reserved resources are released. If PCI terminal devices exist under the first-level bridge, those terminal devices are allocated resources; if the Hotplug function of the Switch chip bridge is enabled, the operating system does not reallocate those PCI devices and uses them with the resource sizes set when the BIOS started. Where PCI terminal devices exist under the first-level bridge, they must be allocated resources and it must be judged whether PCI terminal devices exist under the next-level bridge: if the next-level bridge has no terminal device and its PCI configuration space Hotplug function is closed, that bridge needs no resource allocation; if PCI terminal devices exist under the next-level bridge, the next-level bridge is allocated resources; and if the Hotplug function of the next-level bridge is enabled, the system does not need to allocate the Switch again and uses the memory resources allocated when the BIOS started. In this way, the N+1 passes of resource allocation are completed level by level until the 32-bit resource allocation is finished and all PCI device resources of the server have been allocated. If the 4G of 32-bit PCI memory resources satisfies the current PCI devices, nothing further is required; if the 4G of memory resources is insufficient, the PCI devices identified at the current level are allocated 32-bit memory resources, and if no memory resources can be allocated, the remaining PCI devices under the current level, as well as the Switch bridge chips and PCI devices under the next level, are not allocated resources. If any Switch chip node in the multi-level Switch chain of the PCI link does not support the Hotplug function, the system layer reallocates resources; if every Switch chip level in the multi-level Switch chain of the PCI link supports the Hotplug function, the system layer keeps the resource sizes allocated by the BIOS when the server started and does not reallocate them.
According to the multi-level switch dynamic combined memory resource allocation system provided by the embodiment of the invention, the type data of the target configuration space is dynamically acquired and analyzed when the server starts. The identification engine identifies the first hardware resource connected to the current peripheral interconnect link from the type data and, when the first hardware resource includes a multi-level switch, identifies the second hardware resource connected to that switch, so the devices that need reserved resources are determined from the second identification result rather than by checking every possible connection one by one. The first memory resource is then allocated to the multi-level switch and the second hardware resource based on the second identification result. This avoids excessive time consumption when the peripheral interconnect link contains many switches, shortens the server start-up time, improves start-up efficiency, and ensures the consistency and stability of memory resource allocation. The technical problem in the related art of long server start-up times caused by the memory resource allocation scheme can therefore be solved.
The embodiment of the invention also provides a server, which comprises the system for allocating the combined memory resources of the multi-level switch.
The embodiment of the invention also provides a method for dynamically allocating the combined memory resources of the multi-level switch.
As shown in fig. 6, the method for dynamically combining memory resource allocation of the multi-level switch includes the following steps:
In step S101, at least one processing circuit connected by at least one peripheral interconnect link is used to execute a read engine, and the read engine reads the type data of a target configuration space at start-up of the server.
It may be understood that, in the embodiment of the present invention, at least one processing circuit connected to at least one peripheral interconnect link may be used. The at least one processing circuit executes a read engine, and the read engine reads the type data of a target configuration space when the server starts; the target configuration space may be a PCI configuration space. Specifically, when the server starts, the BIOS reads the type and subtype of the PCI configuration space during PCI enumeration in the DXE stage (a configuration-space sketch follows the step descriptions).
In step S102, the type data is propagated to the recognition engine, which recognizes the first hardware resource connected to the current peripheral interconnect link based on the type data; if the first hardware resource includes a multi-level switch, the recognition engine recognizes the second hardware resource connected to the multi-level switch.
It may be appreciated that, in the embodiment of the present invention, the type data of the PCI configuration space may be propagated to the recognition engine, where the recognition engine recognizes the first hardware resource connected by the current peripheral interconnect link, so as to perform subsequent memory resource allocation.
In step S103, the recognition result of the recognition engine is propagated to the allocation engine; when the allocation engine determines, according to the first recognition result of the first hardware resource, that the current peripheral interconnect link is connected to a multi-level switch, it allocates the memory resource to the multi-level switch and the second hardware resource according to the second recognition result of the second hardware resource.
It should be noted that, the description of the features in the embodiment corresponding to the method for dynamically combining memory resource allocation by the multi-level switch may be referred to the description of the embodiment corresponding to the system for dynamically combining memory resource allocation by the multi-level switch, which is not described in detail herein.
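As a concrete illustration of steps S101 and S102, the sketch below reads the standard PCI configuration space header fields (vendor/device ID, class code, header type) and classifies what sits on the link. The field offsets are the standard PCI header layout; the configuration space accessors are stubbed here only so the fragment compiles standalone and do not represent a specific firmware API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Configuration space accessors, stubbed so the sketch compiles standalone;
 * a real platform would use port 0xCF8/0xCFC or ECAM accesses instead. */
static uint16_t pci_cfg_read16(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    (void)bus; (void)dev; (void)fn; (void)off;
    return 0xFFFF;
}
static uint8_t pci_cfg_read8(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    (void)bus; (void)dev; (void)fn; (void)off;
    return 0;
}

enum link_kind { LINK_EMPTY, LINK_ENDPOINT, LINK_BRIDGE_OR_SWITCH };

/* The "type data" the read engine pulls from the target configuration space. */
struct cfg_type {
    uint16_t vendor_id;    /* offset 0x00 */
    uint16_t device_id;    /* offset 0x02 */
    uint8_t  header_type;  /* offset 0x0E, bits 0-6 select the header layout */
    uint8_t  base_class;   /* offset 0x0B */
    uint8_t  sub_class;    /* offset 0x0A */
};

static struct cfg_type read_type(uint8_t bus, uint8_t dev, uint8_t fn)
{
    struct cfg_type t;
    t.vendor_id   = pci_cfg_read16(bus, dev, fn, 0x00);
    t.device_id   = pci_cfg_read16(bus, dev, fn, 0x02);
    t.header_type = pci_cfg_read8(bus, dev, fn, 0x0E) & 0x7F;
    t.base_class  = pci_cfg_read8(bus, dev, fn, 0x0B);
    t.sub_class   = pci_cfg_read8(bus, dev, fn, 0x0A);
    return t;
}

/* The identification engine: a PCI-to-PCI bridge (class 0x06, subclass 0x04,
 * header type 1) is how a switch upstream/downstream port presents itself;
 * anything else with a valid vendor ID is treated as an endpoint.  When a
 * bridge is found, the same routine is applied behind it to identify the
 * second hardware resources. */
static enum link_kind identify(const struct cfg_type *t)
{
    if (t->vendor_id == 0xFFFF)
        return LINK_EMPTY;
    if (t->header_type == 0x01 && t->base_class == 0x06 && t->sub_class == 0x04)
        return LINK_BRIDGE_OR_SWITCH;
    return LINK_ENDPOINT;
}
```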
According to the method for dynamically combining memory resource allocation of the multi-level switch provided by the embodiment of the invention, the type data of the target configuration space is dynamically acquired and analyzed when the server starts. The identification engine identifies the first hardware resource connected to the current peripheral interconnect link from the type data and, when the first hardware resource includes a multi-level switch, identifies the second hardware resource connected to that switch, so the devices that need reserved resources are determined from the second identification result rather than by checking every possible connection one by one. The first memory resource is then allocated to the multi-level switch and the second hardware resource based on the second identification result. This avoids excessive time consumption when the peripheral interconnect link contains many switches, shortens the server start-up time, improves start-up efficiency, and ensures the consistency and stability of memory resource allocation. The technical problem in the related art of long server start-up times caused by the memory resource allocation scheme can therefore be solved.
The following describes a process of dynamic memory resource allocation of a multi-level switch according to an embodiment of the present invention, which specifically includes:
The execution relationships among the components of the multi-level switch dynamic combined memory resource allocation system during the server start-up stage are shown in fig. 7. In the start-up stage the BIOS controls the processing circuit, which is equivalent to the BIOS performing the memory resource allocation. Specifically, the read engine reads the type data of the target configuration space at start-up and passes it to the identification engine; the identification engine identifies the first hardware resources (peripheral interconnect devices, switches and the like), and when the first hardware resources include a multi-level switch it also identifies the second hardware resources connected to that switch (peripheral interconnect ports, peripheral interconnect devices and the like) and generates the identification results. The identification results are passed to the allocation engine, which, on determining that the current peripheral interconnect link is connected to a multi-level switch, allocates the first memory resource to the multi-level switch and the second hardware resources according to the second identification result. After the first memory resource has been allocated, the setting engine is executed to set the hot-plug function for the hardware resources that received the first memory resource and for the peripheral interconnect link where they reside, and the start-up engine then starts the operating system of the server.
The following describes a specific execution flow of memory resource allocation in the startup phase of the server in conjunction with the execution connection relationship diagram between the components in the startup phase shown in fig. 7, and the specific flow is shown in fig. 8.
1. When the server starts, the BIOS determines in the DXE stage, by reading the PCI type, subtype and related fields, whether a Switch chip exists on the PCI link. When the link has no Switch chip, the DID and VID of the PCI device on the link are read directly to confirm whether it is a DPU or another specific PCI device; if it is, a 32-bit memory resource reservation is made and the hotplug function of the PCI link bridge is enabled. If no device is detected, the BIOS evaluates whether the port is a specific PCI port that must reserve memory resources and support hot plug under the operating system; if it is not such a port, no memory resources are allocated. If the device is neither a DPU nor a specific device, it is allocated memory resources in the normal way. It should be noted that, when counting the 32-bit resources required by other PCI devices, the PCI link physically connected to the current DPU accumulates the 32-bit MMIO resources reserved for the DPU, and at the same time the BIOS enables the hot-plug (Hotplug) function in the configuration space of the bridge physically linked to the DPU. (A sketch of this identification logic follows this list.)
2. In the PCI enumeration stage the BIOS reads the type and subtype of the PCI configuration space to confirm whether the current PCI link contains a Switch chip and whether the port is a specific PCI port. If it is a specific PCI port, the BIOS reserves 32-bit memory resources and enables the PCI configuration space Hotplug function of the Switch chip's upstream and downstream ports and of the bridge of the PCI link where the Switch chip resides. If it is not a specific PCI port but a PCI Switch chip is present, the BIOS does not reserve resources for the Switch chip's upstream and downstream ports or for the PCI port.
3. When a multi-level Switch chain exists on the PCI link, as shown in fig. 9, the BIOS scans the depth of the link, that is, the Switch level. If the Switch level is N, the BIOS checks whether a PCI device exists at level N. If not, the hotplug function of the corresponding level-N Switch downstream port is disabled and no memory resources are allocated; if resources must be reserved on that port, its hotplug function is enabled instead. If a PCI device is present on the level-N downstream port, memory resources are allocated; the port's hotplug function is enabled when its memory resources will not be reallocated under the operating system, and disabled when they will be. After level N has been scanned, the same operation is performed on the next level up, Switch N-1; at this point, whenever the level-N downstream port keeps its hotplug function, the level-N upstream port and the level-N-1 downstream port must have their hotplug functions enabled as well. This continues level by level until all Switch levels have been processed, after which the hotplug function of the bridge of the PCI link where the chain resides can be enabled. (A sketch of this level-by-level setup follows this list.)
4. All PCI link bridges, the Switch chips under them and their devices are polled in turn according to steps 1-3; after all PCI links and Switch chips of the CPU have been polled, start-up continues and the operating system is entered.
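A sketch of the boot-time decision in steps 1 and 2 above, assuming a small table of "specific" vendor/device IDs; the DPU IDs, window sizes and helper names are placeholders, not values taken from the patent:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct pci_ep { uint16_t vid, did; };

/* Hypothetical table of devices that must keep fixed 32-bit MMIO windows. */
static const struct pci_ep special_devices[] = {
    { 0x1DED, 0x1000 },   /* placeholder DPU VID/DID */
    { 0x1DED, 0x2000 },   /* placeholder accelerator VID/DID */
};

static bool is_special(uint16_t vid, uint16_t did)
{
    for (size_t i = 0; i < sizeof(special_devices) / sizeof(special_devices[0]); i++)
        if (special_devices[i].vid == vid && special_devices[i].did == did)
            return true;
    return false;
}

struct link_state {
    bool     hotplug_on;      /* Hotplug bit in the link bridge's config space */
    uint32_t reserved_mmio32; /* accumulated 32-bit MMIO reservation for the link */
};

/* BIOS-side handling of a link without a switch chip (step 1). */
static void allocate_endpoint(struct link_state *link, uint16_t vid, uint16_t did,
                              uint32_t requested_mmio32)
{
    if (vid == 0xFFFF) {
        /* No device: only "specific ports" would keep a reservation; others get nothing. */
        return;
    }
    if (is_special(vid, did)) {
        link->reserved_mmio32 += requested_mmio32;  /* accumulate the DPU's MMIO32 need */
        link->hotplug_on = true;                    /* enable Hotplug on the linked bridge */
    } else {
        /* Ordinary device: normal allocation, no fixed reservation, hotplug stays off. */
        link->hotplug_on = false;
    }
}
```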
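And a sketch of the level-by-level Hotplug setup in step 3, with an N-level switch chain modeled as an array of levels; this is again a simplified model under assumed field names, not UEFI or kernel code:

```c
#include <stdbool.h>

/* One switch level: its port pair and what hangs below its downstream port. */
struct sw_level {
    bool up_hotplug;     /* Hotplug bit on the upstream port's config space */
    bool down_hotplug;   /* Hotplug bit on the downstream port's config space */
    bool has_device;     /* a PCI device sits on the downstream port */
    bool needs_reserve;  /* the empty port must keep a window for later hot add */
    bool realloc_in_os;  /* the OS is expected to re-size this port's window */
};

/* levels[0] is level 1 (nearest the link bridge), levels[n-1] is level N.
 * Walk from the deepest level back up, as step 3 describes. */
static void setup_chain(struct sw_level *levels, int n, bool *link_bridge_hotplug)
{
    if (n <= 0) {
        *link_bridge_hotplug = false;
        return;
    }
    for (int i = n - 1; i >= 0; i--) {
        struct sw_level *lv = &levels[i];
        bool keep;

        if (!lv->has_device && !lv->needs_reserve)
            keep = false;                 /* empty and uninteresting: hotplug off */
        else if (!lv->has_device)
            keep = true;                  /* empty but reserved for future hot add */
        else
            keep = !lv->realloc_in_os;    /* device keeps its BIOS window only if hotplug stays on */

        /* A lower level may already have forced this downstream port on. */
        lv->down_hotplug = lv->down_hotplug || keep;

        /* Keeping hotplug here means the whole path above must keep it too:
         * this level's upstream port and the next level up's downstream port. */
        if (lv->down_hotplug) {
            lv->up_hotplug = true;
            if (i > 0)
                levels[i - 1].down_hotplug = true;
        }
    }
    /* Finally the bridge of the PCI link hosting the chain. */
    *link_bridge_hotplug = levels[0].up_hotplug;
}
```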
2. The server enters the operating system phase.
The execution relationships among the components of the multi-level switch dynamic combined memory resource allocation system in the stage where the server enters the operating system are shown in fig. 10. In this stage the operating system of the server controls the processing circuit, which is equivalent to the operating system performing the reallocation of the first memory resource. Specifically, the read engine reads the memory allocation function of the server and reads whether a memory resource reallocation requirement exists; based on the reallocation-requirement result passed on by the read engine, the allocation engine either deletes the memory allocation function (that is, the memory resources are not reallocated) or reallocates the first memory resource. When a reallocation instruction for the first memory resource is identified, the scan engine is executed; the scan engine scans the hierarchy of the peripheral interconnect link, its hardware resources and the hot-plug function settings, and passes the scan data to the identification engine. The identification engine identifies the scan data and passes it to the allocation engine, which then reallocates the first memory resource.
The following describes the specific execution flow of memory resource allocation after the server enters the operating system, in conjunction with the execution relationship diagram between the components of that stage shown in fig. 10; the flow is shown in fig. 11 and includes:
1. When the server enters the operating system, the Grub configuration of the operating system contains the pci=realloc parameter by default. If the memory resources allocated by the BIOS do not need to be reallocated under the operating system, the parameter is removed from the Grub file; if they do need to be reallocated, the operating system keeps the parameter and then reallocates all PCI resources of the server. When the system driver finds that the PCI configuration space Hotplug function of the PCI bridge, of the Switch chip under it and of the Switch downstream port is enabled, the resources reserved for that device at BIOS boot are retained. If those Hotplug functions are disabled, the operating system reallocates the resources the BIOS assigned in the DXE stage, as well as any devices that could not be given 32-bit memory resources. If a PCI bridge of the link, or a Switch chip and its downstream port, has no device attached and the Hotplug function is disabled, no 32-bit memory resources need to be allocated, so that the PCI devices under the operating system can obtain 32-bit memory resources under special requirements and meet their usage conditions.
2. When resources are reallocated under the operating system, the PCI bridge devices of every PCI link must be scanned. If an N-level Switch chain exists, an N+1-pass resource reallocation scheme is applied to it; the upstream and downstream ports of every Switch chip and the bridge of the PCI link where the chain resides must all have the Hotplug function enabled. If the Hotplug function of any node in the whole link is not enabled, the resources are reallocated; if every node has it enabled, the memory resources allocated at BIOS boot are used. For this reason, at BIOS boot the configuration space of the bridge of the PCI link where the DPU resides, and the PCI configuration space Hotplug function of the link bridge of any identified specific PCI device, must be enabled; under the operating system the PCI configuration space Hotplug function of any PCI bridge that must support hot plug is enabled and the memory resources allocated by the BIOS are used by default. When the PCI link contains a Switch, and a specific PCI device sits on a Switch downstream port or a PCI port must reserve resources, the Hotplug function of the configuration space of the Switch chip's upstream and downstream ports and of the bridge of the PCI link where the Switch resides is enabled. For links without a Switch chip, the scheme informs the operating system that 32-bit memory resources have already been allocated to the DPU, to the detected specific PCI devices, or to the PCI ports that must support hot plug, so no further allocation is needed for them; the operating system first performs 32-bit resource allocation for the other PCI bridges and PCI devices, level by level according to the Switch hierarchy. If a PCI bridge has no hotplug function and no device, no 32-bit memory resource reservation is needed; if it has no hotplug function but a device is present, allocation is made from the remaining capacity of the 32-bit (below-4G) memory resources: if the resources required by the current device can be satisfied they are allocated, and if not, no allocation is made and the device's memory resources are isolated. (A sketch of this budget check follows.)
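The 32-bit budget handling at the end of item 2 can be pictured as below. The 4 GiB figure is simply the architectural limit on 32-bit MMIO addressing, while the bookkeeping structure and the result codes are illustrative only:

```c
#include <stdint.h>
#include <stdbool.h>

#define MMIO32_BUDGET (4ULL * 1024 * 1024 * 1024)  /* everything must fit below 4 GiB */

struct mmio32_pool {
    uint64_t used;   /* bytes already handed out (BIOS reservations + OS assignments) */
};

enum alloc_result { ALLOC_OK, ALLOC_SKIPPED, ALLOC_ISOLATED };

/* Try to give one device a 32-bit window during the OS reallocation pass. */
static enum alloc_result try_assign(struct mmio32_pool *pool, uint64_t need,
                                    bool bridge_hotplug_on, bool bridge_has_device)
{
    if (bridge_hotplug_on)
        return ALLOC_SKIPPED;                 /* keep the BIOS-sized window untouched */
    if (!bridge_has_device)
        return ALLOC_SKIPPED;                 /* empty bridge: no 32-bit reservation needed */
    if (pool->used + need <= MMIO32_BUDGET) {
        pool->used += need;                   /* remaining capacity suffices: assign it */
        return ALLOC_OK;
    }
    /* Not enough room below 4 GiB: do not assign, isolate the device so the rest
     * of this level and the levels below it are skipped as well. */
    return ALLOC_ISOLATED;
}
```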
In summary, in the method for dynamically combining memory resources across a multi-level Switch according to the embodiment of the invention, the BIOS judges at server start-up whether a Switch chip exists on the PCI link. If not, the PCI device on the link is examined directly: a DPU or other specific PCI device is actively identified and the hot-plug function is enabled in the configuration space of the bridge where the DPU or specific device resides; a normally identified PCI device is allocated resources in the normal way and the hot-plug function of its PCI link is disabled; for a specific port of the PCI link, resources are reserved and the hot-plug function of the link is enabled; and other PCI links with no device present are not allocated resources. If a PCI Switch chip does exist, the BIOS judges whether the Switch downstream port must support the hot-plug function under the operating system, or whether a DPU or specific PCI device is attached to the downstream port; if either condition holds, resources are reserved and the hot-plug function of the path is enabled, and otherwise the downstream port is not allocated resources. For a multi-level chain the BIOS scans the Switch levels: when level N has been scanned, the same operation is performed on the next level up, Switch N-1, and whenever the level-N downstream port keeps its hotplug function the corresponding level-N upstream port and the level-N-1 downstream port must be enabled as well, and so on until all Switch levels have been processed and the hotplug function of the bridge of the PCI link where the chain resides is enabled. With this scheme, resource reservations for inconsequential PCI devices or unused PCI ports can be reduced.
When the server enters the operating system, the pci=realloc parameter is present by default and the operating system reallocates the memory resources under the system, so even if the BIOS has already completed the allocation, for example the 32-bit memory resource allocation, the system still reallocates. Because memory resources are limited and the positions of the DPU and the specific PCI devices are not fixed, a PCI device identified later may receive no resources once the memory resources have been handed out according to this parameter; PCI links whose Hotplug function is enabled, however, do not need to be reallocated. At the same time the system scans the Switch levels: if an N-level Switch chain exists, the N+1-pass reallocation scheme is applied to it, and the upstream and downstream ports of every Switch chip and the bridge of the PCI link where the chain resides must have the Hotplug function enabled; if the Hotplug function of any node in the whole link is not enabled, the resources are reallocated, and if every node has it enabled, the memory resources allocated at BIOS boot are used. During reallocation, if the scanned Switch level is N, the number of passes is N+1; 32-bit memory resources are allocated to the PCI devices starting from pass N+1, and when that pass is complete resources are allocated to the PCI devices at level N, and so on until all devices of the PCI link bridge are done. If the 32-bit memory resources are insufficient, the memory resource allocation of the later-level Switch chips and PCI devices is abandoned. The BIOS must therefore enable by default the Hotplug function of the PCI link where the DPU and the specific PCI devices reside, of the upstream and downstream ports of the link's Switch chips, and of the PCI link bridge, to prevent the system's reallocation from leaving the DPU and the specific PCI devices unusable. If the Hotplug function of the link where the DPU or a specific PCI device resides were disabled and the 32-bit memory resources were already exhausted, releasing memory could leave the DPU unable to satisfy its resource request, so that the DPU fails to appear or the functions of the DPU or the specific PCI device cannot be used normally. With this scheme the DPU and the specific PCI devices remain usable at all times and resource-related failures of the DPU are avoided. If memory resources do not need to be reallocated under the system, the pci=realloc parameter is removed from the Grub file at server start-up, the memory resources allocated during the BIOS phase are used by default, and no reallocation is performed under the system.
It should be noted that, when there are too many Switch chips on a PCI link, the BIOS spends too much time on resource allocation at server start-up. In that case, to optimize the start-up time and keep the resource allocation of specific Switch chips fixed, the resources and hotplug functions of those Switch chips can be set during the BIOS enumeration process, with the resources reserved by the device VID or DID, or by the hierarchy relationship together with the BUS, DEV and FUN numbers of the device. This reduces the time consumed at start-up and improves start-up efficiency, while resources are reserved and the hotplug function is enabled for the PCI ports of the specific Switch chips, their upstream and downstream ports, and the PCI link where each Switch chip resides.
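A sketch of such a boot-time reservation table: fixed windows can be keyed either by vendor/device ID or by the bus/device/function position of a switch, so enumeration does not have to size every port from scratch. The table contents and window sizes are invented for illustration:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct fixed_reservation {
    bool     match_by_id;          /* true: match VID/DID, false: match bus/dev/fn */
    uint16_t vid, did;
    uint8_t  bus, dev, fn;
    uint32_t mmio32_size;          /* window reserved at enumeration time */
    bool     enable_hotplug;       /* also set Hotplug on the port and its link bridge */
};

/* Hypothetical policy table baked into the firmware build. */
static const struct fixed_reservation policy[] = {
    { true,  0x1DED, 0x1000, 0, 0, 0, 64u << 20, true },  /* placeholder DPU: 64 MiB */
    { false, 0, 0,           4, 0, 0, 16u << 20, true },  /* switch at 04:00.0: 16 MiB */
};

static const struct fixed_reservation *
lookup(uint16_t vid, uint16_t did, uint8_t bus, uint8_t dev, uint8_t fn)
{
    for (size_t i = 0; i < sizeof(policy) / sizeof(policy[0]); i++) {
        const struct fixed_reservation *r = &policy[i];
        if (r->match_by_id && r->vid == vid && r->did == did)
            return r;
        if (!r->match_by_id && r->bus == bus && r->dev == dev && r->fn == fn)
            return r;
    }
    return NULL;   /* no fixed reservation: size the port normally during enumeration */
}
```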
From the description of the above embodiments, it will be clear to a person skilled in the art that the method of the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is the preferred implementation.
Embodiments of the present invention also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is configured to perform, when run, the steps of the method embodiments of any of the above-described multi-level switch dynamic combined memory resource allocation.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
Embodiments of the present invention also provide a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method embodiments of dynamically combining memory resource allocation for any of the above-described multi-level switches.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above describes a memory resource allocation method provided by the present invention in detail. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (22)

1. A system for dynamically combining memory resource allocation of a multi-level switch, comprising: at least one processing circuit of a server, the at least one processing circuit being connected to at least one peripheral interconnect link, the at least one processing circuit being configured to: execute a read engine, wherein the read engine reads type data of a target configuration space when the server is started; propagate the type data to an identification engine, the identification engine identifying a first hardware resource connected to the current peripheral interconnect link based on the type data and, if the first hardware resource includes a multi-level switch, identifying a second hardware resource connected to the multi-level switch; and propagate the identification result of the identification engine to an allocation engine, wherein the allocation engine, upon determining from the first identification result of the first hardware resource that a multi-level switch is connected to the current peripheral interconnect link, allocates a first memory resource to the multi-level switch and the second hardware resource according to the second identification result of the second hardware resource.

2. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 1, wherein the allocation engine is configured to: if a downstream port of the current-level switch is connected to a peripheral interconnect device, allocate the first memory resource to the upstream port, the downstream port and the peripheral interconnect device of the current-level switch; and if the downstream port of the current-level switch is idle, allocate no memory resources to the upstream port and the downstream port of the current-level switch.

3. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 2, wherein the at least one processing circuit is configured to: execute the read engine, wherein the read engine reads an identification of the peripheral interconnect device; and the allocation engine is configured to: determine a first memory resource allocation requirement according to the identification of the peripheral interconnect device, and, if it is determined that the peripheral interconnect device has the first memory resource allocation requirement, allocate the first memory resource to the upstream port, the downstream port and the peripheral interconnect device of the current-level switch.

4. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 3, wherein the allocation engine is configured to: if it is determined that the peripheral interconnect device has no allocation requirement for the first memory resource, allocate a non-memory resource to the downstream port of the current-level switch.

5. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 1, wherein the allocation engine is configured to: when it is determined, according to the first identification result of the first hardware resource, that a peripheral interconnect device is connected to the current peripheral interconnect link, allocate the first memory resource to the peripheral interconnect device.

6. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 5, wherein the allocation engine is configured to: before allocating the first memory resource to the peripheral interconnect device, if the peripheral interconnect device is a data processor, allocate the first memory resource according to the resource size of the data processor.

7. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 1, wherein the allocation engine is configured to: when it is determined, according to the first identification result of the first hardware resource, that a peripheral interconnect port and a switch are connected to the current peripheral interconnect link, if the peripheral interconnect port is a target port, allocate the first memory resource to the peripheral interconnect port and the switch.

8. The system for dynamically combining memory resource allocation of a multi-level switch according to any one of claims 1 to 7, wherein the at least one processing circuit is configured to: execute a setting engine, wherein the setting engine sets a hot-plug function for the peripheral interconnect link where the hardware resource to which the first memory resource has been allocated is located.

9. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 1, wherein the at least one processing circuit is configured to: when the identification engine has polled all peripheral interconnect links, the first hardware resource and the second hardware resource, execute a startup engine, wherein the startup engine starts the operating system of the server.

10. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 9, wherein the at least one processing circuit is configured to: after the server enters the operating system, execute the read engine, wherein the read engine reads a memory allocation function of the server; and the allocation engine reallocates the first memory resource of the server according to the memory allocation function.

11. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 9, wherein the at least one processing circuit is configured to: before reallocating the first memory resource of the server according to the memory allocation function, execute the read engine, wherein the read engine reads a reallocation requirement of the server; and the allocation engine reallocates the first memory resource of the server according to the reallocation requirement and the memory allocation function.

12. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 10, wherein, if the read engine does not read a reallocation requirement of the server, the allocation engine deletes the memory allocation function and retains the allocation parameters of the first memory resource from the startup phase of the server.

13. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 10, wherein the at least one processing circuit is configured to: execute a scan engine, wherein the scan engine scans all peripheral interconnect links of the server; and propagate the scan data of the scan engine to the identification engine, wherein the identification engine identifies the scan data, and if it identifies that the hot-plug function is enabled for the current peripheral interconnect link, the first hardware resource connected to the current peripheral interconnect link and the switch connected to the current peripheral interconnect link, the first memory resource allocated to the current peripheral interconnect link in the startup phase of the server is retained.

14. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 13, wherein the identification engine is configured to: if it identifies that the hot-plug function is disabled for the current peripheral interconnect link, the first hardware resource connected to the current peripheral interconnect link and the switch connected to the current peripheral interconnect link, the allocation engine reallocates the first memory resource of the server according to the memory allocation function.

15. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 13, wherein the identification engine is configured to: if it identifies that the current peripheral interconnect link is idle, or that the switch connected to the current peripheral interconnect link is idle, stop allocating the first memory resource of the server.

16. The system for dynamically combining memory resource allocation of a multi-level switch according to any one of claims 10 to 15, wherein the at least one processing circuit is configured to: execute a scan engine, wherein the scan engine scans the level of the current peripheral interconnect link; propagate the scan data of the scan engine to the identification engine, wherein the identification engine identifies the scan data, and if the level of the current peripheral interconnect link is N, the number of reallocation passes of the first memory resource is N+1; and the allocation engine performs the first memory resource allocation according to the number of reallocation passes.

17. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 16, wherein the at least one processing circuit is configured to: execute the read engine, wherein the read engine reads a second memory resource of the server; propagate the remaining resource of the second memory resource to the identification engine, wherein the identification engine identifies whether the remaining resource is less than a resource threshold; and propagate the identification result of the identification engine to the allocation engine, wherein, when the remaining resource is less than the resource threshold, the allocation engine stops the allocation of the first memory resource of the server and reallocates the first memory resources already allocated to hardware resources and peripheral interconnect links that do not support the hot-plug function.

18. The system for dynamically combining memory resource allocation of a multi-level switch according to claim 17, wherein the allocation engine is further configured to: if any switch of the multi-level switch does not support the hot-plug function, reallocate the first memory resource of the peripheral interconnect link where the switch that does not support the hot-plug function is located; and if all switches of the multi-level switch support the hot-plug function, retain the allocation parameters of the first memory resource from the startup phase of the server.

19. A server, comprising the system for dynamically combining memory resource allocation of a multi-level switch according to any one of claims 1 to 18.

20. A method for dynamically combining memory resource allocation of a multi-level switch, comprising: using at least one processing circuit connected by at least one peripheral interconnect link, wherein the at least one processing circuit is configured to: execute a read engine, wherein the read engine reads type data of a target configuration space when the server is started; propagate the type data to an identification engine, the identification engine identifying a first hardware resource connected to the current peripheral interconnect link based on the type data and, if the first hardware resource includes a multi-level switch, identifying a second hardware resource connected to the multi-level switch; and propagate the identification result of the identification engine to an allocation engine, wherein the allocation engine, upon determining from the first identification result of the first hardware resource that a multi-level switch is connected to the current peripheral interconnect link, allocates a memory resource to the multi-level switch and the second hardware resource according to the second identification result of the second hardware resource.

21. A computer-readable storage medium having a computer program stored therein, wherein, when the computer program is executed by a processor, the steps of the method for dynamically combining memory resource allocation of a multi-level switch according to claim 20 are implemented.

22. A computer program product, comprising a computer program, wherein, when the computer program is executed by a processor, the steps of the method for dynamically combining memory resource allocation of a multi-level switch according to claim 20 are implemented.
CN202510703359.3A 2025-05-28 2025-05-28 System and method for dynamically combining memory resources and allocating multi-level switches Active CN120216211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510703359.3A CN120216211B (en) 2025-05-28 2025-05-28 System and method for dynamically combining memory resources and allocating multi-level switches

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510703359.3A CN120216211B (en) 2025-05-28 2025-05-28 System and method for dynamically combining memory resources and allocating multi-level switches

Publications (2)

Publication Number Publication Date
CN120216211A CN120216211A (en) 2025-06-27
CN120216211B true CN120216211B (en) 2025-08-22

Family

ID=96117237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510703359.3A Active CN120216211B (en) 2025-05-28 2025-05-28 System and method for dynamically combining memory resources and allocating multi-level switches

Country Status (1)

Country Link
CN (1) CN120216211B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120238511B (en) * 2025-05-28 2025-08-29 苏州元脑智能科技有限公司 System and method for memory resource allocation of multi-layer switch firmware combination
CN120429129B (en) * 2025-07-07 2025-09-05 苏州元脑智能科技有限公司 High-speed peripheral component interconnect bus resource allocation system and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110399217A (en) * 2019-06-27 2019-11-01 苏州浪潮智能科技有限公司 A memory resource allocation method, device and equipment
CN118034917A (en) * 2024-01-18 2024-05-14 苏州元脑智能科技有限公司 PCIe resource allocation method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240232336A9 (en) * 2022-10-25 2024-07-11 Mellanox Technologies, Ltd. Method for definition, consumption, and controlled access of dpu resources and services
CN119883578B (en) * 2025-03-28 2025-07-08 苏州元脑智能科技有限公司 Task scheduling method and system, electronic device, and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110399217A (en) * 2019-06-27 2019-11-01 苏州浪潮智能科技有限公司 A memory resource allocation method, device and equipment
CN118034917A (en) * 2024-01-18 2024-05-14 苏州元脑智能科技有限公司 PCIe resource allocation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN120216211A (en) 2025-06-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant