CN120256072A - Processor core determination method and device, storage medium and electronic device
- Publication number: CN120256072A (application number CN202510744582.2A)
Abstract
The application discloses a method and device for determining a processor core, a storage medium, and an electronic device, and relates to the technical field of cloud services. A task request initiated by a target object on the cloud platform is received, and the requirement information of the target task is determined from the request. The grouping node corresponding to each processor core in the cloud platform is determined according to the processor-core grouping node information, where the processor-core grouping node information includes the correspondence between a plurality of grouping nodes and the processor cores, and each grouping node corresponds to at least one processor core; a target processor core is then determined according to the grouping node corresponding to each processor core and the allocation status of each processor core, and the target task is executed using the target processor core. This solves the technical problem that the process of configuring processor cores on a cloud platform is cumbersome, and achieves the technical effect of configuring processor cores on the cloud platform accurately and efficiently.
Description
Technical Field
The present application relates to the field of cloud services, and in particular, to a method and apparatus for determining a processor core, a storage medium, and an electronic device.
Background
With the development and popularization of cloud services, a series of virtualization technologies based on server hardware have emerged, and more and more software services have been migrated to the virtual machines provided by cloud services. Among them are software services with extremely demanding hardware requirements, which has driven advanced ways for cloud platforms to use server hardware; CPU (Central Processing Unit) isolation is one such technology.
In the related art, a CPU is typically a multi-core processor, i.e., one processor contains a plurality of cores, each of which can execute tasks independently. By processing multiple tasks in parallel, a multi-core processor significantly improves multi-tasking efficiency. If some CPU cores need to be isolated and assigned exclusively to certain processes, a CPU isolation configuration has to be added to the server's boot entry and the server then restarted so that the boot entry takes effect and the corresponding CPU cores are isolated; the whole process is cumbersome to configure and inefficient.
Therefore, the related art suffers from the problem that the process of configuring processor cores on a cloud platform is cumbersome.
Disclosure of Invention
The application provides a method and device for determining a processor core, a storage medium, and an electronic device, which at least solve the problem in the related art that the process of configuring processor cores on a cloud platform is cumbersome.
The application provides a method for determining processor cores, applied to a cloud platform. The method includes: receiving a task request initiated by a target object on the cloud platform, and determining requirement information of a target task from the task request, where the requirement information includes at least the number of processor cores required to execute the target task; determining the grouping node corresponding to each processor core in the cloud platform according to processor-core grouping node information, where the processor-core grouping node information includes the correspondence between a plurality of grouping nodes and the processor cores, and each grouping node corresponds to at least one processor core; and determining a target processor core according to the grouping node corresponding to each processor core and the allocation status of each processor core, and executing the target task with the target processor core, where the allocation status indicates whether a processor core has been allocated a task.
The application further provides a device for determining processor cores, including a task receiving module, an information determining module, and a task allocation module. The task receiving module is configured to receive a task request initiated by a target object on the cloud platform and determine requirement information of a target task from the task request, the requirement information including at least the number of processor cores required to execute the target task. The information determining module is configured to determine the grouping node corresponding to each processor core in the cloud platform according to processor-core grouping node information, where the processor-core grouping node information includes the correspondence between a plurality of grouping nodes and the processor cores, and each grouping node corresponds to at least one processor core. The task allocation module is configured to determine a target processor core according to the grouping node corresponding to each processor core and the allocation status of each processor core, and to execute the target task with the target processor core, where the allocation status indicates whether a processor core has been allocated a task.
The application also provides an electronic device comprising a memory for storing a computer program and a processor for implementing the steps of any of the above-mentioned processor core determination methods when executing the computer program.
The present application also provides a computer readable storage medium having a computer program stored therein, wherein the computer program when executed by a processor implements the steps of any of the above-described processor core determination methods.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the above methods of determining a processor core.
According to the application, the requirement information of the target task can be parsed and determined from the received task request, and an optimal allocation scheme satisfying the requirements of the target task is then determined automatically from the grouping node information of the processor cores and the allocation status of each processor core, so that the target task is processed by the determined target processor cores. Automatic allocation of processor cores is thus achieved and a cumbersome configuration process is avoided, which solves the technical problem that configuring processor cores on a cloud platform is cumbersome and achieves the technical effect of configuring processor cores on the cloud platform accurately and efficiently.
Drawings
For a clearer description of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic illustration of an application scenario of a method of determining a processor core according to an embodiment of the present application;
FIG. 2 is a flow diagram of an alternative method of determining a processor core according to an embodiment of the application;
FIG. 3 is a flow chart (I) of an alternative method of determining a processor core according to an embodiment of the application;
FIG. 4 is a flow chart (II) of an alternative method of determining a processor core according to an embodiment of the application;
FIG. 5 is a block diagram of an alternative processor core determination apparatus in accordance with an embodiment of the present application.
102 denotes a terminal device, 104 denotes a server, 52 denotes a task receiving module, 54 denotes an information determining module, and 56 denotes a task allocation module.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the present application.
It should be noted that in the description of the present application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "first," "second," and the like in this specification are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The present application will be further described in detail below with reference to the drawings and detailed description for the purpose of enabling those skilled in the art to better understand the aspects of the present application.
According to one aspect of an embodiment of the present application, a method of determining a processor core is provided. Optionally, in this embodiment, the above method of determining a processor core may be applied, but is not limited to, to a hardware environment including the terminal device 102 and the server 104 shown in FIG. 1. The server 104 may be connected to the terminal device 102 via a network and may be used to provide services (e.g., application services) to the terminal device 102 or to a client installed on the terminal device 102; a database may be provided on the server 104, or independently of it, to provide data storage services for the server 104.
The network may include, but is not limited to, at least one of a wired network and a wireless network. The wired network may include, but is not limited to, at least one of a wide area network, a metropolitan area network, and a local area network; the wireless network may include, but is not limited to, at least one of WIFI (Wireless Fidelity) and Bluetooth. The terminal device 102 may be, but is not limited to, a PC (Personal Computer), a mobile phone, a tablet computer, etc. The server 104 may be, but is not limited to, a cloud server, a server cluster, or another type of server.
The method for determining a processor core according to the embodiment of the present application may be performed by the server 104, by the terminal device 102, or jointly by the server 104 and the terminal device 102. When performed by the terminal device 102, the method may also be executed by a client installed on it.
Taking the terminal device 102 performing the method for determining a processor core in this embodiment as an example, the terminal device 102 may be a physical host, and the method is applied to that physical host. The memory callable by the physical host is divided into a plurality of memory levels, one of which includes at least one type of memory, and the plurality of memory levels includes a first memory level corresponding to the physical memory of the physical host. The physical memory consists of physical memory modules directly and tightly coupled to the host hardware and is the core and foundation of the memory architecture. It is usually composed of DRAM (Dynamic Random Access Memory), has an extremely fast read-write speed, and can answer the processor's memory access requests with nanosecond response times, which makes it suitable for holding the core code of the virtual machine operating system, frequently called system function libraries, and key process data that must run at high speed. For example, in the initial stage of virtual machine startup, the operating system kernel needs to be loaded quickly and the hardware drivers initialized to establish the basic system running environment; at this point the physical memory completes data reads and writes with extremely high efficiency and ensures that the virtual machine starts quickly and stably. During virtual machine operation, application components with extremely strict memory read-write performance requirements, such as the transaction processing module of a database management system or a real-time engine, likewise rely on physical memory so that the whole virtual machine runs smoothly and responds efficiently.
FIG. 2 is a flow chart of an alternative method of determining a processor core according to an embodiment of the application, as shown in FIG. 2, the method flow may include the steps of:
Step S202, a task request initiated by a target object on the cloud platform is received, and the requirement information of the target task is determined from the task request, wherein the requirement information includes at least the number of processor cores required to execute the target task;
Optionally, in the step S202, the task request may be initiated by a user or may be initiated by a system, and the task corresponds to a process in a task manager in a server.
It should be noted that the above requirement information may further include a processor core designated for executing the target task.
Step S204, determining a grouping node corresponding to each processor core in the cloud platform according to the grouping node information of the processor cores, wherein the grouping node information of the processor cores comprises the corresponding relation between a plurality of grouping nodes and the processor cores, and each grouping node at least corresponds to one processor core;
Optionally, in step S204, the cloud platform has a plurality of packet nodes, each packet node corresponds to one or more processor cores, and each processor core has its corresponding packet node.
Step S206, the target processor core is determined according to the grouping node corresponding to each processor core and the allocation status of each processor core, and the target processor core is used to execute the target task, wherein the allocation status indicates whether the processor core has been allocated a task.
Optionally, in step S206, each processor core has one of two allocation statuses, namely allocatable and non-allocatable. If a processor core is currently processing a task allocated to it, its status is non-allocatable; only after the core finishes processing the task and is released is its allocation status updated back to allocatable.
According to the embodiment of the application, the requirement information of the target task can be parsed and determined from the received task request, and an optimal allocation scheme satisfying the requirements of the target task is then determined automatically from the grouping node information of the processor cores and the allocation status of each processor core, so that the target task is processed by the determined target processor cores. Automatic allocation of processor cores is thus achieved and a cumbersome configuration process is avoided, which solves the technical problem that configuring processor cores on a cloud platform is cumbersome and achieves the technical effect of configuring processor cores on the cloud platform accurately and efficiently.
In an exemplary embodiment, before the grouping node corresponding to each processor core in the cloud platform is determined according to the processor-core grouping node information, the method further includes obtaining the memory controller connected to each processor core and determining the processor-core grouping nodes according to the connection relationship between each processor core and the memory controllers, where processor cores connected to the same memory controller correspond to the same grouping node.
Optionally, in the foregoing embodiment, the server architecture adopted by multiprocessor systems such as the cloud platform is the NUMA (Non-Uniform Memory Access) architecture, a memory organization designed to remove the memory access bottleneck in multiprocessor systems.
The NUMA architecture consists of a plurality of nodes (NUMA nodes), each of which contains several CPU cores and has its own local memory directly attached to it. The processor cores within a node access the local memory through the same memory controller, so a CPU core accessing memory inside its own node has low latency and high access speed. Cross-node access requires inter-node communication through the node interconnect and therefore has higher latency.
In an alternative embodiment, the topology information of the NUMA architecture can be obtained through the numactl command in a Linux system, and includes the CPU core groups, memory groups, distances between CPUs, and other information. From the queried topology, an array of NUMA Node structures is constructed: the array size is the number of NUMA nodes, and the index of an array element is the NUMA node number. The value of each array element is an Int-type array whose size is the number of CPU cores contained in that NUMA node and whose values are the CPU numbers belonging to that node. For example, if the server corresponding to the cloud platform has 2 NUMA nodes and 6 CPUs in total, where Node0 contains the CPU cores numbered 1, 3, and 5 and Node1 contains the CPU cores numbered 2, 4, and 6, the constructed NUMA node grouping is NODE_CPU_GROUP = [Node0, Node1], where Node0 = [1, 3, 5] and Node1 = [2, 4, 6].
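For illustration only, the grouping just described can be reproduced by parsing the output of numactl; the sketch below assumes the `numactl --hardware` form of the command and its usual "node N cpus: ..." lines, and the function name is chosen here for readability rather than taken from the embodiment.

```python
import re
import subprocess

def build_node_cpu_group():
    """Return a list indexed by NUMA node number, where each element is the
    list of CPU core numbers belonging to that node (NODE_CPU_GROUP)."""
    output = subprocess.run(
        ["numactl", "--hardware"], capture_output=True, text=True, check=True
    ).stdout
    groups = {}
    for line in output.splitlines():
        # Lines of interest look like: "node 0 cpus: 1 3 5"
        match = re.match(r"node\s+(\d+)\s+cpus:\s*(.*)", line)
        if match:
            node_id = int(match.group(1))
            groups[node_id] = [int(cpu) for cpu in match.group(2).split()]
    # The list position doubles as the NUMA node number.
    return [groups[node_id] for node_id in sorted(groups)]

# With the example of the embodiment (Node0 = cores 1,3,5 and Node1 = cores 2,4,6)
# the result would be NODE_CPU_GROUP = [[1, 3, 5], [2, 4, 6]].
```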
In an exemplary embodiment, before the target processor core is determined according to the grouping node corresponding to each processor core and the allocation status of each processor core, the method further includes: constructing an initial allocation status list, where the first element count of the initial allocation status list equals the number of processor cores of the cloud platform and its elements correspond one-to-one with the processor cores of the cloud platform, an element taking a first value indicates that the corresponding processor core is in an allocatable state, an element taking a second value indicates that the corresponding processor core is in a non-allocatable state, and the initial value of every element of the list is the first value; querying task processing records from a database and determining the processor cores associated with the task processing records; updating the values of the elements corresponding to the processor cores associated with the task processing records to the second value to obtain the allocation status list; and determining the allocation status of each processor core according to the allocation status list.
Optionally, in the above embodiment, an array may be used in the cloud platform to represent the allocation status list. The number of CPU cores of the server is queried first, and the array size is then defined to be equal to that number. The value of each array element is a Boolean: True indicates that the core is in a non-allocatable state and False indicates that it is allocatable; the index of an element is the CPU core number of the server, and the value of the element is the allocation status of that CPU core. For example, if the server corresponding to the cloud platform has 6 CPU cores in total, an allocation status array CPU_FLAGS = [CPU0, CPU1, CPU2, CPU3, CPU4, CPU5] is defined, and when CPU_FLAGS is initialized the default value of CPU0-CPU5 is False. Task processing records are then queried from the database; the database records the CPU cores handling a task after the task has been assigned processor cores, and the task processing record is deleted once the task is finished, so every record in the database corresponds to a processor core that is currently processing a task. For example, if CPU1, CPU2, and CPU3 appear in the queried task processing records, then CPU1, CPU2, and CPU3 are in a non-allocatable state, and their values in CPU_FLAGS are updated to True. The finally constructed allocation status array is CPU_FLAGS = [CPU0, CPU1, CPU2, CPU3, CPU4, CPU5], where the values of CPU1, CPU2, and CPU3 are True and those of CPU0, CPU4, and CPU5 are False.
In an exemplary embodiment, after updating the values of the elements corresponding to the processor cores associated with the task processing records to the second value, the method further includes querying the system-level processor cores of the cloud platform, where a system-level processor core is used to handle the operation tasks of the cloud platform itself, and updating the values of the elements corresponding to the system-level processor cores to the second value.
Optionally, in the above embodiment, querying the cloud platform configuration reveals, for example, that CPU0 is a system-level CPU core on which the cloud platform itself is running. That CPU core therefore needs to be reserved as a system-level core and must not be assigned any task, so the value of CPU0 in the allocation status array CPU_FLAGS constructed above also needs to be updated to True.
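Combining the two updates above, the allocation status array can be sketched as follows; passing in the busy core numbers (from the task processing records) and the reserved system-level cores (from the platform configuration) as plain lists is an assumption made for illustration.

```python
import os

def build_cpu_flags(busy_cores, system_cores):
    """Build the allocation status list CPU_FLAGS: the index is the CPU core
    number, False means allocatable, True means non-allocatable."""
    total_cores = os.cpu_count()            # number of CPU cores on the host
    cpu_flags = [False] * total_cores       # every core allocatable by default
    for core in busy_cores:
        cpu_flags[core] = True              # cores still processing a task
    for core in system_cores:
        cpu_flags[core] = True              # cores reserved for the platform
    return cpu_flags

# Example from the embodiment: 6 cores, CPU1-CPU3 busy, CPU0 reserved.
# build_cpu_flags([1, 2, 3], [0]) -> [True, True, True, True, False, False]
```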
In one exemplary embodiment, receiving a task request initiated by a target object on the cloud platform and determining the requirement information of the target task from the task request includes: obtaining a requirement list of the target task from the task request; determining the number of processor cores required to execute the target task according to the second element count of the requirement list; and determining the processor cores designated by the target task according to the values of the elements of the requirement list and the unique identifiers of the processor cores of the cloud platform.
Optionally, in the above embodiment, after the task request is received, the requirement list of the target task and the target task ID may be parsed from the task request. The requirement list is an Int-type list whose size equals the number of CPU cores required by the target task; a specific CPU core to be allocated can be designated by the value of a list element, and if no CPU core needs to be designated, the element is set to a preset value (for example, -1). For example, the requirement list CPU_CONTAINER = [5, -1, -1, -1, -1] of a target task indicates that the target task needs 5 CPU cores in total, of which 1 must be CPU5 and the remaining four are allocated automatically.
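As a sketch only, the requirement list could be read out of a task request as shown below; the request being a JSON body with `task_id` and `cpu_container` fields is an assumption, since the embodiment does not prescribe a request format.

```python
import json

UNSPECIFIED = -1   # preset value meaning "any allocatable core may be chosen"

def parse_task_request(raw_request):
    """Return (task_id, cpu_container): the list length is the number of
    cores the task needs, and each element is either a designated core
    number or UNSPECIFIED."""
    request = json.loads(raw_request)          # assumed JSON payload
    return request["task_id"], [int(v) for v in request["cpu_container"]]

# Example matching the embodiment: five cores needed, one of them must be CPU5.
task_id, cpu_container = parse_task_request(
    '{"task_id": "task-42", "cpu_container": [5, -1, -1, -1, -1]}'
)
designated = [c for c in cpu_container if c != UNSPECIFIED]   # [5]
required_count = len(cpu_container)                           # 5
```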
Through this embodiment, the grouping node information of the processor cores and the allocation status of the processor cores are expressed clearly and simply in standardized data structures, and the requirement information of the target task is integrated with them, providing a data basis for the subsequent allocation of processor cores and further improving allocation efficiency.
In one exemplary embodiment, determining the target processor core according to the grouping node corresponding to each processor core and the allocation status of each processor core includes prioritizing the plurality of grouping nodes to determine a priority list of the grouping nodes, and determining the target processor core from the processor cores corresponding to the grouping nodes in the priority list.
Optionally, in the above embodiment, the requirement information of the target task, the grouping node information of each CPU core, and the allocation status of each CPU core have already been determined, and CPU cores must now be selected from the plurality of grouping nodes as target processor cores to process the target task. During selection, the grouping nodes can be prioritized by a NUMA Node selection algorithm, and processor cores are preferentially selected from the higher-ranked grouping nodes according to the ranking result. The basis of the NUMA Node selection algorithm can be the node number, the available memory capacity of the node, the number of allocatable processor cores, the node distribution of the designated processor cores, or a comprehensive combination of these conditions.
Through this embodiment, various ordering conditions are taken into account according to actual requirements; ordering the nodes by these conditions optimizes the allocation efficiency of the processor cores and makes it possible to determine the allocation scheme with the best task processing performance.
In an exemplary embodiment, prioritizing the plurality of grouping nodes to determine the priority list of the grouping nodes includes sorting the grouping nodes in descending order by their node numbers to obtain a first sorting result, and determining the priority list according to the first sorting result.
Optionally, in the above embodiment, for example, the server corresponding to the cloud platform contains 3 NUMA nodes, Node0, Node1, and Node2; sorting the three nodes in descending order by node number yields the priority list [Node2, Node1, Node0].
In an exemplary embodiment, prioritizing the plurality of grouping nodes to determine the priority list further includes obtaining the available memory capacity corresponding to each of the grouping nodes, sorting the grouping nodes in descending order by available memory capacity to obtain a second sorting result, and determining the priority list according to the second sorting result.
Optionally, in the above embodiment, for example, the available memory capacity of Node0 is 5 GB, that of Node1 is 6 GB, and that of Node2 is 2 GB; sorting the three nodes in descending order by available memory capacity yields the priority list [Node1, Node0, Node2].
Through this embodiment, processor cores can be preferentially selected from nodes with large available memory to handle the allocated tasks, which ensures reasonable planning and balanced use of the resources of each node and improves the rationality of processor core allocation.
In an exemplary embodiment, prioritizing the plurality of grouping nodes to determine the priority list includes obtaining the number of allocatable processor cores corresponding to each grouping node, comparing the number of allocatable processor cores of each grouping node with a target number to obtain a comparison result, where the target number is the number of processor cores required to execute the target task, and determining the priority list according to the comparison result.
In an exemplary embodiment, determining the priority list according to the comparison result includes: when it is determined that the sum of the numbers of allocatable processor cores of the plurality of grouping nodes is greater than or equal to the target number and at least one grouping node has a number of allocatable processor cores greater than or equal to the target number, determining the first grouping nodes, where a first grouping node is a grouping node whose number of allocatable processor cores is greater than or equal to the target number; sorting the first grouping nodes in descending order by their numbers of allocatable processor cores to obtain a third sorting result; and determining the priority list according to the third sorting result.
Optionally, in the above embodiment, for example, 3 CPU cores are required to process the target task, 4 CPU cores can be allocated in Node0, 2 in Node1, and 5 in Node2. The numbers of allocatable CPU cores in Node0 and Node2 are greater than the number required by the target task, while the number in Node1 is smaller, so Node1 does not participate in the sorting, and sorting Node0 and Node2 in descending order by their numbers of allocatable processor cores yields the priority list [Node2, Node0].
Through this embodiment, the CPU cores required by a task can be preferentially selected from the same grouping node, which reduces the performance loss caused by cross-node computation and improves task processing efficiency.
In an exemplary embodiment, determining the priority list according to the comparison result further includes: when it is determined that the sum of the numbers of allocatable processor cores of the plurality of grouping nodes is greater than or equal to the target number but no single grouping node has a number of allocatable processor cores greater than or equal to the target number, sorting the grouping nodes in descending order by their numbers of allocatable processor cores to obtain a fourth sorting result, and determining the priority list according to the fourth sorting result.
Optionally, in the above embodiment, for example, 8 CPU cores are required to process the target task, 2 CPU cores can be allocated in Node0, 4 in Node1, and 3 in Node2. The total number of allocatable CPU cores is larger than the number required by the target task, but no single node can carry the whole requirement, so sorting the three nodes in descending order by their numbers of allocatable processor cores yields the priority list [Node1, Node2, Node0].
Through this embodiment, when it cannot be guaranteed that all processor cores executing a task come from the same node, it can still be ensured that as many of them as possible are located on the same node.
In an exemplary embodiment, after the comparison result is obtained, the method further includes: when it is determined that the sum of the numbers of allocatable processor cores of the plurality of grouping nodes is smaller than the target number, determining that the first processing result of the target task is a processing exception and feeding the first processing result back to the target object.
Optionally, in the above embodiment, for example, 7 CPU cores are required to process the target task, 2 CPU cores can be allocated in Node0, 3 in Node1, and 1 in Node2. The total number of allocatable CPU cores is smaller than the number required by the target task, so CPU cores cannot be allocated for the target task; the task processing result fed back at this point is an exception, and the exception is caused by insufficient resources.
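The comparison logic of the last few embodiments can be condensed into one helper. The sketch below assumes the per-node counts of allocatable cores have already been gathered into a mapping from node number to count:

```python
def build_priority_list(allocatable_per_node, target):
    """Order NUMA nodes for allocation given how many cores each node can
    still provide and how many cores the target task needs."""
    if sum(allocatable_per_node.values()) < target:
        # Not enough cores on the whole host: report a processing exception.
        raise RuntimeError("allocation failed: insufficient CPU resources")

    fitting = {n: c for n, c in allocatable_per_node.items() if c >= target}
    if fitting:
        # At least one node can host the whole task: rank only those nodes,
        # largest capacity first (third sorting result).
        return sorted(fitting, key=fitting.get, reverse=True)
    # No single node fits: rank every node by capacity so the task is spread
    # over as few nodes as possible (fourth sorting result).
    return sorted(allocatable_per_node, key=allocatable_per_node.get, reverse=True)

# Examples from the embodiments above:
print(build_priority_list({0: 4, 1: 2, 2: 5}, target=3))   # [2, 0]
print(build_priority_list({0: 2, 1: 4, 2: 3}, target=8))   # [1, 2, 0]
```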
In one exemplary embodiment, determining the target processor core from the processor cores corresponding to the grouping nodes in the priority list includes traversing the processor cores corresponding to the grouping nodes in the priority list in order, obtaining the number of allocatable processor cores among the traversed processor cores, stopping the traversal when the number of allocatable processor cores is greater than or equal to the target number, and determining the allocatable processor cores among the traversed processor cores as the target processor cores.
Optionally, in the above embodiment, for example, 3 CPU cores are required to process the target task, 4 CPU cores can be allocated in Node0, 2 in Node1, and 5 in Node2, and the priority list is [Node2, Node0]; 3 CPU cores are then selected directly from Node2 and determined as the target processor cores for processing the target task.
Optionally, in the above embodiment, for example, 8 CPU cores are required to process the target task, 2 CPU cores can be allocated in Node0, 4 in Node1, and 3 in Node2, and the priority list is [Node1, Node2, Node0]; 4 CPU cores are preferentially selected from Node1, then 3 CPU cores from Node2, and finally 1 CPU core from Node0, and the 8 selected CPU cores are determined as the target processor cores for processing the target task.
In an alternative embodiment, if several processor cores are designated in the requirement list of the target task, the number of designated CPU cores matched on each NUMA node is counted, and the nodes are sorted in descending order by this match count; in this way, when allocation is possible, the required CPU cores are preferentially allocated on the same node so as to reduce the performance loss caused by cross-node computation.
In an alternative embodiment, the above process can be completed automatically by a programmed CPU allocator. The allocator first obtains four data structures: the grouping node structure array NODE_CPU_GROUP, the processor core allocation status list CPU_FLAGS, the grouping node priority list PREFER_NODE_ORDER, and the requirement list CPU_CONTAINER of the target task; these four data structures are the input of the CPU allocator. The values of the CPU_CONTAINER array are then visited iteratively in the order given by PREFER_NODE_ORDER. If the current value is -1, the CPU_FLAGS entries indexed by the CPU core numbers of the current node are examined, and if a corresponding CPU core is allocatable, the current CPU_CONTAINER element is updated to that CPU core number. If the current value is not -1, it is first checked whether the current node contains that CPU core number; if it does not, the next node is visited, and if it does, the CPU_FLAGS entry indexed by that number is checked: if the state is allocatable the designated core is used, and if it is not allocatable an exception is thrown (the designated CPU core cannot be allocated). If, after all nodes have been traversed, some CPU_CONTAINER entries remain unfilled, an exception is likewise thrown (insufficient resources, allocation fails).
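The allocator described above can be reconstructed roughly as follows. This is a simplified reading of the flow, in which designated cores are reserved in a first pass and the remaining slots are then filled node by node in priority order; it is a sketch, not the exact implementation of the embodiment.

```python
UNSPECIFIED = -1

def allocate_cpus(node_cpu_group, cpu_flags, prefer_node_order, cpu_container):
    """Fill cpu_container with concrete core numbers; raises RuntimeError in
    the two failure cases described above (designated core busy, or
    insufficient resources)."""
    # Pass 1: designated cores - each must still be allocatable.
    for core in cpu_container:
        if core == UNSPECIFIED:
            continue
        if cpu_flags[core]:
            raise RuntimeError(f"designated core CPU{core} cannot be allocated")
        cpu_flags[core] = True                       # reserve the designated core

    # Pass 2: fill the remaining slots node by node, preferred node first.
    for node in prefer_node_order:
        for core in node_cpu_group[node]:
            if UNSPECIFIED not in cpu_container:
                return cpu_container                 # every slot satisfied
            if not cpu_flags[core]:                  # core is allocatable
                cpu_container[cpu_container.index(UNSPECIFIED)] = core
                cpu_flags[core] = True

    if UNSPECIFIED in cpu_container:
        raise RuntimeError("allocation failed: insufficient CPU resources")
    return cpu_container

# Example: 6-core host, CPU0 reserved, task needs CPU5 plus two further cores.
node_cpu_group = [[1, 3, 5], [0, 2, 4]]
cpu_flags = [True, False, False, False, False, False]
print(allocate_cpus(node_cpu_group, cpu_flags, [0, 1], [5, -1, -1]))  # [5, 1, 3]
```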
In an exemplary embodiment, after the requirement information of the target task is determined from the task request, the method further includes: when it is determined that the requirement information further includes a processor core designated by the target task, obtaining the allocation status of the designated processor core; when the allocation status of the designated processor core is determined to be non-allocatable, determining that the second processing result of the target task is a processing exception; and feeding the second processing result back to the target object.
Optionally, in the above embodiment, for example, the requirement list CPU_CONTAINER = [5, -1, -1, -1, -1] of the target task specifies that CPU5 must be allocated, but the cloud platform queries the allocation status of CPU5 and finds that it has already been allocated, so the requirement of the target task cannot be met; the task processing result fed back at this point is an exception, and the exception is caused by the designated processor core being unavailable for allocation.
In one exemplary embodiment, after the target processor core is determined according to the grouping node corresponding to each processor core and the allocation status of each processor core, the method includes: when it is determined that the target processor core starts executing the target task, adding a task processing record of the target processor core to the database to update the database, and updating the allocation status list according to the task processing records of the updated database.
Optionally, in the above embodiment, after the CPU cores have been allocated for each task, the numbers of the allocated target CPU cores and the corresponding task ID are stored in the database, and the CPU core allocation status synchronization thread is triggered to update the allocation status list stored on the server.
In one exemplary embodiment, after the target processor core is determined according to the grouping node corresponding to each processor core and the allocation status of each processor core, the method includes: when it is determined that the target processor core has finished executing the target task, deleting the task processing record of the target processor core from the database to update the database, and updating the allocation status list according to the task processing records of the updated database.
Optionally, in the above embodiment, for each task, the CPU cores need to be released after the task processing is completed: according to the task ID, the task processing record and the corresponding CPU core numbers are deleted from the database, the CPU core allocation status synchronization thread is triggered, and the allocation status list stored on the server is updated.
By the embodiment, the allocation status lists in the database and the server can be synchronized in time, so that the accuracy of the allocation of the processor cores is ensured, and the occurrence of abnormal allocation conditions is reduced.
An alternative method of determining a processor core according to an embodiment of the present application is described below in conjunction with an alternative embodiment, in which the method of determining a processor core includes the following steps, as shown in FIG. 3.
Step S301, construct the CPU core allocation status array. First query the number of CPU cores of the server, then define the array size to be equal to the number of CPU cores of the cloud platform and set the array element values to the Boolean type, where True indicates that the core is in a non-allocatable state and False indicates that it is allocatable; the index of an element is the CPU core number of the server, and the value of the element is the allocation status of that CPU core. For example, the server corresponding to the cloud platform has 6 CPU cores in total, so an allocation status array CPU_FLAGS = [CPU0, CPU1, CPU2, CPU3, CPU4, CPU5] is defined, and the default value of CPU0-CPU5 when CPU_FLAGS is initialized is False.
Step S302, query the task processing records from the database. The database records the CPU cores handling a task after the task has been assigned processor cores, and the task processing record is deleted once the task processing is completed, so all the cores recorded in the database are processing tasks. For example, CPU1, CPU2, and CPU3 appear in the queried task processing records, so CPU1, CPU2, and CPU3 are in a non-allocatable state, and their values in CPU_FLAGS are updated to True. Querying the cloud platform configuration further reveals that CPU0 is a system-level CPU core on which the cloud platform itself runs, so this core needs to be reserved as a system-level core and not assigned any task; the value of CPU0 in the constructed allocation status array CPU_FLAGS is therefore also updated to True.
Step S303, query the topology information of the server and construct the node structure array: the array size is the number of NUMA nodes and the index of an array element is the NUMA node number; the value of each array element is an Int-type array whose size is the number of CPU cores contained in that NUMA node and whose values are the CPU numbers belonging to that node. For example, the server corresponding to the cloud platform has 2 NUMA nodes and 6 CPUs in total, where Node0 contains the CPU cores numbered 1, 3, and 5 and Node1 contains the CPU cores numbered 0, 2, and 4, so the constructed NUMA node grouping is NODE_CPU_GROUP = [Node0, Node1], where Node0 = [1, 3, 5] and Node1 = [0, 2, 4].
Step S304, construct the node priority list through the node selection algorithm. Taking this embodiment as an example, the NUMA node selection algorithm performs 4 successive conditional sorts, each step building on the sorting result of the previous one; the specific sorting conditions are as follows (a sketch combining these conditions is given after the list):
1. Sort in descending order by NUMA node number.
2. Taking the NUMA nodes as the statistical dimension, obtain the remaining available memory on each NUMA node and sort in descending order by the amount of remaining memory.
3. If the total number of allocatable CPU cores across the nodes is larger than the task demand, the nodes whose individual number of allocatable CPU cores is larger than the task demand are sorted in descending order by that number; if no single node has more allocatable CPU cores than the task demands, all nodes are sorted in descending order by their numbers of allocatable CPU cores.
4. Count, on each node, the number of CPU cores designated by the target task that match that node, and sort in descending order by this match count; in this way, when allocation is possible, the required CPU cores are preferentially allocated on the same node to reduce the performance loss caused by cross-node computation.
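Read as successive stable sorts (so the later conditions dominate and the earlier ones only break ties), the four conditions can be sketched as below; the per-node record layout and the function name are assumptions made for illustration, and the insufficient-resources check is assumed to have happened beforehand.

```python
def order_nodes(nodes, target, designated_cores):
    """Rank NUMA nodes by the four conditions above, applied as successive
    stable sorts; each node is a dict with 'id', 'free_mem' (MB) and
    'allocatable' (list of free core numbers)."""
    # 1. Descending NUMA node number.
    ranked = sorted(nodes, key=lambda n: n["id"], reverse=True)
    # 2. Descending remaining available memory.
    ranked = sorted(ranked, key=lambda n: n["free_mem"], reverse=True)
    # 3. If some node can hold the whole task by itself, keep only those nodes;
    #    then sort by allocatable-core count, descending.
    fitting = [n for n in ranked if len(n["allocatable"]) >= target]
    if fitting:
        ranked = fitting
    ranked = sorted(ranked, key=lambda n: len(n["allocatable"]), reverse=True)
    # 4. Descending count of designated cores hosted by the node.
    ranked = sorted(
        ranked,
        key=lambda n: len(set(n["allocatable"]) & set(designated_cores)),
        reverse=True,
    )
    return [n["id"] for n in ranked]

# Example: the task needs 3 cores and designates CPU5.
nodes = [
    {"id": 0, "free_mem": 5120, "allocatable": [1, 3, 5, 7]},
    {"id": 1, "free_mem": 6144, "allocatable": [0, 2]},
    {"id": 2, "free_mem": 2048, "allocatable": [4, 6, 8, 10, 12]},
]
print(order_nodes(nodes, target=3, designated_cores=[5]))   # [0, 2]
```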
Step S305, execute the CPU allocator: traverse the CPU cores of the nodes in turn according to the sorting result until enough CPU cores have been matched, then determine the matched CPU cores as the target processor cores and start processing the target task. If the matching fails, for example because there are not enough CPU cores or a designated CPU core is unavailable for allocation, execute step S306.
Step S306, throw a task allocation failure exception and feed back the reason for the failure.
Step S307, after the CPU cores have been allocated to process the target task, add the task processing record to the database and synchronize the data to the server to update the CPU core states.
In an alternative embodiment, as shown in FIG. 4, after the target task has been processed, the database and the server need to be synchronized, which specifically includes the following steps (a sketch of this release flow is given after the steps):
Step S401, the task processing record is located in the database according to the target task ID.
Step S402, the CPU cores associated with the target task ID are deleted from the database.
Step S403, the database data is synchronized to the server to update the allocation status of the CPU cores.
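For illustration, the record handling on both sides of a task's lifetime can be sketched with an in-memory stand-in for the database; the dictionary and function names below are placeholders, not an interface defined by the embodiment.

```python
task_records = {}      # stand-in for the database: task ID -> allocated cores
cpu_flags = [True, False, False, False, False, False]   # CPU0 reserved

def record_allocation(task_id, cores):
    """Called when a task starts: persist the record and mark its cores busy."""
    task_records[task_id] = cores
    for core in cores:
        cpu_flags[core] = True

def release_task(task_id):
    """Called when a task finishes (steps S401-S403): drop the record and
    mark its cores allocatable again."""
    for core in task_records.pop(task_id, []):
        cpu_flags[core] = False

record_allocation("task-42", [1, 3])   # CPU1 and CPU3 become non-allocatable
release_task("task-42")                # CPU1 and CPU3 are released
```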
According to this embodiment, data structures are defined for the grouping node information of the processor cores, the allocation status of the processor cores, and the requirement information of the target task, and dynamic allocation of CPU cores is completed by integrating these data through a reasonable allocation flow and selection algorithm. This reduces the concurrent resource consumption caused by frequent server operations, supports cloud platform resources with exclusive-CPU requirements by dynamically applying for exclusive CPU resources online, and at the same time guarantees the rationality and efficiency of CPU resource allocation and improves the fineness with which the cloud platform controls server resources, thereby improving the task processing efficiency of the cloud platform.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by software plus the necessary general hardware platform, or by hardware, but in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. Read-Only Memory (ROM)/Random Access Memory (RAM), magnetic disk, or optical disk) that includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods of the embodiments of the present application.
According to another aspect of the embodiments of the present application, a processor core determining apparatus is further provided, which may be used to implement the processor core determining method provided in the foregoing embodiments; details already described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
FIG. 5 is a block diagram of an alternative processor core determination apparatus according to an embodiment of the present application. As shown in FIG. 5, the apparatus includes:
The task receiving module 52 is configured to receive a task request initiated by a target object on the cloud platform, and determine requirement information of the target task from the task request, where the requirement information at least includes the number of processor cores required for executing the target task;
The information determining module 54 is configured to determine a packet node corresponding to each processor core in the cloud platform according to the processor core packet node information, where the processor core packet node information includes a correspondence between a plurality of packet nodes and the processor cores, and each packet node corresponds to at least one processor core;
the task allocation module 56 is configured to determine a target processor core according to the packet node corresponding to each processor core and an allocation status of each processor core, and execute the target task using the target processor core, where the allocation status is used to indicate whether the processor core has allocated the task.
According to the embodiment of the application, the requirement information of the target task can be parsed and determined from the received task request, and an optimal allocation scheme satisfying the requirements of the target task is then determined automatically from the grouping node information of the processor cores and the allocation status of each processor core, so that the target task is processed by the determined target processor cores. Automatic allocation of processor cores is thus achieved and a cumbersome configuration process is avoided, which solves the technical problem that configuring processor cores on a cloud platform is cumbersome and achieves the technical effect of configuring processor cores on the cloud platform accurately and efficiently.
In an exemplary embodiment, the information determining module 54 is further configured to obtain a memory controller connected to each processor core, and determine a processor core packet node according to a connection relationship between each processor core and the memory controller, where processor cores connected to the same memory controller correspond to the same packet node.
In an exemplary embodiment, the task allocation module 56 is further configured to construct an initial allocation status list, where the first element count of the initial allocation status list equals the number of processor cores of the cloud platform and its elements correspond one-to-one with the processor cores of the cloud platform; an element taking a first value indicates that the corresponding processor core is in an allocatable state and an element taking a second value indicates that it is in a non-allocatable state. The module is further configured to query task processing records from a database to determine the processor cores associated with the task processing records, update the values of the elements corresponding to those processor cores to the second value to obtain the allocation status list, and determine the allocation status of each processor core according to the allocation status list.
In an exemplary embodiment, the task allocation module 56 is further configured to query a system level processor core of the cloud platform, where the system level processor core is configured to process an operation task of the cloud platform, and update a value of an element corresponding to the system level processor core to a second value.
In an exemplary embodiment, the task receiving module 52 is further configured to obtain a requirement list of the target task from the task request, determine a number of processor cores required for executing the target task according to a second element number of the requirement list, and determine a processor core specified by the target task according to a value of an element of the requirement list and a unique identifier of a processor of the cloud platform.
In an exemplary embodiment, the task allocation module 56 is further configured to prioritize the plurality of packet nodes, determine a priority list of the plurality of packet nodes, and determine a target processor core from the processor cores corresponding to the packet nodes in the priority list.
In an exemplary embodiment, the task allocation module 56 is further configured to sort the plurality of packet nodes in descending order according to the node numbers of the plurality of packet nodes to obtain a first sorting result, and determine the priority list according to the first sorting result.
In an exemplary embodiment, the task allocation module 56 is further configured to obtain available memory capacities corresponding to the plurality of packet nodes, sort the plurality of packet nodes in descending order according to the available memory capacities to obtain a second sorting result, and determine a priority list according to the second sorting result.
In an exemplary embodiment, the task allocation module 56 is further configured to obtain the number of allocable processor cores corresponding to each of the plurality of packet nodes, compare the number of allocable processor cores corresponding to each packet node with a target number to obtain a comparison result, where the target number represents the number of processor cores required to execute the target task, and determine the priority list according to the comparison result.
In an exemplary embodiment, the task allocation module 56 is further configured to determine a first packet node when it is determined that the sum of the numbers of assignable processor cores corresponding to the plurality of packet nodes is greater than or equal to the target number, and the number of assignable processor cores corresponding to at least one packet node is greater than or equal to the target number, where the first packet node represents a packet node with the number of assignable processor cores greater than or equal to the target number, sort the first packet node in descending order according to the number of assignable processor cores to obtain a third sorting result, and determine the priority list according to the third sorting result.
In an exemplary embodiment, the task allocation module 56 is further configured to, when it is determined that the sum of the numbers of assignable processor cores corresponding to the plurality of packet nodes is greater than or equal to the target number, and there is no packet node with the number of assignable processor cores greater than or equal to the target number, sort the plurality of packet nodes in a descending order according to the number of assignable processor cores to obtain a fourth sorting result, and determine the priority list according to the fourth sorting result.
In an exemplary embodiment, the task allocation module 56 is further configured to determine that the first processing result of the target task is abnormal in processing if it is determined that the sum of the numbers of assignable processor cores corresponding to the plurality of packet nodes is less than the target number, and feed back the first processing result to the target object.
In an exemplary embodiment, the task allocation module 56 is further configured to sequentially traverse the processor cores corresponding to the grouping nodes in the priority list, obtain the number of allocatable processor cores among the traversed processor cores, stop traversing when it is determined that the number of allocatable processor cores is greater than or equal to the target number, and determine the allocatable processor cores among the traversed processor cores as the target processor cores.
In an exemplary embodiment, the task allocation module 56 is further configured to obtain the allocation status of the designated processor core if the requirement information further includes a processor core designated by the target task, determine that the second processing result of the target task is a processing exception if the allocation status of the designated processor core is determined to be non-allocatable, and feed the second processing result back to the target object.
In an exemplary embodiment, the task allocation module 56 is further configured to, in a case where it is determined that the target processor core has started executing the target task, add a task processing record of the target processor core to the database to update the database, and update the allocation status list according to the task processing records in the updated database.
In an exemplary embodiment, the task allocation module 56 is further configured to, in a case where it is determined that the target processor core has completed executing the target task, delete the task processing record of the target processor core from the database to update the database, and update the allocation status list according to the task processing records in the updated database.
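The record-keeping described in the two preceding embodiments can be sketched as follows; the in-memory dictionary standing in for the database, the status strings "allocated" and "unallocated", and the function names are all assumptions of this sketch rather than features of the embodiments.

```python
def on_task_started(db, status_list, core_id, task_id):
    """Minimal sketch: record that a target processor core has started the
    target task, then refresh the allocation status list."""
    db.setdefault(core_id, []).append(task_id)  # add the task processing record
    refresh_status_list(db, status_list)

def on_task_finished(db, status_list, core_id, task_id):
    """Minimal sketch: remove the record once the target task has completed."""
    if task_id in db.get(core_id, []):
        db[core_id].remove(task_id)  # delete the task processing record
    refresh_status_list(db, status_list)

def refresh_status_list(db, status_list):
    # A core with no remaining task processing record is treated as unallocated.
    for core_id in status_list:
        status_list[core_id] = "allocated" if db.get(core_id) else "unallocated"
```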
For the features of the embodiment corresponding to the apparatus for determining a processor core, reference may be made to the related description of the embodiment corresponding to the method for determining a processor core, and details are not repeated here.
An embodiment of the application further provides an electronic device comprising a memory in which a computer program is stored and a processor arranged to run the computer program so as to perform the steps of any of the above embodiments of the method for determining a processor core.
An embodiment of the application further provides a computer-readable storage medium in which a computer program is stored, wherein the computer program is arranged to perform, when run, the steps of any of the above embodiments of the method for determining a processor core.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
An embodiment of the application further provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the above embodiments of the method for determining a processor core.
An embodiment of the application further provides another computer program product comprising a non-volatile computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above embodiments of the method for determining a processor core.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The method and apparatus for determining a processor core, the storage medium and the electronic device provided by the present application have been described in detail above. The principles and embodiments of the present application are explained herein with reference to specific examples, and the description of the above embodiments is intended only to facilitate an understanding of the method of the present application and its core ideas. It should be noted that those skilled in the art can make various modifications and adaptations to the application without departing from the principles of the application, and such modifications and adaptations are intended to fall within the scope of the application as defined by the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202510744582.2A CN120256072A (en) | 2025-06-05 | 2025-06-05 | Processor core determination method and device, storage medium and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202510744582.2A CN120256072A (en) | 2025-06-05 | 2025-06-05 | Processor core determination method and device, storage medium and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN120256072A (en) | 2025-07-04 |
Family
ID=96182006
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202510744582.2A Pending CN120256072A (en) | 2025-06-05 | 2025-06-05 | Processor core determination method and device, storage medium and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN120256072A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101799772A (en) * | 2010-02-26 | 2010-08-11 | 上海华为技术有限公司 | Kernel dispatching method, kernel backup method and multi-core processor |
CN107102966A (en) * | 2016-02-22 | 2017-08-29 | 龙芯中科技术有限公司 | multi-core processor chip, interrupt control method and controller |
CN115098269A (en) * | 2022-07-26 | 2022-09-23 | 中科曙光国际信息产业有限公司 | Resource allocation method, device, electronic equipment and storage medium |
CN116700949A (en) * | 2023-05-08 | 2023-09-05 | 阿里云计算有限公司 | Methods for binding processor cores and related devices to application instances |
CN116820750A (en) * | 2023-05-29 | 2023-09-29 | 中国电子科技集团公司第五十二研究所 | A multi-task scheduling and management system under NUMA architecture |
CN117742876A (en) * | 2022-09-14 | 2024-03-22 | 腾讯科技(深圳)有限公司 | Binding method, device and equipment of processor core and computer storage medium |
Similar Documents
Publication | Title |
---|---|
US11709843B2 (en) | Distributed real-time partitioned MapReduce for a data fabric | |
CN105049268B (en) | distributed computing resource allocation system and task processing method | |
CN106406983B (en) | Task scheduling method and device in cluster | |
US9678497B2 (en) | Parallel processing with cooperative multitasking | |
CN112052068A (en) | Method and device for binding CPU (central processing unit) of Kubernetes container platform | |
CN110941481A (en) | Resource scheduling method, device and system | |
US7730488B2 (en) | Computer resource management method in distributed processing system | |
KR101013073B1 (en) | Task distribution and parallel processing systems and methods | |
CN111459677A (en) | Request distribution method and device, computer equipment and storage medium | |
EP3376399A1 (en) | Data processing method, apparatus and system | |
US20170024245A1 (en) | Workload-aware shared processing of map-reduce jobs | |
CN101341468A (en) | Information processing device, computer, resource allocation method, and resource allocation program | |
CN105868023B (en) | Data processing method and calculate node in a kind of distributed system | |
CN118445082B (en) | Computing power cluster management method, device, equipment and storage medium | |
US20240160487A1 (en) | Flexible gpu resource scheduling method in large-scale container operation environment | |
CN110928649A (en) | Resource scheduling method and device | |
CN113360455A (en) | Data processing method, device, equipment and medium of super-fusion system | |
US20140047454A1 (en) | Load balancing in an sap system | |
CN120256072A (en) | Processor core determination method and device, storage medium and electronic device | |
US8689230B2 (en) | Determination of running status of logical processor | |
WO2023274014A1 (en) | Storage resource management method, apparatus, and system for container cluster | |
CN119045842B (en) | Product deployment method, product deployment device, electronic equipment and product deployment system | |
JPH10105205A (en) | Material requirement calculation method and system | |
CN113254180B (en) | Data matching method and device, electronic equipment and storage medium | |
CN119046016B (en) | Resource allocation method, device and electronic equipment |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |