
CN119565166A - Data processing method, device, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN119565166A
CN119565166A (application number CN202411636676.XA)
Authority
CN
China
Prior art keywords
game
computing units
data
waiting queue
activity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411636676.XA
Other languages
Chinese (zh)
Inventor
朱晨辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Pixel Software Technology Co Ltd
Original Assignee
Beijing Pixel Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Pixel Software Technology Co Ltd filed Critical Beijing Pixel Software Technology Co Ltd
Priority to CN202411636676.XA priority Critical patent/CN119565166A/en
Publication of CN119565166A publication Critical patent/CN119565166A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70Game security or game management aspects
    • A63F13/77Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35Details of game servers

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention provides a data processing method, a data processing device, electronic equipment and a computer readable storage medium, relating to the technical field of computers. The method comprises: obtaining activity data according to the starting time of a game activity; inputting the activity data into a pre-trained prediction model to obtain the number of core computing units, the maximum number of computing units and the waiting queue length; starting computing units according to the number of core computing units and the maximum number of computing units; adjusting the size of the waiting queue according to the waiting queue length; and scheduling the game data in the waiting queue using the computing units. The invention predicts the computing pressure of the game activity with the prediction model and dynamically adjusts the number of computing units participating in computation and the waiting queue length according to the prediction result, so that the computing pressure is handled in advance, the game system runs stably, an increase in game data processing delay is effectively avoided, and the user's game experience is improved.

Description

Data processing method, device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data processing method, apparatus, electronic device, and computer readable storage medium.
Background
During online game development, a user's game data may be processed by a computing unit. Users tend to log in and participate in game activities at the same time of day. Faced with a large number of simultaneous logins, the computing pressure on the game server becomes severe, causing delays in logic updates. Conversely, when few users are logged in, the game server's resources are wasted.
Disclosure of Invention
In view of the above, it is an object of the present invention to provide a data processing method, apparatus, electronic device, and computer-readable storage medium capable of effectively coping with a calculation pressure of game data so that a game system operates smoothly.
In order to achieve the above object, the technical scheme adopted by the embodiment of the invention is as follows:
In a first aspect, the present invention provides a data processing method, the method comprising:
acquiring activity data according to the starting time of a game activity, wherein the activity data is used for representing the activity degree of users participating in the game activity;
inputting the activity data into a pre-trained prediction model to obtain the number of core computing units, the maximum number of computing units and the waiting queue length;
starting computing units according to the number of core computing units and the maximum number of computing units, and adjusting the size of a waiting queue according to the waiting queue length, wherein the waiting queue is used for storing game data corresponding to users participating in the game activity;
and scheduling the game data in the waiting queue by using the computing units.
In an alternative embodiment, the computing units comprise a core computing unit and a temporary computing unit, and the starting of the computing units according to the number of the core computing units and the maximum computing unit number comprises the following steps:
starting the core computing units according to the number of the core computing units;
and starting the temporary computing units according to the difference between the maximum number of computing units and the number of core computing units.
In an alternative embodiment, after the step of scheduling game data in the wait queue with the computing unit, the method further comprises:
when the game data in the waiting queue exceeds a queue high threshold, calculating the product of the maximum number of computing units and a preset ratio, and updating the maximum number of computing units with the product;
and starting the temporary computing units according to the difference between the updated maximum number of computing units and the number of core computing units, and scheduling the game data in the waiting queue by using the temporary computing units.
In an alternative embodiment, the method further comprises:
when the maximum number of computing units is greater than the number of core computing units, starting rechecking;
during rechecking, updating the maximum number of computing units according to the game data in the waiting queue, a queue high threshold and a queue low threshold;
and when the updated maximum number of computing units is equal to the number of core computing units, stopping the rechecking.
In an alternative embodiment, the updating the maximum number of computing units according to the game data in the waiting queue, the high queue threshold value, and the low queue threshold value includes:
when the game data in the waiting queue exceeds the queue high threshold, calculating the product of the maximum number of computing units and the preset ratio, and updating the maximum number of computing units with the product;
and setting the maximum number of computing units to the number of core computing units when the game data in the waiting queue falls below a queue low threshold.
In an alternative embodiment, the predictive model is obtained by:
determining training data from historical activity data of game activities based on the number of users participating in the game activities, and acquiring labels corresponding to the training data;
inputting the training data and the corresponding labels into a deep neural network to obtain a prediction result;
and iteratively updating parameters of the deep neural network based on the label and the loss information of the prediction result to obtain the prediction model.
In a second aspect, the present invention provides a data processing apparatus, the apparatus comprising:
the processing module is used for acquiring activity data according to the starting time of the game activity, wherein the activity data is used for representing the activity degree of a user participating in the game activity;
the prediction module is used for inputting the activity data into a pre-trained prediction model to obtain the number of core computing units, the maximum number of computing units and the waiting queue length;
the scheduling module is used for starting the computing units according to the number of core computing units and the maximum number of computing units, adjusting the size of the waiting queue according to the waiting queue length, wherein the waiting queue is used for storing game data corresponding to users participating in the game activity, and scheduling the game data in the waiting queue by using the computing units.
In an alternative implementation, the computing units comprise core computing units and temporary computing units; the scheduling module is used for starting the core computing units according to the number of core computing units, and starting the temporary computing units according to the difference between the maximum number of computing units and the number of core computing units.
In a third aspect, the present invention provides an electronic device comprising a processor and a memory storing machine executable instructions executable by the processor to implement a data processing method as described in any of the preceding embodiments.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a data processing method according to any of the preceding embodiments.
Compared with the prior art, the data processing method, device, electronic equipment and computer readable storage medium provided by the embodiments of the invention predict the computing pressure of a game activity using a prediction model and dynamically adjust the number of computing units participating in computation and the waiting queue length according to the prediction result, thereby handling the computing pressure in advance, keeping the game system running stably, effectively avoiding an increase in game data processing delay, and improving the user's game experience.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic flow chart of a data processing method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of a data processing method according to an embodiment of the present invention.
Fig. 3 is a schematic flow chart of a data processing method according to an embodiment of the present invention.
Fig. 4 shows a schematic diagram of a prediction flow of a prediction model according to an embodiment of the present invention.
Fig. 5 shows a block diagram of a data processing apparatus according to an embodiment of the present invention.
Fig. 6 shows a block schematic diagram of an electronic device according to an embodiment of the present invention.
The icons are 100-electronic device, 110-memory, 120-processor, 130-communication module, 200-data processing means, 201-processing module, 202-prediction module, 203-scheduling module.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
In the game development process, the computing units of all the systems are initialized while the game server is initialized, and after the initialization of the computing units is completed, game data generated by users participating in game activities are processed.
The inventor researches and discovers that the computing units in the current game server are initialized according to the default number of the system, and when a large amount of game data need to be computed, the processing of the game data is not timely due to the insufficient number of the computing units, so that logic update delay occurs. At the same time, in the case of a small amount of game data, the idle computing units still need to remain running, which causes resource waste of the game server.
Based on the above, the data processing method and device provided by the embodiment of the invention utilize the prediction model to predict the calculation pressure of the game activity, dynamically adjust the number of calculation units participating in calculation and the waiting queue length according to the prediction result, so as to realize the early intervention of the calculation pressure, ensure that the game system stably operates, effectively avoid the increase of the game data processing time delay and promote the game experience of users.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 shows a flow chart of a data processing method according to an embodiment of the invention. The method includes the following steps:
step S10, acquiring activity data according to the starting time of the game activity, wherein the activity data is used for representing the activity degree of a user participating in the game activity.
In the embodiment of the invention, the starting time of the game activity and the current system time are obtained; if the difference between the starting time and the current system time is smaller than a preset time threshold, the activity data corresponding to the game activity is obtained and the dynamic adjustment logic for the computing units and the waiting queue is triggered. The preset time threshold may be set according to empirical values from the actual application scenario, which is not limited by the invention.
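The trigger condition described above can be sketched as follows; the function name and the 10-minute default threshold are illustrative assumptions, not values fixed by the patent:

```python
from datetime import datetime, timedelta

def should_trigger_adjustment(activity_start: datetime,
                              now: datetime,
                              threshold: timedelta = timedelta(minutes=10)) -> bool:
    """Trigger dynamic adjustment when the activity start time is near.

    The adjustment logic fires only while the activity has not yet started
    and the remaining time is below the preset threshold (step S10).
    """
    return timedelta(0) <= activity_start - now < threshold
```
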
Step S20, inputting the activity data into a pre-trained prediction model to obtain the number of core computing units, the maximum number of computing units and the waiting queue length.
Step S30, starting the computing units according to the number of the core computing units and the number of the maximum computing units, and adjusting the size of the waiting queue according to the length of the waiting queue, wherein the waiting queue is used for storing game data corresponding to users participating in game activities.
In the embodiment of the invention, each game activity is independently provided with the computing units and the waiting queues, and the number of the computing units and the size of the waiting queues corresponding to each game activity can be dynamically adjusted according to the activity level of the user participating in the game activity.
The operating pressure of the game activity is predicted using the pre-trained prediction model, yielding the number of core computing units, the maximum number of computing units and the waiting queue length corresponding to the game activity. The computing units are dynamically adjusted according to the number of core computing units and the maximum number of computing units, mitigating the computing pressure generated when a large amount of game data arrives in a short time. Meanwhile, the size of the waiting queue is dynamically adjusted according to the waiting queue length, avoiding the situation where a full waiting queue can no longer accept users' game data, which would prevent users from continuing to participate in the game activity and harm the user experience.
Step S40, the game data in the waiting queue is scheduled by the computing unit.
In the embodiment of the invention, the started computing unit sequentially acquires the game data in the waiting queue and processes the game data. The computing unit may be a central processing unit, a graphics processing unit, a memory, etc.
As a possible implementation manner, assuming that the game activity a is performed within a specified time period every day, activity data is acquired according to the starting time of the game activity a, and prediction is performed based on the activity data by using a prediction model, so as to obtain the current optimal configuration data, namely the number of core computing units, the maximum number of computing units and the waiting queue length. And dynamically adjusting the sizes of the computing units participating in the computation and the waiting queue according to the number of the core computing units, the number of the maximum computing units and the waiting queue length.
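The configuration sequence just described can be sketched end to end as follows; `predict` stands in for the trained prediction model, and all names and the string unit identifiers are assumptions made for this sketch:

```python
from collections import deque

def configure_for_activity(activity_data, predict):
    """Steps S20-S30: predict the configuration, then apply it."""
    core_n, max_n, queue_len = predict(activity_data)          # step S20
    core_units = [f"core-{i}" for i in range(core_n)]          # start core units
    temp_units = [f"temp-{i}" for i in range(max_n - core_n)]  # start temporary units
    waiting_queue = deque(maxlen=queue_len)                    # size the waiting queue
    return core_units + temp_units, waiting_queue
```
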
In summary, in the data processing method provided by the embodiment of the invention, the prediction model is utilized to predict the calculation pressure of the game activity, and the number of calculation units participating in calculation and the waiting queue length are dynamically adjusted according to the prediction result, so that the early intervention of the calculation pressure is realized, the game system is enabled to stably run, the increase of the game data processing time delay is effectively avoided, and the game experience of the user is improved.
Optionally, the computing unit comprises a core computing unit and a temporary computing unit, one possible implementation is provided below as to how the computing unit and the waiting queue are adjusted. Referring to fig. 2, the substeps of step S30 in fig. 1 may include:
Step S301, turning on the core computing units according to the number of the core computing units.
As one possible implementation, the core computing units may be started according to a default configuration when the game activity is created, where the maximum number of computing units in the default configuration equals the number of core computing units. The number of additional core computing units to start is determined from the difference between the predicted number of core computing units and the number of core computing units in the default configuration, and the core computing units are started accordingly.
Step S302, turning on temporary computing units according to the difference between the maximum computing unit number and the core computing unit number.
In the embodiment of the invention, when the data amount of the activity data is increased, the calculation pressure generated by the game activity is relieved by starting the temporary calculation unit. Wherein the maximum number of computing units is used to characterize an upper limit of the number of computing units corresponding to the game activity.
Step S303, the size of the waiting queue is adjusted according to the waiting queue length.
In the embodiment of the invention, the storage space is allocated for the waiting queue according to the length of the waiting queue, so that the waiting queue can timely receive game data corresponding to game activities.
It should be noted that, when the computing units are adjusted dynamically, the computing unit pool preferentially looks for a free core computing unit to perform the game data computation. If all core computing units are currently occupied, newly received game data is added to the waiting queue. When game data is stored in the waiting queue, the core computing units or the temporary computing units are used to schedule the game data in the waiting queue.
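A minimal sketch of this dispatch behavior, under assumed names (the patent does not specify an implementation):

```python
from collections import deque

def dispatch(game_data, free_core_units: list, waiting_queue: deque):
    """Prefer a free core computing unit; otherwise park the data in the queue."""
    if free_core_units:
        unit = free_core_units.pop(0)   # take a free core unit first
        return ("assigned", unit)
    waiting_queue.append(game_data)     # all core units busy: enqueue
    return ("queued", None)
```
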
Alternatively, one possible implementation is provided below for how the computing units corresponding to the game activity are dynamically adjusted according to the waiting queue. Referring to fig. 3, after step S40 in fig. 1, the method further includes:
In step S50, when the game data in the waiting queue exceeds the queue high threshold, the product of the maximum number of calculation units and the preset ratio is calculated, and the maximum number of calculation units is updated by the product.
In the embodiment of the invention, when the game data in the waiting queue exceeds the high threshold value of the queue, the game activity is indicated to generate a large amount of game data, the calculation unit started in the current game activity cannot timely process the game data in the waiting queue, and a new calculation unit needs to be dynamically started to participate in the calculation of the game data.
Assume the default value of the maximum number of computing units is the number of core computing units, the preset ratio is set to 1.5, and the queue high threshold is set to 60% of the waiting queue length. When the amount of game data in the waiting queue exceeds 60% of the waiting queue length, the maximum number of computing units is set to 1.5 times the number of core computing units.
Step S60, starting the temporary computing unit according to the updated difference value between the maximum computing unit number and the core computing unit number, and scheduling the game data in the waiting queue by using the temporary computing unit.
In the embodiment of the invention, the difference between the updated maximum number of computing units and the number of core computing units gives the number of temporary computing units, and temporary computing units are started accordingly. Assuming the required number of temporary computing units is 15 and 10 temporary computing units are already running, 5 more temporary computing units need to be started, and the game data in the waiting queue is scheduled using the 15 running temporary computing units together with the running core computing units. The number of temporary computing units started cannot exceed the upper limit given by the maximum number of computing units.
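Steps S50 and S60 can be worked through with the figures used in the example (preset ratio 1.5, queue high threshold at 60% of the waiting queue length); the function and parameter names are assumptions for this sketch:

```python
def expand_if_overloaded(queued: int, queue_len: int, max_units: int,
                         core_units: int, ratio: float = 1.5,
                         high_pct: float = 0.6):
    """Step S50: grow the unit cap when the queue is over the high threshold.

    Step S60: the number of temporary units is the difference between the
    (possibly updated) maximum number of units and the core unit count.
    """
    if queued > queue_len * high_pct:
        max_units = int(max_units * ratio)        # update cap with the product
    temp_needed = max(0, max_units - core_units)  # temporary units to run
    return max_units, temp_needed
```

For example, with 70 items queued in a 100-slot queue and 10 core units, the cap grows to 15 and 5 temporary units are needed.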
Alternatively, one possible implementation is provided below as to how the maximum number of computing units is dynamically adjusted based on the computing pressure of the gaming activity. Referring to fig. 3, the method further includes:
in step S70, when the maximum number of computing units is greater than the number of core computing units, the rechecking is started.
In the embodiment of the invention, after the number of computing units has been adjusted by the prediction model, or after the number of computing units has been adjusted because the game data in the waiting queue exceeded the queue high threshold, the maximum number of computing units is compared with the number of core computing units, and rechecking is started when the maximum number of computing units is larger.
Step S80, during the rechecking period, updating the maximum number of calculation units according to the game data in the waiting queue, the high queue threshold value and the low queue threshold value.
In the embodiment of the invention, a recheck timer is started; when the recheck time arrives, the amount of game data in the waiting queue is compared with the queue high threshold, or with the queue low threshold.
Specifically, when the game data in the waiting queue exceeds the queue high threshold, the product of the maximum number of computing units and the preset ratio is calculated and used to update the maximum number of computing units; when the game data in the waiting queue falls below the queue low threshold, the maximum number of computing units is set to the number of core computing units.
In the embodiment of the invention, if the game data in the waiting queue exceeds the queue high threshold, the maximum calculation unit number is iteratively adjusted based on the updated maximum calculation unit number and the preset proportion until the game data in the waiting queue does not exceed the queue high threshold. If the game data in the waiting queue is lower than the low queue threshold value, the configuration of the computing units is dynamically adjusted, the maximum computing unit number is updated to the core computing unit number, and the temporary computing units are closed.
In step S90, when the updated maximum number of computing units is equal to the number of core computing units, the recheck is turned off.
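One recheck iteration (steps S70 to S90) can be sketched as follows; the function name and the queue low threshold of 20% are illustrative assumptions, since the patent fixes the high threshold example at 60% but does not give a low-threshold value:

```python
def recheck_once(queued: int, queue_len: int, max_units: int, core_units: int,
                 ratio: float = 1.5, high_pct: float = 0.6,
                 low_pct: float = 0.2) -> int:
    """Return the updated maximum number of computing units after one recheck."""
    if queued > queue_len * high_pct:     # still overloaded: grow the cap again
        return int(max_units * ratio)
    if queued < queue_len * low_pct:      # load has drained: release temp units
        return core_units                 # recheck stops once cap == core count
    return max_units                      # otherwise keep the current cap
```
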
Therefore, the embodiment of the invention monitors the calculation pressure of the game activity in real time through recheck, dynamically starts the temporary calculation unit under the condition of large calculation pressure, and processes the game data corresponding to the game activity through the cooperation of the core calculation unit and the temporary calculation unit, thereby ensuring that the game data is processed quickly and accurately under the condition of large data quantity, reducing the running time delay of the game and enabling the running of the game system to be more stable. And under the condition of small calculation pressure, the temporary calculation unit is closed in time, so that the game system resources are automatically released, and the calculation consumption of the game equipment is effectively reduced.
Optionally, prior to using the predictive model to predict, the deep neural network needs to be trained with historical activity data to obtain the predictive model. One possible implementation is provided below to illustrate the training process of the predictive model.
Determining training data from historical activity data of game activities based on the number of users participating in the game activities, acquiring labels corresponding to the training data, inputting the training data and the corresponding labels into a deep neural network to obtain a prediction result, and iteratively updating parameters of the deep neural network based on the labels and loss information of the prediction result to obtain a prediction model.
In embodiments of the present invention, a large collection of historical activity data for a variety of gaming activities is required prior to training a predictive model. The game activities comprise battlefield activities, dispute games and the like, the historical activity data comprise the number of active users and acquisition time, and the number of active users corresponds to the acquisition time one by one.
For each game activity, the number of active users in each historical activity data record is acquired; the maximum number of active users within the same time period of the historical activity data is determined as the user online peak; the collection time corresponding to the user online peak is determined as the peak time; and the training data corresponding to the game activity is determined according to the user online peaks and the corresponding peak times in different time periods.
The configuration number of the core computing units, the configuration number of the maximum computing units and the configuration length of the waiting queue corresponding to the peak time are obtained, and labels are generated according to the configuration number of the core computing units, the configuration number of the maximum computing units and the configuration length of the waiting queue.
Training the deep neural network by using training data of each game activity and corresponding labels to obtain a prediction result. The prediction result comprises the prediction number of the core computing units, the prediction number of the maximum computing units and the prediction length of the waiting queue. Determining loss information according to the configuration number of the core computing units, the prediction number of the core computing units, the configuration number of the maximum computing units, the prediction number of the maximum computing units, the configuration length of the waiting queue and the prediction length of the waiting queue, and iteratively updating parameters of the deep neural network according to the loss information to obtain a trained prediction model.
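The model structure implied above can be sketched with a tiny forward pass and loss; the hidden size of 10 follows the example of fig. 4, the 3 outputs correspond to the three predicted quantities (core unit count, maximum unit count, queue length), and the random weights and function names are assumptions, with the iterative parameter updates omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 10)), np.zeros(10)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(10, 3)), np.zeros(3)    # hidden layer -> output layer

def forward(x):
    """Map a batch of 4-dimensional activity features to 3 predictions."""
    h = np.maximum(0, x @ W1 + b1)                # hidden layer with ReLU
    return h @ W2 + b2                            # (core_n, max_n, queue_len)

def mse_loss(pred, label):
    """Loss information used to iteratively update the parameters."""
    return float(np.mean((pred - label) ** 2))
```
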
Therefore, the embodiment of the invention utilizes the online peak value training prediction model of the game activity user to obviously improve the prediction accuracy of the prediction model on the calculation pressure, thereby ensuring that the number of calculation units is dynamically adjusted by a game system in advance according to the output of the prediction model and further ensuring that game data is processed rapidly and accurately.
To illustrate the training process of the prediction model more clearly, an exemplary description is provided in connection with a deep neural network having a three-layer architecture. Referring to fig. 4, it is assumed that the deep neural network to be trained includes an input layer, a hidden layer and an output layer, wherein the input layer is configured to receive the training data and labels and perform feature extraction based on the training data, the hidden layer is configured to generate prediction data of different levels, and the output layer is configured to generate a prediction result according to the prediction data. Each layer is provided with a different number of neurons, and the neurons of different layers are connected by weights.
The training data comprises a group identification, a game activity identification, the number of active users and the collection time, wherein the number of active users is the user online peak, the collection time is the peak time corresponding to the user online peak, the group identification identifies a game group comprising at least one game activity, and the game activity identification identifies the game activity.
The training data and the corresponding labels are input into the deep neural network, and the input layer performs feature extraction on the training data to obtain a 4-dimensional feature vector, shown as the 4 feature vectors in the input layer of fig. 4. Since the input layer needs to process the 4-dimensional feature vector, it is provided with 4 neurons.
The hidden layer performs feature mapping on the 4-dimensional feature vector, mapping the input into a higher-dimensional feature vector. Assuming the hidden layer is provided with 10 neurons, a 4×10 linear transformation matrix is set in the hidden layer to map the 4-dimensional feature vector into a 10-dimensional feature vector, shown as the 10 feature vectors in the hidden layer of fig. 4. The 10-dimensional feature vector characterizes 10 different levels of prediction data and is input into the output layer.
The output layer applies a softmax function to the 10-dimensional feature vector to obtain 10 probabilities that sum to 1, and determines the prediction data with the maximum probability as the prediction result, which comprises the game activity identification, the predicted number of core computing units, the predicted maximum number of computing units and the predicted waiting queue length. Loss information is determined from the prediction result and the corresponding label, and the parameters of the deep neural network are iteratively updated with the loss information to obtain the prediction model.
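The forward pass walked through above — a 4-dimensional feature vector, a 4×10 hidden transformation, and a softmax over the 10-dimensional output — can be sketched in a few lines. The input values and weight matrix below are placeholders, not trained values from the patent:

```python
# Sketch of the three-layer forward pass: 4-dim input -> 4x10 linear map
# -> 10-dim hidden vector -> softmax -> argmax prediction.
import math

x = [0.5, -1.2, 0.3, 2.0]  # 4-dim feature vector (placeholder values)
# 4x10 placeholder weight matrix standing in for the learned transformation
W = [[((i * 7 + j * 3) % 5 - 2) * 0.1 for j in range(10)] for i in range(4)]

# 10-dim feature vector: 10 different levels of prediction data
h = [sum(x[i] * W[i][j] for i in range(4)) for j in range(10)]

# Numerically stable softmax: 10 probabilities summing to 1
m = max(h)
exps = [math.exp(v - m) for v in h]
total = sum(exps)
probs = [e / total for e in exps]

pred = probs.index(max(probs))  # prediction data with maximum probability
print(round(sum(probs), 6), pred)
```

Subtracting the maximum before exponentiating does not change the softmax result but avoids overflow for large activations.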
After the prediction model is obtained through training, it is used to dynamically adjust the computing units of game activities so that the game activities run stably. It should be noted that, when the prediction model cannot accurately predict the computing pressure of a game activity, the activity data currently generated by the game activity can be used to continue training the prediction model, thereby improving its ability to predict computing pressure.
Based on the same inventive concept, an embodiment of the invention further provides a data processing device. Its basic principle and technical effects are the same as those of the foregoing embodiments; for brevity, where this embodiment is silent, reference is made to the corresponding description in the foregoing embodiments.
Referring to fig. 5, fig. 5 is a block diagram of a data processing apparatus 200 according to an embodiment of the invention. The data processing apparatus 200 comprises a processing module 201, a prediction module 202 and a scheduling module 203.
A processing module 201, configured to obtain activity data according to a start time of a game activity, where the activity data is used to characterize an activity of a user participating in the game activity;
the prediction module 202 is configured to input the activity data into a pre-trained prediction model, to obtain the number of core computing units, the number of maximum computing units, and the waiting queue length;
The scheduling module 203 is configured to start the computing units according to the number of core computing units and the number of maximum computing units, and adjust the size of the waiting queue according to the length of the waiting queue, where the waiting queue is used to store game data corresponding to users participating in the game activity, and schedule the game data in the waiting queue by using the computing units.
In summary, the data processing device provided by the embodiment of the invention predicts the computing pressure of a game activity with the prediction model and dynamically adjusts the number of participating computing units and the waiting queue length according to the prediction result, thereby intervening on computing pressure in advance so that the game system runs stably, effectively avoiding increased game-data processing latency and improving the user's game experience.
Optionally, the computing units include a core computing unit and a temporary computing unit, and the scheduling module 203 is configured to start the core computing units according to the number of core computing units, and start the temporary computing units according to the difference between the maximum number of computing units and the number of core computing units.
Optionally, the scheduling module 203 is further configured to calculate a product of the maximum number of computing units and a preset ratio when the game data in the waiting queue exceeds the queue high threshold, update the maximum number of computing units with the product, start the temporary computing unit according to a difference between the updated maximum number of computing units and the core number of computing units, and schedule the game data in the waiting queue with the temporary computing unit.
Optionally, the scheduling module 203 is further configured to start a re-check when the maximum number of computing units is greater than the number of core computing units, update the maximum number of computing units according to the game data in the waiting queue, the high queue threshold and the low queue threshold during the re-check, and close the re-check when the updated maximum number of computing units is equal to the number of core computing units.
Optionally, the scheduling module 203 is specifically configured to calculate a product of the maximum number of computing units and a preset ratio when the game data in the waiting queue exceeds the high queue threshold, and update the maximum number of computing units with the product, and set the maximum number of computing units as the core number of computing units when the game data in the waiting queue is lower than the low queue threshold.
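The threshold-driven update just described can be sketched as a single function. The concrete ratio, thresholds and the `ceil` rounding below are illustrative assumptions; the patent only specifies a product with a preset ratio and a fallback to the core count:

```python
# Sketch of the re-check update: scale the maximum unit count by a preset
# ratio when the queue exceeds the high threshold, and fall back to the
# core count when the queue drops below the low threshold.
import math

def recheck(max_units, core_units, queue_len, high, low, ratio=1.5):
    if queue_len > high:
        # product of the maximum unit count and the preset ratio
        return math.ceil(max_units * ratio)
    if queue_len < low:
        # set the maximum count to the core count; the re-check can then close
        return core_units
    return max_units

print(recheck(20, 8, 1200, high=1000, low=200))  # 30 (scaled up)
print(recheck(20, 8, 100, high=1000, low=200))   # 8 (back to core count)
print(recheck(20, 8, 500, high=1000, low=200))   # 20 (unchanged)
```

The re-check then terminates once the returned value equals the core count, matching the close condition described above.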
Optionally, the processing module 201 is further configured to determine training data from historical activity data of the game activity based on a number of users participating in the game activity, obtain a label corresponding to the training data, input the training data and the corresponding label into the deep neural network to obtain a prediction result, and iteratively update parameters of the deep neural network based on the label and loss information of the prediction result to obtain a prediction model.
Referring to fig. 6, fig. 6 is a block diagram of an electronic device 100 according to an embodiment of the invention. The electronic device 100 may be any device having a data processing function, such as a personal computer, a notebook computer, and a server. The electronic device 100 includes a memory 110, a processor 120, and a communication module 130. The memory 110, the processor 120, and the communication module 130 are electrically connected directly or indirectly to each other to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
Wherein the memory 110 is used for storing programs or data. The memory 110 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions. For example, the data processing methods disclosed in the above embodiments may be implemented when a computer program stored in the memory 110 is executed by the processor 120.
The communication module 130 is used for establishing a communication connection between the electronic device 100 and other communication terminals through a network, and for transceiving data through the network.
It should be understood that the structure shown in fig. 6 is merely a schematic diagram of the structure of the electronic device 100, and that the electronic device 100 may also include more or fewer components than shown in fig. 6, or have a different configuration than shown in fig. 6. The components shown in fig. 6 may be implemented in hardware, software, or a combination thereof.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by the processor 120, implements the data processing method disclosed in the above embodiments.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially, or in the part contributing to the prior art, or in part, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of data processing, the method comprising:
Acquiring activity data according to the starting time of a game activity, wherein the activity data is used for representing the activity degree of a user participating in the game activity;
inputting the activity data into a pre-trained prediction model to obtain the number of core computing units, the maximum number of computing units and the waiting queue length;
starting calculation units according to the number of the core calculation units and the number of the maximum calculation units, and adjusting the size of a waiting queue according to the length of the waiting queue, wherein the waiting queue is used for storing game data corresponding to users participating in the game activity;
and scheduling the game data in the waiting queue by using the computing unit.
2. The data processing method according to claim 1, wherein the computing units include a core computing unit and a temporary computing unit, and wherein the turning on of the computing units based on the number of core computing units and the maximum number of computing units includes:
starting the core computing units according to the number of the core computing units;
And starting the temporary computing units according to the difference value between the maximum computing unit number and the core computing unit number.
3. The data processing method according to claim 2, wherein after the step of scheduling game data in the waiting queue with the computing unit, the method further comprises:
when the game data in the waiting queue exceeds a queue high threshold, calculating the product of the maximum calculation unit number and a preset proportion, and updating the maximum calculation unit number by using the product;
And starting the temporary computing unit according to the difference value between the updated maximum computing unit number and the core computing unit number, and scheduling the game data in the waiting queue by using the temporary computing unit.
4. The data processing method of claim 1, wherein the method further comprises:
when the maximum number of computing units is greater than the number of core computing units, starting a re-check;
during the re-check, updating the maximum number of computing units according to the game data in the waiting queue, a high queue threshold and a low queue threshold;
and when the updated maximum number of computing units is equal to the number of core computing units, closing the re-check.
5. The method according to claim 4, wherein updating the maximum number of calculation units based on the game data in the waiting queue, the high queue threshold value, the low queue threshold value, comprises:
when the game data in the waiting queue exceeds the queue high threshold, calculating the product of the maximum calculation unit number and a preset proportion, and updating the maximum calculation unit number by using the product;
And setting the maximum number of computing units as the core number of computing units when the game data in the waiting queue is lower than a queue low threshold.
6. The data processing method according to claim 1, wherein the prediction model is obtained by:
determining training data from historical activity data of game activities based on the number of users participating in the game activities, and acquiring labels corresponding to the training data;
Inputting the training data and the corresponding labels into a deep neural network to obtain a prediction result;
and iteratively updating parameters of the deep neural network based on the label and the loss information of the prediction result to obtain the prediction model.
7. A data processing apparatus, the apparatus comprising:
the processing module is used for acquiring activity data according to the starting time of the game activity, wherein the activity data is used for representing the activity degree of a user participating in the game activity;
The prediction module is used for inputting the activity data into a pre-trained prediction model to obtain the number of core calculation units, the maximum number of calculation units and the waiting queue length;
The scheduling module is used for starting the computing units according to the number of the core computing units and the number of the maximum computing units, adjusting the size of the waiting queue according to the length of the waiting queue, wherein the waiting queue is used for storing game data corresponding to users participating in the game activity, and scheduling the game data in the waiting queue by utilizing the computing units.
8. The data processing apparatus according to claim 7, wherein the computing units include a core computing unit and a temporary computing unit, wherein the scheduling module is configured to start the core computing units according to the number of core computing units, and start the temporary computing units according to a difference between the maximum number of computing units and the number of core computing units.
9. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the data processing method of any of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the data processing method according to any one of claims 1-6.
CN202411636676.XA 2024-11-15 2024-11-15 Data processing method, device, electronic device and computer readable storage medium Pending CN119565166A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411636676.XA CN119565166A (en) 2024-11-15 2024-11-15 Data processing method, device, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411636676.XA CN119565166A (en) 2024-11-15 2024-11-15 Data processing method, device, electronic device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN119565166A true CN119565166A (en) 2025-03-07

Family

ID=94807860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411636676.XA Pending CN119565166A (en) 2024-11-15 2024-11-15 Data processing method, device, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN119565166A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006149663A (en) * 2004-11-29 2006-06-15 Aruze Corp GAME DEVICE, GAME SYSTEM, AND GAME METHOD
CN113230658A (en) * 2021-05-31 2021-08-10 腾讯科技(深圳)有限公司 Resource allocation method and device, computer readable medium and electronic equipment
CN114741172A (en) * 2022-04-06 2022-07-12 深圳鲲云信息科技有限公司 Operator scheduling method, device, device and storage medium for artificial intelligence model


Similar Documents

Publication Publication Date Title
CN110096345B (en) Intelligent task scheduling method, device, equipment and storage medium
RU2683509C2 (en) Resource management based on device-specific or user-specific resource usage profiles
CN111143039B (en) Scheduling method and device of virtual machine and computer storage medium
KR20200122364A (en) Resource scheduling method and terminal device
US20190347621A1 (en) Predicting task durations
CN118796471B (en) Reasoning resource optimization method, device and electronic equipment
GB2588701A (en) Predicting a remaining battery life in a device
CN119149244B (en) Computing power scheduling method and device
CN110796591A (en) GPU card using method and related equipment
CN117950865A (en) Digital resource allocation method, system, device and storage medium for metaverse
CN120780489B (en) A method, apparatus, equipment and medium for scheduling industrial defect detection tasks
CN112035324A (en) Batch job execution condition monitoring method and device
CN115712337B (en) Processor scheduling methods, devices, electronic equipment, and storage media
CN112650566B (en) Timed task processing method and device, computer equipment and storage medium
CN119565166A (en) Data processing method, device, electronic device and computer readable storage medium
CN113050783B (en) Terminal control method, device, mobile terminal and storage medium
McGough et al. Reduction of wasted energy in a volunteer computing system through reinforcement learning
US8521855B2 (en) Centralized server-directed power management in a distributed computing system
CN113190173A (en) Low-energy-consumption data cold magnetic storage method and device based on machine learning
CN118409873A (en) Model memory occupation optimization method, equipment, medium, product and system
CN106648895A (en) Data processing method and device, and terminal
CN119472971A (en) Frequency adjustment method, device, electronic device and readable storage medium
CN117271081A (en) Scheduling method, scheduling device and storage medium
CN112784912B (en) Image recognition method and device, neural network model training method and device
CN117648155B (en) Virtual machine online migration method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination