
CN112839241A - Cloud game image frame loss compensation method and device - Google Patents

Cloud game image frame loss compensation method and device

Info

Publication number
CN112839241A
CN112839241A (application number CN202011639229.1A)
Authority
CN
China
Prior art keywords
game
prediction model
frame loss
picture prediction
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011639229.1A
Other languages
Chinese (zh)
Inventor
李运福
李启光
任文康
张鹤翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guiyang Gloud Technology Co ltd
Original Assignee
Guiyang Gloud Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guiyang Gloud Technology Co ltd filed Critical Guiyang Gloud Technology Co ltd
Priority to CN202011639229.1A priority Critical patent/CN112839241A/en
Publication of CN112839241A publication Critical patent/CN112839241A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23406Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving management of server-side video buffer
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a cloud game image frame loss compensation method and device. The method comprises: receiving and caching a game picture prediction model sent by a server; receiving and caching game video frames sent by the server; when frame loss occurs, determining prediction parameters according to the frame loss position; determining a predicted video frame according to the game picture prediction model and the prediction parameters; and displaying the predicted video frame at the frame loss position. With this method and device, when frame loss is detected while the cloud game runs, the game picture prediction model predicts a replacement frame from the game video frames received before the frame loss position, and the predicted video frame is displayed at that position. This compensates for the effect of frame loss on the game picture, enhances the display, and improves the user's visual experience.

Description

Cloud game image frame loss compensation method and device
Technical Field
The invention relates to the technical field of cloud servers, in particular to a cloud game image frame loss compensation method and device.
Background
When a user plays a cloud game, the game itself runs on the server. The server sends the game video pictures to the client, and the client displays the video frame data it receives.
However, during game video transmission, network fluctuation and other factors can cause video frame data to arrive late or discontinuously at the client, i.e., frame loss occurs, and the picture displayed by the client is lost or the frame rate drops.
In the related art, because cloud games demand high video real-time performance, no video buffer is set: the client displays each frame as it is received. After frame loss occurs, the client does not wait for the server to resend the frame; the lost frame is simply ignored, so the problems of frame loss and picture loss are not really solved.
Disclosure of Invention
In order to solve the technical problem, the invention provides a cloud game image frame loss compensation method and device.
The invention provides a cloud game image frame loss compensation method applied to a client. The method comprises the following steps:
receiving and caching a game picture prediction model sent by a server;
receiving and caching a game video frame sent by a server;
when frame loss occurs, determining a prediction parameter according to a frame loss position;
determining a predicted video frame according to the game picture prediction model and the prediction parameters;
and displaying the predicted video frame at the frame loss position.
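The five client-side steps above can be sketched as a minimal loop. This is a hedged illustration only, not the patented implementation: `run_client`, the `server` object with its `fetch_prediction_model`/`frames` methods, and the `screen` object are all hypothetical names invented for this sketch.

```python
def run_client(server, screen):
    """Minimal sketch of the claimed client flow: cache the model and
    frames, detect frame loss, predict, and display (hypothetical API)."""
    model = server.fetch_prediction_model()   # step 1: receive and cache the model
    cache = []                                # step 2: cache of received frames
    for position, frame in server.frames():
        if frame is None:                     # step 3: frame loss at this position
            params = cache[-3:]               # prediction parameters before the loss
            frame = model(params)             # step 4: model outputs a predicted frame
        cache.append(frame)
        screen.display(frame)                 # step 5: show the real or predicted frame
    return cache
```

A real client would decode each frame before display and run the model's forward pass; the sketch only shows where prediction slots into the receive-and-display loop.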
The method also has the following characteristics: the receiving and caching of the game picture prediction model sent by the server comprises:
receiving and caching a basic picture prediction model fed back by the server based on a game running request; and
receiving and caching a scene picture prediction model fed back by the server based on a game scene switching request.
The method also has the following characteristics: determining a predicted video frame according to the game picture prediction model and the prediction parameters comprises:
inputting the prediction parameters into the base picture prediction model or the scene picture prediction model;
the basic picture prediction model or the scene picture prediction model outputs a prediction picture;
and determining the predicted picture as the predicted video frame.
The method also has the following characteristics: the determining the prediction parameter according to the frame loss position comprises:
determining a prediction parameter in real time according to the frame loss position; or,
determining, from the cached game video frames before the frame loss position, a preset number of frames closest to the frame loss position, and using them as the prediction parameters.
The invention provides a cloud game image frame loss compensation method applied to a server. The method comprises the following steps:
based on the received request, sending a game picture prediction model to the client;
and sending the game video frame to the client.
The method also has the following characteristics: the sending the game picture prediction model to the client based on the received request comprises:
sending a basic picture prediction model to the client based on the received game running request; and
sending a scene picture prediction model to the client based on the received game scene switching request.
The method also has the following characteristics: the method further comprises the following steps:
receiving training data, wherein the training data comprises game videos of various scene pictures;
and constructing a game picture prediction model based on deep learning according to the training data.
The invention provides a cloud game image frame loss compensation device applied to a client. The device comprises:
the receiving module is used for receiving the game picture prediction model sent by the server;
the receiving module is also used for receiving the game video frame sent by the server;
the cache module is used for caching the game picture prediction model and the game video frame;
the processing module is used for determining a prediction parameter according to the frame loss position;
the processing module is further used for determining a predicted video frame according to the game picture prediction model and the prediction parameters;
and the display module is used for displaying the predicted video frame at the frame loss position.
The device also has the following characteristics: the receiving module is specifically configured to:
receiving a basic picture prediction model fed back by the server based on a game running request; and
receiving a scene picture prediction model fed back by the server based on a game scene switching request;
the cache module is specifically configured to:
caching the basic picture prediction model fed back by the server based on the game running request; and
caching the scene picture prediction model fed back by the server based on the game scene switching request.
The device also has the following characteristics: the processing module is specifically configured to:
inputting the prediction parameters into the base picture prediction model or the scene picture prediction model;
the basic picture prediction model or the scene picture prediction model outputs a prediction picture;
and determining the predicted picture as the predicted video frame.
The device also has the following characteristics: the processing module is specifically configured to:
determining a prediction parameter in real time according to the frame loss position; or,
determining, from the cached game video frames before the frame loss position, a preset number of frames closest to the frame loss position, and using them as the prediction parameters.
The invention provides a cloud game image frame loss compensation device applied to a server. The device comprises:
a receiving unit configured to receive a request;
a transmission unit for transmitting the game picture prediction model to the client;
the sending unit is also used for sending the game video frame to the client.
The device also has the following characteristics: the receiving unit is specifically configured to:
receiving a game running request and a game scene switching request;
the sending unit is specifically configured to:
and sending the basic picture prediction model and the scene picture prediction model to a client.
The device also has the following characteristics: the device further comprises a training unit, wherein the training unit is specifically configured to:
receiving training data, wherein the training data comprises game videos of various scene pictures;
and constructing a game picture prediction model based on deep learning according to the training data.
With the cloud game image frame loss compensation method and device above, when frame loss is detected while the cloud game runs, the game picture prediction model predicts a replacement frame from the game video frames received before the frame loss position, and the predicted video frame is displayed at that position, compensating for the effect of frame loss on the game picture, enhancing the display, and improving the user's visual experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow diagram illustrating a cloud game image frame loss compensation method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a cloud game image frame loss compensation method according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a cloud game image frame loss compensation method according to an exemplary embodiment;
FIG. 4 is a block diagram illustrating a cloud game image frame loss compensation apparatus according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating a cloud game image frame loss compensation apparatus according to an exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
When a user plays a cloud game, the game itself runs on the server. The server sends the game video pictures to the client, and the client displays the video frame data it receives.
However, during game video transmission, network fluctuation and other factors can cause video frame data to arrive late or discontinuously at the client, i.e., frame loss occurs, and the picture displayed by the client is lost or the frame rate drops.
In the related art, because cloud games demand high video real-time performance, no video buffer is set: the client displays each frame as it is received. After frame loss occurs, the client does not wait for the server to resend the frame; the lost frame is simply ignored, so the problems of frame loss and picture loss are not really solved.
To solve these problems, the invention provides a cloud game image frame loss compensation method: when frame loss occurs while the cloud game runs, a game picture prediction model predicts a replacement frame from the game video frames received before the frame loss position, and the predicted video frame is displayed at that position. This compensates for the effect of frame loss on the game picture display, enhances the display, and improves the user's visual experience.
According to an exemplary embodiment, as shown in fig. 1, the present disclosure provides a cloud game image frame loss compensation method applied to a client. The client can be a mobile terminal such as a mobile phone or tablet computer; the user downloads the cloud game APP on the client to run the cloud game. The method comprises the following steps:
and S110, receiving and caching the game picture prediction model sent by the server.
Generally, a cloud game includes a basic game screen and finer scene screens determined by the user's selections. The network condition between the client and the server changes continuously, and it cannot be predicted where the network will be poor. Therefore, to ensure the client's display quality while the cloud game runs, frame loss must be compensated both at the start of the game, while the basic game picture is displayed, and later, after the user switches the scene display.
In the implementation, at the game-start stage, the client receives and caches the basic picture prediction model that the server feeds back based on the game running request. When the user actively switches the game scene, or the scene must change as the game progresses, the server sends the scene picture prediction model corresponding to the new game scene, and the client receives and caches it based on the game scene switching request. Each cloud game has its own visual style, and the various game pictures consistent with that style can be understood as basic pictures; when frame loss occurs while basic pictures are being transmitted, they can be predicted with the basic picture prediction model. A scene picture in this embodiment can be understood as a game scene that does not occur frequently but whose display follows a regular pattern, so its content can be predicted; an example is a settings interface that appears during the game. When frame loss occurs in such a scene, the scene picture prediction model is used to predict the missing video frame.
Here, it should be noted that the game picture prediction models the client receives in this step are stored in advance on the server. A different game picture prediction model is stored for each game that runs on the server, and according to the request sent by the client indicating which game is being run, the server sends the model corresponding to that game.
And S120, receiving and caching the game video frame sent by the server.
Since the game runs on the server and the client is only responsible for displaying the game picture, the client must continuously receive game video frames from the server to keep the user's game experience smooth.
Because the game video frames are numerous, the server encodes them before sending; encoding both improves the security of transmission and increases the sending speed.
Since the server encodes the frames before transmission, the client in this step decodes each received game video frame before displaying it. The encoding and decoding of video frames can use conventional techniques in the field and are not detailed here.
In addition, in steps S110 and S120 above, the client stores the received game picture prediction model and game video frames in a cache; when the user quits the game on the client, the cached content is cleared so it does not keep occupying the client's memory. Moreover, as the game progresses, content cached early in the game is overwritten by content cached later, so the cache never grows too large. This improves the picture display while limiting the client's memory usage, improving the user's game experience.
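The overwrite-as-the-game-progresses behavior above is naturally modeled by a bounded cache. A minimal sketch, assuming a simple capacity-bounded store (the `FrameCache` class and its method names are invented for illustration):

```python
from collections import deque

class FrameCache:
    """Hypothetical client-side cache: once capacity is reached, newer
    frames overwrite the oldest ones, bounding memory use."""
    def __init__(self, capacity):
        self._frames = deque(maxlen=capacity)  # oldest entries are dropped automatically

    def put(self, frame_index, frame_data):
        self._frames.append((frame_index, frame_data))

    def last(self, n):
        """Return up to the n most recently cached frames, oldest first."""
        return list(self._frames)[-n:]

    def clear(self):
        """Called when the user quits the game; all cached content is discarded."""
        self._frames.clear()

cache = FrameCache(capacity=4)
for i in range(1, 7):            # caching frames 1..6 keeps only frames 3..6
    cache.put(i, f"frame-{i}")
```

`deque(maxlen=...)` evicts from the opposite end on append, which matches the description of later content covering earlier content without the cache growing.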
S130, when frame loss occurs, determining a prediction parameter according to the frame loss position.
In this step, the method for determining the prediction parameters according to the frame loss position includes the following two methods:
firstly, the prediction parameters are determined in real time according to the frame loss position. Because the games generally have unique styles and are uniform in style, and the number of lost frames is not too large, the game generally only loses a few frames and is likely to have slight stutter or flash green pictures in the display process. For such a situation, the current display frame can be directly used as a prediction parameter according to the style of the current game and the current frame loss position, and then the game video frame predicted according to the current video frame is displayed, so that the display problems of blocking and the like are avoided.
Secondly, a preset number of cached game video frames located immediately before the frame loss position are determined as the prediction parameters.
While receiving game video frames from the server, the client checks whether frame loss occurs. When it does, the client analyzes the decoded game video frames, takes the preset number of cached frames immediately preceding the frame loss position as the prediction parameters, and predicts the video frame at the frame loss position from them to make up for the loss. That is, the content of the video frame that should be displayed at the frame loss position is predicted from the preset number of adjacent cached frames before it. The preset number may be, for example, one, two, or five frames. A larger preset number generally improves prediction accuracy, so multiple frames are preferred; but more is not always better: too large a preset number occupies more client resources and may hurt client performance.
In one example, if the frame loss position is 51 th frame and the preset number is 3, it is determined that the 50 th frame, the 49 th frame, and the 48 th frame, which have been received and buffered by the client, are prediction parameters.
In another example, if the frame loss position is 20 th frame and the preset number is 1, it is determined that 19 th frame that the client has received and buffered is a prediction parameter.
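The two worked examples above amount to a simple index computation. A sketch, with a hypothetical helper name (`prediction_parameters` is not from the patent):

```python
def prediction_parameters(loss_position, preset_number, cached_indices):
    """Pick the preset number of cached frame indices immediately before
    the frame-loss position (illustrative helper, not the patented code)."""
    wanted = range(loss_position - preset_number, loss_position)
    return [i for i in wanted if i in cached_indices]

# The client has received and cached frames 1..50 when frame 51 is lost.
cached = set(range(1, 51))
```

With `loss_position=51` and `preset_number=3` this yields frames 48, 49, and 50, matching the first example; with `loss_position=20` and `preset_number=1` it yields frame 19, matching the second.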
In this step, the client can determine whether frame loss occurs by, for example, monitoring the network condition in real time: during data transmission, when the measured transmission flow falls below a preset value, the network condition is judged poor and frame loss is assumed to occur. The frame loss position is then the position of the video frame being received when the transmission flow is detected to fall below the preset value.
Alternatively, the client may determine whether frame loss occurs as follows:
With a constant frame rate FPS of N, the theoretical receive interval is FRAMETIME = 1000/N milliseconds per frame of data. That is, after a frame is received, the next frame is expected within one FRAMETIME interval. In practice, network fluctuation and varying image frame sizes mean frames do not arrive at exactly the theoretical rate, so, given the constant codec delay, the timeout for declaring frame loss can be set to twice the frame interval at the minimum smooth frame rate (25 FPS). That is,
CHECKTIME = 1000/25 × 2 = 80 milliseconds.
In other words, if the next frame of image data is not received within 80 milliseconds of the previous frame, the frame loss compensation action is performed and the predicted video frame is displayed at the frame loss position, so that the user does not perceive the frame loss and the game experience improves.
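The timeout rule above can be expressed directly. A minimal sketch, with invented function names (the constants come from the description: minimum smooth frame rate 25 FPS, factor 2):

```python
def check_time_ms(min_smooth_fps=25, factor=2):
    """CHECKTIME from the description: twice the frame interval at the
    minimum smooth frame rate, i.e. 1000/25 * 2 = 80 ms."""
    return 1000 // min_smooth_fps * factor

def frame_lost(last_frame_arrival_ms, now_ms):
    """Treat a frame as lost when no new frame has arrived within
    CHECKTIME of the previous one (sketch of the timeout check)."""
    return now_ms - last_frame_arrival_ms > check_time_ms()
```

In a real client this check would run on a timer alongside the receive loop; here it only captures the arithmetic of the rule.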
Of the two modes above, the second yields more accurate prediction parameters, a predicted video frame closer to the real game picture, and a better visual effect.
And S140, determining a predicted video frame according to the game picture prediction model and the prediction parameters.
As described in step S110, the game picture prediction model includes a basic picture prediction model and a scene picture prediction model. Therefore, in this step, when predicting a video frame, the prediction parameters are input to the basic picture prediction model or the scene picture prediction model, which outputs a predicted picture. That predicted picture is the predicted video frame: a frame, predicted by the model, that approximates the video frame that should originally have been displayed at the frame loss position.
Different models can be used for prediction in different stages of game running, for example, in the initial stage of game running, when a user does not enter a game scene, a basic picture prediction model can be used for predicting a picture with frame loss occurring in the initial stage of game. When the user switches scenes according to the game progress, the scene picture prediction model can be used for predicting the lost frame pictures.
Here, it should be noted that, since a game usually contains multiple game scenes, the method in this embodiment may store multiple scene picture prediction models, one per game scene. When the client requests a particular game scene, the server sends the corresponding scene picture prediction model to the client; models for scenes the client never requests are not sent, so unused prediction models do not occupy the client's storage, and traffic is reduced as well. Loading scene picture prediction models on demand in this way improves prediction accuracy, keeps model size down, and improves the user experience.
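Choosing between the basic and scene models is a simple dispatch. A sketch under stated assumptions: the function name, the `in_scene` flag, and the stand-in lambda models are all invented for illustration; a real implementation would run a deep-learning forward pass.

```python
def predict_lost_frame(prediction_params, base_model, scene_model, in_scene):
    """Route the prediction parameters to the scene picture prediction
    model when a game scene is active, otherwise to the basic picture
    prediction model (hypothetical dispatch; models are callables)."""
    model = scene_model if in_scene else base_model
    return model(prediction_params)

# Stand-in models that just tag and repeat the last cached frame.
base = lambda frames: ("base-predicted", frames[-1])
scene = lambda frames: ("scene-predicted", frames[-1])
```

At game start, before any scene is entered, the dispatch falls through to the basic model; after a scene switch request has loaded a scene model, that model handles losses instead.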
And S150, displaying the predicted video frame at the frame loss position.
In step S140, the predicted video frame determined from the basic or scene picture prediction model is the same as or similar to the lost video frame. Displaying it at the frame loss position, where the lost frame should originally have appeared, replaces the lost frame, solves the picture loss problem, and improves the user experience.
With the cloud game image frame loss compensation method in this embodiment, frame loss is no longer ignored: it is actively monitored, the lost frame's picture is predicted from the cached video frames and the prediction model obtained from the server, and the predicted video frame is displayed at the frame loss position, avoiding picture loss and giving a better visual display.
Meanwhile, because the game picture prediction model is received from the server, the client only needs to run forward-propagation computation with the cached video frames and the model; the computation is fast enough to suit deployment on mobile devices.
According to an exemplary embodiment, as shown in fig. 2, the present embodiment provides a cloud game image frame loss compensation method, which is applied to a server, for example, a server running a cloud game. The method in this embodiment comprises the steps of:
s210, based on the received request, the game picture prediction model is sent to the client.
In this step, the server sends the game picture prediction model to the client only when it receives a request from the client; when no request is received, no model is sent, so the client's storage and processing resources are not occupied unnecessarily.
Here, it should be noted that the prediction models the server sends to the client differ depending on the received request.
In one example, the server receives a game running request from the client, and at this time, the client does not enter the scene switching process, and the server sends the basic picture prediction model to the client.
In another example, as the game progresses, the client sends a game scene switching request, and the server sends a scene picture prediction model corresponding to the game scene switching request sent by the client to the client.
In this embodiment, the server sends prediction models to the client according to the game's running stage and the requests the client sends; scene picture prediction models the client never requests are not sent, reducing the occupation of client resources.
The game picture prediction model involved in this step is trained in advance and pre-stored at the server. To train it, the server receives training data comprising game videos of various scene pictures and constructs a deep-learning-based game picture prediction model from that data.
Because the game runs on the server, the training data is easy to obtain at low cost: game videos of different game scene pictures are collected as the raw training material. Since the basic picture prediction model and the scene picture prediction models are later sent to the client separately, training is likewise performed separately, in each case building a prediction model that predicts the picture of the next video frame from the preceding several video frames.
Training in this embodiment is unsupervised (no data labeling): the real video frames in the training data are compared with the predicted frames, and the internal parameters of the model under construction are adjusted according to the deviation. Through this process the model's accuracy is progressively improved, finally yielding a game picture prediction model that can be sent to the client to help it predict lost video frames.
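As a rough illustration of this unsupervised loop (not the patent's actual deep network), the sketch below shrinks the "model" to a single learnable extrapolation weight and adjusts it by comparing each predicted frame against the real next frame; the function names, the gradient step, and the synthetic video are all invented for the example.

```python
def predict_next(prev2, prev1, w):
    """Extrapolate the next frame from the last two frames (per pixel)."""
    return [p1 + w * (p1 - p2) for p2, p1 in zip(prev2, prev1)]

def train(frames, w=0.0, lr=0.01, epochs=200):
    """Unsupervised training: the target is simply the real next frame."""
    for _ in range(epochs):
        for t in range(2, len(frames)):
            pred = predict_next(frames[t - 2], frames[t - 1], w)
            # Gradient of the mean squared error with respect to w.
            grad = sum(2.0 * (p - r) * (p1 - p2)
                       for p, r, p1, p2 in zip(pred, frames[t],
                                               frames[t - 1], frames[t - 2]))
            w -= lr * grad / len(frames[t])
    return w

# Synthetic "video": every 4-pixel frame advances by a constant step, so the
# ideal extrapolation weight the training should recover is w = 1.
frames = [[float(t + i) for i in range(4)] for t in range(10)]
w = train(frames)
```

The deviation between predicted and real frames drives the parameter update, exactly as in the embodiment, just with one weight instead of a deep network's many.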
And S220, sending the game video frame to the client.
While the game runs, the client performs none of the in-game computation; it only displays the video frames produced as the game runs. The server therefore needs to send the game video frames to the client, encoding each video frame before transmission.
This embodiment adopts hierarchical training: the scene picture prediction models are trained per scene alongside a game-level basic picture prediction model, and the models are then loaded dynamically as subsequent client requests arrive, balancing prediction accuracy against model size. Meanwhile, because training is unsupervised, no fine-grained manual labeling is needed, reducing the training cost.
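The dynamic loading of a game-level base model plus per-scene models can be pictured as the following server-side dispatch sketch; the `ModelStore` class, the request dictionaries, and the string model payloads are hypothetical stand-ins, not part of the patent.

```python
class ModelStore:
    """Holds one base picture prediction model per game plus per-scene models,
    and picks which one to send for a given client request."""

    def __init__(self):
        self.base = {}    # game_id -> base picture prediction model
        self.scenes = {}  # (game_id, scene_id) -> scene picture prediction model

    def register(self, game_id, model, scene_id=None):
        if scene_id is None:
            self.base[game_id] = model
        else:
            self.scenes[(game_id, scene_id)] = model

    def model_for_request(self, game_id, request):
        # Game-running requests get the base model; scene-switch requests get
        # the matching scene model, falling back to the base model if none.
        if request.get("type") == "scene_switch":
            key = (game_id, request["scene_id"])
            return self.scenes.get(key, self.base.get(game_id))
        return self.base.get(game_id)

store = ModelStore()
store.register("racer", "base-model-v1")
store.register("racer", "desert-model-v1", scene_id="desert")
```

Only the models a session actually requests ever leave the store, mirroring the embodiment's goal of not occupying client resources with irrelevant scene models.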
According to an exemplary embodiment, as shown in fig. 3, this embodiment provides a cloud game image frame loss compensation method, including the following steps:
s301, the client sends a game running request to the server.
S302, the server runs the game.
S303, the server side sends the basic picture prediction model to the client side.
S304, the server side encodes the game video frame.
S305, the server side sends the encoded game video frame to the client side.
S306, the client sends a game scene switching request to the server.
S307, the server side sends the scene picture prediction model to the client side.
S308, the client detects that a frame loss has occurred.
S309, the client decodes the game video frame.
S310, the client determines the prediction parameters and determines the prediction video frame according to the prediction parameters.
In this step, when a frame loss occurs, the preset number of cached game video frames immediately preceding the frame loss position are determined as the prediction parameters.
The prediction parameters are input into the basic picture prediction model or the scene picture prediction model, which outputs a predicted picture; the predicted picture is taken as the predicted video frame.
S311, the client displays the predicted video frame at the frame loss position.
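Steps S308–S311 can be sketched as follows on the client side; the frame buffer, the preset count `N`, and the per-pixel averaging placeholder model are illustrative assumptions standing in for the downloaded basic/scene picture prediction model.

```python
from collections import deque

N = 3  # preset number of cached frames used as prediction parameters

def average_model(frames):
    # Placeholder "prediction model": per-pixel mean of the cached frames.
    return [sum(px) / len(frames) for px in zip(*frames)]

class FrameBuffer:
    def __init__(self, model, depth=N):
        self.cache = deque(maxlen=16)  # decoded game video frames
        self.model = model             # model received from the server
        self.depth = depth

    def on_frame(self, frame):
        """Normal path: cache the decoded frame and display it."""
        self.cache.append(frame)
        return frame

    def on_frame_loss(self):
        """Frame-loss path: take the preset number of cached frames
        immediately before the loss position as prediction parameters,
        run the model, and display the predicted frame in the gap."""
        params = list(self.cache)[-self.depth:]
        predicted = self.model(params)
        self.cache.append(predicted)  # predictions can feed later predictions
        return predicted

buf = FrameBuffer(average_model)
for f in ([1.0, 2.0], [2.0, 3.0], [3.0, 4.0]):
    buf.on_frame(f)
filled = buf.on_frame_loss()
```

Swapping `average_model` for the real basic or scene picture prediction model leaves the surrounding buffering and display logic unchanged.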
According to an exemplary embodiment, as shown in fig. 4, the present embodiment discloses a cloud game image frame loss compensation apparatus, which is applied to a client and is used to implement the cloud game image frame loss compensation method shown in fig. 1. The cloud game image frame loss compensation apparatus 100 in this embodiment includes a receiving module 110, a buffering module 120, a processing module 130, and a display module 140.
The receiving module 110 is configured to receive the game picture prediction model sent by the server, and is further configured to receive the game video frames sent by the server. The buffer module 120 is configured to cache the game picture prediction model and the game video frames. The processing module 130 is configured to, when a frame loss occurs, determine the preset number of cached game video frames preceding the frame loss position as the prediction parameters, and is further configured to determine a predicted video frame according to the game picture prediction model and the prediction parameters. The display module 140 is configured to display the predicted video frame at the frame loss position.
According to an exemplary embodiment, as shown in fig. 5, the present embodiment discloses a cloud game image frame loss compensation apparatus, which is applied to a server and is used to implement the cloud game image frame loss compensation method shown in fig. 2. The cloud game image frame loss compensation apparatus 200 in this embodiment includes a receiving unit 210, a sending unit 220, and a training unit 230.
The receiving unit 210 is configured to receive requests, such as a game running request and a game scene switching request. The sending unit 220 is configured to send the game picture prediction model to the client; and also for sending game video frames to the client. The training unit 230 is specifically configured to receive training data, where the training data includes game videos of various scene pictures, and construct a game picture prediction model based on deep learning according to the training data.
The above-described aspects may be implemented individually or in various combinations, and such variations are within the scope of the present invention.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by instructing the relevant hardware through a program, and the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the foregoing embodiments may also be implemented by using one or more integrated circuits, and accordingly, each module/unit in the foregoing embodiments may be implemented in the form of hardware, and may also be implemented in the form of a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
It is to be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, so that an article or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of additional like elements in the article or device comprising the element.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit it, and the present invention has been described in detail with reference to the preferred embodiments. It will be understood by those skilled in the art that various modifications and equivalent arrangements may be made without departing from the spirit and scope of the present invention, which is defined by the appended claims.

Claims (14)

1. A cloud game image frame loss compensation method is applied to a client side and is characterized by comprising the following steps:
receiving and caching a game picture prediction model sent by a server;
receiving and caching a game video frame sent by a server;
when frame loss occurs, determining a prediction parameter according to a frame loss position;
determining a predicted video frame according to the game picture prediction model and the prediction parameters;
and displaying the predicted video frame at the frame loss position.
2. The cloud game image frame loss compensation method of claim 1, wherein the receiving and buffering of the game picture prediction model transmitted by the server comprises:
receiving and caching a basic picture prediction model fed back by the server based on a game running request; and,
receiving and caching a scene picture prediction model fed back by the server based on a game scene switching request.
3. The cloud game image frame loss compensation method of claim 2, wherein said determining a predicted video frame based on said game picture prediction model and said prediction parameters comprises:
inputting the prediction parameters into the base picture prediction model or the scene picture prediction model;
the basic picture prediction model or the scene picture prediction model outputs a prediction picture;
determining the predicted picture to be the predicted video frame.
4. The cloud game image frame loss compensation method of claim 1, wherein the determining the prediction parameter according to the frame loss position comprises:
determining a prediction parameter in real time according to the frame loss position; or,
determining game video frames which are cached from a frame loss position and located in front of the frame loss position and are away from the frame loss position by a preset number to serve as prediction parameters.
5. A cloud game image frame loss compensation method is applied to a server side, and is characterized by comprising the following steps:
based on the received request, sending a game picture prediction model to the client;
and sending the game video frame to the client.
6. The cloud game image frame loss compensation method of claim 5, wherein sending the game picture prediction model to the client based on the received request comprises:
sending a basic picture prediction model to the client based on a received game running request; and,
sending a scene picture prediction model to the client based on a received game scene switching request.
7. The cloud game image frame loss compensation method of claim 5, wherein the method further comprises:
receiving training data, wherein the training data comprises game videos of various scene pictures;
and constructing a game picture prediction model based on deep learning according to the training data.
8. A cloud game image frame loss compensation device is applied to a client side, and is characterized by comprising:
the receiving module is used for receiving the game picture prediction model sent by the server;
the receiving module is also used for receiving the game video frame sent by the server;
the cache module is used for caching the game picture prediction model and the game video frame;
the processing module is used for determining a prediction parameter according to the frame loss position;
the processing module is further used for determining a predicted video frame according to the game picture prediction model and the prediction parameters;
and the display module is used for displaying the predicted video frame at the frame loss position.
9. The cloud game image frame loss compensation apparatus of claim 8, wherein the receiving module is specifically configured to:
receiving a basic picture prediction model fed back by the server based on a game running request; and,
receiving a scene picture prediction model fed back by the server based on a game scene switching request;
the cache module is specifically configured to:
caching the basic picture prediction model fed back by the server based on the game running request; and,
caching the scene picture prediction model fed back by the server based on the game scene switching request.
10. The cloud game image frame loss compensation apparatus of claim 9, wherein the processing module is specifically configured to:
inputting the prediction parameters into the base picture prediction model or the scene picture prediction model;
the basic picture prediction model or the scene picture prediction model outputs a prediction picture;
determining the predicted picture to be the predicted video frame.
11. The cloud game image frame loss compensation apparatus of claim 8, wherein the processing module is specifically configured to:
determining a prediction parameter in real time according to the frame loss position; or,
determining game video frames which are cached from a frame loss position and located in front of the frame loss position and are away from the frame loss position by a preset number to serve as prediction parameters.
12. A cloud game image frame loss compensation device is applied to a server side, and is characterized by comprising:
a receiving unit configured to receive a request;
a transmission unit for transmitting the game picture prediction model to the client;
the sending unit is also used for sending the game video frame to the client.
13. The cloud game image frame loss compensation apparatus of claim 12, wherein the receiving unit is specifically configured to:
receiving a game running request and a game scene switching request;
the sending unit is specifically configured to:
and sending the basic picture prediction model and the scene picture prediction model to a client.
14. The cloud game image frame loss compensation apparatus of claim 12, wherein the apparatus further comprises a training unit, the training unit being specifically configured to:
receiving training data, wherein the training data comprises game videos of various scene pictures;
and constructing a game picture prediction model based on deep learning according to the training data.
CN202011639229.1A 2020-12-31 2020-12-31 Cloud game image frame loss compensation method and device Pending CN112839241A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011639229.1A CN112839241A (en) 2020-12-31 2020-12-31 Cloud game image frame loss compensation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011639229.1A CN112839241A (en) 2020-12-31 2020-12-31 Cloud game image frame loss compensation method and device

Publications (1)

Publication Number Publication Date
CN112839241A true CN112839241A (en) 2021-05-25

Family

ID=75926812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011639229.1A Pending CN112839241A (en) 2020-12-31 2020-12-31 Cloud game image frame loss compensation method and device

Country Status (1)

Country Link
CN (1) CN112839241A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992987A (en) * 2021-12-27 2022-01-28 北京蔚领时代科技有限公司 Intelligent code rate adjusting system and method suitable for cloud game scene
CN114390257A (en) * 2021-11-08 2022-04-22 浙江华云信息科技有限公司 Video management and control platform integrated with various video equipment
CN115412763A (en) * 2021-05-28 2022-11-29 中国移动通信有限公司研究院 Video data transmission method, terminal and server
CN115475382A (en) * 2022-09-06 2022-12-16 咪咕文化科技有限公司 Image compensation method, terminal equipment, cloud server and storage medium
CN118079380A (en) * 2024-04-29 2024-05-28 深圳云天畅想信息科技有限公司 Cloud game terminal virtual display control method, cloud game terminal virtual display control system and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110250948A1 (en) * 2010-04-08 2011-10-13 Wms Gaming, Inc. Video compression in gaming machines
CN102428483A (en) * 2009-03-23 2012-04-25 生命力有限公司 System and Method for Multi-Stream Video Compression
CN108379832A (en) * 2018-01-29 2018-08-10 珠海金山网络游戏科技有限公司 A kind of game synchronization method and apparatus
CN108810281A (en) * 2018-06-22 2018-11-13 Oppo广东移动通信有限公司 Lost frame compensation method, lost frame compensation device, storage medium and terminal
CN110270092A (en) * 2019-06-27 2019-09-24 三星电子(中国)研发中心 The method and device and electronic equipment that frame per second for electronic equipment is promoted
US20190358541A1 (en) * 2018-05-24 2019-11-28 Microsoft Technology Licensing, Llc Dead reckoning and latency improvement in 3d game streaming scenario

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102428483A (en) * 2009-03-23 2012-04-25 生命力有限公司 System and Method for Multi-Stream Video Compression
US20110250948A1 (en) * 2010-04-08 2011-10-13 Wms Gaming, Inc. Video compression in gaming machines
CN108379832A (en) * 2018-01-29 2018-08-10 珠海金山网络游戏科技有限公司 A kind of game synchronization method and apparatus
US20190358541A1 (en) * 2018-05-24 2019-11-28 Microsoft Technology Licensing, Llc Dead reckoning and latency improvement in 3d game streaming scenario
CN108810281A (en) * 2018-06-22 2018-11-13 Oppo广东移动通信有限公司 Lost frame compensation method, lost frame compensation device, storage medium and terminal
CN110270092A (en) * 2019-06-27 2019-09-24 三星电子(中国)研发中心 The method and device and electronic equipment that frame per second for electronic equipment is promoted

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412763A (en) * 2021-05-28 2022-11-29 中国移动通信有限公司研究院 Video data transmission method, terminal and server
CN114390257A (en) * 2021-11-08 2022-04-22 浙江华云信息科技有限公司 Video management and control platform integrated with various video equipment
CN113992987A (en) * 2021-12-27 2022-01-28 北京蔚领时代科技有限公司 Intelligent code rate adjusting system and method suitable for cloud game scene
CN115475382A (en) * 2022-09-06 2022-12-16 咪咕文化科技有限公司 Image compensation method, terminal equipment, cloud server and storage medium
CN115475382B (en) * 2022-09-06 2024-12-13 咪咕文化科技有限公司 Image compensation method, terminal device, cloud server and storage medium
CN118079380A (en) * 2024-04-29 2024-05-28 深圳云天畅想信息科技有限公司 Cloud game terminal virtual display control method, cloud game terminal virtual display control system and electronic equipment
CN118079380B (en) * 2024-04-29 2024-07-05 深圳云天畅想信息科技有限公司 Cloud game terminal virtual display control method, cloud game terminal virtual display control system and electronic equipment

Similar Documents

Publication Publication Date Title
CN112839241A (en) Cloud game image frame loss compensation method and device
CN112717389A (en) Cloud game image screen-splash repairing method and device
CN109600654B (en) Bullet screen processing method, device and electronic device
US20200090324A1 (en) Method and Apparatus for Determining Experience Quality of VR Multimedia
CN114699767B (en) Game data processing method, device, medium and electronic equipment
CN112203111A (en) Multimedia resource preloading method and device, electronic equipment and storage medium
KR20200027059A (en) Output data providing server and output data providing method
CN112533048A (en) Video playing method, device and equipment
CN112203034A (en) Frame rate control method and device and electronic equipment
CN115412766B (en) Display control method and electronic device
CN108235075B (en) Video quality grade matching method, computer readable storage medium and terminal
CN104053002A (en) Video decoding method and device
CN112087646B (en) Video playing method and device, computer equipment and storage medium
US20240214443A1 (en) Methods, systems, and media for selecting video formats for adaptive video streaming
US12143595B2 (en) Transmission apparatus, reception apparatus, transmission method, reception method, and program
CN113975793A (en) Cloud game rendering method and related equipment
RU2662648C1 (en) Method and device for data processing
CN109640094B (en) Video decoding method, device and electronic device
CN105025343A (en) Caching method and device of TS video
JP7472286B2 (en) Method, system, and medium for selecting a format for streaming a media content item - Patents.com
CN115623248B (en) Data processing method, frame rate adjustment method, device, equipment and computer medium
EP3739874A1 (en) Video playback method and device, terminal device and computer readable storage medium
JP7318123B2 (en) Method, system and medium for streaming video content using adaptive buffering
CN115177955A (en) Cloud game interaction method and device, readable medium and electronic equipment
CN115033330A (en) Content display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210525