
CN111340917B - Three-dimensional animation generation method and device, storage medium and computer equipment - Google Patents

Three-dimensional animation generation method and device, storage medium and computer equipment

Info

Publication number
CN111340917B
CN111340917B (application CN202010085820.0A)
Authority
CN
China
Prior art keywords
dimensional animation
information
real
binding logic
binding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010085820.0A
Other languages
Chinese (zh)
Other versions
CN111340917A (en)
Inventor
李静翔
杨梦菁
刘杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010085820.0A priority Critical patent/CN111340917B/en
Publication of CN111340917A publication Critical patent/CN111340917A/en
Application granted granted Critical
Publication of CN111340917B publication Critical patent/CN111340917B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a three-dimensional animation generation method, a three-dimensional animation generation device, a storage medium and computer equipment, wherein the method comprises the following steps: acquiring a video image in real time through a camera device, inputting the video image into a real-time motion capture model for recognition, and obtaining output motion control information; inputting the motion control information into the three-dimensional animation binding logic plug-in, and calculating to obtain three-dimensional animation control parameters through the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in; and acquiring a three-dimensional animation initial model, and driving the three-dimensional animation initial model by using the three-dimensional animation control parameters to generate the three-dimensional animation. The scheme provided by the application can improve the accuracy of the generated three-dimensional animation.

Description

Three-dimensional animation generation method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a three-dimensional animation, a storage medium, and a computer device.
Background
With the development of motion capture technology, motion capture can be used to generate three-dimensional animation in real time, such as virtual idol live broadcast. At present, the motion capture technology directly generates parameters of bones and blendshapes (deformation targets) through captured images, and then generates three-dimensional animation according to the parameters of the bones and the blendshapes.
However, the existing method for generating the three-dimensional animation directly according to the output of the motion capture technology has the problem that the generated three-dimensional animation is not accurate enough.
Disclosure of Invention
Based on this, it is necessary to provide a three-dimensional animation generation method, apparatus, storage medium, and computer device for solving the technical problem that the generated three-dimensional animation is not accurate enough.
A three-dimensional animation generation method, comprising:
acquiring a video image in real time through a camera device, inputting the video image into a real-time motion capture model for recognition, and obtaining output motion control information;
inputting the motion control information into the three-dimensional animation binding logic plug-in, and calculating to obtain three-dimensional animation control parameters through the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in;
and acquiring a three-dimensional animation initial model, and driving the three-dimensional animation initial model by using the three-dimensional animation control parameters to generate the three-dimensional animation.
A three-dimensional animation generation apparatus comprising:
the control information identification module is used for acquiring a video image in real time through the camera device, inputting the video image into the real-time motion capture model for identification, and obtaining output motion control information;
the parameter calculation module is used for inputting the motion control information into the three-dimensional animation binding logic plug-in and calculating the three-dimensional animation control parameters through the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in;
and the animation generation module is used for acquiring the three-dimensional animation initial model, driving the three-dimensional animation initial model by using the three-dimensional animation control parameters and generating the three-dimensional animation.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the steps of:
acquiring a video image in real time through a camera device, inputting the video image into a real-time motion capture model for recognition, and obtaining output motion control information;
inputting the motion control information into the three-dimensional animation binding logic plug-in, and calculating to obtain three-dimensional animation control parameters through the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in;
and acquiring a three-dimensional animation initial model, and driving the three-dimensional animation initial model by using the three-dimensional animation control parameters to generate the three-dimensional animation.
A storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring a video image in real time through a camera device, inputting the video image into a real-time motion capture model for recognition, and obtaining output motion control information;
inputting the motion control information into the three-dimensional animation binding logic plug-in, and calculating to obtain three-dimensional animation control parameters through the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in;
and acquiring a three-dimensional animation initial model, and driving the three-dimensional animation initial model by using the three-dimensional animation control parameters to generate the three-dimensional animation.
According to the three-dimensional animation generation method, the device, the storage medium and the computer equipment, the video image is input into the real-time motion capture model for recognition, the output motion control information is obtained, the motion control information is used for obtaining the three-dimensional animation control parameters through the three-dimensional animation binding logic plug-in, then the three-dimensional animation control parameters are used for driving the three-dimensional animation initial model to generate the three-dimensional animation, the generated three-dimensional animation can have the three-dimensional animation binding logic information, and therefore the generated three-dimensional animation is more accurate. The problem that when the three-dimensional animation is directly generated by using the three-dimensional animation control parameters obtained by the real-time motion capture system, the generated three-dimensional animation is not accurate enough due to the fact that the logic information bound by the three-dimensional animation is lost is solved.
Drawings
FIG. 1 is a diagram showing an application environment of a three-dimensional animation generation method according to an embodiment;
FIG. 2 is a flowchart illustrating a three-dimensional animation generation method according to an embodiment;
FIG. 3 is a schematic flow diagram that illustrates the operation of a three-dimensional animation binding logic plug-in in one embodiment;
FIG. 4 is a schematic flow diagram that illustrates obtaining a three-dimensional animation binding logic dynamic link library in one embodiment;
FIG. 5 is a schematic diagram of a three-dimensional animation binding logic interface in one embodiment;
FIG. 6 is a flow diagram illustrating obtaining motion control information in one embodiment;
FIG. 7 is a schematic flow diagram illustrating the training of a real-time motion capture model in one embodiment;
FIG. 8 is a schematic diagram illustrating a process for invoking a third party real-time motion capture model, in one embodiment;
FIG. 9 is a schematic flow chart diagram illustrating the process for deriving three-dimensional animation control parameters in one embodiment;
FIG. 10 is a flowchart of obtaining face capture parameters, in an embodiment;
FIG. 11 is a schematic diagram of a three-dimensional animation generated in the embodiment of FIG. 10;
FIG. 12 is a diagram of a three-dimensional animation curve generated in one embodiment;
FIG. 13 is a block diagram showing the construction of a three-dimensional animation generating apparatus according to an embodiment;
FIG. 14 is a block diagram showing a configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to illustrate the present application and not to limit it.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is the science of studying how to make a machine "see"; more specifically, it uses cameras and computers in place of human eyes to identify, track and measure targets, and further processes the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how computers simulate or implement human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
The scheme provided by the embodiment of the application relates to the computer vision technology, the machine learning technology and other technologies of artificial intelligence, and is specifically explained by the following embodiments:
FIG. 1 is a diagram of an application environment of a three-dimensional animation generation method according to an embodiment. Referring to fig. 1, the three-dimensional animation generation method is applied to a three-dimensional animation generation system. The three-dimensional animation generation system includes a terminal 110 and a server 120. The terminal 110 and the server 120 are connected through a network. The terminal 110 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers. The three-dimensional animation generation method can be applied to a terminal and can also be applied to a server.
Specifically, the terminal 110 obtains a video image through a camera device in real time, inputs the video image into a real-time motion capture model for recognition, and obtains output motion control information; the terminal 110 inputs the motion control information into the three-dimensional animation binding logic plug-in, and three-dimensional animation control parameters are obtained through calculation of the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in; the terminal 110 may obtain the three-dimensional animation initial model from the server 120, and drive the three-dimensional animation initial model using the three-dimensional animation control parameters to generate the three-dimensional animation.
As shown in FIG. 2, in one embodiment, a three-dimensional animation generation method is provided. The embodiment is mainly illustrated by applying the method to the terminal 110 in fig. 1. Referring to fig. 2, the three-dimensional animation generation method specifically includes the following steps:
s202, acquiring a video image in real time through the camera device, inputting the video image into the real-time motion capture model for recognition, and obtaining output motion control information.
The camera device is used for shooting a target object to obtain a video image. The target object can be a person, an animal, a moving object or the like, and the camera device can be a motion capture camera, a video camera or the like. The real-time motion capture model is used for recognizing the motion control information captured in the video image, and is trained in advance on historical data using a neural network algorithm. The motion control information is the recognized motion change information of the target object. For example, if the eyes of the person in the video image are closed, the eye motion control information is recognized as closed.
Specifically, the camera device collects a video image of a target object in real time, the video image is sent to the terminal, the terminal obtains the video image through the camera device in real time, and the video image is directly input into a trained real-time motion capture model to obtain output motion control information.
And S204, inputting the motion control information into the three-dimensional animation binding logic plug-in, and calculating to obtain the three-dimensional animation control parameters through the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in.
The three-dimensional animation binding logic plug-in is a plug-in carrying preset three-dimensional animation binding logic. The three-dimensional animation binding logic information is a binding relationship prepared in advance with three-dimensional animation production software by binding control nodes with different functions; the three-dimensional production software can be Maya software, MotionBuilder software or the like. For example, an animation binding engineer creates the three-dimensional animation binding logic information with Maya (3D creation software that includes binding creation capability and provides animation creation functions), using the various nodes in Maya. A node is an independent unit with an arithmetic function; for example, the addDoubleLinear node in Maya adds its two input values and outputs the result. The three-dimensional animation control parameters are used for generating the three-dimensional animation and include, but are not limited to, bone displacement, bone rotation, bone scaling, blendshape weights, dynamic materials and the like.
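The node concept mentioned above can be illustrated with a minimal C++ sketch. The class names and the evaluate interface below are assumptions made for illustration only; the patent does not prescribe a concrete node API, only that a node is an independent unit with an arithmetic function, such as Maya's addDoubleLinear node, which outputs the sum of its two inputs.

```cpp
#include <iostream>

// A node is an independent unit with an arithmetic function (hypothetical interface).
class Node {
public:
    virtual ~Node() = default;
    virtual double evaluate() const = 0;
};

// Sketch of an addDoubleLinear-style node: outputs the sum of its two inputs.
class AddDoubleLinearNode : public Node {
public:
    AddDoubleLinearNode(double input1, double input2)
        : input1_(input1), input2_(input2) {}
    double evaluate() const override { return input1_ + input2_; }

private:
    double input1_;
    double input2_;
};

int main() {
    // Two input values are added and the result is output, as an addDoubleLinear node would do.
    AddDoubleLinearNode node(0.3, 0.5);
    std::cout << "output: " << node.evaluate() << std::endl;  // prints 0.8
    return 0;
}
```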
Specifically, the terminal inputs the obtained motion control information into the operated three-dimensional animation binding logic plug-in, and the three-dimensional animation control parameters are obtained through calculation of the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in.
And S206, acquiring a three-dimensional animation initial model, and driving the three-dimensional animation initial model by using the three-dimensional animation control parameters to generate the three-dimensional animation.
The three-dimensional animation initial model refers to a three-dimensional animation binding scene made by three-dimensional animation production software, for example, a Rig binding scene made by Maya three-dimensional production software. The three-dimensional animation initial model specifically comprises skeleton initial information, model initial information, deformation target initial information (Blendshape), control node identification and the like.
Specifically, the terminal may directly obtain the three-dimensional animation initial model from a server storing the three-dimensional animation initial model. The terminal can also use three-dimensional production software to produce a three-dimensional animation initial model and export the produced three-dimensional animation initial model into a local file. And the terminal acquires the three-dimensional animation initial model, and drives the three-dimensional animation initial model by using the three-dimensional animation control parameters to generate the three-dimensional animation.
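As a rough illustration of step S206, the sketch below drives a hypothetical initial model with the kinds of control parameters named above (bone displacement, rotation, scaling and deformation target weights). The structures and the set functions are placeholders for illustration, not an actual engine or Maya API.

```cpp
#include <array>
#include <iostream>
#include <map>
#include <string>

// Hypothetical per-bone pose and control parameter set produced by the binding logic plug-in.
struct BonePose {
    std::array<double, 3> translation{};            // bone displacement
    std::array<double, 3> rotation{};               // Euler angles, degrees
    std::array<double, 3> scale{1.0, 1.0, 1.0};     // bone scaling
};

struct AnimationControlParams {
    std::map<std::string, BonePose> bones;            // keyed by bone name
    std::map<std::string, double> blendshapeWeights;  // deformation target weights in [0, 1]
};

// Hypothetical initial model (skeleton and deformation targets from a Rig binding scene).
class InitialModel {
public:
    void setBonePose(const std::string& bone, const BonePose& pose) {
        std::cout << "bone " << bone << " translated by y=" << pose.translation[1] << "\n";
    }
    void setBlendshapeWeight(const std::string& target, double weight) {
        std::cout << "blendshape " << target << " = " << weight << "\n";
    }
};

// Drive the initial model with the control parameters to produce one frame of the animation.
void driveModel(InitialModel& model, const AnimationControlParams& params) {
    for (const auto& [bone, pose] : params.bones) model.setBonePose(bone, pose);
    for (const auto& [target, w] : params.blendshapeWeights) model.setBlendshapeWeight(target, w);
}

int main() {
    AnimationControlParams params;
    params.bones["jaw"] = {{0.0, -0.2, 0.0}, {12.0, 0.0, 0.0}, {1.0, 1.0, 1.0}};
    params.blendshapeWeights["eyeBlinkLeft"] = 0.9;

    InitialModel model;
    driveModel(model, params);  // one frame; in practice this runs for every captured video frame
    return 0;
}
```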
In the three-dimensional animation generation method, the video image is input into the real-time motion capture model for recognition to obtain the output motion control information, the motion control information is passed through the three-dimensional animation binding logic plug-in to obtain the three-dimensional animation control parameters, and the three-dimensional animation control parameters are then used to drive the three-dimensional animation initial model to generate the three-dimensional animation. The generated three-dimensional animation therefore carries the three-dimensional animation binding logic information, which makes it more accurate. This solves the problem that, when a three-dimensional animation is generated directly from the control parameters output by a real-time motion capture system, the binding logic information is lost and the generated three-dimensional animation is not accurate enough.
In one embodiment, as shown in fig. 3, before step S202, that is, before the video image is acquired by the camera device in real time, and the video image is input into the real-time motion capture model for recognition, and the output motion control information is obtained, the method further includes the steps of:
s302, obtaining a three-dimensional animation binding logic dynamic link library, and packaging the three-dimensional animation binding logic dynamic link library to obtain a three-dimensional animation binding logic plug-in.
The three-dimensional animation binding logic dynamic link library is a C++ (programming language) dynamic link library of the three-dimensional animation binding logic and is used for generating the three-dimensional animation binding logic plug-in. The three-dimensional animation binding logic dynamic link library is obtained by converting three-dimensional animation binding logic information produced with three-dimensional production software.
Specifically, the terminal acquires a three-dimensional animation binding logic dynamic link library, encapsulates the three-dimensional animation binding logic dynamic link library, and imports the encapsulated result into three-dimensional animation production software in a plug-in mode.
And S304, running the three-dimensional animation binding logic plug-in.
Specifically, the obtained three-dimensional animation binding logic plug-in is operated in three-dimensional animation production software.
In the embodiment, the three-dimensional animation binding logic plug-in is generated and operated through the three-dimensional animation binding logic dynamic link library, so that the efficiency of generating the three-dimensional animation is improved.
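For illustration only, the sketch below shows one way a host (for example a plug-in shell inside a 3D application or game engine) might load such a binding-logic dynamic link library on a POSIX platform and resolve an evaluation entry point. The library name, entry-point name and signature are assumptions, not taken from the patent; on Windows the equivalent calls would be LoadLibrary/GetProcAddress.

```cpp
#include <dlfcn.h>   // POSIX dynamic loading; Windows would use LoadLibrary/GetProcAddress
#include <iostream>

// Assumed C entry point exported by the binding-logic DLL:
// takes controller input values and writes three-dimensional animation control parameters.
using EvaluateFn = int (*)(const double* controllerValues, int numControllers,
                           double* controlParamsOut, int maxParams);

int main() {
    // Hypothetical library name; the real plug-in would be packaged for Maya, MotionBuilder
    // or a game engine rather than loaded from a standalone program.
    void* handle = dlopen("./librig_binding_logic.so", RTLD_NOW);
    if (!handle) {
        std::cerr << "failed to load binding logic library: " << dlerror() << "\n";
        return 1;
    }

    auto evaluate = reinterpret_cast<EvaluateFn>(dlsym(handle, "EvaluateBindingLogic"));
    if (!evaluate) {
        std::cerr << "entry point not found: " << dlerror() << "\n";
        dlclose(handle);
        return 1;
    }

    const double controllers[3] = {0.8, 0.1, 0.0};  // e.g. values recognized from a video frame
    double params[64] = {};
    int written = evaluate(controllers, 3, params, 64);
    std::cout << written << " control parameters computed\n";

    dlclose(handle);
    return 0;
}
```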
In one embodiment, as shown in fig. 4, the step S302 of obtaining a three-dimensional animation binding logic dynamic link library includes the steps of:
s402, obtaining the three-dimensional animation binding logic data, and analyzing the three-dimensional animation binding logic data to obtain binding information.
The three-dimensional animation binding logic data refers to the data of an animation binding system obtained by an animation binding engineer binding nodes with different functions in three-dimensional production software. For example, fig. 5 shows a Maya animation binding logic interface, in which the control panel (to the right of the human face) of a facial animation binding system made by an animation binding engineer in Maya is displayed; the white and black points in the figure are controllers, and each controller is obtained by connecting a plurality of nodes with different functions. The binding information refers to the three-dimensional animation binding logic information and includes the information of each bound controller and the connection relation information between controllers. The controller information includes the controller identification and the controller attributes. The connection relation information between controllers means that the output result of a previous controller is used as the input of the next controller.
Specifically, after the binding engineer finishes producing the animation binding system logic data with the three-dimensional production software on the terminal, the terminal receives a three-dimensional animation binding logic data processing instruction sent by the binding engineer, obtains the produced animation binding system logic data according to the instruction, and analyzes the animation binding system logic data to obtain the binding information.
S404, acquiring the code template, and generating a corresponding connection relation code according to the connection relation information in the code template and the binding information.
Specifically, the code template is a preset template file for generating code, for example a template file of C++ (computer programming language) code. The connection relation information refers to the connection relations between the controllers; for example, the output result of a previous controller is used as the input data of the next controller. When the terminal needs to generate the connection relation code, the corresponding connection relation code is generated according to the code template and the connection relation information in the binding information.
S406, generating a corresponding node code according to the code template and the node information in the binding information.
Specifically, the node information refers to specific information of nodes included in the controller, such as a name, input data, and output results of each node included in the controller, and the like. And when the connection relation code is generated, generating a corresponding node code in the code template according to the node information in the binding information.
S408, obtaining a three-dimensional animation binding logic code corresponding to the three-dimensional animation binding logic data according to the connection relation code and the node code, and converting the three-dimensional animation binding logic code into a three-dimensional animation binding logic dynamic link library.
The three-dimensional animation binding logic dynamic link library is a dynamic link library generated from the three-dimensional animation binding logic data; the resulting dynamic link library may be a C++ dynamic link library. A Dynamic Link Library (Dynamic-Link Library, abbreviated DLL) is one way to implement the concept of a shared function library. The extensions of these library files are ".dll", ".ocx" (libraries containing ActiveX controls) or ".drv" (legacy system drivers). Dynamic linking provides a way for a process to call functions that are not part of its own executable code; the executable code of those functions is located in a DLL file that contains one or more functions compiled, linked and stored separately from the processes that use them. With dynamic link libraries, updates can be applied to individual modules more easily without affecting other parts of the program. The three-dimensional animation binding logic dynamic link library can run on 3D software platforms including, but not limited to, Maya, MotionBuilder (3D character animation software) and game engines (such as Unreal Engine and Unity), and is not limited to any operating system platform, including but not limited to Windows, Mac, Linux, iOS, Android, and the like.
Specifically, a three-dimensional animation binding logic code corresponding to the three-dimensional animation binding logic data is obtained according to the connection relation code and the node code, and the three-dimensional animation binding logic code is converted into a three-dimensional animation binding logic dynamic link library. For example, the corresponding compiler may be obtained according to the operating system platform of the three-dimensional game engine, and the three-dimensional animation binding logic code is compiled by using the compiler, so as to obtain a three-dimensional animation binding logic dynamic link library that is executable by the operating system platform of the three-dimensional game engine.
In the embodiment, the binding information is obtained through analyzing the three-dimensional animation binding logic data, the corresponding three-dimensional animation binding logic code is generated through the code template, the connection relation information and the node information in the binding information, the three-dimensional animation binding logic code is converted into the three-dimensional animation binding logic dynamic link library, the three-dimensional animation binding logic plug-in is generated by using the three-dimensional animation binding logic dynamic link library, and the efficiency of obtaining the three-dimensional animation binding logic plug-in is improved.
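To make the code-generation idea concrete, the fragment below sketches what generated C++ might look like for a toy binding with two nodes connected in series, where the output of a multiply node feeds an addDoubleLinear-style node. The function names and values are invented for illustration; the actual generated code depends on the code template and the parsed binding information.

```cpp
#include <iostream>

// --- node codes: one small function per node, generated from the node information ---
double multDoubleLinear_node1(double input1, double input2) { return input1 * input2; }
double addDoubleLinear_node2(double input1, double input2) { return input1 + input2; }

// --- connection relation code: the output of the previous node is the input of the next ---
double evaluateBinding(double controllerValue) {
    double node1Out = multDoubleLinear_node1(controllerValue, 2.0);  // e.g. remap a controller value
    double node2Out = addDoubleLinear_node2(node1Out, 0.5);          // offset the remapped value
    return node2Out;  // a three-dimensional animation control parameter, e.g. a blendshape weight
}

int main() {
    std::cout << evaluateBinding(0.25) << std::endl;  // 0.25 * 2.0 + 0.5 = 1.0
    return 0;
}
```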
In one embodiment, as shown in fig. 6, the step S202 of inputting the video image into the real-time motion capture model and obtaining the output motion control information includes the steps of:
s602, inputting the video image into an image key point identification network in a real-time motion capture model to obtain the characteristics of the output image key points.
The image key point identification network is obtained by training on historical video image data using a convolutional neural network; the activation function used by the image key point identification network during training is the ReLU (rectified linear unit) function, and the loss function is a cross-entropy function. The image key point features refer to the features corresponding to the key points in an image, and different images have different features. For example, the feature of an eye key point in a face image lies between 0 and 1, where 0 indicates that the eye is closed and 1 indicates that the eye is open, and so on. The image key point identification network is part of the real-time motion capture model.
Specifically, the terminal inputs the video image into an image key point identification network in a real-time motion capture model to obtain the characteristics of the output image key points.
And S604, inputting the image key point characteristics into the real-time motion capture model for control information identification to obtain output motion control information.
Specifically, the image key point features are linearly converted by the real-time motion capture model to obtain the output motion control information. For example, a linear regression algorithm may be used for the linear conversion: a linear regression model is trained with the linear regression algorithm on historical image key point features and motion control information, where the loss function of the linear regression model is an L2 loss, i.e. the minimized squared error, and the Sigmoid function is used as the activation function. The linear regression model, which is part of the real-time motion capture model, may be placed after the image key point identification network. When the image key point identification network outputs the image key point features, the image key point features are used as the input of the linear regression model for calculation, and the output motion control information is obtained.
In the embodiment, the image key point features in the video image are obtained through the image key point identification network, and then the image key point features are subjected to linear conversion to obtain the output action control information, so that the accuracy of obtaining the action control information is improved.
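A minimal sketch of the linear conversion described above: the image key point features are multiplied by learned weights, a bias is added, and a Sigmoid activation maps the result into [0, 1] as a motion control value (for example, the degree of eye closure). The feature values and weights below are placeholders, not trained values.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Sigmoid activation maps the linear output into [0, 1].
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// One output of the linear regression layer: w . features + b, then Sigmoid.
double controlValue(const std::vector<double>& keypointFeatures,
                    const std::vector<double>& weights, double bias) {
    double z = bias;
    for (size_t i = 0; i < keypointFeatures.size(); ++i) z += weights[i] * keypointFeatures[i];
    return sigmoid(z);
}

int main() {
    // Placeholder key point features (e.g. from the eye region) and placeholder trained weights.
    std::vector<double> features = {0.12, 0.87, 0.40};
    std::vector<double> weights  = {1.5, -2.0, 0.7};
    std::cout << "eye-closed control value: " << controlValue(features, weights, 0.1) << "\n";
    return 0;
}
```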
In one embodiment, as shown in FIG. 7, the step of training the real-time motion capture model comprises:
s702, historical video images and corresponding historical motion control information are obtained.
Specifically, the historical video image may be a video image that has been captured by a camera device, and the video image may be obtained from a server, or may be obtained from a third party, for example, a server of a third-party video image platform. The historical motion control information refers to motion control information set in advance according to the video image. When the terminal obtains the historical video image, the set historical motion control information can be obtained. Historical video images with historical motion control information can also be directly acquired.
And S704, taking the historical video image as the input of the real-time motion capture model, and taking the historical motion control information as the label of the real-time motion capture model for training.
And S706, obtaining the trained real-time motion capture model when the training completion condition is met.
Specifically, a historical video image is used as the input of the real-time motion capture model to obtain an output training result, and a loss function is used to calculate the loss value between the training result and the corresponding historical motion control information, i.e. the label. Whether the loss value is smaller than a preset threshold is then judged. When the loss value is smaller than the preset threshold, the training completion condition is met and the trained real-time motion capture model is obtained; when the loss value is not smaller than the preset threshold, the training completion condition is not met and training continues with the historical video images and the historical motion control information. When training reaches the maximum iteration number, the training completion condition is also considered met and the trained real-time motion capture model is obtained.
In one embodiment, the real-time motion capture model can be trained in the server and then deployed to the terminal.
In the embodiment, the real-time motion capture model obtained by training on the historical video images and the corresponding historical motion control information can be used directly when the three-dimensional animation is generated in real time, so that the efficiency of generating the three-dimensional animation is improved.
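The stopping logic described in this embodiment (the loss falls below a preset threshold, or the maximum iteration number is reached) can be sketched as follows. Here trainStep is only a placeholder standing in for one training pass over the historical video images and their historical motion control labels; it is not a real API.

```cpp
#include <iostream>

// Placeholder for one training pass: feeds historical video images through the model,
// compares the outputs with the historical motion control labels, and returns the loss.
double trainStep(int iteration) {
    return 1.0 / (iteration + 1);  // placeholder: loss shrinking as iterations proceed
}

int main() {
    const double lossThreshold = 0.01;  // preset threshold
    const int maxIterations = 1000;     // maximum iteration number

    for (int it = 0; it < maxIterations; ++it) {
        double loss = trainStep(it);
        if (loss < lossThreshold) {
            std::cout << "training completed at iteration " << it << ", loss " << loss << "\n";
            break;  // training completion condition met by the loss threshold
        }
    }
    // Reaching maxIterations without breaking also satisfies the training completion condition;
    // the trained model would then be saved and deployed to the terminal.
    return 0;
}
```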
In one embodiment, as shown in fig. 8, the step S202 of inputting the video image into the real-time motion capture model and obtaining the output motion control information includes the steps of:
s802, calling a third-party real-time motion capture model, inputting the video image into the third-party real-time motion capture model for recognition, and obtaining the output third-party parameter information, wherein the third-party real-time motion capture model is used for recognizing the video image to obtain the third-party parameter information with a specific quantity.
The third-party real-time motion capture model is a motion capture system developed by a motion capture service provider; such a system uses high-quality cameras as hardware and records motion moments and motion processes at high speed, high frame rate and high resolution so as to analyze motion. For example, a third-party real-time motion capture model can be obtained using the ARKit framework (an augmented reality development platform from Apple Inc. for creating augmented reality applications). The third-party parameter information refers to the parameter information set by the third-party real-time motion capture model; for example, ARKit outputs the weights of 52 blendshapes.
Specifically, the terminal calls a trained third-party real-time motion capture model, and the video image is input into the third-party real-time motion capture model for recognition to obtain output third-party parameter information.
And S804, taking the third-party parameter information as the motion control information.
Specifically, the third-party parameter information is directly used as the motion control information. For example, the weights of the 52 blendshapes obtained by ARKit are used as the values of 52 controllers in the motion control information.
In the embodiment, the motion control information is obtained by calling the third-party real-time motion capture model, so that the efficiency of obtaining the motion control information is improved.
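As a sketch of step S804, the snippet below copies a set of third-party blendshape weights (for example, a few of the 52 weights that an ARKit-style face capture model reports per frame) directly into the motion control information as controller values. The controller names and values are illustrative only.

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    // Third-party parameter information: per-frame blendshape weights from a third-party
    // real-time motion capture model (only a few of the 52 entries shown; names are illustrative).
    std::map<std::string, double> thirdPartyWeights = {
        {"eyeBlinkLeft", 0.92}, {"jawOpen", 0.35}, {"mouthSmileRight", 0.10}};

    // Step S804: use the third-party parameter information directly as the motion control
    // information, i.e. as the values of the corresponding controllers.
    std::map<std::string, double> motionControlInfo = thirdPartyWeights;

    for (const auto& [controller, value] : motionControlInfo)
        std::cout << controller << " = " << value << "\n";
    return 0;
}
```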
In one embodiment, as shown in fig. 9, step S204, namely inputting motion control information into the three-dimensional animation binding logic plug-in, and calculating three-dimensional animation control parameters through the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in, includes the steps of:
and S902, acquiring the control node identification and the corresponding control node input value in the action control information.
Wherein the control node identification is used to uniquely identify the controller node. The control node input value refers to the input value of the controller node.
Specifically, the action control information includes each control node identifier and a corresponding control node input value, and the control node identifier and the corresponding control node input value are obtained from the action control information.
And S904, searching the control node identification in each bound control node identification in the three-dimensional animation binding logic information, and obtaining the bound control node identification consistent with the control node identification when the control node identification is searched.
Specifically, the three-dimensional animation binding logic information comprises the identification of the bound control nodes, the connection relation among the control nodes and the attributes. The bound control node identification refers to controller node identification with binding relation in the three-dimensional animation binding logic information. And searching the control node identification in each bound control node identification, and obtaining the bound control node identification consistent with the control node identification when the control node identification is searched.
S906, the control node input value is used as the bound control node input value corresponding to the bound control node identification, and the three-dimensional animation binding logical relation in the three-dimensional animation binding logical information is used for calculation to obtain the three-dimensional animation control parameter.
Specifically, the three-dimensional animation binding logical relationship refers to the connection relationship between the bound control nodes. The terminal uses the control node input values as the values of the controllers in the three-dimensional animation binding logic information and evaluates the binding logical relationship to obtain the three-dimensional animation control parameters, i.e. the output bone displacement, bone rotation, bone scaling, Blendshape weights and dynamic material parameters.
In the above embodiment, the three-dimensional animation control parameters are obtained by using the control node input values in the obtained motion control information as the bound control node input values and using the three-dimensional animation binding logical relationship for calculation, so that the accuracy of obtaining the three-dimensional animation control parameters is improved.
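As an illustration of steps S902 to S906, the sketch below searches each control node identifier from the motion control information among the bound control node identifiers, assigns the input value to the matching bound node, and evaluates the binding relationship to obtain a control parameter. The data structures and the toy binding relation are assumptions made for illustration, not taken from the patent.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

// Hypothetical bound control node: an input value plus the binding relationship that
// turns the input into a three-dimensional animation control parameter.
struct BoundControlNode {
    double inputValue = 0.0;
    std::function<double(double)> bindingRelation;  // connection of the downstream nodes
};

int main() {
    // Three-dimensional animation binding logic information: bound control node identifiers.
    std::map<std::string, BoundControlNode> boundNodes;
    boundNodes["eyeBlinkLeft"] = {0.0, [](double v) { return v * 0.8 + 0.1; }};  // toy relation

    // Motion control information: control node identifiers with their input values.
    std::map<std::string, double> motionControl = {{"eyeBlinkLeft", 0.9}, {"unknownCtrl", 0.5}};

    for (const auto& [id, value] : motionControl) {
        auto it = boundNodes.find(id);         // search among the bound control node identifiers
        if (it == boundNodes.end()) continue;  // identifier not bound, skip it
        it->second.inputValue = value;         // use the input value as the bound node's input
        double param = it->second.bindingRelation(it->second.inputValue);
        std::cout << id << " -> control parameter " << param << "\n";
    }
    return 0;
}
```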
In a specific embodiment, fig. 10 is a flowchart of obtaining face capture parameters. During face capture, a camera is used to obtain a face image, the face image is input into a real-time motion capture model trained with Dynamixyz (a facial expression capture system) to obtain controller values, the controller values are input into the C++ dynamic link library plug-in converted from the animation binding logic system in Maya to obtain the output bone weights and dynamic material parameters, and the three-dimensional animation is generated by rendering with the bone weights and the dynamic material parameters. Fig. 11 is a schematic diagram of the rendered three-dimensional face animation; the face animation retains more facial detail, such as the nasolabial folds, so the generated three-dimensional animation is more accurate.
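The per-frame flow of fig. 10 can be summarized in a short sketch: camera image, then the real-time motion capture model, then controller values, then the binding-logic dynamic link library plug-in, then bone and dynamic-material parameters, then rendering. All functions below are placeholders standing in for the components named in the figure; none of them are real APIs.

```cpp
#include <iostream>
#include <vector>

// Placeholder components standing in for the blocks of fig. 10.
std::vector<unsigned char> captureFrame() { return std::vector<unsigned char>(640 * 480, 0); }

// Real-time motion capture model: recognizes controller values from the face image.
std::vector<double> recognizeControllers(const std::vector<unsigned char>&) {
    return {0.9, 0.2, 0.0};  // placeholder controller values
}

// Binding-logic plug-in (the C++ dynamic link library converted from the Maya binding system).
std::vector<double> evaluateBindingPlugin(const std::vector<double>& controllers) {
    std::vector<double> params;
    for (double c : controllers) params.push_back(c * 0.8);  // toy binding relationship
    return params;  // bone weights and dynamic material parameters
}

void renderFrame(const std::vector<double>& params) {
    std::cout << "rendering frame with " << params.size() << " control parameters\n";
}

int main() {
    // One captured frame flows through the whole pipeline; in practice this runs continuously.
    renderFrame(evaluateBindingPlugin(recognizeControllers(captureFrame())));
    return 0;
}
```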
In a specific embodiment, the animation binding logic in the generated three-dimensional animation is converted into a Blendshape animation curve. Fig. 12 shows animation curve one of a Blendshape generated by the present application and animation curve two of the same Blendshape generated by a conventional scheme. Animation curve one clearly has more variation than animation curve two, which indicates that the three-dimensional animation generated by the present application contains more detail and is therefore more accurate.
It should be understood that although the steps in the flowcharts of fig. 2-4 and 6-9 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited to that order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-4 and 6-9 may include multiple sub-steps or multiple stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 13, there is provided a three-dimensional animation generation apparatus 1300 including: a control information identification module 1302, a parameter calculation module 1304, and an animation generation module 1306, wherein:
a control information recognition module 1302, configured to obtain a video image in real time through a camera device, input the video image into a real-time motion capture model, and recognize the video image to obtain output motion control information;
the parameter calculation module 1304 is used for inputting the motion control information into the three-dimensional animation binding logic plug-in, and calculating the three-dimensional animation control parameters through the three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in;
and the animation generation module 1306 is used for acquiring a three-dimensional animation initial model, driving the three-dimensional animation initial model by using the three-dimensional animation control parameters, and generating a three-dimensional animation.
In one embodiment, the three-dimensional animation generation apparatus 1300 further includes:
the plug-in generating module is used for acquiring a three-dimensional animation binding logic dynamic link library and packaging the three-dimensional animation binding logic dynamic link library to obtain a three-dimensional animation binding logic plug-in;
and the plug-in running module is used for running the three-dimensional animation binding logic plug-in.
In one embodiment, the plug-in generation module is further configured to obtain three-dimensional animation binding logic data, and analyze the three-dimensional animation binding logic data to obtain binding information; acquiring a code template, and generating a corresponding connection relation code according to the connection relation information in the code template and the binding information; generating a corresponding node code according to the code template and the node information in the binding information; and obtaining a three-dimensional animation binding logic code corresponding to the three-dimensional animation binding logic data according to the connection relation code and the node code, and converting the three-dimensional animation binding logic code into a three-dimensional animation binding logic dynamic link library.
In one embodiment, the control information identifying module 1302 includes:
the key point obtaining unit is used for inputting the video image into an image key point identification network in the real-time motion capture model to obtain the characteristics of the output image key point;
and the information identification unit is used for inputting the image key point characteristics into the real-time motion capture model to identify the control information and obtain the output motion control information.
In one embodiment, the three-dimensional animation generation apparatus 1300 further includes:
the model training module is used for acquiring historical video images and corresponding historical motion control information; taking a historical video image as the input of a real-time motion capture model, and taking historical motion control information as a label of the real-time motion capture model for training; and when the training completion condition is met, obtaining the trained real-time motion capture model.
In one embodiment, the control information identifying module 1302 is further configured to invoke a third-party real-time motion capture model, input the video image into the third-party real-time motion capture model for identification, and obtain output third-party parameter information, where the third-party real-time motion capture model is used to identify the video image to obtain a specific amount of third-party parameter information; and take the third-party parameter information as the motion control information.
In one embodiment, the parameter calculation module 1304 is further configured to obtain a control node identifier and a corresponding control node input value in the action control information; searching a control node identifier in each bound control node identifier in the three-dimensional animation binding logic information, and obtaining a bound control node identifier consistent with the control node identifier when the control node identifier is searched; and taking the control node input value as a bound control node input value corresponding to the bound control node identifier, and calculating by using the three-dimensional animation binding logical relationship in the three-dimensional animation binding logical information to obtain the three-dimensional animation control parameter.
FIG. 14 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 in fig. 1. As shown in fig. 14, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen, which are connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by a processor, causes the processor to implement a three-dimensional animation generation method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform a three-dimensional animation generation method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like. The computer device may specifically also be the server 120 in fig. 1. The server 120 may transmit the generated three-dimensional animation to the terminal for presentation.
Those skilled in the art will appreciate that the architecture shown in fig. 14 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the three-dimensional animation generation apparatus provided by the present application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 14. The memory of the computer device may store therein various program modules constituting the three-dimensional animation generation apparatus, such as the control information recognition module 1302, the parameter calculation module 1304, and the animation generation module 1306 shown in fig. 13. The computer program constituted by the respective program modules causes the processor to execute the steps in the three-dimensional animation generation method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 14 may perform step S202 by the control information identifying module 1302 in the three-dimensional animation generating apparatus shown in fig. 13. The parameter calculation module 1304 performs step S204. And the animation generation module 1306 performs step S206.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the three-dimensional animation generation method described above. Here, the steps of the three-dimensional animation generation method may be steps in the three-dimensional animation generation method of each of the above embodiments.
In one embodiment, a storage medium is provided, which stores a computer program that, when executed by a processor, causes the processor to perform the steps of the above-described three-dimensional animation generation method. Here, the steps of the three-dimensional animation generation method may be steps in the three-dimensional animation generation method of each of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by instructing relevant hardware through a computer program, and the program may be stored in a non-volatile computer-readable storage medium; when executed, it may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (14)

1. A three-dimensional animation generation method, comprising:
acquiring a video image in real time through a camera device, inputting the video image into a real-time motion capture model for recognition, and obtaining output motion control information, wherein the motion control information comprises a specific number of deformation target weights;
inputting the specific quantity of the deformation target weights into a three-dimensional animation binding logic plug-in, calculating to obtain three-dimensional animation control parameters through three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in, wherein the three-dimensional animation binding logic information is a binding relation obtained after binding different control nodes, the control nodes are independent units with an operation function, the three-dimensional animation binding logic information comprises information of each bound controller and connection relation information among controllers, each controller is obtained by connecting control nodes with different functions, the controller information comprises controller identification and controller attributes, and the connection relation information among the controllers is obtained by taking an output result of a previous controller as an input result of a next controller, and the method comprises the following steps: searching a control node identifier in the action control information in each bound control node identifier in the three-dimensional animation binding logic information, obtaining a bound control node identifier consistent with the control node identifier when the control node identifier is searched, taking a control node input value in the action control information as a bound control node input value corresponding to the bound control node identifier, and calculating by using a three-dimensional animation binding logic relationship in the three-dimensional animation binding logic information to obtain a three-dimensional animation control parameter, wherein the three-dimensional animation control parameter comprises bone displacement, bone rotation, bone scaling, a deformation target weight and a dynamic material;
and obtaining a three-dimensional animation initial model, and driving the three-dimensional animation initial model by using the bone displacement, the bone rotation, the bone scaling, the deformation target weight and the dynamic material to generate a three-dimensional animation.
2. The method according to claim 1, before the real-time capturing of the video image by the camera device, inputting the video image into the real-time motion capture model for recognition, and obtaining the output motion control information, further comprising:
acquiring a three-dimensional animation binding logic dynamic link library, and packaging the three-dimensional animation binding logic dynamic link library to obtain a three-dimensional animation binding logic plug-in;
and operating the three-dimensional animation binding logic plug-in.
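A minimal sketch of how the compiled binding logic dynamic link library of claim 2 might be packaged behind a plug-in object and run, assuming a Python host; the library path and the exported symbol evaluate_binding are assumptions, not names defined by the application.

    # Hypothetical sketch of claim 2: a binding logic dynamic link library is
    # wrapped as a plug-in object that the runtime loads and runs. The library
    # path and the exported symbol name are illustrative assumptions.
    import ctypes


    class BindingLogicPluginWrapper:
        def __init__(self, library_path: str):
            self._lib = ctypes.CDLL(library_path)              # load the dynamic link library
            self._lib.evaluate_binding.restype = ctypes.c_int
            self._lib.evaluate_binding.argtypes = [
                ctypes.POINTER(ctypes.c_double), ctypes.c_int,  # morph-target weights in
                ctypes.POINTER(ctypes.c_double), ctypes.c_int,  # control parameters out
            ]

        def evaluate(self, weights, out_size):
            in_arr = (ctypes.c_double * len(weights))(*weights)
            out_arr = (ctypes.c_double * out_size)()
            rc = self._lib.evaluate_binding(in_arr, len(weights), out_arr, out_size)
            if rc != 0:
                raise RuntimeError("binding evaluation failed")
            return list(out_arr)


    # Usage (paths and sizes are illustrative; the library must exist to run this):
    # plugin = BindingLogicPluginWrapper("./libbinding_logic.so")
    # params = plugin.evaluate([0.7, 0.1, 0.0], out_size=16)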
3. The method of claim 2, wherein obtaining the three-dimensional animation binding logic dynamic link library comprises:
acquiring three-dimensional animation binding logic data, and parsing the three-dimensional animation binding logic data to obtain binding information;
acquiring a code template, and generating a corresponding connection relation code according to the code template and the connection relation information in the binding information;
generating a corresponding node code according to the code template and the node information in the binding information;
and obtaining a three-dimensional animation binding logic code corresponding to the three-dimensional animation binding logic data according to the connection relation code and the node code, and converting the three-dimensional animation binding logic code into the three-dimensional animation binding logic dynamic link library.
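A minimal sketch of the code generation in claim 3, assuming JSON-encoded binding data and two toy code templates; the data layout, template contents, and function names are illustrative assumptions, and the assembled source would still have to be compiled into the dynamic link library.

    # Hypothetical sketch of claim 3: parse binding data, generate node code and
    # connection code from templates, and assemble the binding logic source.
    import json

    NODE_TEMPLATE = "double {name}(double x) {{ return x * {gain}; }}\n"
    CONNECTION_TEMPLATE = "/* {src} -> {dst}: output of {src} feeds {dst} */\n"


    def generate_binding_source(binding_json: str) -> str:
        binding_info = json.loads(binding_json)       # parse binding logic data
        source = ""
        for node in binding_info["nodes"]:            # node code from the node template
            source += NODE_TEMPLATE.format(name=node["id"], gain=node["gain"])
        for conn in binding_info["connections"]:      # connection code from its template
            source += CONNECTION_TEMPLATE.format(src=conn["from"], dst=conn["to"])
        return source                                 # would be compiled into the DLL


    if __name__ == "__main__":
        data = json.dumps({
            "nodes": [{"id": "jaw_open", "gain": 1.5}, {"id": "brow_up", "gain": 0.8}],
            "connections": [{"from": "jaw_open", "to": "brow_up"}],
        })
        print(generate_binding_source(data))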
4. The method of claim 1, wherein the inputting of the video image into the real-time motion capture model for recognition and the obtaining of the output motion control information comprise:
inputting the video image into an image key point recognition network in the real-time motion capture model to obtain output image key point features;
and inputting the image key point features into the real-time motion capture model for control information recognition to obtain the output motion control information.
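A minimal sketch of the two-stage recognition in claim 4; both stages are toy placeholder functions rather than the key point network and recognition head of the application, and the feature and weight values are fabricated for illustration.

    # Hypothetical sketch of claim 4: stage 1 extracts image key point features,
    # stage 2 maps those features to motion control information (morph weights).
    from typing import Dict, List


    def keypoint_network(frame: List[List[float]]) -> List[float]:
        # Placeholder "network": pretend the frame yields a few key point values.
        flat = [v for row in frame for v in row]
        return flat[:6]


    def control_info_head(features: List[float]) -> Dict[str, float]:
        # Placeholder mapping from key point features to morph-target weights.
        spread = max(features) - min(features) if features else 0.0
        return {"jawOpen": min(1.0, spread), "browUp": 0.5 * min(1.0, spread)}


    def recognize(frame: List[List[float]]) -> Dict[str, float]:
        features = keypoint_network(frame)     # stage 1: image key point features
        return control_info_head(features)     # stage 2: motion control information


    if __name__ == "__main__":
        print(recognize([[0.1, 0.4, 0.2], [0.8, 0.3, 0.6]]))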
5. The method of claim 1, wherein the step of training the real-time motion capture model comprises:
acquiring historical video images and corresponding historical motion control information;
taking the historical video images as the input of the real-time motion capture model, and taking the historical motion control information as labels of the real-time motion capture model for training;
and when the training completion condition is met, obtaining the trained real-time motion capture model.
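A minimal sketch of the training step in claim 5, assuming a toy one-parameter model: historical frames are the inputs, historical motion control values are the labels, and training stops when a loss threshold or epoch cap (the completion condition) is reached. The model, loss, and data are illustrative assumptions.

    # Hypothetical sketch of claim 5: supervised training on historical frames and
    # historical motion control labels, with a simple completion condition.
    def train(history_frames, history_labels, lr=0.05, max_epochs=500, target_loss=1e-4):
        w = 0.0                                   # toy model: label ~= w * mean(frame)
        for epoch in range(max_epochs):
            loss, grad = 0.0, 0.0
            for frame, label in zip(history_frames, history_labels):
                x = sum(frame) / len(frame)
                err = w * x - label
                loss += err * err
                grad += 2.0 * err * x
            loss /= len(history_frames)
            w -= lr * grad / len(history_frames)  # gradient descent step
            if loss < target_loss:                # training completion condition
                break
        return w


    if __name__ == "__main__":
        frames = [[0.2, 0.4], [0.6, 0.8], [1.0, 1.2]]
        labels = [0.3, 0.7, 1.1]                  # historical motion control values
        print(train(frames, labels))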
6. The method of claim 1, wherein the inputting of the video image into the real-time motion capture model for recognition and the obtaining of the output motion control information comprise:
calling a third-party real-time motion capture model, and inputting the video image into the third-party real-time motion capture model for recognition to obtain output third-party parameter information, wherein the third-party real-time motion capture model is used for recognizing the video image to obtain a specific number of third-party parameters;
and taking the third-party parameter information as the motion control information.
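A minimal sketch of claim 6, assuming a hypothetical third-party SDK interface: the third-party model returns a fixed number of parameters, which are adopted directly as the motion control information. The class, parameter names, and inference logic are placeholders, not a real SDK.

    # Hypothetical sketch of claim 6: invoke a third-party capture model and use
    # its fixed-size parameter output directly as the motion control information.
    from typing import Dict, List


    class ThirdPartyCaptureModel:
        """Placeholder for an external SDK returning a fixed number of parameters."""

        PARAM_NAMES = ["jawOpen", "browUp", "eyeBlinkLeft", "eyeBlinkRight"]

        def infer(self, frame: List[float]) -> List[float]:
            s = sum(frame) / (len(frame) or 1)
            return [min(1.0, abs(s) * k) for k in (1.0, 0.5, 0.8, 0.8)]


    def capture_motion_control_info(frame: List[float]) -> Dict[str, float]:
        model = ThirdPartyCaptureModel()                   # call the third-party model
        params = model.infer(frame)                        # specific number of parameters
        return dict(zip(ThirdPartyCaptureModel.PARAM_NAMES, params))


    if __name__ == "__main__":
        print(capture_motion_control_info([0.2, 0.5, 0.9]))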
7. A three-dimensional animation generation apparatus, characterized in that the apparatus comprises:
the control information recognition module is used for acquiring a video image in real time through a camera device, inputting the video image into a real-time motion capture model for recognition, and obtaining output motion control information, wherein the motion control information comprises a specific number of deformation target weights;
a parameter calculation module, configured to input the specific number of deformation target weights into a three-dimensional animation binding logic plug-in and calculate three-dimensional animation control parameters through three-dimensional animation binding logic information in the three-dimensional animation binding logic plug-in, wherein the three-dimensional animation binding logic information is a binding relationship obtained by binding different control nodes, each control node being an independent unit with a computing function; the three-dimensional animation binding logic information comprises information of each bound controller and connection relation information among the controllers, each controller is obtained by connecting control nodes with different functions, the controller information comprises a controller identifier and controller attributes, and the connection relation information among the controllers means that an output result of a previous controller serves as an input of a next controller; the calculation comprises: searching, among the bound control node identifiers in the three-dimensional animation binding logic information, for a control node identifier contained in the motion control information; when the control node identifier is found, obtaining the bound control node identifier consistent with the control node identifier, and taking a control node input value in the motion control information as the input value of the bound control node corresponding to the bound control node identifier; and calculating the three-dimensional animation control parameters by using the three-dimensional animation binding logic relationship in the three-dimensional animation binding logic information, wherein the three-dimensional animation control parameters comprise bone displacement, bone rotation, bone scaling, a deformation target weight and a dynamic material;
and the animation generation module is used for obtaining a three-dimensional animation initial model, and driving the three-dimensional animation initial model by using the bone displacement, the bone rotation, the bone scaling, the deformation target weight and the dynamic material to generate a three-dimensional animation.
8. The apparatus of claim 7, further comprising:
the plug-in generation module is used for acquiring a three-dimensional animation binding logic dynamic link library and packaging the three-dimensional animation binding logic dynamic link library to obtain a three-dimensional animation binding logic plug-in;
and the plug-in running module is used for running the three-dimensional animation binding logic plug-in.
9. The apparatus of claim 8, wherein the plug-in generation module is further configured to obtain three-dimensional animation binding logic data, and parse the three-dimensional animation binding logic data to obtain binding information; acquiring a code template, and generating a corresponding connection relation code according to the code template and the connection relation information in the binding information; generating a corresponding node code according to the code template and the node information in the binding information; and obtaining a three-dimensional animation binding logic code corresponding to the three-dimensional animation binding logic data according to the connection relation code and the node code, and converting the three-dimensional animation binding logic code into the three-dimensional animation binding logic dynamic link library.
10. The apparatus of claim 7, wherein the control information recognition module comprises:
the key point obtaining unit is used for inputting the video image into an image key point recognition network in the real-time motion capture model to obtain output image key point features;
and the information recognition unit is used for inputting the image key point features into the real-time motion capture model for control information recognition to obtain the output motion control information.
11. The apparatus of claim 7, further comprising:
the model training module is used for acquiring historical video images and corresponding historical motion control information; taking the historical video images as the input of the real-time motion capture model, and taking the historical motion control information as labels of the real-time motion capture model for training; and when the training completion condition is met, obtaining the trained real-time motion capture model.
12. The apparatus of claim 7, wherein the control information recognition module is further configured to call a third-party real-time motion capture model, input the video image into the third-party real-time motion capture model for recognition, and obtain output third-party parameter information, wherein the third-party real-time motion capture model is configured to recognize the video image to obtain a specific number of third-party parameters; and take the third-party parameter information as the motion control information.
13. A storage medium storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 6.
14. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 6.
CN202010085820.0A 2020-02-11 2020-02-11 Three-dimensional animation generation method and device, storage medium and computer equipment Active CN111340917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010085820.0A CN111340917B (en) 2020-02-11 2020-02-11 Three-dimensional animation generation method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN111340917A CN111340917A (en) 2020-06-26
CN111340917B true CN111340917B (en) 2023-02-28

Family

ID=71183345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010085820.0A Active CN111340917B (en) 2020-02-11 2020-02-11 Three-dimensional animation generation method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN111340917B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184863B (en) * 2020-10-21 2024-03-15 网易(杭州)网络有限公司 Animation data processing method and device
CN114708366A (en) * 2021-06-21 2022-07-05 上海锋沛数码科技有限公司 A three-dimensional animation production system and using method
CN114445526A (en) * 2021-12-31 2022-05-06 网易(杭州)网络有限公司 Animation model generation method, animation model generation device, electronic device, and storage medium
CN116320521A (en) * 2023-03-24 2023-06-23 吉林动画学院 Three-dimensional animation live broadcast method and device based on artificial intelligence

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002133446A (en) * 2000-08-30 2002-05-10 Microsoft Corp Face image processing method and system
CN106228119A (en) * 2016-07-13 2016-12-14 天远三维(天津)科技有限公司 A kind of expression catches and Automatic Generation of Computer Animation system and method
CN106971414A (en) * 2017-03-10 2017-07-21 江西省杜达菲科技有限责任公司 A kind of three-dimensional animation generation method based on deep-cycle neural network algorithm
CN109272566A (en) * 2018-08-15 2019-01-25 广州多益网络股份有限公司 Movement expression edit methods, device, equipment, system and the medium of virtual role

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Three-Dimensional Face Model Control Based on Real-Time Expression Driving; Huang Mingyang; China Doctoral and Master's Dissertations Full-text Database (Electronic Journal), Information Science and Technology Series; 2019-01-15; pages 7, 13-18, 30, 31, 43 *

Also Published As

Publication number Publication date
CN111340917A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111340917B (en) Three-dimensional animation generation method and device, storage medium and computer equipment
Wang et al. Predrnn: A recurrent neural network for spatiotemporal predictive learning
EP3992846B1 (en) Action recognition method and apparatus, computer storage medium, and computer device
US12062249B2 (en) System and method for generating image landmarks
US20200272806A1 (en) Real-Time Tracking of Facial Features in Unconstrained Video
CN110852256B (en) Method, device and equipment for generating time sequence action nomination and storage medium
CN111401216A (en) Image processing method, model training method, image processing device, model training device, computer equipment and storage medium
CN109409198A (en) AU detection model training method, AU detection method, device, equipment and medium
CN113516778A (en) Model training data acquisition method and device, computer equipment and storage medium
WO2021159781A1 (en) Image processing method, apparatus and device, and storage medium
CN111784818B (en) Method, apparatus and computer readable storage medium for generating three-dimensional mannequin
CN114612414B (en) Image processing method, model training method, device, equipment and storage medium
CN114373224B (en) Fuzzy 3D skeleton action recognition method and device based on self-supervision learning
US20240169662A1 (en) Latent Pose Queries for Machine-Learned Image View Synthesis
CN114821656A (en) Unsupervised pedestrian re-identification method and device, computer equipment and storage medium
CN116911361A (en) Method, device and equipment for training network model based on deep learning framework network
CN113657272A (en) Micro-video classification method and system based on missing data completion
CN110717928B (en) Parameter estimation method and device of face motion unit AUs and electronic equipment
CN112115860A (en) Face key point positioning method and device, computer equipment and storage medium
CN108520532B (en) Method and device for identifying motion direction of object in video
EP4571653A1 (en) Virtual object animation generation method and apparatus, electronic device, computer-readable storage medium, and computer program product
HK40024188B (en) Three-dimensional animation generation method and apparatus, storage medium, and computer device
HK40024188A (en) Three-dimensional animation generation method and apparatus, storage medium, and computer device
CN116821113A (en) Time sequence data missing value processing method and device, computer equipment and storage medium
Ye et al. The Image Processing Using Soft Robot Technology in Fitness Motion Detection Under the Internet of Things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40024188

Country of ref document: HK

GR01 Patent grant