
CN111773736B - Behavior generation method and device for virtual roles - Google Patents

Behavior generation method and device for virtual characters

Info

Publication number
CN111773736B
CN111773736B (application CN202010631992.3A; published as CN111773736A)
Authority
CN
China
Prior art keywords
virtual
character
motivation
behavior
virtual object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010631992.3A
Other languages
Chinese (zh)
Other versions
CN111773736A (en
Inventor
梁旭源
梁瑜芳
覃柳悦
包阳捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Digital Network Technology Co Ltd filed Critical Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority to CN202010631992.3A priority Critical patent/CN111773736B/en
Publication of CN111773736A publication Critical patent/CN111773736A/en
Application granted granted Critical
Publication of CN111773736B publication Critical patent/CN111773736B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/825: Fostering virtual characters
    • A63F13/822: Strategy games; Role-playing games
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80: Features of such games specially adapted for executing a specific type of game
    • A63F2300/8058: Virtual breeding, e.g. tamagotchi
    • A63F2300/807: Role playing or strategy games
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a behavior generation method and device for a virtual character. The method includes: generating character memory information of the virtual character from the virtual character's historical perception information; acquiring at least one interactable virtual object from the character memory information, and determining a feasibility score for each virtual object under each of at least one preset behavior motivation; determining the virtual character's current behavior motivation and, based on the virtual character's current perception information and each virtual object's feasibility score under that motivation, determining the virtual character's current target virtual object; and generating a behavior sequence for the virtual character from the current behavior motivation and the target virtual object, and interacting with the target virtual object through that behavior sequence.

Description

Behavior generation method and device for virtual characters
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method and apparatus for generating behavior of a virtual character, a computing device, and a computer readable storage medium.
Background
In prior-art virtual scenes, the behaviors of most virtual characters are designed in advance: a user can only follow a preset storyline or interact with NPCs, and every identical operation by the user produces identical result feedback. After participating many times, the user becomes thoroughly familiar with every link and plot point in the virtual program and finds it dull. Even when the plot is updated and some new elements are added, the overall plot architecture does not change, so the virtual program does not become more interesting, the user's curiosity is not satisfied, and the user experience degrades.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a method and apparatus for generating behavior of a virtual character, a computing device and a computer readable storage medium, so as to solve the technical drawbacks in the prior art.
According to a first aspect of embodiments of the present disclosure, there is provided a behavior generation method of a virtual character, including:
generating character memory information of the virtual character according to the history perception information of the virtual character;
acquiring at least one interactable virtual object in character memory information of the virtual character, and determining a feasibility score of each virtual object under each behavior motivation in at least one preset behavior motivation;
determining the current behavior motivation of the virtual character, and determining the current target virtual object of the virtual character according to the current perception information of the virtual character and the corresponding feasibility score of each virtual object under the behavior motivation;
and generating a behavior sequence of the virtual character according to the current behavior motivation of the virtual character and the target virtual object, and interacting with the target virtual object through the behavior sequence.
According to a second aspect of embodiments of the present specification, there is provided a behavior generating apparatus of a virtual character, including:
a memory generation module configured to generate character memory information of a virtual character according to history perception information of the virtual character;
the memory assessment module is configured to acquire at least one interactable virtual object in the character memory information of the virtual character, and determine the feasibility score of each virtual object under each of at least one preset behavioral motivation;
the object determining module is configured to determine the current behavior motivation of the virtual character, and determine the current target virtual object of the virtual character according to the current perception information of the virtual character and the corresponding feasibility score of each virtual object under the behavior motivation;
and the action generating module is configured to generate a behavior sequence of the virtual character according to the current behavior motivation of the virtual character and a target virtual object and interact with the target virtual object through the behavior sequence.
According to a third aspect of embodiments of the present specification, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the method of generating behaviour of the virtual character when executing the instructions.
According to a fourth aspect of embodiments of the present description, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of a method of generating behavior of a virtual character.
According to the method and device of the application, unique memories can be generated for a virtual character based on the character's historical experience, and the character's current behavior actions can be predicted and generated from those character memories together with the current virtual scene. Behavior motivations are generated in real time from the character's individual memories combined with the current virtual scene, and behavior actions are then generated from those motivations, so the character's behavior actions are diverse and somewhat random, and change in real time as memories and scenes differ. This overcomes the single, monotonous character behavior of the prior art, helps form a rich and colorful virtual world, and greatly enhances the user experience.
Drawings
FIG. 1 is a block diagram of a computing device provided by an embodiment of the present application;
FIG. 2 is a flowchart of a method for generating behavior of a virtual character according to an embodiment of the present application;
FIG. 3 is another flow chart of a method for generating behavior of a virtual character provided by an embodiment of the present application;
FIG. 4 is another flow chart of a method for generating behavior of a virtual character provided by an embodiment of the present application;
FIG. 5 is another flow chart of a method for generating behavior of a virtual character provided by an embodiment of the present application;
FIG. 6 is another flow chart of a method of generating behavior of a virtual character provided by an embodiment of the present application;
FIG. 7 is another flow chart of a method of generating behavior of a virtual character provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a behavior generating device of a virtual character according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, this application can be embodied in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, the information should not be limited by these terms, which serve only to distinguish one type of information from another. For example, "first" information may also be referred to as "second" information, and similarly "second" as "first," without departing from the scope of one or more embodiments of the present specification. Depending on the context, the word "if" as used herein may be interpreted as "when," "upon," or "in response to determining."
First, terms related to one or more embodiments of the present invention will be explained.
Character perception: a virtual character in a virtual environment perceives virtual objects or specific events in that environment through vision, hearing, smell, or similar modes.
Character memory: a design concept essentially similar to a database, into which data is continuously stored and from which it is queried, with stored data automatically checked for duplicates and updated, forming a "memory" for each virtual character.
Behavior motivation: the degree to which a virtual character in a virtual environment desires to perform a behavior.
Timestamp: a complete, verifiable piece of data, typically a character sequence, that can prove that a piece of data existed before a particular point in time; it uniquely identifies a moment.
In the present application, a method and apparatus for generating behavior of a virtual character, a computing device, and a computer-readable storage medium are provided, and are described in detail in the following embodiments.
Fig. 1 shows a block diagram of a computing device 100 according to an embodiment of the present description. The components of the computing device 100 include, but are not limited to, a memory 110 and a processor 120. Processor 120 is coupled to memory 110 via bus 130 and database 150 is used to store data.
Computing device 100 also includes an access device 140 that enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or combinations of communication networks such as the Internet. The access device 140 may include one or more of any type of wired or wireless network interface (e.g., a network interface card (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) interface, a worldwide interoperability for microwave access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, a near field communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 100, as well as other components not shown in FIG. 1, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device shown in FIG. 1 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the method shown in fig. 2. Fig. 2 is a schematic flow chart illustrating a method of behavior generation of a virtual character according to an embodiment of the present application, including steps 202 through 208.
Step 202: and generating character memory information of the virtual character according to the history perception information of the virtual character.
In the embodiment of the application, the system can, in real time and continuously, acquire the virtual character's historical perception information based on the character's past experience in the virtual scene, and generate the character's unique memory of that scene, i.e. the character memory information of the virtual character. For example, in a virtual game containing a virtual pet cat, the system of the application can capture the virtual objects the pet cat perceives in the game world, or specific events reported to the underlying system, thereby generating the pet cat's own character memory.
Step 204: and acquiring at least one interactable virtual object in character memory information of the virtual character, and determining a feasibility score of each virtual object under each of at least one preset behavioral motivation.
In an embodiment of the present application, the system of the present application may evaluate the virtual character's character memory: it acquires from the character memory information at least one virtual object able to interact with the virtual character and, from at least one preset behavior motivation and the characteristics of each virtual object, determines each virtual object's feasibility score under each behavior motivation, i.e. the cost-effectiveness of the virtual character interacting with that virtual object under that motivation. For example, when the virtual character is a pet cat, its behavior motivations may include "eat," "sleep," "drink," "play," and the like, and the interactable virtual objects may include a "cat food bowl," "cat climbing frame," "cat bed," and the like. The system can then calculate the cost-effectiveness, i.e. the feasibility score, of the "cat food bowl," "cat climbing frame," and "cat bed" respectively under the behavior motivation "eat."
Step 206: determining the current behavior motivation of the virtual character, and determining the current target virtual object of the virtual character according to the current perception information of the virtual character and the corresponding feasibility score of each virtual object under the behavior motivation.
In the embodiment of the application, the system can, in real time and continuously, generate at least one executable current behavior motivation for the virtual character based on the character's current virtual environment and character state. Treating the current behavior motivation as the character's need, the system then determines, from the character's current perception and character memory, a target virtual object that can satisfy that motivation. For example, when the virtual character is a pet cat, the system may give the pet cat the behavior motivation "eat" according to its current virtual environment and character state, and the virtual object the pet cat interacts with may then be the "cat food bowl."
Step 208: and generating a behavior sequence of the virtual character according to the current behavior motivation of the virtual character and the target virtual object, and interacting with the target virtual object through the behavior sequence.
In the embodiment of the application, the system can generate a series of behavior actions for the virtual character according to its current behavior motivation and target virtual object, and have the character interact with the target object by performing those actions in a behavior sequence, giving the character's behavior diversity and a degree of randomness. In other words, the system generates unique memories for the virtual character based on its historical experience, predicts and generates the character's current behavior actions from those character memories and the current virtual scene, and generates behavior motivations in real time from the character's individual memories combined with that scene. The resulting behavior actions are diverse and somewhat random, and change in real time as the character's memories and scenes differ, overcoming the single, monotonous character behavior of the prior art, helping to form a rich and colorful virtual world, and greatly enhancing the user experience.
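As a loose illustration (not the patented implementation; every function and field name below is a hypothetical stand-in), steps 202 to 208 can be sketched as a single decision loop: pick the strongest motivation, pick the best remembered and currently perceived object for it, and expand the pair into a behavior sequence.

```python
# Illustrative sketch of steps 202-208; all names are assumptions,
# not taken from the patent's actual implementation.

def generate_behavior(memory, motivations, current_perception):
    """Pick a motivation, pick a target object, emit a behavior sequence."""
    # Choose the behavior motivation with the highest character score (step 206).
    motivation = max(motivations, key=lambda m: m["score"])["name"]
    # Among remembered objects that are currently perceived, choose the one
    # with the best feasibility score under that motivation (steps 204/206).
    candidates = [o for o in memory if o["name"] in current_perception]
    target = max(candidates, key=lambda o: o["feasibility"][motivation])
    # Expand (motivation, target) into a simple behavior sequence (step 208).
    return [f"move_to:{target['name']}", f"{motivation}:{target['name']}"]

memory = [
    {"name": "cat_food_bowl", "feasibility": {"eat": 0.9, "sleep": 0.1}},
    {"name": "cat_bed",       "feasibility": {"eat": 0.0, "sleep": 0.8}},
]
motivations = [{"name": "eat", "score": 88}, {"name": "sleep", "score": 70}]
seq = generate_behavior(memory, motivations, {"cat_food_bowl", "cat_bed"})
# seq == ["move_to:cat_food_bowl", "eat:cat_food_bowl"]
```

The flat list-of-dicts memory and the two-step behavior sequence are deliberate simplifications of the structures the embodiments describe.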
In the embodiment of the present application, as shown in fig. 3, generating the character memory information of the virtual character according to its historical perception information includes steps 302 to 304.
Step 302: and generating the history perception information of the virtual character according to the virtual scene where the virtual character is located, the specific event which occurs and the character state where the virtual character is located within a time threshold before the current frame.
In an embodiment of the present application, within a time threshold before the current frame, for example the 10 frames before it, the system can generate the virtual character's historical perception information from the virtual scene the character is in, the specific events that occur, for example events the character sees, events it hears, and events otherwise related to it, and the character state it is in. The character state is described by several state description parameters set by the developer and is determined by their values. For example, when the virtual character is a pet cat, the state description parameters may include satiety, mood value, personality type, age, hunger-and-thirst value, sleep value, and so on; assigning specific values to these parameters broadly defines the pet cat's current state.
Step 304: and storing and updating the history perception information of the virtual character according to the corresponding relation with the time stamp, and generating character memory information of the virtual character.
In the embodiment of the application, in each key frame the system can generate a timestamp for each specific event and virtual object the virtual character perceives, combining it with the time of each specific event, so that every piece of character memory information in the character's memory carries concrete perception data and a timestamp. Each generated piece of character memory information is stored in a brain-like database, where character memories are constantly stored and queried; when virtual objects with the same ID have differing data, the object's data is updated according to the most recent character memory, ensuring the stored data is automatically checked for duplicates and kept up to date.
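The storage-and-update behavior just described, where entries sharing an ID are deduplicated and refreshed according to the newest timestamp, can be sketched as follows. The dictionary layout and names are assumptions for illustration, not the patent's actual data structures:

```python
# Sketch of a timestamped character-memory store with per-ID dedup/update.

def remember(memory, obj_id, data, timestamp):
    """Store or refresh one perceived object; newer timestamps win."""
    entry = memory.get(obj_id)
    if entry is None or timestamp >= entry["timestamp"]:
        memory[obj_id] = {"data": data, "timestamp": timestamp}
    return memory

mem = {}
remember(mem, "bowl_01", {"pos": (1, 2)}, timestamp=100)
remember(mem, "bowl_01", {"pos": (3, 4)}, timestamp=160)  # same ID: updated
remember(mem, "bowl_01", {"pos": (0, 0)}, timestamp=50)   # stale: ignored
# mem holds one entry for "bowl_01", with pos (3, 4) and timestamp 160
```

Keying the store by object ID gives the automatic duplicate check for free; the timestamp comparison implements "update according to the latest character memory."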
Fig. 4 shows a method for generating behavior of a virtual character according to an embodiment of the present disclosure, which describes, as an example, generation of character memory information of the virtual character according to history awareness information of the virtual character, including steps 402 to 414.
Step 402: at least one manner of perception owned by the virtual character is determined.
In the embodiment of the application, the perception modes include vision, hearing, taste, smell, touch, and the like, and different combinations of modes can be configured for different virtual characters.
Step 404: and respectively acquiring and recording at least one virtual object identified by the virtual character, the position of each virtual object in the virtual scene and the last perceived time of each virtual object according to each perceived mode to obtain a perceived record table corresponding to each perceived mode.
In an embodiment of the present application, the system of the present application is capable of creating multiple perception record tables, as shown in Table 1 below:
TABLE 1 (columns: perception mode; virtual object name; position in the virtual scene; time last perceived)
Each perception mode corresponds to one perception record table, which records the relevant data of each specific virtual object perceived by the virtual character in that mode, including the virtual object's name, its position in the virtual scene, and the time it was last perceived.
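A minimal sketch of such per-mode perception record tables, assuming a simple nested-dictionary layout (the field names are illustrative, not from the patent):

```python
# One record table per perception mode (vision, hearing, ...), each row
# holding an object's name, position, and last-perceived time.
from collections import defaultdict

tables = defaultdict(dict)  # perception mode -> {object name: row}

def perceive(mode, name, position, time):
    """Record or refresh one perceived object in that mode's table."""
    tables[mode][name] = {"position": position, "last_seen": time}

perceive("vision", "cat_food_bowl", (2, 5), time=10)
perceive("vision", "cat_food_bowl", (2, 6), time=12)  # row is refreshed
perceive("hearing", "doorbell", (0, 0), time=11)
# tables["vision"]["cat_food_bowl"] now has position (2, 6), last_seen 12
```

Keying rows by object name within each mode's table mirrors Table 1: one table per mode, one row per perceived object, refreshed each time the object is perceived again.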
Step 406: and recording at least one state description parameter corresponding to the virtual role and used for describing the state of the role.
In the embodiment of the present application, the character status is described by a plurality of status description parameters set by a developer, and the character status corresponding to the virtual character can be determined according to the values of the status description parameters, for example, in the case that the virtual character is a pet cat, the plurality of status description parameters include satiety, mood value, character type, age, hunger and thirst value, and sleep value, and the status of the pet cat can be generally defined by assigning specific values to the status description parameters.
Step 408: and generating the historical perception information of the virtual character according to the perception record table corresponding to each perception mode and the state description parameters corresponding to the virtual character.
In the embodiment of the present application, the system of the present application may generate, from the data in the perception record tables and the state description parameters corresponding to the virtual character, the historical perception information for a period of time before the current frame, thereby forming the virtual character's own memory.
Step 410: traversing the perception record table, and determining the last perceived time of each virtual object according to the time stamp.
Step 412: and acquiring the names of the virtual objects and the positions of the virtual objects in the virtual scene from the perception record table according to the sequence of the time stamps, and constructing a role memory list of the virtual roles.
In the embodiment of the present application, the system of the present application may traverse all the perception record tables according to the order of the time stamps, and add the names of the virtual objects perceived by the virtual roles and the positions of the virtual objects in the virtual scene to the corresponding role memory list, so as to form a role memory list of the virtual roles with a time line.
Step 414: starting a timer, and deleting or updating the virtual object exceeding the memory dissipation time threshold from the role memory list through the timer according to the memory dissipation time threshold corresponding to each virtual object.
In the embodiment of the application, the system can establish a timer (Timer) for memory dissipation: according to the memory dissipation time threshold corresponding to each virtual object, virtual objects exceeding their threshold are deleted from the character memory list or updated, ensuring the reliability and diversity of the character memory information.
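The memory-dissipation step can be sketched as a periodic filter over the character memory list; the per-object thresholds, field names, and default value below are illustrative assumptions:

```python
# Sketch of memory dissipation: entries older than their per-object
# threshold are dropped from the character memory list.

def dissipate(memory_list, now, thresholds, default_threshold=60):
    """Keep only entries whose age is within that object's threshold."""
    return [e for e in memory_list
            if now - e["last_seen"] <= thresholds.get(e["name"], default_threshold)]

memory_list = [
    {"name": "cat_food_bowl", "last_seen": 90},  # age 10 at now=100
    {"name": "butterfly",     "last_seen": 40},  # age 60: a fleeting object
]
thresholds = {"cat_food_bowl": 300, "butterfly": 30}
kept = dissipate(memory_list, now=100, thresholds=thresholds)
# only the cat_food_bowl entry survives (butterfly age 60 exceeds 30)
```

Running this filter from a timer callback each tick yields the behavior described: long-remembered fixtures persist while transient objects fade from the character's memory.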
In one embodiment of the present application, as shown in fig. 5, at least one interactable virtual object in the character memory information of the virtual character is obtained, and a feasibility score corresponding to each virtual object under each action motivation is determined, which includes steps 502 to 506.
Step 502: and acquiring the attribute label corresponding to each virtual object in at least one virtual object in the character memory information of the virtual character and the familiarity value between each virtual object and the virtual character.
In the embodiment of the present application, the attribute tags of a virtual object are characteristics of the object that affect whether the virtual character selects it. For example, if the virtual object is a "cat food bowl," its attribute tags may be "food" and "small"; if it is a "cat climbing frame," its tags may be "sleepable," "playable," and "large." The familiarity value is the number of times perception has occurred between the virtual character and a given virtual object: the more often the character interacts with an object, the greater the familiarity value between them.
Step 504: and evaluating each virtual object under each behavior motivation according to the attribute label corresponding to each virtual object to obtain the object motivation score corresponding to each virtual object under each behavior motivation.
In an embodiment of the present application, the system of the present application may evaluate each virtual object under each behavior motivation according to its attribute tags, obtaining an object motivation score for each virtual object under each motivation, i.e. scoring each interactable attribute tag of each virtual object under the different behavior motivations. For example, taking the virtual character "pet cat": if the pet cat has the behavior motivation "eat" in the current frame state, the attribute tags of the virtual object "cat food bowl" score higher under that motivation than those of the virtual object "cat climbing frame."
Step 506: and evaluating each virtual object under each action motivation to obtain a corresponding feasibility score of each virtual object under each action motivation, wherein the object motivation score corresponds to each virtual object under each action motivation and the familiarity value between each virtual object and the virtual role.
In the embodiment of the application, the system may take the object motivation score of each virtual object under each behavioral motivation and process it in combination with the familiarity value between that virtual object and the virtual character, finally obtaining the corresponding feasibility score of each virtual object under each behavioral motivation, that is, the cost-effectiveness of the virtual character selecting that virtual object for interaction under that motivation. The feasibility scores corresponding to each virtual object under each behavioral motivation are then imported into the character memory list, thereby realizing the evaluation of the character memory information.
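The text does not give a concrete formula for combining the object motivation score with the familiarity value into a feasibility score; one possible sketch, assuming a simple linear weighting (the weighting scheme and the function name are assumptions), is:

```python
def feasibility_score(object_motivation_score, familiarity, familiarity_weight=0.2):
    """Combine an object's motivation score with the character's familiarity.
    The linear weighting is an illustrative assumption; the disclosure only
    states that both values feed into the feasibility score."""
    return object_motivation_score * (1.0 + familiarity_weight * familiarity)

# Under the motivation "eat", a familiar cat food bowl outranks an
# equally scored but unfamiliar one.
familiar = feasibility_score(80, familiarity=5)
unfamiliar = feasibility_score(80, familiarity=1)
```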
In one embodiment of the present application, as shown in fig. 6, determining the current behavioral motivation of the virtual character and determining the current target virtual object of the virtual character according to the current perception information of the virtual character and the feasibility score corresponding to each virtual object under that motivation includes steps 602 to 610.
Step 602: and determining a corresponding role motivation score of the virtual role under each of the at least one preset behavioral motivation.
Step 604: sorting the character motivation scores corresponding to each behavioral motivation from high to low, and determining the behavioral motivation with the highest character motivation score as the current behavioral motivation of the virtual character.
In the embodiment of the present application, the system calculates a character motivation score for the virtual character under each of the at least one preset behavioral motivation, that is, a score indicating how strongly the virtual character desires to do something in its current state. For example, taking the virtual character "pet cat": if the pet cat has character motivation scores of 88, 70, 62, 50 and 15 for the behavioral motivations "eat", "sleep", "drink", "play" and "stay alert" respectively, this indicates that the motivation the pet cat currently most needs to satisfy is eating. The character motivation scores for each behavioral motivation are then sorted from high to low, and the behavioral motivation with the highest score, namely "eat", is taken as the current behavioral motivation of the pet cat. If "eat" cannot be achieved, the next-ranked behavioral motivations in the ordering are taken in turn as the current behavioral motivation of the pet cat and the corresponding data calculation is performed, so that the virtual character continuously generates motivations and never idles motionless as if hung up, ultimately increasing the diversity and randomness of the character's behaviors.
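The scoring, sorting and fall-back behavior described above can be sketched as follows; the motivation names, scores and the achievability check are illustrative assumptions, not part of the original disclosure:

```python
def pick_motivation(character_motivation_scores, achievable):
    """Sort behavioral motivations by character motivation score (high to low)
    and return the first one that can currently be achieved, so the character
    always has an active motivation instead of idling."""
    ranked = sorted(character_motivation_scores.items(),
                    key=lambda kv: kv[1], reverse=True)
    for motivation, _score in ranked:
        if achievable(motivation):
            return motivation
    return None

# Scores from the pet-cat example in the text.
scores = {"eat": 88, "sleep": 70, "drink": 62, "play": 50, "alert": 15}
```

If "eat" cannot be satisfied (e.g. no food is perceived), the next-ranked motivation "sleep" is selected instead.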
Step 606: and generating current perception information of the virtual character according to the virtual scene of the virtual character in the current frame, the specific event and the character state of the virtual character.
In the embodiment of the present application, the system generates the current perception information of the virtual character according to the virtual scene where the virtual character is located in the current frame, the specific events that occur, and the character state of the virtual character. The specific events are those occurring in the virtual scene, for example events the virtual character sees, events it hears, and events related to itself. The character state is described by a plurality of state description parameters set by the developer; from the values of these state description parameters, the character state corresponding to the virtual character can be determined. For example, in the case where the virtual character is a pet cat, the state description parameters may include satiety, mood value, character type, age, hunger value, sleep value, and so on, and by assigning specific values to these parameters the current state of the pet cat can be broadly defined.
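A minimal sketch of assembling current-frame perception information from the scene, the events and the state description parameters; the parameter set is developer-defined per the text above, so the field and function names here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class CatState:
    """Illustrative state description parameters for a pet-cat character."""
    satiety: float
    mood: float
    hunger: float
    sleepiness: float
    age: int

def build_perception(scene_objects, events, state):
    """Assemble the current perception information of the virtual character
    from the scene's objects, the specific events, and the character state."""
    return {"objects": list(scene_objects), "events": list(events), "state": state}

info = build_perception(
    ["cat_food_bowl"],
    ["owner_entered_room"],
    CatState(satiety=0.2, mood=0.7, hunger=0.8, sleepiness=0.3, age=2),
)
```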
Step 608: and acquiring at least one interactable virtual object from the current perception information of the virtual character.
In the embodiment of the application, the system may acquire at least one interactable virtual object from the current perception information of the virtual character; that is, in order to satisfy the behavioral motivation of the virtual character, at least one interactable virtual object is obtained from what the character currently perceives.
Step 610: and taking the virtual object with the highest feasibility score as the current target virtual object of the virtual character according to the feasibility score corresponding to the virtual object under the action motivation.
In the embodiment of the application, the system may acquire the feasibility score corresponding to each virtual object under the behavioral motivation from the character memory list in the character memory information of the virtual character, so as to determine the target virtual object that satisfies the behavioral motivation with the highest cost-effectiveness.
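Target selection then reduces to taking the perceived object with the highest feasibility score under the active motivation; the scores and names below are hypothetical:

```python
def pick_target(perceived_objects, feasibility):
    """Among currently perceived interactable objects, choose the one with
    the highest feasibility score; unseen objects default to 0.0."""
    return max(perceived_objects, key=lambda obj: feasibility.get(obj, 0.0))

# Hypothetical feasibility scores under the motivation "eat".
feasibility_under_eat = {"cat_food_bowl": 96.0, "cat_climbing_frame": 10.0}
```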
In one embodiment of the present application, as shown in fig. 7, generating a behavior sequence of the virtual character according to the current behavioral motivation of the virtual character and the target virtual object, and interacting with the target virtual object through the behavior sequence, includes steps 702 to 704.
Step 702: searching a preset object motivation behavior sequence table according to the current behavioral motivation and the target virtual object of the virtual character to obtain at least one executable action behavior.
Step 704: and determining a motion path for realizing the at least one motion behavior according to a path search algorithm, and sequentially executing each motion behavior in the at least one motion behavior according to the motion path.
In an embodiment of the present application, the system may search a preset object motivation behavior sequence table according to the behavioral motivation and the target virtual object of the virtual character in the current frame to obtain at least one specific executable action behavior, then determine a motion path for realizing the at least one action behavior according to a path search algorithm, and sequentially execute each action behavior along the motion path to form a relatively complete behavior sequence. For example, taking the virtual character "pet cat", where the behavioral motivation is "eat" and the target virtual object is "cat food bowl": the retrieved executable actions may include walking, running, sleeping, finding food, eating, meowing or kicking the bowl, and the path search algorithm then computes a behavior sequence capable of achieving the "eat" goal: find the cat food, walk (walk to the cat food bowl), eat (eat until full); if there is no cat food, fallback action behaviors such as meowing or kicking the cat bowl are recalculated.
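The "eat" example above, with its fallback when no cat food is present, can be sketched as follows; the action names are illustrative, not from the disclosure:

```python
def plan_eat_sequence(has_cat_food):
    """Sketch of the 'eat' behavior sequence: if food is present, walk to the
    bowl and eat; otherwise fall back to actions that may prompt the owner."""
    if has_cat_food:
        return ["find_cat_food", "walk_to_bowl", "eat_until_full"]
    return ["meow", "kick_bowl"]
```

In practice each action in the returned list would be executed in order along the motion path produced by the path search algorithm.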
Wherein, the object motivation behavior sequence table is shown in table 2:
TABLE 2
As can be seen from table 2: after a certain behavioral motivation is determined, the virtual character can select any one of a plurality of virtual objects as the target virtual object to interact with, and the specific behavior sequence generated differs according to the target virtual object, so that the character behaviors of the virtual character are diversified. In addition, under different behavioral motivations, the behavior sequences for the same virtual object also differ: for example, when behavioral motivation 1 is "eat" and virtual object 1 is "cat food bowl", the action behavior of behavior sequence 1-1 is "find bowl", whereas when behavioral motivation 2 is "play" and virtual object 1 is "cat food bowl", the action behavior of behavior sequence 1-2 is "flip bowl". In this way the virtual character uninterruptedly perceives the virtual scene, generates behavioral motivations, executes the corresponding behavior sequences driven by those motivations, and performs the corresponding actions according to the plurality of interaction behaviors in each sequence, which greatly improves the user experience and the number of users.
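Since Table 2 itself is not reproduced in the text, the following is a hypothetical reconstruction of a (behavioral motivation, virtual object) → behavior sequence lookup; every entry is illustrative:

```python
# Hypothetical object motivation behavior sequence table, keyed by
# (behavioral motivation, target virtual object) pairs.
sequence_table = {
    ("eat", "cat_food_bowl"): ["find_bowl", "walk_to_bowl", "eat"],
    ("play", "cat_food_bowl"): ["flip_bowl"],
    ("sleep", "cat_climbing_frame"): ["climb", "curl_up", "sleep"],
}

def lookup_sequence(motivation, target):
    """Retrieve the behavior sequence for a motivation/object pair;
    an empty list means no executable actions were found."""
    return sequence_table.get((motivation, target), [])
```

Note how the same object ("cat_food_bowl") maps to different sequences under different motivations, matching the "find bowl" vs "flip bowl" contrast described above.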
Corresponding to the above method embodiment, the present disclosure further provides an embodiment of a behavior generating device for a virtual character, and fig. 8 shows a schematic structural diagram of the behavior generating device for a virtual character according to one embodiment of the present disclosure. As shown in fig. 8, the apparatus includes:
a memory generation module 801 configured to generate character memory information of a virtual character according to history awareness information of the virtual character;
a memory assessment module 802 configured to obtain at least one interactable virtual object in character memory information of the virtual character, determine a feasibility score for each virtual object under each of at least one preset behavioral motivation;
an object determining module 803 configured to determine a current behavioral motivation of the virtual character, and determine a current target virtual object of the virtual character according to the current perception information of the virtual character and a feasibility score corresponding to each virtual object under the behavioral motivation;
the action generating module 804 is configured to generate a behavior sequence of the virtual character according to the current behavior motivation of the virtual character and a target virtual object, and interact with the target virtual object through the behavior sequence.
Optionally, the memory generation module 801 includes:
a history sensing unit configured to generate history sensing information of the virtual character according to a virtual scene where the virtual character is located, a specific event which occurs, and a character state where the virtual character is located within a time threshold before a current frame;
and the memory mapping unit is configured to store and update the history perception information of the virtual character according to the corresponding relation with the time stamp, and generate character memory information of the virtual character.
Optionally, the history sensing unit includes:
a perception mode determining subunit configured to determine at least one perception mode possessed by the virtual character;
the perception record table unit is configured to acquire and record at least one virtual object identified by the virtual character, the position of each virtual object in the virtual scene and the last perceived time of each virtual object according to each perception mode respectively, so as to acquire a perception record table corresponding to each perception mode;
a state description parameter obtaining subunit configured to record at least one state description parameter corresponding to the virtual character and used for describing the state of the character;
And the history perception construction subunit is configured to generate history perception information of the virtual character according to the perception record table corresponding to each perception mode and the state description parameter corresponding to the virtual character.
Optionally, the memory mapping unit includes:
a perception time determining subunit configured to traverse the perception record table and determine, according to the time stamp, a time when each virtual object was perceived last time;
a memory list construction subunit configured to acquire, from the perception record table, a name of the virtual object and a position of the virtual object in the virtual scene in the order of the time stamps, and construct a role memory list of the virtual roles;
and the memory dissipation subunit is configured to start a timer, and delete or update the virtual objects exceeding the memory dissipation time threshold from the role memory list through the timer according to the memory dissipation time threshold corresponding to each virtual object.
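The timer-driven memory dissipation described above might look like the following sketch; the field names, thresholds and the pruning function are assumptions (a real implementation would invoke this periodically from a timer):

```python
import time

def prune_memory(memory_list, dissipation_thresholds, now=None):
    """Delete virtual objects whose last-perceived timestamp exceeds their
    per-object memory dissipation time threshold (in seconds)."""
    now = time.time() if now is None else now
    return {
        name: record
        for name, record in memory_list.items()
        if now - record["last_seen"] <= dissipation_thresholds.get(name, 60.0)
    }

memory = {"cat_food_bowl": {"last_seen": 100.0}, "toy_mouse": {"last_seen": 10.0}}
kept = prune_memory(memory, {"cat_food_bowl": 60.0, "toy_mouse": 60.0}, now=120.0)
```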
Optionally, the memory assessment module 802 includes:
a tag familiarity unit configured to acquire a corresponding attribute tag of each virtual object in at least one virtual object in character memory information of the virtual character and a familiarity value between each virtual object and the virtual character;
The object motivation score calculation unit is configured to evaluate each virtual object under each behavioral motivation according to the attribute label corresponding to each virtual object to obtain an object motivation score corresponding to each virtual object under each behavioral motivation;
and the feasibility score calculation unit is configured to evaluate each virtual object under each behavioral motivation according to the object motivation score of each virtual object under each behavioral motivation and the familiarity value between each virtual object and the virtual character, so as to obtain the corresponding feasibility score of each virtual object under each behavioral motivation.
Optionally, the object determining module 803 includes:
a character motivation score calculation unit configured to determine a corresponding character motivation score of the virtual character under each of the at least one preset behavioral motivation;
the behavior motivation determining unit is configured to sort the character motivation scores corresponding to each behavior motivation according to the scores from high to low, and determine the behavior motivation with the highest character motivation score as the current behavior motivation of the virtual character;
The object determining module further includes:
the current perception information acquisition unit is configured to generate current perception information of the virtual character according to a virtual scene where the virtual character is located in a current frame, a specific event which occurs and a character state of the virtual character;
a virtual object acquisition unit configured to acquire at least one interactable virtual object from current perception information of the virtual character;
and the target virtual object determining unit is configured to take the virtual object with the highest feasibility score as the current target virtual object of the virtual character according to the feasibility score corresponding to the virtual object under the action motivation.
Optionally, the action generating module 804 includes:
the behavior retrieval determining unit is configured to retrieve in a preset object motivation behavior sequence table according to the current behavior motivation and the target virtual object of the virtual character, and obtain at least one executable action behavior;
and the action sequence execution unit is configured to determine a motion path for realizing the at least one action according to a path search algorithm and execute each action in the at least one action in turn according to the motion path.
According to the method and the device, unique memories can be generated for a virtual character based on its historical experiences, and the current behavior actions of the virtual character can be predicted and generated from those character memories together with the virtual scene where the character currently is. At the same time, the behavioral motivation of the virtual character can be generated in real time according to its different memories combined with the current virtual scene, and the character's behavior actions are then generated from that motivation, so that the behavior actions are diverse and somewhat random, changing in real time as the character memories differ. This overcomes the single and boring character behaviors of the prior art, forms a rich and colorful virtual world, and greatly enhances the user experience.
An embodiment of the present application also provides a computing device including a memory, a processor, and computer instructions stored on the memory and executable on the processor, the processor implementing the following steps when executing the instructions:
generating character memory information of the virtual character according to the history perception information of the virtual character;
acquiring at least one interactable virtual object in character memory information of the virtual character, and determining a feasibility score of each virtual object under each behavior motivation in at least one preset behavior motivation;
Determining the current behavior motivation of the virtual character, and determining the current target virtual object of the virtual character according to the current perception information of the virtual character and the corresponding feasibility score of each virtual object under the behavior motivation;
and generating a behavior sequence of the virtual character according to the current behavior motivation of the virtual character and the target virtual object, and interacting with the target virtual object through the behavior sequence.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the method of generating behavior of a virtual character as described above.
The above is an exemplary scheme of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the computer-readable storage medium and the technical solution of the above behavior generation method for a virtual character belong to the same concept; for details of the storage medium solution not described here, reference may be made to the description of the method solution.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code that may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in another order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules referred to are not necessarily all required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The above-disclosed preferred embodiments of the present application are provided only as an aid to the elucidation of the present application. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. This application is to be limited only by the claims and the full scope and equivalents thereof.

Claims (16)

1. A method for generating behavior of a virtual character, comprising:
generating character memory information of the virtual character according to the history perception information of the virtual character;
acquiring at least one interactable virtual object in character memory information of the virtual character, and determining a feasibility score of each virtual object under each behavior motivation in at least one preset behavior motivation;
Determining a current behavior motivation of the virtual character according to a virtual scene where the virtual character is currently located, and determining a current target virtual object of the virtual character according to the current perception information of the virtual character and a feasibility score corresponding to each virtual object under the behavior motivation;
and generating a behavior sequence of the virtual character according to the current behavior motivation of the virtual character and the target virtual object, and interacting with the target virtual object through the behavior sequence.
2. The method of claim 1, wherein generating character memory information for the virtual character based on the history awareness information for the virtual character comprises:
generating historical perception information of the virtual character according to the virtual scene where the virtual character is located, the specific event which occurs and the character state where the virtual character is located in a time threshold before the current frame;
and storing and updating the history perception information of the virtual character according to the corresponding relation with the time stamp, and generating character memory information of the virtual character.
3. The method of claim 2, wherein generating the historical awareness information of the virtual character based on the virtual scene in which the virtual character is located, the specific event that occurs, and the character state in which the virtual character is located, comprises:
Determining at least one perception mode owned by the virtual character;
respectively acquiring and recording at least one virtual object identified by the virtual character, the position of each virtual object in the virtual scene and the last perceived time of each virtual object according to each perceived mode to obtain a perceived record table corresponding to each perceived mode;
recording at least one state description parameter corresponding to the virtual character and used for describing the state of the character;
and generating the historical perception information of the virtual character according to the perception record table corresponding to each perception mode and the state description parameters corresponding to the virtual character.
4. A method according to claim 3, wherein storing and updating the history awareness information of the virtual character in correspondence with the time stamp, generating character memory information of the virtual character, comprises:
traversing the perception record table, and determining the last perceived time of each virtual object according to the time stamp;
according to the sequence of the time stamps, the names of the virtual objects and the positions of the virtual objects in the virtual scene are obtained from the perception record table, and a role memory list of the virtual roles is constructed;
Starting a timer, and deleting or updating the virtual object exceeding the memory dissipation time threshold from the role memory list through the timer according to the memory dissipation time threshold corresponding to each virtual object.
5. The method of claim 1, wherein obtaining at least one interactable virtual object in the character memory information of the virtual character and determining a corresponding feasibility score for each virtual object under each behavioral motivation comprises:
acquiring a corresponding attribute tag of each virtual object in at least one virtual object in character memory information of the virtual character and a familiarity value between each virtual object and the virtual character;
evaluating each virtual object under each behavior motivation according to the attribute label corresponding to each virtual object to obtain an object motivation score corresponding to each virtual object under each behavior motivation;
and according to the object motivation score corresponding to each virtual object under each behavioral motivation and the familiarity value between each virtual object and the virtual character, evaluating each virtual object under each behavioral motivation to obtain the corresponding feasibility score of each virtual object under each behavioral motivation.
6. The method of claim 1, wherein determining the current behavioral motivation of the virtual character comprises:
determining a corresponding character motivation score of the virtual character under each of the at least one preset behavioral motivation;
sorting the character motivation scores corresponding to each behavioral motivation from high to low, and determining the behavioral motivation with the highest character motivation score as the current behavioral motivation of the virtual character;
determining a current target virtual object of the virtual character according to the current perception information of the virtual character and the corresponding feasibility score of each virtual object under the action motivation, wherein the method comprises the following steps:
generating current perception information of the virtual character according to the virtual scene of the virtual character in the current frame, the specific event and the character state of the virtual character;
acquiring at least one interactable virtual object from the current perception information of the virtual character;
and taking the virtual object with the highest feasibility score as the current target virtual object of the virtual character according to the feasibility score corresponding to the virtual object under the action motivation.
7. The method of claim 1, wherein generating a behavior sequence of the avatar and interacting with the target virtual object through the behavior sequence based on the avatar's current behavioral motivation and the target virtual object, comprises:
searching in a preset object motivation behavior sequence table according to the current behavior motivations and target virtual objects of the virtual character to obtain at least one executable action behavior;
and determining a motion path for realizing the at least one motion behavior according to a path search algorithm, and sequentially executing each motion behavior in the at least one motion behavior according to the motion path.
8. An apparatus for generating behavior of a virtual character, comprising:
a memory generation module configured to generate character memory information of a virtual character according to history perception information of the virtual character;
the memory assessment module is configured to acquire at least one interactable virtual object in the character memory information of the virtual character, and determine the feasibility score of each virtual object under each of at least one preset behavioral motivation;
The object determining module is configured to determine a current behavioral motivation of the virtual character according to a virtual scene where the virtual character is currently located, and determine a current target virtual object of the virtual character according to the current perception information of the virtual character and a feasibility score corresponding to each virtual object under the behavioral motivation;
and the action generating module is configured to generate a behavior sequence of the virtual character according to the current behavior motivation of the virtual character and a target virtual object and interact with the target virtual object through the behavior sequence.
9. The apparatus of claim 8, wherein the memory generation module comprises:
a history sensing unit configured to generate history sensing information of the virtual character according to a virtual scene where the virtual character is located, a specific event which occurs, and a character state where the virtual character is located within a time threshold before a current frame;
and the memory mapping unit is configured to store and update the history perception information of the virtual character according to the corresponding relation with the time stamp, and generate character memory information of the virtual character.
10. The apparatus of claim 9, wherein the history sensing unit comprises:
a perception mode determining subunit configured to determine at least one perception mode possessed by the virtual character;
the perception record table unit is configured to acquire and record at least one virtual object identified by the virtual character, the position of each virtual object in the virtual scene and the last perceived time of each virtual object according to each perception mode respectively, so as to acquire a perception record table corresponding to each perception mode;
a state description parameter obtaining subunit configured to record at least one state description parameter corresponding to the virtual character and used for describing the state of the character;
and the history perception construction subunit is configured to generate history perception information of the virtual character according to the perception record table corresponding to each perception mode and the state description parameter corresponding to the virtual character.
11. The apparatus of claim 10, wherein the memory mapping unit comprises:
a perception time determining subunit configured to traverse the perception record table and determine, according to the time stamp, a time when each virtual object was perceived last time;
A memory list construction subunit configured to acquire, from the perception record table, a name of the virtual object and a position of the virtual object in the virtual scene in the order of the time stamps, and construct a role memory list of the virtual roles;
and the memory dissipation subunit is configured to start a timer, and delete or update the virtual objects exceeding the memory dissipation time threshold from the role memory list through the timer according to the memory dissipation time threshold corresponding to each virtual object.
12. The apparatus of claim 8, wherein the memory assessment module comprises:
a tag familiarity unit configured to acquire, from the character memory information of the virtual character, the attribute tag corresponding to each of at least one virtual object and the familiarity value between each virtual object and the virtual character;
an object motivation score calculation unit configured to evaluate each virtual object under each behavioral motivation according to the attribute tag corresponding to the virtual object, to obtain an object motivation score for each virtual object under each behavioral motivation;
and a feasibility score calculation unit configured to evaluate each virtual object under each behavioral motivation, according to the object motivation score of the virtual object under that behavioral motivation and the familiarity value between the virtual object and the virtual character, to obtain a feasibility score for each virtual object under each behavioral motivation.
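One plausible reading of the scoring in claim 12, with hypothetical attribute tags and motivations, and a simple multiplicative combination of motivation score and familiarity (the claim does not fix the combining function):

```python
# hypothetical per-motivation score tables for attribute tags
TAG_SCORES = {
    "eat":   {"food": 10, "weapon": 0},
    "fight": {"food": 0,  "weapon": 8},
}

def object_motivation_score(motivation, tags):
    # evaluate a virtual object under one behavioral motivation via its attribute tags
    return sum(TAG_SCORES[motivation].get(tag, 0) for tag in tags)

def feasibility_score(motivation, tags, familiarity):
    # weight the object motivation score by the character's familiarity (0.0 - 1.0)
    return object_motivation_score(motivation, tags) * familiarity
```

Under this sketch an unfamiliar object scores lower even when its tags match the motivation well, so the character prefers objects it already knows.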
13. The apparatus of claim 8, wherein the item determination module comprises:
a character motivation score calculation unit configured to determine a character motivation score of the virtual character under each of at least one preset behavioral motivation;
a behavioral motivation determining unit configured to sort the character motivation scores of the behavioral motivations from high to low, and to determine the behavioral motivation with the highest character motivation score as the current behavioral motivation of the virtual character;
the item determination module further includes:
a current perception information acquisition unit configured to generate current perception information of the virtual character according to the virtual scene in which the virtual character is located in the current frame, any specific event that occurs, and the character state of the virtual character;
a virtual object acquisition unit configured to acquire at least one interactable virtual object from the current perception information of the virtual character;
and a target virtual object determining unit configured to take, according to the feasibility scores of the virtual objects under the current behavioral motivation, the virtual object with the highest feasibility score as the current target virtual object of the virtual character.
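The two highest-score selections in claim 13 reduce to argmax operations over score dictionaries; a sketch with hypothetical score values:

```python
def current_behavioral_motivation(character_scores):
    """Pick the behavioral motivation with the highest character motivation score."""
    return max(character_scores, key=character_scores.get)

def current_target(feasibility_scores):
    """Pick the interactable virtual object with the highest feasibility score."""
    return max(feasibility_scores, key=feasibility_scores.get)
```

Sorting the full score list, as the claim recites, yields the same top element; `max` is used here only because the sketch needs just the highest-scoring entry.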
14. The apparatus of claim 8, wherein the action generation module comprises:
a behavior retrieval determining unit configured to search a preset object-motivation behavior sequence table according to the current behavioral motivation and the target virtual object of the virtual character, to obtain at least one executable action;
and an action sequence execution unit configured to determine, by a path search algorithm, a motion path for realizing the at least one action, and to execute each of the at least one action in turn along the motion path.
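A sketch of the retrieve-then-execute flow of claim 14. The table contents and the `find_path`/`perform` callables are placeholders; a real implementation would plug in a path search algorithm such as A* and the engine's animation system:

```python
# hypothetical object-motivation behavior sequence table
BEHAVIOR_TABLE = {
    ("eat", "apple"):  ["walk_to", "pick_up", "consume"],
    ("fight", "wolf"): ["walk_to", "attack"],
}

def retrieve_actions(motivation, target):
    # look up the executable actions for (current motivation, target virtual object)
    return BEHAVIOR_TABLE.get((motivation, target), [])

def execute(motivation, target, find_path, perform):
    actions = retrieve_actions(motivation, target)
    path = find_path(target)   # stand-in for a path search algorithm
    for action in actions:     # execute each action in turn along the motion path
        perform(action, path)
    return actions
```

Keying the table on (motivation, object) pairs means the same object can drive different action sequences under different motivations, which is the point of retrieving by both.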
15. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the instructions, implements the steps of the method of any of claims 1-7.
16. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 7.
CN202010631992.3A 2020-07-03 2020-07-03 Behavior generation method and device for virtual roles Active CN111773736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010631992.3A CN111773736B (en) 2020-07-03 2020-07-03 Behavior generation method and device for virtual roles

Publications (2)

Publication Number Publication Date
CN111773736A CN111773736A (en) 2020-10-16
CN111773736B true CN111773736B (en) 2024-02-23

Family

ID=72758977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010631992.3A Active CN111773736B (en) 2020-07-03 2020-07-03 Behavior generation method and device for virtual roles

Country Status (1)

Country Link
CN (1) CN111773736B (en)

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5982390A (en) * 1996-03-25 1999-11-09 Stan Stoneking Controlling personality manifestations by objects in a computer-assisted animation environment
KR20020000590A (en) * 2000-06-24 2002-01-05 정상무 The Business model for cyber-touring that is based time-sequentially, 3-dynamically, and virtually concentrated animations on the internet
JP2004064398A (en) * 2002-07-29 2004-02-26 Matsushita Electric Ind Co Ltd Mobile terminal and communication system provided with mobile terminal
JP2004195002A (en) * 2002-12-19 2004-07-15 Namco Ltd GAME INFORMATION, INFORMATION STORAGE MEDIUM, AND GAME DEVICE
CN104331926A (en) * 2014-10-09 2015-02-04 一派视觉(北京)数字科技有限公司 Manufacturing method for three-dimensional virtual scene tour-guiding and view-guiding system
CN105930053A (en) * 2010-08-17 2016-09-07 上海本星电子科技有限公司 Computer interaction system for automatic virtual role transmission
WO2018059540A1 (en) * 2016-09-30 2018-04-05 腾讯科技(深圳)有限公司 Method, device and storage medium for generating character behaviors in game
CN108122127A (en) * 2016-11-29 2018-06-05 韩国电子通信研究院 Predict the method and device of the operation result of game on line service
JP2018094326A (en) * 2016-12-16 2018-06-21 株式会社バンダイナムコエンターテインメント Event control system, and event notification system and program
CN108509039A (en) * 2018-03-27 2018-09-07 腾讯科技(深圳)有限公司 Method, apparatus, equipment and the storage medium of article are picked up in virtual environment
CN108579090A (en) * 2018-04-16 2018-09-28 腾讯科技(深圳)有限公司 Article display method, apparatus in virtual scene and storage medium
CN108671539A (en) * 2018-05-04 2018-10-19 网易(杭州)网络有限公司 Target object exchange method and device, electronic equipment, storage medium
CN109011576A (en) * 2018-06-26 2018-12-18 魔力小鸟(北京)信息技术有限公司 The system of virtual scene control based on network and visualized management
CN109471712A (en) * 2018-11-21 2019-03-15 腾讯科技(深圳)有限公司 Dispatching method, device and the equipment of virtual objects in virtual environment
CN109529352A (en) * 2018-11-27 2019-03-29 腾讯科技(深圳)有限公司 The appraisal procedure of scheduling strategy, device and equipment in virtual environment
CN110496394A (en) * 2019-08-30 2019-11-26 腾讯科技(深圳)有限公司 Method, apparatus, equipment and the medium of control NPC based on artificial intelligence
CN110602532A (en) * 2019-09-24 2019-12-20 腾讯科技(深圳)有限公司 Entity article recommendation method, device, server and storage medium
WO2020043015A1 (en) * 2018-08-30 2020-03-05 腾讯科技(深圳)有限公司 Method and apparatus for displaying virtual pet, terminal, and storage medium
CN111061949A (en) * 2019-12-03 2020-04-24 深圳市其乐游戏科技有限公司 Prop recommendation method, recommendation device and computer-readable storage medium
CN111095170A (en) * 2019-11-25 2020-05-01 深圳信息职业技术学院 Virtual reality scene, interaction method thereof and terminal equipment
CN111111193A (en) * 2019-12-25 2020-05-08 北京奇艺世纪科技有限公司 Game control method and device and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070021200A1 (en) * 2005-07-22 2007-01-25 David Fox Computer implemented character creation for an interactive user experience
US8347217B2 (en) * 2009-12-02 2013-01-01 International Business Machines Corporation Customized rule application as function of avatar data
US9186575B1 (en) * 2011-03-16 2015-11-17 Zynga Inc. Online game with animal-breeding mechanic
US9741145B2 (en) * 2012-06-29 2017-08-22 Disney Enterprises, Inc. Augmented reality simulation continuum

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on intelligent virtual characters in an automatic animation generation system; Dai Xiaolin; Chen Zelin; Wu Jingchun; Computer and Information Technology (04); full text *

Also Published As

Publication number Publication date
CN111773736A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111953763B (en) Business data pushing method and device and storage medium
Lim et al. Investigating app store ranking algorithms using a simulation of mobile app ecosystems
KR101959368B1 (en) Determining an active persona of a user device
US11153430B2 (en) Information presentation method and device
Lim et al. How to be a successful app developer: Lessons from the simulation of an app ecosystem
CN114258670B (en) Information push method, device, electronic device and storage medium
CN111708948B (en) Content item recommendation method, device, server and computer readable storage medium
CN113886674B (en) Resource recommendation method and device, electronic equipment and storage medium
CN119185942A (en) Session interaction method and device, electronic equipment and readable storage medium
CN112286758A (en) Information processing method, apparatus, electronic device, and computer-readable storage medium
Lim et al. App epidemics: Modelling the effects of publicity in a mobile app ecosystem
CN111773736B (en) Behavior generation method and device for virtual roles
CN112138410B (en) Interaction method of virtual objects and related device
CN105610849B (en) Method and device for generating sharing label and method and device for displaying attribute information
CN109313638B (en) Application recommendation
CN113704280A (en) Data list updating method and device
CN119280838A (en) Method, device and electronic device for constructing short-term memory bank of dialogue model
JP7281241B1 (en) Information processing system and information processing method
CN110503482A (en) An article processing method, device, terminal and storage medium
CN112308530B (en) Prompt information generation method and device, storage medium and electronic device
CN106874455A (en) Task scheduling recommends method, device and server
JP2024074134A (en) Avatar generation device, avatar generation method, and program
CN116757247A (en) Click rate prediction model training method, device, computer equipment and storage medium
CN117319340A (en) Voice message playing method, device, terminal and storage medium
CN110008321B (en) Information interaction method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.

Address before: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.

GR01 Patent grant