
CN116431001A - Method for realizing AI interaction in virtual space - Google Patents


Info

Publication number
CN116431001A
CN116431001A
Authority
CN
China
Prior art keywords
user
virtual
component
computer
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310508834.2A
Other languages
Chinese (zh)
Inventor
齐本铁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weisaike Network Technology Co ltd
Original Assignee
Nanjing Weisaike Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weisaike Network Technology Co ltd filed Critical Nanjing Weisaike Network Technology Co ltd
Priority to CN202310508834.2A
Publication of CN116431001A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for realizing AI interaction in a virtual space. The system includes a computer graphics component, sensor devices, an AI assistant, and a cloud database. The user can carry out various interactive activities in the virtual scene and conduct intelligent question-and-answer with the AI assistant by voice, achieving a variety of interactive effects. The AI assistant takes the outline shape of a sprite or a robot and comprises computer components such as a speech recognition component, an AI engine interface, a speech synthesis component, audio equipment and a text control; it provides artificial intelligence services by connecting to a remote AI server. The AI assistant can automatically or manually switch its presentation form to interact with a particular scene model component. The invention provides a rich virtual interaction experience for the user.

Description

Method for realizing AI interaction in virtual space
Technical Field
The invention relates to the technical field of virtual reality, in particular to a method for realizing AI interaction in a virtual space.
Background
Virtual reality (VR) technology combines computing, electronic information and simulation. Its basic implementation takes computer technology as the core, integrating the latest achievements of high technologies such as three-dimensional graphics, multimedia, simulation, display and servo technology, and generating, by means of computers and related equipment, a realistic virtual world with three-dimensional visual, tactile and olfactory sensory experiences, so that a person in the virtual world feels personally present in the scene.
In recent years, with the continuous growth of the graphics processing and computing capabilities of computers, the applications of virtual reality technology have gradually expanded. By simulating the characteristics of real space, virtual reality technology is widely applied in fields such as games, design and education. Meanwhile, artificial intelligence technology has developed rapidly, and various industries are striving to apply it to actual production. In the prior art there are few ways of realizing AI interaction in a virtual space, and the traditional keyboard-and-mouse input mode limits the interaction between the user and intelligent services. The invention combines virtual reality technology with artificial intelligence technology to address the problems of the traditional virtual space, such as monotonous interaction and low information-acquisition efficiency, and provides a more intelligent and natural mode of virtual-space interaction.
Disclosure of Invention
This patent provides a method for realizing AI interaction in a virtual space, offering more convenient, rapid and efficient artificial intelligence services.
The method comprises the following components:
a computer graphics component for constructing and displaying virtual scenes;
a sensor device for tracking user motion and gesture information;
an AI assistant for calling an API interface to submit a user request to a remote AI server and acquire feedback information;
and a cloud database storing user data.
The virtual scene, the virtual character model and the AI assistant of this patent are computer image components generated in real time by a computer graphics engine. These components have position data features and animation data features, and have a deformation module that can automatically or manually switch the display form. The AI assistant establishes a close association with the virtual character through a binding mechanism and is bound, via cloud data, as a part of the virtual character.
In this patent, the AI assistant is a floating computer graphics data component whose appearance is the outline shape of a sprite or a robot. The component includes sub-components such as a speech recognition component, an AI engine interface API, a speech synthesis component, audio equipment and a text control. A cloud server connects to the remote AI server by calling the API interface to provide artificial intelligence services.
When using the method of this patent, the user first enters the virtual reality scene. The user's gestures and actions are tracked through the sensor devices, and the user interacts with the AI assistant, for example by asking questions. The AI assistant sends the user's input to the remote AI server to obtain feedback information and presents that feedback to the user by voice or text.
Specifically, the invention proposes a complete virtual reality (VR) system that includes three main computer graphics components: a virtual scene component, a virtual character model component and an AI assistant component. These components are displayed in the user's display device by the computer graphics engine invoking computer graphics hardware.
The virtual scene component provides the display of the virtual environment: the computer graphics component invokes a rendering method to display the virtual scene in the display device, indicating the environmental characteristics of the current virtual space. The component also provides elements such as objects and pictures in the scene, enriching the user's interactive experience.
The virtual character model component is an entity driven by user-customized virtual character data stored in the cloud database. The component comprises data stored in the cloud database, and the movement track of the character is controlled through computer vision technology. The component can also interact with the AI assistant in the virtual space on behalf of the user, submitting requests by voice, gesture and other means and acquiring feedback information.
Finally, the AI assistant component is the intermediary connecting the user, the remote AI server and the virtual environment. According to the user's request, the component calls the API interface to submit the request to the remote AI server and acquire feedback information, and then performs graphic interaction with the virtual space, so that the user can intuitively experience the AI's feedback in the virtual environment.
The method specifically comprises the following steps:
a) A virtual scene is provided; the computer graphics component invokes a rendering method to display it in the display device.
b) A virtual character model stored in a cloud database is provided, representing the identity of the user in the virtual scene.
c) An AI assistant that follows the movement of the virtual character is provided, implemented by the computer graphics component and an animation method and displayed in the virtual scene.
The three components are displayed in the user's device by the computer graphics engine invoking the graphics hardware.
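The three-component arrangement described above can be sketched as a minimal scene graph; all class and node names (`SceneNode`, `VirtualScene`, `"ai_assistant"`, etc.) are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """A minimal scene-graph node: a name and a 3D position."""
    name: str
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class VirtualScene:
    nodes: list = field(default_factory=list)

    def add(self, node):
        self.nodes.append(node)

    def render(self):
        # Stand-in for the graphics engine's draw call: return the draw order.
        return [n.name for n in self.nodes]

scene = VirtualScene()
scene.add(SceneNode("environment"))                    # virtual scene component
scene.add(SceneNode("avatar", (1.0, 0.0, 0.0)))        # virtual character model
scene.add(SceneNode("ai_assistant", (1.5, 1.8, 0.0)))  # AI assistant component
print(scene.render())
```

In a real engine each node would carry mesh and animation data; here only the position data feature the patent mentions is modeled.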
The AI assistant connects to the AI server by calling the API interface. It is provided with a deformation module: its form can be switched automatically or manually according to the scene and the input, and the graphics component renders the corresponding interaction effect and interacts with the scene model.
The AI assistant is bound, through cloud data, as a part of the virtual character, and the graphics engine controls the AI assistant to move along with the virtual character through the association relation of the data model.
The virtual scene, virtual character model and AI assistant are generated in real time by the computer graphics engine. The virtual character model and the AI assistant have position and animation data features in the virtual scene.
The AI assistant is located in the window of the user's device. When the user rotates the window, the AI assistant performs an animation model in which its eye graphic data is directed toward the virtual character.
The AI assistant is connected to the recording equipment of the user's device and receives sound environment information, and is connected to a wake-up word detection module. When a matching speech feature is detected, the AI assistant performs the graphic transformation method of the animation model.
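The wake-up word trigger described here can be sketched as follows. The wake phrases and the normalized substring matching are stand-ins, since the patent does not specify a detection algorithm:

```python
import re

WAKE_PHRASES = ("hello assistant", "hey assistant")  # hypothetical wake phrases

def detect_wake_word(transcript):
    """True when the recognized transcript contains a wake phrase.
    Plain normalized substring matching stands in for real
    voice-feature matching."""
    normalized = re.sub(r"[^a-z ]", "", transcript.lower())
    return any(p in normalized for p in WAKE_PHRASES)

def on_audio(transcript, state):
    """On a wake-phrase match, switch the assistant's animation state."""
    if detect_wake_word(transcript):
        return dict(state, animation="greet", listening=True)
    return state

state = {"animation": "idle", "listening": False}
state = on_audio("Hey, Assistant! What is this place?", state)
print(state)
```

A production system would run detection on audio features rather than on a finished transcript, but the state transition it triggers is the same.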
The AI assistant realizes interaction as follows:
a) The speech recognition API component parses the user's speech and converts it into text.
b) The API component sends the text to the AI server and obtains a response.
c) If voice interaction is selected, the speech synthesis program converts the response into speech, and the audio device outputs it to the user.
d) If text interaction is selected, the answer is displayed directly on the text control in the virtual scene.
The speech recognition component, the AI server, the speech synthesis component, the audio device and the text presentation component communicate through interfaces to process input and present answers.
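Steps a)–d) above form a pipeline that can be sketched as one function. The three backends are injected as callables because the patent fixes no concrete speech-recognition, AI-server or synthesis implementation; the stubs below are for demonstration only:

```python
def handle_user_turn(user_input, mode, recognize, ask_ai, synthesize):
    """a) recognize speech, b) query the remote AI server, then answer
    either c) by synthesized voice or d) on the text control."""
    text = recognize(user_input) if mode == "voice" else user_input
    answer = ask_ai(text)                     # b) remote AI server call
    if mode == "voice":
        return ("audio", synthesize(answer))  # c) speech output path
    return ("text", answer)                   # d) text control path

# Stub backends standing in for the real components:
recognize = lambda audio: audio.decode()      # pretend speech recognition
ask_ai = lambda q: f"Echo: {q}"               # pretend remote AI server
synthesize = lambda t: t.encode()             # pretend speech synthesis

print(handle_user_turn(b"where am I", "voice", recognize, ask_ai, synthesize))
print(handle_user_turn("where am I", "text", recognize, ask_ai, synthesize))
```

The `mode` flag mirrors the program-settings choice between voice and text interaction described later in the embodiments.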
The AI assistant is provided with a morphology method component; a morphology transformation method is triggered after interaction. The morphologies are different 3D models, and the transformation is realized by calls of the graphics engine. For example, the assistant becomes an instructor figure in a library and interacts with the scene model, or a tour-guide figure that explains the history of a scenic-spot model to multiple people.
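The scene-dependent form switching can be sketched as a lookup with a manual override. The model identifiers below are hypothetical; the patent only requires that each scene can be mapped to a different 3D model for the graphics engine to load:

```python
# Hypothetical 3D-model identifiers for each scene type.
FORM_BY_SCENE = {
    "library": "instructor_model",
    "scenic_spot": "tour_guide_model",
}
DEFAULT_FORM = "sprite_model"  # the floating sprite/robot appearance

class AssistantForm:
    def __init__(self):
        self.form = DEFAULT_FORM

    def switch(self, scene, manual=None):
        """Manual selection wins; otherwise use the form bound to the
        current scene, falling back to the default floating form."""
        self.form = manual if manual else FORM_BY_SCENE.get(scene, DEFAULT_FORM)
        return self.form

a = AssistantForm()
print(a.switch("library"))                      # scene-driven switch
print(a.switch("beach"))                        # unknown scene -> default
print(a.switch("beach", manual="robot_model"))  # manual override
```

The returned identifier would be handed to the graphics engine to load the corresponding 3D model data.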
The AI assistant and the virtual character establish an association through a binding mechanism, so that the AI assistant moves along with the virtual character. The mechanism uses virtual-scene position information for vector calculation and matching in the virtual space, realizing real-time binding of the AI assistant to the virtual character and ensuring that the AI assistant, rendered and displayed in the display device by the graphics component, always follows the virtual character.
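The vector calculation behind the binding can be sketched as a per-frame position update: the assistant's position is the avatar's position plus a fixed offset vector. The offset values are illustrative, not from the patent:

```python
def follow_position(avatar_pos, offset=(0.5, 1.75, 0.0)):
    """Per-frame binding: compute the assistant's position as the avatar's
    position plus a fixed offset vector, so the rendered assistant always
    tracks the avatar wherever it moves."""
    return tuple(a + o for a, o in zip(avatar_pos, offset))

print(follow_position((2.0, 0.0, -1.0)))  # assistant hovers beside the avatar
print(follow_position((5.0, 0.0, 3.0)))   # moves when the avatar moves
```

A full implementation would also interpolate between frames and orient the assistant toward the avatar, but the binding itself reduces to this vector addition.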
When the user communicates with the AI assistant in a language other than English, the speech recognition component converts the speech to text, the language translation component translates it, and speech synthesis presents the answer to the user in a language the user understands.
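The non-English flow can be sketched as a translate-in/translate-out wrapper around the AI server call. The one-entry lookup tables are toy stand-ins for a real translation component:

```python
# Toy lookup tables standing in for a real translation component.
ZH_TO_EN = {"这是哪里": "where is this"}
EN_TO_ZH = {"This is the library.": "这是图书馆。"}

def translate(text, src, dst):
    """Dictionary-based stand-in for the language translation component;
    unknown text passes through unchanged."""
    table = ZH_TO_EN if (src, dst) == ("zh", "en") else EN_TO_ZH
    return table.get(text, text)

def localized_reply(question, user_lang, ask_ai):
    """Translate the user's question into English for the AI server, then
    translate the server's answer back into the user's language before
    speech synthesis presents it."""
    answer = ask_ai(translate(question, src=user_lang, dst="en"))
    return translate(answer, src="en", dst=user_lang)

ask_ai = lambda q: "This is the library." if q == "where is this" else "?"
print(localized_reply("这是哪里", "zh", ask_ai))
```

The same wrapper composes with the voice pipeline: recognition feeds `question`, and the returned string feeds synthesis.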
The method allows the user to interact with the computer components in a natural manner, for example by gesture, gaze and voice.
The virtual scene presented to the user includes elements such as virtual characters, environments, articles, pictures, videos and text.
The sensor devices include physical sensors such as cameras, microphones and gyroscopes.
The association relation is realized through a vector calculation and matching component, taking the position information component in the virtual scene as the basis, to realize real-time binding between the AI assistant and the virtual character, so that the AI assistant can follow the virtual character at any time and provide services.
Compared with the prior art, the invention has the following beneficial effects: by introducing AI technology into the field of virtual reality, it provides a more intelligent and natural mode of virtual-space interaction. The invention realizes real interaction between the user and the characters and articles in the virtual scene, making operation in the virtual scene more natural and smooth. It also effectively reduces cumbersome keyboard input and mouse clicks and improves the efficiency of information acquisition.
Drawings
FIG. 1 is an AI assistant interaction scene diagram of the invention;
FIG. 2 is an AI assistant interaction scene diagram of the invention;
FIG. 3 is a diagram of the steps of the interactive method of the invention;
In the figures, 101 is the virtual scene computer component, the environment in the virtual space; 102 is the user character graphic component representing the user's avatar; 103 is the speech output component, i.e. the speaker; 104 is the voice input button (microphone icon, press to talk); 105 is a background settings button for computer component parameter settings; 201 is the AI assistant component; 202 is the graphic interaction window popped up by the AI assistant.
Description of the embodiments
As shown in FIG. 1, the invention relates to a method for realizing AI interaction in a virtual space, with the following characteristics and components:
1. The invention comprises the following components:
a) A computer graphics component for constructing and displaying virtual scenes;
b) Sensor devices for tracking user motion and gesture information, such as cameras, microphones and gyroscopes;
c) An AI assistant for calling the API interface to submit a user request to the remote AI server and acquire feedback information;
d) A cloud database storing user data;
2. The specific steps for realizing the invention are as follows:
a) Providing a virtual scene, which the computer graphics component displays in the computer display device by calling a rendering method, to indicate the environmental characteristics of the current virtual space;
b) Providing, under the current virtual scene environment model component, a virtual character model driven by user-customized proprietary character data stored in the cloud database, representing the identity of the user in the virtual scene;
c) Providing an AI assistant, implemented by the computer graphics component and an animation method, displayed in the current virtual scene and moving along with the virtual character, for providing intelligent question-answering services;
3. In the virtual space, the virtual scene, the virtual character model and the AI assistant are computer image components generated in real time, with position data features and animation data features.
4. The AI assistant component provided by the invention has the following sub-components:
a) A voice recognition component for converting the voice of the user into text data;
b) An AI engine interface API for sending the user's input to the remote AI server to acquire reply data;
c) A voice synthesis component for converting the reply data into voice data;
d) An audio device for converting voice data into sound;
e) A text control for presenting text data in the virtual scene;
5. The connection between the AI assistant component and the remote AI server is realized by a cloud server by calling the API interface, to provide artificial intelligence services.
6. The AI assistant component is provided with a morphology method component; a morphology transformation method is triggered after interaction with the user. The morphologies are different computer three-dimensional model components, and the transformation is realized by the computer graphics engine calling different three-dimensional model data.
FIG. 1 illustrates a scene in which a character interacts with the AI assistant. The user's avatar stands in the virtual scene, and the AI assistant floats at the user's side; an input/output box serves as the interface for interacting with the AI.
Correspondingly, in FIG. 2 the AI assistant has become a standing figure: here it takes the form of a female lecturer presenting in the lecture exhibition hall.
As shown in FIG. 2, the invention discloses a method for realizing AI interaction in a virtual space, comprising components such as a virtual scene, an AI assistant, sensor devices and a cloud database, together with the related steps. The method aims at improving the user's interactive experience, convenience and friendliness. Embodiments of the invention are described in detail below:
the invention provides a method for realizing AI interaction in a virtual space, which is characterized by comprising the following components:
a) A computer graphics component for constructing and displaying virtual scenes;
b) A sensor device for tracking user motion and gesture information;
c) An AI assistant for calling the API interface to submit a user request to the remote AI server and acquire feedback information;
d) A cloud database storing user data;
In the invention, a virtual scene is provided and user actions are tracked using sensors; an AI assistant is provided which, according to the interaction between the user and the assistant, calls the API interface, connects to the remote AI server, submits the user's request and acquires the AI server's feedback; the AI assistant then performs graphic interaction with the virtual space according to the data element features in the feedback information component.
As shown in fig. 1 and 2, the specific implementation process is as follows:
1. A virtual scene is provided; the computer graphics component displays it in the computer display device by calling a rendering method, indicating the environmental characteristics of the current virtual space.
2. A virtual character model driven by user-customized proprietary character data stored in the cloud database is provided under the current virtual scene environment model component, representing the identity of the user in the virtual scene.
3. An AI assistant is provided, implemented by the computer graphics component and an animation method, displayed in the current virtual scene and following the movement of the virtual character, for providing intelligent question-answering services.
The three computer components are invoked by the computer graphics engine and displayed by the computer graphics hardware in the user's computer display device.
The AI assistant component in the invention is a floating computer graphics data component whose appearance feature is the outline shape of a sprite or a robot.
For the AI assistant component, the invention includes the following computer components:
a) A voice recognition component for converting the voice of the user into text data;
b) An AI engine interface API for sending the user's input to the remote AI server to acquire reply data;
c) The voice synthesis component is used for converting the reply data into voice data;
d) An audio device for converting voice data into sound;
e) A text control for presenting text data in a virtual scene.
For the AI assistant component, the invention has the following characteristics:
1. The virtual scene, the virtual character model and the AI assistant are computer image components generated in real time by the computer graphics engine.
2. The virtual character model and the AI assistant component have position data features and animation data features in the virtual scene component.
3. The AI assistant component is located in a window of the user's computer device. When the user rotates the window of the computer device using the sensor, the AI assistant component connects to an animation component of the graphics engine and executes an animation model in which the AI assistant's eye graphic data is directed toward the user's virtual character component.
4. The AI assistant component is connected to the recording equipment in the user's computer device to receive the user's sound environment information, and is connected to the wake-up word detection module. When the wake-up word detection module detects that the data recorded by the user's recording equipment matches the speech feature component, the AI assistant component connects to an animation component of the computer graphics engine and executes the graphic transformation method recorded by the animation model.
5. As shown in FIG. 3, the AI assistant component achieves interaction as follows:
a) The user's computer device calls the speech recognition API component to parse the user's speech in real time and convert it into text form;
b) The text component obtained from the user's speech is sent through the API component to the remote AI server, and the server's response component is obtained;
c) If the user has selected voice interaction in the program settings module, a speech synthesis program is called to convert the response into speech, which is output to the user through the audio device;
d) If the user has selected text interaction in the program settings module, the answer is presented directly, in text form, on a text control in the virtual scene;
In the system, the speech recognition component, the AI engine server, the speech synthesis component, the audio device component, the text presentation component and other components carry out data transmission and interaction through the communication interfaces of the computer system, so as to process user input and present answers. For example, during voice input and output, the audio device may transmit digital signals to the speech synthesis or speech recognition component via a wired cable or wireless Bluetooth, while during text presentation the text information is sent directly to the text output control of the text presentation interface.
6. The AI assistant is provided with a morphology method component; a morphology transformation method is triggered after interaction with the user. The morphologies are different computer three-dimensional model components, and the transformation is realized by the computer graphics engine calling different three-dimensional model data. For example, the assistant becomes a virtual instructor figure standing beside a model in a library or museum and performs related operations to interact with the scene model, or a tour guide explaining the historical stories of a scenic-spot model to multiple people.
7. A binding mechanism establishes a close association between the AI assistant and the virtual character, so that the AI assistant always moves along with the virtual character. Specifically, the binding mechanism takes the position information in the virtual scene as its basis and, using vector calculation and matching technology in the virtual space, realizes real-time binding between the AI assistant and the virtual character, ensuring that the AI assistant, displayed in the computer display device by the rendering method of the computer graphics component, always follows the virtual character displayed there in the same way.
8. When the user communicates with the AI assistant in a language other than English, the speech recognition component converts the speech data into text data, the language translation component translates it, and finally the answer is presented to the user by speech synthesis in language data the user can understand.
9. The method allows the user to interact with the computer components in a natural manner, such as by gesture, gaze and voice.
10. The virtual scene presented to the user includes computer digital components such as virtual characters, environments, articles, pictures, videos and text.
Furthermore, it should be understood that although this specification is described in terms of embodiments, this manner of description is adopted for clarity only; the specification should be taken as a whole, and the technical solutions in the embodiments may be combined as appropriate to form other embodiments that will be apparent to those skilled in the art.

Claims (10)

1. A method for implementing AI interaction in a virtual space, comprising the following components:
a) A computer graphics component for constructing and displaying virtual scenes;
b) A sensor device for tracking user motion and gesture information;
c) An AI assistant for calling an API interface to submit a user request to a remote AI server and acquire feedback information;
d) A cloud database storing user data;
The method comprises the following steps: providing a virtual scene and tracking user actions with the sensors; providing an AI assistant which, according to the interaction between the user and the assistant, calls the API interface, connects to the remote AI server, submits the user's request and acquires the AI server's feedback; the AI assistant performs graphic interaction with the virtual space according to the data element features in the feedback information component; specifically:
a) Providing a virtual scene, which the computer graphics component displays in the computer display device by calling a rendering method, to indicate the environmental characteristics of the current virtual space;
b) Providing, under the current virtual scene environment model component, a virtual character model driven by user-customized proprietary character data stored in the cloud database, representing the identity of the user in the virtual scene;
c) Providing an AI assistant, implemented by the computer graphics component and an animation method, displayed in the current virtual scene and moving along with the virtual character, for providing intelligent question-answering services;
The AI assistant component is a floating computer graphics data component whose appearance feature is the outline shape of a sprite or a robot; the three computer components are invoked by the computer graphics engine and displayed by the computer graphics hardware in the user's computer display device; the AI assistant component comprises the following computer components:
a) A voice recognition component for converting the voice of the user into text data;
b) An AI engine interface API for sending the user's input to the remote AI server to acquire reply data;
c) The voice synthesis component is used for converting the reply data into voice data;
d) An audio device for converting voice data into sound;
e) A text control for presenting text data in a virtual scene;
the AI assistant component connects to the remote AI server through a cloud server by calling an API interface, and thereby provides artificial intelligence services; the graphics component of the AI assistant has a morph module that can switch the assistant's display form automatically or manually according to the scene and the user's input, and the graphics component executes a rendering method to display the corresponding interaction effect and to interact with specific scene model components; the AI assistant is bound to the virtual character as part of it through cloud data, establishes a data model association through a cloud data component, and the graphics engine keeps the AI assistant moving along with the virtual character at all times.
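The component pipeline of claim 1 — speech recognition feeding a remote AI server through an API, with the reply going to either speech synthesis or a text control — can be sketched as follows. This is a minimal illustration, not the patent's implementation; all class and function names are invented, and the recognizer, AI client, and synthesizer are stand-in callables.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AssistantReply:
    text: str
    audio: Optional[bytes]  # None when the user chose text-only interaction

class AIAssistant:
    """Illustrative wiring of the claim-1 components a) through e)."""

    def __init__(self, recognizer: Callable, ai_client: Callable,
                 synthesizer: Callable, use_voice: bool = True):
        self.recognizer = recognizer    # a) speech -> text
        self.ai_client = ai_client      # b) remote AI server API
        self.synthesizer = synthesizer  # c) reply text -> audio
        self.use_voice = use_voice

    def handle_utterance(self, audio: bytes) -> AssistantReply:
        query = self.recognizer(audio)          # speech recognition component
        reply_text = self.ai_client(query)      # submit request, get reply data
        # d) audio device path, or e) text control path
        reply_audio = self.synthesizer(reply_text) if self.use_voice else None
        return AssistantReply(reply_text, reply_audio)

# Stub components standing in for the real modules:
assistant = AIAssistant(
    recognizer=lambda audio: audio.decode(),   # pretend ASR
    ai_client=lambda q: f"answer to: {q}",     # pretend remote AI server
    synthesizer=lambda t: t.encode(),          # pretend TTS
)
reply = assistant.handle_utterance(b"what is this exhibit?")
print(reply.text)  # -> answer to: what is this exhibit?
```

The stand-ins make the data flow visible end to end; a real system would replace each lambda with the corresponding ASR, API, and TTS component.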
2. The method of claim 1, wherein the virtual scene, the virtual character model, and the AI assistant are computer image components generated in real time by a computer graphics engine, and wherein the virtual character model and AI assistant components carry position data features and animation data features.
3. The method of claim 1, wherein the AI assistant component is located in a window of the user's computer device; when the user rotates the view of the computer device's window using the sensor, the AI assistant component connects to the animation component of the graphics engine and executes the animation model of the assistant's virtual character component so that its eye graphics data component faces the user.
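The eye-facing behavior of claim 3 reduces to a look-at computation: each time the view rotates, derive the yaw that points the assistant's eyes at the user. A minimal sketch on ground-plane (x, z) coordinates, with invented names:

```python
import math

def yaw_towards(assistant_pos, user_pos):
    """Yaw angle (radians) that rotates the assistant's eye graphics
    toward the user; 0 means facing the +z axis."""
    dx = user_pos[0] - assistant_pos[0]
    dz = user_pos[1] - assistant_pos[1]
    return math.atan2(dx, dz)

# User stands diagonally in front of the assistant:
yaw = yaw_towards((0.0, 0.0), (1.0, 1.0))
print(round(math.degrees(yaw)))  # -> 45
```

An engine's animation component would feed this angle into the head or eye bone each frame the sensor reports a rotation.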
4. The method for realizing AI interaction in a virtual space according to claim 1, wherein the AI assistant component connects to a recording device in the user's computer device and receives the user's sound environment information; the AI assistant component is connected to a wake-word detection module; when the wake-word detection module detects that the data recorded by the user's recording device matches the voice feature component, the AI assistant component connects to the animation component of the computer graphics engine and executes the graphics transformation method recorded by the animation model.
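The wake-word trigger of claim 4 can be sketched as follows. This is a toy illustration matching on transcribed text; the patent's module matches against a voice feature component, and a production detector would operate on acoustic features. The wake phrases and function names are invented.

```python
# Hypothetical wake phrases; a real detector stores acoustic templates.
WAKE_WORDS = {"hey assistant", "ai assistant"}

def detect_wake_word(transcript: str) -> bool:
    """True if the transcribed microphone input contains a wake phrase."""
    return any(w in transcript.lower() for w in WAKE_WORDS)

def on_microphone_frame(transcript: str, play_wake_animation) -> bool:
    """Called per recorded frame; fires the graphics engine's animation
    (the recorded graphic transformation method) on a match."""
    if detect_wake_word(transcript):
        play_wake_animation()
        return True
    return False

triggered = on_microphone_frame("Hey Assistant, where is the exit?",
                                lambda: None)
print(triggered)  # -> True
```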
5. The method of claim 1, wherein the AI assistant component performs the interaction by:
a) the user's computer device calls a speech recognition API component to analyze the user's speech in real time and convert it into text form;
b) the text component converted from the user's speech is sent through an API component to the remote AI server to obtain the server's response component;
c) if the user has selected voice interaction in the program's settings module, a speech synthesis program is called to convert the response into speech, which is output to the user through the audio device;
d) if the user has selected text interaction in the program's settings module, the answer is presented directly, in text form, in a text control in the virtual scene;
in the system, the speech recognition component, the AI engine server, the speech synthesis component, the audio device component, the text presentation component, and the other components exchange data through the communication interfaces of the computer system in order to process user input and present answers. For example, during voice input and output, the audio device may transmit digital signals to the speech synthesis or speech recognition component over a wired cable or a wireless Bluetooth connection, while during text presentation the text information is sent directly to the text output control of the text presentation interface.
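Steps c) and d) above are a dispatch on the mode chosen in the settings module: the same reply either passes through speech synthesis to the audio device or goes straight to the text control. A minimal sketch with invented names, where the synthesis, playback, and display components are stand-in callables:

```python
from enum import Enum

class InteractionMode(Enum):
    VOICE = "voice"   # step c): TTS -> audio device
    TEXT = "text"     # step d): text control in the scene

def present_reply(reply_text, mode, synthesize, play_audio, show_text):
    """Route the AI server's reply according to the user's setting."""
    if mode is InteractionMode.VOICE:
        play_audio(synthesize(reply_text))
    else:
        show_text(reply_text)

shown = []
present_reply("The museum opens at 9.", InteractionMode.TEXT,
              synthesize=lambda t: t.encode(),
              play_audio=lambda b: None,
              show_text=shown.append)
print(shown)  # -> ['The museum opens at 9.']
```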
6. The method of claim 1, wherein the AI assistant has a morph method component that triggers a form transformation after interacting with the user; each form is a different three-dimensional computer model component, and the transformation is implemented by the computer graphics engine loading different three-dimensional model data. For example, in a library or museum the assistant may become a virtual instructor figure standing beside an exhibit model and perform related operations to interact with the scene model, such as giving guided tours or telling the historical stories of a scenic-spot model to multiple visitors.
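Since each form in claim 6 is just a different 3D model, the transformation method amounts to asking the graphics engine to load a different asset. A minimal sketch under that assumption; the asset paths, form names, and engine callback are all invented:

```python
# Hypothetical mapping from form name to 3D model asset.
MODEL_ASSETS = {
    "sprite": "models/sprite.glb",
    "robot": "models/robot.glb",
    "guide": "models/virtual_guide.glb",  # e.g. a museum/library docent
}

class MorphingAssistant:
    def __init__(self, engine_load_model, form="sprite"):
        self.load_model = engine_load_model  # graphics-engine callback
        self.form = form

    def morph(self, new_form: str):
        """Switch forms by loading different 3D model data."""
        if new_form not in MODEL_ASSETS:
            raise ValueError(f"unknown form: {new_form}")
        self.form = new_form
        self.load_model(MODEL_ASSETS[new_form])

loaded = []
a = MorphingAssistant(loaded.append)
a.morph("guide")       # scene trigger: the user enters a museum
print(a.form, loaded)  # -> guide ['models/virtual_guide.glb']
```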
7. The method for realizing AI interaction in a virtual space according to claim 1, wherein a binding mechanism establishes a close association between the AI assistant and the virtual character, so that the AI assistant always follows the movement of the virtual character. Specifically, the binding mechanism takes the position information in the virtual scene as its basis and binds the AI assistant to the virtual character in real time through vector calculation and matching in the virtual space, ensuring that the AI assistant, realized by computer vision and virtual reality technology and displayed in the computer display device by the rendering method of the computer graphics component, always follows the virtual character displayed in the same manner.
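The vector-based binding of claims 7 and 10 is commonly realized as a per-frame follow: the assistant's target position is the avatar's position plus a fixed offset, and the assistant is eased toward that target so the motion looks smooth. A minimal sketch; the offset, easing factor, and function names are illustrative, not from the patent:

```python
def lerp(a, b, t):
    """Linear interpolation between two position vectors."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def follow_step(assistant_pos, avatar_pos,
                offset=(0.5, 1.6, 0.0), ease=0.2):
    """One frame of the binding: move a fraction toward avatar + offset."""
    target = tuple(p + o for p, o in zip(avatar_pos, offset))
    return lerp(assistant_pos, target, ease)

pos = (0.0, 0.0, 0.0)
for _ in range(30):                     # ~30 frames of following
    pos = follow_step(pos, avatar_pos=(2.0, 0.0, 3.0))
print(tuple(round(p, 2) for p in pos))  # converges near (2.5, 1.6, 3.0)
```

The easing factor trades responsiveness against smoothness: at `ease=1.0` the assistant snaps rigidly to the offset; smaller values give the floating, trailing motion typical of companion sprites.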
8. The method of claim 1, wherein when the user communicates with the AI assistant in a language other than English, the speech recognition component converts the speech data into text data, the language translation component translates the text data, and the answer is presented to the user through speech synthesis in language data that the user can understand.
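The translation path of claim 8 can be sketched as a round trip: recognize the speech, translate the text before querying the AI server, then translate the answer back and synthesize it in the user's language. This is a hypothetical flow with stand-in callables; the patent does not specify which language the server expects, so English is assumed here:

```python
def answer_in_user_language(audio, lang, recognize, translate, ask_ai,
                            synthesize):
    """Round-trip translation around the remote AI server call."""
    query = recognize(audio, lang)                    # speech -> text
    if lang != "en":
        query = translate(query, src=lang, dst="en")  # to server language
    answer = ask_ai(query)                            # remote AI server
    if lang != "en":
        answer = translate(answer, src="en", dst=lang)  # back to user
    return synthesize(answer, lang)                   # text -> speech

# Toy stand-ins: "translation" just tags the text with the target language.
out = answer_in_user_language(
    b"bonjour", "fr",
    recognize=lambda a, l: a.decode(),
    translate=lambda t, src, dst: f"[{dst}] {t}",
    ask_ai=lambda q: f"reply({q})",
    synthesize=lambda t, l: t,
)
print(out)  # -> [fr] reply([en] bonjour)
```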
9. The method of claim 1, wherein the method allows the user to interact with the computer components in natural ways such as gestures, gaze, and voice; the virtual scene presented to the user comprises virtual characters, environments, objects, pictures, videos, and text; and the sensor devices comprise physical sensors such as cameras, microphones, and gyroscopes.
10. The method for realizing AI interaction in a virtual space according to claim 1, wherein the association relationship is realized by a vector calculation and matching component, which takes a position information component in the virtual scene as its basis to bind the AI assistant and the virtual character in real time, so that the AI assistant can follow the virtual character at any time and provide services.
CN202310508834.2A 2023-05-08 2023-05-08 Method for realizing AI interaction in virtual space Pending CN116431001A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310508834.2A CN116431001A (en) 2023-05-08 2023-05-08 Method for realizing AI interaction in virtual space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310508834.2A CN116431001A (en) 2023-05-08 2023-05-08 Method for realizing AI interaction in virtual space

Publications (1)

Publication Number Publication Date
CN116431001A true CN116431001A (en) 2023-07-14

Family

ID=87081533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310508834.2A Pending CN116431001A (en) 2023-05-08 2023-05-08 Method for realizing AI interaction in virtual space

Country Status (1)

Country Link
CN (1) CN116431001A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118860161A (en) * 2024-09-25 2024-10-29 娱电(上海)科技有限公司 A 3D virtual space application method, device, equipment and medium based on artificial intelligence

Similar Documents

Publication Publication Date Title
US20240211208A1 (en) Communication assistance program, communication assistance method, communication assistance system, terminal device, and non-verbal expression program
US12225325B2 (en) Method, apparatus, electronic device, computer-readable storage medium, and computer program product for video communication
CN101419499B (en) Multimedia human-computer interaction method based on camera and mike
CN108877336A (en) Teaching method, cloud service platform and tutoring system based on augmented reality
CN110400251A (en) Method for processing video frequency, device, terminal device and storage medium
US20220301250A1 (en) Avatar-based interaction service method and apparatus
KR20220129989A (en) Avatar-based interaction service method and device
CN112424736A (en) Machine interaction
JP2022500795A (en) Avatar animation
US7467186B2 (en) Interactive method of communicating information to users over a communication network
CN117784929A (en) Exhibition display system applying virtual reality technology
CN113824982A (en) Live broadcast method and device, computer equipment and storage medium
Lu et al. Classification, application, challenge, and future of midair gestures in augmented reality
Wasfy et al. Intelligent virtual environment for process training
CN116431001A (en) Method for realizing AI interaction in virtual space
Rauterberg et al. Pattern recognition as a key technology for the next generation of user interfaces
Jeon et al. Constructing the immersive interactive sonification platform (iISoP)
CN119883006A (en) Virtual human interaction method, device, related equipment and computer program product
Rauterberg From gesture to action: natural user interfaces
Schäfer Improving essential interactions for immersive virtual environments with novel hand gesture authoring tools
CN114979789A (en) Video display method and device and readable storage medium
CN118135068B (en) Cloud interaction method and device based on virtual digital person and computer equipment
US20250238991A1 (en) System and method for authoring context-aware augmented reality instruction through generative artificial intelligence
Asiri et al. The Effectiveness of Mixed Reality Environment-Based Hand Gestures in Distributed Collaboration
Haoxin et al. Immersive Experience-based Cooperative Interaction for Computer Assembly Guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination