
WO2018095439A1 - Method, apparatus and storage medium for information interaction - Google Patents


Info

Publication number
WO2018095439A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
target object
target
facial
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/115058
Other languages
French (fr)
Chinese (zh)
Inventor
陈阳
王宇
麥偉強
陈志南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of WO2018095439A1 publication Critical patent/WO2018095439A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • Embodiments of the present invention relate to the field of computers, and in particular to an information interaction method, apparatus, and storage medium.
  • A social platform is an account-based social system in which information interaction between users is performed in a peer-to-peer manner and is usually dominated by situations in which information is not shared.
  • A user browses the timeline information flow from time to time to discover information of interest, and then interacts on the basis of that information.
  • Because the information interaction of the existing solution is still based on the virtual account of the social platform, the interaction process is complicated, which is not conducive to information interaction.
  • Embodiments of the present invention provide a method, an apparatus, and a storage medium for information interaction, so as to at least solve the technical problem that the information interaction process of the related technology is complicated.
  • According to one aspect, an information interaction method includes: acquiring facial information of a first target object; acquiring target information of the first target object according to the facial information, wherein the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information.
  • According to another aspect, an information interaction apparatus includes one or more processors and one or more memories storing instructions which, when executed by the processors, implement the following program units: a first obtaining unit, configured to acquire facial information of a first target object; a second obtaining unit, configured to acquire target information of the first target object according to the facial information, wherein the target information is used to indicate a social behavior of the first target object; a receiving unit, configured to receive interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and a publishing unit, configured to publish the interaction information.
  • a terminal is also provided.
  • the terminal is arranged to execute program code for performing the steps in the information interaction method of the embodiment of the present invention.
  • a storage medium is also provided.
  • the storage medium is arranged to store program code for performing the steps in the information interaction method of the embodiment of the present invention.
  • In the embodiments, the target information of the first target object is acquired according to the facial information of the first target object, wherein the target information is used to indicate the social behavior of the first target object.
  • Instead of a virtual account, the interaction entry is mainly based on facial information, which simplifies the process of information interaction and achieves the purpose of information interaction, thereby realizing the technical effect of simplifying the information interaction process and solving the technical problem that the information interaction process of the related technology is complicated.
  • FIG. 1 is a schematic diagram of a hardware environment of an information interaction method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of an information interaction method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for displaying target information in a preset spatial position of a real scene according to an embodiment of the present invention
  • FIG. 4 is a flowchart of another method for displaying target information in a preset spatial position of a real scene according to facial information of a first target object according to an embodiment of the present invention
  • FIG. 5 is a flowchart of a method for displaying visible information of a first target object within a permission range in a preset spatial position according to an embodiment of the present invention
  • FIG. 6 is a flowchart of another method for displaying visible information of a first target object within a permission range in a preset spatial position according to an embodiment of the present invention
  • FIG. 7 is a flowchart of a method of transmitting a first request to a server according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of another method for information interaction according to an embodiment of the present invention.
  • FIG. 10 is a flowchart of a method for information registration according to an embodiment of the present invention.
  • FIG. 11 is a flowchart of a method for displaying and interacting information according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram showing a basic information display according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram showing another basic information display according to an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of an AR information display according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of another AR information display according to an embodiment of the present invention.
  • FIG. 16 is a schematic diagram of an information interaction apparatus according to an embodiment of the present invention.
  • FIG. 17 is a schematic diagram of another information interaction apparatus according to an embodiment of the present invention.
  • FIG. 18 is a structural block diagram of a terminal according to an embodiment of the present invention.
  • an embodiment of an information interaction method is provided.
  • FIG. 1 is a schematic diagram of a hardware environment of an information interaction method according to an embodiment of the present invention.
  • the server 102 is connected to the terminal 104 through a network.
  • the network includes but is not limited to a wide area network, a metropolitan area network, or a local area network.
  • The terminal 104 includes but is not limited to a PC, a mobile phone, a tablet computer, and the like.
  • The information interaction method in the embodiment of the present invention may be executed by the server 102, may be executed by the terminal 104, or may be executed jointly by the server 102 and the terminal 104.
  • the information interaction method performed by the terminal 104 in the embodiment of the present invention may also be performed by a client installed thereon.
  • the information interaction method may include the following steps:
  • Step S202 Acquire face information of the first target object.
  • Augmented Reality (AR) is a technology that calculates the position and angle of the camera image in real time and superimposes corresponding images, videos, and three-dimensional (3D) models, so as to enable real-time interaction between virtual content and real-world scenes.
  • Augmented reality applications use AR technology and can be installed and used on AR glasses, mobile communication terminals, and PCs.
  • In this step, the facial information of the first target object is acquired, where the first target object is the object with which information is to be exchanged, for example, in a meeting scene or an encounter scene, a classmate, friend, colleague, family member, or other object.
  • The facial information may be collected by a camera, for example, obtained automatically through face recognition by a front camera. Facial information can replace the traditional virtual account for social behavior, so that the entrance of the information interaction is based on the recognition of facial information.
  • Recognition of the facial information of the first target object is automatically triggered.
  • When logging in to the augmented reality application, the user may log in through palm print information, a user name, or facial information, which is not limited herein.
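As a sketch of how facial information can serve as the interaction entrance in place of a virtual account, the following minimal example matches a scanned facial feature vector against registered users by cosine similarity. The registry, embeddings, threshold, and function names are illustrative assumptions, not taken from the patent; a real system would obtain embeddings from a face-recognition model.

```python
import math

# Hypothetical registry mapping a user ID to a stored facial feature vector.
# In a real deployment these embeddings would come from a recognition model;
# they are hard-coded here purely for illustration.
REGISTERED_FACES = {
    "user_a": [0.9, 0.1, 0.3],
    "user_b": [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_face(embedding, threshold=0.9):
    """Return the registered user whose stored facial features best match
    the scanned embedding, or None if no match clears the threshold."""
    best_id, best_score = None, 0.0
    for user_id, stored in REGISTERED_FACES.items():
        score = cosine_similarity(embedding, stored)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None
```

The threshold trades false accepts against false rejects and would be tuned per model in practice.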
  • Step S204: Acquire target information of the first target object according to the facial information of the first target object.
  • the target information of the first target object is acquired according to the facial information of the first target object, wherein the target information is used to indicate the social behavior of the first target object.
  • The facial information of the first target object is in one-to-one correspondence with the target information of the first target object. The target information is used to indicate the social behavior of the first target object, and may further serve as prompt information that helps the second target object understand the first target object, where the second target object is an object that interacts with the first target object according to the target information.
  • The first target object can be registered on the server through the facial information. After the facial information of the first target object is acquired, the target information of the first target object is acquired from the server according to the facial information.
  • the target information includes user basic information and social information of the first target object.
  • The user basic information may include basic information such as a nickname, a name, an address, contact information, and a personalized signature of the first target object.
  • the social information includes dynamic information of the first target object, extended information of the first target object on the third-party platform, historical exchange information of the first target object, and the like.
  • The dynamic information of the first target object may be dynamic timeline information, including but not limited to expressions and comments. An expression refers to a single static, dynamic, or three-dimensional preset picture without text; a comment is rich media, which may include text, voice, pictures, and other information freely organized by users.
  • The extended information includes third-party social account information, and the information published by the first target object on a third-party social platform can be pulled according to the third-party social account information and the network address characteristics of the third-party social platform.
  • The historical exchange information is information exchanged with the first target object in the past, and can be used to evoke the second target object's memory of communicating with the first target object, thereby helping the second target object start a conversation topic with the first target object more naturally.
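The composition of the target information described above, basic profile fields plus dynamic, extended, and historical social information, can be sketched as a simple record keyed by facial information. All field and function names here are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TargetInfo:
    """Illustrative target-information record for one user."""
    nickname: str
    address: str = ""
    signature: str = ""
    dynamics: list = field(default_factory=list)   # timeline posts (expressions/comments)
    extended: dict = field(default_factory=dict)   # third-party platform accounts
    history: list = field(default_factory=list)    # past exchange records

def build_target_info(face_id, store):
    """Look up the one-to-one mapping from facial information to target info;
    `store` stands in for the server-side registry."""
    return store.get(face_id)
```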
  • When the target information of the first target object is acquired according to the facial information, the target information may be displayed at a preset spatial position of the real scene; that is, the target information is superimposed onto a preset spatial position of the real scene, for example, onto one side of the first target object. This combines the virtual target information with the real scene and, by acquiring the target information directly, avoids manually opening social software to search for the dynamic information and historical exchange information of the first target object, which simplifies the process of information interaction.
  • The target information of the first target object is displayed automatically after recognition of the facial information of the first target object is triggered.
  • When it is not easy for the camera to acquire the facial information of the first target object, the target information can be obtained by voice search, for example, by searching by voice for basic information such as a nickname or name.
  • This also applies when the second target object and the first target object do not meet in the real scene but the second target object wants to view the social information of the first target object, for example, historical exchange information with the first target object; at this time the facial information of the first target object cannot be obtained, and the above-mentioned voice search can be used instead.
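The voice-search fallback can be sketched as a lookup over basic profile fields when no facial information is available. The directory structure and names are assumptions for illustration; a real system would first transcribe the spoken query to text.

```python
def search_by_name(query, directory):
    """Fallback lookup when facial information is unavailable: match the
    (already transcribed) voice query against nickname or name fields.
    `directory` maps user IDs to profile dicts (illustrative structure)."""
    query = query.strip().lower()
    hits = []
    for user_id, profile in directory.items():
        if query in (profile.get("nickname", "").lower(),
                     profile.get("name", "").lower()):
            hits.append(user_id)
    return hits
```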
  • Step S206: Receive interaction information sent by the second target object according to the target information.
  • In step S206, the interaction information sent by the second target object according to the target information is received, wherein the interaction information is used to indicate that the second target object interacts with the first target object.
  • After the target information of the first target object is acquired according to the facial information, the second target object gains a further understanding of the first target object through the target information.
  • The second target object then performs information interaction with the first target object according to its actual intention, and the interaction information sent by the second target object according to the target information is received, so that the first target object and the second target object interact.
  • The interaction information may be information related to the content of the target information, or may be information unrelated to it.
  • For example, if the second target object learns from the target information that the first target object likes soccer, the second target object may send interaction information inviting the first target object to watch a soccer match, or may send interaction information inviting the other party to watch a basketball game in order to give the first target object a new ball-game experience.
  • The interaction information may be virtual interaction information in a virtual scene, including but not limited to expressions and comments, such as text manually input by the second target object.
  • The interaction information may also be voice information, image information, or video information recorded in a real scene, which is not limited herein, so that interaction spans both the virtual world and the real world and the types of information interaction are enriched.
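The distinction between virtual interaction information (expressions, comments) and information recorded in the real scene (voice, images, video) can be sketched as a tagged message envelope. Type names and fields are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative partition of interaction kinds into virtual-scene content
# and media recorded in the real scene.
VIRTUAL_KINDS = {"expression", "comment"}
REAL_KINDS = {"voice", "image", "video"}

@dataclass
class InteractionInfo:
    sender: str        # the second target object
    receiver: str      # the first target object
    kind: str
    payload: bytes
    timestamp: float = 0.0

    def is_virtual(self):
        return self.kind in VIRTUAL_KINDS

def validate(info):
    """Reject envelopes whose kind is neither virtual nor real."""
    if info.kind not in VIRTUAL_KINDS | REAL_KINDS:
        raise ValueError(f"unsupported interaction kind: {info.kind}")
    return True
```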
  • Step S208: Publish the interaction information.
  • In step S208, after the interaction information sent by the second target object according to the target information is received, the interaction information is published, and the first target object and the second target object can view the interaction information through the client, so that the first target object and the second target object interact.
  • The publishing portals mainly include a personal dynamic information portal and a session information portal shared with others.
  • The former allows permission control over publication; the latter includes the entry of interaction information generated by the two parties in the virtual scene and the entry of interaction information in the real scene.
  • The permission control is divided into at least four categories, for example: visible to everyone, visible to friends, visible to specific friends, and visible only to oneself. Users with different requirements for the degree of information disclosure can choose accordingly: those who want to be widely seen can use the widest visibility, while those concerned about privacy can set information to be visible only to friends, thus preventing unfamiliar people from peeking at their information and improving the security of user information.
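The four-level permission control described above can be sketched as a single visibility check. The level names and function signature are illustrative assumptions.

```python
def is_visible(level, viewer, owner, friends, allowed=()):
    """Decide whether `viewer` may see information published by `owner`
    under the given permission level (widest to narrowest)."""
    if viewer == owner:
        return True                      # owners always see their own posts
    if level == "everyone":
        return True
    if level == "friends":
        return viewer in friends
    if level == "specific_friends":
        return viewer in friends and viewer in allowed
    if level == "only_me":
        return False
    raise ValueError(f"unknown permission level: {level}")
```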
  • The manner of displaying the target information of the first target object, whether basic user information, dynamic information, or the interaction information between the first target object and the second target object, includes but is not limited to three-dimensional spiral, spherical, cylindrical, and other presentation methods, thereby increasing the interest of the interactive information display.
  • Through steps S202 to S208, the facial information of the first target object is acquired; the target information of the first target object is acquired according to the facial information, wherein the target information is used to indicate the social behavior of the first target object; the interaction information sent by the second target object according to the target information is received, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and the interaction information is published.
  • Instead of a virtual account, the interaction entry is mainly based on facial information, which simplifies the process of information interaction, solves the technical problem that the information interaction process of the related technology is complicated, and achieves the technical effect of simplifying the information interaction process.
  • Optionally, receiving the interaction information sent by the second target object according to the target information includes: receiving real interaction information in the real scene sent by the second target object according to the target information; and/or receiving virtual interaction information in the virtual scene sent by the second target object according to the target information.
  • The real interaction information between the second target object and the first target object is recorded, thereby realizing a record of the real world.
  • With AR content entry, the user can record image content and video content in the real scene without having to switch attention back and forth between the screen and reality, as on a mobile phone platform.
  • The virtual interaction information in the virtual scene sent by the second target object according to the target information is received. The virtual interaction information is exchange information of the virtual world, and may be a single static, dynamic, or three-dimensional preset picture without text, or text, voice, pictures, and other information freely organized by users.
  • After the virtual interaction information in the virtual scene sent by the second target object according to the target information is received, the virtual interaction information is stored to a preset storage location, for example, on the server, so that the target information acquired next time includes this virtual interaction information.
  • Likewise, after the real interaction information in the real scene sent by the second target object according to the target information is received, the real interaction information is stored to a preset storage location, so that it is included in the target information acquired next time.
  • Optionally, after the real interaction information is entered through the AR glasses, the recorded image content, video content, and the like can be played back without other platforms, and the user experiences the viewing angle from which it was originally recorded, which brings a more realistic experience.
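Storing received interaction information at a preset storage location so that it appears in the next fetch of target information can be sketched with a small in-memory store standing in for the server. Class and method names are assumptions for illustration.

```python
class InteractionStore:
    """Illustrative server-side store of published interaction records."""

    def __init__(self):
        self._by_owner = {}   # owner id -> list of stored interaction records

    def save(self, owner_id, record):
        """Persist one received interaction record (virtual or real)."""
        self._by_owner.setdefault(owner_id, []).append(record)

    def target_info(self, owner_id):
        """The next fetch of target information includes stored interactions."""
        return {"history": list(self._by_owner.get(owner_id, []))}
```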
  • the real interaction information includes at least one or more of the following: voice information in a real scene; image information in a real scene; and video information in a real scene.
  • The real interaction information in the real scene sent by the second target object according to the target information includes voice information in the real scene, for example, a conversation between the second target object and the first target object; image information in the real scene, for example, a facial image of the first target object; and video information in the real scene, for example, a video recording of a meeting in a conference room, thereby enriching the types of interaction information.
  • Optionally, in step S202, acquiring the facial information of the first target object includes scanning the face of the first target object to obtain the facial information; and in step S204, after the target information of the first target object is acquired according to the facial information, the method further includes displaying the target information in a preset spatial position of the real scene.
  • The facial information of the first target object may be obtained by scanning the face of the first target object; for example, the front camera installed on the AR glasses automatically performs face recognition of the first target object to obtain the facial information, thereby achieving the purpose of acquiring the facial information of the first target object.
  • The target information is displayed in a preset spatial position of the real scene, for example, on one side of the first target object, and through the AR device the user can see the target information displayed in the preset spatial position, the first target object, and the other scenes in the real scene.
  • In theory, any device with a camera can be used to acquire the facial information of the first target object in this embodiment, including but not limited to an AR glasses device; the ease of use and manner of operation may differ for a mobile communication terminal, a PC, and the like.
  • Optionally, displaying the target information in the preset spatial position of the real scene includes: determining a display spatial position of the target information in the real scene according to the current spatial position of the first target object in the real scene; and displaying the target information at the display spatial position.
  • FIG. 3 is a flowchart of a method for displaying target information in a preset spatial position of a real scene according to an embodiment of the present invention. As shown in FIG. 3, the method for displaying target information in a preset spatial position of a real scene includes the following steps:
  • Step S301: Determine the current spatial position of the first target object in the real scene.
  • In step S301, after the target information of the first target object is acquired, the current spatial position of the first target object in the real scene is determined, where the current spatial position may be the position of the face of the first target object in the real scene.
  • The current position of the first target object in the real scene is determined by information such as its distance from the second target object and its direction relative to the second target object.
  • Step S302: Determine the display spatial position of the target information in the real scene according to the current spatial position.
  • In step S302, after the current spatial position of the first target object in the real scene is determined, the display spatial position of the target information in the real scene is determined according to the current spatial position. The display spatial position may be to the left, right, top, or bottom of the current spatial position, and can also be set manually according to the current spatial position, so as to achieve a superimposed effect of the target information display position and the real scene.
  • Step S303: Display the target information at the display spatial position.
  • The target information may be displayed at the display spatial position in an automatically floating form on the side of the first target object, or in a bouncing form or a fade-in form, which is not limited herein, thereby improving the interest of the information interaction.
  • This embodiment determines the current spatial position of the first target object in the real scene, determines the display spatial position of the target information in the real scene according to the current spatial position, and displays the target information at the display spatial position, thereby achieving the purpose of displaying the target information in the preset spatial position of the real scene according to the first target object and simplifying the process of information interaction.
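Determining the display spatial position from the current spatial position of the first target object can be sketched in screen coordinates: anchor the information panel beside the detected face and clamp it inside the frame. The offsets, panel size, and function names are illustrative assumptions, not the patent's method.

```python
PANEL_WIDTH = 200   # illustrative panel width in pixels

def display_position(face_box, margin=16):
    """Given the face bounding box (x, y, w, h) in screen coordinates,
    return the top-left corner for the floating information panel,
    placed to the right of the face by default."""
    x, y, w, h = face_box
    return (x + w + margin, y)

def overlay(frame_size, face_box):
    """Clamp the panel inside the frame so it never renders off-screen."""
    fw, fh = frame_size
    px, py = display_position(face_box)
    px = min(px, fw - PANEL_WIDTH)   # keep the panel inside the frame
    return (max(px, 0), max(min(py, fh), 0))
```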
  • Optionally, displaying the target information at the display spatial position includes at least one or more of the following: when the target information includes user profile information, displaying the user profile information of the first target object at a first display spatial position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at a second display spatial position; when the target information includes extended information, displaying the extended information of the first target object at a third display spatial position; and when the target information includes historical interaction information, displaying, at a fourth display spatial position, the historical interaction information generated during the historical interaction between the second target object and the first target object.
  • The target information includes user profile information, which is the basic information of the first target object, for example, a nickname, name, address, contact information, and personalized signature.
  • The user profile information is displayed at the first display spatial position; for example, it is superimposed on the side of the face of the first target object, and through the AR glasses the user can see not only the target information at the first display spatial position but also the other scenes in the real scene, thereby achieving the combination of the virtual world and the real world.
  • The target information may further include personal dynamic information, which is displayed at the second display spatial position, optionally in response to a display instruction.
  • The display instruction includes a voice command, an instruction generated by the user clicking through a gesture, and an instruction generated by the user gazing.
  • The personal dynamics may be displayed sequentially in the order of the timeline, in a bouncing manner, or in a progressive form, which is not limited herein.
  • The personal dynamic information is one of the entrances of information interaction.
  • The target information may further include extended information, which is displayed at the third display spatial position. The extended information includes the third-party social account information of the first target object, and the information published by the first target object may be pulled according to the third-party social account information and the network address characteristics of the third-party social platform.
  • The target information may further include historical interaction information, displayed at the fourth display spatial position, generated during the historical interaction between the second target object and the first target object. The historical interaction information may be picture information, voice information, text information, video information, and the like; the historical communication is a message session, which is one of the information interaction entries and records exchange information in both the virtual scene and the real scene.
  • The target information of this embodiment is virtual content superimposed on the real world, realizing the combination of virtual and real interaction information, thereby bringing a more realistic interactive experience to the user.
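Assigning each information segment that the target information actually contains to its own display position can be sketched as a simple layout plan; the slot names are assumptions for illustration.

```python
# Illustrative mapping from information segments to the four display
# positions described above.
SLOTS = ("profile", "dynamics", "extended", "history")

def layout(target_info):
    """Assign each present information segment to its display slot,
    skipping segments the target information does not contain."""
    plan = {}
    for slot in SLOTS:
        value = target_info.get(slot)
        if value:
            plan[slot] = value
    return plan
```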
  • Optionally, in step S204, displaying the target information in the preset spatial position of the real scene according to the facial information of the first target object includes: in the case of scanning the face of the first target object, determining whether the server stores facial feature data matching the facial information of the first target object; if so, determining whether the facial scanning permission of the first target object allows scanning, that is, whether the scanning permission of the account corresponding to the facial feature data allows scanning; and if the facial scanning permission of the first target object allows scanning, displaying visible information within the permission range at the preset spatial position, wherein the visible information includes at least the user profile information of the first target object.
  • FIG. 4 is a flowchart of another method for displaying target information in a preset spatial position of a real scene according to face information of a first target object according to an embodiment of the present invention.
  • the method for displaying target information according to the facial information of the first target object in a preset spatial position of the real scene includes the following steps:
  • step S401 the face is scanned.
  • the information display uses face scanning as the main entrance scenario.
  • a face scan is performed to determine whether there is a face.
  • the faces of the plurality of target objects include the face of the first target object. If no face is scanned, scanning continues to determine whether the face of another object can be scanned. If the face of an object is scanned, it is determined whether facial feature data matching the facial information of the scanned object is stored in the server; if it is determined that no matching facial feature data is stored in the server, scanning continues to determine whether the faces of other objects can be scanned.
  • if it is determined that the server stores facial feature data matching the facial information of the scanned object, it is further determined whether the face scan permission of the object allows the visible information of the object within the permission range to be displayed after the face of the object is scanned. If it is determined that the face scan permission of the object does not allow this, scanning continues to determine whether the faces of other objects can be scanned, and so on.
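The scan loop above (scan a face, look for stored facial feature data, check the scan permission, then display visible information or continue scanning) can be sketched as follows. This is an illustrative sketch only; all names (`process_scan`, `FakeServer`, the permission strings) are hypothetical and not part of the patent.

```python
# Illustrative sketch of the scan loop (steps S401-S404). All names are
# hypothetical stand-ins, not part of the described system.

def process_scan(frames, server):
    """Iterate over camera frames; return visible info for the first face
    that is registered and whose scan permission allows scanning."""
    for face in frames:
        if face is None:                 # no face in this frame: keep scanning
            continue
        record = server.match(face)      # look up stored facial feature data
        if record is None:               # unregistered face: keep scanning
            continue
        if record["scan_permission"] != "allow":
            continue                     # permission denied: keep scanning
        return record["profile"]         # display visible information
    return None

class FakeServer:
    """Toy stand-in for the server's facial feature database."""
    def __init__(self, db):
        self.db = db
    def match(self, face):
        return self.db.get(face)

server = FakeServer({
    "alice": {"scan_permission": "allow", "profile": {"nickname": "Alice"}},
    "bob":   {"scan_permission": "deny",  "profile": {"nickname": "Bob"}},
})
assert process_scan([None, "bob", "alice"], server) == {"nickname": "Alice"}
assert process_scan(["bob"], server) is None
```

The loop simply falls through to the next frame whenever any of the three checks fails, which mirrors the "continue to perform a scan" branches in the flowchart.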
  • Step S402 it is determined whether the facial feature data matching the facial information of the first target object is stored in the server.
  • step S402 of the present invention in the case of scanning the face of the first target object, it is determined whether the face feature data matching the face information of the first target object is stored in the server.
  • the server stores the facial feature data of the first target object if the first target object has registered information in the augmented reality application.
  • the face information of the first target object is acquired, and the face information of the first target object may be composed of face data having preset features. It is judged whether or not the face feature data matching the face information of the first target object is stored in the server.
  • the facial information matches the facial feature data when the degree of coincidence or similarity between the data in the facial information and the facial feature data falls within a preset threshold; for example, if the degree of coincidence or similarity reaches 80% or more, it is determined that the facial information matches the facial feature data, that is, facial feature data matching the facial information of the first target object is stored in the server.
  • otherwise, step S401 is performed to continue scanning the faces of objects other than the first target object.
  • Step S403 determining whether the face scan permission of the first target object is an allowable scan.
  • step S403 of the present invention if it is determined that the face feature data matching the face information of the first target object is stored in the server, it is determined whether the face scan permission of the first target object is the allowable scan.
  • the face scan permission of the first target object is used to indicate the extent to which the face of the first target object may be scanned, including: allowing all objects to scan the face of the first target object through the augmented reality application, that is, all objects can scan; allowing only preset objects to scan the face of the first target object through the augmented reality application, that is, only preset objects can scan; and prohibiting any object from scanning the face of the first target object through the augmented reality application, that is, scanning is prohibited, wherein the preset objects may be friends.
  • the face scan authority of the first target object is determined when the first target object requesting server stores the face feature data. It is determined whether the face scan permission of the first target object is the allowable scan, and if it is determined that the face scan permission of the first target object is the allowable scan, step S404 is performed.
  • otherwise, step S401 is performed to continue scanning the faces of objects other than the first target object; that is, it is determined that the face scan permission of the first target object does not allow the second target object to scan the face of the first target object through the augmented reality application.
  • Step S404 displaying visible information of the first target object within the permission range in the preset spatial position.
  • step S404 of the present invention if it is determined that the face scan permission of the first target object is to allow scanning, the visible information of the first target object within the permission range is displayed in the preset space position, wherein the visible information is at least The user profile information of the first target object is included.
  • the visible information of the first target object within the scope of authority may include user profile information, extension information, and dynamic information of the first target object within the scope of authority.
  • the user profile information and the extended information in the scope of the authority are determined when the first target object registers the information with the server, wherein the permission control of each item of the user profile information and the extended information may be classified into at least three categories, respectively All objects are visible through the augmented reality application, allowing only the preset objects to be visible through the augmented reality application, and only visible through the augmented reality application itself.
  • the control rights of dynamic information are determined at the time of dynamic information release.
  • the control rights can include four categories: allowing all objects to be visible through the augmented reality application, allowing friends to be visible through the augmented reality application, allowing specific friends to be visible through the augmented reality application, and being visible only to the object itself through the augmented reality application. After determining whether the face scan permission of the first target object allows scanning, if it is determined that it does, the user profile information, extended information, and dynamic information within the permission range can be displayed in the preset spatial position. Dynamic information is one of the entrances of information interaction, including but not limited to expressions and comments. Another major information interaction portal is the message session, which records the exchanged information across the virtual scene and the real scene.
  • in this embodiment, the face is scanned; in the case of scanning the face of the first target object, it is determined whether facial feature data matching the facial information of the first target object is stored in the server; if it is determined that the server stores facial feature data matching the facial information of the first target object, it is determined whether the face scan permission of the first target object allows scanning; and if it is determined that the face scan permission of the first target object allows scanning, visible information is displayed in the preset spatial position, wherein the visible information includes at least the user profile information of the first target object. This achieves the purpose of displaying the target information in a preset spatial position of the real scene according to the facial information of the first target object, thereby simplifying the process of information interaction.
  • the visible information includes extended information of the first target object.
  • in step S404, displaying the visible information within the permission range in the preset spatial position includes: determining whether the first target object has account information of a third-party platform;
  • receiving a first display instruction for indicating that the extended content corresponding to the account information is to be displayed, and displaying the extended content within the permission range in the preset spatial position.
  • FIG. 5 is a flowchart of a method for displaying visible information of a first target object within a right range at a preset spatial position, according to an embodiment of the present invention. As shown in FIG. 5, the method for displaying visible information of the first target object within the permission range in the preset spatial position comprises the following steps:
  • Step S501 determining whether the first target object has account information of a third-party platform.
  • the extended information includes account information.
  • when it is determined that the face scan permission of the first target object allows scanning, the extended information of the first target object within the permission range is allowed to be displayed after the face of the first target object is scanned, the extended information including account information of the third-party platform of the first target object.
  • the second target object can obtain the content published by the first target object on the third-party platform through the account information of the third-party platform.
  • before the visible information within the permission range is displayed, it is determined whether the first target object has account information of a third-party platform.
  • Step S502 receiving a first display instruction for indicating that the extended content corresponding to the account information is displayed.
  • step S502 of the present invention if it is determined that the first target object has the account information of the third-party platform, the first display instruction for indicating the extended content corresponding to the account information is received.
  • the icon of the third-party platform from which content can be pulled may also be marked in the preset spatial position, for example displayed at the bottom of the display position of the user profile information.
  • Step S503 displaying the extended content within the permission range at the preset spatial location.
  • step S503 of the present invention after receiving the first display instruction, the extended content within the permission range is displayed in the preset spatial position.
  • after receiving the first display instruction for indicating the extended content corresponding to the account information, the extended content within the permission range is displayed in the preset spatial position, and the display can switch to the timeline information flow on the third-party platform, thereby obtaining rich information.
  • this embodiment determines whether the first target object has account information of a third-party platform, wherein the extended information includes the account information; if it is determined that the first target object has the account information of the third-party platform, a first display instruction for indicating the extended content corresponding to the account information is received; after receiving the first display instruction, the extended content within the permission range is displayed in the preset spatial position, achieving the purpose of displaying the visible information in the preset spatial position.
  • the visible information includes personal dynamic information of the first target object, and step S404 of displaying the visible information within the permission range in the preset spatial position includes: receiving a second display instruction for indicating display of the personal dynamic information; and, after receiving the second display instruction, displaying the personal dynamic information within the permission range in the preset spatial position.
  • FIG. 6 is a flow chart of another method of displaying visible information of a first target object within a right range at a preset spatial location, in accordance with an embodiment of the present invention.
  • the method for displaying the visible information of the first target object within the permission range in the preset spatial position comprises the following steps:
  • Step S601 receiving a second display instruction for indicating the display of personal dynamic information.
  • a second display instruction for indicating the display of personal dynamic information is received.
  • when it is determined that the face scan permission of the first target object allows scanning, the personal dynamic information of the first target object within the permission range is allowed to be displayed after the face of the first target object is scanned, and a second display instruction for indicating display of the personal dynamic information may be received.
  • the second display instruction includes a voice instruction, an instruction generated by a click gesture, an instruction generated by a gaze pause, and the like, so that the action of swiping down or clicking the personal dynamic information icon is performed according to the second display instruction.
  • Step S602 displaying personal dynamic information within the scope of authority in a preset spatial location.
  • step S602 of the present invention after receiving the second display instruction, the personal dynamic information within the permission range is displayed in the preset spatial position.
  • the personal dynamic information within the scope of the authority may be displayed on the basis of the display position of the user profile information.
  • this embodiment receives the second display instruction for indicating display of the personal dynamic information; after receiving the second display instruction, the personal dynamic information within the permission range is displayed in the preset spatial position, thereby achieving the purpose of displaying the visible information within the permission range in the preset spatial position and simplifying the process of information interaction.
  • the method includes: sending a first request to the server, wherein the first request carries facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object; optionally, sending a second request to the server, wherein the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or sending a third request to the server, wherein the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.
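The three registration requests above (facial feature data, user profile information, extended information) can be sketched as simple payloads. The request shapes and field names are illustrative assumptions; the patent does not define a wire format.

```python
# Hypothetical sketch of the three registration requests. Field names
# and the list-of-dicts shape are illustrative assumptions only.

def build_requests(face_features, profile=None, extended=None):
    """First request is mandatory; second and third are optional."""
    requests = [{"type": "store_face_features", "payload": face_features}]
    if profile is not None:
        requests.append({"type": "store_profile", "payload": profile})
    if extended is not None:
        requests.append({"type": "store_extended", "payload": extended})
    return requests

reqs = build_requests(
    face_features=[0.12, 0.87, 0.45],
    profile={"nickname": "alice", "signature": "hello"},
    extended={"third_party_accounts": {"weibo": "alice_w"}},
)
assert [r["type"] for r in reqs] == [
    "store_face_features", "store_profile", "store_extended"]
```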
  • before the facial information of the first target object is acquired, the first target object registers information with the server, and the registered information includes the facial information of the first target object.
  • the facial image information of the first target object needs to be acquired in real time, and authenticity verification is performed on the facial image information, including but not limited to verifying whether a face is present using a face detection algorithm, and prompting the first target object in real time to perform a specified facial action. It is determined whether the actual facial action made by the first target object matches the facial action used for verifying authenticity; if they match, it is further detected whether the facial information is in a three-dimensional form, thereby further eliminating counterfeit registration behavior.
  • the rights control can be set to allow everyone to scan, allow friends to scan, and prohibit scanning.
  • the registered information may further include user profile information of the first target object, including but not limited to the nickname, name, address, contact information, signature, and the like of the first target object.
  • the registered information may also include extended information of the first target object.
  • the extended information includes the third-party social account information provided by the user.
  • the social platform of this embodiment can pull the information published by a user if the user's account is known, providing an aggregated third-party social platform information pull capability so that the scanner can obtain richer information.
  • the user profile information and the extension information of this embodiment can select the degree of information disclosure at the time of registration according to the user's own wishes.
  • the control granularity of each piece of information can be divided into at least three categories, allowing everyone to be visible, allowing only friends to be visible, and only visible to themselves. For example, age permission control, phone number permission control, address information permission control, etc. You can set them one by one according to your needs.
  • this embodiment does not limit the type of client: registration can be performed through AR glasses, through a mobile communication terminal, or through a PC client, which is not limited herein.
  • sending the first request to the server includes: when the face of the first target object is detected, issuing an instruction for instructing the first target object to perform a preset facial action; when the actual facial action performed by the first target object according to the instruction matches the preset facial action, detecting whether the face of the first target object is in a three-dimensional form; when it is detected that the face of the first target object is in a three-dimensional form, acquiring the facial feature data of the first target object; and sending the first request to the server according to the facial feature data.
  • FIG. 7 is a flow chart of a method of transmitting a first request to a server, in accordance with an embodiment of the present invention. As shown in FIG. 7, the method for transmitting a first request to a server includes the following steps:
  • Step S701 detecting a face.
  • step S701 of the present invention the face is detected.
  • This embodiment records face information in real time.
  • there are a plurality of objects, the plurality of objects including the first target object.
  • the facial image data of the first target object is detected before the facial information of the first target object is acquired, and the facial image data may be captured by a front camera.
  • the user takes a self-portrait face shot in real time, and the system verifies the authenticity of the received facial image data.
  • the face detection algorithm of this embodiment is not limited to a specific method, including but not limited to traditional algorithms such as feature recognition, template recognition, and neural network recognition, as well as the Gaussian Face algorithm.
  • the face is continuously detected.
  • Step S702 an instruction for instructing the first target object to perform a preset facial action is issued.
  • in step S702 of the present invention, in the case that the face of the first target object is detected, an instruction for instructing the first target object to perform the preset facial action is issued, wherein the first target object performs a facial action in accordance with the instruction, yielding an actual facial action.
  • the first target object is prompted in real time to perform the specified facial action; a voice instruction for instructing the first target object to perform the preset facial action may be issued, and according to the voice instruction the first target object may, in real time, perform preset facial actions such as raising the head, bowing, slightly turning left, slightly turning right, frowning, opening the mouth, and blinking.
  • step S703 it is determined whether the actual facial motion matches the preset facial motion.
  • step S703 of the present invention it is determined whether the actual facial motion matches the preset facial motion.
  • after an instruction to instruct the first target object to perform the preset facial action is issued, it is determined whether the actual facial action matches the preset facial action. If it is determined that they do not match, step S701 is performed again to continue detecting faces. If it is determined that they match, step S704 is performed. The authenticity of the received image information is thereby judged by whether the actual facial action and the preset facial action match.
  • Step S704 detecting whether the face of the first target object is in a three-dimensional form.
  • step S704 of the present invention if it is determined that the actual facial action matches the preset facial motion, it is detected whether the face of the first target object is in a three-dimensional form.
  • after determining whether the actual facial action matches the preset facial action, if it is determined that they match, it is detected whether the face of the first target object is in a three-dimensional form, that is, face depth information detection is performed on the face of the first target object.
  • a known camouflage method is to play a previously prepared face image on a terminal screen in order to deceive the registration system.
  • Step S705 if it is detected that the face of the first target object is in a three-dimensional form, the facial feature data of the first target object is acquired.
  • step S705 of the present invention in a case where the face of the first target object is detected to be in a three-dimensional form, the facial feature data of the first target object is acquired.
  • the facial feature data matching the facial information of the first target object is acquired, wherein an error within a preset threshold is allowed between the facial information of the first target object and the facial feature data.
  • Step S706 sending a first request to the server according to the facial feature data.
  • the first request is sent to the server according to the facial feature data, and the first request carries facial feature data that matches the facial information of the first target object, and the server responds to the first request and stores Facial feature data of the first target object.
  • step S204 of acquiring the target information of the first target object according to the facial information of the first target object includes: requesting, according to the facial information of the first target object, that the server deliver the target information matching the facial feature data; and receiving the target information.
  • after the facial information of the first target object is acquired, a request for matching the facial information is sent to the server according to the facial information; the server responds to the request and searches the facial feature database for the facial feature data of the first target object; after the server finds the facial feature data of the first target object, the target information is delivered.
  • this embodiment detects the face; in the case where the face of the first target object is detected, issues an instruction for instructing the first target object to perform the preset facial action; determines whether the actual facial action matches the preset facial action; if it is determined that they match, detects whether the face of the first target object is in a three-dimensional form; when it is detected that the face of the first target object is in a three-dimensional form, acquires the facial feature data of the first target object; and sends the first request to the server according to the facial feature data, achieving the purpose of the server storing facial feature data matching the facial information of the first target object.
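The client-side liveness flow summarized above (prompt a preset facial action, compare it with the action actually performed, verify the face is three-dimensional, then send the first request) can be sketched as follows. The callables passed in are hypothetical stand-ins for device and server APIs, not interfaces defined by the patent.

```python
# Illustrative sketch of the liveness check (steps S701-S706).
# detect_action, is_three_dimensional, extract_features, and send are
# hypothetical stand-ins for camera/device/server APIs.

import random

PRESET_ACTIONS = ["raise_head", "bow", "turn_left", "turn_right", "blink"]

def register_face(detect_action, is_three_dimensional, extract_features, send):
    action = random.choice(PRESET_ACTIONS)    # instruct a preset facial action
    if detect_action(action) != action:       # actual vs. preset action
        return False                          # mismatch: restart detection
    if not is_three_dimensional():            # depth check defeats flat images
        return False
    features = extract_features()             # facial feature data
    send({"type": "first_request", "features": features})
    return True

sent = []
ok = register_face(
    detect_action=lambda prompted: prompted,  # user performs the action
    is_three_dimensional=lambda: True,
    extract_features=lambda: [0.1, 0.2],
    send=sent.append,
)
assert ok and sent[0]["features"] == [0.1, 0.2]
```

Note that the request is only sent once both the action match and the 3D depth check succeed, mirroring the ordering of steps S703 through S706.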
  • search information for indicating the search target information is received, wherein the user profile information includes the search information; the target information is obtained based on the search information.
  • FIG. 8 is a flowchart of another method of information interaction according to an embodiment of the present invention. As shown in FIG. 8, the information interaction method further includes the following steps:
  • Step S801 receiving search information for indicating search target information.
  • the search information for indicating the search target information is received, wherein the user profile information includes the search information.
  • the information display uses acquiring the facial information of the first target object as the main entrance scenario, supplemented by search information for indicating the search target information; the search information may be user profile information such as a nickname or name searched by voice.
  • acquiring the facial information of the first target object may be applied in scenes where the face is visible, while search information for indicating the search target information may be applied in scenes where the facial information cannot be acquired, or cannot be acquired accurately.
  • Step S802 acquiring target information according to the search information.
  • the target information is acquired according to the search information.
  • after receiving the search information for indicating the search target information, the target information is acquired according to the search information; for example, the target information of the first target object may be acquired according to the nickname, name, and the like of the first target object.
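The search fallback above (looking up target information by nickname or name when the face cannot be scanned) can be sketched as a simple substring lookup. The in-memory directory and field names are illustrative assumptions.

```python
# Sketch of the search entrance: find target information by profile
# fields such as nickname or name. Data shapes are illustrative.

def search_target(directory, query):
    """Return records whose nickname or name contains the query."""
    q = query.lower()
    return [rec for rec in directory
            if q in rec["nickname"].lower() or q in rec["name"].lower()]

directory = [
    {"nickname": "sunny", "name": "Li Lei",  "target_info": "..."},
    {"nickname": "moon",  "name": "Han Mei", "target_info": "..."},
]
assert [r["nickname"] for r in search_target(directory, "li")] == ["sunny"]
assert search_target(directory, "zhang") == []
```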
  • this embodiment receives, before receiving the interaction information sent by the second target object according to the target information and in the case where the face of the first target object is not visible, search information for indicating the search target information, the user profile information including the search information; the target information is then obtained according to the search information, thereby realizing acquisition of the target information and simplifying the process of information interaction.
  • FIG. 9 is a flow chart of another method of information interaction according to an embodiment of the present invention. As shown in FIG. 9, the information interaction method further includes the following steps:
  • Step S901 identifying a facial contour of the first target object according to the facial information of the first target object.
  • the facial contour of the first target object is identified according to the facial information of the first target object.
  • the facial contour of the first target object is identified according to the facial information of the first target object, and the facial contour of the first target object may be identified by the AR glasses according to the facial information of the first target object.
  • Step S902 adding static and/or dynamic three-dimensional image information at a preset position of the facial contour.
  • step S902 of the present invention static and/or dynamic three-dimensional image information is added at a preset position of the facial contour.
  • the three-dimensional image information may be a three-dimensional decoration, and a static or dynamic three-dimensional decoration is added to the recognized face contour by the AR glasses.
  • the embodiment identifies the facial contour of the first target object according to the facial information of the first target object after acquiring the facial information of the first target object; adding static and/or dynamic three-dimensional image information at the preset position of the facial contour, Thereby enhancing the interest of information interaction.
  • publishing the interaction information includes at least one of the following: publishing the interaction information in the form of voice; publishing the interaction information in the form of a picture, where the picture-form interaction information includes interaction information in the form of a panoramic picture; publishing the interaction information in the form of video; and publishing interaction information in the form of a 3D model.
  • the interaction information generated by this embodiment depends on the hardware used.
  • the relatively intuitive and fast content mainly includes interaction information in the form of voice, pictures, and video; depending on AR device capabilities, it also includes interaction information in the form of panoramic pictures and 3D models.
  • Embodiments of the invention are preferably applicable to AR glasses devices that have a front camera.
  • the embodiment of the present invention is not limited to the AR glasses device, and may also be a mobile communication terminal and a PC terminal, and is theoretically applicable to devices having a camera, and the difference is the ease of use and the interactive operation mode.
  • the embodiment of the invention further provides an augmented reality social system, which mainly comprises a registration module, an information display and interaction module, and an information generation and release module.
  • the registration module of the embodiment of the present invention provides registration of user information including real facial information;
  • the information display and interaction module provides AR information display and an interactive portal after the face is recognized;
  • the information generation and release module focuses on the generation and release of the user's own dynamic information.
  • FIG. 10 is a flowchart of a method of information registration according to an embodiment of the present invention. As shown in FIG. 10, the method for registering information includes the following steps:
  • step S1001 basic information is entered.
  • the information registered by the user in the system includes basic information, face information, and extended information.
  • the basic information is similar to the existing platforms, including but not limited to nickname, name, gender, address, contact information, signature, etc.
  • Step S1002 detecting a face.
  • the face information is the key information of the system.
  • the user needs to take a self-photograph of the face in real time, and the system will verify the authenticity of the received facial image information.
  • The verification process includes, but is not limited to, verifying whether there is a face by using a face detection algorithm. If a face is detected, step S1003 is performed; if no face is detected, this step continues to detect a face.
  • This embodiment does not limit the specific face detection algorithm, which includes but is not limited to traditional algorithms such as feature recognition, template recognition, and neural network recognition, and the Gaussian Face algorithm.
  • Step S1003 instructing the user to make a specified facial action in real time.
  • the system prompts the user to perform a specified facial motion in real time, and the user makes an actual facial motion according to the system prompt.
  • step S1004 it is determined whether the actual facial action made by the user matches the specified facial motion.
  • If it is determined that the actual facial action made by the user matches the specified facial action, step S1005 is performed. If it is determined that it does not match, the process returns to step S1002 to detect the faces of other users.
  • Step S1005, facial depth information detection is performed.
  • step S1006 it is determined whether the detected facial image information is in a three-dimensional form.
  • The depth camera information of the AR glasses can be used to detect whether the face is in a three-dimensional form, thereby eliminating currently known methods of camouflaging facial image information, for example, playing pre-prepared face pictures or face videos on a screen such as a mobile communication terminal to trick the registration system.
  • Step S1007, the server is requested to store the facial image information and use the facial image information as facial feature data.
  • After determining whether the detected facial image information is in a three-dimensional form, if it is determined to be in a three-dimensional form, the server is requested to store the facial image information as facial feature data in the facial feature database, thereby completing the registration process of face information after the entry of the basic information.
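The registration checks in steps S1002 to S1007 (face detection, a prompted facial action, and a three-dimensional-form check) can be sketched as a single verification function. This is only an illustrative assumption of how the checks could be chained; the frame fields and the function name are invented for the example and are not the patent's API.

```python
def verify_registration(frame, requested_action):
    """Return facial feature data if all registration checks pass, else None.

    `frame` is a hypothetical dict describing the current camera frame.
    """
    if not frame.get("has_face"):                 # S1002: face detection
        return None
    if frame.get("performed_action") != requested_action:
        return None                               # S1004: action mismatch
    if not frame.get("is_three_dimensional"):     # S1005/S1006: depth check
        return None                               # flat screen playback rejected
    return frame.get("features")                  # S1007: usable as feature data
```

A flat image played back on a screen fails the three-dimensional-form check even if it shows a face performing the requested action.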
  • the extended information includes the third-party social account information provided by the user, and the third-party social account information can be used to pull the information posted by the user on the third-party social platform.
  • the system provides the ability to aggregate the information of third-party social platform information, so that the scanner can obtain richer stock information.
  • the information content registered in this embodiment can be selected according to the will of the user, and the degree of information disclosure can be realized by the authority control.
  • The granularity of control of each item of the basic information and the extended information can be classified into at least three categories: visible to everyone, visible only to friends, and visible only to oneself.
  • The authority control of the face information itself can be divided into at least three categories: everyone may scan, only friends may scan, and scanning is prohibited.
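The two permission axes described above, per-item visibility and face-scanning authority, can be sketched as simple checks. The category names and the friend relation are assumptions made for illustration only.

```python
FIELD_LEVELS = ("everyone", "friends_only", "self_only")
SCAN_LEVELS = ("everyone_may_scan", "friends_may_scan", "scan_forbidden")

def field_visible(level, viewer, owner, friends):
    """Per-item visibility: everyone / friends only / self only."""
    if viewer == owner:
        return True
    if level == "everyone":
        return True
    if level == "friends_only":
        return viewer in friends
    return False  # self_only

def may_scan(scan_level, viewer, friends):
    """Face-scanning authority: everyone / friends only / forbidden."""
    if scan_level == "everyone_may_scan":
        return True
    if scan_level == "friends_may_scan":
        return viewer in friends
    return False  # scan_forbidden
```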
  • FIG. 11 is a flow chart of a method for displaying and interacting information according to an embodiment of the present invention. As shown in FIG. 11, the method for displaying and interacting with the information includes the following steps:
  • Step S1101 face scanning.
  • This embodiment can detect a human face by a camera, for example, detecting a human face through a front camera of the AR glasses.
  • step S1102 whether a face is detected.
  • If a face is detected, step S1103 is performed; if no face is detected, step S1101 is performed to continue face scanning.
  • step S1103 it is determined whether there is facial feature data in the system.
  • If it is determined that no facial feature data corresponding to the detected facial image information exists in the system, step S1101 is performed to detect the faces of other users; if such facial feature data exists, step S1104 is performed.
  • step S1104 it is determined whether there is a face-sweeping authority.
  • If it is determined that there is face-scanning authority, step S1105 is performed; if it is determined that there is no face-scanning authority, the process proceeds to step S1101 to continue scanning the faces of other users.
  • step S1105 the visible information of the permission is displayed.
  • The permission-visible information is displayed, where the permission-visible information includes basic information and dynamic timeline information; the latter is one of the interactive entries, including but not limited to expressions and comments.
  • Another major interaction entry is a message session that records the exchange of information between the virtual and the reality.
  • step S1106 it is determined whether there is third-party platform account information.
  • Step S1107 displaying a platform icon.
  • If it is determined that there is third-party platform account information, the platform icon is displayed.
  • Step S1108 Receive indication information indicating that the platform icon is expanded.
  • The indication information includes a voice instruction, an instruction generated by a user gesture click, an instruction generated by a user gaze pause, and the like. If it is determined that indication information for instructing expansion of the platform icon is received, step S1109 is performed.
  • Step S1109 the user information flow of the platform is presented.
  • the user information flow of the platform is presented, thereby realizing information display and interaction.
  • The information display of this embodiment is mainly based on scanning a face, supplemented by voice search of a nickname, name, and the like, applied respectively to scenes where the face is visible and where it is not.
  • The basic process is: scan and recognize a face, reveal the identified user's basic information and dynamics, mark icons of other social platforms from which content can be pulled, and click an icon to pop up the extended content.
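The decision chain of steps S1101 to S1109 can be condensed into a sketch like the following. The record fields, the lookup structure, and the return values are illustrative assumptions, not part of the patent.

```python
def handle_scan(detected_face, feature_db, viewer, expand_requested=False):
    """Walk the FIG. 11 decision chain for one scanned frame."""
    if detected_face is None:                   # S1102: no face, keep scanning
        return "continue_scanning"
    record = feature_db.get(detected_face)      # S1103: feature data lookup
    if record is None:
        return "continue_scanning"
    if not record["scan_allowed"](viewer):      # S1104: face-scan permission
        return "continue_scanning"
    shown = list(record["visible_info"])        # S1105: permission-visible info
    if record.get("third_party_accounts"):      # S1106/S1107: platform icons
        shown.append("platform_icons")
        if expand_requested:                    # S1108/S1109: expand info flow
            shown.append("platform_timeline")
    return shown
```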
  • the information generation and distribution module of the embodiment of the present invention is introduced below.
  • the information generated by this embodiment depends on the hardware used. Taking AR glasses as an example, the content mainly includes voice, pictures and video. It also includes information on AR device capabilities such as panoramic images and 3D models.
  • Interactive information can be preset expressions, comments, and more.
  • A special kind of interaction information makes use of the face recognition capability: the system can add static or dynamic three-dimensional decorations at the recognized facial contour.
  • the publishing portal of this embodiment mainly includes personal dynamic information, and session information with others.
  • the personal dynamic information can be controlled by the authority of the publishing, and the conversation information with others includes the virtual world information of both parties and the recorded real world information.
  • The permission control part is divided into at least four categories: visible to all, visible to friends, visible to specific friends, and visible only to oneself. People have different privacy needs: those most willing to be seen can choose visible to all, while accounts extremely concerned about privacy can choose visible only to friends, so that unfamiliar people cannot peek into their information.
  • the AR glasses of this embodiment are installed with AR applications independent of other platforms, and the input and output of information are completed on the glasses platform.
  • The interactive portal is mainly based on face recognition, which simplifies the process of information interaction.
  • the application environment of the embodiment of the present invention may be, but is not limited to, the reference to the application environment in the foregoing embodiment, which is not described in this embodiment.
  • An embodiment of the present invention provides an optional specific application for implementing the foregoing information interaction method.
  • The front camera of the AR glasses can perform automatic face recognition instead of virtual account search, and the virtual-real superimposition capability of the glasses is used to superimpose and display the recognized user's profile and social information in AR form, enabling interaction both in reality and within the social system. This provides a new AR social system based on faces rather than virtual accounts.
  • Such interaction is mainly aimed at situations in which people are not together in reality: acquaintances send point-to-point messages, and timeline information flows are browsed from time to time to find information of interest and to interact.
  • The AR social system of this embodiment automatically recognizes faces, and many usage scenarios are triggered when people meet in reality, automatically displaying the other party's information so that the user can learn about their dynamics.
  • In a friend scenario, the historical conversations between the two parties can further be shown to evoke shared memories of their exchanges. In a non-friend scenario, the other party's dynamics and information make it easier to find an opening topic, so communication starts more naturally.
  • the AR social system not only communicates with the virtual world, but also contains real-world memories.
  • AR glasses can conveniently record the voice, image and video in reality, so that the interaction between the virtual world and the real world is all recorded in the AR social system, which makes the virtual and real coexistence and enriches the information type of the system.
  • The user sees exactly what is captured, without having to switch attention back and forth between the screen and reality while recording, as on a mobile phone platform. In retrospect, what was experienced is replayed from the perspective of the original recording, bringing a more realistic feeling.
  • FIG. 12 is a schematic diagram of a basic information display according to an embodiment of the present invention.
  • The AR glasses scan a face in the real world and, after recognition, automatically display the user's basic information beside the face; for example, the user's name "Melissa Banks", hometown "Hometown: Chicaga", and birthday "Birthday: May, 23, 1987" are superimposed on the real scene, and "Add friend" and "Message" controls can also be displayed.
  • The basic information of the user is virtual, while everything else is the real scene, thus achieving the purpose of combining the virtual and the real.
  • FIG. 13 is a schematic illustration of another basic information display in accordance with an embodiment of the present invention.
  • A flipping or clicking operation on the screen can display the personal dynamic information, where the personal dynamic information is virtual content superimposed on the real world, and personal dynamics within the system are arranged in timeline order.
  • Aggregated information from available third-party platforms can be indicated by icons at the bottom; after a platform icon is clicked, the view switches to that platform's timeline information flow as in the image above.
  • An expression refers to a single static or dynamic or 3D preset image without text. Comments are rich media, including text, voice, pictures and other freely organized information.
  • FIG. 14 is a schematic diagram of an AR information display according to an embodiment of the present invention. As shown in FIG. 14, the AR information is displayed in a spherical manner, which enhances the interest of the information display.
  • FIG. 15 is a schematic diagram of another AR information display according to an embodiment of the present invention.
  • the display manner of the AR can be three-dimensional graphics such as three-dimensional spirals and cylinders, thereby improving the interest of information display.
  • The three-dimensional display capability of AR is fully utilized to provide users with more interesting presentation modes, including but not limited to three-dimensional spirals, spheres, and cylinders.
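As a rough illustration of such a presentation mode, the positions of information cards along a vertical helix around the viewer could be computed as follows. The radius, pitch, and cards-per-turn values are arbitrary assumptions, not parameters from the patent.

```python
import math

def helix_positions(n, radius=1.0, pitch=0.2, cards_per_turn=8):
    """Place n information cards along a vertical helix (x, y, z tuples)."""
    positions = []
    for i in range(n):
        angle = i * (2 * math.pi / cards_per_turn)  # angle around the axis
        positions.append((radius * math.cos(angle),
                          pitch * i,                 # height grows per card
                          radius * math.sin(angle)))
    return positions
```

A sphere or cylinder layout would differ only in how the angle and height map to coordinates.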
  • This embodiment discards the virtual account and provides a new way of augmented reality social interaction based on real faces.
  • Acquaintances such as classmates, friends, colleagues, and even family members, when meeting, encountering, or passing by each other in real life, usually do not open social software to search for and review each other's dynamics and the content of their last exchange.
  • This embodiment provides a natural and convenient way to automatically display the other party's information, dynamics and mutual communication sessions in the glasses when they meet.
  • This information itself has the effect of evoking memories of previous exchanges and conveying the other party's latest dynamics, and it also provides more topics and background information for communication in the real world.
  • important exchanges in reality can also be fed back into the system as a memory.
  • The system may thus have a beneficial effect on making people better acquainted.
  • The system also provides notifications to the scanned person, allowing the person being scanned to know who is scanning them, which is expected to promote more social behavior.
  • the embodiment is most suitable for the AR glasses device with the front camera.
  • the user is convenient to carry and operate, and the user experience is improved.
  • The embodiment of the present invention is not limited to the AR glasses device; any device having a camera is applicable, though there are differences in ease of use and interaction mode.
  • The method according to the above embodiment can be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation.
  • The technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
  • FIG. 16 is a schematic diagram of an information interaction apparatus according to an embodiment of the present invention.
  • the information interaction apparatus may include: a first acquisition unit 10, a second acquisition unit 20, a receiving unit 30, and a distribution unit 40.
  • the first obtaining unit 10 is configured to acquire facial information of the first target object.
  • the second obtaining unit 20 is configured to acquire target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate the social behavior of the first target object.
  • the receiving unit 30 is configured to receive interaction information that is sent by the second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object.
  • the publishing unit 40 is arranged to publish the interaction information.
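The cooperation of the four units above can be sketched structurally as follows. The class, parameter, and method names are invented for illustration, and the units are modeled as plain callables rather than the hardware or software modules of the patent.

```python
class InformationInteractionApparatus:
    """Structural sketch of the apparatus in FIG. 16: four cooperating units."""

    def __init__(self, first_acquiring, second_acquiring, receiving, publishing):
        self.first_acquiring = first_acquiring    # unit 10: facial information
        self.second_acquiring = second_acquiring  # unit 20: target information
        self.receiving = receiving                # unit 30: interaction information
        self.publishing = publishing              # unit 40: publish

    def run(self, first_target_object):
        facial_info = self.first_acquiring(first_target_object)
        target_info = self.second_acquiring(facial_info)
        interaction = self.receiving(target_info)
        return self.publishing(interaction)
```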
  • The first obtaining unit 10, the second obtaining unit 20, the receiving unit 30, and the publishing unit 40 may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal.
  • The terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the receiving unit 30 includes: a first receiving module, configured to receive real interaction information in a real scene sent by the second target object according to the target information; and/or a second receiving module, configured to receive the second The virtual interaction information in the virtual scenario that the target object sends according to the target information.
  • The first receiving module and the second receiving module may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the information interaction device further includes: a first storage unit, configured to store the real interaction information to the preset storage location after receiving the real interaction information in the real scene sent by the second target object according to the target information; And/or the second storage unit is configured to store the virtual interaction information to the preset storage location after receiving the virtual interaction information in the virtual scenario sent by the second target object according to the target information.
  • The foregoing first storage unit and second storage unit may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the foregoing real interaction information includes at least one or more of the following: voice information in a real scene; image information in a real scene; and video information in a real scene.
  • The first obtaining unit 10 is configured to scan the face of the first target object to obtain the facial information of the first target object; the apparatus further includes: a display unit, configured to display the target information at a preset spatial position of the real scene after the target information of the first target object is acquired according to the facial information of the first target object.
  • The first acquiring unit 10 and the display unit may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the second obtaining unit 20 includes: a first determining module, a second determining module, and a display module.
  • the first determining module is configured to determine a current spatial location of the first target object in the real scene;
  • the second determining module is configured to determine a display spatial location of the target information in the real scene according to the current spatial location;
  • The display module is configured to display the target information at the display spatial location.
  • The foregoing first determining module, second determining module, and display module may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • The display module is configured to perform at least one of: displaying the user profile information of the first target object at the first display spatial location when the target information includes user profile information; displaying the personal dynamic information of the first target object at the second display spatial location when the target information includes personal dynamic information; displaying the extended information of the first target object at the third display spatial location when the target information includes extended information; and displaying, at the fourth display spatial location, historical interaction information generated by the second target object and the first target object during historical interaction when the target information includes historical interaction information.
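The dispatch just described, where each category of target information goes to its own preset display-space position, might be sketched as a lookup table. The slot names and position labels are assumptions for the example.

```python
# Hypothetical mapping from information category to preset display slot.
SLOTS = {
    "user_profile": "position_1",
    "personal_dynamic": "position_2",
    "extended": "position_3",
    "historical_interaction": "position_4",
}

def place_target_info(target_info):
    """Return {slot_position: content} for the categories actually present."""
    return {SLOTS[kind]: content
            for kind, content in target_info.items()
            if kind in SLOTS}
```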
  • the information interaction apparatus may include: a first acquisition unit 10, a second acquisition unit 20, a receiving unit 30, a distribution unit 40, and a display unit 50.
  • the display unit 50 includes a first determining module 51, a second determining module 52, and a display module 53.
  • The roles of the first obtaining unit 10, the second obtaining unit 20, the receiving unit 30, and the publishing unit 40 of this embodiment are the same as those in the information interaction apparatus of the foregoing embodiment and are not repeated here.
  • the display unit 50 is configured to display the target information in a preset spatial position of the real scene after acquiring the target information of the first target object according to the facial information of the first target object.
  • The first determining module 51 is configured to determine, in a case where the face of the first target object is scanned, whether facial feature data matching the facial information of the first target object is stored in the server.
  • The second determining module 52 is configured to determine, when it is determined that facial feature data matching the facial information of the first target object is stored in the server, whether the face-scanning authority of the first target object allows scanning.
  • the display module 53 is configured to display visible information at a preset spatial location when it is determined that the facial scanning authority of the first target object is permitted, wherein the visible information includes at least user profile information of the first target object.
  • The first determining module 51, the second determining module 52, and the display module 53 may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the visible information includes extended information of the first target object
  • the display module 53 includes: a determining submodule, a first receiving submodule, and a first displaying submodule.
  • The determining submodule is configured to determine whether the first target object has account information of a third-party platform, where the extended information includes the account information; the first receiving submodule is configured to receive, when it is determined that the first target object has account information of a third-party platform, a first display instruction for instructing display of extended content corresponding to the account information; and the first display submodule is configured to display the extended content at the preset spatial location after the first display instruction is received.
  • The foregoing determining submodule, first receiving submodule, and first display submodule may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the visible information includes personal dynamic information of the first target object
  • the display module 53 includes: a second receiving submodule and a second displaying submodule.
  • the second receiving sub-module is configured to receive a second display instruction for indicating the display of the personal dynamic information; the second display sub-module is configured to display the personal dynamic information at the preset spatial location after receiving the second display instruction.
  • The foregoing second receiving submodule and second display submodule may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • The information interaction apparatus further includes: a first requesting unit, configured to send a first request to the server before the facial information of the first target object is acquired, where the first request carries facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object. The apparatus further includes: a second requesting unit, configured to send a second request to the server, where the second request carries user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or a third requesting unit, configured to send a third request to the server, where the third request carries extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.
  • The first requesting unit, the second requesting unit, and the third requesting unit may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • The first requesting unit includes: a first detecting module, a first sending module, a third determining module, a second detecting module, an acquiring module, and a second sending module.
  • The first detecting module is configured to detect a face.
  • The first sending module is configured to issue, when the face of the first target object is detected, an indication instruction for instructing the first target object to perform a preset facial action, so that the first target object performs a facial action according to the indication instruction to obtain an actual facial action.
  • The third determining module is configured to determine whether the actual facial action matches the preset facial action.
  • The second detecting module is configured to detect, when it is determined that the actual facial action matches the preset facial action, whether the face of the first target object is in a three-dimensional form.
  • The acquiring module is configured to acquire the facial feature data of the first target object when it is detected that the face of the first target object is in a three-dimensional form.
  • The second sending module is configured to send the first request to the server according to the facial feature data.
  • The second obtaining unit 20 is configured to request the target information of the first target object from the server.
  • The first detecting module, the first sending module, the third determining module, the second detecting module, the acquiring module, and the second sending module may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • The information interaction apparatus is further configured to: before the interaction information sent by the second target object according to the target information is received, receive, in a case where the face of the first target object is not visible, search information for indicating searching for the target information, where the user profile information includes the search information; and obtain the target information according to the search information.
  • The information interaction apparatus further includes: an identifying unit and an adding unit.
  • The identifying unit is configured to identify, after the facial information of the first target object is acquired, the facial contour of the first target object according to the facial information; the adding unit is configured to add static and/or dynamic three-dimensional image information at a preset position of the facial contour.
  • The foregoing identifying unit and adding unit may run in the terminal as part of the apparatus, and the functions they implement may be executed by a processor in the terminal; the terminal may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • The publishing unit 40 is configured to perform at least one of: publishing interaction information in the form of voice; publishing interaction information in the form of pictures, where the picture-form interaction information includes interaction information in the form of panoramic pictures; publishing interaction information in the form of video; and publishing interaction information in the form of 3D models.
  • first obtaining unit 10 in this embodiment may be configured to perform step S202 in the embodiment of the present application
  • second obtaining unit 20 in this embodiment may be configured to perform the steps in the embodiment of the present application
  • the receiving unit 30 in this embodiment may be configured to perform step S206 in the embodiment of the present application
  • the issuing unit 40 in this embodiment may be configured to perform step S208 in the embodiment of the present application.
• the first obtaining unit 10 acquires the facial information of the first target object; the second obtaining unit 20 acquires the target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object; the receiving unit 30 receives the interaction information sent by the second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and the publishing unit 40 publishes the interaction information. This achieves the purpose of information interaction, realizes the technical effect of simplifying the information interaction process, and thereby solves the technical problem that the information interaction process in the related art is complicated.
• the examples and application scenarios implemented by the above units and modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the above embodiments. It should be noted that the foregoing modules may run in the hardware environment shown in FIG. 1 as part of the apparatus, and may be implemented by software or by hardware, where the hardware environment includes a network environment.
  • a terminal for implementing the above information interaction method is further provided, wherein the terminal may be a computer terminal, and the computer terminal may be any one of the computer terminal groups.
  • the foregoing computer terminal may also be replaced with a terminal device such as a mobile terminal.
  • the computer terminal may be located in at least one network device of the plurality of network devices of the computer network.
  • FIG. 18 is a structural block diagram of a terminal according to an embodiment of the present invention.
• the terminal may include one or more processors 181 (only one is shown in the figure), a memory 183, and a transmission device 185.
• the terminal may further include an input/output device 187.
• the memory 183 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the information interaction method and apparatus in the embodiments of the present invention; the processor 181 executes the software programs and modules stored in the memory 183 to perform various functional applications and data processing, that is, to implement the above information interaction method.
  • Memory 183 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 183 can further include memory remotely located relative to processor 181, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 185 described above is for receiving or transmitting data via a network, and can also be used for data transmission between the processor and the memory.
  • Specific examples of the above network may include a wired network and a wireless network.
  • the transmission device 185 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
  • the transmission device 185 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the memory 183 is used to store an application.
• the processor 181 may invoke, through the transmission device 185, the application stored in the memory 183 to perform the following steps: acquiring facial information of a first target object; acquiring target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information.
• the processor 181 is further configured to: receive real interaction information in a real scene sent by the second target object according to the target information; and/or receive virtual interaction information in a virtual scene sent by the second target object according to the target information.
• the processor 181 is further configured to: after receiving the real interaction information in the real scene sent by the second target object according to the target information, store the real interaction information to a preset storage location; and/or after receiving the virtual interaction information in the virtual scene sent by the second target object according to the target information, store the virtual interaction information to the preset storage location.
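A minimal sketch of this storing step, with a dictionary keyed by scene type standing in for the preset storage location (all names hypothetical):

```python
from collections import defaultdict
from typing import DefaultDict, List

# stand-in for the preset storage location, keyed by scene type
STORAGE: DefaultDict[str, List[str]] = defaultdict(list)

def store_interaction(scene: str, interaction_info: str) -> None:
    """Store real or virtual interaction information after it is received."""
    if scene not in ("real", "virtual"):
        raise ValueError(f"unknown scene type: {scene}")
    STORAGE[scene].append(interaction_info)

store_interaction("real", "greeting left at the first target object")
store_interaction("virtual", "virtual gift sent in the virtual scene")
print(dict(STORAGE))
```

A production system would persist this to a server-side store rather than process memory; the sketch only fixes the shape of the real/virtual split.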
  • the processor 181 is further configured to: scan a face of the first target object to obtain face information of the first target object; and display target information in a preset spatial position of the real scene according to the face information of the first target object.
  • the processor 181 is further configured to: determine a current spatial location of the first target object in the real scene; determine a display spatial location of the target information in the real scene according to the current spatial location; and display the target information in the display spatial location.
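The three sub-steps above (determine current position, derive display position, display) can be sketched as follows. The fixed offset and the function names are assumptions for illustration; the patent does not specify how the display spatial position is derived.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def display_position(current: Vec3, offset: Vec3 = (0.0, 0.3, 0.0)) -> Vec3:
    """Derive the display spatial position from the current spatial position,
    here simply a fixed offset above the tracked face."""
    return (current[0] + offset[0], current[1] + offset[1], current[2] + offset[2])

def show_target_info(current: Vec3, target_info: str) -> str:
    pos = display_position(current)
    return f"draw '{target_info}' at {pos}"  # stand-in for the AR draw call

print(show_target_info((1.0, 1.5, 2.0), "user profile information"))
```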
• the processor 181 is further configured to perform one of the following steps: when the target information includes user profile information, displaying the user profile information of the first target object in a first display space position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object in a second display space position; when the target information includes extended information, displaying the extended information of the first target object in a third display space position; and when the target information includes historical interaction information, displaying, in a fourth display space position, the historical interaction information generated by the second target object and the first target object during historical interaction.
• the processor 181 is further configured to: scan a face; in a case where the face of the first target object is scanned, determine whether facial feature data matching the facial information of the first target object is stored in the server; if it is determined that facial feature data matching the facial information of the first target object is stored in the server, determine whether the face scanning permission of the first target object allows scanning; and if it is determined that the face scanning permission of the first target object allows scanning, display the visible information in the preset spatial position.
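The scan-match-permission-display chain can be sketched with a toy in-memory stand-in for the server-side store. Feature matching is reduced to a dictionary lookup here; real matching would compare feature vectors, and the store would live on the server.

```python
from typing import Dict, Optional

# toy stand-in for the server-side store of facial feature data and permissions
SERVER_DB: Dict[str, Dict[str, object]] = {
    "feat-001": {"scan_allowed": True,  "profile": "profile of user A"},
    "feat-002": {"scan_allowed": False, "profile": "profile of user B"},
}

def match_feature(scanned_face: str) -> Optional[str]:
    """Pretend feature matching: returns the stored feature id or None."""
    return scanned_face if scanned_face in SERVER_DB else None

def visible_info(scanned_face: str) -> Optional[str]:
    feature_id = match_feature(scanned_face)  # 1. matching feature data stored?
    if feature_id is None:
        return None
    record = SERVER_DB[feature_id]
    if not record["scan_allowed"]:            # 2. face scanning permission allows it?
        return None
    return str(record["profile"])             # 3. display the visible information

print(visible_info("feat-001"))  # profile shown
print(visible_info("feat-002"))  # None: scanning not permitted
```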
• the processor 181 is further configured to: determine whether the first target object has account information of a third-party platform, wherein the extended information includes the account information; if it is determined that the first target object has account information of the third-party platform, receive a first display instruction for indicating display of the extended content corresponding to the account information; and after receiving the first display instruction, display the extended content at the preset spatial location.
  • the processor 181 is further configured to: receive a second display instruction for indicating the display of the personal dynamic information; and display the personal dynamic information at the preset spatial location after receiving the second display instruction.
• the processor 181 is further configured to: before the facial information of the first target object is acquired, send a first request to the server, wherein the first request carries facial feature data that matches the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object.
• the processor 181 is further configured to perform at least one of the following steps: sending a second request to the server, wherein the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or sending a third request to the server, wherein the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.
• the processor 181 is further configured to: detect a face; in a case where the face of the first target object is detected, issue an instruction for instructing the first target object to perform a preset facial action, wherein the first target object performs a facial action according to the instruction to obtain an actual facial action; determine whether the actual facial action matches the preset facial action; if it is determined that the actual facial action matches the preset facial action, detect whether the face of the first target object is in a three-dimensional form; when it is detected that the face of the first target object is in a three-dimensional form, acquire the facial feature data of the first target object; and send a first request to the server according to the facial feature data, where the server responds to the first request and stores the facial feature data of the first target object. The acquiring the target information of the first target object according to the facial information of the first target object includes: requesting, according to the facial information of the first target object, the server to deliver the target information according to the facial feature data; and receiving the target information.
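The action-matching and three-dimensional-form checks above amount to a liveness gate. A minimal sketch, assuming a depth-variation measurement is available from the camera (the threshold and all names are illustrative, not from the patent):

```python
def liveness_check(
    preset_action: str,
    actual_action: str,
    depth_variation_mm: float,
    depth_threshold_mm: float = 5.0,
) -> bool:
    """Pass only when the performed facial action matches the instructed one
    and the facial surface shows genuine three-dimensional relief."""
    if actual_action != preset_action:              # actual vs. preset facial action
        return False
    return depth_variation_mm > depth_threshold_mm  # a flat photo shows ~no variation

print(liveness_check("blink", "blink", 12.0))  # True: live three-dimensional face
print(liveness_check("blink", "blink", 0.4))   # False: likely a flat picture
print(liveness_check("blink", "smile", 12.0))  # False: wrong facial action
```

Only after this gate passes would the first request carrying the facial feature data be sent to the server.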
• the processor 181 is further configured to: before the interaction information sent by the second target object according to the target information is received, in a case where the face of the first target object is not visible, receive search information for indicating a search for the target information, wherein the user profile information includes the search information; and acquire the target information according to the search information.
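The face-not-visible fallback is a plain lookup over profiles by their search information. A sketch with a toy profile store (all data and names hypothetical):

```python
from typing import Dict, Optional

# toy profile store: the user profile information includes the search information
PROFILES: Dict[str, Dict[str, str]] = {
    "u1": {"search_info": "alice", "target_info": "recent activity of user A"},
    "u2": {"search_info": "bob",   "target_info": "recent activity of user B"},
}

def target_by_search(search_info: str) -> Optional[str]:
    """Fallback lookup used when the first target object's face is not visible."""
    for profile in PROFILES.values():
        if profile["search_info"] == search_info:
            return profile["target_info"]
    return None

print(target_by_search("alice"))  # target information found without a face scan
print(target_by_search("carol"))  # None: no matching profile
```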
• the processor 181 is further configured to: after the facial information of the first target object is acquired, identify the facial contour of the first target object according to the facial information of the first target object; and add static and/or dynamic three-dimensional image information at a preset position of the facial contour.
• the processor 181 is further configured to perform at least one of the following steps: publishing the interaction information in the form of a voice; publishing the interaction information in the form of a picture, wherein the interaction information in the form of a picture includes interaction information in the form of a panoramic picture; publishing the interaction information in the form of a video; and publishing the interaction information in the form of a 3D model.
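The publish step is a dispatch keyed by media form. A sketch (form names and the envelope string are assumptions for illustration):

```python
SUPPORTED_FORMS = {"voice", "picture", "panoramic_picture", "video", "3d_model"}

def publish(interaction_info: str, form: str) -> str:
    """Dispatch the interaction information to a renderer chosen by media form."""
    if form not in SUPPORTED_FORMS:
        raise ValueError(f"unsupported form: {form}")
    return f"published as {form}: {interaction_info}"

print(publish("hello from the second target object", "voice"))
print(publish("scene snapshot", "panoramic_picture"))
```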
• an embodiment of the present invention provides an information interaction method: acquiring facial information of a first target object; acquiring target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information. This achieves the purpose of information interaction, realizes the technical effect of simplifying the information interaction process, and thereby solves the technical problem that the information interaction process in the related art is complicated.
• FIG. 18 is only schematic; the terminal may be a terminal device such as a smart phone (for example, an Android mobile phone or an iOS mobile phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD.
  • FIG. 18 does not limit the structure of the above electronic device.
  • the terminal may also include more or less components (such as a network interface, display device, etc.) than shown in FIG. 18, or have a different configuration than that shown in FIG.
  • Embodiments of the present invention also provide a storage medium.
• the foregoing storage medium may store program code, where the program code is used to execute the steps of the information interaction method provided by the foregoing method embodiments.
  • the foregoing storage medium may be located in any one of the computer terminal groups in the computer network, or in any one of the mobile terminal groups.
• the storage medium is arranged to store program code for performing the following steps: acquiring facial information of a first target object; acquiring target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information.
• the storage medium is further configured to store program code for performing the following steps: receiving real interaction information in a real scene sent by the second target object according to the target information; and/or receiving virtual interaction information in a virtual scene sent by the second target object according to the target information.
• the storage medium is further configured to store program code for performing the following steps: after receiving the real interaction information in the real scene sent by the second target object according to the target information, storing the real interaction information to a preset storage location; and/or after receiving the virtual interaction information in the virtual scene sent by the second target object according to the target information, storing the virtual interaction information to the preset storage location.
• the storage medium is further configured to store program code for performing the following steps: scanning the face of the first target object to obtain the facial information of the first target object; and displaying the target information in a preset spatial position of the real scene according to the facial information of the first target object.
• the storage medium is further configured to store program code for performing the following steps: determining a current spatial position of the first target object in the real scene; determining a display spatial position of the target information in the real scene according to the current spatial position; and displaying the target information in the display spatial position.
• the storage medium is further configured to store program code for performing one of the following steps: when the target information includes user profile information, displaying the user profile information of the first target object in a first display space position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object in a second display space position; when the target information includes extended information, displaying the extended information of the first target object in a third display space position; and when the target information includes historical interaction information, displaying, in a fourth display space position, the historical interaction information generated by the second target object and the first target object during historical interaction.
• the storage medium is further configured to store program code for performing the following steps: in a case where the face of the first target object is scanned, determining whether facial feature data matching the facial information of the first target object is stored in the server; if it is determined that facial feature data matching the facial information of the first target object is stored in the server, determining whether the face scanning permission of the first target object allows scanning; and if it is determined that the face scanning permission of the first target object allows scanning, displaying the visible information in a preset spatial position, wherein the visible information includes at least the user profile information of the first target object.
• the storage medium is further configured to store program code for performing the following steps: determining whether the first target object has account information of a third-party platform, wherein the extended information includes the account information; if it is determined that the first target object has account information of the third-party platform, receiving a first display instruction for indicating display of the extended content corresponding to the account information; and after receiving the first display instruction, displaying the extended content at the preset spatial location.
• the storage medium is further configured to store program code for performing the following steps: receiving a second display instruction for indicating display of the personal dynamic information; and after receiving the second display instruction, displaying the personal dynamic information at the preset spatial location.
• the storage medium is further configured to store program code for performing the following steps: before the facial information of the first target object is acquired, sending a first request to the server, wherein the first request carries facial feature data that matches the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object.
• the storage medium is further configured to store program code for performing at least one of the following steps: sending a second request to the server, wherein the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or sending a third request to the server, wherein the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.
• the storage medium is further configured to store program code for performing the following steps: in a case where the face of the first target object is detected, issuing an instruction for instructing the first target object to perform a preset facial action, wherein the first target object performs a facial action according to the instruction to obtain an actual facial action; determining whether the actual facial action matches the preset facial action; if it is determined that the actual facial action matches the preset facial action, detecting whether the face of the first target object is in a three-dimensional form; if it is detected that the face of the first target object is in a three-dimensional form, acquiring the facial feature data of the first target object; and sending a first request to the server according to the facial feature data, where the server responds to the first request and stores the facial feature data of the first target object. The acquiring the target information of the first target object according to the facial information of the first target object includes: requesting, according to the facial information of the first target object, the server to deliver the target information according to the facial feature data; and receiving the target information.
• the storage medium is further configured to store program code for performing the following steps: before the interaction information sent by the second target object according to the target information is received, in a case where the face of the first target object is not visible, receiving search information for indicating a search for the target information, wherein the user profile information includes the search information; and acquiring the target information according to the search information.
• the storage medium is further configured to store program code for performing the following steps: after the facial information of the first target object is acquired, identifying the facial contour of the first target object according to the facial information of the first target object; and adding static and/or dynamic three-dimensional image information at a preset position of the facial contour.
• the storage medium is further configured to store program code for performing at least one of the following steps: publishing the interaction information in the form of a voice; publishing the interaction information in the form of a picture, wherein the interaction information in the form of a picture includes interaction information in the form of a panoramic picture; publishing the interaction information in the form of a video; and publishing the interaction information in the form of a 3D model.
• the foregoing storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, or an optical disc.
• if the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in the above computer-readable storage medium.
• the technical solution of the present invention, in essence or in the part contributing to the related art, may be embodied in whole or in part in the form of a software product stored in a storage medium.
• the software product includes a number of instructions to cause one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • the disclosed client may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
• the division of the units is only a logical function division; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
• the target information of the first target object is acquired according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object; the interaction information sent by the second target object according to the target information is received, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and the interaction information is published. Unlike the virtual accounts of existing social systems, the interaction entry is mainly based on facial information, which simplifies the information interaction process and achieves the purpose of information interaction, thereby realizing the technical effect of simplifying the information interaction process and solving the technical problem that the information interaction process in the related art is complicated.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed in the embodiments of the present invention are a method, apparatus and storage medium for information interaction. Said method comprises: acquiring facial information of a first target subject; acquiring target information of the first target subject according to the facial information of the first target subject, wherein the target information is used for indicating a social behavior of the first target subject; receiving interaction information which is sent by a second target subject according to the target information, wherein the interaction information is used for indicating that the second target subject and the first target subject interact; and issuing the interaction information. The embodiments of the present invention solve the technical problem in the relevant technology that the process of information interaction is complicated.

Description

Information interaction method, device and storage medium

The present application claims priority to Chinese Patent Application No. 2016110644199, filed with the Chinese Patent Office on November 25, 2016 and entitled "Information interaction method and apparatus", the entire contents of which are incorporated herein by reference.

Technical field

Embodiments of the present invention relate to the field of computers, and in particular, to an information interaction method, apparatus, and storage medium.

Background

At present, social platforms are account-based social systems. Information interaction usually takes place when users are not physically together: user-to-user information interaction is performed in a point-to-point manner, and users also browse the timeline information feed from time to time to discover information of interest and then interact based on it. In a social circle of acquaintances, for example, when a user wants to know the latest activity of classmates, friends, colleagues, or even family members, the user usually has to open social software manually to search for the other party's activity and for the content of the last conversation, which makes the information interaction process complicated.

In addition, the information interaction of the existing solutions is still based on the virtual accounts of social platforms, which is not conducive to information interaction.

In view of the above problem that the process of information interaction is complicated, no effective solution has yet been proposed.

Summary of the invention

The embodiments of the present invention provide an information interaction method, apparatus, and storage medium, so as to at least solve the technical problem that the process of information interaction in the related art is complicated.

According to an aspect of the embodiments of the present invention, an information interaction method is provided. The information interaction method includes: acquiring facial information of a first target object; acquiring target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information.

According to another aspect of the embodiments of the present invention, an information interaction apparatus is further provided. The information interaction apparatus includes one or more processors and one or more memories storing instructions, wherein the instructions are executed by the processors, and the program units to be executed by the processors include: a first obtaining unit, configured to acquire facial information of a first target object; a second obtaining unit, configured to acquire target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object; a receiving unit, configured to receive interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and a publishing unit, configured to publish the interaction information.

According to another aspect of the embodiments of the present invention, a terminal is further provided. The terminal is configured to execute program code, and the program code is used to perform the steps in the information interaction method of the embodiments of the present invention.

According to another aspect of the embodiments of the present invention, a storage medium is further provided. The storage medium is configured to store program code, and the program code is used to perform the steps in the information interaction method of the embodiments of the present invention.

In the embodiments of the present invention, facial information of a first target object is acquired; target information of the first target object is acquired according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object; interaction information sent by a second target object according to the target information is received, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and the interaction information is published. Based on the target information of the first target object acquired according to its facial information, the interaction information indicating that the second target object interacts with the first target object is received and then published. Unlike the virtual accounts of existing social systems, the interaction entry is mainly based on facial information, which simplifies the information interaction process and achieves the purpose of information interaction, thereby realizing the technical effect of simplifying the information interaction process and solving the technical problem that the information interaction process in the related art is complicated.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

此处所说明的附图用来提供对本发明的进一步理解，构成本申请的一部分，本发明的示意性实施例及其说明用于解释本发明，并不构成对本发明的不当限定。在附图中：The drawings described herein are provided for a further understanding of the present invention and constitute a part of this application. The exemplary embodiments of the present invention and the descriptions thereof are used to explain the present invention and do not constitute an improper limitation on the present invention. In the drawings:

图1是根据本发明实施例的一种信息交互方法的硬件环境的示意图;1 is a schematic diagram of a hardware environment of an information interaction method according to an embodiment of the present invention;

图2是根据本发明实施例的一种信息交互方法的流程图;2 is a flowchart of an information interaction method according to an embodiment of the present invention;

图3是根据本发明实施例的一种在现实场景的预设空间位置显示目标信息的方法的流程图;3 is a flowchart of a method for displaying target information in a preset spatial position of a real scene according to an embodiment of the present invention;

图4是根据本发明实施例的另一种根据第一目标对象的面部信息在现实场景的预设空间位置显示目标信息的方法的流程图;4 is a flowchart of another method for displaying target information in a preset spatial position of a real scene according to facial information of a first target object according to an embodiment of the present invention;

图5是根据本发明实施例的一种在预设空间位置显示第一目标对象在权限范围内的可见信息的方法的流程图;FIG. 5 is a flowchart of a method for displaying visible information of a first target object within a permission range in a preset spatial position according to an embodiment of the present invention; FIG.

图6是根据本发明实施例的另一种在预设空间位置显示第一目标对象在权限范围内的可见信息的方法的流程图;6 is a flowchart of another method for displaying visible information of a first target object within a right scope in a preset spatial position according to an embodiment of the present invention;

图7是根据本发明实施例的一种向服务器发送第一请求的方法的流程图;7 is a flowchart of a method of transmitting a first request to a server according to an embodiment of the present invention;

图8是根据本发明实施例的另一种信息交互方法的流程图;FIG. 8 is a flowchart of another method for information interaction according to an embodiment of the present invention; FIG.

图9是根据本发明实施例的另一种信息交互方法的流程图;9 is a flowchart of another method of information interaction according to an embodiment of the present invention;

图10是根据本发明实施例的一种信息注册的方法的流程图;FIG. 10 is a flowchart of a method for information registration according to an embodiment of the present invention; FIG.

图11是根据本发明实施例的一种信息展示与交互的方法的流程图;11 is a flowchart of a method for displaying and interacting information according to an embodiment of the present invention;

图12是根据本发明实施例的一种基本信息展示的示意图;FIG. 12 is a schematic diagram showing a basic information display according to an embodiment of the present invention; FIG.

图13是根据本发明实施例的另一种基本信息展示的示意图;FIG. 13 is a schematic diagram showing another basic information display according to an embodiment of the present invention; FIG.

图14是根据本发明实施例的一种AR信息展示的示意图; FIG. 14 is a schematic diagram of an AR information display according to an embodiment of the present invention; FIG.

图15是根据本发明实施例的另一种AR信息展示的示意图;FIG. 15 is a schematic diagram of another AR information display according to an embodiment of the present invention; FIG.

图16是根据本发明实施例的一种信息交互装置的示意图;16 is a schematic diagram of an information interaction apparatus according to an embodiment of the present invention;

图17是根据本发明实施例的另一种信息交互装置的示意图;以及17 is a schematic diagram of another information interaction apparatus according to an embodiment of the present invention;

图18是根据本发明实施例的一种终端的结构框图。FIG. 18 is a structural block diagram of a terminal according to an embodiment of the present invention.

具体实施方式DETAILED DESCRIPTION

为了使本技术领域的人员更好地理解本发明方案，下面将结合本发明实施例中的附图，对本发明实施例中的技术方案进行清楚、完整地描述，显然，所描述的实施例仅仅是本发明一部分的实施例，而不是全部的实施例。基于本发明中的实施例，本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例，都应当属于本发明保护的范围。To make persons skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.

需要说明的是，本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象，而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换，以便这里描述的本发明的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外，术语“包括”和“具有”以及他们的任何变形，意图在于覆盖不排他的包含，例如，包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元，而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。It should be noted that the terms "first", "second", and the like in the specification, the claims, and the accompanying drawings of the present invention are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way is interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in an order other than those illustrated or described herein. In addition, the terms "include" and "have" and any variants thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those expressly listed steps or units, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product, or device.

根据本发明实施例的一个方面,提供了一种信息交互方法的实施例。According to an aspect of an embodiment of the present invention, an embodiment of an information interaction method is provided.

可选地，在本实施例中，上述信息交互方法可以应用于如图1所示的由服务器102和终端104所构成的硬件环境中。图1是根据本发明实施例的一种信息交互方法的硬件环境的示意图。如图1所示，服务器102通过网络与终端104进行连接，上述网络包括但不限于：广域网、城域网或局域网，终端104并不限定于PC、手机、平板电脑等。本发明实施例的信息交互方法可以由服务器102来执行，也可以由终端104来执行，还可以是由服务器102和终端104共同执行。其中，终端104执行本发明实施例的信息交互方法也可以是由安装在其上的客户端来执行。Optionally, in this embodiment, the information interaction method may be applied to a hardware environment formed by the server 102 and the terminal 104 as shown in FIG. 1. FIG. 1 is a schematic diagram of a hardware environment of an information interaction method according to an embodiment of the present invention. As shown in FIG. 1, the server 102 is connected to the terminal 104 through a network. The network includes but is not limited to a wide area network, a metropolitan area network, or a local area network, and the terminal 104 is not limited to a PC, a mobile phone, a tablet, or the like. The information interaction method in the embodiments of the present invention may be executed by the server 102, by the terminal 104, or jointly by the server 102 and the terminal 104. When executed by the terminal 104, the method may also be executed by a client installed on the terminal 104.

图2是根据本发明实施例的一种信息交互方法的流程图。如图2所示,该信息交互方法可以包括以下步骤:2 is a flow chart of an information interaction method according to an embodiment of the present invention. As shown in FIG. 2, the information interaction method may include the following steps:

步骤S202,获取第一目标对象的面部信息。Step S202: Acquire face information of the first target object.

在本发明上述步骤S202提供的技术方案中，增强现实（Augmented Reality，简称为AR）是一种实时地计算摄影机影像的位置以及角度，并加上相应的图像、视频、三维（3D）模型的技术，从而实现虚拟场景和现实场景之间的实时互动。增强现实应用采用AR技术，可以在AR眼镜、移动通讯终端、PC电脑上安装使用。在增强现实应用中，获取第一目标对象的面部信息，该第一目标对象为待进行信息交互的对象，比如，在见面场景、偶遇场景、擦肩而过场景下的同学、朋友、同事、家人等对象。面部信息可以为通过摄像头采集的人脸信息，比如，通过前置摄像头自动进行人脸识别得到的人脸信息，可以代替传统的虚拟账号以进行社交行为，使信息交互的入口为基于面部信息的识别。In the technical solution provided in the above step S202 of the present invention, Augmented Reality (AR) is a technology that calculates the position and angle of a camera image in real time and adds corresponding images, video, and three-dimensional (3D) models, thereby enabling real-time interaction between a virtual scene and a real scene. An augmented reality application adopting AR technology can be installed and used on AR glasses, mobile communication terminals, or PCs. In the augmented reality application, facial information of a first target object is acquired, where the first target object is an object with which information is to be exchanged, for example, a classmate, friend, colleague, or family member in a meeting scene, a chance-encounter scene, or a passing-by scene. The facial information may be face information collected by a camera, for example, face information obtained through automatic face recognition by a front camera, and may replace a traditional virtual account for social behavior, so that the entry of information interaction is based on recognition of facial information.

可选地,在人脸可见的场景下,当第一目标对象进入可识别的范围时,自动触发识别第一目标对象的面部信息。Optionally, in the scene where the face is visible, when the first target object enters the identifiable range, the face information identifying the first target object is automatically triggered.
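The auto-trigger behaviour described above can be sketched in a few lines. This is a hedged illustration only: the function names, the distance threshold, and the `recognize` callback are assumptions introduced for the example, not part of the patent's actual implementation.

```python
# Hypothetical sketch: recognition of the first target object's facial
# information fires automatically once the object enters a recognizable range.
RECOGNIZABLE_RANGE_M = 3.0  # assumed maximum distance at which a face is identifiable


def should_trigger_recognition(distance_m, face_visible):
    """Return True when the first target object is close enough and its face is visible."""
    return face_visible and distance_m <= RECOGNIZABLE_RANGE_M


def on_frame(distance_m, face_visible, recognize):
    """Per-frame check that invokes the recognizer automatically when triggered."""
    if should_trigger_recognition(distance_m, face_visible):
        return recognize()  # e.g. returns a face identifier
    return None
```

A caller would feed this with per-frame distance/visibility estimates from the camera pipeline; the threshold value would in practice depend on camera resolution and the recognizer used.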

可选地,在登录增强现实应用时,可以通过用户的掌纹信息、用户名、面部信息等方式进行登录,此处不做限定。Optionally, when logging in to the augmented reality application, the user may log in through the user's palm print information, user name, and facial information, which is not limited herein.

步骤S204,根据第一目标对象的面部信息获取第一目标对象的目标信息。Step S204, acquiring target information of the first target object according to the facial information of the first target object.

在本发明上述步骤S204提供的技术方案中,根据第一目标对象的面部信息获取第一目标对象的目标信息,其中,目标信息用于指示第一目标对象的社交行为。In the technical solution provided in the above step S204 of the present invention, the target information of the first target object is acquired according to the facial information of the first target object, wherein the target information is used to indicate the social behavior of the first target object.

第一目标对象的面部信息与第一目标对象的目标信息一一对应，该目标信息用于指示第一目标对象的社交行为，可以作为第二目标对象进一步了解第一目标对象的提示信息，其中，第二目标对象为根据目标信息与第一目标对象进行交互的对象。The facial information of the first target object is in one-to-one correspondence with the target information of the first target object. The target information indicates the social behavior of the first target object and may serve as prompt information for a second target object to further learn about the first target object, where the second target object is an object that interacts with the first target object according to the target information.

第一目标对象通过面部信息可以在服务器上注册过。在获取第一目标对象的面部信息之后,根据第一目标对象的面部信息从服务器获取第一目标对象的目标信息,可选地,该目标信息包括第一目标对象的用户基本信息和社交信息。The first target object can be registered on the server through the face information. After acquiring the face information of the first target object, the target information of the first target object is acquired from the server according to the face information of the first target object. Optionally, the target information includes user basic information and social information of the first target object.

上述用户基本信息可以包括第一目标对象的昵称、姓名、地址、联系方式、个性签名等基本信息。The user basic information may include basic information such as a nickname, a name, an address, a contact method, and a personalized signature of the first target object.

上述社交信息包括第一目标对象的动态信息、第一目标对象在第三方平台上的扩展信息、第一目标对象参与的历史交流信息等。其中，第一目标对象的动态信息可以为动态时间轴信息，包括但不限于表情和评论，表情指单个的不含文字的静态或动态或三维预置图片，评论为富媒体，可以包括文字、语音、图片等用户自由组织的信息；扩展信息包括第三方社交账号信息，可以根据第三方社交平台的网络地址特点，通过根据第三方社交账号信息拉取第一目标对象在第三方社交平台发布的信息；历史交流信息为与第一目标对象在过去时间进行交流的信息，可以用于唤起第二目标对象对第一目标对象的交流记忆，从而比较自然地使第二目标对象和第一目标对象开始开场交流话题。The social information includes dynamic information of the first target object, extended information of the first target object on third-party platforms, historical exchange information in which the first target object has participated, and the like. The dynamic information of the first target object may be dynamic timeline information, including but not limited to emoticons and comments, where an emoticon is a single preset static, dynamic, or three-dimensional picture without text, and a comment is rich media that may include text, voice, pictures, and other information freely organized by the user. The extended information includes third-party social account information; according to the network address characteristics of a third-party social platform, information published by the first target object on that platform can be pulled according to the third-party social account information. The historical exchange information is information exchanged with the first target object in the past, and can be used to evoke the second target object's memory of communicating with the first target object, so that the second target object and the first target object can naturally start an opening topic of conversation.
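The "target information" described above (basic user information plus social information keyed to facial information) can be illustrated with a minimal in-memory registry. The field names, the `face_id` key, and the lookup API are illustrative assumptions standing in for the server-side store, not the patent's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of the target information: basic profile fields plus
# dynamic timeline, third-party extension info, and historical exchange records.
@dataclass
class TargetInfo:
    nickname: str
    signature: str
    timeline: list = field(default_factory=list)     # dynamic timeline entries (emoticons, comments)
    third_party: dict = field(default_factory=dict)  # third-party platform account info
    history: list = field(default_factory=list)      # past exchange records

REGISTRY = {}  # face_id -> TargetInfo, standing in for the server


def register(face_id, info):
    """One-to-one mapping of facial information to target information."""
    REGISTRY[face_id] = info


def lookup(face_id):
    """Fetch target information by facial information, as in step S204."""
    return REGISTRY.get(face_id)
```

In a real system the key would be a face embedding matched by similarity rather than an exact identifier; exact-key lookup is used here only to keep the sketch self-contained.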

在根据第一目标对象的面部信息获取第一目标对象的目标信息时，可以在现实场景的预设空间位置显示目标信息，也即，将目标信息叠加至现实场景的预设空间位置，比如，叠加至第一目标对象的一侧，从而实现了将虚拟的目标信息和真实的现实场景相结合的目的，通过获取的目标信息避免了手动打开社交软件去搜索第一目标对象的动态信息、搜索历史交流信息，从而简化了信息交互的过程。When the target information of the first target object is acquired according to the facial information of the first target object, the target information may be displayed at a preset spatial position of the real scene, that is, the target information is superimposed onto the preset spatial position of the real scene, for example, onto one side of the first target object, thereby combining the virtual target information with the real scene. The acquired target information makes it unnecessary to manually open social software to search for the dynamic information or historical exchange information of the first target object, which simplifies the process of information interaction.

可选地,在自动触发识别第一目标对象的面部信息之后,自动展现第一目标对象的目标信息。 Optionally, the target information of the first target object is automatically displayed after the face information of the first target object is automatically triggered.

在不易获取第一目标对象的面部信息的情况下，比如，在光线较弱或者尘粒较多的环境下，摄像头不易获取第一目标对象的面部信息，此时可以通过语音搜索的方式获取目标信息，比如，通过语音搜索昵称、姓名等基本信息来获取目标信息。可选地，如果第二目标对象和第一目标对象没有在现实场景中碰面，但又想要查看第一目标对象的社交信息，比如，想要查看与第一目标对象的历史交流信息，此时并不能获取第一目标对象的面部信息，可以通过上述语音搜索的方式进行。In a case that the facial information of the first target object is not easily acquired, for example, in an environment with weak light or heavy dust, the camera cannot easily acquire the facial information of the first target object. In this case, the target information may be acquired by voice search, for example, by searching basic information such as a nickname or a name by voice. Optionally, if the second target object and the first target object do not meet in a real scene but the second target object wants to view the social information of the first target object, for example, the historical exchange information with the first target object, the facial information of the first target object cannot be acquired at this time, and the above voice search may be used instead.

步骤S206,接收第二目标对象根据目标信息发送的交互信息。Step S206, receiving interaction information that is sent by the second target object according to the target information.

在本发明上述步骤S206提供的技术方案中,接收第二目标对象根据目标信息发送的交互信息,其中,交互信息用于指示第二目标对象与第一目标对象进行交互。In the technical solution provided in the foregoing step S206 of the present invention, the interaction information sent by the second target object according to the target information is received, wherein the interaction information is used to indicate that the second target object interacts with the first target object.

在根据第一目标对象的面部信息获取第一目标对象的目标信息之后,第二目标对象通过第一目标对象的目标信息对第一目标对象有了进一步了解。第二目标对象根据自己实际意愿与第一目标对象进行信息交互,接收第二目标对象根据目标信息发送的交互信息,使第一目标对象与第二目标对象进行信息交互。After acquiring the target information of the first target object according to the facial information of the first target object, the second target object has a further understanding of the first target object by the target information of the first target object. The second target object performs information interaction with the first target object according to the actual intention thereof, and receives the interaction information sent by the second target object according to the target information, so that the first target object and the second target object perform information interaction.

可选地，该交互信息可以为与目标信息的内容相关的信息，也可以为与目标信息的内容不相关的信息，比如，第二目标对象通过第一目标对象的目标信息了解到第一目标对象喜欢足球，第二目标对象可以发送邀请第一目标对象观看足球比赛的交互信息，也可以为了让第一目标对象感受到新的球赛体验，发送邀请对方观看篮球比赛的交互信息。Optionally, the interaction information may be information related to the content of the target information, or information unrelated to it. For example, if the second target object learns from the target information of the first target object that the first target object likes football, the second target object may send interaction information inviting the first target object to watch a football match, or, to let the first target object experience a new kind of ball game, send interaction information inviting the first target object to watch a basketball match.

可选地，交互信息可以为语音信息、图像信息、视频信息等，可以为虚拟场景下的虚拟交互信息，包括但不限于表情和评论，比如，包括但不限于第二目标对象手动输入的文字消息、图像信息、语音信息等。该交互信息也可以为现实场景下录入的语音信息、图像信息、视频消息等，此处不作限定，从而实现虚拟世界和现实世界的互动全部录入，达到了信息交互的虚拟并存的目的，进而丰富了信息交互的种类。Optionally, the interaction information may be voice information, image information, video information, and the like, and may be virtual interaction information in a virtual scene, including but not limited to emoticons and comments, for example, text messages, image information, or voice information manually input by the second target object. The interaction information may also be voice information, image information, or video messages recorded in a real scene, which is not limited herein, so that interactions in both the virtual world and the real world are recorded, achieving the coexistence of virtual and real information interaction and enriching the types of information interaction.

步骤S208,发布交互信息。In step S208, the interaction information is released.

在本发明上述步骤S208提供的技术方案中，在接收第二目标对象根据目标信息发送的交互信息之后，发布交互信息，第一目标对象和第二目标对象可以通过客户端查看到交互信息，从而使得第一目标对象与第二目标对象进行信息互动。In the technical solution provided in the above step S208 of the present invention, after the interaction information sent by the second target object according to the target information is received, the interaction information is published, and the first target object and the second target object can view the interaction information through the client, so that the first target object and the second target object interact with each other.

可选地，发布入口主要包括个人动态信息入口，以及与他人的会话信息入口。前者可以对发布做出权限控制，后者则包括双方在虚拟场景下进行的交互信息的入口和在现实场景下进行的交互信息的入口。其中，权限控制至少分为四类，比如，所有人可见、朋友可见、特定朋友可见、仅自己可见。人们对信息公开的程度可以设置不同的需求，愿意被人看到的可使用最宽泛的可见控制权限，对隐私极其关注的可以设置为仅朋友可见，从而杜绝不熟悉的人对自己信息的窥视，提高了用户信息的安全性。Optionally, the publishing entries mainly include a personal dynamic information entry and a session information entry with others. The former can apply permission control to publishing; the latter includes an entry for interaction information exchanged in a virtual scene and an entry for interaction information exchanged in a real scene. The permission control falls into at least four categories, for example, visible to everyone, visible to friends, visible to specific friends, and visible only to oneself. People can set different requirements for the degree of information disclosure: those willing to be seen can use the broadest visibility permission, while those extremely concerned about privacy can set their information to be visible only to friends, thereby preventing unfamiliar people from peeking at their information and improving the security of user information.
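The four visibility levels named above can be sketched as a single check. The level names and the `can_view` signature are assumptions made for illustration; the patent does not prescribe a concrete API.

```python
# Hedged sketch of the four permission-control categories: visible to everyone,
# visible to friends, visible to specific friends, visible only to oneself.
ALL, FRIENDS, SPECIFIC, SELF_ONLY = "all", "friends", "specific", "self"


def can_view(level, viewer, owner, friends=(), allowed=()):
    """Decide whether `viewer` may see content published by `owner`."""
    if viewer == owner:
        return True                # the owner always sees their own content
    if level == ALL:
        return True                # visible to everyone
    if level == FRIENDS:
        return viewer in friends   # visible to friends only
    if level == SPECIFIC:
        return viewer in allowed   # visible to a chosen subset of friends
    return False                   # SELF_ONLY: nobody else may view
```

The publishing entry would evaluate this check before rendering another user's dynamic information or session records.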

可选地，在第一目标对象的第一目标信息中，无论是基本用户信息、动态信息，还是第一目标对象和第二目标对象之间的交互信息的展示方式，包括但不限于三维螺旋、球面、圆柱等展现方式，从而提高了交互信息展现的趣味性。Optionally, in the first target information of the first target object, the display manner of the basic user information, the dynamic information, and the interaction information between the first target object and the second target object includes but is not limited to three-dimensional spiral, spherical, and cylindrical presentations, thereby making the display of interaction information more interesting.

通过上述步骤S202至步骤S208，获取第一目标对象的面部信息；根据第一目标对象的面部信息获取第一目标对象的目标信息，其中，目标信息用于指示第一目标对象的社交行为；接收第二目标对象根据目标信息发送的交互信息，其中，交互信息用于指示第二目标对象与第一目标对象进行交互；发布交互信息。根据第一目标对象的面部信息获取到的第一目标对象的目标信息，接收用于指示第二目标对象与第一目标对象进行交互的交互信息，进而发布交互信息，这不同于现有社交系统的虚拟账号，交互入口主要是基于面部信息，简化了信息交互的过程，解决了相关技术信息交互的过程复杂的技术问题，进而达到简化信息的交互过程的技术效果。Through the above steps S202 to S208, facial information of the first target object is acquired; target information of the first target object is acquired according to the facial information, where the target information indicates the social behavior of the first target object; interaction information sent by the second target object according to the target information is received, where the interaction information indicates that the second target object interacts with the first target object; and the interaction information is published. Unlike the virtual accounts of existing social systems, the interaction entry is mainly based on facial information, which simplifies the process of information interaction, solves the technical problem in the related art that the information interaction process is complicated, and achieves the technical effect of simplifying the interaction process.

作为一种可选的实施方式，步骤S206，接收第二目标对象根据目标信息发送的交互信息包括：接收第二目标对象根据目标信息发送的在现实场景下的真实交互信息；和/或接收第二目标对象根据目标信息发送的在虚拟场景下的虚拟交互信息。In an optional implementation, step S206 of receiving the interaction information sent by the second target object according to the target information includes: receiving real interaction information in a real scene sent by the second target object according to the target information; and/or receiving virtual interaction information in a virtual scene sent by the second target object according to the target information.

在现实场景下，对第二目标对象与第一目标对象的真实交互信息进行记录，从而实现了对现实世界的记录。可选地，通过AR眼镜录入现实场景中的图像内容、视频内容等用户所见所得的内容，而不必像移动手机平台一样在记录现实场景中的图像内容、视频内容时来回在屏幕与现实之间切换注意力。In the real scene, the real interaction information between the second target object and the first target object is recorded, thereby realizing a record of the real world. Optionally, image content, video content, and other what-you-see-is-what-you-get content in the real scene are recorded through AR glasses, without having to switch attention back and forth between the screen and reality as on a mobile phone platform.

在虚拟场景下，接收第二目标对象根据目标信息发送的在虚拟场景下的虚拟交互信息，该虚拟交互信息为虚拟世界的交流信息，可以为单个的不含文字的静态或动态或三维预置图片，也可以为文字、语音、图片等由用户自由组织的信息。In the virtual scene, virtual interaction information sent by the second target object according to the target information is received. The virtual interaction information is exchange information of the virtual world, and may be a single preset static, dynamic, or three-dimensional picture without text, or text, voice, pictures, and other information freely organized by the user.

作为一种可选的实施方式，在接收第二目标对象根据目标信息发送的在现实场景下的真实交互信息之后，存储真实交互信息至预设存储位置；和/或在接收第二目标对象根据目标信息发送的在虚拟场景下的虚拟交互信息之后，存储虚拟交互信息至预设存储位置。In an optional implementation, after the real interaction information in the real scene sent by the second target object according to the target information is received, the real interaction information is stored to a preset storage location; and/or after the virtual interaction information in the virtual scene sent by the second target object according to the target information is received, the virtual interaction information is stored to a preset storage location.

在接收第二目标对象根据目标信息发送的在现实场景下的真实交互信息之后，将真实交互信息存储至预设存储位置，比如，存储至服务器中，从而使得在下一次获取到的目标信息中包括此次真实交互信息，可选地，在通过AR眼镜录入真实交互信息后，不需要利用其它平台，就可以回放录入的图像内容、视频内容等，用户体验的是当初录入的视角，从而带给用户更加真实的体验效果。和/或在接收第二目标对象根据目标信息发送的在虚拟场景下的虚拟交互信息之后，将虚拟交互信息存储至预设存储位置，比如，存储至服务器中，从而使得在下一次获取到的目标信息中包括此次虚拟交互信息。After the real interaction information in the real scene sent by the second target object according to the target information is received, the real interaction information is stored to a preset storage location, for example, stored in the server, so that the target information acquired next time includes this real interaction information. Optionally, after the real interaction information is recorded through AR glasses, the recorded image content, video content, and the like can be played back without using another platform, and the user experiences the viewing angle at which the content was originally recorded, bringing a more realistic experience. And/or, after the virtual interaction information in the virtual scene sent by the second target object according to the target information is received, the virtual interaction information is stored to a preset storage location, for example, stored in the server, so that the target information acquired next time includes this virtual interaction information.
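The store-then-retrieve flow above can be sketched with an in-memory stand-in for the preset storage location. The `'real'`/`'virtual'` tags, the record shape, and the function names are assumptions for illustration only.

```python
# Hedged sketch: persist received interaction information (real or virtual)
# so that it appears in the next retrieval of the target information.
STORE = {}  # owner_id -> list of interaction records, standing in for the server


def store_interaction(owner_id, kind, payload):
    """Append one interaction record; `kind` distinguishes real vs. virtual scenes."""
    STORE.setdefault(owner_id, []).append({"kind": kind, "payload": payload})


def fetch_history(owner_id):
    """Return previously stored interactions, mimicking the next target-info fetch."""
    return list(STORE.get(owner_id, []))
```

A production store would key records to the owner's registered facial information and apply the permission control described earlier before returning them.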

作为一种可选的实施方式,真实交互信息至少包括以下一种或多种:在现实场景下的语音信息;在现实场景下的图像信息;在现实场景下的视频信息。 As an optional implementation manner, the real interaction information includes at least one or more of the following: voice information in a real scene; image information in a real scene; and video information in a real scene.

第二目标对象根据目标信息发送的在现实场景下的真实交互信息包括在现实场景下的语音信息，比如，第二目标对象和第一目标对象的谈话，还包括在现实场景下的图像信息，比如，第一目标对象的面部图像，还包括在现实场景下的视频信息，比如，在会议室中开会的视频记录，从而丰富了交互信息的种类。The real interaction information in the real scene sent by the second target object according to the target information includes voice information in the real scene, for example, a conversation between the second target object and the first target object; image information in the real scene, for example, a facial image of the first target object; and video information in the real scene, for example, a video recording of a meeting in a conference room, thereby enriching the types of interaction information.

作为一种可选的实施方式，步骤S202，获取第一目标对象的面部信息包括：扫描第一目标对象的面部，得到第一目标对象的面部信息；在步骤S204，根据第一目标对象的面部信息获取第一目标对象的目标信息之后，该方法还包括：在现实场景的预设空间位置显示目标信息。In an optional implementation, step S202 of acquiring the facial information of the first target object includes: scanning the face of the first target object to obtain the facial information of the first target object. After step S204 of acquiring the target information of the first target object according to the facial information of the first target object, the method further includes: displaying the target information at a preset spatial position of the real scene.

在获取第一目标对象的面部信息时，可以通过扫描第一目标对象的面部，得到第一目标对象的面部信息，比如，通过AR眼镜安装的前置摄像头对第一目标对象的人脸进行自动识别，得到第一目标对象的人脸信息，从而实现获取第一目标对象的面部信息的目的。在扫描第一目标对象的面部，得到第一目标对象的面部信息之后，在现实场景的预设空间位置显示目标信息，比如，在第一目标对象的一侧显示目标信息，用户通过借助AR设备可以看到在预设空间位置显示的目标信息、第一目标对象以及现实场景中的其它景象。When the facial information of the first target object is acquired, it may be obtained by scanning the face of the first target object, for example, automatically recognizing the face of the first target object through a front camera mounted on AR glasses, thereby obtaining the face information of the first target object and achieving the purpose of acquiring the facial information. After the face of the first target object is scanned and its facial information obtained, the target information is displayed at a preset spatial position of the real scene, for example, on one side of the first target object; through the AR device, the user can see the target information displayed at the preset spatial position, the first target object, and other scenery in the real scene.

需要说明的是，理论上拥有摄像头的设备都可以适用于该实施例的获取第一目标对象的面部信息，包括但不限于AR眼镜设备，还可以为移动通讯终端、PC端等设备，所不同的是易用性以及交互的操作方式。It should be noted that, in theory, any device with a camera is applicable to acquiring the facial information of the first target object in this embodiment, including but not limited to AR glasses, mobile communication terminals, and PCs; the differences lie in ease of use and the interaction mode.

作为一种可选的实施方式，在现实场景的预设空间位置显示目标信息包括：根据第一目标对象在现实场景中的当前空间位置确定目标信息在现实场景中的显示空间位置；在显示空间位置显示目标信息。In an optional implementation, displaying the target information at the preset spatial position of the real scene includes: determining a display spatial position of the target information in the real scene according to the current spatial position of the first target object in the real scene; and displaying the target information at the display spatial position.

图3是根据本发明实施例的一种在现实场景的预设空间位置显示目标信息的方法的流程图。如图3所示,该在现实场景的预设空间位置显示目标信息的方法包括以下步骤: FIG. 3 is a flowchart of a method for displaying target information in a preset spatial position of a real scene according to an embodiment of the present invention. As shown in FIG. 3, the method for displaying target information in a preset spatial position of a real scene includes the following steps:

步骤S301,确定第一目标对象在现实场景中的当前空间位置。Step S301, determining a current spatial location of the first target object in the real scene.

在本发明上述步骤S301提供的技术方案中，在获取第一目标对象的第一目标信息之后，确定第一目标对象在现实场景中的当前空间位置，该当前空间位置可以为第一目标对象的面部在现实场景中的位置。可选地，通过与第二目标对象之间的距离、相对第二目标对象的方向等信息确定第一目标对象在现实场景中的当前位置。In the technical solution provided in the above step S301 of the present invention, after the first target information of the first target object is acquired, the current spatial position of the first target object in the real scene is determined; the current spatial position may be the position of the face of the first target object in the real scene. Optionally, the current position of the first target object in the real scene is determined by information such as the distance from the second target object and the direction relative to the second target object.

步骤S302,根据当前空间位置确定目标信息在现实场景中的显示空间位置。Step S302, determining a display space position of the target information in the real scene according to the current spatial position.

在本发明上述步骤S302提供的技术方案中，在确定第一目标对象在现实场景中的当前空间位置之后，根据当前空间位置确定目标信息在现实场景中的显示空间位置，可以确定显示空间位置位于当前空间位置的左侧、右侧、上方、下方等，也可以根据当前空间位置进行手动设置，从而达到目标信息的显示位置和现实场景进行很好叠加的效果。In the technical solution provided in the above step S302 of the present invention, after the current spatial position of the first target object in the real scene is determined, the display spatial position of the target information in the real scene is determined according to the current spatial position. The display spatial position may be determined to be to the left of, to the right of, above, or below the current spatial position, or may be set manually according to the current spatial position, so that the display position of the target information is well superimposed on the real scene.

步骤S303,在显示空间位置显示目标信息。In step S303, the target information is displayed in the display space position.

在本发明上述步骤S303提供的技术方案中，在根据当前空间位置确定目标信息在现实场景中的显示空间位置之后，在显示空间位置显示目标信息，可以以自动浮现形式在第一目标对象的一侧浮现出目标信息，也可以以弹跳形式、渐入形式等在第一目标对象的一侧显示目标信息，此处不做限定，从而提高了信息交互的趣味性。In the technical solution provided in the above step S303 of the present invention, after the display spatial position of the target information in the real scene is determined according to the current spatial position, the target information is displayed at the display spatial position. The target information may emerge automatically on one side of the first target object, or may be displayed on one side of the first target object in a bouncing or fade-in form, which is not limited herein, thereby making the information interaction more interesting.

该实施例通过确定第一目标对象在现实场景中的当前空间位置；根据当前空间位置确定目标信息在现实场景中的显示空间位置；在显示空间位置显示目标信息，实现了根据第一目标对象的面部信息在现实场景的预设空间位置显示目标信息的目的，进而简化了信息交互的过程。In this embodiment, the current spatial position of the first target object in the real scene is determined; the display spatial position of the target information in the real scene is determined according to the current spatial position; and the target information is displayed at the display spatial position, achieving the purpose of displaying the target information at the preset spatial position of the real scene according to the facial information of the first target object and thereby simplifying the process of information interaction.
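Steps S301 to S303 can be sketched as simple vector arithmetic: the display position is derived from the face's current spatial position by a preset offset. The coordinate convention and the offset value are assumptions introduced for this illustration.

```python
# Hedged sketch of steps S301-S303: place the information panel at a fixed
# offset from the first target object's face position in scene coordinates.
SIDE_OFFSET = (0.5, 0.0, 0.0)  # assumed: half a metre to one side of the face


def display_position(face_position, offset=SIDE_OFFSET):
    """Step S302: compute the display spatial position from the current face position."""
    return tuple(p + o for p, o in zip(face_position, offset))
```

A renderer would then anchor the target-information panel at the returned coordinates (step S303); swapping the offset changes whether the panel sits left, right, above, or below the face.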

As an optional implementation, step S303, displaying the target information at the display position, includes at least one or more of the following: when the target information includes user profile information, displaying the user profile information of the first target object at a first display position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at a second display position; when the target information includes extended information, displaying the extended information of the first target object at a third display position; and when the target information includes historical interaction information, displaying, at a fourth display position, the historical interaction information generated during past interactions between the second target object and the first target object.

The target information includes user profile information, which is the basic information of the first target object, for example, the first target object's nickname, name, address, contact details, and personal signature. When the target information includes user profile information, the user profile information of the first target object is displayed at the first display position. Optionally, the user profile information is superimposed beside the face of the first target object; through AR glasses, the user can see not only the target information at the first display position but also the rest of the real scene, thereby combining the virtual world with the real world.

The target information may further include personal dynamic information, which is displayed at the second display position. Optionally, with the user profile information already superimposed beside the face of the first target object, a display instruction is received that triggers the operation of scrolling down or tapping the icon of the personal dynamic information, where the display instruction may be a voice instruction, an instruction generated by the user's gesture tap, or an instruction generated by the user's gaze dwelling. After this operation is performed, the personal dynamic information of the first target object is displayed at the second display position; the items may surface one by one in timeline order, or be presented with a bouncing or fade-in animation, which is not limited here. Personal dynamic information is one of the entry points for information interaction.

The target information may further include extended information, which is displayed at the third display position. The extended information includes the first target object's third-party social account information; based on the network address characteristics of the third-party social platform, the content published by the first target object can be pulled according to that account information.

The target information may further include historical interaction information: the historical interaction information generated during past interactions between the second target object and the first target object is displayed at the fourth display position. The historical interaction information may be picture information, voice information, text information, video information, and so on. Historical interaction information is a message session, one of the entry points for information interaction, and records the communication in both virtual and real scenes.

The target information of this embodiment is virtual content superimposed on the real world, combining virtual and real elements of the interaction and thereby giving the user a more realistic interactive experience.

As an optional implementation, step S204, displaying the target information at a preset spatial position in the real scene according to the facial information of the first target object, includes: when the face of the first target object is scanned, determining whether the server stores facial feature data matching the facial information of the first target object; if it is determined that such matching facial feature data is stored, determining whether the face-scan permission of the first target object allows scanning, that is, whether the scan permission of the account corresponding to the facial feature data allows scanning; and if it is determined that the face-scan permission of the first target object allows scanning, displaying visible information at the preset spatial position, where the visible information includes at least the user profile information of the first target object.

FIG. 4 is a flowchart of another method for displaying target information at a preset spatial position in a real scene according to the facial information of a first target object, in accordance with an embodiment of the present invention. As shown in FIG. 4, the method includes the following steps:

Step S401: scan a face.

In the technical solution provided in step S401 above, in a scene where faces are visible, the information display takes face scanning as the main entry scene. In this main entry scene, a face scan is performed to determine whether a face is present. Optionally, the faces of multiple target objects, including the face of the first target object, are within the preset range. If no face is scanned, scanning continues to determine whether the faces of other objects can be detected. If the face of an object is scanned, it is determined whether the server stores facial data matching the facial information of the scanned object; if it is determined that no matching facial data is stored, scanning continues for other objects. If it is determined that matching facial data is stored, it is further determined whether the object's face-scan permission allows displaying the object's visible information within its permission scope after its face is scanned; if the permission does not allow this, scanning continues for the faces of other objects, and so on.

Step S402: determine whether the server stores facial feature data matching the facial information of the first target object.

In the technical solution provided in step S402 above, when the face of the first target object is scanned, it is determined whether the server stores facial feature data matching the facial information of the first target object.

If the first target object has registered its information in the augmented reality application, the server stores the first target object's facial feature data. When the face of the first target object is scanned, its facial information is acquired; this facial information may consist of facial data with preset features. It is then determined whether the server stores facial feature data matching this facial information. Optionally, a match between facial information and facial feature data means that the degree of coincidence or similarity between the data in the facial information and the facial feature data falls within a preset threshold; for example, if the degree of coincidence or similarity reaches 80% or more, the facial information is determined to match the facial feature data, that is, the server stores facial feature data matching the facial information of the first target object.
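The matching rule described above (similarity within a preset threshold, e.g. 80%) can be sketched as follows. The feature-vector representation and the use of cosine similarity are illustrative assumptions, since the specification does not fix a particular feature format or metric:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_matching_account(scanned_features, stored_accounts, threshold=0.8):
    """Return the first stored account whose facial feature data matches
    the scanned facial information, i.e. whose similarity reaches the
    preset threshold. Returning None models the case where the server
    stores no matching data and scanning continues with other faces.
    """
    for account_id, features in stored_accounts.items():
        if cosine_similarity(scanned_features, features) >= threshold:
            return account_id
    return None
```

Here `stored_accounts` stands in for the server-side store of registered facial feature data; a real system would use a dedicated face-recognition model rather than raw vector similarity.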

Optionally, if it is determined that the server does not store facial feature data matching the facial information of the first target object, step S401 is performed to continue scanning the faces of objects other than the first target object.

Step S403: determine whether the face-scan permission of the first target object allows scanning.

In the technical solution provided in step S403 above, if it is determined that the server stores facial feature data matching the facial information of the first target object, it is determined whether the face-scan permission of the first target object allows scanning.

The face-scan permission of the first target object indicates the extent to which the first target object's face is open to scanning by others. It includes: allowing all objects to scan the face of the first target object through the augmented reality application, that is, everyone may scan; allowing only preset objects to scan, that is, only preset objects may scan; or prohibiting any object from scanning, that is, scanning is prohibited, where the preset objects may be friends. The face-scan permission of the first target object is determined when the first target object requests the server to store its facial feature data. It is determined whether the face-scan permission of the first target object allows scanning; if it is determined that scanning is allowed, step S404 is performed.
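A minimal sketch of the three permission levels just described; the level names, data layout, and friend-list lookup are assumed details, not part of the specification:

```python
# Assumed encoding of the three face-scan permission levels.
ALLOW_EVERYONE = "everyone"   # all objects may scan
ALLOW_FRIENDS = "friends"     # only preset objects (e.g. friends) may scan
ALLOW_NOBODY = "nobody"       # scanning prohibited

def may_scan(scanner_id, owner):
    """Decide whether `scanner_id` may scan the face of `owner`.

    `owner` is a dict holding the scan permission chosen when the
    facial feature data was stored and, for the friends-only level,
    the owner's friend list.
    """
    permission = owner["scan_permission"]
    if permission == ALLOW_EVERYONE:
        return True
    if permission == ALLOW_FRIENDS:
        return scanner_id in owner.get("friends", set())
    return False  # ALLOW_NOBODY: the scanner moves on to other faces
```

A denial here corresponds to the flow returning to step S401 and continuing to scan other faces.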

Optionally, if it is determined that the face-scan permission of the first target object does not allow the second target object to scan the first target object's face through the augmented reality application, step S401 is performed to continue scanning the faces of objects other than the first target object.

Step S404: display, at the preset spatial position, the visible information of the first target object within its permission scope.

In the technical solution provided in step S404 above, if it is determined that the face-scan permission of the first target object allows scanning, the visible information of the first target object within its permission scope is displayed at the preset spatial position, where the visible information includes at least the user profile information of the first target object.

The visible information of the first target object within its permission scope may include the user profile information, extended information, and dynamic information of the first target object that fall within that scope. The permission scope of the user profile information and extended information is determined when the first target object registers its information with the server; the permission control of each item of user profile information and extended information can be divided into at least three categories: visible to all objects through the augmented reality application, visible only to preset objects, or visible only to the object itself. The control permission of dynamic information is determined when the dynamic information is published and may include four categories: visible to all objects through the augmented reality application, visible to friends, visible to specific friends, or visible only to the object itself. After determining whether the face-scan permission of the first target object allows scanning, if scanning is allowed, the user profile information, extended information, and dynamic information within the permission scope can be displayed at the preset spatial position. Dynamic information is one of the entry points for information interaction, including but not limited to emoticons and comments. The other main entry point for information interaction is the message session, which records the communication in both virtual and real scenes.
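The per-item visibility control described above can be sketched as a filter over the stored items; the category names and data layout are illustrative assumptions:

```python
# Assumed visibility categories for each profile/extended-information item.
VISIBLE_TO_ALL = "all"
VISIBLE_TO_FRIENDS = "friends"
VISIBLE_TO_SELF = "self"

def visible_profile(viewer_id, owner_id, friends, fields):
    """Return only the items the viewer is entitled to see.

    `fields` maps an item name to (value, visibility category), as
    chosen by the owner at registration time. The owner always sees
    their own items.
    """
    result = {}
    for name, (value, visibility) in fields.items():
        if visibility == VISIBLE_TO_ALL:
            result[name] = value
        elif visibility == VISIBLE_TO_FRIENDS and (
            viewer_id in friends or viewer_id == owner_id
        ):
            result[name] = value
        elif visibility == VISIBLE_TO_SELF and viewer_id == owner_id:
            result[name] = value
    return result
```

The same pattern extends to the four-level control of dynamic information by adding a "specific friends" category with its own allow-list.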

In this embodiment, a face is scanned; when the face of the first target object is scanned, it is determined whether the server stores facial feature data matching the facial information of the first target object; if so, it is determined whether the face-scan permission of the first target object allows scanning; and if scanning is allowed, visible information, including at least the user profile information of the first target object, is displayed at the preset spatial position. This achieves the goal of displaying target information at a preset spatial position in the real scene according to the facial information of the first target object, thereby simplifying the process of information interaction.

As an optional implementation, the visible information includes the extended information of the first target object, and step S404, displaying the visible information within the permission scope at the preset spatial position, includes: when the first target object has account information on a third-party platform, receiving a first display instruction indicating that the extended content corresponding to the account information is to be shown, and showing the extended content within the permission scope at the preset spatial position.

FIG. 5 is a flowchart of a method for displaying, at a preset spatial position, the visible information of a first target object within its permission scope, in accordance with an embodiment of the present invention. As shown in FIG. 5, the method includes the following steps:

Step S501: determine whether the first target object has account information on a third-party platform.

In the technical solution provided in step S501 above, it is determined whether the first target object has account information on a third-party platform, where the extended information includes the account information. When showing the permission-visible information, if it is determined that the face-scan permission of the first target object allows scanning, the extended information of the first target object within its permission scope is allowed to be displayed after its face is scanned; this extended information includes the first target object's third-party platform account information. The second target object can use this account information to obtain content that the first target object has already published on the third-party platform. Among the visible information shown within the permission scope, it is determined whether the first target object has third-party platform account information.

Step S502: receive a first display instruction indicating that the extended content corresponding to the account information is to be shown.

In the technical solution provided in step S502 above, if it is determined that the first target object has account information on a third-party platform, a first display instruction indicating that the extended content corresponding to the account information is to be shown is received.

The preset spatial position may also be annotated with the icons of the third-party platforms whose content can be pulled, which may be displayed at the bottom of the display position of the user profile information. After determining whether the first target object has third-party platform account information, the first display instruction indicating that the extended content corresponding to the account information is to be shown is received via the third-party platform icon; the first display instruction may be a voice instruction, an instruction generated by the user's gesture tap, an instruction generated by the user's gaze dwelling, or the like.

Step S503: show the extended content within the permission scope at the preset spatial position.

In the technical solution provided in step S503 above, after the first display instruction is received, the extended content within the permission scope is shown at the preset spatial position.

After the first display instruction indicating that the extended content corresponding to the account information is to be shown is received, the extended content within the permission scope is shown at the preset spatial position; the view can switch to the timeline information feed on the third-party platform, thereby obtaining richer information.
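As an illustrative sketch only, the "network address characteristics" of each platform might be modelled as a URL template keyed by platform name; the platform names and URL patterns below are entirely hypothetical, since the specification names no concrete platforms or endpoints:

```python
# Hypothetical URL templates keyed by platform name. A real integration
# would depend on each third-party platform's actual API, which is not
# specified in the text.
FEED_URL_TEMPLATES = {
    "example-blog": "https://blog.example.com/users/{account}/timeline",
    "example-photos": "https://photos.example.com/{account}/feed",
}

def feed_url(platform, account):
    """Build the timeline-feed URL for a third-party account, or return
    None if the platform's address pattern is unknown."""
    template = FEED_URL_TEMPLATES.get(platform)
    return template.format(account=account) if template else None
```

The returned URL is where the client would pull the first target object's published content from once the account information is known.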

In this embodiment, it is determined whether the first target object has account information on a third-party platform, where the extended information includes the account information; if so, a first display instruction indicating that the extended content corresponding to the account information is to be shown is received; and after the first display instruction is received, the extended content within the permission scope is shown at the preset spatial position, achieving the goal of displaying visible information at the preset spatial position.

The visible information includes the personal dynamic information of the first target object, and step S404, displaying the visible information within the permission scope at the preset spatial position, includes: receiving a second display instruction indicating that the personal dynamic information is to be shown; and after the second display instruction is received, showing the personal dynamic information within the permission scope at the preset spatial position.

FIG. 6 is a flowchart of another method for displaying, at a preset spatial position, the visible information of a first target object within its permission scope, in accordance with an embodiment of the present invention. As shown in FIG. 6, the method includes the following steps:

Step S601: receive a second display instruction indicating that the personal dynamic information is to be shown.

In the technical solution provided in step S601 above, a second display instruction indicating that the personal dynamic information is to be shown is received. When showing the permission-visible information, if it is determined that the face-scan permission of the first target object allows scanning, the personal dynamic information of the first target object within its permission scope is allowed to be displayed after its face is scanned. The second display instruction, which may be a voice instruction, an instruction generated by the user's gesture tap, an instruction generated by the user's gaze dwelling, or the like, can then be received, and according to it the operation of scrolling down or tapping the personal dynamic information icon is performed.

Step S602: show the personal dynamic information within the permission scope at the preset spatial position.

In the technical solution provided in step S602 above, after the second display instruction is received, the personal dynamic information within the permission scope is shown at the preset spatial position.

After the second display instruction indicating that the personal dynamic information is to be shown is received, the personal dynamic information within the permission scope can be shown in addition to the already displayed user profile information.

In this embodiment, a second display instruction indicating that the personal dynamic information is to be shown is received, and after it is received, the personal dynamic information is shown at the preset spatial position, thereby achieving the goal of displaying visible information within the permission scope at the preset spatial position and simplifying the process of information interaction.

As an optional implementation, before the facial information of the first target object is acquired, the information of the first target object is registered. The method includes: sending a first request to the server, where the first request carries facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object; optionally sending a second request to the server, where the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or sending a third request to the server, where the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.

Before the facial information of the first target object is acquired, the first target object registers information through the server, and the registered information includes the facial information of the first target object. When registering the facial information, the facial image information of the first target object needs to be captured in real time and verified for authenticity, including but not limited to: verifying whether a face is present, prompting the first target object in real time to perform a specified facial action based on the face information, determining whether the actual facial action performed by the first target object matches the facial action used for authenticity verification, and, when it matches, detecting whether the face information is three-dimensional, so as to further prevent fraudulent registration.

When the face information is detected to be three-dimensional, the facial feature data of the first target object is acquired and a first request carrying that facial feature data is sent to the server; the server responds to the first request and stores the facial feature data of the first target object. When registering the facial information of the first target object, the permission control can be set to allow everyone to scan, allow friends to scan, or prohibit scanning.
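The registration-time authenticity checks described above (face present, prompted facial action performed, face three-dimensional) can be sketched as a small pipeline; the callback-based interface and action names are assumed for illustration and are not part of the specification:

```python
import random

# Assumed set of facial actions the user can be prompted to perform.
FACIAL_ACTIONS = ["blink", "open_mouth", "turn_head_left", "nod"]

def verify_liveness(detect_face, observe_action, is_three_dimensional):
    """Run the registration-time authenticity checks described in the text.

    The three arguments are callables supplied by the capture pipeline:
    - detect_face() -> bool: whether a face is present at all;
    - observe_action(prompted) -> str: the action the user actually made
      after being prompted;
    - is_three_dimensional() -> bool: whether the face has 3D form.
    Facial feature data would only be extracted and sent to the server
    (the "first request") if every check passes.
    """
    if not detect_face():
        return False
    prompted = random.choice(FACIAL_ACTIONS)
    if observe_action(prompted) != prompted:
        return False
    return is_three_dimensional()
```

Randomly choosing the prompted action each time is what makes a replayed photo or video unlikely to pass the check.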

The registered information may further include the user profile information of the first target object, including but not limited to the first target object's nickname, name, address, contact details, and signature. A second request carrying the user profile information of the first target object is sent to the server; the server responds to the second request and stores the user profile information, thereby registering the basic information of the first target object.

The registered information may further include the extended information of the first target object. The extended information includes the user's own third-party social account information provided by the user. The social platform of this embodiment can pull the information published under a user account once that account is known, providing an aggregated capability to pull third-party social platform information so that the scanning party obtains richer existing content.

For the user profile information and extended information of this embodiment, the user may choose the degree of disclosure at registration time according to his or her own wishes. In terms of permission control over the user profile information and extended information, the control granularity of each item can be divided into at least three categories: visible to everyone, visible only to friends, or visible only to the user. For example, permission controls for age, phone number, address information, and so on can each be set in this way according to the user's needs.

It should be noted that this embodiment does not limit the type of client used in the information registration process: registration may be performed through AR glasses, through a mobile communication terminal, or through a PC, without limitation here.

As an optional implementation, sending the first request to the server includes: when the face of the first target object is detected, issuing an instruction directing the first target object to perform a preset facial action; when the actual facial action performed by the first target object according to the instruction matches the preset facial action, detecting whether the face of the first target object is three-dimensional; when the face of the first target object is detected to be three-dimensional, acquiring the facial feature data of the first target object; and sending the first request to the server according to the facial feature data.

FIG. 7 is a flowchart of a method for sending a first request to a server according to an embodiment of the present invention. As shown in FIG. 7, the method includes the following steps:

Step S701: detect a face.

In the technical solution provided in step S701 above, a face is detected.

This embodiment captures face information in real time. Optionally, there are multiple objects, which include the first target object. Before the face information of the first target object is acquired, the facial image data of the first target object is detected; for example, it may be detected through a front camera. Optionally, the user takes a real-time self-portrait of the face, and the system verifies the authenticity of the received facial image data.

It should be noted that the face detection algorithm of this embodiment is not limited to any particular method; candidates include, but are not limited to, traditional algorithms such as feature-based recognition, template matching, and neural network recognition, as well as the Gaussian-process face recognition (GaussianFace) algorithm.

Optionally, if no face is detected, face detection continues.

Step S702: issue an instruction directing the first target object to perform a preset facial action.

In the technical solution provided in step S702 above, when the face of the first target object is detected, an instruction directing the first target object to perform a preset facial action is issued, and the first target object performs a facial action according to the instruction, producing an actual facial action.

When the face of the first target object is detected, the first target object is prompted to perform the specified facial action in real time. A voice instruction directing the first target object to perform the preset facial action may be issued, and the first target object may, following the voice instruction in real time, perform a preset facial action such as raising the head, lowering the head, turning slightly left, turning slightly right, frowning, opening the mouth, or blinking.
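Issuing the instruction can be sketched as choosing one of the preset facial actions listed above as a challenge to announce by voice prompt. The action names come from the text; selecting the challenge at random is an assumption, since the text only says that a preset action is indicated.

```python
import random

# Preset facial actions named in the text.
PRESET_ACTIONS = [
    "raise head", "lower head", "turn slightly left", "turn slightly right",
    "frown", "open mouth", "blink",
]

def issue_challenge(rng=random):
    """Pick one preset facial action to announce to the user.

    Random selection makes the challenge unpredictable, which is what
    gives the subsequent action-matching step its anti-replay value;
    this selection policy is an illustrative assumption."""
    return rng.choice(PRESET_ACTIONS)

# Seeded generator so the sketch is reproducible.
challenge = issue_challenge(random.Random(7))
```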

Step S703: determine whether the actual facial action matches the preset facial action.

In the technical solution provided in step S703 above, it is determined whether the actual facial action matches the preset facial action.

After the instruction directing the first target object to perform the preset facial action is issued, it is determined whether the actual facial action matches the preset facial action. If it is determined that they do not match, step S701 is performed again to continue detecting a face. If it is determined that they match, step S704 is performed. The authenticity of the received image information is thereby established by whether the actual facial action matches the preset facial action.

Step S704: detect whether the face of the first target object is three-dimensional.

In the technical solution provided in step S704 above, if it is determined that the actual facial action matches the preset facial action, it is detected whether the face of the first target object is three-dimensional.

After determining whether the actual facial action matches the preset facial action, and if they match, it is detected whether the face of the first target object is three-dimensional; that is, facial depth information detection is performed on the face of the first target object. Optionally, when the depth camera information of the AR glasses shows that the face is three-dimensional, the authenticity of the received facial image information is confirmed. This defeats currently known spoofing methods and improves the security of information registration; one such spoofing method is playing a previously prepared facial image on a terminal screen to deceive the registration system.
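The two liveness tests of steps S702 to S704 can be sketched as follows: the announced action must match what the user actually did, and the depth samples inside the face region must span a real face's relief rather than the near-flat plane of a photo or phone screen. The depth-map representation and the 20 mm threshold are illustrative assumptions.

```python
def is_three_dimensional(depth_map, min_depth_range_mm=20.0):
    """Crude 3D-face check: a real face's depth samples span tens of
    millimetres (nose vs. cheeks), while a photo or a phone screen held
    up to the camera is nearly flat.

    `depth_map` is a list of per-pixel depths (mm) inside the detected
    face region; the 20 mm threshold is an illustrative assumption."""
    if not depth_map:
        return False
    return (max(depth_map) - min(depth_map)) >= min_depth_range_mm

def liveness_check(actual_action, preset_action, depth_map):
    """Both tests from steps S702 to S704 must pass: the user performed
    the requested action, and the face has real depth."""
    return actual_action == preset_action and is_three_dimensional(depth_map)

flat_screen = [412.0, 413.1, 412.6, 412.9]       # phone held up to camera
real_face = [395.0, 401.5, 418.0, 432.7, 425.3]  # nose closer than cheeks
```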

Step S705: when the face of the first target object is detected to be three-dimensional, acquire the facial feature data of the first target object.

In the technical solution provided in step S705 above, when the face of the first target object is detected to be three-dimensional, the facial feature data of the first target object is acquired.

When the face of the first target object is detected to be three-dimensional, facial feature data matching the face information of the first target object is acquired; the face information of the first target object is allowed to deviate from the facial feature data within a preset error threshold.

Step S706: send the first request to the server according to the facial feature data.

In the technical solution provided in step S706 above, the first request is sent to the server according to the facial feature data. The first request carries the facial feature data matching the face information of the first target object; the server responds to the first request and stores the facial feature data of the first target object.

Optionally, in step S204, acquiring the target information of the first target object according to the face information of the first target object includes: requesting, according to the face information of the first target object, that the server deliver the target information according to the facial feature data; and receiving the target information.

After the face information of the first target object is acquired, a request for matching the face information is sent to the server according to that face information. The server responds to the request by searching the facial feature database for the facial feature data of the first target object and, after finding the facial data of the first target object, delivers the target information.
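The server-side lookup can be sketched as a nearest-neighbour match over stored feature vectors, accepting a match only within the preset error threshold mentioned at step S705. The feature vectors, the Euclidean metric, and the 0.25 threshold are all illustrative assumptions.

```python
import math

# Hypothetical server-side store: user id -> registered face feature vector.
FEATURE_DB = {
    "user_a": [0.12, 0.80, 0.33],
    "user_b": [0.90, 0.10, 0.55],
}
TARGET_INFO = {
    "user_a": {"nickname": "A", "timeline": ["post 1"]},
    "user_b": {"nickname": "B", "timeline": []},
}

def match_face(query, threshold=0.25):
    """Return the id of the closest registered user, or None.

    The face information is allowed to deviate from the stored feature
    data within a preset threshold (step S705); the Euclidean metric
    and the 0.25 value are illustrative assumptions."""
    best_id, best_dist = None, float("inf")
    for uid, feats in FEATURE_DB.items():
        dist = math.dist(query, feats)
        if dist < best_dist:
            best_id, best_dist = uid, dist
    return best_id if best_dist <= threshold else None

def fetch_target_info(query_features):
    """Server side of the lookup: match the face, then deliver the
    matched user's target information (or None if no match)."""
    uid = match_face(query_features)
    return TARGET_INFO.get(uid) if uid else None
```

Real face-recognition features are high-dimensional embeddings, but the matching-with-threshold structure is the same.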

In this embodiment, a face is detected; when the face of the first target object is detected, an instruction directing the first target object to perform a preset facial action is issued; it is determined whether the actual facial action matches the preset facial action; if they match, it is detected whether the face of the first target object is three-dimensional; when the face is detected to be three-dimensional, the facial feature data of the first target object is acquired; and the first request is sent to the server according to the facial feature data. The server thereby stores facial feature data for matching against the face information of the first target object.

As an optional implementation, before the interaction information sent by the second target object according to the target information is received, and in the case where the face of the first target object is not visible, search information indicating a search for the target information is received, where the user profile information includes the search information; the target information is then acquired according to the search information.

FIG. 8 is a flowchart of another information interaction method according to an embodiment of the present invention. As shown in FIG. 8, the information interaction method further includes the following steps:

Step S801: receive search information indicating a search for the target information.

In the technical solution provided in step S801 above, when the face of the first target object is not visible, search information indicating a search for the target information is received, where the user profile information includes the search information.

Information display takes acquiring the face information of the first target object as the primary entry scenario, supplemented by receiving search information indicating a search for the target information; the search information may be a voice search on user profile information such as a nickname or name. Acquiring the face information of the first target object applies to scenarios where the face is visible, while receiving the search information applies to scenarios where the face information cannot be acquired, or cannot be acquired accurately.

Step S802: acquire the target information according to the search information.

In the technical solution provided in step S802 above, the target information is acquired according to the search information.

After the search information indicating a search for the target information is received, the target information is acquired according to the search information; for example, the target information of the first target object may be acquired by voice-searching the nickname, name, or other profile fields of the first target object.

In this embodiment, before the interaction information sent by the second target object according to the target information is received, and when the face of the first target object is not visible, search information indicating a search for the target information is received, the user profile information including the search information; the target information is then acquired according to the search information. This achieves acquisition of the target information and simplifies the information interaction process.
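The voice-search fallback can be sketched as a simple lookup over registered profile fields. The sample users and the case-insensitive substring matching rule are illustrative assumptions; a real system would match against the transcript produced by speech recognition.

```python
# Hypothetical registered users, keyed by user id.
USERS = {
    "u1": {"nickname": "SkyWalker", "name": "Lu Ming"},
    "u2": {"nickname": "Rainy", "name": "Chen Yang"},
}

def search_target(query):
    """Fallback entry when the face is not visible: find users whose
    nickname or name matches the (voice-transcribed) query string.
    Case-insensitive substring matching is an illustrative assumption."""
    q = query.lower()
    return [uid for uid, u in USERS.items()
            if q in u["nickname"].lower() or q in u["name"].lower()]

hits = search_target("rainy")
```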

FIG. 9 is a flowchart of another information interaction method according to an embodiment of the present invention. As shown in FIG. 9, the information interaction method further includes the following steps:

Step S901: identify the facial contour of the first target object according to the face information of the first target object.

In the technical solution provided in step S901 above, the facial contour of the first target object is identified according to the face information of the first target object.

In an augmented reality application, the facial contour of the first target object is identified according to the face information of the first target object; for example, AR glasses may identify the facial contour of the first target object from that face information.

Step S902: add static and/or dynamic three-dimensional image information at a preset position of the facial contour.

In the technical solution provided in step S902 above, static and/or dynamic three-dimensional image information is added at a preset position of the facial contour.

The three-dimensional image information may be a three-dimensional decoration: a static or dynamic three-dimensional decoration is added to the recognized facial contour by the AR glasses.

In this embodiment, after the face information of the first target object is acquired, the facial contour of the first target object is identified according to that face information, and static and/or dynamic three-dimensional image information is added at a preset position of the facial contour, thereby making the information interaction more engaging.
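Placing a decoration at a "preset position" of the facial contour can be sketched as computing an anchor point from the contour, for example centred above its topmost point. The specific placement rule and pixel offset are assumptions; the text leaves the preset position open.

```python
def contour_anchor(contour_points, offset=(0.0, -30.0)):
    """Compute where to render a decoration relative to the recognized
    facial contour: here, centred horizontally and `offset[1]` pixels
    above the topmost contour point (screen y grows downward).

    The placement rule and the 30 px offset are illustrative assumptions."""
    xs = [p[0] for p in contour_points]
    ys = [p[1] for p in contour_points]
    center_x = sum(xs) / len(xs)
    top_y = min(ys)
    return (center_x + offset[0], top_y + offset[1])

# A square face contour: the decoration lands 30 px above its top edge.
contour = [(100, 200), (200, 200), (200, 300), (100, 300)]
anchor = contour_anchor(contour)
```

An AR renderer would re-evaluate this anchor every frame so the decoration tracks the face; a dynamic decoration would additionally animate its own geometry.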

As an optional implementation, publishing interaction information includes at least one or more of the following: publishing interaction information in voice form; publishing interaction information in picture form, where interaction information in picture form includes interaction information in panoramic picture form; publishing interaction information in video form; and publishing interaction information in the form of a three-dimensional model.

The interaction information produced in this embodiment depends on the hardware used. Taking AR glasses as an example, the most intuitive and convenient content mainly includes interaction information in voice form, picture form, and video form. It also includes interaction information in panoramic picture form and in the form of three-dimensional models, as supported by AR device capabilities.
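The media types above can be sketched as an enumeration together with a capability table describing which device classes can handle which types. The split between basic devices and AR glasses is an illustrative assumption based on the capabilities named in the text.

```python
from enum import Enum

class MediaType(Enum):
    VOICE = "voice"
    PICTURE = "picture"
    PANORAMA = "panorama"   # picture subtype tied to AR device capability
    VIDEO = "video"
    MODEL_3D = "3d_model"   # tied to AR device capability

# Which media a device class can author or render; the split is an
# illustrative assumption based on the capabilities named in the text.
DEVICE_MEDIA = {
    "ar_glasses": set(MediaType),
    "basic_phone": {MediaType.VOICE, MediaType.PICTURE, MediaType.VIDEO},
}

def can_publish(device, media):
    """True if the device class supports publishing this media type."""
    return media in DEVICE_MEDIA.get(device, set())
```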

The technical solution of the present invention is described below with reference to preferred embodiments.

Embodiments of the present invention are preferably applied to AR glasses equipped with a front camera. However, the embodiments are not limited to AR glasses: they also apply to mobile communication terminals and PCs, and in principle to any device with a camera; the differences lie in ease of use and the mode of interaction.

An embodiment of the present invention further provides an augmented reality social system, which mainly includes a registration module, an information display and interaction module, and an information generation and publishing module.

The registration module provides user information that includes a real face; the information display and interaction module provides AR information display and an interaction entry once a face is recognized; and the information generation and publishing module focuses on generating the user's own updates.

The implementation of the registration module in the embodiment of the present invention is described below.

FIG. 10 is a flowchart of an information registration method according to an embodiment of the present invention. As shown in FIG. 10, the information registration method includes the following steps:

Step S1001: enter basic information.

The information a user registers in the system includes basic information, face information, and extended information. The basic information is similar to that of existing platforms, including but not limited to nickname, name, gender, address, contact information, and signature.

Step S1002: detect a face.

Face information is the key information of this system. The user needs to take a real-time self-portrait of the face, and the system verifies the authenticity of the received facial image information. The verification process includes, but is not limited to, using a face detection algorithm to verify whether a face is present. If a face is detected, step S1003 is performed; if no face is detected, this step continues and a face is detected. The system does not mandate any particular face detection algorithm; candidates include, but are not limited to, traditional algorithms such as feature-based recognition, template matching, and neural network recognition, as well as the Gaussian-process face recognition (GaussianFace) algorithm.

Step S1003: instruct the user to perform a specified facial action in real time.

When a face is detected, the system prompts the user to perform a specified facial action in real time, and the user makes an actual facial action according to the system prompt.

Step S1004: determine whether the actual facial action made by the user matches the specified facial action.

After the user is instructed to perform the specified facial action in real time, it is determined whether the actual facial action made by the user matches the specified facial action, so as to verify the authenticity of the facial image information. If it is determined that they match, step S1005 is performed; if they do not match, step S1002 is performed to detect a face again.

Step S1005: perform facial depth information detection.

After determining whether the actual facial action made by the user matches the specified facial action, and if they match, facial depth information detection is performed.

Step S1006: determine whether the detected facial image information is three-dimensional.

The depth camera information of the AR glasses may be used to detect whether the face is three-dimensional, thereby defeating currently known methods of spoofing facial image information, such as playing previously prepared face information, or a moving video of a face, on the screen of a mobile communication terminal to deceive the registration system.

Step S1007: request the server to store the facial image information and use it as facial feature data.

After determining whether the detected facial image information is three-dimensional, and if it is, the server is requested to store the facial image information as facial feature data in the facial feature database, thereby completing the face information registration process that follows entry of the basic information.

The extended information includes the user's own third-party social account information; with this account information, the information the user has published on third-party social platforms can be pulled. The system provides the capability to aggregate and pull third-party social platform information, allowing the scanning party to obtain richer existing content.

The user can choose, according to their own wishes, how widely the registered information content is disclosed, and this degree of disclosure is implemented through access control. For basic information and extended information, the control granularity of each item falls into at least three categories: visible to everyone, visible to friends only, and visible to oneself only. For the face information itself, access control also falls into at least three categories: scannable by everyone, scannable by friends only, and scanning prohibited.

The implementation of the information display and interaction module in the embodiment of the present invention is described below.

FIG. 11 is a flowchart of an information display and interaction method according to an embodiment of the present invention. As shown in FIG. 11, the method includes the following steps:

Step S1101: scan a face.

In this embodiment a face may be detected through a camera, for example, through the front camera of the AR glasses.

Step S1102: determine whether a face is detected.

During face scanning, if a face is detected, step S1103 is performed; if no face is detected, step S1101 is performed and face scanning continues.

Step S1103: determine whether the system has matching facial feature data.

If a face is detected, it is determined whether the system has facial feature data corresponding to the detected facial image information. If it is determined that the system has no such facial feature data, step S1101 is performed to detect other users' faces. If it is determined that the system does have such facial feature data, step S1104 is performed.

Step S1104: determine whether face scanning is permitted.

After determining whether the system has the facial feature data, and if it does, it is determined whether face scanning is permitted. Access control over face information scanning falls into at least three categories: scannable by everyone, scannable by friends only, and scanning prohibited. If it is determined that scanning is permitted, step S1105 is performed; if it is determined that scanning is not permitted, step S1101 is performed to continue scanning other users' faces.

Step S1105: display the permission-visible information.

After determining whether face scanning is permitted, and if it is, the permission-visible information is displayed. The permission-visible information includes basic information and dynamic timeline information; the latter is one of the interaction entries, including but not limited to expressions and comments. The other main interaction entry is the message session, which records exchanges in both the virtual world and the real world.

Step S1106: determine whether there is third-party platform account information.

After the permission-visible information is displayed, it is determined whether there is third-party platform account information.

Step S1107: display the platform icons.

After determining whether there is third-party platform account information, and if there is, the platform icons are displayed.

Step S1108: determine whether indication information for expanding a platform icon is received.

After the platform icons are displayed, it is determined whether indication information for expanding a platform icon is received. The indication information includes a voice command, an instruction generated by the user through a gesture tap, an instruction generated by the user through a gaze dwell, and the like. If it is determined that the indication information for expanding the platform icon is received, step S1109 is performed.

Step S1109: surface the user information feed of the selected platform.

After determining whether the indication information for expanding the platform icon is received, and if it is, the user information feed of the selected platform is surfaced, thereby achieving information display and interaction.

The information display of this embodiment uses face scanning as the primary entry, supplemented by voice search of nickname, name, and the like, applied respectively to scenarios where the face is visible and where it is not. In the primary entry scenario, the basic flow is: scan and recognize the face, surface the recognized user's basic information and updates, and mark the icons of other social platforms from which content can be pulled, so that tapping an icon surfaces the extended content.
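The decision sequence of FIG. 11 can be sketched in a single function that reduces the flow to its branch points. The string-valued scan-permission classes come from the text; the shape of the returned view description is an illustrative assumption.

```python
def display_flow(face_detected, has_feature_data, scan_permission,
                 viewer_is_friend, third_party_accounts):
    """One pass of the FIG. 11 flow, reduced to its decisions.

    `scan_permission` is one of 'everyone', 'friends', 'forbidden'
    (the three scan-permission classes in the text); the return value
    describes what the glasses would render and is an illustrative
    assumption."""
    # S1102/S1103: no face, or no registered feature data -> keep scanning.
    if not face_detected or not has_feature_data:
        return {"action": "keep_scanning"}
    # S1104: scan-permission check.
    allowed = (scan_permission == "everyone"
               or (scan_permission == "friends" and viewer_is_friend))
    if not allowed:
        return {"action": "keep_scanning"}
    # S1105: permission-visible info; S1106/S1107: third-party icons.
    view = {"action": "show", "panels": ["basic_info", "timeline"]}
    if third_party_accounts:
        view["platform_icons"] = list(third_party_accounts)
    return view

view = display_flow(True, True, "friends", True, ["platform_x"])
```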

The information generation and publishing module of the embodiment of the present invention is described below.

The information produced in this embodiment depends on the hardware used. Taking AR glasses as an example, the content mainly includes voice, pictures, and video, and also includes information within the reach of AR device capabilities, such as panoramic pictures and three-dimensional models.

The interaction information may be preset expressions, comments, and the like. One particular kind of interaction exploits the face recognition capability: the system can add a static or dynamic three-dimensional decoration at the recognized facial contour.

The publishing entries of this embodiment mainly include personal updates and sessions with others. Access control can be applied to the publishing of personal updates, while sessions with others include both parties' virtual-world exchanges and recorded real-world information. The access control falls into at least four categories: visible to everyone, visible to friends, visible to specific friends, and visible to oneself only. People have different privacy needs: those willing to be seen can use the broadest visibility, while those highly concerned about privacy can set their account to friends-only visibility, preventing unfamiliar people from prying into their information.
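The four publishing-visibility classes can be sketched as an audience check evaluated per viewer. The class names come from the text; the argument shapes (a friend set plus an explicit allow-list for "specific friends") are illustrative assumptions.

```python
def post_visible_to(audience, viewer, owner, friends, allowed_friends=()):
    """Decide whether `viewer` may see a published update.

    `audience` is one of the four control classes in the text:
    'everyone', 'friends', 'specific_friends', 'self'."""
    if viewer == owner:
        return True          # the owner always sees their own post
    if audience == "everyone":
        return True
    if audience == "friends":
        return viewer in friends
    if audience == "specific_friends":
        return viewer in allowed_friends
    return False             # 'self': visible to the owner only

friends = {"bob", "carol"}
```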

The AR glasses of this embodiment run a standalone AR application independent of other platforms; information input and output are both completed on the glasses platform. Unlike the virtual accounts of existing social systems, the interaction entry is mainly based on face recognition, which simplifies the process of information interaction.

本发明实施例的应用环境可以但不限于参照上述实施例中的应用环境，本实施例中对此不再赘述。本发明实施例提供了用于实施上述信息交互方法的一种可选的具体应用。The application environment of this embodiment of the present invention may refer to, but is not limited to, the application environment in the foregoing embodiments, and details are not repeated here. This embodiment of the present invention provides an optional specific application for implementing the foregoing information interaction method.

该实施例利用AR眼镜的前置摄像头可以进行自动的人脸识别代替虚拟账号查找，利用眼镜的虚实叠加能力以AR的形式在真人旁边叠加展示识别到的用户的资料和社交信息，进而在现实中以及社交系统内进行互动。这就提供了一种基于人脸而非虚拟账号的新式AR社交系统。In this embodiment, the front camera of the AR glasses can perform automatic face recognition instead of virtual account lookup, and the virtual-real overlay capability of the glasses is used to display the recognized user's profile and social information in AR form next to the real person, enabling interaction both in reality and within the social system. This provides a new AR social system based on faces rather than virtual accounts.

现有的社交系统，交互以现实中不在一起的情形为主，人与人之间社交诸如熟人间遇事会点对点发送消息，另外还有不定时地查看时间轴信息流来发现兴趣信息和互动。而该实施例的AR社交系统自动识别人脸，更多的使用场景是在现实相遇时触发，自动展现对方信息来了解其动态。好友场景还可进一步展示双方的历史会话来唤起双方交流记忆。而非好友场景，则因了解了对方的动态及信息，更容易找到比较自然地开始交流的开场话题。In existing social systems, interaction mainly targets situations where people are not physically together: acquaintances send point-to-point messages when something comes up, and people check timeline feeds from time to time to discover interesting information and interact. The AR social system of this embodiment, by contrast, automatically recognizes faces, and more of its usage scenarios are triggered when people meet in reality, automatically presenting the other party's information so that their recent activity can be understood. In a friend scenario, the two parties' historical conversations can further be displayed to evoke shared memories of past exchanges. In a non-friend scenario, knowing the other party's activity and information makes it easier to find an opening topic with which to start a conversation naturally.

对于好友会话记录，AR社交系统不只有虚拟世界的交流，还可以包含现实世界的记忆。相比现有的设备，AR眼镜能方便地记录现实中的语音、图像和视频，这样就把虚拟世界与现实世界的互动全部录入了AR社交系统，做到了虚实并存、丰富了系统的信息种类。并且对于AR眼镜录入的图像、视频内容，用户所见即所得，不必像移动手机平台一样在记录时来回在屏幕与现实之间切换注意力。而在回顾时，体验到的正是当初记录时的视角，带来更真实的感觉。As for friend conversation records, the AR social system contains not only virtual-world exchanges but also real-world memories. Compared with existing devices, AR glasses can conveniently record real-world voice, images, and video, so interactions from both the virtual world and the real world are recorded into the AR social system, allowing the virtual and the real to coexist and enriching the system's information types. Moreover, for images and video recorded by the AR glasses, what the user sees is what is captured; unlike on a mobile phone platform, there is no need to switch attention back and forth between the screen and reality while recording. When reviewing, the experience reproduces the viewpoint of the original recording, bringing a more realistic feeling.

图12是根据本发明实施例的一种基本信息展示的示意图。如图12所示，AR眼镜扫描到现实世界的人脸，识别后自动在人脸一侧浮现出用户基本信息，比如，用户的姓名“Melissa Banks”，用户的家乡“Hometown:Chicaga”，用户的生日“Birthday:May,23,1987”等信息叠加在现实场景中，还可以显示添加朋友“Add friend”和消息“Message”。其中，只有用户基本信息为虚拟，其它景象为现实场景中真实存在的景象，从而达到了虚实结合的目的。FIG. 12 is a schematic diagram of a basic information display according to an embodiment of the present invention. As shown in FIG. 12, the AR glasses scan a real-world face and, after recognition, basic user information automatically appears beside the face. For example, the user's name "Melissa Banks", the user's hometown "Hometown: Chicaga", the user's birthday "Birthday: May, 23, 1987", and other information are superimposed on the real scene, and an "Add friend" option and a "Message" option can also be displayed. Only the basic user information is virtual; everything else is the real scene, thereby achieving the purpose of combining the virtual and the real.

图13是根据本发明实施例的另一种基本信息展示的示意图。如图13所示，在屏幕上进行向下翻动或点击操作，可以展现出个人动态的信息，其中，个人动态的信息是叠加在现实世界中的虚拟内容，系统的个人动态以时间轴为序依次浮现，第三方可用平台聚合信息可以在置底处图标展示。点击平台图标后则在上图中切换为该平台时间轴信息流。FIG. 13 is a schematic diagram of another basic information display according to an embodiment of the present invention. As shown in FIG. 13, swiping down or clicking on the screen reveals personal dynamic information, which is virtual content superimposed on the real world. The system's personal dynamics appear in sequence along a timeline, and aggregated information from available third-party platforms can be shown as icons pinned at the bottom. After a platform icon is clicked, the view above switches to that platform's timeline feed.

在系统的个人动态中,允许有权限的用户进行互动,包括但不限于表情与评论。表情指单个的不含文字的静态或动态或三维预置图片。评论则为富媒体,包括文字、语音、图片等用户自由组织的信息。In the personal dynamics of the system, authorized users are allowed to interact, including but not limited to expressions and comments. An expression refers to a single static or dynamic or 3D preset image without text. Comments are rich media, including text, voice, pictures and other freely organized information.

图14是根据本发明实施例的一种AR信息展示的示意图。如图14所示，AR的信息展现方式为球面，从而提高了信息展示的趣味性。FIG. 14 is a schematic diagram of an AR information display according to an embodiment of the present invention. As shown in FIG. 14, the AR information is presented on a spherical surface, which makes the information display more engaging.

图15是根据本发明实施例的另一种AR信息展示的示意图。如图15所示，AR的展现方式可以为三维螺旋、圆柱等三维图形，从而提高了信息展示的趣味性。FIG. 15 is a schematic diagram of another AR information display according to an embodiment of the present invention. As shown in FIG. 15, the AR information may be presented as three-dimensional graphics such as a three-dimensional spiral or a cylinder, which makes the information display more engaging.

需要说明的是，无论是个人动态，或者用户的互动信息，在AR的世界里，除了上述普通二维布局外，充分利用AR的三维展现能力，给用户提供更有趣的展现方式。包括但不限于三维螺旋、球面、圆柱。It should be noted that, for both personal dynamics and users' interaction information, the AR world can, in addition to the ordinary two-dimensional layout described above, make full use of AR's three-dimensional presentation capability to provide users with more engaging presentation forms, including but not limited to a three-dimensional spiral, a sphere, and a cylinder.
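As a minimal illustration of one such three-dimensional layout, the following sketch places N information cards along a vertical helix (the spiral form mentioned above). The function name and parameters (`radius`, `pitch`, `turns`) are assumptions for illustration only; the patent does not specify any particular parameterization.

```python
import math

def helix_layout(n, radius=1.0, pitch=0.05, turns=2.0):
    """Return (x, y, z) anchor points for n information cards on a helix."""
    points = []
    for i in range(n):
        t = (i / max(n - 1, 1)) * turns * 2 * math.pi  # angle of the i-th card
        points.append((radius * math.cos(t),
                       radius * math.sin(t),
                       pitch * t))                     # height grows with angle
    return points
```

A spherical or cylindrical layout would differ only in how the angle is mapped to coordinates; the cards themselves would then be rendered at these anchor points in the AR scene.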

该实施例抛开了虚拟账号，提供了一种基于现实中人脸的增强现实社交新玩法。在熟人圈子中，常见的如同学、朋友、同事甚至家人，在见面、偶遇、擦肩而过等现实中碰面时刻，通常不会去打开社交软件去搜索和了解对方的动态，以及上次交流过的内容。该实施例则提供了一种自然便捷的方式，相遇时眼镜内自动显示对方信息、动态和彼此的交流会话。一方面，这些信息本身有着唤起此前交流记忆、了解对方最新动态的作用，也给现实中的交流提供了更多的话题和背景信息。另一方面，现实中的重要交流，也能够被回馈到系统中，作为记忆保存。This embodiment sets virtual accounts aside and provides a new augmented-reality social experience based on real-world faces. In acquaintance circles, such as classmates, friends, colleagues, and even family members, when people meet in reality, whether arranged, by chance, or in passing, they usually do not open social software to search for and learn about the other party's recent activity or what was discussed last time. This embodiment provides a natural and convenient way: when two people meet, the glasses automatically display the other party's information, dynamics, and their mutual conversation history. On one hand, this information evokes memories of previous exchanges and reveals the other party's latest activity, providing more topics and background information for real-world conversation. On the other hand, important real-world exchanges can also be fed back into the system and preserved as memories.

在陌生人情况下，本系统可能会对生人变熟带来有益的促进效果。而系统再提供了对被扫描者的通知，让被扫描者了解谁在扫描自己，预计会促进产生更多社交行为。In the case of strangers, this system may have a beneficial effect in helping strangers become acquainted. The system further notifies the scanned person, letting them know who is scanning them, which is expected to promote more social behavior.

需要说明的是，该实施例最适用于拥有前置摄像头的AR眼镜设备，用户携带、操作方便，提升用户的体验性能，但本发明实施例并不限于AR眼镜设备，拥有摄像头的设备都可以适用，但在易用性和交互操作方式上有区别。It should be noted that this embodiment is best suited to AR glasses devices with a front camera, which are easy to carry and operate and improve the user experience. However, this embodiment of the present invention is not limited to AR glasses devices; any device with a camera is applicable, though with differences in ease of use and interaction.

需要说明的是，对于前述的各方法实施例，为了简单描述，故将其都表述为一系列的动作组合，但是本领域技术人员应该知悉，本发明并不受所描述的动作顺序的限制，因为依据本发明，某些步骤可以采用其他顺序或者同时进行。其次，本领域技术人员也应该知悉，说明书中所描述的实施例均属于优选实施例，所涉及的动作和模块并不一定是本发明所必须的。It should be noted that, for ease of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described action sequence, because according to the present invention, some steps may be performed in another order or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

通过以上的实施方式的描述，本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现，当然也可以通过硬件，但很多情况下前者是更佳的实施方式。基于这样的理解，本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中，包括若干指令用以使得一台终端设备(可以是手机，计算机，服务器，或者网络设备等)执行本发明各个实施例所述的方法。Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the method according to the foregoing embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.

根据本发明实施例的另一方面，还提供了一种用于实施上述信息交互方法的信息交互装置，该信息交互装置包括：一个或多个处理器，以及一个或多个存储指令的存储器，其中，指令由处理器执行，处理器所要执行的程序单元包括以下单元。图16是根据本发明实施例的一种信息交互装置的示意图。如图16所示，该信息交互装置可以包括：第一获取单元10、第二获取单元20、接收单元30和发布单元40。According to another aspect of the embodiments of the present invention, an information interaction apparatus for implementing the foregoing information interaction method is further provided. The information interaction apparatus includes one or more processors and one or more memories storing instructions, where the instructions are executed by the processors, and the program units to be executed by the processors include the following units. FIG. 16 is a schematic diagram of an information interaction apparatus according to an embodiment of the present invention. As shown in FIG. 16, the information interaction apparatus may include a first obtaining unit 10, a second obtaining unit 20, a receiving unit 30, and a publishing unit 40.

第一获取单元10,被设置为获取第一目标对象的面部信息。The first obtaining unit 10 is configured to acquire facial information of the first target object.

第二获取单元20,被设置为根据第一目标对象的面部信息获取第一目标对象的目标信息,其中,目标信息用于指示第一目标对象的社交行为。The second obtaining unit 20 is configured to acquire target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate the social behavior of the first target object.

接收单元30,被设置为接收第二目标对象根据目标信息发送的交互信息,其中,交互信息用于指示第二目标对象与第一目标对象进行交互。The receiving unit 30 is configured to receive interaction information that is sent by the second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object.

发布单元40,被设置为发布交互信息。The publishing unit 40 is arranged to publish the interaction information.
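The four program units above can be sketched as one small class; this is only an illustrative sketch of how they could fit together, and the class name, method names, and data shapes are assumptions rather than the patent's API.

```python
class InfoInteractionApparatus:
    def __init__(self, face_db):
        self.face_db = face_db   # facial info -> target (social) information
        self.feed = []           # interaction information published so far
        self.pending = None      # last received interaction information

    def acquire_face(self, frame):           # first obtaining unit 10
        return frame.get("face")

    def acquire_target_info(self, face):     # second obtaining unit 20
        return self.face_db.get(face)

    def receive_interaction(self, info):     # receiving unit 30
        self.pending = info
        return self.pending

    def publish(self):                       # publishing unit 40
        self.feed.append(self.pending)
        return self.feed
```

Usage mirrors the method steps S202 through S208 referenced later in this section: acquire the face, look up the target information, receive the second target object's interaction information, then publish it.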

此处需要说明的是，上述第一获取单元10、第二获取单元20、接收单元30和发布单元40可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述单元实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。 It should be noted that the foregoing first obtaining unit 10, second obtaining unit 20, receiving unit 30, and publishing unit 40 may run in a terminal as part of the apparatus, and the functions implemented by the foregoing units may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，接收单元30包括：第一接收模块，被设置为接收第二目标对象根据目标信息发送的在现实场景下的真实交互信息；和/或第二接收模块，被设置为接收第二目标对象根据目标信息发送的在虚拟场景下的虚拟交互信息。Optionally, the receiving unit 30 includes: a first receiving module, configured to receive real interaction information in a real scene sent by the second target object according to the target information; and/or a second receiving module, configured to receive virtual interaction information in a virtual scene sent by the second target object according to the target information.

此处需要说明的是，上述第一接收模块和第二接收模块可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述模块实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing first receiving module and second receiving module may run in a terminal as part of the apparatus, and the functions implemented by the foregoing modules may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，该信息交互装置还包括：第一存储单元，被设置为在接收第二目标对象根据目标信息发送的在现实场景下的真实交互信息之后，存储真实交互信息至预设存储位置；和/或第二存储单元，被设置为在接收第二目标对象根据目标信息发送的在虚拟场景下的虚拟交互信息之后，存储虚拟交互信息至预设存储位置。Optionally, the information interaction apparatus further includes: a first storage unit, configured to store the real interaction information to a preset storage location after the real interaction information in the real scene sent by the second target object according to the target information is received; and/or a second storage unit, configured to store the virtual interaction information to a preset storage location after the virtual interaction information in the virtual scene sent by the second target object according to the target information is received.

此处需要说明的是，上述第一存储单元和第二存储单元可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述模块实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing first storage unit and second storage unit may run in a terminal as part of the apparatus, and the functions implemented by the foregoing units may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地,上述真实交互信息至少包括以下一种或多种:在现实场景下的语音信息;在现实场景下的图像信息;在现实场景下的视频信息。Optionally, the foregoing real interaction information includes at least one or more of the following: voice information in a real scene; image information in a real scene; and video information in a real scene.

可选地，第一获取单元10被设置为扫描第一目标对象的面部，得到第一目标对象的面部信息；所述装置还包括：显示单元，被设置为在根据所述第一目标对象的面部信息获取所述第一目标对象的目标信息之后，在现实场景的预设空间位置显示目标信息。Optionally, the first obtaining unit 10 is configured to scan the face of the first target object to obtain the facial information of the first target object. The apparatus further includes a display unit, configured to display the target information at a preset spatial position in the real scene after the target information of the first target object is acquired according to the facial information of the first target object.

此处需要说明的是，上述第一获取单元10和显示单元可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述模块实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing first obtaining unit 10 and display unit may run in a terminal as part of the apparatus, and the functions implemented by the foregoing units may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，第二获取单元20包括：第一确定模块、第二确定模块和显示模块。其中，第一确定模块，被设置为确定第一目标对象在现实场景中的当前空间位置；第二确定模块，被设置为根据当前空间位置确定目标信息在现实场景中的显示空间位置；显示模块，被设置为在显示空间位置显示目标信息。Optionally, the second obtaining unit 20 includes a first determining module, a second determining module, and a display module. The first determining module is configured to determine the current spatial position of the first target object in the real scene; the second determining module is configured to determine, according to the current spatial position, the display spatial position of the target information in the real scene; and the display module is configured to display the target information at the display spatial position.

此处需要说明的是，上述第一确定模块、第二确定模块和显示模块可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述模块实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing first determining module, second determining module, and display module may run in a terminal as part of the apparatus, and the functions implemented by the foregoing modules may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，显示模块被设置为执行以下至少之一：在目标信息包括用户资料信息时，在第一显示空间位置显示第一目标对象的用户资料信息；在目标信息包括个人动态信息时，在第二显示空间位置显示第一目标对象的个人动态信息；在目标信息包括扩展信息时，在第三显示空间位置显示第一目标对象的扩展信息；在目标信息包括历史交互信息时，在第四显示空间位置显示第二目标对象与第一目标对象在历史交互过程中产生的历史交互信息。Optionally, the display module is configured to perform at least one of the following: when the target information includes user profile information, displaying the user profile information of the first target object at a first display spatial position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at a second display spatial position; when the target information includes extended information, displaying the extended information of the first target object at a third display spatial position; and when the target information includes historical interaction information, displaying, at a fourth display spatial position, the historical interaction information generated by the second target object and the first target object during historical interaction.
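The way the two determining modules derive the four display spatial positions from the face's current position can be sketched as follows. The offsets and slot names are purely illustrative assumptions; the patent leaves the layout unspecified.

```python
def display_positions(face_pos, offset=0.5):
    """face_pos: (x, y, z) of the recognized face in the real scene."""
    x, y, z = face_pos
    return {
        "profile":  (x + offset, y + 0.25, z),  # first display spatial position
        "dynamics": (x + offset, y, z),         # second display spatial position
        "extended": (x + offset, y - 0.25, z),  # third display spatial position
        "history":  (x - offset, y, z),         # fourth display spatial position
    }
```

The key point the sketch captures is that every slot is computed relative to the face's current spatial position, so the overlaid information follows the person as they move through the scene.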

图17是根据本发明实施例的另一种信息交互装置的示意图。如图17所示,该信息交互装置可以包括:第一获取单元10、第二获取单元20、接收单元30、发布单元40和显示单元50。其中,显示单元50包括:第一判断模块51、第二判断模块52和显示模块53。17 is a schematic diagram of another information interaction apparatus according to an embodiment of the present invention. As shown in FIG. 17, the information interaction apparatus may include: a first acquisition unit 10, a second acquisition unit 20, a receiving unit 30, a distribution unit 40, and a display unit 50. The display unit 50 includes a first determining module 51, a second determining module 52, and a display module 53.

需要说明的是，该实施例的第一获取单元10、第二获取单元20、接收单元30和发布单元40与图16所示实施例的信息交互装置中的作用相同，此处不再赘述。It should be noted that the first obtaining unit 10, the second obtaining unit 20, the receiving unit 30, and the publishing unit 40 of this embodiment serve the same roles as those in the information interaction apparatus of the embodiment shown in FIG. 16, and details are not repeated here.

显示单元50,被设置为在根据所述第一目标对象的面部信息获取所述第一目标对象的目标信息之后,在现实场景的预设空间位置显示目标信息。The display unit 50 is configured to display the target information in a preset spatial position of the real scene after acquiring the target information of the first target object according to the facial information of the first target object.

第一判断模块51,被设置为在扫描到第一目标对象的面部的情况下,判断服务器中是否存储与第一目标对象的面部信息相匹配的面部特征数据。The first judging module 51 is configured to determine whether the face feature data matching the face information of the first target object is stored in the server in the case of scanning the face of the first target object.

第二判断模块52,被设置为在判断出服务器中存储与第一目标对象的面部信息相匹配的面部特征数据时,判断第一目标对象的面部扫描权限是否为允许扫描。The second determining module 52 is configured to determine whether the face scanning authority of the first target object is allowed to scan when it is determined that the face feature data matching the face information of the first target object is stored in the server.

显示模块53,被设置为在判断出第一目标对象的面部扫描权限为允许时,在预设空间位置显示可见信息,其中,可见信息至少包括第一目标对象的用户资料信息。The display module 53 is configured to display visible information at a preset spatial location when it is determined that the facial scanning authority of the first target object is permitted, wherein the visible information includes at least user profile information of the first target object.
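The flow through the two judging modules and the display module above can be sketched as a single check. This is an illustrative sketch in which the server is stood in for by a dictionary; the record fields are assumptions, not the disclosed data model.

```python
def scan_face(face, server):
    """server: facial feature key -> stored record (a stand-in for the backend)."""
    record = server.get(face)                # first judging module 51: match?
    if record is None:
        return None                          # no matching facial feature data
    if not record.get("allow_scan", False):  # second judging module 52: permitted?
        return None                          # scan permission denied
    return {"profile": record["profile"]}    # display module 53: visible info
```

Only when both checks pass does any visible information (at minimum, the user profile information) reach the display step.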

此处需要说明的是，上述第一判断模块51、第二判断模块52和显示模块53可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述模块实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing first judging module 51, second judging module 52, and display module 53 may run in a terminal as part of the apparatus, and the functions implemented by the foregoing modules may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，可见信息包括第一目标对象的扩展信息，显示模块53包括：判断子模块、第一接收子模块和第一展示子模块。其中，判断子模块，被设置为判断第一目标对象是否具有第三方平台的账户信息，其中，扩展信息包括账户信息；第一接收子模块，被设置为在判断出第一目标对象具有第三方平台的账户信息时，接收用于指示展示与账户信息对应的扩展内容的第一展示指令；第一展示子模块，被设置为在接收第一展示指令之后，在预设空间位置展示扩展内容。Optionally, the visible information includes the extended information of the first target object, and the display module 53 includes a judging submodule, a first receiving submodule, and a first presenting submodule. The judging submodule is configured to judge whether the first target object has account information of a third-party platform, where the extended information includes the account information; the first receiving submodule is configured to, when it is judged that the first target object has account information of a third-party platform, receive a first presentation instruction for instructing presentation of extended content corresponding to the account information; and the first presenting submodule is configured to present the extended content at the preset spatial position after the first presentation instruction is received.

此处需要说明的是，上述判断子模块、第一接收子模块和第一展示子模块可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述模块实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing judging submodule, first receiving submodule, and first presenting submodule may run in a terminal as part of the apparatus, and the functions implemented by the foregoing submodules may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地,可见信息包括第一目标对象的个人动态信息,显示模块53包括:第二接收子模块和第二展示子模块。其中,第二接收子模块被设置为接收用于指示展示个人动态信息的第二展示指令;第二展示子模块被设置为在接收第二展示指令之后,在预设空间位置展示个人动态信息。Optionally, the visible information includes personal dynamic information of the first target object, and the display module 53 includes: a second receiving submodule and a second displaying submodule. The second receiving sub-module is configured to receive a second display instruction for indicating the display of the personal dynamic information; the second display sub-module is configured to display the personal dynamic information at the preset spatial location after receiving the second display instruction.

此处需要说明的是，上述第二接收子模块和第二展示子模块可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述模块实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing second receiving submodule and second presenting submodule may run in a terminal as part of the apparatus, and the functions implemented by the foregoing submodules may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，该信息交互装置还包括：第一请求单元，被设置为在获取第一目标对象的面部信息之前，向服务器发送第一请求，其中，第一请求携带与第一目标对象的面部信息相匹配的面部特征数据，服务器响应第一请求，并存储第一目标对象的面部特征数据，该装置还至少包括：第二请求单元，被设置为向服务器发送第二请求，其中，第二请求携带第一目标对象的用户资料信息，服务器响应第二请求，并存储第一目标对象的用户资料信息；和/或第三请求单元，被设置为向服务器发送第三请求，其中，第三请求携带第一目标对象的扩展信息，服务器响应第三请求，并存储第一目标对象的扩展信息。Optionally, the information interaction apparatus further includes a first requesting unit, configured to send a first request to the server before the facial information of the first target object is acquired, where the first request carries facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object. The apparatus further includes at least one of the following: a second requesting unit, configured to send a second request to the server, where the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or a third requesting unit, configured to send a third request to the server, where the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.

此处需要说明的是，上述第一请求单元、第二请求单元和第三请求单元可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述单元实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing first requesting unit, second requesting unit, and third requesting unit may run in a terminal as part of the apparatus, and the functions implemented by the foregoing units may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，第一请求单元包括：第一检测模块、第一发送模块、第三判断模块、第二检测模块、获取模块和第二发送模块。其中，第一检测模块，用于检测面部；第一发送模块，被设置为在检测到第一目标对象的面部的情况下，发出用于指示第一目标对象执行预设面部动作的指示指令，其中，第一目标对象根据指示指令执行面部动作，得到实际面部动作；第三判断模块，被设置为判断实际面部动作是否与预设面部动作相匹配；第二检测模块，被设置为在判断出实际面部动作与预设面部动作相匹配时，检测第一目标对象的面部是否为三维形态；获取模块，被设置为在检测到第一目标对象的面部为三维形态的情况下，获取第一目标对象的面部特征数据；第二发送模块，被设置为根据面部特征数据向服务器发送所述第一请求；第二获取单元20被设置为根据第一目标对象的面部信息请求服务器根据面部特征数据下发目标信息，接收目标信息。Optionally, the first requesting unit includes a first detection module, a first sending module, a third judging module, a second detection module, an acquisition module, and a second sending module. The first detection module is configured to detect a face; the first sending module is configured to, when the face of the first target object is detected, issue an instruction for instructing the first target object to perform a preset facial action, where the first target object performs a facial action according to the instruction, producing an actual facial action; the third judging module is configured to judge whether the actual facial action matches the preset facial action; the second detection module is configured to, when it is judged that the actual facial action matches the preset facial action, detect whether the face of the first target object is three-dimensional; the acquisition module is configured to acquire the facial feature data of the first target object when it is detected that the face of the first target object is three-dimensional; and the second sending module is configured to send the first request to the server according to the facial feature data. The second obtaining unit 20 is configured to, according to the facial information of the first target object, request the server to deliver the target information according to the facial feature data, and to receive the target information.
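The registration pipeline above (detect a face, prompt a preset facial action, verify the action, confirm the face is three-dimensional, then send the first request) can be sketched as one function. All names, the `"nod"` action, and the dictionary stand-in for the server are illustrative assumptions; the sketch only fixes the order and the early-exit points of the checks.

```python
def register_face(detected_face, performed_action, is_three_d, server,
                  preset_action="nod"):
    """Run the liveness checks, then store the facial feature data."""
    if detected_face is None:              # first detection module: no face found
        return "no-face"
    if performed_action != preset_action:  # third judging module: action mismatch
        return "action-mismatch"
    if not is_three_d:                     # second detection module: 3D check
        return "not-3d"
    server["features"] = detected_face     # acquisition + second sending module
    return "registered"
```

Requiring both a matched facial action and a three-dimensional face before sending the first request is what prevents registering a photograph of someone else's face.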

此处需要说明的是，上述第一检测模块、第一发送模块、第三判断模块、第二检测模块、获取模块和第二发送模块可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述模块实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing first detection module, first sending module, third judging module, second detection module, acquisition module, and second sending module may run in a terminal as part of the apparatus, and the functions implemented by the foregoing modules may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，该信息交互装置还被设置为在接收第二目标对象根据目标信息发送的交互信息之前，在第一目标对象的面部不可见的情况下，接收用于指示搜索目标信息的搜索信息，其中，用户资料信息包括搜索信息；根据搜索信息获取目标信息。Optionally, the information interaction apparatus is further configured to, before the interaction information sent by the second target object according to the target information is received, receive search information for instructing a search for the target information when the face of the first target object is not visible, where the user profile information includes the search information, and to acquire the target information according to the search information.

可选地，该信息获取装置还包括：识别单元和添加单元。其中，识别单元，被设置为在获取第一目标对象的面部信息之后，根据第一目标对象的面部信息识别第一目标对象的面部轮廓；添加单元，被设置为在面部轮廓的预设位置添加静态和/或动态的三维图像信息。Optionally, the information acquisition apparatus further includes an identification unit and an adding unit. The identification unit is configured to identify the facial contour of the first target object according to the facial information of the first target object after the facial information of the first target object is acquired; and the adding unit is configured to add static and/or dynamic three-dimensional image information at a preset position of the facial contour.

此处需要说明的是，上述识别单元和添加单元可以作为装置的一部分运行在终端中，可以通过终端中的处理器来执行上述单元实现的功能，终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices, MID)、PAD等终端设备。It should be noted that the foregoing identification unit and adding unit may run in a terminal as part of the apparatus, and the functions implemented by the foregoing units may be executed by a processor in the terminal. The terminal may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.

可选地，发布单元40被设置为执行以下至少之一：发布语音形式的交互信息；发布图片形式的交互信息，其中，图片形式的交互信息包括全景图片形式的交互信息；发布视频形式的交互信息；发布三维模型形式的交互信息。Optionally, the publishing unit 40 is configured to perform at least one of the following: publishing interaction information in voice form; publishing interaction information in picture form, where the interaction information in picture form includes interaction information in panoramic-picture form; publishing interaction information in video form; and publishing interaction information in the form of a three-dimensional model.
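The publishing unit's supported forms can be sketched as a simple validated append to a feed. The form tags (`"voice"`, `"picture"`, `"panorama"`, `"video"`, `"model3d"`) are illustrative labels chosen for this sketch, not identifiers from the disclosure.

```python
SUPPORTED_FORMS = {"voice", "picture", "panorama", "video", "model3d"}

def publish_interaction(feed, form, payload):
    """Append one piece of interaction information to the published feed."""
    if form not in SUPPORTED_FORMS:
        raise ValueError("unsupported interaction form: " + form)
    feed.append({"form": form, "payload": payload})
    return feed
```

Which forms are actually available would depend on the device's capabilities, as noted earlier for AR glasses versus other camera-equipped devices.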

需要说明的是，该实施例中的第一获取单元10可以用于执行本申请实施例中的步骤S202，该实施例中的第二获取单元20可以被设置为执行本申请实施例中的步骤S204，该实施例中的接收单元30可以被设置为执行本申请实施例中的步骤S206，该实施例中的发布单元40可以被设置为执行本申请实施例中的步骤S208。It should be noted that the first obtaining unit 10 in this embodiment may be configured to perform step S202 in the embodiments of this application, the second obtaining unit 20 in this embodiment may be configured to perform step S204 in the embodiments of this application, the receiving unit 30 in this embodiment may be configured to perform step S206 in the embodiments of this application, and the publishing unit 40 in this embodiment may be configured to perform step S208 in the embodiments of this application.

该实施例通过第一获取单元10获取第一目标对象的面部信息，通过第二获取单元20根据第一目标对象的面部信息获取第一目标对象的目标信息，其中，目标信息用于指示第一目标对象的社交行为，通过接收单元30接收第二目标对象根据目标信息发送的交互信息，其中，交互信息用于指示第二目标对象与第一目标对象进行交互，通过发布单元40发布交互信息，达到了信息交互的目的，从而实现了简化信息的交互过程的技术效果，进而解决了相关技术信息交互的过程复杂的技术问题。In this embodiment, the first obtaining unit 10 acquires the facial information of the first target object; the second obtaining unit 20 acquires the target information of the first target object according to the facial information of the first target object, where the target information is used to indicate the social behavior of the first target object; the receiving unit 30 receives the interaction information sent by the second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and the publishing unit 40 publishes the interaction information. This achieves the purpose of information interaction, thereby realizing the technical effect of simplifying the information interaction process and solving the technical problem in the related art that the information interaction process is complicated.

It should be noted here that the examples and application scenarios implemented by the above units and modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the above embodiments. It should also be noted that, as part of the apparatus, the above modules may run in the hardware environment shown in FIG. 1, and may be implemented in software or in hardware, where the hardware environment includes a network environment.

According to another aspect of the embodiments of the present invention, a terminal for implementing the above information interaction method is further provided. The terminal may be a computer terminal, and the computer terminal may be any computer terminal device in a group of computer terminals. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.

Optionally, in this embodiment, the computer terminal may be located in at least one of a plurality of network devices of a computer network.

FIG. 18 is a structural block diagram of a terminal according to an embodiment of the present invention. As shown in FIG. 18, the terminal may include one or more processors 181 (only one is shown in the figure), a memory 183, and a transmission device 185; the terminal may further include an input/output device 187.

The memory 183 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the information interaction method and apparatus in the embodiments of the present invention. The processor 181 performs various functional applications and data processing by running the software programs and modules stored in the memory 183, that is, implements the above information interaction method. The memory 183 may include a high-speed random access memory, and may further include a non-volatile memory, such as one or more magnetic storage devices, flash memories, or other non-volatile solid-state memories. In some examples, the memory 183 may further include memories remotely located with respect to the processor 181, and these remote memories may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The transmission device 185 is configured to receive or send data via a network, and may also be used for data transmission between the processor and the memory. Specific examples of the network may include wired networks and wireless networks. In one example, the transmission device 185 includes a network interface controller (NIC), which may be connected to other network devices and routers through a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 185 is a radio frequency (RF) module, which is configured to communicate with the Internet wirelessly.

Specifically, the memory 183 is configured to store an application program.

The processor 181 may call, through the transmission device 185, the application program stored in the memory 183 to perform the following steps:

acquiring facial information of a first target object;

acquiring target information of the first target object according to the facial information of the first target object, where the target information is used to indicate a social behavior of the first target object;

receiving interaction information sent by a second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and

publishing the interaction information.
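The four steps above can be sketched as a minimal flow in Python. All names here (`TargetObject`, the server dictionary, the sample facial keys and messages) are illustrative assumptions for exposition, not part of the disclosed embodiment; a real system would match facial feature data rather than use a plain string key.

```python
# Hypothetical sketch of the four-step information interaction flow
# (acquire facial info -> acquire target info -> receive interaction
# info -> publish). All identifiers are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TargetObject:
    """A participant in the interaction (first or second target object)."""
    name: str
    facial_info: str                       # stand-in for facial feature data
    published: List[str] = field(default_factory=list)

def acquire_facial_info(obj: TargetObject) -> str:
    # Step S202: acquire the facial information of the first target object.
    return obj.facial_info

def acquire_target_info(server: dict, facial_info: str) -> dict:
    # Step S204: look up target information (indicating the first target
    # object's social behavior) keyed by the facial information.
    return server.get(facial_info, {})

def receive_interaction_info(second_obj: TargetObject, target_info: dict) -> str:
    # Step S206: the second target object sends interaction information
    # based on the target information it has seen.
    return f"{second_obj.name} comments on {target_info.get('profile', '?')}"

def publish_interaction_info(first_obj: TargetObject, interaction: str) -> None:
    # Step S208: publish the interaction information.
    first_obj.published.append(interaction)

# Usage: Bob interacts with Alice based on her face-keyed target info.
server = {"face-A": {"profile": "Alice's profile"}}
alice = TargetObject("Alice", "face-A")
bob = TargetObject("Bob", "face-B")

facial = acquire_facial_info(alice)
target = acquire_target_info(server, facial)
message = receive_interaction_info(bob, target)
publish_interaction_info(alice, message)
```

The point of the sketch is only the ordering of the four steps; each function would be backed by the scanning, matching, and display machinery described in the rest of this section.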

The processor 181 is further configured to perform the following steps: receiving real interaction information in a real scene sent by the second target object according to the target information; and/or receiving virtual interaction information in a virtual scene sent by the second target object according to the target information.

The processor 181 is further configured to perform the following steps: after receiving the real interaction information in the real scene sent by the second target object according to the target information, storing the real interaction information to a preset storage location; and/or after receiving the virtual interaction information in the virtual scene sent by the second target object according to the target information, storing the virtual interaction information to the preset storage location.

The processor 181 is further configured to perform the following steps: scanning the face of the first target object to obtain the facial information of the first target object; and displaying the target information at a preset spatial position in the real scene according to the facial information of the first target object.

The processor 181 is further configured to perform the following steps: determining a current spatial position of the first target object in the real scene; determining, according to the current spatial position, a display spatial position of the target information in the real scene; and displaying the target information at the display spatial position.

The processor 181 is further configured to perform one of the following steps: when the target information includes user profile information, displaying the user profile information of the first target object at a first display spatial position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at a second display spatial position; when the target information includes extended information, displaying the extended information of the first target object at a third display spatial position; and when the target information includes historical interaction information, displaying, at a fourth display spatial position, the historical interaction information generated by the second target object and the first target object during historical interactions.
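The four display branches above amount to a mapping from information type to display spatial position. A minimal sketch, with type tags and position names assumed for illustration:

```python
# Illustrative mapping of each target-information type to its own display
# spatial position, mirroring the four branches described above.
# Both the type tags and the position strings are assumptions.

DISPLAY_POSITIONS = {
    "user_profile": "first display spatial position",
    "personal_dynamic": "second display spatial position",
    "extended": "third display spatial position",
    "history": "fourth display spatial position",
}

def display_position_for(info_type: str) -> str:
    """Return the spatial position at which the given kind of target
    information should be rendered in the real scene."""
    return DISPLAY_POSITIONS[info_type]
```

Keeping the mapping in one table makes it straightforward to add further information types without touching the rendering logic.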

The processor 181 is further configured to perform the following steps: scanning for a face; in a case where the face of the first target object is scanned, determining whether facial feature data matching the facial information of the first target object is stored in the server; if it is determined that the facial feature data matching the facial information of the first target object is stored in the server, determining whether the face scanning permission of the first target object is set to allow scanning; and if it is determined that the face scanning permission of the first target object is set to allow scanning, displaying visible information at the preset spatial position.
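The permission gate above can be sketched as a single lookup: visible information is returned only when the server holds matching facial feature data and the scan permission allows scanning. The record layout (a `scan_allowed` flag and a `visible_info` field keyed by facial data) is an assumption made for illustration:

```python
# Hypothetical permission-gated lookup for the face-scanning flow above.
# The server is modeled as a dict keyed by facial feature data.

def visible_info_on_scan(server_records: dict, facial_info: str):
    """Return the visible information to display at the preset spatial
    position, or None if the face is unknown or scanning is denied."""
    record = server_records.get(facial_info)     # matching feature data?
    if record is None:
        return None                              # no match stored on server
    if not record.get("scan_allowed", False):    # face-scanning permission
        return None
    return record["visible_info"]                # e.g. user profile info
```

Returning `None` for both failure cases keeps the caller's rendering path to a single branch, which matches the step ordering in the paragraph above.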

The processor 181 is further configured to perform the following steps: determining whether the first target object has account information of a third-party platform, where the extended information includes the account information; if it is determined that the first target object has the account information of the third-party platform, receiving a first display instruction for instructing to display extended content corresponding to the account information; and after receiving the first display instruction, displaying the extended content at the preset spatial position.

The processor 181 is further configured to perform the following steps: receiving a second display instruction for instructing to display the personal dynamic information; and after receiving the second display instruction, displaying the personal dynamic information at the preset spatial position. The processor 181 is further configured to perform the following step: before acquiring the facial information of the first target object, sending a first request to the server, where the first request carries the facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object. The processor 181 is further configured to perform at least one of the following steps: sending a second request to the server, where the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or sending a third request to the server, where the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.

The processor 181 is further configured to perform the following steps: detecting a face; in a case where the face of the first target object is detected, issuing an instruction for instructing the first target object to perform a preset facial action, where the first target object performs a facial action according to the instruction to obtain an actual facial action; determining whether the actual facial action matches the preset facial action; if it is determined that the actual facial action matches the preset facial action, detecting whether the face of the first target object is three-dimensional; in a case where the face of the first target object is detected to be three-dimensional, acquiring the facial feature data of the first target object; and sending the first request to the server according to the facial feature data, where the server responds to the first request and stores the facial feature data of the first target object. Here, acquiring the target information of the first target object according to the facial information of the first target object includes: requesting, according to the facial information of the first target object, the server to deliver the target information according to the facial feature data; and receiving the target information.
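The anti-spoofing enrollment flow above (prompt a preset facial action, verify the actual action, verify the face is three-dimensional, then register the facial feature data with the server) can be sketched as below. The detector and extractor callables are placeholders for real face-analysis components and are assumptions, not part of the disclosure:

```python
# Sketch of the liveness-checked enrollment flow. The callables stand in
# for real detectors: perform_action prompts the user and captures the
# actual facial action, action_matches compares it with the preset action,
# is_three_dimensional rejects flat photos, extract_features produces the
# facial feature data sent to the server in the first request.

def enroll_face(perform_action, action_matches, is_three_dimensional,
                extract_features, server: dict, user_id: str) -> bool:
    actual = perform_action("blink")          # instruct a preset facial action
    if not action_matches(actual, "blink"):   # actual vs. preset action
        return False
    if not is_three_dimensional():            # the face must be 3D, not a photo
        return False
    server[user_id] = extract_features()      # first request: store features
    return True
```

Both checks must pass before any feature data reaches the server, mirroring the step ordering in the paragraph above.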

The processor 181 is further configured to perform the following steps: before receiving the interaction information sent by the second target object according to the target information, in a case where the face of the first target object is not visible, receiving search information for indicating a search for the target information, where the user profile information includes the search information; and acquiring the target information according to the search information.

The processor 181 is further configured to perform the following steps: after acquiring the facial information of the first target object, identifying a facial contour of the first target object according to the facial information of the first target object; and adding static and/or dynamic three-dimensional image information at a preset position of the facial contour.

The processor 181 is further configured to perform at least one of the following steps: publishing interaction information in voice form; publishing interaction information in picture form, where the interaction information in picture form includes interaction information in the form of a panoramic picture; publishing interaction information in video form; and publishing interaction information in the form of a three-dimensional model.
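The alternative publishing forms listed above can be modeled as a simple dispatch that validates the form before appending the item to a published feed. The tag names are illustrative assumptions:

```python
# Minimal dispatch over the publishing forms described above: voice,
# picture (including panoramic picture), video, and 3D model.
# The set of tags and the feed representation are assumptions.

def publish(feed: list, kind: str, payload: str) -> None:
    """Append one published interaction item of the given form to the feed."""
    allowed = {"voice", "picture", "panorama", "video", "3d_model"}
    if kind not in allowed:
        raise ValueError(f"unsupported interaction form: {kind}")
    feed.append((kind, payload))

# Usage
feed = []
publish(feed, "panorama", "pano.jpg")
publish(feed, "voice", "hello.ogg")
```

Validating the form at the publish boundary keeps the feed homogeneous regardless of which of the optional forms an embodiment supports.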

The embodiments of the present invention provide an information interaction method: acquiring facial information of a first target object; acquiring target information of the first target object according to the facial information, where the target information is used to indicate a social behavior of the first target object; receiving interaction information sent by a second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and publishing the interaction information. The purpose of information interaction is thereby achieved, the technical effect of simplifying the information interaction process is realized, and the technical problem of the complicated information interaction process in the related art is solved.

Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, and details are not repeated here.

A person of ordinary skill in the art may understand that the structure shown in FIG. 18 is only schematic. The terminal may be a terminal device such as a smartphone (for example, an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), or a PAD. FIG. 18 does not limit the structure of the above electronic apparatus. For example, the terminal may further include more or fewer components (such as a network interface and a display device) than those shown in FIG. 18, or have a configuration different from that shown in FIG. 18.

A person of ordinary skill in the art may understand that all or some of the steps in the various methods of the above embodiments may be completed by a program instructing hardware related to a terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

The embodiments of the present invention further provide a storage medium. Optionally, in this embodiment, the storage medium may store program code, where the program code is used to perform the steps of the information interaction method provided by the above method embodiments.

Optionally, in this embodiment, the storage medium may be located in any computer terminal in a group of computer terminals in a computer network, or in any mobile terminal in a group of mobile terminals.

Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:

acquiring facial information of a first target object;

acquiring target information of the first target object according to the facial information of the first target object, where the target information is used to indicate a social behavior of the first target object;

receiving interaction information sent by a second target object according to the target information, where the interaction information is used to indicate that the second target object interacts with the first target object; and

publishing the interaction information.

Optionally, the storage medium is further configured to store program code for performing the following steps: receiving real interaction information in a real scene sent by the second target object according to the target information; and/or receiving virtual interaction information in a virtual scene sent by the second target object according to the target information.

Optionally, the storage medium is further configured to store program code for performing the following steps: after receiving the real interaction information in the real scene sent by the second target object according to the target information, storing the real interaction information to a preset storage location; and/or after receiving the virtual interaction information in the virtual scene sent by the second target object according to the target information, storing the virtual interaction information to the preset storage location.

Optionally, the storage medium is further configured to store program code for performing the following steps: scanning the face of the first target object to obtain the facial information of the first target object; and displaying the target information at a preset spatial position in the real scene according to the facial information of the first target object.

Optionally, the storage medium is further configured to store program code for performing the following steps: determining a current spatial position of the first target object in the real scene; determining, according to the current spatial position, a display spatial position of the target information in the real scene; and displaying the target information at the display spatial position.

Optionally, the storage medium is further configured to store program code for performing one of the following steps: when the target information includes user profile information, displaying the user profile information of the first target object at a first display spatial position; when the target information includes personal dynamic information, displaying the personal dynamic information of the first target object at a second display spatial position; when the target information includes extended information, displaying the extended information of the first target object at a third display spatial position; and when the target information includes historical interaction information, displaying, at a fourth display spatial position, the historical interaction information generated by the second target object and the first target object during historical interactions.

Optionally, the storage medium is further configured to store program code for performing the following steps: in a case where the face of the first target object is scanned, determining whether facial feature data matching the facial information of the first target object is stored in the server; if it is determined that the facial feature data matching the facial information of the first target object is stored in the server, determining whether the face scanning permission of the first target object is set to allow scanning; and if it is determined that the face scanning permission of the first target object is set to allow scanning, displaying visible information at the preset spatial position, where the visible information includes at least the user profile information of the first target object.

Optionally, the storage medium is further configured to store program code for performing the following steps: determining whether the first target object has account information of a third-party platform, where the extended information includes the account information; if it is determined that the first target object has the account information of the third-party platform, receiving a first display instruction for instructing to display extended content corresponding to the account information; and after receiving the first display instruction, displaying the extended content at the preset spatial position.

Optionally, the storage medium is further configured to store program code for performing the following steps: receiving a second display instruction for instructing to display the personal dynamic information; and after receiving the second display instruction, displaying the personal dynamic information at the preset spatial position.

Optionally, the storage medium is further configured to store program code for performing the following step: before acquiring the facial information of the first target object, sending a first request to the server, where the first request carries the facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object. The storage medium is further configured to store program code for performing at least one of the following steps: sending a second request to the server, where the second request carries the user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or sending a third request to the server, where the third request carries the extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.

Optionally, the storage medium is further configured to store program code for performing the following steps: in a case where the face of the first target object is detected, issuing an instruction for instructing the first target object to perform a preset facial action, where the first target object performs a facial action according to the instruction to obtain an actual facial action; determining whether the actual facial action matches the preset facial action; if it is determined that the actual facial action matches the preset facial action, detecting whether the face of the first target object is three-dimensional; in a case where the face of the first target object is detected to be three-dimensional, acquiring the facial feature data of the first target object; and sending the first request to the server according to the facial feature data, where the server responds to the first request and stores the facial feature data of the first target object. Here, acquiring the target information of the first target object according to the facial information of the first target object includes: requesting, according to the facial information of the first target object, the server to deliver the target information according to the facial feature data; and receiving the target information.

Optionally, the storage medium is further configured to store program code for performing the following steps: before receiving the interaction information sent by the second target object according to the target information, in a case where the face of the first target object is not visible, receiving search information for indicating a search for the target information, where the user profile information includes the search information; and acquiring the target information according to the search information.

Optionally, the storage medium is further configured to store program code for performing the following steps: after acquiring the facial information of the first target object, identifying a facial contour of the first target object according to the facial information of the first target object; and adding static and/or dynamic three-dimensional image information at a preset position of the facial contour.

Optionally, the storage medium is further configured to store program code for performing the following steps: publishing interaction information in voice form; publishing interaction information in picture form, where the interaction information in picture form includes interaction information in the form of a panoramic picture; publishing interaction information in video form; and publishing interaction information in the form of a three-dimensional model.

Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, and details are not repeated here.

Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

The information interaction method, apparatus, and storage medium according to the present invention have been described above by way of example with reference to the accompanying drawings. However, a person skilled in the art should understand that various improvements may be made to the information interaction method, apparatus, and storage medium proposed by the present invention without departing from the content of the present invention. Therefore, the protection scope of the present invention should be determined by the content of the appended claims.

The serial numbers of the above embodiments of the present invention are merely for description and do not represent the superiority or inferiority of the embodiments.

If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.

In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For a part that is not detailed in a certain embodiment, reference may be made to the related descriptions of other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a division of logical functions, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.

The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

The foregoing descriptions are merely preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.

Industrial applicability

In the embodiments of the present invention, facial information of a first target object is acquired; target information of the first target object is acquired according to the facial information, the target information being used to indicate a social behavior of the first target object; interaction information sent by a second target object according to the target information is received, the interaction information being used to indicate that the second target object interacts with the first target object; and the interaction information is published. Because the target information of the first target object is obtained from its facial information, and the interaction entry is based mainly on facial information rather than on the virtual accounts of existing social systems, the process of information interaction is simplified and the purpose of information interaction is achieved. This attains the technical effect of simplifying the information interaction process and solves the technical problem in the related art that the information interaction process is complex.
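The four steps summarized above (acquire facial information, look up target information, receive interaction information, publish it) can be sketched in Python. This is only an illustration of the claimed flow; the patent prescribes no implementation, and every name below (FaceIndex, interaction_flow, the dictionary keys) is a hypothetical choice made for this sketch.

```python
# Illustrative sketch of the claimed information-interaction flow.
# All identifiers are hypothetical; the patent does not prescribe
# any concrete data structures or APIs.

class FaceIndex:
    """Maps facial feature data (here, an opaque key) to a user's
    target information, standing in for the server-side store."""

    def __init__(self):
        self._by_face = {}  # facial feature key -> target information

    def register(self, face_key, target_info):
        self._by_face[face_key] = target_info

    def lookup(self, face_key):
        # Step 2: obtain target information from facial information.
        return self._by_face.get(face_key)


def interaction_flow(index, face_key, make_interaction):
    """Steps 2-4 of the summarized method, assuming face_key was
    already extracted from a face scan (step 1)."""
    target_info = index.lookup(face_key)            # step 2
    if target_info is None:
        return None                                 # face not registered
    interaction = make_interaction(target_info)     # step 3: second object reacts
    return {"target": target_info,                  # step 4: publish both
            "interaction": interaction}


index = FaceIndex()
index.register("face-A", {"user": "first_target", "social": "public"})
result = interaction_flow(
    index, "face-A",
    lambda info: {"type": "greeting", "to": info["user"]})
```

Here the second target object is modeled as a callback that produces interaction information from the target information it was shown, which mirrors the claim's "interaction information sent according to the target information".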

Claims (22)

1. An information interaction method, comprising:
acquiring facial information of a first target object;
acquiring target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object;
receiving interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and
publishing the interaction information.

2. The method according to claim 1, wherein receiving the interaction information sent by the second target object according to the target information comprises:
receiving real interaction information in a real scene sent by the second target object according to the target information; and/or
receiving virtual interaction information in a virtual scene sent by the second target object according to the target information.

3. The method according to claim 2, wherein:
after receiving the real interaction information in the real scene sent by the second target object according to the target information, the method further comprises: storing the real interaction information to a preset storage location; and/or
after receiving the virtual interaction information in the virtual scene sent by the second target object according to the target information, the method further comprises: storing the virtual interaction information to the preset storage location.
4. The method according to claim 2, wherein the real interaction information comprises at least one or more of the following:
voice information in the real scene;
image information in the real scene;
video information in the real scene.

5. The method according to claim 1, wherein:
acquiring the facial information of the first target object comprises: scanning a face of the first target object to obtain the facial information of the first target object; and
after acquiring the target information of the first target object according to the facial information of the first target object, the method further comprises: displaying the target information at a preset spatial position in a real scene.

6. The method according to claim 5, wherein displaying the target information at the preset spatial position in the real scene comprises:
determining a current spatial position of the first target object in the real scene;
determining, according to the current spatial position, a display spatial position of the target information in the real scene; and
displaying the target information at the display spatial position.
7. The method according to claim 6, wherein displaying the target information at the display spatial position comprises at least one or more of the following:
when the target information comprises user profile information, displaying the user profile information of the first target object at a first display spatial position;
when the target information comprises personal dynamic information, displaying the personal dynamic information of the first target object at a second display spatial position;
when the target information comprises extended information, displaying the extended information of the first target object at a third display spatial position;
when the target information comprises historical interaction information, displaying, at a fourth display spatial position, the historical interaction information generated by the second target object and the first target object during historical interaction.
8. The method according to claim 5, wherein displaying the target information at the preset spatial position in the real scene comprises:
when the face of the first target object is scanned, determining whether facial feature data matching the facial information of the first target object is stored in a server;
if it is determined that the facial feature data matching the facial information of the first target object is stored in the server, determining whether a face scanning permission of the first target object allows scanning; and
if it is determined that the face scanning permission of the first target object allows scanning, displaying, at the preset spatial position, visible information of the first target object within the permission scope, wherein the visible information comprises at least user profile information of the first target object.
9. The method according to claim 8, wherein the visible information comprises extended information of the first target object, and displaying, at the preset spatial position, the visible information of the first target object within the permission scope comprises:
determining whether the first target object has account information of a third-party platform, wherein the extended information comprises the account information;
if it is determined that the first target object has the account information of the third-party platform, receiving a first display instruction for indicating display of extended content corresponding to the account information; and
after receiving the first display instruction, displaying, at the preset spatial position, the extended content within the permission scope.

10. The method according to claim 8, wherein the visible information comprises personal dynamic information of the first target object, and displaying, at the preset spatial position, the visible information of the first target object within the permission scope comprises:
receiving a second display instruction for indicating display of the personal dynamic information; and
after receiving the second display instruction, displaying, at the preset spatial position, the personal dynamic information within the permission scope.
11. The method according to claim 1, wherein before the facial information of the first target object is acquired, a first request is sent to a server, the first request carrying facial feature data matching the facial information of the first target object, and the server responds to the first request and stores the facial feature data of the first target object; and the method further comprises:
sending a second request to the server, wherein the second request carries user profile information of the first target object, and the server responds to the second request and stores the user profile information of the first target object; and/or
sending a third request to the server, wherein the third request carries extended information of the first target object, and the server responds to the third request and stores the extended information of the first target object.
12. The method according to claim 11, wherein sending the first request to the server comprises:
when the face of the first target object is detected, issuing an instruction for instructing the first target object to perform a preset facial action, wherein the first target object performs a facial action according to the instruction to produce an actual facial action;
determining whether the actual facial action matches the preset facial action;
if it is determined that the actual facial action matches the preset facial action, detecting whether the face of the first target object is in a three-dimensional form;
when it is detected that the face of the first target object is in the three-dimensional form, acquiring the facial feature data of the first target object; and
sending the first request to the server according to the facial feature data;
wherein acquiring the target information of the first target object according to the facial information of the first target object comprises: requesting, according to the facial information of the first target object, the server to deliver the target information according to the facial feature data; and receiving the target information.
13. The method according to claim 11, wherein before the interaction information sent by the second target object according to the target information is received, the method further comprises:
when the face of the first target object is not visible, receiving search information for instructing a search for the target information, wherein the user profile information comprises the search information; and
acquiring the target information according to the search information.

14. The method according to any one of claims 1 to 13, wherein after the facial information of the first target object is acquired, the method further comprises:
identifying a facial contour of the first target object according to the facial information of the first target object; and
adding static and/or dynamic three-dimensional image information at a preset position of the facial contour.

15. The method according to any one of claims 1 to 13, wherein publishing the interaction information comprises at least one or more of the following:
publishing interaction information in voice form;
publishing interaction information in picture form, wherein the interaction information in picture form comprises interaction information in panoramic-picture form;
publishing interaction information in video form;
publishing interaction information of a three-dimensional model.
16. An information interaction apparatus, comprising one or more processors and one or more memories storing instructions, wherein the instructions are executed by the processors, and the program units to be executed by the processors comprise:
a first acquiring unit, configured to acquire facial information of a first target object;
a second acquiring unit, configured to acquire target information of the first target object according to the facial information of the first target object, wherein the target information is used to indicate a social behavior of the first target object;
a receiving unit, configured to receive interaction information sent by a second target object according to the target information, wherein the interaction information is used to indicate that the second target object interacts with the first target object; and
a publishing unit, configured to publish the interaction information.

17. The apparatus according to claim 16, wherein the receiving unit comprises:
a first receiving module, configured to receive real interaction information in a real scene sent by the second target object according to the target information; and/or
a second receiving module, configured to receive virtual interaction information in a virtual scene sent by the second target object according to the target information.
18. The apparatus according to claim 17, wherein the program units further comprise:
a first storage unit, configured to store the real interaction information to a preset storage location after the real interaction information in the real scene sent by the second target object according to the target information is received; and/or
a second storage unit, configured to store the virtual interaction information to the preset storage location after the virtual interaction information in the virtual scene sent by the second target object according to the target information is received.

19. The apparatus according to claim 16, wherein:
the first acquiring unit is configured to scan a face of the first target object to obtain the facial information of the first target object; and
the apparatus further comprises a display unit, configured to display the target information at a preset spatial position in a real scene after the target information of the first target object is acquired according to the facial information of the first target object.
20. The apparatus according to claim 19, wherein the display unit comprises:
a first determining module, configured to determine, when the face of the first target object is scanned, whether facial feature data matching the facial information of the first target object is stored in a server;
a second determining module, configured to determine, when it is determined that the facial feature data matching the facial information of the first target object is stored in the server, whether a face scanning permission of the first target object allows scanning; and
a display module, configured to display visible information at the preset spatial position when it is determined that the face scanning permission of the first target object allows scanning, wherein the visible information comprises at least user profile information of the first target object.

21. A terminal, wherein the terminal is configured to execute program code, the program code being used to perform the steps in the information interaction method according to any one of claims 1 to 15.

22. A storage medium, wherein the storage medium is configured to store program code, the program code being used to perform the steps in the information interaction method according to any one of claims 1 to 15.
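The liveness gate of claim 12 (the scan proceeds only when the user performs the requested facial action and the face is detected to be three-dimensional, rejecting flat photographs) can be illustrated with a minimal Python sketch. The helper predicates and function names here are assumptions made for illustration; the claim does not specify how the action match or the three-dimensional detection is computed.

```python
# Hypothetical sketch of the liveness gate in claim 12. The two inputs
# observed_action and is_three_dimensional stand in for whatever
# detectors a real implementation would use; they are not part of the
# patent.

def liveness_gate(requested_action, observed_action, is_three_dimensional):
    """Return True only when both checks of claim 12 pass."""
    if observed_action != requested_action:
        return False            # actual facial action must match the preset one
    return is_three_dimensional  # reject a flat photo of a face


def acquire_feature_data(requested_action, observed_action,
                         is_three_dimensional, extract):
    """Acquire facial feature data only after the liveness gate passes;
    `extract` stands in for the feature extractor."""
    if not liveness_gate(requested_action, observed_action,
                         is_three_dimensional):
        return None
    return extract()
```

Under this sketch, only a matching action on a three-dimensional face yields feature data, which is then what the first request to the server would carry.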
PCT/CN2017/115058 2016-11-25 2017-12-07 Method, apparatus and storage medium for information interaction Ceased WO2018095439A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611064419.9 2016-11-25
CN201611064419.9A CN108108012B (en) 2016-11-25 2016-11-25 Information interaction method and device

Publications (1)

Publication Number Publication Date
WO2018095439A1 true WO2018095439A1 (en) 2018-05-31

Family

ID=62194802

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/115058 Ceased WO2018095439A1 (en) 2016-11-25 2017-12-07 Method, apparatus and storage medium for information interaction

Country Status (2)

Country Link
CN (1) CN108108012B (en)
WO (1) WO2018095439A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367402A (en) * 2018-12-26 2020-07-03 阿里巴巴集团控股有限公司 Task triggering method, interaction equipment and computer equipment
CN111385337A (en) * 2018-12-29 2020-07-07 阿里巴巴集团控股有限公司 Cross-space interaction method, device, equipment, server and system
WO2020236391A1 (en) 2019-05-17 2020-11-26 Sensata Technologies, Inc. Wireless vehicle area network having connected brake sensors
CN112306254A (en) * 2019-07-31 2021-02-02 北京搜狗科技发展有限公司 Expression processing method, device and medium
CN112817830A (en) * 2021-03-01 2021-05-18 北京车和家信息技术有限公司 Setting item display method, setting item display device, setting item display medium, setting item display equipment, display system and vehicle
CN114697686A (en) * 2020-12-25 2022-07-01 北京达佳互联信息技术有限公司 Online interaction method and device, server and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109274575B (en) * 2018-08-08 2020-07-24 阿里巴巴集团控股有限公司 Message sending method and device and electronic device
CN109276887B (en) * 2018-09-21 2020-06-30 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium of virtual object
CN110650081A (en) * 2019-08-22 2020-01-03 南京洁源电力科技发展有限公司 Virtual reality instant messaging method
CN111093033B (en) * 2019-12-31 2021-08-06 维沃移动通信有限公司 An information processing method and device
CN111240471B (en) * 2019-12-31 2023-02-03 维沃移动通信有限公司 Information interaction method and wearable device
CN111355644B (en) * 2020-02-19 2021-08-20 珠海格力电器股份有限公司 Method and system for information interaction between different spaces
CN112235181A (en) * 2020-08-29 2021-01-15 上海量明科技发展有限公司 Weak social method, client and system
CN115379588B (en) * 2022-08-24 2026-02-06 歌尔科技有限公司 Interaction method, device and medium for simultaneous interpretation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080052312A1 (en) * 2006-08-23 2008-02-28 Microsoft Corporation Image-Based Face Search
US20100277611A1 (en) * 2009-05-01 2010-11-04 Adam Holt Automatic content tagging, such as tagging digital images via a wireless cellular network using metadata and facial recognition
CN103970804A (en) * 2013-02-06 2014-08-06 腾讯科技(深圳)有限公司 Information inquiring method and device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI452527B (en) * 2011-07-06 2014-09-11 Univ Nat Chiao Tung Method and system for application program execution based on augmented reality and cloud computing
US20130156274A1 (en) * 2011-12-19 2013-06-20 Microsoft Corporation Using photograph to initiate and perform action
KR20140015946A (en) * 2012-07-27 2014-02-07 김소영 System and method for publicize politician using augmented reality
CN103870485B (en) * 2012-12-13 2017-04-26 华为终端有限公司 Method and device for achieving augmented reality application
CN104426933B (en) * 2013-08-23 2018-01-23 华为终端(东莞)有限公司 A kind of method, apparatus and system for screening augmented reality content
CN103412953A (en) * 2013-08-30 2013-11-27 苏州跨界软件科技有限公司 Social contact method on the basis of augmented reality
EP3070585A4 (en) * 2013-11-13 2017-07-05 Sony Corporation Display control device, display control method and program
CN103942049B (en) * 2014-04-14 2018-09-07 百度在线网络技术(北京)有限公司 Implementation method, client terminal device and the server of augmented reality
CN105323252A (en) * 2015-11-16 2016-02-10 上海璟世数字科技有限公司 Method and system for realizing interaction based on augmented reality technology and terminal
CN105320282B (en) * 2015-12-02 2018-12-25 广州经信纬通信息科技有限公司 A kind of image recognition solution based on augmented reality
CN105955456B (en) * 2016-04-15 2018-09-04 深圳超多维科技有限公司 The method, apparatus and intelligent wearable device that virtual reality is merged with augmented reality
CN106100983A (en) * 2016-08-30 2016-11-09 黄在鑫 A kind of mobile social networking system based on augmented reality Yu GPS location technology

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367402A (en) * 2018-12-26 2020-07-03 阿里巴巴集团控股有限公司 Task triggering method, interaction equipment and computer equipment
CN111367402B (en) * 2018-12-26 2023-04-18 阿里巴巴集团控股有限公司 Task triggering method, interaction equipment and computer equipment
CN111385337A (en) * 2018-12-29 2020-07-07 阿里巴巴集团控股有限公司 Cross-space interaction method, device, equipment, server and system
CN111385337B (en) * 2018-12-29 2023-04-07 阿里巴巴集团控股有限公司 Cross-space interaction method, device, equipment, server and system
WO2020236391A1 (en) 2019-05-17 2020-11-26 Sensata Technologies, Inc. Wireless vehicle area network having connected brake sensors
CN112306254A (en) * 2019-07-31 2021-02-02 北京搜狗科技发展有限公司 Expression processing method, device and medium
CN114697686A (en) * 2020-12-25 2022-07-01 北京达佳互联信息技术有限公司 Online interaction method and device, server and storage medium
CN114697686B (en) * 2020-12-25 2023-11-21 北京达佳互联信息技术有限公司 Online interaction method and device, server and storage medium
CN112817830A (en) * 2021-03-01 2021-05-18 北京车和家信息技术有限公司 Setting item display method, setting item display device, setting item display medium, setting item display equipment, display system and vehicle
CN112817830B (en) * 2021-03-01 2024-05-07 北京车和家信息技术有限公司 Method and device for displaying setting items, medium, equipment, display system and vehicle

Also Published As

Publication number Publication date
CN108108012A (en) 2018-06-01
CN108108012B (en) 2019-12-06

Similar Documents

Publication Publication Date Title
WO2018095439A1 (en) Method, apparatus and storage medium for information interaction
CN103812761B (en) For using augmented reality to provide the device and method of social networking service
KR102905757B1 (en) A messaging system with a carousel of related entities
EP3713159B1 (en) Gallery of messages with a shared interest
US10402825B2 (en) Device, system, and method of enhancing user privacy and security within a location-based virtual social networking context
CN106716306B (en) Synchronizing multiple head mounted displays to a unified space and correlating object movements in the unified space
JP7473556B2 (en) Confirmation of consent
KR102077354B1 (en) Communication system
US12206719B2 (en) Communication sessions between devices using customizable interaction environments and physical location determination
JP7708506B2 (en) Method and system for authenticating a user - Patents.com
CN109691054A (en) animation user identifier
US11770356B2 (en) Method and device for providing location based avatar messenger service
CN109428859B (en) Synchronous communication method, terminal and server
CN109155024A (en) Share content with users and receiving devices
EP3272127B1 (en) Video-based social interaction system
KR102030322B1 (en) Methods, systems, and media for detecting stereoscopic videos by generating fingerprints for multiple portions of a video frame
KR20250012652A (en) External messaging capabilities for interactive systems
CN118922808A (en) Relationship agnostic messaging system
WO2022161289A1 (en) Identity information display method and apparatus, and terminal, server and storage medium
KR102808338B1 (en) Choosing a Smart Media Overlay for Your Messaging System
KR20260002978A (en) Sharing content collections
US20250378616A1 (en) Pose-Based Facial Expressions
WO2024037001A1 (en) Interaction data processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN119422036A (en) System for displaying user paths
CN118354134A (en) Video playback method, device, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17875022

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17875022

Country of ref document: EP

Kind code of ref document: A1