CN117170504B - Method, system and storage medium for viewing with person in virtual character interaction scene - Google Patents
- Publication number: CN117170504B (application CN202311439200.2A)
- Authority: CN (China)
- Prior art keywords: user, client, coordinates, following
- Legal status: Active (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Landscapes (classifications)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Description
Technical field
The present invention relates to the field of virtual reality technology, and in particular to a method, system and storage medium for leading users on a guided viewing in a virtual character interaction scene.
Background
Virtual reality is a practical technology developed in the 20th century. A VR virtual digital human is a technology, built on VR, for realistically displaying human character models in a computer: a series of virtual avatars created with artificial intelligence, virtual reality and related techniques, generated by one or more computers and combining the data and characteristics of real people into a comprehensive representation of human activity and information. Virtual digital humans let people communicate through a digital avatar on equal terms with real people, enable interaction between the virtual avatar and the real world, and make the experience more personable.
Virtual reality technology is now applied in many fields, and interaction requirements between users differ by scenario. In guided-tour scenarios, for example, the tour guide must constantly move from booth to booth while explaining exhibits. The traditional form of interaction is for users to follow the guide freely, but a user may fail to keep up in time and end up hearing only the narration without seeing the exhibit; or, when there are many exhibits, a led user's viewpoint may fail to lock onto the exhibit currently being introduced. The traditional interaction mode that relies on users actively following therefore produces a poor, unrealistic interactive experience.
Summary of the invention
The purpose of the present invention is to solve the above problem of poor interaction between other users and the tour guide in a virtual scene where the guide moves while explaining exhibits. It proposes a method, system and storage medium for leading users on a guided viewing in a virtual character interaction scene, with the advantages that a led user in the managed scene moves accurately to the leader's position and synchronizes with the leader's viewpoint, making the interaction more realistic.
In a first aspect, the present invention achieves the above object through the following technical solution: a method for leading users on a guided viewing in a virtual character interaction scene, where user A on client 1 is defined as the person being followed and user B on client 2 as the follower. The method comprises the following steps:
Step S1: client 1 obtains user A's follow instruction and sends a follow message to client 2; the follow message is used by client 2 to put user B into the follow state.
Step S2: client 1 sends user A's position coordinates to client 2 in real time, and client 2 controls user B to move automatically toward those coordinates.
Step S3: client 2 computes the distance between user B and user A in real time; when the distance is less than a preset minimum spacing, client 2 takes user B out of the follow state, and when the distance is greater than the preset minimum spacing, steps S2 and S3 are repeated.
Step S4: client 1 receives user A's guided-viewing instruction and sends a guided-viewing message to client 2; the message is used by client 2 to put user B into the guided-viewing state.
Step S5: the coordinates of user A's view target are computed and sent to client 2; client 2 uses the coordinates to point user B's camera at the view target.
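Steps S1 through S5 amount to a small message-driven state machine running on client 2. The sketch below is a hypothetical illustration in Python (the patent itself targets a Unity client; all class and message names here are invented for the example):

```python
from dataclasses import dataclass, field

# Illustrative state machine for steps S1-S5; names are invented for this
# sketch and do not come from the patent.

@dataclass
class Client:
    user: str
    state: str = "idle"                      # idle -> following -> guided_viewing
    inbox: list = field(default_factory=list)

def send(receiver: "Client", msg: str) -> None:
    """Deliver a control message from the leader's client."""
    receiver.inbox.append(msg)

def process_inbox(client: Client) -> None:
    while client.inbox:
        msg = client.inbox.pop(0)
        if msg == "follow":                  # step S1: enter follow state
            client.state = "following"
        elif msg == "guided_view":           # step S4: enter guided-viewing state
            client.state = "guided_viewing"
        elif msg == "end":                   # leader ends control manually
            client.state = "idle"

client2 = Client("B")
send(client2, "follow")
process_inbox(client2)
```

The real clients would additionally stream position coordinates (step S2) and view-target coordinates (step S5) alongside these state transitions.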
Preferably, the method by which client 2 controls user B to move automatically toward the position coordinates comprises: upon receiving the follow message, client 2 bakes a NavMesh on the ground of the current virtual scene; the NavMesh is used to move user B automatically along the shortest path to a target point, and the target point's input is set to user A's position coordinates.
Preferably, the preset minimum spacing is in the range 0.1-1 m, where the unit of length is the one defined in the virtual scene.
Preferably, computing the coordinates of user A's view target comprises:
computing the coordinates of the intersection of the diagonals of the virtual scene's display interface;
casting a ray from user A's camera toward that point;
obtaining, via collision detection, the object the ray hits, as the view target;
taking the coordinates where the ray hits the object's surface as the coordinates of the view target.
Preferably, the method further comprises disabling user B's ability to receive movement commands when user B enters the follow state.
Preferably, the method further comprises disabling user B's ability to receive view-rotation commands when user B enters the guided-viewing state.
In a second aspect, the present invention achieves the above object through the following technical solution: a system for leading users on a guided viewing in a virtual character interaction scene, the system comprising:
a follow-start unit, used by client 1 to obtain user A's follow instruction and send a follow message to client 2, the follow message being used by client 2 to put user B into the follow state;
a movement control unit, used by client 1 to send user A's position coordinates to client 2 in real time, client 2 controlling user B to move automatically toward those coordinates;
a follow-end unit, used by client 2 to compute the distance between user B and user A in real time; when the distance is less than the preset minimum spacing, client 2 takes user B out of the follow state, and when the distance is greater than the preset minimum spacing, the movement control unit and follow-end unit are executed repeatedly;
a guided-viewing start unit, used by client 1 to receive user A's guided-viewing instruction and send a guided-viewing message to client 2, the message being used by client 2 to put user B into the guided-viewing state;
a view synchronization unit, used to compute the coordinates of user A's view target and send them to client 2, which uses the coordinates to point the camera of user B, who is out of the follow state, at the view target.
Preferably, in the movement control unit, the method by which client 2 controls user B to move automatically toward the position coordinates comprises: upon receiving the follow message, client 2 bakes a NavMesh on the ground of the current virtual scene; the NavMesh is used to move user B automatically along the shortest path to a target point, and the target point's input is set to user A's position coordinates.
Preferably, in the view synchronization unit, computing the coordinates of user A's view target comprises:
computing the coordinates of the intersection of the diagonals of the virtual scene's display interface;
casting a ray from user A's camera toward that point;
obtaining, via collision detection, the object the ray hits, as the view target;
taking the coordinates where the ray hits the object's surface as the coordinates of the view target.
In a third aspect, the present invention achieves the above object through the following technical solution: a storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the method for leading users on a guided viewing in a virtual character interaction scene as described in the first aspect.
Compared with the prior art, the beneficial effects of the present invention are: user B, after entering the follow state, moves with user A, and once close to user A, user B can enter the guided-viewing state, in which user B's viewpoint is synchronized with user A's. This guarantees that the target user B sees is the same as the one user A sees. In guided-tour scenarios, the method ensures the leader can accurately present the target to the people being led, makes the interaction more realistic, and also makes it easier to manage the led users so that no one falls behind.
Brief description of the drawings
Figure 1 is a flow chart of the method for leading users on a guided viewing in a virtual character interaction scene according to the present invention.
Figure 2 is a schematic diagram of the composition of the system for leading users on a guided viewing in a virtual character interaction scene according to the present invention.
Detailed description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment 1
As shown in Figure 1, a method for leading users on a guided viewing in a virtual character interaction scene defines user A on client 1 as the person being followed and user B on client 2 as the follower, and comprises the following steps:
Step S1: client 1 obtains user A's follow instruction and sends a follow message to client 2; the follow message is used by client 2 to put user B into the follow state. The follow instruction is issued by user A clicking the follow button on the scene's display interface. As shown in Figure 2, the bottom of the display interface shows user B's name, follow status, and several buttons for controlling following. During interaction, when user A clicks the follow button, user B receives an invitation-to-follow message and enters the follow state; when user A clicks the end-control button, user B receives an end message and leaves the follow state. In this control mode, user A manually controls user B's following and detaching, whereas the detaching from the follow state described below is triggered by user B; the details are given later. As mentioned in step S2, in the follow state user B moves automatically with user A.
Step S2: client 1 sends user A's position coordinates to client 2 in real time, and client 2 controls user B to move automatically toward those coordinates. From step S1 we know that user B moves with user A after entering the follow state; concretely, user A's position coordinates are synchronized to user B in real time, so that user B continuously moves toward them, which visually produces the effect of user B walking behind user A. The method by which client 2 controls user B to move automatically toward the position coordinates comprises:
Upon receiving the follow message, client 2 bakes a NavMesh on the ground of the current virtual scene; the NavMesh is used to move user B automatically along the shortest path to a target point, and the target point's input is set to user A's position coordinates. NavMesh is an automatic pathfinding facility in Unity 3D that computes an optimal path to a destination; this solution uses that pathfinding function with user A's position coordinates as the destination, producing the effect of user B following user A's movement.
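Setting aside obstacle avoidance, the per-frame following behavior reduces to stepping toward the latest target coordinates. A simplified Python stand-in (the real system uses Unity's NavMesh agent, which additionally routes around obstacles; `step_toward` is an invented name):

```python
import math

# Each tick, user B moves straight toward user A's latest coordinates at a
# fixed speed. This ignores the NavMesh's obstacle routing and only shows
# how re-feeding A's position as the target produces the follow effect.

def step_toward(pos, target, speed, dt):
    dx, dy, dz = (t - p for p, t in zip(pos, target))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist <= speed * dt or dist == 0.0:
        return target                        # close enough: arrive this tick
    s = speed * dt / dist                    # fraction of the gap to cover
    return (pos[0] + dx * s, pos[1] + dy * s, pos[2] + dz * s)
```

Calling `step_toward` every frame with user A's freshly synchronized coordinates is the analogue of updating the NavMesh agent's destination in real time.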
Step S3: client 2 computes the distance between user B and user A in real time; when the distance is less than the preset minimum spacing, client 2 takes user B out of the follow state, and when the distance is greater than the preset minimum spacing, steps S2 and S3 are repeated. To prevent the character models of user B and user A from overlapping, which would look jarring from the user's viewpoint, the minimum spacing limits how closely user B follows user A. When user B is close to user A, user B leaves the follow state; if user A then moves, user B does not follow, and user B can rotate the viewpoint and move within a small range. However, once user B's movement takes the distance back above the minimum spacing, user B's client 2 executes steps S2 and S3 again, and user B returns to the follow state and resumes following user A.
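The detach-and-resume behavior of step S3 is a simple threshold test on the real-time distance. A minimal sketch, assuming an illustrative spacing of 0.5 within the patent's stated 0.1-1 range (`should_follow` is an invented name):

```python
import math

# Step S3's distance check: inside the minimum spacing, B detaches from the
# follow state so the character models do not overlap; once B drifts back
# outside it, following resumes.

MIN_SPACING = 0.5  # in the virtual scene's length unit; configurable per scene

def should_follow(pos_b, pos_a, min_spacing=MIN_SPACING):
    """Return True if B should be (or stay) in the follow state."""
    return math.dist(pos_b, pos_a) > min_spacing
```

Client 2 would evaluate this every frame, re-entering step S2 whenever it returns True.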
Step S4: client 1 receives user A's guided-viewing instruction and sends a guided-viewing message to client 2; the message is used by client 2 to put user B into the guided-viewing state. Only when the distance between user B and user A is less than the minimum spacing can client 1 accept user A's guided-viewing instruction and send the guided-viewing message to user B. After user B receives the message and enters the guided-viewing state, user B's viewpoint can no longer be steered by user B, and what user B's viewpoint shows is exactly the picture or scene that user A sees.
Step S5: the coordinates of user A's view target are computed and sent to client 2, which uses them to point user B's camera at the view target. From step S4, user B is in the guided-viewing state and user A's viewpoint is synchronized to user B; this step details how the synchronization works. User A synchronizes the view target by sending its coordinates to user B, so that once user B's camera receives the coordinates it can rotate toward that position, making user B's picture consistent with user A's. Several synchronization schemes are possible; an alternative is to synchronize the rotation angle of user A's camera to user B, so that user B simply applies the same rotation. That approach, however, is too sensitive to where user B stands: once user B is far from user A, the two will see different content even with identical camera rotations. As noted in step S3, when user B gets too close to user A, user B leaves the follow state and can move freely, so user B's position is hard to pin down. Although rotation synchronization can also synchronize the viewpoint, its accuracy is worse than the coordinate scheme of step S5, so it is generally not used.
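The reason coordinate synchronization beats rotation synchronization is that each camera computes its own orientation toward the shared target point. A sketch of that computation, reducing orientation to yaw/pitch angles (Unity would instead call something like `Transform.LookAt`; `look_at_angles` is an invented name):

```python
import math

# B receives the *target coordinates* and turns its own camera toward them,
# so the result stays correct regardless of where B stands relative to A.

def look_at_angles(cam_pos, target):
    dx = target[0] - cam_pos[0]
    dy = target[1] - cam_pos[1]
    dz = target[2] - cam_pos[2]
    yaw = math.degrees(math.atan2(dx, dz))                   # around the up-axis
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz))) # above/below horizon
    return yaw, pitch
```

Two cameras at different positions calling `look_at_angles` with the same target end up framing the same point, which is exactly the deviation the rotation-copying scheme cannot avoid.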
The minimum spacing is preset and can be configured as required; it lies in the range 0.1-1 m, where the unit of length is the one defined in the virtual scene. Since the virtual scene aims for a one-to-one reproduction of the real scene, 0.1-1 m here is expressed in the virtual scene's length unit. If the area around a booth is large and can hold several users at once, the minimum spacing can be set to the maximum of 1 m so that the many character models do not overlap. Conversely, if the booth area is small, the minimum spacing can be set to the minimum of 0.1 m so that excessive distance between users does not skew the synchronized view target.
From step S5, synchronizing the view-target coordinates keeps the deviation between user B's viewpoint and user A's small. To minimize the error, computing the coordinates of user A's view target comprises:
computing the coordinates of the intersection of the diagonals of the virtual scene's display interface; this step locates the center point of user A's view;
casting a ray from user A's camera toward that point; the ray is used for collision detection;
obtaining, via collision detection, the object the ray hits, as the view target; the object the ray can hit represents the object the user is currently looking at;
taking the coordinates where the ray hits the object's surface as the coordinates of the view target, with the object's center point as the view target. Even if there is some distance between user B's position and user A's, both then see the same object center; since both view frames are the same size, the content they see is nearly identical, and this method yields the smallest view-target error.
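The hit-point computation above can be solved in closed form for a simple shape. A hedged sketch in which the "exhibit" is modeled as a sphere (Unity would instead use collision detection such as `Physics.Raycast` against arbitrary colliders; `ray_sphere_hit` is an invented name):

```python
import math

# A ray from A's camera through the screen center, intersected with scene
# geometry; the first hit point is the coordinate sent to B's client.

def ray_sphere_hit(origin, direction, center, radius):
    n = math.sqrt(sum(d * d for d in direction))
    d = tuple(c / n for c in direction)              # unit-length ray direction
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(dc * occ for dc, occ in zip(d, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c                           # quadratic discriminant
    if disc < 0.0:
        return None                                  # ray misses the object
    t = (-b - math.sqrt(disc)) / 2.0                 # nearest intersection
    if t < 0.0:
        return None                                  # object is behind camera
    return tuple(o + t * dc for o, dc in zip(origin, d))
```

The returned surface coordinates are what step S5 would transmit to client 2 as the view target.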
From step S3, while user B follows user A, user B must not be able to interfere with the automatic movement, so user B cannot control movement manually. The method therefore further comprises disabling user B's ability to receive movement commands when user B enters the follow state: with this operation disabled, even if the person at client 2 presses the on-screen movement buttons, user B still moves along the automatic path, and the automatic-movement effect is preserved. From step S4, everything user B sees in the guided-viewing state is from user A's viewpoint; to prevent the person at client 2 from moving the mouse and disturbing the view synchronization, the method further comprises disabling user B's ability to receive view-rotation commands when user B enters the guided-viewing state: even if the person operates the on-screen view-rotation buttons, user B still sees user A's view, and the synchronized-view effect is preserved.
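Both protections amount to filtering local input by the client's current state. A minimal sketch (names and the command vocabulary are invented for illustration):

```python
# Commands arriving from the local player are filtered by the client's state,
# so on-screen buttons cannot break automatic movement or view sync.

BLOCKED = {
    "following": {"move"},           # follow state: manual movement disabled
    "guided_viewing": {"rotate_view"},  # guided-viewing: view rotation disabled
}

def accept_command(state, command):
    """Return True if the client should act on a locally issued command."""
    return command not in BLOCKED.get(state, set())
```

Any command rejected by `accept_command` is simply dropped, leaving the automatic path or the synchronized view untouched.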
Embodiment 2
As shown in Figure 2, a system for leading users on a guided viewing in a virtual character interaction scene comprises:
a follow-start unit, used by client 1 to obtain user A's follow instruction and send a follow message to client 2, the follow message being used by client 2 to put user B into the follow state; when user B enters the follow state, user B's ability to receive movement commands is disabled;
a movement control unit, used by client 1 to send user A's position coordinates to client 2 in real time, client 2 controlling user B to move automatically toward those coordinates;
a follow-end unit, used by client 2 to compute the distance between user B and user A in real time; when the distance is less than the preset minimum spacing, client 2 takes user B out of the follow state, and when the distance is greater than the preset minimum spacing, the movement control unit and follow-end unit are executed repeatedly. In the movement control unit, the method by which client 2 controls user B to move automatically toward the position coordinates comprises:
upon receiving the follow message, client 2 bakes a NavMesh on the ground of the current virtual scene; the NavMesh is used to move user B automatically along the shortest path to a target point, and the target point's input is set to user A's position coordinates;
a guided-viewing start unit, used by client 1 to receive user A's guided-viewing instruction and send a guided-viewing message to client 2, the message being used by client 2 to put user B into the guided-viewing state; when user B enters the guided-viewing state, user B's ability to receive view-rotation commands is disabled;
a view synchronization unit, used to compute the coordinates of user A's view target and send them to client 2, which uses the coordinates to point the camera of user B, who is out of the follow state, at the view target. In the view synchronization unit, computing the coordinates of user A's view target comprises:
computing the coordinates of the intersection of the diagonals of the virtual scene's display interface;
casting a ray from user A's camera toward that point;
obtaining, via collision detection, the object the ray hits, as the view target;
taking the coordinates where the ray hits the object's surface as the coordinates of the view target.
Embodiment 2 is essentially the same as Embodiment 1, so the operating principles of the individual unit modules are not repeated here.
Embodiment 3
This embodiment proposes a storage medium comprising a program storage area and a data storage area. The program storage area can store an operating system and the programs needed to run instant messaging functions; the data storage area can store instant messaging information, operation instruction sets, and the like. A computer program is stored in the program storage area; when executed by a processor, it implements the method for leading users on a guided viewing in a virtual character interaction scene as described in Embodiment 1. The processor may include one or more central processing units (CPUs), digital processing units, or the like.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and can be implemented in other specific forms without departing from the spirit or essential characteristics of the present invention. The embodiments should therefore be regarded as illustrative rather than restrictive in every respect, and the scope of the present invention is defined by the appended claims rather than by the above description; all changes falling within the meaning and scope of equivalents of the claims are intended to be embraced by the present invention.
In addition, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is merely for clarity. Those skilled in the art should take the specification as a whole, and the technical solutions in the embodiments may be appropriately combined to form other implementations understandable to those skilled in the art.
Claims (8)
Priority application (1): CN202311439200.2A, filed 2023-11-01 (priority date 2023-11-01) - Method, system and storage medium for viewing with person in virtual character interaction scene.
Publications (2): CN117170504A, published 2023-12-05; CN117170504B, granted 2024-01-19.
Family ID: 88937829. Family status: Active.
Cited by (2):
- CN117679745B (granted 2024-04-12): Method, system and medium for controlling virtual character orientation through multi-angle dynamic detection
- CN118741257B (granted 2024-11-08): Method, system and storage medium for realizing virtual scene multi-person interaction based on WebRTC
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109213834A (en) * | 2017-06-29 | 2019-01-15 | 深圳市掌网科技股份有限公司 | A kind of guidance method and system based on augmented reality |
CN110689623A (en) * | 2019-08-20 | 2020-01-14 | 重庆特斯联智慧科技股份有限公司 | Tourist guide system and method based on augmented reality display |
CN110732135A (en) * | 2019-10-18 | 2020-01-31 | 腾讯科技(深圳)有限公司 | Virtual scene display method and device, electronic equipment and storage medium |
WO2020139409A1 (en) * | 2018-12-27 | 2020-07-02 | Facebook Technologies, Llc | Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration |
CN111881861A (en) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | Display method, device, equipment and storage medium |
CN111984114A (en) * | 2020-07-20 | 2020-11-24 | 深圳盈天下视觉科技有限公司 | Multi-person interaction system based on virtual space and multi-person interaction method thereof |
CN112817453A (en) * | 2021-01-29 | 2021-05-18 | 聚好看科技股份有限公司 | Virtual reality equipment and sight following method of object in virtual reality scene |
CN113181650A (en) * | 2021-05-31 | 2021-07-30 | 腾讯科技(深圳)有限公司 | Control method, device, equipment and storage medium for calling object in virtual scene |
CN113608613A (en) * | 2021-07-30 | 2021-11-05 | 建信金融科技有限责任公司 | Virtual reality interaction method and device, electronic equipment and computer readable medium |
CN115639976A (en) * | 2022-10-28 | 2023-01-24 | 深圳市数聚能源科技有限公司 | Multi-mode and multi-angle synchronous display method and system for virtual reality content |
CN116051044A (en) * | 2023-02-03 | 2023-05-02 | 南京维赛客网络科技有限公司 | Online management method, system and storage medium for personnel in virtual scene |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9818225B2 (en) * | 2014-09-30 | 2017-11-14 | Sony Interactive Entertainment Inc. | Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space |
US20180059812A1 (en) * | 2016-08-22 | 2018-03-01 | Colopl, Inc. | Method for providing virtual space, method for providing virtual experience, program and recording medium therefor |
JP7316360B2 (en) * | 2018-09-25 | 2023-07-27 | マジック リープ, インコーポレイテッド | Systems and methods for augmented reality |
US11704874B2 (en) * | 2019-08-07 | 2023-07-18 | Magic Leap, Inc. | Spatial instructions and guides in mixed reality |
- 2023-11-01: CN application CN202311439200.2A filed; granted as CN117170504B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN117170504A (en) | 2023-12-05 |
Similar Documents
Publication | Title
---|---
CN117170504B (en) | Method, system and storage medium for viewing with person in virtual character interaction scene
JP7498209B2 (en) | Information processing device, information processing method, and computer program
EP3804301B1 (en) | Re-creation of virtual environment through a video call
CN112243583B (en) | Multi-endpoint mixed reality conference
CN109960401B (en) | Dynamic projection method, device and system based on face tracking
US20170127023A1 (en) | Virtual conference room
US20120192088A1 (en) | Method and system for physical mapping in a virtual world
WO2019080047A1 (en) | Augmented reality image implementation method, device, terminal device and storage medium
US11880999B2 (en) | Personalized scene image processing method, apparatus and storage medium
US20210255328A1 (en) | Methods and systems of a handheld spatially aware mixed-reality projection platform
KR101329935B1 (en) | Augmented reality system and method that share augmented reality service to remote using different marker
US11244423B2 (en) | Image processing apparatus, image processing method, and storage medium for generating a panoramic image
WO2013119475A1 (en) | Integrated interactive space
WO2020050103A1 (en) | Virtual viewpoint control device and method for controlling same
CN112148125A (en) | AR interaction state control method, device, equipment and storage medium
US20230412897A1 (en) | Video distribution system for live distributing video containing animation of character object generated based on motion of actors
CN114625468B (en) | Display method and device of augmented reality picture, computer equipment and storage medium
CN110737414A (en) | Interactive display method, device, terminal device and storage medium
WO2022111005A1 (en) | Virtual reality (VR) device and VR scenario image recognition method
WO2022247747A1 (en) | Information sharing method and apparatus, and electronic device and medium
CN114631323B (en) | Zone adaptive video generation
US20220086413A1 (en) | Processing system, processing method and non-transitory computer-readable storage medium
JP2023121636A (en) | Information processing system, communication system, image sharing method, and program
WO2023015868A1 (en) | Image background generation method and apparatus, and computer-readable storage medium
JP2020010327A (en) | System, method and program for automatic detection and insetting of digital stream into 360-degree video
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |
PE01 | Entry into force of the registration of the contract for pledge of patent right | Denomination of invention: Methods, systems, and storage media for watching with people in virtual character interaction scenes; Granted publication date: 20240119; Pledgee: Bank of Suzhou Co.,Ltd. Nanjing Branch; Pledgor: Nanjing weisaike Network Technology Co.,Ltd.; Registration number: Y2024980021629
PC01 | Cancellation of the registration of the contract for pledge of patent right | Granted publication date: 20240119; Pledgee: Bank of Suzhou Co.,Ltd. Nanjing Branch; Pledgor: Nanjing weisaike Network Technology Co.,Ltd.; Registration number: Y2024980021629
PE01 | Entry into force of the registration of the contract for pledge of patent right | Denomination of invention: Methods, systems, and storage media for watching with people in virtual character interaction scenes; Granted publication date: 20240119; Pledgee: Bank of Suzhou Co.,Ltd. Nanjing Branch; Pledgor: Nanjing weisaike Network Technology Co.,Ltd.; Registration number: Y2025980022861