CN116126147A - Virtual scene control method and device - Google Patents
- Publication number: CN116126147A (application number CN202310153444.8A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- real
- hand operation
- hand
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Abstract
Description
Technical Field
The present application relates to the technical field of image processing, and in particular to a virtual scene control method and device.
Background Art
In a virtual scene based on virtual reality, augmented reality, or similar virtual technologies, an input operation performed by a real user in the physical scene can be converted into an input operation of a virtual user in the virtual scene.
However, in the existing mapping from a real user to a virtual user, only the real user's actions can be converted into corresponding actions of the virtual user in the virtual scene; an object that the real user is holding, or wishes to present, cannot be rendered on the virtual user. As a result, current virtual scenes cannot accurately present the interaction the user intends, and display flexibility is poor.
Summary of the Invention
In one aspect, the present application provides a virtual scene control method, including:
detecting a real hand operation of a real user, and mapping the real hand operation to a virtual hand operation of a virtual user in a virtual scene image;
if the hand pose of the real hand operation belongs to a set holding pose used to maintain the position and orientation of an object, obtaining voice information input by the real user;
identifying a target object described in the voice information; and
adding a virtual instance of the target object to the virtual hand of the virtual user that performs the virtual hand operation.
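The four claimed steps can be sketched as a single control step. This is a minimal illustration only: the pose labels, the keyword-based object recognizer, and the dict-based virtual user are all assumptions, not part of the patent text.

```python
# Illustrative sketch of the claimed steps; all names and the dict-based
# virtual user are assumptions, not part of the patent text.
HOLDING_POSES = {"grip", "lift", "pinch"}  # assumed holding-pose labels

def recognize_target_object(speech_text):
    """Toy keyword spotter standing in for real speech recognition."""
    for word in ("apple", "cup", "book"):
        if word in speech_text.lower():
            return word
    return None

def control_step(real_hand_pose, speech_text, virtual_user):
    """One control step: mirror the real pose, then attach a described
    object when the pose is a holding pose."""
    virtual_user["hand_pose"] = real_hand_pose          # map the operation
    if real_hand_pose in HOLDING_POSES:                 # holding pose?
        target = recognize_target_object(speech_text)   # identify object
        if target is not None:
            virtual_user["held_object"] = target        # attach virtual object
    return virtual_user
```

A non-holding pose (e.g. a wave) skips speech handling entirely, which matches the conditional structure of the claim.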
In a possible implementation, the obtaining the voice information input by the real user if the hand pose of the real hand operation belongs to the set holding pose used to maintain the position and orientation of an object includes:
obtaining the voice information input by the real user if the hand pose of the real hand operation belongs to the set holding pose used to maintain the position and orientation of an object, and the duration for which the hand pose remains in the holding pose exceeds a set duration.
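The duration condition above amounts to debouncing the holding pose before speech capture starts. A minimal sketch, assuming a configurable threshold (the text only says "a set duration"; the default value here is illustrative):

```python
class HoldingPoseDetector:
    """Fire only after the holding pose has been maintained longer than
    the set duration; the threshold value is an assumption."""

    def __init__(self, threshold_s=1.5):
        self.threshold_s = threshold_s
        self._hold_start = None

    def update(self, is_holding_pose, now_s):
        """Feed one sampled pose; returns True when speech capture should start."""
        if not is_holding_pose:
            self._hold_start = None   # pose broken: restart the timer
            return False
        if self._hold_start is None:
            self._hold_start = now_s
        return (now_s - self._hold_start) > self.threshold_s
```

Resetting the timer whenever the pose is broken prevents brief, incidental holding poses from triggering voice capture.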
In yet another possible implementation, the identifying the target object described in the voice information includes:
identifying an object category and object features of the target object described in the voice information;
and before the adding the virtual target object to the virtual hand of the virtual user that performs the virtual hand operation, the method further includes:
constructing the virtual target object based on the object category and object features of the target object.
In yet another possible implementation, the mapping the real hand operation to the virtual hand operation of the virtual user in the virtual scene image includes:
determining an input gesture type of the real hand operation;
determining, based on the input gesture type, an input sensitivity for mapping the real hand operation to a virtual input operation in the virtual scene image; and
mapping, according to the input sensitivity, the real hand operation to the virtual hand operation of the virtual user in the virtual scene image.
In yet another possible implementation, the mapping, according to the input sensitivity, the real hand operation to the virtual hand operation of the virtual user in the virtual scene image includes:
determining, according to the input sensitivity, a distance mapping relationship between the movement distance of the real hand operation and the virtual movement distance of the virtual hand in the virtual scene image; and
mapping the real hand operation to the virtual hand operation of the virtual user in the virtual scene image according to the distance mapping relationship and the movement trajectory of the real hand operation.
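The simplest distance mapping relationship consistent with the text is a linear scale factor per gesture type. A sketch under that assumption (the sensitivity values and gesture labels are illustrative, not from the patent):

```python
# Assumed per-gesture input sensitivities; the text gives no concrete values.
SENSITIVITY = {"precise_point": 0.5, "normal_move": 1.0, "fast_swipe": 2.0}

def map_trajectory(real_points, gesture_type):
    """Scale every sampled real hand position by the gesture's input
    sensitivity, i.e. a linear distance mapping between the real movement
    distance and the virtual hand's movement distance."""
    k = SENSITIVITY.get(gesture_type, 1.0)  # unknown gestures map 1:1
    return [(x * k, y * k) for (x, y) in real_points]
```

A sub-unity sensitivity gives finer virtual control for precise gestures; a sensitivity above one lets small real motions cover large virtual distances.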
In yet another possible implementation, the determining the input gesture type of the real hand operation includes:
determining a finger pose in the real hand operation that indicates a gesture type; and
determining the input gesture type of the real hand operation based on the finger pose.
In yet another possible implementation, the determining the finger pose in the real hand operation that indicates a gesture type includes:
determining the finger pose in the real hand operation that indicates a gesture type if the real hand operation is a hand movement operation.
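Determining the gesture type from the finger pose can be illustrated with a toy rule-based classifier; the finger-state encoding and the gesture labels are assumptions, since the text leaves the concrete scheme open:

```python
def classify_input_gesture(finger_extended):
    """Toy gesture-type decision from which fingers are extended; the
    encoding and labels are assumptions, not the patent's method."""
    extended = {name for name, up in finger_extended.items() if up}
    if extended == {"index"}:
        return "precise_point"    # single pointing finger
    if len(extended) >= 4:
        return "open_hand_move"   # open hand
    return "normal_move"          # everything else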
In yet another possible implementation, before the adding the virtual target object to the virtual hand of the virtual user that performs the virtual hand operation, the method further includes:
determining object features of the target object based on the real hand operation of the real user; and
constructing the virtual target object based on the object features of the target object.
In yet another aspect, the present application further provides a virtual scene control device, including:
an operation mapping unit, configured to detect a real hand operation of a real user and map the real hand operation to a virtual hand operation of a virtual user in a virtual scene image;
a voice obtaining unit, configured to obtain voice information input by the real user if the hand pose of the real hand operation belongs to a set holding pose used to maintain the position and orientation of an object;
a voice recognition unit, configured to identify a target object described in the voice information; and
an object adding unit, configured to add a virtual instance of the target object to the virtual hand of the virtual user that performs the virtual hand operation.
In a possible implementation, the operation mapping unit includes:
a type determining unit, configured to detect the real hand operation of the real user and determine an input gesture type of the real hand operation;
a sensitivity determining unit, configured to determine, based on the input gesture type, an input sensitivity for mapping the real hand operation to a virtual input operation in the virtual scene image; and
a mapping processing unit, configured to map, according to the input sensitivity, the real hand operation to the virtual hand operation of the virtual user in the virtual scene image.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Evidently, the drawings described below are merely embodiments of the present application; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a virtual scene control method provided by an embodiment of the present application;
Fig. 2 is another schematic flowchart of the virtual scene control method provided by an embodiment of the present application;
Fig. 3 is another schematic flowchart of the virtual scene control method provided by an embodiment of the present application;
Fig. 4 is another schematic flowchart of the virtual scene control method provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a virtual scene control device provided by an embodiment of the present application;
Fig. 6 is a schematic architectural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The solution of the present application can be applied to any electronic device, or any system composed of multiple electronic devices, that needs to present a virtual scene by means of virtual reality, augmented reality, or other virtual technologies, so as to control the presentation of the virtual scene more flexibly and improve the interaction flexibility of the virtual scene.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Fig. 1 is a schematic flowchart of the virtual scene control method provided by an embodiment of the present application. The method of this embodiment can be applied to any electronic device or virtual scene control system that controls a virtual scene.
The method of this embodiment may include:
S101: Detect a real hand operation of a real user, and map the real hand operation to a virtual hand operation of a virtual user in a virtual scene image.
Here, a real user is a user who actually exists in the physical scene. For ease of distinction, hand operations of the real user in the physical world are called real hand operations, and hand operations of the virtual user in the virtual scene image are called virtual hand operations.
It can be understood that, during presentation of the virtual scene, the real user's input operations can control or adjust the virtual images shown in the scene. In the present application, the virtual scene image includes at least a virtual user associated with the real user. On this basis, by capturing the real user's actions, those actions can be converted into corresponding responsive actions of the virtual user in the virtual scene image.
For example, constructing a virtual user in a virtual scene image and driving that virtual user through a real user may be involved in digital communication scenarios, virtual e-commerce livestreaming, and many other scenarios. Of course, other application scenarios are possible, and the present application imposes no limitation in this regard.
There are many possible ways to detect the real hand operation of the real user. For example, a camera may capture images of the real user, from which the real hand operation is determined. As another example, sensors worn on at least one body part, such as the real user's hand, may be combined with motion capture technology to obtain the real hand operation.
Of course, these two cases are merely brief examples; in practice there are other ways to detect, or capture, the real hand operation of the real user, and multiple different technologies may also be combined, without limitation.
It can be understood that mapping the real user's real hand operation to the virtual user's virtual hand operation means converting the former into the latter, such that the virtual hand operation either stays consistent with the real hand operation or satisfies a specific conversion rule with respect to it, without limitation.
For example, if the real hand operation is spreading the fingers and moving the hand upward, then by continuously mapping the real hand operation at each moment to the virtual hand operation, the virtual hand can likewise spread its fingers and move upward.
It should be noted that a real hand operation is a continuous action. Therefore, in practice, the real hand operation may be obtained at irregular intervals or at a fixed sampling period, and the real hand operations obtained at different moments are continuously mapped to virtual hand operations, so that step S101 is executed repeatedly.
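Since S101 runs repeatedly over sampled operations, and the mapping may be either an identity or a configured conversion rule, the loop can be sketched as follows (function names and the rule interface are assumptions):

```python
def map_real_to_virtual(real_op, rule=None):
    """Mirror the real operation exactly, or apply a configured
    conversion rule; the text allows both variants."""
    return real_op if rule is None else rule(real_op)

def mapping_loop(sampled_ops, rule=None):
    """Apply the mapping to every periodically sampled real operation,
    so the virtual hand tracks the real hand over time."""
    return [map_real_to_virtual(op, rule) for op in sampled_ops]
```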
S102: If the hand pose of the real hand operation belongs to a set holding pose used to maintain the position and orientation of an object, obtain voice information input by the real user.
Here, a holding pose is a hand pose capable of maintaining an object in at least one state, such as a certain position or orientation.
For example, the holding pose may be a grip pose in which the hand grasps an object; it may also be a lifting pose in which the hand holds an object up, or a pulling pose in which the hand lifts an object by pulling; of course, it may also be a pose such as cradling or pinching that is used to maintain an object, without limitation.
It can be understood that when the hand pose of the real user's operation at a certain moment is a holding pose, the real user's hand is currently holding an object, or the real user wishes the hand pose to convey the effect of holding an object. On this basis, to accurately determine the object the real user expects the virtual user's hand to hold, the present application may further obtain the real user's voice information, in order to subsequently analyze which object the real user wishes to present when the real hand is in a holding pose.
S103: Identify the target object described in the voice information.
The target object may be an object, a small animal, or anything else that can be displayed by hand.
It can be understood that, when the hand pose of the real user's real hand operation is a holding pose, the object held by the real user's hand can be determined by analyzing the real user's voice information.
There are many ways to recognize the voice information, and the present application imposes no limitation on the specific implementation of voice recognition.
S104: Add a virtual instance of the target object to the virtual hand of the virtual user that performs the virtual hand operation.
It can be understood that, by adding a virtual target object to the virtual user's virtual hand, the virtual user can appear to be holding the target object, so that the virtual user in the virtual scene presents the holding effect the real user intends more realistically and flexibly.
For ease of understanding, an application scenario is taken as an example:
Consider a scenario in which motion capture technology captures a real user's actions to drive a virtual user, and the virtual user performs a livestream. In this scenario, the real user's expressions and actions can be vividly rendered on the virtual user. However, when the real user is holding an item, the item cannot be captured, because the background other than the real user is removed in order to capture the real user's actions accurately.
In the present application, when the hand pose of the real user's real hand operation is recognized as a holding pose, the target object described in the real user's voice input is identified, so the item the user wishes to present in the virtual user's virtual hand can be determined. By adding the corresponding virtual item to the virtual hand, the effect of the virtual user also holding that item can be presented.
For example, if the real user's real hand is holding up an apple while the real user says "look at this apple," it can be determined that the real user's hand is holding an apple; correspondingly, an apple can be added to the virtual hand of the virtual user that makes the lifting gesture.
Of course, the above takes a real user actually holding an item as an example. If the real user's hand makes a holding pose but the user does not actually hold the corresponding item, yet wishes the virtual user to appear to hold it, then as long as the real user verbally describes the object the hand is meant to hold, the effect of the virtual user's hand holding the corresponding item can likewise be achieved.
There are also many possible specific ways to add the virtual target object to the virtual user's virtual hand.
In one optional approach, to present the virtual user's holding effect more realistically, the target object may be added to the virtual hand according to the category of holding pose corresponding to the virtual hand operation.
For example, if the virtual hand operation corresponds to a grip pose, the virtual target object can be added at the center of the virtual hand, presenting the effect of the virtual hand gripping the target object.
As another example, if the virtual hand operation corresponds to a pinching pose, the virtual target object can be added at the finger positions of the virtual hand that make the pinching gesture, presenting the effect of the virtual user pinching and lifting the virtual target object with the virtual hand.
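The pose-dependent placement just described can be sketched as a lookup from holding-pose category to an attachment anchor; the anchor names and offsets (hand-local metres) are purely illustrative assumptions:

```python
# Assumed attachment anchors per holding-pose category; names and
# offsets (in the hand's local frame, metres) are illustrative only.
ATTACH_ANCHOR = {
    "grip":  ("palm_center", (0.0, 0.00, 0.0)),
    "lift":  ("palm_up",     (0.0, 0.02, 0.0)),
    "pinch": ("fingertips",  (0.0, 0.00, 0.05)),
}

def attach_object(virtual_hand, obj, pose_category):
    """Place the virtual object at the anchor matching the holding pose,
    so a gripped object sits in the palm and a pinched one at the fingertips."""
    anchor, offset = ATTACH_ANCHOR[pose_category]
    virtual_hand["held"] = {"object": obj, "anchor": anchor, "offset": offset}
    return virtual_hand
```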
From the above, in the embodiments of the present application, while the real user's real hand operation is mapped to the virtual user's virtual hand operation in the virtual scene image, if the hand pose of the real hand operation belongs to a set holding pose, a virtual instance of the target object described in the real user's voice information can be added to the corresponding virtual hand of the virtual user. The virtual user in the virtual scene image thus not only reflects the real user's actions but also presents the object the real user describes, so that the virtual scene image more accurately presents the scene the real user intends, improving the display flexibility of the virtual scene.
It can be understood that, to add a virtual target object to the virtual user's virtual hand, the virtual target object may first be constructed. There are many possible ways to construct it; for example, AI generated content (AIGC) technology can be used. Of course, other ways of constructing the virtual target object are possible, without limitation.
It can be understood that objects are diverse in kind; even the same kind of object can be divided into different subcategories, objects of different subcategories have their own characteristics, and even objects of the same kind differ in appearance. For example, apples can be divided into green apples and red apples, and different green apples differ in size and shade of color.
On this basis, to present the target object actually held or described by the real user more realistically, in the present application the object features of the target object may be determined before the virtual object is constructed.
The solution of the present application is described below with reference to one way of determining the object features of the target object. Fig. 2 is a schematic flowchart of another embodiment of the virtual scene control method provided by an embodiment of the present application. The method of this embodiment may include:
S201: Detect a real hand operation of a real user, and map the real hand operation to a virtual hand operation of a virtual user in a virtual scene image.
S202: If the hand pose of the real hand operation belongs to a set holding pose used to maintain the position and orientation of an object, obtain voice information input by the real user.
For steps S201 and S202, refer to the related description of the preceding embodiments; details are not repeated here.
S203: Identify the object category and object features of the target object described in the voice information.
Here, the object category of the target object may be the kind to which the target object belongs. The object category can characterize what kind of object the target object is, and possibly also the object's name.
For example, if the object category of the target object is "apple" among fruits, the target object can be determined to be an apple. Of course, if the object category further includes subdivision information for apples, the variety or name of the apple can be determined; for example, if the object category includes the apple variety "Huang Yuanshuai" (Golden Delicious), it can be determined that the apple is of that variety.
Object features may be attributes or characteristics of the object itself. For example, if the object is an apple, its features may include peel color, peel texture, and size.
There are also many specific implementations for identifying the object category and object features from the voice information, and the present application imposes no limitation in this regard.
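One simple implementation of S203, assuming the speech has already been transcribed to text, is keyword-based extraction of a category plus feature attributes. The vocabulary and rules below are assumptions for illustration, not the patent's method:

```python
# Toy category/feature extraction from a transcribed utterance; the
# vocabulary and mapping rules are illustrative assumptions.
CATEGORIES = {"apple": "fruit/apple", "cup": "tableware/cup"}
FEATURE_WORDS = {"big": ("size", "large"), "red": ("color", "red"),
                 "round": ("shape", "round")}

def parse_target_object(utterance):
    """Return (category, features) described in the utterance, or None
    when no known object category is mentioned."""
    words = utterance.lower().replace(",", " ").split()
    category = next((CATEGORIES[w] for w in words if w in CATEGORIES), None)
    if category is None:
        return None
    features = dict(FEATURE_WORDS[w] for w in words if w in FEATURE_WORDS)
    return category, features
```

A production system would use a proper speech-recognition and NLU pipeline, but the output shape (category plus feature attributes) is what S204 consumes.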
S204: Construct the virtual target object based on the object category and object features of the target object.
As before, there are many possible specific implementations for constructing the virtual target object in this embodiment, without limitation.
For example, taking an apple as the target object: suppose the real user's hand is holding up a big red apple while the user says "look at this apple, it is really big, round, and red." The target object can then be identified as a big, red, round apple, and a big, round, red apple can be constructed, so that the virtual user subsequently also presents the effect of holding up a big red apple.
Alternatively, the real user may make the lifting gesture without actually holding an apple, but wish the displayed virtual scene image to include a virtual user holding up a big red apple, so that other users watching the virtual scene image can understand the described apple more intuitively and vividly. In that case, if the real user makes the lifting gesture while saying "look at this apple, it is really big, round, and red," the solution of the present application will likewise construct a virtual big, round, red apple; placing that apple in the virtual user's corresponding hand presents the virtual scene image the user wishes to present.
S205. Add the virtual target object to the virtual hand of the virtual user that performs the virtual hand operation.
It can be understood that, when constructing the virtual target object, this embodiment takes both the object category and the object features into account, so that the constructed virtual target object better matches the target object described by the real user, and the object the real user actually holds, or wishes to present, is accurately shown in the virtual user's hand.
It can be understood that FIG. 2 illustrates only one way of constructing the virtual target object. In practical applications, while or after the target object is recognized from the real user's voice input, the object features of the target object may also be determined from the real user's real hand operation, and the virtual target object may then be constructed from those object features.
It can be understood that, when the real hand operation is an object-holding posture, the category of the holding posture and the postures of the hand and fingers can reflect properties of the held object such as its weight, size, volume and softness. On this basis, the object features of the target object can also be determined from the hand-posture features of the real hand operation.
For example, if the target object is an apple and the real user's real hand operation is a gripping posture, the size of the gripped apple can be estimated from the grip, yielding the size feature of the apple.
As another example, if the real hand operation is an embracing posture around an object, the approximate pose and size of the supported object can likewise be inferred from that posture.
Of course, these are only a few simple examples. In practical applications, in addition to the hand, arm and finger postures of the real hand operation, information such as the real user's body posture may also be taken into account to jointly determine the object features of the target object.
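The size inference mentioned above can be sketched in a minimal form: for a gripping posture, the aperture between the thumb tip and the opposing fingertip roughly bounds the held object's diameter. The joint layout, padding constant and size thresholds below are assumptions for illustration only:

```python
# Illustrative sketch: infer a coarse size feature of the held object from
# two tracked fingertip positions (in metres). Thresholds are hypothetical.
import math

def grip_aperture(thumb_tip, index_tip):
    """Euclidean distance between two 3D fingertip positions (metres)."""
    return math.dist(thumb_tip, index_tip)

def estimate_object_size(thumb_tip, index_tip, padding=0.01):
    """Map the grip aperture to a coarse size label for the target object."""
    diameter = grip_aperture(thumb_tip, index_tip) + padding
    if diameter < 0.05:
        label = "small"
    elif diameter < 0.10:
        label = "medium"
    else:
        label = "large"
    return diameter, label

d, label = estimate_object_size((0.0, 0.0, 0.0), (0.08, 0.0, 0.0))
print(round(d, 3), label)  # 0.09 medium
```

A production system would use the full tracked hand skeleton (and possibly body posture, as the text notes) rather than two points.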
It can be understood that, since the real user's hand operations differ from moment to moment, the user may not intend an object-holding posture at some moment, yet a transient movement may still be recognized as one. To reduce such misrecognition, this application may obtain the real user's voice input, and recognize the target object described in it, only when the hand posture of the real hand operation belongs to a set object-holding posture for maintaining an object's pose and that posture has been maintained for longer than a set duration.
The set duration can be chosen as needed; for example, it may be 3 seconds or 5 seconds.
For instance, when a real user holds an item and wants the virtual user to present it, the real user will inevitably keep holding the item for some time. On that basis, by recognizing the item described in the user's voice input, the item can be added to the hand of the virtual user that makes the holding motion, so that the virtual user appears to hold the item.
If a holding posture appears while the real user's hand is performing some other action, but the posture does not last for the set duration, the user is not really holding an item. In that case there is no need to recognize the item described in the user's voice input, and naturally no item needs to be added to the virtual user's hand.
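The dwell-time gate described above can be sketched as follows. This is an illustrative assumption about one possible implementation; timestamps are injected so the logic is deterministic, and the 3-second threshold matches the example in the text:

```python
# Minimal sketch of the hold-duration debounce: voice capture is triggered
# only after the holding posture has been maintained continuously for longer
# than the set duration. The class structure is an assumption for illustration.

HOLD_THRESHOLD_S = 3.0

class HoldingGate:
    def __init__(self, threshold_s=HOLD_THRESHOLD_S):
        self.threshold_s = threshold_s
        self.hold_start = None

    def update(self, is_holding_posture: bool, now_s: float) -> bool:
        """Return True when voice capture should be triggered."""
        if not is_holding_posture:
            self.hold_start = None      # posture broken: reset the timer
            return False
        if self.hold_start is None:
            self.hold_start = now_s     # posture just began
        return (now_s - self.hold_start) >= self.threshold_s

gate = HoldingGate()
print(gate.update(True, 0.0))   # False: holding just started
print(gate.update(True, 2.0))   # False: only 2 s so far
print(gate.update(False, 2.5))  # False: posture broken, timer reset
print(gate.update(True, 3.0))   # False: timer restarted from 3.0
print(gate.update(True, 6.5))   # True: held continuously for 3.5 s
```

In a live system `now_s` would come from a monotonic clock and `is_holding_posture` from the gesture classifier.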
It can be understood that there are many possible ways to map the real user's operation behavior to the virtual user's virtual operation behavior; this application places no limitation on them.
However, research shows that, under the constraint that the virtual user's posture and action content match those of the real user, the real user's operation behavior is generally mapped to the corresponding virtual operation behavior of the same type according to a set mapping ratio. For example, a one-to-one mapping may be used, so that the virtual operation behavior matches the real operation behavior in action content, movement distance and movement amplitude.
However, when the operating range of the virtual scene is large, or precise continuous input is required, being able to map real actions to virtual actions only by a set mapping ratio makes virtual-scene control inflexible, and low input efficiency easily arises from inconvenient input operations.
For example, consider the scenario of dragging a slider in an interactive interface within the virtual scene image through the real user's operation. Because the virtual space of the virtual scene image is large, the display area of the interactive interface is also large. With one-to-one mapping, the real user must perform a long-distance sliding motion by hand before it can be mapped to the virtual user completing the slider drag in the virtual scene image, so the real sliding operation covers a long distance, takes considerable time, and input efficiency is low.
On this basis, in the embodiments of this application, multiple input sensitivities for mapping real hand operations to virtual hand operations in the virtual scene image can be set. Different input sensitivities correspond to different mapping ratios from real hand operation to virtual hand operation; for the same real hand operation, a different input sensitivity yields a different amplitude of the mapped virtual hand motion.
The solution of this application is described below using one implementation of determining the input sensitivity as an example. FIG. 3 shows a schematic flowchart of yet another embodiment of the virtual scene control method provided by the embodiments of this application.
The method of this embodiment may include:
S301. Detect a real hand operation of a real user and determine the input gesture type of the real hand operation.
Here, the input gesture type is the gesture type corresponding to the part of the real hand operation that reflects input sensitivity. Accordingly, the input gesture type reflects the input sensitivity required for the real hand operation.
In this application, the input gesture type is distinct from the operation type of the control operation that the real hand operation performs on the virtual scene. Therefore, depending on the required input sensitivity, when the real user performs the same type of control operation on the virtual scene, the input gesture types of the corresponding real hand operations may differ.
There are several possible ways to determine the input gesture type; a few possible cases are described below.
In one possible case, after the real hand operation is obtained, the finger posture in the real hand operation that indicates the gesture type can be determined. The input gesture type of the real hand operation can then be determined from that finger posture.
Here, the finger posture indicating the gesture type is the finger posture indicating the input gesture type. A finger posture can be characterized along several dimensions, such as the shape of the fingers and the number of fingers involved, without limitation. In practice, the number of defined finger postures can match the number of input gesture types.
For example:
Take representing different input gesture types by the number of pinched fingers as an example.
Suppose the input sensitivity is divided into three levels: high, medium and low. Accordingly, the three sensitivity levels correspond to three different input gesture types. On this basis, a single finger can be set as the first input gesture type, corresponding to the high sensitivity level; a two-finger pinch as the second input gesture type, corresponding to the medium sensitivity level; and a three-finger pinch as the third input gesture type, corresponding to the low sensitivity level.
On this basis, if two fingers are pinched together in the real user's real hand operation, the input gesture type can be determined to be the second input gesture type.
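The pinch-count example above reduces to two small lookup tables. The sketch below is illustrative only; the type and level names are assumptions, and a real system would obtain the pinch count from a hand-tracking classifier:

```python
# Hypothetical sketch: the number of fingers brought together selects the
# input gesture type, which in turn selects the sensitivity level.
# (A count of 1 means a single extended finger, per the example above.)

GESTURE_BY_PINCH_COUNT = {1: "type1", 2: "type2", 3: "type3"}
SENSITIVITY_BY_GESTURE = {"type1": "high", "type2": "medium", "type3": "low"}

def classify_input_gesture(pinched_fingers: int) -> str:
    try:
        return GESTURE_BY_PINCH_COUNT[pinched_fingers]
    except KeyError:
        raise ValueError(f"unsupported pinch count: {pinched_fingers}")

gesture = classify_input_gesture(2)
sensitivity = SENSITIVITY_BY_GESTURE[gesture]
print(gesture, sensitivity)  # type2 medium
```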
In yet another implementation, the real user may set the input gesture type of the real hand operation before performing it. For example, before inputting the real hand operation, the real user may pre-select the desired input gesture type from several gesture-type options.
In practice, while the real hand operation is obtained, body postures other than the hand may also help determine the corresponding input gesture type, for example the number of head shakes or the number of consecutive eye blinks. Of course, other ways of determining the input gesture type are possible, without limitation.
S302. Based on the input gesture type, determine the input sensitivity for mapping the real hand operation to a virtual input operation in the virtual scene image.
Here, input sensitivity may refer to the degree of change in the virtual input operation of a virtual object (the virtual user or another virtual controlled object) in the virtual scene image that the real user's input operation can trigger. The higher the input sensitivity, the greater the operation change that the real user's input triggers in the virtual object in the virtual scene image.
On this basis, the input sensitivity can affect the operation frequency, operation amplitude and so on of the virtual input operation mapped from the real hand operation.
For example, in one possible case, the input sensitivity reflects the mapping between the operation frequency of the real hand operation and that of the virtual hand operation. If the real hand operation is a click, different input sensitivities map the real number of clicks to different numbers of clicks in the virtual scene image.
In another possible case, the input sensitivity is a distance mapping relationship from the movement distance of the real hand operation to the virtual movement distance of the virtual hand in the virtual scene image. This distance mapping relationship is one kind of operation-amplitude mapping; it reflects the ratio between the movement distance of the real hand operation and the movement distance of the virtual hand.
For example, if the distance mapping relationship is one-to-one, the movement distance of the real hand operation equals the virtual movement distance of the virtual hand. If it is two-to-one, the movement distance of the real hand operation is twice the virtual movement distance of the virtual hand.
Different input sensitivities give different distance mapping relationships. Generally, the higher the input sensitivity, the lower the ratio of the real movement distance to the virtual movement distance, so that only a small movement of the real user's hand is needed to trigger a large movement of the virtual hand.
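The distance mapping just described can be expressed as a per-level ratio of real movement distance to virtual movement distance. The concrete ratio values below are assumptions for illustration; the text only requires that higher sensitivity corresponds to a lower ratio:

```python
# Sketch: each sensitivity level fixes a real-to-virtual distance ratio,
# so higher sensitivity means less real movement per unit of virtual movement.
# The 2.0 / 1.0 / 0.5 values are hypothetical.

REAL_TO_VIRTUAL_RATIO = {"low": 2.0, "medium": 1.0, "high": 0.5}

def virtual_distance(real_distance: float, sensitivity: str) -> float:
    """Convert a real hand movement distance into a virtual one."""
    return real_distance / REAL_TO_VIRTUAL_RATIO[sensitivity]

print(virtual_distance(0.10, "medium"))  # 0.1: one-to-one mapping
print(virtual_distance(0.10, "high"))    # 0.2: small real move, large virtual move
print(virtual_distance(0.10, "low"))     # 0.05: real movement is damped
```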
S303. According to the input sensitivity, map the real hand operation to the virtual hand operation of the virtual user in the virtual scene image.
As introduced above, the input sensitivity affects the degree to which a real hand operation changes the triggered virtual hand operation. Therefore, with different input sensitivities, one or more of the operation frequency and the amplitude of the virtual hand operation triggered by the same real hand operation will differ.
For example, when the input sensitivity is the distance mapping relationship between the movement distance of the real hand operation and the virtual movement distance of the virtual hand in the virtual scene image, the real hand operation can be mapped to the virtual hand operation of the virtual user in the virtual scene image according to that distance mapping relationship and the movement trajectory of the real hand operation.
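One way to apply the distance mapping to a whole trajectory, sketched here under assumptions (2D points, a uniform scale factor per sensitivity level), is to scale each frame-to-frame displacement of the real hand before applying it to the virtual hand, which preserves the trajectory's shape at a different size:

```python
# Illustrative sketch of S303 for a moving hand: scale successive
# displacements of the real trajectory onto the virtual hand.

def map_trajectory(real_points, scale):
    """Scale each incremental displacement of a real trajectory by `scale`."""
    if not real_points:
        return []
    vx, vy = real_points[0]           # virtual hand starts where mapping begins
    virtual_points = [(vx, vy)]
    for (x0, y0), (x1, y1) in zip(real_points, real_points[1:]):
        vx += (x1 - x0) * scale       # scale each step, not absolute positions
        vy += (y1 - y0) * scale
        virtual_points.append((vx, vy))
    return virtual_points

real = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1)]
print(map_trajectory(real, 2.0))  # [(0.0, 0.0), (0.2, 0.0), (0.2, 0.2)]
```

Scaling displacements rather than absolute positions avoids a sudden jump of the virtual hand when the sensitivity level changes mid-gesture.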
Of course, the input sensitivity may also be reflected in multiple dimensions such as operation frequency and operation amplitude. In that case, this application can additionally take into account the mapping from the frequency of real hand operations, such as clicks, to the frequency of virtual hand operations, to jointly determine the mapped virtual hand operation; details are not repeated here.
It should be noted that the operations of steps S301 to S303 do not depend on whether the hand posture of the real hand operation is an object-holding posture. Once a real hand operation is obtained, it can be mapped to the virtual hand operation of the virtual user according to the steps above.
S304. If the hand posture of the real hand operation belongs to a set object-holding posture for maintaining an object's pose, obtain voice information input by the real user.
S305. Recognize the target object described in the voice information.
S306. Add the virtual target object to the virtual hand of the virtual user that performs the virtual hand operation.
For steps S304 to S306, refer to the related descriptions of the previous embodiments; details are not repeated here.
In the embodiments of this application, after the real hand operation is obtained, the input sensitivity can be determined from the input gesture type of the real hand operation, so that the real user can control the input sensitivity as needed and thereby reasonably control the mapping from real hand operations to virtual hand operations, improving the flexibility of virtual scene control. Moreover, adjusting the input sensitivity through the input gesture type reasonably adjusts the mapping relationship from real hand operation to virtual hand operation, which helps reduce input errors during fine-grained control of the virtual scene and also reduces cases of low input efficiency in which the real user must move the hand a long distance to complete a virtual input operation.
To facilitate understanding of how this application determines the input sensitivity and, based on it, maps the real hand operation to the virtual hand operation, one implementation is described below as an example. When the real hand operation involves hand movement, the input sensitivity allows the amount of movement mapped from the real hand to the virtual hand to be controlled more flexibly, so the description focuses on the case where the real hand operation includes hand movement.
As shown in FIG. 4, which is yet another schematic flowchart of the virtual scene control method provided by the embodiments of this application, the method of this embodiment may include:
S401. Detect a real hand operation of a real user and, if the real hand operation includes a hand movement operation, determine the finger posture in the real hand operation that indicates the gesture type.
Here, the real hand operation including a hand movement operation means that the real user's hand moves while the real hand operation is being performed. For example, if the real user's hand moves upward while holding or lifting an item, the real hand operation is one that presents a lifting gesture while moving upward.
Since a hand movement in the real hand operation necessarily causes a corresponding movement of the virtual hand, the desired input sensitivity must be determined in order to establish the movement-distance mapping between the real hand and the virtual hand.
The finger postures in real hand operations are as introduced above. For example, three input sensitivities can be set in this application, with a corresponding finger posture for each input gesture type: the finger posture of the first input gesture type, corresponding to high sensitivity, is a single extended finger; that of the second input gesture type, corresponding to medium sensitivity, is a pinch of thumb and index finger; and that of the third input gesture type, corresponding to low sensitivity, is a pinch of thumb, index finger and middle finger.
S402. Determine the input gesture type of the real hand operation based on the finger posture.
S403. Based on the input gesture type, determine the input sensitivity for mapping the real hand operation to a virtual input operation in the virtual scene image.
As introduced above, when the finger posture matches the finger posture set for an input gesture type, the input gesture type of the real hand operation can be determined. Accordingly, the input sensitivity can be determined from the correspondence between input gesture types and input sensitivities.
S404. According to the input sensitivity, determine the distance mapping relationship between the movement distance of the real hand operation and the virtual movement distance of the virtual hand in the virtual scene image.
S405. Map the real hand operation to the virtual hand operation of the virtual user in the virtual scene image according to the distance mapping relationship and the movement trajectory of the real hand operation.
For example, suppose the input sensitivity is divided into low, medium and high sensitivity.
The distance mapping relationship for medium sensitivity can be a base distance ratio, which can be set as needed. Taking a base ratio of 1 as an example, when the input sensitivity is medium, the ratio of the real movement distance to the virtual movement distance is 1; accordingly, the virtual hand moves the same distance as the real hand.
For low sensitivity, the distance ratio of the distance mapping relationship can be the base ratio multiplied by a set coefficient, where the coefficient is a value greater than 0 and less than or equal to 1. For high sensitivity, the distance ratio can be the base ratio multiplied by (1 / the set coefficient). Once the ratios for low and high sensitivity are determined, if the input sensitivity is low or high, mapping the real movement distance to the virtual movement distance proceeds similarly and is not repeated here.
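The ratio arithmetic above can be worked through concretely. A base ratio of 1 and a coefficient of 0.5 are assumed for the example; the direction in which the resulting ratio is applied (real-to-virtual or virtual-to-real) is left to the implementation, as in the text:

```python
# Worked sketch of the per-level distance ratios: medium uses the base ratio,
# low multiplies it by a coefficient in (0, 1], and high multiplies it by the
# coefficient's reciprocal. BASE_RATIO and COEFF are illustrative values.

BASE_RATIO = 1.0
COEFF = 0.5  # must satisfy 0 < COEFF <= 1

def distance_ratio(level: str) -> float:
    if level == "medium":
        return BASE_RATIO
    if level == "low":
        return BASE_RATIO * COEFF
    if level == "high":
        return BASE_RATIO * (1.0 / COEFF)
    raise ValueError(f"unknown sensitivity level: {level}")

for level in ("low", "medium", "high"):
    print(level, distance_ratio(level))  # low 0.5 / medium 1.0 / high 2.0
```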
S406. If the hand posture of the real hand operation belongs to a set object-holding posture for maintaining an object's pose, obtain voice information input by the real user.
S407. Recognize the target object described in the voice information.
S408. Add the virtual target object to the virtual hand of the virtual user that performs the virtual hand operation.
For steps S406 to S408, refer to the related descriptions of the previous embodiments; details are not repeated here.
Corresponding to the virtual scene control method of this application, this application also provides a virtual scene control device.
As shown in FIG. 5, which is a schematic structural diagram of the virtual scene control device provided by the embodiments of this application, the device of this embodiment may include:
an operation mapping unit 501, configured to detect a real hand operation of a real user and map the real hand operation to a virtual hand operation of a virtual user in a virtual scene image;
a voice obtaining unit 502, configured to obtain voice information input by the real user if the hand posture of the real hand operation belongs to a set object-holding posture for maintaining an object's pose;
a voice recognition unit 503, configured to recognize the target object described in the voice information; and
an object adding unit 504, configured to add the virtual target object to the virtual hand of the virtual user that performs the virtual hand operation.
In a possible implementation, the voice obtaining unit includes:
a voice obtaining subunit, configured to obtain the voice information input by the real user if the hand posture of the real hand operation belongs to a set object-holding posture for maintaining an object's pose and that posture is maintained for longer than a set duration.
In another possible implementation, the voice recognition unit includes:
an object recognition unit, configured to recognize the object category and object features of the target object described in the voice information.
In this application, the device further includes:
a first object construction unit, configured to construct the virtual target object based on the object category and object features of the target object before the object adding unit adds the virtual target object to the virtual hand of the virtual user that performs the virtual hand operation.
In another possible implementation, the device further includes:
a feature determining unit, configured to determine the object features of the target object based on the real hand operation of the real user before the object adding unit adds the virtual target object to the virtual hand of the virtual user that performs the virtual hand operation; and
a second object construction unit, configured to construct the virtual target object based on the object features of the target object.
In another possible implementation, the operation mapping unit includes:
a type determining unit, configured to detect the real hand operation of the real user and determine the input gesture type of the real hand operation;
a sensitivity determining unit, configured to determine, based on the input gesture type, the input sensitivity for mapping the real hand operation to a virtual input operation in the virtual scene image; and
a mapping processing unit, configured to map the real hand operation to the virtual hand operation of the virtual user in the virtual scene image according to the input sensitivity.
In another possible implementation, the mapping processing unit includes:
a relationship determining subunit, configured to determine, according to the input sensitivity, the distance mapping relationship between the movement distance of the real hand operation and the virtual movement distance of the virtual hand in the virtual scene image; and
a mapping processing subunit, configured to map the real hand operation to the virtual hand operation of the virtual user in the virtual scene image according to the distance mapping relationship and the movement trajectory of the real hand operation.
In another possible implementation, the type determining unit includes:
a posture determining subunit, configured to determine the finger posture in the real hand operation that indicates the gesture type; and
a type determining subunit, configured to determine the input gesture type of the real hand operation based on the finger posture.
In another possible implementation, the posture determining subunit is specifically configured to determine the finger posture in the real hand operation that indicates the gesture type if the real hand operation belongs to a hand movement operation.
In yet another aspect, the present application further provides an electronic device. FIG. 6 shows a schematic structural diagram of the electronic device, which may be any type of electronic device and includes at least a processor 601 and a memory 602.
The processor 601 is configured to execute the virtual scene control method of any one of the above embodiments.
The memory 602 is configured to store the program required by the processor to perform its operations.
It can be understood that the electronic device may further include a display unit 603 and an input unit 604.
Of course, the electronic device may also have more or fewer components than those shown in FIG. 6, which is not limited herein.
In another aspect, the present application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the virtual scene control method of any one of the above embodiments.
The present application further provides a computer program comprising computer instructions stored in a computer-readable storage medium. When the computer program runs on an electronic device, it executes the virtual scene control method of any one of the above embodiments.
It can be understood that the terms "first", "second", "third", "fourth", and so on (if any) in the specification, the claims, and the above drawings are used to distinguish similar parts and are not necessarily used to describe a particular order or sequence. It should be understood that data so termed are interchangeable where appropriate, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated herein.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are the same or similar among the embodiments, reference may be made to one another. Meanwhile, the features recorded in the embodiments of this specification may be substituted for or combined with one another, so that those skilled in the art can implement or use the present application. Since the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
Finally, it should also be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between such entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or device that comprises the element.
The above description of the disclosed embodiments enables any person skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the application. Therefore, the present application is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above are only preferred embodiments of the present application. It should be pointed out that those of ordinary skill in the art may make further improvements and refinements without departing from the principles of the application, and such improvements and refinements shall also fall within the scope of protection of this application.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310153444.8A CN116126147A (en) | 2023-02-17 | 2023-02-17 | Virtual scene control method and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116126147A true CN116126147A (en) | 2023-05-16 |
Family
ID=86302754
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160239080A1 (en) * | 2015-02-13 | 2016-08-18 | Leap Motion, Inc. | Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments |
| CN106774907A (en) * | 2016-12-22 | 2017-05-31 | 腾讯科技(深圳)有限公司 | A kind of method and mobile terminal that virtual objects viewing area is adjusted in virtual scene |
| CN109976519A (en) * | 2019-03-14 | 2019-07-05 | 浙江工业大学 | A kind of interactive display unit and its interactive display method based on augmented reality |
| CN110766804A (en) * | 2019-10-30 | 2020-02-07 | 济南大学 | A method for human-machine cooperative grasping of objects in VR scene |
| WO2020252813A1 (en) * | 2019-06-20 | 2020-12-24 | 上海交通大学 | Double-layer adaptive inertia control method and device for inverter interfaced distributed generator |
| CN113144593A (en) * | 2021-03-19 | 2021-07-23 | 网易(杭州)网络有限公司 | Target aiming method and device in game, electronic equipment and storage medium |
| CN113325952A (en) * | 2021-05-27 | 2021-08-31 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device, medium and product for presenting virtual objects |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||