
HK1248851B - Display system and method - Google Patents

Display system and method

Info

Publication number
HK1248851B
Authority
HK
Hong Kong
Prior art keywords
pixels
frame
end user
virtual
head
Prior art date
Application number
HK18108141.1A
Other languages
Chinese (zh)
Other versions
HK1248851A1 (en)
Inventor
Brian T. Schowengerdt
Samuel A. Miller
Original Assignee
Magic Leap, Inc.
Priority date
Filing date
Publication date
Application filed by Magic Leap, Inc.
Publication of HK1248851A1
Publication of HK1248851B

Description

Display system and method

This application is a divisional application of the PCT application with international application number PCT/US2014/028977, international filing date March 14, 2014, Chinese national application number 201480027589.2, and title "Display System and Method".

Technical Field

The present invention generally relates to systems and methods configured to facilitate interactive virtual or augmented reality environments for one or more users.

Background Art

Many display systems can benefit from information regarding the viewer's or user's head pose (i.e., the orientation and/or position of the user's head).

For example, a head-mounted display (or helmet-mounted display, or smart glasses) is at least loosely coupled to the user's head, and therefore moves when the user's head moves. If the user's head motion is detected by the display system, the displayed data can be updated to account for the change in head pose.

As an example, if a user wearing a head-mounted display views a virtual representation of a 3D object on the display and walks around the area where the 3D object appears, the 3D object can be re-rendered for each viewpoint, giving the user the perception that he or she is walking around an object that occupies real space. If the head-mounted display is used to present a virtual space containing multiple objects (e.g., a rich virtual world), measurements of head pose can be used to re-render the scene to match the dynamically changing position and orientation of the user's head, and to provide an enhanced sense of immersion in the virtual space.
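The pose-dependent re-rendering described above can be sketched roughly as follows. This is an illustrative model only, not the patent's implementation: the pose representation (yaw/pitch angles), the function names, and the assumed focal length are all hypothetical.

```python
import math

def render_for_pose(scene_objects, yaw, pitch, fx=500.0):
    """Re-project each virtual object for the current head pose.

    scene_objects: list of (name, object_yaw, object_pitch) in radians,
    giving each object's fixed direction in the world. Returns screen-space
    offsets so objects appear world-stable: as the head turns right
    (positive yaw), objects shift left on screen. fx is an assumed focal
    length in pixels.
    """
    rendered = []
    for name, obj_yaw, obj_pitch in scene_objects:
        # Angular offset between the object's direction and the gaze direction,
        # converted to a pixel offset via a pinhole-style projection.
        dx = fx * math.tan(obj_yaw - yaw)
        dy = fx * math.tan(obj_pitch - pitch)
        rendered.append((name, dx, dy))
    return rendered

# A cube straight ahead; the head then turns 0.1 rad to the right,
# so the cube must be re-rendered shifted to the left.
scene = [("cube", 0.0, 0.0)]
before = render_for_pose(scene, 0.0, 0.0)
after = render_for_pose(scene, 0.1, 0.0)
```

Each detected pose change simply re-runs the projection, which is why tracking latency translates directly into apparent object motion.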

Especially for display systems that fill a substantial portion of the user's field of view with virtual elements, high head-tracking accuracy and low overall system latency (from the first detection of head movement to the updating of the light delivered by the display to the user's visual system) are critical. If latency is high, the system can create a mismatch between the user's vestibular and visual sensory systems and produce motion sickness or virtual reality sickness.

Some head-mounted displays enable simultaneous viewing of real and virtual elements, an approach often described as augmented reality or mixed reality. In one such configuration, often referred to as a "video see-through" display, a camera captures elements of the real scene, a computing system overlays virtual elements onto the captured real scene, and a non-transparent display presents the composite image to the eyes. In another configuration, often referred to as an "optical see-through" display, the user can look through transparent (or semi-transparent) elements in the display system to directly view the light from real objects in the environment. The transparent element, often referred to as a "combiner," superimposes light from the display onto the user's view of the real world.

In both video and optical see-through displays, detection of head pose enables the display system to render virtual objects such that they appear to occupy space in the real world. As the user's head moves in the real world, the virtual objects are re-rendered as a function of head pose so that they appear to remain stable relative to the real world. In the case of an optical see-through display, the user's view of the real world has essentially zero latency, while his or her view of the virtual objects has a latency that depends on head-tracking rate, processing time, rendering time, and display frame rate. If system latency is high, the apparent location of virtual objects will appear unstable during rapid head movements.

In addition to head-mounted display systems, other display systems can benefit from accurate, low-latency head pose detection. These include head-tracked display systems in which the display is not worn on the user's body but is, for example, mounted on a wall or other surface. The head-tracked display acts like a window onto a scene, and as the user moves his or her head relative to the "window," the scene is re-rendered to match the user's changing viewpoint. Other systems include head-mounted projection systems, in which a head-mounted display projects light onto the real world.

Summary of the Invention

Embodiments of the present invention are directed to devices, systems, and methods that facilitate interaction of one or more users with virtual reality and/or augmented reality.

One embodiment is directed to a method of operating in a virtual image system or an augmented reality system, the method comprising, for each of at least some of a plurality of frames presented to an end user, determining a location of appearance of a virtual object in the end user's field of view relative to an end user frame of reference, and adjusting a presentation of at least one subsequent frame based at least in part on the determined location of appearance of the virtual object in the end user's field of view. The virtual object may be newly introduced into the end user's field of view temporally relative to frames previously presented to the end user. The newly introduced virtual object may be determined to be likely to attract the end user's attention. The virtual object may be in a new position in the frame relative to a position in at least one previous frame. Alternatively, the virtual object may be in a new position as presented to the end user relative to a previous position of the virtual object as previously presented to the end user.

The method may further comprise selecting the virtual object based on input indicative of the end user's attention to the virtual object. The input indicative of the end user's attention to the virtual object may be based at least in part on an appearance of the virtual object in a new position as presented to the end user relative to a position of the virtual object as previously presented to the end user. Alternatively, the input indicative of the end user's attention to the virtual object may be based at least in part on how quickly the position of the virtual object as presented to the end user changes relative to the position of the virtual object as previously presented to the end user.

Adjusting the presentation of the at least one subsequent frame may include presenting the at least one subsequent frame with its center shifted toward the determined location of appearance of the virtual object in the end user's field of view. Alternatively, adjusting the presentation of the at least one subsequent frame may include presenting the at least one subsequent frame with its center shifted to the determined location of appearance of the virtual object in the end user's field of view.

The method may further comprise predicting an occurrence of end-user head movement based at least in part on the determined location of appearance of the virtual object in the end user's field of view. The method may further comprise estimating at least one value indicative of an estimated speed of the predicted end-user head movement, determining at least one value that at least partially compensates for the estimated speed of the predicted end-user head movement, and rendering at least one subsequent frame based at least in part on the determined value.
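One way to picture the speed-compensation step is the following sketch. It is a hypothetical model, not the claimed method: the proportional velocity model, the 2.0 rad/s peak speed, and the 20 ms latency figure are all assumed values for illustration.

```python
def predict_head_velocity(target_angle, current_angle, peak_velocity=2.0):
    """Estimate the angular velocity (rad/s) of a head movement toward a
    virtual object's determined location of appearance.

    Simple proportional model: predicted speed scales with the remaining
    rotation and is capped at an assumed typical peak velocity.
    """
    remaining = target_angle - current_angle
    return max(-peak_velocity, min(peak_velocity, 4.0 * remaining))

def compensated_render_angle(current_angle, predicted_velocity, latency_s=0.02):
    """Shift the render viewpoint forward along the predicted motion so that
    the frame lands where the head is expected to be after the rendering
    pipeline's latency."""
    return current_angle + predicted_velocity * latency_s

# A virtual object appears 0.5 rad to the right; predict the head movement
# it will provoke and render the next frame for the compensated viewpoint.
v = predict_head_velocity(target_angle=0.5, current_angle=0.0)
angle = compensated_render_angle(0.0, v)
```

The same structure extends to the acceleration-compensating variant: estimate a second value for acceleration and fold it into the viewpoint extrapolation.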

The method may further comprise estimating at least one change in speed within the predicted end-user head movement, wherein the at least one change in speed occurs between a start of the predicted head movement and an end of the predicted head movement, and wherein estimating the at least one value indicative of the estimated speed of the predicted head movement includes estimating at least one value indicative of the predicted speed that at least partially accommodates the estimated change in speed within the predicted end-user head movement.

Estimating at least one change in speed within the predicted end-user head movement may include estimating at least one change between a first defined time after the predicted head movement starts and a second defined time before the predicted head movement ends.

The method may further comprise estimating at least one value indicative of an estimated acceleration of the predicted end-user head movement, determining at least one value that at least partially compensates for the estimated acceleration of the predicted end-user head movement, and rendering at least one subsequent frame based at least in part on the determined value.

The method may further comprise receiving information indicative of an identity of the end user, and retrieving at least one user-specific historical attribute for the end user based on the received information indicative of the identity of the end user, wherein the user-specific historical attribute is indicative of at least one of a previous head movement speed for the end user, a previous head movement acceleration for the end user, and a previous eye movement to head movement relationship for the end user.

The virtual object may be at least one of a virtual text object, a virtual numeric object, a virtual alphanumeric object, a virtual tag object, a virtual field object, a virtual chart object, a virtual map object, a virtual instrumentation object, or a virtual visual representation of a physical object.

Another embodiment is directed to a method of operating in an augmented reality system, the method comprising receiving information indicative of an identity of the end user, retrieving at least one user-specific historical attribute for the end user based at least in part on the received information indicative of the identity of the end user, and providing frames to the end user based at least in part on the retrieved at least one user-specific historical attribute for the end user. The received information may be image information indicative of an image of at least a portion of an eye of the end user.

The retrieved at least one user-specific historical attribute for the end user may be at least one attribute that provides an indication of at least one head movement attribute for the end user, the head movement attribute indicative of at least one previous head movement of the end user. Or the retrieved at least one user-specific historical attribute for the end user may be at least one attribute that provides an indication of at least one previous head movement speed for at least one previous head movement of the end user. Alternatively, the retrieved at least one user-specific historical attribute for the end user may be at least one attribute that provides an indication of a variation in head movement speed across at least a portion of a range of at least one previous head movement of the end user. Alternatively, the retrieved at least one user-specific historical attribute for the end user may be at least one attribute that provides an indication of at least one previous head movement acceleration for at least one previous head movement of the end user. Alternatively, the retrieved at least one user-specific historical attribute for the end user may be at least one attribute that provides an indication of a relationship between at least one previous head movement and at least one previous eye movement of the end user. Alternatively, the retrieved at least one user-specific historical attribute for the end user may be at least one attribute that provides an indication of a ratio between at least one previous head movement and at least one previous eye movement of the end user.

The method may further comprise predicting at least one endpoint of an end-user head movement, and providing frames to the end user based at least in part on the retrieved at least one user-specific historical attribute for the end user, including rendering at least one subsequent frame to at least one image buffer, the at least one subsequent frame shifted toward the predicted endpoint of the head movement.

The method may further comprise rendering a plurality of subsequent frames shifted toward the predicted endpoint of the head movement in a manner that at least partially accommodates at least one head movement attribute for the end user, the head movement attribute indicative of at least one previous head movement of the end user.

The head movement attribute indicative of at least one previous head movement of the end user may be a historical head movement speed for the end user, a historical head movement acceleration for the end user, or a historical ratio between head movement and eye movement for the end user.

The method may further comprise predicting an occurrence of end-user head movement based at least in part on a location of appearance of the virtual object in the end user's field of view. The location of appearance of the virtual object may be determined in the same manner as described above.

Another embodiment is directed to detecting an indication that a spacing between some pixels in a frame as presented to an end user will differ from a spacing between other pixels in the frame, adjusting a first set of pixels based on the detected indication, and providing at least a portion of at least one subsequent frame with the adjusted first set of pixels to at least partially compensate for the difference in spacing as presented to the end user. The pixel characteristics (e.g., size, brightness) may be perceptible to the end user.

The method may further comprise selecting a first set of pixels of the frame based on a direction of the detected head movement, wherein the direction of the first set of pixels is the same as the direction of the detected head movement, and increasing a size of the first set of pixels of the at least one subsequent frame. The method may further comprise selecting a first set of pixels of the frame based on a direction of the detected head movement, wherein the direction of the first set of pixels is the same as the direction of the detected head movement, and increasing a brightness of the first set of pixels of the at least one subsequent frame in response to the detected head movement.

The method may further comprise selecting a first set of pixels of the frame based on a direction of the detected head movement, wherein the direction of the first set of pixels is opposite to the direction of the detected head movement, and decreasing a size of the first set of pixels of the at least one subsequent frame in response to the detected head movement.

The method may further comprise selecting a first set of pixels of the frame based on a direction of the detected head movement, wherein the direction of the first set of pixels is opposite to the direction of the detected head movement, and decreasing a brightness of the first set of pixels of the at least one subsequent frame in response to the detected head movement.
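The four pixel adjustments above (grow/brighten pixels aligned with the head motion, shrink/dim pixels opposed to it) can be combined in one pass, as in the following sketch. The data layout, gain values, and function name are illustrative assumptions, not the claimed implementation.

```python
def adjust_pixels(pixels, head_dir, size_gain=0.2, brightness_gain=0.2):
    """Scale pixel size and brightness up in the direction of head motion
    and down opposite to it, to partially compensate for the perceived
    change in pixel spacing during the movement.

    pixels: list of dicts with 'dir' (+1 same side as head motion, -1
    opposite side), 'size', and 'brightness'. Gains are assumed values.
    """
    adjusted = []
    for p in pixels:
        aligned = p["dir"] * head_dir > 0
        sfactor = (1 + size_gain) if aligned else (1 - size_gain)
        bfactor = (1 + brightness_gain) if aligned else (1 - brightness_gain)
        adjusted.append({
            "dir": p["dir"],
            "size": p["size"] * sfactor,
            "brightness": p["brightness"] * bfactor,
        })
    return adjusted

# One pixel on each side of the frame; the head moves in the +1 direction.
frame = [{"dir": +1, "size": 1.0, "brightness": 1.0},
         {"dir": -1, "size": 1.0, "brightness": 1.0}]
adjusted = adjust_pixels(frame, head_dir=+1)
```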

Another embodiment is directed to a method of operating in a virtual image presentation system, the method comprising rendering a first complete frame to an image buffer, wherein the first complete frame includes pixel information for sequential presentation of pixels to form an image of a virtual object, starting a presentation of the first complete frame, and dynamically interrupting the presentation of the first complete frame before completion of the presentation by presenting an update to the first complete frame, in which update a portion of the pixel information has changed from the first complete frame.

Another embodiment is directed to a method of operating in a virtual image presentation system, the method comprising rendering a first complete frame having a first field and a second field to an image buffer, wherein the first field includes at least a first spiral scan line and the second field includes at least a second spiral scan line, the second spiral scan line interlaced with at least the first spiral scan line; reading out of the frame buffer which stores the first complete frame; and dynamically interrupting the reading out of the first complete frame before completion of the reading by reading out an update to the first complete frame, in which update a portion of the pixel information has changed from the first complete frame. The dynamic interruption of the reading out may be based on detected end-user head movement, wherein the detected head movement exceeds a nominal head movement value.
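The mid-readout interruption can be sketched as below. This is a minimal illustrative model, not the claimed scan hardware: pixels stand in for scan lines, and the nominal motion threshold and sampling scheme are assumptions.

```python
def read_out_frame(frame_pixels, head_motion_samples, nominal=0.5,
                   updated_frame=None):
    """Sequentially read out a frame; if a sampled head-motion magnitude
    exceeds the nominal value mid-readout, abandon the remainder of the
    first frame and finish with the updated frame's pixels instead.

    frame_pixels / updated_frame: lists of pixel values of equal length.
    head_motion_samples: one motion magnitude per readout step.
    Returns the sequence of pixels actually presented.
    """
    presented = []
    for i, px in enumerate(frame_pixels):
        if head_motion_samples[i] > nominal and updated_frame is not None:
            # Dynamic interruption: switch to the update for the rest.
            presented.extend(updated_frame[i:])
            return presented
        presented.append(px)
    return presented

first = ["a0", "a1", "a2", "a3"]
update = ["b0", "b1", "b2", "b3"]
motion = [0.1, 0.2, 0.9, 0.1]  # head-motion spike at step 2
shown = read_out_frame(first, motion, updated_frame=update)
```

The point of the interruption is that stale pixel information is never presented once the head movement invalidates it, rather than waiting a full frame period.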

Another embodiment is directed to a method of operating in a virtual image presentation system, the method comprising rendering a first complete frame having a first field and a second field to an image buffer, wherein the first field includes at least a first Lissajous scan line and the second field includes at least a second Lissajous scan line, the second Lissajous scan line interlaced with at least the first Lissajous scan line; reading out of the frame buffer which stores the first complete frame; and dynamically interrupting the reading out of the first complete frame before completion of the reading by reading out an update to the first complete frame, in which update a portion of the pixel information has changed from the first complete frame, the dynamic interruption based on detected end-user head movement exceeding a nominal head movement value. The method may further comprise phase shifting the Lissajous scan lines to interlace the Lissajous scan lines.

Another embodiment is directed to a method of operating in a virtual image presentation system, the method comprising, for each of a plurality of frames, determining a respective resolution for each of at least two portions of the respective frame in response to detected end-user head movement, and presenting the virtual object based on the determined respective resolutions of the at least two portions of the respective frame. The portion of the respective frame may be at least one of a field of the frame, a line of the frame, and a pixel of the frame. The method may further comprise adjusting a characteristic of a drive signal between presenting a first portion of the frame and a second portion of the frame to create a variable resolution in the image of the virtual object. The characteristic of the drive signal may be at least one of an amplitude of the drive signal and a slope of the drive signal.

The method may further comprise assessing a point of attention in at least a first image for the end user based on at least one of processed eye tracking data, a determined location of appearance of a virtual object in the end user's field of view relative to an end user frame of reference, a determined location of appearance of the virtual object when newly introduced into the end user's field of view, and a determined location of appearance of the virtual object in a new position in the image relative to a position of the virtual object in at least one previous image.

The method may further comprise increasing the resolution in a portion of at least one subsequent image that is at least proximate to the assessed point of attention, relative to other portions of the at least one subsequent image. The method may further comprise decreasing the resolution in a portion of the at least one subsequent image that is at least distal to the assessed point of attention, relative to other portions of the at least one subsequent image.
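This attention-centered resolution allocation (a foveation-style scheme) can be sketched in one dimension as follows. The resolution levels, radius, and column-wise layout are illustrative assumptions rather than the claimed drive-signal mechanism.

```python
def resolution_map(width, attention_x, high=4, low=1, radius=2):
    """Assign a per-column resolution level: full resolution for columns
    near the assessed point of attention, reduced resolution for columns
    distal to it. All numeric values are assumed for illustration."""
    return [high if abs(x - attention_x) <= radius else low
            for x in range(width)]

# Attention assessed at column 3 of a 10-column image.
levels = resolution_map(width=10, attention_x=3)
```

In the scanned-display context described above, the same effect would be obtained by modulating the drive signal's amplitude or slope as the scan passes near versus far from the point of attention.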

Another embodiment is directed to a method of operating in a virtual image presentation system, the method comprising displaying at least one virtual object to an end user, and temporarily blanking a portion of the display of the at least one virtual object when at least one of a detected head movement exceeds a nominal head movement value and a predicted head movement is predicted to exceed a nominal head movement value. The method may further comprise processing head tracking data supplied via at least one sensor to determine at least one of the detected head movement and the predicted head movement, wherein the head tracking data is indicative of at least a head orientation of the end user.

Another embodiment is directed to a projector apparatus to project at least a virtual image in an augmented reality system, the projector apparatus comprising a projector element, a support that supports the projector element with the projector element movable in at least one axis of freedom, at least one actuator coupled to selectively move the projector element, and a control subsystem communicatively coupled to control the actuator such that the projector element is moved in response to at least one of a detection of end-user head movement that exceeds a nominal head movement value and a prediction of end-user head movement that is predicted to exceed the nominal head movement value. The projector element may further comprise at least a first optical fiber, the first optical fiber having a back end and a front end, the back end coupled to receive images, the front end positioned to transmit images therefrom.

The support element may comprise a piezoelectric collar that receives at least the first optical fiber proximate to, but spaced rearwardly from, the front end of the first optical fiber, such that a portion of the first optical fiber proximate its front end extends from the piezoelectric collar and is free to oscillate with a defined resonance frequency.

In the projector apparatus, the control subsystem may be communicatively coupled to receive head tracking data supplied via at least one sensor, the head tracking data indicative of at least a head orientation of the end user. For each of at least some of a plurality of images presented to the end user, the control subsystem determines a location of appearance of a virtual object in the end user's field of view relative to an end user frame of reference, assesses whether the determined location requires the end user to turn his or her head, and predicts an occurrence of head movement based on the assessment.

Another embodiment is directed to a method of operating in a virtual image presentation system, the method comprising over-rendering a frame for a defined field of view such that the pixel information for a set of pixels of the frame exceeds the maximum area of display at maximum resolution, determining a portion of the frame to present to an end user based on at least one of a detected head movement and a predicted head movement, and selectively reading out only the determined portion of the frame.
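The over-render-then-crop idea can be sketched as follows: render a buffer larger than the display, then read out only a display-sized window shifted along the (detected or predicted) head movement. The window-selection logic and pixel layout here are illustrative assumptions.

```python
def select_window(overrendered, display_w, display_h, head_dx, head_dy):
    """Read out only the display-sized window of an over-rendered frame,
    shifted in the direction of the detected or predicted head movement.

    overrendered: 2D list (rows of pixel values) larger than the display
    in both dimensions. head_dx / head_dy: window shift in pixels,
    clamped so the window stays inside the over-rendered frame.
    """
    h = len(overrendered)
    w = len(overrendered[0])
    # Center the window, then offset it by the head-movement shift.
    x0 = max(0, min(w - display_w, (w - display_w) // 2 + head_dx))
    y0 = max(0, min(h - display_h, (h - display_h) // 2 + head_dy))
    return [row[x0:x0 + display_w] for row in overrendered[y0:y0 + display_h]]

# A 6x6 over-rendered frame feeding a 4x4 display; the head shifts right.
big = [[10 * r + c for c in range(6)] for r in range(6)]
view = select_window(big, display_w=4, display_h=4, head_dx=1, head_dy=0)
```

Because the excess pixels already exist in the buffer, the viewpoint shift costs only a readout-offset change rather than a full re-render.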

Another embodiment is directed to a user display device comprising a housing frame mountable on the head of a user, a lens mountable on the housing frame, and a projection subsystem coupled to the housing frame to determine a location of appearance of a display object in the user's field of view based at least in part on at least one of a detection of user head movement and a prediction of user head movement, and to project the display object to the user based on the determined location of appearance of the display object. The location of appearance of the display object may be moved in response to at least one of the detection of user head movement or the prediction of user head movement exceeding, or being predicted to exceed, a nominal head movement value. The prediction of user head movement may be based on a prediction of a shift in the user's focus or on a set of historical attributes of the user.

The user display device may further comprise a first pair of cameras mountable on the housing frame to track movement of the user's eyes and to estimate a depth of focus of the user's eyes based on the tracked eye movement. The projection subsystem may project the display object based on the estimated depth of focus.

The user display device may further comprise a second pair of cameras mountable on the housing frame to capture a field of view image as seen by the user's eyes, wherein the field of view image contains at least one physical object. The projection subsystem may project the display object in a manner such that the display object and the physical object captured through the second pair of cameras are intermixed and appear together in the same frame. The location of appearance may be based at least in part on the physical object. The display object and the physical object may have a predetermined relationship. The captured field of view image may be used to gather information regarding movements of the user's head, wherein the information regarding movements of the user's head includes a center of attention of the user, an orientation of the user's head, a direction of the user's head, a speed of movement of the user's head, an acceleration of the user's head, and a distance of the user's head in relation to the user's local environment.

透镜可以包括至少一个透明表面以选择性地允许透射光,使得用户能够观察本地环境。投影子系统可以以一种方式投影显示对象,使得用户观察显示对象和本地环境如同通过透镜的透明表面进行观察。The lens may include at least one transparent surface to selectively allow transmission of light, enabling the user to observe the local environment. The projection subsystem may project the display object in a manner such that the user observes the display object and the local environment as if observing through the transparent surface of the lens.

用户显示设备还可以包括至少一个惯性传感器以捕获一组表示用户头部运动的惯性测量,其中该组惯性测量包括用户头部运动的速度、用户头部运动的加速度、用户头部运动的方向、用户头部的方位以及用户头部的朝向。The user display device may also include at least one inertial sensor to capture a set of inertial measurements representing the user's head movement, wherein the set of inertial measurements includes the speed of the user's head movement, the acceleration of the user's head movement, the direction of the user's head movement, the orientation of the user's head, and the heading of the user's head.

用户显示设备可以进一步包括至少一个光源以照亮用户的头部和用户的本地环境中的至少一个。The user display device may further include at least one light source to illuminate at least one of the user's head and the user's local environment.

投影子系统可以调整与显示对象相关联的一组像素的所感知尺寸、亮度和分辨率中的至少一个以补偿检测到的头部移动和所预测的头部运动中的至少一个。显示对象可以是虚拟对象和增强虚拟对象中的一个。The projection subsystem may adjust at least one of a perceived size, brightness, and resolution of a set of pixels associated with a display object to compensate for at least one of the detected head movement and the predicted head motion. The display object may be one of a virtual object and an augmented virtual object.

本发明的另外的和其他的目标、特征、和优点在详细描述、附图和权利要求中被描述。Additional and other objects, features, and advantages of the present invention are described in the detailed description, drawings, and claims.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

图1示出了使用预测头部追踪来为最终用户渲染帧的示例。Figure 1 shows an example of using predictive head tracking to render a frame for an end user.

图2示出了基于呈现给最终用户的虚拟对象的特征来预测头部运动的技术示例。FIG2 illustrates an example of a technique for predicting head motion based on features of a virtual object presented to an end user.

图3示出了帧中心被偏移到的位置的示例。FIG. 3 shows an example of where the frame center is shifted.

图4示出了基于最终用户的一组历史属性来预测头部运动的技术示例。FIG4 illustrates an example of a technique for predicting head motion based on a set of historical attributes of an end user.

图5示出了基于历史属性预测头部运动的技术的另一示例。FIG5 shows another example of a technique for predicting head motion based on historical attributes.

图6示出了检索用户的多种历史属性的示例。FIG. 6 shows an example of retrieving various historical attributes of a user.

图7示出了基于预测的终点来渲染后续帧的示例。FIG. 7 shows an example of rendering subsequent frames based on a predicted endpoint.

图8示出了渲染后续帧的另一示例。FIG8 shows another example of rendering subsequent frames.

图9示出了预测头部运动的发生的示例。FIG9 shows an example of predicting the occurrence of head motion.

图10示出了基于头部运动来调节像素的示例。FIG10 shows an example of adjusting pixels based on head motion.

图11示出了使用调整后的像素渲染帧的示例。Figure 11 shows an example of rendering a frame using adjusted pixels.

图12示出了增加像素的尺寸和/或亮度的示例。FIG. 12 shows an example of increasing the size and/or brightness of a pixel.

图13示出了动态中断帧的呈现的示例。FIG. 13 shows an example of presentation of a dynamic interruption frame.

图14示出了呈现更新后的帧的一部分的示例。FIG. 14 shows an example of presenting a portion of an updated frame.

图15示出了读取更新帧的示例。FIG15 shows an example of reading an update frame.

图16示出了相移的示例。FIG16 shows an example of phase shift.

图17示出了导致图像中可变分辨率的示例。FIG17 shows an example resulting in variable resolution in an image.

图18示出了调整驱动信号的振幅的示例。FIG. 18 shows an example of adjusting the amplitude of the drive signal.

图19示出了基于最终用户的注意点调整后续图像中的分辨率的示例。FIG. 19 shows an example of adjusting the resolution in subsequent images based on the end user's attention point.

图20示出了调整分辨率的另一示例。FIG. 20 shows another example of adjusting the resolution.

图21示出了确定虚拟对象出现的位置的示例。FIG. 21 shows an example of determining the position at which a virtual object appears.

图22示出了消隐显示虚拟对象的一部分的示例。FIG. 22 shows an example of hiding a portion of a virtual object.

图23示出了基于虚拟对象的吸引力来预测头部运动的示例。FIG. 23 shows an example of predicting head motion based on the attractiveness of a virtual object.

图24示出了频闪的示例。FIG24 shows an example of strobing.

图25示出了有选择地激活执行机构以移动投影机组件的示例。Figure 25 shows an example of selectively activating an actuator to move the projector assembly.

图26示出了选择性地读出帧的一部分的示例。FIG. 26 shows an example of selectively reading out a portion of a frame.

图27示出了基于已确定的虚拟对象的位置选择性地读出一部分的示例。FIG. 27 shows an example of selectively reading out a portion based on the determined position of the virtual object.

图28示出了选择性地读出一部分的另一示例。FIG. 28 shows another example of selectively reading out a portion.

图29示出了确定图像的一部分以呈现给最终用户的示例。FIG. 29 illustrates an example of determining a portion of an image to present to an end user.

图30示出了动态地处理过渲染的帧的一部分的示例。FIG. 30 shows an example of dynamically processing a portion of an over-rendered frame.

图31示出了具有像素信息的帧的示例。FIG. 31 shows an example of a frame with pixel information.

图32示出了光栅扫描图样的示例。FIG32 shows an example of a raster scan pattern.

图33示出了螺旋扫描图样的示例。FIG33 shows an example of a spiral scan pattern.

图34示出了利萨茹扫描图样。FIG34 shows a Lissajous scan pattern.

图35示出了多场螺旋扫描图样的示例。FIG35 shows an example of a multi-field spiral scan pattern.

图36A示出了在最终用户的头部快速横向移动期间光栅扫描图样失真的示例。FIG. 36A shows an example of raster scan pattern distortion during rapid lateral movement of the end user's head.

图36B示出了在最终用户的头部垂直向上移动期间光栅扫描图样失真的示例。FIG36B shows an example of raster scan pattern distortion during vertical upward movement of the end user's head.

图37A示出了在最终用户的头部向左侧迅速地横向运动期间螺旋扫描线失真的示例。FIG. 37A shows an example of spiral scanline distortion during a rapid lateral motion of the end user's head to the left.

图37B示出了在用户的头部向左侧非常迅速地横向运动期间螺旋扫描线失真的示例。FIG. 37B shows an example of spiral scanline distortion during a very rapid lateral motion of the user's head to the left.

图38示出了虚拟图像生成系统的概述。FIG38 shows an overview of the virtual image generation system.

具体实施方式DETAILED DESCRIPTION

下面的说明涉及在虚拟现实和/或增强现实系统中使用的显示系统和方法。然而,应理解,尽管本发明很适合于虚拟现实中的应用,但本发明在其最广泛的方面可不受此限制。The following description relates to display systems and methods for use in virtual reality and/or augmented reality systems. However, it should be understood that while the invention lends itself well to applications in virtual reality, the invention in its broadest aspects may not be so limited.

首先参照图38,图38根据一个所示出的实施例示出了虚拟图像生成系统3800,其可以操作以向最终用户3802提供虚拟映像。Reference is first made to FIG. 38 , which illustrates a virtual image generation system 3800 that is operable to provide a virtual image to an end user 3802 , according to one illustrated embodiment.

虚拟图像生成系统3800可以被作为增强现实系统操作,其在最终用户的视场中提供与物理对象混合的虚拟物体图像。当将虚拟图像生成系统3800作为增强现实系统操作时,有两种基本方法。第一种方法采用一个或多个成像器(例如,摄像机)以捕获周围环境的图像。虚拟图像生成系统3800可以将虚拟对象的图像混合进代表周围环境的图像的数据中。第二种方法采用一个或多个通过其可以看到周围环境的至少部分透明的表面,并且虚拟图像生成系统3800在其上产生虚拟物体的图像。如将对本领域技术人员显而易见的,至少一些本文所描述的方面特别适合于增强现实系统。The virtual image generation system 3800 can be operated as an augmented reality system that provides images of virtual objects mixed with physical objects in the end user's field of view. When operating the virtual image generation system 3800 as an augmented reality system, there are two basic approaches. The first approach uses one or more imagers (e.g., cameras) to capture images of the surrounding environment. The virtual image generation system 3800 can blend images of the virtual objects into the data representing the images of the surrounding environment. The second approach uses one or more at least partially transparent surfaces through which the surrounding environment can be seen, and the virtual image generation system 3800 generates images of virtual objects thereon. As will be apparent to those skilled in the art, at least some of the aspects described herein are particularly well suited for augmented reality systems.

虚像生成系统3800可以被作为虚拟现实系统操作,提供虚拟环境中的虚拟对象的图像。The virtual image generation system 3800 may be operated as a virtual reality system, providing images of virtual objects in a virtual environment.

虚像生成系统3800以及本文所教导的各种技术可以被用在除增强现实和虚拟现实系统之外的应用中。例如,各种技术可被应用到任何投影或显示系统。例如,本文描述的各种技术可应用到微型投影机,其中运动可以是最终用户的手的运动而不是头部运动。因此,尽管本文经常就增强现实系统而言进行描述,但教导不应受限于这种系统或这种用途。The virtual image generation system 3800 and the various techniques taught herein can be used in applications other than augmented reality and virtual reality systems. For example, the various techniques can be applied to any projection or display system. For example, the various techniques described herein can be applied to a pico projector, where the motion can be the end user's hand motion rather than head motion. Therefore, although this document is often described in terms of augmented reality systems, the teachings should not be limited to such systems or such uses.

至少对于增强现实应用,需要在最终用户3802的视场中放置相对于相应的物理对象的各种虚拟对象。本文也称为虚拟标签或标签或标注的虚拟对象可以采用任何各种各样的形式,主要为能够被呈现为图像的任何种类的数据、信息、概念或逻辑结构。虚拟对象的非限制性示例可以包括:虚拟文本对象、虚拟数字对象、虚拟字母数字对象、虚拟标签对象、虚拟场对象、虚拟图表对象、虚拟地图对象、虚拟工具对象或物理对象的虚拟视觉表示。At least for augmented reality applications, it is necessary to place various virtual objects relative to corresponding physical objects in the field of view of the end user 3802. Virtual objects, also referred to herein as virtual labels, tags, or annotations, can take any of a variety of forms, primarily any type of data, information, concept, or logical structure that can be presented as an image. Non-limiting examples of virtual objects may include: virtual text objects, virtual numeric objects, virtual alphanumeric objects, virtual label objects, virtual field objects, virtual chart objects, virtual map objects, virtual tool objects, or virtual visual representations of physical objects.

头部追踪精度和延迟一直是虚拟现实及增强现实系统中的问题。追踪不准确和延迟产生最终用户的视觉系统和前庭系统之间的不一致。这可能会导致恶心和不适。这在填充最终用户的大部分视场的显示系统中特别成问题。解决这些的途径可能包括提高帧速率或有效帧速率,例如通过闪动或闪烁或通过其他技术。如本文所描述的,可以用预测性头部追踪来解决这些问题,例如通过减少延迟。预测性头部追踪可依赖于大量的不同因素或方法,包括用于特定最终用户的历史数据或属性。本文还描述了显示或呈现的消隐可以被有效地使用,例如在快速头部运动时消隐。Head tracking accuracy and latency have been issues in virtual reality and augmented reality systems. Tracking inaccuracies and latency create inconsistencies between the end user's visual system and vestibular system. This can lead to nausea and discomfort. This is particularly problematic in display systems that fill a large portion of the end user's field of view. Approaches to addressing these issues may include increasing the frame rate or effective frame rate, such as by flickering or blinking or through other techniques. As described herein, predictive head tracking can be used to address these issues, such as by reducing latency. Predictive head tracking can rely on a number of different factors or methods, including historical data or attributes for a particular end user. It is also described herein that blanking of displays or presentations can be effectively used, such as blanking during rapid head movements.
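预测性头部追踪可采用多种方法;作为示意,下面的Python草图给出其中最简单的一种,即恒速外推预测器,将头部姿态沿最近的角速度外推显示流水线的延迟时间。其中的延迟数值与函数名均为假设。Predictive head tracking can use many methods; as an illustration, the hypothetical Python sketch below shows one of the simplest, a constant-velocity extrapolator that projects the head pose forward by the display pipeline latency. The latency value and function names are assumptions.

```python
# Hypothetical sketch of predictive head tracking: extrapolate the head
# orientation forward by the display pipeline latency so the rendered frame
# better matches where the head will be, reducing the visual/vestibular
# mismatch described above. A constant-velocity predictor is only one of
# many possible methods.


def predict_head_yaw(samples, latency_s):
    """Predict head yaw `latency_s` seconds ahead of the newest sample.

    samples: list of (timestamp_s, yaw_deg), oldest first, at least two.
    """
    (t0, y0), (t1, y1) = samples[-2], samples[-1]
    velocity = (y1 - y0) / (t1 - t0)   # deg/s over the last interval
    return y1 + velocity * latency_s   # linear extrapolation
```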

至少对于增强的现实应用,与物理对象有空间关系的虚拟对象的放置(例如,被呈现以在两个或三个维度出现空间上接近物理对象)是重要的问题。例如,考虑到周围环境,头部运动可显著地使视场内的虚拟对象的放置复杂化。无论是被捕获的周围环境的图像并随后被投影或显示给最终用户3802的视场,还是最终用户3802直接感知的周围环境的视场,均是如此。例如,头部运动将可能导致最终用户3802的视场的改变,其将可能需要对显示在最终用户3802视场中的各种虚拟对象的位置进行更新。此外,头部运动可能会在种类繁多的范围和速度中发生。不仅在不同头部运动之间头部运动速度可能会发生变化,而且跨单个头部运动范围或者在单个头部运动范围之内头部运动速度也可能会发生变化。例如,头的移动速度可最初从起点增加(例如,线性地或非线性地),并且当到达终点时可以降低,在头部运动的起点和终点之间的某处获得最大速度。快速的头部运动甚至可能超过特定显示或投影技术向最终用户3802渲染出现均匀和/或如平滑的运动的图像的能力。At least for augmented reality applications, the placement of virtual objects that have a spatial relationship with physical objects (e.g., rendered to appear spatially close to physical objects in two or three dimensions) is an important issue. For example, head movement can significantly complicate the placement of virtual objects within the field of view, given the surrounding environment. This is true whether the image of the surrounding environment is captured and subsequently projected or displayed to the field of view of the end user 3802, or the field of view of the surrounding environment directly perceived by the end user 3802. For example, head movement will likely cause a change in the field of view of the end user 3802, which may require updating the positions of various virtual objects displayed in the field of view of the end user 3802. In addition, head movement may occur in a wide variety of ranges and speeds. Not only may the speed of the head movement vary between different head movements, but it may also vary across a single head movement range or within a single head movement range. For example, the speed of the head movement may initially increase (e.g., linearly or nonlinearly) from a starting point and may decrease when reaching the end point, with the maximum speed being achieved somewhere between the starting and end points of the head movement. Rapid head movements may even exceed the ability of a particular display or projection technology to render an image to the end user 3802 that appears to have uniform and/or smooth motion.

在图38所示的实施例中,虚拟图像生成系统3800包括投影子系统3804,其可操作以在位于最终用户3802的视场中的部分透明的显示表面3806上投影图像,所述显示表面3806在最终用户3802的眼睛3808和周围环境之间。虚拟图像生成系统3800可以被穿戴或安装在最终用户3802的头部3810上,例如并入到一副眼镜或护目镜中。In the embodiment illustrated in FIG. 38, the virtual image generation system 3800 includes a projection subsystem 3804 operable to project an image onto a partially transparent display surface 3806 located in the field of view of an end user 3802, between the eyes 3808 of the end user 3802 and the surrounding environment. The virtual image generation system 3800 may be worn or mounted on the head 3810 of the end user 3802, such as incorporated into a pair of glasses or goggles.

在所示的实施例中,投影子系统3804包括一个或多个光纤3812(例如,单模光纤),其具有光被接收进入的后端或远端3812a和光从其被提供至部分透明的显示表面3806或者被直接投影进最终用户3802的眼睛3808的前端或近端3812b。投影子系统3804还可以包括一个或多个光源3815,其产生光(例如,以定义的图样发射不同颜色的光),并以通信方式将光耦合至一个或多个光纤3812的后端或远端3812a。光源3815可采取任何各种形式,例如一组RGB激光器(例如,能够输出红色、绿色和蓝色光的激光二极管),其可操作以根据在像素信息或数据的相应帧中指定的所定义的像素图样分别产生红色、绿色和蓝色相干平行光。激光提供高色彩饱和度和高能源效率。In the illustrated embodiment, the projection subsystem 3804 includes one or more optical fibers 3812 (e.g., single-mode optical fibers) having a rear or distal end 3812a into which light is received and a front or proximal end 3812b from which light is provided to the partially transparent display surface 3806 or projected directly into the eye 3808 of the end user 3802. The projection subsystem 3804 may also include one or more light sources 3815 that generate light (e.g., emit light of different colors in a defined pattern) and communicatively couple the light to the rear or distal end 3812a of the one or more optical fibers 3812. The light sources 3815 may take any of a variety of forms, such as a set of RGB lasers (e.g., laser diodes capable of outputting red, green, and blue light) operable to produce red, green, and blue coherent collimated light, respectively, according to a defined pixel pattern specified in a corresponding frame of pixel information or data. Lasers provide high color saturation and high energy efficiency.

尽管图38示出了将光分解至多个信道的单个光纤3812,一些实现可采用两个或更多光纤3812。在这样的实施例中,光纤3812可以具有交错的尖端或者有斜面并且被抛光的尖端以弯曲光,降低了信道之间的光学间隔。光纤3812可方便地包装成带状电缆。适当的光学器件可产生由每个信道所产生的相应的图像的结合物。Although FIG38 shows a single optical fiber 3812 that splits light into multiple channels, some implementations may employ two or more optical fibers 3812. In such embodiments, the optical fibers 3812 may have staggered tips or beveled and polished tips to bend light, reducing the optical separation between channels. The optical fibers 3812 may be conveniently packaged into ribbon cables. Appropriate optics may produce a combination of the corresponding images produced by each channel.

一个或多个光纤3812可通过轭3814支持,具有由此延伸的前或近端3812b的一部分。轭3814可被操作以设置该前或近端3812b处于振荡运动。例如,轭3814可以包括一个管状压电式传感器3814a(图38仅示出一个)。多个电极3813(例如,示出了四个,仅有一个被标注)关于压电式传感器3814a放射状地排列。例如经由帧缓冲器3828施加控制信号至与压电式传感器3814a相关联的各个电极3813,可导致光纤3812的前或近端3812b以第一谐振模式振动。振动的大小或偏离中心的量可经由所施加的驱动信号控制,以获得各种至少二轴的图样中的任一种。图样可以例如包括光栅扫描图样、螺旋或蜗旋的扫描图样、或利萨茹或8字形扫描图样。One or more optical fibers 3812 can be supported by a yoke 3814, with a portion of the front or proximal end 3812b extending therefrom. The yoke 3814 can be operated to set the front or proximal end 3812b in oscillatory motion. For example, the yoke 3814 can include a tube piezoelectric transducer 3814a (only one is shown in FIG. 38). A plurality of electrodes 3813 (e.g., four are shown, only one of which is labeled) are radially arranged about the piezoelectric transducer 3814a. Applying control signals, for example via a frame buffer 3828, to the respective electrodes 3813 associated with the piezoelectric transducer 3814a can cause the front or proximal end 3812b of the optical fiber 3812 to vibrate in a first resonant mode. The magnitude of the vibration, or the amount of off-center travel, can be controlled via the applied drive signals to obtain any of a variety of at least biaxial patterns. The patterns can include, for example, a raster scan pattern, a spiral or volute scan pattern, or a Lissajous or figure-8 scan pattern.
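作为示意,下面的Python草图给出由二轴驱动可获得的两种轨迹:螺旋(两轴同频、相位相差90度、包络线性增大)和利萨茹/8字形(两轴频率不同)。其中的频率、圈数等参数均为示意性假设。As an illustration, the hypothetical Python sketch below generates two of the biaxial tip trajectories named above: a spiral (equal frequencies on the two axes, 90-degree phase offset, ramped envelope) and a Lissajous / figure-8 (different frequencies). The frequencies and loop counts are illustrative assumptions.

```python
import math

# Hypothetical sketch of biaxial fiber-tip trajectories. Driving the tip
# with two periodic signals yields a spiral (matched frequencies, ramped
# amplitude) or a Lissajous / figure-8 pattern (different frequencies).
# All parameter values are illustrative.


def spiral_point(t, turns=5, period_s=0.01):
    """Tip position at time t for a spiral of `turns` loops per period."""
    phase = 2.0 * math.pi * turns * t / period_s
    radius = t / period_s  # envelope ramps from 0 to 1 over one period
    return (radius * math.cos(phase), radius * math.sin(phase))


def lissajous_point(t, fx=500.0, fy=1000.0):
    """Tip position at time t for a 1:2 Lissajous (figure-8) pattern."""
    return (math.sin(2.0 * math.pi * fx * t),
            math.sin(2.0 * math.pi * fy * t))
```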

图31根据一个所示的实施例示出了像素信息或数据的帧3100,其指定像素信息或数据以呈现图像,例如,一个或多个虚拟对象的图像。帧3100以单元3102a-3102n(仅标注出两个,统称3102)示意性地示出每个像素。排列成行或线3104a、3104b-3104n(标注出三个,统称3104)的单元的序列被示为水平延伸跨越图31中的图页。帧3100包括多个线3104。图31采用省略号来表示缺少的信息,例如为了清楚说明而省略的单元或行。FIG. 31 shows a frame 3100 of pixel information or data that specifies pixel information or data to present an image, such as an image of one or more virtual objects, according to one illustrated embodiment. Frame 3100 schematically illustrates each pixel with cells 3102a-3102n (only two are labeled, collectively referred to as 3102). Sequences of the cells arranged in rows or lines 3104a, 3104b-3104n (three are labeled, collectively referred to as 3104) are shown extending horizontally across the page in FIG. 31. Frame 3100 includes a plurality of lines 3104. FIG. 31 uses ellipses to indicate missing information, such as cells or lines that are omitted for clarity.

帧3100的每个单元3102可以为该单元所对应的各个像素指定用于多个颜色中的每一个的值和/或亮度(统称3106)。例如,帧3100可以为每个像素指定用于红色3106a的一个或多个值、用于绿色3106b的一个或多个值以及用于蓝色3106c的一个或多个值。值3106可以被指定为用于每个颜色的二进制表示,例如每个颜色对应4位数字。帧3100的每个单元3102可以另外包括为每个像素指定振幅或径向尺寸的振幅或径向值3106d,例如在结合基于螺旋扫描线图样的系统或基于利萨茹扫描线图样的系统使用帧3100时。Each cell 3102 of frame 3100 may specify values (collectively 3106) for each of a plurality of colors, and/or intensities, for the respective pixel to which the cell corresponds. For example, frame 3100 may specify one or more values for red 3106a, one or more values for green 3106b, and one or more values for blue 3106c for each pixel. Values 3106 may be specified as a binary representation for each color, such as a 4-bit number for each color. Each cell 3102 of frame 3100 may additionally include an amplitude or radial value 3106d specifying an amplitude or radial dimension for each pixel, such as when frame 3100 is used in conjunction with a system based on a spiral scan line pattern or a system based on a Lissajous scan line pattern.

帧3100可以包括被统称为3110的一个或多个场。帧3100可以由单个场构成。可选地,帧3100可包括两个,或甚至更多的场3110a-3110b。图31中所示的帧3100示出了两个场3110a、3110b。用于帧3100的完整的第一场3110a的像素信息可在用于完整的第二场3110b的像素信息之前被指定,例如在用于数列、有序列表或其它数据结构(例如,记录、链表)中用于第二场3110b的像素信息之前发生。假定呈现系统被配置以处理多于两个的场3110a-3110b,第三或甚至第四场可以跟随在第二场3110b之后。Frame 3100 can include one or more fields, collectively referred to as 3110. Frame 3100 can be made up of a single field. Alternatively, frame 3100 can include two, or even more fields 3110a-3110b. Frame 3100 shown in Figure 31 shows two fields 3110a, 3110b. Pixel information for the complete first field 3110a of frame 3100 can be specified before pixel information for the complete second field 3110b, for example, before pixel information for the second field 3110b in a sequence, ordered list, or other data structure (e.g., record, linked list). Assuming the rendering system is configured to process more than two fields 3110a-3110b, a third or even fourth field can follow after the second field 3110b.
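作为示意,下面的Python草图按上文描述打包帧数据:每个单元含4位红、绿、蓝值以及可选的振幅/径向值,并且完整的第一场的单元排在第二场之前。打包格式本身是示意性假设。As an illustration, the hypothetical Python sketch below packs frame data as described above: each cell carries 4-bit red, green, and blue values plus an optional amplitude/radial value, and the complete first field's cells precede the second field's. The packing format itself is an illustrative assumption.

```python
# Hypothetical sketch of a frame of pixel information: each cell packs
# 4-bit R, G, B values plus a 4-bit amplitude/radial value, and all of
# field 1's cells are specified before field 2's. The 16-bit packing is an
# illustrative assumption, not the format of the described system.


def pack_cell(r, g, b, radial):
    """Pack 4-bit R, G, B and a 4-bit radial value into one 16-bit word."""
    for v in (r, g, b, radial):
        assert 0 <= v <= 15, "each component is a 4-bit value"
    return (r << 12) | (g << 8) | (b << 4) | radial


def build_frame(field1_cells, field2_cells):
    """Frame as a flat list: all of field 1's cells before field 2's."""
    return ([pack_cell(*c) for c in field1_cells]
            + [pack_cell(*c) for c in field2_cells])
```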

图32示意性地表示光栅扫描图样3200。在光栅扫描图样3200中,像素3202(只有一个被标注出)被依次呈现。光栅扫描图样3200典型地从左至右(由箭头3204a、3204b表明)然后从上到下(由箭头3206表明)呈现像素。因此,该呈现可以在右上角开始并向左遍历第一行3208a直到达到行的末尾。随后光栅扫描图样3200典型地从下一行的左侧开始。呈现可以被暂时停止或消隐,同时从一行的末尾返回到下一行的开始。该过程一行接一行地重复,直到底部线3208n完成,例如到达底部最右的像素。随着帧3100被完成,新的帧开始,再一次返回下一帧的最上面的行的右侧。再一次地,当从底部左侧返回到上方右侧以呈现下一个帧时,呈现可被停止。FIG. 32 schematically shows a raster scan pattern 3200. In the raster scan pattern 3200, pixels 3202 (only one is labeled) are presented sequentially. The raster scan pattern 3200 typically presents pixels from left to right (indicated by arrows 3204a, 3204b) and then from top to bottom (indicated by arrow 3206). Thus, the presentation may start in the upper right corner and traverse the first line 3208a to the left until the end of the line is reached. The raster scan pattern 3200 then typically starts from the left side of the next line. The presentation may be temporarily stopped or blanked while returning from the end of one line to the beginning of the next. This process repeats line by line until the bottom line 3208n is completed, for example, at the bottom rightmost pixel. As the frame 3100 is completed, a new frame begins, once again returning to the right side of the topmost line of the next frame. Once again, the presentation may be stopped while returning from the bottom left to the upper right to present the next frame.
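上述逐行的顺序呈现可以用如下的Python草图示意;此处采用从左至右、从上到下的方向,仅为一种示意性选择。The line-by-line sequential presentation described above can be sketched in Python as follows; the left-to-right, top-to-bottom direction here is just one illustrative choice.

```python
# Hypothetical sketch of sequential raster ordering: pixels are presented
# across each line in turn, line after line, with the return from the end
# of one line to the start of the next blanked. The direction chosen here
# is illustrative.


def raster_order(width, height):
    """Yield (row, col) indices in raster presentation order."""
    for row in range(height):       # top to bottom
        for col in range(width):    # across the line
            yield (row, col)
```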

光栅扫描的许多实现方式使用交错扫描图样。在交错光栅扫描图样中,来自第一和第二场3210a、3210b的行是交错的。例如,当呈现第一场3210a的行时,用于第一场3210a的像素信息可以仅用于奇数行,而用于第二场3210b的像素信息可以仅用于偶数行。这样,帧3100(图31)的第一场3210a的所有行典型地在第二场3210b的行之前呈现。第一场3210a可以使用第一场3210a的像素信息来依次呈现行1、行3、行5等。随后帧3100(图31)的第二场3210b可以通过使用第二场3210b的像素信息跟着第一场3210a呈现来依次呈现行2、行4、行6等。Many implementations of raster scanning use interlaced scanning patterns. In an interlaced raster scanning pattern, the rows from the first and second fields 3210a, 3210b are interlaced. For example, when presenting the rows of the first field 3210a, the pixel information for the first field 3210a may be used only for odd rows, while the pixel information for the second field 3210b may be used only for even rows. Thus, all rows of the first field 3210a of the frame 3100 ( FIG. 31 ) are typically presented before the rows of the second field 3210b. The first field 3210a can use the pixel information of the first field 3210a to sequentially present row 1, row 3, row 5, and so on. Subsequently, the second field 3210b of the frame 3100 ( FIG. 31 ) can sequentially present row 2, row 4, row 6, and so on by using the pixel information of the second field 3210b to follow the first field 3210a.
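上述交错顺序(第一场的奇数行先于第二场的偶数行呈现)可以用如下的Python草图示意,行号与正文一样从1开始。The interlaced ordering described above (the first field's odd lines presented before the second field's even lines) can be sketched in Python as follows, with 1-based line numbers as in the text.

```python
# Hypothetical sketch of interlaced line ordering: all lines of the first
# field (lines 1, 3, 5, ...) precede the lines of the second field
# (lines 2, 4, 6, ...), using 1-based line numbers.


def interlaced_line_order(num_lines):
    """Return 1-based line numbers: odd lines (field 1) then even (field 2)."""
    field1 = list(range(1, num_lines + 1, 2))
    field2 = list(range(2, num_lines + 1, 2))
    return field1 + field2
```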

图33示意性地根据一个示出的实施例表示螺旋扫描图样3300。螺旋扫描图样3300可以由单个螺旋扫描线3302构成,其可包括一个或多个完整的角周期(例如,360度),其可以被命名为圈或环。像素信息被用于随着角度的增加指定每个顺序像素的颜色和/或亮度。振幅或径向值3208(图31)指定自螺旋扫描线3302的起点3308起的径向尺寸3306。FIG. 33 schematically illustrates a spiral scan pattern 3300 according to one illustrated embodiment. Spiral scan pattern 3300 may be comprised of a single spiral scan line 3302, which may include one or more complete angular periods (e.g., 360 degrees), which may be referred to as loops or rings. Pixel information is used to specify the color and/or brightness of each sequential pixel as the angle increases. The amplitude or radial value 3208 ( FIG. 31 ) specifies the radial dimension 3306 from the starting point 3308 of the spiral scan line 3302.
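作为示意,下面的Python草图随角度的增加将每个顺序像素的径向值映射为螺旋上的位置;像素数与半径取值仅为示意。As an illustration, the Python sketch below maps each sequential pixel's radial value to a position on the spiral as the angle increases; the pixel counts and radii are illustrative only.

```python
import math

# Hypothetical sketch of a single spiral scan line: pixel positions are
# generated at increasing angle, with the radial dimension of each pixel
# set by its amplitude/radial value. Values are illustrative.


def spiral_scan_positions(radial_values, turns=1):
    """Map each pixel's radial value to an (x, y) position on the spiral.

    Pixel i sits at fraction i/N of `turns` full angular cycles, at its
    own radial value.
    """
    n = len(radial_values)
    points = []
    for i, r in enumerate(radial_values):
        angle = 2.0 * math.pi * turns * i / n
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points
```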

图34示意性地根据一个所示实施例表示利萨茹扫描图样3400。利萨茹扫描图样3400可以由单个利萨茹扫描线3402构成,其可包括一个或多个完整的角周期(例如,360度),其可以被命名为圈或环。可选地,利萨茹扫描图样3400可以包括两个或更多利萨茹扫描线3402,所述两个或更多利萨茹扫描线3402每个均相对于彼此相移以嵌套利萨茹扫描线3402。像素信息被用于指定随着角度增量每个顺序的像素的颜色和/或亮度。振幅或径向值3208(图31)指定从利萨茹扫描线3402的起点开始的径向尺寸。FIG34 schematically illustrates a Lissajous scan pattern 3400 according to one illustrated embodiment. The Lissajous scan pattern 3400 can be comprised of a single Lissajous scan line 3402, which can include one or more complete angular periods (e.g., 360 degrees), which can be termed loops or rings. Alternatively, the Lissajous scan pattern 3400 can include two or more Lissajous scan lines 3402, each of which is phase-shifted relative to one another to nest the Lissajous scan lines 3402. Pixel information is used to specify the color and/or brightness of each sequential pixel as the angular increment increases. The amplitude or radial value 3208 ( FIG31 ) specifies the radial dimension from the starting point of the Lissajous scan line 3402.

图35示意性地根据一个所示实施例表示多场螺旋扫描图样3500。多场螺旋扫描图样3500包括两个或更多个不同的螺旋扫描线,统称为3502,图35示出了四个螺旋扫描线3502a-3502d。用于每个螺旋扫描3502线的像素信息可以被帧3100(图31)各自的场(例如,3210a,3210b)指定。有利地,可以简单地通过在每一个连续的螺旋扫描线3502之间进行相移来嵌套多个螺旋扫描线3502。螺旋扫描线3502之间的相位差应取决于将被采用的螺旋形扫描线3502的总数。例如,四个螺旋扫描线3502a-3502d可以以90度相移分开。示例性实施例可以以100赫兹刷新率结合10个不同的螺旋扫描线(即,子螺旋)操作。类似于图33的实施例,一个或多个振幅或径向值3208(图31)指定从螺旋扫描线3502的起点3508开始的径向尺寸3506。FIG35 schematically illustrates a multi-field spiral scan pattern 3500 according to one illustrated embodiment. The multi-field spiral scan pattern 3500 includes two or more distinct spiral scan lines, collectively referred to as 3502, with FIG35 illustrating four spiral scan lines 3502a-3502d. The pixel information for each spiral scan line 3502 may be specified by a respective field (e.g., 3210a, 3210b) of the frame 3100 ( FIG31 ). Advantageously, multiple spiral scan lines 3502 may be nested simply by applying a phase shift between each successive spiral scan line 3502. The phase difference between the spiral scan lines 3502 should depend on the total number of spiral scan lines 3502 to be employed. For example, the four spiral scan lines 3502a-3502d may be separated by a 90-degree phase shift. An exemplary embodiment may operate at a 100 Hz refresh rate in conjunction with 10 distinct spiral scan lines (i.e., sub-spirals). Similar to the embodiment of FIG. 33 , one or more amplitude or radial values 3208 ( FIG. 31 ) specify a radial dimension 3506 from a starting point 3508 of the spiral scan line 3502 .
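上述通过相移嵌套多个螺旋扫描线的做法可以用如下的Python草图示意:相位差为360度除以螺旋扫描线总数,因此四条螺旋扫描线相隔90度。The nesting of multiple spiral scan lines by phase shifting described above can be sketched in Python as follows: the phase difference is 360 degrees divided by the total number of spiral scan lines, so four spiral scan lines are separated by 90 degrees.

```python
# Hypothetical sketch of nesting multiple spiral scan lines: each
# successive spiral is phase-shifted by 360 degrees divided by the total
# number of spirals, so e.g. four spirals are 90 degrees apart.


def spiral_phase_offsets_deg(num_spirals):
    """Phase offset of each nested spiral scan line, in degrees."""
    step = 360.0 / num_spirals
    return [i * step for i in range(num_spirals)]
```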

如从图34和35可知,相邻像素之间的相对间隔可以在整个图像中变化。至少部分地适应或补偿该不均匀性可能是有利的。例如,调整像素尺寸可能是有利的,例如增加用于比其它像素间隔更远的像素的所感知的像素尺寸。这可以例如经由选择性模糊(例如,可变焦距透镜、可变扩散、抖动)以增加高斯光斑尺寸来实现。另外或可选地,调整用于比其它像素间隔更远的像素的亮度可能是有利的。As can be seen from Figures 34 and 35, the relative spacing between adjacent pixels can vary across the image. It may be advantageous to at least partially accommodate or compensate for this non-uniformity. For example, it may be advantageous to adjust the pixel size, such as increasing the perceived pixel size for pixels that are spaced further apart than other pixels. This can be achieved, for example, via selective blurring (e.g., variable focus lens, variable diffusion, dithering) to increase the Gaussian spot size. Additionally or alternatively, it may be advantageous to adjust the brightness for pixels that are spaced further apart than other pixels.
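作为示意,下面的Python草图按上文思路对间隔较远的像素按比例提高亮度;线性补偿规则是示意性假设。As an illustration, the Python sketch below proportionally raises the brightness of more widely spaced pixels in the spirit described above; the linear compensation rule is an illustrative assumption.

```python
# Hypothetical sketch of compensating for non-uniform pixel spacing:
# pixels spaced further apart than a reference spacing get proportionally
# more brightness (a larger blur/spot size could be scaled the same way).
# The linear rule is an illustrative assumption.


def compensate_brightness(base_brightness, spacing, reference_spacing):
    """Scale brightness up for pixels spaced wider than the reference."""
    if spacing <= reference_spacing:
        return base_brightness
    return base_brightness * (spacing / reference_spacing)
```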

返回到图38,使用正弦波驱动信号以关于第一轴的共振频率和关于垂直于第一轴的第二轴的共振频率驱动压电式传感器3814a来产生螺旋扫描图样。螺旋扫描图样以随角度尺寸的变化而变化的径向尺寸为特征。例如,径向尺寸可以线性地或非线性地变化,当角度尺寸从0度变化到或经过360度时。在现象上,螺旋扫描线可表现为连续的螺旋,该螺旋起始于起点并在在一个平面上旋转的同时径向向外放射。每个完整的角周期可被描述为构成圈或环。在起始点重新开始之前,可以定义具有任何期望数量的圈或环的螺旋扫描线。在时间上第一螺旋扫描图样的结尾和下一个在时间上连续的螺旋扫描图样的开始之间,可发生显示或呈现被消隐的刷新周期。螺旋扫描图样的最外径向尺寸可通过正弦波驱动信号的振幅调制来设置。螺旋扫描线图样的振幅调整调整径向尺寸而不会影响角度尺寸。这样,振幅调制将不会影响周期的频率(例如,圈或环的数目)或给定时间的用于给定的扫描线的周期数。图样中的前端或近端3812b的方位与光源3815的输出同步以形成二维或三维图像。Returning to FIG. 38, a spiral scan pattern is generated by driving the piezoelectric transducer 3814a with sinusoidal drive signals at a resonant frequency about a first axis and at a resonant frequency about a second axis perpendicular to the first axis. The spiral scan pattern is characterized by a radial dimension that varies with the angular dimension. For example, the radial dimension can vary linearly or nonlinearly as the angular dimension varies from 0 degrees to, or through, 360 degrees. Phenomenologically, a spiral scan line can appear as a continuous spiral that starts at a starting point and radiates radially outward while rotating in a plane. Each complete angular cycle can be described as constituting a loop or ring. A spiral scan line can be defined as having any desired number of loops or rings before restarting at the starting point. A refresh period, in which the display or presentation is blanked, can occur between the end of a first spiral scan pattern in time and the beginning of the next temporally successive spiral scan pattern. The outermost radial dimension of the spiral scan pattern can be set by amplitude modulation of the sinusoidal drive signals. Amplitude adjustment of the spiral scan line pattern adjusts the radial dimension without affecting the angular dimension. Thus, amplitude modulation will not affect the frequency of the cycles (e.g., the number of loops or rings) or the number of cycles for a given scan line at a given time. The orientation of the front or proximal end 3812b in the pattern is synchronized with the output of the light sources 3815 to form a two-dimensional or three-dimensional image.
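作为示意,下面的Python草图对正弦驱动信号进行振幅调制:缩放包络改变螺旋的最外径向尺寸,而每条扫描线的周期(圈)数保持不变。频率与圈数均为示意。As an illustration, the Python sketch below amplitude-modulates the sinusoidal drive signals: scaling the envelope changes the spiral's outermost radial dimension while the number of cycles (loops) per scan line stays fixed. The frequencies and loop counts are illustrative.

```python
import math

# Hypothetical sketch of amplitude-modulating the sinusoidal drive
# signals: scaling the envelope sets the spiral's outermost radial
# dimension without changing the number of loops per scan line.
# All parameter values are illustrative.


def spiral_drive(t, period_s, loops, outer_radius):
    """Two-axis drive values (x, y) at time t for one spiral scan line."""
    envelope = outer_radius * (t / period_s)       # amplitude modulation
    phase = 2.0 * math.pi * loops * t / period_s   # fixed cycle count
    return (envelope * math.cos(phase), envelope * math.sin(phase))
```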

虽然未示出,投影子系统3804可以包括一个或多个光学组件(例如,透镜、滤光器、光栅、棱镜、反射镜、分色反射镜、折射镜),其例如经由部分透明显示表面3806直接或间接地引导来自一个或多个光纤3812的前端或近端3812b的输出到最终用户3802的眼睛3808。虽然未示出,投影子系统3804可以包括一个或多个调制像素数据的Z轴方位的深度的光学组件。这样的组件可以例如采取柔性反射膜(例如,涂覆有铝的氮化物溅射(nitride sputter)膜)和一个或多个被操作以引起柔性反射膜偏转的电极的形式。柔性反射膜被定位以反射和聚焦从一个或多个光纤3812的前端或近端3812b发射的光。柔性反射膜可基于用于像素数据或信息的深度图被选择性地操作,以在Z维或轴上聚焦光。柔性反射膜可采用高斯光斑以产生深度的外观,使图像中的某些虚拟对象出现在焦点内,同时其他对象处于焦点外。附加或备选地,该系统可以使用一个或多个克尔(Kerr)效应透镜。Although not shown, the projection subsystem 3804 may include one or more optical components (e.g., lenses, filters, gratings, prisms, mirrors, dichroic mirrors, refractors) that, for example, directly or indirectly direct the output from the front or proximal ends 3812b of the one or more optical fibers 3812 to the eye 3808 of the end user 3802 via the partially transparent display surface 3806. Although not shown, the projection subsystem 3804 may include one or more optical components that modulate the depth of the Z-axis positioning of the pixel data. These may, for example, take the form of a flexible reflective membrane (e.g., a nitride sputter membrane coated with aluminum) and one or more electrodes operable to cause deflection of the flexible reflective membrane. The flexible reflective membrane is positioned to reflect and focus light emitted from the front or proximal ends 3812b of the one or more optical fibers 3812. The flexible reflective membrane may be selectively operated based on a depth map for the pixel data or information to focus light in the Z dimension or axis. The flexible reflective membrane may employ Gaussian spots to create the appearance of depth, with certain virtual objects in the image appearing in focus while others are out of focus. Additionally or alternatively, the system may utilize one or more Kerr effect lenses.
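作为示意,下面的Python草图按上文思路用高斯光斑大小表现深度:像素的模糊量随其深度图深度与当前焦深之差增大,使焦点内的虚拟对象保持清晰。线性模糊模型是示意性假设。As an illustration, the Python sketch below conveys depth via Gaussian spot size in the spirit described above: a pixel's blur grows with the difference between its depth-map depth and the current focal depth, so in-focus virtual objects stay sharp. The linear blur model is an illustrative assumption.

```python
# Hypothetical sketch of using Gaussian spot size to convey depth: blur
# grows with the distance between a pixel's depth (from the depth map)
# and the current focal depth. The linear model and constants are
# illustrative assumptions.


def gaussian_spot_sigma(pixel_depth, focal_depth, sharp_sigma=0.5, blur_gain=2.0):
    """Blur (sigma) for a pixel given its depth and the focal depth."""
    return sharp_sigma + blur_gain * abs(pixel_depth - focal_depth)
```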

虽然对于头戴式实施例是不必要的,光纤3812以及可选的轭3814可以被支持以用于在一个或多个方向的运动。例如,光纤3812以及可选的轭3814可以经由常平架3816被支持以用于运动的2、3或更多个自由度。常平架3816可以包括转盘3816a和可被操作以围绕第一轴3820a转动或旋转的第一执行机构3818a(例如,电动机、螺线管、压电式传感器)。常平架3816可以包括被转盘3816a上的框架3816c支撑的托架3816b,以及可被操作以围绕第二轴3820b转动或旋转的第二执行机构3818b(例如,电动机、螺线管、压电式传感器)。常平架3816可以包括由托架3816b可转动地支撑的杆3816d,以及可被操作以围绕第三轴3820c转动或旋转的第三执行机构3818c(例如,电动机、螺线管、压电式传感器)。第一、第二和第三轴(统称为3820)可以是正交的轴。Although not necessary for head-mounted embodiments, the optical fiber 3812 and optional yoke 3814 can be supported for movement in one or more directions. For example, the optical fiber 3812, and optional yoke 3814, can be supported for two, three, or more degrees of freedom of movement via a gimbal 3816. The gimbal 3816 can include a turntable 3816a and a first actuator 3818a (e.g., a motor, solenoid, piezoelectric transducer) operable to rotate or turn about a first axis 3820a. The gimbal 3816 can include a bracket 3816b supported by a frame 3816c on the turntable 3816a, and a second actuator 3818b (e.g., a motor, solenoid, piezoelectric transducer) operable to rotate or turn about a second axis 3820b. The gimbal 3816 may include a rod 3816d rotatably supported by the bracket 3816b, and a third actuator 3818c (e.g., a motor, solenoid, piezoelectric transducer) operable to rotate or turn about a third axis 3820c. The first, second, and third axes (collectively referred to as 3820) may be orthogonal axes.

在图38所示的实施例中,虚拟图像生成系统3800包括控制子系统3822。控制子系统3822可以采取大量不同形式中的任意一个,其中一种在图38中示出。38, virtual image generation system 3800 includes a control subsystem 3822. Control subsystem 3822 can take any of a number of different forms, one of which is shown in FIG38.

控制子系统3822包括多个控制器,例如一个或多个微控制器、微处理器或中央处理单元(CPU)3824、数字信号处理器(DSP)、图形处理单元(GPU)3826、其它集成电路控制器如专用集成电路(ASIC)、可编程门阵列(PGA),例如现场PGA(FPGA)和/或可编程逻辑控制器(PLU)。在图38所示的实施例中,微处理器3824控制整个操作,而GPU3826渲染帧(例如,像素数据组)到一个或多个帧缓冲器3828a-3828n(统称为3828)。虽然没有示出,一个或多个附加的集成电路可控制从帧缓冲器3828读入和/或读出帧以及压电式传感器或电极3814a的操作,同步两者以产生二维或三维图像。例如在帧被过渲染的地方,读入和/或输出帧缓冲器3828的帧可以采用动态寻址。The control subsystem 3822 includes multiple controllers, such as one or more microcontrollers, microprocessors, or central processing units (CPUs) 3824, digital signal processors (DSPs), graphics processing units (GPUs) 3826, and other integrated circuit controllers such as application-specific integrated circuits (ASICs), programmable gate arrays (PGAs), such as field-programmable gate arrays (FPGAs), and/or programmable logic controllers (PLUs). In the embodiment shown in FIG38 , the microprocessor 3824 controls overall operation, while the GPU 3826 renders frames (e.g., sets of pixel data) into one or more frame buffers 3828a-3828n (collectively, 3828). Although not shown, one or more additional integrated circuits may control the reading of frames into and/or out of the frame buffers 3828 and the operation of the piezoelectric sensors or electrodes 3814a, synchronizing the two to produce a two-dimensional or three-dimensional image. Frames read into and/or out of the frame buffers 3828 may utilize dynamic addressing, for example where frames are over-rendered.
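作为示意,下面的Python草图示意对过渲染帧的动态寻址:帧被渲染得大于显示窗口,读出窗口可以(例如随头部运动)重新定位而无需重新渲染。尺寸均为示意。As an illustration, the Python sketch below shows dynamic addressing into an over-rendered frame: the frame is rendered larger than the display window, and the read-out window can be repositioned (e.g., following head movement) without re-rendering. The dimensions are illustrative.

```python
# Hypothetical sketch of dynamic addressing into an over-rendered frame
# buffer: the frame is larger than the display, and the read-out window
# is repositioned without re-rendering. Dimensions are illustrative.


def read_window(frame, frame_width, win_x, win_y, win_w, win_h):
    """Read a win_w x win_h window at (win_x, win_y) from a flat frame."""
    out = []
    for row in range(win_y, win_y + win_h):
        start = row * frame_width + win_x
        out.extend(frame[start:start + win_w])
    return out
```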

控制子系统3822包括一个或多个非临时性计算机或处理器可读介质以存储指令和数据。该非临时性计算机或处理器可读介质可以例如包括帧缓冲器3828。非临时性计算机或处理器可读介质可以例如包括一个或多个非易失性存储器,例如只读存储器(ROM)3830或闪存。非临时性计算机或处理器可读介质可以例如包括一个或多个易失性存储器,例如随机存取存储器(RAM)3832。控制子系统3822可包括其它易失性和非易失性存储器,包括旋转介质存储以及固态存储设备。The control subsystem 3822 includes one or more non-transitory computer or processor readable media to store instructions and data. The non-transitory computer or processor readable media may, for example, include a frame buffer 3828. The non-transitory computer or processor readable media may, for example, include one or more non-volatile memories, such as read-only memory (ROM) 3830 or flash memory. The non-transitory computer or processor readable media may, for example, include one or more volatile memories, such as random access memory (RAM) 3832. The control subsystem 3822 may include other volatile and non-volatile memories, including rotating media storage and solid-state storage devices.

在采用执行机构(统称3818)的实现方式中,控制子系统3822可选地包括以通信方式经由马达控制信号被耦合以驱动执行机构3818的一个或多个专用马达控制器3834。In implementations employing actuators (collectively 3818 ), the control subsystem 3822 optionally includes one or more dedicated motor controllers 3834 communicatively coupled to drive the actuators 3818 via motor control signals.

控制子系统3822可选地包括一个或多个通信端口3836a、3836b(统称为3836),其提供与不同的其它系统、组件或设备的通信。例如,控制子系统3822可以包括一个或多个提供有线或光通信的有线接口或端口3836a。又例如,控制子系统3822可以包括一个或多个无线接口或端口,如一个或多个提供无线通信的无线电设备(即,无线发射器、接收器、收发器)3836b。The control subsystem 3822 optionally includes one or more communication ports 3836a, 3836b (collectively 3836) that provide for communication with various other systems, components, or devices. For example, the control subsystem 3822 may include one or more wired interfaces or ports 3836a that provide for wired or optical communication. For another example, the control subsystem 3822 may include one or more wireless interfaces or ports, such as one or more radios (i.e., wireless transmitters, receivers, transceivers) 3836b that provide for wireless communication.

如示出的,有线接口或端口3836a提供与环境成像系统3838的有线或光通信,环境成像系统3838包括被定位和定向以捕捉最终用户3802所在的环境的图像的一个或多个摄像机3838a。这些可用于感知、测量或收集有关最终用户3802和/或环境的信息。例如,这些可以用于检测或测量最终用户3802或者最终用户3802的部分身体——例如头部3810——的运动和/或方位。如示出的,有线接口或端口3836a可选地提供与结构照明系统3840的有线或光通信,所述结构照明系统3840包括被定位和定向以照亮最终用户3802、最终用户3802的一部分如头3810和/或最终用户3802所在的环境的一个或多个光源3840a。As shown, a wired interface or port 3836a provides wired or optical communication with an environmental imaging system 3838, which includes one or more cameras 3838a positioned and oriented to capture images of the environment in which the end user 3802 is located. These can be used to sense, measure, or collect information about the end user 3802 and/or the environment. For example, these can be used to detect or measure the movement and/or orientation of the end user 3802 or a portion of the end user 3802, such as the head 3810. As shown, a wired interface or port 3836a optionally provides wired or optical communication with a structured lighting system 3840, which includes one or more light sources 3840a positioned and oriented to illuminate the end user 3802, a portion of the end user 3802, such as the head 3810, and/or the environment in which the end user 3802 is located.

如示出的,无线接口或端口3836b提供与一个或多个头戴式传感器系统3842的无线(例如,RF、微波,IR)通信,所述头戴式传感器系统3842包括一个或多个惯性传感器3842a以捕获指示最终用户3802的头部3810运动的惯性测量。这些可用于感知、测量或收集有关最终用户3802的头部运动的信息。例如,这些可用于检测或测量最终用户3802的头部3810的运动、速度、加速度和/或方位。如示出的,有线接口或端口3836a可选地提供与成像系统3842的有线或光通信,所述成像系统3842包括例如,一个或多个前向成像器或摄像机3842a。这些可以被用来捕获关于最终用户3802所在的环境的信息。这些可以被用来捕获指示最终用户3802相对于那个环境和那个环境中的特定对象的距离和朝向的信息。当被戴在头上,前向成像器或摄像机3842a特别适合于捕获指示最终用户的头部3810相对于最终用户3802所在的环境和那个环境中的特定对象的距离和朝向的信息。这些可以例如被用来检测头部运动、头部运动的速度和/或加速度。这些可以例如被用于检测或推断最终用户3802的注意中心,例如至少部分地基于最终用户的头部3810的朝向。朝向可以在任何方向被检测(例如,相对于最终用户的参考帧的上/下、左/右)。As shown, a wireless interface or port 3836b provides wireless (e.g., RF, microwave, IR) communication with one or more head-mounted sensor systems 3842, which include one or more inertial sensors 3842a to capture inertial measurements indicating the movement of the end user's 3802 head 3810. These can be used to sense, measure, or collect information about the end user's 3802 head movement. For example, these can be used to detect or measure the movement, velocity, acceleration, and/or orientation of the end user's 3802 head 3810. As shown, a wired interface or port 3836a optionally provides wired or optical communication with an imaging system 3842, which includes, for example, one or more forward-facing imagers or cameras 3842a. These can be used to capture information about the environment in which the end user 3802 is located. These can be used to capture information indicating the distance and orientation of the end user 3802 relative to that environment and specific objects in that environment. When worn on the head, the forward-pointing imager or camera 3842a is particularly suitable for capturing information indicating the distance and orientation of the end user's head 3810 relative to the environment in which the end user 3802 is located and specific objects in that environment. These can be used, for example, to detect head movement, speed and/or acceleration of head movement. 
These can be used, for example, to detect or infer the center of attention of the end user 3802, for example based at least in part on the orientation of the end user's head 3810. The orientation can be detected in any direction (e.g., up/down, left/right relative to the end user's frame of reference).

在一些实现中所有通信都可以是有线的,而在另一些实现中所有通信都可以是无线的。在进一步的实现中,有线和无线通信的选择可能不同于图38所示出的。因此,有线或无线通信的特定选择不应该被认为是限制性的。In some implementations, all communications may be wired, while in other implementations, all communications may be wireless. In further implementations, the selection of wired and wireless communications may differ from that shown in Figure 38. Therefore, the specific selection of wired or wireless communications should not be considered as restrictive.

控制子系统3822的不同组件,例如微处理器3824、GPU3826、帧缓冲器3828、ROM3830、RAM3832、和/或可选地专用马达控制器3834可以经由一个或多个通信通道可通信地被耦合,所述通信通道例如是一个或多个总线3846(仅示出一个)。总线3846可以采取不同的形式,包括指令总线、数据总线、地址总线、其他通信总线,和/或电源总线。The various components of the control subsystem 3822, such as the microprocessor 3824, GPU 3826, frame buffer 3828, ROM 3830, RAM 3832, and/or optionally a dedicated motor controller 3834, may be communicatively coupled via one or more communication channels, such as one or more buses 3846 (only one shown). A bus 3846 may take various forms, including an instruction bus, a data bus, an address bus, other communication buses, and/or a power bus.

预测头部运动的能力允许虚拟图像生成系统3800(图38)——例如增强现实系统,快速地更新图像的呈现和/或适应或补偿头部运动。例如,相比仅采用感知到的头部运动的情况,后续帧可被更早地再渲染或读出。如从本文讨论中将显而易见的,适应或补偿可采取各种形式。例如,可以以偏移的视场或者移向或移到最终用户的注意或聚焦的区域的中心渲染或读出后续帧。又例如,后续帧可被再渲染或读出以适应或补偿由头部运动而导致的变化。例如,在某些显示器或投影技术中(例如,“飞行像素”技术,其中像素被顺序显示,例如光栅扫描、螺旋扫描、利萨茹扫描),快速的头部运动可能导致被呈现给最终用户的帧的像素之间的间隔变化。适应或补偿可以包括适应或补偿这种像素间距的变化。例如,一些像素的尺寸或所预测的尺寸可以相对于其它像素被调整。还例如,一些像素的亮度或所预测的亮度可以相对于其它像素被调整。作为进一步的示例,后续帧可以以所得图像不同部分之间的可变分辨率被渲染或读出。其他适应或补偿技术将从该讨论中显而易见。在其它方面,许多这些相同的技术可为了除适应或补偿以外的目的而被采用,并且可以独立地被用于预测头部追踪、感知的头部追踪,和/或与基于非“飞行像素”的显示或投影技术结合使用。The ability to predict head motion allows a virtual image generation system 3800 ( FIG. 38 ), such as an augmented reality system, to quickly update the presentation of an image and/or adapt to or compensate for head motion. For example, subsequent frames may be re-rendered or read out earlier than would be possible where only sensed head motion is employed. As will be apparent from the discussion herein, adaptation or compensation can take various forms. For example, subsequent frames may be rendered or read out with an offset field of view or shifted toward or to the center of an area of attention or focus of the end user. For another example, subsequent frames may be re-rendered or read out to adapt to or compensate for changes caused by head motion. For example, in certain display or projection technologies (e.g., “flying pixel” technologies, in which pixels are displayed sequentially, such as raster scan, spiral scan, Lissajous scan), rapid head motion may cause the spacing between pixels of the frame presented to the end user to vary. Adaptation or compensation may include adapting to or compensating for such changes in pixel spacing. For example, the size or predicted size of some pixels may be adjusted relative to other pixels. For another example, the brightness or predicted brightness of some pixels may be adjusted relative to other pixels. 
As a further example, subsequent frames can be rendered or read out with variable resolution between different portions of the resulting image. Other adaptation or compensation techniques will be apparent from this discussion. In other aspects, many of these same techniques can be employed for purposes other than adaptation or compensation and can be used independently for predictive head tracking, perceptual head tracking, and/or in conjunction with non-"flying pixel" based display or projection techniques.
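作为说明性示意(其中的补偿模型为假设,并非本文所述系统的实际实现),下面的Python片段按标称像素间隔与实际像素间隔之比缩放像素亮度:当头部扫动使顺序绘制的像素彼此拉开时,亮度相应降低,以保持感知强度均匀。As an illustrative sketch (the compensation model is an assumption, not the actual implementation of the described system), the following Python snippet scales pixel brightness by the ratio of nominal to actual pixel spacing: when a head sweep spreads sequentially drawn pixels apart, brightness is reduced accordingly to keep perceived intensity uniform.

```python
# 说明性示意/illustrative sketch: brightness compensation for a "flying
# pixel" display (assumed model, not the patented system). A head sweep adds
# distance between consecutively drawn pixels; brightness is scaled by the
# ratio of nominal to actual spacing to keep perceived intensity uniform.

def compensated_brightness(base, nominal_spacing, head_velocity, pixel_period):
    """Scale `base` brightness down as the head sweep stretches the spacing
    between consecutive pixels (extra spacing = |velocity| * pixel period)."""
    actual_spacing = nominal_spacing + abs(head_velocity) * pixel_period
    return base * nominal_spacing / actual_spacing

still = compensated_brightness(1.0, 1.0, head_velocity=0.0, pixel_period=1e-6)
fast = compensated_brightness(1.0, 1.0, head_velocity=2e5, pixel_period=1e-6)
```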

最终用户的运动,例如头部运动可能对图像产生重大的影响。由于增强现实系统试图渲染帧的后续帧与头部运动相一致,所得的虚拟对象的图像可以被压缩、扩展或以其他方式变形。这至少部分是事实的结果——对于许多显示或呈现技术(即“飞行像素”技术),用于任何给定的帧的完整的图像并非被同时呈现或显示,而是由一个又一个的像素呈现或显示。因此,对于这些显示或呈现技术不存在真正的瞬时视场。这可能会以不同的形式,跨越许多不同类型的图像生成技术发生,例如光栅扫描、螺旋扫描或利萨茹扫描方法。一个或多个“白色”或空白帧或图像可以减轻一些快速头部运动的影响。End-user motion, such as head motion, can have a significant effect on the image. As the augmented reality system attempts to render subsequent frames of a frame consistent with the head motion, the resulting image of the virtual object may be compressed, expanded, or otherwise distorted. This is at least in part a result of the fact that for many display or rendering technologies (i.e., "flying pixel" technologies), the complete image for any given frame is not rendered or displayed simultaneously, but rather is rendered or displayed pixel by pixel. Therefore, there is no true instantaneous field of view for these display or rendering technologies. This can occur in different forms across many different types of image generation techniques, such as raster scan, spiral scan, or Lissajous scan methods. One or more "white" or blank frames or images can mitigate the effects of some rapid head motions.

例如,图36A示出了在最终用户的头部快速横向运动期间产生在光栅扫描3600a中的示例性失真。该失真可能是非线性的,因为头部运动可以在开始后加速并在终止前减慢。该失真取决于头部运动的方向、速度和加速度以及光栅扫描像素生成的方向(例如,从右到左,从上到下)。For example, FIG36A shows an exemplary distortion produced in a raster scan 3600a during a rapid lateral motion of the end user's head. The distortion may be nonlinear because the head motion may accelerate after initiation and slow down before termination. The distortion depends on the direction, speed, and acceleration of the head motion and the direction in which the raster scan pixels are generated (e.g., from right to left, from top to bottom).

又例如,图36B示出了在最终用户的头部快速垂直运动期间产生在光栅扫描3600b中的示例性失真。该失真可能是非线性的,因为头部运动可以在开始后加速并在终止前减慢。该失真取决于头部运动的方向、速度和加速度以及光栅扫描像素生成的方向(例如,从右到左,从上到下)。As another example, FIG36B illustrates exemplary distortion in raster scan 3600b produced during rapid vertical head motion of an end user. The distortion may be nonlinear because the head motion may accelerate after initiation and slow down before termination. The distortion depends on the direction, speed, and acceleration of the head motion and the direction in which the raster scan pixels are generated (e.g., from right to left, from top to bottom).

作为另一个例子,图37A示出了在最终用户的头部向左侧的快速横向运动期间产生在螺旋扫描线3700a中的示例性失真。该失真可能是非线性的,因为头部运动可以在开始后加速并在终止前减慢。该失真取决于头部运动的方向、速度和加速度以及螺旋扫描像素生成的方向(例如,顺时针方向,增加半径)。如示出的螺旋扫描线3700a的连续环或圈之间的间隔在头部运动的方向增加(例如,向图纸的左边),并且在相反的方向减小(例如,向图纸的右边)。As another example, FIG37A shows exemplary distortion produced in a spiral scan line 3700a during a rapid lateral motion of the end user's head to the left. The distortion may be nonlinear because the head motion may accelerate after initiation and slow down before termination. The distortion depends on the direction, speed, and acceleration of the head motion and the direction of spiral scan pixel generation (e.g., clockwise, increasing radius). The spacing between consecutive loops or circles of the spiral scan line 3700a as shown increases in the direction of head motion (e.g., to the left of the drawing) and decreases in the opposite direction (e.g., to the right of the drawing).

作为另一个例子,图37B示出了在最终用户的头部向左侧的快速横向运动期间产生在螺旋扫描线3700b中的示例性失真。该失真可能是非线性的,因为头部运动可以在开始后加速并在终止前减慢。事实上,如图37B所示,失真可能是高度椭圆(high elliptic)并且去中心化的。该失真是头部运动的方向、速度和加速度以及螺旋扫描像素生成的方向(例如,顺时针方向,增加半径)的函数。如示出的螺旋扫描线3700b的连续环或圈之间的间隔在头部运动的方向上增加(例如,向图纸的左边)。对于相对于系统来说头部运动过快的地方,每个环或圈的最左部分可以位于与相对于螺旋扫描线3700b的起点的头部运动相同的方向,如图37B所示。As another example, FIG37B shows an exemplary distortion produced in a spiral scan line 3700b during a rapid lateral motion of the end user's head to the left. The distortion may be nonlinear because the head motion may accelerate after initiation and slow down before termination. In fact, as shown in FIG37B , the distortion may be highly elliptic and decentralized. The distortion is a function of the direction, speed, and acceleration of the head motion and the direction (e.g., clockwise, increasing radius) in which the spiral scan pixels are generated. As shown, the spacing between successive loops or circles of the spiral scan line 3700b increases in the direction of the head motion (e.g., to the left of the drawing). For locations where the head motion is too rapid for the system, the leftmost portion of each loop or circle may be located in the same direction as the head motion relative to the starting point of the spiral scan line 3700b, as shown in FIG37B .
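上述环间间隔的变化可以用下面的说明性Python示意复现(几何模型为假设,并非本文所述系统的实际实现):在头部向左扫动期间生成螺旋扫描点时,相邻环在运动方向一侧的间隔变大,而在相反一侧变小。The spacing changes described above can be reproduced with the following illustrative Python sketch (the geometry is an assumption, not the actual implementation of the described system): when spiral scan points are generated during a leftward head sweep, the gap between adjacent loops widens on the side toward the motion and narrows on the opposite side.

```python
import math

# 说明性示意/illustrative sketch: spiral scan points drawn while the head
# sweeps left (assumed geometry, not the patented system). The drawing
# shifts over time, so loop spacing widens toward the motion and narrows
# on the opposite side.

def spiral_points(loops, samples_per_loop, radius_per_loop, head_velocity,
                  loop_period):
    pts = []
    for i in range(loops * samples_per_loop):
        t = i / samples_per_loop * loop_period        # elapsed draw time
        theta = 2.0 * math.pi * i / samples_per_loop  # scan angle
        r = radius_per_loop * i / samples_per_loop    # growing radius
        x = r * math.cos(theta) - head_velocity * t   # head-sweep offset
        y = r * math.sin(theta)
        pts.append((x, y))
    return pts

pts = spiral_points(3, 360, 10.0, head_velocity=2.0, loop_period=1.0)
# leftmost point of loop n is at sample n*360 + 180 (theta = pi)
left = [pts[n * 360 + 180][0] for n in range(3)]
right = [pts[n * 360][0] for n in range(3)]
left_gaps = [left[n] - left[n + 1] for n in range(2)]     # widened
right_gaps = [right[n + 1] - right[n] for n in range(2)]  # narrowed
```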

采用螺旋扫描图样的一个优点是,地址到图像缓冲器的转换独立于运动的方向(例如,头部运动、用于手持的微型投影机的手部运动)。One advantage of using a helical scan pattern is that the conversion of addresses to the image buffer is independent of the direction of motion (eg, head motion, hand motion for a handheld pico projector).

上述系统被用于下述的所有实施例。在一个实施例中,基于预测用户的焦点的移动,该系统可以被用于预测头部追踪。图1根据一个所示实施例示出了在采用预测头部追踪的增强现实系统中的方法100的操作。The above system is used in all embodiments described below. In one embodiment, the system can be used for predictive head tracking based on predicting the movement of the user's focus. FIG1 illustrates the operation of a method 100 in an augmented reality system using predictive head tracking according to an illustrated embodiment.

在102,增强现实系统(例如,控制器子系统和/或其处理器)将多个帧作为图像呈现给增强现实系统的最终用户。所述帧典型地包括指定用于在视场中产生一个或多个虚拟对象的像素信息。如前所述,虚拟对象可以采取多种不同的虚拟对象的形式或格式中的任意一种,其可以可视地表示物理对象或者可被表示的信息、数据或逻辑结构。虚拟对象的非限制性实例可以包括:虚拟文本对象、虚拟数字对象、虚拟字母数字对象、虚拟标签对象、虚拟场对象、虚拟图表对象、虚拟地图对象、虚拟工具对象或物理对象的虚拟可视化表示。At 102, an augmented reality system (e.g., a controller subsystem and/or its processor) presents a plurality of frames as images to an end user of the augmented reality system. The frames typically include pixel information designated for generating one or more virtual objects in a field of view. As previously described, the virtual objects can take any of a variety of different virtual object forms or formats that can visually represent physical objects or representable information, data, or logical structures. Non-limiting examples of virtual objects can include: a virtual text object, a virtual numeric object, a virtual alphanumeric object, a virtual label object, a virtual field object, a virtual chart object, a virtual map object, a virtual tool object, or a virtual visual representation of a physical object.

在104,增强现实系统至少基于指示最终用户的注意的输入选择一个或多个虚拟对象。At 104 , the augmented reality system selects one or more virtual objects based at least on the input indicative of the end user's attention.

输入可以是最终用户的实际选择。选择可以由最终用户实时做出,或者可能已被预先指定。因此,最终用户可以选择某一组虚拟工具作为最终用户通常地较其他对象更为聚焦或者注意的一类虚拟对象。The input may be an actual selection by the end user. The selection may be made in real time by the end user or may have been pre-specified. Thus, the end user may select a certain set of virtual tools as a type of virtual object that the end user typically focuses on or pays attention to more than other objects.

输入可从各种来源推断得出。输入可涉及到虚拟对象本身。输入可以附加地或备选地涉及最终用户的视场中或者显示器或投影机的视场中的物理对象。输入可以附加地或备选地涉及最终用户自身,例如最终用户和/或最终用户的一部分(例如,头、眼睛)的朝向和/或方位,或者历史属性。该历史属性可以是最终用户特定的,或者更一般化或通用的。历史属性可以指示一组已定义的最终用户的特征。最终用户特征可以例如包括头部运动速度、头部运动加速度、和/或头部运动和眼睛运动之间的关系(如一个到另一个的比率)。通过历史属性追踪的最终用户特征甚至可以包括指示给定的最终用户注意某些虚拟对象的倾向性。这些可以通过虚拟对象类型(例如,文本、图表)、最近的虚拟对象(例如,新出现的对象)、虚拟对象的运动(例如,图像到图像的大的变化、快速或迅速的运动、运动的方向)和/或虚拟物体的特征(如颜色,亮度,尺寸)来指定。Input can be derived from a variety of sources. Input can relate to the virtual object itself. Input can additionally or alternatively relate to physical objects in the end user's field of view or the field of view of a display or projector. Input can additionally or alternatively relate to the user itself, such as the orientation and/or position of the end user and/or a portion of the end user (e.g., head, eyes), or historical attributes. The historical attributes can be end-user specific or more general or universal. Historical attributes can indicate characteristics of a defined set of end users. End-user characteristics can include, for example, head movement speed, head movement acceleration, and/or the relationship between head movement and eye movement (e.g., a ratio of one to the other). End-user characteristics tracked through historical attributes can even include a tendency for a given end user to pay attention to certain virtual objects. These can be specified by virtual object type (e.g., text, graphics), recent virtual objects (e.g., newly appeared objects), virtual object motion (e.g., large changes from image to image, rapid or swift motion, direction of motion), and/or characteristics of virtual objects (e.g., color, brightness, size).
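下面是一个说明性的Python示意(权重与字段名均为假设,并非本文所述系统的实际实现),演示如何根据上述特征(新引入、运动、亮度、尺寸)并结合用户特定的历史倾向,对最可能吸引最终用户注意的虚拟对象打分。The following illustrative Python sketch (weights and field names are assumptions, not the actual implementation of the described system) shows how the characteristics above (newness, motion, brightness, size), combined with a user-specific historical tendency, can score which virtual object is most likely to attract the end user's attention.

```python
# 说明性示意/illustrative sketch: scoring which virtual object is most
# likely to attract attention (weights and field names are assumptions,
# not the patented system).

def attention_score(obj, user_tendency=None):
    score = 0.0
    score += 2.0 if obj.get("newly_introduced") else 0.0
    score += 1.5 * obj.get("speed", 0.0)       # image-to-image change, 0..1
    score += 1.0 * obj.get("brightness", 0.0)  # 0..1
    score += 0.5 * obj.get("size", 0.0)        # 0..1
    if user_tendency:                          # e.g. {"text": 1.2}
        score *= user_tendency.get(obj.get("kind", ""), 1.0)
    return score

label = {"kind": "text", "newly_introduced": False, "speed": 0.1,
         "brightness": 0.4, "size": 0.2}
flash = {"kind": "chart", "newly_introduced": True, "speed": 0.8,
         "brightness": 0.9, "size": 0.3}
best = max([label, flash], key=attention_score)
```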

在106,对于被呈现给最终用户的多个帧中至少一些中的每一个帧,增强现实系统(例如,控制器子系统和/或其处理器)确定虚拟对象相对于最终用户参考帧在最终用户的视场中出现的位置。例如,增强现实系统可以确定新引入的虚拟对象的位置、已定义类型的虚拟对象、快速移动或超过大距离的虚拟对象、或曾经为最终用户的注意点的虚拟对象。At 106, for each of at least some of the plurality of frames presented to the end user, the augmented reality system (e.g., a controller subsystem and/or a processor thereof) determines a location at which a virtual object appears in the end user's field of view relative to the end user's reference frame. For example, the augmented reality system may determine the location of a newly introduced virtual object, a virtual object of a defined type, a virtual object that moves quickly or over a large distance, or a virtual object that was once the end user's focus.

在108,增强现实系统至少部分地基于已确定的最终用户的视场中的虚拟对象出现的位置调整至少一个后续帧的呈现。本文讨论了调整视场中虚拟对象的出现的多种方式,包括非彻底地适应或补偿、调整像素尺寸、调整像素亮度、调整分辨率、开窗和/或消隐或闪烁。At 108, the augmented reality system adjusts presentation of at least one subsequent frame based at least in part on the determined location of the virtual object in the end user's field of view. Various ways of adjusting the appearance of the virtual object in the field of view are discussed herein, including non-exhaustive adaptation or compensation, adjusting pixel size, adjusting pixel brightness, adjusting resolution, windowing, and/or blanking or flickering.

图2根据一个所示实施例示出了在增强现实系统中另一个操作的方法200。在执行图1中方法100的动作104和/或10 6时可采用方法200。2 illustrates another method 200 of operating in an augmented reality system, according to one illustrated embodiment. The method 200 may be employed when performing actions 104 and/or 106 of the method 100 of FIG. 1 .

方法200采用基于被或将被呈现给最终用户的虚拟对象的特征来预测头部运动的技术。例如,预期新引入的虚拟对象或虚拟对象的运动(例如,由于意外、速度和/或距离)可能会吸引用户注意的虚拟对象,导致头部运动以将特定的虚拟对象带入或靠近最终用户视场的中心。附加地或备选地,在评估哪些最有可能吸引人注意时,增强现实系统可依赖于虚拟对象的其他特征。例如,高吸引力的(例如,闪烁、微闪)、大型的、快速运动的,或明亮的虚拟对象比其他虚拟对象更可能吸引注意。Method 200 employs techniques for predicting head motion based on features of virtual objects that are or will be presented to an end user. For example, it is anticipated that a newly introduced virtual object or the motion of a virtual object (e.g., due to surprise, speed, and/or distance) may attract the user's attention to a virtual object, resulting in a head motion to bring the particular virtual object into or near the center of the end user's field of view. Additionally or alternatively, the augmented reality system may rely on other features of virtual objects when assessing which are most likely to attract attention. For example, virtual objects that are highly attractive (e.g., flickering, shimmering), large, fast-moving, or brightly lit are more likely to attract attention than other virtual objects.

针对新引入虚拟对象的情况,在202,当其被新引入最终用户的视场时,增强现实系统(例如,控制器子系统和/或其处理器)选择和/或确定虚拟对象出现的位置。当未出现在先前(时间上)呈现给最终用户的相关的帧中时,虚拟对象被认为是新引入的。特别是,增强现实系统依赖于一个事实,即新引入的虚拟对象相对于出现在直接的在先帧中的虚拟对象更可能会吸引最终用户的注意。附加地或备选地,增强现实系统可通过评估哪个最可能引起注意来依赖虚拟对象的其它特征,例如,以在多个新引入的虚拟对象之间选择或划分优先级。例如,高吸引力的(例如,闪烁,微闪)、大型、快速移动、或明亮的虚拟对象可能比其他的虚拟对象更能够吸引注意。For the case of a newly introduced virtual object, at 202, the augmented reality system (e.g., a controller subsystem and/or its processor) selects and/or determines the location where the virtual object appears when it is newly introduced into the end user's field of view. A virtual object is considered to be newly introduced when it does not appear in a relevant frame previously (temporally) presented to the end user. In particular, the augmented reality system relies on the fact that newly introduced virtual objects are more likely to attract the end user's attention than virtual objects that appeared in the directly preceding frame. Additionally or alternatively, the augmented reality system can rely on other features of the virtual object by evaluating which is most likely to attract attention, for example, to select or prioritize among multiple newly introduced virtual objects. For example, a highly attractive (e.g., flashing, shimmering), large, fast-moving, or bright virtual object may be more able to attract attention than other virtual objects.

针对正在移动的虚拟对象的情况,在204,增强现实系统(例如,控制器子系统和/或其处理器)选择和/或确定虚拟对象相对于至少一个先前帧中的同一虚拟对象的方位在帧中的新方位中出现的位置。这样,突然移动、快速移动、和/或空间上虚拟对象从一帧到一个或多个后续帧的大的方位移动可能易于吸引最终用户的注意或聚焦。附加地或备选地,增强现实系统可通过评估哪个最可能引起注意来依赖虚拟对象的其它特征,例如,以在多个新引入的虚拟对象之间选择或划分优先级。例如,高吸引力的(例如,闪烁,微闪)、大型、或明亮的虚拟对象比其他的虚拟对象更可能吸引注意。For the case of a moving virtual object, at 204, the augmented reality system (e.g., a controller subsystem and/or its processor) selects and/or determines where the virtual object appears in a new orientation in the frame relative to the orientation of the same virtual object in at least one previous frame. In this way, sudden movements, rapid movements, and/or large spatial positional shifts of a virtual object from one frame to one or more subsequent frames may tend to attract the end user's attention or focus. Additionally or alternatively, the augmented reality system can rely on other features of the virtual object by assessing which is most likely to attract attention, for example, to select or prioritize between multiple newly introduced virtual objects. For example, a highly attractive (e.g., shimmering, glittering), large, or bright virtual object is more likely to attract attention than other virtual objects.
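作为说明性示意(数据结构为假设,并非本文所述系统的实际实现),下面的Python片段在前后两帧都出现的对象中选出帧间位移最大的对象,作为最可能引发头部运动的候选。As an illustrative sketch (the data structures are assumptions, not the actual implementation of the described system), the following Python snippet picks, among objects present in both the previous and current frames, the one with the largest frame-to-frame displacement as the likeliest trigger of a head movement.

```python
import math

# 说明性示意/illustrative sketch: among objects present in both the
# previous and current frames, pick the largest frame-to-frame jump as the
# likeliest trigger of a head movement (data layout is an assumption).

def largest_jump(prev_positions, curr_positions):
    """positions: dict object-id -> (x, y); returns (object-id, distance)."""
    best_id, best_d = None, -1.0
    for oid, (x1, y1) in curr_positions.items():
        if oid not in prev_positions:
            continue           # newly introduced objects are handled at 202
        x0, y0 = prev_positions[oid]
        d = math.hypot(x1 - x0, y1 - y0)
        if d > best_d:
            best_id, best_d = oid, d
    return best_id, best_d

prev = {"a": (0.0, 0.0), "b": (5.0, 5.0)}
curr = {"a": (0.5, 0.0), "b": (9.0, 8.0), "c": (1.0, 1.0)}
target, dist = largest_jump(prev, curr)
```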

图3根据一个所示实施例示出了在增强现实系统中操作的方法300。可以在执行图1的方法100中的动作108时采用方法300。FIG3 illustrates a method 300 of operating in an augmented reality system according to one illustrated embodiment. The method 300 may be employed when performing act 108 of the method 100 of FIG1 .

在302,增强现实系统(例如,控制器子系统和/或其处理器)呈现至少一个后续帧,该至少一个后续帧的中心至少被移向——如果不是被集中在——已确定的最终用户的视场中虚拟对象出现的位置。后续帧或图像的中心可以被移动以共同位于所选择的被预测吸引最终用户注意的虚拟对象的位置。备选地,后续帧的中心可能移动以接近所选择的被预测吸引最终用户注意的虚拟对象的位置。这可在二维或三维中执行。例如,虚拟对象的二维或三维方位可以被用来分别在二维或三维中调整后续图像的视场。移动的后续帧或图像最好与所预测的最终用户头部运动在时间上一致。因此,移动的后续帧或图像应该尽可能在时间上与真实头部运动一致地呈现给最终用户。如本文所讨论的,这可能考虑速度、加速度、以及速度和加速度中的变化。At 302, the augmented reality system (e.g., a controller subsystem and/or its processor) presents at least one subsequent frame whose center is at least moved toward, if not centered on, the location where the virtual object appears in the determined field of view of the end user. The center of the subsequent frame or image may be moved to be co-located at the location of the selected virtual object that is predicted to attract the end user's attention. Alternatively, the center of the subsequent frame may be moved to be close to the location of the selected virtual object that is predicted to attract the end user's attention. This can be performed in two or three dimensions. For example, the two-dimensional or three-dimensional orientation of the virtual object can be used to adjust the field of view of the subsequent image in two or three dimensions, respectively. The moved subsequent frame or image is preferably consistent in time with the predicted end user head movement. Therefore, the moved subsequent frame or image should be presented to the end user as consistent in time with the real head movement as possible. As discussed herein, this may take into account velocity, acceleration, and changes in velocity and acceleration.
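将后续帧的中心移向所选虚拟对象的操作可以如下示意(线性插值方案为假设,并非本文所述系统的实际实现);fraction参数允许帧中心仅移动部分距离,以便与预测的头部运动在时间上保持一致。Shifting the center of a subsequent frame toward the selected virtual object can be sketched as follows (the linear-interpolation scheme is an assumption, not the actual implementation of the described system); the fraction parameter lets the frame center move only part of the way, so the shift can stay in step with the predicted head movement.

```python
# 说明性示意/illustrative sketch: moving the center of a subsequent frame
# toward the selected virtual object (linear interpolation is an assumed
# scheme, not the patented system). Works identically in 2D or 3D.

def shift_center(current_center, object_position, fraction=1.0):
    """Move the frame center toward object_position by `fraction` (0..1)
    per axis; fraction=1.0 centers the frame on the object."""
    return tuple(c + fraction * (o - c)
                 for c, o in zip(current_center, object_position))

center = (0.0, 0.0, 0.0)
obj = (10.0, -4.0, 2.0)
halfway = shift_center(center, obj, fraction=0.5)  # partial shift
on_obj = shift_center(center, obj, fraction=1.0)   # fully centered
```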

图4根据一个所示实施例示出了的增强现实系统中操作的方法400。可以在执行图1的方法100时采用方法400。FIG4 illustrates a method 400 for operating in an augmented reality system according to an exemplary embodiment. The method 400 may be employed when executing the method 100 of FIG1 .

可选地,在402,增强现实系统接收指示最终用户的身份的信息。该信息可以采取多种不同形式中的任何形式。例如,信息可以是由最终用户输入的用户名或其他用户标识符(例如,加密的),或者来自与最终用户相关联的应答器、磁条、或机器可读符号。例如,信息可以包括指示最终用户的一个或多个物理特征的生物识别信息。在一个特别有利的实现中,增强现实系统可以接收图像数据,其代表最终用户的一个或两个眼睛的一部分(例如,视网膜)。例如,增强现实系统可以例如经由一个或多个光纤投影光进入最终用户的一个或两个眼睛。该光可以被调制以例如增加信噪比和/或限制眼睛的加热。图像传感器可以例如经由投影光的一个或多个光纤捕获眼睛的部分的图像,所述光纤提供双向路径。备选地,专用光纤也可被采用。作为进一步的备选,图像传感器可以被置于靠近眼睛的位置,消除作为到图像传感器的返回路径的光纤的使用。人眼的某些部分(例如,视网膜血管)可能被视为具有足够的独特特征来作为唯一的最终用户标识符。Optionally, at 402, the augmented reality system receives information indicating the identity of the end user. This information can take any of a variety of different forms. For example, the information can be a username or other user identifier (e.g., encrypted) entered by the end user or from a transponder, magnetic stripe, or machine-readable symbol associated with the end user. For example, the information can include biometric information indicating one or more physical characteristics of the end user. In one particularly advantageous implementation, the augmented reality system can receive image data representing a portion of one or both eyes of the end user (e.g., the retina). For example, the augmented reality system can project light into one or both eyes of the end user, e.g., via one or more optical fibers. The light can be modulated to, for example, increase the signal-to-noise ratio and/or limit heating of the eye. An image sensor can capture an image of the portion of the eye, e.g., via the one or more optical fibers projecting the light, the optical fibers providing a bidirectional path. Alternatively, dedicated optical fibers can also be employed. As a further alternative, the image sensor can be placed close to the eye, eliminating the use of an optical fiber as a return path to the image sensor. Certain portions of the human eye (e.g., retinal blood vessels) may be considered to have sufficiently unique characteristics to serve as a unique end-user identifier.

可选地,在404,增强现实系统基于所接收的指示最终用户的身份的信息检索用于最终用户的至少一个用户特定的历史属性。该用户特定的历史属性可指示下述至少之一:用于最终用户的先前头部运动速度、用于最终用户的先前头部运动加速度、用于最终用户的先前的眼部运动到头部运动的关系、最终用户注意某些类型或具有某些特征的虚拟对象的倾向。Optionally, at 404, the augmented reality system retrieves at least one user-specific historical attribute for the end user based on the received information indicating the identity of the end user. The user-specific historical attribute may indicate at least one of the following: a previous head movement velocity for the end user, a previous head movement acceleration for the end user, a previous eye movement to head movement relationship for the end user, and a tendency of the end user to pay attention to virtual objects of certain types or having certain characteristics.
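检索用户特定历史属性的过程可以如下示意(存储结构与属性名均为假设,并非本文所述系统的实际实现):按接收到的身份信息查找档案,未知用户回退到一般化的默认值。Retrieving user-specific historical attributes can be sketched as follows (the store layout and attribute names are assumptions, not the actual implementation of the described system): the profile is looked up by the received identity, falling back to generalized defaults for an unknown user.

```python
# 说明性示意/illustrative sketch: retrieving user-specific historical
# attributes by identity (store layout and attribute names are assumptions,
# not the patented system); unknown users fall back to generic defaults.

GENERIC_PROFILE = {"head_speed_deg_s": 300.0,
                   "head_accel_deg_s2": 900.0,
                   "eye_to_head_ratio": 0.6}

PROFILE_STORE = {  # would normally live in non-transitory storage
    "user-42": {"head_speed_deg_s": 220.0,
                "head_accel_deg_s2": 750.0,
                "eye_to_head_ratio": 0.8},
}

def retrieve_historical_attributes(user_id):
    profile = dict(GENERIC_PROFILE)          # start from generic defaults
    profile.update(PROFILE_STORE.get(user_id, {}))
    return profile

known = retrieve_historical_attributes("user-42")
unknown = retrieve_historical_attributes("user-99")
```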

在406,增强现实系统(例如,控制器子系统和/或其处理器)至少部分地基于已确定的最终用户的视场中的虚拟对象出现的位置来预测最终用户的头部运动的发生。再一次,增强现实系统可依赖虚拟对象的吸引力来预测头部运动,例如在逐个最终用户的基础上。At 406, the augmented reality system (e.g., the controller subsystem and/or its processor) predicts the occurrence of head movement of the end user based at least in part on the determined location of the virtual object in the end user's field of view. Again, the augmented reality system can rely on the attractiveness of the virtual object to predict head movement, such as on an end-user-by-end-user basis.

增强现实系统可以采用估计的速度和/或估计的速度变化或估计的加速度以至少部分地将图像呈现与最终用户的预测的头部运动同步。预测的头部运动中速度的估计的变化可基于所预测的头部运动开始之后的第一定义时间和预测的头部运动结束之前的第二定义时间之间的延伸范围。The augmented reality system can employ the estimated velocity and/or estimated change in velocity or estimated acceleration to at least partially synchronize image presentation with the predicted head movement of the end user. The estimated change in velocity in the predicted head movement can be based on a range extending between a first defined time after the predicted head movement begins and a second defined time before the predicted head movement ends.

在408,增强现实系统估计至少一个指示所预测的最终用户头部运动的估计速度的值。增强现实系统可基于一个或多个值、参数或特征来估计速度。例如,增强现实系统可依赖于移动最终用户头部到新位置以观察所选择或标识的虚拟对象所需的运动范围。增强现实系统可能依赖于用于人的采样的平均速度、或可依赖于用于特定的最终用户的历史头部移动速度。增强现实系统可依赖于用于特定的最终用户的历史属性。速度可以例如用角速度表示。At 408, the augmented reality system estimates at least one value indicative of an estimated velocity of the predicted end-user head movement. The augmented reality system may estimate the velocity based on one or more values, parameters, or features. For example, the augmented reality system may rely on the range of motion required to move the end-user's head to a new position to observe a selected or identified virtual object. The augmented reality system may rely on an average velocity for a sample of people, or may rely on historical head movement velocities for a particular end-user. The augmented reality system may rely on historical attributes for a particular end-user. The velocity may, for example, be expressed in terms of angular velocity.
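下面的Python示意(数值均为假设,并非本文所述系统的实际实现)演示如何由所需的角运动范围估计预测头部运动的速度:存在用户历史速度时取其平均,否则使用人群采样的平均速度。The following Python sketch (all numbers are assumptions, not the actual implementation of the described system) shows how the velocity of a predicted head movement can be estimated from the required angular range of motion: an average of the user's historical speeds is used when available, and a population-sample average otherwise.

```python
# 说明性示意/illustrative sketch: estimating the velocity of a predicted
# head movement (all numbers are assumptions, not the patented system).

POPULATION_AVG_DEG_PER_S = 300.0  # assumed sample average, for illustration

def estimate_head_velocity(angular_range_deg, user_history=None):
    """Return (angular velocity in deg/s, predicted duration in s) for a
    head movement covering angular_range_deg."""
    speed = POPULATION_AVG_DEG_PER_S
    if user_history:              # past speeds for this user, deg/s
        speed = sum(user_history) / len(user_history)
    return speed, angular_range_deg / speed

v_generic, t_generic = estimate_head_velocity(30.0)
v_user, t_user = estimate_head_velocity(30.0, user_history=[200.0, 240.0])
```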

在410,增强现实系统估计所预测的最终用户头部运动中的至少一个速度变化,该变化在所预测的头部运动开始和所预测的头部运动结束之间的头部运动的范围内发生。遍及所预测的运动范围内的某些部分,速度的变化可能会以不同的增量发生。At 410, the augmented reality system estimates at least one change in velocity in the predicted end-user head movement, the change occurring within a range of head movement between the predicted head movement start and the predicted head movement end. The change in velocity may occur in different increments throughout certain portions of the predicted range of movement.

在412,增强现实系统估计至少一个指示所预测的最终用户头部运动的估计的加速度的值。该估计的加速度可以在头部运动的整个范围上或仅在其中一部分上进行。该估计的加速度可以在头部运动范围的离散间隔上进行。加速度的估计可以被确定以用于头部运动开始后的一些已定义的持续时间的一个或多个间隔。加速度的估计可以被确定以用于头部运动终止前的一些已定义的持续时间的一个或多个间隔。从开始和/或结束点隔开估计可以避免加速度测量中的大的变化。At 412, the augmented reality system estimates at least one value of an estimated acceleration indicative of the predicted end-user head motion. The estimated acceleration may be over the entire range of the head motion or only over a portion thereof. The estimated acceleration may be performed at discrete intervals within the range of the head motion. The estimate of acceleration may be determined for one or more intervals of some defined duration after the head motion begins. The estimate of acceleration may be determined for one or more intervals of some defined duration before the head motion ends. Spacing the estimates from the start and/or end points may avoid large variations in the acceleration measurements.
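对加速度在离散间隔上的估计、并将估计从运动起止点隔开,可以如下示意(有限差分方案为假设,并非本文所述系统的实际实现)。Estimating acceleration over discrete intervals while spacing the estimates away from the start and end of the movement can be sketched as follows (the finite-difference scheme is an assumption, not the actual implementation of the described system).

```python
# 说明性示意/illustrative sketch: finite-difference acceleration estimates
# over discrete intervals of a head movement, skipping defined durations at
# onset and termination (scheme is an assumption, not the patented system).

def interior_accelerations(velocities, dt, skip_start_s, skip_end_s):
    """velocities: speed samples (deg/s) at spacing dt seconds; returns
    accelerations (deg/s^2) for intervals outside the skipped regions."""
    n = len(velocities)
    first = int(round(skip_start_s / dt))
    last = n - 1 - int(round(skip_end_s / dt))
    return [(velocities[i + 1] - velocities[i]) / dt
            for i in range(first, last)]

vels = [0.0, 50.0, 150.0, 250.0, 300.0, 300.0, 250.0, 100.0, 0.0]
accs = interior_accelerations(vels, dt=0.01, skip_start_s=0.02,
                              skip_end_s=0.02)
```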

可选地,在414,增强现实系统确定至少一个值,其至少部分地适应或补偿最终用户的预测的头部运动的估计的速度。例如,增强现实系统可以确定与在给定的时间内呈现的帧的总数量有关的值和/或指定跨越一系列将被渲染和/或呈现的图像的一个或多个虚拟对象应在何处和/或多快地移动的值。这些可以被用来渲染后续帧。Optionally, at 414, the augmented reality system determines at least one value that at least partially accommodates or compensates for the estimated speed of the end user's predicted head movement. For example, the augmented reality system may determine a value related to the total number of frames to be rendered in a given time and/or a value specifying where and/or how fast one or more virtual objects should move across a series of images to be rendered and/or presented. These values may be used to render subsequent frames.

可选地,在416,增强现实系统至少部分地基于至少一个值渲染至少一个后续帧,所述值至少部分地补偿最终用户的预测的头部运动的估计速度。例如,增强现实系统可以确定与在给定的时间内呈现的帧的总数量有关的值和/或指定跨越一系列将被渲染和/或呈现的图像的一个或多个虚拟对象应在何处和/或多快地移动的值。这些可以被用来渲染后续帧。Optionally, at 416, the augmented reality system renders at least one subsequent frame based at least in part on at least one value that at least in part compensates for the estimated speed of the end user's predicted head movement. For example, the augmented reality system can determine a value related to the total number of frames to be rendered in a given time and/or a value specifying where and/or how fast one or more virtual objects should move across a series of images to be rendered and/or presented. These can be used to render the subsequent frame.

在另一个实施例中,系统可以被用于基于用户的历史属性预测头部追踪。图5根据一个所示实施例示出了在采用预测的头部追踪的增强现实系统中操作的方法500。In another embodiment, the system may be used to predict head tracking based on historical attributes of the user. Figure 5 illustrates a method 500 operating in an augmented reality system employing predictive head tracking, according to one illustrated embodiment.

增强现实系统在执行预测的头部追踪时可以采用历史属性。历史属性可能是最终用户特定的或者更通用或一般化的。历史属性可指示一组定义的最终用户特征。最终用户特征可以例如包括头部运动速度、头部运动加速度和/或头部运动和眼睛运动之间的关系(例如,一对另一个的比率)。由历史属性追踪的最终用户特征甚至可以包括指示给定的最终用户注意某些虚拟对象的倾向。The augmented reality system can employ historical attributes when performing predictive head tracking. The historical attributes may be end-user specific or more general or generalized. The historical attributes may indicate a defined set of end-user characteristics. The end-user characteristics may include, for example, head movement velocity, head movement acceleration, and/or a relationship between head movement and eye movement (e.g., a one-to-one ratio). The end-user characteristics tracked by the historical attributes may even include a tendency for a given end-user to pay attention to certain virtual objects.
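下面是一个说明性的草图，示出了可如何记录这样一组最终用户特征作为历史属性（字段名称均为假设，并非本公开的内容）。The following is an illustrative sketch of how such a set of end-user characteristics might be recorded as historical attributes (all field names are assumptions, not from this disclosure).

```python
from dataclasses import dataclass, field

# Hypothetical record of the end-user characteristics tracked as
# historical attributes; the field names are illustrative only.
@dataclass
class HistoricalAttributes:
    head_speeds: list = field(default_factory=list)         # deg/s
    head_accelerations: list = field(default_factory=list)  # deg/s^2
    head_eye_ratios: list = field(default_factory=list)     # head/eye

    def record(self, speed, acceleration, head_deg, eye_deg):
        self.head_speeds.append(speed)
        self.head_accelerations.append(acceleration)
        if eye_deg:  # avoid division by zero for pure head movements
            self.head_eye_ratios.append(head_deg / eye_deg)
```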

在502,增强现实系统接收到指示最终用户的身份的信息。该信息可以采取多种不同的形式中的任何形式,例如由最终用户主动地提供的信息,从非临时性存储介质读取的信息、从用户读取的信息(例如,生物识别数据或特征),或从最终用户行为中推断的信息。At 502, the augmented reality system receives information indicative of an end user's identity. This information can take any of a number of different forms, such as information actively provided by the end user, information read from a non-transitory storage medium, information read from the user (e.g., biometric data or characteristics), or information inferred from end user behavior.

在504,增强现实系统至少部分地基于所接收的指示最终用户的身份的信息来检索至少一个用于用户的用户特定历史属性。该身份信息可被以多种不同方式中的任何方式来接收、产生或确定。At 504, the augmented reality system retrieves at least one user-specific historical attribute for the end user based at least in part on the received information indicating the identity of the end user.The identity information can be received, generated, or determined in any of a number of different ways.

在506,增强现实系统至少部分地基于所检索的用于最终用户的至少一个用户特定历史属性来向最终用户提供帧。例如,增强现实系统可以提供来自帧缓冲器的帧到投影机或显示设备(例如,与一个或多个光纤配对的光源),或可以渲染帧到帧缓冲器。增强现实系统可以经由至少在双轴方向可移动的至少一个光纤提供光。增强现实系统可以经由至少一个光纤接收指示最终用户的眼睛的至少一部分的图像的图像信息,所述光纤也向最终用户提供帧。At 506, the augmented reality system provides a frame to the end user based at least in part on the retrieved at least one user-specific historical attribute for the end user. For example, the augmented reality system can provide the frame from the frame buffer to a projector or display device (e.g., a light source paired with one or more optical fibers), or can render the frame to the frame buffer. The augmented reality system can provide light via at least one optical fiber that is movable in at least two axes. The augmented reality system can receive image information indicative of an image of at least a portion of the end user's eye via at least one optical fiber, the optical fiber also providing the frame to the end user.

图6根据一个所示实施例示出了在采用预测的头部追踪的增强现实系统中操作的方法600。可以在执行图5中的方法500的动作504时采用方法600。FIG. 6 illustrates a method 600 operating in an augmented reality system employing predictive head tracking, according to one illustrated embodiment. The method 600 may be employed when performing act 504 of the method 500 in FIG. 5.

在602,增强现实系统检索提供用于最终用户的至少一个头部运动属性的指示的至少一个历史属性。头部运动属性指示最终用户的至少一个先前的头部运动。历史属性可以被存储在非临时性介质上,例如在数据库或其它逻辑结构中。At 602, the augmented reality system retrieves at least one historical attribute that provides an indication of at least one head movement attribute for an end user. The head movement attribute indicates at least one previous head movement of the end user. The historical attribute can be stored on a non-transitory medium, such as in a database or other logical structure.

在604,增强现实系统检索提供用于最终用户的至少一个先前的头部运动的头部运动速度的指示的至少一个历史属性。At 604 , the augmented reality system retrieves at least one historical attribute that provides an indication of a head movement speed for at least one previous head movement of the end user.

在606,增强现实系统检索跨越最终用户的至少一个先前头部运动范围的至少一部分的头部运动速度的变化的指示的至少一个历史属性。At 606 , the augmented reality system retrieves at least one historical attribute of an indication of a change in head movement speed across at least a portion of at least one previous range of head movement of the end user.

在608,增强现实系统检索提供用于最终用户的至少一个先前头部运动的头部运动加速度的指示至少一个历史属性。At 608 , the augmented reality system retrieves at least one historical attribute providing an indication of head movement acceleration for at least one previous head movement of the end user.

在610,增强现实系统检索提供用于最终用户的至少一个先前头部和眼部运动组合的头部运动与眼部运动之间关系的指示的至少一个历史属性。该关系可以例如被表示为代表最终用户的至少一个先前头部运动的头部运动的头部运动值和代表至少一个先前眼部运动的值的比率。所述值可以分别代表头部和眼部运动的量,例如表示为角度变化。所述比率可能是最终用户的头部运动的历史平均和眼部运动的历史平均的比率。附加地或备选地,可以采用头部和眼部运动之间的其他关系,例如速度或加速度。At 610, the augmented reality system retrieves at least one historical attribute that provides an indication of a relationship between head movement and eye movement for at least one previous head and eye movement combination of the end user. The relationship can be expressed, for example, as a ratio of a head movement value representing the head movement of at least one previous head movement of the end user and a value representing at least one previous eye movement. The values can represent the amount of head and eye movement, respectively, for example, expressed as an angular change. The ratio can be a ratio of a historical average of the end user's head movement and a historical average of the eye movement. Additionally or alternatively, other relationships between head and eye movements can be employed, such as velocity or acceleration.
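下面的草图示出了如何将该关系计算为头部运动历史平均与眼部运动历史平均的比率（单位为假设的角度值）。The sketch below shows how the relationship might be computed as the ratio of the historical average head movement to the historical average eye movement (units are assumed angular values).

```python
# Sketch: the head/eye relationship expressed as the ratio of the
# historical average head movement to the historical average eye
# movement, both given as angular changes in degrees (assumed units).
def head_eye_ratio(head_moves_deg, eye_moves_deg):
    avg_head = sum(head_moves_deg) / len(head_moves_deg)
    avg_eye = sum(eye_moves_deg) / len(eye_moves_deg)
    return avg_head / avg_eye
```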

图7根据一个所示实施例示出了在采用预测的头部追踪的增强现实系统中操作的方法700。可以在执行图5的方法500的动作506时采用方法700。FIG. 7 illustrates a method 700 for operating in an augmented reality system employing predictive head tracking, according to one illustrated embodiment. The method 700 may be employed when performing act 506 of the method 500 of FIG. 5.

在702,增强现实系统至少预测最终用户的头部运动的终点。例如,当虚拟对象的出现被用于预测的头部运动时,特定虚拟对象的相对位置可被用作终点。At 702, the augmented reality system predicts at least an endpoint of an end user's head movement. For example, when the appearance of a virtual object is used for the predicted head movement, the relative position of a particular virtual object can be used as the endpoint.

在704,增强现实系统渲染至少一个后续帧到至少一个图像缓冲器。所述至少一个后续帧被至少移向,甚至移至,所预测的头部运动的终点。At 704, the augmented reality system renders at least one subsequent frame to at least one image buffer. The at least one subsequent frame is shifted at least toward, or even to, the predicted end point of the head movement.

图8根据一个所示实施例示出了在采用预测的头部追踪的增强现实系统中操作的方法800。可以在执行图7的方法700的动作704时采用方法800。FIG. 8 illustrates a method 800 operating in an augmented reality system employing predictive head tracking, according to one illustrated embodiment. The method 800 may be employed when performing act 704 of the method 700 of FIG. 7.

在802,增强现实系统以至少部分地适应用于最终用户的至少一个头部运动属性的方式渲染多个后续帧,所述多个后续帧至少向所预测的头部运动的终点偏移。所述头部运动属性可以指示头部运动的不同物理特性,特别是最终用户的头部运动的历史物理特性。头部运动属性可以例如包括下述中的一个或多个:用于最终用户的历史头部运动速度、用于最终用户的历史头部运动加速度,和/或用于最终用户的头部运动与眼部运动之间的历史关系(例如,比率)。可以通过将对应的图像或对应的图像的中心相对于与先前帧所对应的图像进行偏移来渲染后续帧从而实现偏移。At 802, the augmented reality system renders a plurality of subsequent frames in a manner at least partially adapted to at least one head motion attribute for the end user, the plurality of subsequent frames being offset toward at least an endpoint of the predicted head motion. The head motion attributes can indicate different physical characteristics of the head motion, in particular historical physical characteristics of the end user's head motion. The head motion attributes can, for example, include one or more of: historical head motion velocities for the end user, historical head motion accelerations for the end user, and/or historical relationships (e.g., ratios) between head motions and eye motions for the end user. The offset can be achieved by rendering the subsequent frames by offsetting the corresponding image or the center of the corresponding image relative to the image corresponding to the previous frame.
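下面是一个假设性的草图，示出了如何将后续帧的中心向所预测的头部运动终点偏移（增益值为假设，实践中可由用户的历史头部运动属性得出）。Below is a hypothetical sketch of offsetting the center of a subsequent frame toward the predicted end point of the head movement (the gain value is an assumption; in practice it might be derived from the user's historical head-movement attributes).

```python
# Sketch: offset the center of a subsequent frame toward the predicted
# end point of the head movement; the gain would be derived from the
# user's historical head-movement attributes (assumed here).
def offset_frame_center(current_center, predicted_endpoint, gain):
    cx, cy = current_center
    ex, ey = predicted_endpoint
    return (cx + gain * (ex - cx), cy + gain * (ey - cy))
```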

图9根据一个所示实施例示出了在采用预测的头部追踪的增强现实系统中操作的方法900。可以在执行图7的方法700的动作702时采用方法900。FIG. 9 illustrates a method 900 operating in an augmented reality system employing predictive head tracking, according to one illustrated embodiment. The method 900 may be employed when performing act 702 of the method 700 of FIG. 7.

在902,增强现实系统至少部分地基于最终用户的视场中的虚拟图像的出现来预测最终用户的头部运动的发生。At 902 , an augmented reality system predicts occurrence of head movement of an end user based at least in part on the appearance of a virtual image in a field of view of the end user.

所述出现在时间上可能是相对于作为图像呈现给最终用户的先前帧，当新的虚拟对象被新引入呈现给最终用户的视场时的出现。备选地或附加地，所述出现可能是相对于先前呈现给最终用户的虚拟对象的方位，虚拟对象在呈现给最终用户的视场中在新的方位中的出现。所述预测可以例如考虑多种因素。例如，预测可以部分地基于虚拟对象的尺寸或显著性、方位改变的量或百分比、速度、突然的加速度，或虚拟对象的方位中的其他变化。The appearance may be the appearance in time of a new virtual object newly introduced into the field of view presented to the end user relative to a previous frame presented as an image to the end user. Alternatively or additionally, the appearance may be the appearance of a virtual object in a new orientation in the field of view presented to the end user relative to the orientation of the virtual object previously presented to the end user. The prediction may, for example, take into account a variety of factors. For example, the prediction may be based in part on the size or salience of the virtual object, the amount or percentage of the change in position, the speed, the suddenness of acceleration, or other changes in the position of the virtual object.
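下面的草图示出了一种假设性的评分，其组合预测可能考虑的因素（权重和阈值均为说明性假设，并非本公开的内容）。The sketch below shows a hypothetical score combining factors the prediction might weigh (the weights and threshold are illustrative assumptions, not from this disclosure).

```python
# Sketch: a hypothetical score combining factors the prediction might
# weigh -- object size, amount of position change, and speed. The
# weights and threshold are illustrative assumptions.
def attention_score(size, position_change, speed,
                    w_size=0.5, w_pos=0.3, w_speed=0.2):
    return w_size * size + w_pos * position_change + w_speed * speed

def predicts_head_movement(size, position_change, speed, threshold=0.6):
    return attention_score(size, position_change, speed) > threshold
```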

所述系统还可以被用于动态控制像素特征。图10根据一个所示实施例示出了在增强现实系统中操作的方法1000。The system can also be used to dynamically control pixel characteristics. Figure 10 illustrates a method 1000 operating in an augmented reality system, according to one illustrated embodiment.

在1002,增强现实系统(例如,控制器子系统和/或其处理器)检测呈现给最终用户的帧中的一些像素之间的间隔将不同于在同一帧中的其他像素之间的间隔的指示。例如,增强现实系统可检测呈现给最终用户的帧中第一组像素的像素之间的间隔将不同于该呈现给最终用户的帧中至少第二组像素的像素之间的间隔的指示。例如,在帧的像素在一段时间上被顺序呈现(例如,帧缓冲器的读出)时(例如,“飞行像素”图样,诸如光栅扫描图样、螺旋扫描图样、利萨茹扫描图样),快速头部运动可能会导致在图像或帧的不同部分之间的像素间隔的变化。At 1002, an augmented reality system (e.g., a controller subsystem and/or a processor thereof) detects an indication that spacing between some pixels in a frame presented to an end user will be different than spacing between other pixels in the same frame. For example, the augmented reality system may detect an indication that spacing between pixels of a first group of pixels in a frame presented to the end user will be different than spacing between pixels of at least a second group of pixels in the frame presented to the end user. For example, when pixels of a frame are presented sequentially over a period of time (e.g., readout of a frame buffer) (e.g., a "flying pixel" pattern, such as a raster scan pattern, a spiral scan pattern, a Lissajous scan pattern), rapid head motion may cause variations in pixel spacing between different portions of an image or frame.

在1004,响应于呈现给最终用户的帧中的一些像素之间的间隔将不同于该帧中的其他像素之间的间隔的检测,增强现实系统向至少一个后续帧的至少一部分提供被调整以至少部分补偿可由最终用户感知的至少一个像素特征的至少第一组像素。这样可至少部分补偿呈现给最终用户的图像的不同部分中的像素之间的间隔。At 1004, in response to detecting that spacing between some pixels in a frame presented to an end user will differ from spacing between other pixels in the frame, the augmented reality system provides, to at least a portion of at least one subsequent frame, at least a first set of pixels that are adjusted to at least partially compensate for at least one pixel characteristic perceptible by the end user. This may at least partially compensate for spacing between pixels in different portions of the image presented to the end user.
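作为说明，下面的草图（单位和速率为假设值）示出了在顺序（“飞行像素”）呈现期间头部转动如何拉伸较晚呈现的像素的感知间隔。As an illustration, the sketch below (units and rates are assumed values) shows how head rotation during sequential ("flying pixel") presentation stretches the perceived spacing of later-presented pixels.

```python
# Sketch: with sequential "flying pixel" presentation, head rotation
# during readout stretches the perceived spacing of later pixels.
# Units (degrees, seconds) and rates are assumed.
def perceived_spacing(nominal_spacing_deg, head_speed_deg_per_s,
                      time_between_pixels_s):
    # Extra angle the head sweeps between two successive pixels.
    return nominal_spacing_deg + head_speed_deg_per_s * time_between_pixels_s
```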

图11根据一个所示实施例示出了在增强现实系统中操作的方法1100。可以在执行图10的方法1000时采用方法1100。FIG. 11 illustrates a method 1100 for operating in an augmented reality system according to one illustrated embodiment. The method 1100 may be employed when executing the method 1000 of FIG. 10.

可选地,在1102,增强现实系统(例如,控制器子系统和/或其处理器)接收指示用户佩戴的至少一个头戴式惯性传感器的输出的信号。该惯性传感器可以采取多种形式,例如陀螺仪传感器或加速度传感器。惯性传感器可以是单轴或多轴设备。惯性传感器可以采取MEMS设备的形式。Optionally, at 1102, the augmented reality system (e.g., the controller subsystem and/or its processor) receives a signal indicative of an output of at least one head-mounted inertial sensor worn by a user. The inertial sensor can take a variety of forms, such as a gyroscope or an accelerometer. The inertial sensor can be a single-axis or multi-axis device. The inertial sensor can take the form of a MEMS device.

可选地,在1104,增强现实系统接收指示用户佩戴的至少一个头戴式成像器的输出的信号。该成像器可以例如采取数码摄像机或其它图像捕获设备的形式。这些可以是前向摄像机以捕捉至少接近于最终用户视场的视场。Optionally, at 1104, the augmented reality system receives a signal indicative of the output of at least one head-mounted imager worn by the user. The imager may, for example, take the form of a digital camera or other image capture device. These may be forward-facing cameras to capture a field of view that is at least approximately the end user's field of view.

可选地，在1106，增强现实系统检测超过标称头部运动值的头部运动。例如，增强现实系统可以检测超过标称速度和/或标称加速度的头部运动。增强现实系统可采用来自惯性传感器的信号以检测运动，并且特别是加速度。增强现实系统可以采用来自头戴式摄像机的信号检测周围环境中的物理对象的方位的变化，特别是固定的物理对象如墙壁、地板、天花板。增强现实系统可以采用任何数量的图像处理技术。已检测的方位变化允许增强现实系统确定在头部方位、运动速度和加速度的变化。除了或代替惯性传感器和头戴成像信息，增强现实系统可以采用其他信息。例如，增强现实系统可采用来自监视周围环境并且不是被用户佩戴而是追踪用户的系统的信号。这样的系统可以采用一个或多个成像器，例如数码摄像机，来监控周围环境。成像器检测最终用户和最终用户的部分如头部的运动。再一次，多种图像处理技术可被采用。这样的系统可被有利地与结构化的光系统配对。备选地，方法#CB00可被独立于已检测的或甚至所预测的头部运动执行。Optionally, at 1106, the augmented reality system detects head movement that exceeds a nominal head movement value. For example, the augmented reality system may detect head movement that exceeds a nominal velocity and/or a nominal acceleration. The augmented reality system may utilize signals from inertial sensors to detect movement, and in particular acceleration. The augmented reality system may utilize signals from a head-mounted camera to detect changes in the orientation of physical objects in the surrounding environment, in particular fixed physical objects such as walls, floors, and ceilings. The augmented reality system may utilize any number of image processing techniques. Detected orientation changes allow the augmented reality system to determine changes in head orientation, velocity, and acceleration. In addition to or in lieu of inertial sensor and head-mounted imaging information, the augmented reality system may utilize other information. For example, the augmented reality system may utilize signals from a system that monitors the surrounding environment and is not worn by the user but rather tracks the user. Such a system may utilize one or more imagers, such as digital cameras, to monitor the surrounding environment. The imagers detect movement of the end user and parts of the end user, such as the head. Again, a variety of image processing techniques may be employed. Such a system may advantageously be paired with a structured light system. Alternatively, method #CB00 may be performed independently of detected or even predicted head movement.
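下面是一个最小的草图，示出了对超过标称头部运动值的检测（标称限值为假设的占位值）。Below is a minimal sketch of detecting head movement that exceeds nominal head movement values (the nominal limits are assumed placeholder values).

```python
# Sketch: threshold test on values derived from the head-worn inertial
# sensor; the nominal limits are assumed placeholders.
NOMINAL_SPEED = 50.0    # deg/s, assumed nominal head-movement speed
NOMINAL_ACCEL = 200.0   # deg/s^2, assumed nominal head acceleration

def exceeds_nominal(speed_deg_per_s, accel_deg_per_s2):
    return (speed_deg_per_s > NOMINAL_SPEED
            or accel_deg_per_s2 > NOMINAL_ACCEL)
```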

在1108,增强现实系统例如基于已检测到的头部运动的方向选择帧的第一组像素。增强现实系统可以基于其它准则例如已检测的头部运动的速度另外选择帧的第一组像素。At 1108, the augmented reality system selects a first set of pixels of the frame, for example, based on the direction of the detected head movement. The augmented reality system can additionally select the first set of pixels of the frame based on other criteria, such as the speed of the detected head movement.

在1110，增强现实系统调节至少一个后续帧的第一组像素中的至少一些像素的尺寸和/或亮度中的至少一个。该调整可被设计为至少适应或至少部分地补偿头部运动造成的帧或图像中不期望的变化。At 1110, the augmented reality system adjusts at least one of the size and/or brightness of at least some pixels of the first set of pixels of at least one subsequent frame. The adjustment can be designed to at least accommodate or at least partially compensate for undesirable changes in the frames or images caused by head movement.

可选地，在1112，增强现实系统渲染至少一个后续帧。渲染的后续帧包括已调节的像素信息，以至少部分地适应或补偿头部运动造成的帧或图像中不期望的变化。Optionally, the augmented reality system renders at least one subsequent frame at 1112. The rendered subsequent frame includes pixel information adjusted to at least partially accommodate or compensate for undesirable changes in the frame or image caused by head movement.

可选地，在1114，增强现实系统从存储一个或多个后续帧的至少一个帧缓冲器中读出至少一个后续帧。例如，增强现实系统可以选择性地从至少一个帧缓冲器中读出至少一个后续帧。这样可以利用过渲染，其中帧相对于图像区域或视场的尺寸被过渲染。该系统，特别是在头部佩戴该系统时，将在大多数情况下专用于具有已知面积和已知分辨率的固定显示表面。这与旨在向多种尺寸和分辨率的显示器提供信号的计算机和其它设备相反。因此，增强现实系统选择性地读入或读出帧缓冲器的一部分，而不是读入或读出来自帧缓冲器的整个帧。如果为创建后续图像而渲染的新帧显示位于先前图像之外的部分，过渲染可以避免在不进行过渲染时可能需要的GPU过度运行。例如，没有过渲染的话，增强现实系统将需要在每次最终用户的头部被移动时渲染新的帧。结合过渲染，一组专用的电子设备可被用于选择或读出过渲染的帧的所期望的部分，其本质上是移动先前渲染的帧中的窗口。Optionally, at 1114, the augmented reality system reads at least one subsequent frame from at least one frame buffer storing one or more subsequent frames. For example, the augmented reality system can selectively read at least one subsequent frame from the at least one frame buffer. This allows for over-rendering, in which frames are over-rendered relative to the size of the image area or field of view. The system, particularly when worn on the head, will in most cases be dedicated to a fixed display surface of known area and known resolution. This is in contrast to computers and other devices intended to supply signals to displays of a variety of sizes and resolutions. Therefore, the augmented reality system selectively reads into or out of a portion of the frame buffer rather than the entire frame. Over-rendering prevents the excessive GPU load that could otherwise be required when the new frame rendered to create the subsequent image displays portions outside the previous image. For example, without over-rendering, the augmented reality system would need to render a new frame each time the end user's head moves. In conjunction with over-rendering, a dedicated set of electronics can be used to select or read out the desired portion of the over-rendered frame, essentially moving a window within the previously rendered frame.
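下面的草图示出了从过渲染的帧缓冲器中选择性读出一个显示尺寸的窗口，即随头部运动移动窗口而非重新渲染（缓冲器以行列表建模，尺寸为假设值）。The sketch below illustrates selectively reading a display-sized window out of an over-rendered frame buffer, moving the window with head movement rather than re-rendering (the buffer is modeled as a list of rows; sizes are assumed values).

```python
# Sketch: read a display-sized window out of an over-rendered frame
# buffer; head movement shifts the window offsets instead of forcing
# a full re-render. The frame is modeled as a list of rows of pixels.
def read_window(frame, window_w, window_h, offset_x, offset_y):
    return [row[offset_x:offset_x + window_w]
            for row in frame[offset_y:offset_y + window_h]]
```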

图12根据一个所示实施例示出了在增强现实系统中操作的方法1200。可以在执行图11的方法1100的动作1108和1110时采用方法1200。FIG. 12 illustrates a method 1200 of operating in an augmented reality system, according to one illustrated embodiment. The method 1200 may be employed when performing acts 1108 and 1110 of the method 1100 of FIG. 11.

在1202，增强现实系统（例如，控制器子系统和/或其处理器）选择帧的至少第一组像素，使得第一组像素相对于检测到的头部运动方向处于给定的方向（例如，相同的方向、相反的方向）。At 1202, an augmented reality system (e.g., a controller subsystem and/or its processor) selects at least a first group of pixels of a frame such that the first group of pixels is in a given direction (e.g., the same direction, the opposite direction) relative to a detected head motion direction.

在1204，增强现实系统调整所选择的组的像素的像素尺寸，作为呈现给用户的至少一个后续帧的第一组像素中的像素。At 1204, the augmented reality system adjusts a pixel size of the selected group of pixels as pixels in a first group of pixels of at least one subsequent frame presented to the user.

例如,增强现实系统可以选择帧的第一组像素,使得第一组像素相对于其他像素位于与已检测的头部运动相同的方向上。例如,相对于在图像中通常朝向右侧的第二组像素,在图像中第一组像素相对地朝向左侧。例如,相对于在图像中通常朝向底部的第二组像素,在图像中第一组像素相对地朝向上部。增强现实系统可以提供一个或多个后续帧或图像,其中第一组中的像素相对于后续帧中的一些其他像素具有增加的尺寸。这可以至少部分地适应或至少部分地补偿像素间的扩散,该扩散产生于增强现实系统不能跟上的快速头部运动。For example, the augmented reality system may select a first group of pixels of a frame so that the first group of pixels are located in the same direction as the detected head movement relative to the other pixels. For example, the first group of pixels are relatively oriented to the left in the image relative to a second group of pixels that are generally oriented to the right in the image. For example, the first group of pixels are relatively oriented to the top in the image relative to a second group of pixels that are generally oriented to the bottom in the image. The augmented reality system may provide one or more subsequent frames or images in which the pixels in the first group have an increased size relative to some other pixels in the subsequent frames. This may at least partially accommodate or at least partially compensate for diffusion between pixels that results from rapid head movements that the augmented reality system cannot keep up with.

例如，增强现实系统可以选择帧的第一组像素，使得第一组像素相对于其他像素位于与已检测的头部运动方向相反的方向上。增强现实系统可以提供一个或多个后续帧或图像，其中第一组像素相对于后续帧中的一些其他像素具有减少的尺寸。这可以至少部分地适应或至少部分地补偿像素间的扩散，该扩散产生于增强现实系统不能跟上的快速头部运动。For example, the augmented reality system can select a first set of pixels of a frame such that the first set of pixels are located in a direction opposite to the direction of the detected head motion relative to other pixels. The augmented reality system can provide one or more subsequent frames or images in which the first set of pixels has a reduced size relative to some other pixels in the subsequent frames. This can at least partially accommodate or at least partially compensate for inter-pixel diffusion resulting from rapid head motion that the augmented reality system cannot keep up with.

调整(例如,增加、减少)所选择的组的像素的尺寸可以包括调整可变的聚焦组件。调整(例如,增加、减少)所选择的组的像素的尺寸可以包括调整可变尺寸的源。调整(例如,增加、减少)选择的组的像素的尺寸可以包括调整抖动。Adjusting (e.g., increasing, decreasing) the size of the selected group of pixels may include adjusting a variable focus component. Adjusting (e.g., increasing, decreasing) the size of the selected group of pixels may include adjusting a variable size source. Adjusting (e.g., increasing, decreasing) the size of the selected group of pixels may include adjusting jitter.
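下面是一个假设性的草图，示出了放大扫描线上与已检测头部运动方向一致的一侧的像素（按中线划分和比例因子均为说明性假设）。Below is a hypothetical sketch of enlarging the pixels on the side of a scan line matching the detected head-movement direction (the midline split and scale factor are illustrative assumptions).

```python
# Sketch: enlarge the pixels on the side of a scan line that matches
# the detected head-movement direction; the split at the midline and
# the scale factor are illustrative assumptions.
def adjust_pixel_sizes(sizes, motion_direction, scale=1.25):
    half = len(sizes) // 2
    if motion_direction == "left":
        return [s * scale for s in sizes[:half]] + list(sizes[half:])
    return list(sizes[:half]) + [s * scale for s in sizes[half:]]
```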

作为进一步的示例,增强现实系统可以选择帧的第一组像素,使得第一组像素相对于其他像素位于与已检测的头部运动相同的方向上。增强现实系统可以提供一个或多个后续帧或图像,其中第一组中的像素相对于后续帧中的一些其他像素具有增加的亮度。这可以至少部分地适应或至少部分地补偿像素间的扩散,该扩散产生于增强现实系统不能跟上的快速头部运动。As a further example, the augmented reality system can select a first group of pixels of a frame such that the first group of pixels are located in the same direction as the detected head motion relative to other pixels. The augmented reality system can provide one or more subsequent frames or images in which the pixels in the first group have increased brightness relative to some other pixels in the subsequent frames. This can at least partially accommodate or at least partially compensate for inter-pixel scattering that results from rapid head motion that the augmented reality system cannot keep up with.

作为又一个更进一步的示例,增强现实系统可以选择帧的第一组像素,使得第一组像素相对于其他像素位于与已检测的头部运动相反的方向上。增强现实系统可以提供一个或多个后续帧或图像,其中第一组中的像素相对于后续帧中的一些其他像素具有减少的亮度。这可以至少部分地适应或至少部分地补偿像素间的扩散,该扩散产生于增强现实系统不能跟上的快速头部运动。As yet another further example, the augmented reality system can select a first group of pixels of a frame such that the first group of pixels are located in a direction opposite to the detected head motion relative to other pixels. The augmented reality system can provide one or more subsequent frames or images in which the pixels in the first group have reduced brightness relative to some other pixels in the subsequent frames. This can at least partially accommodate or at least partially compensate for inter-pixel scattering that results from rapid head motion that the augmented reality system cannot keep up with.

如上,增强现实系统可以仅调整所选择的像素的尺寸、仅调整所选择的像素的亮度,或调整所选择的像素的尺寸和亮度的两者。进一步地,增强现实系统可以调整一些像素的亮度、其它像素的尺寸,甚至其它像素的亮度和尺寸,和/或不调整进一步的像素的亮度或者尺寸。As described above, the augmented reality system can adjust only the size of selected pixels, only the brightness of selected pixels, or both the size and brightness of selected pixels. Further, the augmented reality system can adjust the brightness of some pixels, the size of other pixels, or even the brightness and size of other pixels, and/or not adjust the brightness or size of further pixels.

所述系统还可以被用于在整个帧的基础上动态地对小于整个的帧进行更新，如下所示。图13根据一个所示实施例示出了在增强现实系统中操作的方法1300。The system can also be used to dynamically update less than an entire frame on an entire frame basis, as shown below. FIG. 13 illustrates a method 1300 operating in an augmented reality system according to one illustrated embodiment.

在1302，增强现实系统（例如，控制器子系统和/或其处理器）渲染第一完整帧到图像缓冲器。第一完整帧包括用于像素的顺序呈现以形成多个虚拟对象的图像的像素信息。第一完整帧可以采取多种适用于不同显示技术的形式。例如，完整帧可包括适于形成完整的光栅扫描帧的像素信息，其可以是具有两个场的交错的光栅扫描帧。交错的光栅扫描的每个场均包括多条线，第一场包括奇数行并且第二场包含偶数行。至少如显示给最终用户的，奇数和偶数行可以是交错的。一种特别有优势的技术采用螺旋扫描线。该螺旋扫描方法可采用每帧单个场，例如由单个螺旋迹线（spiral trace）组成。备选地，该螺旋扫描方法可以采用每帧两个或更多场，例如由两个或更多个被顺序呈现的螺旋迹线组成。螺旋迹线可以有利地通过引入帧的每个场之间的相移被简单地交错或嵌套。另一种技术采用利萨茹扫描方法。该利萨茹扫描方法可采用每帧单场，例如由单个利萨茹迹线组成。备选地，利萨茹扫描方法可采用每帧两个或更多场，例如由两个或更多个被顺序呈现的利萨茹迹线组成。利萨茹迹线可以有利地通过引入帧的每个场之间的相移被简单地交错或嵌套。At 1302, the augmented reality system (e.g., a controller subsystem and/or its processor) renders a first complete frame to an image buffer. The first complete frame includes pixel information for sequential presentation of pixels to form images of multiple virtual objects. The first complete frame can take a variety of forms suitable for different display technologies. For example, the complete frame can include pixel information suitable for forming a complete raster scan frame, which can be an interlaced raster scan frame having two fields. Each field of the interlaced raster scan includes multiple lines, the first field including odd lines and the second field including even lines. The odd and even lines can be interlaced, at least as displayed to the end user. A particularly advantageous technique uses spiral scan lines. The spiral scanning method can use a single field per frame, for example, consisting of a single spiral trace. Alternatively, the spiral scanning method can use two or more fields per frame, for example, consisting of two or more spiral traces presented sequentially. The spiral traces can advantageously be simply interlaced or nested by introducing a phase shift between each field of the frame. Another technique uses a Lissajous scanning method. The Lissajous scanning method can employ a single field per frame, e.g., consisting of a single Lissajous trace. Alternatively, the Lissajous scanning method can employ two or more fields per frame, e.g., consisting of two or more sequentially presented Lissajous traces. Advantageously, the Lissajous traces can be simply interleaved or nested by introducing a phase shift between each field of the frame.
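下面的草图示出了通过在场之间引入相移来嵌套两个螺旋扫描场（半径随角度线性增长；参数为说明性假设，并非本公开的内容）。The sketch below shows nesting two spiral-scan fields by introducing a phase shift between them (the radius grows linearly with angle; the parameters are illustrative assumptions, not from this disclosure).

```python
import math

# Sketch: two spiral-scan fields nested simply by a phase shift of pi
# between them; the radius grows linearly with angle.
def spiral_field(n_points, turns, phase):
    pts = []
    for i in range(n_points):
        theta = 2 * math.pi * turns * i / n_points
        r = theta                      # linear growth from the center
        a = theta + phase              # phase shift interleaves fields
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

field_a = spiral_field(100, 3, 0.0)          # first field
field_b = spiral_field(100, 3, math.pi)      # interleaved second field
```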

在1304,增强现实系统开始第一完整帧的呈现。这可以包括读出帧缓冲器,例如以驱动光源和一个或多个光纤的端部。该读出可以包括动态地确定帧缓冲器的哪部分将被读出。At 1304, the augmented reality system begins rendering the first complete frame. This may include reading out the frame buffer, for example to drive a light source and the ends of one or more optical fibers. This reading may include dynamically determining which portion of the frame buffer to read out.

可选地，在1306，增强现实系统检测最终用户的头部运动超过标称头部运动值。这可以采用先前讨论的不同方法中的任何一种。Optionally, the augmented reality system detects that the end user's head movement exceeds a nominal head movement value at 1306. This can be done using any of the various methods discussed previously.

在1308，在整个第一完整帧的呈现完成之前，增强现实系统动态地中断第一完整帧的呈现。详细地说，增强现实系统开始对第一完整帧的更新的呈现。在对第一完整帧的更新中的像素信息的至少一部分已经从第一完整帧改变。例如，在交错的基于光栅扫描的系统中，增强现实系统可呈现第一场的全部或一部分，使用更新的第二场来代替第二场。又例如，在交错的基于螺旋扫描的系统中，增强现实系统可呈现第一场（例如，第一螺旋扫描线或迹线）的全部或一部分，使用更新的第二场（例如，不同于初始第二螺旋扫描或迹线的更新的第二螺旋扫描线或迹线）来代替第二场。类似的，在交错的基于利萨茹扫描的系统中，增强现实系统可呈现第一场（例如，第一利萨茹扫描线或迹线，即，完整的8字形周期）的全部或一部分，使用更新的第二场（例如，不同于初始第二利萨茹扫描或迹线的更新的第二利萨茹扫描线或迹线）来代替第二场。虽然示例根据场给出，但其并不限于整个的场。呈现可在场的呈现期间被中断，例如在呈现第一或第二或第三场期间。呈现可以在任何给定的线的呈现过程中被中断（例如，光栅扫描的行、螺旋或利萨茹扫描的完整周期）。At 1308, the augmented reality system dynamically interrupts presentation of the first complete frame before presentation of the entire first complete frame is complete. In detail, the augmented reality system begins presentation of an update to the first complete frame. At least a portion of the pixel information in the update to the first complete frame has changed from the first complete frame. For example, in an interlaced raster scan-based system, the augmented reality system may present all or a portion of the first field, replacing the second field with an updated second field. For another example, in an interlaced spiral scan-based system, the augmented reality system may present all or a portion of the first field (e.g., a first spiral scan line or trace), replacing the second field with an updated second field (e.g., an updated second spiral scan line or trace that is different from the initial second spiral scan or trace). Similarly, in an interlaced Lissajous scan-based system, the augmented reality system may present all or a portion of the first field (e.g., a first Lissajous scan line or trace, i.e., a complete figure-eight cycle), replacing the second field with an updated second field (e.g., an updated second Lissajous scan line or trace that is different from the initial second Lissajous scan or trace). While examples are given in terms of fields, they are not limited to entire fields. Presentation may be interrupted during the presentation of a field, such as during presentation of the first, second, or third field. Presentation may be interrupted during the presentation of any given line (e.g., a line of a raster scan, a complete cycle of a spiral or Lissajous scan).
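下面是一个假设性的草图，示出了在呈现期间用更新的第二场代替初始第二场。Below is a hypothetical sketch of substituting an updated second field for the original second field during presentation.

```python
# Sketch: dynamic interruption -- present the first field, and if an
# updated second field arrived mid-frame, substitute it for the
# original second field.
def present_frame(field1, field2, updated_field2=None):
    presented = list(field1)
    if updated_field2 is not None:  # presentation dynamically updated
        presented.extend(updated_field2)
    else:
        presented.extend(field2)
    return presented
```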

图14根据一个所示实施例示出了在增强现实系统中操作的方法1400。可以在执行图13的方法1300时采用方法1400。FIG. 14 illustrates a method 1400 for operating in an augmented reality system according to one illustrated embodiment. The method 1400 may be employed when executing the method 1300 of FIG. 13.

在1402,增强现实系统(例如,控制器子系统和/或其处理器)渲染更新的第一完整帧。更新的第一完整帧包括像素信息,其在至少一个方面不同于第一完整帧的像素信息。At 1402, an augmented reality system (e.g., a controller subsystem and/or its processor) renders an updated first full frame. The updated first full frame includes pixel information that differs from pixel information of the first full frame in at least one aspect.

渲染更新的第一完整帧可包括以第一场和至少第二场渲染更新的整帧。第二场可以与第一场交错,典型地跟随第一场的呈现而被顺序地呈现。例如,第一场可由光栅扫描中的偶数行组成而第二场由奇数行组成。又例如,第一场可以由第一螺旋扫描线或第一利萨茹扫描线组成,而第二场由第二螺旋扫描线或第二利萨茹扫描线组成。因此,渲染更新的第一完整帧可包括使用第一场和至少第二场渲染更新的整帧,第二场与至少第一场交错。Rendering the updated first complete frame may include rendering the updated complete frame using the first field and at least the second field. The second field may be interlaced with the first field, typically being presented sequentially following presentation of the first field. For example, the first field may be comprised of even-numbered lines in a raster scan and the second field may be comprised of odd-numbered lines. For another example, the first field may be comprised of first spiral scan lines or first Lissajous scan lines, while the second field may be comprised of second spiral scan lines or second Lissajous scan lines. Thus, rendering the updated first complete frame may include rendering the updated complete frame using the first field and at least the second field, the second field being interlaced with at least the first field.

在1404,增强现实系统呈现已更新的第一完整帧的一部分来代替第一完整帧的对应部分。因此,初始未更新的第一完整帧中断后,更新的帧的一部分代替第一完整帧的所有或一部分。At 1404, the augmented reality system presents the updated portion of the first complete frame to replace the corresponding portion of the first complete frame. Thus, after the initial non-updated first complete frame is interrupted, the portion of the updated frame replaces all or a portion of the first complete frame.

例如，增强现实系统呈现已更新的第一完整帧的第二场来代替初始（即，未更新的）第一完整帧的第二场。又例如，增强现实系统可呈现伴随已更新的第一完整帧的第二场的第一场的第二部分来代替初始（即，未更新的）第一完整帧的第一场的相应部分和整个第二场。For example, the augmented reality system may present the second field of the updated first complete frame instead of the second field of the original (i.e., non-updated) first complete frame. For another example, the augmented reality system may present the second portion of the first field accompanied by the second field of the updated first complete frame instead of the corresponding portion of the first field and the entire second field of the original (i.e., non-updated) first complete frame.

又例如,增强现实系统可呈现已更新的第一完整帧的场的一部分(例如,线、线的一部分、像素组,像素)来代替第一完整帧的对应场的对应部分。例如,增强现实系统可呈现已更新的光栅扫描帧的更新的第一完整帧的场的一部分来代替光栅扫描帧的初始(即,未更新的)第一完整帧的对应场的对应部分。For another example, the augmented reality system can present a portion of a field (e.g., a line, a portion of a line, a group of pixels, a pixel) of the updated first complete frame in place of a corresponding portion of a corresponding field of the first complete frame. For example, the augmented reality system can present a portion of a field of the updated first complete frame of the updated raster scan frame in place of a corresponding portion of a corresponding field of the original (i.e., non-updated) first complete frame of the raster scan frame.

作为另一个示例,增强现实系统可呈现已更新的第一完整帧的线来代替初始(即,未更新的)第一完整帧的对应线。作为又一个示例,增强现实系统可呈现更新的第一完整帧的螺旋线来代替初始(即,未更新的)第一完整帧的对应螺旋线。作为进一步的示例,增强现实系统可呈现更新的第一完整帧的线的一部分来代替初始(即,未更新的)第一完整帧的对应线的对应部分。作为又一进一步的示例,增强现实系统可呈现更新的第一完整帧的至少一个像素来代替初始(即,未更新的)第一完整帧的至少一个像素的对应部分。作为又一个附加示例,增强现实系统可呈现更新的第一完整帧的利萨茹图样扫描的一个完整周期来代替初始(即,未更新的)第一完整帧的利萨茹图样扫描的一个完整周期的对应部分。As another example, the augmented reality system may present a line of the updated first complete frame in place of a corresponding line of the initial (i.e., non-updated) first complete frame. As yet another example, the augmented reality system may present a spiral of the updated first complete frame in place of a corresponding spiral of the initial (i.e., non-updated) first complete frame. As a further example, the augmented reality system may present a portion of a line of the updated first complete frame in place of a corresponding portion of a corresponding line of the initial (i.e., non-updated) first complete frame. As yet a further example, the augmented reality system may present at least one pixel of the updated first complete frame in place of a corresponding portion of at least one pixel of the initial (i.e., non-updated) first complete frame. As yet another additional example, the augmented reality system may present a complete cycle of the Lissajous pattern scan of the updated first complete frame in place of a corresponding portion of a complete cycle of the Lissajous pattern scan of the initial (i.e., non-updated) first complete frame.

图15根据一个所示实施例示出了在增强现实系统中操作的方法1500。可以在执行图13的方法1300时采用方法1500。FIG15 illustrates a method 1500 for operating in an augmented reality system according to one illustrated embodiment. The method 1500 may be employed when executing the method 1300 of FIG13 .

在1502,增强现实系统(例如,控制器子系统和/或其处理器)渲染第一完整帧到帧缓冲器。第一完整帧可例如包括第一场和至少第二场。第一场可例如包括用于至少第一螺旋扫描线的像素信息并且第二场可包括用于至少第二螺旋扫描线的像素信息。第二场的扫描线可与第一场的扫描线交错。第一场可例如包括用于至少第一利萨茹扫描线的像素信息并且第二场可包括用于至少第二利萨茹扫描线的像素信息。第二场的扫描线可与第一场的扫描线交错。用于螺旋和利萨茹扫描图样两者的扫描线的交错可以使用相移来有效地实现。场或扫描线的数量可以大于2,例如3个,4个,8个,16个或更多。At 1502, an augmented reality system (e.g., a controller subsystem and/or its processor) renders a first complete frame to a frame buffer. The first complete frame may, for example, include a first field and at least a second field. The first field may, for example, include pixel information for at least a first spiral scan line and the second field may include pixel information for at least a second spiral scan line. The scan lines of the second field may be interleaved with the scan lines of the first field. The first field may, for example, include pixel information for at least a first Lissajous scan line and the second field may include pixel information for at least a second Lissajous scan line. The scan lines of the second field may be interleaved with the scan lines of the first field. The interleaving of scan lines for both spiral and Lissajous scan patterns can be efficiently implemented using phase shifting. The number of fields or scan lines can be greater than 2, such as 3, 4, 8, 16, or more.
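上述通过相移使第二场的扫描线与第一场的扫描线交错的做法可以用如下示意性代码概略说明(纯属示意;匝数、采样数等参数均为假设值,并非本说明给出)。The interleaving of a second field's spiral scan lines with a first field's via a phase shift, as described above, can be sketched as follows (an illustrative sketch only; the number of turns, sample counts, and other parameters are assumptions, not values given in this description):

```python
import math

def spiral_scan_points(turns, points_per_turn, phase=0.0, max_radius=1.0):
    """Sample points along one spiral scan line; `phase` (radians)
    rotates the whole spiral, so two fields whose phases differ by pi
    interleave and double the effective line density."""
    n = turns * points_per_turn
    pts = []
    for i in range(n):
        theta = 2.0 * math.pi * i / points_per_turn + phase
        r = max_radius * i / n  # radius grows linearly with angle
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

# First field, then a second field phase-shifted by pi (half a turn):
field1 = spiral_scan_points(turns=4, points_per_turn=64, phase=0.0)
field2 = spiral_scan_points(turns=4, points_per_turn=64, phase=math.pi)
```

由于第二场仅相差一个相移,其采样点恰好落在第一场相邻两匝之间,即两场相互嵌套。Because the second field differs only by a phase shift, its samples fall midway between the turns of the first field, so the two fields nest within one another.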

在1504,增强现实系统开始读出存储第一完整帧的帧缓冲器。增强现实系统可以驱动光源和轭或其它设备或结构以基于在来自图像缓冲器的帧中指定的像素数据生成图像。At 1504, the augmented reality system begins reading out the frame buffer storing the first complete frame. The augmented reality system can drive the light source and yoke or other devices or structures to generate an image based on the pixel data specified in the frame from the image buffer.

在1506,增强现实系统渲染更新的第一完整帧到帧缓冲器。更新的第一完整帧包括指定帧的像素信息,其中的一部分已从由初始(即,未更新的)第一完整帧所指定的信息改变。The augmented reality system renders an updated first full frame to the frame buffer at 1506. The updated first full frame includes pixel information specifying a frame, a portion of which has changed from information specified by the initial (ie, non-updated) first full frame.

在1508,在从帧缓冲器中读出第一完整帧完成之前,增强现实系统开始读出更新的第一完整帧,从而中断初始(即,未更新的)第一完整帧的呈现。一些实现可以利用两个或多个帧缓冲器,允许对一个帧缓冲器进行渲染,同时从其他帧缓冲器中读出帧。然而,这不应被认为是限制性的;增强现实系统的不同实现可以采用一个、两个、三个或甚至更多的帧缓冲器。At 1508, before the readout of the first complete frame from the frame buffer is complete, the augmented reality system begins reading out the updated first complete frame, thereby interrupting the presentation of the initial (i.e., non-updated) first complete frame. Some implementations may utilize two or more frame buffers, allowing rendering to one frame buffer while a frame is read out from another. This, however, should not be considered limiting; different implementations of the augmented reality system may employ one, two, three, or even more frame buffers.
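使用两个帧缓冲器并在读出中途切换到已更新帧的流程,可以用如下最小化示意说明(类名与接口均为为说明而虚构的假设)。The flow of using two frame buffers and switching to an updated frame mid-readout can be illustrated with the minimal sketch below (the class name and interface are hypothetical, invented for illustration):

```python
class DoubleBufferedDisplay:
    """Minimal sketch: render into a back buffer while the front buffer
    is being read out; swapping mid-readout interrupts presentation of
    the initial frame and continues from the updated frame."""

    def __init__(self, num_pixels):
        self.buffers = [[0] * num_pixels, [0] * num_pixels]
        self.front = 0       # buffer currently being read out
        self.cursor = 0      # next pixel index to read out
        self.presented = []  # pixel values actually driven to the display

    def render(self, frame):
        self.buffers[1 - self.front][:] = frame  # render to the back buffer

    def swap(self):
        # Interrupt any in-progress readout; subsequent reads come from
        # the newly rendered (updated) frame at the same position.
        self.front = 1 - self.front

    def read_out(self, count):
        buf = self.buffers[self.front]
        while count > 0 and self.cursor < len(buf):
            self.presented.append(buf[self.cursor])
            self.cursor += 1
            count -= 1

display = DoubleBufferedDisplay(8)
display.render([1] * 8)   # initial frame
display.swap()
display.read_out(4)       # readout interrupted halfway
display.render([2] * 8)   # updated frame
display.swap()
display.read_out(4)       # remainder comes from the updated frame
```

这样,显示序列的前半来自初始帧,后半来自更新帧。The presented sequence thus begins with pixels from the initial frame and finishes with pixels from the updated frame.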

图16根据一个所示实施例示出了在增强现实系统中操作的方法1600。可以在执行图13的方法1300时采用方法1600。FIG16 illustrates a method 1600 for operating in an augmented reality system, according to one illustrated embodiment. The method 1600 may be employed when executing the method 1300 of FIG13 .

在1602,增强现实系统(例如,控制器子系统和/或其处理器)生成用于第一扫描线(如螺旋、利萨茹)的像素信息。At 1602 , an augmented reality system (eg, a controller subsystem and/or a processor thereof) generates pixel information for a first scan line (eg, spiral, Lissajous).

可选地,在1604,增强现实系统生成用于相对于第一扫描线(如螺旋、利萨茹)相移的第二扫描线(如螺旋、利萨茹)的像素信息。对于螺旋和利萨茹扫描线,相移有利地使第二扫描线与第一扫描线交错或嵌套。Optionally, at 1604, the augmented reality system generates pixel information for a second scan line (e.g., spiral, Lissajous) phase-shifted relative to the first scan line (e.g., spiral, Lissajous). For spiral and Lissajous scan lines, the phase shift advantageously interleaves or nests the second scan line with the first scan line.

可选地,在1606,增强现实系统生成用于相对于第二扫描线(如螺旋、利萨茹)相移的第三扫描线(如螺旋、利萨茹)的像素信息。对于螺旋和利萨茹扫描线,相移有利地使第三扫描线与第一和第二扫描线交错或嵌套。Optionally, at 1606, the augmented reality system generates pixel information for a third scan line (e.g., spiral, Lissajous) phase-shifted relative to the second scan line (e.g., spiral, Lissajous). For spiral and Lissajous scan lines, the phase shift advantageously interleaves or nests the third scan line with the first and second scan lines.

可选地,在1608,增强现实系统生成用于相对于第三扫描线(如螺旋、利萨茹)相移的第四扫描线(如螺旋、利萨茹)的像素信息。对于螺旋和利萨茹扫描线,相移有利地使第四扫描线与第一、第二和第三扫描线交错或嵌套。Optionally, at 1608, the augmented reality system generates pixel information for a fourth scan line (e.g., spiral, Lissajous) phase-shifted relative to the third scan line (e.g., spiral, Lissajous). For spiral and Lissajous scan lines, the phase shift advantageously interleaves or nests the fourth scan line with the first, second, and third scan lines.
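上面1602至1608所生成的逐次相移的利萨茹扫描线可以概略示意如下(频率比与相移步长均为假设值)。The successively phase-shifted Lissajous scan lines generated at 1602 through 1608 above can be sketched as follows (the frequency ratio and phase step are assumed values):

```python
import math

def lissajous_scan(a, b, phase, samples=200):
    """One Lissajous scan line: x = sin(a*t + phase), y = sin(b*t)."""
    return [(math.sin(a * (2 * math.pi * t / samples) + phase),
             math.sin(b * (2 * math.pi * t / samples)))
            for t in range(samples)]

# Four scan lines, each phase-shifted by pi/2 relative to the previous,
# so that successive lines nest within one another:
scan_lines = [lissajous_scan(3, 2, k * math.pi / 2) for k in range(4)]
```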

图17根据一个所示实施例示出了在增强现实系统中操作的方法1700。FIG17 illustrates a method 1700 of operating in an augmented reality system, according to one illustrated embodiment.

在1702,对于多个帧中的每一个帧,增强现实系统(例如,控制器子系统和/或其处理器)为各个帧的至少两部分中的每一部分确定相应的分辨率。部分可以是场、线、其他细分,或甚至单个的像素。At 1702, for each of a plurality of frames, an augmented reality system (e.g., a controller subsystem and/or a processor thereof) determines a corresponding resolution for each of at least two portions of the respective frame. A portion can be a field, a line, other subdivision, or even a single pixel.

在1704,增强现实系统基于多个帧引起虚拟图像的呈现,呈现给最终用户的图像中的至少一些图像具有可变的分辨率。例如,相邻像素之间的间隔可以从一部分到另一部分不同。At 1704, the augmented reality system causes presentation of virtual images based on the plurality of frames, at least some of the images presented to the end user having a variable resolution. For example, the spacing between adjacent pixels may differ from one portion to another.

图18根据一个所示实施例示出了在增强现实系统中操作的方法1800。可以在执行图17的方法1700时采用方法1800。FIG18 illustrates a method 1800 for operating in an augmented reality system according to one illustrated embodiment. The method 1800 may be employed when executing the method 1700 of FIG17 .

在1802,增强现实系统(例如,控制器子系统和/或其处理器)将帧渲染为用于螺旋扫描图样的相应像素数据。At 1802 , an augmented reality system (eg, a controller subsystem and/or a processor thereof) renders a frame into corresponding pixel data for a spiral scan pattern.

在1804,增强现实系统调整在帧中的第一个帧第一部分的呈现的时间和帧中的第一个帧第二部分的呈现的时间之间的驱动信号的振幅。该振幅变化导致对应于帧中的第一个帧的图像中可变的分辨率。增强现实系统可以例如改变驱动信号的斜率或坡度。这在使用螺旋扫描图样的时候特别有用。例如,帧的第一场可具有一个斜率或坡度,其第二场具有不同的斜率或坡度,从而在单个帧内改变有效分辨率。可以在最终用户感兴趣或被吸引之处或其附近采用更高的分辨率或像素密度,而在远离这些位置的地方可以使用较低的分辨率或像素密度。在图像中心被移向最终用户吸引或聚焦的中心的情况下,高分辨率可以在图像中心附近出现,而周围的部分以较低的分辨率出现。这基本上实现了具有可控像素密度的、可被称为注视点显示(foveated display)的显示。At 1804, the augmented reality system adjusts the amplitude of the drive signal between the time of presentation of the first portion of the first frame and the time of presentation of the second portion of the first frame. This amplitude change results in a variable resolution in the image corresponding to the first frame. The augmented reality system can, for example, change the slope or gradient of the drive signal. This is particularly useful when using a spiral scan pattern. For example, the first field of a frame may have one slope or gradient and its second field a different slope or gradient, thereby changing the effective resolution within a single frame. A higher resolution or pixel density can be used at or near locations of interest or attraction to the end user, while a lower resolution or pixel density can be used away from those locations. Where the center of the image is moved toward the center of the end user's attraction or focus, the high resolution can appear near the center of the image while the surrounding portions appear at a lower resolution. This essentially implements what may be termed a foveated display, with controllable pixel density.
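在帧内改变驱动信号斜率以改变螺旋扫描有效分辨率的思路可以示意如下(两段斜率等数值均为假设)。The idea of varying the drive-signal slope within a frame to change the effective resolution of a spiral scan can be illustrated as follows (the two slope values and other numbers are assumptions):

```python
def spiral_radius_profile(num_samples, split, inner_slope, outer_slope):
    """Radius of a spiral scan versus sample index, driven with two
    slopes: a shallow slope packs the rings densely (high resolution)
    near the center, a steep slope spaces them out (lower resolution)
    toward the periphery."""
    radii, r = [], 0.0
    for i in range(num_samples):
        r += inner_slope if i < split else outer_slope
        radii.append(r)
    return radii

radii = spiral_radius_profile(100, split=50, inner_slope=0.5, outer_slope=2.0)
inner_spacing = radii[10] - radii[9]   # dense rings near the center
outer_spacing = radii[60] - radii[59]  # sparse rings in the periphery
```

环间距即该区域的有效像素间距;中心密、外围疏,近似于注视点(foveated)显示。The ring spacing is the effective pixel spacing in that region; dense rings at the center and sparse rings at the periphery approximate a foveated display.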

图19根据一个所示实施例示出了在增强现实系统中操作的方法1900。可以在执行图17的方法1700时采用方法1900。FIG19 illustrates a method 1900 for operating in an augmented reality system according to one illustrated embodiment. The method 1900 may be employed when executing the method 1700 of FIG17 .

在1902,增强现实系统(例如,控制器子系统和/或其处理器)评估用于最终用户的至少第一图像的注意点。增强现实系统可以使用前述技术中的任意技术来评估该注意点。例如,确定新的虚拟对象是否和将在何处出现、或虚拟对象将在最终用户的视场中移至何处。又例如,增强现实系统可以评估虚拟对象的相对吸引力(例如,速度、颜色、尺寸、亮度、微闪)。这也可以采用眼睛追踪信息,其指示最终用户的眼睛追踪或聚焦的视场中的位置。At 1902, an augmented reality system (e.g., a controller subsystem and/or its processor) evaluates an attention point for at least a first image for an end user. The augmented reality system may use any of the aforementioned techniques for evaluating the attention point. For example, determining whether and where a new virtual object will appear, or where a virtual object will move in the end user's field of view. For another example, the augmented reality system may evaluate the relative attractiveness of the virtual object (e.g., speed, color, size, brightness, shimmer). This may also employ eye tracking information, which indicates the location in the field of view that the end user's eyes are tracking or focusing on.

眼睛追踪信息可以例如经由例如头戴摄像机的一个或多个头戴传感器来提供。这样的眼睛追踪信息可以例如通过在最终用户的眼睛投影闪烁的光(glint)并检测至少一些所投影的光的返回或反射来辨别。例如,创建或投影图像的投影子系统可以投影像素、点或来自至少一个光纤的光的其它元素来创建照到最终用户的角膜的光。眼睛追踪可采用一个、两个、三个或甚至更多的光斑或光点。光斑或光点越多,就可以分辨越多的信息。光源(例如,激光二极管)可以是脉冲的或调制的,例如与用摄像机或图像传感器的帧速率同步。在这种情况下,斑或点可能出现为随眼部运动的线。作为跨越传感器的线迹的线的方向指示了眼睛运动的方向。线的朝向(例如,垂直、水平、对角)指示了眼睛运动的朝向。线的长度指示眼睛运动的速度。Eye tracking information can be provided, for example, via one or more head-mounted sensors, such as head-mounted cameras. Such eye tracking information can be discerned, for example, by projecting a glint of light at the end user's eye and detecting the return or reflection of at least some of the projected light. For example, a projection subsystem that creates or projects an image can project pixels, dots, or other elements of light from at least one optical fiber to create light that strikes the end user's cornea. Eye tracking can employ one, two, three, or even more light spots or points. The more light spots or points there are, the more information can be discerned. The light source (e.g., a laser diode) can be pulsed or modulated, for example, synchronized with the frame rate of a camera or image sensor. In this case, the spots or points may appear as lines that follow the eye movement. The direction of the line, as a trace across the sensor, indicates the direction of the eye movement. The orientation of the line (e.g., vertical, horizontal, diagonal) indicates the direction of the eye movement. The length of the line indicates the speed of the eye movement.
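由这种"线"的朝向与长度恢复眼动方向和速度的计算可以示意如下(曝光时间与坐标均为假设值)。Recovering the direction and speed of an eye movement from the orientation and length of such a streak can be sketched as follows (the exposure time and coordinates are assumed values):

```python
import math

def streak_to_motion(start, end, exposure_s):
    """Interpret a glint streak on the image sensor: the streak's
    direction gives the direction of the eye movement, and its length
    divided by the exposure time gives the speed (pixels per second)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle_deg = math.degrees(math.atan2(dy, dx))
    speed = math.hypot(dx, dy) / exposure_s
    return angle_deg, speed

# A 30-pixel horizontal streak captured during a 10 ms pulsed exposure:
angle, speed = streak_to_motion((100, 50), (130, 50), exposure_s=0.010)
```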

对于眼睛追踪,光可以被调制(例如,在时间上、亮度)以增加信噪比。附加地或备选地,光可以是特定的波长(例如,近红外(near-IR)),允许其从背景光或甚至形成最终用户正在观看的图像的光中被区分。该光可以被调制以减少增强现实系统提供给眼睛的能量(例如,热)的总量。闪烁的光可以经由相同的或另一光纤被返回到传感器。传感器可以例如采取二维图像传感器的形式,例如CCD传感器或CMOS传感器。For eye tracking, the light can be modulated (e.g., temporally, in brightness) to increase the signal-to-noise ratio. Additionally or alternatively, the light can be of a specific wavelength (e.g., near-infrared (near-IR)) that allows it to be distinguished from background light or even light forming the image that the end user is viewing. The light can be modulated to reduce the total amount of energy (e.g., heat) provided to the eye by the augmented reality system. The flickering light can be returned to the sensor via the same or another optical fiber. The sensor can, for example, take the form of a two-dimensional image sensor, such as a CCD sensor or a CMOS sensor.

这样,增强现实系统可以检测和追踪相应的眼睛运动,提供最终用户的注意或聚焦的点或位置的指示。增强现实系统可以在逻辑上将虚拟对象或虚拟事件(例如,虚拟对象的出现或运动)与最终用户注意或聚焦的被识别的点或位置相关联。例如,增强现实系统可指定虚拟对象出现在最终用户的注意或聚焦的点或位置或在其附近,作为对最终用户有吸引力的虚拟对象。In this way, the augmented reality system can detect and track corresponding eye movements, providing an indication of the point or location of the end user's attention or focus. The augmented reality system can logically associate a virtual object or virtual event (e.g., the appearance or movement of a virtual object) with the identified point or location of the end user's attention or focus. For example, the augmented reality system can designate that a virtual object appears at or near the point or location of the end user's attention or focus as a virtual object that is attractive to the end user.

在1904,增强现实系统调整(如,增加、减少)至少一个后续图像的至少一个部分中的分辨率。增强现实系统可以使用本文中不同技术中的任意技术以及其他技术来调整后续图像的一部分相对于同一后续图像的其它部分的分辨率。At 1904, the augmented reality system adjusts (e.g., increases, decreases) the resolution of at least one portion of at least one subsequent image. The augmented reality system can use any of the various techniques described herein, as well as other techniques, to adjust the resolution of a portion of the subsequent image relative to other portions of the same subsequent image.

图20根据一个所示实施例示出了在增强现实系统中操作的方法2000。可以在执行图19的方法1900的动作1904时采用方法2000。FIG20 illustrates a method 2000 of operating in an augmented reality system, according to one illustrated embodiment. The method 2000 may be employed when performing act 1904 of the method 1900 of FIG19.

在2002,增强现实系统(例如,控制器子系统和/或其处理器)增加至少一个后续图像的一部分中的分辨率,该部分相对于至少一个后续图像其他部分至少接近所评估的注意点。正如先前所解释的,可以通过控制驱动信号的幅度或振幅(例如,电流、电压)来调整螺旋扫描图样的分辨率。可以通过调整驱动信号的斜率来调整分辨率。因此,可以通过增加驱动信号的振幅来增加分辨率,同时相位不变。At 2002, an augmented reality system (e.g., a controller subsystem and/or a processor thereof) increases the resolution of a portion of at least one subsequent image that is at least proximal to the assessed attention point relative to other portions of the at least one subsequent image. As previously explained, the resolution of the spiral scan pattern can be adjusted by controlling the amplitude or magnitude (e.g., current, voltage) of a drive signal. The resolution can be adjusted by adjusting the slope of the drive signal. Thus, the resolution can be increased by increasing the amplitude of the drive signal while maintaining the phase.

在2004,增强现实系统减小至少一个后续图像的一部分中的分辨率,该部分相对于至少一个后续图像的其他部分在所评估的注意点的远侧。可以通过减小驱动信号的振幅来减小分辨率,同时相位保持不变。At 2004, the augmented reality system reduces a resolution in a portion of at least one subsequent image that is distal to the assessed attention point relative to other portions of the at least one subsequent image. The resolution can be reduced by reducing an amplitude of a drive signal while maintaining a constant phase.

在一些实现中仅增加分辨率,在某些部分增加分辨率而在其它部分分辨率既不增加也不减小。在另一些实现中仅减小分辨率,在某些部分减小分辨率,而其它部分分辨率既不增加也不减小。在又一些实现中,在一些部分增加分辨率而在另一些部分减小分辨率。In some implementations, only the resolution is increased, increasing the resolution in some parts while neither increasing nor decreasing the resolution in other parts. In other implementations, only the resolution is decreased, decreasing the resolution in some parts while neither increasing nor decreasing the resolution in other parts. In still other implementations, the resolution is increased in some parts while decreasing the resolution in other parts.
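按与所评估注意点的距离来增减各部分分辨率的规则可以概略示意如下(半径阈值与倍率均为假设)。The rule of raising or lowering a portion's resolution according to its distance from the assessed attention point can be sketched as follows (the radius thresholds and scale factors are assumptions):

```python
import math

def region_resolution(region_center, attention_point, base_res,
                      near_radius=0.2, boost=2.0, cut=0.5):
    """Scale a region's resolution by its distance from the attention
    point: boost it near the point, cut it far away, and leave it
    unchanged in between."""
    d = math.dist(region_center, attention_point)
    if d <= near_radius:
        return base_res * boost  # increase near the attention point
    if d >= 3 * near_radius:
        return base_res * cut    # decrease far from it
    return base_res              # neither increase nor decrease

attention = (0.5, 0.5)
near = region_resolution((0.55, 0.5), attention, base_res=100)
mid = region_resolution((0.8, 0.5), attention, base_res=100)
far = region_resolution((0.5, 1.4), attention, base_res=100)
```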

图21根据一个所示实施例示出了在增强现实系统中操作的方法2100。可以结合图17的方法1700采用方法2100。FIG21 illustrates a method 2100 for operating in an augmented reality system, according to one illustrated embodiment. The method 2100 may be employed in conjunction with the method 1700 of FIG17 .

在2102,增强现实系统(例如,控制器子系统和/或其处理器)处理眼睛追踪数据。眼睛追踪数据指示最终用户的至少一只眼睛的至少一个朝向。眼睛追踪数据经由至少一个传感器被提供。例如,眼睛追踪数据可以经由头戴传感器被提供。在一些实现中,经由光纤使用位于或接近该光纤远端的传感器收集眼睛追踪数据。例如,光纤可收集从最终用户的眼睛的一部分反射的光,所述光可能是闪烁的光。该光纤可以是被用来创建用于向最终用户显示或投影图像的那一根光纤。At 2102, an augmented reality system (e.g., a controller subsystem and/or its processor) processes eye tracking data. The eye tracking data indicates at least one orientation of at least one eye of an end user. The eye tracking data is provided via at least one sensor. For example, the eye tracking data may be provided via a head-mounted sensor. In some implementations, the eye tracking data is collected via an optical fiber using a sensor located at or near a distal end of the optical fiber. For example, the optical fiber may collect light reflected from a portion of the end user's eye, which may be scintillating light. The optical fiber may be the same optical fiber used to create an image for displaying or projecting an image to the end user.

在2104,增强现实系统处理头部追踪数据。头部追踪数据至少指示最终用户的头部朝向。头部追踪数据经由至少一个传感器来提供。At 2104, the augmented reality system processes head tracking data. The head tracking data indicates at least an orientation of the end user's head. The head tracking data is provided via at least one sensor.

例如,头部追踪数据可以经由一个或多个头戴或头部安装的传感器来提供,例如惯性传感器(例如,陀螺仪传感器、加速度计)。头部运动追踪可以使用一个或多个头戴或头部安装光源和至少一个传感器来实现。头部追踪可采用一个、两个、三个或甚至更多的光斑或光点。光斑或光点越多,可分辨的信息就越多。光源(例如,激光二极管)可以是脉冲的或调制的,例如与摄像机或图像传感器(例如,前向摄像机)的帧速率同步。激光光源可能以低于摄像机或图像传感器的帧速率的频率被调制。在这种情况下,斑或点可能出现为随头部运动的线。作为跨越传感器的线迹的线的方向可以指示头部运动的方向。线的朝向(例如,垂直、水平、对角)指示头部运动的朝向。线的长度指示头部运动的速度。反射光还可提供有关周围环境中的对象的信息,诸如距离和/或几何形状(例如,平面的、弯曲的)和/或朝向(例如,成角度的或垂直的)。例如,一个激光束可能会产生关于方向和速率的信息(如,划线(dash)或线的长度)。第二激光束可以添加关于深度或距离的信息(例如,Z轴)。第三激光束可以添加关于周围环境中的表面的几何形状和/或朝向的信息。在头部运动期间或在头部运动的一部分的期间,激光器或其它光源可以是脉冲的。For example, the head tracking data may be provided via one or more head-worn or head-mounted sensors, such as inertial sensors (e.g., gyroscope sensors, accelerometers). Head motion tracking can also be achieved using one or more head-worn or head-mounted light sources and at least one sensor. Head tracking can employ one, two, three, or even more light spots or points. The more light spots or points there are, the more information that can be discerned. The light source (e.g., a laser diode) can be pulsed or modulated, for example synchronized with the frame rate of a camera or image sensor (e.g., a forward-facing camera). The laser light source may be modulated at a frequency lower than the frame rate of the camera or image sensor. In this case, the spots or points may appear as lines that follow the head movement. The direction of the line, as a trace across the sensor, can indicate the direction of the head movement. The orientation of the line (e.g., vertical, horizontal, diagonal) indicates the orientation of the head movement. The length of the line indicates the speed of the head movement. The reflected light can also provide information about objects in the surrounding environment, such as distance and/or geometry (e.g., planar, curved) and/or orientation (e.g., angled or perpendicular). For example, one laser beam may generate information about direction and speed (e.g., the length of a dash or line). A second laser beam may add information about depth or distance (e.g., the Z axis). A third laser beam may add information about the geometry and/or orientation of surfaces in the surrounding environment. The laser or other light source may be pulsed during head movement or during a portion of head movement.

附加地或备选地,头部追踪数据可经由非头戴的传感器来提供。例如,摄像机或成像系统可对包括最终用户的头部在内的最终用户成像,追踪其运动。这可以例如相对于某些外部参考帧来追踪运动,例如由追踪系统定义的参考帧或追踪系统位于其中的房间。Additionally or alternatively, head tracking data may be provided via non-head-mounted sensors. For example, a camera or imaging system may image the end user, including the end user's head, and track their movement. This may, for example, track movement relative to some external reference frame, such as a reference frame defined by the tracking system or the room in which the tracking system is located.

在2106,增强现实系统确定虚拟对象相对于最终用户参考帧在最终用户的视场中出现的位置。该出现可以是当被新引入最终用户的视场中时新的虚拟对象的出现。该出现可以是虚拟对象相对于该虚拟对象在至少一个先前图像中的方位在新方位的出现。增强现实系统可以采用在本文其它地方描述的多种技术中的任何技术来确定虚拟对象出现的位置。At 2106, the augmented reality system determines where the virtual object appears in the end user's field of view relative to the end user's reference frame. The appearance can be the appearance of a new virtual object when newly introduced into the end user's field of view. The appearance can be the appearance of the virtual object in a new position relative to the position of the virtual object in at least one previous image. The augmented reality system can use any of the various techniques described elsewhere herein to determine where the virtual object appears.

系统还可以使用消隐以提高最终用户感知体验。The system may also use blanking to improve the end-user perceived experience.

图22根据一个所示实施例示出了在增强现实系统中操作的方法2200。方法2200可有效采用消隐以提高最终用户感知体验。FIG22 illustrates a method 2200 operating in an augmented reality system, according to one illustrated embodiment. The method 2200 can effectively employ blanking to enhance the end-user perceived experience.

在2202,增强现实系统(例如,控制器子系统和/或其处理器)向最终用户显示至少一个虚拟对象。增强现实系统可渲染帧到帧缓冲器,读取帧以驱动一个或多个光源和/或轭或其他系统以产生光的至少双轴向运动或迹线。At 2202, an augmented reality system (e.g., a controller subsystem and/or its processor) displays at least one virtual object to an end user. The augmented reality system may render frames to a frame buffer and read the frames to drive one or more light sources and/or yokes or other systems to generate at least biaxial motion or traces of light.

在2204,增强现实系统检测和/或预测最终用户的头部运动的发生。增强现实系统可以采用在本文其它地方描述的多种技术中的任意技术来检测和/或预测头部运动的发生。非限制性地,这些技术包括直接感知头部运动,例如经由惯性传感器或感应器、或经由头戴成像器或成像最终用户在其中被呈现或可见的区域的环境成像器。这些技术还包括间接地预测头部运动,例如通过确定新的虚拟对象将出现在何处、现有的虚拟对象将移动到何处,或特别有吸引力的虚拟对象将被置于图像中的方位。At 2204, the augmented reality system detects and/or predicts the occurrence of head movement of the end user. The augmented reality system can use any of the various techniques described elsewhere herein to detect and/or predict the occurrence of head movement. Without limitation, these techniques include directly sensing head movement, such as via inertial sensors or inductors, or via head-mounted imagers or environmental imagers that image the area in which the end user is presented or visible. These techniques also include indirectly predicting head movement, such as by determining where new virtual objects will appear, where existing virtual objects will move, or the position at which particularly attractive virtual objects will be placed in the image.

在2206,增强现实系统评估已检测的和/或所预测的头部运动是否超过或被预测超过标称头部运动值。增强现实系统可以采用本文其它地方描述的多种技术中的任何技术来评估已检测的和/或所预测的头部运动是否超过或被预测超过标称头部运动值。这样的评估可以包括简单地将已检测或所预测的速度与标称速度相比较。这样的评估可以包括简单地将已检测或所预测的加速度与标称加速度相比较。这样的评估可包括简单地将已检测或所预测的范围与标称范围相比较。这样的评估可以包括更复杂的比较,包括对运动期间内多次的速度、加速度或范围求平均或积分。这样的评估甚至可以采用历史属性或其他信息。At 2206, the augmented reality system evaluates whether the detected and/or predicted head movement exceeds or is predicted to exceed a nominal head movement value. The augmented reality system can employ any of the various techniques described elsewhere herein to make this evaluation. Such an evaluation can include simply comparing the detected or predicted speed with a nominal speed. Such an evaluation can include simply comparing the detected or predicted acceleration with a nominal acceleration. Such an evaluation can include simply comparing the detected or predicted range with a nominal range. Such an evaluation can include more complex comparisons, including averaging or integrating speed, acceleration, or range over multiple samples during the motion. Such an evaluation can even employ historical attributes or other information.
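将已检测的和/或所预测的头部运动与标称值相比较的评估,包括对运动期间内多次采样的平均,可以示意如下(各标称值均为假设)。Comparing detected and/or predicted head movement against nominal values, including averaging over multiple samples during the motion, can be sketched as follows (the nominal values are assumptions):

```python
def exceeds_nominal(angles_deg, dt, nominal_speed, nominal_accel):
    """Evaluate whether tracked head motion exceeds nominal values.
    `angles_deg` holds successive angular positions sampled every `dt`
    seconds; speed and acceleration magnitudes are averaged over the
    window rather than taken from a single instant."""
    speeds = [abs(b - a) / dt for a, b in zip(angles_deg, angles_deg[1:])]
    accels = [abs(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    avg_speed = sum(speeds) / len(speeds)
    avg_accel = sum(accels) / len(accels) if accels else 0.0
    return avg_speed > nominal_speed or avg_accel > nominal_accel

slow = exceeds_nominal([0, 1, 2, 3], dt=0.1,
                       nominal_speed=50.0, nominal_accel=500.0)
fast = exceeds_nominal([0, 10, 25, 45], dt=0.1,
                       nominal_speed=50.0, nominal_accel=500.0)
```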

在2208,增强现实系统暂时对最终用户消隐至少一个虚拟对象的显示的至少一部分。例如,增强现实系统可以停止从帧缓冲器读取。附加地或备选地,增强现实系统可以关闭照明或光源。这可以包括暂时关闭LCD显示器的背光。At 2208, the augmented reality system temporarily hides at least a portion of the display of at least one virtual object from the end user. For example, the augmented reality system may stop reading from the frame buffer. Additionally or alternatively, the augmented reality system may turn off the lighting or light source. This may include temporarily turning off the backlight of an LCD display.

图23根据一个所示实施例示出了在增强现实系统中操作的方法2300。可以在执行图22的方法2200时采用方法2300。FIG23 illustrates a method 2300 for operating in an augmented reality system according to one illustrated embodiment. The method 2300 may be employed when executing the method 2200 of FIG22 .

在2302,增强现实系统(例如,控制器子系统和/或其处理器)处理头部追踪数据。头部追踪数据指示最终用户头部的至少一个朝向。头部追踪数据可以经由至少一个传感器来提供,其可能或可能不被最终用户所佩戴。增强现实系统可以采用本文其它地方描述的多种技术中的任何技术来处理头部追踪数据。At 2302, an augmented reality system (e.g., a controller subsystem and/or its processor) processes head tracking data. The head tracking data indicates at least one orientation of an end user's head. The head tracking data may be provided via at least one sensor, which may or may not be worn by the end user. The augmented reality system may process the head tracking data using any of a variety of techniques described elsewhere herein.

在2304,对于呈现给最终用户的图像中的至少一些中的每个帧,增强现实系统(例如,控制器子系统和/或其处理器)确定虚拟对象相对于用户参考帧在最终用户的视场中的出现的位置。当虚拟对象被新引进最终用户的视场时确定其出现的位置。相对于至少一个先前图像中的虚拟对象的方位,确定该虚拟对象在图像中新的方位中出现的位置。增强现实系统可以采用本文其它地方描述的多种技术中的任何技术来确定虚拟对象出现的位置。At 2304, for each frame of at least some of the images presented to the end user, the augmented reality system (e.g., a controller subsystem and/or its processor) determines a location where a virtual object appears in the end user's field of view relative to the user's reference frame. The location where the virtual object appears is determined when the virtual object is newly introduced into the end user's field of view. The location where the virtual object appears in the new orientation in the image is determined relative to the orientation of the virtual object in at least one previous image. The augmented reality system can employ any of a variety of techniques described elsewhere herein to determine the location where the virtual object appears.

在2306,增强现实系统评估已确定的虚拟对象的出现是否有足够的吸引力。例如,增强现实系统可以评估虚拟对象的相对视觉吸引力(例如,速度、颜色、尺寸、亮度、闪烁的光、透明度,特殊光学效应)。又例如,增强现实系统可以评估相对兴趣吸引力(例如,新颖程度、新旧程度、先前的注意、由最终用户进行的先前的身份识别、由最终用户先前进行的互动)。At 2306, the augmented reality system evaluates whether the appearance of the determined virtual object is sufficiently attractive. For example, the augmented reality system may evaluate the relative visual attractiveness of the virtual object (e.g., speed, color, size, brightness, flickering light, transparency, special optical effects). For another example, the augmented reality system may evaluate the relative interest appeal (e.g., novelty, recency, previous attention, previous identification by the end user, previous interaction by the end user).

在2308,增强现实系统评估已确定的位置是否需要最终用户相对于最终用户头部的当前位置转动其头部。增强现实系统可以采用最终用户头部的当前方位和/或朝向以及虚拟对象的相对方位和/或朝向。增强现实系统可确定距离,例如最终用户的当前焦点和虚拟对象的方位和/或朝向之间的角距离。增强现实系统可以确定已确定的距离是否在眼睛运动范围内,或者是否最终用户还必须转动他们的头部。如果最终用户必须转动他们的头部,系统可以评估最终用户必须转动头部多远。例如,增强现实系统可以采用指定最终用户的眼睛运动和头部运动之间的关系的信息。该信息可以指示最终用户在转动其头部之前,单独经由眼睛运动转移其注视能达到的程度。值得注意的是,眼睛运动和头部运动之间的关系可以被指定为多种不同的朝向,例如a)从上到下、b)从下到上、c)从左到右、d)从右到左、e)沿对角线地从左下到右上、f)沿对角线地右下到左上、g)沿对角线地从左上到右下、h)沿对角线地从右上到左下。At 2308, the augmented reality system evaluates whether the determined position requires the end user to turn their head relative to the current position of the end user's head. The augmented reality system may employ the current position and/or orientation of the end user's head and the relative position and/or orientation of the virtual object. The augmented reality system may determine a distance, such as an angular distance between the end user's current focus and the position and/or orientation of the virtual object. The augmented reality system may determine whether the determined distance is within the range of eye movement or whether the end user must also turn their head. If the end user must turn their head, the system may evaluate how far the end user must turn their head. For example, the augmented reality system may employ information that specifies the relationship between the end user's eye movement and head movement. This information may indicate the extent to which the end user can shift their gaze via eye movement alone before turning their head. It is worth noting that the relationship between eye movements and head movements can be specified in a variety of different directions, such as a) from top to bottom, b) from bottom to top, c) from left to right, d) from right to left, e) diagonally from bottom left to top right, f) diagonally from bottom right to top left, g) diagonally from top left to bottom right, h) diagonally from top right to bottom left.
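基于各方向的眼动范围判断已确定的位置是否需要转头、以及需要转多远,可以示意如下(各方向的眼动范围数值为假设,并非本说明给出)。Assessing whether a determined position requires a head turn, and by how far, based on per-direction eye-movement ranges can be sketched as follows (the per-direction eye-range values are assumptions, not given in this description):

```python
# Assumed comfortable eye-only gaze-shift limits, in degrees:
EYE_RANGE_DEG = {"left": 30.0, "right": 30.0, "up": 20.0, "down": 25.0}

def head_turn_required(focus_deg, target_deg):
    """Given the current focus and a target (horizontal, vertical
    angular offsets from straight ahead), report whether the target
    lies beyond the eye-only range and by how many degrees."""
    dh = target_deg[0] - focus_deg[0]
    dv = target_deg[1] - focus_deg[1]
    limit_h = EYE_RANGE_DEG["right"] if dh >= 0 else EYE_RANGE_DEG["left"]
    limit_v = EYE_RANGE_DEG["up"] if dv >= 0 else EYE_RANGE_DEG["down"]
    excess = (max(0.0, abs(dh) - limit_h), max(0.0, abs(dv) - limit_v))
    return excess != (0.0, 0.0), excess

# A virtual object 45 degrees to the right and 10 degrees up:
needed, excess = head_turn_required((0.0, 0.0), (45.0, 10.0))
```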

在2310,增强现实系统基于所述评估来预测头部运动的发生。增强现实系统可以使用一个或多个因素形成评估以预测头部运动是否会发生、头部运动的方向和/或朝向、和/或头部运动的速度或加速度。增强现实系统可以采用或者最终用户特定的或者是对一组最终用户更通用的历史数据。增强现实系统可以实现一个或多个机器学习算法以增加头部运动预测的准确性。At 2310, the augmented reality system predicts the occurrence of head movement based on the evaluation. The augmented reality system can use one or more factors to form an evaluation to predict whether head movement will occur, the direction and/or orientation of the head movement, and/or the speed or acceleration of the head movement. The augmented reality system can use historical data that is either specific to the end user or more general to a group of end users. The augmented reality system can implement one or more machine learning algorithms to increase the accuracy of the head movement prediction.

图24根据一个所示实施例示出了在增强现实系统中操作的方法2400。可以在执行图22的方法2200的操作2208时采用方法2400。24 illustrates a method 2400 of operating in an augmented reality system, according to one illustrated embodiment. The method 2400 may be employed when performing operation 2208 of the method 2200 of FIG.

在2402,增强现实系统(例如,控制器子系统和/或其处理器)闪动或闪烁显示器或显示器的背光。该闪动或闪烁发生在已检测的头部运动或所预测的头部运动的全部或一部分期间。这可有利地有效减少帧或虚拟对象的呈现中不一致的感知。这也可有效地提高所感知的帧速率。At 2402, the augmented reality system (e.g., a controller subsystem and/or its processor) flashes or blinks the display or the backlight of the display. The flashing or blinking occurs over all or a portion of the detected head movement or predicted head movement. This can advantageously reduce the perception of inconsistencies in the presentation of frames or virtual objects. It can also effectively increase the perceived frame rate.

图25根据一个所示实施例示出了在增强现实系统中操作的方法2500。FIG. 25 illustrates a method 2500 of operating in an augmented reality system, according to one illustrated embodiment.

在2502,增强现实系统(例如,控制器子系统和/或其处理器)检测和/或预测最终用户头部运动的发生。例如,增强现实系统可处理指示最终用户头部运动的至少一个朝向的头部追踪数据。附加地或备选地,增强现实系统可以确定虚拟对象相对于最终用户参考帧在最终用户的视场中出现的位置,评估已确定的位置是否需要最终用户转动最终用户的头部,并基于该评估预测头部运动的发生。增强现实系统可以采用本文其它地方描述的多种技术中的任何技术来检测和/或预测头部运动的发生。At 2502, an augmented reality system (e.g., a controller subsystem and/or its processor) detects and/or predicts the occurrence of end-user head movement. For example, the augmented reality system may process head tracking data indicating at least one orientation of the end-user head movement. Additionally or alternatively, the augmented reality system may determine a position where a virtual object appears in the end-user's field of view relative to the end-user's reference frame, evaluate whether the determined position requires the end-user to turn the end-user's head, and predict the occurrence of head movement based on the evaluation. The augmented reality system may employ any of a variety of techniques described elsewhere herein to detect and/or predict the occurrence of head movement.

在2504,增强现实系统确定已检测的和/或所预测的头部运动是否超过标称头部运动值。增强现实系统可以采用本文其它地方描述的多种技术中的任何技术来确定已检测的和/或所预测的头部运动是否超过标称头部运动值。At 2504, the augmented reality system determines whether the detected and/or predicted head movement exceeds the nominal head movement value. The augmented reality system can employ any of a variety of techniques described elsewhere herein to determine whether the detected and/or predicted head movement exceeds the nominal head movement value.

在2506,响应于确定已检测的和/或所预测的头部运动超过标称头部运动值,增强现实系统选择性地激活执行机构以在至少一个自由度移动投影机。移动投影机可包括沿着至少一个轴平移第一光纤。移动投影机可包括关于至少一个轴旋转第一光纤。At 2506, in response to determining that the detected and/or predicted head motion exceeds the nominal head motion value, the augmented reality system selectively activates an actuator to move the projector in at least one degree of freedom. Moving the projector may include translating the first optical fiber along at least one axis. Moving the projector may include rotating the first optical fiber about at least one axis.

图26根据一个所示实施例示出了在增强现实系统中操作的方法2600。FIG. 26 illustrates a method 2600 of operating in an augmented reality system, according to one illustrated embodiment.

增强现实系统可以过渲染帧,产生比给定显示技术的最大面积和最大分辨率所需的更大的帧。例如,在头戴或头部安装的增强现实系统中,可用于显示或投影的面积可由该设备的多种参数设定。同样,尽管增强现实系统可能能够以多个不同的分辨率进行操作,该设备将设置上限或最大分辨率。过渲染的帧包括用于超过以最大分辨率显示的最大区域的一组像素的像素信息。这可以有利地允许增强现实系统仅读出帧的一部分(例如,如果没有被中断,帧的每一个场的一部分)。这可允许增强现实系统移动呈现给用户的图像。The augmented reality system can over-render frames, producing frames larger than required for the maximum area and maximum resolution of a given display technology. For example, in a head-worn or head-mounted augmented reality system, the area available for display or projection may be set by various parameters of the device. Likewise, although the augmented reality system may be capable of operating at a number of different resolutions, the device will set an upper limit or maximum resolution. An over-rendered frame includes pixel information for a set of pixels that exceeds the maximum area displayed at the maximum resolution. This can advantageously allow the augmented reality system to read out only a portion of the frame (e.g., a portion of each field of the frame, if not interrupted). This can allow the augmented reality system to shift the image presented to the user.

在2602，增强现实系统(例如，控制器子系统和/或其处理器)过渲染用于已定义的视场的多个帧中的每一个帧。这需要生成比以最大分辨率显示的最大面积所需要的更多的像素信息。例如，帧的面积可以按最大面积的一定百分比增加，例如增加由帧定义的水平、垂直或者对角线方向中的像素信息。帧的尺寸越大，增强现实系统移动呈现给用户的图像的边界的自由度就越高。At 2602, the augmented reality system (e.g., a controller subsystem and/or its processor) over-renders each of a plurality of frames for a defined field of view. This entails generating more pixel information than is required for the maximum area displayed at the maximum resolution. For example, the area of a frame may be increased by a percentage of the maximum area, such as by increasing the pixel information in the horizontal, vertical, or diagonal directions defined by the frame. The larger the frame size, the more freedom the augmented reality system has to shift the boundaries of the image presented to the user.

在2604，增强现实系统在至少一个帧缓冲器中依次缓冲过渲染的帧。增强现实系统可以采用大于以最大分辨率在最大显示尺寸下显示帧所需尺寸的帧缓冲器。一些实现使用多个帧缓冲器。如本文其它地方所述，这可有助于中断帧的呈现。At 2604, the augmented reality system sequentially buffers the over-rendered frames in at least one frame buffer. The augmented reality system may employ a frame buffer that is larger than the frame size required for the maximum display size at the maximum resolution. Some implementations use multiple frame buffers. As described elsewhere herein, this may facilitate interrupting the presentation of frames.

在2606，增强现实系统确定各个图像的一部分来呈现。增强现实系统可基于多个不同的因素确定该部分。例如，所述因素可以指示图像或场景中最终用户正在注意、聚焦或已经以其他方式吸引了最终用户的注意的位置。再一次地，多种技术可以被采用，包括但不限于眼睛追踪。又例如，所述因素可以指示图像或场景中最终用户被预测正在注意、聚焦或将以其他方式吸引最终用户的注意的位置。再一次地，多种技术可以被采用，包括但不限于：识别新出现的虚拟对象、快速或迅速移动的虚拟对象、视觉上具有吸引力的虚拟对象、先前已经指定的虚拟对象(例如，由最终用户或由先前追踪的最终用户的交互来指定)和/或基于虚拟对象的固有性质吸引注意的虚拟对象。基于虚拟对象的固有性质吸引注意的虚拟对象可以例如包括对广义的最终用户或者特定的最终用户，在视觉上代表关注或担心的对象或项目的虚拟对象(例如，即将来临的威胁)。At 2606, the augmented reality system determines a portion of each image to present. The augmented reality system may determine the portion based on a number of different factors. For example, the factor may indicate a location in the image or scene that the end user is paying attention to, focusing on, or has otherwise attracted the end user's attention. Again, a variety of techniques may be employed, including but not limited to eye tracking. As another example, the factor may indicate a location in the image or scene that the end user is predicted to be paying attention to, focusing on, or that will otherwise attract the end user's attention. Again, a variety of techniques may be employed, including but not limited to: identifying newly appearing virtual objects, fast or rapidly moving virtual objects, visually attractive virtual objects, previously designated virtual objects (e.g., designated by the end user or by the interaction of a previously tracked end user), and/or virtual objects that attract attention based on the inherent properties of the virtual objects. Virtual objects that attract attention based on their inherent properties may, for example, include virtual objects that visually represent objects or items of concern or worry (e.g., an impending threat) to end users generally or to a specific end user.

在2608，增强现实系统选择性地从帧缓冲器中读出过渲染的帧的一部分。该部分至少部分地基于各个图像的已确定的要呈现的部分。例如，被读出的部分可以具有被移动以与所识别的位置接近、甚至匹配或共对齐的中心。所识别的位置可以例如是先前图像或帧中已经吸引最终用户注意的位置。所识别的位置可以例如是增强现实系统已经预测将吸引最终用户注意的后续帧中的位置。At 2608, the augmented reality system selectively reads out a portion of the over-rendered frame from the frame buffer. The portion is based at least in part on the determined portion of the respective image to present. For example, the read-out portion may have a center that is moved to be proximate to, or even match or co-align with, the identified location. The identified location may, for example, be a location in a previous image or frame that has already attracted the end user's attention. The identified location may, for example, be a location in a subsequent frame that the augmented reality system has predicted will attract the end user's attention.
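The selective readout at 2608 can be sketched as choosing a display-sized window inside the over-rendered frame, centered near the identified attention location and clamped to the frame boundary. All sizes and names below are assumptions for illustration.

```python
# A minimal sketch of 2608: select a readout window from an over-rendered
# frame whose center is moved toward an identified attention location,
# clamped so the window stays inside the over-rendered frame.

def readout_window(frame_w, frame_h, win_w, win_h, cx, cy):
    """Return (x0, y0) of a win_w x win_h window centered near (cx, cy)."""
    x0 = min(max(cx - win_w // 2, 0), frame_w - win_w)
    y0 = min(max(cy - win_h // 2, 0), frame_h - win_h)
    return x0, y0

# Attention near the right edge: the window center cannot fully reach it,
# so the window is clamped flush against the frame boundary.
print(readout_window(1408, 792, 1280, 720, 1400, 400))  # (128, 40)
```

Clamping is what prevents the read-out boundary from ever leaving the over-rendered pixel data.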

图27根据一个所示实施例示出了在增强现实系统中操作的方法2700。可以在执行图26的方法2600时采用方法2700。例如,方法2700可以被用来预测后续帧或图像中吸引最终用户注意的位置。Figure 27 illustrates a method 2700 operating in an augmented reality system according to one illustrated embodiment. The method 2700 may be employed when performing the method 2600 of Figure 26. For example, the method 2700 may be used to predict locations in subsequent frames or images that will attract the end user's attention.

在2702，对于至少一些帧中的每一个帧，增强现实系统(例如，控制器子系统和/或其处理器)确定虚拟对象相对于最终用户参考帧在最终用户的视场内出现的位置。At 2702, for each of at least some of the frames, the augmented reality system (e.g., a controller subsystem and/or its processor) determines where a virtual object appears within the end user's field of view relative to the end user's frame of reference.

在2704，增强现实系统至少部分地基于确定视场中的虚拟对象出现的位置来选择性地读出帧缓冲器。例如，被读出的部分可以具有被移动以与所识别的位置接近，或者甚至匹配或者共对齐的中心。备选地，被读出的部分的边界可以被移动以在两个或甚至三个维度涵盖已确定的位置紧挨着的周围区域。例如，增强现实系统可以选择将被读出帧缓冲器的整个过渲染帧的一部分(例如，80%)以用于呈现给最终用户。增强现实系统可以选择该部分，使得边界相对于最终用户注意的当前位置被移动，例如在当前被呈现给最终用户的图像中。增强现实系统可以基于当前位置和所预测的位置的组合选择边界，同时设定边界使得这两个位置将在后续呈现的图像中被呈现给最终用户。At 2704, the augmented reality system selectively reads out a frame buffer based at least in part on determining the location at which the virtual object in the field of view appears. For example, the portion that is read out can have a center that is moved to be close to, or even match or co-align with, the identified location. Alternatively, the boundaries of the portion that is read out can be moved to encompass the area immediately surrounding the determined location in two or even three dimensions. For example, the augmented reality system can select a portion (e.g., 80%) of the entire over-rendered frame to be read out of the frame buffer for presentation to the end user. The augmented reality system can select the portion so that the boundary is moved relative to the current location of the end user's attention, such as in an image currently being presented to the end user. The augmented reality system can select the boundary based on a combination of the current location and the predicted location, while setting the boundary so that both locations will be presented to the end user in a subsequently presented image.
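The last variant described at 2704 — a boundary chosen so that both the current and the predicted attention locations land inside the presented image — can be sketched as centering the window on the midpoint of the two points. This is a hypothetical illustration that assumes the window is large enough to cover both points; it omits the clamping a real system would apply.

```python
# Hypothetical sketch of 2704's boundary choice: pick a readout window so
# that both the current attention location and the predicted location fall
# inside it, by centering the window on their midpoint.

def window_covering(p_current, p_predicted, win_w, win_h):
    cx = (p_current[0] + p_predicted[0]) // 2
    cy = (p_current[1] + p_predicted[1]) // 2
    x0, y0 = cx - win_w // 2, cy - win_h // 2
    return x0, y0, x0 + win_w, y0 + win_h  # boundary as (x0, y0, x1, y1)

def contains(box, p):
    x0, y0, x1, y1 = box
    return x0 <= p[0] < x1 and y0 <= p[1] < y1

box = window_covering((300, 200), (900, 500), 1280, 720)
print(contains(box, (300, 200)), contains(box, (900, 500)))  # True True
```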

图28根据一个所示实施例示出了在增强现实系统中操作的方法2800。可以在执行图26的方法2600时采用方法2800。例如,方法2800可以被用来确定图像中已经吸引或者被预测吸引最终用户的注意的位置。Figure 28 illustrates a method 2800 operating in an augmented reality system according to one illustrated embodiment. The method 2800 may be employed when performing the method 2600 of Figure 26. For example, the method 2800 may be used to determine a location in an image that has attracted or is predicted to attract the end user's attention.

在2802，增强现实系统(例如，控制器子系统和/或其处理器)确定当新的虚拟对象被新引入最终用户的视场时该新的虚拟对象出现的位置。增强现实系统可以采用本文描述的多种技术中的任何技术来识别虚拟对象的引入，其相对于呈现给最终用户的直接先前帧或图像是新的。这样，即使虚拟对象先前已在呈现的一些其它部分被呈现给最终用户，如果已经呈现了足够数量的中间图像，虚拟对象可以被识别为新引入的以使虚拟对象的再引入吸引最终用户的注意。At 2802, the augmented reality system (e.g., a controller subsystem and/or its processor) determines the location where a new virtual object appears when the new virtual object is newly introduced into the field of view of the end user. The augmented reality system can employ any of the various techniques described herein to identify the introduction of a virtual object that is new relative to the immediately preceding frame or image presented to the end user. In this way, even if the virtual object has previously been presented to the end user in some other portion of the presentation, if a sufficient number of intermediate images have been presented, the virtual object can be identified as newly introduced so that the reintroduction of the virtual object attracts the attention of the end user.
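The "newly introduced" test at 2802 can be sketched with a per-object record of when each virtual object was last shown. The frame-gap threshold and names are assumptions for illustration, not values from the patent.

```python
# Illustrative sketch of 2802: an object counts as "newly introduced" if it
# was absent from the immediately preceding frames, or if enough
# intermediate frames have elapsed since it was last shown that its
# reintroduction will again attract attention. Threshold is an assumption.

REINTRO_FRAME_GAP = 30  # assumed number of intermediate frames

def newly_introduced(obj_id, frame_idx, last_seen):
    """last_seen maps object id -> frame index at which it last appeared."""
    prev = last_seen.get(obj_id)
    return prev is None or frame_idx - prev > REINTRO_FRAME_GAP

last_seen = {"robot": 100}
print(newly_introduced("monster", 105, last_seen))  # True: never shown
print(newly_introduced("robot", 105, last_seen))    # False: just shown
print(newly_introduced("robot", 140, last_seen))    # True: long gap
```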

在2804，增强现实系统确定虚拟对象相对于至少一个先前帧中的方位在帧的新方位中出现的位置。增强现实系统可以采用本文描述的多种技术中的任何技术来识别虚拟对象到多个图像中新的或不同的方位的移动，所述移动是相对于呈现给最终用户的直接先前帧或图像的移动。这样，即使先前已在呈现的一些其它部分的一些位置向最终用户呈现了虚拟对象，如果足够数量的中间图像已经被呈现，该虚拟对象可以被识别为被移动或正在移动以使虚拟对象在先前位置的再出现吸引最终用户的注意。At 2804, the augmented reality system determines the position of the virtual object at the new orientation of the frame relative to the orientation in at least one previous frame. The augmented reality system can employ any of the various techniques described herein to identify movement of the virtual object to a new or different orientation in the plurality of images relative to the immediately previous frame or image presented to the end user. In this way, even if the virtual object was previously presented to the end user at some location in some other portion of the presentation, if a sufficient number of intermediate images have been presented, the virtual object can be identified as having been moved or moving so that the reappearance of the virtual object attracts the end user's attention.
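The movement test at 2804 can be sketched as a comparison of an object's tracked positions between consecutive frames. The tolerance and representation are assumptions for illustration.

```python
# Sketch of 2804 (assumed representation): detect that a virtual object has
# moved to a new position relative to the immediately preceding frame by
# comparing its tracked positions, with a small tolerance for jitter.

MOVE_TOLERANCE = 2  # pixels; assumed

def has_moved(prev_pos, curr_pos, tol=MOVE_TOLERANCE):
    return abs(curr_pos[0] - prev_pos[0]) > tol or \
           abs(curr_pos[1] - prev_pos[1]) > tol

print(has_moved((50, 50), (50, 51)))  # False: within jitter tolerance
print(has_moved((50, 50), (80, 50)))  # True: moved to a new position
```

A small tolerance keeps sub-pixel tracking noise from being flagged as attention-attracting movement.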

在2806,增强现实系统确定在最终用户的视场中至少具有已定义的最小速度的虚拟对象的位置。增强现实系统可以采用本文描述的多种技术中的任何技术来确定虚拟对象从图像到图像的运动速度并将该速度与定义的或标称的速度相比较。该确定的速度可以相对于图像中的固定参考帧或相对于出现在图像中的其它虚拟对象和/或物理对象。At 2806, the augmented reality system determines the position of a virtual object that has at least a defined minimum velocity in the end user's field of view. The augmented reality system can employ any of the various techniques described herein to determine the velocity of the virtual object from image to image and compare the velocity to a defined or nominal velocity. The determined velocity can be relative to a fixed reference frame in the image or relative to other virtual objects and/or physical objects present in the image.
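The minimum-velocity test at 2806 can be sketched as an image-to-image speed estimate compared against a defined minimum. Positions, frame timing, and the threshold below are assumptions for illustration.

```python
# A sketch of 2806, assuming object positions are tracked per frame: the
# image-to-image speed of a virtual object is estimated from consecutive
# positions and compared against a defined minimum (nominal) speed.

import math

def exceeds_min_speed(pos_prev, pos_curr, frame_dt, min_speed):
    """Speed in pixels/second between two frames vs. a defined minimum."""
    dx = pos_curr[0] - pos_prev[0]
    dy = pos_curr[1] - pos_prev[1]
    speed = math.hypot(dx, dy) / frame_dt
    return speed >= min_speed

# 30 pixels moved in 1/60 s -> 1800 px/s, above an assumed 500 px/s minimum.
print(exceeds_min_speed((100, 100), (130, 100), 1 / 60, 500.0))  # True
print(exceeds_min_speed((100, 100), (102, 100), 1 / 60, 500.0))  # False
```

As the text notes, the same comparison could be made in a frame-fixed reference or relative to other objects; only the choice of coordinate frame for the positions changes.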

在2808，增强现实系统至少部分地基于图像中虚拟对象的位置来确定各个图像的一部分以呈现。增强现实系统可以采用本文中描述的多种技术中的任何技术来确定各个图像的所述部分来呈现。该确定可以基于多种不同的因素中的任何因素。所述因素可以例如包括指示最终用户正在注意、聚焦或已经以其他方式吸引了最终用户的注意的图像或场景中的位置的因素或数据。因素可以例如包括最终用户被预测注意、聚焦或将以其他方式吸引最终用户的注意的图像或场景中的位置的因素或数据。增强现实系统可以采用本文描述的多种技术中的任何技术来经由实际检测或经由预测识别已经吸引最终用户的注意的位置。At 2808, the augmented reality system determines a portion of each image to present based at least in part on the position of the virtual object in the image. The augmented reality system can employ any of the various techniques described herein to determine the portion of each image to present. This determination can be based on any of a variety of factors. The factors can, for example, include factors or data indicating a location in the image or scene that the end user is attending to, focusing on, or has otherwise attracted the end user's attention. The factors can, for example, include factors or data indicating a location in the image or scene that the end user is predicted to attend to, focus on, or that will otherwise attract the end user's attention. The augmented reality system can employ any of the various techniques described herein to identify, via actual detection or via prediction, a location that has attracted the end user's attention.

在2810,增强现实系统中读出帧缓冲器的一部分以用于至少一个后续帧。例如,被读出的部分将图像的中心至少向各个图像的已确定将被呈现的部分偏移。增强现实系统可以采用本文描述的多种技术中的任何技术来从帧缓冲器读出帧的一部分,该部分基于最终用户实际或预测的注意中心的位置移动图像的中心或边界。At 2810, the augmented reality system reads out a portion of a frame buffer for at least one subsequent frame. For example, the portion read out shifts the center of the image toward at least the portion of the respective image determined to be presented. The augmented reality system can employ any of the various techniques described herein to read out a portion of a frame from the frame buffer that shifts the center or boundary of the image based on the actual or predicted center of attention of the end user.

图29根据一个所示实施例示出了在增强现实系统中操作的方法2900。可以在执行图26的方法2600时采用方法2900。特别地，方法2900可以被用来基于所预测的最终用户的头部运动确定帧的哪部分将被读出。FIG. 29 illustrates a method 2900 of operating in an augmented reality system, according to one illustrated embodiment. The method 2900 may be employed when performing the method 2600 of FIG. 26. In particular, the method 2900 may be used to determine which portion of a frame to read out based on the predicted head motion of the end user.

在2902,增强现实系统(例如,控制器子系统和/或其处理器)预测最终用户的头部运动的发生。增强现实系统可以采用本文描述的多种技术中的任何技术来预测头部运动。这种技术包括但不限于检测新的虚拟对象、正在移动的虚拟对象、快速移动的虚拟对象、先前选择的虚拟对象和/或视觉吸引力的虚拟对象在图像中的出现。At 2902, the augmented reality system (e.g., a controller subsystem and/or its processor) predicts the occurrence of head motion of the end user. The augmented reality system can employ any of the various techniques described herein to predict head motion. Such techniques include, but are not limited to, detecting the appearance of new virtual objects, moving virtual objects, rapidly moving virtual objects, previously selected virtual objects, and/or visually appealing virtual objects in an image.

在2904,增强现实系统确定各个帧或图像的一部分以至少部分地基于所预测的头部运动来呈现。增强现实系统可以采用本文描述的多种技术中的任何技术来确定将被使用的帧的部分。例如,增强现实系统可以选择所述部分,使得边界涵盖所预测的头部运动的预测结束点的位置。当头部运动预测是建立在虚拟对象的出现上时(例如,新引入的、移动的、有吸引力的外观、先前由最终用户选择的),终点可以与该虚拟对象在后续帧或图像中的位置重合。At 2904, the augmented reality system determines a portion of each frame or image to present based at least in part on the predicted head motion. The augmented reality system can employ any of the various techniques described herein to determine the portion of the frame to be used. For example, the augmented reality system can select the portion so that the boundary encompasses the location of the predicted end point of the predicted head motion. When the head motion prediction is based on the appearance of a virtual object (e.g., newly introduced, moving, attractive in appearance, previously selected by the end user), the end point can coincide with the location of the virtual object in a subsequent frame or image.
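The selection at 2904 — a portion whose boundary encompasses the predicted end point of the head movement — can be sketched as a window centered on that end point and clamped to the over-rendered frame. Frame and window sizes are assumptions for illustration.

```python
# Hypothetical sketch of 2904: shift the readout boundary so that it
# encompasses the predicted end point of the predicted head movement,
# clamping to the over-rendered frame.

def portion_for_predicted_motion(endpoint, frame_w, frame_h, win_w, win_h):
    x0 = min(max(endpoint[0] - win_w // 2, 0), frame_w - win_w)
    y0 = min(max(endpoint[1] - win_h // 2, 0), frame_h - win_h)
    return x0, y0, x0 + win_w, y0 + win_h

# Predicted end point coinciding with, e.g., a newly introduced virtual
# object near the corner of the over-rendered frame.
box = portion_for_predicted_motion((1200, 600), 1408, 792, 1280, 720)
x0, y0, x1, y1 = box
print(x0 <= 1200 < x1 and y0 <= 600 < y1)  # True: endpoint is inside
```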

图30根据一个所示实施例示出了在增强现实系统中操作的方法3000。FIG30 illustrates a method 3000 of operating in an augmented reality system, according to one illustrated embodiment.

增强现实系统可过渲染帧，产生比给定的显示技术的最大面积和最大分辨率所需更大的帧。例如，在头戴或头部安装的增强现实系统中，可用于显示或投影的面积可由该设备的多种参数设定。同样，尽管增强现实系统可能能够以多个不同的分辨率进行操作，该设备将设置上限或最大分辨率。过渲染帧包括用于超过以最大分辨率显示的最大区域的一组像素的像素信息。这可以有利地允许增强现实系统仅读出帧的一部分(例如，如果没有被中断，帧的每一个场的一部分)。这可允许增强现实系统移动呈现给用户的图像。The augmented reality system may over-render frames, producing frames larger than required for the maximum area and maximum resolution of a given display technology. For example, in a head-worn or head-mounted augmented reality system, the area available for display or projection may be set by various parameters of the device. Likewise, although the augmented reality system may be capable of operating at a number of different resolutions, the device will set an upper limit or maximum resolution. The over-rendered frame includes pixel information for a set of pixels that exceeds the maximum area that can be displayed at the maximum resolution. This can advantageously allow the augmented reality system to read out only a portion of the frame (e.g., a portion of each field of the frame, if not interrupted). This can allow the augmented reality system to shift the image presented to the user.

在3002，增强现实系统(例如，控制器子系统和/或其处理器)过渲染用于已定义的视场的多个帧中的每一个帧。这需要生成比以最大分辨率显示的最大面积所需要的更多的像素信息。例如，帧的面积可以按最大面积的一定百分比增加，例如增加由帧定义的水平、垂直或者对角线方向中的像素信息。帧的尺寸越大，增强现实系统移动呈现给用户的图像的边界的自由度就越高。At 3002, an augmented reality system (e.g., a controller subsystem and/or its processor) over-renders each of a plurality of frames for a defined field of view. This entails generating more pixel information than is required for the maximum area displayed at the maximum resolution. For example, the area of a frame may be increased by a percentage of the maximum area, such as by increasing the pixel information in the horizontal, vertical, or diagonal directions defined by the frame. The larger the frame size, the more freedom the augmented reality system has to shift the boundaries of the image presented to the user.

在3004，增强现实系统确定各个图像的一部分以呈现。增强现实系统可基于多种不同的因素来确定所述部分。例如，所述因素可以指示图像或场景中最终用户正在注意、聚焦或已经以其他方式吸引了最终用户的注意的位置。再一次地，多种技术可以被采用，包括但不限于眼睛追踪。又例如，所述因素可以指示图像或场景中最终用户被预测正在注意、聚焦或将以其他方式吸引最终用户的注意的位置。再一次地，多种技术可以被采用，包括但不限于：识别新出现的虚拟对象、快速或迅速移动的虚拟对象、视觉上具有吸引力的虚拟对象、先前已经指定的虚拟对象(例如，由最终用户或由先前追踪的最终用户的交互来指定)和/或基于虚拟对象的固有性质吸引注意的虚拟对象。基于虚拟对象的固有性质吸引注意的虚拟对象可以例如包括对广义的最终用户或者特定的最终用户，在视觉上代表关注或担心的对象或项目的虚拟对象(例如，即将来临的威胁)。At 3004, the augmented reality system determines a portion of each image to present. The augmented reality system may determine the portion based on a variety of different factors. For example, the factor may indicate a location in the image or scene that the end user is paying attention to, focusing on, or has otherwise attracted the end user's attention. Once again, a variety of techniques may be employed, including but not limited to eye tracking. As another example, the factor may indicate a location in the image or scene that the end user is predicted to be paying attention to, focusing on, or that will otherwise attract the end user's attention. Once again, a variety of techniques may be employed, including but not limited to: identifying newly appearing virtual objects, fast or rapidly moving virtual objects, visually attractive virtual objects, previously designated virtual objects (e.g., designated by the end user or by the interaction of a previously tracked end user), and/or virtual objects that attract attention based on the inherent properties of the virtual objects. Virtual objects that attract attention based on their inherent properties may, for example, include virtual objects that visually represent objects or items of concern or worry (e.g., an impending threat) to end users generally or to a specific end user.

在3006，增强现实系统动态地将过渲染帧的一个或多个已确定的部分寻址(address)进缓冲器。所述已确定的部分可以例如具有被移动以与最终用户吸引、感兴趣或聚焦的所识别的位置接近或者甚至匹配或共对齐的中心。所识别的位置可以例如是已经吸引最终用户关注的先前图像或帧中的位置。所识别的位置可以例如是增强现实系统已预测将吸引最终用户注意的后续帧中的位置。一些实现采用多个帧缓冲器。如本文其他地方所述，这可有助于帧的呈现的中断。At 3006, the augmented reality system dynamically addresses one or more determined portions of the over-rendered frame into a buffer. The determined portion may, for example, have a center that is moved to be proximate to, or even match or co-align with, an identified location of the end user's attraction, interest, or focus. The identified location may, for example, be a location in a previous image or frame that has already attracted the end user's attention. The identified location may, for example, be a location in a subsequent frame that the augmented reality system has predicted will attract the end user's attention. Some implementations employ multiple frame buffers. As described elsewhere herein, this can facilitate interruption of the presentation of frames.
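The addressing at 3006 and the readout at 3008 can be sketched as copying a determined sub-region of the over-rendered frame into a separate buffer. The frame representation and names are assumptions for illustration.

```python
# An illustrative sketch (names assumed) of 3006/3008: the determined
# portion of the over-rendered frame is addressed into a buffer, and only
# that portion is then read out. The frame is modeled as a list of rows.

def address_portion(frame, x0, y0, w, h):
    """Copy a w x h sub-region at (x0, y0) into a separate buffer."""
    return [row[x0:x0 + w] for row in frame[y0:y0 + h]]

# A tiny 4x4 over-rendered "frame" with distinct pixel values.
frame = [[10 * y + x for x in range(4)] for y in range(4)]
buffer = address_portion(frame, 1, 1, 2, 2)
print(buffer)  # [[11, 12], [21, 22]]
```

Moving the determined portion amounts to changing only `(x0, y0)` between frames; the over-rendered pixel data itself is untouched.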

在3008,增强现实系统从帧缓冲器读出过渲染帧的已确定的部分。At 3008, the augmented reality system reads the determined portion of the over-rendered frame from the frame buffer.

本文描述了本发明的多种示例性实施例。这些实施例以非限制性的意义被引用。它们被提供以更广泛地示出本发明可应用的方面。在不背离本发明的真实精神和范围的情况下，可对所描述的本发明做出多种改变，并可用等同物进行替代。此外，许多修改可以被作出以适应特定的情况、材料、组合物、过程、处理或步骤来达到本发明的目标、精神或范围。进一步地，如将被本领域的技术人员所理解的，本文描述和示出的每个个体变体均具有分立的组件和特征，其可容易地与任何其它几个实施例的特征分离或结合，而不背离本发明的范围或精神。所有这样的修改旨在落入与本公开相关联的权利要求书的范围内。Various exemplary embodiments of the present invention are described herein. These embodiments are cited in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the present invention. Various changes may be made to the invention described, and equivalents may be substituted, without departing from the true spirit and scope of the present invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act, or step to the objective, spirit, or scope of the present invention. Further, as will be appreciated by those skilled in the art, each of the individual variations described and illustrated herein has discrete components and features which may readily be separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present invention. All such modifications are intended to fall within the scope of the claims associated with this disclosure.

本发明包括可使用主题装置来执行的方法。该方法可包括提供这种适当设备的行为。这样的提供可由最终用户执行。换言之,“提供”行为仅仅需要最终用户获得、接触、处理、放置、设置、激活、上电或其他动作以在本主题方法中提供必要的设备。本文所列举的方法可以以所列举的事件的逻辑上可能的任何顺序和以事件的列举的顺序来实施。The present invention includes methods that can be performed using the subject apparatus. The methods can include the act of providing such appropriate equipment. Such provision can be performed by the end user. In other words, the act of "providing" simply requires the end user to obtain, access, handle, place, set up, activate, power on, or otherwise perform other actions to provide the necessary equipment in the subject method. The methods recited herein can be implemented in any logically possible order of the recited events and in the order in which the events are recited.

以上陈述了本发明的示例性方面以及考虑了材料选择和制造的细节。至于本发明的其它细节，它们可以与上述引用的专利和出版物以及本领域技术人员已广泛理解或认识的内容结合而得以认知。根据通常地或逻辑地被采用的附加行为，同样的道理也适用于本发明基于方法的方面。The above describes exemplary aspects of the present invention and details regarding material selection and manufacturing. As for other details of the present invention, they can be appreciated in conjunction with the above-referenced patents and publications and the broad understanding or knowledge of those skilled in the art. The same applies to the method-based aspects of the present invention, in terms of additional acts as commonly or logically employed.

此外，尽管已经参照可选地结合多个特征的若干示例描述了本发明，但相对于本发明的每个变体，本发明并不受限于这些被描述或指示的情形。在不背离本发明的真实精神和范围的情况下，可对所描述的本发明做出多种改变，并可用等同物(不论其在本文中被列举还是为了简洁起见本文没有包括)进行替代。此外，当提供数值范围时，应理解，该范围的上限和下限之间的每一个中间值，以及所声明的范围中任何其它声明的值或中间值，均涵盖在本发明之中。Furthermore, although the present invention has been described with reference to several examples optionally incorporating various features, the invention is not limited to that which is described or indicated with respect to each variation of the invention. Various changes may be made to the invention described, and equivalents may be substituted (whether recited herein or not included for the sake of brevity), without departing from the true spirit and scope of the invention. In addition, where a range of values is provided, it is understood that every intervening value between the upper and lower limits of that range, and any other stated or intervening value in that stated range, is encompassed within the invention.

此外，可以预期，所述本发明的变体的任何可选特征可被独立地阐述和要求，或与本文所述的任何一个或多个特征组合来阐述和要求。对单数项的引用包括多个相同项目存在的可能性。更具体地，如本文和所附相关权利要求中所使用的，除非相反的特别声明，否则单数形式“一”、“一个”、“所述”和“该”包括多个指示对象。换句话说，在上述说明书以及与本公开相关的权利要求中该冠词的使用考虑到了“至少一个”所述主题项目。进一步指出，这样的权利要求可能被撰写以排除任何可选的组件。就这一点而言，该声明旨在作为前置基础以用于结合权利要求要素的叙述来使用这类排除性术语，如“仅”、“只”等，或使用“否定”限制。Furthermore, it is contemplated that any optional feature of the variants of the invention may be set forth and claimed independently, or in combination with any one or more of the features described herein. References to singular items include the possibility that a plurality of the same item exists. More specifically, as used herein and in the accompanying related claims, the singular forms "a", "an", "said", and "the" include multiple referents unless specifically stated to the contrary. In other words, the use of the article in the foregoing specification and in the claims associated with the present disclosure contemplates "at least one" of the subject items. It is further noted that such claims may be drafted to exclude any optional components. In this regard, this statement is intended to serve as antecedent basis for the use of such exclusive terms, such as "solely", "only", and the like, in conjunction with the recitation of claim elements, or the use of a "negative" limitation.

如果不使用这种排他性术语,与本公开内容相关的权利要求中的术语“包括”应当允许包括任何额外的组件——不论给定数目的组件是否被列举在这样的权利要求中,或者增加的特征可被视为转化这种权利要求中阐述的组件的性质。除非本文具体地进行了定义,本文使用的所有技术和科学术语将被给予尽可能宽泛的通常理解含义,同时保持权利要求的有效性。If such exclusive terminology is not used, the term "comprising" in claims related to the present disclosure should allow the inclusion of any additional components - regardless of whether a given number of components are recited in such claims, or whether the added features can be considered to transform the nature of the components recited in such claims. Unless specifically defined herein, all technical and scientific terms used herein are to be given the broadest commonly understood meaning possible while maintaining claim validity.

本发明的广度并不受限于所提供的实施例和/或主题说明书,而是仅受限于与本公开内容关联的权利要求语言的范围。The breadth of the present invention is not limited by the examples provided and/or subject specification, but is limited only by the scope of the claim language associated with this disclosure.

Claims (26)

1.一种在虚拟图像呈现系统中操作的方法，所述方法包括:1. A method of operating in a virtual image rendering system, the method comprising: 在虚拟或增强图像呈现系统的一个或多个传感器处检测指示呈现给最终用户的帧序列中的帧的第一部分中第一组像素中的第一相邻像素之间的第一间隔所对应的第一分辨率将不同于呈现给所述最终用户的帧的第二部分中第二组像素中的第二相邻像素之间的不同间隔所对应的第二分辨率的指示;The system detects, at one or more sensors, an indication that the first resolution corresponding to the first interval between the first adjacent pixels in the first group of pixels in the first part of a frame in a sequence of frames presented to the end user will be different from the second resolution corresponding to the different interval between the second adjacent pixels in the second group of pixels in the second part of the frame presented to the end user. 部分地或全部地基于与所述指示有关的方向特征，选择要为其修改所述第一间隔的帧的所述第一组像素;The first group of pixels of the frame for which the first interval is to be modified is selected, partially or entirely based on directional features related to the indication. 至少通过在虚拟图像呈现系统的一个或多个调制子系统处将驱动第一组像素的驱动信号修改为用于在帧序列中的所述帧之后的后续帧的已修改的驱动信号，至少部分地基于所述指示调整所述第一部分中的第一组像素表示的分辨率，以生成由所述第一部分中已调整的第一组像素表示的第一分辨率;以及At least by modifying, at one or more modulation subsystems of the virtual image rendering system, the drive signal driving the first group of pixels into a modified drive signal for a subsequent frame after the frame in the frame sequence, the resolution represented by the first group of pixels in the first portion is adjusted, based at least in part on the indication, to generate a first resolution represented by the adjusted first group of pixels in the first portion; and 至少通过开始帧的呈现、在完成所述呈现之前中断所述呈现、以及用已调整的第一组像素替代由所述第一组像素表示的所述第一部分，以至少以所述第一分辨率呈现所述帧的第一部分，并以所述第二分辨率呈现所述帧的第二部分，在包括所述第一部分和所述第二部分的所述后续帧中呈现一个或多个虚拟对象给最终用户。One or more virtual objects are presented to the end user in the subsequent frame including the first portion and the second portion, at least by starting the presentation of the frame, interrupting the presentation before the presentation is completed, and replacing the first portion represented by the first group of pixels with the adjusted first group of pixels, so that the first portion of the frame is presented at at least the first resolution and the second portion of the frame is presented at the second resolution.

2.如权利要求1所述的方法，还包括:2. The method of claim 1, further comprising: 在所述虚拟或增强图像呈现系统的一个或多个调制子系统处，部分地或全部地基于所述指示，将用于第一组像素的驱动信号的电特征修改为已修改的驱动信号。At one or more modulation subsystems of the virtual or enhanced image rendering system, the electrical characteristics of the driving signal for the first group of pixels are modified to a modified driving signal, based in part or in whole on the indication.

3.如权利要求1所述的方法，还包括:3. The method of claim 1, further comprising: 调整一组像素特征，包括所感知的尺寸或所感知的亮度之中的至少一个;Adjusting a set of pixel features, including at least one of perceived size or perceived brightness; 调整最终用户可感知的一个或多个像素特征;以及Adjusting one or more pixel features that are perceptible to the end user; and 监测超过标称头部运动值的头部运动。Monitoring head movements that exceed the nominal head movement value.

4.如权利要求1所述的方法，进一步包括:4. The method of claim 1, further comprising: 基于已检测的头部运动的方向选择所述帧的第一组像素，其中所述第一组像素的方向与已检测的头部运动的方向相同;以及A first group of pixels in the frame is selected based on the direction of the detected head movement, wherein the direction of the first group of pixels is the same as the direction of the detected head movement; and 增加至少一个后续帧的第一组像素的尺寸。The size of the first group of pixels in at least one subsequent frame is increased.

5.如权利要求1所述的方法，进一步包括调整第一组像素的可变聚焦组件。5. The method of claim 1, further comprising adjusting a variable focus component of the first group of pixels.

6.如权利要求1所述的方法，进一步包括调整第一组像素的可变尺寸源。6. The method of claim 1, further comprising adjusting the variable-size source of the first group of pixels.

7.如权利要求1所述的方法，进一步包括调整第一组像素的抖动。7. The method of claim 1, further comprising adjusting the jitter of the first group of pixels.

8.如权利要求1所述的方法，进一步包括:8. The method of claim 1, further comprising: 部分地或全部地基于一个或多个传感器检测的最终用户的头部运动的方向选择所述帧的所述第一组像素，其中所述第一组像素的方向与所述头部运动的方向相同;以及The first set of pixels in the frame is selected, in part or in whole, based on the direction of the end-user's head movement detected by one or more sensors, wherein the direction of the first set of pixels is the same as the direction of the head movement; and 响应于所述头部运动增加所述至少一个后续帧的第一组像素的亮度。The brightness of the first group of pixels in the at least one subsequent frame is increased in response to the head movement.

9.如权利要求1所述的方法，进一步包括:9. The method of claim 1, further comprising: 部分地或全部地基于一个或多个传感器检测的最终用户的头部运动的方向选择所述第一组像素，其中所述第一组像素的第一方向与所述头部运动的方向相反;以及The first set of pixels is selected, in part or in whole, based on the direction of the end-user's head movement detected by one or more sensors, wherein the first direction of the first set of pixels is opposite to the direction of the head movement; and 响应于所述头部运动，减少所述至少一个后续帧的第一组像素的尺寸。In response to the head movement, the size of the first group of pixels in the at least one subsequent frame is reduced.

10.如权利要求1所述的方法，进一步包括:10. The method of claim 1, further comprising: 部分地或全部地基于一个或多个传感器检测的最终用户的头部运动的方向选择所述第一组像素，其中所述第一组像素的第一方向与所述头部运动的方向相反;以及The first set of pixels is selected, in part or in whole, based on the direction of the end-user's head movement detected by one or more sensors, wherein the first direction of the first set of pixels is opposite to the direction of the head movement; and 响应于所述头部运动，减少所述至少一个后续帧的第一组像素的亮度。In response to the head movement, the brightness of the first group of pixels in the at least one subsequent frame is reduced.

11.如权利要求1所述的方法，其中所述指示部分地或全部地基于检测所述最终用户的头部运动属性已经超过头部运动属性的标称值。11. The method of claim 1, wherein the indication is based in part or in whole on detecting that a head motion attribute of the end user has exceeded a nominal value of the head motion attribute.

12.如权利要求11所述的方法，其中所述头部运动属性包括所述头部运动的速度或所述头部运动的加速度之中的至少一个，且所述指示部分地或全部地基于通过惯性传感器接收到的信号。12. The method of claim 11, wherein the head motion attribute includes at least one of the velocity of the head motion or the acceleration of the head motion, and the indication is based in part or in whole on a signal received by an inertial sensor.

13.如权利要求1所述的方法，其中所述指示部分地或全部地基于通过成像器接收到的信号。13. The method of claim 1, wherein the indication is based in part or in whole on a signal received by an imager.

14.如权利要求1所述的方法，其中所述至少一个后续帧的提供是部分地或全部地基于光栅扫描型帧、螺旋扫描型帧或利萨茹扫描型帧之中的至少一个。14. The method of claim 1, wherein the provision of the at least one subsequent frame is based in part or in whole on at least one of a raster scan frame, a spiral scan frame, or a Lissajous scan frame.

15.一种用于呈现虚拟内容的虚拟或增强图像呈现系统，包括:15. A virtual or augmented image rendering system for presenting virtual content, comprising: 一个或多个传感器，其被配置为检测指示呈现给最终用户的帧序列中的帧的第一部分中第一组像素中的第一相邻像素之间的第一间隔所对应的第一分辨率将不同于呈现给最终用户的帧的第二部分中第二组像素中的第二相邻像素之间的不同间隔所对应的第二分辨率的指示;One or more sensors configured to detect an indication that the first resolution corresponding to a first interval between first adjacent pixels in a first group of pixels in a first part of a frame in a sequence of frames presented to an end user will be different from the second resolution corresponding to a different interval between second adjacent pixels in a second group of pixels in a second part of the frame presented to the end user; 所述虚拟或增强图像呈现系统被配置为，部分地或全部地基于与所述指示有关的方向特征，选择帧的所述第一组像素;The virtual or enhanced image rendering system is configured to select the first group of pixels of a frame, in part or in whole, based on directional features related to the indication; 虚拟或增强图像呈现系统中的一个或多个调制子系统，其包括投影子系统和至少一个处理器并且操作地耦合到所述一个或多个传感器，且被配置为至少通过将驱动第一组像素的驱动信号修改为用于在帧序列中的所述帧之后的后续帧的已修改的驱动信号，至少部分地基于所述指示调整由所述第一部分中的第一组像素表示的分辨率，以生成由所述第一部分中已调整的第一组像素表示的第一分辨率;以及One or more modulation subsystems in the virtual or enhanced image rendering system, comprising a projection subsystem and at least one processor and operatively coupled to the one or more sensors, and configured to adjust the resolution represented by the first group of pixels in the first portion, at least in part based on the indication, by at least modifying the drive signal driving the first group of pixels into a modified drive signal for a subsequent frame after the frame in the frame sequence, to generate a first resolution represented by the adjusted first group of pixels in the first portion; and 一个或多个投影机，其操作地耦合到所述一个或多个调制子系统，且被配置为至少通过开始帧的呈现、在完成所述呈现之前中断所述呈现、以及用已调整的第一组像素替代用所述第一组像素表示的所述第一部分，以至少以第一分辨率呈现所述帧的第一部分并以第二分辨率呈现所述帧的第二部分，来在包括所述第一部分和所述第二部分的所述后续帧中呈现一个或多个虚拟对象给最终用户。One or more projectors, operatively coupled to the one or more modulation subsystems, and configured to present one or more virtual objects to the end user in the subsequent frame including the first and second portions, at least by starting presentation of the frame, interrupting the presentation before the presentation is completed, and replacing the first portion represented by the first group of pixels with the adjusted first group of pixels, so as to present the first portion of the frame at at least the first resolution and the second portion of the frame at the second resolution.

16.如权利要求15所述的虚拟或增强图像呈现系统，其中所述一个或多个调制子系统进一步被配置为调整最终用户可感知的一个或多个像素特征。16. The virtual or enhanced image rendering system of claim 15, wherein the one or more modulation subsystems are further configured to adjust one or more pixel features perceptible to the end user.
17.如权利要求15所述的虚拟或增强图像呈现系统,其中所述一个或多个调制子系统进一步被配置为部分地或全部地基于所述指示,将用于第一组像素的驱动信号的一个或多个电特征修改为已修改的驱动信号。17. The virtual or enhanced image rendering system of claim 15, wherein the one or more modulation subsystems are further configured to modify one or more electrical characteristics of the driving signal for the first group of pixels to a modified driving signal, in part or in whole, based on the instruction. 18.如权利要求17所述的虚拟或增强图像呈现系统,其中所述一个或多个电特征包括所述驱动信号的电压或电流。18. The virtual or enhanced image rendering system of claim 17, wherein one or more electrical features include the voltage or current of the drive signal. 19.如权利要求17所述的虚拟或增强图像呈现系统,其中所述一个或多个电特征包括所述驱动信号的振幅或斜率。19. The virtual or enhanced image rendering system of claim 17, wherein one or more electrical features include the amplitude or slope of the drive signal. 20.如权利要求15所述的虚拟或增强图像呈现系统,进一步包括被配置为检测最终用户的头部运动的一个或多个传感器,且所述一个或多个调制子系统进一步被配置为调整一组像素特征,所述一组像素特征包括所感知的尺寸或所感知的亮度之中的至少一个。20. The virtual or augmented image rendering system of claim 15, further comprising one or more sensors configured to detect head movements of an end user, and the one or more modulation subsystems further configured to adjust a set of pixel features, the set of pixel features including at least one of perceived size or perceived brightness. 21.如权利要求15所述的虚拟或增强图像呈现系统,其中所述虚拟或增强图像呈现系统进一步被配置为:基于已检测的头部运动的方向选择所述帧的第一组像素,其中所述第一组像素的方向与已检测的头部运动的方向相同;以及增加至少一个后续帧的第一组像素的尺寸。21. The virtual or enhanced image rendering system of claim 15, wherein the virtual or enhanced image rendering system is further configured to: select a first group of pixels of the frame based on the direction of a detected head movement, wherein the direction of the first group of pixels is the same as the direction of the detected head movement; and increase the size of the first group of pixels in at least one subsequent frame. 22.如权利要求15所述的虚拟或增强图像呈现系统,其中所述虚拟或增强图像呈现系统进一步被配置为调整第一组像素的可变聚焦组件。22. 
The virtual or enhanced image rendering system of claim 15, wherein the virtual or enhanced image rendering system is further configured to adjust a variable focus component of a first set of pixels. 23.如权利要求15所述的虚拟或增强图像呈现系统,其中所述虚拟或增强图像呈现系统进一步被配置为调整第一组像素的可变尺寸源。23. The virtual or enhanced image rendering system of claim 15, wherein the virtual or enhanced image rendering system is further configured to adjust a variable-size source of the first set of pixels. 24.如权利要求15所述的虚拟或增强图像呈现系统,其中所述虚拟或增强图像呈现系统进一步被配置为调整第一组像素的抖动。24. The virtual or enhanced image rendering system of claim 15, wherein the virtual or enhanced image rendering system is further configured to adjust the jitter of the first group of pixels. 25.如权利要求15所述的虚拟或增强图像呈现系统,进一步包括操作地耦合到微处理器的一个或多个传感器,所述微处理器被配置为:部分地或全部地基于所述操作地耦合到微处理器的一个或多个传感器检测的头部运动的方向选择所述帧的所述第一组像素,其中所述第一组像素的方向与已检测的头部运动的方向相同;以及响应于所述已检测的头部运动增加所述至少一个后续帧的第一组像素的亮度。25. The virtual or enhanced image rendering system of claim 15, further comprising one or more sensors operatively coupled to a microprocessor, the microprocessor being configured to: select, in part or in whole, a first set of pixels of the frame based on the direction of a head movement detected by the one or more sensors operatively coupled to the microprocessor, wherein the direction of the first set of pixels is the same as the direction of the detected head movement; and increase the brightness of the first set of pixels of the at least one subsequent frame in response to the detected head movement. 26.如权利要求15所述的虚拟或增强图像呈现系统,进一步包括操作地耦合到微处理器的一个或多个传感器,所述微处理器被配置为:部分地或全部地基于所述操作地耦合到微处理器的一个或多个传感器检测的头部运动的方向选择所述第一组像素,其中所述第一组像素的方向与已检测的头部运动的方向相反;以及响应于所述操作地耦合到微处理器的一个或多个传感器检测的头部运动,减少所述至少一个后续帧的第一组像素的尺寸。26. 
The virtual or enhanced image rendering system of claim 15, further comprising one or more sensors operatively coupled to a microprocessor, the microprocessor being configured to: select the first set of pixels, partially or entirely, based on the direction of a head movement detected by the one or more sensors operatively coupled to the microprocessor, wherein the direction of the first set of pixels is opposite to the direction of the detected head movement; and reduce the size of the first set of pixels in the at least one subsequent frame in response to the head movement detected by the one or more sensors operatively coupled to the microprocessor.
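Claims 21, 25, and 26 above describe selecting a "first set" of pixels according to the direction of a detected head movement and then enlarging, brightening, or shrinking that set in a subsequent frame. The selection-and-adjustment logic can be sketched minimally as follows; the midpoint split of the frame and the 1.5/0.75 scale factors are illustrative assumptions, since the claims leave the selection rule and adjustment magnitudes to the implementation:

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class Pixel:
    x: float           # horizontal position, normalized 0.0 (left) .. 1.0 (right)
    size: float        # perceived spot size (arbitrary units)
    brightness: float  # perceived brightness (proxy for drive-signal amplitude)

def adjust_for_head_motion(pixels: List[Pixel], head_dir: int) -> List[Pixel]:
    """Adjust a frame's pixels for a detected head movement.

    head_dir: +1 for a rightward head movement, -1 for leftward.
    Pixels on the half of the frame toward the motion play the role of the
    "first set" of claims 21/25 (enlarged and brightened); pixels on the
    opposite half play the role of the set in claim 26 (shrunk). The
    midpoint split and the 1.5 / 0.75 factors are hypothetical choices,
    not claim language.
    """
    out = []
    for p in pixels:
        on_motion_side = (p.x >= 0.5) == (head_dir > 0)
        if on_motion_side:
            # Claims 21/25: enlarge and brighten pixels toward the head motion.
            out.append(replace(p, size=p.size * 1.5, brightness=p.brightness * 1.5))
        else:
            # Claim 26: shrink pixels on the side opposite the head motion.
            out.append(replace(p, size=p.size * 0.75))
    return out
```

In a real system the brightness change would be effected by modifying an electrical characteristic of the drive signal (claims 17-19) rather than a stored per-pixel value; this sketch only models the perceptual outcome.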
HK18108141.1A 2013-03-15 2018-06-25 Display system and method HK1248851B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361801219P 2013-03-15 2013-03-15
US61/801,219 2013-03-15

Publications (2)

Publication Number Publication Date
HK1248851A1 HK1248851A1 (en) 2018-10-19
HK1248851B true HK1248851B (en) 2021-07-16

