GB2635734A - Methods and systems for augmented/virtual reality - Google Patents
- Publication number
- GB2635734A (application GB2317935.1A / GB202317935A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- user
- nodes
- content
- information
- control unit
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0381—Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0384—Wireless input, i.e. hardware and software details of wireless interface arrangements for pointing devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/80—Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A computer-implemented method is disclosed, the method comprising determining a digital control unit to display in a digital space; displaying, via an augmented reality user device of a user, the digital control unit in the digital space; displaying, via the augmented reality user device, a plurality of nodes in the digital space, each of the respective plurality of nodes being independently controllable; receiving a user input from the user at the digital control unit; determining, based on the user input, content to be displayed at at least one node of the plurality of nodes; and displaying, via the augmented reality user device, the determined content at the at least one node of the plurality of nodes. The nodes may display content from multiple different data input streams. The position of the nodes may be changed in response to a trigger event. The trigger event may be a user command, a gesture from the user or a detection of a gaze of the user.
Description
METHODS AND SYSTEMS FOR AUGMENTED/VIRTUAL REALITY
Technical Field
This disclosure relates generally to methods and systems for augmented and/or virtual reality (AR / VR).
Background
Augmented reality has become an increasingly important technological tool, with applications in everything from healthcare to gaming, and architecture to modelling.
Currently, AR is developing along two independent paths - hardware and software - without any cohesive device-agnostic and content-agnostic platform. As a result, users have difficulty using AR technology: third-party software is not designed for multiple hardware providers or to work in conjunction with other software (and, likewise, hardware is not designed with existing software in mind), so AR devices and applications are inefficient and lack ease of use. Moreover, owing to the number of different providers and the multiple applications needed for users to perform various tasks, there are inherent security risks and a lack of shared resources. For example, most applications require unique login details and may have different levels of privacy and security.
There is therefore a need to provide an augmented reality system which addresses some or all of the shortcomings listed above.
Summary
Aspects and optional features of the present disclosure are described in the accompanying claims.
One aspect of the present disclosure provides a computer-implemented method comprising determining a digital control unit to display in a digital space; displaying, via an augmented reality user device of a user, the digital control unit in the digital space; displaying, via the augmented reality user device, a plurality of nodes in the digital space, each of the respective plurality of nodes being independently controllable; receiving a user input from the user at the digital control unit; determining, based on the user input, content to be displayed at at least one node of the plurality of nodes; and displaying, via the augmented reality user device, the determined content at the at least one node of the plurality of nodes.
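The claimed flow can be illustrated in code. The following is a minimal, non-limiting sketch in which nodes are independently controllable content slots and the digital control unit routes user input to content; all names (Node, determine_content, handle_control_unit_input) and the selection logic are illustrative assumptions, not part of the claims.

```python
# A minimal sketch of the claimed method flow. All names are hypothetical;
# the disclosure does not prescribe an API.
from dataclasses import dataclass

@dataclass
class Node:
    """An independently controllable location in the digital space."""
    node_id: int
    content: str | None = None  # content currently hosted at this node

def determine_content(user_input: str, streams: dict[str, str]) -> str:
    # Map the user input to content from one of the available data input
    # streams (the selection logic is implementation-specific).
    return streams.get(user_input, "default content")

def handle_control_unit_input(user_input: str,
                              nodes: list[Node],
                              streams: dict[str, str],
                              target_id: int) -> None:
    """Receive input at the digital control unit, determine content, and
    display it at the target node while other nodes remain unchanged."""
    content = determine_content(user_input, streams)
    for node in nodes:
        if node.node_id == target_id:
            node.content = content  # display at the at least one node

# Usage: three independently controllable nodes, two data input streams.
nodes = [Node(0, "calendar"), Node(1), Node(2, "email")]
streams = {"show map": "map content", "show video": "video content"}
handle_control_unit_input("show map", nodes, streams, target_id=1)
print([n.content for n in nodes])  # nodes 0 and 2 keep their content
```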
Optionally, the operation of displaying the determined content at the at least one node of the plurality of nodes may comprise displaying the determined content at the at least one node of the plurality of nodes while maintaining content displayed at at least one other node of the plurality of nodes.
Optionally, the method may further comprise receiving a plurality of data input streams; and the operation of determining the content to be displayed may comprise determining the content from the plurality of data input streams.
Optionally, the method may further comprise receiving device information of the augmented reality user device, the device information indicating a capability of the augmented reality user device; and the operation of determining the content to be displayed may comprise selecting content at least partially based on the device information.
Optionally, the device information comprises at least one of: a battery life of the user device, a bandwidth of the user device, and a processing power of the user device.
Optionally, the digital control unit may be displayed within a near field reaching distance from the user, and/or the plurality of nodes may be displayed at a far field distance from the user.
Optionally, the digital space is displayed as an overlay on a real world scene.
Optionally, the method may further comprise capturing a view of the digital space from the augmented reality user device, the view including the digital control unit and/or the plurality of nodes; and sending, from the augmented reality user device, the captured view of the digital space to a second augmented reality user device. The captured view may optionally include a real world view from a perspective of the user.
Optionally, the method may further comprise receiving a further user input, the further user input being directed at one of the plurality of nodes; determining a type of digital control unit; and updating the display of the digital control unit based on the determined type of digital control unit.
Optionally, the digital control unit may be a type of digital control unit, and the type of digital control unit may be used to interpret an associated action with a user input received at the digital control unit.
Optionally, the method may further comprise receiving a trigger event; and changing a position of at least one node relative to the user in the digital space in response to receiving the trigger event. The trigger event may optionally be at least one of: a user command, a gesture from the user, detection of a gaze of the user. The trigger event may optionally be based on user activity, user data, scheduling information, and/or preset settings.
Optionally, the method may further comprise saving a representation of the digital space in memory.
Optionally, the method may further comprise receiving a far-field user input; and adjusting a position of one or more of the plurality of nodes relative to a position of the user in the digital space, based on the received far-field user input.
Optionally, determining content to be displayed at the at least one node of the plurality of nodes may comprise selecting, for each of the at least one node of the plurality of nodes, content from at least one content source; and determining concurrency information for the selected content; and displaying the determined content may comprise displaying the determined content based on the determined concurrency information.
Optionally, displaying the plurality of nodes in the digital space may comprise obtaining position information for the plurality of nodes. The method may further comprise accessing user accessibility information, and obtaining position information for the plurality of nodes may comprise determining position information for each of the plurality of nodes based at least partially on the user accessibility information.
Obtaining position information for the plurality of nodes may optionally comprise obtaining, from one or more sensors, a real world view from a perspective of the user device, and determining position information for each of the plurality of nodes based at least partially on the real world view.
Obtaining position information for the plurality of nodes may optionally comprise determining a head level of the user and determining position information for each of the plurality of nodes based at least partially on the determined head level of the user.
The method may further comprise determining a user's line of sight and, upon detection of forward motion of the user, hiding or removing any of the nodes or the digital control unit positioned in the user's line of sight.
Another aspect of the present disclosure provides a system comprising one or more processors and a memory storing instructions that, when executed by the one or more processors, cause the system to perform the operations of the method described herein.
Brief description of the drawings
Some specific implementations are now described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 illustrates an overview of an augmented reality, or virtual reality, system according to embodiments of the disclosure;
Figures 2a and 2b illustrate 3-dimensional graphical user interfaces (3D GUIs) in a digital space according to embodiments of the disclosure;
Figure 2c illustrates positioning of a node in accordance with embodiments of the disclosure;
Figures 3a and 3b illustrate 3-dimensional graphical user interfaces (3D GUIs) in a digital space according to embodiments of the disclosure;
Figure 4 illustrates an interaction with an augmented reality system according to embodiments of the disclosure from a user's perspective;
Figure 5 shows a schematic data flow diagram for an augmented reality system according to embodiments of the disclosure;
Figure 6 shows a data flow for an augmented reality system according to embodiments of the disclosure;
Figure 7 shows an example application for an augmented reality system according to embodiments of the disclosure;
Figures 8a-8f show flowcharts of methods for an augmented reality system according to embodiments of the disclosure;
Figure 9 illustrates a block diagram of a computing device; and
Figure 10 illustrates a diagram of a computer-readable medium.
Detailed description
Aspects and features of the disclosure will be described below. In overview, and without limitation, the present application relates to methods and systems for augmented reality and virtual reality.
The description of Figures 1 to 4 will describe various aspects of the general system and method, whilst the descriptions of Figures 5 to 7 will describe data flow both for a general case and for specific example applications of the system. The descriptions of the flowcharts of Figures 8a to 8f will describe the method with various optional operations. It is to be understood that elements of one or all of these additional aspects of the general method may be applicable in any implementation of the general method.
Figure 1 illustrates an overview of an augmented reality, or virtual reality, system according to embodiments of the disclosure. Shown in Figure 1 is an AR/VR provider 2 which is communicatively coupled with one or more user devices 3a to 3d. The AR/VR provider 2 may be a device or a system such as system 100. The AR/VR provider 2 may provide a digital entity with which one or more user devices 3 may interact. The digital entity may be virtual, and may be displayed in 2 or 3 dimensions in digital space, such as a 2-D or 3-D hologram, via one or more of user devices 3a to 3d. For example, a digital entity may be a virtual kiosk that is displayed in a digital space. The virtual kiosk may be displayed based on a geographic location of the user device 3, or one or more geographic locations for a virtual kiosk may be predetermined. For example, a virtual kiosk may be provided at an area of high foot traffic, such as a stadium, tourist attraction, hospital, public transport station or the like. In some embodiments, multiple user devices may interact with the AR/VR provider 2 simultaneously or substantially simultaneously. In some embodiments, multiple user devices may interact with the same digital entity, e.g. the same instance of the digital entity. In some embodiments, each of multiple user devices displays a different (e.g. independent) instance of the digital entity. In some embodiments, multiple user devices 3 may interact with the AR/VR provider 2 collaboratively or independently from each other. That is, multiple user devices 3 may each provide input data, for example by interacting with the same instance of a digital entity, or with independent instances of the digital entity. For example, in some embodiments, multiple users may provide inputs to the AR/VR provider 2 (e.g. via the digital entity) in response to a single request. For example, multiple users may use an online payment system to jointly pay (e.g. contribute) for a single service, or each of multiple users may input their own information simultaneously for a group activity or programme. In other embodiments, multiple users may independently interact with the AR/VR provider 2 via their respective user devices 3. For example, each of multiple user devices 3 may obtain travel information for a respective destination by interacting with the same AR/VR provider 2 simultaneously.
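The collaborative and independent interaction modes described above can be sketched as a simple session model. The class and method names (EntityInstance, ARVRProvider, join_shared, new_independent) are hypothetical; the disclosure does not prescribe an API.

```python
# A minimal sketch of collaborative vs independent interaction with a
# digital entity; names and data structures are illustrative assumptions.
import itertools

class EntityInstance:
    _ids = itertools.count()

    def __init__(self, kind: str):
        self.instance_id = next(EntityInstance._ids)
        self.kind = kind                      # e.g. "virtual kiosk"
        self.participants: set[str] = set()   # device IDs interacting with it

class ARVRProvider:
    def __init__(self) -> None:
        self._shared: dict[str, EntityInstance] = {}

    def join_shared(self, device_id: str, kind: str) -> EntityInstance:
        """All callers interact with the same instance (collaborative use)."""
        if kind not in self._shared:
            self._shared[kind] = EntityInstance(kind)
        inst = self._shared[kind]
        inst.participants.add(device_id)
        return inst

    def new_independent(self, device_id: str, kind: str) -> EntityInstance:
        """Each caller receives its own instance (independent use)."""
        inst = EntityInstance(kind)
        inst.participants.add(device_id)
        return inst

provider = ARVRProvider()
a = provider.join_shared("device-3a", "virtual kiosk")
b = provider.join_shared("device-3b", "virtual kiosk")
assert a is b                 # devices 3a and 3b share one kiosk instance
c = provider.new_independent("device-3c", "virtual kiosk")
assert c is not a             # device 3c gets an independent instance
```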
The user devices 3 may be different models and/or have different manufacturers. The user devices 3 may operate using different operating systems, communication protocols and/or functions. Although depicted in Figure 1 as virtual reality headsets, one or more of the user devices 3 may be an AR device, a VR device, a headset, a mobile phone, a camera, a lens, a pair of glasses (e.g. with smart glass), or the like. The user devices 3 may have different capabilities, such as battery life, processing power, integrated processor(s), computer tethering, sensor inputs, range, and the like.
Although depicted in Figure 1 as being positioned centrally relative to the user devices 3, the AR/VR provider 2 may be positioned anywhere within a communication range of the user devices 3. The AR/VR provider 2 may be configured to provide content to the one or more user devices 3 for the one or more user devices 3 to display in a digital space to their respective users. The selection of content for a particular one of the user devices 3 may be dependent on information from said user device 3.
For example, the selection of content for a particular one of the user devices 3 may be dependent on accessibility information indicating accessibility needs or settings for the user of said device, capability information indicating the technological capability of the device such as battery life, processing power, bandwidth, and the like, user information such as preprogrammed age or preference information, subscription information, location information, time of day information, sensor information such as motion sensor information, health information, current lighting information, recent user activity or the like.
The communication between the one or more user devices 3 and the AR/VR provider 2 may be bidirectional. In some embodiments, one or more of the user devices 3 may communicate with each other, either unidirectionally or bidirectionally.
Figures 2a and 2b illustrate 3-dimensional graphical user interfaces (3D GUIs) in a digital space according to embodiments of the disclosure. As shown in Figures 2a and 2b, the 3D GUI comprises a plurality of nodes 4. Each node 4 may host a channel, which is a user-selectable process, service and/or utility. That is, each node 4 represents a location in the digital space at which a channel may be presented or hosted. The channels may operate concurrently. In some embodiments, the channels persist in the digital space. Each node 4 may be controlled by multimodal input. The nodes 4 may be displayed to the user via the user device 3.
The nodes 4 and/or the channels may be provided to the user in a preset or pre-determined arrangement. In some embodiments, the nodes 4, and therefore the channels presented or hosted on said nodes 4, may be head-locked - that is, as the user's head rotates, the channels presented to him/her remain in the same position relative to the user's face. In other words, the nodes rotate with the user's face such that a channel being hosted on node 4a directly in front of the user's face in a starting position remains directly in front of the user's face as the user's head rotates. In other embodiments, the nodes may be spatially anchored - that is, as the user's head rotates, the user faces different channels, e.g. from node 4a to node 4c. In other words, as the user's head rotates, the nodes, and therefore channels, appear fixed in their original positions from the perspective of the user.
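The two anchoring modes can be expressed as a coordinate transform. The following is a minimal 2-D sketch, assuming head rotation is a single yaw angle; the axis convention and function names are illustrative assumptions.

```python
# A minimal sketch of head-locked vs spatially anchored node positioning.
import math

def rotate(point: tuple[float, float], yaw: float) -> tuple[float, float]:
    x, y = point
    return (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))

def displayed_position(anchor, head_yaw, mode):
    """Return where the node appears in the user's view frame.

    head-locked:        the node follows the head, so its view-frame
                        position never changes as the head rotates.
    spatially-anchored: the node stays fixed in the world, so its
                        view-frame position counter-rotates with the head.
    """
    if mode == "head-locked":
        return anchor                     # constant relative to the face
    if mode == "spatially-anchored":
        return rotate(anchor, -head_yaw)  # world-fixed from the user's view
    raise ValueError(mode)

node_in_front = (0.0, 1.0)  # 1 m directly ahead at the starting position
for yaw_deg in (0, 90):
    yaw = math.radians(yaw_deg)
    print(yaw_deg,
          displayed_position(node_in_front, yaw, "head-locked"),
          displayed_position(node_in_front, yaw, "spatially-anchored"))
```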
In some embodiments, the channels may be arranged based at least partially on user data such as historical use data (e.g. data representing how the user has previously interacted with one or more channels), user schedule information (e.g. by accessing a user's calendar, and/or determining previous user interactions based on day of the week, time of day, etc.), and/or user preferences, such as those provided during an onboarding process.
In some embodiments, the nodes 4 and/or the channels may be arranged based on a determined location. That is, the system may determine the location of the user (e.g. the location of the user device) and arrange one or more channels accordingly. In some embodiments, the user device 3 may determine the location of the user and may determine the arrangement of one or more channels. In some embodiments, the user device 3 may determine the location of the user and may transmit location information to the system 100 (described in more detail with reference to Figure 5), and the system 100 may determine the arrangement of one or more channels. In some embodiments, the system 100 may comprise the AR/VR provider 2 and/or one or more additional components, such as a local server and/or cloud services. System 100 may be a distributed system. In some embodiments, the system 100 may also comprise the user device(s) 3, although in other embodiments the user devices 3 are external to the system 100. Some aspects described herein may be performed by the AR/VR provider 2 in some embodiments, and some aspects may be performed by the user device(s) 3. For example, the system (e.g. the AR/VR provider 2, a user device 3, or the like) may determine, using image or object recognition, a room or other setting, such as by identifying furniture or a room layout. That is, in some embodiments, the object recognition is performed locally (e.g. by the user device 3 or by the AR/VR provider 2) based on image data obtained via the user device 3. In some embodiments, the object recognition is performed remotely, e.g. by transmitting the image data obtained by the user device 3 to a remote server or cloud service, which performs the object recognition. In some embodiments, one or more rooms or settings may be known or recognizable to the system, for example by registering (e.g. pre-scanning) the room or setting during an onboarding process, background mapping, or by user input. In some embodiments, GPS and/or another positioning system may be used to determine a location, and therefore a setting, of the user (e.g. the user device). The matching of a user's surroundings to a known (e.g. registered) setting may be performed by the user device 3, by the AR/VR provider 2, and/or by a remote server (e.g. communicatively coupled with a secure database containing the registered setting information for said user). In some embodiments, one or more beacons, near-field communication devices, network information or the like may similarly be used to determine the setting and/or location of the user device and place spatially anchored data.
Although five nodes are shown in Figure 2a, it is to be understood that this is merely exemplary, and any number of nodes may be provided. In some embodiments, and as shown in Figure 2b, the nodes 4 are arranged around the user, for example surrounding the user in up to 360 degrees. Some or all of the nodes 4 may be arranged similarly to numbers on a clock face, time dial or the like. That is, some or all of the nodes 4 may be arranged based on a time. In some embodiments, some or all of the nodes 4 may be displayed in a continuous scrolling manner (e.g. may be configured to continuously scroll), for example based on user activities, schedules, inputs or the like. Additionally or alternatively, although all of the nodes 4 are illustrated as being displayed at a similar distance from the user, it is to be understood that this is merely exemplary. The nodes 4 may be displayed in a near field reaching distance, e.g. to appear within arm's reach of the user, in a far field distance from the user, or in between. The nodes 4 may be displayed at different distances relative to the user, for example as will be described with reference to Figures 3a and 3b.
The nodes 4 may be independently controlled. That is, a first content may be provided to a first node 4a whilst a second content may be provided or maintained on a second node 4b. The first content and the second content may be from different sources, or have different forms. For example, a first content presented on node 4a may be a video content, whilst a calendar content may be maintained on node 4b. During use, the content being provided to node 4b may, for example, be updated to instead show email content. In some embodiments, the content being provided to node 4b may be related or associated with the content being provided to node 4a. For example, the content being provided to node 4a may be a map providing directions to a train station to a user, and the content being provided to node 4b may be an up-to-date train timetable with departure information from the train station.
In some embodiments, one or more nodes 4 may be kept in a low power state. For example, a node 4f directly behind the user's head, may be in a low power state, e.g. while the user's head is facing away from node 4f. The low power state may entail, for example, reduced frequency of updating content displayed on said nodes and/or reducing or disabling inputs to said nodes. For example, the system may not recognize gesture controls to a node 4f behind a user's head, but may register voice controls. In some examples, nodes in the user's periphery may also be kept in a low power state.
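One possible realisation of the low power state is to derive each node's state from its angular offset relative to the user's head direction. The thresholds, refresh rates and field names below are illustrative assumptions, not values from the disclosure.

```python
# A minimal sketch of the low-power policy, assuming node bearings and the
# head direction are yaw angles in degrees.
def angular_offset(node_bearing: float, head_yaw: float) -> float:
    """Smallest absolute angle between head direction and node bearing."""
    diff = (node_bearing - head_yaw + 180.0) % 360.0 - 180.0
    return abs(diff)

def node_power_state(node_bearing: float, head_yaw: float) -> dict:
    offset = angular_offset(node_bearing, head_yaw)
    if offset > 110.0:   # e.g. behind the user's head
        return {"refresh_hz": 1, "gesture_input": False, "voice_input": True}
    if offset > 55.0:    # e.g. in the user's periphery
        return {"refresh_hz": 10, "gesture_input": True, "voice_input": True}
    return {"refresh_hz": 60, "gesture_input": True, "voice_input": True}

# A node directly behind the user updates slowly and ignores gestures,
# but still registers voice commands, as in the example above.
print(node_power_state(node_bearing=180.0, head_yaw=0.0))
```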
The user may interact with channels on each of the nodes 4. In some embodiments, the user may interact with a channel hosted on a first node 4a, for example to control the content presented on the first node 4a. In some embodiments, the user may interact with a channel on a first node 4a to control content presented on one or more other nodes 4b-4f, for example while the content presented on the first node 4a is maintained. A user may interact with a channel by any input means, including but not limited to: gestures, voice, inputs on an input device (for example through buttons, a touchscreen, a haptic input device or the like), eye gaze, motion, and the like.
A user may, additionally or alternatively, interact with a node 4, for example to change a position of a node 4, change an arrangement of nodes 4, remove a node 4, insert a node 4, or the like. Such interactions may take the form of any input means, including but not limited to: gestures, voice, inputs on an input device (for example through buttons, a touchscreen, a haptic input device or the like), eye gaze, motion, and the like.
The nodes 4 may be positioned according to a predetermined arrangement. The predetermined arrangement may be preset by the user or may be determined by the system, for example based on historical user data, user preference data and/or previous user activity. The nodes 4 may be positioned based on, for example, a height of a user, an eye level of the user, accessibility information of the user (for example in a visible region for vision-impaired users), or the like.
The position of the nodes 4 may be adjusted, for example in response to a user input, or in response to a detection of motion. In some embodiments, the system may hide, minimize or remove any content being displayed on a node 4a directly in front of the user, e.g. in the user's eyeline, in response to the detection of forward motion.
Figure 2c illustrates positioning of a node 4 in accordance with embodiments of the disclosure. As shown in Figure 2c, a user is wearing an AR or VR headset (user device 3). The system may determine a region 5 in which one or more nodes 4 may be displayed. For example, sensors communicatively coupled or integrated in the user device 3 may be used to determine the display region 5. The user's eye level, height, gaze, settings and the like may be used to determine the position and/or the size of a display region 5. Although depicted as having an upper limit, it is to be understood that this is purely exemplary. In some embodiments, the region 5 may extend as a semi-dome above the user's head. For example, a constellation map may be provided to a user in the region 5 in a 3-D spherical arrangement around the user's head.
In some embodiments, the display region 5 may be limited to a region above a user's feet-level. That is, the display region 5 may be defined to allow for a region 6 which is to be kept free from content display. The region 6 may be based on the position of the user's feet, the height of the user, accessibility of the user, and the like, and may be used to enable the user to ensure that the space in front of them is clear of obstacles or tripping hazards. The size of the region 6 may vary based on movement of the user. For example, the region 6 for a user walking at a higher speed may be larger than a region 6 for a user walking at a slower speed or standing still. The size, position and/or shape of the region 6 may, additionally or alternatively, be based on a direction of movement. Motion and direction may be determined using a sensor integrated or communicatively coupled with the user device 3, such as a gyroscope, accelerometer or the like.
It is to be noted that, although a user's "feet-level" has been referred to in the description of region 6, the disclosure is not so limited. For example, the region 6 may be used to refer to an area of ground in front of the user, and/or in front of a user's wheelchair, mobility apparatus or the like. As another example (not illustrated), the region 6 may be positioned at head-height to protect the user's face from a beam. In some embodiments, the region 6 may be an area above a table surface, for example to allow a user to write or type on an external computer whilst using the AR system.
The size, position and/or shape of display region 5 and/or region 6 may be altered by the user, for example by means of voice input, gesture input, device input (e.g. button(s) or touchscreen), or the like.
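The movement-dependent sizing of region 6 described above could be realised by scaling the depth of the content-free region with the user's speed. The constants below (base depth, reaction time) are illustrative assumptions.

```python
# A minimal sketch relating region 6 to movement, assuming speed is derived
# from a gyroscope/accelerometer or pedometer coupled to the user device.
def keep_clear_depth(speed_m_s: float,
                     base_depth_m: float = 0.5,
                     reaction_time_s: float = 1.5) -> float:
    """Depth of the content-free ground region in front of the user.

    A faster-moving user gets a deeper clear region, so obstacles and
    tripping hazards stay visible for at least `reaction_time_s`.
    """
    return base_depth_m + max(0.0, speed_m_s) * reaction_time_s

print(keep_clear_depth(0.0))  # standing still -> 0.5 m
print(keep_clear_depth(1.4))  # typical walking speed -> 2.6 m
```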
Although the user device 3 is depicted in Figures 2a-2c as an AR or VR headset, it is to be understood that this is merely exemplary. In some embodiments, the user device 3 may be a camera, an electronic device such as a smartphone, a portable computer, a tablet, smart glasses, a helmet, or the like.
Additionally or alternatively, nodes may be grouped together into one or more groups. A first group of nodes may be a collection of nodes which are spatially arranged against the geometry of the surroundings of the user, and which can change based on time, location or use. Certain applications or channels may be capable of being hosted by nodes of the first group of nodes. That is, a first application group may comprise applications and/or channels that are compatible with being hosted in a spatially arranged manner in accordance with the first group of nodes, e.g. arranged against the geometry of the surroundings. Applications and/or channels in the first application group may be used simultaneously and may be pinned and operated in a node which is positioned based on the surroundings of the user. For example, if a user closes an application from the first application group which is currently being executed on a node 4a, the user may open another application from the first application group on said node 4a. Applications and/or content being displayed or executed on other nodes 4b, 4c, 4d may persist unchanged when the other application is executed.
Other applications or channels, e.g. applications and/or channels in a second application group, may be hosted by nodes which are not spatially arranged against the geometry of the surroundings of the user. That is, applications and/or channels of the second application group are configured to take over, or substantially take over, a user's surroundings. For example, applications and/or channels of the second application group may require 50%, 60%, 70%, 80%, 90% or 100% of available display space. In some embodiments, executing an application and/or channel of the second application group may close any other currently executing applications or channels. Nodes of a third group of nodes may be considered portals. Nodes of the third group of nodes may display a representation of an application, which, when selected, is configured to launch said application in the user's digital space. That is, the representation of the application may persist in a user's digital space, although when executed, the application represented may, for example, close other applications which are currently running. Launching the application may refer to executing an application outside of a group of nodes which is currently being displayed - for example, if the first group of nodes is displayed and a representation of an application not contained in the first application group is launched, any currently running applications and/or channels of the first application group may be closed and the represented application may be executed.
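The behaviour of the three groups can be sketched with a small session model. The rule that immersive and portal-launched applications close everything else follows the description above; the enum and class names are illustrative assumptions.

```python
# A minimal sketch of the three application-group behaviours.
from enum import Enum

class Group(Enum):
    SPATIAL = 1    # first group: pinned to nodes, runs alongside others
    IMMERSIVE = 2  # second group: takes over the user's surroundings
    PORTAL = 3     # third group: persistent representation that launches an app

class Session:
    def __init__(self) -> None:
        self.running: dict[int, tuple[str, Group]] = {}  # node_id -> (app, group)

    def launch(self, node_id: int, app: str, group: Group) -> None:
        if group is Group.SPATIAL:
            # Replaces only the app on this node; other nodes persist unchanged.
            self.running[node_id] = (app, group)
        else:
            # Immersive apps, and apps launched via a portal, close the rest.
            self.running.clear()
            self.running[node_id] = (app, group)

s = Session()
s.launch(0, "calendar", Group.SPATIAL)
s.launch(1, "map", Group.SPATIAL)
s.launch(0, "email", Group.SPATIAL)        # node 0 swaps apps; node 1 persists
print(s.running)
s.launch(2, "vr-cinema", Group.IMMERSIVE)  # takes over: others are closed
print(s.running)
```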
The user may change the positions of nodes 4, for example through a gesture, voice or any other input means. For example, a user may adjust the height, size or shape of a node 4. The user may adjust the positions of all nodes 4 simultaneously, for example moving all nodes up by some distance, or individually, e.g. independently of each other, or in groups. For example, a user may select multiple nodes 4 to move. The user may switch positions of nodes 4, e.g. nodes 4a and 4b, or may switch channels hosted on nodes 4, e.g. nodes 4a and 4b. The nodes 4 may be at different heights or be provided at different apparent distances from the user. That is, nodes 4a and 4b (for example) may be displayed such that they appear to be at different distances from the user. One node may be within arm's reach, for example, whilst another may be outside arm's reach.
Figures 3a and 3b illustrate 3-dimensional graphical user interfaces (3D GUIs) in a digital space according to embodiments of the disclosure.
As depicted in Figure 3a, a user may be shown, via the user device 3, one or more nodes 4. Nodes 4a-4f may be displayed at a far-field distance in a digital space; that is, nodes 4a-4f may be displayed at a first apparent distance from the user. The first apparent distance may be outside of the user's arm's reach.
Node 7 may be one of the plurality of nodes 4. Node 7 may be referred to as a digital control unit. That is, node 7 may host a channel which operates as a control unit, such as a virtual kiosk or dock. Throughout this description, the terms "node 7" and "digital control unit 7" may be used interchangeably. Digital control unit 7 may be displayed at a second apparent distance from the user, for example, in a near field distance from the user. For example, the second apparent distance may be less than the first apparent distance. E.g. the digital control unit 7 may appear closer to the user than one or more of the plurality of nodes 4.
The user may provide inputs to the digital control unit 7 to control channels displayed on one or more of the plurality of nodes 4. The inputs may be gesture-based, voice-based, eye-gaze-based, motion-based or the like. The user may control one or more nodes 4 from the digital control unit 7. That is, the user may control the channels and/or the content being displayed or hosted on one or more of the nodes 4, by interacting with the digital control unit 7 (e.g. by performing inputs (e.g. speech inputs, gestures, etc.) at the digital control unit 7). The digital control unit 7 may be in communication with one or more nodes 4. The content of one or more of the nodes 4 may be controlled via the digital control unit 7. For example, the user may select and/or interact with content being displayed on one or more nodes 4 by interacting with the digital control unit 7. The digital control unit 7 may be in bidirectional communication with one or more of the nodes 4. For example, a user may interact with the digital control unit 7 to affect one or more of the content, position, user interface, size or arrangement of one or more of the nodes 4. Similarly, one or more of the nodes 4 may affect the display, type, form, size, location and/or position of the digital control unit 7. For example, content being displayed on one or more of the nodes 4, and/or user interaction with said content, may be used to determine display parameters (e.g. size, position, type, form, shape, distance, etc.) of the digital control unit 7. This is illustrated by the arrows in Figures 3a and 3b.
As illustrated in Figure 3b, the nodes may be positioned up to 360 degrees around the user (e.g. around the user device 3). One or more nodes 4, e.g. nodes 4a, 4b, 4c, 4d, 4f, 4g, may be communicatively coupled with the digital control unit 7. That is, one or more nodes 4, e.g. 4a-g, may engage in bidirectional communication with the digital control unit 7. The one or more nodes 4, e.g. nodes 4a-g, may communicate with the digital control unit 7 independently from each other.
The digital control unit 7 may have a particular type. A type of digital control unit 7 may have an associated shape, display, size and/or display position, and/or may have associated recognized inputs.
In some embodiments, the type of digital control unit 7 to be displayed may be based at least in part on content displayed on one or more of the nodes 4, a user's location, the user's current activity, a time of day, user scheduling information (e.g. from a user's calendar), or the like. In some embodiments, the type of digital control unit 7 may be selected by the user, for example by a voice command or gesture. In some examples, a type of digital control unit 7 may be selected based on a task the user is performing, or based on a user query. For example, a user asking for train ticket information may be presented with a digital control unit 7 in the form of a ticket kiosk. In another example, a user may have a calendar appointment for an online meeting, and the digital control unit 7 may be presented as a conference interface.
The actions associated with user inputs may be determined based on the type of digital control unit 7. For example, the same input gesture (e.g. pressing a virtual button on the digital control unit 7) may elicit different responses based on the type of digital control unit 7. If the digital control unit 7 is a ticket kiosk, for example, a virtual button press (e.g. a pointing gesture) may select a train ticket to purchase. The same gesture, if received on a digital control unit 7 of another type (such as a conference interface), may call a contact from the user's address book.
In some embodiments, one or more user inputs that trigger an action on one type of digital control unit 7 may not be recognized when the digital control unit 7 is of a different type.
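Type-dependent input interpretation can be modelled as a dispatch table keyed on (control unit type, input). The ticket kiosk and conference interface actions mirror the examples above; the table itself is an illustrative assumption.

```python
# A minimal sketch of type-dependent input interpretation.
ACTIONS: dict[tuple[str, str], str] = {
    ("ticket kiosk", "virtual_button_press"): "select train ticket to purchase",
    ("conference interface", "virtual_button_press"): "call contact from address book",
    # No entry for ("ticket kiosk", "wave"): that input is simply not recognized.
}

def interpret(control_unit_type: str, user_input: str) -> str | None:
    """Map the same gesture to different actions depending on the type of
    digital control unit; unrecognized inputs trigger no action."""
    return ACTIONS.get((control_unit_type, user_input))

print(interpret("ticket kiosk", "virtual_button_press"))
print(interpret("conference interface", "virtual_button_press"))
print(interpret("ticket kiosk", "wave"))  # None: not recognized for this type
```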
Each node 4 may display content via a user interface. The user interface may be determined by the system. For example, a user interface may be selected from a plurality of available user interfaces based at least in part on one or more of: a data stream from which content is obtained, the content to be displayed on a particular node, content being displayed on one or more other nodes, a type of content to be displayed on a particular node, a user selection, user accessibility information, or the like. In some embodiments, a user interface for a particular node may be based, at least in part, on the size, position, distance and/or shape of the node on which content is to be displayed using said user interface.
In some embodiments, a user interface displayed on one node may be associated with a user interface displayed on another node. For example, a user interface on a first node 4a may be used to control or affect one or more of content displayed on another node 4b and/or a user interface displayed on another node 4b.
In some embodiments, a user interface for digital control unit 7 may be selected, and/or may be automatically determined.
Figure 4 illustrates an interaction with an augmented reality system according to embodiments of the disclosure from a user's perspective. As shown in Figure 4, a user using a user device 3 is presented with one or more nodes 4a, 4b, which may appear at a distance from the user. The nodes 4a, 4b may appear to be behind and/or in front of features of the real environment. As illustrated for example, the nodes 4a, 4b may be displayed to be behind a workstation but in front of a landscape. As indicated previously, the nodes 4a, 4b may be spatially anchored, or head-locked.
Although Figure 4 shows two nodes 4a, 4b, it is to be understood that this is merely exemplary. Any number of nodes may be provided, in any arrangement up to 360 degrees around the user.
Nodes 4a and 4b may be completely independent. That is, content and/or user interfaces displayed on each of the nodes 4a, 4b may be independent from each other. Although the nodes 4a, 4b are illustrated as equally distant from the user, it is to be understood that this is merely exemplary. Nodes 4a, 4b may be displayed at different apparent distances from the user, at different heights, and may be of different sizes, shapes, etc.
Figure 5 shows a schematic data flow diagram for an augmented reality system 100 according to embodiments of the disclosure. The illustrated components of system 100 may be implemented in software housed on a single electronic device and/or in a distributed system.
System 100 comprises a system control module 110, an input/output module 120, one or more application programming interface (API) hubs 130-1, 130-2, a media module 140, a visualization management module 150, a workflow manager 160, a renderer 170, a client manager 180, storage 190, and a client device to system API module 135.
The input/output module 120 may be configured to receive input from one or more sensors 200.
Sensors 200 may comprise one or more cameras, microphone(s), motion sensor(s), gyroscope(s), accelerometer(s), health sensors such as a heart rate monitor, blood pressure monitor, smartwatch, pulse oximeter, scale or thermometer, GPS sensor(s), and/or pressure sensor(s), smartphone pedometer, altimeter, and the like.
One or more sensors 200 may be integrated in a user device 3. Additionally or alternatively, one or more sensors 200 may be external to the user device 3 and/or to the system 100. The one or more sensors 200 may be communicatively coupled to one or both of the user device 3 or the system 100. Data from the one or more sensors 200 may be obtained via Android AOSP API(s), cloud IoT via web API or Office 365 / Azure, via a smartphone API interface, and/or via near-field communication, Bluetooth, WiFi, or the like. Sensor data may be used, stored internally, stored in the cloud, and/or stored by a content platform. Sensor data may be distributed. As such, post processing and/or predictive processing may be delivered in system 100 (e.g. in template screen module 152).
Data from the one or more sensors 200 may be received via the input/output module 120 and transmitted to one or more of the API hubs 130 and to the system control module 110. Data from a sensor database 190-1 may be accessed based on the received sensor data. Data from a sensor database 190-1 may comprise multi-channel streams of data recording, for example: heart rate, body temperature, barometric pressure, magnetometer, gyroscope, accelerometer, blood pressure, oxy-haemoglobin, deoxy-haemoglobin, total haemoglobin concentration change, UV exposure, electrodermal activity, location, hydration, electro-cardiograph, orientation, proximity, bioimpedance, altimeter, ambient light, oximetry, camera feed, and the like. The sensor data may be buffered. In some embodiments, the sensor data may comprise stored channels of information from any sensor type or source, including (but not limited to) device-integrated, cloud IoT, connectivity attached, or any attached device, such as a smartphone. Additionally or alternatively, data from the sensors 200 may be stored in the sensor database 190-1. The sensor database 190-1 may be local, remote, or both.
The system 100 may comprise one or more API hubs 130. In some embodiments, the system 100 may comprise a first API hub 130-1, for example a 2D API hub, and a second API hub 130-2, for example a 3D API hub. An API hub may be used to access, obtain, retrieve and/or select APIs from third-party sources, such as Azure, Android OS, OS and the like. For example, a 2D API hub may be used to access APIs relating to two-dimensional data. A 3D API hub may similarly be used to access APIs relating to three-dimensional data. The 3D API hub may bridge from Android AOSP to a deployment API, such as Microsoft 365.
The one or more API hubs 130 may additionally access content from one or more data streams 300. The one or more data streams 300 may be provided by one or more third parties.
The system control module 110 controls when and/or how content from one or more data streams 300 is forwarded for visualization. In some embodiments, the system control module 110 comprises subcomponents, such as a concurrency manager 112, an arbitration manager 114, a power manager 116, a bandwidth manager 118 and a state manager 119. In some embodiments, one or more of these subcomponents may be combined. For example, the concurrency manager and the arbitration manager may form a single subcomponent, and the power manager and the bandwidth manager may form another subcomponent.
The concurrency manager 112 is configured to control the system 100, including when content is delivered to each user device 3, the node 4 at which particular content is displayed, quality levels of the content, a channel for presenting the content and the like. That is, the concurrency manager 112 may control each service, channel and node provided to each of one or more user devices 3. The concurrency manager 112 may use device and/or capability information for one or more user devices 3. Device and/or capability information may be provided by the one or more devices 3, for example via sensors 200 and/or via a client manager 180. For example, a device ID for a user device 3 may be obtained and associated device information may be accessed from a database or from local memory.
Device and/or capability information may comprise one or more of: network bandwidth available to each of one or more user devices 3, battery life of each of one or more user devices 3, compute/graphics load of each of one or more user devices 3, connectivity subsystem load of each of one or more user devices 3, sensor input traffic for each of one or more user devices 3, and/or rendering priorities associated with one or more user devices 3. Capability information may, in some embodiments, comprise an indication of network bandwidth available to the system as a whole.
The concurrency manager 112 may control and/or optimize output from the system 100. The outputs from the system may comprise outputs from the system 100 to the one or more user devices 3 and/or outputs from the system to one or more peripheral devices, such as a speaker or display unit. For example, the concurrency manager 112 may use the device and/or capability information to control and/or optimize one or more of: timing of information (e.g. content) being delivered to one or more user devices 3, a rendering quality level of content being delivered to one or more user devices 3 and/or to one or more nodes 4 provided to said one or more user devices 3, battery usage of one or more user devices 3, quality of service (QoS) for one or more user devices 3, a concurrent feature set, and a choice and number of avatar, skeletal, motion-capture, and hologram levels rendered, for one or more user devices 3. A concurrent feature set may include information on node data complexity. Data corresponding to content may be multilayered. For example, displaying health information content (such as will be described with reference to Figure 7) may comprise data from multiple sources and, in some cases, a map overlay indicating a geographical location of a patient or user. In some examples, such as in forensic science, the content data may comprise multilayered data including, for example, 3D scans of a crime scene, of objects found at the crime scene, and the like. That is, content may comprise data from multiple sources which is received concurrently (or made available concurrently) and/or may be displayed concurrently. In some embodiments, concurrent data may be stored. In some embodiments, not all data that is available and/or stored need be viewed, in order to save device processing load. Stored data may be viewed later.
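As a rough sketch of capability-driven control, the concurrency manager might map reported device information to a rendering tier, as below. The thresholds and tier names are illustrative assumptions, not values from the disclosure.

```python
# A minimal sketch of per-device rendering control from capability information.
from dataclasses import dataclass

@dataclass
class DeviceCapability:
    battery_pct: float
    bandwidth_mbps: float
    compute_load: float  # 0.0 (idle) .. 1.0 (saturated)

def rendering_quality(cap: DeviceCapability) -> str:
    """Pick a per-device rendering tier from reported capability information."""
    if cap.battery_pct < 15 or cap.bandwidth_mbps < 2 or cap.compute_load > 0.9:
        return "low"     # e.g. static avatar, reduced refresh
    if cap.battery_pct < 40 or cap.bandwidth_mbps < 10 or cap.compute_load > 0.6:
        return "medium"  # e.g. skeletal motion-capture
    return "high"        # e.g. full hologram level

print(rendering_quality(DeviceCapability(80, 50, 0.2)))  # high
print(rendering_quality(DeviceCapability(30, 50, 0.2)))  # medium
print(rendering_quality(DeviceCapability(10, 50, 0.2)))  # low
```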
The concurrency manager 112 may communicate with the arbitration manager 114 to control each channel and node and rendering of features, such as utilities, media, data, analytics and streaming options, for example a choice of avatar, skeletal or motion-capture, hologram levels and the like, depending on device and/or capability information such as available bandwidth, latency traffic, and/or load on each system resource, for one or more user devices 3.
The arbitration manager 114 may control first and third party solutions access to system utilities, for example based on input and/or data from the concurrency manager. The arbitration manager 114 may be configured to manage traffic for controlling each data source. That is, the concurrency manager 112 may determine what to control (e.g. which data sources), whilst the arbitration manager 114 may determine how to control the determined data, and may be configured to physically control data packets and the like.
The power manager 116 may receive priority information, e.g. an indication of a priority of each content item, an indication of a priority of a channel or node 4, and/or an indication of a priority of a user device 3, from the concurrency manager 112. The power manager 116 may control the overall power management strategy and/or optimization of the system 100. That is, the power manager 116 is configured to control power consumption, battery life and load on the system 100 and/or on the user devices 3. For example, an amount of 3D data may be stored and processed on local servers for point cloud information used in digital twins. Said 3D data may be stored to varying degrees of accuracy, creating power and bandwidth variations.
Content and 3D models may be recorded, presented and shared in a point cloud format, such as .LAS or .XYZ. The geolocation information and ownership of the virtual objects may be registered under a unique reference ID, enabling layers of digital information to be analyzed and visualized in reference to a specific physical location. This processing may be run on a point cloud correlation and metadata engine module 142.
The bandwidth manager 118 may monitor data traffic, connectivity interfaces, and/or available bandwidth and/or latencies associated with the system 100 and/or one or more user devices 3. The bandwidth manager 118 may control system capabilities such as utilities, media broadcasting and streaming capabilities (including choice of avatar, skeletal or motion-capture, and levels of hologram), power and thermal management, and battery life of one or more user devices 3 in order to prioritize the system 100, thereby optimizing the user experience and QoS.
In some embodiments, a look-up table may be used to determine a subsystem bandwidth budget and estimated utilization for a person-based suite of services and/or utilities.
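Such a look-up table might resemble the following sketch; the service names and per-service budgets are illustrative assumptions, as the disclosure does not publish the table.

```python
# A minimal sketch of a subsystem bandwidth budget look-up table (Mbit/s).
BANDWIDTH_BUDGET_MBPS: dict[str, float] = {
    "video_channel": 8.0,
    "audio_channel": 0.5,
    "hologram_stream": 25.0,
    "calendar_utility": 0.1,
}

def estimated_utilization(active_services: list[str], available_mbps: float) -> float:
    """Fraction of the available link consumed by a user's suite of services."""
    needed = sum(BANDWIDTH_BUDGET_MBPS.get(s, 0.0) for s in active_services)
    return needed / available_mbps

print(estimated_utilization(["video_channel", "audio_channel"], 50.0))  # 0.17
```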
The state manager 119 may be configured to provide a status. For example, on boot-up, the status provided by the state manager 119 may correspond to an initial state indicated by workflow manager 160 (e.g. with an initial setting for node positions, default layouts, user interfaces, arrangement of content, and the like, for example based on user access settings, subscriptions, etc.). After boot-up, e.g. when the system is live (i.e. in operation), the status provided by the state manager 119 may reflect a refresh process state.
The state manager 119 may obtain device information from user device(s) 3 and/or sensor information (e.g. sensor inputs) from the one or more sensors 200 via the input/output module 120. Device information and/or sensor information may be used by the state manager 119 to determine, for example, a layout or arrangement of content, nodes 4 or the like. The state manager 119 may further obtain information from the workflow manager 160, indicating user accessibility information, user access information (e.g. geographical restrictions, subscription information, account settings such as age restrictions and the like).
The system control module 110, and its subcomponents, may receive data from other components of the system 100, such as workflow manager 160, input/output module 120, and/or one or more API hubs 130. As will be described presently, workflow manager 160 may transmit device and/or user information to the system control module 110, such as user credential and access information, user accessibility information, suite and channel information and analytic data. Such data may be used in the prioritization and/or arbitration of content for a particular user device 3, and/or for a particular user using user device 3.
Media module 140 is configured to access, obtain and/or retrieve media content from storage or memory, such as storage 190. Content to be accessed, obtained and/or retrieved may be selected by a user, or determined by the system 100. The media module 140 may be configured to process media content retrieved from storage 190. The media module 140 may be configured to process 2D media content and/or 3D media content. Although depicted as a single component, it is to be understood that this is merely exemplary; the media module 140 may be distributed. For example, a first media module may handle 2D content and a second media module may handle 3D content. Media module 140 may incorporate core logic and a user interface, in order to offer the capability to view web player content and/or have universal playback controls, including a 2D or 3D video player.
In some embodiments, the system 100 may use higher-complexity data, such as point clouds (clouds of point data). In some embodiments, the media module 140 may further comprise a correlation and metadata module 142, which includes a correlation and metadata configuration engine. The correlation and metadata module 142 may be configured to interpret point cloud data.
Media module 140 is configured to transmit media content data from memory and/or storage 190 to the visualization management module 150. In some embodiments, the media module 140 is configured to transmit media content data from one or more data streams 300, e.g. via the one or more API hubs 130, to the visualization management module 150. In some embodiments, the media module 140 is configured to transmit media content data from local and/or remote storage 190 to the visualization management module 150. The media content data may comprise multimedia data, such as graphical data, video data, audio data, textual data, image data, and the like, and/or metadata describing the media data. The media module 140 may manage the digital rights of content, and may additionally or alternatively handle any restrictions, such as region-locking or geofencing. The media module 140 may be based on, or interact with, a content delivery network (CDN) such as Azure. Media module 140 may be configured to enable user control of data; that is, the user device 3 may communicate with the system 100 to select, edit, delete, or otherwise interact with data stored within the system 100 and/or data from external data sources.
The visualization management module 150 receives control information from the system control module 110 and media data from the media module 140, and optionally workflow information from a workflow manager 160. Workflow information may comprise information indicating default node(s) and/or node positions to be displayed to the user, and/or an ordering or arrangement of said nodes.
Control information from the system control module 110 may comprise concurrency information indicating the content to be displayed at a particular node 4 for a particular user device 3. Concurrency information may provide timing information for the presentation of content at a particular node 4 for a particular user device 3. Control information may comprise information indicating the content to be displayed at each node of a plurality of nodes 4 for one or more user devices 3. Control information may provide information on how content is to be displayed at each of a plurality of nodes 4 for one or more user devices 3, such as a refresh frequency, a rendering quality, a resolution, and the like.
Control information may additionally or alternatively comprise node positioning information, indicating the size, shape, position in 3D digital space, etc. of each of a plurality of nodes 4, for each of one or more user devices 3.
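Taken together, the control information described above resembles a per-node record. The dataclass below is a minimal illustration of such a record; the field names and example values are assumptions, not terminology from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class NodeControlInfo:
    node_id: str
    device_id: str
    content_id: str
    start_time_s: float   # concurrency: when the content should appear
    refresh_hz: float     # how the content is to be displayed
    resolution: tuple     # (width, height) in pixels
    position: tuple       # node centre (x, y, z) in 3D digital space
    size_m: tuple         # node surface (width, height), apparent metres

info = NodeControlInfo(
    node_id="4a", device_id="headset-01", content_id="stream-300-1",
    start_time_s=12.0, refresh_hz=60.0, resolution=(1920, 1080),
    position=(0.0, 1.6, 2.5), size_m=(1.2, 0.7),
)
```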
Workflow information from the workflow manager 160 may comprise user information, such as user credentials and/or access or restrictions (e.g. related to a user's age, geolocation, account permissions, subscription, and the like). Workflow information from the workflow manager 160 may additionally or alternatively comprise accessibility information of a user and/or channel lookup information for a user device 3 (for example indicating the channels hosted on particular nodes 4 displayed to a user via user device 3).
Accessibility information may comprise any special user accessibility requirements, and may be stored in one or more databases or storage components 190. The accessibility information may comprise information on input modalities, audio, and/or rendering preferences, and may indicate use of text-to-speech and/or speech-to-text.
For example, for visual impairment, accessibility information may include data which identifies a user's visible field and which may be used to adjust the digital space to project digital information into the user's region of sight. Priority may be given to real world information, followed by data, and then analytics. A decision tree of information required by the user may be controlled, for example via voice command. Additionally or alternatively, spatial audio and text-to-speech may be used as priority input modalities. In some examples, a repetitive "follow-me" spatial audio message may be used to assist the vision impaired user towards a particular position or location. In some examples, a combination of inputs such as digital twin, mesh data, and/or object recognition may be used to identify obstructions and navigate the vision impaired user to the target destination.
In another example, for hearing impairment, accessibility information may include data for enabling visible navigation markers, e.g. breadcrumbs, to be used as an input modality. In some examples, speech-to-text may be primarily used.
In another example, for physical impairment, accessibility information may include hands-free and/or gesture-free input modalities.
In another example, for neurodivergent users such as autistic users, accessibility information may include content preferences that benefit the user. For example, calm or repetitive spatial audio, hologram-level communication, and/or repetitive content may be used. Moreover, the system 100 may use the accessibility information to position nodes and/or provide clear instructions to a user. For example, the system 100 may be configured to provide a message such as "{First name}, it is time for {lunch/games/program}", and may launch a utility or content on a node 4c, and/or may adjust said node 4c to a position directly in front of the user.
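The accessibility-driven prioritization of input and output modalities described in the preceding examples could be sketched as a simple policy function; the profile keys and modality names below are illustrative assumptions.

```python
def select_modalities(profile):
    """Return prioritized modalities for a user accessibility profile (sketch)."""
    if profile.get("visual_impairment"):
        # Spatial audio and text-to-speech take priority; real-world
        # information ranks above data, which ranks above analytics.
        return ["spatial_audio", "text_to_speech", "voice_command"]
    if profile.get("hearing_impairment"):
        return ["visual_markers", "speech_to_text"]
    if profile.get("physical_impairment"):
        return ["voice_command", "gaze"]  # hands-free / gesture-free
    return ["gesture", "voice_command", "gaze"]

print(select_modalities({"visual_impairment": True}))
```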
In some embodiments, the visualization management module 150 receives sensor data from the one or more sensors 200, via the system control unit 110 and/or the input/output module 120. Sensor data may be used by the visualization management module 150 to adjust and/or optimize how nodes 4 are positioned in digital space, and/or how content should be displayed to a user, for example a level of transparency, or background shading of digital assets to enhance clarity. In some embodiments, the visualization management module 150 may integrate sensor data from the Internet of Things (IoT), the Industrial Internet of Things (IIoT), and/or directly connected sensors (e.g. sensors 200) into layers of information for the user to visualize in 3D via a user device 3.
The visualization management module 150 is configured to determine content to be displayed at one or more nodes 4 for each of one or more user devices 3, using information as described above. The visualization management module 150 may be configured to enable 3D visualization of complex layered information, such as geospatial, or geolocated datapoints via Point Cloud formats, Computer Aided Design (CAD), building construction using BIM formats, or PLM data systems.
The visualization management module 150 may employ a node, or sub-node, template screen module 152 to load default, or user-saved, template 2D and 3D screens to represent the data infeed, such as calculation screens. The templates may also contain node layout options that may be selected by the user to present data according to their preferred layout.
The visualization management module 150 transmits instructions to the renderer 170, which renders the 2D and/or 3D content for display via the user device 3. The renderer instantiation used may depend on the make and model of the user device 3, and on whether the client is a web application, a desktop application, or a smartphone application. A standard set of renderers may be used to enable parallel streaming to multiple user devices 3. One instantiation of device-agnostic rendering could use game engine software, such as Unity or Unreal Engine. A system can incorporate multiple renderers to serve user device 3 product categories of varying complexity and cost, such as low-end, mid-tier and premium tiers, making system 100 device agnostic.
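A tier-based renderer selection of this kind might look like the sketch below. The renderer names, the `gpu_score` metric and the thresholds are invented for illustration; only the low/mid/premium tiering itself comes from the description above.

```python
RENDERERS = {
    "low":     "SimpleRasterRenderer",
    "mid":     "GameEngineRenderer",   # e.g. a Unity- or Unreal-based build
    "premium": "HighFidelityRenderer",
}

def select_renderer(device_info):
    """Pick a renderer tier from device capability information (sketch)."""
    score = device_info["gpu_score"]   # hypothetical capability metric
    if score < 30:
        tier = "low"
    elif score < 70:
        tier = "mid"
    else:
        tier = "premium"
    return RENDERERS[tier]

print(select_renderer({"gpu_score": 55}))  # GameEngineRenderer
```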
The workflow manager 160 is configured to retrieve user information from storage, such as a user database. Additionally, the workflow manager 160 may identify content being displayed on each of the nodes 4 for each of one or more user devices 3. The user information may be stored locally and/or remotely, e.g. in a cloud, or on a remote server. As described above, the user information may comprise user credentials, user access (e.g. user restriction) information, accessibility information, and the like. The user information may comprise node and channel information, e.g. node positioning information (e.g. from previous system use), channel information including mapping information between channels and nodes (e.g. indicating which channels are hosted on which nodes), analytic information, and the like. The workflow manager 160 may additionally access user activity information, such as scheduling information, historical information indicating how a user has previously interacted with system 100, e.g. corresponding user activity to time of day, day of week, geolocation, and the like. The workflow manager 160 may be configured to retrieve device information from storage, such as the user database and/or a device database. For example, a user database may comprise information on one or more user devices 3 associated with a particular user. A device database may comprise device capability information, e.g. processing power, integrated sensor capabilities, operating system information, connectivity information, wireless data protocol information and the like.
The system 100 may optionally comprise a client manager 180. Client manager 180 may receive data from the renderer 170, for example data indicating what content is to be displayed on which node, node positioning information, and the like. Said data may be saved in a database, e.g. user database 190-2, and used in a subsequent session, and/or may be associated with a particular user device 3. The client manager 180 may hold the clients' credentials and service access rights such as which data the user can view, or which services the user can access. The client manager may also incorporate analytics to improve the user experience. In some embodiments, accessibility information associated with the user of user device 3 may be stored and/or accessed by the client manager 180.
Storage 190 may be local storage and/or remote storage. Storage 190 may be distributed and may comprise multiple databases and/or storage sites. Data in storage 190 may be held in Edge storage and/or cloud storage. Preferably, data in storage 190 is held in Edge storage, as Edge storage is secure and robust. Moreover, Edge storage reduces latency and network data loads.
Storage 190 may comprise one or more databases, as described above. The databases may take one or more of the following forms:
3D database (3dDb): A 3dDb may provide a common 3D file repository allowing all channels and/or nodes to seamlessly share 3D data. The 3D data may be stored in a cloud-based, on-premises and/or hybrid database or file storage system integrated directly into the system 100. API access may expose stored 3D models, scenes and/or sequences, such as "Self-Authored Presentations" developed via on-device and/or web-based design tools, such as Stage. These provide the ability to create a scene and sequence, e.g. "save and immediately view on AR devices", and can also be used for UX prototyping and/or the development of demo data. The API information may be available on a dashboard or developer site, such as VERTX. Learning Management Systems (LMS), such as those used in education, may be directly integrated into the 3dDb, making system 100 LMS agnostic.
Volumetric Database: A volumetric database is a high bandwidth storage which may be used to store objects, 6D data recordings and 3D image capture data postprocessed by, e.g., a Vision Processing CPU/GPU. Input into the volumetric database may be via one or more API hubs 130, or may be directly interfaced to (a) 1st or 3rd party 3D image capture vision processing system, (b) 3rd party content and technology platforms delivering 3D information, such as 3D models, 3D sequences, 3D presentations, and the like. The volumetric database output may be controlled by the media module 140. In some embodiments, the volumetric database may be served by a file management system. In some embodiments, alternate latency requirements may be addressed by using alternate cloud providers, or using EDGE systems and/or on-premise servers.
Digital Twins and Mesh Database: Digital Twins are employed to create an augmented world location, or space, that can be accessed through user devices 3, e.g. AR/VR devices, for personal and shared experiences. The Digital Twin aids user experience and QoS, for example through fast localization and/or pre-scanned spaces. In some embodiments, user devices 3 (e.g. AR/VR devices) that can create a mesh of a user's physical location may export or save their mesh data to the Digital Twin and Mesh Database. The system 100 (e.g. system control unit 110) may control which Digital Twin experience is to be used, and may be used to select a scene the user wants to adopt. For example, available scenes in a factory setting may include one or more of: maintenance schedules, KPIs, dashboards, product throughput, inventory control, command and control, security screens, and productivity tools. These augmented solutions enable factory monitoring and control, and layout planning and optimization, anywhere in the world without the need for travel. This reduces time and improves productivity.
ISV .apk Database: 3rd party applications may be integrated into the system 100, e.g. in the workflow manager 160, either as an SDK (software development kit) for fully integrated system experiences, or using an .apk file format. The .apk files may be stored in the ISV (independent software vendor) .apk database for ease of management and change control, and so that an OTA MDM (over-the-air mobile device management) app manager system can push .apk updates to a new or existing subscriber-enrolled hardware device (e.g. user device 3), as supported. The ISV .apk database may be controlled by the system controls and sub-systems, such as the workflow manager 160, a credentials manager or the like, to determine a level of access per user, and 3D utilities. Where possible, the 3rd party ISV .apk may use resources of the system 100 and be unlocked for ease of use.
In some embodiments, to avoid the need to preload software such as an OTA (over-the-air) MDM (mobile device manager) or an instantiation of the developed system 100 and system 100 GUI (graphical user interface) on a user device 3, the system 100 may be installed via a user- or system-accessed web page in a pre-installed device web browser. During system boot, the system 100 may run a background installation and update through a system 100 diff .apk fetch from the ISV .apk database. In some embodiments, the system 100 may instantiate an application load through a user action via a web page. This loads the .apk into the user device 3 local storage. System 100 may automatically add the third party solution to the system 100 GUI and node application menu.
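The differential update flow might be summarized as in the sketch below. Every method name here (`installed_version`, `fetch_diff`, `apply_diff`, `add_to_node_menu`) is a hypothetical stand-in, not an API defined by the disclosure.

```python
def background_update(device, isv_db, app="system100"):
    """Illustrative background diff-update flow against the ISV .apk DB."""
    installed = device.installed_version(app)          # version on device
    latest = isv_db.latest_version(app)                # version in ISV .apk DB
    if installed == latest:
        return False                                   # nothing to do
    diff = isv_db.fetch_diff(app, installed, latest)   # differential .apk
    device.apply_diff(app, diff)                       # write to local storage
    device.add_to_node_menu(app)                       # expose in GUI/node menu
    return True
```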
Sensor Database: A sensor database may comprise, or may provide access to, loT hub data, and/or data associated with connectivity-attached sensor accessories.
2D database: A 2D database may store partner content and/or first party data.
In some embodiments, one or more user devices 3 may transmit data, AR/VR information and/or real-world camera images to other devices via the user device 3 API 135, or via other system 100 functions such as M2MN. The API type may vary according to the user device 3 make and model; such APIs may include the Android OS API, iOS APIs, or others as specified by the user device 3.
In some embodiments, one or more additional subcomponents (not shown) may be provided in system 100.
An analytics metering dashboard (AMD) sub-system may be provided in system 100. The AMD sub-system may measure, record and/or report data such as minutes of use and gaze data (e.g. statistics relating to the user's attention, such as the duration of gaze on one or more objects or people, or the speed at which a user's eyes (e.g. retina) follow a light or an object). Such information may be used for health data, such as concussion testing, and for system-derived actions. The AMD sub-system may additionally or alternatively measure, record and/or report user data, such as data relating to physical attributes of the user (e.g. for the purpose of system-controlled user privacy, security and/or authentication). User data may comprise retina size, pupil dilation, interpupillary distance, iris data, facial recognition, body recognition, object recognition, location recognition, room recognition and/or people recognition.
A security and authentication sub-system may be provided in system 100. The security and authentication sub-system may build a dataset to cross-reference user credential data with user data to determine a probability of authentication. The probability of authentication is used to determine access rights to system capabilities, such as payment and ticketing transactions, social messaging, and/or age-related content.
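One way to picture such a probability-based gate is sketched below. The weighted-average model, the weights and the thresholds are assumptions made purely for illustration; the disclosure does not prescribe a scoring model.

```python
def authentication_probability(credential_score, biometric_scores):
    """Combine credential and biometric evidence into one probability (sketch).

    A naive weighted average is used here purely for illustration.
    Scores are assumed to lie in [0, 1].
    """
    weights = {"iris": 0.4, "face": 0.3, "interpupillary": 0.3}
    biometric = sum(w * biometric_scores.get(k, 0.0) for k, w in weights.items())
    return 0.5 * credential_score + 0.5 * biometric

# Hypothetical per-capability thresholds on the authentication probability.
ACCESS_THRESHOLDS = {"payment": 0.95, "social_messaging": 0.80, "age_restricted": 0.90}

def has_access(capability, p_auth):
    return p_auth >= ACCESS_THRESHOLDS[capability]

p = authentication_probability(0.9, {"iris": 0.95, "face": 0.9})
print(has_access("social_messaging", p))  # True
```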
A credentials sub-system may be provided in system 100. The credentials sub-system may store, or access, user credentials stored in a security database. The information may be created during a customer on-boarding process, and may be updated via the client management module and/or via a client management web browser, either through an external device or through user device 3.
One or more of these sub-systems may be incorporated into one or more of the already described modules of system 100.
In some embodiments, system 100 may comprise a geospatial sub-system. The geospatial subsystem may be configured to integrate a geospatial standard, such as first or third party solutions. The geospatial subsystem may access and/or generate data for enabling placement of 3D content anywhere in the world. The geospatial sub-system may interact with and/or access one or more alternative localization technologies, such as beacons, mobile positioning, assisted-GPS, Wi-Fi triangulation, and the like. Data associated with one or more of these localization technologies may be accessed directly from devices where the information is available, such as a user device 3 or a separate user device, or via a client application, for example accessing an Android or iOS API. The data may then be transferred to the system 100.
In some embodiments, system 100 may comprise a Web Real-Time Communication (WebRTC) sub-system. The WebRTC sub-system may integrate one or multiple WebRTC instances directly into one or more nodes 4. This enables many video streams, web browser sessions or streaming events to be provided to a user via user device 3. A wide range of simplex and/or duplex instantiations may be included in the bandwidth budgets of system 100. An activity mode of each individual WebRTC unit may be controlled by the system control unit 110, e.g. by the power manager 116 and/or the bandwidth manager 118, for example based on the input of the concurrency manager 112.
In some embodiments, system 100 may comprise a many-to-many network (M2MN) video conferencing sub-system. An M2MN component may be configured to enable multi-point voice over IP (VoIP) calls. For example, client-side calls from multiple AR/VR devices (e.g. user devices 3) to multiple viewers on device formats such as PC, tablet, smartphone and the like are supported, in AR/VR-agnostic formats including assisted reality. The M2MN may comprise, or act as, an XR bridge to a professional video conferencing network integrated into a corporate tenant, for example deployed via PowerShell. The M2MN sub-system may enable a friends-and-family call, and/or allow a worker and co-worker to "see what I see" in both the digital space and the real world. This provides affordable shared experiences to which non-AR users may also have access. The client-side multi-device experience may be enabled through a client viewer APK. The client viewer APK may additionally provide enhanced services such as control and log-in via, e.g., a QR code or the like.
Figure 6 shows a data flow for an augmented reality system according to embodiments of the disclosure.
Inputs from one or more sensors 200 may be received via the input/output module 120. Device information, including device capability information, e.g. battery life, make and model, operating system, processing power, available bandwidth, connectivity information and the like, of a user device 3 may be received from a user device 3. Data received via the input/output module 120 may be transmitted to the state manager 119. In some embodiments, device information may be transmitted to one or more of the power and bandwidth managers 116, 118 as well as the state manager 119 of the system control unit 110.
Information from the power and bandwidth managers 116, 118 may be sent to the state manager 119, for example indicating node priority level(s), quality information (such as a maximum content resolution or the like) or similar. The state manager 119 may provide input to the media module 140, for example indicating information used for content selection. Said information for content selection may include geographical restrictions, age restrictions, access restrictions, accessibility information (e.g. an indication to not include content with strobing lights), content quality (e.g. resolution) and the like. That is, data output from the state manager 119 to the media module 140 may indicate, or be used to determine, what data sources (such as data streams 300 and/or from local or remote storage 190) are to be used and/or what content is to be selected from the data sources. The state manager 119 may provide input to the visualization management module 150, for example information on a node layout, positioning information of one or more nodes 4, information indicating the arrangement of nodes and/or user interfaces being used by channels on the one or more nodes 4 or the like.
Content data, for example from storage 190 and/or from one or more content sources 300 such as data streams, may be provided to the media module 140. In some embodiments, image capture data of the real world (e.g. around the user, or showing a scene remote from the user's location) may be provided, for example from a depth camera or the like.
Data from the system control unit 110 (e.g. one or more of the power and bandwidth managers 116, 118, the concurrency and arbitration managers 112, 114, and the state manager 119) may be provided to the workflow manager 160.
The workflow manager 160 may obtain data from the client manager 180, indicating user access information, user settings, user accessibility information and the like. The workflow manager 160 may output data such as visualization quality data, content selection data, concurrency data and the like to the visualization management module 150. The content selection may be based, at least in part, on user credential information, user access information, user location data, user subscription data and the like. Data output from the workflow manager 160 to the visualization management module 150 may indicate a node 4, user device 3, and/or time (or order) content is to be displayed. The data output from the workflow manager 160 to the visualization management module 150 may additionally or alternatively indicate quality information for displaying content, such as resolution, refresh rate, size, brightness, etc. The workflow manager 160 may also provide information regarding initial settings to the state manager 119. For example, the workflow manager 160 may transmit information indicating an initial arrangement of nodes and/or channels for a user, based at least in part on the user's account information (such as accessibility information, usage restrictions such as geographical restrictions, content restrictions, parental locks, age restrictions and the like) to the state manager 119. Said information may be used to initialize the digital space for a user upon start-up or boot-up.
The visualization management module 150 may output display instructions and content to the renderer 170 to render the content for display, such as 3D display or 2D display. The content may be rendered to be overlaid on a user's view of the real world. Data from the renderer 170 may be sent to the client manager 180 and to a user device 3 to display the content on at least one node 4. The content may be displayed at at least one node 4 positioned in digital space. The content may be displayed via a user input interface 400 on the at least one node 4. The user input interface 400 may be selected from a plurality of available user input interfaces, for example by user selection and/or based on the type of content to be displayed. The user input interface 400 may be used to determine how user inputs are to be interpreted. For example, each user input interface 400 may have a corresponding mapping between input gestures, voice commands, gaze detection, and the like, and associated actions.
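As a toy illustration of the per-interface mapping between inputs and actions described above; the gesture, voice and gaze identifiers here are invented for the example.

```python
# Each user input interface carries its own mapping from raw inputs to actions.
VIDEO_PLAYER_INTERFACE = {
    ("gesture", "swipe_left"):       "previous",
    ("gesture", "swipe_right"):      "next",
    ("voice",   "pause"):            "pause",
    ("gaze",    "dwell_play_button"): "play",
}

def interpret(interface, modality, raw_input):
    """Resolve a (modality, raw input) pair to an action; None if unmapped."""
    return interface.get((modality, raw_input))

print(interpret(VIDEO_PLAYER_INTERFACE, "voice", "pause"))  # pause
```

A ticket-kiosk interface would carry a different mapping for the same gestures, which is what makes the choice of user input interface determine how inputs are interpreted.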
The user input interface 400 on node 4 may output data to the client manager 180, for example information indicating how the user interacts with the content displayed on the user input interface 400. That is, information indicating a user input provided to the user input interface 400 may be sent to the client manager 180. Other user information, such as minutes of use, gaze detection, physiological information and the like may also be sent to the client manager.
The client manager 180 may output data (e.g. instructions) to a user input interface 410 of another node, such as a digital control unit 7. The digital control unit 7 may be displayed to the user via the user device 3 in digital space. The digital control unit 7 may be displayed at a different apparent distance from the user than the apparent distance at which one or more nodes 4 are displayed. In some embodiments, however, the digital control unit 7 may be displayed at the same apparent distance from the user as the apparent distance at which one or more nodes 4 are displayed.
Similarly to the user input interface 400, the user input interface 410 may be selected from a plurality of user input interfaces, for example based on user selection and/or based on the content to be displayed at the digital control unit 7. User inputs received at the user input interface 410 (e.g. by user device 3 and/or by one or more sensors 200 indicating a user input made to the user input interface 410) may be sent to the client manager 180.
User input data indicating user inputs to one or more of the user input interfaces 400, 410 may be used to control the content to be displayed on node 4 or digital control unit 7 respectively, allowing the user to interact with the content, remove content, change content being displayed and the like. Data from the client manager 180, which may include, or be based on, user input data, may be sent to the workflow manager 160 which then outputs instructions and data to the visualization management module 150 and the state manager 119 as described above.
Figure 7 shows an example application for an augmented reality system according to embodiments of the disclosure.
Figure 7 relates to an example application for the system 100 in which a first responder 70 is provided with augmented reality. An on-site first responder 70 may be provided with an AR/VR device 3, such as smart glasses, an AR/VR headset, or a mobile phone. In some examples, a patient 71 may be fitted with one or more health sensors (not shown), such as a heart rate monitor, a pulse oximeter, a blood pressure monitor, ECG/EKG sensors, brain sensors, thermometers and the like. The first responder's device 3 may be provided, in real time, with the data being recorded by the sensors that are directly on the patient's body, e.g. via system 100. In some examples, additional contextual information is obtained by using IoT data and/or an established semantic ontology. The first responder's device 3 may comprise a camera or similar sensor configured to perform eye tracking of the patient 71 and to obtain gaze data of the patient 71, for example in order to provide additional information on the responsiveness, focus and attention of the patient 71.
The first responder's device 3 may be used to assist in a criticality assessment. For example, a concussion testing sequence may be created using random field of vision ophthalmology light tests. These tests may supersede less accurate and less objective field tests such as following a finger, focusing on an object, etc. The random field of vision ophthalmology light tests may provide a deterministic health score, which can be compared with previous results, and may be provided to health care providers (such as a hospital or a doctor 72), enabling prompt and consistent treatment. These tests may also be repeated in the field to determine whether there is any change in the patient's condition.
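A deterministic score of this kind could be computed roughly as below. The trial representation and the scoring rule are illustrative assumptions; the description only requires that the score be deterministic and comparable across repeated tests.

```python
def deterministic_health_score(trials):
    """Score a random field-of-vision light test (sketch).

    Each trial is (reaction_time_s, fixated), where `fixated` indicates
    the patient's gaze reached the light within the allowed time.
    Returns a score on a 0..100 scale; higher is better.
    """
    if not trials:
        return 0.0
    hit_rate = sum(1 for _, fixated in trials if fixated) / len(trials)
    times = [t for t, fixated in trials if fixated]
    mean_rt = sum(times) / len(times) if times else float("inf")
    # Faster, more reliable fixation -> higher score (0.5 s reference time).
    return round(100.0 * hit_rate * min(1.0, 0.5 / mean_rt), 1)

print(deterministic_health_score([(0.4, True), (0.5, True), (0.9, False)]))
```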
The first responder's device 3 may, additionally or alternatively, provide guided assistance to the first responder 70. For example, the first responder's view 80 (e.g. both their view of the real world 82 (i.e. the patient) and their view of digital space 84, illustrated in Figure 7 as an overlay indicating a patient's heart rate and blood pressure) may be transmitted to a remote location, for example to the location of a more highly trained or more experienced health care provider 72. The remote health care provider 72 may observe the first responder's view 80 via their own AR/VR device 3a, on a computer screen, or through any audiovisual means. The remote health care provider 72 may then provide detailed and guided instructions to the first responder 70 via the first responder's device 3 to further improve the immediate care delivered to the patient 71. The guided instructions may be overlaid onto the first responder's view 80 of the patient 71, and the remote health care provider 72 may observe as the first responder 70 performs an examination or procedure in real time, and may offer advice, directions or corrections.
The system 100 may, in some embodiments, trigger an automatic action, for example based on the received patient data. For example, if a patient's deterministic health score is below a threshold, an ambulance may be called automatically (e.g. depicted in Figure 7 as 73) to the location of the patient 71. In some examples, the data collected by the sensors and/or the device 3 may be sent to paramedics 73 or to a hospital (e.g. remote health care provider 72) to ensure that the correct equipment and help is delivered as quickly as possible, and to ensure that an ambulance takes the patient to a suitable hospital (for example a specialist cardiac center, a neurology center, a spinal center, etc).
The system 100 may connect with a tele-health system 73. The data collected by sensors and/or the first responder's device 3 may be saved and/or sent to a subsequent health care provider 72, which may reduce unnecessary duplication of tests and provide important information as quickly as possible.
Figures 8a-8f show flowcharts of methods for an augmented reality system according to embodiments of the disclosure.
Figure 8a illustrates a method 800 according to embodiments of the disclosure. The method 800 may be performed by system 100. Method 800 comprises, in an operation entitled "DETERMINING DIGITAL CONTROL UNIT", determining 810 a digital control unit 7 to display in a digital space. As described previously, the digital control unit 7 may have a type. The type of digital control unit 7 may be used to interpret user inputs and may be used to determine a shape, size and/or position to display the digital control unit 7 in digital space. Determining a type of digital control unit 7 will be described in more detail with reference to Fig. 9c. The digital control unit 7 may be determined by the system control module 110, for example using inputs from sensors 200 received via input/output module 120, and/or using data accessed or stored by the client manager 180.
Method 800 further comprises, in an operation entitled "DISPLAYING DIGITAL CONTROL UNIT", displaying 820 the digital control unit 7 in the digital space. The digital control unit 7 may be displayed at a predetermined position in the digital space, for example based on the type of digital control unit 7. The digital control unit 7 may be displayed via a user's augmented reality user device 3, which is communicatively coupled with the system 100.
Method 800 further comprises, in an operation entitled "DISPLAYING PLURALITY OF NODES", displaying 830 a plurality of nodes 4 in the digital space via the user's augmented reality user device 3. Each of the plurality of nodes 4 may be independently controllable. That is, the workflow module 160 may identify content being displayed on each of the plurality of nodes 4, and the concurrency manager 112 may be used to independently control content being sent to, e.g., one node 4a of the plurality of nodes 4. In some embodiments, the plurality of nodes 4 may be displayed in a far field distance from the user. That is, the apparent distance of the plurality of nodes 4 from the user in digital space may be out of arm's reach of the user. In some embodiments, the digital control unit 7 may be displayed in a near field reaching distance from the user. That is, the apparent distance of the digital control unit 7 may be displayed in digital space within arm's reach of the user. In some embodiments, the plurality of nodes may be displayed in a far field distance from the user while the digital control unit 7 may be displayed in a near field distance from the user. For example, the digital control unit 7 and the plurality of nodes 4 may be displayed at different apparent distances from the user. In some embodiments, displaying the plurality of nodes may comprise obtaining position information for the plurality of nodes 4, for example from the visualization management module 150. The method 800 may optionally further comprise accessing user accessibility information. The method 800 may, in some embodiments, comprise determining position information for each of the plurality of nodes 4 based at least partially on the user accessibility information. In some embodiments, obtaining position information for the plurality of nodes may comprise obtaining, from one or more sensors, a real world video from a perspective of the user device, and determining position information for each of the plurality of nodes 4 based at least partially on the real world view. In some embodiments, obtaining position information of the plurality of nodes 4 may comprise determining a head level of the user and determining position information for each of the plurality of nodes based at least partially on the determined head level of the user.
Method 800 further comprises, in an operation entitled "RECEIVING USER INPUT", receiving 840 a user input from the user at the digital control unit 7. The user input may be, for example, a gesture-based input and/or a voice command, or any user input as described with reference to Figs. 1 through 7. The user input may be interpreted based on the type of digital control unit 7. The user input may be received and/or processed by system 100, e.g. by client manager 180.
Method 800 may optionally further comprise, in an operation entitled "RECEIVING PLURALITY OF DATA INPUT STREAMS", receiving 850 a plurality of data input streams. The plurality of data input streams may be from a variety of data sources, such as first part or third party sources. The data input streams may provide content such as video data, audio data, image data, textual data or the like. In some embodiments, a single data input stream may be received. The plurality of data input streams may be received via the API hubs 130 and/or via the media module 140.
Method 800 may optionally further comprise, in an operation entitled "RECEIVING DEVICE INFORMATION", receiving 860 device information of the augmented reality user device 3. The device information may comprise information indicating a capability of the augmented reality user device. The device information may comprise at least one of: a battery life of the user device 3 (e.g. a remaining battery life of the user device 3), a bandwidth of the user device 3 (e.g. a network bandwidth available to the user device 3), a processing power of the user device 3. The device information may comprise at least one of: an operating system of the user device 3, communication protocols of the user device 3, a make and/or model of the user device 3 or the like.
Method 800 comprises, in an operation entitled "DETERMINING CONTENT TO BE DISPLAYED", determining 870 content to be displayed at at least one node (e.g. node 4a) of the plurality of nodes 4, based on the user input. For example, the user input may be used to determine the content to be displayed (e.g. to select content from one or more data input streams and/or from storage 190), and/or to determine how the content is to be displayed. For example, the user input may provide an indication of the node 4 at which the content is to be displayed, the quality of the content to be displayed (e.g. resolution, refresh rate, size, brightness, transparency and the like). In some embodiments, determining the content to be displayed comprises determining or selecting content from a plurality of data input streams. In some embodiments, determining the content to be displayed comprises selecting content at least partially based on the device information of the user device 3. The content to be displayed may be determined by the system 100, for example by the workflow manager 160 and/or the system control module 110 (e.g. the concurrency manager 112).
Method 800 comprises, in an operation entitled "DISPLAYING DETERMINED CONTENT", displaying 880 the determined content at the at least one node (e.g. node 4a) of the plurality of nodes 4 in the digital space. In some embodiments, displaying the determined content at the at least one node (e.g. node 4a) of the plurality of nodes 4 comprises displaying the determined content at one node (e.g. node 4a) whilst maintaining content being displayed at at least one other node (e.g. nodes 4b, 4c) of the plurality of nodes 4. In some embodiments, the digital space (e.g. the nodes 4 and/or the digital control unit 7) may be displayed as an overlay on a real world scene.
Method 800 may optionally comprise, in an operation entitled "CAPTURING VIEW OF DIGITAL SPACE", capturing 882 a view of the digital space from the user's augmented reality user device 3.
The view of the digital space may include the digital control unit 7 and/or the plurality of nodes 4. In some embodiments, the captured view may comprise a real world view from the perspective of the user.
Method 800 may optionally comprise, in an operation entitled "SENDING CAPTURED VIEW TO SECOND DEVICE", sending 884 the captured view to a second augmented reality user device 3a.
Method 800 may optionally comprise, in an operation entitled "SAVING CAPTURED VIEW', saving 886 the captured view. The captured view may be saved in storage 190, such as local storage and/or remote storage, and may be retrieved at a later time.
In some embodiments, both operations 884 and 886 may be performed. In some embodiments, either one of operations 884 and 886 may be performed.
Method 800 may optionally comprise, in an operation entitled "RECEIVING FURTHER USER INPUT", receiving 890 a further user input. The further user input may be directed at one node (e.g. node 4a) of the plurality of nodes 4. As described previously throughout this description, the user input may take any form, including but not limited to a gesture input, voice input, eye gaze input, and the like.
Method 800 may optionally further comprise, in an operation entitled "DETERMINING TYPE OF DIGITAL CONTROL UNIT", determining 892 a type of the digital control unit 7. The system 100 may determine a type of the digital control unit 7, for example based on contextual data such as location information (e.g. GPS data) of the user device 3, location recognition information (e.g. identifying a train station via object recognition or image recognition, and/or identifying a location by using a beacon, etc.), user scheduling information (e.g. from a user's calendar and/or user activity data), and the like. For example, upon recognizing that the user is in a train station, the digital control unit 7 may be of a ticket kiosk type, and may be configured to perform the functions or activities of a ticket kiosk (such as providing train timetables, station information, allowing purchase of train tickets, etc.). Determining the type of digital control unit may comprise determining, or selecting, a user interface for the digital control unit 7.
Method 800 may optionally further comprise, in an operation entitled "UPDATING DISPLAY OF DIGITAL CONTROL UNIT", updating the display of the digital control unit 7 based on the determined type of digital control unit 7. The display, e.g. the form, and/or a user interface of the digital control unit 7 may be based on the type of digital control unit 7. For example, if the digital control unit 7 is determined to be of the type of a ticket kiosk, the digital control unit 7 may be displayed as a ticket kiosk. That is, the user interface presented to the user on the digital control unit 7 may be associated with the determined type of digital control unit 7. The user may be displayed a user interface (e.g. an input means such as a touchpad or keypad) consistent with a ticket kiosk.
Method 800 may optionally further comprise, in an operation entitled "RECEIVING TRIGGER EVENT", receiving 910 a trigger event. The trigger event may comprise a user command, a gesture from the user, and/or a detection of user's gaze. The trigger event may be based on a user activity, user data, scheduling information of the user, and/or one or more preset settings. For example, the trigger event may be based on the detection of user movement.
Method 800 may optionally further comprise, in an operation entitled "CHANGING POSITION OF NODE(S)", changing 920 the position of at least one node 4a of the plurality of nodes 4 relative to the user in the digital space, in response to receiving the trigger event. For example, upon detecting that the user is moving forwards (e.g. as a trigger event), a node 4a directly in front of the user may be moved away from the area in front of the user's face, e.g. to provide the user with a dear view of where they are going.
In some embodiments, the method may further comprise receiving a far-field user input. A far-field user input may be a user input which is directed to a node or a digital control unit which is displayed in the far field, e.g. at an apparent distance from the user which is outside of arm's reach of the user. In some embodiments, the method may further comprise adjusting a position of one or more of the plurality of nodes 4 in the digital space relative to the user, based on the far-field user input.
In some embodiments, determining 870 content to be displayed at the at least one node of the plurality of nodes 4 may optionally comprise, in an operation entitled "DETERMINING A USER INTERFACE", determining 872, for each of the at least one node of the plurality of nodes 4, a user interface from a plurality of user interfaces as described with reference to Figures 1 to 6.
In some embodiments, determining 870 content to be displayed at the at least one node of the plurality of nodes 4 may further comprise, in an operation entitled "SELECTING CONTENT", selecting 874, for each of the at least one plurality of nodes 4, content from at least one content source, e.g. a data input stream and/or storage 190.
In some embodiments, determining 870 content to be displayed at the at least one node of the plurality of nodes 4 may further comprise, in an operation entitled "DETERMINING CONCURRENCY INFORMATION", determining 876 concurrency information for the selected content, as described with reference to Figures 5 and 6. Displaying 880 the determined content may comprise displaying the determined content based on the determined concurrency information.
The method may further comprise determining a user's line of sight. Upon detection of forward motion of the user, the method may further comprise hiding or removing any of the nodes or the digital control unit positioned in the user's line of sight.
Although described in a particular order, one or more of the operations of method 800 may be performed in other orders, simultaneously and/or substantially simultaneously.
System 100 may be provided on a computing device such as computing device 700, described with reference to Fig. 9. The method 800 may be carried out by a computing device 700.
Figure 9 illustrates a block diagram of one implementation of a computing device 700 within which a set of instructions, for causing the computing device to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the computing device may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the Internet. The computing device may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The computing device may be a personal computer (PC), a tablet computer, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single computing device is illustrated, the term "computing device" shall also be taken to include any collection of machines (e.g., computers) that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computing device 700 includes a processing device 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory (e.g., a data storage device 718), which communicate with each other via a bus 730.
Processing device 702 represents one or more general-purpose processors such as a microprocessor, central processing unit, or the like. More particularly, the processing device 702 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 702 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processing device 702 is configured to execute the processing logic (instructions 722) for performing the operations and steps discussed herein.
The computing device 700 may further include a network interface device 708. The computing device 700 also may include a video display unit 710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 712 (e.g., a keyboard or touchscreen), a cursor control device 714 (e.g., a mouse or touchscreen), and an audio device 716 (e.g., a speaker).
The data storage device 718 may include one or more machine-readable storage media (or more specifically one or more non-transitory computer-readable storage media) 728 on which is stored one or more sets of instructions 722 embodying any one or more of the methodologies or functions described herein. The instructions 722 may also reside, completely or at least partially, within the main memory 704 and/or within the processing device 702 during execution thereof by the computer system 700, the main memory 704 and the processing device 702 also constituting computer-readable storage media.
The various methods described above may be implemented by a computer program. The computer program may include computer code arranged to instruct a computer to perform the functions of one or more of the various methods described above. The computer program and/or the code 1010 for performing such methods may be provided to an apparatus, such as a computer, on one or more computer readable media or, more generally, a computer program product, depicted in Figure 10.
The computer readable media 1000 may be transitory or non-transitory. The one or more computer readable media 1000 could be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or a propagation medium for data transmission, for example for downloading the code over the Internet. Alternatively, the one or more computer readable media 1000 could take the form of one or more physical computer readable media such as semiconductor or solid state memory, magnetic tape, a removable computer diskette, a flash drive, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disk, such as a CD-ROM, CD-R/W or DVD.
In an implementation, the modules, components and other features described herein can be implemented as discrete components or integrated in the functionality of hardware components such as ASICS, FPGAs, DSPs or similar devices.
A "hardware component" is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner. A hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations.
Accordingly, the phrase "hardware component" should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein.
In addition, the modules and components can be implemented as firmware or functional circuitry within hardware devices. Further, the modules and components can be implemented in any combination of hardware devices and software components, or only in software (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium).
Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "receiving", "determining", "comparing", "assigning", "finding", "imputing", "identifying" or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. Although the present disclosure has been described with reference to specific example implementations, it will be recognized that the disclosure is not limited to the implementations described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Claims (23)
- CLAIMS
- 1. A computer-implemented method comprising: determining a digital control unit to display in a digital space; displaying, via an augmented reality user device of a user, the digital control unit in the digital space; displaying, via the augmented reality user device, a plurality of nodes in the digital space, each of the respective plurality of nodes being independently controllable; receiving a user input from the user at the digital control unit; determining, based on the user input, content to be displayed at at least one node of the plurality of nodes; and displaying, via the augmented reality user device, the determined content at the at least one node of the plurality of nodes.
- 2. The method of claim 1, wherein displaying the determined content at the at least one node of the plurality of nodes comprises displaying the determined content at the at least one node of the plurality of nodes while maintaining content displayed at at least one other node of the plurality of nodes.
- 3. The method of claim 1 or claim 2, further comprising: receiving a plurality of data input streams; and wherein determining the content to be displayed comprises determining the content from the plurality of data input streams.
- 4. The method of any preceding claim, further comprising: receiving device information of the augmented reality user device, the device information indicating a capability of the augmented reality user device; and wherein determining the content to be displayed comprises selecting content at least partially based on the device information.
- 5. The method of claim 4, wherein the device information comprises at least one of: a battery life of the user device, a bandwidth of the user device, and a processing power of the user device.
- 6. The method of any preceding claim, wherein the digital control unit is displayed in a near field reaching distance from the user, and/or wherein the plurality of nodes are displayed in a far-field distance from the user.
- 7. The method of any preceding claim, wherein the digital space is displayed as an overlay on a real world scene.
- 8. The method of any preceding claim, further comprising: capturing a view of the digital space from the augmented reality user device, the view including the digital control unit and/or the plurality of nodes; and sending, from the augmented reality user device, the captured view of the digital space to a second augmented reality user device.
- 9. The method of claim 8, wherein the captured view includes a real world view from a perspective of the user.
- 10. The method of any preceding claim, further comprising: receiving a further user input, the further user input being directed at one of the plurality of nodes; determining a type of digital control unit; and updating the display of the digital control unit based on the determined type of digital control unit.
- 11. The method of any preceding claim, wherein the digital control unit is a type of digital control unit, and the type of digital control unit is used to interpret an associated action with a user input received at the digital control unit.
- 12. The method of any preceding claim, further comprising: receiving a trigger event; and changing a position of at least one node relative to the user in the digital space in response to receiving the trigger event.
- 13. The method of claim 12, wherein the trigger event is at least one of: a user command, a gesture from the user, detection of a gaze of the user.
- 14. The method of claim 12 or claim 13, wherein the trigger event is based on user activity, user data, scheduling information, and/or preset settings.
- 15. The method of any preceding claim, further comprising saving a representation of the digital space in memory.
- 16. The method of any preceding claim, further comprising: receiving a far-field user input; and adjusting a position of one or more of the plurality of nodes relative to a position of the user in the digital space, based on the received far-field user input.
- 17. The method of any preceding claim, wherein determining content to be displayed at the at least one node of the plurality of nodes comprises: selecting, for each of the at least one node of the plurality of nodes, content from at least one content source; and determining concurrency information for the selected content; and wherein displaying the determined content comprises displaying the determined content based on the determined concurrency information.
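Claim 17's concurrency information can be read as a constraint that content chosen for several nodes must appear together. In the invented sketch below, items sharing a sync group wait for the group's slowest member:

```typescript
interface SelectedContent {
  nodeId: string;
  content: string;
  readyAt: number;    // when this item's source can deliver it
  syncGroup?: string; // assumed form of the concurrency information
}

// Compute a display time per node: grouped items all wait for the latest
// readyAt in their group, so related content appears in sync.
function scheduleDisplay(items: SelectedContent[]): Map<string, number> {
  const groupReady = new Map<string, number>();
  for (const item of items) {
    if (item.syncGroup !== undefined) {
      const latest = groupReady.get(item.syncGroup) ?? 0;
      groupReady.set(item.syncGroup, Math.max(latest, item.readyAt));
    }
  }
  const showAt = new Map<string, number>(); // nodeId -> display time
  for (const item of items) {
    showAt.set(
      item.nodeId,
      item.syncGroup !== undefined ? groupReady.get(item.syncGroup)! : item.readyAt,
    );
  }
  return showAt;
}
```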
- 18. The method of any preceding claim, wherein displaying the plurality of nodes in the digital space comprises obtaining position information for the plurality of nodes.
- 19. The method of claim 18, further comprising accessing user accessibility information, and wherein obtaining position information for the plurality of nodes comprises determining position information for each of the plurality of nodes based at least partially on the user accessibility information.
- 20. The method of claim 18 or 19, wherein obtaining position information for the plurality of nodes comprises obtaining, from one or more sensors, a real world view from a perspective of the user device, and determining position information for each of the plurality of nodes based at least partially on the real world view.
- 21. The method of any one of claims 18 to 20, wherein obtaining position information for the plurality of nodes comprises determining a head level of the user and determining position information for each of the plurality of nodes based at least partially on the determined head level of the user.
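Claims 19 and 21 both feed into node positioning: the user's head level fixes a baseline height, and accessibility information adjusts it. All constants and field names below are illustrative assumptions:

```typescript
interface AccessibilityPrefs {
  verticalOffsetM: number; // e.g. a seated user shifts content downward
  maxReachM: number;       // e.g. limited reach keeps the control unit close
}

// Claim 21: baseline node height from the determined head level;
// claim 19: adjusted by the user's accessibility information.
function nodeHeightM(headLevelM: number, prefs: AccessibilityPrefs): number {
  const EYE_COMFORT_DROP_M = 0.1; // assumed: sit slightly below eye line
  return headLevelM - EYE_COMFORT_DROP_M + prefs.verticalOffsetM;
}

// Near-field placement capped by how far the user can comfortably reach.
function controlUnitDistanceM(prefs: AccessibilityPrefs): number {
  const DEFAULT_REACH_M = 0.45; // assumed default reaching distance
  return Math.min(DEFAULT_REACH_M, prefs.maxReachM);
}
```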
- 22. The method of claim 21, further comprising determining a line of sight of the user, and wherein, upon detection of forward motion of the user, hiding or removing any of the nodes or the digital control unit positioned in the user's line of sight.
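Claim 22 keeps the walking path clear: while forward motion is detected, anything inside the user's line of sight is hidden. One invented policy, with an assumed cone width:

```typescript
interface PlacedItem {
  id: string;
  angleFromGazeDeg: number; // angular offset from the user's line of sight
  visible: boolean;
}

// Hide items inside the line-of-sight cone while the user walks forward;
// restore them once forward motion stops.
function onMotionUpdate(isMovingForward: boolean, items: PlacedItem[]): void {
  const CONE_DEG = 15; // assumed width of the protected cone
  for (const item of items) {
    if (isMovingForward && item.angleFromGazeDeg < CONE_DEG) {
      item.visible = false;
    } else if (!isMovingForward) {
      item.visible = true;
    }
  }
}
```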
- 23. A system comprising: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the system to perform the operations of the method of any preceding claim.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2317935.1A (GB2635734A) | 2023-11-23 | 2023-11-23 | Methods and systems for augmented/virtual reality |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202317935D0 (en) | 2024-01-10 |
| GB2635734A (en) | 2025-05-28 |
Family ID: 89429166
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2317935.1A (GB2635734A, pending) | 2023-11-23 | 2023-11-23 | Methods and systems for augmented/virtual reality |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2635734A (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200279104A1 (en) * | 2018-06-27 | 2020-09-03 | Facebook Technologies, Llc | Gesture-based casting and manipulation of virtual content in artificial-reality environments |
| US10916065B2 (en) * | 2018-05-04 | 2021-02-09 | Facebook Technologies, Llc | Prevention of user interface occlusion in a virtual reality environment |
| KR20220057388A (en) * | 2020-10-29 | 2022-05-09 | 주식회사 팝스라인 | Terminal for providing virtual augmented reality and control method thereof |
| WO2022146936A1 (en) * | 2020-12-31 | 2022-07-07 | Sterling Labs Llc | Method of grouping user interfaces in an environment |
| US20220292788A1 (en) * | 2019-04-03 | 2022-09-15 | Magic Leap, Inc. | Methods, systems, and computer program product for managing and displaying webpages in a virtual three-dimensional space with a mixed reality system |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12265655B2 (en) | Moving windows between a virtual display and an extended reality environment | |
| US11997422B2 (en) | Real-time video communication interface with haptic feedback response | |
| US12105283B2 (en) | Conversation interface on an eyewear device | |
| KR102832466B1 (en) | Real-time, real-size eyewear experience | |
| CN116685941A (en) | Media content items with haptic feedback enhancements | |
| US20240069637A1 (en) | Touch-based augmented reality experience | |
| CN118355646A (en) | Shared AR session creation | |
| US12321656B2 (en) | Application casting | |
| EP4268056A1 (en) | Conversation interface on an eyewear device | |
| CN114207557B (en) | Synchronize virtual and physical camera positions | |
| WO2018075523A9 (en) | Audio/video wearable computer system with integrated projector | |
| CA3255529A1 (en) | Displaying images using wearable multimedia devices | |
| US20260010242A1 (en) | Device-to-device collocated ar using hand tracking | |
| US20240020920A1 (en) | Incremental scanning for custom landmarkers | |
| GB2635734A (en) | Methods and systems for augmented/virtual reality | |
| KR20250067932A (en) | AR graphics support for tasks | |
| US20240357286A1 (en) | Enhance virtual audio capture in augmented reality (ar) experience recordings | |
| US12314485B2 (en) | Device-to-device collocated AR using hand tracking | |
| US12033118B1 (en) | Calendar with group messaging capabilities | |
| US20250259398A1 (en) | Maintaining ar/vr content at a re-defined position | |
| US20250336158A1 (en) | Remote presence on an xr device | |
| CN121057992A (en) | Augmented Reality Mood Board Manipulation | |
| WO2024220690A1 (en) | Enhance virtual audio capture in ar experience recordings | |
| EP4472246A2 (en) | Application casting | |
| WO2023039520A1 (en) | Interactive communication management and information delivery systems and methods |