
HK1221365A1 - Interactive augmented reality using a self-propelled device - Google Patents

Interactive augmented reality using a self-propelled device

Info

Publication number
HK1221365A1
Authority
HK
Hong Kong
Prior art keywords
computing device
virtual environment
image
self
mobile computing
Prior art date
Application number
HK16109466.8A
Other languages
Chinese (zh)
Inventor
F‧波洛
J‧卡罗尔
S‧卡斯塔特-史密斯
R‧英格拉姆
Original Assignee
斯飞乐有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 14/054,636 (published as US 9,827,487 B2)
Application filed by 斯飞乐有限公司
Publication of HK1221365A1


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/30 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F 13/32 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using local area network [LAN] connections
    • A63F 13/327 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using local area network [LAN] connections using wireless networks, e.g. Wi-Fi® or piconet
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/30 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F 13/33 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
    • A63F 13/332 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using wireless networks, e.g. cellular phone networks
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method is disclosed for operating a mobile computing device. The method may include establishing a communication link between the mobile computing device and a second computing device. The second computing device may provide a virtual environment for the mobile computing device. Furthermore, the mobile computing device may allow a user to control a self-propelled device, which may be rendered as a virtual entity within the virtual environment.

Description

Interactive augmented reality using self-propelled devices
Background
As mobile computing devices have improved, users have been able to put them to a growing variety of purposes. A user can operate a smartphone not only to make phone calls and browse the internet, but also to perform many other kinds of tasks.
Drawings
The disclosure herein is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
FIG. 1 shows an exemplary system for operating a computing device, according to one embodiment;
FIG. 2 shows an exemplary method for operating a computing device, according to one embodiment;
FIGS. 3A-3B show examples of processed images, according to one embodiment;
FIG. 4 shows an exemplary hardware diagram of a system for operating a computing device, according to one embodiment;
FIG. 5 shows an exemplary method of controlling a self-propelled apparatus as an augmented reality entity using a mobile computing device linked to a second computing device;
FIGS. 6A-6B show an example of controlling a self-propelled apparatus as an augmented reality entity using a mobile computing device linked to a second computing device.
Detailed Description
Embodiments described herein provide a computing device that can detect one or more rounded objects (e.g., a ball, a self-propelled device with a spherical housing) in an image and track the detected rounded objects. The computing device may utilize the detected rounded object as input for performing one or more operations or processes on the computing device.
According to some embodiments, one or more images, including real-time video frames, may be received from an image acquisition device of a computing device. The computing device may run one or more applications, or operate in one or more modes, that use the image acquisition components to receive visual input. The visual input may correspond to a scene at which a lens of the image acquisition device is pointed or focused, and/or to an object in that scene. For example, the scene may include an object of interest that is moving and has a circular shape.
Embodiments provide a computing device that receives a plurality of images and detects one or more rounded objects (corresponding to one or more objects of interest) in those images. For example, a rounded object depicted in an image may correspond to an object of interest having a housing or structure with at least one circular or partially circular feature, such as an ellipse, oval, disk, sphere, and so forth. The object of interest may correspond to a ball, a rounded object, a cylindrical object, or a self-propelled device with a spherical housing, among others, that is included in the scene (e.g., in the visual input detected by the image acquisition device). In some examples, the self-propelled device may be modified (e.g., after assembly) to present a rounded or spherical appearance (e.g., by attaching a rounded object to the self-propelled device, or by dropping a table tennis ball into the hopper of a remote control car). The computing device may process and utilize the detected objects in the image as input to perform one or more operations or processes on the computing device.
In some embodiments, each image received may be processed separately to detect one or more rounded objects. The computing device may use one or more detection techniques, together or separately, to detect a circular object. In accordance with one or more embodiments, the detection technique may include using an image filter and detection algorithm based on the size of the rounded object. In addition, detection techniques may be used to determine positional information for one or more rounded objects based on their relative positions in one or more images. Detecting a rounded object in an image may enable a computing device to track the motion of the rounded object, as well as the velocity and/or acceleration of the motion.
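For illustration only, the following sketch shows one way a size-based detection technique of this kind could be prototyped in Python with OpenCV. The use of the Hough circle transform and all parameter values are assumptions made for the sketch, not the specific detection algorithm of this disclosure.

```python
import cv2
import numpy as np

def detect_rounded_objects(frame_bgr, expected_radius_px, tolerance=0.3):
    """Return (x, y, r) candidates whose radius is near the expected size."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # grayscale filter step
    gray = cv2.medianBlur(gray, 5)                        # suppress sensor noise
    min_r = max(1, int(expected_radius_px * (1 - tolerance)))
    max_r = int(expected_radius_px * (1 + tolerance))
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, 1.2, 2 * min_r,         # dp, minimum center spacing
        param1=100, param2=30,                            # edge / accumulator thresholds
        minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return []
    return [tuple(int(v) for v in c) for c in circles[0]]
```

Running such a detector on each frame yields per-frame positions, from which motion, velocity, and acceleration of the object could be estimated.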
Upon detecting the one or more rounded objects in the received image, the computing device may utilize the detected one or more rounded objects and the respective location information as input for performing additional operations or processing. In one embodiment, the computing device may adjust the image including the detected rounded object and present the adjusted image on the display device. In other embodiments, the computing device may use the detected rounded object as an input for controlling the detected object (e.g., as a remote device).
In accordance with one or more embodiments, the image acquisition device may be distinct and separate from the computing device that detects the one or more rounded objects in the one or more images. The image acquisition device and the computing device may wirelessly communicate with each other to enable the computing device to receive one or more images from the image acquisition device. A recording device, such as a video capture device, may also be separate from the computing device and in wireless communication with the computing device. In other embodiments, the devices may be part of one device or may be combined together as one device.
The embodiments described herein also allow operations and/or processes performed by the recording device, the image acquisition device, and/or the computing device to be performed at different times and in different orders (e.g., shifted in time relative to one another).
One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. As used herein, programmatically refers to through the use of code, or computer-executable instructions. The instructions may be stored in one or more memory resources of the computing device. The programmatically performed steps may or may not be automated.
One or more embodiments described herein may be implemented using programmed modules or components of a system. A programmed module or component may comprise a program, a subroutine, a portion of a program, or a software component or a hardware component capable of performing one or more of the described tasks or functions. As used herein, a module or component may exist on a hardware component that is separate from other modules or components. Alternatively, a module or component may be a shared unit or process of other modules, programs, or machines.
Some embodiments described herein may generally require the use of computing devices, including processing and memory resources. For example, one or more embodiments described herein may be implemented in whole or in part on computing devices such as digital cameras, digital camcorders, desktop computers, cellular or smart phones, personal digital assistants (PDAs), notebook computers, printers, digital photo frames, and tablet devices. Storage, processing, and network resources may be utilized in connection with the establishment, use, or execution of any of the embodiments described herein (including in connection with the execution of any of the methods or the implementation of any of the systems).
Furthermore, one or more embodiments described herein may be implemented using instructions executable by one or more processors. The instructions may be carried on a computer readable medium. The machines shown or described below with respect to the figures provide examples of processing resources and computer-readable media on which instructions for implementing the present invention may be carried and/or executed. In particular, many of the machines shown in the embodiments of the present invention include a processor and various forms of memory for storing data and instructions. Examples of computer readable media include persistent memory storage devices, such as hard drives on personal computers or servers. Other examples of computer storage media include portable storage units such as CD or DVD units, flash memory (such as carried on a smartphone, multifunction device, or tablet computer), and magnetic memory. Computers, terminals, internet-enabled devices (e.g., mobile devices such as cellular telephones) are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable media. In addition, embodiments may be embodied in the form of computer programs or computer usable carrier media capable of carrying such programs.
Description of the System
FIG. 1 shows an exemplary system for operating a computing device, according to one embodiment. For example, the system described with respect to fig. 1 may be implemented on a mobile multifunction computing device (e.g., smartphone, tablet device) with integrated image acquisition components. In a variation, the system 100 may be implemented on a notepad, notebook, or other computing device that may operate in an environment where the camera is controlled or operated to track moving objects.
In the example of FIG. 1, the system 100 operates to process image input in order to dynamically detect a circular object, such as an object in motion. The detected rounded object may be part of the housing of a device that is in motion and/or under the control of the computing device. According to some described examples, a rounded object is detected as part of a programmatic process in which the device with which the rounded object is integrated is controlled in its motion or operation. In a variation, a circular object is detected in motion as part of a programmatic process in which the presence of the object is used to drive other programmatic processes, such as augmented reality that uses the circular object in motion as an input. Thus, the system 100 may detect a circular object corresponding to an object of interest in motion. The detection of such objects may provide input that enables the computing device to perform other operations, such as controlling the object of interest or incorporating a representation of the object of interest into augmented reality displayed on the computing device.
Still further, the system 100 may perform a scale analysis of a plurality of images depicting a scene that includes an object of interest. In particular, the system 100 may perform a dimensional analysis in order to determine the distance of the object of interest from the image acquisition device and/or the computing device. In one example, this enables a rounded object depicted in the image to be effectively detected and processed by the components of the system 100.
In one embodiment, system 100 includes object detection 110, image adjustment 130, User Interface (UI) component 140, device control 150, and Wireless Communications (WCOM) 160. The components of the system 100 combine to receive a plurality of images from an image acquisition device and to automatically process the images to detect one or more rounded objects depicted in the images. Each image may be processed using one or more techniques such that the detected objects may be processed as input for performing one or more operations on the computing device.
According to one embodiment, object detection 110 may also include subcomponents such as an image filter 112, gradient detection 120, and image markers 122. These components may be combined so that the object detection 110 is able to detect and track one or more objects detected in multiple images.
A computing device may run one or more applications and/or operate in one or more different modes. In one embodiment, system 100 may be operated in response to a user executing or launching an application or program that performs one or more processes (e.g., a gaming application or a device calibration setup program) using visual input detected by an image acquisition device. Object detection 110 may receive visual input 114, such as image input, from an image acquisition device to detect one or more rounded objects in one or more images. For example, an image acquisition device (e.g., an integrated camera) may receive and acquire a scene (e.g., from whatever perspective and/or at whatever object the lens is aimed). The visual input 114 may be in the form of a series of images or a video input (e.g., a plurality of images taken in succession at 30-60 frames per second).
In some embodiments, a preview of an image being received by a computing device may be provided on a display of the computing device. For example, the visual input 114 may also be provided to the UI component 140 to cause the UI component 140 to generate a preview image of the received visual input 114 (and one or more features that may be presented with the preview image, e.g., zoom-out or zoom-in features, or an image capture feature). A display device (e.g., a touch-sensitive display device) may present a dynamically changing real-time image of the scene at which the image acquisition device is currently pointed. This image may include one or more objects of interest in the scene having a circular feature (e.g., having a circular housing or portion of a housing). The user may also capture and store one or more images of the scene by pressing a capture button or trigger, or by using additional user interface features (e.g., tapping a "capture image" graphical feature provided on the touch-sensitive display).
According to some embodiments, the object detection 110 may process each image individually to detect the object of interest. In particular, the object of interest may be designated to match a particular shape, such as a hemisphere or sphere (or other spherical portion). For each image received via the visual input 114, the object detection 110 may run image recognition software and/or other image processing methods to detect the designated circular features of the object of interest. For example, the object detection 110 may scan the pixels of each acquired image for regions corresponding to a sphere, hemisphere, or other variation (depending on the specified circular feature).
Object detection 110 may use different detection techniques, collectively or individually, to detect one or more rounded objects in a single image. In one embodiment, image filter 112 of object detection 110 may receive one or more images and apply a filter, such as a grayscale filter, to each received image. Using a grayscale filter on an image, individual pixels of the image can be converted to grayscale (e.g., based on intensity information). Image filter 112 may provide a grayscale image 116 of each received image to gradient detection 120 and image marker 122. Once the image is represented in grayscale, a trained object detector may scan the grayscale pixels for a potential circular object corresponding to the object of interest. The use of a grayscale filter facilitates fast object detection, so that an object of interest can be monitored in real time as it moves.
In some embodiments, the objects of interest may include additional features to facilitate their respective tracking. For example, a circular feature of the object of interest may also be combined with additional features; such as other structurally visible landmarks, brightness of color (e.g., white, silver, yellow, etc.), illumination, or surface patterns. As an example, objects of interest may be colored bright (e.g., white), and using a grayscale filter on the processed input image may produce objects with lighter shades of gray than other portions of the same scene. In one embodiment, the grayscale images 116 may be provided to gradient detection 120 and image markers 122 that use one or more image processing methods (e.g., applying one or more algorithms) to detect rounded objects in the respective grayscale images 116.
Still further, in a variant, known (or predetermined) information about the object of interest may be used when performing object detection. For example, the user may provide input 126 corresponding to visual indicia of an object of interest. For example, such input 126 may include an estimated size (e.g., radius or diameter) of a circular object and a color of the object entered via an input mechanism (e.g., using one or more buttons, or a touch-sensitive display screen). In another variation, the user may provide an input 126 on the touch-sensitive display screen that corresponds to a circular gesture to indicate the approximate size of the object presented on the display. In other examples, information about one or more rounded objects may be stored in a memory of the computing device. The gradient detection 120 may use known information to detect rounded objects in the respective images.
In one embodiment, the gradient detection 120 may process individual pixels (or certain pixels) of the grayscale image 116 to determine a gradient for each pixel. The gradient of a particular pixel may be based on surrounding pixels, including immediately adjacent pixels. The gradient corresponds to a vector pointing in the direction in which the luminance of the pixel values increases; that is, the luminance of the particular pixel is lower than the luminance of pixels lying in the gradient direction. Image marker 122 implements logic that, for each pixel, marks a point in the gradient direction at a distance equal to the radius of the rounded object (e.g., the actual radius or radius of curvature). For example, the user may indicate (via user input 126) that the circular object to be detected has a radius of a certain size or approximate pixel length. The approximate pixel length may, for example, be set to twenty pixels. In other examples, the approximate pixel length may be assumed or predetermined based on previously stored information regarding the size of the circular object in the image. For each pixel, using the determined gradient, image marker 122 may mark the point twenty pixels away from that pixel in the gradient direction (the direction of highest brightness). Such a point may represent the center or middle point of a circular object. From the plurality of markers (e.g., the markers produced for the individual pixels of the grayscale image 116), the object detection 110 may assume that the image region accumulating the most markers corresponds to the center of a circular object, and may determine the location or position of the circular object in each grayscale image 116 (if a circular object is present). The determined position information of the circular object may be relative to the image.
Gradient detection 120 may also access parameters (which may be stored in a database or a list in a storage resource of the computing device) when determining gradients for individual pixels in the grayscale image 116. The parameters may include, for example, a brightness threshold and a gradient threshold. The brightness threshold may instruct the gradient detection 120 to skip computing the gradient of a particular pixel if the neighboring pixels (pixels within a radial distance of the particular pixel, in a direction away from it) do not exhibit a pattern of increasing luminance. Similarly, the gradient threshold may instruct the gradient detection 120 to skip computing the gradient of a particular pixel if the neighboring pixels do not exhibit a sufficiently strong change in brightness level in a direction away from the particular pixel. For pixels where no gradient is computed, the image marker 122 does not provide a marker.
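A minimal sketch of the gradient-and-marker voting described above, written in Python with NumPy, is shown below. The fixed radius, the gradient threshold value, and the dense accumulator array are assumptions made for illustration; a brightness-pattern test as described above could be added as a further filter.

```python
import numpy as np

def vote_for_center(gray, radius_px, grad_thresh=40.0):
    """Each strong gradient votes for a point `radius_px` away along the direction
    of increasing brightness; the densest cluster of votes is taken as the center."""
    img = gray.astype(np.float32)
    gy, gx = np.gradient(img)                       # per-pixel brightness gradient
    mag = np.hypot(gx, gy)
    # Skip pixels whose gradient is too weak (threshold value is arbitrary here).
    ys, xs = np.nonzero(mag > grad_thresh)
    acc = np.zeros(gray.shape, dtype=np.int32)      # accumulator of markers
    cx = np.round(xs + radius_px * gx[ys, xs] / mag[ys, xs]).astype(int)
    cy = np.round(ys + radius_px * gy[ys, xs] / mag[ys, xs]).astype(int)
    ok = (cx >= 0) & (cx < gray.shape[1]) & (cy >= 0) & (cy < gray.shape[0])
    np.add.at(acc, (cy[ok], cx[ok]), 1)             # place one marker per voting pixel
    y0, x0 = np.unravel_index(acc.argmax(), acc.shape)
    return (int(x0), int(y0)), int(acc[y0, x0])     # candidate center and vote count
```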
In another embodiment, object detection 110 may apply different radii of a circular object to detect a circular object in grayscale image 116. Because the gradient detection 120 and image markers 122 can process each single pixel in the grayscale image 116, the object detection 110 assumes that, for each grayscale image 116, the radius used to detect a rounded object can be different in one image region than another. Due to the different angles, orientations, distances and positions of the image capture device relative to the rounded object, the size of the rounded object may vary with how far it is from the user. Object detection 110 may receive information about the orientation and position of the image acquisition device (e.g., where the lens of the image acquisition device is aligned or focused) to determine whether a given pixel in the image represents a ground or floor that is closer to the user or the user's foot, or whether the pixel represents a point that is further away from the user (e.g., closer to the horizon). Object detection 110 also assumes that if a particular point in the image represents a region closer to the user, then the rounded object at or near that point is generally larger in the image than if the rounded object is further from the user. As a result, for a point representing an area closer to the user, the image marker 122 may apply a larger radius when marking the area along the gradient direction of the point (since the circular sphere to be detected is assumed to be larger in the image).
For points representing areas further away from the user, a circular object at or near the point typically appears smaller in the image. As a result, after determining that the given point represents a region further away from the user, the image marker 122 may apply a smaller radius when marking a region (e.g., ten pixels long, instead of twenty pixels long) in the gradient direction along the point. In this way, the object detection 110 may also determine whether the rounded object is moving in one or more directions, as the size of the rounded object may become larger from one image to another in the sequence of images (e.g., if it is moving closer to the user) or smaller from one image to another (e.g., if it is moving further away from the user).
For example, the computing device may detect its position relative to the ground based on an accelerometer and/or other sensing mechanisms of the computing device. Additionally, the computing device may determine a ground plane and a horizon for the image provided by the image acquisition device. Based on the determined information, if the user is holding the computing device so that the lens of the image acquisition device is aimed at an area close to the user, the computing device may increase the radius (e.g., size) of the rounded object being detected. On the other hand, if the user is holding the computing device so that the lens is aimed at a region further away from the user (e.g., closer to the horizon), the radius of the rounded object being detected may be reduced.
As discussed, for each determined gradient (for each pixel), the image marker 122 may mark a point (e.g., a location in the image) in the gradient direction at a distance equal to the radius of the rounded object. In this way, rather than relying on the user to indicate that the circular object to be detected has one fixed radius, the image marker 122 may mark a point at a varying radius along the direction of highest brightness. The radius may vary as a function of whether the lens of the image acquisition device is aimed closer to or further away from the user.
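As an illustration of this varying-radius idea, the following sketch scales the expected radius with a pixel's row position relative to an estimated horizon row. The linear interpolation and the near/far radius values are assumptions for the sketch.

```python
def radius_for_row(row, image_height, horizon_row, near_radius_px=20, far_radius_px=10):
    """Expect a larger radius near the bottom of the frame (close to the user)
    and a smaller radius toward the horizon (far from the user)."""
    if row <= horizon_row:
        return far_radius_px
    # Interpolate linearly between the far and near radii below the horizon.
    t = (row - horizon_row) / float(image_height - horizon_row)
    return int(round(far_radius_px + t * (near_radius_px - far_radius_px)))
```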
In certain embodiments, the estimated size of the one or more rounded objects may be automatically configured and/or stored in a memory resource of the computing device. For example, the rounded object may be a self-propelled device having a rounded housing (e.g., a spherical housing), paired with the user's computing device, or wirelessly connected with the user's computing device. Information about the user's self-propelled device is stored in a storage resource of the computing device.
Object detection 110 may also apply different types of filters with user input 126. In some embodiments, other types of filters may be applied by the image filter 112 depending on the color of the rounded object of interest being detected by the object detection 110. For example, if the color of the rounded object of interest is dark, the user may provide an input 126 indicating the color of the ball.
Object detection 110 may detect one or more rounded objects in one or more images and also provide information corresponding to the detected objects. The information about the detected object may include the color, size, shape (e.g., elliptical or spherical) of the object, the position of the object relative to the image, and so on. In one embodiment, information about the detected rounded object may be stored in a storage resource of the computing device.
According to one or more embodiments, the system 100 utilizes the detected one or more objects from the image (including location information of the one or more objects) as input to perform one or more operations or processes on the computing device. In the exemplary system of fig. 1, object detection 110 may provide information and images (e.g., object detection image 124) about the detected object to image adjustment 130 and device control 150. For example, the image adjustment 130 may be run as part of or with a gaming application.
In some embodiments, the system 100 may determine from the object detection images 124 that a rounded object of interest is moving. For example, two or more images corresponding to different instants in time (e.g., sequential frames) may each include a rounded object corresponding to the object of interest in the scene. In a first image of the two or more images, the circular object may be detected at a first position relative to the image. In the next image in the sequence, the circular object may be detected at a second position relative to the image. As a result of the change in position of the rounded object detected across the different images, the device control 150 and/or the image adjustment 130 may determine that the rounded object is in motion (or has moved). In other examples, the circular object of interest may be in a stationary position but rotating in place. A rotating circular object may likewise be detected in the image.
Still further, a circular object detected from the image input may be converted into coordinate data relative to the image depiction for a given reference frame. This coordinate reference frame may then be converted into a real-world coordinate system in order to observe and/or predict information about the location of the object of interest. If the depth of the object from the image acquisition device can be determined by a scale analysis of the object's depiction, the location of the object of interest can be mapped from the image depiction of the object. In addition to the location information, the velocity of the object of interest may be determined by mapping its motion in the image reference frame to the determined real-world reference frame of the object of interest.
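The scale analysis mentioned above can be illustrated with a pinhole-camera approximation: if the physical radius of the object and the camera's focal length in pixels are known, the apparent radius gives a depth estimate, and the detected center can then be back-projected into camera-frame coordinates. The function names and the pinhole model itself are assumptions for this sketch, not the specific mapping of the disclosure.

```python
def estimate_depth_m(apparent_radius_px, physical_radius_m, focal_length_px):
    """Depth along the optical axis implied by the object's apparent size."""
    return physical_radius_m * focal_length_px / float(apparent_radius_px)

def image_to_camera_coords(cx_px, cy_px, principal_point, focal_length_px, depth_m):
    """Back-project the detected center into camera-frame coordinates (meters)."""
    u0, v0 = principal_point
    x = (cx_px - u0) * depth_m / focal_length_px
    y = (cy_px - v0) * depth_m / focal_length_px
    return x, y, depth_m
```

Tracking these back-projected positions across frames would also yield a velocity estimate in the real-world reference frame.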
In another example, an object of interest having a circular feature may be placed on top of an object having a known height, rather than on the ground or floor (e.g., on top of a coffee table or on top of a remote control toy truck). By using information about known (or predetermined) objects of interest and known heights, the object detection 110 can determine a more accurate location of the object in the image. In addition, based on information (such as known information and height) and the reference frame, the image adjustment 130 may provide a dimensionally accurate (relative to the exact location of the object of interest and the location of the image acquisition device) image to be presented on the display device.
For example, the image adjustment 130 may process the object detection image 124 to dynamically adjust at least a portion of the image. The image adjustment 130 may also have access to a set of rules for mapping the coordinate reference frame and the real-world coordinate system so that the rendered adjusted image may be dimensionally accurate relative to the real-world scene. The rules may correspond to real world parameters. In some examples, the real world parameters may be changed, for example, by user input, if accurate reproduction of the scale is not desired.
Image adjustment 130 may also store adjusted image 132 in a memory resource of the computing device. This enables the user to access the memory resource to view any stored, adjusted images 132 at a later time. In other embodiments, the system 100 can perform additional processing of the image for performing other operations. The image adjustment 130 may, for example, access previously processed images and perform additional adjustments to the images. In certain variations, the processing performed by object detection 110 and/or image adjustment 130 may be performed on stored, rather than "live" or real-time, images. Thus, for example, the processing described by object detection 110 and/or image adjustment 130 may optionally be shifted temporally relative to when the image was acquired and/or stored.
In other examples, the image adjustment 130 may dynamically overlay or replace a detected rounded object in an image with a graphical image (e.g., a character, animation, some other object other than a ball), or change the image of the detected rounded object itself in real-time (e.g., change the color of the rounded object or distort the rounded object). The adjusted image 132 may be provided to a user interface component so that a user of the computing device may see the presentation including the adjusted detected object on a display device. The user interface component 140 may also generate one or more features that may be presented on a display device of the computing device. As described, in some examples, the rendered, adjusted content may be dimensionally accurate relative to the location of the actual object of interest and the location of the image acquisition device.
For example, a user may watch a real-time sporting event (e.g., a child playing football in a park) and aim the lens of their computing device at a playing field. The received visual input 114 may include a plurality of image frames detected by an image acquisition device. The object detection 110 may detect a rounded object (corresponding to an object of interest, such as a soccer ball) in each image of the sequence of images (e.g., the position of the ball in each image may be different because the ball is moving). Image adjustment 130 may process a detected object (e.g., a detected soccer ball) by highlighting the ball or providing a halo of light around the ball and providing an image to user interface component 140. The user is then able to see the adjusted image 132 (with any additional features) on the display device of the computing device.
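As an illustrative sketch of such an adjustment, the snippet below either draws a highlight halo around the detected ball or alpha-blends a graphic (e.g., a game character) over it. The blending approach and the absence of boundary handling are simplifications made for the sketch.

```python
import cv2
import numpy as np

def highlight_object(frame, center, radius, color=(0, 255, 255)):
    """Draw a halo ring around the detected object; center is an (x, y) int tuple."""
    out = frame.copy()
    cv2.circle(out, center, int(radius * 1.3), color, thickness=4)
    return out

def overlay_sprite(frame, sprite_bgra, center):
    """Alpha-blend a BGRA sprite over the detected object (sprite must fit in frame)."""
    h, w = sprite_bgra.shape[:2]
    x0, y0 = center[0] - w // 2, center[1] - h // 2
    roi = frame[y0:y0 + h, x0:x0 + w]                     # view into the frame
    alpha = sprite_bgra[:, :, 3:4].astype(np.float32) / 255.0
    roi[:] = (alpha * sprite_bgra[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)
    return frame
```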
In other embodiments, the device control 150 may process the object detection image 124 to provide one or more controls for a remote device. For example, the detected rounded object may be a self-propelled device paired with or wirelessly connected to the user's computing device. The user may see a representation of the received visual input 114, including the detected rounded object (e.g., the user's self-propelled device), on a display device of the computing device. The user may then interact with the display representation of the self-propelled device on the touch-sensitive display screen by providing user input 152 (e.g., touch input on the touch-sensitive display screen) to the device controls 150.
The device control 150 receives the object detection image 124 and user input 152 to determine what type of control information 154 to generate. Based on the received information corresponding to a rounded object detected via the object detection image 124, the control information 154 may be determined by detecting a user input at a particular location of the display screen, e.g., by determining that the location corresponds to the location of the rounded object detected in the image. For example, a user may tap on the displayed representation of the self-propelled device to cause it to rotate, or tap and drag a finger in one direction to cause the self-propelled device to move accordingly. The control information 154 is provided to the WCOM 160 so that it can be transmitted to the remote device.
For example, the device control 150 determines the position of the detected rounded object in the object detection image 124 (e.g., the position of the detected rounded object relative to the image). When a user provides input or performs a gesture on the representation of the self-propelled device (e.g., drags the representation to the right on the touch-sensitive display screen), the device control 150 may determine a path of the gesture relative to a reference (e.g., an initial position of the display representation of the self-propelled device) and convert the gesture into a control signal for moving the self-propelled device in the same manner. The control signal may be provided to the self-propelled device such that the self-propelled device moves accordingly.
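One possible way to convert such a gesture into a drive command is sketched below. The command fields (heading_deg, speed) and the scaling constants are hypothetical and do not reflect any particular device protocol.

```python
import math

def gesture_to_command(object_pos_px, gesture_end_px, max_speed=1.0, px_per_full_speed=300.0):
    """Map a drag from the object's on-screen position to a heading and speed."""
    dx = gesture_end_px[0] - object_pos_px[0]
    dy = gesture_end_px[1] - object_pos_px[1]
    heading_deg = math.degrees(math.atan2(dx, -dy)) % 360.0   # screen-up maps to 0 degrees
    speed = min(max_speed, math.hypot(dx, dy) / px_per_full_speed)
    return {"heading_deg": heading_deg, "speed": speed}
```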
WCOM 160 is used to exchange data between system 100 and external devices (e.g., a remote device such as the user's self-propelled device). In one embodiment, the WCOM 160 can implement a variety of different protocols, such as a Bluetooth communication protocol, a Wi-Fi communication protocol, an infrared communication protocol, and so forth. Control information 154 may be wirelessly transmitted to the remote device (e.g., the self-propelled device) so that the self-propelled device performs the specified action corresponding to the user command.
In certain embodiments, device control 150 may provide control information 154 for calibrating one or more components of a remote device (e.g., for calibrating one or more components of a computing device). For example, the rounded object of interest (e.g., the user's self-propelled device) may have one or more gyroscopes. Over time, a gyroscope may need to be recalibrated due to gyroscope drift. Using the object detection image 124, the device control 150 may detect, for example, whether the self-propelled device is moving or stationary, and provide control information 154 (which includes calibration information) to be sent to the self-propelled device via the WCOM 160. Based on the detected course of motion of the self-propelled device and the object detection image 124, the device control 150 may calibrate the gyroscope of the self-propelled device. In one embodiment, device control 150 may also receive device information 162 from the self-propelled device, which includes current information about the components of the self-propelled device (including real-time information about the gyroscope). The device control 150 may use this information to calibrate the gyroscope of the self-propelled device. In this way, system 100 can be used to eliminate gyroscope drift.
Similarly, one or more gyroscopes of a computing device may also be calibrated by using a detected circular object. For example, when a circular object is determined to be stationary, the device control 150 may calibrate the gyroscope of the computing device by using the detected circular object as a reference point.
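A sketch of this calibration idea is shown below: while the detected object is determined to be stationary, the gyroscope samples collected over that window are averaged and treated as a bias to subtract. The sample format and the simple averaging approach are assumptions for illustration.

```python
def estimate_gyro_bias(gyro_samples_dps, object_is_stationary):
    """gyro_samples_dps: list of (wx, wy, wz) in degrees/second collected while the
    detected rounded object did not move between frames."""
    if not object_is_stationary or not gyro_samples_dps:
        return (0.0, 0.0, 0.0)
    n = float(len(gyro_samples_dps))
    return tuple(sum(axis) / n for axis in zip(*gyro_samples_dps))

def correct_gyro(sample_dps, bias_dps):
    """Subtract the estimated bias from a raw gyroscope sample."""
    return tuple(s - b for s, b in zip(sample_dps, bias_dps))
```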
Object detection 110 may also provide object detection image 118 to other components of the computing device or system and/or a remote device (not shown in FIG. 1) for additional processing. Other components may process the detected rounded object as input for performing other operations.
In accordance with one or more embodiments, the visual input 114 received by the system 100 may also be provided from an image acquisition device that is remote from the computing device. For example, the image acquisition device may be separate from the computing device. The image acquisition device may detect and/or capture the scene and send the visual input 114 to the computing device (e.g., over a cable or wirelessly). For example, the system 100 can receive the visual input 114 via the WCOM 160 when the visual input 114 is received wirelessly from the image acquisition device. The system 100 may then detect one or more rounded objects in the images received from the remote image acquisition device and perform additional operations based on the detected objects.
In another embodiment, the system 100 may receive an image (e.g., a frame from a real-time video) and then process the image after a duration of time (e.g., detect one or more rounded objects and/or adjust the image). For example, the visual input 114 provided by the image acquisition device is first stored in a memory resource of the computing device before the object detection 110 retrieves or receives the visual input for processing. The object detection 110 may perform processing in response to the user input 126 (thereby causing the visual input 114 to be retrieved or received by the object detection 110 from a memory resource). In other embodiments, the system 100 may perform additional processing even after the image is dynamically processed. For example, the image adjustment 130 may perform additional processing to provide a different adjusted image to the user interface component after the image adjustment 130 processes the object detection image 124 for the first time. In this way, the user may view different adjustment images 132 (e.g., adjustment images with different graphical image overlays) at different times.
In some embodiments, some of the components described in system 100 may be provided as a single component or as part of the same component. For example, object detection 110 and image adjustment 130 may be provided as part of the same component. In another example, object detection 110 and device control 150 may be provided as part of the same component. In other embodiments, the components described in system 100 may be provided as part of a device operating system or as part of one or more applications (e.g., part of a camera application, a remote device control application, or a gaming application). Logic may be implemented in a camera application (e.g., software) and/or with hardware of an image acquisition device.
In some examples, the object of interest may alternatively have other predetermined shapes that may be subjected to scale and image analysis to detect objects in one or more images. For example, the system 100 may utilize other object detection and/or image processing techniques to detect objects having shapes other than circular, such as classifiers trained to detect objects having particular two-dimensional shapes (e.g., having rectangular or triangular shapes). The object detection 110 may detect other predetermined shapes for converting dimensional information about the image of the object into spatial and/or positional information in the real world.
Additionally or alternatively, the examples described herein may also be used for neighboring object analysis. In neighboring object analysis, information about objects that are neighboring the object of interest may be determined based on its known spatial relationship to the object of interest. For example, the object of interest may be a tank, where the location of the tank may be determined in the image. Guns equipped on tanks can also be tracked using neighboring object analysis.
Also, multiple objects of interest (e.g., having a circular or spherical shape) may be tracked simultaneously in one or more images of a given scene. The objects of interest may be in motion or stationary. Object detection 110 may detect a plurality of objects of interest and location information for each tracked object. The position of each tracked object relative to the other tracked objects may be used to determine other objects or points of interest located between or among the tracked objects. For example, a plurality of spherical objects may be detected and tracked in an image to build a point cloud for determining the location of other objects that may be proximate to one or more of the tracked objects.
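Tracking several detected objects across frames could, for example, be prototyped with a greedy nearest-neighbor association such as the sketch below; a production tracker might instead use a Kalman filter or the Hungarian algorithm, and the distance threshold here is an assumption.

```python
import math

def match_detections(previous, current, max_jump_px=80):
    """previous/current: lists of (x, y) detections. Returns {prev_index: cur_index}."""
    assignments, used = {}, set()
    for i, p in enumerate(previous):
        best_j, best_d = None, max_jump_px
        for j, c in enumerate(current):
            if j in used:
                continue
            d = math.hypot(c[0] - p[0], c[1] - p[1])
            if d < best_d:                       # closest unused detection wins
                best_j, best_d = j, d
        if best_j is not None:
            assignments[i] = best_j
            used.add(best_j)
    return assignments
```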
Methodology
FIG. 2 shows an exemplary method for operating a computing device in accordance with the present invention. A method such as that described by the embodiment of fig. 2 may be implemented using components such as those described in conjunction with the embodiment of fig. 1. Accordingly, reference to elements of FIG. 1 is intended to illustrate suitable elements or components for performing the described steps or sub-steps.
The computing device may receive a plurality of images from its image acquisition device (step 200). The plurality of images correspond to visual input of a scene aimed at or focused by a lens of the image acquisition device. In some embodiments, the image input may correspond to real-time video provided (sub-step 202) by an image acquisition device. The real-time video may include a plurality of frames (where each frame may be an image) that are detected and/or acquired per second (e.g., 30 frames per second).
Each image may be individually processed to detect one or more rounded objects in the image (step 210). The computing device may run image recognition software and/or other image processing methods to detect rounded objects in the respective images. The circular object in the image may correspond to an object of interest that is present in the scene and has characteristics of a circular shell. In one embodiment, a rounded object may be detected by applying known information for determining the object of interest (substep 212). The known information may be provided by a user or may be preconfigured and/or stored in a memory resource of the computing device.
In accordance with one or more embodiments, the known information can include the dimensions of the rounded object of interest (substep 214). The size of a circular object may correspond to the radius of the object (or, for example, the radius of an ellipse) or the diameter of the object, or in some cases, the relative size compared to the screen size of the display device. By using size information about a circular object, such as the radius or diameter of the circular object, one or more circular objects can be detected in the respective images. Additionally, the known information may include information corresponding to a location of the computing device relative to the ground (substep 216). Depending on the orientation, direction, and position at which the user aims the lens of the computing device, the rounded object may be determined to be small in size (relative to the screen size), or large in size. In such a case, the radius or diameter of the circular object may dynamically change in size. For example, by applying a grayscale filter, such as discussed above with respect to fig. 1, a circular object may be detected in each image. In addition, information of the rounded object, such as position information of the rounded object with respect to the image, may also be determined.
By detecting one or more rounded objects in the image, the computing device may utilize the detected objects and the location information of the detected objects as input for performing one or more operations or processes on the computing device (step 220). For example, the computing device may adjust the image by generating an overlay, or by replacing a circular object detected in the image with a graphical feature that simulates the motion of the circular object in the image (substep 222). Such graphical adjustments may be useful for gaming applications that use real-world scenes as part of a game.
In one embodiment, the overlay may also be a graphical feature having three-dimensional characteristics. The graphical feature may be presented as part of a display image so that it coincides with the detected three-dimensional nature and motion of the rounded object. For example, if a circular object is rotated 180 degrees, the overlaid graphical features may also be rotated with similar features.
The computing device may also process the detected object as an input for wirelessly controlling the detected rounded object (e.g., self-propelled device) (substep 224). For example, a user may control an image containing a representation of a self-propelled device on a display device to control the actual self-propelled device (e.g., an object of interest) to move at a certain speed in a certain direction. The user may also tap a target location on the touch-sensitive display screen to cause the self-propelled device to move to the target location. In another embodiment, the detected rounded object may be used as an input for projecting or displaying an image based on the real-life object and the real-life rounded object in the vicinity of the user (substep 226). The detected rounded object may also be used to calibrate one or more components of the detected rounded object (substep 228).
Other applications of processing detected objects as input may include tracking a ball in ball sports, or using a ball and/or puck as a marker or fixed reference point (e.g., fiducial marker). The puck can be used, for example, to place other objects at reference or measurement points in the image. In addition, other applications include the use of detected objects for spatial mapping or navigation. Points, object positions, navigation waypoints, etc. may be given virtual points that correspond to specific locations over a long period of time. The computing device may provide information regarding the location and position of the detected rounded object to the rounded object so that the rounded object (e.g., a self-propelled device) may measure its own progress toward the target location and direct itself to the desired location. In other applications, the absolute position of an object of known height can be calculated by selecting (e.g., tapping the display screen) the position of the object from the image, based on the detected circular object.
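As a sketch of the self-directing application described above, the following proportional controller turns the detected position and a tapped target location into successive drive commands. The gains, the arrival threshold, and the command format are assumptions made for illustration.

```python
import math

def drive_toward(detected_pos, target_pos, k_speed=0.005, max_speed=1.0, arrive_px=15):
    """Produce one drive command moving the device from its detected position toward the target."""
    dx = target_pos[0] - detected_pos[0]
    dy = target_pos[1] - detected_pos[1]
    dist = math.hypot(dx, dy)
    if dist < arrive_px:                                   # close enough: stop
        return {"heading_deg": 0.0, "speed": 0.0, "arrived": True}
    heading_deg = math.degrees(math.atan2(dx, -dy)) % 360.0
    return {"heading_deg": heading_deg,
            "speed": min(max_speed, k_speed * dist),
            "arrived": False}
```

Calling this on every processed frame lets the device measure its own progress toward the target location and correct its course as the detected position updates.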
Figures 3A-3B show examples of processed images according to one embodiment. Exemplary user interface features provided on a display device, such as described with the embodiment of fig. 3A-3B, may be implemented using components such as described in connection with the embodiment of fig. 1. Accordingly, reference to elements of FIG. 1 is intended to illustrate suitable elements or components for performing the described steps or sub-steps.
FIG. 3A shows an image of a scene including a representation of a circular or spherical object (e.g., a ball). The circular object may be moving; however, because FIG. 3A shows only a single image from the image sequence, the circular object is depicted as stationary. The computing device may provide such an image on the display device as a preview of the scene being detected and/or acquired by the image acquisition device of the computing device.
After the computing device detects the rounded object and the location or position of the rounded object in the plurality of images, the detected rounded object may be used as input to perform one or more additional operations. In one embodiment, the image may be adjusted by visually changing the detected rounded object. For example, a user may operate a computing device in order to play a game (e.g., operate a game application). In fig. 3B, the image including the detected circular object may be adjusted by overlaying or replacing the detected circular object with another graphical image (e.g., a dragon) as part of the game. Because the spherical object moves in real-time in the real world, the received sequence of images also depicts that the ball is moving.
The respective images may be processed to detect a spherical object, and the detected spherical object may be processed as input so that the dragon displayed on the display device of the computing device moves accordingly. In some variations, the presented image may be dynamically adjusted. For example, the graphical image of the dragon may be dynamically changed to the graphical image of a lizard in response to a trigger, such as a user input, or an object of interest moving to a particular location or next to another object of interest.
Additionally or alternatively, a computing device, such as a device implementing the system 100 of fig. 1, may detect multiple rounded objects in an image and track the position and/or motion of the multiple rounded objects, while another computing device controls the object of interest. In some embodiments, the image acquisition device and/or the computing device that processes the images to detect one or more rounded objects may be separate or distinct from the computing device that controls the motion of the object of interest, such as a rounded or spherical self-propelled device. Still further, in one variation, the content (e.g., graphical overlays) presented in accordance with detected and tracked objects on a computing device may be dynamically changed in accordance with one or more triggers or inputs provided by another computing device.
For example, multiple users may interact with one another in a gaming environment, where a first user uses a first computing device to control the motion of an object of interest, while second and third users track the object of interest and present content on their respective computing devices. A fourth user may use a fourth computing device to control what is displayed on the devices of the second and third users (e.g., dynamically adjust which graphical images are displayed in place of the detected object). For example, in one embodiment, the second user may track the object of interest and view rendered content that is different from the content rendered on the third user's computing device (e.g., depending on which content the fourth user chooses to display to each user). The computing devices may communicate with each other via a wireless communication protocol, such as Bluetooth or Wi-Fi.
In another example of use, multiple users with respective computing devices may detect and track ten objects of interest, including one or more self-propelled devices. Control of one or more objects of interest may be passed between users.
Hardware diagrams
FIG. 4 shows an exemplary hardware diagram of a computer system upon which embodiments described herein may be implemented. For example, in the context of FIG. 1, system 100 may be implemented using a computer system such as that described in FIG. 4. In one embodiment, computing device 400 may correspond to a mobile computing device, such as a cellular device capable of telephone conversations, messaging, and data services. Examples of such devices include smart phones, cell phones or tablet devices for cellular carriers, digital cameras, or laptops and desktop computers (e.g., PCs). Computing device 400 includes processing resources (e.g., one or more processors) 410, memory resources 420, a display device 430, one or more communication subsystems 440 (including wireless communication subsystems), an input mechanism 450, and a camera component 460. Display device 430 may be a touch-sensitive display screen that may also receive input from a user. In one embodiment, at least one communication subsystem 440 transmits and receives cellular data over a data channel and a voice channel.
The processing resource 410 is configured with software and/or other logic to perform one or more processes, steps, and other functions of the embodiments described herein, such as those described with FIGS. 1-3. The processing resource 410 is configured, with instructions and data stored in the memory resources 420, to implement the system 100 (as described in connection with FIG. 1). For example, instructions for implementing object detection (including image filters, gradient detection, and image markers), image adjustment, user interface components, and device control may be stored in the memory resources 420 of the computing device 400.
The processing resource 410 may execute instructions for operating object detection and for receiving images (via visual input 462) that have been acquired by the lens and/or other components 460 of the image acquisition device. After detecting the one or more rounded objects in the one or more images, the processing resource 410 may execute instructions for causing the adjusted image 412 to be presented on the display device 430. The processing resource 410 may also execute instructions to provide device control 414 for a remote device via the communication subsystem 440.
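To make the data flow concrete, a minimal and purely hypothetical processing loop tying together image capture, rounded-object detection, presentation of an adjusted image, and remote-device control could be organized as below; the stub functions stand in for the visual input 462, the adjusted image 412, and the device control 414, and none of their bodies reflect the actual instructions stored in the memory resources 420.

```python
# Hypothetical skeleton of the capture -> detect -> present -> control loop.
import time

def capture_frame():
    return {"pixels": None, "timestamp": time.time()}   # stand-in for camera input

def detect_rounded_objects(frame):
    return [{"x": 320, "y": 240, "r": 30}]              # stand-in detection result

def render_adjusted_image(frame, objects):
    print(f"overlaying {len(objects)} object(s) at {frame['timestamp']:.2f}")

def send_device_control(objects):
    if objects:
        print("sending control update for object at",
              (objects[0]["x"], objects[0]["y"]))

def run(loop_count=3, frame_interval=0.05):
    for _ in range(loop_count):
        frame = capture_frame()
        objects = detect_rounded_objects(frame)
        render_adjusted_image(frame, objects)           # shown on display device 430
        send_device_control(objects)                    # sent via communication subsystem 440
        time.sleep(frame_interval)

if __name__ == "__main__":
    run()
```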
In some embodiments, the processing resource 410 may execute and operate various different applications and/or functions, such as, for example, a home page or start screen, an application launch page, a messaging application (e.g., an SMS messaging application, an email application, an IM application), a phone application, a gaming application, a calendar application, a document application, a web browser application, a clock application, a camera application, a media viewing application (e.g., for video, images, audio), a social media application, a financial application, and device settings.
Interactive augmented reality
FIG. 5 shows an exemplary method of operating a self-propelled device as an augmented reality entity using a mobile computing device linked to a second computing device. In one or more embodiments, a method for operating a self-propelled device may include controlling motion of the self-propelled device in a real-world environment (502). Controlling the self-propelled device may be performed by a mobile computing device. The mobile computing device may include programming functionality that enables a user to control the motion of the self-propelled device by using controls presented or displayed on a display of the mobile computing device.
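One simple way an on-screen control could be translated into motion commands is to map a virtual joystick displacement to a heading and speed; the mapping below is an assumed illustration, not the control scheme prescribed by the embodiments.

```python
# Hypothetical mapping from an on-screen joystick displacement (dx, dy),
# normalized to [-1, 1], to a heading (degrees clockwise from "up") and a
# speed value for the self-propelled device.
import math

def joystick_to_command(dx, dy, max_speed=255):
    magnitude = min(1.0, math.hypot(dx, dy))
    heading = (math.degrees(math.atan2(dx, dy)) + 360.0) % 360.0
    speed = int(round(magnitude * max_speed))
    return {"heading_deg": round(heading, 1), "speed": speed}

print(joystick_to_command(0.0, 1.0))    # straight ahead: heading 0.0, full speed
print(joystick_to_command(1.0, 0.0))    # to the right:   heading 90.0
print(joystick_to_command(-0.5, -0.5))  # back-left at roughly 71% speed
```

A command of this form could then be serialized and sent to the self-propelled device over the wireless link described elsewhere in this description.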
In many embodiments, the method further includes interacting with a second computing device providing the virtual environment to cause the second computing device to present the virtual entity in the virtual environment (504). The virtual entity may correspond to a self-propelled device in the real world. In many embodiments, the interaction may be performed by a mobile computing device. For example, a user of a mobile computing device may load an application that causes the mobile computing device to interact with a second computing device. Alternatively or in addition, the second computing device may perform functions to communicate with the mobile computing device. For example, a user of the second computing device may provide input that causes the second computing device to interact with the mobile computing device. The second computing device may include a controller whereby a user may operate the controller to interact with the virtual environment. Using the controller, the user may then facilitate the second computing device interacting with the mobile computing device.
In many embodiments, once the mobile computing device and the second computing device have interacted with each other, some or all of the virtual environment may be displayed on the mobile computing device via the second computing device (506). A virtual entity corresponding to the self-propelled device may also be presented in the virtual environment (508). For example, a feature in a virtual environment may represent a self-propelled device in the real world. In these and related embodiments, a user of a mobile computing device may simultaneously control a self-propelled device in the real world and a virtual entity in a virtual environment. In a related embodiment, the virtual environment may be provided as augmented reality in which virtual entities corresponding to the self-propelled devices may interact.
In many embodiments, the method further includes detecting an event associated with the self-propelled device in the real-world environment, and then generating a virtual event based on the detected event (510). For example, the detected event may relate to a particular motion or interaction of the self-propelled device in a real-world environment. The event may be detected by the mobile computing device or the second computing device via the interface. The mobile computing device may then incorporate the virtual event into the virtual environment. In doing so, the mobile computing device may translate the particular motion or interaction into a virtual event, such as a control or command in a virtual environment. Additionally or alternatively, the real world event may involve the self-propelled device striking a surface. Such an impact may correspond to a virtual event in which the virtual entity performs a function. The functionality may be configured by the user or may be preprogrammed in the game or application.
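As a sketch of how a detected real-world event might be translated into a virtual event, the mapping below uses an acceleration spike to infer an impact; the event names, thresholds, and sensor sources are assumptions for illustration, and in practice the mapping could be user-configured or preprogrammed in the game or application as described above.

```python
# Hypothetical translation of real-world events into virtual events.
IMPACT_THRESHOLD_G = 2.5

VIRTUAL_EVENT_MAP = {
    "impact": "virtual_entity_breathes_fire",
    "spin":   "virtual_entity_takes_flight",
}

def classify_real_event(accel_magnitude_g, gyro_rate_dps):
    if accel_magnitude_g >= IMPACT_THRESHOLD_G:
        return "impact"                     # e.g., the device struck a surface
    if gyro_rate_dps >= 720:                # fast spin of the self-propelled device
        return "spin"
    return None

def to_virtual_event(accel_magnitude_g, gyro_rate_dps):
    real_event = classify_real_event(accel_magnitude_g, gyro_rate_dps)
    return VIRTUAL_EVENT_MAP.get(real_event)

print(to_virtual_event(3.1, 0))     # impact -> virtual_entity_breathes_fire
print(to_virtual_event(1.0, 900))   # spin   -> virtual_entity_takes_flight
print(to_virtual_event(1.0, 100))   # nothing noteworthy -> None
```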
FIGS. 6A-6B show an example of using a mobile computing device linked to a second computing device 602 to control a self-propelled device 608 as a virtual entity 610 in a virtual environment 612. An exemplary system such as that shown in FIG. 6A may include a mobile computing device 604 in communication with a second computing device 602. The mobile computing device 604 may also operate the self-propelled device 608. The communication link 606 between the mobile computing device 604 and the second computing device 602 may allow the virtual environment 612 to be displayed on the mobile computing device 604 through the second computing device 602. Additionally, the mobile computing device 604 may then present the self-propelled device 608 as a virtual entity 610 within the virtual environment 612.
Referring to FIG. 6A, the second computing device 602 may provide the virtual environment 612 to the mobile computing device 604 via the communication link 606. The communication link 606 may be created by the mobile computing device 604 interacting with the second computing device 602. The second computing device 602 may be any computing device operable to provide such an environment 612. For example, the second computing device may be a desktop computer or a notebook computer. Alternatively, the second computing device may be a gaming console, such as those in the XBOX series of consoles developed by MICROSOFT, or those in the PLAYSTATION series of consoles developed by SONY COMPUTER ENTERTAINMENT. Alternatively, the second computing device may be any computing device capable of communicating the virtual environment 612 to the mobile computing device 604.
The virtual environment 612 provided by the second computing device may be displayed on the mobile computing device 604. The virtual environment 612 may be an environment developed for user interaction through the second computing device. For example, the virtual environment 612 may be a three-dimensional map provided by an online computer program. Alternatively, the virtual environment 612 may be a customized map of any number or variety, such as a real-world, adjusted, or augmented-reality map. Additionally or alternatively, the virtual environment 612 may be provided by a game console, for example, as part of a mobile game, mini game, instant game, or the like.
Additionally, the mobile computing device 604 may also be configured to access save data associated with the virtual environment 612 stored on the second computing device 602. The save data may correspond to, for example, previous interactions by the user within the virtual environment 612. Similarly, the second computing device 602 may also be configured to communicate the save data to the mobile computing device 604 and to consistently link previous interactions within the virtual environment 612 to the mobile computing device 604. Thus, for example, a user playing a game with the virtual environment 612 on the second computing device 602 may save data associated with the game, then access the saved data using the mobile computing device 604, and utilize the self-propelled device 608 as the virtual entity 610 (as shown in FIG. 6B) presented in the virtual environment 612.
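A rough sketch of this save-data hand-off follows; the data format, keys, and function split between devices are assumptions made only for illustration, and an actual console title would use its own save system.

```python
# Hypothetical save-data hand-off: the second computing device exports prior
# interactions in the virtual environment; the mobile device merges them into
# its current session so play can continue with the self-propelled device.
import json

def export_save_data():
    """Runs on the second computing device (e.g., a console)."""
    return json.dumps({
        "player": "user-1",
        "level": 4,
        "virtual_entity": {"kind": "pterosaur", "position": [12.0, 3.5]},
    })

def import_save_data(serialized, current_session):
    """Runs on the mobile computing device; links prior progress to this session."""
    saved = json.loads(serialized)
    current_session.update(saved)
    return current_session

session = {"device": "mobile-604", "connected_robot": "sphere-608"}
session = import_save_data(export_save_data(), session)
print(session["level"], session["virtual_entity"]["kind"])   # 4 pterosaur
```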
Referring to FIG. 6B, the self-propelled device 608 may be controlled by the mobile computing device 604. As shown, a virtual entity 610 representing the self-propelled device 608 may be presented on the mobile computing device 604 as an entity within the virtual environment 612. Additionally, the mobile computing device 604 may be configured to detect events related to the self-propelled device 608 in the real world, and then determine virtual events based on the detected events. The virtual events may then be merged into the virtual environment 612. As an example, as shown in FIG. 6B, the self-propelled device 608 may be represented in an augmented reality environment as a pterosaur character. A user of the mobile computing device 604 may manipulate the self-propelled device 608 by using the controls 614 to move the pterosaur presented on the mobile computing device 604. Moreover, the mobile computing device 604 may also support a variety of correlations between inputs for the self-propelled device 608 and virtual events for the virtual entity 610 corresponding to the self-propelled device 608.
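Purely for illustration, such a "variety of correlations" could be modeled as a table that maps each user input both to a command for the real device and to a virtual event for its entity; every name in the sketch below is invented rather than taken from the disclosure.

```python
# Hypothetical input-correlation table: each on-screen control maps to both a
# real-world command for the self-propelled device and a virtual event for
# the corresponding entity in the augmented reality environment.
INPUT_CORRELATIONS = {
    "swipe_forward": {"real": {"speed": 200, "heading_deg": 0},
                      "virtual": "pterosaur_glides_forward"},
    "double_tap":    {"real": {"speed": 0},
                      "virtual": "pterosaur_lands"},
    "shake_device":  {"real": {"led_color": "red"},
                      "virtual": "pterosaur_roars"},
}

def handle_input(user_input):
    correlation = INPUT_CORRELATIONS.get(user_input)
    if correlation is None:
        return None
    # In a real system the two halves would be dispatched over the wireless
    # links; here we simply return both parts of the correlated action.
    return correlation["real"], correlation["virtual"]

print(handle_input("swipe_forward"))
print(handle_input("double_tap"))
```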
Thus, in practice, a user may play anywhere in the real world by operating the self-propelled device 608 in a real-world environment. A parallel augmented reality world may be displayed on the mobile computing device 604 through a virtual entity 610 in a virtual environment 612 corresponding to the self-propelled device 608. Using the controls 614 for the self-propelled device 608, the user can operate the self-propelled device 608 and have it interact with the augmented reality world 612 via the virtual entity 610.
It is contemplated that the embodiments described herein extend to the various elements and concepts described herein independently of other concepts, ideas or systems, and that the embodiments include combinations of elements recited anywhere in this application. Although the embodiments are described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments. As such, many modifications and variations will be apparent to practitioners skilled in the art. Accordingly, it is intended that the scope of the invention be defined by the following claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an embodiment may be combined with other individually described features or with parts of other embodiments, even if the other features and embodiments make no mention of the particular feature. Thus, the absence of a reference to a combination should not preclude the inventors from claiming rights to such a combination.
One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. The programmatically performed steps may or may not be automatic.
One or more embodiments described herein may be implemented using programmed modules or components. A programmed module or component may comprise a program, a subroutine, a portion of a program, or a software component or a hardware component capable of performing one or more of the mentioned tasks or functions. As used herein, a module or component may exist on a hardware component independently of other modules or components. Alternatively, a module or component may be a shared element or process of other modules, programs, or machines.
Furthermore, one or more embodiments described herein may be implemented using instructions executable by one or more processors. The instructions may be carried on a computer readable medium. The machines shown and described in connection with the following figures provide examples of processing resources and computer-readable media on which instructions for implementing embodiments of the present invention may be carried and/or executed. In particular, many of the machines shown with embodiments of the present invention include a processor and various forms of memory for storing data and instructions. Examples of computer readable media include persistent memory storage devices, such as a hard drive on a personal computer or server. Other examples of computer storage media include portable storage units (such as CD or DVD units), flash memory (such as is carried on many cellular phones and tablet computers), or magnetic memory. Computers, terminals, network-enabled devices (e.g., mobile devices such as cellular telephones) are all examples of machines and apparatuses that utilize a processor, memory, and instructions stored on a computer-readable medium. In addition, embodiments may be embodied in the form of computer programs or computer usable carrier media that may carry such programs.
Although illustrative embodiments have been described herein with reference to the accompanying drawings, variations to specific embodiments and details are encompassed by this disclosure. It is intended that the scope of the invention be defined by the following claims and their equivalents. Furthermore, particular features described either individually or as part of an embodiment can be combined with other individually described features or with parts of other embodiments. Thus, the absence of a description of a combination should not preclude the inventors from claiming rights to such a combination.
While certain embodiments of the present invention have been described above, it will be understood that the described embodiments are by way of example only. Therefore, the invention should not be limited to the described embodiments. Rather, the scope of the invention described herein should be limited only in accordance with the claims that follow when taken in conjunction with the above description and accompanying drawings.

Claims (24)

1. A method for operating a mobile computing device to control a self-propelled device, the method comprising:
controlling movement of the self-propelled device in a real-world environment;
interacting with a second computing device providing a virtual environment to cause the second computing device to present a virtual entity in the virtual environment, the virtual entity corresponding to the self-propelled device moving in the real-world environment; and
displaying, by the second computing device, a virtual environment on the mobile computing device in accordance with the interaction with the second computing device.
2. The method of claim 1, wherein the method further comprises:
detecting an event related to a self-propelled device in a real-world environment;
determining a virtual event from the detected event; and
merging the virtual event into the virtual environment.
3. The method of claim 1, further comprising:
accessing save data associated with the virtual environment stored on the second computing device, the save data corresponding to a previous interaction in the virtual environment; and
communicating the save data to the mobile computing device and consistently linking previous interactions in the virtual environment to the mobile computing device.
4. A method as recited in claim 3, wherein the previous interactions and current controls of the self-propelled device correspond to game play in a virtual environment.
5. A method as recited in claim 4, wherein the gameplay is associated with one or more of a mini game, an instant play game, or a mobile game.
6. The method of claim 1, wherein the virtual environment displayed on the mobile computing device is linked to a display of the virtual environment implemented by the second computing device.
7. The method of claim 1, wherein the virtual environment corresponds to augmented reality.
8. The method of claim 1, wherein the mobile computing device is one or more of a smartphone, a tablet, or a laptop.
9. The method of claim 1, wherein the interaction with the second computing device is performed by the mobile computing device.
10. The method of claim 1, wherein the interacting step is performed by the second computing device.
11. A system, characterized in that the system comprises:
a self-propelled device; and
a mobile computing device operably linked to the self-propelled device, the mobile computing device configured to interact with a second computing device, the second computing device providing a virtual environment to be displayed on the mobile computing device;
wherein the self-propelled device is associated with an entity presented in the virtual environment.
12. The system of claim 11, wherein events associated with the self-propelled device are augmented and merged into a virtual environment as virtual events.
13. The system of claim 11, wherein the mobile computing device is configured to access save data associated with the virtual environment stored on the second computing device, the save data corresponding to previous interactions in the virtual environment.
14. The system of claim 13, wherein previous interactions in a virtual environment are consistently linked to current interactions by a user operating the self-propelled device via the mobile computing device.
15. The system of claim 14, wherein the previous interactions and the current interactions correspond to game play within the virtual environment.
16. A system as recited in claim 15, wherein the game play is associated with one or more of a mini game, an instant play game, or a mobile game.
17. The system of claim 11, wherein the virtual environment corresponds to augmented reality.
18. The system of claim 11, wherein the mobile computing device is one or more of a smartphone, a tablet computer, or a laptop computer.
19. The system of claim 11, wherein the mobile computing device performs one or more operations that enable the mobile computing device to interact with the second computing device.
20. The system of claim 11, wherein the second computing device performs one or more operations that interact with the mobile computing device.
21. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to:
enabling a user to control motion of a self-propelled device in a real-world environment using a mobile computing device;
interacting with a second computing device providing a virtual environment, such that the second computing device presents a virtual entity in the virtual environment that corresponds to the self-propelled device in the real-world environment; and
displaying, by the second computing device, a virtual environment on the mobile computing device.
22. The non-transitory computer-readable medium of claim 21, wherein the instructions further cause the one or more processors to:
detecting an event related to a self-propelled device in a real-world environment;
determining a virtual event according to the detected event; and
merging the virtual event into the virtual environment.
23. The non-transitory computer-readable medium of claim 21, wherein the instructions further cause the one or more processors to:
accessing save data associated with the virtual environment stored on the second computing device, the save data corresponding to a previous interaction of the virtual environment; and
communicating the save data to the mobile computing device and consistently linking previous interactions in the virtual environment to the mobile computing device.
24. The non-transitory computer-readable medium of claim 21, wherein the instructions further cause the one or more processors to:
presenting the virtual environment as augmented reality.
HK16109466.8A 2013-10-15 2014-10-09 Interactive augmented reality using a self-propelled device HK1221365A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US14/054,636 US9827487B2 (en) 2012-05-14 2013-10-15 Interactive augmented reality using a self-propelled device
US14/054,636 2013-10-15
PCT/US2014/059973 WO2015057494A1 (en) 2013-10-15 2014-10-09 Interactive augmented reality using a self-propelled device

Publications (1)

Publication Number Publication Date
HK1221365A1 true HK1221365A1 (en) 2017-05-26

Family

ID=52828562

Family Applications (1)

Application Number Title Priority Date Filing Date
HK16109466.8A HK1221365A1 (en) 2013-10-15 2014-10-09 Interactive augmented reality using a self-propelled device

Country Status (5)

Country Link
EP (1) EP3058717A4 (en)
CN (1) CN105745917A (en)
AU (1) AU2014334669A1 (en)
HK (1) HK1221365A1 (en)
WO (1) WO2015057494A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3119445A1 (en) 2021-02-03 2022-08-05 Adam Pyrométrie "RAKU" electric ceramic kiln on domestic power supply

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE0203908D0 (en) * 2002-12-30 2002-12-30 Abb Research Ltd An augmented reality system and method
US7704119B2 (en) * 2004-02-19 2010-04-27 Evans Janet E Remote control game system with selective component disablement
FR2908322B1 (en) * 2006-11-09 2009-03-06 Parrot Sa METHOD FOR DEFINING GAMING AREA FOR VIDEO GAMING SYSTEM
KR100969873B1 (en) * 2008-06-27 2010-07-13 가톨릭대학교 산학협력단 Robot game system and robot game method linking virtual space and real space
US8571781B2 (en) * 2011-01-05 2013-10-29 Orbotix, Inc. Self-propelled device with actively engaged drive system
JP5591281B2 (en) * 2011-06-03 2014-09-17 任天堂株式会社 Information processing system, information processing apparatus, information processing program, and moving image reproduction control method
US20130050069A1 (en) * 2011-08-23 2013-02-28 Sony Corporation, A Japanese Corporation Method and system for use in providing three dimensional user interface

Also Published As

Publication number Publication date
CN105745917A (en) 2016-07-06
WO2015057494A1 (en) 2015-04-23
AU2014334669A1 (en) 2016-05-05
EP3058717A1 (en) 2016-08-24
EP3058717A4 (en) 2017-07-26

Similar Documents

Publication Publication Date Title
US9827487B2 (en) Interactive augmented reality using a self-propelled device
US10192310B2 (en) Operating a computing device by detecting rounded objects in an image
US10048751B2 (en) Methods and systems for gaze-based control of virtual reality media content
US11262835B2 (en) Human-body-gesture-based region and volume selection for HMD
US9983687B1 (en) Gesture-controlled augmented reality experience using a mobile communications device
CN106383587B (en) Augmented reality scene generation method, device and equipment
KR102491443B1 (en) Display adaptation method and apparatus for application, device, and storage medium
US10864433B2 (en) Using a portable device to interact with a virtual space
US8847879B2 (en) Motionbeam interaction techniques for handheld projectors
CN108619721A (en) Range information display methods, device and computer equipment in virtual scene
CN111566596B (en) Real World Portal for Virtual Reality Displays
KR20230053717A (en) Systems and methods for precise positioning using touchscreen gestures
JP2016148901A (en) Information processing apparatus, information processing program, information processing system, and information processing method
JP2016148896A (en) Information processing apparatus, information processing program, information processing system, and information processing method
CN112755517A (en) Virtual object control method, device, terminal and storage medium
HK1221365A1 (en) Interactive augmented reality using a self-propelled device
JPWO2017217375A1 (en) Image display apparatus, image display method, and image display program
Quek et al. Obscura: A mobile game with camera based mechanics
EP4381480A1 (en) Adaptive rendering of game to capabilities of device
CN117234333A (en) VR object selection method, device, electronic equipment and readable storage medium
HK40043851A (en) Method and apparatus for controlling virtual object, terminal and storage medium