HK1175004A - Intelligent gameplay photo capture - Google Patents
- Publication number: HK1175004A
- Authority: HK (Hong Kong)
- Prior art keywords: player, captured, photograph, photographs, event
Description
Technical Field
The invention relates to intelligent gameplay photo capture.
Background
An electronic gaming platform may obtain user input from a plurality of sources. As one example, a game player may utilize a handheld controller device to provide control input to the platform. As another example, the player's body position may be obtained via one or more cameras or optical elements of the platform. The platform may track changes in the player's body position and use them as control input for the player.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Implementations are disclosed for identifying, capturing, and presenting high quality photo representations of actions occurring during play of a game employing motion tracking input techniques. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of an electronic game. The method also includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method also includes assigning respective scores to the plurality of captured photographs based at least in part on the comparison to the event-based scoring parameter. The method also includes associating, at an electronic storage medium, each captured photograph with the respective score assigned to it.
Drawings
FIG. 1 depicts an example gaming environment in accordance with at least one implementation.
FIG. 2 is a schematic diagram depicting an example computing environment in accordance with at least one implementation.
FIG. 3 is a flow diagram depicting an example method in accordance with at least one implementation.
FIGS. 4, 5, and 6 are diagrams depicting different states of a player's example body position as may be captured in a photograph.
Detailed Description
As described herein, multiple photographs of a player may be captured during active game play. The photographs may be scored to identify and present one or more higher-scoring photographs. Scoring of a captured photograph may be based on one or more event-based scoring parameters or photographic characteristics of the captured photograph, such as blur, sharpness, exposure, brightness, contrast, hue, temperature, saturation, or other suitable characteristics. For example, a photograph captured by a camera may appear blurred, particularly if the objects in the photograph are moving in poor ambient lighting conditions. A vector field of a model of the player may be used in scoring, with higher scores provided for photographs exhibiting lower velocity or acceleration (e.g., as indicated by less blurring). A baseline photograph may be captured prior to game play to provide control inputs for adjusting exposure and/or recording parameters of one or more cameras to reduce blur or optimize other suitable photographic features in the captured photographs.
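As a rough illustration of blur-based scoring, a sharpness metric can be computed from local pixel differences; the gradient-energy measure and function below are a minimal sketch of one possible approach, not the method specified by this disclosure.

```python
def sharpness_score(pixels):
    """Score a grayscale image (list of rows of 0-255 values) by gradient
    energy: sharp, in-focus photographs have stronger differences between
    neighboring pixels than blurred ones."""
    energy = 0
    count = 0
    for row in pixels:
        for x in range(len(row) - 1):
            energy += (row[x + 1] - row[x]) ** 2
            count += 1
    return energy / count if count else 0.0

# A crisp edge scores higher than a smooth ramp over the same brightness range.
sharp = [[0, 0, 255, 255]] * 4     # hard edge
blurred = [[0, 85, 170, 255]] * 4  # gradual ramp
assert sharpness_score(sharp) > sharpness_score(blurred)
```

A higher score would then favor the photograph for presentation, per the scoring scheme described above.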
Further, event-based scoring parameters may include actions of a player, a group of players, and/or other observing users. For example, a photograph capturing the player's actions during game play may be scored according to the player's pose. In one particular example, a user's pose is scored according to how precisely it matches a predefined pose (e.g., a virtual pose displayed on a graphical display). As yet another example, the reactions of other users observing a player during game play may be considered when scoring a photograph. In one particular example, a photograph that captures a player jumping high and a group of users reacting by cheering is scored based on both the player's jump and the reaction of the group. In other words, capturing the motion of a crowd may be of interest, and thus may contribute to a higher score for a photograph. As yet another example, photographs capturing a player's facial expressions may be scored according to predefined criteria. In one particular example, a facial recognition algorithm is performed on a photograph capturing the player's face to determine whether the player is laughing, and the score of the photograph is increased if so.
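The pose-matching idea above can be sketched as a distance comparison between tracked joint positions and a predefined target pose. The joint names, coordinate scheme, and inverse-distance scoring rule below are illustrative assumptions.

```python
import math

def pose_match_score(player_joints, target_pose):
    """Score how precisely a player's tracked joints match a predefined
    pose: 1.0 for an exact match, decreasing toward 0 as the mean joint
    distance grows. Joint names and the scoring rule are assumptions."""
    dists = [math.dist(player_joints[j], target_pose[j]) for j in target_pose]
    return 1.0 / (1.0 + sum(dists) / len(dists))

# Hypothetical target pose: both hands raised overhead.
target = {"left_hand": (0.0, 2.0), "right_hand": (2.0, 2.0)}
exact = pose_match_score(target, target)
off = pose_match_score({"left_hand": (0.5, 1.5),
                        "right_hand": (2.5, 2.5)}, target)
assert exact == 1.0 and off < exact
```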
FIG. 1 depicts an example gaming environment 100 according to at least one implementation. Gaming environment 100 includes a gaming system 110, gaming system 110 including one or more of a game console 112 and a vision subsystem 114. Vision subsystem 114 may include one or more cameras or optical elements. As one example, vision subsystem 114 may include one or more range cameras 115, 116 and an RGB camera 117.
The range cameras 115, 116 may provide depth sensing functionality, for example. In at least some implementations, the range cameras or depth sensors 115, 116 may include an infrared light projector and sensors for capturing reflected infrared light and/or ambient light. The RGB camera 117 may capture visible light from an ambient light source. In some implementations, vision subsystem 114 may also include an audio sensor 119 to detect audio signals in game environment 100 during game play. In one example, the audio sensor 119 may take the form of a microphone array.
Gaming environment 100 also includes a graphical display 118. Graphical display 118 may be a device separate from gaming system 110 or, alternatively, may comprise a component of gaming system 110. The game console 112 may communicate with the vision subsystem 114 to receive input signals from range cameras or depth sensors 115, 116, an RGB camera 117, and an audio sensor 119. The game console 112 may communicate with a graphical display 118 to present graphical information to the player.
A human user, referred to herein as player 120, may interact with game system 110 within capture volume 122. Capture volume 122 may correspond to a physical space that may be captured by one or more cameras or optical elements of vision subsystem 114. Player 120 may move within capture volume 122 to provide user input to game console 112 via vision subsystem 114. The player 120 may additionally utilize another user input device to interact with the gaming system 110, such as, for example, a controller, a mouse, a keyboard, or a microphone.
FIG. 2 is a schematic diagram depicting an example computing environment 200 according to at least one implementation. Computing environment 200 may include a computing device 210, one or more other computing devices (such as other computing device 212), and a server device 214, which may communicate with each other via a network 216. The network 216 may include, for example, one or more of a wide area network (e.g., the internet) or a local area network (e.g., an intranet).
Computing device 210 may correspond to one example implementation of the aforementioned gaming system 110 that includes at least game console 112 and vision subsystem 114. Computing device 210 may include one or more processors for executing instructions, such as example processor 220. These processors may be single core or multicore. Computing device 210 may include a computer-readable storage medium 222 having stored thereon or including instructions 224, which instructions 224 are executable by one or more processors, such as example processor 220, to perform one or more operations, processes, or methods described herein. In some implementations, the programs executed thereon may be configured for parallel or distributed processing.
The computer-readable storage medium 222 may include removable media and/or built-in devices. In some implementations, the computer-readable storage medium 222 may include optical memory devices (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others.
In some implementations, computer-readable storage media 222 may include removable computer-readable storage media that may be used to store and/or transfer data and/or instructions that may be executed to implement the methods and processes described herein. The removable computer-readable storage medium may take the form of, inter alia, a CD, DVD, HD-DVD, blu-ray disc, EEPROM, and/or floppy disk.
In some implementations, computer-readable storage medium 222 may include one or more physical, non-transitory devices. In contrast, in some embodiments, aspects of the instructions described herein may propagate in a transient manner through a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. In addition, data and/or other forms of information pertaining to the present invention may propagate through a pure signal.
As one example, computing device 210 may establish criteria that define one or more moments of interest corresponding to predefined player movements or positions occurring in a capture volume (such as capture volume 122 described previously). In at least some implementations, the established criteria may correspond to an expected pose that the player is likely to assume during game play. The expected pose may be game-related and/or may depend on the particular stage of the game with which the player is interacting. In some implementations, the moment of interest may be defined by a phase of game play or an action of the avatar. For example, in a ski game, the moment of interest may be defined as the game phase in which the virtual avatar jumps off the ski jump. In some implementations, moments of interest may selectively trigger photo capture and scoring during a player's game play. In other implementations, photos may be generated and scored relatively continuously during the player's game play.
The player may assume a number of different body positions during game play. For example, the player may indicate a jump by extending the arms and legs outward and/or by moving the feet off the ground. As another example, a player may indicate skiing by assuming a tucked skiing position. As yet another example, a player may plug a virtual hole present in a game by positioning the player's body at a particular location that corresponds to the location of the virtual hole. Two or more players may cooperate to indicate yet another action in the game. Computing device 210 may interpret the player's position via input signals received from vision subsystem 230. Vision subsystem 230 may correspond to, for example, vision subsystem 114 described previously.
Computing device 210 may capture, via one or more cameras of vision subsystem 230, a plurality of photographs of the player during game play. As one example, the aforementioned RGB camera 117 may be utilized to capture photographs of one or more players during active game play. Photographs captured via the vision subsystem 230 may be stored in a data store, such as, for example, local data store 226. Photos captured via vision subsystem 230 may additionally or alternatively be stored in a remote data store, such as server device 214.
Computing device 210 may score the captured photographs along a scale of desirability. In some implementations, the desirability scale may be associated with the criteria established to define one or more moments of interest, wherein if a player's movement or position corresponds to those criteria, a relatively higher score is assigned to the captured photograph. Computing device 210 may score the captured photographs using a variety of information, including information obtained from one or more depth or RGB cameras of vision subsystem 230.
In at least some implementations, the computing device 210 may score the captured photographs based at least in part on one or more event-based scoring parameters or photographic characteristics of the photographs. The photo features may include, for example, one or more of blur, sharpness, exposure, brightness, contrast, hue, temperature, saturation, and the like, as well as other suitable photo features. Note that event-based scoring parameters may include photo features. As one example, the computing device 210 may evaluate blur of different photo regions of the captured photo, where blur occurring in certain photo regions of the captured photo may be weighted more heavily than blur occurring in other photo regions of the captured photo.
Further, event-based scoring parameters may include actions of a player, a group of players, and/or other observing users. For example, facial recognition and/or skeletal frame recognition of the player and/or user may be utilized to identify one or more regions in the photograph that correspond to the player's face, arms, legs, torso, etc. As one example, a photograph capturing a player may be scored based on how precisely the player assumes a predefined pose (such as a virtual pose displayed on the graphical display 118). In some implementations, the player's pose may be scored along a scale of desirability according to how close different parts of the player's body are to the virtual pose's locations. In some implementations where multiple players interact with computing device 210, computing device 210 may score the pose of each player in the captured photograph. Further, when multiple players score highly simultaneously, a point bonus or score multiplier may be applied. In some implementations, the player's or user's facial expressions may be scored based on established criteria to contribute to the score of the photograph. For example, the established criteria may specify that if a photograph captures a player who is laughing, the score for the photograph is increased.
Further, scoring of photographs may be based not only on the actions of the player, but also on the actions of other users in the room around the player. For example, if a player's photograph received a high score due to a very high jump, other users observing the player may react by cheering the jump, and a photograph capturing the crowd's reaction may also be of interest and receive a correspondingly higher score.
In some implementations, the computing device 210 may score the captured photographs based at least in part on the audio signals captured by the audio sensor 119 at the time the photographs were captured. For example, sounds produced by a player, a group of players, and/or other viewing users may be identified and scored for a photograph. In one particular example, the score of the photograph may be increased based on the captured audio exceeding a sound level threshold (such as when the crowd is cheering). As another example, the score of a photograph may be increased based on the captured audio matching a model audio signature (e.g., player singing). As yet another example, photo capture may be triggered in response to an audio level exceeding an audio threshold and/or captured audio matching a model audio signature.
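The sound-level check described above might look like the following sketch, which computes the RMS level of the audio samples captured at the moment of the photograph and awards a score bonus when a threshold is exceeded; the threshold and bonus values are illustrative assumptions.

```python
import math

def audio_boost(samples, level_threshold=0.5, boost=10):
    """Return a score bonus when the RMS level of the audio captured at
    the moment of the photograph exceeds a threshold (e.g., the crowd is
    cheering); threshold and bonus values are illustrative assumptions."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return boost if rms > level_threshold else 0

quiet = [0.01, -0.02, 0.015, -0.01]      # ambient room noise
cheering = [0.8, -0.9, 0.85, -0.7]       # loud crowd reaction
assert audio_boost(quiet) == 0
assert audio_boost(cheering) == 10
```

The same predicate could also serve as a capture trigger, as the paragraph above notes.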
Computing device 210 may present the scored photographs to the player via input/output device 228. As one example, the input/output device 228 may include a graphical display, such as the graphical display 118 of FIG. 1 previously described. In some implementations, the computing device 210 may filter the captured photographs based at least in part on the respective scores assigned to them, and may vary the presentation of the captured photographs to the player in response to the filtering. For example, captured photographs may be filtered so that photographs with relatively high scores are presented to the user, while photographs with lower scores are not presented. Further, scores for the captured photographs may be presented to the user, and the computing device 210 may prompt the player to save or share one or more of the captured photographs after presenting the respective scores.
In at least some implementations, scoring of the captured photographs may be performed remotely, such as by server device 214. For example, computing device 210 may send one or more of the captured photographs to server device 214 via network 216 for scoring, whereby server device 214 may respond to computing device 210 via network 216 with respective scores for the one or more of the captured photographs.
In some implementations, server device 214 may host a social networking platform that enables players or users of computing device 210 to interact with players or users of other computing devices 212 via a social network. In at least some implementations, computing device 210 or server device 214 may identify one or more of the captured photographs that have been shared by a player of the computing device with one or more players within the social network. Scoring of the captured photographs may be further based at least in part on whether one or more of the captured photographs were shared by the player within the social network. For example, the score for a photograph may be increased in response to a player sharing the photograph with another player. Sharing of photographs may also occur via text messaging, email, or other suitable forms of communication. In some implementations, the computing device 210 may score the captured photographs based at least in part on the number of people who viewed and/or reacted to the captured performance of the player.
In at least some implementations, computing device 210 or server device 214 may identify player or user interactions with captured photographs, and may change scores of captured photographs in response to such interactions. Examples of user interactions include player ratings, player reviews, photo sharing (e.g., as previously discussed), and so forth. For example, computing device 210 or server device 214 may identify one or more player ratings assigned to the captured photograph, such as via a social network. Scoring of the captured photographs may be further based at least in part on the one or more player ratings. For example, captured photographs that are assigned a higher player rating may be scored relatively higher than captured photographs that are assigned a lower player rating. The player rating may associate thumbs up/thumbs down information, star rating information, numeric rating information, comments, or other suitable information with the captured photograph, for example, as metadata.
The score of the captured photograph may be associated with the captured photograph in a data store (e.g., data store 226) as scoring information. In some cases, scoring information may be utilized to select a subset of the captured photographs to present to the player, e.g., as previously discussed. The scoring information may also be utilized as feedback to computing device 210, which may be used to determine when or whether to capture more photographs of the player. For example, photographs associated with a relatively low score may cause computing device 210 to capture more photographs of the player during subsequent game play in an attempt to capture photographs with higher scores. As another example, photographs captured during one or more particular moments in the game may be correlated with higher scores relative to other moments in the game. Computing device 210 may then capture photographs of the player during those particular moments of subsequent game play in an attempt to capture higher-scoring photographs.
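The feedback loop just described, in which low-scoring photographs cause more photographs to be captured during subsequent game play, could be sketched as follows; the proportional scaling rule and all constants are illustrative assumptions.

```python
def next_capture_rate(recent_scores, base_rate=1.0, low_score=40, max_rate=4.0):
    """Raise the photo-capture rate (captures per second, say) for
    subsequent game play when recent photographs scored poorly, in an
    attempt to capture higher-scoring ones. Constants are illustrative."""
    if not recent_scores:
        return base_rate
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= low_score:
        return base_rate
    # Scale the rate up in proportion to how far scores fell short,
    # capped to avoid unbounded capture frequency.
    return min(max_rate, base_rate * (1 + (low_score - avg) / low_score))

assert next_capture_rate([80, 90]) == 1.0   # good photos: keep base rate
assert next_capture_rate([10, 10]) > 1.0    # poor photos: capture more often
```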
FIG. 3 is a flow diagram depicting an example method 300 in accordance with at least one implementation. Method 300 may include a method of identifying, capturing, and presenting a high quality photo representation of actions occurring during play of a game employing motion tracking input techniques (as previously described with reference to fig. 1 and 2). As one example, the photograph may be obtained via an optical interface during play of an electronic game. As one example, method 300 may be performed, at least in part, by computing device 210, server device 214, or a combination thereof, as previously described.
Operation 302 includes establishing a photo baseline condition via capture of a baseline photo prior to capturing a plurality of photos. In at least some implementations, the baseline photograph may include a combination of optical information obtained from two or more cameras or optical elements. For example, the optical information obtained from an RGB camera may be combined with the optical information obtained from one or more range cameras. Operation 304 includes, prior to capturing a plurality of photographs of the player during game play, adjusting exposure and/or recording parameters of the vision subsystem in response to the baseline photograph.
In at least some implementations, the adjustment of the exposure and/or recording parameters of the camera may be based on the entire capture volume of the baseline photograph or may be based on a particular region within the capture volume. For example, exposure and/or recording parameters may be adjusted in response to a region in the baseline photograph that corresponds to the player's face. In this way, a photograph of the player's face may be obtained during game play even under poor ambient lighting conditions.
In some implementations, the baseline process may be performed more frequently, such as each time a picture of the user is taken. By performing the baseline process more frequently, the computing system may adapt to changing conditions, such as a user turning on lights, more quickly.
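One way to sketch the baseline adjustment of operations 302 and 304: compute the mean brightness of the baseline photograph and scale an exposure parameter toward a target brightness. The proportional rule and target value are illustrative assumptions, not parameters specified by this disclosure.

```python
def adjust_exposure(baseline_pixels, target_brightness=128, current_exposure=1.0):
    """Adjust a camera exposure parameter from a baseline photograph taken
    before game play: dim scenes get longer exposure, bright scenes
    shorter. The proportional rule and target are illustrative."""
    flat = [p for row in baseline_pixels for p in row]
    mean = sum(flat) / len(flat)
    return current_exposure * target_brightness / max(mean, 1)

dim_room = [[30, 40], [35, 25]]        # poor ambient lighting
bright_room = [[200, 220], [210, 230]]
assert adjust_exposure(dim_room) > 1.0      # exposure increased
assert adjust_exposure(bright_room) < 1.0   # exposure decreased
```

As the text notes, the same computation could be restricted to the region of the baseline photograph corresponding to the player's face, and rerun each time conditions change.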
Operation 306 comprises establishing criteria defining one or more moments of interest corresponding to predefined player movements or positions occurring in the capture volume and/or game play events that direct the player to assume a pose. In at least some implementations, the established criteria correspond to an expected pose that the player is likely to assume during game play. For example, a player may perform a particular task within a game by moving or positioning the player's body at a particular location within the capture volume. The moments of interest at which these poses may be expected to occur may be based on the game being played and/or the particular phase of the game with which the player is interacting. For example, a moment of interest may be initiated by triggers located in different phases of the game where the player is expected to be in a particular pose. As another example, a moment of interest may be initiated by detection of a pose defined by the established criteria.
In some implementations, the moments of interest can be utilized to selectively trigger initiation of photo capture. As such, establishing criteria may include identifying trigger events during play of the electronic game. For example, a trigger event may include identifying a pose assumed by the player. As another example, a trigger event may include reaching a given stage of the electronic game. As yet another example, a trigger event may include a movement or pose of the player's avatar representation that approximates the established criteria. In some implementations where audio is captured via an audio interface during play of the electronic game, a trigger event may include the captured audio exceeding a sound level threshold or exhibiting a model audio signature.
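The trigger events enumerated above can be sketched as a single predicate: capture fires if any of a pose match, a designated game stage, or an audio level threshold is satisfied. All names and threshold values below are illustrative assumptions.

```python
def should_capture(pose_match, game_stage, audio_level,
                   trigger_stages=("ski_jump",), audio_threshold=0.5,
                   pose_threshold=0.8):
    """Return True when any trigger event fires: a sufficiently close
    pose match, a designated game stage (e.g., the avatar leaving the
    ski jump), or audio above a level threshold. Illustrative values."""
    return (pose_match >= pose_threshold
            or game_stage in trigger_stages
            or audio_level > audio_threshold)

assert should_capture(0.9, "menu", 0.1)          # strong pose match
assert should_capture(0.2, "ski_jump", 0.1)      # trigger stage reached
assert not should_capture(0.2, "menu", 0.1)      # no trigger fires
```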
In some implementations, operation 306 may be omitted and photo capture may be performed relatively continuously throughout game play. For example, in a computing system with a suitably large amount of computing resources, a photograph may be captured at each frame during game play.
Operation 308 comprises capturing a plurality of photographs of the player. As one example, multiple photographs may be captured via the RGB camera of the vision subsystem. In at least some implementations, capturing a plurality of photographs of the player includes capturing the plurality of photographs if the movement or position of the player corresponds at least in part to criteria established to define one or more moments of interest.
Operation 310 includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. As discussed above, the event-based scoring parameters may include various criteria established that may be compared to events corresponding to captured images. For example, the established criteria may define one or more predefined player movements or gestures within the capture volume. As another example, the established criteria may define one or more predefined avatar movements or poses of an avatar within the electronic game.
In some implementations, the event-based scoring parameters may include predefined movements or poses of one or more other players or people within the capture volume. Accordingly, captured photographs may be scored based on actions of multiple players and/or responses from other users observing a player or group of players. In some implementations where audio is captured via an audio interface during game play at a time corresponding to the captured photograph, the event-based scoring parameters may include a sound level threshold or model audio signature that may be compared to the captured audio corresponding to the captured photograph.
Operation 312 comprises assigning respective scores to the plurality of captured photographs based at least in part on the comparison to the event-based scoring parameter. Scores may be awarded along a desirability scale associated with the criteria established for the event-based scoring parameters, with relatively higher scores awarded to captured photographs in which the player's movement or position corresponds to the established criteria. As one example, assigning respective scores to captured photographs may include scoring a photograph higher if it depicts the player attempting to assume a predefined body pose.
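A minimal sketch of the score assignment at operation 312 might award a flat bonus to photographs whose detected pose corresponds to the established criteria, on top of a base quality score; the field names and bonus value are illustrative assumptions.

```python
def assign_scores(photos, criteria_pose, bonus=50):
    """Assign each captured photograph a score on a desirability scale:
    a photograph whose detected pose corresponds to the established
    criteria receives a relatively higher score. Names are illustrative."""
    scored = []
    for photo in photos:
        score = photo["base_quality"]
        if photo["detected_pose"] == criteria_pose:
            score += bonus
        scored.append((photo["id"], score))
    return scored

photos = [
    {"id": 1, "base_quality": 30, "detected_pose": "jump"},
    {"id": 2, "base_quality": 30, "detected_pose": "stand"},
]
assert assign_scores(photos, "jump") == [(1, 80), (2, 30)]
```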
The scoring performed at operation 312 may be further based at least in part on one or more photo features identified in the captured photo. The photo features may include, for example, one or more of blur, sharpness, exposure, brightness, contrast, hue, temperature, saturation, and the like, as well as other suitable photo features. As one example, scoring the captured photograph includes assigning a relatively higher score if the captured photograph exhibits less blur, with relatively lower scores assigned as an increasing function of blur in the captured photograph.
In at least some implementations, scoring the captured photographs may utilize weights associated with different portions of the captured scene. As one example, a photo feature of a photo representation of a player's face within a captured photo may be weighted more heavily than a photo feature of a photo representation of a player's lower body within a captured photo. Operation 312 may include, for example, evaluating blur or other photo characteristics of different photo regions of the captured photo, where blur occurring in certain photo regions of the captured photo is weighted more heavily in scoring than blur occurring in other photo regions of the captured photo. For example, if the blur is in a photograph area in the photograph that corresponds to the player's face, the blur may more severely reduce the score of the photograph.
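The region weighting just described might be sketched as a weighted sum of per-region blur estimates, with the face region weighted most heavily so that facial blur reduces the score more severely; the region names and weight values are illustrative assumptions.

```python
def weighted_blur_penalty(region_blur, weights=None):
    """Combine per-region blur estimates (0 = sharp, 1 = very blurred)
    into a single penalty, weighting blur on the player's face more
    heavily than blur on the lower body. Weights are illustrative."""
    weights = weights or {"face": 3.0, "torso": 1.5, "lower_body": 1.0}
    return sum(weights.get(region, 1.0) * blur
               for region, blur in region_blur.items())

face_blurred = {"face": 0.8, "torso": 0.1, "lower_body": 0.1}
legs_blurred = {"face": 0.1, "torso": 0.1, "lower_body": 0.8}
# Equal total blur, but the face-blurred photo is penalized more heavily.
assert weighted_blur_penalty(face_blurred) > weighted_blur_penalty(legs_blurred)
```

Subtracting this penalty from a base score would yield the region-sensitive behavior described above.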
In at least some implementations, blur or other photographic features may be identified or evaluated from a combination of optical information obtained from an RGB camera and one or more range cameras of a vision subsystem. For example, a function may be utilized to combine three or more scores, including a skeletal motion score, an RGB score, a depth score, and a lighting score. High-pass filtering or other suitable schemes may be applied to these optical signals or combinations thereof to identify or assess blur.
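Since the text says only that "a function may be utilized to combine" the skeletal motion, RGB, depth, and lighting scores, one simple candidate is a weighted linear combination, sketched below with illustrative weights.

```python
def combined_score(skeletal_motion, rgb, depth, lighting,
                   weights=(0.4, 0.3, 0.2, 0.1)):
    """Combine the sub-scores named in the text (skeletal motion, RGB,
    depth, lighting) into one photograph score via a weighted sum; the
    weights and the linear form are illustrative assumptions."""
    parts = (skeletal_motion, rgb, depth, lighting)
    return sum(w * s for w, s in zip(weights, parts))

# A still, well-lit capture outscores a fast-moving, dim one.
assert combined_score(90, 80, 70, 85) > combined_score(20, 40, 70, 30)
```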
In at least some implementations, scoring performed at operation 312 may also include identifying one or more of the captured photographs that are shared by the player with one or more other players within the social network. The scoring may be further based at least in part on whether one or more of the captured photographs were shared by the player within the social network. Photos shared with a large number of players may be increased in score to a greater extent than photos not shared with other players or shared with a lesser number of players.
In at least some implementations, the scoring performed at operation 312 may also include identifying one or more player ratings assigned to a photograph of the captured photographs. The scoring may further be based at least in part on one or more player ratings of the photograph. For example, the score of a photograph may be increased in response to a positive or higher player rating, while the score of a photograph may be decreased in response to a negative or lower player rating.
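The social adjustments described in the last two paragraphs (sharing with more players raising a score, each rating shifting it up or down) could be sketched as follows; the step sizes and the thumbs-up/down encoding of ratings as +1/-1 are illustrative assumptions.

```python
def social_adjustment(base_score, times_shared, ratings):
    """Adjust a photograph's score from social interactions: sharing
    with more players raises it, and each player rating shifts it up
    (positive) or down (negative). Step sizes are illustrative."""
    score = base_score + 5 * times_shared
    for rating in ratings:
        score += 2 if rating > 0 else -2
    return score

assert social_adjustment(50, times_shared=3, ratings=[+1, +1]) == 69
assert social_adjustment(50, times_shared=0, ratings=[-1]) == 48
```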
Operation 314 includes associating the captured photograph at the electronic storage medium with a respective score assigned to the captured photograph. For example, the assigned scores and associated photos may be stored in data store 226 of FIG. 2. In some implementations, the scored photographs may be saved in the electronic storage medium in response to receiving an indication from a user (e.g., via a user device) to save one or more scored photographs. Because the score is associated with the photograph in the electronic storage medium, the score can be easily retrieved to perform various operations. For example, the scored photographs may be utilized to adjust a difficulty level of the electronic game or perform another suitable operation.
Operation 316 includes presenting the player with one or more scored photographs. In at least some implementations, when presenting scored photographs to a player, operation 316 may also include prompting the player with a query regarding user actions that may be taken with respect to one or more of the relatively higher scored photographs. For example, the prompting may be performed via an output device, such as a graphical display. The query may include a query as to whether the player wants to save, upload, and/or send scored photographs to a desired location or to a desired user.
FIGS. 4, 5, and 6 are diagrams depicting different states of a player's example body position as may be captured in a photograph. FIG. 4 depicts a player that may be captured in a photograph in which there is no blur in the photograph. FIG. 5 depicts a player that may be captured in another photograph with blur in the photograph area corresponding to the player's face. FIG. 6 depicts a player that may be captured in yet another photograph, where blur is present in a different photograph area corresponding to the player's lower body.
If, for example, scoring of the captured photograph is based on the amount or location of blur in the captured photograph, different scores may be assigned to FIGS. 4, 5, and 6 relative to each other. For example, the score of FIG. 4 may be higher than those of FIGS. 5 and 6 because FIG. 4 includes less blur. If, for example, blur in the face region is weighted more heavily than blur in a region corresponding to another part of the player's body, the score of FIG. 5 may be lower than that of FIG. 6 because FIG. 5 has more blur in the player's face region. It should be appreciated, however, that FIGS. 4, 5, and 6 provide merely some non-limiting examples of how scoring may be utilized to differentiate photographs captured by a gaming system.
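The region-weighted blur scoring described above can be sketched as follows; the weights, the 0-to-1 blur measure per region, and the 100-point scale are assumptions for illustration only:

```python
def blur_score(region_blur, face_weight=3.0, body_weight=1.0):
    """Score a photo so that more blur lowers the score, with blur in
    the face region penalized more heavily than blur elsewhere.

    region_blur maps a region name (e.g. 'face', 'lower_body') to a
    blur amount in [0, 1], where 0 means no detectable blur.
    """
    penalty = 0.0
    for region, blur in region_blur.items():
        weight = face_weight if region == "face" else body_weight
        penalty += weight * blur
    # Start from a perfect score and subtract the weighted penalty.
    return max(0.0, 100.0 - penalty * 10.0)
```

Under these assumed weights, a blur-free photo (FIG. 4) scores highest, a photo with lower-body blur (FIG. 6) scores next, and a photo with the same amount of blur in the face region (FIG. 5) scores lowest, consistent with the ordering described above.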
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Also, the order of the above-described processes may be changed.
The subject matter of the invention includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims (10)
1. A method (300) of processing a photograph of a player obtained during play of an electronic game via an optical interface, the method comprising:
capturing (308) a plurality of photographs of the player in a capture volume via the optical interface during play of an electronic game; for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph (310); assigning (312) respective scores to the plurality of captured photographs based at least in part on the comparison; and associating (314) the captured photograph at an electronic storage medium with a respective score assigned to the captured photograph.
2. The method of claim 1, further comprising:
establishing criteria defining one or more predefined player movements or gestures within the capture volume;
wherein the event-based scoring parameters include established criteria defining one or more predefined player movements or gestures within the capture volume; and wherein the event depicted by or corresponding to the captured photograph comprises the player's movement or gesture within the capture volume.
3. The method of claim 1, further comprising:
capturing audio via an audio interface at a time corresponding to the captured photograph; wherein the event-based scoring parameter comprises a sound level threshold or a model audio signature; and wherein the event depicted by or corresponding to the captured photograph comprises a sound level or audio signature of the captured audio corresponding to the captured photograph.
4. The method of claim 1, further comprising:
establishing criteria defining one or more predefined avatar movements or poses of an avatar within the electronic game;
wherein the event-based scoring parameters comprise the established criteria defining the one or more predefined avatar movements or poses of the avatar within the electronic game; and wherein the event depicted by or corresponding to the captured photograph comprises a movement or gesture, within the capture volume, of the player whom the avatar represents.
5. The method of claim 1, further comprising:
wherein the event-based scoring parameters include predefined movements or gestures of one or more other players or persons within the capture volume; and wherein the event depicted by or corresponding to the captured photograph includes movement or gesture of the one or more other players or persons within the capture volume.
6. The method of claim 1, further comprising:
filtering the captured photographs based at least in part on the respective scores assigned to the captured photographs; and
changing presentation of the captured photograph to the player in response to the filtering.
7. The method of claim 1, further comprising:
filtering the captured photographs based at least in part on the respective scores assigned to the captured photographs;
presenting the respective scores to the player; and
prompting the player to save or share one or more of the captured photographs upon presenting the respective scores to the player.
8. The method of claim 1, further comprising:
identifying a trigger event during play of the electronic game; and initiating the capturing of the plurality of photographs in response to identifying the trigger event.
9. The method of claim 8, further comprising:
capturing audio via an audio interface during play of the electronic game; wherein the trigger event comprises the captured audio exceeding a sound level threshold or exhibiting a model audio signature.
10. The method of claim 9, further comprising:
establishing criteria defining one or more moments of interest corresponding to predefined movements or poses of the player occurring within the capture volume or of an avatar within the electronic game representing the player; and
wherein the trigger event comprises a movement or pose, of the player or of an avatar representing the player, approximating the established criteria defining the one or more moments of interest.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/975,166 | 2010-12-21 | | |
Publications (2)
Publication Number | Publication Date |
---|---|
HK1175004A true HK1175004A (en) | 2013-06-21 |
HK1175004B HK1175004B (en) | 2020-02-07 |