
US20180227482A1 - Scene-aware selection of filters and effects for visual digital media content - Google Patents


Info

Publication number
US20180227482A1
Authority
US
United States
Prior art keywords
digital media
visual digital
media item
visual
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/427,030
Inventor
Stefan Johannes Josef HOLZER
Matteo Munaro
Abhishek Kar
Alexander Jay Bruen Trevor
Krunal Ketan Chande
Michelle Jung-Ah Ho
Radu Bogdan Rusu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fyusion Inc
Original Assignee
Fyusion Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fyusion Inc filed Critical Fyusion Inc
Priority to US15/427,030
Assigned to Fyusion, Inc. reassignment Fyusion, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KAR, ABHISHEK, CHANDE, KRUNAL KETAN, HO, MICHELLE JUNG-AH, HOLZER, Stefan Johannes Josef, MUNARO, MATTEO, RUSU, RADU BOGDAN, TREVOR, ALEXANDER JAY BRUEN
Publication of US20180227482A1
Priority to US18/634,975 (US12381995B2)
Priority to US19/236,672 (US20250310468A1)

Classifications

    • H04N5/23222
    • G06K9/00711
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V20/10 Terrestrial scenes
    • G06V20/40 Scenes; Scene-specific elements in video content
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H04N5/23216
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Definitions

  • the present disclosure relates to the selection, recommendation, and application of filters and effects to visual digital media content.
  • Visual digital media content is commonly modified by applying filters and effects.
  • a visual filter may sharpen, blur, or emboss an image to introduce a desired visual effect.
  • current techniques are limited in their ability to modify complex digital media content such as video or multi-views.
  • users may face a large number of choices when attempting to select a particular filter or effect to apply to a digital media content item. Accordingly, it is desirable to develop improved mechanisms and processes relating to selecting filters and effects to apply to digital media content items.
  • a process implemented at a client device and/or embodied in a computer readable medium includes analyzing a visual digital media item with a processor to identify one or more characteristics associated with the visual digital media item, where the characteristics include a physical object represented in the visual digital media item.
  • a visual digital media modification is selected from a plurality of visual digital media modifications based on the identified characteristics for application to the visual digital media item.
  • the selected visual digital media modification is provided for presentation in a user interface for selection by a user.
  • the visual digital media item may be a video stream such as a live camera view captured via a camera.
  • the visual digital media item includes a surround view of the object, where the surround view of the object includes spatial information, scale information, and a plurality of different viewpoint images of the object.
  • the characteristics may include structure information indicating a physical context in which the physical object is positioned, pose information indicating an attitude or position associated with the physical object, and/or movement information indicating a degree of velocity or acceleration of the object.
  • the visual digital media modification may include a virtual object positioned within the visual digital media item, an artificial light source that appears to be blocked by the physical object in the visual digital media item, a change to the color of a portion of the visual digital media item, and/or motion blur indicating movement associated with the object.
  • the object may be a human being, and the visual digital media modification may include a text bubble appearing in proximity to a face.
  • analyzing the visual digital media item may involve receiving the visual digital media item at a server via a network from a client device, where providing the visual digital media modification for presentation in a user interface includes transmitting a message via the network to the client device.
  • analyzing the visual digital media item may involve transmitting the visual digital media item to a server via a network from a client device and receiving a response message at the client device, the response message identifying the one or more characteristics.
  • FIG. 1 illustrates one example of a process for performing visual digital media modification selection.
  • FIG. 2 illustrates one example of a procedure for performing visual digital media modification preprocessing.
  • FIG. 3 illustrates one example of a process for performing pose detection for an object.
  • FIG. 4 illustrates one example of a system that can be used to perform live video stream filtering.
  • FIG. 5 illustrates one example of a process for performing live filtering of a video stream.
  • FIG. 6 illustrates one example of a process for performing live filter processing of a video stream at a server.
  • FIG. 7 illustrates an example of a surround view acquisition system.
  • FIG. 8 illustrates an example of a device capturing multiple views of an object of interest.
  • FIG. 9 illustrates an example of a device capturing views of an object of interest to create a multi-view media representation to which a filter may be applied.
  • FIG. 10 illustrates a particular example of a computer system that can be used with various embodiments of the present invention.
  • although a system may be described as using a processor in a variety of contexts, it will be appreciated that a system can use multiple processors while remaining within the scope of the present invention unless otherwise noted.
  • the techniques and mechanisms of the present invention will sometimes describe a connection between two entities. It should be noted that a connection between two entities does not necessarily mean a direct, unimpeded connection, as a variety of other entities may reside between the two entities.
  • a processor may be connected to memory, but it will be appreciated that a variety of bridges and controllers may reside between the processor and memory. Consequently, a connection does not necessarily mean a direct, unimpeded connection unless otherwise noted.
  • improved mechanisms and processes are described for the selection, application, and recommendation of modifications to visual digital media items.
  • modifications may include image filters, virtual objects, digital effects, or other such alterations.
  • Such improved mechanisms and processes allow a user to be presented with modifications that are specifically relevant to the content, structure, context, motion, and/or poses of a visual digital media item. In this way, a user may be presented with only the relevant modifications and need not manually sift through a large number of irrelevant modifications. Further, if modifications are applied in a way that reflects the modified visual digital media item, then the modifications may appear to be more realistic, and a larger number of modifications may be possible. Also, specific modifications may be applied to different parts of the same scene.
  • Digital media modifications, also referred to herein as filters, modify and/or add to the visual data of a visual digital media item such as a static image, a video stream, or a multi-view interactive digital media representation.
  • filters can include any techniques for altering a visual digital media item.
  • a filter can alter the color information of the captured visual data by changing contrast, changing brightness, or applying a transformation to the underlying color matrix.
  • a filter can add additional elements to the scene such as 2D or 3D stickers or text placed relative to an object in the visual digital media item, or any other such alteration.
  • An artificial object can be placed relative to a two-dimensional object in two-dimensional space and/or relative to the three-dimensional reference coordinate system of a multi-view interactive digital media representation in three-dimensional space.
  • the application of the filters can happen live (e.g., in a live media stream or in a camera view) or in post-processing.
  • techniques and mechanisms described herein provide for improved user experience of a computer system.
  • A large set of different filters can be applied to visual digital media content. Due to the vast number of different filters that can be applied, conventional techniques require either a user to manually review a large number of filters to find the best filter for a certain use case, or application developers to limit the number of filters the user can select from. Both options are suboptimal for the user.
  • techniques and mechanisms described herein allow a user to be presented with filters specifically relevant to the user's content.
  • the process 100 may be performed at a client device. Alternately, the process 100 may be performed at a server. In yet another implementation, some operations shown in the process 100 may be performed at a client device while other operations are performed at a server in communication with the client device. In particular embodiments, the client device and the server may be implemented as different processes running on the same physical device. In other embodiments, the client device and the server may be implemented on different physical devices in communication via a network.
  • the process 100 begins when a request to apply a filter to a visual digital media item is received at 102 .
  • the request may be received when a user specifically requests to apply a filter to a visual digital media item.
  • the request may be generated automatically when triggered by a particular action such as the activation of a camera at a client device.
  • content information may include any indication of objects represented in the visual digital media item.
  • the identified content may include a human being, an animal, a plant, text, an inanimate object such as a vehicle, or an abstract shape such as a ball.
  • Structure information may include a ground plane or a wall. Semantic areas such as sky, grass, and water may also be identified.
  • pose information may indicate an attitude or position of an identified object. For example, if a human being is identified as being represented in the visual digital media item, then pose information may indicate whether the human being is sitting, standing, walking, or arranged in some other posture. Pose information may also be applied to other types of objects. For instance, pose information may indicate the position of a vehicle relative to the viewer, the stance of an animal, the attitude or position of a deformable inanimate object, or other such positioning information. Movement information may indicate the velocity or acceleration of an identified object. The movement may be identified relative to scene structure, another object, the viewpoint, or any other reference plane or point.
  • a variety of techniques may be used to identify content, structure, pose, and movement information. Identifying such information may involve applying a content recognition algorithm to visual media. For instance, a recognition algorithm may be applied to an image, a video frame, one or more images in a multi-view of an object, or a stream of video frames.
  • the specific techniques used to identify content, structure, pose, and movement information may depend on the particular implementation. For instance, different techniques may be used based on characteristics of the visual digital media item or the type of content, structure, pose, or movement information to be identified.
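  • As an illustrative sketch only, the analysis at 104 and 106 might be structured as follows; the detector, class names, and data layout here are assumptions, not the patent's implementation, and detect_objects() stands in for whatever recognition model is used.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    label: str                 # e.g. "person", "car", "sky"
    box: tuple                 # (x, y, w, h) in pixels
    pose: str | None = None    # e.g. "standing", "jumping", if applicable

@dataclass
class SceneCharacteristics:
    content: set = field(default_factory=set)      # objects represented in the item
    structure: set = field(default_factory=set)    # ground plane, sky, grass, water
    movement: dict = field(default_factory=dict)   # label -> pixels moved per frame

STRUCTURE_LABELS = {"sky", "grass", "water", "ground", "wall"}

def analyze_frames(frames, detect_objects) -> SceneCharacteristics:
    """Identify content, structure, and movement across a sequence of frames.
    detect_objects(frame) -> list[Detection] is a stand-in for the recognizer."""
    chars = SceneCharacteristics()
    prev_centers: dict = {}
    for frame in frames:
        for det in detect_objects(frame):
            target = chars.structure if det.label in STRUCTURE_LABELS else chars.content
            target.add(det.label)
            x, y, w, h = det.box
            center = (x + w / 2.0, y + h / 2.0)
            if det.label in prev_centers:   # movement relative to the viewpoint
                px, py = prev_centers[det.label]
                chars.movement[det.label] = (
                    (center[0] - px) ** 2 + (center[1] - py) ** 2) ** 0.5
            prev_centers[det.label] = center
    return chars
```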
  • one or more visual digital media modifications to present are selected at 108 .
  • one or more specific filters may be selected based on the content and/or context of the scene depicted in the visual digital media content. That is, the identification of objects such as cars, food or people, or the identification of semantic areas such as sky, grass, water may trigger the selection, recommendation, and/or application of specific filters. For example, if a car is present in the captured scene and is detected, then car-specific filters such as stickers, exhaust fumes, and other types of augmented reality modifications may be made available. As another example, if a person is present in the scene then filters specific for humans may be added, such as speech bubbles.
  • filters may be selected that apply to particular contexts, such as fashion shots or sporting events. For some objects, such as people and vehicles, poses and movements may also be used to trigger filters. If multiple objects are present, such as both a person and a vehicle, then filters may be selected that are specific to the combination of objects, instead of or in addition to filters that are specific to only persons or only vehicles.
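  • A minimal sketch of such content-triggered selection, assuming a hand-written rule table; the filter names and trigger rules below are illustrative examples, not the patent's actual catalog.

```python
# Each rule maps a required set of detected labels to candidate filters.
FILTER_RULES = [
    ({"car"},           ["exhaust_fumes", "motion_blur", "sticker_pack"]),
    ({"person"},        ["speech_bubble", "caption_bubble"]),
    ({"person", "car"}, ["race_scene_overlay"]),   # combination-specific filter
    ({"sky"},           ["sky_recolor", "retro_style"]),
]

def select_filters(content: set) -> list:
    """Return only filters whose required objects all appear in the scene."""
    selected = []
    for required, filters in FILTER_RULES:
        if required <= content:          # subset test: all required labels present
            selected.extend(filters)
    return selected

# e.g. select_filters({"person", "car", "sky"}) surfaces person-, car-,
# combination-, and sky-specific filters while hiding irrelevant ones.
```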
  • selected filters may be applied automatically to the visual digital media item.
  • the server may transmit an instruction to the client machine to apply one or more selected filters.
  • one or more filters may be provided for manual selection by a user in a user interface.
  • the server may transmit an instruction to the client machine identifying one or more filters to present.
  • one or more of a variety of context-specific filters may be selected.
  • certain filters may be selected for certain objects.
  • vehicles may be modified to include stickers, color changes, motion blur, or other such vehicle-specific alterations.
  • moving artificial objects may be added that react with the scene structure such as balls that bounce off of the ground, objects that accumulate on the ground, weather patterns such as rain or snow that interact with the scenery, artificial light sources that are blocked by objects in the scene, or other such alterations.
  • an artificial object may be automatically positioned relative to the scene and/or objects.
  • a filter may change the color of the sky or alter the “style” of a visual digital media item. For instance, the style may be altered to appear cartoonish or retro.
  • effects and filters may be added based on a detected pose.
  • a filter may be selected that will cause a laser beam or fire to shoot from the person's hands.
  • when two people are detected, effects or elements that connect the two people can be added, such as hearts indicating affection.
  • the person can be cut out of the scene and pasted in one or more times in a different position or pose, such as dancing.
  • parts of a human body can be replaced. For instance, a head may be replaced with an apple, a crocodile head, or some other object.
  • the context can be changed. For instance, when a person is detected as jumping, a filter may be applied to depict alligators or some other hazard beneath the person.
  • elements can be added that interact with a person. For instance, a snake can be added that moves up a person's body, or lightning can be added that traverses a person's body. An infinite variety of such alterations is possible.
  • a person may be cut or copied from a scene and then pasted back into the same scene one or more times in the same pose but a different position. For instance, a single dancing person may be copied multiple times in the same pose to create a crowd of dancing people.
  • analyzing the visual digital media item may involve transmitting some or all of the visual digital media item to a server for processing.
  • the server may then process the visual digital media item and respond with one or more recommended digital media modifications.
  • Techniques for client-server interactions are discussed in greater detail with respect to FIGS. 4, 5, and 6 .
  • visual digital media modifications may be selected based at least in part on explicit classifications or categorizations. For instance, a user or system administrator may specifically identify a particular type of modification as pertaining to a vehicle, a person, or both. Alternately, or additionally, visual digital media modifications may be selected based at least in part on implicit or machine-generated classifications or categorizations. For instance, the system may analyze user selections to help determine which modifications to suggest for which visual digital media items. Additional details regarding machine classification are discussed with respect to FIG. 2 .
  • the selections are provided for presentation in a user interface.
  • the specific technique for providing the selections for presentation may depend in part on the specific implementation. For instance, if the analysis is performed at a server, then providing the selections for presentation may involve transmitting a message to a client device with instructions for presenting the selections in a user interface at the client device. Alternately, if the analysis is performed at a client device, then the selections may be presented directly in a user interface.
  • analysis may continue until one or more conditions or criteria are met. These may include, but are not limited to: the receipt of user input indicating a request to stop analysis, the selection of a particular filter or filters for presentation, and the termination of a live or prerecorded media stream.
  • the procedure 200 may be performed in order to help determine which visual digital media modifications to suggest for which visual digital media items. To accomplish this, user-selections of filters may be analyzed to identify characteristics likely to make particular filters attractive in specific situations.
  • the procedure 200 may be performed at a server having access to a wide range of user selections or at a client machine.
  • the procedure 200 may be initiated when a request is received at 202 to perform visual digital media modification preprocessing.
  • the request may be generated automatically (for instance periodically) or manually (for instance by a system administrator).
  • a visual digital media item is selected for analysis at 204 .
  • a visual digital media item may be selected for analysis when it is associated with user-provided input indicating a selection of a filter for application to the item.
  • all filtered visual digital media items may be selected for analysis in some sequence.
  • a subset of the available data may be analyzed. For instance, a designated number of filtered visual digital media items may be selected in a particular category, such as filtered visual digital media items that include people, vehicles, or animals.
  • a visual digital media item is selected for analysis, it is analyzed at 206 to identify content and structure information and at 208 to identify pose and movement information. These analyses may be substantially similar to the operations 104 and 106 discussed with respect to FIG. 1 .
  • one or more user-selected visual digital media modifications are identified at 210 .
  • a user initially viewing the media item may have been presented with a set of filters to choose from and then selected a particular filter to apply to the media item. The user's choice may then have been recorded for further analysis.
  • one or more weights associated with the identified content, structure, pose, and/or movement information is updated.
  • updating the one or more weights may involve indicating a connection between a characteristic of the media item and the selected filter.
  • a particular filter is identified as having been applied to a particular visual digital media item that analysis reveals to include a moving car, stationary flowers, and a walking dog.
  • the weights linking the particular filter to each of these characteristics are increased. For instance, if the filter includes flames that shoot in a particular direction, then users may be more likely to apply the filter to moving cars. Over time, the weight linking the filter to the media item characteristic of a moving car may be increased so that eventually the system may automatically recommend the shooting flames filter when the media item includes a car without needing a human to explicitly flag the filter as being specific to vehicles.
  • analysis may continue until one or more conditions or criteria are met. These may include, but are not limited to: the receipt of user input indicating a request to stop analysis, the selection of a particular set of weights, and the analysis of all available user selections.
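  • The weight update in FIG. 2 might look like the following sketch, assuming a simple additive counter per (characteristic, filter) pair; the learning rate and storage scheme are assumptions.

```python
from collections import defaultdict

weights: dict = defaultdict(float)   # (characteristic, filter) -> weight
LEARNING_RATE = 0.1                  # assumed; how fast selections shift weights

def record_selection(characteristics: list, chosen_filter: str) -> None:
    """Strengthen the link between each observed characteristic and the filter."""
    for c in characteristics:
        weights[(c, chosen_filter)] += LEARNING_RATE

def recommend(characteristics: list, catalog: list, top_k: int = 3) -> list:
    """Rank catalog filters by their accumulated weight for this media item."""
    scores = {f: sum(weights[(c, f)] for c in characteristics) for f in catalog}
    return sorted(catalog, key=scores.get, reverse=True)[:top_k]

# Over many selections of a shooting-flames filter on media containing a
# moving car, weights[("moving car", "shooting_flames")] grows until the
# system can recommend that filter automatically, as described above.
```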
  • FIG. 3 illustrates one example of a process for performing pose detection for an object.
  • pose detection may be used for any of various purposes.
  • pose detection may be used to trigger a filter.
  • pose detection may be used to determine if a person is exhibiting a particular pose, such as pumping a fist in the air. If such a pose is detected, then the video stream may be altered to depict lightning extending from the fist.
  • skeleton detection may be used to trigger a photo to be captured. For instance, a person may position a camera to take a self-image and then move in front of the camera.
  • the camera may then capture an image when it identifies the person's skeleton and determines that the person has stopped moving or has entered into a particular pose, such as jumping in the air.
  • the procedure 300 is initiated when a request is received at 302 to perform pose detection for a video stream.
  • skeleton detection operations for the video stream are performed.
  • skeleton detection operations may be performed using any of various suitable methods.
  • a convolutional neural network may be applied to an image to first detect all objects in the scene and then estimate the skeleton joints for those that belong to the “person” category.
  • static skeleton detection at the server may be combined with skeleton detection and/or tracking across prior frames. For instance, the results of one or more skeleton detection operations for previous skeleton detection messages may be analyzed to aid in the detection of a skeleton for the current frame.
  • non-visual data such as accelerometer or gyroscopic data may be analyzed to aid in skeleton detection.
  • during pose detection, the detected human skeleton may be used to determine whether the arrangement of the skeleton at a particular point in time matches one or more of a discrete set of human poses.
  • pose detection may be accomplished by first estimating a homography from the skeleton joints in order to frontalize the skeleton for a better pose estimate. Then, pose detection may be performed by analyzing spatial relations of the frontalized joints. Next, a temporal filtering method may be applied to remove spurious detections. In particular embodiments, such techniques may be applied to detect poses for either individuals or for multiple people.
  • pose detection may involve scaling or stretching location information associated with the detected skeleton and then comparing it with predetermined location information associated with specific poses, where a high degree of similarity between the detected skeleton information and the predetermined skeleton pose information indicates a match, as sketched below.
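  • A rough sketch of this matching step, substituting a simple centroid-and-scale normalization for the homography-based frontalization described above; the joint layout and distance threshold are assumptions.

```python
import numpy as np

def normalize_skeleton(joints: np.ndarray) -> np.ndarray:
    """Center joints (N x 2) on their centroid and scale to unit size --
    a crude stand-in for homography-based frontalization."""
    centered = joints - joints.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max() or 1.0
    return centered / scale

def match_pose(joints: np.ndarray, templates: dict, threshold: float = 0.15):
    """Return the name of the closest pose template, or None if nothing is close."""
    best_name, best_dist = None, float("inf")
    normalized = normalize_skeleton(joints)
    for name, template in templates.items():
        dist = np.linalg.norm(normalized - normalize_skeleton(template),
                              axis=1).mean()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```

  • A temporal filter, such as requiring the same match over several consecutive frames, would then suppress the spurious detections mentioned above.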
  • pose detection may trigger different events, such as the application of specific filters to a video stream.
  • the detection of a specific pose may trigger the recommendation of one or more filters to the user for the user to select.
  • pose detection may be used to suggest or identify start and/or end times for an effect as well as the type of effect that could be added.
  • With reference to FIG. 4, shown is one example of a system that can be used to perform live video stream filtering.
  • a combination of client and server applications is used to implement a filtering mechanism that runs live in a capture device application, such as with a camera on a smartphone. While the camera is recording, the user points the camera at an object.
  • the smartphone then communicates with the server, and collectively the two devices analyze the video stream to provide a filtered view of the video stream in real time.
  • the client is depicted as device 404 , which can be a capture device such as a digital camera, smartphone, mobile device, etc.
  • the server is depicted as system 402 , which receives images selected from the video stream at the client device.
  • the video stream at the client device is divided into video frames 451 through 461 .
  • the server processes the frames sent from the client device and responds with filtering information that can be used to apply a filter to the video stream at the client device.
  • the client device includes a camera 406 for capturing a video stream, a communications interface 408 capable of communicating with the server, a processor 400 , memory 402 , and a display screen 404 on which the video stream may be presented.
  • the client and server may coordinate to apply a filter to the video stream at least in part due to limited computing resources at the client machine.
  • the network latency and processing time involved in transmitting video frames to the server means that the video stream at the client device has progressed to a new video frame before receiving the filter processing message from the server with the filter information associated with the preceding frame.
  • the first request 471 transmits the frame 451 to the server, while the first response 472 corresponding to the frame 451 arrives while the frame 455 is being processed.
  • the second request 473 and third request 474 transmit frames 455 and 457 respectively, but the corresponding second and third responses 475 and 476 are not received until the video stream has arrived at frames 459 and 461 respectively.
  • the client application sends (and also receives) data in a sparse manner, meaning that data is not necessarily sent to the server for every frame captured by the camera. Therefore, in order to present a filtered result for a live stream, the information received from the server is tracked or propagated to new frames received from the camera until new information from the server is available. For example, in FIG. 4 , the client device may propagate information received in the first response 472 through frames 456 , 457 , and 458 until the second response 475 is received for the processing of frame 459 .
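  • One way to implement this propagation, sketched here with OpenCV's pyramidal Lucas-Kanade optical flow carrying server-identified keypoints across intermediate frames; the keypoint layout is an assumption.

```python
import cv2
import numpy as np

def propagate_keypoints(prev_gray: np.ndarray, cur_gray: np.ndarray,
                        keypoints: np.ndarray) -> np.ndarray:
    """Track keypoints (N x 1 x 2, float32) from the previous grayscale frame
    to the current one, dropping points the tracker loses."""
    moved, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, keypoints, None)
    return moved[status.flatten() == 1]
```

  • Between the responses in FIG. 4 (e.g., between 472 and 475 ), the client would call this once per incoming frame, replacing the tracked keypoints whenever fresh server results arrive.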
  • With reference to FIG. 5, shown is one example of a process for performing live filtering of a video stream.
  • the process shown in FIG. 5 may be performed at a client machine in communication with a server, such as the client machine 104 in communication with the server 105 shown in FIG. 1 .
  • the two devices may coordinate to split the processing operations required to apply a filter to a live video stream.
  • a live filtering process 500 begins with the client device receiving a request to perform filtering of a video stream at 502 .
  • the request may be generated based on user input requesting the application of a filter. Alternately, the request may be generated automatically when the client device detects that a video stream is being captured or displayed at the client device.
  • the system selects a video stream frame for processing at 504 .
  • video stream frames may be processed sequentially. For instance, each frame in a live video stream may be processed prior to presenting the video stream to the user so that a filter may be applied.
  • criteria may be used to select a video stream frame for transmission to the server. For example, if the filtering process has just been initiated, then the client device may select the first available video stream frame for processing. As another example, one or more criteria may be applied to select the video stream frame. For instance, the client device may select a video stream frame that exceeds a threshold level of light or detail to allow for sufficient information for applying a filter. As yet another example, the client device may select a video stream frame for processing after a designated period of time or number of frames have passed since the last video stream frame was transmitted to the server.
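  • A sketch of such selection criteria, under assumed thresholds; the brightness, detail, and spacing values below are illustrative, not prescribed by the patent.

```python
import cv2
import numpy as np

MIN_BRIGHTNESS = 40.0   # assumed: minimum mean gray level (0-255)
MIN_DETAIL = 100.0      # assumed: minimum variance of the Laplacian (focus measure)
MIN_FRAME_GAP = 15      # assumed: frames to wait between transmissions to the server

def should_send(gray: np.ndarray, frames_since_last: int) -> bool:
    """Decide whether a frame has enough light and detail to be worth sending."""
    if frames_since_last < MIN_FRAME_GAP:
        return False
    if gray.mean() < MIN_BRIGHTNESS:
        return False
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= MIN_DETAIL
```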
  • information about the selected frame is transmitted to the server at 508 .
  • a variety of information may be transmitted to the server.
  • some or all of the image data associated with the frame may be transmitted.
  • the entire frame may be transmitted.
  • the frame may be compressed or downsampled to reduce bandwidth usage.
  • IMU information such as gyroscopic data, compass data, or accelerometer data may be transmitted. This IMU information may provide data about the position, velocity, acceleration, direction, rotation, or other such characteristics of the device around the time that the frame was captured.
  • GPS information may be transmitted.
  • the specific information transmitted to the server may depend on the type of processing being performed at the server and/or the type of filter being applied at the client device.
  • the server sends messages that include information for applying filters to frames, but these filter processing messages are sent at a lag when compared with the live processing and presentation of the video stream.
  • a filter is applied based on existing data that is locally available at the client machine.
  • applying a filter based on locally available data may involve propagating information from one frame to another. For instance, a current frame may be analyzed to identify the same feature (e.g., an object corner or an area of color) that was identified in the preceding frame.
  • a multitude of approaches can be used to propagate information from one frame to another.
  • One such approach is frame-to-frame tracking, which can be based on information that may include, but is not limited to: tracking of sparse keypoints, dense or sparse optical flow, patch tracking, tracking of geometric instances, or other such information.
  • Another such approach is frame-to-frame matching, which involves techniques that may include, but are not limited to: descriptor-based matching of keypoints which are detected in both frames, patch matching, detection and matching of higher level features (e.g. a human face), or other such techniques. Both approaches can focus the tracking and matching efforts on regions or features of interest if such regions or features are identified.
  • A special processing case covers the time from when the first frame is sent to the server until the corresponding results are received back from the server. Since there is no server-created scene interpretation available until the results of the first frame are received, the client device may not know which specific information in the scene needs to be propagated. Various approaches are possible for handling this situation. In one example, all or most information in the frame is equally propagated. For instance, keypoints may be distributed over the whole image. In a second example, an efficient method for estimating one or more regions of interest may be applied on the client device. For instance, a bounding box for the region may be computed. Then, the propagation of information may be concentrated on the region or regions of interest. In a third example, matching methods may be applied to directly match the information extracted from the first frame to the frame after which the results from the server are available.
  • a filter is applied based on both the locally available data and the data provided by the server.
  • new information received from the server may be combined with the information propagated from frame to frame.
  • old information may be replaced with new information received from the server.
  • old information may be combined with new information in a weighted fashion, for instance based on relative confidence values associated with server results and propagation results.
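  • A minimal sketch of that weighted combination, assuming the tracked state is a 2D feature location and the confidences are supplied by the local tracker and the server respectively.

```python
import numpy as np

def fuse_estimates(propagated: np.ndarray, prop_conf: float,
                   server: np.ndarray, server_conf: float) -> np.ndarray:
    """Blend a locally propagated estimate with a (possibly stale) server
    estimate, weighting each by its confidence value."""
    total = prop_conf + server_conf
    if total == 0.0:
        return propagated          # no information either way; keep local state
    return (prop_conf * propagated + server_conf * server) / total
```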
  • the specific operations performed to apply a filter may depend in large part upon the specific type of filter being applied.
  • a caption bubble may be applied to a video of a person when the person exhibits a particular pose.
  • the server may perform skeleton detection to facilitate pose estimation while the client device tracks low-level image features such as a point associated with a person's elbow or a surface area that is part of the background. Then, the client device may combine the low-level feature tracking information with the skeleton detection information provided by the server to determine whether the person is positioned in the particular pose.
  • a filter may be applied to a vehicle based on its position (e.g., crossing a finish line). In this second example, the server may perform segmentation to identify the boundaries and characteristics of the vehicle, while the client device tracks low-level features such as shapes to propagate the location of the vehicle between communications with the server.
  • the filtered frame is provided for presentation at 516 .
  • providing the filtered frame for presentation may involve displaying the filtered frame as part of the video stream on a display screen.
  • the filtered frame may be stored to memory and/or persistent storage for later playback.
  • the filtered frame may be transmitted to a separate device for presentation, such as an augmented reality or virtual reality device in communication with the client device.
  • a determination is made as to whether to process an additional frame.
  • additional frames may be processed until any of a variety of conditions are met. These conditions may include, but are not limited to: receiving user input indicating a request to terminate live filtering, determining that the video stream has terminated, or determining that the server is inaccessible via the network.
  • the procedure 600 may be performed in order to perform server-side processing to facilitate the live filtering of a media stream at a client device.
  • the procedure 600 may be initiated at 602 when a live filtering request message for a video stream is received from a client device.
  • a live filtering request message may include the identity of the client device as well as any information necessary for performing live filtering, such as image data information associated with one or more video stream frames, IMU information, or GPS information.
  • the server may identify information associated with one or more prior video frames in the video stream.
  • the prior video frame information may include any raw data transmitted from the client device in earlier live filtering request messages.
  • the prior video frame information may include processed or filtered data generated by processing previous live filtering request messages.
  • the server performs filter processing operations for the video stream.
  • the specific filter processing operations performed may depend in large part on the particular type of filter being applied to the video stream.
  • Some examples of processing operations running on the server may include, but are not limited to: detection, segmentation, and pose estimation. Such methods may be applied to objects that include, but are not limited to: humans, animals, vehicles, inanimate objects, and plants.
  • Other examples of methods running on the server may include, but are not limited to: depth estimation, scene reconstruction, scene decomposition, and semantic labeling.
  • the filter processing operations may generate a wide range of information.
  • the filter processing operations may generate location information that identifies locations of high-level features such as faces or skeleton components on image data sent from the client device.
  • the filter processing operations may include or identify virtual elements to overlay on top of the video stream at the client device.
  • the filter processing operation may identify the video stream as including footage of a running dog and then indicate as one filter possibility a cape that could be overlain on the moving image of the dog to generate a visual effect of a “super dog.”
  • the filter processing operations may include semantic elements such as labels for recognized objects or words generated by applying optical character recognition to text in the video stream.
  • the filter processing message may include any suitable information, including location information identifying the location of features in the scene, semantic information that identifies the meaning of particular elements in the scene, or data for overlaying virtual elements on top of the scene.
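  • The server side of this exchange (FIG. 6) might be sketched as follows; analyze_frame() is a hypothetical stand-in for the detection, segmentation, and pose estimation models named above, and the message fields are assumptions.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FilterProcessingMessage:
    frame_id: int
    locations: dict   # e.g. {"face": [x, y, w, h], "skeleton_joints": [...]}
    semantics: dict   # e.g. {"labels": ["dog", "grass"], "ocr_text": "FINISH"}
    overlays: list    # virtual elements the client may composite on the stream

def handle_live_filter_request(request: dict, analyze_frame) -> str:
    """Run server-side analysis for one frame and build the reply message."""
    locations, semantics, overlays = analyze_frame(
        request["image"], request.get("imu"), request.get("gps"))
    reply = FilterProcessingMessage(
        frame_id=request["frame_id"],
        locations=locations, semantics=semantics, overlays=overlays)
    return json.dumps(asdict(reply))
```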
  • a multi-view interactive digital media representation includes much more information than a single image. Whereas a single image may include information such as a grid of color pixels and the date/time of capture, a multi-view interactive digital media representation includes information such as grids of color pixels, date/time of capture, spatial information (flow/3D), location, and inertial measurement unit (IMU) information (i.e., compass, gravity, orientation).
  • a multi-view interactive digital media representation brings focus to an object of interest because it provides separation between the foreground and background.
  • a multi-view interactive digital media representation provides more information about the scale, context, and shape of the object of interest.
  • aspects of the object that are not visible from a single view can be provided in a multi-view interactive digital media representation.
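  • An illustrative container for this per-viewpoint information; the field names are assumptions, not the patent's storage format.

```python
from dataclasses import dataclass, field

@dataclass
class MultiViewItem:
    images: list                                  # per-viewpoint grids of color pixels
    timestamps: list                              # date/time of capture per image
    locations: list                               # capture location per image
    imu: list                                     # compass / gravity / orientation per image
    spatial: dict = field(default_factory=dict)   # flow / 3D and scale information
```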
  • the surround view acquisition system 700 is depicted in a flow sequence that can be used to generate a surround view.
  • the data used to generate a surround view can come from a variety of sources.
  • data such as, but not limited to, two-dimensional (2D) images 704 can be used to generate a surround view.
  • 2D images can include color image data streams such as multiple image sequences, video data, etc., or multiple images in any of various formats for images, depending on the application.
  • Another source of data that can be used to generate a surround view includes location information 706 .
  • This location information 706 can be obtained from sources such as accelerometers, gyroscopes, magnetometers, GPS, Wi-Fi, IMU-like systems (Inertial Measurement Unit systems), and the like.
  • Yet another source of data that can be used to generate a surround view can include depth images 708 .
  • These depth images can include depth, 3D, or disparity image data streams, and the like, and can be captured by devices such as, but not limited to, stereo cameras, time-of-flight cameras, three-dimensional cameras, and the like.
  • the data can then be fused together at sensor fusion block 710 .
  • a surround view can be generated from a combination of data that includes both 2D images 704 and location information 706 , without any depth images 708 provided.
  • depth images 708 and location information 706 can be used together at sensor fusion block 710 .
  • Various combinations of image data can be used with location information at 706 , depending on the application and available data.
  • the data that has been fused together at sensor fusion block 710 is then used for content modeling 711 and context modeling 714 .
  • the subject matter featured in the images can be separated into content and context.
  • the content can be delineated as the object of interest and the context can be delineated as the scenery surrounding the object of interest.
  • the content can be a three-dimensional model, depicting an object of interest, although the content can be a two-dimensional image in some embodiments.
  • the context can be a two-dimensional model depicting the scenery surrounding the object of interest.
  • the context can also include three-dimensional aspects in some embodiments.
  • the context can be depicted as a “flat” image along a cylindrical “canvas,” such that the “flat” image appears on the surface of a cylinder.
  • some examples may include three-dimensional context models, such as when some objects are identified in the surrounding scenery as three-dimensional objects.
  • the models provided by content modeling 711 and context modeling 714 can be generated by combining the image and location information data.
  • context and content of a surround view are determined based on a specified object of interest.
  • an object of interest is automatically chosen based on processing of the image and location information data. For instance, if a dominant object is detected in a series of images, this object can be selected as the content.
  • a user specified target 702 can be chosen. It should be noted, however, that a surround view can be generated without a user specified target in some applications.
  • one or more enhancement algorithms can be applied at enhancement algorithm(s) block 716 .
  • various algorithms can be employed during capture of surround view data, regardless of the type of capture mode employed. These algorithms can be used to enhance the user experience. For instance, automatic frame selection, stabilization, view interpolation, filters, and/or compression can be used during capture of surround view data.
  • these enhancement algorithms can be applied to image data after acquisition of the data. In other examples, these enhancement algorithms can be applied to image data during capture of surround view data.
  • automatic frame selection can be used to create a more enjoyable surround view. Specifically, frames are automatically selected so that the transition between them will be smoother or more even.
  • This automatic frame selection can incorporate blur and overexposure detection in some applications, as well as more uniform sampling of poses such that they are more evenly distributed.
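  • A sketch of such uniform pose sampling, assuming per-frame yaw angles are available from IMU data; the step size is illustrative and angle wraparound at 360 degrees is ignored for brevity.

```python
def sample_uniform_poses(frames: list, yaw_angles: list,
                         min_step_deg: float = 5.0) -> list:
    """Keep a frame only when the camera yaw has moved at least min_step_deg
    since the last kept frame, so retained views are evenly distributed."""
    kept, last_yaw = [], None
    for frame, yaw in zip(frames, yaw_angles):
        if last_yaw is None or abs(yaw - last_yaw) >= min_step_deg:
            kept.append(frame)
            last_yaw = yaw
    return kept
```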
  • stabilization can be used for a surround view in a manner similar to that used for video.
  • key frames in a surround view can be stabilized to produce improvements such as smoother transitions, improved/enhanced focus on the content, etc.
  • there are many additional sources of stabilization for a surround view such as by using IMU information, depth information, computer vision techniques, direct selection of an area to be stabilized, face detection, and the like.
  • IMU information can be very helpful for stabilization.
  • IMU information provides an estimate, although sometimes a rough or noisy estimate, of the camera tremor that may occur during image capture. This estimate can be used to remove, cancel, and/or reduce the effects of such camera tremor.
  • depth information if available, can be used to provide stabilization for a surround view. Because points of interest in a surround view are three-dimensional, rather than two-dimensional, these points of interest are more constrained and tracking/matching of these points is simplified as the search space reduces. Furthermore, descriptors for points of interest can use both color and depth information and therefore, become more discriminative.
  • automatic or semi-automatic content selection can be easier to provide with depth information. For instance, when a user selects a particular pixel of an image, this selection can be expanded to fill the entire surface that touches it.
  • content can also be selected automatically by using a foreground/background differentiation based on depth. In various examples, the content can stay relatively stable/visible even when the context changes.
  • computer vision techniques can also be used to provide stabilization for surround views. For instance, key points can be detected and tracked. However, in certain scenes, such as a dynamic scene or static scene with parallax, no simple warp exists that can stabilize everything. Consequently, there is a trade-off in which certain aspects of the scene receive more attention to stabilization and other aspects of the scene receive less attention. Because a surround view is often focused on a particular object of interest, a surround view can be content-weighted so that the object of interest is maximally stabilized in some examples.
  • Another way to improve stabilization in a surround view includes direct selection of a region of a screen. For instance, if a user taps to focus on a region of a screen, then records a convex surround view, the area that was tapped can be maximally stabilized. This allows stabilization algorithms to be focused on a particular area or object of interest.
  • face detection can be used to provide stabilization. For instance, when recording with a front-facing camera, it is often likely that the user is the object of interest in the scene. Thus, face detection can be used to weight stabilization about that region. When face detection is precise enough, facial features themselves (such as eyes, nose, mouth) can be used as areas to stabilize, rather than using generic key points.
  • view interpolation can be used to improve the viewing experience.
  • synthetic, intermediate views can be rendered on the fly. This can be informed by content-weighted key point tracks and IMU information as described above, as well as by denser pixel-to-pixel matches. If depth information is available, fewer artifacts resulting from mismatched pixels may occur, thereby simplifying the process.
  • view interpolation can be applied during capture of a surround view in some embodiments. In other embodiments, view interpolation can be applied during surround view generation.
  • filters can also be used during capture or generation of a surround view to enhance the viewing experience.
  • aesthetic filters can similarly be applied to surround images.
  • these filters can be extended to include effects that are ill-defined in two dimensional photos. For instance, in a surround view, motion blur can be added to the background (i.e. context) while the content remains crisp.
  • a drop-shadow can be added to the object of interest in a surround view.
  • compression can also be used as an enhancement algorithm 716 .
  • compression can be used to enhance user-experience by reducing data upload and download costs.
  • because surround views use spatial information, far less data can be sent for a surround view than for a typical video, while maintaining desired qualities of the surround view.
  • the IMU, key point tracks, and user input, combined with the view interpolation described above, can all reduce the amount of data that must be transferred to and from a device during upload or download of a surround view.
  • a variable compression style can be chosen for the content and context.
  • This variable compression style can include lower quality resolution for background information (i.e. context) and higher quality resolution for foreground information (i.e. content) in some examples.
  • the amount of data transmitted can be reduced by sacrificing some of the context quality, while maintaining a desired level of quality for the content.
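  • A sketch of that variable compression style, encoding the content and context regions at different JPEG qualities; the split into regions is assumed to be given, and the quality values are illustrative.

```python
import cv2

def compress_variable(content_img, context_img,
                      content_quality: int = 90, context_quality: int = 40):
    """Encode the foreground (content) at high JPEG quality and the
    background (context) at low quality to reduce transmitted data."""
    _, content_bytes = cv2.imencode(
        ".jpg", content_img, [cv2.IMWRITE_JPEG_QUALITY, content_quality])
    _, context_bytes = cv2.imencode(
        ".jpg", context_img, [cv2.IMWRITE_JPEG_QUALITY, context_quality])
    return content_bytes, context_bytes
```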
  • a surround view 718 is generated after any enhancement algorithms are applied.
  • the surround view can provide a multi-view interactive digital media representation.
  • the surround view can include a three-dimensional model of the content and a two-dimensional model of the context.
  • the context can represent a “flat” view of the scenery or background as projected along a surface, such as a cylindrical or other-shaped surface, such that the context is not purely two-dimensional.
  • the context can include three-dimensional aspects.
  • surround views provide numerous advantages over traditional two-dimensional images or videos. Some of these advantages include: the ability to cope with moving scenery, a moving acquisition device, or both; the ability to model parts of the scene in three-dimensions; the ability to remove unnecessary, redundant information and reduce the memory footprint of the output dataset; the ability to distinguish between content and context; the ability to use the distinction between content and context for improvements in the user-experience; the ability to use the distinction between content and context for improvements in memory footprint (an example would be high quality compression of content and low quality compression of context); the ability to associate special feature descriptors with surround views that allow the surround views to be indexed with a high degree of efficiency and accuracy; and the ability of the user to interact and change the viewpoint of the surround view.
  • the characteristics described above can be incorporated natively in the surround view representation, and provide the capability for use in various applications. For instance, surround views can be used in applying filters or visual effects.
  • once a surround view 718 is generated, user feedback for acquisition 720 of additional image data can be provided.
  • if a surround view is determined to need additional views to provide a more accurate model of the content or context, a user may be prompted to provide additional views.
  • these additional views can be processed by the system 700 and incorporated into the surround view.
  • With reference to FIG. 8, shown is an example of a device capturing multiple views of an object of interest from different locations.
  • the capture device is indicated as camera 811 , and moves from location 822 to location 824 and from location 824 to location 826 .
  • the multiple camera views 802 , 804 , and 806 captured by camera 811 can be fused together into a three-dimensional (3D) model.
  • multiple images can be captured from various viewpoints and fused together to provide a multi-view digital media representation.
  • camera 811 moves to locations 822 , 824 , and 826 , respectively, along paths 828 and 830 , in proximity to an object of interest 808 .
  • Scenery can surround the object of interest 808 , such as object 810 .
  • Views 802 , 804 , and 806 are captured by camera 811 from locations 822 , 824 , and 826 and include overlapping subject matter.
  • each view 802 , 804 , and 806 includes the object of interest 808 and varying degrees of visibility of the scenery surrounding the object 810 .
  • view 802 includes a view of the object of interest 808 in front of the cylinder that is part of the scenery surrounding the object 808 .
  • View 804 shows the object of interest 808 to one side of the cylinder
  • view 806 shows the object of interest without any view of the cylinder.
  • the various views 802 , 804 , and 806 along with their associated locations 822 , 824 , and 826 , respectively, provide a rich source of information about object of interest 808 and the surrounding context that can be used to produce a multi-view digital media representation, such as a surround view.
  • the various views 802 , 804 , and 806 provide information about different sides of the object of interest and the relationship between the object of interest and the scenery. These views also provide information about the relative size and scale of the object of interest in relation to the scenery.
  • views from different sides of the object provide information about the shape and texture of the object. According to various embodiments, this information can be used to parse out the object of interest 808 into content and the scenery 810 as the context. In particular examples, the content can then be used for applying filters.
  • With reference to FIG. 9, shown is an example of a device capturing views of an object of interest.
  • multiple views of the object 908 may be captured by the device 970 from different locations.
  • data is acquired when a user taps a record button 980 on capture device 970 to begin recording images of the object.
  • filtering can be provided at the device 970 , and prompts for the user to capture particular views can be provided during the session.
  • the system can prompt the user to move the device 970 in a particular direction or may prompt the user to provide additional information.
  • filtering suggestions may be iteratively refined to provide accurate results.
  • the user may choose to stop recording by tapping the record button 980 again. In other examples, the user can tap and hold the record button during the session, and release to stop recording.
  • the recording captures a series of images that can be used to generate a multi-view digital media representation that can be used for filtering either in real time or after the fact.
  • applying a filter to a multi-view digital media representation may involve processing a succession of images taken from different perspectives.
  • the client device may perform low-level processing such as two-dimensional analysis of individual images.
  • the server may perform high-level processing such as combining different individual images to produce a three-dimensional model of an object that is the subject of a multi-view video.
  • a system 1000 suitable for implementing particular embodiments of the present invention includes a processor 1001 , a memory 1003 , a communications interface 1011 , a filter interface 1013 , and a bus 1015 (e.g., a PCI bus).
  • the filter interface 1013 may include separate input and output interfaces, or may be a unified interface supporting both operations.
  • the processor 1001 is responsible for such tasks as optimization.
  • the communications interface 1011 is typically configured to send and receive data packets or data segments over a network.
  • interfaces the device supports include Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like.
  • various very high-speed interfaces may be provided such as fast Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces and the like.
  • these interfaces may include ports appropriate for communication with the appropriate media.
  • they may also include an independent processor and, in some instances, volatile RAM.
  • the independent processors may control such communications intensive tasks as packet switching, media control and management.
  • the system 1000 uses memory 1003 to store data and program instructions and to maintain a local side cache.
  • the program instructions may control the operation of an operating system and/or one or more applications, for example.
  • the memory or memories may also be configured to store received metadata and batch requested metadata.
  • the present invention relates to tangible, machine readable media that include program instructions, state information, etc. for performing various operations described herein.
  • machine-readable media include hard disks, floppy disks, magnetic tape, optical media such as CD-ROM disks and DVDs; magneto-optical media such as optical disks, and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM) and programmable read-only memory devices (PROMs).
  • program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
  • a potentially infinite variety of filters or modifications may be applied to digital media content.
  • visual elements such as angel wings, bat wings, butterfly wings, plane wings and engines, or a jetpack with exhaust fumes may be added.
  • visual elements such as a dinosaur tail, a squirrel tail, or a raccoon tail may be added.
  • visual elements may be added to replace the person's clothing with a superhero costume or to add a cape to the person's existing attire.
  • visual elements may be added to depict a megaphone, flames, or a speech bubble near the person's mouth.
  • visual elements may be added to replace a person's clothing or depict a person's body as a skeleton.
  • the person's body may be replaced with one exhibiting more muscles or deformed to appear to exhibit more muscles.
  • visual elements may be added to make the person appear to be underwater as a scuba diver or mermaid.
  • visual elements may be added to make the person appear to be a flying angel or super hero. For instance, a person's legs may be moved to make the person appear to be not supported by the ground. When a person is detected with arms uplifted, visual elements may be added to cause rainbows, money, or angels to appear over the person. When a person is detected with hands arranged in a boxing pose, visual elements may be added to make the person appear to be wearing boxing gloves or holding a weapon. A person's facial features or body may be modified to make the person appear to have the head or body of an animal, a fruit, a robot, or some other such object.
  • a person's facial features may be detected and then used to select a corresponding emoticon, which then may be used to replace the person's head.
  • the dog's head and the person's head may be swapped.
  • a person may be made to appear much thinner, heavier, more muscular, less muscular, or wavier than in reality.
  • Motion blur may be added to make a person appear to be spinning very quickly.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

Provided are mechanisms and processes for scene-aware selection of filters and effects for visual digital media content. In one example, a digital media item is analyzed with a processor to identify one or more characteristics associated with the digital media item, where the characteristics include a physical object represented in the digital media item. Based on the identified characteristics, a digital media modification is selected from a plurality of digital media modifications for application to the digital media item. The digital media modification may then be provided for presentation in a user interface for selection by a user.

Description

    TECHNICAL FIELD
  • The present disclosure relates to the selection, recommendation, and application of filters and effects to visual digital media content.
  • DESCRIPTION OF RELATED ART
  • Visual digital media content is commonly modified by applying filters and effects. For example, a visual filter may sharpen, blur, or emboss an image to introduce a desired visual effect. However, current techniques are limited in their ability to modify complex digital media content such as video or multi-views. Further, users may face a large number of choices when attempting to select a particular filter or effect to apply to a digital media content item. Accordingly, it is desirable to develop improved mechanisms and processes relating to selecting filters and effects to apply to digital media content items.
  • Overview
  • Provided are various mechanisms and processes relating to the selection, recommendation, and application of filters and effects to visual digital media content.
  • In one aspect, which may include at least a portion of the subject matter of any of the preceding and/or following examples and aspects, a process implemented at a client device and/or embodied in computer readable media includes analyzing a visual digital media item with a processor to identify one or more characteristics associated with the visual digital media item, where the characteristics include a physical object represented in the visual digital media item. Next, a visual digital media modification is selected from a plurality of visual digital media modifications based on the identified characteristics for application to the visual digital media item. Then, the selected visual digital media modification is provided for presentation in a user interface for selection by a user. The visual digital media item may be a video stream such as a live camera view captured via a camera. In some implementations, the visual digital media item includes a surround view of the object, where the surround view of the object includes spatial information, scale information, and a plurality of different viewpoint images of the object.
  • In another aspect, which may include at least a portion of the subject matter of any of the preceding and/or following examples and aspects, the characteristics may include structure information indicating a physical context in which the physical object is positioned, pose information indicating an attitude or position associated with the physical object, and/or movement information indicating a degree of velocity or acceleration of the object. The visual digital media modification may include a virtual object positioned within the visual digital media item, an artificial light source that appears to be blocked by the physical object in the visual digital media item, a change to the color of a portion of the visual digital media item, and/or motion blur indicating movement associated with the object. The object may be a human being, and the visual digital media modification may include a text bubble appearing in proximity to a face.
  • In yet another aspect, which may include at least a portion of the subject matter of any of the preceding and/or following examples and aspects, analyzing the visual digital media item may involve receiving the visual digital media item at a server via a network from a client device, where providing the visual digital media modification for presentation in a user interface includes transmitting a message via the network to the client device. In still another aspect, analyzing the visual digital media item may involve transmitting the visual digital media item to a server via a network from a client device and receiving a response message at the client device, the response message identifying the one or more characteristics.
  • These and other embodiments are described further below with reference to the figures.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The disclosure may best be understood by reference to the following description taken in conjunction with the accompanying drawings, which illustrate particular embodiments of the present invention.
  • FIG. 1 illustrates one example of a process for performing visual digital media modification selection.
  • FIG. 2 illustrates one example of a process for performing visual digital media modification preprocessing.
  • FIG. 3 illustrates one example of a process for performing pose detection for an object.
  • FIG. 4 illustrates one example of a system that can be used to perform live video stream filtering.
  • FIG. 5 illustrates one example of a process for performing live filtering of a video stream.
  • FIG. 6 illustrates one example of a process for performing live filter processing of a video stream at a server.
  • FIG. 7 illustrates an example of a surround view acquisition system.
  • FIG. 8 illustrates an example of a device capturing multiple views of an object of interest.
  • FIG. 9 illustrates an example of a device capturing views of an object of interest to create a multi-view media representation to which a filter may be applied.
  • FIG. 10 illustrates a particular example of a computer system that can be used with various embodiments of the present invention.
  • DETAILED DESCRIPTION
  • Reference will now be made in detail to some specific examples of the present disclosure including the best modes contemplated by the inventors for carrying out the present disclosure. Examples of these specific embodiments are illustrated in the accompanying drawings. While the present disclosure is described in conjunction with these specific embodiments, it will be understood that it is not intended to limit the present disclosure to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the present disclosure as defined by the appended claims.
  • In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. Particular example embodiments of the present invention may be implemented without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.
  • Various techniques and mechanisms of the present invention will sometimes be described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. For example, a system uses a processor in a variety of contexts. However, it will be appreciated that a system can use multiple processors while remaining within the scope of the present invention unless otherwise noted. Furthermore, the techniques and mechanisms of the present invention will sometimes describe a connection between two entities. It should be noted that a connection between two entities does not necessarily mean a direct, unimpeded connection, as a variety of other entities may reside between the two entities. For example, a processor may be connected to memory, but it will be appreciated that a variety of bridges and controllers may reside between the processor and memory. Consequently, a connection does not necessarily mean a direct, unimpeded connection unless otherwise noted.
  • According to various embodiments, improved mechanisms and processes are described for the selection, application, and recommendation of modifications to visual digital media items. Such modifications may include image filters, virtual objects, digital effects, or other such alterations. Such improved mechanisms and processes allow a user to be presented with modifications that are specifically relevant to the content, structure, context, motion, and/or poses of a visual digital media item. In this way, a user may be presented with only the relevant modifications and need not manually sift through a large number of irrelevant modifications. Further, if modifications are applied in a way that reflects the modified visual digital media item, then the modifications may appear more realistic, and a larger number of modifications may be possible. Also, specific modifications may be applied to different parts of the same scene.
  • Digital media modifications, also referred to herein as filters, modify and/or add to the visual data of a visual digital media item such as a static image, a video stream, or a multi-view interactive digital media representation. According to various embodiments, filters can include any techniques for altering a visual digital media item. For example, a filter can alter the color information of the captured visual data by changing contrast, changing brightness, or applying a transformation to the underlying color matrix. As another example, a filter can add additional elements to the scene such as 2D or 3D stickers or text placed relative to an object in the visual digital media item, or any other such alteration. An artificial object can be placed relative to a two-dimensional object in two-dimensional space and/or relative to the three-dimensional reference coordinate system of a multi-view interactive digital media representation in three-dimensional space. The application of the filters can happen live (e.g., in a live media stream or in a camera view) or in post-processing.
  • According to various embodiments, techniques and mechanisms described herein provide for an improved user experience of a computer system. In many contexts, a large set of different filters can be applied to visual digital media content. Due to the vast number of different filters that can be applied, conventional techniques require either a user to manually review a large number of filters to find the best filter for a certain use case, or application developers to limit the number of filters the user can select from. Both options are suboptimal for the user. In contrast, techniques and mechanisms described herein allow a user to be presented with filters specifically relevant to the user's content.
  • With reference to FIG. 1, shown is one example of a process for performing visual digital media modification selection. In some implementations, the process 100 may be performed at a client device. Alternately, the process 100 may be performed at a server. In yet another implementation, some operations shown in the process 100 may be performed at a client device while other operations are performed at a server in communication with the client device. In particular embodiments, the client device and the server may be implemented as different processes running on the same physical device. In other embodiments, the client device and the server may be implemented on different physical devices in communication via a network.
  • According to various embodiments, the process 100 begins when a request to apply a filter to a visual digital media item is received at 102. The request may be received when a user specifically requests to apply a filter to a visual digital media item. Alternately, the request may be generated automatically when triggered by a particular action such as the activation of a camera at a client device.
  • Next, at 104, the visual digital media item is analyzed to identify content and structure information. According to various embodiments, content information may include any indication of objects represented in the visual digital media item. For instance, the identified content may include a human being, an animal, a plant, text, an inanimate object such as a vehicle, or an abstract shape such as a ball. Structure information may include a ground plane or a wall. Semantic areas such as sky, grass, and water may also be identified.
  • Then, at 106, the visual digital media item is analyzed to identify pose and movement information. In some implementations, pose information may indicate an attitude or position of an identified object. For example, if a human being is identified as being represented in the visual digital media item, then pose information may indicate whether the human being is sitting, standing, walking, or arranged in some other posture. Pose information may also be applied to other types of objects. For instance, pose information may indicate the position of a vehicle relative to the viewer, the stance of an animal, the attitude or position of a deformable inanimate object, or other such positioning information. Movement information may indicate the velocity or acceleration of an identified object. The movement may be identified relative to scene structure, another object, the viewpoint, or any other reference plane or point.
  • A variety of techniques may be used to identify content, structure, pose, and movement information. Identifying such information may involve applying a content recognition algorithm to visual media. For instance, a recognition algorithm may be applied to an image, a video frame, one or more images in a multi-view of an object, or a stream of video frames. The specific techniques used to identify content, structure, pose, and movement information may depend on the particular implementation. For instance, different techniques may be used based on characteristics of the visual digital media item or the type of content, structure, pose, or movement information to be identified.
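  • As one deliberately simple illustration of such a recognition step (an assumption chosen for concreteness, not a method mandated by this disclosure), OpenCV's pretrained HOG person detector can supply a "human being present" content signal for a single image or video frame; a production system would likely use stronger learned detectors covering many categories.

      import cv2

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      def identify_content(frame):
          # Returns (label, bounding box) pairs for detected objects. Only
          # the "person" category is handled here; additional detectors
          # would contribute labels such as vehicles, animals, or sky.
          boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
          return [("person", tuple(box)) for box in boxes]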
  • After analyzing the visual digital media item, one or more visual digital media modifications to present are selected at 108. According to various embodiments, one or more specific filters may be selected based on the content and/or context of the scene depicted in the visual digital media content. That is, the identification of objects such as cars, food or people, or the identification of semantic areas such as sky, grass, water may trigger the selection, recommendation, and/or application of specific filters. For example, if a car is present in the captured scene and is detected, then car-specific filters such as stickers, exhaust fumes, and other types of augmented reality modifications may be made available. As another example, if a person is present in the scene then filters specific for humans may be added, such as speech bubbles. As still another example, filters may be selected that apply to particular contexts, such as fashion shots or sporting events. For some objects, such as people and vehicles, poses and movements may also be used to trigger filters. If multiple objects are present, such as both a person and a vehicle, then filters may be selected that are specific to the combination of objects, instead of or in addition to filters that are specific to only persons or only vehicles.
  • In particular embodiments, selected filters may be applied automatically to the visual digital media item. For example, the server may transmit an instruction to the client machine to apply one or more selected filters. Alternately, or additionally, one or more filters may be provided for manual selection by a user in a user interface. For instance, the server may transmit an instruction to the client machine identifying one or more filters to present.
  • In some implementations, one or more of a variety of context-specific filters may be selected. In one example, certain filters may be selected for certain objects. For instance, vehicles may be modified to include stickers, color changes, motion blur, or other such vehicle-specific alterations. In a second example, moving artificial objects may be added that react with the scene structure such as balls that bounce off of the ground, objects that accumulate on the ground, weather patterns such as rain or snow that interact with the scenery, artificial light sources that are blocked by objects in the scene, or other such alterations. In a third example, an artificial object may be automatically positioned relative to the scene and/or objects. Specific examples of inserting virtual objects include, but are not limited to: 3D text bubbles appearing next to a human's face, “vroom” text appearing near a car's engine, or hats or clothing being placed on humans or animals. In a fourth example, a filter may change the color of the sky or alter the “style” of a visual digital media item. For instance, the style may be altered to appear cartoonish or retro.
  • In some embodiments, effects and filters may be added based on a detected pose. In one example, if a person is detected in a particular pose, then a filter may be selected that will cause a laser beam or fire to shoot from the person's hands. In a second example, if multiple people are detected close to one another, then effects or elements which correlate both people can be added, such as hearts indicating affection. In a third example, if a person is detected, then the person can be cut out of the scene and pasted in one or more times in a different position or pose, such as dancing. In a fourth example, parts of a human body can be replaced. For instance, a head may be replaced with an apple, a crocodile head, or some other object. In a fifth example, the context can be changed. For instance, when a person is detected as jumping, a filter may be applied to depict alligators or some other hazard beneath the person. In a sixth example, elements can be added that interact with a person. For instance, a snake can be added that moves up a person's body, or lightning can be added that traverses a person's body. An infinite variety of such alterations is possible. In a seventh example, a person may be cut or copied from a scene and then pasted back into the same scene one or more times in the same pose but a different position. For instance, a single dancing person may be copied multiple times in the same pose to create a crowd of dancing people.
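  • A minimal rule-based selector along these lines might key candidate filters on detected objects and poses, including combination-specific entries. The label and filter names below are illustrative placeholders, not identifiers from the disclosure.

      # Filters keyed by sorted label combinations and by detected poses.
      FILTERS_BY_CONTENT = {
          ("car",): ["exhaust_fumes", "motion_blur", "racing_stickers"],
          ("person",): ["speech_bubble", "superhero_cape"],
          ("car", "person"): ["road_trip_sticker"],  # combination-specific
      }
      FILTERS_BY_POSE = {
          "fist_raised": ["lightning_from_fist"],
          "boxing": ["boxing_gloves"],
      }

      def select_filters(labels, poses):
          key = tuple(sorted(set(labels)))
          candidates = list(FILTERS_BY_CONTENT.get(key, []))
          for label in key:  # single-object filters still apply
              candidates += FILTERS_BY_CONTENT.get((label,), [])
          for pose in poses:
              candidates += FILTERS_BY_POSE.get(pose, [])
          return sorted(set(candidates))

      print(select_filters(["person", "car"], ["fist_raised"]))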
  • In particular embodiments, analyzing the visual digital media item may involve transmitting some or all of the visual digital media item to a server for processing. The server may then process the visual digital media item and respond with one or more recommended digital media modifications. Techniques for client-server interactions are discussed in greater detail with respect to FIGS. 4, 5, and 6.
  • According to various embodiments, visual digital media modifications may be selected based at least in part on explicit classifications or categorizations. For instance, a user or system administrator may specifically identify a particular type of modification as pertaining to a vehicle, a person, or both. Alternately, or additionally, visual digital media modifications may be selected based at least in part on implicit or machine-generated classifications or categorizations. For instance, the system may analyze user selections to help determine which modifications to suggest for which visual digital media items. Additional details regarding machine classification are discussed with respect to FIG. 2.
  • Once the one or more visual digital media modifications are selected, at 110 the selections are provided for presentation in a user interface. According to various embodiments, the specific technique for providing the selections for presentation may depend in part on the specific implementation. For instance, if the analysis is performed at a server, then providing the selections for presentation may involve transmitting a message to a client device with instructions for presenting the selections in a user interface at the client device. Alternately, if the analysis is performed at a client device, then the selections may be presented directly in a user interface.
  • Then, a determination is made at 112 as to whether to continue to analyze the visual digital media item. According to various embodiments, analysis may continue until one or more conditions or criteria are met. These may include, but are not limited to: the receipt of user input indicating a request to stop analysis, the selection of a particular filter or filters for presentation, and the termination of a live or prerecorded media stream.
  • With reference to FIG. 2, shown is one example of a process for performing visual digital media modification preprocessing. According to various embodiments, the procedure 200 may be performed in order to help determine which visual digital media modifications to suggest for which visual digital media items. To accomplish this, user selections of filters may be analyzed to identify characteristics likely to make particular filters attractive in specific situations. The procedure 200 may be performed at a server having access to a wide range of user selections or at a client machine. The procedure 200 may be initiated when a request is received at 202 to perform visual digital media modification preprocessing. According to various embodiments, the request may be generated automatically (for instance, periodically) or manually (for instance, by a system administrator).
  • Next, a visual digital media item is selected for analysis at 204. In some implementations, a visual digital media item may be selected for analysis when it is associated with user-provided input indicating a selection of a filter for application to the item. In one example, all filtered visual digital media items may be selected for analysis in some sequence. Alternately, a subset of the available data may be analyzed. For instance, a designated number of filtered visual digital media items may be selected in a particular category, such as filtered visual digital media items that include people, vehicles, or animals. Once a visual digital media item is selected for analysis, it is analyzed at 206 to identify content and structure information and at 208 to identify pose and movement information. These analyses may be substantially similar to the operations 104 and 106 discussed with respect to FIG. 1.
  • After the visual digital media item is analyzed, one or more user-selected visual digital media modifications are identified at 210. For instance, a user initially viewing the media item may have been presented with a set of filters to choose from and then selected a particular filter to apply to the media item. The user's choice may then have been recorded for further analysis. At 212, one or more weights associated with the identified content, structure, pose, and/or movement information are updated.
  • According to various embodiments, updating the one or more weights may involve indicating a connection between a characteristic of the media item and the selected filter. In one example, a particular filter is identified as having been applied to a particular visual digital media item that analysis reveals to include a moving car, stationary flowers, and a walking dog. In this example, the weights linking the particular filter to each of these characteristics are increased. For instance, if the filter includes flames that shoot in a particular direction, then users may be more likely to apply the filter to moving cars. Over time, the weight linking the filter to the media item characteristic of a moving car may be increased so that eventually the system may automatically recommend the shooting flames filter when the media item includes a car without needing a human to explicitly flag the filter as being specific to vehicles.
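  • A minimal sketch of such a weight update follows; the simple additive rule and the characteristic names are assumptions made for illustration, as the disclosure does not specify a particular learning rule.

      from collections import defaultdict

      # weights[(characteristic, filter_name)] -> strength of association
      weights = defaultdict(float)

      def record_selection(characteristics, selected_filter, step=1.0):
          # Strengthen the link between each observed characteristic and
          # the filter the user actually chose for this media item.
          for c in characteristics:
              weights[(c, selected_filter)] += step

      def recommend(characteristics, catalog, top_k=3):
          # Score each filter by the summed weights of the item's
          # characteristics and return the strongest candidates.
          scores = {f: sum(weights[(c, f)] for c in characteristics)
                    for f in catalog}
          return sorted(scores, key=scores.get, reverse=True)[:top_k]

      record_selection(["moving_car", "walking_dog"], "shooting_flames")
      print(recommend(["moving_car"], ["shooting_flames", "speech_bubble"]))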
  • Next, a determination is made at 214 as to whether to continue to analyze visual digital media items. According to various embodiments, analysis may continue until one or more conditions or criteria are met. These may include, but are not limited to: the receipt of user input indicating a request to stop analysis, the selection of a particular set of weights, and the analysis of all available user selections.
  • FIG. 3 illustrates one example of a process for performing pose detection for an object. According to various embodiments, pose detection may be used for any of various purposes. In one example, pose detection may be used to trigger a filter. For instance, pose detection may be used to determine if a person is exhibiting a particular pose, such as pumping a fist in the air. If such a pose is detected, then the video stream may be altered to depict lightning extending from the fist. In another example, skeleton detection may be used to trigger a photo to be captured. For instance, a person may position a camera to take a self-image and then move in front of the camera. The camera may then capture an image when it identifies the person's skeleton and determines that the person has stopped moving or has entered into a particular pose, such as jumping in the air. The procedure 300 is initiated when a request is received at 302 to perform pose detection for a video stream.
  • Then, at 304, prior video frame information associated with the video stream is identified. At 306 skeleton detection operations for the video stream are performed. According to various embodiments, skeleton detection operations may be performed using any of various suitable methods. In one example, a convolutional neural network may be applied to an image to first detect all objects in the scene and then estimate the skeleton joints for those that belong to the “person” category. In a second example, static skeleton detection at the server may be combined with skeleton detection and/or tracking across prior frames. For instance, the results of one or more skeleton detection operations for previous frames may be analyzed to aid in the detection of a skeleton for the current frame. In a third example, non-visual data such as accelerometer or gyroscopic data may be analyzed to aid in skeleton detection.
  • After performing skeleton detection, at 308 pose detection is performed. In pose detection, the detected human skeleton may be used to determine whether the arrangement of the skeleton at a particular point in time matches one or more of a discrete set of human poses. In some implementations, pose detection may be accomplished by first estimating a homography from the skeleton joints in order to frontalize the skeleton for better pose estimation. Then, pose detection may be performed by analyzing spatial relations of the frontalized joints. Next, a temporal filtering method may be applied to remove spurious detections. In particular embodiments, such techniques may be applied to detect poses for either individuals or for multiple people.
  • In some embodiments, pose detection may involve scaling or stretching location information associated with the detected skeleton and then comparing the location information with predetermined location information associated with specific poses, where a high degree of similarity between the detected skeleton information and the predetermined skeleton pose information would indicate a match. When pose detection is used, different poses may trigger different events, such as the application of specific filters to a video stream. Alternately, or additionally, the detection of a specific pose may trigger the recommendation of one or more filters to the user for the user to select. In either case, pose detection may be used to suggest or identify start and/or end times for an effect as well as the type of effect that could be added.
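  • The comparison step can be sketched with numpy as follows. The normalization scheme, the single example template, and the majority-vote temporal filter are all illustrative assumptions; a fuller implementation would first frontalize the joints via the homography estimation mentioned above.

      import numpy as np
      from collections import Counter, deque

      # One illustrative template of normalized 2D joints; input skeletons
      # must use the same joint count and ordering as the templates.
      POSE_TEMPLATES = {
          "fist_raised": np.array([[0.0, 0.0], [0.1, -0.9], [-0.1, 0.4]]),
      }

      def normalize(joints):
          # Translate to the centroid and scale by the joint spread so
          # skeletons of different sizes and positions become comparable.
          joints = joints - joints.mean(axis=0)
          return joints / (np.linalg.norm(joints) + 1e-8)

      def match_pose(joints, threshold=0.3):
          j = normalize(np.asarray(joints, dtype=float))
          best, best_d = None, threshold
          for name, template in POSE_TEMPLATES.items():
              d = np.linalg.norm(j - template)
              if d < best_d:
                  best, best_d = name, d
          return best  # None when no template is close enough

      recent = deque(maxlen=5)  # temporal filter over the last few frames

      def detect_pose(joints):
          recent.append(match_pose(joints))
          label, count = Counter(recent).most_common(1)[0]
          return label if label is not None and count >= 3 else None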
  • With reference to FIG. 4, shown is one example of a system that can be used to perform live video stream filtering. As depicted, a combination of client and server applications is used to implement a filtering mechanism that runs live in a capture device application, such as with a camera on a smartphone. While the camera is recording, the user points the camera at an object. The smartphone then communicates with the server, and collectively the two devices analyze the video stream to provide a filtered view of the video stream in real time.
  • In the present embodiment, the client is depicted as device 404, which can be a capture device such as a digital camera, smartphone, mobile device, etc. The server is depicted as system 402, which receives images selected from the video stream at the client device. The video stream at the client device is divided into video frames 451 through 461. The server processes the frames sent from the client device and responds with filtering information that can be used to apply a filter to the video stream at the client device. The client device includes a camera 406 for capturing a video stream, a communications interface 408 capable of communicating with the server, a processor 400, memory 402, and a display screen 404.
  • According to various embodiments, the client and server may coordinate to apply a filter to the video stream at least in part due to limited computing resources at the client machine. However, as discussed herein, the network latency and processing time involved in transmitting video frames to the server mean that the video stream at the client device has progressed to a new video frame before receiving the filter processing message from the server with the filter information associated with the preceding frame. For instance, in FIG. 4, the first request 471 transmits the frame 451 to the server, while the first response 472 corresponding to the frame 451 arrives while the frame 455 is being processed. Similarly, the second request 473 and third request 474 transmit frames 455 and 457 respectively, but the corresponding second and third responses 475 and 476 are not received until the video stream has arrived at frames 459 and 461 respectively.
  • In some implementations, the client application sends (and also receives) data in a sparse manner, meaning that data is sent to the server potentially not for all frames captured by the camera. Therefore, in order to present a filtered result for a live stream, the information received from the server is tracked or propagated to new frames received from the camera until new information from the server is available. For example, in FIG. 4, the client device may propagate information received in the first response 472 through frames 456, 457, and 458 until the second response 475 is received for the processing of frame 459.
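  • Structurally, the client-side loop implied by this behavior might look like the following sketch. The callables passed in are stand-ins for networking, tracking, and rendering code; only the propagate-until-new-response control flow is the point.

      def filtered_stream(frames, send_to_server, poll_response,
                          propagate, render):
          last_result = None
          for i, frame in enumerate(frames):
              if i % 4 == 0:                  # sparse sending, not every frame
                  send_to_server(frame)
              response = poll_response()      # non-blocking; usually None
              if response is not None:
                  last_result = response      # fresh scene interpretation
              elif last_result is not None:
                  # Carry the previous interpretation forward to this frame.
                  last_result = propagate(last_result, frame)
              yield render(frame, last_result)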
  • With reference to FIG. 5, shown is one example of a process for performing live filtering of a video stream. According to various embodiments, the process shown in FIG. 5 may be performed at a client machine in communication with a server, such as the device 404 in communication with the system 402 shown in FIG. 4. The two devices may coordinate to split the processing operations required to apply a filter to a live video stream.
  • In the present example, a live filtering process 500 begins with the client device receiving a request to perform filtering of a video stream at 502. In some implementations, the request may be generated based on user input requesting the application of a filter. Alternately, the request may be generated automatically when the client device detects that a video stream is being captured or displayed at the client device. Next, the system selects a video stream frame for processing at 504. According to various embodiments, video stream frames may be processed sequentially. For instance, each frame in a live video stream may be processed prior to presenting the video stream to the user so that a filter may be applied.
  • At 506, a determination is made as to whether the selected video stream frame meets a designated criterion. In some implementations, any of a variety of criteria may be used to select a video stream frame for transmission to the server. For example, if the filtering process has just been initiated, then the client device may select the first available video stream frame for processing. As another example, one or more criteria may be applied to select the video stream frame. For instance, the client device may select a video stream frame that exceeds a threshold level of light or detail to allow for sufficient information for applying a filter. As yet another example, the client device may select a video stream frame for processing after a designated period of time or number of frames have passed since the last video stream frame was transmitted to the server.
  • If the selected frame meets the designated criterion, then information about the selected frame is transmitted to the server at 508. According to various embodiments, a variety of information may be transmitted to the server. In one example, some or all of the image data associated with the frame may be transmitted. For instance, the entire frame may be transmitted. Alternately, the frame may be compressed or downsampled to reduce bandwidth usage. In a second example, IMU information such as gyroscopic data, compass data, or accelerometer data may be transmitted. This IMU information may provide data about the position, velocity, acceleration, direction, rotation, or other such characteristics of the device around the time that the frame was captured. In a third example, GPS information may be transmitted. In some implementations, the specific information transmitted to the server may depend on the type of processing being performed at the server and/or the type of filter being applied at the client device.
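  • Assuming OpenCV for encoding, a request payload combining a downsampled frame with IMU and GPS readings might be assembled as follows; the field names and the JSON wire format are illustrative assumptions.

      import base64
      import json

      import cv2

      def build_request(frame, imu, gps, scale=0.5, jpeg_quality=70):
          # Downsample and compress the frame to reduce bandwidth usage.
          small = cv2.resize(frame, None, fx=scale, fy=scale)
          ok, jpg = cv2.imencode(".jpg", small,
                                 [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
          assert ok, "JPEG encoding failed"
          return json.dumps({
              "image": base64.b64encode(jpg.tobytes()).decode("ascii"),
              "imu": imu,  # e.g., {"gyro": [...], "accel": [...]}
              "gps": gps,  # e.g., {"lat": ..., "lon": ...}
          })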
  • Next, a determination is made at 510 as to whether a new filter processing message has been received from the server. As shown in FIG. 4, the server sends messages that include information for applying filters to frames, but these filter processing messages are sent at a lag when compared with the live processing and presentation of the video stream.
  • If no new filter processing message has been received, then at 512 a filter is applied based on existing data that is locally available at the client machine. In some embodiments, applying a filter based on locally available data may involve propagating information from one frame to another. For instance, a current frame may be analyzed to identify the same feature (e.g., an object corner or an area of color) that was identified in the preceding frame. According to various embodiments, a multitude of approaches can be used to propagate information from one frame to another. One such approach is frame-to-frame tracking, which can be based on information that may include, but is not limited to: tracking of sparse keypoints, dense or sparse optical flow, patch tracking, tracking of geometric instances, or other such information. Another such approach is frame-to-frame matching, which involves techniques that may include, but are not limited to: descriptor-based matching of keypoints which are detected in both frames, patch matching, detection and matching of higher level features (e.g., a human face), or other such techniques. Both approaches can focus the tracking and matching efforts on regions or features of interest if such regions or features are identified.
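  • As a concrete example of frame-to-frame tracking, OpenCV's pyramidal Lucas-Kanade optical flow can carry point locations from one frame to the next. The choice of tracker here is an assumption, since the disclosure permits many tracking and matching approaches.

      import cv2
      import numpy as np

      def initial_points(gray, region=None):
          # Optionally focus tracking on a region of interest.
          mask = None
          if region is not None:
              x, y, w, h = region
              mask = np.zeros_like(gray)
              mask[y:y + h, x:x + w] = 255
          return cv2.goodFeaturesToTrack(gray, 200, 0.01, 7, mask=mask)

      def propagate_points(prev_gray, next_gray, points):
          # points: float32 array of shape (N, 1, 2) from the prior frame,
          # e.g., keypoints the server associated with a tracked object.
          next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
              prev_gray, next_gray, points, None)
          return next_pts[status.reshape(-1) == 1]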
  • In some implementations, special processing covers the span from the first frame that is sent to the server until the frame at which the corresponding results are received back from the server. Since there is no server-created scene interpretation available until the results of the first frame are received, the client device may not know which specific information in the scene needs to be propagated. Various approaches are possible for handling this situation. In one example, all or most information in the frame is equally propagated. For instance, keypoints may be distributed over the whole image. In a second example, an efficient method for estimating one or more regions of interest may be applied on the client device. For instance, a bounding box for the region may be computed. Then, the propagation of information may be concentrated on the region or regions of interest. In a third example, matching methods may be applied to directly match the information extracted from the first frame to the frame after which the results from the server are available.
  • If instead a new filter processing message has been received, then at 514 a filter is applied based on both the locally available data and the data provided by the server. According to various embodiments, new information received from the server may be combined with the information propagated from frame to frame. To accomplish this goal, various approaches may be used. In one example, old information may be replaced with new information received from the server. In a second example, old information may be combined with new information in a weighted fashion, for instance based on relative confidence values associated with server results and propagation results.
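  • In the simplest case, the weighted combination described above reduces to a confidence-weighted average, as in this sketch; real systems may use more elaborate fusion.

      import numpy as np

      def fuse(server_est, server_conf, tracked_est, tracked_conf):
          # Blend two estimates (e.g., of a tracked face location) in
          # proportion to their confidences; with tracked_conf = 0 this
          # reduces to simply replacing the old information.
          w = server_conf / (server_conf + tracked_conf)
          return w * np.asarray(server_est) + (1 - w) * np.asarray(tracked_est)

      print(fuse([100, 40], 0.8, [104, 42], 0.4))  # leans toward the server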
  • According to various embodiments, the specific operations performed to apply a filter may depend in large part upon the specific type of filter being applied. In one example, a caption bubble may be applied to a video of a person when the person exhibits a particular pose. In this first example, the server may perform skeleton detection to facilitate pose estimation while the client device tracks low-level image features such as a point associated with a person's elbow or a surface area that is part of the background. Then, the client device may combine the low-level feature tracking information with the skeleton detection information provided by the server to determine whether the person is positioned in the particular pose. In a second example, a filter may be applied to a vehicle based on its position (e.g., crossing a finish line). In this second example, the server may perform segmentation to identify the segmentation and characteristics of the vehicle, while the client device tracks low-level features such as shapes to propagate the location of the vehicle between communications with the server.
  • After applying the filter to the selected frame, the filtered frame is provided for presentation at 516. In some implementations, providing the filtered frame for presentation may involve displaying the filtered frame as part of the video stream on a display screen. Alternately, or additionally, the filtered frame may be stored to memory and/or persistent storage for later playback. In a different example, the filtered frame may be transmitted to a separate device for presentation, such as an augmented reality or virtual reality device in communication with the client device. Finally, at 518 a determination is made as to whether to process an additional frame. According to various embodiments, additional frames may be processed until any of a variety of conditions are met. These conditions may include, but are not limited to: receiving user input indicating a request to terminate live filtering, determining that the video stream has terminated, or determining that the server is inaccessible via the network.
  • With reference to FIG. 6, shown is one example of a configuration for performing live filtering server processing. In some implementations, the procedure 600 may be performed in order to perform server-side processing to facilitate the live filtering of a media stream at a client device. The procedure 600 may be initiated at 602 when a live filtering request message for a video stream is received from a client device. According to various embodiments, as discussed with respect to FIGS. 4 and 5, a variety of information may be included in a live filtering request message. For instance, the request message may include the identity of the client device as well as any information necessary for performing live filtering, such as image data associated with one or more video stream frames, IMU information, or GPS information.
  • In particular embodiments, after receiving the request message, at 604 the server may identify information associated with one or more prior video frames in the video stream. For example, the prior video frame information may include any raw data transmitted from the client device in earlier live filtering request messages. Alternately, or additionally, the prior video frame information may include processed or filtered data generated by processing previous live filtering request messages.
  • Then, at 606 the server performs filter processing operations for the video stream. The specific filter processing operations performed may depend in large part on the particular type of filter being applied to the video stream. Some examples of processing operations running on the server may include, but are not limited to: detection, segmentation, and pose estimation. Such methods may be applied to objects that include, but are not limited to: humans, animals, vehicles, inanimate objects, and plants. Other examples of methods running on the server may include, but are not limited to: depth estimation, scene reconstruction, scene decomposition, and semantic labeling.
  • According to various embodiments, the filter processing operations may generate a wide range of information. For example, the filter processing operations may generate location information that identifies locations of high-level features such as faces or skeleton components on image data sent from the client device. As another example, the filter processing operations may include or identify virtual elements to overlay on top of the video stream at the client device. For instance, the filter processing operation may identify the video stream as including footage of a running dog and then indicate as one filter possibility a cape that could be overlain on the moving image of the dog to generate a visual effect of a “super dog.” As yet another example, the filter processing operations may include semantic elements such as labels for recognized objects or words generated by applying optical character recognition to text in the video stream.
  • After performing the filter processing operations, at 608 a filter processing message is transmitted to the client device. According to various embodiments, the filter processing message may include any suitable information, including location information identifying the location of features in the scene, semantic information that identifies the meaning of particular elements in the scene, or data for overlaying virtual elements on top of the scene.
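  • Putting these pieces together, a filter processing message might carry fields like the following; the schema and values are illustrative, as the disclosure does not fix a wire format.

      filter_processing_message = {
          "frame_id": 455,
          "locations": {                    # high-level feature locations
              "face": [[212, 88, 64, 64]],  # x, y, width, height
              "skeleton_joints": [[240, 150], [238, 190], [230, 260]],
          },
          "semantics": ["person", "dog"],   # recognized object labels
          "overlays": [{                    # virtual elements to composite
              "type": "cape",
              "anchor": "dog",
              "asset": "super_dog_cape.png",
          }],
      }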
  • With reference to FIG. 7, shown is an example of a surround view acquisition system that can be used to generate a multi-view interactive digital media representation that can be used for the application of filters or visual effects. A multi-view interactive digital media representation includes much more information than a single image. Whereas a single image may include information such as a grid of color pixels and the date/time of capture, a multi-view interactive digital media representation includes information such as grids of color pixels, date/time of capture, spatial information (flow/3D), location, and inertial measurement unit (IMU) information (i.e., compass, gravity, orientation). A multi-view interactive digital media representation brings focus to an object of interest because it provides separation between the foreground and background. In addition, a multi-view interactive digital media representation provides more information about the scale, context, and shape of the object of interest. Furthermore, by providing multiple views, aspects of the object that are not visible from a single view can be provided in a multi-view interactive digital media representation.
  • In the present example embodiment, the surround view acquisition system 700 is depicted in a flow sequence that can be used to generate a surround view. According to various embodiments, the data used to generate a surround view can come from a variety of sources. In particular, data such as, but not limited to, two-dimensional (2D) images 704 can be used to generate a surround view. These 2D images can include color image data streams such as multiple image sequences, video data, etc., or multiple images in any of various formats for images, depending on the application. Another source of data that can be used to generate a surround view includes location information 706. This location information 706 can be obtained from sources such as accelerometers, gyroscopes, magnetometers, GPS, Wi-Fi, IMU-like systems (Inertial Measurement Unit systems), and the like. Yet another source of data that can be used to generate a surround view can include depth images 708. These depth images can include depth, 3D, or disparity image data streams, and the like, and can be captured by devices such as, but not limited to, stereo cameras, time-of-flight cameras, three-dimensional cameras, and the like.
  • In the present example embodiment, the data can then be fused together at sensor fusion block 710. In some embodiments, a surround view can be generated from a combination of data that includes both 2D images 704 and location information 706, without any depth images 708 provided. In other embodiments, depth images 708 and location information 706 can be used together at sensor fusion block 710. Various combinations of image data can be used with location information 706, depending on the application and available data.
  • In the present example embodiment, the data that has been fused together at sensor fusion block 710 is then used for content modeling 711 and context modeling 714. During this process, the subject matter featured in the images can be separated into content and context. The content can be delineated as the object of interest and the context can be delineated as the scenery surrounding the object of interest. According to various embodiments, the content can be a three-dimensional model, depicting an object of interest, although the content can be a two-dimensional image in some embodiments. Furthermore, in some embodiments, the context can be a two-dimensional model depicting the scenery surrounding the object of interest. Although in many examples the context can provide two-dimensional views of the scenery surrounding the object of interest, the context can also include three-dimensional aspects in some embodiments. For instance, the context can be depicted as a “flat” image along a cylindrical “canvas,” such that the “flat” image appears on the surface of a cylinder. In addition, some examples may include three-dimensional context models, such as when some objects are identified in the surrounding scenery as three-dimensional objects. According to various embodiments, the models provided by content modeling 711 and context modeling 714 can be generated by combining the image and location information data.
  • According to various embodiments, context and content of a surround view are determined based on a specified object of interest. In some examples, an object of interest is automatically chosen based on processing of the image and location information data. For instance, if a dominant object is detected in a series of images, this object can be selected as the content. In other examples, a user specified target 702 can be chosen. It should be noted, however, that a surround view can be generated without a user specified target in some applications.
  • In the present example embodiment, one or more enhancement algorithms can be applied at enhancement algorithm(s) block 716. In particular example embodiments, various algorithms can be employed during capture of surround view data, regardless of the type of capture mode employed. These algorithms can be used to enhance the user experience. For instance, automatic frame selection, stabilization, view interpolation, filters, and/or compression can be used during capture of surround view data. In some examples, these enhancement algorithms can be applied to image data after acquisition of the data. In other examples, these enhancement algorithms can be applied to image data during capture of surround view data.
  • According to particular example embodiments, automatic frame selection can be used to create a more enjoyable surround view. Specifically, frames are automatically selected so that the transition between them will be smoother or more even. This automatic frame selection can incorporate blur and overexposure detection in some applications, as well as more uniformly sampling poses such that they are more evenly distributed.
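  • Assuming OpenCV, the blur and overexposure checks could be as simple as a Laplacian-variance sharpness score plus a bright-pixel fraction; the thresholds below are arbitrary placeholders, not values from the disclosure.

      import cv2
      import numpy as np

      def frame_quality_ok(frame, blur_thresh=100.0, bright_frac=0.2):
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low => blurry
          overexposed = np.mean(gray > 250)  # fraction of blown-out pixels
          return sharpness > blur_thresh and overexposed < bright_frac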
  • In some example embodiments, stabilization can be used for a surround view in a manner similar to that used for video. In particular, key frames in a surround view can be stabilized to produce improvements such as smoother transitions, improved/enhanced focus on the content, etc. However, unlike video, there are many additional sources of stabilization for a surround view, such as by using IMU information, depth information, computer vision techniques, direct selection of an area to be stabilized, face detection, and the like.
  • For instance, IMU information can be very helpful for stabilization. In particular, IMU information provides an estimate, although sometimes a rough or noisy estimate, of the camera tremor that may occur during image capture. This estimate can be used to remove, cancel, and/or reduce the effects of such camera tremor.
  • In some examples, depth information, if available, can be used to provide stabilization for a surround view. Because points of interest in a surround view are three-dimensional, rather than two-dimensional, these points of interest are more constrained and tracking/matching of these points is simplified as the search space reduces. Furthermore, descriptors for points of interest can use both color and depth information and therefore, become more discriminative. In addition, automatic or semi-automatic content selection can be easier to provide with depth information. For instance, when a user selects a particular pixel of an image, this selection can be expanded to fill the entire surface that touches it. Furthermore, content can also be selected automatically by using a foreground/background differentiation based on depth. In various examples, the content can stay relatively stable/visible even when the context changes.
  • According to various examples, computer vision techniques can also be used to provide stabilization for surround views. For instance, key points can be detected and tracked. However, in certain scenes, such as a dynamic scene or static scene with parallax, no simple warp exists that can stabilize everything. Consequently, there is a trade-off in which certain aspects of the scene receive more attention to stabilization and other aspects of the scene receive less attention. Because a surround view is often focused on a particular object of interest, a surround view can be content-weighted so that the object of interest is maximally stabilized in some examples.
  • Another way to improve stabilization in a surround view includes direct selection of a region of a screen. For instance, if a user taps to focus on a region of a screen, then records a convex surround view, the area that was tapped can be maximally stabilized. This allows stabilization algorithms to be focused on a particular area or object of interest.
  • In some examples, face detection can be used to provide stabilization. For instance, when recording with a front-facing camera, it is often likely that the user is the object of interest in the scene. Thus, face detection can be used to weight stabilization about that region. When face detection is precise enough, facial features themselves (such as eyes, nose, mouth) can be used as areas to stabilize, rather than using generic key points.
  • According to various examples, view interpolation can be used to improve the viewing experience. In particular, to avoid sudden “jumps” between stabilized frames, synthetic, intermediate views can be rendered on the fly. This can be informed by content-weighted key point tracks and IMU information as described above, as well as by denser pixel-to-pixel matches. If depth information is available, fewer artifacts resulting from mismatched pixels may occur, thereby simplifying the process. As described above, view interpolation can be applied during capture of a surround view in some embodiments. In other embodiments, view interpolation can be applied during surround view generation.
  • In some examples, filters can also be used during capture or generation of a surround view to enhance the viewing experience. Just as many popular photo sharing services provide aesthetic filters that can be applied to static, two-dimensional images, aesthetic filters can similarly be applied to surround images. However, because a surround view representation is more expressive than a two-dimensional image, and three-dimensional information is available in a surround view, these filters can be extended to include effects that are ill-defined in two-dimensional photos. For instance, in a surround view, motion blur can be added to the background (i.e. context) while the content remains crisp. In another example, a drop-shadow can be added to the object of interest in a surround view.
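The background motion-blur effect described above can be sketched by filtering the whole frame with a directional blur kernel and then restoring the content pixels from the original frame using a foreground mask (for example, one produced by the depth-based selection sketched earlier). The kernel length and direction are illustrative.

```python
# Sketch of a background motion-blur filter: blur everything, then restore
# the content pixels so only the context appears blurred.
import numpy as np
import cv2

def background_motion_blur(frame, content_mask, length=25):
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0 / length      # horizontal streak
    blurred = cv2.filter2D(frame, -1, kernel)  # blur the whole frame
    out = blurred.copy()
    out[content_mask] = frame[content_mask]    # keep the content crisp
    return out
```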
  • In various examples, compression can also be used as an enhancement algorithm 716. In particular, compression can be used to enhance user-experience by reducing data upload and download costs. Because surround views use spatial information, far less data can be sent for a surround view than a typical video, while maintaining desired qualities of the surround view. Specifically, the IMU, key point tracks, and user input, combined with the view interpolation described above, can all reduce the amount of data that must be transferred to and from a device during upload or download of a surround view. For instance, if an object of interest can be properly identified, a variable compression style can be chosen for the content and context. This variable compression style can include lower quality resolution for background information (i.e. context) and higher quality resolution for foreground information (i.e. content) in some examples. In such examples, the amount of data transmitted can be reduced by sacrificing some of the context quality, while maintaining a desired level of quality for the content.
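The variable compression style can be sketched by encoding the content and context regions at different JPEG qualities. A production codec would do this inside a single bitstream, so the two-buffer split and the quality values below are purely illustrative.

```python
# Sketch of variable compression: content and context at different qualities.
import cv2

def variable_quality_encode(frame, content_mask, q_content=90, q_context=30):
    content, context = frame.copy(), frame.copy()
    content[~content_mask] = 0   # keep only foreground pixels
    context[content_mask] = 0    # keep only background pixels
    _, content_bytes = cv2.imencode(
        ".jpg", content, [cv2.IMWRITE_JPEG_QUALITY, q_content])
    _, context_bytes = cv2.imencode(
        ".jpg", context, [cv2.IMWRITE_JPEG_QUALITY, q_context])
    return content_bytes, context_bytes  # composite again after decode
```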
  • In the present embodiment, a surround view 718 is generated after any enhancement algorithms are applied. The surround view can provide a multi-view interactive digital media representation. In various examples, the surround view can include a three-dimensional model of the content and a two-dimensional model of the context. However, in some examples, the context can represent a “flat” view of the scenery or background as projected along a surface, such as a cylindrical or other-shaped surface, such that the context is not purely two-dimensional. In yet other examples, the context can include three-dimensional aspects.
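For concreteness, a surround view might be carried in a structure along the following lines; the field names and types are assumptions, not the disclosed format.

```python
# Illustrative container for a surround view.
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class SurroundView:
    keyframes: List[np.ndarray]               # stabilized source images
    poses: List[np.ndarray]                   # per-keyframe 4x4 camera poses
    content_model: np.ndarray                 # 3D points/mesh for the content
    context_surface: np.ndarray               # context projected on a cylinder
    imu_samples: Optional[np.ndarray] = None  # raw IMU data, if retained
```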
  • According to various embodiments, surround views provide numerous advantages over traditional two-dimensional images or videos. Some of these advantages include: the ability to cope with moving scenery, a moving acquisition device, or both; the ability to model parts of the scene in three-dimensions; the ability to remove unnecessary, redundant information and reduce the memory footprint of the output dataset; the ability to distinguish between content and context; the ability to use the distinction between content and context for improvements in the user-experience; the ability to use the distinction between content and context for improvements in memory footprint (an example would be high quality compression of content and low quality compression of context); the ability to associate special feature descriptors with surround views that allow the surround views to be indexed with a high degree of efficiency and accuracy; and the ability of the user to interact and change the viewpoint of the surround view. In particular example embodiments, the characteristics described above can be incorporated natively in the surround view representation, and provide the capability for use in various applications. For instance, surround views can be used in applying filters or visual effects.
  • According to various example embodiments, once a surround view 718 is generated, user feedback for acquisition 720 of additional image data can be provided. In particular, if a surround view is determined to need additional views to provide a more accurate model of the content or context, a user may be prompted to provide additional views. Once these additional views are received by the surround view acquisition system 700, these additional views can be processed by the system 700 and incorporated into the surround view.
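The feedback step can be sketched as a coverage check: given the yaw angles of the captured views around the object, the largest angular gap is found and, if it exceeds a threshold, the user is prompted to capture views in that gap. The threshold and prompt wording are illustrative.

```python
# Sketch of user feedback for acquiring additional views.
import numpy as np

def missing_view_prompt(yaw_angles_deg, max_gap_deg=30.0):
    a = np.sort(np.mod(yaw_angles_deg, 360.0))
    gaps = np.diff(np.concatenate([a, [a[0] + 360.0]]))
    worst = float(gaps.max())
    if worst > max_gap_deg:
        start = float(a[int(np.argmax(gaps))])
        return (f"Capture more views near {(start + worst / 2) % 360:.0f} "
                f"degrees around the object (gap of {worst:.0f} degrees).")
    return None  # coverage is sufficient
```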
  • With reference to FIG. 8, shown is an example of a device capturing multiple views of an object of interest from different locations. The capture device is indicated as camera 811, and moves from location 822 to location 824 and from location 824 to location 826. The multiple camera views 802, 804, and 806 captured by camera 811 can be fused together into a three-dimensional (3D) model. According to various embodiments, multiple images can be captured from various viewpoints and fused together to provide a multi-view digital media representation.
  • In the present example embodiment, camera 811 moves to locations 822, 824, and 826, respectively, along paths 828 and 830, in proximity to an object of interest 808. Scenery can surround the object of interest 808, such as object 810. Views 802, 804, and 806 are captured by camera 811 from locations 822, 824, and 826 and include overlapping subject matter. Specifically, each view 802, 804, and 806 includes the object of interest 808 and varying degrees of visibility of the scenery 810 surrounding the object. For instance, view 802 includes a view of the object of interest 808 in front of the cylinder that is part of the scenery surrounding the object 808. View 804 shows the object of interest 808 to one side of the cylinder, and view 806 shows the object of interest without any view of the cylinder.
  • In the present example embodiment, the various views 802, 804, and 806 along with their associated locations 822, 824, and 826, respectively, provide a rich source of information about object of interest 808 and the surrounding context that can be used to produce a multi-view digital media representation, such as a surround view. For instance, when analyzed together, the various views 802, 804, and 806 provide information about different sides of the object of interest and the relationship between the object of interest and the scenery. These views also provide information about the relative size and scale of the object of interest in relation to the scenery. Furthermore, views from different sides of the object provide information about the shape and texture of the object. According to various embodiments, this information can be used to parse out the object of interest 808 into content and the scenery 810 as the context. In particular examples, the content can then be used for applying filters.
  • With reference to FIG. 9, shown is an example of a device capturing views of an object of interest. During a filter session, multiple views of the object 908 may be captured by the device 970 from different locations. In the present example, data is acquired when a user taps a record button 980 on capture device 970 to begin recording images of the object.
  • The user moves the capture device 970 from location 822 to location 824 along path 828 and from location 824 to location 826 along path 830. As described in more detail throughout this application, filtering can be provided at the device 970, and prompts for the user to capture particular views can be provided during the session. In particular, the system can prompt the user to move the device 970 in a particular direction or may prompt the user to provide additional information. As the user records different views of the object, filtering suggestions may be iteratively refined to provide accurate results. The user may choose to stop recording by tapping the record button 980 again. In other examples, the user can tap and hold the record button during the session, and release to stop recording. In the present embodiment, the recording captures a series of images that can be used to generate a multi-view digital media representation that can be used for filtering either in real time or after the fact.
  • In some implementations, applying a filter to a multi-view digital media representation may involve processing a succession of images taken from different perspectives. In such an example, the client device may perform low-level processing such as two-dimensional analysis of individual images. The server, on the other hand, may perform high-level processing such as combining different individual images to produce a three-dimensional model of an object that is the subject of a multi-view video.
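The client/server split might be sketched as follows: the client performs the low-level two-dimensional analysis (key point extraction) and transmits compact features for the server's high-level processing. This assumes the Python requests library is available; the endpoint URL and response format are hypothetical.

```python
# Sketch of the client-side half of the processing split.
import json
import cv2
import requests

def analyze_remotely(frame, server_url="https://example.com/analyze"):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(gray, 200, 0.01, 10)
    keypoints = [] if pts is None else pts.reshape(-1, 2).tolist()
    resp = requests.post(server_url, data=json.dumps({"keypoints": keypoints}),
                         headers={"Content-Type": "application/json"},
                         timeout=10)
    return resp.json()  # e.g. characteristics identified by the server
```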
  • With reference to FIG. 10, shown is a particular example of a computer system that can be used to implement particular examples of the present invention. For instance, the computer system 1000 can be used to apply one or more filters or effects to visual digital media content according to various embodiments described above. According to particular example embodiments, a system 1000 suitable for implementing particular embodiments of the present invention includes a processor 1001, a memory 1003, a communications interface 1011, a filter interface 1013, and a bus 1015 (e.g., a PCI bus). The filter interface 1013 may include separate input and output interfaces, or may be a unified interface supporting both operations. When acting under the control of appropriate software or firmware, the processor 1001 is responsible for such tasks as optimization. Various specially configured devices can also be used in place of a processor 1001 or in addition to processor 1001. The complete implementation can also be done in custom hardware. The communications interface 1011 is typically configured to send and receive data packets or data segments over a network. Particular examples of interfaces the device supports include Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like.
  • In addition, various very high-speed interfaces may be provided, such as fast Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications-intensive tasks as packet switching, media control, and management.
  • According to particular example embodiments, the system 1000 uses memory 1003 to store data and program instructions and to maintain a local side cache. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store received metadata and batch requested metadata.
  • Because such information and program instructions may be employed to implement the systems/methods described herein, the present invention relates to tangible, machine-readable media that include program instructions, state information, etc., for performing various operations described herein. Examples of machine-readable media include hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM) and programmable read-only memory devices (PROMs). Examples of program instructions include both machine code, such as that produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter.
  • According to various embodiments, a potentially infinite variety of filters or modifications may be applied to digital media content. Although various examples have been described elsewhere in this application, some additional examples are provided here for additional context. When a person is detected with arms outstretched, visual elements such as angel wings, bat wings, butterfly wings, plane wings and engines, or a jetpack with exhaust fumes may be added. When a person is detected in a leaning posture, visual elements such as a dinosaur tail, a squirrel tail, or a raccoon tail may be added. When a person is detected standing with hands on hips, visual elements may be added to replace the person's clothing with a superhero costume or to add a cape to the person's existing attire. When a person is detected as yelling, for instance with hands cupped around the mouth, visual elements may be added to depict a megaphone, flames, or a speech bubble near the person's mouth. Depending on a person's pose, visual elements may be added to replace a person's clothing or to depict a person's body as a skeleton. When a person is detected as standing in a body builder's pose, the person's body may be replaced with one exhibiting more muscles or deformed to appear to exhibit more muscles. When a person is detected as having a hand over a mouth, visual elements may be added to make the person appear to be underwater as a scuba diver or mermaid. When a person is detected as leaning forward in a flying position, visual elements may be added to make the person appear to be a flying angel or superhero. For instance, a person's legs may be moved to make the person appear not to be supported by the ground. When a person is detected with arms uplifted, visual elements may be added to cause rainbows, money, or angels to appear over the person. When a person is detected with hands arranged in a boxing pose, visual elements may be added to make the person appear to be wearing boxing gloves or holding a weapon. A person's facial features or body may be modified to make the person appear to have the head or body of an animal, a fruit, a robot, or some other such object. A person's facial features may be detected and then used to select a corresponding emoticon, which then may be used to replace the person's head. When a person is detected as walking a dog, the dog's head and the person's head may be swapped. A person may be made to appear much thinner, heavier, more muscular, less muscular, or wavier than in reality. Motion blur may be added to make a person appear to be spinning very quickly. The preceding examples illustrate the types of visual modifications that could be made, but a potentially infinite variety of visual modifications may be provided according to various embodiments.
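Scene-aware selection of such effects can be sketched as a rule table keyed on a pose label produced by an upstream classifier; the pose labels, effect names, and detector interface below are illustrative assumptions, not the disclosed system.

```python
# Illustrative rule table mapping detected poses to candidate effects drawn
# from the examples above.
POSE_EFFECTS = {
    "arms_outstretched": ["angel_wings", "bat_wings", "jetpack"],
    "leaning":           ["dinosaur_tail", "squirrel_tail", "raccoon_tail"],
    "hands_on_hips":     ["superhero_costume", "cape"],
    "yelling":           ["megaphone", "flames", "speech_bubble"],
    "boxing_pose":       ["boxing_gloves"],
    "arms_uplifted":     ["rainbows", "money", "angels"],
}

def suggest_effects(detected_pose):
    """Return candidate modifications to present in the user interface for
    selection, given a pose label from an upstream classifier."""
    return POSE_EFFECTS.get(detected_pose, [])
```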
  • Although particular features have been described as part of each example in the present disclosure, any combination of these features or additions of other features are intended to be included within the scope of this disclosure. Accordingly, the embodiments described herein are to be considered as illustrative and not restrictive. Furthermore, although many of the components and processes are described above in the singular for convenience, it will be appreciated by one of skill in the art that multiple components and repeated processes can also be used to practice the techniques of the present disclosure.
  • While the present disclosure has been particularly shown and described with reference to specific embodiments thereof, it will be understood by those skilled in the art that changes in the form and details of the disclosed embodiments may be made without departing from the spirit or scope of the invention. Specifically, there are many alternative ways of implementing the processes, systems, and apparatuses described. It is therefore intended that the invention be interpreted to include all variations and equivalents that fall within the true spirit and scope of the present invention.

Claims (20)

What is claimed is:
1. A method comprising:
analyzing a visual digital media item with a processor to identify one or more characteristics associated with the visual digital media item, wherein the characteristics include a physical object represented in the visual digital media item;
based on the identified characteristics, selecting a visual digital media modification from a plurality of visual digital media modifications for application to the visual digital media item; and
providing the selected visual digital media modification for presentation in a user interface for selection by a user.
2. The method recited in claim 1, wherein the visual digital media item is a video stream.
3. The method recited in claim 2, wherein the video stream is a live camera view captured via a camera.
4. The method recited in claim 1, wherein the visual digital media item includes a surround view of the object, wherein the surround view of the object includes spatial information, scale information, or a plurality of different viewpoint images of the object.
5. The method recited in claim 1, wherein the characteristics include structure information indicating a physical context in which the physical object is positioned.
6. The method recited in claim 1, wherein the characteristics include pose information indicating an attitude or position associated with the physical object.
7. The method recited in claim 1, wherein the characteristics include movement information indicating a degree of velocity or acceleration of the object.
8. The method recited in claim 1, wherein the visual digital media modification includes a virtual object positioned within the visual digital media item.
9. The method recited in claim 1, wherein the visual digital media modification includes an artificial light source that appears to be blocked by the physical object in the visual digital media item.
10. The method recited in claim 1, wherein the visual digital media modification includes motion blur indicating movement associated with the object or a change to the color of a portion of the visual digital media item.
11. The method recited in claim 1, wherein the object is a human being, and wherein the visual digital media modification includes a text bubble appearing in proximity to a face.
12. The method recited in claim 1, the method further comprising:
receiving the visual digital media item at a server via a network from a client device, wherein providing the visual digital media modification for presentation in a user interface includes transmitting a message via the network to the client device.
13. The method recited in claim 1, wherein analyzing the visual digital media item comprises transmitting the visual digital media item to a server via a network from a client device and receiving a response message at the client device, the response message identifying the one or more characteristics.
14. The method recited in claim 1, wherein the visual digital media modification is automatically applied to the visual digital media item when it is determined that the visual digital media modification meets one or more designated criteria.
15. The method recited in claim 14, wherein the one or more designated criteria include determining that the object is a person raising a hand.
16. A computing device comprising:
a camera configured to capture a visual digital media item;
a processor configured to identify one or more characteristics associated with the visual digital media item, wherein the characteristics include a physical object represented in the visual digital media item, and to select, based on the identified characteristics, a visual digital media modification from a plurality of visual digital media modifications for application to the visual digital media item; and
a display screen configured to present a user interface providing the selected visual digital media modification for user selection.
17. The computing device recited in claim 16, wherein the visual digital media item includes a surround view of the object, wherein the surround view of the object includes spatial information, scale information, and a plurality of different viewpoint images of the object.
18. The computing device recited in claim 16, wherein identifying the object comprises transmitting the visual digital media item to a server via a network from the computing device and receiving a response message at the computing device, the response message identifying the one or more characteristics.
19. The computing device recited in claim 16, wherein the characteristics include information selected from the group consisting of: structure information indicating a physical context in which the physical object is positioned, pose information indicating an attitude or position associated with the physical object, and movement information indicating a degree of velocity or acceleration of the object.
20. One or more non-transitory computer readable media having instructions stored thereon for performing a method, the method comprising:
analyzing a visual digital media item with a processor to identify one or more characteristics associated with the visual digital media item, wherein the characteristics include a physical object represented in the visual digital media item;
based on the identified characteristics, selecting a visual digital media modification from a plurality of visual digital media modifications for application to the visual digital media item; and
providing the visual digital media modification for presentation in a user interface for selection by a user.
US15/427,030 2017-02-07 2017-02-07 Scene-aware selection of filters and effects for visual digital media content Abandoned US20180227482A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/427,030 US20180227482A1 (en) 2017-02-07 2017-02-07 Scene-aware selection of filters and effects for visual digital media content
US18/634,975 US12381995B2 (en) 2017-02-07 2024-04-14 Scene-aware selection of filters and effects for visual digital media content
US19/236,672 US20250310468A1 (en) 2017-02-07 2025-06-12 Scene-aware selection of filters and effects for visual digital media content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/427,030 US20180227482A1 (en) 2017-02-07 2017-02-07 Scene-aware selection of filters and effects for visual digital media content

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/634,975 Continuation US12381995B2 (en) 2017-02-07 2024-04-14 Scene-aware selection of filters and effects for visual digital media content

Publications (1)

Publication Number Publication Date
US20180227482A1 (en) 2018-08-09

Family

ID=63037485

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/427,030 Abandoned US20180227482A1 (en) 2017-02-07 2017-02-07 Scene-aware selection of filters and effects for visual digital media content
US18/634,975 Active US12381995B2 (en) 2017-02-07 2024-04-14 Scene-aware selection of filters and effects for visual digital media content
US19/236,672 Pending US20250310468A1 (en) 2017-02-07 2025-06-12 Scene-aware selection of filters and effects for visual digital media content

Family Applications After (2)

Application Number Title Priority Date Filing Date
US18/634,975 Active US12381995B2 (en) 2017-02-07 2024-04-14 Scene-aware selection of filters and effects for visual digital media content
US19/236,672 Pending US20250310468A1 (en) 2017-02-07 2025-06-12 Scene-aware selection of filters and effects for visual digital media content

Country Status (1)

Country Link
US (3) US20180227482A1 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190116322A1 (en) * 2017-10-13 2019-04-18 Fyusion, Inc. Skeleton-based effects and background replacement
US10645294B1 (en) 2019-05-06 2020-05-05 Apple Inc. User interfaces for capturing and managing visual media
EP3703358A1 (en) * 2019-02-28 2020-09-02 Samsung Electronics Co., Ltd. Electronic device and content generation method
US20210102820A1 (en) * 2018-02-23 2021-04-08 Google Llc Transitioning between map view and augmented reality view
US11054973B1 (en) 2020-06-01 2021-07-06 Apple Inc. User interfaces for managing media
US11102414B2 (en) 2015-04-23 2021-08-24 Apple Inc. Digital viewfinder user interface for multiple cameras
US11112964B2 (en) 2018-02-09 2021-09-07 Apple Inc. Media capture lock affordance for graphical user interface
US11128792B2 (en) 2018-09-28 2021-09-21 Apple Inc. Capturing and displaying images with multiple focal planes
US11165949B2 (en) 2016-06-12 2021-11-02 Apple Inc. User interface for capturing photos with different camera magnifications
US11178335B2 (en) 2018-05-07 2021-11-16 Apple Inc. Creative camera
US11204692B2 (en) 2017-06-04 2021-12-21 Apple Inc. User interface camera effects
US11212449B1 (en) 2020-09-25 2021-12-28 Apple Inc. User interfaces for media capture and management
US11321857B2 (en) 2018-09-28 2022-05-03 Apple Inc. Displaying and editing images with depth information
US11350026B1 (en) 2021-04-30 2022-05-31 Apple Inc. User interfaces for altering visual media
US20220188547A1 (en) * 2020-12-16 2022-06-16 Here Global B.V. Method, apparatus, and computer program product for identifying objects of interest within an image captured by a relocatable image capture device
US20220262108A1 (en) * 2021-01-12 2022-08-18 Fujitsu Limited Apparatus, program, and method for anomaly detection and classification
US11468625B2 (en) 2018-09-11 2022-10-11 Apple Inc. User interfaces for simulated depth effects
US11481941B2 (en) * 2020-08-03 2022-10-25 Google Llc Display responsive communication system and method
US11587253B2 (en) 2020-12-23 2023-02-21 Here Global B.V. Method, apparatus, and computer program product for displaying virtual graphical data based on digital signatures
US11706521B2 (en) 2019-05-06 2023-07-18 Apple Inc. User interfaces for capturing and managing visual media
CN116503289A (en) * 2023-06-20 2023-07-28 北京天工异彩影视科技有限公司 Visual special effect application processing method and system
US11722764B2 (en) 2018-05-07 2023-08-08 Apple Inc. Creative camera
US11770601B2 (en) 2019-05-06 2023-09-26 Apple Inc. User interfaces for capturing and managing visual media
US11778339B2 (en) 2021-04-30 2023-10-03 Apple Inc. User interfaces for altering visual media
US11784975B1 (en) * 2021-07-06 2023-10-10 Bank Of America Corporation Image-based firewall system
US11829192B2 (en) 2020-12-23 2023-11-28 Here Global B.V. Method, apparatus, and computer program product for change detection based on digital signatures
US11830103B2 (en) 2020-12-23 2023-11-28 Here Global B.V. Method, apparatus, and computer program product for training a signature encoding module and a query processing module using augmented data
US20230384235A1 (en) * 2022-05-24 2023-11-30 Hon Hai Precision Industry Co., Ltd. Method for detecting product for defects, electronic device, and storage medium
US11900662B2 (en) 2020-12-16 2024-02-13 Here Global B.V. Method, apparatus, and computer program product for training a signature encoding module and a query processing module to identify objects of interest within an image utilizing digital signatures
US11991295B2 (en) 2021-12-07 2024-05-21 Here Global B.V. Method, apparatus, and computer program product for identifying an object of interest within an image from a digital signature generated by a signature encoding module including a hypernetwork
US12112024B2 (en) 2021-06-01 2024-10-08 Apple Inc. User interfaces for managing media styles
US12401889B2 (en) 2023-05-05 2025-08-26 Apple Inc. User interfaces for controlling media capture settings
US12506953B2 (en) 2021-12-03 2025-12-23 Apple Inc. Device, methods, and graphical user interfaces for capturing and displaying media

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240357104A1 (en) * 2023-04-21 2024-10-24 Nokia Technologies Oy Determining regions of interest using learned image codec for machines

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6185343B1 (en) * 1997-01-17 2001-02-06 Matsushita Electric Works, Ltd. Position detection system and method
US20050018045A1 (en) * 2003-03-14 2005-01-27 Thomas Graham Alexander Video processing
US20130120581A1 (en) * 2011-11-11 2013-05-16 Sony Europe Limited Apparatus, method and system
US20130147905A1 (en) * 2011-12-13 2013-06-13 Google Inc. Processing media streams during a multi-user video conference
US8589069B1 (en) * 2009-11-12 2013-11-19 Google Inc. Enhanced identification of interesting points-of-interest
US20140152834A1 (en) * 2012-12-05 2014-06-05 At&T Mobility Ii, Llc System and Method for Processing Streaming Media
US9043222B1 (en) * 2006-11-30 2015-05-26 NexRf Corporation User interface for geofence associated content
US20160001137A1 (en) * 2014-07-07 2016-01-07 Bradley Gene Phillips Illumination system for a sports ball
US20160171330A1 (en) * 2014-12-15 2016-06-16 Reflex Robotics, Inc. Vision based real-time object tracking system for robotic gimbal control
US20160267676A1 (en) * 2015-03-11 2016-09-15 Panasonic Intellectual Property Management Co., Ltd. Motion detection system
US20170024094A1 (en) * 2015-07-22 2017-01-26 Enthrall Sports LLC Interactive audience communication for events
US20170213385A1 (en) * 2016-01-26 2017-07-27 Electronics And Telecommunications Research Institute Apparatus and method for generating 3d face model using mobile device
US20170256066A1 (en) * 2016-03-05 2017-09-07 SmartPitch LLC Highly accurate baseball pitch speed detector using widely available smartphones
US20190025544A1 (en) * 2016-03-30 2019-01-24 Fujifilm Corporation Imaging apparatus and focus control method

Family Cites Families (513)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2534821A (en) 1945-10-23 1950-12-19 Lucas Ltd Joseph Control means
US5613056A (en) 1991-02-19 1997-03-18 Bright Star Technology, Inc. Advanced tools for speech synchronized animation
GB2256567B (en) 1991-06-05 1995-01-11 Sony Broadcast & Communication Modelling system for imaging three-dimensional models
US5706417A (en) 1992-05-27 1998-01-06 Massachusetts Institute Of Technology Layered representation for image coding
US5495576A (en) 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
JP3679426B2 (en) 1993-03-15 2005-08-03 マサチューセッツ・インスティチュート・オブ・テクノロジー A system that encodes image data into multiple layers, each representing a coherent region of motion, and motion parameters associated with the layers.
US5613048A (en) 1993-08-03 1997-03-18 Apple Computer, Inc. Three-dimensional image synthesis using view interpolation
US6327381B1 (en) 1994-12-29 2001-12-04 Worldscape, Llc Image transformation and synthesis methods
ZA962306B (en) 1995-03-22 1996-09-27 Idt Deutschland Gmbh Method and apparatus for depth modelling and providing depth information of moving objects
US5850352A (en) 1995-03-31 1998-12-15 The Regents Of The University Of California Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images
EP0838068B1 (en) 1995-07-10 2005-10-26 Sarnoff Corporation Method and system for rendering and combining images
US5847714A (en) 1996-05-31 1998-12-08 Hewlett Packard Company Interpolation method and apparatus for fast image magnification
US6108440A (en) 1996-06-28 2000-08-22 Sony Corporation Image data converting method
US5926190A (en) 1996-08-21 1999-07-20 Apple Computer, Inc. Method and system for simulating motion in a computer graphics application using image registration and view interpolation
US6080063A (en) 1997-01-06 2000-06-27 Khosla; Vinod Simulated real time game play with live event
US6031564A (en) 1997-07-07 2000-02-29 Reveo, Inc. Method and apparatus for monoscopic to stereoscopic image conversion
KR100582856B1 (en) 1997-09-23 2006-05-24 코닌클리케 필립스 일렉트로닉스 엔.브이. Motion Estimation and Motion Compensated Interpolation
CA2321049A1 (en) 1998-02-23 1999-08-26 Algotec Systems Ltd. Automatic path planning system and method
US6266068B1 (en) 1998-03-13 2001-07-24 Compaq Computer Corporation Multi-layer image-based rendering for video synthesis
US6504569B1 (en) 1998-04-22 2003-01-07 Grass Valley (U.S.), Inc. 2-D extended image generation from 3-D data extracted from a video sequence
JP2000059793A (en) 1998-08-07 2000-02-25 Sony Corp Image decoding apparatus and image decoding method
US6281903B1 (en) 1998-12-04 2001-08-28 International Business Machines Corporation Methods and apparatus for embedding 2D image content into 3D models
GB9826596D0 (en) 1998-12-04 1999-01-27 P J O Ind Limited Conductive materials
US6636633B2 (en) 1999-05-03 2003-10-21 Intel Corporation Rendering of photorealistic computer graphics images
US20010046262A1 (en) 2000-03-10 2001-11-29 Freda Robert M. System and method for transmitting a broadcast television signal over broadband digital transmission channels
US7796162B2 (en) 2000-10-26 2010-09-14 Front Row Technologies, Llc Providing multiple synchronized camera views for broadcast from a live venue activity to remote viewers
US20020024517A1 (en) 2000-07-14 2002-02-28 Komatsu Ltd. Apparatus and method for three-dimensional image production and presenting real objects in virtual three-dimensional space
US6778207B1 (en) 2000-08-07 2004-08-17 Koninklijke Philips Electronics N.V. Fast digital pan tilt zoom video
US6573912B1 (en) 2000-11-07 2003-06-03 Zaxel Systems, Inc. Internet system for virtual telepresence
US7146023B2 (en) 2000-12-15 2006-12-05 Sony Corporation Image processor, image signal generating method, information recording medium, and image processing program
US20040104935A1 (en) 2001-01-26 2004-06-03 Todd Williamson Virtual reality immersion system
EP1363246A4 (en) 2001-02-23 2006-11-08 Fujitsu Ltd DISPLAY CONTROL DEVICE, INFORMATION TERMINAL DEVICE EQUIPPED WITH DISPLAY CONTROL DEVICE, AND POINT POSITION CONTROL DEVICE
US20020190991A1 (en) 2001-05-16 2002-12-19 Daniel Efran 3-D instant replay system and method
US6983283B2 (en) 2001-10-03 2006-01-03 Sun Microsystems, Inc. Managing scene graph memory using data staging
US20030086002A1 (en) 2001-11-05 2003-05-08 Eastman Kodak Company Method and system for compositing images
AU2002359541A1 (en) 2001-11-30 2003-06-17 Zaxel Systems, Inc. Image-based rendering for 3d object viewing
US7631277B1 (en) 2001-12-14 2009-12-08 Apple Inc. System and method for integrating media objects
US7171344B2 (en) 2001-12-21 2007-01-30 Caterpillar Inc Method and system for providing end-user visualization
US6975756B1 (en) 2002-03-12 2005-12-13 Hewlett-Packard Development Company, L.P. Image-based photo hulls
US7631261B2 (en) 2002-09-12 2009-12-08 Inoue Technologies, LLC Efficient method for creating a visual telepresence for large numbers of simultaneous users
US7589732B2 (en) 2002-11-05 2009-09-15 Autodesk, Inc. System and method of integrated spatial and temporal navigation
JP4007899B2 (en) 2002-11-07 2007-11-14 オリンパス株式会社 Motion detection device
GB0229432D0 (en) 2002-12-18 2003-01-22 Flux Solutions Ltd Image display system
US6811264B2 (en) 2003-03-21 2004-11-02 Mitsubishi Electric Research Laboratories, Inc. Geometrically aware projector
US20040222987A1 (en) 2003-05-08 2004-11-11 Chang Nelson Liang An Multiframe image processing
NZ525956A (en) 2003-05-16 2005-10-28 Deep Video Imaging Ltd Display control system for use with multi-layer displays
US6968973B2 (en) 2003-05-31 2005-11-29 Microsoft Corporation System and process for viewing and navigating through an interactive video tour
WO2004114063A2 (en) 2003-06-13 2004-12-29 Georgia Tech Research Corporation Data reconstruction using directional interpolation techniques
US8199222B2 (en) 2007-03-05 2012-06-12 DigitalOptics Corporation Europe Limited Low-light video frame enhancement
US8682097B2 (en) 2006-02-14 2014-03-25 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US7317457B2 (en) 2003-07-21 2008-01-08 Autodesk, Inc. Processing image data
US20050046645A1 (en) 2003-07-24 2005-03-03 Breton Pierre Felix Autoscaling
US7161606B2 (en) 2003-09-08 2007-01-09 Honda Motor Co., Ltd. Systems and methods for directly generating a view using a layered approach
CA2551053A1 (en) 2003-11-03 2005-05-12 Bracco Imaging S.P.A. Stereo display of tube-like structures and improved techniques therefor ("stereo display")
US7961194B2 (en) 2003-11-19 2011-06-14 Lucid Information Technology, Ltd. Method of controlling in real time the switching of modes of parallel operation of a multi-mode parallel graphics processing subsystem embodied within a host computing system
JP4966015B2 (en) 2003-12-01 2012-07-04 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Motion compensated inverse filtering using bandpass filters for motion blur reduction
WO2005081524A1 (en) 2004-02-23 2005-09-01 Koninklijke Philips Electronics N.V. Reducing artefacts in scan-rate conversion of image signals by combining interpolation and extrapolation of images
US20050186548A1 (en) 2004-02-25 2005-08-25 Barbara Tomlinson Multimedia interactive role play system
US7262783B2 (en) 2004-03-03 2007-08-28 Virtual Iris Studios, Inc. System for delivering and enabling interactivity with images
US7657060B2 (en) 2004-03-31 2010-02-02 Microsoft Corporation Stylization of video
US7633511B2 (en) 2004-04-01 2009-12-15 Microsoft Corporation Pop-up light field
US7257272B2 (en) 2004-04-16 2007-08-14 Microsoft Corporation Virtual image generation
US7505054B2 (en) 2004-05-12 2009-03-17 Hewlett-Packard Development Company, L.P. Display resolution systems and methods
US8021300B2 (en) 2004-06-16 2011-09-20 Siemens Medical Solutions Usa, Inc. Three-dimensional fly-through systems and methods using ultrasound data
US7015926B2 (en) 2004-06-28 2006-03-21 Microsoft Corporation System and process for generating a two-layer, 3D representation of a scene
WO2007018523A2 (en) 2004-07-28 2007-02-15 Sarnoff Corporation Method and apparatus for stereo, multi-camera tracking and rf and video track fusion
JP4079375B2 (en) 2004-10-28 2008-04-23 シャープ株式会社 Image stabilizer
US8823821B2 (en) 2004-12-17 2014-09-02 Mitsubishi Electric Research Laboratories, Inc. Method and system for processing multiview videos for view synthesis using motion vector predictor list
US8854486B2 (en) 2004-12-17 2014-10-07 Mitsubishi Electric Research Laboratories, Inc. Method and system for processing multiview videos for view synthesis using skip and direct modes
KR100511210B1 (en) 2004-12-27 2005-08-30 주식회사지앤지커머스 Method for converting 2d image into pseudo 3d image and user-adapted total coordination method in use artificial intelligence, and service besiness method thereof
IL166305A0 (en) 2005-01-14 2006-01-15 Rafael Armament Dev Authority Automatic conversion from monoscopic video to stereoscopic video
JP2006260527A (en) 2005-02-16 2006-09-28 Toshiba Corp Image matching method and image interpolation method using the same
WO2006089417A1 (en) 2005-02-23 2006-08-31 Craig Summers Automatic scene modeling for the 3d camera and 3d video
US7570810B2 (en) 2005-02-24 2009-08-04 Seiko Epson Corporation Method and apparatus applying digital image filtering to color filter array data
US7587101B1 (en) 2005-02-28 2009-09-08 Adobe Systems Incorporated Facilitating computer-assisted tagging of object instances in digital images
WO2006102244A2 (en) 2005-03-18 2006-09-28 Kristin Acker Interactive floorplan viewer
FR2884341B1 (en) 2005-04-11 2008-05-02 Gen Electric METHOD AND SYSTEM FOR ENHANCING A DIGITAL IMAGE GENERATED FROM AN X-RAY DETECTOR
US7474848B2 (en) 2005-05-05 2009-01-06 Hewlett-Packard Development Company, L.P. Method for achieving correct exposure of a panoramic photograph
US7565029B2 (en) 2005-07-08 2009-07-21 Seiko Epson Corporation Method for determining camera position from two-dimensional images that form a panorama
US8963926B2 (en) 2006-07-11 2015-02-24 Pandoodle Corporation User customized animated video and method for making the same
CN101243392A (en) 2005-08-15 2008-08-13 皇家飞利浦电子股份有限公司 Systems, devices and methods for end-user programmed augmented reality glasses
JP2009505550A (en) 2005-08-17 2009-02-05 エヌエックスピー ビー ヴィ Video processing method and apparatus for depth extraction
US7957466B2 (en) 2005-09-16 2011-06-07 Sony Corporation Adaptive area of influence filter for moving object boundaries
US20070070069A1 (en) 2005-09-26 2007-03-29 Supun Samarasekera System and method for enhanced situation awareness and visualization of environments
US8094928B2 (en) 2005-11-14 2012-01-10 Microsoft Corporation Stereo video for gaming
US8160400B2 (en) 2005-11-17 2012-04-17 Microsoft Corporation Navigating images using image based geometric alignment and object based controls
US20070118801A1 (en) 2005-11-23 2007-05-24 Vizzme, Inc. Generation and playback of multimedia presentations
CN101375315B (en) 2006-01-27 2015-03-18 图象公司 Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality
US9070402B2 (en) 2006-03-13 2015-06-30 Autodesk, Inc. 3D model presentation system with motion and transitions at each camera view point of interest (POI) with imageless jumps to each POI
US7577314B2 (en) 2006-04-06 2009-08-18 Seiko Epson Corporation Method and apparatus for generating a panorama background from a set of images
US7778491B2 (en) 2006-04-10 2010-08-17 Microsoft Corporation Oblique image stitching
JP5051500B2 (en) 2006-05-17 2012-10-17 株式会社セガ Information processing apparatus and program and method for generating squeal sound in the apparatus
US7803113B2 (en) 2006-06-14 2010-09-28 Siemens Medical Solutions Usa, Inc. Ultrasound imaging of rotation
EP2033164B1 (en) 2006-06-23 2015-10-07 Imax Corporation Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
US7940971B2 (en) 2006-07-24 2011-05-10 Siemens Medical Solutions Usa, Inc. System and method for coronary digital subtraction angiography
US20080033641A1 (en) 2006-07-25 2008-02-07 Medalia Michael J Method of generating a three-dimensional interactive tour of a geographic location
US9369679B2 (en) 2006-11-07 2016-06-14 The Board Of Trustees Of The Leland Stanford Junior University System and process for projecting location-referenced panoramic images into a 3-D environment model and rendering panoramic images from arbitrary viewpoints within the 3-D environment model
US8078004B2 (en) 2006-11-09 2011-12-13 University Of Delaware Geometric registration of images by similarity transformation using two reference points
US8947452B1 (en) 2006-12-07 2015-02-03 Disney Enterprises, Inc. Mechanism for displaying visual clues to stacking order during a drag and drop operation
US7809212B2 (en) 2006-12-20 2010-10-05 Hantro Products Oy Digital mosaic image construction
US20090262074A1 (en) 2007-01-05 2009-10-22 Invensense Inc. Controlling and accessing content using motion processing on mobile devices
US8994644B2 (en) 2007-01-26 2015-03-31 Apple Inc. Viewing images with tilt control on a hand-held device
US8538795B2 (en) 2007-02-12 2013-09-17 Pricelock, Inc. System and method of determining a retail commodity price within a geographic boundary
US20080198159A1 (en) 2007-02-16 2008-08-21 Matsushita Electric Industrial Co., Ltd. Method and apparatus for efficient and flexible surveillance visualization with context sensitive privacy preserving and power lens data mining
US7900225B2 (en) 2007-02-20 2011-03-01 Google, Inc. Association of ads with tagged audiovisual content
JP4345829B2 (en) 2007-03-09 2009-10-14 ソニー株式会社 Image display system, image display apparatus, image display method, and program
US8593506B2 (en) 2007-03-15 2013-11-26 Yissum Research Development Company Of The Hebrew University Of Jerusalem Method and system for forming a panoramic image of a scene having minimal aspect distortion
WO2009023044A2 (en) 2007-04-24 2009-02-19 21 Ct, Inc. Method and system for fast dense stereoscopic ranging
US7688229B2 (en) 2007-04-30 2010-03-30 Navteq North America, Llc System and method for stitching of video for routes
DE102007029476A1 (en) 2007-06-26 2009-01-08 Robert Bosch Gmbh Image processing apparatus for shadow detection and suppression, method and computer program
US8358332B2 (en) 2007-07-23 2013-01-22 Disney Enterprises, Inc. Generation of three-dimensional movies with improved depth control
US8229163B2 (en) 2007-08-22 2012-07-24 American Gnc Corporation 4D GIS based virtual reality for moving target prediction
US7945802B2 (en) 2007-09-17 2011-05-17 International Business Machines Corporation Modifying time progression rates in a virtual universe
EP2201784B1 (en) 2007-10-11 2012-12-12 Koninklijke Philips Electronics N.V. Method and device for processing a depth-map
JP4926916B2 (en) 2007-11-07 2012-05-09 キヤノン株式会社 Information processing apparatus, information processing method, and computer program
US8503744B2 (en) 2007-11-19 2013-08-06 Dekel Shlomi Dynamic method and system for representing a three dimensional object navigated from within
US8531449B2 (en) 2007-12-18 2013-09-10 Navteq B.V. System and method for producing multi-angle views of an object-of-interest from images in an image dataset
US20090163185A1 (en) 2007-12-24 2009-06-25 Samsung Electronics Co., Ltd. Method and system for creating, receiving and playing multiview images, and related mobile communication device
US7917243B2 (en) 2008-01-08 2011-03-29 Stratasys, Inc. Method for building three-dimensional objects containing embedded inserts
US8103134B2 (en) 2008-02-20 2012-01-24 Samsung Electronics Co., Ltd. Method and a handheld device for capturing motion
US10872322B2 (en) 2008-03-21 2020-12-22 Dressbot, Inc. System and method for collaborative shopping, business and entertainment
US8214751B2 (en) 2008-04-15 2012-07-03 International Business Machines Corporation Dynamic spawning of focal point objects within a virtual universe system
US8189959B2 (en) 2008-04-17 2012-05-29 Microsoft Corporation Image blending using multi-splines
US8346017B2 (en) 2008-04-30 2013-01-01 Microsoft Corporation Intermediate point between images to insert/overlay ads
US9113214B2 (en) 2008-05-03 2015-08-18 Cinsay, Inc. Method and system for generation and playback of supplemented videos
US20090282335A1 (en) 2008-05-06 2009-11-12 Petter Alexandersson Electronic device with 3d positional audio function and method
US8174503B2 (en) 2008-05-17 2012-05-08 David H. Cain Touch-based authentication of a mobile device through user generated pattern creation
US8401276B1 (en) 2008-05-20 2013-03-19 University Of Southern California 3-D reconstruction and registration
US8160391B1 (en) 2008-06-04 2012-04-17 Google Inc. Panoramic image fill
FR2933220B1 (en) 2008-06-27 2010-06-18 Inst Francais Du Petrole METHOD FOR CONSTRUCTING A HYBRID MESH FROM A CPG TYPE MESH
US20100007715A1 (en) 2008-07-09 2010-01-14 Ortery Technologies, Inc. Method of Shooting Angle Adjustment for an Image Capturing Device that Moves Along a Circular Path
JP4990852B2 (en) 2008-07-31 2012-08-01 Kddi株式会社 Free viewpoint video generation system and recording medium for three-dimensional movement
BRPI0822727A2 (en) 2008-07-31 2015-07-14 Tele Atlas Bv 3d Navigation Data Display Method
US9307165B2 (en) 2008-08-08 2016-04-05 Qualcomm Technologies, Inc. In-camera panorama image stitching assistance
DE102008058489A1 (en) 2008-08-19 2010-04-15 Siemens Aktiengesellschaft A method of encoding a sequence of digitized images
EP2157800B1 (en) 2008-08-21 2012-04-18 Vestel Elektronik Sanayi ve Ticaret A.S. Method and apparatus for increasing the frame rate of a video signal
WO2010026496A1 (en) 2008-09-07 2010-03-11 Sportvu Ltd. Method and system for fusing video streams
WO2010037512A1 (en) 2008-10-02 2010-04-08 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Intermediate view synthesis and multi-view data signal extraction
US20100100492A1 (en) 2008-10-16 2010-04-22 Philip Law Sharing transaction information in a commerce network
US20100098258A1 (en) 2008-10-22 2010-04-22 Karl Ola Thorn System and method for generating multichannel audio with a portable electronic device
US20100110069A1 (en) 2008-10-31 2010-05-06 Sharp Laboratories Of America, Inc. System for rendering virtual see-through scenes
WO2010052550A2 (en) 2008-11-05 2010-05-14 Easywalk Capital S.A. System and method for creating and broadcasting interactive panoramic walk-through applications
US9621768B1 (en) 2008-12-08 2017-04-11 Tata Communications (America) Inc. Multi-view media display
EP2385705A4 (en) 2008-12-30 2011-12-21 Huawei Device Co Ltd Method and device for generating stereoscopic panoramic video stream, and method and device of video conference
TW201028964A (en) 2009-01-23 2010-08-01 Ind Tech Res Inst Depth calculating method for two dimension video and apparatus thereof
CN102308320B (en) 2009-02-06 2013-05-29 香港科技大学 Generating three-dimensional models from images
CN102685514B (en) 2009-02-19 2014-02-19 松下电器产业株式会社 Reproducing device, recording method, and recording medium reproducing system
US8503826B2 (en) 2009-02-23 2013-08-06 3DBin, Inc. System and method for computer-aided image processing for generation of a 360 degree view model
US8199186B2 (en) 2009-03-05 2012-06-12 Microsoft Corporation Three-dimensional (3D) imaging based on motionparallax
US8866841B1 (en) 2009-04-03 2014-10-21 Joshua Distler Method and apparatus to deliver imagery with embedded data
EP2417559A4 (en) 2009-04-08 2015-06-24 Stergen Hi Tech Ltd Method and system for creating three-dimensional viewable video from a single video stream
US20100259595A1 (en) 2009-04-10 2010-10-14 Nokia Corporation Methods and Apparatuses for Efficient Streaming of Free View Point Video
US20100305857A1 (en) 2009-05-08 2010-12-02 Jeffrey Byrne Method and System for Visual Collision Detection and Estimation
US9479768B2 (en) 2009-06-09 2016-10-25 Bartholomew Garibaldi Yukich Systems and methods for creating three-dimensional image media
US8933925B2 (en) 2009-06-15 2015-01-13 Microsoft Corporation Piecewise planar reconstruction of three-dimensional scenes
US8249302B2 (en) 2009-06-30 2012-08-21 Mitsubishi Electric Research Laboratories, Inc. Method for determining a location from images acquired of an environment with an omni-directional camera
US20110007072A1 (en) 2009-07-09 2011-01-13 University Of Central Florida Research Foundation, Inc. Systems and methods for three-dimensionally modeling moving objects
US8715031B2 (en) 2009-08-06 2014-05-06 Peter Sui Lun Fong Interactive device with sound-based action synchronization
US8275590B2 (en) 2009-08-12 2012-09-25 Zugara, Inc. Providing a simulation of wearing items such as garments and/or accessories
JP5209121B2 (en) 2009-09-18 2013-06-12 株式会社東芝 Parallax image generation device
US9083956B2 (en) 2009-09-28 2015-07-14 Samsung Electronics Co., Ltd. System and method for creating 3D video
WO2011049519A1 (en) 2009-10-20 2011-04-28 Telefonaktiebolaget Lm Ericsson (Publ) Method and arrangement for multi-view video compression
US10375287B2 (en) 2009-10-21 2019-08-06 Disney Enterprises, Inc. Object trail-based analysis and control of video
KR101631912B1 (en) 2009-11-03 2016-06-20 엘지전자 주식회사 Mobile terminal and control method thereof
KR101594048B1 (en) 2009-11-09 2016-02-15 삼성전자주식회사 3 device and method for generating 3 dimensional image using cooperation between cameras
US9174123B2 (en) 2009-11-09 2015-11-03 Invensense, Inc. Handheld computer systems and techniques for character and command recognition related to human movements
US9445072B2 (en) 2009-11-11 2016-09-13 Disney Enterprises, Inc. Synthesizing views based on image domain warping
US8817071B2 (en) 2009-11-17 2014-08-26 Seiko Epson Corporation Context constrained novel view interpolation
US8643701B2 (en) 2009-11-18 2014-02-04 University Of Illinois At Urbana-Champaign System for executing 3D propagation for depth image-based rendering
KR101282196B1 (en) 2009-12-11 2013-07-04 한국전자통신연구원 Apparatus and method for separating foreground and background of based codebook In a multi-view image
US8515134B2 (en) 2009-12-11 2013-08-20 Nxp B.V. System and method for motion estimation using image depth information
US10080006B2 (en) 2009-12-11 2018-09-18 Fotonation Limited Stereoscopic (3D) panorama creation on handheld device
US9766089B2 (en) 2009-12-14 2017-09-19 Nokia Technologies Oy Method and apparatus for correlating and navigating between a live image and a prerecorded panoramic image
US8447136B2 (en) 2010-01-12 2013-05-21 Microsoft Corporation Viewing media in the context of street-level images
US9052894B2 (en) 2010-01-15 2015-06-09 Apple Inc. API to replace a keyboard with custom controls
JP2011166264A (en) 2010-02-05 2011-08-25 Sony Corp Image processing apparatus, imaging device and image processing method, and program
US8947455B2 (en) 2010-02-22 2015-02-03 Nike, Inc. Augmented reality design system
US9381429B2 (en) 2010-02-24 2016-07-05 Valve Corporation Compositing multiple scene shots into a video game clip
US20110234750A1 (en) 2010-03-24 2011-09-29 Jimmy Kwok Lap Lai Capturing Two or More Images to Form a Panoramic Image
US8581905B2 (en) 2010-04-08 2013-11-12 Disney Enterprises, Inc. Interactive three dimensional displays on handheld devices
US20110254835A1 (en) 2010-04-20 2011-10-20 Futurity Ventures LLC System and method for the creation of 3-dimensional images
US8798992B2 (en) 2010-05-19 2014-08-05 Disney Enterprises, Inc. Audio noise modification for event broadcasting
US8762041B2 (en) 2010-06-21 2014-06-24 Blackberry Limited Method, device and system for presenting navigational information
US8730267B2 (en) 2010-06-21 2014-05-20 Celsia, Llc Viewpoint change on a display device based on movement of the device
US9134799B2 (en) 2010-07-16 2015-09-15 Qualcomm Incorporated Interacting with a projected user interface using orientation sensors
EP2596475B1 (en) 2010-07-19 2019-01-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Filling disocclusions in a virtual view
US20120019557A1 (en) 2010-07-22 2012-01-26 Sony Ericsson Mobile Communications Ab Displaying augmented reality information
US9411413B2 (en) 2010-08-04 2016-08-09 Apple Inc. Three dimensional user interface effects on a display
US20120127172A1 (en) 2010-08-18 2012-05-24 Industrial Technology Research Institute Systems and methods for image refinement using circuit model optimization
US8711143B2 (en) 2010-08-25 2014-04-29 Adobe Systems Incorporated System and method for interactive image-based modeling of curved surfaces using single-view and multi-view feature curves
US20120057006A1 (en) 2010-09-08 2012-03-08 Disney Enterprises, Inc. Autostereoscopic display system and method
JP5413344B2 (en) 2010-09-27 2014-02-12 カシオ計算機株式会社 Imaging apparatus, image composition method, and program
US8905314B2 (en) 2010-09-30 2014-12-09 Apple Inc. Barcode recognition using data-driven classifier
US9027117B2 (en) 2010-10-04 2015-05-05 Microsoft Technology Licensing, Llc Multiple-access-level lock screen
US20130208900A1 (en) 2010-10-13 2013-08-15 Microsoft Corporation Depth camera with integrated three-dimensional audio
US20120092348A1 (en) 2010-10-14 2012-04-19 Immersive Media Company Semi-automatic navigation with an immersive image
US8668647B2 (en) 2010-10-15 2014-03-11 The University Of British Columbia Bandpass sampling for elastography
US8705892B2 (en) 2010-10-26 2014-04-22 3Ditize Sl Generating three-dimensional virtual tours from two-dimensional images
US9171372B2 (en) 2010-11-23 2015-10-27 Qualcomm Incorporated Depth estimation based on global motion
US9472161B1 (en) 2010-12-01 2016-10-18 CIE Games LLC Customizing virtual assets
US8629886B2 (en) 2010-12-07 2014-01-14 Microsoft Corporation Layer combination in a surface composition system
JP5751986B2 (en) 2010-12-08 2015-07-22 キヤノン株式会社 Image generation device
US8494285B2 (en) 2010-12-09 2013-07-23 The Hong Kong University Of Science And Technology Joint semantic segmentation of images and scan data
US20120167146A1 (en) 2010-12-28 2012-06-28 White Square Media Llc Method and apparatus for providing or utilizing interactive video with tagged objects
US8655345B2 (en) 2011-01-08 2014-02-18 Steven K. Gold Proximity-enabled remote control
US8953022B2 (en) 2011-01-10 2015-02-10 Aria Glassworks, Inc. System and method for sharing virtual and augmented reality scenes between users and viewers
US8803912B1 (en) 2011-01-18 2014-08-12 Kenneth Peyton Fouts Systems and methods related to an interactive representative reality
US20120236201A1 (en) 2011-01-27 2012-09-20 In The Telling, Inc. Digital asset management, authoring, and presentation techniques
US8621355B2 (en) 2011-02-02 2013-12-31 Apple Inc. Automatic synchronization of media clips
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
US20120207308A1 (en) 2011-02-15 2012-08-16 Po-Hsun Sung Interactive sound playback device
EP2490448A1 (en) 2011-02-18 2012-08-22 Siemens Aktiengesellschaft Encoding method and image encoding device for compressing an image sequence
WO2012117729A1 (en) 2011-03-03 2012-09-07 パナソニック株式会社 Video provision device, video provision method, and video provision program capable of providing vicarious experience
US8745487B2 (en) 2011-03-17 2014-06-03 Xerox Corporation System and method for creating variable data print samples for campaigns
WO2012131895A1 (en) 2011-03-29 2012-10-04 株式会社東芝 Image encoding device, method and program, and image decoding device, method and program
KR101890850B1 (en) 2011-03-30 2018-10-01 삼성전자주식회사 Electronic apparatus for displaying a guide with 3d view and method thereof
KR101758164B1 (en) 2011-04-08 2017-07-26 엘지전자 주식회사 Mobile terminal and 3D multi-angle view controlling method thereof
US9313390B2 (en) 2011-04-08 2016-04-12 Qualcomm Incorporated Systems and methods to calibrate a multi camera device
US20120258436A1 (en) 2011-04-08 2012-10-11 Case Western Reserve University Automated assessment of cognitive, fine-motor, and memory skills
EP2511137B1 (en) 2011-04-14 2019-03-27 Harman Becker Automotive Systems GmbH Vehicle Surround View System
US8203605B1 (en) 2011-05-11 2012-06-19 Google Inc. Point-of-view object selection
US8600194B2 (en) 2011-05-17 2013-12-03 Apple Inc. Positional sensor-assisted image registration for panoramic photography
US8970665B2 (en) 2011-05-25 2015-03-03 Microsoft Corporation Orientation-based generation of panoramic fields
JP5818514B2 (en) 2011-05-27 2015-11-18 キヤノン株式会社 Image processing apparatus, image processing method, and program
US8675049B2 (en) 2011-06-09 2014-03-18 Microsoft Corporation Navigation model to render centered objects using images
WO2012169103A1 (en) 2011-06-10 2012-12-13 パナソニック株式会社 Stereoscopic image generating device and stereoscopic image generating method
US20120314899A1 (en) 2011-06-13 2012-12-13 Microsoft Corporation Natural user interfaces for mobile image viewing
KR101249901B1 (en) 2011-06-22 2013-04-09 엘지전자 주식회사 Mobile communication terminal and operation method thereof
JP6017854B2 (en) 2011-06-24 2016-11-02 本田技研工業株式会社 Information processing apparatus, information processing system, information processing method, and information processing program
US9600933B2 (en) 2011-07-01 2017-03-21 Intel Corporation Mobile augmented reality system
EP2547111B1 (en) 2011-07-12 2017-07-19 Samsung Electronics Co., Ltd. Method and apparatus for processing multi-view image using hole rendering
US9041734B2 (en) 2011-07-12 2015-05-26 Amazon Technologies, Inc. Simulating three-dimensional features
US9336240B2 (en) 2011-07-15 2016-05-10 Apple Inc. Geo-tagging digital images
KR20130018627A (en) 2011-08-09 2013-02-25 삼성전자주식회사 Method and apparatus for encoding and decoding multi-view video data
US20130212538A1 (en) 2011-08-19 2013-08-15 Ghislain LEMIRE Image-based 3d environment emulator
US20130050573A1 (en) 2011-08-25 2013-02-28 Comcast Cable Communications, Llc Transmission of video content
WO2013032955A1 (en) 2011-08-26 2013-03-07 Reincloud Corporation Equipment, systems and methods for navigating through multiple reality models
US8928729B2 (en) 2011-09-09 2015-01-06 Disney Enterprises, Inc. Systems and methods for converting video
US20130063487A1 (en) 2011-09-12 2013-03-14 MyChic Systems Ltd. Method and system of using augmented reality for applications
US8994736B2 (en) 2011-09-23 2015-03-31 Adobe Systems Incorporated Methods and apparatus for freeform deformation of 3-D models
EP2581268B2 (en) 2011-10-13 2019-09-11 Harman Becker Automotive Systems GmbH Method of controlling an optical output device for displaying a vehicle surround view and vehicle surround view system
WO2013058735A1 (en) 2011-10-18 2013-04-25 Hewlett-Packard Development Company, L.P. Depth mask assisted video stabilization
US9401041B2 (en) 2011-10-26 2016-07-26 The Regents Of The University Of California Multi view synthesis method and display devices with spatial and inter-view consistency
TWI436234B (en) 2011-11-07 2014-05-01 Shuttle Inc Method for unlocking a mobile device, mobile device and application program for the same
JP2013101526A (en) 2011-11-09 2013-05-23 Sony Corp Information processing apparatus, display control method, and program
US8515982B1 (en) 2011-11-11 2013-08-20 Google Inc. Annotations for three-dimensional (3D) object data models
US20130129304A1 (en) 2011-11-22 2013-05-23 Roy Feinson Variable 3-d surround video playback with virtual panning and smooth transition
US9626798B2 (en) 2011-12-05 2017-04-18 At&T Intellectual Property I, L.P. System and method to digitally replace objects in images or video
KR101846447B1 (en) 2011-12-08 2018-04-06 엘지전자 주식회사 Mobile terminal and control method for mobile terminal
US10008021B2 (en) 2011-12-14 2018-06-26 Microsoft Technology Licensing, Llc Parallax compensation
KR20130068318A (en) 2011-12-15 2013-06-26 삼성전자주식회사 Device and method for operating a function of wireless terminal
US20140340404A1 (en) 2011-12-16 2014-11-20 Thomson Licensing Method and apparatus for generating 3d free viewpoint video
KR20130073459A (en) 2011-12-23 2013-07-03 삼성전자주식회사 Method and apparatus for generating multi-view
KR20130074383A (en) 2011-12-26 2013-07-04 삼성전자주식회사 Method and apparatus for view generation using multi-layer representation
US9118905B2 (en) 2011-12-30 2015-08-25 Google Inc. Multiplane panoramas of long scenes
US9024970B2 (en) 2011-12-30 2015-05-05 Here Global B.V. Path side image on map overlay
US9883163B2 (en) 2012-01-09 2018-01-30 Disney Enterprises, Inc. Method and system for determining camera parameters from a long range gradient based on alignment differences in non-point image landmarks
US9489562B2 (en) 2012-01-11 2016-11-08 77 Elektronika Muszeripari Kft Image processing method and apparatus
US8944939B2 (en) 2012-02-07 2015-02-03 University of Pittsburgh—of the Commonwealth System of Higher Education Inertial measurement of sports motion
US11282287B2 (en) 2012-02-24 2022-03-22 Matterport, Inc. Employing three-dimensional (3D) data predicted from two-dimensional (2D) images using neural networks for 3D modeling applications and other applications
US20120162253A1 (en) 2012-03-05 2012-06-28 David Collins Systems and methods of integrating virtual flyovers and virtual tours
EP2639660A1 (en) 2012-03-16 2013-09-18 Siemens Aktiengesellschaft Method and system for controller transition
US9367951B1 (en) 2012-03-21 2016-06-14 Amazon Technologies, Inc. Creating realistic three-dimensional effects
KR101818778B1 (en) 2012-03-23 2018-01-16 한국전자통신연구원 Apparatus and method of generating and consuming 3d data format for generation of realized panorama image
US8504842B1 (en) 2012-03-23 2013-08-06 Google Inc. Alternative unlocking patterns
US9503702B2 (en) 2012-04-13 2016-11-22 Qualcomm Incorporated View synthesis mode for three-dimensional video coding
US20140009462A1 (en) 2012-04-17 2014-01-09 3Dmedia Corporation Systems and methods for improving overall quality of three-dimensional content by altering parallax budget or compensating for moving objects
JP6016061B2 (en) 2012-04-20 2016-10-26 Nltテクノロジー株式会社 Image generation apparatus, image display apparatus, image generation method, and image generation program
US9129179B1 (en) 2012-05-10 2015-09-08 Amazon Technologies, Inc. Image-based object location
US9153073B2 (en) 2012-05-23 2015-10-06 Qualcomm Incorporated Spatially registered augmented video
US8819525B1 (en) 2012-06-14 2014-08-26 Google Inc. Error concealment guided robustness
US10127722B2 (en) 2015-06-30 2018-11-13 Matterport, Inc. Mobile capture visualization incorporating three-dimensional and two-dimensional imagery
US10139985B2 (en) 2012-06-22 2018-11-27 Matterport, Inc. Defining, displaying and interacting with tags in a three-dimensional model
US9256983B2 (en) 2012-06-28 2016-02-09 Here Global B.V. On demand image overlay
US10068547B2 (en) 2012-06-29 2018-09-04 Disney Enterprises, Inc. Augmented reality surface painting
US9069932B2 (en) 2012-07-06 2015-06-30 Blackberry Limited User-rotatable three-dimensionally rendered object for unlocking a computing device
US9118886B2 (en) 2012-07-18 2015-08-25 Hulu, LLC Annotating general objects in video
US8966356B1 (en) 2012-07-19 2015-02-24 Google Inc. Providing views of three-dimensional (3D) object data models
US9898742B2 (en) 2012-08-03 2018-02-20 Ebay Inc. Virtual dressing room
KR101899819B1 (en) 2012-08-03 2018-09-20 엘지전자 주식회사 Mobile terminal and method for controlling thereof
US8873812B2 (en) 2012-08-06 2014-10-28 Xerox Corporation Image segmentation using hierarchical unsupervised segmentation and hierarchical classifiers
JP5787843B2 (en) 2012-08-10 2015-09-30 株式会社東芝 Handwriting drawing apparatus, method and program
TWI476626B (en) 2012-08-24 2015-03-11 Ind Tech Res Inst Authentication method and code setting method and authentication system for electronic apparatus
KR20140030735A (en) 2012-09-03 2014-03-12 삼성전자주식회사 Apparatus and method for display
KR101429349B1 (en) 2012-09-18 2014-08-12 연세대학교 산학협력단 Apparatus and method for reconstructing intermediate view, recording medium thereof
WO2014047465A2 (en) 2012-09-21 2014-03-27 The Schepens Eye Research Institute, Inc. Collision prediction
US9094670B1 (en) 2012-09-25 2015-07-28 Amazon Technologies, Inc. Model generation and database
JP6216169B2 (en) 2012-09-26 2017-10-18 キヤノン株式会社 Information processing apparatus and information processing method
US20140087877A1 (en) 2012-09-27 2014-03-27 Sony Computer Entertainment Inc. Compositing interactive video game graphics with pre-recorded background video content
US9250653B2 (en) 2012-09-28 2016-02-02 City University Of Hong Kong Capturing, processing, and reconstructing audio and video content of mobile devices
US20140100995A1 (en) 2012-10-05 2014-04-10 Sanu Koshy Collection and Use of Consumer Data Associated with Augmented-Reality Window Shopping
US9270885B2 (en) 2012-10-26 2016-02-23 Google Inc. Method, system, and computer program product for gamifying the process of obtaining panoramic images
US8773502B2 (en) 2012-10-29 2014-07-08 Google Inc. Smart targets facilitating the capture of contiguous images
US9098911B2 (en) 2012-11-01 2015-08-04 Google Inc. Depth map generation from a monoscopic image based on combined depth cues
US9459820B2 (en) 2012-11-08 2016-10-04 Ricoh Company, Ltd. Display processing apparatus, display processing method, and computer program product
US9189884B2 (en) 2012-11-13 2015-11-17 Google Inc. Using video to encode assets for swivel/360-degree spinners
US20140153832A1 (en) 2012-12-04 2014-06-05 Vivek Kwatra Facial expression editing in images based on collections of images
DE102012112104A1 (en) 2012-12-11 2014-06-12 Conti Temic Microelectronic Gmbh Method and device for drivability analysis
US9171373B2 (en) 2012-12-26 2015-10-27 Ncku Research And Development Foundation System of image stereo matching
US8908041B2 (en) 2013-01-15 2014-12-09 Mobileye Vision Technologies Ltd. Stereo assist with rolling shutters
US20140199050A1 (en) 2013-01-17 2014-07-17 Spherical, Inc. Systems and methods for compiling and storing video with static panoramic background
US9922356B1 (en) 2013-01-22 2018-03-20 Carvana, LLC Methods and systems for online transactions
US8929602B2 (en) 2013-01-31 2015-01-06 Seiko Epson Corporation Component based correspondence matching for reconstructing cables
US9916815B2 (en) 2013-02-11 2018-03-13 Disney Enterprises, Inc. Enhanced system and method for presenting a view of a virtual space to a user based on a position of a display
US20140375684A1 (en) 2013-02-17 2014-12-25 Cherif Atia Algreatly Augmented Reality Technology
US20140232634A1 (en) 2013-02-19 2014-08-21 Apple Inc. Touch-based gestures modified by gyroscope and accelerometer
US9504850B2 (en) 2013-03-14 2016-11-29 Xcision Medical Systems Llc Methods and system for breathing-synchronized, target-tracking radiation therapy
US10600089B2 (en) 2013-03-14 2020-03-24 Oracle America, Inc. System and method to measure effectiveness and consumption of editorial content
US10115248B2 (en) 2013-03-14 2018-10-30 Ebay Inc. Systems and methods to fit an image of an inventory part
US9426451B2 (en) 2013-03-15 2016-08-23 Digimarc Corporation Cooperative photography
NL2010463C2 (en) 2013-03-15 2014-09-16 Cyclomedia Technology B V Method for generating a panorama image
US20140267616A1 (en) 2013-03-15 2014-09-18 Scott A. Krig Variable resolution depth representation
US20160055330A1 (en) 2013-03-19 2016-02-25 Nec Solution Innovators, Ltd. Three-dimensional unlocking device, three-dimensional unlocking method, and program
JP5997645B2 (en) 2013-03-26 2016-09-28 キヤノン株式会社 Image processing apparatus and method, and imaging apparatus
WO2014155670A1 (en) 2013-03-29 2014-10-02 株式会社 東芝 Stereoscopic video processing device, stereoscopic video processing method, and stereoscopic video processing program
US9398215B2 (en) 2013-04-16 2016-07-19 Eth Zurich Stereoscopic panoramas
US20160104316A1 (en) 2013-04-28 2016-04-14 Geosim Systems Ltd. Use of borderlines in urban 3d-modeling
US9922447B2 (en) 2013-04-30 2018-03-20 Mantis Vision Ltd. 3D registration of a plurality of 3D models
US20150318020A1 (en) 2014-05-02 2015-11-05 FreshTake Media, Inc. Interactive real-time video editor and recorder
CN103279300B (en) 2013-05-24 2016-12-28 京东方科技集团股份有限公司 Touch screen terminal unlocking method and device, and touch screen terminal
US20140365888A1 (en) 2013-06-05 2014-12-11 Narrable, Llc User-controlled disassociation and reassociation of audio and visual content in a multimedia presentation
US10387729B2 (en) 2013-07-09 2019-08-20 Outward, Inc. Tagging virtualized content
US20150022677A1 (en) 2013-07-16 2015-01-22 Qualcomm Incorporated System and method for efficient post-processing video stabilization with camera path linearization
JP2015022458A (en) 2013-07-18 2015-02-02 株式会社Jvcケンウッド Image processing device, image processing method, and image processing program
US20150046875A1 (en) 2013-08-07 2015-02-12 Ut-Battelle, Llc High-efficacy capturing and modeling of human perceptual similarity opinions
EP3031206B1 (en) 2013-08-09 2020-01-22 ICN Acquisition, LLC System, method and apparatus for remote monitoring
US9742974B2 (en) 2013-08-10 2017-08-22 Hai Yu Local positioning and motion estimation based camera viewing system and methods
CN104375811B (en) 2013-08-13 2019-04-26 腾讯科技(深圳)有限公司 Sound effect processing method and device
EP3061241A4 (en) 2013-08-20 2017-04-05 Smarter TV Ltd. System and method for real-time processing of ultra-high resolution digital video
GB2518603B (en) 2013-09-18 2015-08-19 Imagination Tech Ltd Generating an output frame for inclusion in a video sequence
EP2860699A1 (en) 2013-10-11 2015-04-15 Telefonaktiebolaget L M Ericsson (Publ) Technique for view synthesis
US10319035B2 (en) 2013-10-11 2019-06-11 Ccc Information Services Image capturing and automatic labeling system
US20150130799A1 (en) 2013-11-12 2015-05-14 Fyusion, Inc. Analysis and manipulation of images and video for generation of surround views
US10754511B2 (en) 2013-11-20 2020-08-25 Google Llc Multi-view audio and video interactive playback
US20150188967A1 (en) 2013-12-30 2015-07-02 HearHere Radio, Inc. Seamless integration of audio content into a customized media stream
US9449227B2 (en) 2014-01-08 2016-09-20 Here Global B.V. Systems and methods for creating an aerial image
CN104778170A (en) 2014-01-09 2015-07-15 阿里巴巴集团控股有限公司 Method and device for searching and displaying product images
US20150198443A1 (en) 2014-01-10 2015-07-16 Alcatel-Lucent Usa Inc. Localization activity classification systems and methods
US10244223B2 (en) 2014-01-10 2019-03-26 Ostendo Technologies, Inc. Methods for full parallax compressed light field 3D imaging systems
US9865033B1 (en) 2014-01-17 2018-01-09 Amazon Technologies, Inc. Motion-based image views
US9710964B2 (en) 2014-01-23 2017-07-18 Max-Planck-Gesellschaft Zur Foerderung Der Wissenschaften E.V. Method for providing a three dimensional body model
US20150213784A1 (en) 2014-01-24 2015-07-30 Amazon Technologies, Inc. Motion-based lenticular image display
KR102265326B1 (en) 2014-02-03 2021-06-15 삼성전자 주식회사 Apparatus and method for shooting an image in an electronic device having a camera
CN104834933B (en) 2014-02-10 2019-02-12 华为技术有限公司 Method and device for detecting image saliency area
US10303324B2 (en) 2014-02-10 2019-05-28 Samsung Electronics Co., Ltd. Electronic device configured to display three dimensional (3D) virtual space and method of controlling the electronic device
US20150235408A1 (en) 2014-02-14 2015-08-20 Apple Inc. Parallax Depth Rendering
US9865058B2 (en) 2014-02-19 2018-01-09 Daqri, Llc Three-dimensional mapping system
US9633050B2 (en) 2014-02-21 2017-04-25 Wipro Limited Methods for assessing image change and devices thereof
KR102188149B1 (en) 2014-03-05 2020-12-07 삼성메디슨 주식회사 Method for Displaying 3-Dimensional Image and Display Apparatus Thereof
US10203762B2 (en) 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9720934B1 (en) 2014-03-13 2017-08-01 A9.Com, Inc. Object recognition of feature-sparse or texture-limited subject matter
US20170124770A1 (en) 2014-03-15 2017-05-04 Nitin Vats Self-demonstrating object features and/or operations in interactive 3d-model of real object for understanding object's functionality
US9911243B2 (en) 2014-03-15 2018-03-06 Nitin Vats Real-time customization of a 3D model representing a real product
US9807411B2 (en) 2014-03-18 2017-10-31 Panasonic Intellectual Property Management Co., Ltd. Image coding apparatus, image decoding apparatus, image processing system, image coding method, and image decoding method
KR102223064B1 (en) 2014-03-18 2021-03-04 삼성전자주식회사 Image processing apparatus and method
US10373260B1 (en) 2014-03-18 2019-08-06 Ccc Information Services Inc. Imaging processing system for identifying parts for repairing a vehicle
EP3123385B1 (en) 2014-03-25 2020-03-11 Sony Corporation 3d graphical authentication by revelation of hidden objects
US10321117B2 (en) 2014-04-11 2019-06-11 Lucasfilm Entertainment Company Ltd. Motion-controlled body capture and reconstruction
CA2889778A1 (en) 2014-04-28 2015-10-28 Modest Tree Media Inc. Virtual interactive learning environment
WO2015167739A1 (en) 2014-04-30 2015-11-05 Replay Technologies Inc. System for and method of generating user-selectable novel views on a viewing device
CN109945844B (en) 2014-05-05 2021-03-12 赫克斯冈技术中心 Measurement subsystem and measurement system
DE102014208663A1 (en) 2014-05-08 2015-11-12 Conti Temic Microelectronic Gmbh Device and method for providing information data on a vehicle environment object contained in a video image stream
US20150325044A1 (en) 2014-05-09 2015-11-12 Adornably, Inc. Systems and methods for three-dimensional model texturing
CA2955969A1 (en) 2014-05-16 2015-11-19 Divergent Technologies, Inc. Modular formed nodes for vehicle chassis and their methods of use
US10852838B2 (en) 2014-06-14 2020-12-01 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9910505B2 (en) 2014-06-17 2018-03-06 Amazon Technologies, Inc. Motion control for managing content
US20150371440A1 (en) 2014-06-19 2015-12-24 Qualcomm Incorporated Zero-baseline 3d map initialization
US10574974B2 (en) 2014-06-27 2020-02-25 A9.Com, Inc. 3-D model generation using multiple cameras
US10242493B2 (en) 2014-06-30 2019-03-26 Intel Corporation Method and apparatus for filtered coarse pixel shading
KR101590256B1 (en) 2014-06-30 2016-02-04 조정현 3D image creating method using video photographed with a smart device
WO2016004330A1 (en) 2014-07-03 2016-01-07 Oim Squared Inc. Interactive content generation
US9225897B1 (en) 2014-07-07 2015-12-29 Snapchat, Inc. Apparatus and method for supplying content aware photo filters
US20160012646A1 (en) 2014-07-10 2016-01-14 Perfetch, Llc Systems and methods for constructing a three dimensional (3d) color representation of an object
US10089785B2 (en) 2014-07-25 2018-10-02 mindHIVE Inc. Real-time immersive mediated reality experiences
JP6715441B2 (en) 2014-07-28 2020-07-01 パナソニックIpマネジメント株式会社 Augmented reality display system, terminal device and augmented reality display method
US9836464B2 (en) 2014-07-31 2017-12-05 Microsoft Technology Licensing, Llc Curating media from social connections
KR101946019B1 (en) 2014-08-18 2019-04-22 삼성전자주식회사 Video processing apparatus for generating panoramic video and method thereof
DE102014012285A1 (en) 2014-08-22 2016-02-25 Jenoptik Robot Gmbh Method and axle counting device for non-contact axle counting of a vehicle and axle counting system for road traffic
SG10201405182WA (en) 2014-08-25 2016-03-30 Univ Singapore Technology & Design Method and system
US20160061581A1 (en) 2014-08-26 2016-03-03 Lusee, Llc Scale estimating method using smart device
US9697624B2 (en) 2014-08-28 2017-07-04 Shimadzu Corporation Image processing apparatus, radiation tomography apparatus, and method of performing image processing
US20160078287A1 (en) 2014-08-29 2016-03-17 Konica Minolta Laboratory U.S.A., Inc. Method and system of temporal segmentation for gesture analysis
CA2998482A1 (en) 2014-09-12 2016-03-17 Kiswe Mobile Inc. Methods and apparatus for content interaction
US20160077422A1 (en) 2014-09-12 2016-03-17 Adobe Systems Incorporated Collaborative synchronized multi-device photography
US9693009B2 (en) 2014-09-12 2017-06-27 International Business Machines Corporation Sound source selection for aural interest
US10313656B2 (en) 2014-09-22 2019-06-04 Samsung Electronics Company Ltd. Image stitching for three-dimensional video
KR102276847B1 (en) 2014-09-23 2021-07-14 삼성전자주식회사 Method for providing a virtual object and electronic device thereof
JP6372696B2 (en) 2014-10-14 2018-08-15 ソニー株式会社 Information processing apparatus, information processing method, and program
US10008027B1 (en) 2014-10-20 2018-06-26 Henry Harlyn Baker Techniques for determining a three-dimensional representation of a surface of an object from a set of images
US10650574B2 (en) 2014-10-31 2020-05-12 Fyusion, Inc. Generating stereoscopic pairs of images from a single lens camera
US10726560B2 (en) 2014-10-31 2020-07-28 Fyusion, Inc. Real-time mobile device capture and generation of art-styled AR/VR content
US9940541B2 (en) 2015-07-15 2018-04-10 Fyusion, Inc. Artificially rendering images using interpolation of tracked control points
US10726593B2 (en) 2015-09-22 2020-07-28 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10262426B2 (en) 2014-10-31 2019-04-16 Fyusion, Inc. System and method for infinite smoothing of image sequences
US10275935B2 (en) 2014-10-31 2019-04-30 Fyusion, Inc. System and method for infinite synthetic image generation from multi-directional structured image array
US10176592B2 (en) 2014-10-31 2019-01-08 Fyusion, Inc. Multi-directional structured image array capture on a 2D graph
US10719939B2 (en) 2014-10-31 2020-07-21 Fyusion, Inc. Real-time mobile device capture and generation of AR/VR content
US10372319B2 (en) 2014-11-13 2019-08-06 Here Global B.V. Method, apparatus and computer program product for enabling scrubbing of a media file
TWI554103B (en) 2014-11-13 2016-10-11 聚晶半導體股份有限公司 Image capturing device and digital zoom method thereof
US11507624B2 (en) 2014-11-18 2022-11-22 Yahoo Assets Llc Method and system for providing query suggestions based on user feedback
KR102350235B1 (en) 2014-11-25 2022-01-13 삼성전자주식회사 Image processing method and apparatus thereof
US9865069B1 (en) 2014-11-25 2018-01-09 Augmented Reality Concepts, Inc. Method and system for generating a 360-degree presentation of an object
CN104462365A (en) 2014-12-08 2015-03-25 天津大学 Multi-view target searching method based on probability model
US10360601B1 (en) 2014-12-11 2019-07-23 Alexander Omeed Adegan Method for generating a repair estimate through predictive analytics
WO2016106383A2 (en) 2014-12-22 2016-06-30 Robert Bosch Gmbh First-person camera based visual context aware system
US9811911B2 (en) 2014-12-29 2017-11-07 Nbcuniversal Media, Llc Apparatus and method for generating virtual reality content based on non-virtual reality content
KR101804364B1 (en) 2014-12-30 2017-12-04 한국전자통신연구원 Super Multi-View image system and Driving Method Thereof
US9998663B1 (en) 2015-01-07 2018-06-12 Car360 Inc. Surround image capture and processing
US10284794B1 (en) 2015-01-07 2019-05-07 Car360 Inc. Three-dimensional stabilized 360-degree composite image capture
US9754355B2 (en) 2015-01-09 2017-09-05 Snap Inc. Object recognition based photo filters
GB2534847A (en) 2015-01-28 2016-08-10 Sony Computer Entertainment Europe Ltd Display
KR101803474B1 (en) 2015-03-02 2017-11-30 한국전자통신연구원 Device for generating multi-view immersive contents and method thereof
JP6345617B2 (en) 2015-03-05 2018-06-20 株式会社神戸製鋼所 Residual stress estimation method and residual stress estimation apparatus
US9928544B1 (en) 2015-03-10 2018-03-27 Amazon Technologies, Inc. Vehicle component installation preview image generation
US20160275723A1 (en) 2015-03-20 2016-09-22 Deepkaran Singh System and method for generating three dimensional representation using contextual information
TW201637432A (en) 2015-04-02 2016-10-16 Ultracker Technology Co Ltd Real-time image stitching device and real-time image stitching method
CA2981134A1 (en) 2015-04-06 2016-10-13 Ricoh Company, Ltd. Information processing apparatus, information processing method, and information processing program
JP6587421B2 (en) 2015-05-25 2019-10-09 キヤノン株式会社 Information processing apparatus, information processing method, and program
US9813621B2 (en) 2015-05-26 2017-11-07 Google Llc Omnistereo capture for mobile devices
US10038887B2 (en) 2015-05-27 2018-07-31 Google Llc Capture and render of panoramic virtual reality content
US10019657B2 (en) 2015-05-28 2018-07-10 Adobe Systems Incorporated Joint depth estimation and semantic segmentation from a single image
WO2016197303A1 (en) 2015-06-08 2016-12-15 Microsoft Technology Licensing, Llc. Image semantic segmentation
US9704298B2 (en) 2015-06-23 2017-07-11 Paofit Holdings Pte Ltd. Systems and methods for generating 360 degree mixed reality environments
US10750161B2 (en) 2015-07-15 2020-08-18 Fyusion, Inc. Multi-view interactive digital media representation lock screen
US10852902B2 (en) 2015-07-15 2020-12-01 Fyusion, Inc. Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity
US10698558B2 (en) 2015-07-15 2020-06-30 Fyusion, Inc. Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity
US10147211B2 (en) 2015-07-15 2018-12-04 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10242474B2 (en) 2015-07-15 2019-03-26 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US11095869B2 (en) 2015-09-22 2021-08-17 Fyusion, Inc. System and method for generating combined embedded multi-view interactive digital media representations
US10222932B2 (en) 2015-07-15 2019-03-05 Fyusion, Inc. Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations
US11006095B2 (en) 2015-07-15 2021-05-11 Fyusion, Inc. Drone based capture of a multi-view interactive digital media
KR20170011190A (en) 2015-07-21 2017-02-02 엘지전자 주식회사 Mobile terminal and control method thereof
CN107924579A (en) 2015-08-14 2018-04-17 麦特尔有限公司 Method for generating personalized 3D head models or 3D body models
US9989965B2 (en) 2015-08-20 2018-06-05 Motionloft, Inc. Object detection and analysis via unmanned aerial vehicle
US10157333B1 (en) 2015-09-15 2018-12-18 Snap Inc. Systems and methods for content tagging
US9317881B1 (en) 2015-09-15 2016-04-19 Adorno Publishing Group, Inc. Systems and methods for generating interactive content for in-page purchasing
CN107208836B (en) * 2015-09-16 2019-09-06 深圳市大疆灵眸科技有限公司 Systems and methods for supporting photography with different effects
US11783864B2 (en) 2015-09-22 2023-10-10 Fyusion, Inc. Integration of audio into a multi-view interactive digital media representation
US9878209B2 (en) 2015-09-24 2018-01-30 Intel Corporation Facilitating dynamic monitoring of body dimensions over periods of time based on three-dimensional depth and disparity
US9773302B2 (en) 2015-10-08 2017-09-26 Hewlett-Packard Development Company, L.P. Three-dimensional object model tagging
US10152825B2 (en) 2015-10-16 2018-12-11 Fyusion, Inc. Augmenting multi-view image data with synthetic objects using IMU and image data
US10192129B2 (en) 2015-11-18 2019-01-29 Adobe Systems Incorporated Utilizing interactive deep learning to select objects in digital visual media
WO2017095948A1 (en) 2015-11-30 2017-06-08 Pilot Ai Labs, Inc. Improved general object detection using neural networks
US10176636B1 (en) 2015-12-11 2019-01-08 A9.Com, Inc. Augmented reality fashion
WO2017123387A1 (en) 2016-01-13 2017-07-20 Jingyi Yu Three-dimensional acquisition and rendering
US10217207B2 (en) 2016-01-20 2019-02-26 Ez3D, Llc System and method for structural inspection and construction estimation using an unmanned aerial vehicle
US9858675B2 (en) 2016-02-11 2018-01-02 Adobe Systems Incorporated Object segmentation, including sky segmentation
US20170249339A1 (en) 2016-02-25 2017-08-31 Shutterstock, Inc. Selected image subset based search
US9928875B2 (en) 2016-03-22 2018-03-27 Nec Corporation Efficient video annotation with optical flow based estimation and suggestion
US9704257B1 (en) 2016-03-25 2017-07-11 Mitsubishi Electric Research Laboratories, Inc. System and method for semantic segmentation using Gaussian random field network
US9972092B2 (en) 2016-03-31 2018-05-15 Adobe Systems Incorporated Utilizing deep learning for boundary-aware image segmentation
US10692050B2 (en) 2016-04-06 2020-06-23 American International Group, Inc. Automatic assessment of damage and repair costs in vehicles
EP3475785A4 (en) 2016-04-22 2020-05-13 SZ DJI Technology Co., Ltd. Systems and methods for processing image data based on a user's interest
US10210613B2 (en) 2016-05-12 2019-02-19 Siemens Healthcare Gmbh Multiple landmark detection in medical images based on hierarchical feature learning and end-to-end training
US9886771B1 (en) 2016-05-20 2018-02-06 Ccc Information Services Inc. Heat map of vehicle damage
US10657647B1 (en) 2016-05-20 2020-05-19 Ccc Information Services Image processing system to detect changes to target objects using base object models
US10580140B2 (en) 2016-05-23 2020-03-03 Intel Corporation Method and system of real-time image segmentation for image processing
US10032067B2 (en) 2016-05-28 2018-07-24 Samsung Electronics Co., Ltd. System and method for a unified architecture multi-task deep learning machine for object recognition
US20170357910A1 (en) 2016-06-10 2017-12-14 Apple Inc. System for iteratively training an artificial intelligence using cloud-based metrics
US10306203B1 (en) 2016-06-23 2019-05-28 Amazon Technologies, Inc. Adaptive depth sensing of scenes by targeted light projections
WO2018022181A1 (en) 2016-07-28 2018-02-01 Lumileds Llc Dimming LED circuit augmenting DC/DC controller integrated circuit
KR102542515B1 (en) 2016-07-28 2023-06-12 삼성전자주식회사 Image display apparatus and method for displaying image
US10030979B2 (en) 2016-07-29 2018-07-24 Matterport, Inc. Determining and/or generating a navigation path through a captured three-dimensional model rendered on a device
EP3497550B1 (en) 2016-08-12 2023-03-15 Packsize, LLC Systems and methods for automatically generating metadata for media documents
US10055882B2 (en) 2016-08-15 2018-08-21 Aquifi, Inc. System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
US10261762B2 (en) 2016-08-16 2019-04-16 Sap Se User interface template generation using dynamic in-memory database techniques
WO2018052665A1 (en) 2016-08-19 2018-03-22 Fyusion, Inc. Virtual reality environment based manipulation of multi-layered multi-view interactive digital media representations
DE112017004150T5 (en) 2016-08-19 2019-06-13 Fyusion, Inc. Automatic tagging of dynamic objects in a multi-view digital representation
US10659759B2 (en) 2016-08-29 2020-05-19 Stratus Systems, Inc. Selective culling of multi-dimensional data sets
US10272282B2 (en) 2016-09-20 2019-04-30 Corecentric LLC Systems and methods for providing ergonomic chairs
US10147459B2 (en) 2016-09-22 2018-12-04 Apple Inc. Artistic style transfer for videos
US10152059B2 (en) * 2016-10-10 2018-12-11 Qualcomm Incorporated Systems and methods for landing a drone on a moving base
US10204448B2 (en) 2016-11-04 2019-02-12 Aquifi, Inc. System and method for portable active 3D scanning
US11295458B2 (en) 2016-12-01 2022-04-05 Skydio, Inc. Object tracking by an unmanned aerial vehicle using visual sensors
US10472091B2 (en) 2016-12-02 2019-11-12 Adesa, Inc. Method and apparatus using a drone to input vehicle data
KR20180067908A (en) 2016-12-13 2018-06-21 한국전자통신연구원 Apparatus for restoring 3d-model and method for using the same
US10038894B1 (en) 2017-01-17 2018-07-31 Facebook, Inc. Three-dimensional scene reconstruction from set of two dimensional images for consumption in virtual reality
US10437879B2 (en) 2017-01-18 2019-10-08 Fyusion, Inc. Visual search using multi-view interactive digital media representations
US20180211373A1 (en) 2017-01-20 2018-07-26 Aquifi, Inc. Systems and methods for defect detection
US20180211404A1 (en) 2017-01-23 2018-07-26 Hong Kong Applied Science And Technology Research Institute Co., Ltd. 3d marker model construction and real-time tracking using monocular camera
US10592199B2 (en) 2017-01-24 2020-03-17 International Business Machines Corporation Perspective-based dynamic audio volume adjustment
JP7159057B2 (en) 2017-02-10 2022-10-24 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Free-viewpoint video generation method and free-viewpoint video generation system
US10165259B2 (en) 2017-02-15 2018-12-25 Adobe Systems Incorporated Generating novel views of a three-dimensional object based on a single two-dimensional image
US10467760B2 (en) 2017-02-23 2019-11-05 Adobe Inc. Segmenting three-dimensional shapes into labeled component shapes
GB201703129D0 (en) 2017-02-27 2017-04-12 Metail Ltd Quibbler
US11379688B2 (en) 2017-03-16 2022-07-05 Packsize Llc Systems and methods for keypoint detection with convolutional neural networks
JP6929953B2 (en) 2017-03-17 2021-09-01 Magic Leap, Inc. Room layout estimation method and technique
US11037300B2 (en) 2017-04-28 2021-06-15 Cherry Labs, Inc. Monitoring system
US10699481B2 (en) 2017-05-17 2020-06-30 DotProduct LLC Augmentation of captured 3D scenes with contextual information
US10777018B2 (en) 2017-05-17 2020-09-15 Bespoke, Inc. Systems and methods for determining the scale of human anatomy from images
US10313651B2 (en) 2017-05-22 2019-06-04 Fyusion, Inc. Snapshots at predefined intervals or angles
US10200677B2 (en) 2017-05-22 2019-02-05 Fyusion, Inc. Inertial measurement unit progress estimation
US11138432B2 (en) 2017-05-25 2021-10-05 Fyusion, Inc. Visual feature tagging in multi-view interactive digital media representations
US20180286098A1 (en) 2017-06-09 2018-10-04 Structionsite Inc. Annotation Transfer for Panoramic Image
JP6939111B2 (en) 2017-06-13 2021-09-22 コニカミノルタ株式会社 Image recognition device and image recognition method
US11069147B2 (en) 2017-06-26 2021-07-20 Fyusion, Inc. Modification of multi-view interactive digital media representation
US9968257B1 (en) 2017-07-06 2018-05-15 Halsa Labs, LLC Volumetric quantification of cardiovascular structures from medical imaging
US10560628B2 (en) 2017-10-30 2020-02-11 Visual Supply Company Elimination of distortion in 360-degree video playback
US10769411B2 (en) 2017-11-15 2020-09-08 Qualcomm Technologies, Inc. Pose estimation and model retrieval for objects in images
US10628667B2 (en) 2018-01-11 2020-04-21 Futurewei Technologies, Inc. Activity recognition method using videotubes
CN108875529A (en) 2018-01-11 2018-11-23 北京旷视科技有限公司 Face spatial positioning method, device, system and computer storage medium
US11019283B2 (en) 2018-01-18 2021-05-25 GumGum, Inc. Augmenting detected regions in image or video data
US11567627B2 (en) 2018-01-30 2023-01-31 Magic Leap, Inc. Eclipse cursor for virtual content in mixed reality displays
US10692183B2 (en) 2018-03-29 2020-06-23 Adobe Inc. Customizable image cropping using body key points
US10592747B2 (en) 2018-04-26 2020-03-17 Fyusion, Inc. Method and apparatus for 3-D auto tagging
US10382739B1 (en) 2018-04-26 2019-08-13 Fyusion, Inc. Visual annotation using tagging sessions
EP3837137A4 (en) 2018-06-26 2022-07-13 Itay Katz Contextual driver monitoring system
US10834163B2 (en) 2018-10-18 2020-11-10 At&T Intellectual Property I, L.P. Methods, devices, and systems for encoding portions of video content according to priority content within live video content
US10922573B2 (en) 2018-10-22 2021-02-16 Future Health Works Ltd. Computer based object detection within a video or image
US20200137380A1 (en) 2018-10-31 2020-04-30 Intel Corporation Multi-plane display image synthesis mechanism
WO2020092177A2 (en) 2018-11-02 2020-05-07 Fyusion, Inc. Method and apparatus for 3-d auto tagging
US10949978B2 (en) 2019-01-22 2021-03-16 Fyusion, Inc. Automatic background replacement for single-image and multi-view captures
US20200234397A1 (en) 2019-01-22 2020-07-23 Fyusion, Inc. Automatic view mapping for single-image and multi-view captures

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6185343B1 (en) * 1997-01-17 2001-02-06 Matsushita Electric Works, Ltd. Position detection system and method
US20050018045A1 (en) * 2003-03-14 2005-01-27 Thomas Graham Alexander Video processing
US9043222B1 (en) * 2006-11-30 2015-05-26 NexRf Corporation User interface for geofence associated content
US8589069B1 (en) * 2009-11-12 2013-11-19 Google Inc. Enhanced identification of interesting points-of-interest
US20130120581A1 (en) * 2011-11-11 2013-05-16 Sony Europe Limited Apparatus, method and system
US20130147905A1 (en) * 2011-12-13 2013-06-13 Google Inc. Processing media streams during a multi-user video conference
US20140152834A1 (en) * 2012-12-05 2014-06-05 At&T Mobility Ii, Llc System and Method for Processing Streaming Media
US20160001137A1 (en) * 2014-07-07 2016-01-07 Bradley Gene Phillips Illumination system for a sports ball
US20160171330A1 (en) * 2014-12-15 2016-06-16 Reflex Robotics, Inc. Vision based real-time object tracking system for robotic gimbal control
US20160267676A1 (en) * 2015-03-11 2016-09-15 Panasonic Intellectual Property Management Co., Ltd. Motion detection system
US20170024094A1 (en) * 2015-07-22 2017-01-26 Enthrall Sports LLC Interactive audience communication for events
US20170213385A1 (en) * 2016-01-26 2017-07-27 Electronics And Telecommunications Research Institute Apparatus and method for generating 3d face model using mobile device
US20170256066A1 (en) * 2016-03-05 2017-09-07 SmartPitch LLC Highly accurate baseball pitch speed detector using widely available smartphones
US20190025544A1 (en) * 2016-03-30 2019-01-24 Fujifilm Corporation Imaging apparatus and focus control method

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11711614B2 (en) 2015-04-23 2023-07-25 Apple Inc. Digital viewfinder user interface for multiple cameras
US11490017B2 (en) 2015-04-23 2022-11-01 Apple Inc. Digital viewfinder user interface for multiple cameras
US12149831B2 (en) 2015-04-23 2024-11-19 Apple Inc. Digital viewfinder user interface for multiple cameras
US11102414B2 (en) 2015-04-23 2021-08-24 Apple Inc. Digital viewfinder user interface for multiple cameras
US11245837B2 (en) 2016-06-12 2022-02-08 Apple Inc. User interface for camera effects
US11641517B2 (en) 2016-06-12 2023-05-02 Apple Inc. User interface for camera effects
US11165949B2 (en) 2016-06-12 2021-11-02 Apple Inc. User interface for capturing photos with different camera magnifications
US12132981B2 (en) 2016-06-12 2024-10-29 Apple Inc. User interface for camera effects
US11962889B2 (en) 2016-06-12 2024-04-16 Apple Inc. User interface for camera effects
US12314553B2 (en) 2017-06-04 2025-05-27 Apple Inc. User interface camera effects
US11204692B2 (en) 2017-06-04 2021-12-21 Apple Inc. User interface camera effects
US11687224B2 (en) 2017-06-04 2023-06-27 Apple Inc. User interface camera effects
US10855936B2 (en) 2017-10-13 2020-12-01 Fyusion, Inc. Skeleton-based effects and background replacement
US20190116322A1 (en) * 2017-10-13 2019-04-18 Fyusion, Inc. Skeleton-based effects and background replacement
US10469768B2 (en) 2017-10-13 2019-11-05 Fyusion, Inc. Skeleton-based effects and background replacement
US10356341B2 (en) * 2017-10-13 2019-07-16 Fyusion, Inc. Skeleton-based effects and background replacement
US11112964B2 (en) 2018-02-09 2021-09-07 Apple Inc. Media capture lock affordance for graphical user interface
US11977731B2 (en) 2018-02-09 2024-05-07 Apple Inc. Media capture lock affordance for graphical user interface
US12530116B2 (en) 2018-02-09 2026-01-20 Apple Inc. Media capture lock affordance for graphical user interface
US20210102820A1 (en) * 2018-02-23 2021-04-08 Google Llc Transitioning between map view and augmented reality view
US11722764B2 (en) 2018-05-07 2023-08-08 Apple Inc. Creative camera
US11178335B2 (en) 2018-05-07 2021-11-16 Apple Inc. Creative camera
US12170834B2 (en) 2018-05-07 2024-12-17 Apple Inc. Creative camera
US11468625B2 (en) 2018-09-11 2022-10-11 Apple Inc. User interfaces for simulated depth effects
US12154218B2 (en) 2018-09-11 2024-11-26 Apple Inc. User interfaces for simulated depth effects
US11669985B2 (en) 2018-09-28 2023-06-06 Apple Inc. Displaying and editing images with depth information
US12394077B2 (en) 2018-09-28 2025-08-19 Apple Inc. Displaying and editing images with depth information
US11895391B2 (en) 2018-09-28 2024-02-06 Apple Inc. Capturing and displaying images with multiple focal planes
US11321857B2 (en) 2018-09-28 2022-05-03 Apple Inc. Displaying and editing images with depth information
US11128792B2 (en) 2018-09-28 2021-09-21 Apple Inc. Capturing and displaying images with multiple focal planes
US11189067B2 (en) 2019-02-28 2021-11-30 Samsung Electronics Co., Ltd. Electronic device and content generation method
EP3703358A1 (en) * 2019-02-28 2020-09-02 Samsung Electronics Co., Ltd. Electronic device and content generation method
KR20200105234A (en) * 2019-02-28 2020-09-07 삼성전자주식회사 Electronic device and method for generating contents
KR102646684B1 (en) * 2019-02-28 2024-03-13 삼성전자 주식회사 Electronic device and method for generating contents
US10735642B1 (en) 2019-05-06 2020-08-04 Apple Inc. User interfaces for capturing and managing visual media
US11770601B2 (en) 2019-05-06 2023-09-26 Apple Inc. User interfaces for capturing and managing visual media
US10681282B1 (en) 2019-05-06 2020-06-09 Apple Inc. User interfaces for capturing and managing visual media
US10645294B1 (en) 2019-05-06 2020-05-05 Apple Inc. User interfaces for capturing and managing visual media
US10652470B1 (en) 2019-05-06 2020-05-12 Apple Inc. User interfaces for capturing and managing visual media
US11223771B2 (en) 2019-05-06 2022-01-11 Apple Inc. User interfaces for capturing and managing visual media
US12192617B2 (en) 2019-05-06 2025-01-07 Apple Inc. User interfaces for capturing and managing visual media
US11706521B2 (en) 2019-05-06 2023-07-18 Apple Inc. User interfaces for capturing and managing visual media
US10674072B1 (en) * 2019-05-06 2020-06-02 Apple Inc. User interfaces for capturing and managing visual media
US10735643B1 (en) 2019-05-06 2020-08-04 Apple Inc. User interfaces for capturing and managing visual media
US10791273B1 (en) 2019-05-06 2020-09-29 Apple Inc. User interfaces for capturing and managing visual media
US12081862B2 (en) 2020-06-01 2024-09-03 Apple Inc. User interfaces for managing media
US11330184B2 (en) 2020-06-01 2022-05-10 Apple Inc. User interfaces for managing media
US11054973B1 (en) 2020-06-01 2021-07-06 Apple Inc. User interfaces for managing media
US11617022B2 (en) 2020-06-01 2023-03-28 Apple Inc. User interfaces for managing media
US12086913B2 (en) 2020-08-03 2024-09-10 Google Llc Display responsive communication system and method
US11481941B2 (en) * 2020-08-03 2022-10-25 Google Llc Display responsive communication system and method
US12155925B2 (en) 2020-09-25 2024-11-26 Apple Inc. User interfaces for media capture and management
US11212449B1 (en) 2020-09-25 2021-12-28 Apple Inc. User interfaces for media capture and management
US12073615B2 (en) * 2020-12-16 2024-08-27 Here Global B.V. Method, apparatus, and computer program product for identifying objects of interest within an image captured by a relocatable image capture device
US11900662B2 (en) 2020-12-16 2024-02-13 Here Global B.V. Method, apparatus, and computer program product for training a signature encoding module and a query processing module to identify objects of interest within an image utilizing digital signatures
US20220188547A1 (en) * 2020-12-16 2022-06-16 Here Global B.V. Method, apparatus, and computer program product for identifying objects of interest within an image captured by a relocatable image capture device
US11830103B2 (en) 2020-12-23 2023-11-28 Here Global B.V. Method, apparatus, and computer program product for training a signature encoding module and a query processing module using augmented data
US11587253B2 (en) 2020-12-23 2023-02-21 Here Global B.V. Method, apparatus, and computer program product for displaying virtual graphical data based on digital signatures
US11829192B2 (en) 2020-12-23 2023-11-28 Here Global B.V. Method, apparatus, and computer program product for change detection based on digital signatures
US12094163B2 (en) 2020-12-23 2024-09-17 Here Global B.V. Method, apparatus, and computer program product for displaying virtual graphical data based on digital signatures
US20220262108A1 (en) * 2021-01-12 2022-08-18 Fujitsu Limited Apparatus, program, and method for anomaly detection and classification
US12100199B2 (en) * 2021-01-12 2024-09-24 Fujitsu Limited Apparatus, program, and method for anomaly detection and classification
US12101567B2 (en) 2021-04-30 2024-09-24 Apple Inc. User interfaces for altering visual media
US11416134B1 (en) 2021-04-30 2022-08-16 Apple Inc. User interfaces for altering visual media
US11778339B2 (en) 2021-04-30 2023-10-03 Apple Inc. User interfaces for altering visual media
US11350026B1 (en) 2021-04-30 2022-05-31 Apple Inc. User interfaces for altering visual media
US11539876B2 (en) 2021-04-30 2022-12-27 Apple Inc. User interfaces for altering visual media
US11418699B1 (en) 2021-04-30 2022-08-16 Apple Inc. User interfaces for altering visual media
US12112024B2 (en) 2021-06-01 2024-10-08 Apple Inc. User interfaces for managing media styles
US11784975B1 (en) * 2021-07-06 2023-10-10 Bank Of America Corporation Image-based firewall system
US12506953B2 (en) 2021-12-03 2025-12-23 Apple Inc. Device, methods, and graphical user interfaces for capturing and displaying media
US11991295B2 (en) 2021-12-07 2024-05-21 Here Global B.V. Method, apparatus, and computer program product for identifying an object of interest within an image from a digital signature generated by a signature encoding module including a hypernetwork
US20230384235A1 (en) * 2022-05-24 2023-11-30 Hon Hai Precision Industry Co., Ltd. Method for detecting defects in a product, electronic device, and storage medium
US12367378B2 (en) * 2022-05-24 2025-07-22 Hon Hai Precision Industry Co., Ltd. Method for detecting defects in a product, electronic device, and storage medium
US12401889B2 (en) 2023-05-05 2025-08-26 Apple Inc. User interfaces for controlling media capture settings
US12495204B2 (en) 2023-05-05 2025-12-09 Apple Inc. User interfaces for controlling media capture settings
CN116503289A (en) * 2023-06-20 2023-07-28 北京天工异彩影视科技有限公司 Visual special effect application processing method and system

Also Published As

Publication number Publication date
US20240267481A1 (en) 2024-08-08
US20250310468A1 (en) 2025-10-02
US12381995B2 (en) 2025-08-05

Similar Documents

Publication Publication Date Title
US12381995B2 (en) Scene-aware selection of filters and effects for visual digital media content
US10628675B2 (en) Skeleton detection and tracking via client-server communication
US11632533B2 (en) System and method for generating combined embedded multi-view interactive digital media representations
US10863210B2 (en) Client-server communication for live filtering in a camera view
US10855936B2 (en) Skeleton-based effects and background replacement
US11776199B2 (en) Virtual reality environment based manipulation of multi-layered multi-view interactive digital media representations
US11956412B2 (en) Drone based capture of multi-view interactive digital media
US10726560B2 (en) Real-time mobile device capture and generation of art-styled AR/VR content
US10958891B2 (en) Visual annotation using tagging sessions
US10719939B2 (en) Real-time mobile device capture and generation of AR/VR content
US20220012495A1 (en) Visual feature tagging in multi-view interactive digital media representations
US10861213B1 (en) System and method for automatic generation of artificial motion blur
KR20190026762A (en) Pose estimation in 3D space
US12261990B2 (en) System and method for generating combined embedded multi-view interactive digital media representations
US12495134B2 (en) Drone based capture of multi-view interactive digital media
US10353946B2 (en) Client-server communication for live search using multi-view digital media representations
US20250259376A1 (en) Augmented reality environment based manipulation of multi-layered multi-view interactive digital media representations
CN117991891A (en) Interaction method, device, head-mounted device, interaction system and storage medium
CN117671538A (en) A first-person perspective drone video saliency prediction method and system

Legal Events

Date Code Title Description
AS Assignment

Owner name: FYUSION, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HOLZER, STEFAN JOHANNES JOSEF;MUNARO, MATTEO;KAR, ABHISHEK;AND OTHERS;SIGNING DATES FROM 20170227 TO 20170303;REEL/FRAME:041529/0088

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
