
WO2015120413A1 - Real-time imaging systems and methods for capturing in-the-moment images of users viewing an event in a home or local environment - Google Patents


Info

Publication number
WO2015120413A1
WO2015120413A1 (application PCT/US2015/015071)
Authority
WO
WIPO (PCT)
Prior art keywords
images
event
viewers
image
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2015/015071
Other languages
French (fr)
Inventor
William Dickinson
Daniel MAGY
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FANPICS LLC
Original Assignee
FANPICS LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FANPICS LLC filed Critical FANPICS LLC
Publication of WO2015120413A1

Classifications

    • G06Q10/40
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
                        • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
                            • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
                                • H04N21/25866 Management of end-user data
                            • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
                                • H04N21/2668 Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
                    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N21/41 Structure of client; Structure of client peripherals
                            • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
                                • H04N21/41415 Specialised client platforms involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
                            • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
                                • H04N21/4223 Cameras
                        • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
                                • H04N21/44213 Monitoring of end-user related data
                                    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
                        • H04N21/47 End-user applications
                            • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
                                • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
                    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
                        • H04N21/61 Network physical structure; Signal processing
                            • H04N21/6106 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
                                • H04N21/6131 Network physical structure; Signal processing involving transmission via a mobile phone network
                    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
                        • H04N21/81 Monomedia components thereof
                            • H04N21/812 Monomedia components thereof involving advertisement data
            • H04W WIRELESS COMMUNICATION NETWORKS
                • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
                    • H04W4/06 Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; Services to user groups; One-way selective calling services
                        • H04W4/08 User group management

Definitions

  • This patent document relates to systems, devices, and processes for image capture, processing and communications to various users.
  • Group events like sporting events or concerts typically bring large crowds of people to event venues for watching the event live.
  • Such events are often televised and enjoyed by smaller groups or individuals in the comforts of their home or at smaller, local gatherings (e.g., such as pubs, bars, and restaurants).
  • the reactions of individuals watching the live or televised performances can be highly animated.
  • a photograph taken of the spectators watching and enjoying the event may provide them with pleasant memories of the event.
  • An online social network is an online service, platform, or site that focuses on social networks and relations between individuals, groups, organizations, etc., that forms a social structure determined by their interactions, e.g., which can include shared interests, activities, backgrounds, or real-life connections.
  • a social network service can include a representation of each user (e.g., as a user profile), social links, and a variety of additional services.
  • user profiles can include photos, lists of interests, contact information, and other personal information.
  • Online social network services are web-based and provide means for users to interact over the Internet, e.g., such as private or public messaging, e-mail, instant messaging, etc. Social networking sites allow users to share photos, ideas, activities, events, and interests within their individual networks.
  • Techniques, systems, and devices are disclosed for real-time image and video capturing, processing, and delivery of viewers viewing a content stream of an event (e.g., such as a televised sporting event, concert, etc.) at home or small-group gathering in a private or public place (e.g., such as a bar, pub, restaurant, outdoor screen, etc.).
  • an imaging service system includes an imaging unit arranged at a place including a home or a public or private place of gathering, where the place includes one or more display devices to present visual and/or audio content, the imaging unit including one or more cameras arranged to capture images of one or more viewers at the place viewing an event on the one or more display devices, in which the images include photos or video; a data processing unit in communication with the one or more cameras, the data processing unit including a processor, a memory, and a wireless transmitter and receiver, the data processing unit configured to at least partially process the captured images and transmit the images to another device; and a trigger module in communication with one or both of the data processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, in which the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the data processing unit to identify a captured photo or video frame among a sequence of the photos or the video continuously captured by the one or more cameras.
  • an imaging service device includes one or more cameras arranged to capture images of one or more viewers at a place viewing an event on one or more display devices, in which the images include photos or video; an image processing unit to process the captured images to produce processed images, in which the image processing unit includes a processor, a memory, and a wireless transmitter and receiver to at least partially process the captured images and transmit the images to another device; and a trigger module in communication with one or both of the image processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event.
  • the trigger module includes a sensor to detect at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers, or visual and/or audio content from the one or more display devices.
  • the image processing unit is configured to be in communication with at least one of the one or more display devices or a user device of the one or more viewers to present the processed images on a display screen to the one or more viewers in real-time with respect to the occurrence during their viewing of the event.
  • a method for providing images of viewers viewing an event remotely from the event venue includes capturing, using one or more cameras arranged at a place for viewing an event on a display device, images including a sequence of photos and/or video of one or more viewers at locations in the place, in which the capturing is initiated responsive to a triggering signal received during the viewing of the event, or in which the capturing includes continuously capturing the images of the one or more viewers during the viewing of the event; processing, using a data processing unit in communication with the one or more cameras, the images to produce processed images, in which the processed images include images of the reaction by the one or more viewers to the occurrence of the event; and distributing the processed images to a viewer of the one or more viewers.
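For illustration only, the capture / process / distribute steps of such a method could be sketched as follows. This is a minimal, hypothetical Python sketch under assumed interfaces; every function and field name here is an assumption for illustration, not part of the disclosure:

```python
import time

def capture_images(camera, n_frames=5):
    """Capture a short burst of frames from a camera (hypothetical camera API
    exposing a read() method that returns one frame)."""
    return [camera.read() for _ in range(n_frames)]

def process_images(frames, event, moment):
    """Annotate captured reaction frames with event/moment metadata."""
    return [{"frame": f, "event": event, "moment": moment,
             "captured_at": time.time()} for f in frames]

def distribute(images, viewers):
    """Deliver the processed images to each viewer (here: a simple mapping
    standing in for delivery to a device, app account, or social network)."""
    return {viewer: images for viewer in viewers}
```

A usage sketch: a trigger fires during a goal, `capture_images` grabs a burst, `process_images` tags it, and `distribute` hands the result to each viewer's account.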
  • imaging devices are embedded in, connected to, or otherwise associated with a video/audio display device (e.g., such as a TV or computer) that is displaying live video content of an event and configured to image the viewers during key moments (e.g., emotional reaction moments) of the event, such as a goal scored during a sporting event.
  • Metadata can be added to the captured content that is associated with the event and the moment, and the images are stored and then shared via applications, social networks, email, etc.
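The metadata attachment described above can be illustrated with a simple structure like the following (a hedged sketch; the field names are assumptions, not the disclosure's schema):

```python
def tag_image(image_bytes, event_id, moment_label, timestamp):
    """Bundle captured content with event/moment metadata so it can later be
    stored, looked up, and shared (illustrative structure only)."""
    return {
        "content": image_bytes,
        "metadata": {
            "event_id": event_id,    # which broadcast event was being watched
            "moment": moment_label,  # the key moment, keyed to the trigger
            "timestamp": timestamp,  # when the reaction was captured
        },
    }
```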
  • an imaging service system includes an imaging unit arranged proximate a content display device to capture images or video of viewers of the content display device, and one or more computers in communication with the imaging unit to process the images or video.
  • the imaging unit includes one or more cameras arranged to capture images or video of one or more viewers viewing presented content of an event on the content display device, a trigger module in communication with the one or more cameras to initiate the capture of the images or video based on an occurrence of the event, in which the captured images or video display a reaction by the one or more viewers to the occurrence of the event, and a processing unit including a memory unit and a processor configured to process and store the captured images or video.
  • the one or more computers are configured to receive the images or video from the imaging unit and process the images or video to form processed images or video.
  • FIG. 1A shows a diagram of an exemplary integrated viewer reaction-capture system of the disclosed technology.
  • FIG. 1B shows a block diagram of an exemplary event-viewing imager device of the disclosed technology.
  • FIG. 2 shows a diagram of an exemplary method to capture, process, and deliver images of viewers of a broadcasted event using an integrated consumer reaction-capture system of the disclosed technology.
  • FIGS. 3A and 3B show illustrative diagrams of an exemplary embodiment of an integrated viewer reaction-capture system of the disclosed technology.
  • FIGS. 4A and 4B show illustrative diagrams of other exemplary embodiments of an integrated viewer reaction-capture system of the disclosed technology.
  • FIG. 5 shows an illustrative diagram of another exemplary embodiment of an integrated viewer reaction-capture system of the disclosed technology.
  • FIG. 6 shows an example of a communication network for implementing an image capturing, processing, and delivery service system of the disclosed technology.
  • some main issues or difficulties include capturing the image or images in a short period of time and at just the right moment; capturing the individual spectator and/or group of spectators in focus and in the context of the moment; and preparing the captured image or images so they can be easily and rapidly accessed, e.g., by delivering the image or images directly to the user and/or integrating the image content into a social network, particularly a social network with a series of specific sharing mechanisms and a unique interface.
  • some main issues or difficulties include distributing this image content to other viewers watching the event shortly after the image content has been captured.
  • Systems, devices, and methods are disclosed for real-time image capturing, processing, and delivery of viewers viewing a content stream of an event (e.g., such as a televised sporting event, concert, etc.) at home or a small-group gathering in a private or public place (e.g., such as a bar, pub, restaurant, outdoor display screens, etc.).
  • Images captured, processed, delivered, and/or displayed using the disclosed technology can include still photos, video, or both or a mixture of still photos or video.
  • the disclosed technology includes a platform to capture photos and video of the individual viewers watching the event and to process and distribute the captured photos and/or video to the users of the platform.
  • the disclosed technology can provide the 'in-the-moment' images (e.g., photos and/or video) to the users while they continue to watch the event, e.g., including immediately after the special 'moment' occurred, which allows the users to share their reactions captured in the images through social media applications.
  • a series of photos or video of viewers viewing an event on a display in a remote private or public setting that includes an exemplary image capturing, processing, and delivery system of the disclosed technology can be taken and made available rapidly (e.g., in real-time, during the event), providing a virtual layout of the individuals in the gathering at the private or public setting, e.g., such as a viewer's home, or a bar or restaurant.
  • the photos or video show images of users enjoying themselves, which is an entirely new medium through which fans and advertisers/brands can interact with one another.
  • imaging devices are embedded in, connected to, or otherwise associated with a video/audio display device (e.g., such as a TV, computer, tablet, smartphone, etc.) that is displaying live video content of an event and configured to image the viewers during key moments (e.g., emotional reaction moments) of the event, such as a goal scored during a sporting event.
  • modern consumer devices such as televisions, game consoles, computers, and mobile devices like smartphones, tablets, and wearable devices can employ image capture, processing, and communication devices of the disclosed technology to capture images (e.g., photos and/or video) of users while they are viewing the video content of the event.
  • wired or wirelessly connected imaging units can be directly interfaced with a tablet or laptop to capture images of a user.
  • Built-in imaging units in televisions (e.g., smart TVs) can likewise be used to capture images of viewers watching the displayed content.
  • the disclosed technology includes systems for image capture, processing and delivery of images from still photo and/or video camera devices that attach or interact directly or indirectly to a data processing device, an image capture trigger unit, and a content display console, e.g., such as a television, computer, mobile device, radio, etc.
  • the disclosed technology uses existing camera devices that are embedded in or connected to video/audio content display devices (e.g., TVs) presenting live video content of an event to capture the reaction moments of the viewers and deliver processed image content to them in real-time.
  • the camera system is active so it can capture the viewers' reactions to the content being displayed.
  • the broadcasted or streamed content can include content transmitted from a single transmitter to multiple receiving units or a single receiving unit, or can include content stored on a device and presented for display on the same or other device.
  • the camera devices can be initiated to capture images by the trigger during a significant moment in the event (e.g., an event that evokes an emotional reaction from the audience), or the camera devices can continuously capture video and/or photos of the viewer(s), in which the trigger is used to identify the timing of a particular video sequence or image set to be isolated and used (for delivery). Metadata can be added to each piece of captured content and is associated with the event.
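The continuous-capture mode described above, in which a trigger selects frames out of an ongoing recording, can be sketched with a rolling frame buffer. This is a hypothetical illustration; the capacity and time-window parameters are assumptions, not values from the disclosure:

```python
from collections import deque

class ReactionBuffer:
    """Continuously buffer the most recent frames; when a trigger fires, its
    timestamp selects the frames around the significant moment."""

    def __init__(self, capacity=120):
        # rolling window of (timestamp, frame) pairs; old frames fall off
        self.frames = deque(maxlen=capacity)

    def add(self, t, frame):
        self.frames.append((t, frame))

    def select(self, trigger_t, before=1.0, after=2.0):
        # keep frames from just before the trigger to shortly after it,
        # so the reaction build-up and the reaction itself are both kept
        return [f for (t, f) in self.frames
                if trigger_t - before <= t <= trigger_t + after]
```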
  • the images can be stored locally and/or in the cloud, and the processed photos/video can be shared via a software application ('app') on a user's mobile communication device (e.g., smartphone, tablet, smartwatch, smartglasses, etc.) associated with the image capture and processing system, via social networks and associated social media apps, via email, via messaging, and/or by displaying on user devices, etc.
  • the images can be provided to the individual users captured in any given photo or video using the software application associated with the image capturing, processing, and delivery technology.
  • the software app can reside on a user device.
  • Such images of the individual users can be saved to the respective users' accounts with the software app, and be available for viewing, sharing, and other user-desired functions on the application.
  • the user can use the software app to provide the images to an online social network.
  • the software app can operate functions of the user's mobile device where the software app resides to communicate with the particular social network and obtain a token issued by the social network. The token can be utilized to access a portion of the user's online social networking profile, e.g., via an application programming interface provided by the online social network. Through this interface, the user can receive a request to share the photo and/or video on the particular social network, and thereby generate a 'post' or other sharing notification on the social network using the token and the processed image.
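The token-based sharing flow described above could be sketched as follows. This is a deliberately generic illustration of posting on a user's behalf with a network-issued access token; the function signature and payload fields are assumptions, not any real social network's API:

```python
def share_to_network(image, user_token, post_text, api_post):
    """Post a processed image to a social network on the user's behalf,
    using an access token previously issued by that network.
    `api_post` stands in for the network's real API call (hypothetical)."""
    if user_token is None:
        # the user never authorized the app, so no token was issued
        raise PermissionError("user has not authorized sharing")
    payload = {
        "text": post_text,
        "media": image["content"],
        "meta": image.get("metadata", {}),
    }
    return api_post(token=user_token, payload=payload)
```

In a real deployment the token would come from the network's authorization flow (e.g., an OAuth-style grant), and `api_post` would be the network's posting endpoint.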
  • FIG. 1A shows a diagram of an exemplary integrated viewer reaction-capture system 100 of the disclosed technology.
  • an event is displayed to a viewer 102 (e.g., which can be multiple viewers or a single viewer in a home or other private or public venue) using a display device that displays video and/or audio content 101, e.g., which can include but is not limited to a computer, television, tablet, mobile device, or wearable screen such as smartglasses, smartwatch, or other device that can display video content, or device that can solely produce audio content such as a radio.
  • the viewer 102 can be situated at home, a bar, or anywhere remotely from the event where the event is being broadcast or streamed.
  • during a significant moment of the event (e.g., such as a reaction-invoking or emotional moment), images of the viewer 102 are captured by a camera 105 of the system 100 that is operated by an event-viewing imager and processing device 103 of the system 100.
  • the event-viewing imager and processing device 103, shown in FIG. 1B, includes a data processing unit and data communications unit, and is in data communication with the camera 105.
  • the event-viewing imager and processing device 103 is configured to control image capturing of the viewer 102 by the camera 105.
  • the system 100 includes a trigger unit 104 to generate trigger data corresponding to the significant moment.
  • the trigger unit 104 can include a sensor to detect a stimulus associated with the significant moment of the event to generate the trigger data.
  • the trigger data can include a signal produced by the sensor that has a distinguishing feature, e.g., such as a baseline electrical signal with a signal spike corresponding to the detection of the significant moment.
  • the trigger data can be used by the device 103 to initiate the capture of images via the camera 105, or to identify an image (e.g., a photo or video frame) in a series of continuously captured photos or video.
  • the stimulus that the trigger unit 104 detects can include a sound stimulus (e.g., of a particular volume or frequency), a visual stimulus (e.g., rapid acceleration of movements produced by the viewer 102, or facial expressions by the viewer 102), mechanical perturbations (e.g., clapping, stomping, etc.), voice control by the viewer 102 (e.g., such as predetermined words or phrases), among other stimuli.
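The trigger-data shape described above (a baseline sensor signal with a spike marking the significant moment) can be illustrated with a simple thresholding sketch. The disclosure does not specify a detection algorithm; the baseline and threshold values here are illustrative assumptions:

```python
def detect_trigger(samples, baseline, threshold):
    """Return the indices where a sensor signal spikes above
    baseline + threshold, i.e. candidate 'significant moment' triggers.
    Works for any scalar sensor stream (sound level, motion energy, etc.)."""
    return [i for i, s in enumerate(samples) if s - baseline > threshold]
```

Each returned index could then initiate a camera capture, or serve as the timestamp used to pick frames out of a continuous recording.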
  • FIG. 1B shows a block diagram of the event-viewing imager and processing device 103.
  • the device 103 can include a power source 115, which can include a battery, such as a rechargeable battery, and/or a converter to convert AC electrical power into DC when the device 103 is plugged into an electrical outlet in the home or private or public viewing location.
  • the device 103 includes a data processing and communications unit 113 to process and store the captured images in real-time, and/or transmit the raw and/or processed images to one or more external devices, e.g., such as one or more centralized computer systems in a communication network accessible via the Internet (referred to as 'the cloud'), or one or more user mobile communication devices of the viewer 102 (e.g., smartphone, tablet, smartwatch, smartglasses, etc.).
  • the data processing and communications unit 113 can include a processor to process data and a memory in communication with the processor to store data.
  • the processor can include a central processing unit (CPU) or other processor, such as a microcontroller unit (MCU).
  • the memory can include and store processor-executable code, which when executed by the processor, configures the data processing unit 113 to perform various operations, e.g., such as receiving information, commands, and/or data, processing information and data, and transmitting or providing information/data to another entity or to a user.
  • the data processing and communications unit 113 can be implemented by a computer system in the cloud (e.g., one or more servers in the cloud).
  • the memory can store information and data, such as instructions, software, values, images, and other data processed or referenced by the processor.
  • for example, the memory can include Random Access Memory (RAM), Read Only Memory (ROM), and Flash Memory devices.
  • the data processing and communications unit 113 can include an input/output (I/O) unit supporting wired and wireless interfaces, e.g., including, but not limited to, Universal Serial Bus (USB), IEEE 1394 (FireWire), Bluetooth, IEEE 802.11, Wireless Local Area Network (WLAN), Wireless Personal Area Network (WPAN), Wireless Wide Area Network (WWAN), and WiMAX / IEEE 802.16 (Worldwide Interoperability for Microwave Access).
  • the I/O of the data processing and communications unit 113 can also interface with other external interfaces, sources of data storage, and/or visual or audio display devices, etc. to retrieve and transfer data and information that can be processed by the processor, stored in the memory, or exhibited on an output unit of an external device.
  • an external display device can be configured to be in data communication with the data processing unit, e.g., via the I/O, which can include a visual display device, an audio display device, and/or sensory device, e.g., which can include a smartphone, tablet, and/or wearable technology device, among others.
  • the data processing and communications unit 113 can include a wireless transmitter/receiver (Tx/Rx) unit 114 to wirelessly transmit and receive data to and from an external device, such as the computer system.
  • the wireless Tx/Rx unit 114 transmits raw data (e.g., photos and/or video) captured by the camera 105 and trigger data acquired by the trigger 104 to the computer system in the cloud or the user's communication device for processing of the captured images associated with the significant moment of the viewed event.
  • the device 103 processes the raw data captured by the camera 105 and the trigger data acquired by the trigger 104 to produce processed images that can be stored on the device 103 and/or transmitted to the external devices by the Tx/Rx unit 114, e.g., and able to be displayed and shared on the user's device (e.g., via the software app, in some exemplary scenarios).
  • the device 103 includes a display in some embodiments.
  • the device 103 can be used to process and display the captured images to the viewer 102 in real-time (e.g., instantaneously after capture and processing) via the display device 101 (e.g., TV, smartphone, tablet, wearable display device, laptop, etc.).
  • the processed images can be transmitted to the display device 101, and processed via the processing unit of the display device 101, to be presented on the display of the device 101, e.g., including simultaneously with the broadcasted event (e.g., in a smaller viewing window on the event viewing window of the display device 101, or in another presentation window that can be accessed on the display device 101).
  • the processed images can be transmitted to the display device 101 wirelessly via the Tx/Rx unit 114 to a receiver of the display device 101, or by wireless or wired communication over the Internet (e.g., via a server in the cloud in communication with the device 103 and the display device 101), or by wired communication between the device 103 and the display device 101 via a wired communication cable.
  • the device 103 can be incorporated into an existing device, e.g., such as a television, gaming console, computer, tablet, mobile device, or other device that includes a camera or image or video capturing apparatus.
  • the camera 105 can be included as part of the display device 101 (e.g., a smart TV) on which the user 102 is viewing the event.
  • the 'host device' (the display device 101) of the device 103 includes the data processing and communications unit 113 that is in communication with the camera 105 to control the camera 105 of the display device 101 for capturing the images, and to wirelessly communicate, process and/or store the captured imaging data.
  • the trigger unit 104 can also be included as part of the host device and in communication with the data processing and communications unit 113.
  • the device 103 and/or camera 105 may be included in the existing TV or gaming console or computer connected to the Internet, such that the system 100 includes a software layer, e.g., such as an application program interface (API), added to the existing console or computer device infrastructure to control image capture and data transfer to a computer system via the Internet for subsequent processing and delivery (e.g., server in the cloud, user mobile device, or other).
  • the software layer can utilize the existing device infrastructure to process and deliver the images to the viewer 102.
  • the software layer may also include a user-interactive software layer to receive user input and display output (e.g., captured and/or processed images of the viewer 102, or received images or data from other viewers in other settings using the disclosed image capture, processing, and delivery service).
  • the device 103 is configured to capture images of the viewer 102 during an event displayed on the device 101 based on trigger data produced by the trigger unit 104 associated with a significant moment during the viewing of the event.
  • the camera 105 can be triggered by stimuli caused by the viewer 102, e.g., from a visual or audio cue based on the reaction from the viewer 102, or by a stimulus from a remote location with respect to the viewer 102, e.g., including from content information from the display device 101 and/or from the device 103.
  • photos/video of the viewer 102 can be captured constantly or continuously during the broadcasting/streaming of the event, such that the trigger can be used to isolate the video section or images just before and during the viewer's reaction to capture the entire reaction sequence.
  • the one or more cameras 105 of the system 100 can be configured to continuously capture still photos at a given frequency (e.g., 2 photos/second or faster) or continuous video and to store a predetermined amount (e.g., the most recent 2 minutes) of the raw photos and/or video in the memory of the data processing and communications unit 113, e.g., by a sliding buffer technique.
  • the data processing and communications unit 113 can update the stored raw image data by deleting the oldest image data in the storage as it adds the most recently captured raw image data to the store (e.g., for every second of image data deleted at the beginning of the image data time segment, a new second of image data can be added to the end of the image data time segment).
  • the sliding buffer can store the relevant captured image data into a non-deleting location of the memory, or upload the relevant captured image data to the computer system in the cloud;
  • the relevant captured image data includes (i) a predetermined portion of the most recent past image data currently in the sliding buffer since the trigger occurrence, and (ii) a predetermined amount of new captured raw image data since the trigger occurrence as it comes into the buffer.
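The sliding-buffer behavior described above can be sketched in Python (an illustrative sketch only, not part of the disclosure; the class and parameter names are hypothetical, and integers stand in for raw photo/video frames):

```python
from collections import deque

class SlidingCaptureBuffer:
    """Keeps only the most recent `capacity` frames; on a trigger occurrence,
    preserves the last `pre` frames plus the next `post` arriving frames."""

    def __init__(self, capacity, pre, post):
        self.buffer = deque(maxlen=capacity)  # oldest frames drop off automatically
        self.pre = pre
        self.post = post
        self.saved = None          # non-deleting store for a triggered clip
        self._post_remaining = 0

    def add_frame(self, frame):
        self.buffer.append(frame)
        if self._post_remaining > 0:
            # still collecting new frames after the trigger occurrence
            self.saved.append(frame)
            self._post_remaining -= 1

    def trigger(self):
        # copy the most recent `pre` frames out of the deletable buffer
        self.saved = list(self.buffer)[-self.pre:]
        self._post_remaining = self.post

buf = SlidingCaptureBuffer(capacity=240, pre=4, post=3)
for t in range(10):            # frames 0..9 arrive before the trigger
    buf.add_frame(t)
buf.trigger()                  # significant moment detected
for t in range(10, 15):        # frames 10..14 arrive afterwards
    buf.add_frame(t)
print(buf.saved)               # → [6, 7, 8, 9, 10, 11, 12]
```

On a trigger, the most recent `pre` frames are copied to a non-deleting store, and the next `post` frames are appended as they come into the buffer, mirroring items (i) and (ii) above.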
  • the device 103 can be directly connected to the display device 101 to receive signal communications from the device 101 and/or provide signal communications to the device 101 (e.g., such as processed images captured), such that the received signals can be used to trigger the event-viewing imager and processing device 103 to capture the images via the camera 105.
  • the received signal can include data associated with the audio content of the event being viewed (e.g., such as crowd noise), in which the data processing and communication unit 113 processes the received data to identify a trigger event (e.g., substantial increase in the crowd noise) to initiate the image capture of the camera 105 or identify the significant moment in a series of captured photos or video during a continuous capture mode by the camera 105.
  • the device 103 can then be used to process and/or display the images to the viewer 102 in real-time (e.g., instantaneously after capture and processing) via the display device 101.
  • the system 100 can be networked to another display device located in a different location than the display device 101, such that the captured/processed images can be presented to the other remotely located individual or group interacting with the viewer 102, e.g., connected to the remote user over a network such as a social network or other connection.
  • the system 100 has access to the network for providing the shared content.
  • the captured image content of the viewer 102 can be processed and then made available to the viewer 102 so he/she/they can (1) add his/her/their photos and/or video to a social network, and/or (2) send the processed photos and/or video directly to others, e.g., including other viewers of the broadcasted event at other viewing locations (e.g., including at the live event venue), via the software app resident on the device 103 or the mobile device of the viewer 102.
  • the device 103 can be connected to other devices via the Internet which are connected through users' profiles or networks so that the captured content can be shared with each other, e.g., whether prompted or automatically.
  • the viewer 102 reacts and their video or photos are captured by the system 100, which can then be saved and the user can share this content with other users or networks or it can be automatically displayed to other viewers of the broadcasted event that are connected in some manner, e.g., such as a social network friend, follower, or username acceptance.
  • the images of the user's reaction can be displayed to these connections after it has occurred or effectively during their live reaction.
  • the viewer 102 of the event could have a window or multiple windows appear that are a live stream of their connections' reactions to that moment in the event so they can all experience the event together virtually, e.g., which can be displayed on a display of the device 103 or the display device 101.
  • the event content provider (e.g., TV station(s) broadcasting the content) can embed signals in the content stream that function as triggers to cause the system 100 to capture images of the viewer 102.
  • the embedded signals can be encoded in the broadcast as they are recording or feeding the stream to their networks, which can signify an emotional and/or significant event of the content being streamed.
  • These embedded signals can then be parsed/read by the system 100, e.g., via the data processing unit 113 of the device 103, to trigger immediate image capture of the viewer 102 using the camera 105 while the viewer 102 views the event at his/her/their home or other small gathering location, or to identify the frame of a continuous photo series or video capture by the camera 105 at which to process the moments leading up to, at, and after the significant moment for a particular time span.
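As a hypothetical illustration of parsing such embedded signals (the packet layout and the `TRIG` marker below are invented for the sketch; an actual encoding would be broadcaster-specific):

```python
def parse_embedded_triggers(stream_packets, marker=b"TRIG"):
    """Return the timestamps of content packets carrying an embedded
    trigger marker, i.e., the significant moments at which the viewer's
    reaction should be captured or isolated."""
    return [ts for ts, payload in stream_packets if marker in payload]

# Hypothetical content stream: (timestamp-in-seconds, packet payload) pairs.
packets = [
    (10.0, b"frame-data"),
    (12.5, b"TRIG:goal-scored"),
    (30.2, b"frame-data"),
    (44.1, b"TRIG:replay"),
]
print(parse_embedded_triggers(packets))  # → [12.5, 44.1]
```

The returned timestamps could then drive immediate image capture or select the frames of a continuous capture to process, as described above.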
  • a person or groups of persons attending a live event at the event venue can also have their photos/video, which are captured at the event venue by an image capture system operating at the event, streamed live or displayed post-moment to others in connection with their viewing devices while they are viewing the event on the display device 101 in their home, bar, or other private or public setting.
  • the live event attendee can also have the video or photos of their connections streamed live to them, or sent or made available on a network, e.g., via the software app, showing the reactions of other event attendees at the event venue or of the viewers at the remote locations (from the event) watching from home, bar, etc.
  • FIG. 2 shows a diagram of a method 200 to capture, process, and deliver images of the viewers of a broadcasted event using an exemplary integrated viewer reaction-capture system of the disclosed technology, e.g., such as the system 100.
  • a significant event 201 that occurs during an event (e.g., sports game, concert, TV show or movie, etc.) being viewed by one or more individual viewers (e.g., the viewer 102) on the display device 101 may cause the viewer 102 to have a reaction 202 (e.g., display emotional expression and/or behavior) in the viewer's private or public venue.
  • the method includes a process 210 to trigger image capture of the viewer 102 based on the significant event 201.
  • the trigger of process 210 can include a centralized trigger provided by a signal received by the system 100 or a localized trigger initiated by an audio, optical, or mechanical perturbation based on the sensor of the trigger unit 104 of the device 103 being triggered at the location of the viewer 102, or the trigger can be initiated by the viewer 102 him/herself.
  • the method includes a process 220 to cause image capture of the viewer 102 based on the trigger of the process 210 for a duration of time.
  • the duration can be a pre-configured time duration based on the type of trigger.
  • the time duration can continue based on feedback from the sensor to determine the duration of the image capture in real-time based on that particular moment.
  • instead of being a trigger in the process 210 that causes the camera to capture the images in the process 220, the trigger can be an identifier that segments, isolates, or filters images of the viewer 102 when the camera is configured to continuously capture the images.
  • the method 200 includes a process 230 to process the captured images to produce processed images.
  • the method includes a process 240 to deliver the images to the viewer 102 (e.g., by sharing the processed images via social networks or sending them directly to other users).
  • the method can also include a process 250 to display the processed/delivered image of the viewer 102 on one or multiple user devices in real-time, e.g., including the display device 101, mobile devices of the viewer 102 and/or his/her/their socially-connected friends, etc.
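The flow of processes 210-250 can be summarized in a minimal Python sketch (the callables are illustrative stand-ins for the trigger, processing, and delivery stages; none of this code is part of the disclosure):

```python
def run_reaction_pipeline(frames, trigger_fn, process_fn, deliver_fn):
    """Sketch of method 200: the trigger (210) selects captured frames
    (220), processing (230) produces processed images, and delivery (240)
    hands them off for display (250)."""
    captured = [f for f in frames if trigger_fn(f)]    # 210/220
    processed = [process_fn(f) for f in captured]      # 230
    return deliver_fn(processed)                       # 240

frames = [{"id": 1, "loud": False}, {"id": 2, "loud": True}, {"id": 3, "loud": True}]
result = run_reaction_pipeline(
    frames,
    trigger_fn=lambda f: f["loud"],                    # e.g., audio-level trigger
    process_fn=lambda f: {**f, "processed": True},     # e.g., crop/label the image
    deliver_fn=lambda images: images,                  # e.g., push to user devices
)
print([f["id"] for f in result])  # → [2, 3]
```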
  • the method 200 can include image pre-processing techniques performed prior to the process 210 and earlier events 201 and 202.
  • the method can include a process to capture reference images (e.g., sequence of photographs and/or video) of the viewer 102 and/or environment of the place that the event is being viewed on the display device 101.
  • the image pre-processing techniques can include a process to perform object recognition to the reference images to identify people and/or objects (e.g., couch, chairs, bar stools, tables, etc.) in the environment at the place.
  • the process can include assigning labeling information to the captured reference images.
  • the image pre-processing techniques can include a process to generate a map of locations (e.g., to a grid) corresponding to physical locations in the environment of the place, which can include creating coordinates associated with the mapped locations that are associated with physical locations of the place, e.g., which can include the objects and/or people recognized in the captured reference images.
  • the image pre-processing techniques can include a process to present a mapping image to the viewer 102 (e.g., via the software app, interactive website viewable on a web browser, text message, email, etc.) to request the particular mapped location in the environment that the viewer 102 is occupying, e.g., during the viewing of the event on the display device 101.
  • the image preprocessing techniques can include a process to receive a response by the viewer 102 including the viewer-identified mapped location.
  • the process to receive the viewer response can include receiving updated mapped locations from the viewer 102 in instances where the viewer changed locations in the environment while viewing the event.
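One way the mapped-location grid might be represented (a sketch only; the cell-naming scheme and coordinate convention are assumptions for illustration):

```python
def build_location_map(width_m, depth_m, cell_m=1.0):
    """Divide the viewing environment into grid cells, each with an ID and
    the (x, y) physical coordinates of its corner, so a viewer can report
    which cell he/she is occupying."""
    cols = int(width_m / cell_m)
    rows = int(depth_m / cell_m)
    return {
        f"cell-{row}-{col}": (col * cell_m, row * cell_m)
        for row in range(rows)
        for col in range(cols)
    }

# A 4 m x 3 m living room mapped to 1 m cells.
grid = build_location_map(width_m=4.0, depth_m=3.0)
print(len(grid))         # → 12
print(grid["cell-2-3"])  # → (3.0, 2.0)
```

A viewer's response (e.g., "cell-2-3") would then tie the viewer to known physical coordinates for camera aiming and image-space selection.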
  • the process 230 includes a process to produce the processed image of the viewer 102 during the viewing of the significant moment (e.g., including before, at, and after the significant moment) of the event.
  • the process to produce the processed image can include a process to determine an image space of the captured image (e.g., one or more photos and/or video frames) containing at least one of the viewers 102 at a particular location in the map of locations, e.g., for the images associated with the significant moment.
  • the process to produce the processed image can include a process to generate the processed image based on the determined image space, e.g., by producing a segmented image by cropping at least one of the captured images to a size defined by the image space, in which producing the segmented image can include compensating for overlapping of two or more of the captured images (e.g., which can include forming a merged image).
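The cropping step can be illustrated with a toy frame represented as nested lists of pixel values (a real implementation would operate on image arrays; the bounding-box convention here is an assumption):

```python
def crop_to_image_space(image, box):
    """Crop a captured frame to the image space containing the viewer;
    `box` is (left, top, right, bottom) in pixel coordinates."""
    left, top, right, bottom = box
    return [row[left:right] for row in image[top:bottom]]

# A 4x6 'frame' whose pixel values encode their own coordinates.
frame = [[r * 10 + c for c in range(6)] for r in range(4)]
print(crop_to_image_space(frame, (1, 1, 4, 3)))  # → [[11, 12, 13], [21, 22, 23]]
```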
  • the process to produce the processed image can include a process to assign metadata to the generated image.
  • the metadata can include information associated with the event, the user, the place, the camera 105 and/or the trigger, or other type of information related to the significant moment during which the viewer 102 is being captured.
  • the photos/videos are labeled with metadata including the time the image was captured.
  • This content can then be placed within a stream of content that corresponds with the content at the event, e.g., such as images and video at the event venue.
  • the system can utilize the content data to build an event story that connects the viewers (e.g., users of the imaging service) at their home, bar, or other public or private gathering venues with the most valuable content (e.g., emotional or otherwise significant content) to the viewer in a quick and seamless manner.
  • the system 100 can include a user downloadable software application that can be implemented on the event-viewing imager device 103, which can communicate with the content being viewed to capture the images of the viewer, or which can be triggered via different methods.
  • the system 100 can be configured to constantly capture video clips or photos while the viewer 102 is viewing content of the event (e.g., such as a live sports event), in which the most relevant capture period of the video or photos is identified using a time stamp, central trigger, central server poll, or triggers embedded in the video stream.
  • the system 100 can be configured such that the imaging devices are triggered to capture images during an emotional event manually, e.g., based on a manual trigger.
  • the image capture can be initiated by the viewer manually based on a sensed occurrence of the viewer.
  • the system 100 can be configured to capture images during an emotional event when a threshold in the volume or movement of the viewer that the camera is monitoring is exceeded.
  • the event-viewing imager device 103 can be configured to detect emotions on the viewer's face to trigger imaging once a threshold is exceeded.
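A minimal sketch of the volume-threshold trigger (assuming audio arrives as normalized samples; the window size and threshold values are illustrative):

```python
import math

def detect_trigger(samples, window, threshold):
    """Return the start index of the first window whose RMS loudness
    exceeds `threshold`, or None — a stand-in for the crowd-noise or
    viewer-volume trigger."""
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / window)
        if rms > threshold:
            return start
    return None

quiet = [0.01, -0.02, 0.01, -0.01] * 4   # ambient room noise
cheer = [0.8, -0.7, 0.9, -0.85] * 4      # sudden loud reaction
print(detect_trigger(quiet + cheer, window=4, threshold=0.5))  # → 16
```

The same thresholding pattern could be applied to motion magnitude from the camera feed instead of audio.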
  • the system 100 can be configured such that the imaging devices are triggered to capture images during an emotional event automatically via a central location, which triggers all devices utilizing the system.
  • TV stations or media distribution centers can embed signals as triggers to signify big events that can be parsed/read on the system 100 to trigger immediate image capture when the user views the event.
  • Metadata can be added to the captured photos or video of the viewer, which can include information about the event or moment the user is reacting to and what user is viewing. This information can be called from a central location such as a server (e.g., in the cloud or at the event venue) when an event is triggered, or is pulled from a service that provides real-time event data. Users can edit their images to upload and share within social networks or other viewing platforms.
  • Metadata added to the images of the captured viewer(s) can also contain information of the individual(s) captured, as well as the moment being celebrated.
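The metadata attachment might look like the following (field names are invented for this sketch; the document does not prescribe a schema):

```python
def attach_metadata(image_id, event, moment_ts, viewers, location):
    """Bundle a captured image record with the kinds of metadata named
    above: the moment being reacted to, who was captured, and where."""
    return {
        "image_id": image_id,
        "event": event,                  # what the user is viewing
        "moment_timestamp": moment_ts,   # time the image was captured
        "viewers": viewers,              # individual(s) captured
        "location": location,            # viewing venue
    }

record = attach_metadata("img-0042", "Event II", 45132, ["Viewer B"], "home")
print(record["viewers"])  # → ['Viewer B']
```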
  • Viewer A and Viewer B are watching event II, either separately or together.
  • Viewer B is captured by the device 103 of the system 100 reacting to an exciting moment X.
  • Viewers A and B are connected to a network of users of the system 100.
  • the captured content of Viewer B during the exciting moment X can then be presented to Viewer A, including the captured/processed image of Viewer B with metadata showing the moment X being reacted to, and/or who the viewer is, e.g., such as Viewer B's name, username, location, etc.
  • the device 103 can be resident on a user device having a camera for image capture, in which the exemplary software application can be installed and configured on the user device to operate the real-time image capturing, processing, and/or delivery of spectators viewing a content stream of an event at a small or large gathering place.
  • the user device can include a smartphone, tablet, laptop or desktop computer, etc.
  • the user can place his/her user device at the content viewing location (e.g., the user's home, or a public gathering place such as a bar, pub, restaurant, outdoor screen, etc.) to capture images of the viewers based on the trigger event as described previously, e.g., such as caused by a particular occurrence during the event being viewed based on a trigger signal included in the content stream, and/or a reaction exhibited by the viewers detected by the user device to generate the trigger signal for image capture.
  • FIGS. 3A and 3B show illustrative diagrams of the integrated viewer reaction-capture system 300 of the disclosed technology, in which the device 103 is resident on one or more user devices (e.g., smartphone, tablet, laptop, computer, wearable device, etc.) placed in the viewing environment for image capture of the viewer 102 during viewing of the event on the display device 101.
  • a user opens the software app on his/her mobile device to communicate with and control one or more event-viewing imager devices 103, e.g., shown as devices 103a and 103c in FIGS. 3A and 3B.
  • the user can place his/her mobile devices having the device 103 in a desired location in the viewing area to utilize the camera 105 (e.g., shown as camera 105a and camera 105c in this example) for image capture of the user, e.g., viewers 102a, 102b, and 102c, during viewing of the event on the display device 101 (e.g., a smart TV, as shown in FIGS. 3A and 3B).
  • the devices 103a and 103c resident on the respective user devices that are placed in the viewing environment for image capture, can include respective trigger units 104a and 104c, to detect the stimuli to generate the trigger data associated with the significant moment for creating the in-the-moment images of the viewers 102a, 102b, and 102c.
  • the trigger unit 104 can include existing components in the user device, e.g., such as microphones, accelerometers, camera, or other component capable of sensing audio, visual, and/or mechanical stimuli.
  • the software app can be displayed and/or operated via the display device 101, e.g., as shown by software app user interface screen 510 that is presented in a portion of the display of the device 101.
  • the devices 103a and 103c, resident on the respective user devices are operable to receive the trigger data produced by the trigger 104a and/or 104c, to capture photos and/or video or identify the frame of continuous photo or video capture of the viewers 102a, 102b, and 102c.
  • the devices 103a and 103c can receive trigger data from other sources, e.g., such as content-embedded signal data in the content stream detected by the device 103, or by a signal provided through the software application operating on the device 103, such as a signal or time identified from an automatic or manual triggering that occurred from within the event itself (e.g., a trigger in the stadium, arena, concert, etc.).
  • the trigger data can be generated and provided to the devices 103a and/or 103c from another device that is not capturing the content (e.g., including a viewer 102b using his/her user device).
  • the user, using the software app, can select the desired event to be viewed.
  • This selected event implementation can provide additional information to the software app to identify which times to isolate the continuous photo/video capture of the viewers 102a, 102b, and 102c, or to identify the times to trigger the image capture by the cameras 105a and 105c.
  • This exemplary feature can enhance data processing efficiencies by ensuring the image content is reduced to manageable quantities of high quality and desired content for the viewers 102a, 102b, and 102c.
  • the user devices that include the device 103 can transmit the captured images to a data processing unit on a computer system (e.g., server) in the cloud, e.g., associated with the software app operating on the user devices, to process the captured images to produce processed images.
  • the computer system can process the captured images and send the processed images back to the device 103 or to other user devices of the viewers.
  • the device 103 resident on the user devices positioned to capture the images can perform the image processing to produce the processed images.
  • the processed photos/video content can be displayed on the user devices that include the device 103, other user devices (e.g., such as that held by the viewer 102b), or the software app user interface screen 510 presented on the display device 101 (e.g., shown as interface screen 511 in FIG. 3B), any of which can be connected via the software application, e.g., based on viewer's location proximity or user-connected accounts, or the event that they have selected using the software app.
  • the processed content of the viewers 102a, 102b, and 102c imaged at the significant moment of the event can be attached to additional data or photos/video of the event being viewed to display relevant information of the moment they just witnessed combined with their own reactions. For example, such additional data or photos/video can include information about the team or player involved in the significant moment at the event, statistics, etc.
  • the user can choose which content they wish to share of themselves via social media, the application, or the connected event stream. For example, this selected content for sharing can be added to a timeline of the event, which is structured to show this content in a series of events in chronological order.
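Building the chronological event timeline from shared content can be sketched as a sort on the capture timestamp (a simplification; the item fields are illustrative):

```python
def build_event_timeline(shared_items):
    """Order shared reaction content chronologically so it can be shown
    as a series of events in the timeline of the viewed event."""
    return sorted(shared_items, key=lambda item: item["moment_timestamp"])

items = [
    {"user": "Viewer A", "moment_timestamp": 95.0},
    {"user": "Viewer B", "moment_timestamp": 12.0},
]
print([i["user"] for i in build_event_timeline(items)])  # → ['Viewer B', 'Viewer A']
```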
  • the content shared can also be displayed to connected viewers in different places for other viewers at other event-viewing locations (e.g., friends' homes, bars, etc.) to see, e.g., on their display device 101 via the software app interface screen 510, or their own mobile devices operating the software app.
  • This can allow the event viewers to see the reactions of their connected friends/family during the event after emotional moments occur. In some implementations, for example, this can happen automatically via pre-existing connections, or the user can be prompted to share after an emotional event and capture occurs.
  • FIGS. 4A and 4B show illustrative diagrams of exemplary embodiments of the device 103 in an integrated viewer reaction-capture system 400 of the disclosed technology.
  • FIG. 4A shows an exemplary multi-camera event-viewing imager device 103 that includes a plurality of cameras 105 that can move to pan and capture multiple images simultaneously of the viewers 102a, 102b, and 102c in the viewing environment.
  • the multiple camera configuration and ability for the cameras 105 to move to various image views and focuses allows the device 103 in FIG. 4A to be placed in a variety of locations and positions in the viewing environment for optimal image capture of the viewers 102 during viewing of the event on the display device 101.
  • the device 103 can operate a software application that interacts with the software app on the user devices to allow user-control and direct interaction with the device 103.
  • the device 103 includes the trigger unit 104 to determine the optimal time to initiate image capture and/or identify the frame used to create the photo or video sequence associated with the significant moment of the event being viewed.
  • the device 103 can perform the image processing and delivery processes 230 and 240 to produce and provide the processed images of the viewers in the significant moment.
  • the exemplary multi-camera event-viewing imager device 103 can be operated by the user as previously described for the exemplary embodiments of the device 103 in FIGS. 3A and 3B, or beforehand or later in this patent document.
  • FIG. 4B shows an exemplary interactive event-viewing imager device 103 that includes a display screen to present an interactive software application on the device 103 to the viewers 102a, 102b, and 102c, e.g., while they are viewing the event on the display device 101.
  • the device 103 in FIG. 4B includes one or more cameras 105 that can move to pan and capture multiple images simultaneously of the viewers 102a, 102b, and 102c in the viewing environment.
  • the interactive software application that runs on the exemplary interactive event-viewing imager device 103 in FIG. 4B can operate to display the processed images to the viewers 102a, 102b, and 102c, as well as shared images provided to the user from his/her friends, family, etc.
  • the device 103 of FIG. 4B includes the trigger unit 104 to determine the optimal time to initiate image capture and/or identify the frame used to create the photo or video sequence associated with the significant moment of the event being viewed.
  • the device 103 can perform the image processing and delivery processes 230 and 240 to produce and provide the processed images of the viewers in the significant moment.
  • the exemplary interactive event-viewing imager device 103 can be operated by the user as previously described for the exemplary embodiments of the device 103 in FIGS. 3A and 3B, or beforehand or later in this patent document.
  • FIG. 5 shows an exemplary embodiment of the interactive and/or multi-camera event-viewing imager device 103 in an integrated viewer reaction-capture system 500 of the disclosed technology.
  • the interactive and/or multi-camera event-viewing imager device 103 of the system 500 can be designed in a themed configuration (e.g., such as sports related items, like a football helmet as shown in FIG. 5, a ball, a toy mascot or bobble head or statue or action figure, etc.; music or musician related item; or other themed configuration) or a furnishing (e.g., a furniture item, like the lighting fixture shown in FIG. 5; art, like a painting, sculpture, etc.; or other furnishing).
  • Such themed configurations or furnishing configurations can be designed to fit any decor of the viewing environment to which the system 500 is implemented.
  • the exemplary interactive and/or multi-camera event-viewing imager device 103 can be operated by the user as previously described for the exemplary embodiments of the device 103 in FIGS. 3A, 3B, 4A, 4B, or beforehand or later in this patent document.
  • a unique location ID and a unique device ID can be created and stored for the system and the viewers that use the system 100 during their viewing of an event.
  • This can act as a tethering point where users can "check in", e.g. via the software app, to determine and store their location and/or event to be viewed, which can allow the viewers to easily access their captured and processed photos and/or video, as well as receive other benefits such as notifications, selective advertising or promotions, etc.
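The unique-ID/check-in idea can be sketched as an in-memory registry (a hypothetical structure; a production system would persist this server-side):

```python
import uuid

class CheckInRegistry:
    """Tracks viewing locations by a unique location ID and the users
    'checked in' to each, acting as the tethering point described above."""

    def __init__(self):
        self.locations = {}

    def register_location(self, name):
        loc_id = str(uuid.uuid4())  # unique location ID
        self.locations[loc_id] = {"name": name, "checked_in": set()}
        return loc_id

    def check_in(self, loc_id, username):
        self.locations[loc_id]["checked_in"].add(username)

registry = CheckInRegistry()
loc = registry.register_location("living-room TV")
registry.check_in(loc, "viewer_b")
print(registry.locations[loc]["checked_in"])  # → {'viewer_b'}
```

Checked-in users could then be targeted with notifications, their captured photos/video, and event-specific capture parameters.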
  • the processed images provided to the users allow the users to 'tell a narrative' of their experience viewed from their couch, bar stool, etc.
  • the software app lets the users (e.g., fans) have the ability to remember and share their candid reactions, while allowing the imaging service system (e.g., operated by one or more servers in the cloud and/or through the device 103) to provide the users with a variety of dynamic and responsive messaging, and/or advertising. Also, for example, by checking-in or actively choosing the event being consumed, the photos/video recordings can be triggered or used based on predetermined parameters (e.g., generated by the imaging service system 100, which can be provided to the device 103) associated with the event, which can be used to optimally isolate the significant moments in the event being consumed.
• users that check in to event E using the software app allow the system 100 to provide image capture data to the device 103 to isolate 10 seconds of photos/video from the continuous images beginning at time 12:32:12. This can allow the most appropriate portions of content to be isolated and labeled for the viewer to subsequently upload/save.
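The time-window isolation in this example (10 seconds of continuous capture starting at 12:32:12) might look like the following sketch, where timestamps are seconds since midnight and all names are illustrative assumptions:

```python
def isolate_clip(frames, start, duration):
    """Select frames from a continuous capture whose timestamps fall in
    [start, start + duration). frames: list of (timestamp, frame_id)."""
    return [f for f in frames if start <= f[0] < start + duration]

# 12:32:12 expressed as seconds since midnight
start = 12 * 3600 + 32 * 60 + 12
```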
  • Another exemplary benefit of the check-in includes the ability to identify which event is being consumed to provide a live feed of reactions after big moments occur, such as streaming this to other user devices or smart televisions, and/or coupling the captured and processed images with content associated with the event to provide further added information and context to the processed images provided to the viewer 102. Exemplary techniques for user activation and/or check in are described.
  • the system 100 can determine the location of the viewer 102, e.g., in which metadata can be assigned to the captured photos and/or video based on the user's location.
• the software application can pre-populate locations in the viewing environment with the checked-in viewers and prompt or notify a viewer at the location that the event will start, remind them to turn on the device 103, or deliver a message or an update from the imaging service system, etc.
• the system 100 can perform facial recognition to identify a user and activate the system to check the user in.
• a user can be checked in by use of computer vision to identify and read certain markers (e.g., such as a user displaying a QR code on his/her mobile device to the camera 105 of the system 100), which can 'tag' the user and determine his/her location.
• a user can be checked in by use of geolocation of the user's mobile device.
• the trigger unit 104 can include a sensor to detect a stimulus in the external environment at the place (e.g., home, bar, restaurant, etc.) where the content is being viewed, e.g., on the display device 101.
  • the trigger unit 104 can detect a sound stimulus (e.g., of a particular volume or frequency), a visual stimulus (e.g., rapid acceleration of movements produced by the viewer 102, or facial expressions by the viewer 102), mechanical perturbations (e.g., clapping, stomping, etc.), voice control by the viewer 102 (e.g., such as predetermined words or phrases), among other stimuli.
  • the sensor of the trigger unit 104 can generate a trigger signal that can be used to initiate image capture of photos and/or video of the viewer 102 by the camera 105.
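As an illustration of the sound-stimulus trigger described above, a minimal sketch of a volume-based trigger follows; the actual sensing hardware, sample format, and threshold values are implementation details not specified here:

```python
def sound_trigger(samples, volume_threshold):
    """Fire when the RMS amplitude of an audio window (e.g., cheering or
    clapping) exceeds a volume threshold. samples: raw amplitude values."""
    if not samples:
        return False
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return rms >= volume_threshold
```

A frequency-based variant could apply the same thresholding to the output of an FFT band instead of the raw RMS level.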
  • the trigger unit 104 is in communication with the data processing and communications unit 113 of the device 103, such that the trigger signal is received by the data processing unit 113 to convert to trigger data, which can be used to initiate the image capture of the viewer 102 by the camera 105, or can be used to identify the significant moment for image processing of the continuously captured photographs and/or video by the one or more cameras 105.
  • the trigger unit 104 can include a data processing unit including a processor and memory to process and store and/or buffer the trigger data.
• the trigger unit 104 can include the sensor to detect the stimulus in the external environment at the place, in which the exemplary data processing unit of the trigger unit 104 produces trigger data that can include image capture instructions (e.g., that can be used to initiate the image capture by the camera 105), temporal information (e.g., that can be used by the data processing and communications unit 113 to identify the significant moment for image processing of the continuously captured photographs and/or video by the one or more cameras 105), and stimuli information (e.g., that can identify and characterize the type of stimulus detected to be used to trigger the image capture initiation or identification).
  • the trigger unit 104 can be housed in the device 103 in communication with the data processing and communications unit 113, in the display device 101 in wired or wireless communication with the device 103, or can be housed in an independent housing as a stand-alone device in wired or wireless communication with the device 103.
  • the trigger unit 104 can be configured in a user's device (e.g., smartphone, tablet, wearable device, etc.) and in communication with the data processing and communications unit 113 of the device 103, in which the trigger unit 104 includes executable program instructions stored in memory to control and/or receive information from a sensing unit or device of the user device, e.g. including, but not limited to, a microphone, an accelerometer, a camera, etc.
• a user (e.g., the viewer 102) can click an activate button on a remote (e.g., such as the user's mobile device or an independent remote control that communicates with the device 103), a game controller, or another device in communication with the device 103 when the user wishes to initiate the image capture sequence, e.g., such as when a significant moment occurs for which the user wants to record the reactions of the viewers who are watching the event in the local setting, or when a significant moment occurs at the local gathering independent of what is occurring at the event being viewed.
  • the user can also use voice commands to activate the image capture sequence, e.g., in which the trigger data is generated by the trigger unit 104 of the system 100 upon sensing and processing the voice activation.
  • a voice-based trigger can include voice recognition techniques to identify only predetermined individuals to control the trigger of the image capture or identification of the significant moment among continuous images from which to produce the processed images.
• Central Command Trigger: During an emotional moment of an event being broadcast, for example, a trigger can be activated at the event (e.g., by an automated system, or by a person spectating the event via pressing a button/clicking a mouse on a digital command console, an automatic verbal or motion initiation of the trigger, or a physical push of a trigger button) such that triggering signals are sent out via a central server, which triggers all units running the software to go through their established course of actions to start capture, start the parse, or analyze the buffered video/images to deliver reaction clips/images to the user.
  • the software app will have an active connection with the server/triggering/parsing system (this could be accomplished via TCP/IP, UDP, or active polling).
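Under the active-polling option mentioned above, the client side of the central-command connection could reduce to a sketch like this; `fetch` stands in for whatever transport (TCP/IP, UDP, HTTP) the app uses, and all names are hypothetical:

```python
import json

def poll_for_trigger(fetch, last_seen_id):
    """Ask the central server for its latest trigger and act only on new ones.

    fetch() returns the server's latest trigger as a JSON string, e.g.
    '{"trigger_id": 7, "event": "E", "action": "start_capture"}'.
    Returns the trigger dict if it is newer than last_seen_id, else None.
    """
    msg = json.loads(fetch())
    if msg["trigger_id"] > last_seen_id:
        return msg
    return None
```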
• Stats Trigger: For example, a trigger can be generated from sports statistics information provided by sports broadcasting companies or by websites/services that provide real-time sports analytics.
• Text/Score Changes Trigger: For example, changes in text information displayed on a screen during an event (e.g., such as a score), such as "Touchdown," "Goal," etc., can trigger the software. Also, for example, text information changes can include a change in the score, such as the movement of points from 7 to 13.
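A score-change trigger of the kind described (e.g., points moving from 7 to 13) can be sketched as a comparison of successive score readings; the dict format and names here are illustrative assumptions, and the readings could come from a stats feed or from text recognized on screen:

```python
def score_trigger(prev, new, supported_team):
    """Fire when any team's score changes between two readings.

    Returns (fired, reaction), where the reaction is positive if the
    viewer's supported team is the one that scored.
    """
    for team, points in new.items():
        if points != prev.get(team):
            reaction = "positive" if team == supported_team else "negative"
            return True, reaction
    return False, None
```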
• Gesture Trigger: For example, software built into the application/API/system can be used to determine when an emotional moment has occurred based on the physical gestures or volume of those watching the event. Facial recognition software can also be used to detect facial emotion. Software that can read minute changes in skin color and pupil dilation can be used to tell when an event is happening; this means heart rate can be assessed, as well as changes in the radiation of "color" in one's face.
  • the viewer 102 may be recording his/her own bioanalytical data (e.g., heart rate, motion data, etc.) via a wearable device or smartphone, which can be utilized by the system 100 as the trigger to initiate image capture or identify the point of the photo sequence or video frame in continuous image capture that corresponds to the significant moment for image processing.
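The wearable-derived bioanalytical trigger described above might, for heart rate, be approximated as a jump over the viewer's recent baseline; the 20 bpm figure and all names are arbitrary illustrative choices:

```python
def heart_rate_trigger(history, current_bpm, jump=20):
    """Fire when the current heart rate exceeds the recent average by
    `jump` bpm, suggesting an emotional moment worth capturing."""
    if not history:
        return False
    baseline = sum(history) / len(history)
    return current_bpm - baseline >= jump
```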
  • integrated triggers in the exemplary video stream can be parsed/read to trigger the local camera/video system to capture images from all the cameras. These images can then be tagged with a unique event ID and device ID that may then be processed by software.
  • the software app once installed, has a number of ways in which it can be used to trigger the capture of images and footage, to process the captured images, and to deliver, share, and present the processed image data to various users. Exemplary features are described that help deliver the highly emotional reactions to the viewers watching an event at a home, a bar, a restaurant, a coffee shop, or other private or public setting.
• the movement of pixels in the images can be compared from one fraction of a second, or whole second, to the next to create a timeline with respect to the event being viewed.
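One simple way to realize the pixel-movement comparison above is a mean absolute difference between consecutive grayscale frames; this sketch assumes frames are flat lists of 0-255 intensities, and all names are hypothetical:

```python
def motion_score(frame_a, frame_b):
    """Mean absolute pixel difference between two equal-size grayscale frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def build_timeline(frames):
    """frames: list of (timestamp, pixels). Compare each frame to the previous
    one to build a (timestamp, motion_score) timeline of the viewing session."""
    return [(t2, motion_score(p1, p2))
            for (t1, p1), (t2, p2) in zip(frames, frames[1:])]
```

Peaks in the resulting scores would mark the livelier moments to align against the event timeline.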
  • the software app can create and present a timeline view that includes images of the viewers watching the event and images of the actions in the event itself, where a user can slide through their photos and/or video sections that correspond to the moments captured in the event (e.g., moments of the game). For example, the camera can be on at all times taking "loops" of video. If a trigger does occur, then the relevant block will be saved, stored, processed (with metadata) and sent to a user's app. The images can be pulled out of specific sections, which can be edited as requested by the user. For example, a user can use the software or associated mobile/web/app to interface with their existing social media platforms, email, or any other places that images can be saved.
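The "loops" of video described above amount to a bounded sliding buffer: frames are recorded continuously, the oldest fall off the back, and a trigger freezes the current block for saving and processing. A minimal sketch (class and method names are hypothetical):

```python
from collections import deque

class LoopRecorder:
    """Continuously record frames, keeping only the most recent `capacity`;
    a trigger freezes a copy of the buffered block for processing/delivery."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add_frame(self, frame):
        self.buffer.append(frame)  # oldest frame is dropped when full

    def on_trigger(self):
        return list(self.buffer)   # snapshot of the relevant block
```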
• the exemplary software can provide an overlay of contextual info that tells the user a combination of what the event was, the score, who was playing, and the location of the photo. For example, events can be presented in a timeline for each game, episode, etc.
  • Metadata can be associated with the processed images to provide context to the moment being captured.
  • the metadata can include, but is not limited to, a game score, the teams playing in a sporting event, the players playing at the event or the players involved with the significant moment, the time the moment occurred, the location of the event, a description of the event, and any other data associated with the event or people, places, or things involved.
• system data can be associated with the processed images, e.g., such as location data and time data of where and when the viewers are viewing the event, names of other viewers at the home, bar, etc. viewing the event with the user.
  • the metadata can also include user metadata, e.g., including demographic data, online social network data, usage data, advertisement data including user-targeted advertising and user-engagement of advertisement data, location information, or other user type data.
  • the metadata can be processed with the captured image data to be synchronized using any appropriate synchronization technique, e.g., such as by timestamp matching of images, or by assigning unique codes to each image or group of images.
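Timestamp matching, one of the synchronization techniques named above, can be sketched as nearest-neighbor matching within a tolerance; the names and the one-second tolerance are illustrative assumptions:

```python
def attach_metadata(images, events, tolerance=1.0):
    """Attach to each captured image the event metadata whose timestamp is
    closest, provided it lies within `tolerance` seconds.
    images: list of (timestamp, image_id); events: list of (timestamp, meta)."""
    tagged = []
    for ts, img in images:
        best = min(events, key=lambda e: abs(e[0] - ts), default=None)
        meta = best[1] if best and abs(best[0] - ts) <= tolerance else None
        tagged.append((img, meta))
    return tagged
```

The alternative named in the text, assigning unique codes to each image or group of images, would replace the timestamp comparison with a lookup on a shared code.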
  • the disclosed systems for real-time image and video capturing, processing, and delivery of viewers viewing a content stream of an event at a small or large gathering place can be implemented by the following methods.
• the exemplary software app can be downloaded onto the system 100, including onto the device 103, the display device 101, and/or user devices.
  • the software app may already be pre-installed on the device 103, display device 101, and/or the user's device.
  • a camera can be added to the device 103 if one does not already exist.
  • the user activates the software app of the system 100 and a unique identifier is generated for the system 100 based on the time of activation, location of the device 103, and/or event to be streamed on the display device 101.
  • Viewers at the gathering can individually establish their attendance of the event viewing using the software app of the system 100 directly, e.g., via the device 103 and/or via the viewers' mobile devices.
  • the viewers' attendance may be established using GPS and/or device proximity (e.g., proximity to the device 103).
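Establishing attendance by GPS and device proximity, as described above, can be checked with a haversine great-circle distance; the 50-meter radius is an arbitrary illustrative value and all names are hypothetical:

```python
import math

def within_range(lat1, lon1, lat2, lon2, meters=50):
    """True if two GPS coordinates are within `meters` of each other,
    e.g., a viewer's phone and the device 103 at the gathering place."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= meters
```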
  • the software app of the system 100 can control operation of the device 103 to constantly record images up to a fixed length of time into the past (e.g., beyond this, photos or video can be deleted).
• When the software app of the system 100 receives an event trigger, the recorded images are retrieved and pushed to the computing system or computer in the cloud for storage.
• Image processing can be performed by the device 103 or by the computing system (e.g., a server in the cloud), e.g., applying the last editing parameters (e.g., cropping) and the last recall (e.g., facial recognition and motion detection) to produce processed images (e.g., edits).
  • the image processing can include viewer recognition processing of objects in the captured photos or video to determine the number of viewers 102 viewing the event on the display device 101.
  • the viewer recognition processing can include analyzing pixel data in the captured images to determine shapes and features indicative of human faces and/or bodies.
  • the viewer recognition processing can include facial recognition techniques to identify each individual viewer, i.e., identify the viewer's unique identity.
  • the image processing techniques can include utilizing the viewer recognition data to provide real-time targeted advertising to the identified viewers 102 based on the facial recognition processing.
  • the image processing techniques can include using the viewer recognition data to provide real-time targeted advertising to the group of viewers viewing the event, e.g., in some examples based solely on the number of viewers gathered.
  • the targeted advertising can include pushing selected advertisements or promotions from specific vendors for products or services related to the number of viewers, location of the viewers, and event being viewed (e.g., type, time, date, etc.).
• For example, if the viewer recognition data includes three or more viewers, then the selected advertising for one or more of these specific viewers could include a pizza advertisement or promotion for pizza & delivery specials during the event at the local gathering for viewing the event.
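Selecting an advertisement from the viewer count, as in the pizza example above, can be reduced to picking the most specific offer whose minimum group size is met; the data and names here are illustrative:

```python
def select_ad(num_viewers, ads):
    """ads: list of (min_viewers, ad). Return the ad with the largest
    qualifying minimum group size, or None if no ad qualifies."""
    eligible = [(m, ad) for m, ad in ads if num_viewers >= m]
    return max(eligible)[1] if eligible else None
```

A production selector would also weigh location, event type, and time, per the targeted-advertising factors listed above.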
  • stored images on the server in the cloud can be delivered from the cloud to the device 103 running an app in real-time, e.g., immediately after the significant moment occurred for which the viewer 102 was imaged.
  • stored images on the server in the cloud can be delivered from the cloud, e.g., during commercial breaks, to the device 103 running an app for latent review on the device 101 at a convenient or non-interruptive time.
  • Images and other data can be used for targeted advertising. For example, on a mobile device an advertiser can sponsor the content.
• the content captured (e.g., photos or video) of the viewer(s) can have associated advertising content, images, video, or messages associated with it, e.g., such as merging the content, showing it simultaneously, overlaying it, and/or running one after another. Data on the captured event viewer(s) and/or the user viewing this content can be used to identify the associated advertisement content, e.g., such as: User A or a group is captured and their demographic and interest data is known; the associated moment captured of the individual(s) is one that causes a negative reaction; this is then associated with advertisements contextual to this moment and to data on the user being captured; and the data on the viewer of the created content also determines specifically which content is displayed to him/her.
  • the viewer being captured can identify which team or performer he/she is supporting (e.g., to identify a positive or negative reaction to the event).
  • Information about the event such as which team scored also identifies a positive/negative reaction.
• the user can calibrate the image capture focus on an area, e.g., such as a couch, seating area, or bar, etc., to focus the camera lens, or set an automatic cropping area, to produce the desired images.
• this can be performed automatically by the camera 105 detecting motion of the viewer(s) and applying cropping or camera focus on this area/these areas.
  • this can be performed manually by use of software controls of the camera 105 via the software app operated by the user on the device 103 or the user's device (e.g., smartphone, tablet, smartwatch, smartglasses, laptop, etc.) to adjust the focus while viewing the adjustment on the display from the app.
  • raw photographs and/or video can be captured and stored in the cloud, as well as stored locally on the capturing device 103 or other device that can be connected with the camera 105, e.g., such as computer, console, TV, external memory device, etc.
  • the user(s) is/are notified (e.g., via a message delivered on the user's mobile device) to retrieve the captured image(s) associated with the triggered event.
  • the users can retrieve the captured image(s) associated with the triggered event directly on the display device 101 (e.g., console, computer, TV, etc.).
  • the user(s) can perform image processing (e.g., including cropping) of the retrieved image(s). User(s) can then share or perform other processes with their processed images (e.g., such as sharing on a social media site). Additionally or alternatively, for example, the system 100 can edit the captured raw images (e.g., locally on the device 103 and/or on the computer system in the cloud via upload straight from capturing or associated device) and display on the display device 101.
  • FIG. 6 shows an example of a communication network including computers (e.g., servers 612, 614) for implementing the image capturing, processing, and delivery service system 100 that communicates with remote devices (e.g., the device 103) over a network 610 (e.g., the Internet).
  • the servers 612, 614 in the network 610 can be operated by a commercial entity to provide the imaging capture, processing, and/or delivery service for real-time image capturing, processing, and distribution of 'in-the-moment' images of the viewer 102 during the event being viewed (e.g., or live event being attended) at a home, bar, restaurant, ballroom, or other public or private venue.
  • the servers 612, 614 can be configured to perform the processes 230 and/or 240 of the method 200 upon receiving partially processed or raw images (e.g., photos and/or video) from the event-viewing imager and processing device 103, e.g. in which the partially processed or raw images are provided to the servers 612, 614 over the network 610.
  • the servers 612, 614 can include software modules to perform various techniques of the process 230 and/or 240, which can also reside on the devices 103, or on the user devices via the software app resident on the end user devices.
  • the system 100 can be implemented in a private or public setting where a live event is taking place, e.g., such as a wedding, a party, a dance club or other night club, or outside venue like a BBQ, etc.
• the system 100 can be configured in a portable configuration, e.g., where one or more devices 103 can be placed at appropriate locations at the live event venue, similar to that shown in FIG. 3A (e.g., the device 103 is embodied in a user's mobile device), FIG. 4A (e.g., the device 103 is embodied in a multi-camera imager unit), FIG. 4B (e.g., the device 103 is embodied in a user-interactive imager unit), or FIG. 5 (e.g., the device 103 is embodied in an event-related item or location-related decor or furnishing).
  • the system 100 can be triggered manually or based on certain stimuli at the event, e.g., including, but not limited to, voice recognition of certain words or phrases, certain lighting, certain sounds or music, etc.
• an imaging service system includes an imaging unit arranged at a place including a home or a public or private place of gathering, where the place includes one or more display devices to present visual and/or audio content, the imaging unit including one or more cameras arranged to capture images of one or more viewers at the place to view an event on the one or more display devices, in which the images include photos or video, a data processing unit in communication with the one or more cameras, the data processing unit including a processor, a memory, and a wireless transmitter and receiver, the data processing unit configured to at least partially process the captured images and transmit the images to another device, and a trigger module in communication with one or both of the data processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, in which the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the data processing unit to identify a captured photo or video frame among a continuous sequence of the photos or the video to be associated with the occurrence.
  • Example 2 includes the system as in example 1, in which the public place of gathering includes a bar, a pub, a restaurant, or an outdoor display screen.
• Example 3 includes the system as in example 1, in which the one or more computers are operable to distribute the processed images to the one or more viewers using wireless communication to a mobile device of a viewer of the one or more viewers.
  • Example 4 includes the system as in example 3, in which the one or more computers are operable to provide an interactive software application on the mobile device, in which the software application is configured to present the processed images to the viewer.
  • Example 5 includes the system as in example 4, in which the one or more computers are configured to process the images including selecting an advertisement to be presented with the processed images to the viewer via the software application.
• Example 6 includes the system as in example 1, in which the one or more computers are operable to send the processed images to a social network site.
  • Example 7 includes the system as in example 1, in which the one or more computers are operable to provide the processed images for purchase by the one or more viewers.
  • Example 8 includes the system as in example 1, in which the trigger module includes a sensor to detect at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
  • Example 9 includes the system as in example 8, in which the trigger module is operable to detect a voice command by a viewer to cause the initiation of the capture of the images or to cause the identification of the captured photo or video frame to be associated with the occurrence.
  • Example 10 includes the system as in example 1, in which the imaging unit is in communication with the one or more display devices, and the trigger module includes a signal receiver to receive a signal encoded in the presented content from the one or more display devices to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
• Example 11 includes the system as in example 1, in which the trigger module includes a signal receiver to receive a signal provided by the one or more computers to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
  • Example 12 includes the system as in example 1, in which the one or more cameras are operable to continuously capture the images of the one or more viewers during the viewing of the event.
  • Example 13 includes the system as in example 12, in which the one or more cameras are configured to capture a temporal series of photos or continuous video of the one or more viewers for a predetermined duration of time before and after the generation of the trigger by the trigger module.
  • Example 14 includes the system as in example 13, in which the temporal series of photos or the continuous video is stored in a sliding buffer of the memory of the data processing unit, in which the sliding buffer is configured to store a predetermined amount of recently captured temporal series of photos or continuous video.
  • Example 15 includes the system as in example 12, in which the data processing unit of the imaging unit is configured to perform facial recognition analysis of the continuously captured images to determine one or more facial expressions of the one or more viewers.
• Example 16 includes the system as in example 15, in which the data processing unit is configured to identify the captured photo or video frame among the continuous sequence of the photos or the video to be associated with the occurrence based on a particular facial expression.
  • Example 17 includes the system as in example 15, in which one or both of the data processing unit of the imaging unit and the one or more computers are configured to determine a number of viewers at the place, and to process the images including selecting an advertisement based on the number of viewers to be presented with the processed images.
  • Example 18 includes the system as in example 17, in which the one or more computers are operable to provide an interactive software application on the mobile device, in which the software application is configured to present the processed images to the viewer with the selected advertisement.
  • Example 19 includes the system as in example 1, in which one or both of the data processing unit of the imaging unit and the one or more computers are configured to process the images including attaching metadata to the processed images.
  • Example 20 includes the system as in example 19, in which the processed images include links to external websites.
• Example 21 includes the system as in example 19, in which the metadata includes data associated with the event for viewing, data associated with the one or more viewers, and/or data associated with the place.
  • Example 22 includes the system as in example 21, in which the event includes a sporting event, and the metadata associated with the event includes a score, team or player playing in the sporting event, time the occurrence occurred, location of the sporting event, or a description of the event.
  • Example 23 includes the system as in example 21, in which the metadata associated with the user includes demographic data, online social network data, usage data, or location information.
  • Example 24 includes the system as in example 1, in which the one or more display devices include a television, a computer, a mobile device including a tablet, smartphone, smartglasses, or smartwatch, a gaming console, or a radio.
  • Example 25 includes the system as in example 1, in which the imaging unit is included as part of the one or more display devices.
• an imaging service device includes one or more cameras arranged to capture images of one or more viewers at a place to view an event on one or more display devices, in which the images include photos or video; an image processing unit to process the captured images to produce processed images, in which the image processing unit includes a processor, a memory and a wireless transmitter and receiver to at least partially process the captured images and transmit the images to another device; and a trigger module in communication with one or both of the image processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, in which the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the image processing unit to identify a captured photo or video frame among a continuous sequence of the photos or the video to be associated with the occurrence.
• the trigger module includes a sensor to detect at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or visual and/or audio content from the one or more display devices.
  • the image processing unit is configured to be in communication with at least one of the one or more display devices or a user device of the one or more viewers to present the processed images on a display screen to the one or more viewers in real-time with respect to the occurrence during their viewing of the event.
  • Example 27 includes the device as in example 26, in which the trigger module includes a signal receiver to receive a signal encoded in the presented content from the one or more display devices to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
  • Example 28 includes the device as in example 26, in which the trigger module includes a signal receiver to receive a signal provided by a computer in communication with the imaging service device over a communication network to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
  • Example 29 includes the device as in example 26, in which the one or more cameras are configured to capture a temporal series of photos or continuous video of the one or more viewers for a predetermined duration of time before and after the generation of the trigger by the trigger module, and in which the temporal series of photos or the continuous video is stored in a sliding buffer of the memory of the image processing unit, in which the sliding buffer is configured to store a predetermined amount of recently captured temporal series of photos or continuous video.
  • Example 30 includes the device as in example 29, in which the image processing unit is configured to perform recognition analysis of objects in the captured temporal series of photos or continuous video to determine facial or body features or expressions of the one or more viewers.
  • Example 31 includes the device as in example 30, in which the image processing unit is configured to identify the captured photo or video frame among the continuous sequence of the photos or the video to be associated with the occurrence based on a particular facial or body expression.
  • Example 32 includes the device as in example 30, in which the image processing unit is configured to determine a number of viewers at the place, and to process the images including selecting an advertisement based on the number of viewers to be presented with the processed images.
  • Example 33 includes the device as in example 26, in which the image processing unit is configured to be in communication with one or more computers on a network via the Internet to transmit the images from the imaging service device to the one or more computers for further processing or distribution of the processed images.
  • Example 34 includes the device as in example 26, in which the display device includes a television, a computer, a mobile device including a tablet, smartphone, smartglasses, or smartwatch, a gaming console, or a radio.
  • a method for providing images of viewers viewing an event remotely from the event venue includes capturing, using one or more cameras arranged at a place to view an event on a display device, images including a sequence of photos and/or video of one or more viewers at locations in the place, in which the capturing is initiated responsive to a triggering signal received during the viewing of the event, or in which the capturing includes continuously capturing the images of the one or more viewers during the viewing of the event; processing, using a data processing unit in communication with the one or more cameras, the images to produce processed images, in which the processed images include images of the reaction by the one or more viewers to the occurrence of the event; and distributing the processed images to a viewer of the one or more viewers.
  • Example 36 includes the method as in example 35, in which the place includes a home or a public or private place of gathering.
  • Example 37 includes the method as in example 36, in which the public place of gathering includes a bar, a pub, a restaurant, or an outdoor display screen.
  • Example 38 includes the method as in example 35, in which the processing the images includes: mapping the locations to a grid corresponding to predetermined positions associated with the place; determining an image space containing an individual at a particular location in the mapped locations based on the coordinates; and generating the processed image based on the determined image space.
  • Example 39 includes the method as in examples 35 or 38, in which the processing the images includes: assigning metadata with the processed image, the metadata including information associated with one or more of the event, the place, or the individual in the processed image.
  • Example 40 includes the method as in example 35, further including: capturing a sequence of reference images of the place including location areas corresponding to physical locations of the place; assigning a reference label to each reference image of the sequence of reference images; forming a reference image coordinate space in each of the reference images, the forming the reference image coordinate space including a mapping of the location areas; and generating image template data for each of the image location areas associated with each of the reference images, the image template data based on at least a portion of the reference image coordinate space that is substantially centered on the image location area.
  • Example 41 includes the method as in example 40, in which the processing the images includes: assigning an image label to the captured images of the one or more viewers at the place viewing the event, the image label including information corresponding to the reference label; obtaining the image template data of the corresponding reference image for the image based on the image label; and producing the processed image for each of the mapped image location areas, the processed image including image properties corresponding to the image template data.
  • Example 42 includes the method as in example 41, in which the image label includes a code corresponding to one or more of the event, the camera, the occurrence including temporal information, or a sequence number of the image.
  • Example 43 includes the method as in example 35, in which the distributing the processed images includes: transmitting the processed images to the viewer using a wireless communication link to a mobile device of the viewer operating an interactive software application on the mobile device; and presenting the processed images using a display of the mobile device via the software application.
  • Example 44 includes the method as in example 35, in which the triggering signal includes at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content to cause the initiation of the capture of the images.
  • Example 45 includes the method as in example 44, in which the trigger signal includes a voice command by a viewer.
  • Example 46 includes the method as in example 35, further including: detecting a triggering signal including at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content; and processing the trigger signal to select an image among the sequence of photos or video to identify the occurrence, in which the processed images include a series of images of the one or more viewers before, during, and after the occurrence.
  • Example 47 includes the method as in example 46, in which the trigger signal includes a voice command by a viewer.
  • Example 48 includes the method as in example 35, further including: presenting the processed images on a display screen of the one or more display devices or a user device of the one or more viewers to the one or more viewers in real-time with respect to the occurrence during their viewing of the event.
  • Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine- readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • data processing apparatus encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
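The location-mapping and template operations recited in examples 38-41 above can be sketched as follows. This is an illustrative sketch only: the grid cell size, crop dimensions, and all names (`ReferenceTemplate`, `map_to_grid`, `crop_for_viewer`) are assumptions rather than details taken from the patent.

```python
# Illustrative sketch of the grid mapping and template crop of examples 38-41.
# Grid geometry, crop size, and all names are assumptions.

from dataclasses import dataclass

@dataclass
class ReferenceTemplate:
    label: str                 # reference label for this location area
    center: tuple              # (x, y) pixel center of the location area
    size: tuple = (320, 240)   # crop width/height centered on the area

def map_to_grid(location, cell_w=160, cell_h=120):
    """Map a pixel location to a (row, col) grid cell for the place."""
    x, y = location
    return (y // cell_h, x // cell_w)

def crop_for_viewer(frame, template):
    """Produce the processed image: a crop substantially centered on the
    viewer's location area, per the stored image template data."""
    cx, cy = template.center
    w, h = template.size
    x0, y0 = max(cx - w // 2, 0), max(cy - h // 2, 0)
    return [row[x0:x0 + w] for row in frame[y0:y0 + h]]
```

Under this sketch, a viewer's pixel location selects a grid cell, and the stored template centered on that cell's location area yields the cropped, per-viewer processed image.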

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Methods, systems, and devices are disclosed for providing in-the-moment photos or video of individuals viewing an event. In one aspect, an imaging service system includes an imaging unit including one or more cameras arranged to capture images of one or more viewers viewing presented content of an event on a content display device, a trigger module in communication with the one or more cameras to initiate the capture of the images based on an occurrence of the event, in which the captured images display a reaction by the one or more viewers to the occurrence of the event, and a processing unit including a memory unit and a processor configured to process and store the captured images. The system includes one or more computers in communication with the imaging unit to receive the images from the imaging unit and process the images to form processed images.

Description

REAL-TIME IMAGING SYSTEMS AND METHODS FOR CAPTURING IN-THE-MOMENT IMAGES OF USERS VIEWING AN EVENT IN A HOME OR LOCAL ENVIRONMENT
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This patent document claims the priority and benefits of U.S. Provisional Application No. 61/937,455 entitled "REAL-TIME IMAGING AND COMMUNICATIONS SYSTEMS AND METHODS FOR IMAGING EVENT CONTENT VIEWERS," filed February 7, 2014, of which the entire disclosure is incorporated herein by reference for all purposes.
TECHNICAL FIELD
[0002] This patent document relates to systems, devices, and processes for image capture, processing and communications to various users.
BACKGROUND
[0003] Group events like sporting events or concerts typically bring large crowds of people to event venues for watching the event live. Such events are often televised and enjoyed by smaller groups or individuals in the comfort of their homes or at smaller, local gatherings (e.g., such as pubs, bars, and restaurants). During various events, particularly for sports and concerts, the reactions of individuals watching the live or televised performances can be highly animated. A photograph taken of the spectators watching and enjoying the event may provide them with pleasant memories of the event.
[0004] Photos are becoming more commonly shared through social media using online social networks and users connected via devices. An online social network is an online service, platform, or site that focuses on social networks and relations between individuals, groups, organizations, etc., that forms a social structure determined by their interactions, e.g., which can include shared interests, activities, backgrounds, or real-life connections. A social network service can include a representation of each user (e.g., as a user profile), social links, and a variety of additional services. For example, user profiles can include photos, lists of interests, contact information, and other personal information. Online social network services are web-based and provide means for users to interact over the Internet, e.g., such as private or public messaging, e-mail, instant messaging, etc. Social networking sites allow users to share photos, ideas, activities, events, and interests within their individual networks.
SUMMARY
[0005] Techniques, systems, and devices are disclosed for real-time capturing, processing, and delivery of images and video of viewers viewing a content stream of an event (e.g., such as a televised sporting event, concert, etc.) at home or a small-group gathering in a private or public place (e.g., such as a bar, pub, restaurant, outdoor screen, etc.).
[0006] In one aspect, an imaging service system includes an imaging unit arranged at a place including a home or a public or private place of gathering, where the place includes one or more display devices to present visual and/or audio content, the imaging unit including one or more cameras arranged to capture images of one or more viewers at the place to view an event on the one or more display devices, in which the images include photos or video, a data processing unit in communication with the one or more cameras, the data processing unit including a processor, a memory, and a wireless transmitter and receiver, the data processing unit configured to at least partially process the captured images and transmit the images to another device, and a trigger module in communication with one or both of the data processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, in which the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the data processing unit to identify a captured photo or video frame among a sequence of the photos or the video to be associated with the occurrence; and the imaging service system includes one or more computers in communication with the imaging unit to receive the captured images from the imaging unit and to process the images to produce processed images, in which the processed images include images of the reaction by the one or more viewers to the occurrence of the event.
[0007] In one aspect, an imaging service device includes one or more cameras arranged to capture images of one or more viewers at a place to view an event on one or more display devices, in which the images include photos or video; an image processing unit to process the captured images to produce processed images, in which the image processing unit includes a processor, a memory, and a wireless transmitter and receiver to at least partially process the captured images and transmit the images to another device; and a trigger module in
communication with one or both of the image processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, in which the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the image processing unit to identify a captured photo or video frame among a continuous sequence of the photos or the video to be associated with the occurrence. The trigger module includes a sensor to detect at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or visual and/or audio content from the one or more display devices. The image processing unit is configured to be in communication with at least one of the one or more display devices or a user device of the one or more viewers to present the processed images on a display screen to the one or more viewers in real-time with respect to the occurrence during their viewing of the event.
[0008] In one aspect, a method for providing images of viewers viewing an event remotely from the event venue includes capturing, using one or more cameras arranged at a place to view an event on a display device, images including a sequence of photos and/or video of one or more viewers at locations in the place, in which the capturing is initiated responsive to a triggering signal received during the viewing of the event, or in which the capturing includes continuously capturing the images of the one or more viewers during the viewing of the event; processing, using a data processing unit in communication with the one or more cameras, the images to produce processed images, in which the processed images include images of the reaction by the one or more viewers to the occurrence of the event; and distributing the processed images to a viewer of the one or more viewers.
[0009] The subject matter described in this patent document and attached appendices can be implemented in specific ways that provide one or more of the following features. In some aspects, for example, imaging devices are embedded in, connected to, or otherwise associated with a video/audio display device (e.g., such as a TV or computer) that is displaying live video content of an event and configured to image the viewers during key moments (e.g., emotional reaction moments) of the event, such as a goal scored during a sporting event. Metadata can be added to the captured content that is associated with the event and the moment, and the images are stored and then shared via applications, social networks, email, etc. For example, in one exemplary embodiment, an imaging service system includes an imaging unit arranged proximate a content display device to capture images or video of viewers of the content display device, and one or more computers in communication with the imaging unit to process the images or video. The imaging unit includes one or more cameras arranged to capture images or video of one or more viewers viewing presented content of an event on the content display device, a trigger module in communication with the one or more cameras to initiate the capture of the images or video based on an occurrence of the event, in which the captured images or video display a reaction by the one or more viewers to the occurrence of the event, and a processing unit including a memory unit and a processor configured to process and store the captured images or video. The one or more computers are configured to receive the images or video from the imaging unit and process the images or video to form processed images or video.
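As a rough illustration of the trigger-to-distribution flow summarized above, the following sketch wires together a capture callback, per-image metadata tagging, and a distribution hook. All names and the specific metadata fields are illustrative assumptions, not the claimed implementation.

```python
# Minimal sketch of the capture -> process -> distribute flow described in
# the summary above. All names and metadata fields are illustrative
# assumptions, not the patent's implementation.

import time

def on_trigger(camera_capture, viewers, event_name, distribute):
    """Called when the trigger module signals a key occurrence of the event."""
    frames = camera_capture()            # capture images of the viewers
    processed = []
    for frame in frames:
        processed.append({
            "image": frame,
            "metadata": {                # metadata associated with the moment
                "event": event_name,
                "timestamp": time.time(),
                "viewers": viewers,
            },
        })
    distribute(processed)                # e.g., push to the viewers' mobile app
    return processed
```

In a real system the capture callback would front one or more cameras and the distribution hook would hand off to apps, social networks, or email, per the summary.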
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] FIG. 1A shows a diagram of an exemplary integrated viewer reaction-capture system of the disclosed technology.
[0011] FIG. 1B shows a block diagram of an exemplary event-viewing imager device of the disclosed technology.
[0012] FIG. 2 shows a diagram of an exemplary method to capture, process, and deliver images of viewers of a broadcasted event using an integrated consumer reaction-capture system of the disclosed technology.
[0013] FIGS. 3A and 3B show illustrative diagrams of an exemplary embodiment of an integrated viewer reaction-capture system of the disclosed technology.
[0014] FIGS. 4A and 4B show illustrative diagrams of other exemplary embodiments of an integrated viewer reaction-capture system of the disclosed technology.
[0015] FIG. 5 shows an illustrative diagram of another exemplary embodiment of an integrated viewer reaction-capture system of the disclosed technology.
[0016] FIG. 6 shows an example of a communication network for implementing an image capturing, processing, and delivery service system of the disclosed technology.
DETAILED DESCRIPTION
[0017] One of the easiest forms of communication is through photos or video. Photos and video capture and convey special moments, and sharing them is a way to show others that moment. Also, shared images and video content form the core of an interactive social media network.
[0018] During various group events, particularly large group events including sports or concerts, the reactions of individuals watching the live performances are highly animated. A photograph of these situations provides a unique and yet highly beneficial and desired memento or keepsake for a viewer or spectator, especially if the image can be captured at a precise moment, tailored to remind the spectator of that specific moment, and easily and rapidly obtained. However, to achieve this, there are many technical difficulties. For example, some main issues or difficulties include capturing the image or images in a short period of time and at just the right moment, capturing the image or images in focus of the individual spectator and/or group of spectators in the context of the moment, and preparing the captured image or images so they can be easily and rapidly accessed, e.g., such as delivering the image or images directly to the user and/or integrating the image content and/or the image or images into a social network, e.g., particularly a social network with a series of specific mechanisms with a unique interface. Also for example, some main issues or difficulties include distributing this image content to other viewers watching the event shortly after the image content has been captured.
[0019] Systems, devices, and methods are disclosed for real-time capturing, processing, and delivery of images of viewers viewing a content stream of an event (e.g., such as a televised sporting event, concert, etc.) at home or a small-group gathering in a private or public place (e.g., such as a bar, pub, restaurant, outdoor display screens, etc.). Images captured, processed, delivered, and/or displayed using the disclosed technology can include still photos, video, or a mixture of the two. The disclosed technology includes a platform to capture photos and video of the individual viewers watching the event and to process and distribute the captured photos and/or video to the users of the platform. The disclosed technology can provide the 'in-the-moment' images (e.g., photos and/or video) to the users while they continue to watch the event, e.g., including immediately after the special 'moment' occurred, which allows the users to share their reactions captured in the images through social media applications.
[0020] For example, a series of photos or video of viewers viewing an event on a display in a remote private or public setting that includes an exemplary image capturing, processing, and delivery system of the disclosed technology can be taken and made available rapidly (e.g., in real-time, during the event), providing a virtual layout of the individuals in the gathering at the private or public setting, e.g., such as a viewer's home, or a bar or restaurant. When shared, the photos or video show images of users enjoying themselves, which is an entirely new medium through which fans and advertisers/brands can interact with one another.
[0021] In some aspects, imaging devices are embedded in, connected to, or otherwise associated with a video/audio display device (e.g., such as a TV, computer, tablet, smartphone, etc.) that is displaying live video content of an event and configured to image the viewers during key moments (e.g., emotional reaction moments) of the event, such as a goal scored during a sporting event. For example, modern consumer devices such as televisions, game consoles, computers, and mobile devices like smartphones, tablets, and wearable devices can employ image capture, processing, and communication devices of the disclosed technology to capture images (e.g., photos and/or video) of users while they are viewing the video content of the event. In some examples, wired or wirelessly connected imaging units can be directly interfaced with a tablet or laptop to capture images of a user. Built-in imaging units in televisions (e.g., smart TVs) can also be used for image capture of users during the content viewing.
[0022] The disclosed technology includes systems for image capture, processing and delivery of images from still photo and/or video camera devices that attach or interact directly or indirectly to a data processing device, an image capture trigger unit, and a content display console, e.g., such as a television, computer, mobile device, radio, etc. In some implementations, for example, the disclosed technology uses existing camera devices that are embedded in, connected to, or associated with video/audio content display devices (e.g., TVs) displaying live video content of an event to capture the reaction moments of the viewers and deliver processed image content to them in real-time. For example, while the video/audio content is being broadcast or streamed to the display device, the camera system is active so it can capture the viewers' reactions to the content being displayed. For example, the broadcast or streamed content can include content transmitted from a single transmitter to multiple receiving units or a single receiving unit, or can include content stored on a device and presented for display on the same or another device. The camera devices can be initiated to capture images by the trigger during a significant moment in the event (e.g., an event that evokes an emotional reaction from the audience), or the camera devices can continuously capture video and/or photos of the viewer(s), in which the trigger is used to identify the timing of a particular video sequence or image set to be isolated and used (for delivery). Metadata can be added to each piece of captured content and is associated with the event. The images can be stored locally and/or in the cloud, and the processed photos/video can be shared via a software application ('app') on a user's mobile communication device (e.g., smartphone, tablet, smartwatch, smartglasses, etc.) associated with the image capture and processing system, via social networks and associated social media apps, via email, via messaging, and/or by displaying on user devices, etc.
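The continuous-capture mode described in paragraph [0022] (and in example 29 above) can be sketched with a bounded sliding buffer from which a trigger isolates the frames around the occurrence. The buffer depth, window sizes, and the example-42-style label format here are illustrative assumptions.

```python
# Sketch of sliding-buffer capture: recent frames are kept in a bounded
# buffer, and a trigger isolates the frames around the occurrence.
# Buffer depth, window sizes, and the label format are assumptions.

from collections import deque

class SlidingFrameBuffer:
    def __init__(self, depth=300):             # e.g., ~10 s at 30 fps
        self.frames = deque(maxlen=depth)      # oldest frames drop off

    def push(self, seq, frame):
        self.frames.append((seq, frame))

    def on_trigger(self, trigger_seq, before=30, after=30):
        """Isolate frames with sequence numbers in
        [trigger_seq - before, trigger_seq + after]."""
        return [(s, f) for s, f in self.frames
                if trigger_seq - before <= s <= trigger_seq + after]

def image_label(event, camera, trigger_seq, seq):
    """Example-42-style label: event, camera, occurrence timing, sequence."""
    return f"{event}-{camera}-{trigger_seq}-{seq:06d}"
```

Each isolated frame could then be tagged with a label of this form and metadata about the event before storage or distribution.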
[0023] For example, the images can be provided to the individual users captured in any given photo or video using the software application associated with the image capturing, processing, and delivery technology. The software app can reside on a user device. Such images of the individual users can be saved to the respective users' accounts with the software app, and be available for viewing, sharing, and other user-desired functions on the application. The user can use the software app to provide the images to an online social network. For example, the software app can operate functions of the user's mobile device where the software app resides to communicate with the particular social network and obtain a token issued by the social network that can be utilized to access a portion of the user's online social networking profile, e.g., via an application programming interface provided by the online social network, from which the user can receive a request to share the photo and/or video on the particular social network, and thereby generate a 'post' or other sharing notification on the social network using the token and the processed image.
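The token-based sharing flow of paragraph [0023] might look roughly like the following. The `SocialNetwork` interface shown here (`issue_token`, `confirm_share`, `create_post`) is a hypothetical stand-in; real networks expose this through their own OAuth-style APIs, which the patent does not name.

```python
# Sketch of the sharing flow in paragraph [0023]: the app obtains a token
# scoped to part of the user's profile, asks the user to confirm, then
# posts the processed image. The network interface is hypothetical.

def share_image(network, user, image, caption):
    token = network.issue_token(user, scope="post")   # token from the network
    if not network.confirm_share(user, image):        # user must agree first
        return None
    return network.create_post(token, image, caption)

class FakeNetwork:
    """Stand-in network used only to exercise the flow above."""
    def issue_token(self, user, scope):
        return "token:%s:%s" % (user, scope)
    def confirm_share(self, user, image):
        return True
    def create_post(self, token, image, caption):
        return {"token": token, "image": image, "caption": caption}
```

The key design point from the paragraph is that the app never posts on its own authority: it acts only through a token issued by the network and an explicit share request confirmed by the user.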
[0024] FIG. 1 A shows a diagram of an exemplary integrated viewer reaction-capture system 100 of the disclosed technology. As shown in the example scenario shown in the diagram of FIG. 1A, an event is displayed to a viewer 102 (e.g., which can be multiple viewers or a single viewer in a home or other private or public venue) using a display device that displays video and/or audio content 101, e.g., which can include but is not limited to a computer, television, tablet, mobile device, or wearable screen such as smartglasses, smartwatch, or other device that can display video content, or device that can solely produce audio content such as a radio. For example, the viewer 102 can be situated at home, a bar, or anywhere remotely from the event where the event is being broadcast or streamed. When a significant moment, e.g., such as a reaction-invoking or emotional moment, occurs in the event and is displayed on the display of the device 101, images of the viewer 102 are captured by a camera 105 of the system 100 that is operated by an event- viewing imager and processing device 103 of the system 100. The event- viewing imager and processing device 103, shown in FIG. IB, includes a data processing unit and data communications unit, and is in data communication with the camera 105. The event- viewing imager and processing device 103 is configured to control image capturing of the viewer
102 by the camera 105 and to process and/or store the captured imaging data. The system 100 includes a trigger unit 104 to generate trigger data corresponding to the significant moment. In some embodiments, for example, the trigger unit 104 can include a sensor to detect a stimulus associated with the significant moment of the event to generate the trigger data. For example, the trigger data can include a signal produced by the sensor that has a distinguishing feature, e.g., such as a baseline electrical signal with a signal spike corresponding to the detection of the significant moment. The trigger data can be used by the device 103 to initiate the capture of images via the camera 105, or to identify an image (e.g., a photo or video frame) in a series of continuously captured photos or video. For example, the stimulus that the trigger unit 104 detects can include a sound stimulus (e.g., of a particular volume or frequency), a visual stimulus (e.g., rapid acceleration of movements produced by the viewer 102, or facial expressions by the viewer 102), mechanical perturbations (e.g., clapping, stomping, etc.), voice control by the viewer 102 (e.g., such as predetermined words or phrases), among other stimuli.
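The baseline-plus-spike trigger signal described above can be reduced to a simple threshold test over the sensor samples; the sketch below is illustrative, and the numeric values are assumptions rather than values from the disclosure.

```python
def spike_indices(samples, baseline, threshold):
    """Return the sample indices where the sensor signal rises above the
    baseline by more than the threshold, i.e., candidate trigger moments
    corresponding to a significant moment in the viewed event."""
    return [i for i, s in enumerate(samples) if s - baseline > threshold]
```

A detected index can then be used either to start image capture or to locate the significant frame within continuously captured photos or video.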
[0025] FIG. 1B shows a block diagram of the event-viewing imager and processing device 103. The device 103 can include a power source 115, which can include a battery, such as a rechargeable battery, and/or a converter to convert AC electrical power into DC when the device
103 is plugged into an electrical outlet in the home or private or public viewing location. The device 103 includes a data processing and communications unit 113 to process and store the captured images in real-time, and/or transmit the raw and/or processed images to one or more external devices, e.g., such as one or more centralized computer systems in a communication network accessible via the Internet (referred to as 'the cloud'), or one or more user mobile communication devices of the viewer 102 (e.g., smartphone, tablet, smartwatch, smartglasses, etc.). The data processing and communications unit 113 can include a processor to process data and a memory in communication with the processor to store data. For example, the processor can include a central processing unit (CPU) or other processor, such as a microcontroller unit (MCU). For example, the memory can include and store processor-executable code, which when executed by the processor, configures the data processing unit 113 to perform various operations, e.g., such as receiving information, commands, and/or data, processing information and data, and transmitting or providing information/data to another entity or to a user. In some
implementations, the data processing and communications unit 113 can be implemented by a computer system in the cloud (e.g., one or more servers in the cloud). To support various functions of the data processing and communications unit 113, the memory can store information and data, such as instructions, software, values, images, and other data processed or referenced by the processor. For example, various types of Random Access Memory (RAM) devices, Read Only Memory (ROM) devices, Flash Memory devices, and other suitable storage media can be used to implement storage functions of the memory. The data processing and communications unit 113 can include an input/output unit (I/O) that can be connected to an external interface, source of data storage, or display device. For example, various types of wired or wireless interfaces compatible with typical data communication standards can be used in communications of the data processing unit via the wireless transmitter/receiver unit, e.g., including, but not limited to, Universal Serial Bus (USB), IEEE 1394 (FireWire), Bluetooth, IEEE 802.11, Wireless Local Area Network (WLAN), Wireless Personal Area Network (WPAN), Wireless Wide Area Network (WWAN), WiMAX, IEEE 802.16 (Worldwide Interoperability for
Microwave Access (WiMAX)), 3G/4G/LTE cellular communication methods, and parallel interfaces. The I/O of the data processing and communications unit 113 can also interface with other external interfaces, sources of data storage, and/or visual or audio display devices, etc. to retrieve and transfer data and information that can be processed by the processor, stored in the memory, or exhibited on an output unit of an external device. For example, an external display device can be configured to be in data communication with the data processing unit, e.g., via the I/O, which can include a visual display device, an audio display device, and/or sensory device, e.g., which can include a smartphone, tablet, and/or wearable technology device, among others. The data processing and communications unit 113 can include a wireless transmitter/receiver (Tx/Rx) unit 114 to wirelessly transmit and receive data to and from an external device, such as the computer system. In some implementations of the device 103, for example, the wireless Tx/Rx unit 114 transmits raw data (e.g., photos and/or video) captured by the camera 105 and trigger data acquired by the trigger 104 to the computer system in the cloud or the user's communication device for processing of the captured images associated with the significant moment of the viewed event. In other implementations, for example, the device 103 processes the raw data captured by the camera 105 and the trigger data acquired by the trigger 104 to produce processed images that can be stored on the device 103 and/or transmitted to the external devices by the Tx/Rx unit 114, e.g., and able to be displayed and shared on the user's device (e.g., via the software app, in some exemplary scenarios).
[0026] In some embodiments, for example, the device 103 includes a display in
communication with the data processing and communications unit 113 to display the processed images to the viewer 102 via the software app that can be resident on the device 103. In some implementations, for example, the device 103 can be used to process and display the captured images to the viewer 102 in real-time (e.g., instantaneously after capture and processing) via the display device 101 (e.g., TV, smartphone, tablet, wearable display device, laptop, etc.). The processed images can be transmitted to the display device 101, and processed via the processing unit of the display device 101, to be presented on the display of the device 101, e.g., including simultaneously with the broadcasted event on the display device 101 (e.g., in a smaller viewing window on the event viewing window of the display device 101, or in another presentation window that can be accessed on the display device 101). For example, the processed images can be transmitted to the display device 101 wirelessly via the Tx/Rx unit 114 to a receiver of the display device 101, or by wireless or wired communication over the Internet (e.g., via a server in the cloud in communication with the device 103 and the display device 101), or by wired communication between the device 103 and the display device 101 via a wired communication cable.
[0027] In some implementations, for example, the device 103 can be incorporated into an existing device, e.g., such as a television, gaming console, computer, tablet, mobile device, or other device that includes a camera or image or video capturing apparatus. For example, the camera 105 can be included as part of the display device 101 (e.g., a smart TV) on which the user 102 is viewing the event. In such implementations, the 'host device' (here, the display device 101) includes the data processing and communications unit 113 that is in communication with the camera 105 to control the camera 105 of the display device 101 for capturing the images, and to wirelessly communicate, process, and/or store the captured imaging data. In such implementations, the trigger unit 104 can also be included as part of the host device and in communication with the data processing and communications unit 113.
[0028] In some implementations, for example, the device 103 and/or camera 105 may be included in an existing TV, gaming console, or computer connected to the Internet, such that the system 100 includes a software layer, e.g., such as an application programming interface (API), added to the existing console or computer device infrastructure to control image capture and data transfer to a computer system via the Internet for subsequent processing and delivery (e.g., a server in the cloud, a user mobile device, or other device). In some implementations, for example, the software layer (e.g., API) can utilize the existing device infrastructure to process and deliver the images to the viewer 102. In some implementations, for example, the software layer may also include a user-interactive software layer to receive user input and display output (e.g., captured and/or processed images of the viewer 102, or received images or data from other viewers in other settings using the disclosed image capture, processing, and delivery service).
[0029] In operation, for example, the device 103 is configured to capture images of the viewer 102 during an event displayed on the device 101 based on trigger data produced by the trigger unit 104 associated with a significant moment during the viewing of the event. For example, the camera 105 can be triggered by stimuli caused by the viewer 102, e.g., from a visual or audio cue based on the reaction of the viewer 102, or by a stimulus from a location remote with respect to the viewer 102, e.g., including from content information from the display device 101 and/or from the device 103. In some implementations, for example, photos/video of the viewer 102 can be constantly or continuously captured during the broadcasting/streaming of the event, such that the trigger can be used to isolate the video section or images just before and during the viewer's reaction to capture the entire reaction sequence. In one example, the one or more cameras 105 of the system 100 can be configured to continuously capture still photos at a given frequency (e.g., 2 photos/second or faster) or continuous video and to store a predetermined amount (e.g., the most recent 2 minutes) of the raw photos and/or video in the memory of the data processing and communications unit 113, e.g., by a sliding buffer technique. The data processing and communications unit 113 can update the stored raw image data by deleting the oldest image data in the storage as it adds the most recently captured raw image data to the store (e.g., for every second of image data deleted at the beginning of the image data time segment, a new second of image data can be added to the end of the image data time segment).
When a trigger occurs (e.g., based on detection of the significant moment in the event being viewed), the sliding buffer can store the relevant captured image data into a protected location of the memory that is not overwritten, or upload the relevant captured image data to the computer system in the cloud; the relevant captured image data includes (i) a predetermined portion of the most recent past image data in the sliding buffer at the trigger occurrence, and (ii) a predetermined amount of new raw image data captured since the trigger occurrence as it comes into the buffer. [0030] In some implementations, the device 103 can be directly connected to the display device 101 to receive signal communications from the device 101 and/or provide signal communications to the device 101 (e.g., such as processed captured images), such that the received signals can be used to trigger the event-viewing imager and processing device 103 to capture the images via the camera 105. In one example, the received signal can include data associated with the audio content of the event being viewed (e.g., such as crowd noise), in which case the data processing and communications unit 113 processes the received data to identify a trigger event (e.g., a substantial increase in the crowd noise) to initiate the image capture of the camera 105, or to identify the significant moment in a series of captured photos or video during a continuous capture mode by the camera 105. For example, the device 103 can then be used to process and/or display the images to the viewer 102 in real-time (e.g., instantaneously after capture and processing) via the display device 101.
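The sliding buffer technique of paragraph [0029], with its pre-trigger retention and post-trigger extension, can be sketched as a ring buffer; frame counts here stand in for seconds of image data, and the class and method names are illustrative rather than part of the disclosure.

```python
from collections import deque


class SlidingBuffer:
    """Keep only the most recent `capacity` frames; older frames are
    discarded automatically as new raw image data arrives."""

    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)  # oldest frames drop off the front

    def add(self, frame):
        self.frames.append(frame)

    def clip_on_trigger(self, pre, post, incoming):
        """On a trigger, freeze `pre` frames already buffered before the
        trigger plus `post` frames still arriving, forming the clip to
        protect in memory or upload to the cloud."""
        clip = list(self.frames)[-pre:]
        for frame in incoming:
            clip.append(frame)
            if len(clip) >= pre + post:
                break
        return clip
```

The returned clip corresponds to items (i) and (ii) above: recent past data plus a bounded amount of new data captured after the trigger.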
[0031] In some implementations, for example, the system 100 can be networked to another display device located at a different location than that of the display device 101, such that the captured/processed images can be presented to a remotely located individual or group interacting with the viewer 102, e.g., connected to the remote user over a network such as a social network or other connection. In such implementations, the system 100 has access to the network for providing the shared content.
[0032] After image capture, for example, the captured image content of the viewer 102 can be processed and then made available to the viewer 102 so he/she/they can (1) add his/her/their photos and/or video to a social network, and/or (2) send the processed photos and/or video directly to others, e.g., including other viewers of the broadcasted event at other viewing locations (e.g., including at the live event venue), via the software app resident on the device 103 or the mobile device of the viewer 102. In some implementations, the device 103 can be connected via the Internet to other devices that are linked through users' profiles or networks, so that the captured content can be shared among them, e.g., upon prompting or automatically.
[0033] For example, when the significant moment occurs during the broadcast event, the viewer 102 reacts and his/her video or photos are captured by the system 100 and saved; the user can then share this content with other users or networks, or it can be automatically displayed to other viewers of the broadcasted event who are connected in some manner, e.g., such as a social network friend, follower, or username acceptance. The images of the user's reaction can be displayed to these connections after the reaction has occurred or effectively during the live reaction. For example, after the emotional moment, the viewer 102 of the event could have one or multiple windows appear with a live stream of his/her connections' reactions to that moment in the event, so they can all experience the event together virtually, e.g., which can be displayed on a display of the device 103 or the display device 101.
[0034] In some implementations, for example, the event content provider (e.g., TV station(s) broadcasting the content) can embed signals in the content stream that function as triggers to cause the system 100 to capture images of the viewer 102. For example, the embedded signals can be encoded in the broadcast as they are recording or feeding the stream to their networks, which can signify an emotional and/or significant event of the content being streamed. These embedded signals can then be parsed/read by the system 100, e.g., via the data processing unit 113 of the device 103, to trigger immediate image capture of the viewer 102 using the camera 105 while the viewer 102 views the event at his/her/their home or other small gathering location, or to identify the frame of a continuous photo series or video capture by the camera 105 at which to process the moments leading up to, at, and after the significant moment for a particular time span.
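Parsing of broadcaster-embedded trigger signals like those described above might look like the following sketch; the marker byte sequence and packet layout are assumptions for illustration, since the disclosure does not specify an encoding.

```python
TRIGGER_MARKER = b"\x7fTRIG"  # hypothetical marker embedded by the broadcaster


def embedded_trigger_times(packets):
    """Scan content-stream packets for the embedded trigger marker and
    return the presentation timestamps (in seconds) at which significant
    moments were flagged by the content provider."""
    return [pkt["pts"] for pkt in packets if TRIGGER_MARKER in pkt["payload"]]
```

Each returned timestamp can either start immediate image capture or identify the frame in a continuous capture from which to process the moments surrounding the significant moment.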
[0035] Simultaneously, for example, a person or group of persons attending a live event at the event venue (e.g., such as a stadium, arena, theatre, festival, or any large group event venue) can also have their photos/video, which are captured at the event venue by an image capture system operating at the event, streamed live or displayed post-moment to others in connection with their viewing devices while those others are viewing the event on the display device 101 in their home, bar, or other private or public setting. The live event attendee can also have the video or photos of their connections streamed live to them, or sent or made available on a network, e.g., via the software app, showing the reactions of other event attendees at the event venue or of the viewers at locations remote from the event, watching from home, a bar, etc.
[0036] FIG. 2 shows a diagram of a method 200 to capture, process, and deliver images of the viewers of a broadcasted event using an exemplary integrated viewer reaction-capture system of the disclosed technology, e.g., such as the system 100. For example, a significant event 201 that occurs during an event (e.g., sports game, concert, TV show or movie, etc.) being viewed by one or more individual viewers (e.g., the viewer 102) on the display device 101 may cause the viewer 102 to have a reaction 202 (e.g., display emotional expression and/or behavior) in the viewer's private or public venue. The method includes a process 210 to trigger image capture of the viewer 102 based on the significant event 201. For example, the trigger of process 210 can include a centralized trigger provided by a signal received by the system 100, a localized trigger initiated by an audio, optical, or mechanical perturbation sensed by the trigger unit 104 of the device 103 at the location of the viewer 102, or a trigger initiated by the viewer 102 him/herself. The method includes a process 220 to cause image capture of the viewer 102 based on the trigger of the process 210 for a duration of time. In some implementations, for example, the duration can be a pre-configured time duration based on the type of trigger. For example, in the case of a localized trigger including an audio, optical, or mechanical perturbation-based sensor, the time duration can continue based on feedback from the sensor to determine the duration of the image capture in real-time based on that particular moment. In other implementations, for example, instead of being a trigger in the process 210 that causes the camera to capture the images in the process 220, the trigger can be an identifier that segments, isolates, or filters images of the viewer 102 if the camera is configured to continuously capture the images.
The method 200 includes a process 230 to process the captured images to produce processed images. The method includes a process 240 to deliver the images to the viewer 102 (e.g., by sharing the processed images via social networks or sending them to
connections). In some implementations of the method 200, for example, the method can also include a process 250 to display the processed/delivered image of the viewer 102 on one or multiple user devices in real-time, e.g., including the display device 101, mobile devices of the viewer 102 and/or his/her/their socially-connected friends, etc.
[0037] In some implementations of the method 200, for example, the method 200 can include image pre-processing techniques performed prior to the process 210 and the earlier events 201 and 202. For example, the method can include a process to capture reference images (e.g., a sequence of photographs and/or video) of the viewer 102 and/or the environment of the place where the event is being viewed on the display device 101. In some implementations of the method 200, for example, the image pre-processing techniques can include a process to perform object recognition on the reference images to identify people and/or objects (e.g., couch, chairs, bar stools, tables, etc.) in the environment at the place. In such implementations, for example, the process can include assigning labeling information to the captured reference images. In some implementations of the method 200, for example, the image pre-processing techniques can include a process to generate a map of locations (e.g., to a grid) corresponding to physical locations in the environment of the place, which can include creating coordinates associated with the mapped locations that are associated with physical locations of the place, e.g., which can include the objects and/or people recognized in the captured reference images. In some implementations of the method 200, for example, the image pre-processing techniques can include a process to present a mapping image to the viewer 102 (e.g., via the software app, an interactive website viewable on a web browser, text message, email, etc.) to request the particular mapped location in the environment that the viewer 102 is occupying, e.g., during the viewing of the event on the display device 101. In some implementations of the method 200, for example, the image pre-processing techniques can include a process to receive a response by the viewer 102 including the viewer-identified mapped location.
For example, the process to receive the viewer response can include receiving updated mapped locations from the viewer 102 in instances where the viewer changed locations in the environment while viewing the event.
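The location-mapping pre-processing of paragraph [0037] could be sketched as a simple grid over the viewing environment; the cell size and coordinate scheme below are illustrative assumptions, not values from the disclosure.

```python
def build_location_grid(width_m, depth_m, cell_m):
    """Divide the viewing environment into grid cells, each addressable
    by (column, row) coordinates that a viewer can identify as the
    mapped location he/she is occupying."""
    cols = int(width_m // cell_m)
    rows = int(depth_m // cell_m)
    return [(c, r) for r in range(rows) for c in range(cols)]


def cell_for_position(x_m, y_m, cell_m):
    """Map a physical position (in metres) in the environment to its
    grid-cell coordinates, e.g., for a recognized couch or chair."""
    return (int(x_m // cell_m), int(y_m // cell_m))
```

A viewer-reported cell (or an updated cell, if the viewer moves) can then be associated with the recognized objects and people from the reference images.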
[0038] In some implementations of the method 200, for example, the process 230 includes a process to produce the processed image of the viewer 102 during the viewing of the significant moment (e.g., including before, at, and after the significant moment) of the event. The process to produce the processed image can include a process to determine an image space of the captured image (e.g., one or more photos and/or video frames) containing at least one of the viewers 102 at a particular location in the map of locations, e.g., for the images associated with the significant moment. The process to produce the processed image can include a process to generate the processed image based on the determined image space, e.g., by producing a segmented image by cropping at least one of the captured images to a size defined by the image space, in which producing the segmented image can include compensating for overlapping of two or more of the captured images (e.g., which can include forming a merged image). The process to produce the processed image can include a process to assign metadata to the generated image. For example, the metadata can include information associated with the event, the user, the place, the camera 105, and/or the trigger, or other types of information related to the significant moment during which the viewer 102 is being captured. For example, the photos/videos are labeled with metadata including the time the image was captured. This content can then be placed within a stream of content that corresponds with the content at the event, e.g., such as images and video at the event venue. The system can utilize the content data to build an event story that connects the viewers (e.g., users of the imaging service) at their home, bar, or other public or private gathering venues with the most valuable content (e.g., emotional or otherwise significant content) to the viewer in a quick and seamless manner.
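A minimal sketch of the segmentation and metadata-tagging steps of process 230 follows, assuming the captured frame is a simple 2-D pixel grid and the determined image space is an axis-aligned box; the function names and metadata keys are illustrative.

```python
def crop_to_image_space(frame, box):
    """Crop a captured frame (a list of pixel rows) to the image space
    (x0, y0, x1, y1) determined for a viewer's mapped location."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in frame[y0:y1]]


def tag_with_metadata(pixels, event_id, captured_at, viewer_id):
    """Attach event/time/viewer metadata so the processed image can be
    placed within a chronological stream of event content."""
    return {
        "pixels": pixels,
        "event": event_id,
        "captured_at": captured_at,  # aligns the image with the event timeline
        "viewer": viewer_id,
    }
```

The `captured_at` field is what lets the system interleave viewer reactions with venue content when building the event story described above.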
[0039] The system 100 can include a user-downloadable software application that can be implemented on the event-viewing imager device 103, which can communicate with the content being viewed to capture the images of the viewer, or which can be triggered via different methods. In some examples, the system 100 can be configured to constantly capture video clips or photos while the viewer 102 is viewing content of the event (e.g., such as a live sports event), in which the most relevant capture period of the video or photos is identified using a time stamp, central trigger, central server poll, or triggers embedded in the video stream. In some examples, the system 100 can be configured such that the imaging devices are triggered to capture images during an emotional event manually, e.g., based on a manual trigger. For example, the image capture can be initiated by the viewer manually based on a sensed occurrence by the viewer. In some examples, the system 100 can be configured to capture images during an emotional event when a threshold in volume or movement of the viewer that the camera is monitoring is exceeded. For example, the event-viewing imager device 103 can be configured to detect emotions on the viewer's face to trigger imaging once a threshold is exceeded. In other examples, the system 100 can be configured such that the imaging devices are triggered to capture images during an emotional event automatically via a central location, which triggers all devices utilizing the system. For example, TV stations or media distribution centers can embed signals as triggers to signify big events that can be parsed/read by the system 100 to trigger immediate image capture when the user views the event.
[0040] In implementations of the disclosed image capture, processing, and delivery platform, metadata can be added to the captured photos or video of the viewer, which can include information about the event or moment the user is reacting to and what the user is viewing. This information can be called from a central location such as a server (e.g., in the cloud or at the event venue) when an event is triggered, or pulled from a service that provides real-time event data. Users can edit their images to upload and share within social networks or other viewing platforms.
[0041] For example, metadata added to the images of the captured viewer(s) can also contain information of the individual(s) captured, as well as the moment being celebrated. In an illustrative example, Viewer A and Viewer B are watching event II, either separately or together. At some instance during their viewing of event II, Viewer B is captured by the device 103 of the system 100 reacting to an exciting moment X. Viewers A and B are connected to a network of users of the system 100. The captured content of Viewer B during the exciting moment X can then be presented to Viewer A, including the captured/processed image of Viewer B with metadata showing the moment X being reacted to, and/or who the viewer is, e.g., such as Viewer B's name, username, location, etc.
[0042] In some embodiments of the system 100, the device 103 can be resident on a user device having a camera for image capture, in which the exemplary software application can be installed and configured on the user device to operate the real-time image capturing, processing, and/or delivery of spectators viewing a content stream of an event at a small or large gathering place. For example, the user device can include a smartphone, tablet, laptop or desktop computer, etc. In this exemplary embodiment of the system 100, the user can place his/her user device at the content viewing location (e.g., the user's home, or a public gathering place such as a bar, pub, restaurant, outdoor screen, etc.) to capture images of the viewers based on the trigger event as described previously, e.g., such as caused by a particular occurrence during the event being viewed based on a trigger signal included in the content stream, and/or a reaction exhibited by the viewers detected by the user device to generate the trigger signal for image capture.
[0043] FIGS. 3A and 3B show illustrative diagrams of the integrated viewer reaction-capture system 300 of the disclosed technology, in which the device 103 is resident on one or more user devices (e.g., smartphone, tablet, laptop, computer, wearable device, etc.) placed in the viewing environment for image capture of the viewer 102 during viewing of the event on the display device 101. In this example, a user opens the software app on his/her mobile device to communicate with and control one or more event-viewing imager devices 103, e.g., shown as devices 103a and 103c in FIGS. 3A and 3B. The user can place his/her mobile devices having the device 103 in a desired location in the viewing area to utilize the camera 105 (e.g., shown as camera 105a and camera 105c in this example) for image capture of the users, e.g., viewers 102a, 102b, and 102c, during viewing of the event on the display device 101 (e.g., a smart TV, as shown in FIGS. 3A and 3B). The devices 103a and 103c, resident on the respective user devices that are placed in the viewing environment for image capture, can include respective trigger units 104a and 104c to detect the stimuli to generate the trigger data associated with the significant moment for creating the in-the-moment images of the viewers 102a, 102b, and 102c. For example, the trigger unit 104 can include existing components in the user device, e.g., such as microphones, accelerometers, a camera, or other components capable of sensing audio, visual, and/or mechanical stimuli. In some implementations, the software app can be displayed and/or operated via the display device 101, e.g., as shown by the software app user interface screen 510 that is presented in a portion of the display of the device 101.
During the event being viewed, the devices 103a and 103c, resident on the respective user devices, are operable to receive the trigger data produced by the trigger unit 104a and/or 104c, to capture photos and/or video or identify the frame of continuous photo or video capture of the viewers 102a, 102b, and 102c. Also, the devices 103a and 103c can receive trigger data from other sources, e.g., such as content-embedded signal data in the content stream detected by the device 103, or a signal provided through the software application operating on the device 103, such as a signal or time identified from an automatic or manual triggering that occurred from within the event itself (e.g., a trigger in the stadium, arena, concert, etc.). Similarly, the trigger data can be generated and provided to the devices 103a and/or 103c from another device that is not capturing the content (e.g., including a viewer 102b using his/her user device).
[0044] In some implementations, the user, using the software app, can select the desired event to be viewed. Selecting the event can provide additional information to the software app to identify which times to isolate from the continuous photo/video capture of the viewers 102a, 102b, and 102c, or to identify the times to trigger the image capture by the cameras 105a and 105c. This exemplary feature can enhance data processing efficiency by ensuring the image content is reduced to manageable quantities of high-quality and desired content for the viewers 102a, 102b, and 102c.
[0045] The user devices that include the device 103 can transmit the captured images to a data processing unit on a computer system (e.g., server) in the cloud, e.g., associated with the software app operating on the user devices, to process the captured images to produce processed images. The computer system can process the captured images and send the processed images back to the device 103 or to other user devices of the viewers. Additionally, or alternatively, the device 103 resident on the user devices positioned to capture the images can perform the image processing to produce the processed images. [0046] The processed photos/video content can be displayed on the user devices that include the device 103, on other user devices (e.g., such as that held by the viewer 102b), or on the software app user interface screen 510 presented on the display device 101 (e.g., shown as interface screen 511 in FIG. 3B), any of which can be connected via the software application, e.g., based on the viewers' location proximity or user-connected accounts, or the event that they have selected using the software app. The processed content of the viewers 102a, 102b, and 102c imaged at the significant moment of the event can be attached to additional data or photos/video of the event being viewed to display relevant information of the moment they just witnessed combined with their own reactions. For example, such additional data or photos/video can include information about the team or player involved in the significant moment at the event, statistics, etc.
[0047] The users can choose which content of themselves they wish to share via social media or the connected application or event stream. For example, this selected content for sharing can be added to a timeline of the event, which is structured to show this content as a series of events in chronological order. The content shared can also be displayed to connected viewers in different places for other viewers at other event-viewing locations (e.g., friends' homes, bars, etc.) to see, e.g., on their display device 101 via the software app interface screen 510, or on their own mobile devices operating the software app. This can allow the event viewers to see the reactions of their connected friends/family during the event after emotional moments occur. In some implementations, for example, this can happen automatically via pre-established connections, or the user can be prompted to share after an emotional event and capture occurs.
[0048] FIGS. 4A and 4B show illustrative diagrams of exemplary embodiments of the device 103 in an integrated viewer reaction-capture system 400 of the disclosed technology. FIG. 4A shows an exemplary multi-camera event-viewing imager device 103 that includes a plurality of cameras 105 that can move to pan and capture multiple images simultaneously of the viewers 102a, 102b, and 102c in the viewing environment. The multiple camera configuration and the ability for the cameras 105 to move to various image views and focuses allows the device 103 in FIG. 4A to be placed in a variety of locations and positions in the viewing environment for optimal image capture of the viewers 102 during viewing of the event on the display device 101. The exemplary multi-camera event-viewing imager device 103 in FIG. 4A can operate a software application that interacts with the software app on the user devices to allow user control and direct interaction with the device 103. As shown in FIG. 4A, the device 103 includes the trigger unit 104 to determine the optimal time to initiate image capture and/or identify the frame used to create the photo or video sequence associated with the significant moment of the event being viewed. The device 103 can perform the image processing and delivery processes 230 and 240 to produce and provide the processed images of the viewers in the significant moment. The exemplary multi-camera event-viewing imager device 103 can be operated by the user as previously described for the exemplary embodiments of the device 103 in FIGS. 3A and 3B, or as described elsewhere in this patent document.
[0049] FIG. 4B shows an exemplary interactive event-viewing imager device 103 that includes a display screen to present an interactive software application on the device 103 to the viewers 102a, 102b, and 102c, e.g., while they are viewing the event on the display device 101. The device 103 in FIG. 4B includes one or more cameras 105 that can move to pan and capture multiple images simultaneously of the viewers 102a, 102b, and 102c in the viewing environment. The interactive software application that runs on the exemplary interactive event-viewing imager device 103 in FIG. 4B can operate to display the processed images to the viewers 102a, 102b, and 102c, as well as shared images provided to the user from his/her friends, family, etc. that may be viewing the event with a system of the present technology. The device 103 of FIG. 4B includes the trigger unit 104 to determine the optimal time to initiate image capture and/or identify the frame used to create the photo or video sequence associated with the significant moment of the event being viewed. The device 103 can perform the image processing and delivery processes 230 and 240 to produce and provide the processed images of the viewers in the significant moment. The exemplary interactive event-viewing imager device 103 can be operated by the user as previously described for the exemplary embodiments of the device 103 in FIGS. 3A and 3B, or as described elsewhere in this patent document.
[0050] FIG. 5 shows an exemplary embodiment of the interactive and/or multi-camera event-viewing imager device 103 in an integrated viewer reaction-capture system 500 of the disclosed technology. The interactive and/or multi-camera event-viewing imager device 103 of the system 500 can be designed in a themed configuration (e.g., such as sports-related items, like a football helmet as shown in FIG. 5, a ball, a toy mascot or bobble head or statue or action figure, etc.; a music- or musician-related item; or other themed configuration) or a furnishing (e.g., a furniture item, like a lighting fixture as shown in FIG. 5; art, like a painting, sculpture, etc.; or other furnishing). Such themed configurations or furnishing configurations can be designed to fit any decor of the viewing environment in which the system 500 is implemented. The exemplary interactive and/or multi-camera event-viewing imager device 103 can be operated by the user as previously described for the exemplary embodiments of the device 103 in FIGS. 3A, 3B, 4A, and 4B, or as described elsewhere in this patent document.
[0051] Example Techniques of User Activation
[0052] For each place the system 100 is utilized, a unique location ID and a unique device ID can be created and stored for the system and the viewers that use the system 100 during their viewing of an event. This can act as a tethering point where users can "check in", e.g., via the software app, to determine and store their location and/or event to be viewed, which can allow the viewers to easily access their captured and processed photos and/or video, as well as receive other benefits such as notifications, selective advertising or promotions, etc. For example, the processed images provided to the users (e.g., checked-in using the software app) allow the users to 'tell a narrative' of their experience viewed from their couch, bar stool, etc. at the place of their event viewing through the photos and video of themselves 'in-the-moment'. The software app lets the users (e.g., fans) have the ability to remember and share their candid reactions, while allowing the imaging service system (e.g., operated by one or more servers in the cloud and/or through the device 103) to provide the users with a variety of dynamic and responsive messaging and/or advertising. Also, for example, by checking in or actively choosing the event being consumed, the photos/video recordings can be triggered or used based on predetermined parameters (e.g., generated by the imaging service system 100, which can be provided to the device 103) associated with the event, which can be used to optimally isolate the significant moments in the event being consumed. In an illustrative example, users that check in to event E using the software app allow the system 100 to provide image capture data to the device to isolate 10 seconds of photos/video from the continuous images, beginning at time 12:32:12. This can allow the most appropriate portions of content to be isolated and labeled for the viewer to subsequently upload/save.
Another exemplary benefit of the check-in includes the ability to identify which event is being consumed to provide a live feed of reactions after big moments occur, such as streaming this to other user devices or smart televisions, and/or coupling the captured and processed images with content associated with the event to provide further added information and context to the processed images provided to the viewer 102. Exemplary techniques for user activation and/or check-in are described below.
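The clip-isolation step described above — the system pushing a trigger timestamp and duration to checked-in devices, which then pull the matching span out of their continuous capture — might be sketched as follows. This is a minimal illustration; the function name and the (timestamp, frame) data shape are invented for the sketch and do not appear in the specification.

```python
from datetime import datetime, timedelta

def isolate_clip(frames, trigger_time, duration_s=10):
    """Select the captured frames that fall inside the trigger window.

    `frames` is a list of (timestamp, frame_data) tuples from the
    continuous capture; `trigger_time` is the event-supplied moment
    (e.g., 12:32:12) pushed to checked-in devices by the system.
    """
    end = trigger_time + timedelta(seconds=duration_s)
    return [(t, f) for (t, f) in frames if trigger_time <= t <= end]
```

The isolated span can then be labeled with the event and device IDs for the viewer to upload or save later.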
[0053] For example, a user (e.g., viewer 102) can check in using the app, so that the system 100 can determine the location of the viewer 102, e.g., in which metadata can be assigned to the captured photos and/or video based on the user's location. In some implementations, the software application can pre-populate locations in the viewing environment with the checked-in viewers and notify a viewer at the location that the event will start, remind them to turn on the device 103, or deliver a message or an update of the imaging service system, etc. In some implementations, for example, the system 100 can perform facial recognition to identify a user and activate the system to check the user in. Whereas in some implementations, for example, a user can be checked in by use of computer vision to identify and read certain markers (e.g., such as a user displaying a QR code on his/her mobile device to the camera 105 of the system 100), which can 'tag' the user and determine his/her location. For example, a user can be checked in by use of geolocation of the user's mobile device,
communication via Bluetooth with the device 103 of the system 100, and other location identification techniques using the user's mobile device to identify the user for capturing images during event viewing in a home, bar, or other setting.
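As one hedged sketch of the geolocation-based check-in mentioned above, the app could compare the mobile device's reported coordinates against the known coordinates of the device 103 and check the user in when within some radius. The haversine formula and the radius value below are conventional choices for this kind of proximity test, not details from the specification.

```python
import math

def within_checkin_radius(user_lat, user_lon, dev_lat, dev_lon, radius_m=50.0):
    """Approximate great-circle (haversine) distance between the user's
    mobile device and the imager device; check-in succeeds when the
    user is within `radius_m` meters of the device."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(user_lat), math.radians(dev_lat)
    dphi = math.radians(dev_lat - user_lat)
    dlmb = math.radians(dev_lon - user_lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a)) <= radius_m
```

Bluetooth proximity or QR-code tagging would replace the distance test with a different signal, but the check-in decision would plug into the same place.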
[0054] Example Techniques to Trigger Image Capture
[0055] Detection Trigger. In some implementations, for example, the trigger unit 104 can include a sensor to detect a stimulus in the external environment at the place (e.g., home, bar, restaurant, etc.) where the content is being viewed, e.g., on the display device 101. For example, the trigger unit 104 can detect a sound stimulus (e.g., of a particular volume or frequency), a visual stimulus (e.g., rapid acceleration of movements produced by the viewer 102, or facial expressions by the viewer 102), mechanical perturbations (e.g., clapping, stomping, etc.), voice control by the viewer 102 (e.g., such as predetermined words or phrases), among other stimuli. In some embodiments, for example, the sensor of the trigger unit 104 can generate a trigger signal that can be used to initiate image capture of photos and/or video of the viewer 102 by the camera 105. In some embodiments, for example, the trigger unit 104 is in communication with the data processing and communications unit 113 of the device 103, such that the trigger signal is received by the data processing unit 113 to convert to trigger data, which can be used to initiate the image capture of the viewer 102 by the camera 105, or can be used to identify the significant moment for image processing of the continuously captured photographs and/or video by the one or more cameras 105.
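A sound-volume detection trigger of the kind described above could be as simple as comparing the RMS amplitude of a microphone sample window against a threshold — e.g., a room-wide cheer exceeding normal conversation. The function name and threshold below are illustrative assumptions, not part of the specification.

```python
import math

def sound_trigger(samples, threshold_rms=0.3):
    """Return True when the RMS amplitude of a microphone sample
    window exceeds a threshold, indicating a loud reaction.
    `samples` are normalized floats in [-1.0, 1.0]."""
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms >= threshold_rms
```

The returned boolean would stand in for the trigger signal passed to the data processing and communications unit 113.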
[0056] In some implementations, for example, the trigger unit 104 can include a data processing unit including a processor and memory to process and store and/or buffer the trigger data. For example, the trigger unit 104 can include the sensor to detect the stimulus in the external environment at the place, in which the exemplary data processing unit of the trigger unit 104 produces trigger data that can include image capture instructions (e.g., that can be used to initiate the image capture by the camera 105), temporal information (e.g., that can be used by the data processing and communications unit 113 to identify the significant moment for image processing of the continuously captured photographs and/or video by the one or more cameras 105), and stimuli information (e.g., that can identify and characterize the type of stimulus detected, to be used to trigger the image capture initiation or identification).
[0057] In some embodiments, for example, the trigger unit 104 can be housed in the device 103 in communication with the data processing and communications unit 113, in the display device 101 in wired or wireless communication with the device 103, or can be housed in an independent housing as a stand-alone device in wired or wireless communication with the device 103. In some embodiments, for example, the trigger unit 104 can be configured in a user's device (e.g., smartphone, tablet, wearable device, etc.) and in communication with the data processing and communications unit 113 of the device 103, in which the trigger unit 104 includes executable program instructions stored in memory to control and/or receive information from a sensing unit or device of the user device, e.g., including, but not limited to, a microphone, an accelerometer, a camera, etc.
[0058] Manual Trigger. In some implementations, for example, a user (e.g., the viewer 102) can click an activation button on a remote (e.g., such as the user's mobile device or an independent remote control that communicates with the device 103), a game controller, or other device in communication with the device 103 when the user wishes to initiate the image capture sequence, e.g., such as when a significant moment occurs and the user wants to record the reactions of the viewers who are watching the event in the local setting, or when a significant moment occurs at the local gathering independent of what is occurring at the event being viewed. In some implementations, for example, the user can also use voice commands to activate the image capture sequence, e.g., in which the trigger data is generated by the trigger unit 104 of the system 100 upon sensing and processing the voice activation. For example, a voice-based trigger can include voice recognition techniques to identify only predetermined individuals permitted to trigger the image capture or identification of the significant moment among continuous images from which to produce the processed images.
[0059] Central Command Trigger. For example, during an emotional moment of an event being broadcast, a trigger can be activated at the event (e.g., by an automated system, or by a person spectating the event via pressing of a button/clicking of a mouse on a digital command console, an automatic verbal or motion initiation of the trigger, or a physical pushing of a trigger button), such that triggering signals are sent out via a central server, which triggers all units running the software to go through their established course of actions to either start capture, start the parse, or analyze the buffered video/images being captured to deliver reaction clips/images to the user. In this exemplary case, the software app will have an active connection with the server/triggering/parsing system (this could be accomplished via TCP/IP, UDP, or active polling).
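The UDP option mentioned above might look like the following minimal sketch: the central server pushes a small trigger packet, and each connected device decodes it and proceeds with its capture/parse actions. The JSON payload fields are invented for the example; the specification does not define a wire format.

```python
import json
import socket

def send_trigger(sock, addr, event_id, moment_ts):
    """Central-server side: push a trigger packet to a registered device."""
    payload = json.dumps({"event_id": event_id, "moment_ts": moment_ts})
    sock.sendto(payload.encode(), addr)

def receive_trigger(sock):
    """Device side: block for one trigger packet and decode its fields."""
    data, _ = sock.recvfrom(4096)
    return json.loads(data.decode())
```

In practice each device would listen in a background thread and hand the decoded trigger to its capture/parse pipeline; TCP or active polling would be drop-in alternatives for less lossy delivery.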
[0060] Stats Trigger. For example, a trigger can be generated from sports statistics information provided by sports broadcasting companies or by websites/services that provide real-time sports analytics.
[0061] Text/Score Changes Trigger. For example, changes in text information (e.g., such as a score) displayed on a screen during an event, such as "Touchdown," "Goal," etc., can trigger the software. Also, for example, text information changes can include a change in the score, such as the points moving from 7 to 13.
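A score-change trigger of this kind could compare the on-screen score text (e.g., as recognized from the broadcast frame) between two sampling instants and fire when the numbers differ. The regular expression and function below are one illustrative way to do the comparison, under the assumption that the score appears as two dash-separated numbers.

```python
import re

# Matches a "7 - 13" style score anywhere in the recognized text.
SCORE_RE = re.compile(r"(\d+)\s*-\s*(\d+)")

def score_changed(prev_text, curr_text):
    """Fire the trigger when the score embedded in the on-screen text
    differs between two consecutive samples."""
    prev, curr = SCORE_RE.search(prev_text), SCORE_RE.search(curr_text)
    if not (prev and curr):
        return False
    return prev.groups() != curr.groups()
```

Keyword triggers ("Touchdown," "Goal") would be a simpler substring test on the same recognized text.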
[0062] Gesture Trigger. For example, software built into the application/API/system can be used to determine when an emotional moment has occurred based on the physical gestures or volume of those watching the event. Facial recognition software can also be used to detect facial emotion. Software that can read minute changes in skin color and pupil dilation can be used to tell when an event is happening; this means heart rate can be assessed, as well as changes in facial coloration. In some examples, the viewer 102 may be recording his/her own bioanalytical data (e.g., heart rate, motion data, etc.) via a wearable device or smartphone, which can be utilized by the system 100 as the trigger to initiate image capture or to identify the point of the photo sequence or video frame in continuous image capture that corresponds to the significant moment for image processing.
[0063] Integrated Trigger. For example, integrated triggers in the exemplary video stream can be parsed/read to trigger the local camera/video system to capture images from all the cameras. These images can then be tagged with a unique event ID and device ID that may then be processed by software.
[0064] Example Application Software
[0065] The software app, once installed, has a number of ways in which it can be used to trigger the capture of images and footage, to process the captured images, and to deliver, share, and present the processed image data to various users. Exemplary features are described that help deliver the highly emotional reactions to the viewers watching an event at a home, a bar, a restaurant, a coffee shop, or other private or public setting. In an exemplary feature of the software app, when a trigger is initiated and images are captured, the movement of pixels in the images can be compared at each fraction of a second, or each whole second, to create a timeline with respect to the event being viewed. For example, the software app can create and present a timeline view that includes images of the viewers watching the event and images of the actions in the event itself, where a user can slide through their photos and/or video sections that correspond to the moments captured in the event (e.g., moments of the game). For example, the camera can be on at all times taking "loops" of video. If a trigger does occur, then the relevant block will be saved, stored, processed (with metadata) and sent to a user's app. The images can be pulled out of specific sections, which can be edited as requested by the user. For example, a user can use the software or associated mobile/web/app to interface with their existing social media platforms, email, or any other places that images can be saved. The exemplary software can provide an overlay of contextual info that tells the user a combination of what the event was, the score, who was playing, and the location of the photo. For example, events can be presented in a timeline for each game, episode, etc.
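The "loops" of video described above — recording continuously but keeping only a recent window, then saving that block when a trigger arrives — is naturally modeled as a ring buffer. A minimal sketch (class and method names are invented for illustration):

```python
from collections import deque

class LoopRecorder:
    """Continuously retain only the most recent `capacity` frames;
    on a trigger, snapshot the buffer as the saved block (to be
    tagged with metadata and sent to the user's app)."""

    def __init__(self, capacity):
        # deque with maxlen silently discards the oldest frame
        # once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add_frame(self, frame):
        self.buffer.append(frame)

    def on_trigger(self):
        # Freeze the current window as the block to save/process.
        return list(self.buffer)
```

Frames older than the window are dropped automatically, matching the behavior where footage beyond a fixed length of time into the past is deleted.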
[0066] Metadata can be associated with the processed images to provide context to the moment being captured. For example, when the event is a sports game being viewed, the metadata can include, but is not limited to, a game score, the teams playing in a sporting event, the players playing at the event or the players involved with the significant moment, the time the moment occurred, the location of the event, a description of the event, and any other data associated with the event or people, places, or things involved. In addition to the metadata associated with the event, system data can be associated with the processed images, e.g., such as location data and time data of where and when the viewers are viewing the event, names of other viewers at the home, bar, etc. viewing the event with the user. The metadata can also include user metadata, e.g., including demographic data, online social network data, usage data, advertisement data including user-targeted advertising and user-engagement of advertisement data, location information, or other user type data. The metadata can be processed with the captured image data to be synchronized using any appropriate synchronization technique, e.g., such as by timestamp matching of images, or by assigning unique codes to each image or group of images.
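The timestamp-matching synchronization mentioned above could pair each captured image with the metadata event closest in time, within some tolerance. The data shapes and tolerance below are assumptions made for the sketch, not details from the specification.

```python
def attach_metadata(images, metadata_events, tolerance_s=2.0):
    """Pair each captured image with the metadata event nearest in
    time (timestamp matching). `images` is a list of
    (timestamp, image_id); `metadata_events` is a list of
    (timestamp, info-dict). Images with no event within
    `tolerance_s` seconds get None."""
    paired = []
    for ts, image_id in images:
        best = min(metadata_events, key=lambda ev: abs(ev[0] - ts),
                   default=None)
        info = best[1] if best and abs(best[0] - ts) <= tolerance_s else None
        paired.append((image_id, info))
    return paired
```

Assigning unique codes to each image or group of images, the other technique mentioned, would replace the nearest-timestamp search with a direct key lookup.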
[0067] In some aspects, the disclosed systems for real-time image and video capturing, processing, and delivery of viewers viewing a content stream of an event at a small or large gathering place (e.g., including a home, bar, pub, restaurant, outdoor display screen, etc.) can be implemented by the following methods. For example, the exemplary software app can be downloaded onto the system 100, including on the device 103, on the display device 101, and/or on user devices. In some instances, for example, the software app may already be pre-installed on the device 103, display device 101, and/or the user's device. In some instances, for example, a camera can be added to the device 103 if one does not already exist. The user activates the software app of the system 100 and a unique identifier is generated for the system 100 based on the time of activation, location of the device 103, and/or event to be streamed on the display device 101. Viewers at the gathering can individually establish their attendance of the event viewing using the software app of the system 100 directly, e.g., via the device 103 and/or via the viewers' mobile devices. Alternatively, for example, the viewers' attendance may be established using GPS and/or device proximity (e.g., proximity to the device 103). In some
implementations, for example, the software app of the system 100 can control operation of the device 103 to constantly record images up to a fixed length of time into the past (e.g., beyond this, photos or video can be deleted). When the software app of the system 100 receives an event trigger, the recorded images are retrieved and pushed to the computing system or computer in the cloud for storage.
[0068] Users can then recall their video/photos to edit and share. Image processing can be performed by the device 103 or the computing system (e.g., server in the cloud in
communication with the device 103). For example, the last editing parameters (e.g., cropping) can be applied to the next recall. For example, facial recognition and motion detection can aid with image cropping. Processed images (e.g., edits) are saved to the cloud, which can then be linked to social media sites for sharing (e.g., based on user preferences of the app of the system 100).
[0069] The image processing (e.g., the process 230 of the method 200) can include viewer recognition processing of objects in the captured photos or video to determine the number of viewers 102 viewing the event on the display device 101. In some implementations, the viewer recognition processing can include analyzing pixel data in the captured images to determine shapes and features indicative of human faces and/or bodies. In some implementations, the viewer recognition processing can include facial recognition techniques to identify each individual viewer, i.e., identify the viewer's unique identity. The image processing techniques can include utilizing the viewer recognition data to provide real-time targeted advertising to the identified viewers 102 based on the facial recognition processing. In some implementations, the image processing techniques can include using the viewer recognition data to provide real-time targeted advertising to the group of viewers viewing the event, e.g., in some examples based solely on the number of viewers gathered. In one example, the targeted advertising can include pushing selected advertisements or promotions from specific vendors for products or services related to the number of viewers, location of the viewers, and event being viewed (e.g., type, time, date, etc.). Illustratively, if the viewer recognition data includes three or more viewers, then the selected advertising for one or more of these specific viewers could include a pizza advertisement or promotion for pizza & delivery specials during the event at the local gathering for viewing the event.
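The viewer-count-based advertisement selection illustrated above (e.g., a pizza-delivery promotion when three or more viewers are detected) could be a simple mapping from the recognition result to a promotion. The thresholds and ad identifiers below are invented for the sketch, following the pizza illustration in the text.

```python
def select_advertisement(num_viewers):
    """Map the number of viewers detected by the viewer recognition
    processing to a targeted promotion (toy catalog)."""
    if num_viewers >= 3:
        # Larger gatherings: group food & delivery specials.
        return "pizza-delivery-special"
    if num_viewers == 2:
        return "two-for-one-snack-promo"
    return "single-serving-promo"
```

A production system would of course also weight location, event type, and the identities recovered by facial recognition, as the paragraph describes.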
[0070] In some implementations, stored images on the server in the cloud can be delivered from the cloud to the device 103 running an app in real-time, e.g., immediately after the significant moment occurred for which the viewer 102 was imaged. In some implementations, stored images on the server in the cloud can be delivered from the cloud, e.g., during commercial breaks, to the device 103 running an app for latent review on the device 101 at a convenient or non-interruptive time. Images and other data can be used for targeted advertising. For example, on a mobile device an advertiser can sponsor the content. In some examples, the content captured (e.g., photos or video) of the viewer(s) can have associated advertising content, images, video, or messages associated with them, e.g., such as merging the content, showing it simultaneously, overlaying it, and/or running one after another. Data of the captured event viewer(s) and/or the user viewing this content can be used to identify the associated
advertisement content, e.g., such as: User A or a group is captured, and the demographic and interest data for them is known; the associated moment captured of the individual(s) is one that causes a negative reaction; this is then associated with advertisements contextual to this moment and the data on the user being captured; and the data on the viewer of the created content also determines specifically which content is displayed to him/her. For example, in the case of a competition such as a sporting event, the viewer being captured can identify which team or performer he/she is supporting (e.g., to identify a positive or negative reaction to the event). Information about the event, such as which team scored, also identifies a positive/negative reaction.
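The valence inference described above — which team the viewer supports versus which team just scored, mapped to a contextual advertisement — can be sketched as a small lookup. The catalog keys and ad names are hypothetical placeholders.

```python
def pick_contextual_ad(supported_team, scoring_team, ad_catalog):
    """Infer the likely emotional valence of a captured reaction from
    the viewer's supported team and the team that just scored, then
    look up an advertisement contextual to that valence."""
    valence = "positive" if supported_team == scoring_team else "negative"
    return valence, ad_catalog.get(valence)
```

The returned valence could also feed back into the image processing, e.g., to label the captured reaction before it is shared or monetized.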
[0071] In some implementations, for example, the user can calibrate the image capture focus on an area, e.g., such as a couch, seating area, bar, etc., to focus the camera lens or to set an automatic cropping area to produce the desired images. For example, this can be performed automatically by the camera 105 detecting motion of the viewer(s) and applying cropping or camera focus on this area. Also, for example, this can be performed manually by use of software controls of the camera 105 via the software app operated by the user on the device 103 or the user's device (e.g., smartphone, tablet, smartwatch, smartglasses, laptop, etc.) to adjust the focus while viewing the adjustment on the display from the app.
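The automatic motion-based cropping mentioned above could be approximated by differencing two consecutive grayscale frames and taking the bounding box of the changed pixels as the crop/focus region. The frame representation (lists of pixel rows) and threshold are simplifications for the sketch; a real pipeline would operate on camera buffers.

```python
def motion_crop(prev_frame, curr_frame, threshold=30):
    """Bounding box (min_row, min_col, max_row, max_col) of pixels
    that changed between two grayscale frames, or None if nothing
    moved; the box can drive automatic cropping/focus on the active
    seating area."""
    rows = [r for r, (p_row, c_row) in enumerate(zip(prev_frame, curr_frame))
            if any(abs(p - c) > threshold for p, c in zip(p_row, c_row))]
    cols = [c for p_row, c_row in zip(prev_frame, curr_frame)
            for c, (p, v) in enumerate(zip(p_row, c_row))
            if abs(p - v) > threshold]
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))
```

The manual calibration path would simply let the user set this rectangle directly through the software app's controls.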
[0072] For example, raw photographs and/or video can be captured and stored in the cloud, as well as stored locally on the capturing device 103 or other device that can be connected with the camera 105, e.g., such as computer, console, TV, external memory device, etc. For example, based on a push notification due to the triggering of the image capture, the user(s) is/are notified (e.g., via a message delivered on the user's mobile device) to retrieve the captured image(s) associated with the triggered event. In some examples, the users can retrieve the captured image(s) associated with the triggered event directly on the display device 101 (e.g., console, computer, TV, etc.). Once retrieved, the user(s) can perform image processing (e.g., including cropping) of the retrieved image(s). User(s) can then share or perform other processes with their processed images (e.g., such as sharing on a social media site). Additionally or alternatively, for example, the system 100 can edit the captured raw images (e.g., locally on the device 103 and/or on the computer system in the cloud via upload straight from capturing or associated device) and display on the display device 101.
[0073] FIG. 6 shows an example of a communication network including computers (e.g., servers 612, 614) for implementing the image capturing, processing, and delivery service system 100 that communicates with remote devices (e.g., the device 103) over a network 610 (e.g., the Internet). The servers 612, 614 in the network 610 can be operated by a commercial entity to provide the imaging capture, processing, and/or delivery service for real-time image capturing, processing, and distribution of 'in-the-moment' images of the viewer 102 during the event being viewed (e.g., or live event being attended) at a home, bar, restaurant, ballroom, or other public or private venue. The servers 612, 614 can be configured to perform the processes 230 and/or 240 of the method 200 upon receiving partially processed or raw images (e.g., photos and/or video) from the event-viewing imager and processing device 103, e.g. in which the partially processed or raw images are provided to the servers 612, 614 over the network 610. For example, the servers 612, 614 can include software modules to perform various techniques of the process 230 and/or 240, which can also reside on the devices 103, or on the user devices via the software app resident on the end user devices.
[0075] In some embodiments of the disclosed image capturing, processing, and delivery technology, the system 100 can be implemented in a private or public setting where a live event is taking place, e.g., such as a wedding, a party, a dance club or other night club, or an outside venue like a BBQ, etc. The system 100 can be configured in a portable configuration, e.g., where one or more devices 103 can be placed at appropriate locations at the live event venue similar to that shown in FIG. 3A (e.g., the device 103 is embodied in a user's mobile
communication device), FIG. 4A (e.g., the device 103 is embodied in a multi-camera imager unit), FIG. 4B (e.g., the device 103 is embodied in a user-interactive imager unit), and/or FIG. 5 (e.g., the device 103 is embodied in an event-related item or location-related decor or furnishing). The system 100 can be triggered manually or based on certain stimuli at the event, e.g., including, but not limited to, voice recognition of certain words or phrases, certain lighting, certain sounds or music, etc.
[0075] Examples
[0076] The following examples are illustrative of several embodiments of the present technology. Other exemplary embodiments of the present technology may be presented prior to the following listed examples, or after the following listed examples.
[0077] In an example of the present technology (example 1), an imaging service system includes an imaging unit arranged at a place including a home or a public or private place of gathering, where the place includes one or more display devices to present visual and/or audio content, the imaging unit including one or more cameras arranged to capture images of one or more viewers at the place viewing an event on the one or more display devices, in which the images include photos or video, a data processing unit in communication with the one or more cameras, the data processing unit including a processor, a memory, and a wireless transmitter and receiver, the data processing unit configured to at least partially process the captured images and transmit the images to another device, and a trigger module in communication with one or both of the data processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, in which the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the data processing unit to identify a captured photo or video frame among a sequence of the photos or the video to be associated with the occurrence; and the imaging service system includes one or more computers in communication with the imaging unit to receive the captured images from the imaging unit and to process the images to produce processed images, in which the processed images include images of the reaction by the one or more viewers to the occurrence of the event.
[0078] Example 2 includes the system as in example 1, in which the public place of gathering includes a bar, a pub, a restaurant, or an outdoor display screen.
[0079] Example 3 includes the system as in example 1, in which the one or more computers are operable to distribute the processed images to the one or more viewers using wireless communication to a mobile device of a viewer of the one or more viewers.
[0080] Example 4 includes the system as in example 3, in which the one or more computers are operable to provide an interactive software application on the mobile device, in which the software application is configured to present the processed images to the viewer.
[0081] Example 5 includes the system as in example 4, in which the one or more computers are configured to process the images including selecting an advertisement to be presented with the processed images to the viewer via the software application.
[0082] Example 6 includes the system as in example 1, in which the one or more computers are operable to send the processed images to a social network site.
[0083] Example 7 includes the system as in example 1, in which the one or more computers are operable to provide the processed images for purchase by the one or more viewers.
[0084] Example 8 includes the system as in example 1, in which the trigger module includes a sensor to detect at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
[0085] Example 9 includes the system as in example 8, in which the trigger module is operable to detect a voice command by a viewer to cause the initiation of the capture of the images or to cause the identification of the captured photo or video frame to be associated with the occurrence.
[0086] Example 10 includes the system as in example 1, in which the imaging unit is in communication with the one or more display devices, and the trigger module includes a signal receiver to receive a signal encoded in the presented content from the one or more display devices to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
[0087] Example 11 includes the system as in example 1, in which the trigger module includes a signal receiver to receive a signal provided by the one or more computers to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
[0088] Example 12 includes the system as in example 1, in which the one or more cameras are operable to continuously capture the images of the one or more viewers during the viewing of the event.
[0089] Example 13 includes the system as in example 12, in which the one or more cameras are configured to capture a temporal series of photos or continuous video of the one or more viewers for a predetermined duration of time before and after the generation of the trigger by the trigger module.
[0090] Example 14 includes the system as in example 13, in which the temporal series of photos or the continuous video is stored in a sliding buffer of the memory of the data processing unit, in which the sliding buffer is configured to store a predetermined amount of recently captured temporal series of photos or continuous video.
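The sliding buffer of examples 13 and 14 behaves like a fixed-capacity ring buffer: it always holds the most recently captured frames, and when a trigger fires it yields frames from shortly before and shortly after the trigger. A minimal sketch, with all class and method names illustrative:

```python
from collections import deque

class SlidingFrameBuffer:
    """Fixed-capacity buffer holding the most recently captured frames.

    Old frames are discarded automatically once capacity is reached,
    so memory use stays bounded however long capture runs."""

    def __init__(self, capacity):
        self._frames = deque(maxlen=capacity)  # oldest frames fall off the left

    def push(self, frame):
        self._frames.append(frame)

    def on_trigger(self, pre, post, capture_next):
        """Return `pre` frames already buffered plus `post` frames
        obtained from `capture_next()` after the trigger fires."""
        before = list(self._frames)[-pre:]
        after = [capture_next() for _ in range(post)]
        return before + after

# Simulated use: integers stand in for image frames.
buf = SlidingFrameBuffer(capacity=4)
for i in range(10):           # frames 0..9 captured continuously
    buf.push(i)
frames = iter(range(10, 13))  # frames captured after the trigger
clip = buf.on_trigger(pre=2, post=2, capture_next=lambda: next(frames))
# clip holds frames [8, 9, 10, 11]: two before and two after the trigger
```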
[0091] Example 15 includes the system as in example 12, in which the data processing unit of the imaging unit is configured to perform facial recognition analysis of the continuously captured images to determine one or more facial expressions of the one or more viewers.
[0092] Example 16 includes the system as in example 15, in which the data processing unit is configured to identify the captured photo or video frame among the continuous sequence of the photos or the video to be associated with the occurrence based on the particular facial expression.
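Frame selection by facial expression (examples 15 and 16) reduces to picking the frame whose expression score is strongest. In the sketch below, the scoring callable is a stand-in for a real facial-recognition model; the frame layout is illustrative:

```python
def select_trigger_frame(frames, expression_score):
    """Pick the frame whose facial-expression score is highest.

    `frames` is a sequence of (timestamp, frame) pairs;
    `expression_score` is any callable returning a larger number for a
    stronger reaction (a stand-in for a real expression-analysis model).
    """
    return max(frames, key=lambda tf: expression_score(tf[1]))

# Toy frames: each "frame" is a dict with a precomputed smile intensity.
frames = [(t, {"smile": s}) for t, s in [(0, 0.1), (1, 0.9), (2, 0.4)]]
ts, best = select_trigger_frame(frames, lambda f: f["smile"])
# ts == 1: the frame captured at t=1 shows the strongest reaction
```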
[0093] Example 17 includes the system as in example 15, in which one or both of the data processing unit of the imaging unit and the one or more computers are configured to determine a number of viewers at the place, and to process the images including selecting an advertisement based on the number of viewers to be presented with the processed images.
[0094] Example 18 includes the system as in example 17, in which the one or more computers are operable to provide an interactive software application on the mobile device, in which the software application is configured to present the processed images to the viewer with the selected advertisement.
[0095] Example 19 includes the system as in example 1, in which one or both of the data processing unit of the imaging unit and the one or more computers are configured to process the images including attaching metadata to the processed images.
[0096] Example 20 includes the system as in example 19, in which the processed images include links to external websites.
[0097] Example 21 includes the system as in example 19, in which the metadata includes data associated with the event for viewing, data associated with one or more viewers, and/or data associated with the place.
[0098] Example 22 includes the system as in example 21, in which the event includes a sporting event, and the metadata associated with the event includes a score, team or player playing in the sporting event, time the occurrence occurred, location of the sporting event, or a description of the event.
[0099] Example 23 includes the system as in example 21, in which the metadata associated with the user includes demographic data, online social network data, usage data, or location information.
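Attaching event, viewer, and place metadata to a processed image (examples 19 to 23) can be sketched as bundling the image with a structured record. The dictionary layout is an illustrative assumption; a production system might instead embed these fields as EXIF/XMP tags or sidecar JSON:

```python
def attach_metadata(image_bytes, event, viewer, place):
    """Bundle a processed image with event, viewer, and place metadata."""
    return {
        "image": image_bytes,
        "metadata": {
            "event": event,    # e.g. score, teams, time of the occurrence
            "viewer": viewer,  # e.g. demographic or social-network data
            "place": place,    # e.g. home, bar, restaurant
        },
    }

# Illustrative values only; field names are assumptions for the sketch.
record = attach_metadata(
    b"...jpeg bytes...",
    event={"sport": "basketball", "score": "98-95", "occurrence": "buzzer beater"},
    viewer={"handle": "fan123"},
    place={"type": "home"},
)
```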
[00100] Example 24 includes the system as in example 1, in which the one or more display devices include a television, a computer, a mobile device including a tablet, smartphone, smartglasses, or smartwatch, a gaming console, or a radio.
[00101] Example 25 includes the system as in example 1, in which the imaging unit is included as part of the one or more display devices.
[00102] In an example of the present technology (example 26), an imaging service device includes one or more cameras arranged to capture images of one or more viewers at a place to view an event on one or more display devices, in which the images include photos or video; an image processing unit to process the captured images to produce processed images, in which the image processing unit includes a processor, a memory, and a wireless transmitter and receiver to at least partially process the captured images and transmit the images to another device; and a trigger module in communication with one or both of the image processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, in which the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the image processing unit to identify a captured photo or video frame among a continuous sequence of the photos or the video to be associated with the occurrence. The trigger module includes a sensor to detect at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or visual and/or audio content from the one or more display devices. The image processing unit is configured to be in communication with at least one of the one or more display devices or a user device of the one or more viewers to present the processed images on a display screen to the one or more viewers in real-time with respect to the occurrence during their viewing of the event.
[00103] Example 27 includes the device as in example 26, in which the trigger module includes a signal receiver to receive a signal encoded in the presented content from the one or more display devices to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
[00104] Example 28 includes the device as in example 26, in which the trigger module includes a signal receiver to receive a signal provided by a computer in communication with the imaging service device over a communication network to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
[00105] Example 29 includes the device as in example 26, in which the one or more cameras are configured to capture a temporal series of photos or continuous video of the one or more viewers for a predetermined duration of time before and after the generation of the trigger by the trigger module, and in which the temporal series of photos or the continuous video is stored in a sliding buffer of the memory of the image processing unit, in which the sliding buffer is configured to store a predetermined amount of recently captured temporal series of photos or continuous video.
[00106] Example 30 includes the device as in example 29, in which the image processing unit is configured to perform recognition analysis of objects in the captured temporal series of photos or continuous video to determine facial or body features or expressions of the one or more viewers.
[00107] Example 31 includes the device as in example 30, in which the image processing unit is configured to identify the captured photo or video frame among the continuous sequence of the photos or the video to be associated with the occurrence based on a particular facial or body expression.
[00108] Example 32 includes the device as in example 30, in which the image processing unit is configured to determine a number of viewers at the place, and to process the images including selecting an advertisement based on the number of viewers to be presented with the processed images.
[00109] Example 33 includes the device as in example 26, in which the image processing unit is configured to be in communication with one or more computers on a network via the Internet to transmit the images from the imaging service device to the one or more computers for further processing or distribution of the processed images.
[00110] Example 34 includes the device as in example 26, in which the display device includes a television, a computer, a mobile device including a tablet, smartphone, smartglasses, or smartwatch, a gaming console, or a radio.
[00111] In an example of the present technology (example 35), a method for providing images of viewers viewing an event remotely from the event venue includes capturing, using one or more cameras arranged at a place to view an event on a display device, images including a sequence of photos and/or video of one or more viewers at locations in the place, in which the capturing is initiated responsive to a triggering signal received during the viewing of the event, or in which the capturing includes continuously capturing the images of the one or more viewers during the viewing of the event; processing, using a data processing unit in communication with the one or more cameras, the images to produce processed images, in which the processed images include images of the reaction by the one or more viewers to the occurrence of the event; and distributing the processed images to a viewer of the one or more viewers.
[00112] Example 36 includes the method as in example 35, in which the place includes a home or a public or private place of gathering.
[00113] Example 37 includes the method as in example 36, in which the public place of gathering includes a bar, a pub, a restaurant, or an outdoor display screen.
[00114] Example 38 includes the method as in example 35, in which the processing the images includes: mapping the locations to a grid corresponding to predetermined positions associated with the place; determining an image space containing an individual at a particular location in the mapped locations based on the coordinates; and generating the processed image based on the determined image space.
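The grid mapping of example 38 can be sketched with two small helpers: one maps a viewer's position to a grid cell corresponding to predetermined positions in the place, the other yields the image-space rectangle for that cell, from which the processed image could be cropped. Cell dimensions and coordinate units are illustrative:

```python
def location_to_cell(x, y, cell_w, cell_h):
    """Map a viewer's (x, y) position to a grid cell index."""
    return (x // cell_w, y // cell_h)

def crop_box(cell, cell_w, cell_h):
    """Image-space rectangle (left, top, right, bottom) covering one cell."""
    cx, cy = cell
    return (cx * cell_w, cy * cell_h, (cx + 1) * cell_w, (cy + 1) * cell_h)

# A viewer seated at pixel (350, 130), with an assumed 100x100 grid cell size.
cell = location_to_cell(350, 130, 100, 100)  # (3, 1)
box = crop_box(cell, 100, 100)               # (300, 100, 400, 200)
```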
[00115] Example 39 includes the method as in examples 35 or 38, in which the processing the images includes: assigning metadata with the processed image, the metadata including information associated with one or more of the event, the place, or the individual in the processed image.
[00116] Example 40 includes the method as in example 35, further including: capturing a sequence of reference images of the place including location areas corresponding to physical locations of the place; assigning a reference label to each reference image of the sequence of reference images; forming a reference image coordinate space in each of the reference images, the forming the reference image coordinate space including a mapping of the location areas; and generating image template data for each of the image location areas associated with each of the reference images, the image template data based on at least a portion of the reference image coordinate space that is substantially centered on the image location area.
[00117] Example 41 includes the method as in example 40, in which the processing the images includes: assigning an image label to the captured images of the one or more viewers at the place viewing the event, the image label including information corresponding to the reference label; obtaining the image template data of the corresponding reference image for the image based on the image label; and producing the processed image for each of the mapped image location areas, the processed image including image properties corresponding to the image template data.
[00118] Example 42 includes the method as in example 41, in which the image label includes a code corresponding to one or more of the event, the camera, the occurrence including temporal information, or a sequence number of the image.
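The image-label code of example 42 can be composed from the event, the camera, the occurrence time, and a sequence number. The field order, separator, and timestamp format below are illustrative choices, with the assumption that individual fields contain no separator character:

```python
from datetime import datetime, timezone

def make_image_label(event_id, camera_id, occurred_at, seq):
    """Compose a sortable image-label code from event, camera,
    occurrence time, and sequence number (fields must not contain '-')."""
    ts = occurred_at.strftime("%Y%m%dT%H%M%S")
    return f"{event_id}-{camera_id}-{ts}-{seq:04d}"

def parse_image_label(label):
    """Recover the fields from a label produced by make_image_label."""
    event_id, camera_id, ts, seq = label.split("-")
    return event_id, camera_id, ts, int(seq)

# Illustrative identifiers only.
when = datetime(2015, 2, 1, 18, 30, 0, tzinfo=timezone.utc)
label = make_image_label("SB49", "cam2", when, 17)
# label == "SB49-cam2-20150201T183000-0017"
```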
[00119] Example 43 includes the method as in example 35, in which the distributing the processed images includes: transmitting the processed images to the viewer using a wireless communication link to a mobile device of the viewer operating an interactive software application on the mobile device; and presenting the processed images using a display of the mobile device via the software application.
[00120] Example 44 includes the method as in example 35, in which the triggering signal includes at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content to cause the initiation of the capture of the images.
[00121] Example 45 includes the method as in example 44, in which the trigger signal includes a voice command by a viewer.
[00122] Example 46 includes the method as in example 35, further including: detecting a triggering signal including at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content; and processing the trigger signal to select an image among the sequence of photos or video to identify the occurrence, in which the processed images include a series of images before, during, and after the occurrence of the one or more viewers.
[00123] Example 47 includes the method as in example 46, in which the trigger signal includes a voice command by a viewer.
[00124] Example 48 includes the method as in example 35, further including: presenting the processed images on a display screen of the one or more display devices or a user device of the one or more viewers to the one or more viewers in real-time with respect to the occurrence during their viewing of the event.
[00125] Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
[00126] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
[00127] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
[00128] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
[00129] While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
[00130] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
[00131] Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

CLAIMS

What is claimed is:
1. An imaging service system, comprising:
an imaging unit arranged at a place including a home or a public or private place of gathering, where the place includes one or more display devices to present visual and/or audio content, the imaging unit comprising:
one or more cameras arranged to capture images of one or more viewers at the place to view an event on the one or more display devices, wherein the images include photos or video,
a data processing unit in communication with the one or more cameras, the data processing unit including a processor, a memory, and a wireless transmitter and receiver, the data processing unit configured to at least partially process the captured images and transmit the images to another device, and
a trigger module in communication with one or both of the data processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, wherein the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the data processing unit to identify a captured photo or video frame among a sequence of the photos or the video to be associated with the occurrence; and
one or more computers in communication with the imaging unit to receive the captured images from the imaging unit and to process the images to produce processed images, wherein the processed images include images of the reaction by the one or more viewers to the occurrence of the event.
2. The system as in claim 1, wherein the public place of gathering includes a bar, a pub, a restaurant, or an outdoor display screen.
3. The system as in claim 1, wherein the one or more computers are operable to distribute the processed images to the one or more viewers using wireless communication to a mobile device of a viewer of the one or more viewers.
4. The system as in claim 3, wherein the one or more computers are operable to provide an interactive software application on the mobile device, wherein the software application is configured to present the processed images to the viewer.
5. The system as in claim 4, wherein the one or more computers are configured to process the images including selecting an advertisement to be presented with the processed images to the viewer via the software application.
6. The system as in claim 1, wherein the one or more computers are operable to send the processed images to a social network site.
7. The system as in claim 1, wherein the one or more computers are operable to provide the processed images for purchase by the one or more viewers.
8. The system as in claim 1, wherein the trigger module includes a sensor to detect at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
9. The system as in claim 8, wherein the trigger module is operable to detect a voice command by a viewer to cause the initiation of the capture of the images or to cause the identification of the captured photo or video frame to be associated with the occurrence.
10. The system as in claim 1, wherein the imaging unit is in communication with the one or more display devices, and the trigger module includes a signal receiver to receive a signal encoded in the presented content from the one or more display devices to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
11. The system as in claim 1, wherein the trigger module includes a signal receiver to receive a signal provided by the one or more computers to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
12. The system as in claim 1, wherein the one or more cameras are operable to continuously capture the images of the one or more viewers during the viewing of the event.
13. The system as in claim 12, wherein the one or more cameras are configured to capture a temporal series of photos or continuous video of the one or more viewers for a predetermined duration of time before and after the generation of the trigger by the trigger module.
14. The system as in claim 13, wherein the temporal series of photos or the continuous video is stored in a sliding buffer of the memory of the data processing unit, wherein the sliding buffer is configured to store a predetermined amount of recently captured temporal series of photos or continuous video.
15. The system as in claim 12, wherein the data processing unit of the imaging unit is configured to perform facial recognition analysis of the continuously captured images to determine one or more facial expressions of the one or more viewers.
16. The system as in claim 15, wherein the data processing unit is configured to identify the captured photo or video frame among the continuous sequence of the photos or the video to be associated with the occurrence based on the particular facial expression.
17. The system as in claim 15, wherein one or both of the data processing unit of the imaging unit and the one or more computers are configured to determine a number of viewers at the place, and to process the images including selecting an advertisement based on the number of viewers to be presented with the processed images.
18. The system as in claim 17, wherein the one or more computers are operable to provide an interactive software application on the mobile device, wherein the software application is configured to present the processed images to the viewer with the selected advertisement.
19. The system as in claim 1, wherein one or both of the data processing unit of the imaging unit and the one or more computers are configured to process the images including attaching metadata to the processed images.
20. The system as in claim 19, wherein the processed images include links to external websites.
21. The system as in claim 19, wherein the metadata includes data associated with the event for viewing, data associated with one or more viewers, and/or data associated with the place.
22. The system as in claim 21, wherein the event includes a sporting event, and the metadata associated with the event includes a score, team or player playing in the sporting event, time the occurrence occurred, location of the sporting event, or a description of the event.
23. The system as in claim 21, wherein the metadata associated with the user includes demographic data, online social network data, usage data, or location information.
24. The system as in claim 1, wherein the one or more display devices include a television, a computer, a mobile device including a tablet, smartphone, smartglasses, or smartwatch, a gaming console, or a radio.
25. The system as in claim 1, wherein the imaging unit is included as part of the one or more display devices.
26. An imaging service device, comprising:
one or more cameras arranged to capture images of one or more viewers at a place to view an event on one or more display devices, wherein the images include photos or video;
an image processing unit to process the captured images to produce processed images, wherein the image processing unit includes a processor, a memory and a wireless transmitter and receiver to at least partially process the captured images and transmit the images to another device; and
a trigger module in communication with one or both of the image processing unit and the one or more cameras to generate a trigger associated with an occurrence of the event or a reaction by the one or more viewers to the occurrence of the event, wherein the generated trigger causes the one or more cameras to initiate the capture of the images of the viewers at the place, or causes the image processing unit to identify a captured photo or video frame among a continuous sequence of the photos or the video to be associated with the occurrence,
wherein the trigger module includes a sensor to detect at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or visual and/or audio content from the one or more display devices, and
wherein the image processing unit is configured to be in communication with at least one of the one or more display devices or a user device of the one or more viewers to present the processed images on a display screen to the one or more viewers in real-time with respect to the occurrence during their viewing of the event.
27. The device as in claim 26, wherein the trigger module includes a signal receiver to receive a signal encoded in the presented content from the one or more display devices to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
28. The device as in claim 26, wherein the trigger module includes a signal receiver to receive a signal provided by a computer in communication with the imaging service device over a communication network to cause initiation of the capture of the images or to cause identification of the captured photo or video frame to be associated with the occurrence.
29. The device as in claim 26, wherein the one or more cameras are configured to capture a temporal series of photos or continuous video of the one or more viewers for a predetermined duration of time before and after the generation of the trigger by the trigger module, and wherein the temporal series of photos or the continuous video is stored in a sliding buffer of the memory of the image processing unit, wherein the sliding buffer is configured to store a predetermined amount of recently captured temporal series of photos or continuous video.
30. The device as in claim 29, wherein the image processing unit is configured to perform recognition analysis of objects in the captured temporal series of photos or continuous video to determine facial or body features or expressions of the one or more viewers.
31. The device as in claim 30, wherein the image processing unit is configured to identify the captured photo or video frame among the continuous sequence of the photos or the video to be associated with the occurrence based on a particular facial or body expression.
32. The device as in claim 30, wherein the image processing unit is configured to determine a number of viewers at the place, and to process the images including selecting an advertisement based on the number of viewers to be presented with the processed images.
33. The device as in claim 26, wherein the image processing unit is configured to be in communication with one or more computers on a network via the Internet to transmit the images from the imaging service device to the one or more computers for further processing or distribution of the processed images.
34. The device as in claim 26, wherein the display device includes a television, a computer, a mobile device including a tablet, smartphone, smartglasses, or smartwatch, a gaming console, or a radio.
35. A method for providing images of viewers viewing an event remotely from the event venue, comprising:
capturing, using one or more cameras arranged at a place to view an event on a display device, images including a sequence of photos and/or video of one or more viewers at locations in the place, wherein the capturing is initiated responsive to a triggering signal received during the viewing of the event, or wherein the capturing includes continuously capturing the images of the one or more viewers during the viewing of the event;
processing, using a data processing unit in communication with the one or more cameras, the images to produce processed images, wherein the processed images include images of the reaction by the one or more viewers to the occurrence of the event; and
distributing the processed images to a viewer of the one or more viewers.
36. The method of claim 35, wherein the place includes a home or a public or private place of gathering.
37. The method of claim 36, wherein the public place of gathering includes a bar, a pub, a restaurant, or an outdoor display screen.
38. The method of claim 35, wherein the processing the images includes:
mapping the locations to a grid corresponding to predetermined positions associated with the place;
determining an image space containing an individual at a particular location in the mapped locations based on the coordinates; and
generating the processed image based on the determined image space.
39. The method of claim 35 or 38, wherein the processing the images includes: assigning metadata with the processed image, the metadata including information associated with one or more of the event, the place, or the individual in the processed image.
40. The method of claim 35, further comprising:
capturing a sequence of reference images of the place including location areas corresponding to physical locations of the place;
assigning a reference label to each reference image of the sequence of reference images;
forming a reference image coordinate space in each of the reference images, the forming the reference image coordinate space including a mapping of the location areas; and
generating image template data for each of the image location areas associated with each of the reference images, the image template data based on at least a portion of the reference image coordinate space that is substantially centered on the image location area.
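The reference-image preparation of claim 40 (labeling each reference image, mapping its location areas into a coordinate space, and deriving template data substantially centered on each area) might be sketched as below. The label format and region sizes are assumptions for illustration, not the claimed implementation.

```python
# Sketch of claim 40: for each reference image, assign a label and
# precompute per-area template data (here, a bounding region centered
# on the location area's coordinates).

def build_templates(reference_images, half_w=80, half_h=60):
    """reference_images: list of dicts with 'areas' -> {name: (x, y)}.
    Returns {reference_label: {area_name: (x0, y0, x1, y1)}}."""
    templates = {}
    for index, ref in enumerate(reference_images):
        label = f"REF-{index:03d}"        # assigned reference label
        spaces = {}
        for name, (x, y) in ref["areas"].items():
            # template region substantially centered on the area
            spaces[name] = (x - half_w, y - half_h, x + half_w, y + half_h)
        templates[label] = spaces
    return templates

refs = [{"areas": {"couch": (200, 150), "chair": (400, 180)}}]
print(build_templates(refs))
```

Claim 41 would then look up these precomputed regions by reference label when producing each processed image.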
41. The method of claim 40, wherein the processing the images includes:
assigning an image label to the captured images of the one or more viewers at the place viewing the event, the image label including information corresponding to the reference label;
obtaining the image template data of the corresponding reference image for the image based on the image label; and
producing the processed image for each of the mapped image location areas, the processed image including image properties corresponding to the image template data.
42. The method of claim 41, wherein the image label includes a code corresponding to one or more of the event, the camera, the occurrence including temporal information, or a sequence number of the image.
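The image label of claim 42, encoding the event, the camera, temporal information about the occurrence, and a sequence number, could take many forms; one hypothetical dotted encoding is sketched below (the identifiers are made up).

```python
# Sketch of claim 42: encode event, camera, occurrence timestamp, and
# sequence number into one label string, and parse it back out.
# Assumes event/camera identifiers contain no "." characters.

def make_label(event_id, camera_id, timestamp, seq):
    """Build a dotted label with a zero-padded sequence number."""
    return f"{event_id}.{camera_id}.{timestamp}.{seq:06d}"

def parse_label(label):
    """Invert make_label, restoring numeric fields."""
    event_id, camera_id, timestamp, seq = label.split(".")
    return event_id, camera_id, int(timestamp), int(seq)

label = make_label("EVT42", "CAM03", 1423526400, 17)
print(label)   # EVT42.CAM03.1423526400.000017
```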
43. The method of claim 35, wherein the distributing the processed images includes:
transmitting the processed images to the viewer using a wireless communication link to a mobile device of the viewer operating an interactive software application on the mobile device; and
presenting the processed images using a display of the mobile device via the software application.
44. The method of claim 35, wherein the triggering signal includes at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content to cause the initiation of the capture of the images.
45. The method of claim 44, wherein the triggering signal includes a voice command by a viewer.
46. The method of claim 35, further comprising:
detecting a triggering signal including at least one of a sound, visual stimulus, or mechanical perturbation of the one or more viewers or the visual and/or audio content; and
processing the triggering signal to select an image among the sequence of photos or video to identify the occurrence,
wherein the processed images include a series of images before, during, and after the occurrence of the one or more viewers.
47. The method of claim 46, wherein the triggering signal includes a voice command by a viewer.
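Claims 46 and 47 describe detecting a trigger (a sound, a visual stimulus, or a voice command) and selecting a series of images before, during, and after the occurrence. A rolling-buffer sketch is shown below; the scalar loudness score, buffer sizes, and threshold are assumptions for illustration only.

```python
from collections import deque

# Sketch of claims 46-47: frames are captured continuously into a
# rolling buffer; when a trigger (here, a loudness spike standing in
# for a cheer or voice command) is detected, a series of frames
# before, during, and after the occurrence is emitted.

class ReactionSelector:
    def __init__(self, before=3, after=2, threshold=0.8):
        self.history = deque(maxlen=before)  # frames preceding occurrence
        self.after = after
        self.threshold = threshold
        self.pending = None                  # series being completed
        self.target = 0

    def push(self, frame, loudness):
        """Feed one frame; return the before/during/after series once
        the trailing frames have arrived, else None."""
        if self.pending is not None:         # collecting "after" frames
            self.pending.append(frame)
            if len(self.pending) == self.target:
                series, self.pending = self.pending, None
                return series
        elif loudness >= self.threshold:     # occurrence detected
            self.pending = list(self.history) + [frame]
            self.target = len(self.pending) + self.after
        else:
            self.history.append(frame)
        return None

selector = ReactionSelector(before=3, after=2, threshold=0.8)
series = None
for frame, loudness in enumerate([0.1, 0.1, 0.1, 0.1, 0.9, 0.1, 0.1], start=1):
    series = selector.push(frame, loudness) or series
print(series)   # [2, 3, 4, 5, 6, 7] — three before, the trigger frame, two after
```

The `deque(maxlen=...)` keeps only the most recent pre-trigger frames, which is what makes the "before" portion of the series available after the fact.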
48. The method of claim 35, further comprising:
presenting the processed images to the one or more viewers, in real-time with respect to the occurrence during their viewing of the event, on a display screen of the display device or on a user device of the one or more viewers.
PCT/US2015/015071 2014-02-07 2015-02-09 Real-time imaging systems and methods for capturing in-the-moment images of users viewing an event in a home or local environment Ceased WO2015120413A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461937455P 2014-02-07 2014-02-07
US61/937,455 2014-02-07

Publications (1)

Publication Number Publication Date
WO2015120413A1 true WO2015120413A1 (en) 2015-08-13

Family

ID=53778525

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/015071 Ceased WO2015120413A1 (en) 2014-02-07 2015-02-09 Real-time imaging systems and methods for capturing in-the-moment images of users viewing an event in a home or local environment

Country Status (1)

Country Link
WO (1) WO2015120413A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070154113A1 (en) * 2005-12-30 2007-07-05 Hon Hai Precision Industry Co., Ltd. System and method for image measuring
US20090232354A1 (en) * 2008-03-11 2009-09-17 Sony Ericsson Mobile Communications Ab Advertisement insertion systems and methods for digital cameras based on object recognition
US20110115930A1 (en) * 2009-11-17 2011-05-19 Kulinets Joseph M Image management system and method of selecting at least one of a plurality of cameras
US20130014142A1 (en) * 2009-03-20 2013-01-10 Echostar Technologies L.L.C. Systems and methods for memorializing a viewers viewing experience with captured viewer images
US8643746B2 (en) * 2011-05-18 2014-02-04 Intellectual Ventures Fund 83 Llc Video summary including a particular person

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11659251B2 (en) * 2016-09-23 2023-05-23 DISH Technologies L.L.C. Integrating broadcast media streams with user media streams
US20210120310A1 (en) * 2016-09-23 2021-04-22 DISH Technologies L.L.C. Integrating broadcast media streams with user media streams
WO2018071145A1 (en) * 2016-10-12 2018-04-19 Google Llc Actionable suggestions for activities
US12468965B2 (en) 2016-11-01 2025-11-11 Google Llc Actionable suggestions for activities
US10846612B2 (en) 2016-11-01 2020-11-24 Google Llc Actionable suggestions for activities
US11887016B2 (en) 2016-11-01 2024-01-30 Google Llc Actionable suggestions for activities
US10255549B2 (en) 2017-01-27 2019-04-09 International Business Machines Corporation Context-based photography and captions
US9940576B1 (en) 2017-01-27 2018-04-10 International Business Machines Corporation Context-based photography and captions
US10423822B2 (en) 2017-03-15 2019-09-24 International Business Machines Corporation Video image overlay of an event performance
US11151364B2 (en) 2017-03-15 2021-10-19 International Business Machines Corporation Video image overlay of an event performance
GB2563267A (en) * 2017-06-08 2018-12-12 Reactoo Ltd Methods and systems for generating a reaction video
US12190669B2 (en) 2017-09-14 2025-01-07 Adam J. Epstein Multi-activity venue automation
EP3682386A4 (en) * 2017-09-14 2021-01-13 Epstein, Adam J. AUTOMATION OF PLACES WITH MULTIPLE ACTIVITIES
US11538257B2 (en) 2017-12-08 2022-12-27 Gatekeeper Inc. Detection, counting and identification of occupants in vehicles
US20220086537A1 (en) * 2018-03-30 2022-03-17 Scener Inc. Socially annotated audiovisual content
US11871093B2 (en) * 2018-03-30 2024-01-09 Wp Interactive Media, Inc. Socially annotated audiovisual content
CN112105981B (en) * 2018-05-01 2022-12-02 斯纳普公司 Automatically send image capture glasses
CN112105981A (en) * 2018-05-01 2020-12-18 斯纳普公司 Automatic sending image capture glasses
US11087119B2 (en) 2018-05-16 2021-08-10 Gatekeeper Security, Inc. Facial detection and recognition for pedestrian traffic
EP3794503A4 (en) * 2018-05-16 2022-01-12 Gatekeeper, Inc. FACIAL DETECTION AND RECOGNITION FOR PEDESTRIAN TRAFFIC
WO2019222051A1 (en) * 2018-05-16 2019-11-21 Gatekeeper Security, Inc. Facial detection and recognition for pedestrian traffic
US10839200B2 (en) 2018-05-16 2020-11-17 Gatekeeper Security, Inc. Facial detection and recognition for pedestrian traffic
US11140308B2 (en) 2018-07-25 2021-10-05 International Business Machines Corporation Life-logging system with third-person perspective
EP3941080A4 (en) * 2019-03-13 2023-02-15 Balus Co., Ltd. LIVE STREAMING SYSTEM AND METHOD
CN113767643A (en) * 2019-03-13 2021-12-07 巴鲁斯株式会社 Live broadcast transmission system and live broadcast transmission method
JP7188831B2 (en) 2019-03-13 2022-12-13 バルス株式会社 Live distribution system and live distribution method
CN113767643B (en) * 2019-03-13 2024-04-05 巴鲁斯株式会社 Live broadcast transmission system and live broadcast transmission method
JP2022164730A (en) * 2019-03-13 2022-10-27 バルス株式会社 Live distribution system and live distribution method
US11501541B2 (en) 2019-07-10 2022-11-15 Gatekeeper Inc. Imaging systems for facial detection, license plate reading, vehicle overview and vehicle make, model and color detection
US11736663B2 (en) 2019-10-25 2023-08-22 Gatekeeper Inc. Image artifact mitigation in scanners for entry control systems
US12464231B2 (en) 2021-03-29 2025-11-04 1908268 Ontario Inc. System and method for automated control of cameras in a venue

Similar Documents

Publication Publication Date Title
WO2015120413A1 (en) Real-time imaging systems and methods for capturing in-the-moment images of users viewing an event in a home or local environment
US20220150572A1 (en) Live video streaming services
US11924397B2 (en) Generation and distribution of immersive media content from streams captured via distributed mobile devices
US9832516B2 (en) Systems and methods for multiple device interaction with selectably presentable media streams
TWI515032B (en) System, method, viewing device for collaborative entertainment platform and machine-readable medium
US10778727B2 (en) Content enabling system
US9264770B2 (en) Systems and methods for generating media asset representations based on user emotional responses
CN105190480B (en) Message processing device and information processing method
US20140178029A1 (en) Novel Augmented Reality Kiosks
US8789082B2 (en) Method and apparatus for enabling interactive dynamic movies
US11589128B1 (en) Interactive purchasing of products displayed in video
JP2003529975A (en) Automatic creation system for personalized media
CN1656808A (en) Presentation synthesizer
US20230031160A1 (en) Information processing apparatus, information processing method, and computer program
KR102313309B1 (en) Personalized live broadcasting system
JP2023153790A (en) program
JP6523038B2 (en) Sensory presentation device
WO2020050097A1 (en) Information processing device, method, and program for generating composite image for user
JP7605108B2 (en) AI information processing device and AI information processing method
US20250004692A1 (en) Shared viewing experience enhancement
JP2023063614A (en) Delivery device, delivery method and delivery program
Li et al. Mobilevideotiles: video display on multiple mobile devices
EP4451689A1 (en) System and method for tagging and transforming long form streamed video content into user identified video segments and content cues for services
EP4451688A1 (en) System and method for tagging and transforming long form streamed video content into user identified video segments and content cues for services
KR102196938B1 (en) Method and apparatus for providing advertisement content and recording medium thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15746179

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15746179

Country of ref document: EP

Kind code of ref document: A1