
WO2013116163A1 - Method of creating a media composition and apparatus therefore - Google Patents


Info

Publication number
WO2013116163A1
WO2013116163A1 (PCT Application No. PCT/US2013/023499)
Authority
WO
WIPO (PCT)
Prior art keywords
media
electronic device
low resolution
remote
clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2013/023499
Other languages
French (fr)
Inventor
Michael Edward ZALETEL
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US14/374,719 priority Critical patent/US20150058709A1/en
Publication of WO2013116163A1 publication Critical patent/WO2013116163A1/en
Anticipated expiration legal-status Critical
Priority to US15/599,621 priority patent/US20170257414A1/en
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring

Definitions

  • the present invention relates to methods and apparatuses for recording and storing media representations perceived by media recording devices.
  • an individual user can only record media that the user can perceive from his or her own electronic device (i.e., camera, microphone, camcorder, smart phone, etc.). It may be desirable for an individual to record media from a music concert, sporting event or other event that the user is unable to attend, or from a vantage point that is different from the user's vantage point.
  • a user also might want to create a composition, in real-time, that combines media from his or her device with media from many other devices.
  • the present invention relates to methods and apparatuses for recording media of a remote nature perceived by a remote sensor of a remote media recording device onto a first memory device of a first portable electronic device.
  • the first memory device records a low resolution or placeholder version of the remote media recording device input, the low resolution or placeholder version being later replaced by a corresponding high resolution version of the media input.
  • the invention can be a method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising: a) receiving, on the first electronic device, a plurality of low resolution media streams of high resolution media clips from a plurality of remote electronic devices; b) displaying, in the first display device, visual indicia of each of the low resolution media streams being received by the first electronic device; c) activating one or more of the low resolution media streams being received by the first electronic device in response to user input via the first user input means; d) for each low resolution media stream that is activated in step c), recording a low resolution media clip segment of that low resolution media stream in an interim media composition that resides on the first memory device; e) for each low resolution media clip segment recorded in the interim media composition, receiving on the first electronic device a high resolution media clip segment from the remote electronic device that corresponds to that low
  • the invention can be a method of creating a video composition comprising: a) displaying, in a first display device of a first electronic device, a plurality of remote camera views perceived by a plurality of remote camera lenses of a plurality of remote electronic devices; b) activating one or more of the plurality of the remote camera views displayed in the first display device via user input means of the first electronic device; c) for each remote camera view that is activated in step b), recording, on a first memory device of the first electronic device, a low resolution video clip segment of the remote camera view as part of an interim video composition, and wherein for each remote camera view that is activated in step b); d) for each low resolution video clip segment recorded in step c), acquiring from the remote electronic devices a high resolution video clip segment that corresponds to that low resolution video clip segment; and e) automatically replacing the low resolution video clip segment in the video composition recorded on the first memory device of the first electronic device with the high resolution video clip segments.
  • the invention can be a non-transitory computer-readable storage medium encoded with instructions which, when executed on a processor of a first electronic device, perform a method comprising: a) displaying, in a first display device of a first electronic device, a.
  • step b) activating one or more of the plurality of the remote camera views displayed in the first display device in response to user input inputted via user input means of the first electronic device; c) for each remote camera view that is activated in step b): (i) recording, on a first memory device of the first electronic device, a low resolution video clip of the remote camera view as part of a video composition; and (ii) generating and transmitting a first record signal to the remote electronic devices, thereby causing a high resolution video clip of the remote camera view to be recorded on the remote electronic device capturing that remote camera view; d) for each high resolution video clip recorded in step c), generating and transmitting a signal that causes the high resolution video clips from the remote electronic devices to be transmitted to the first electronic device; and e) upon the first portable electronic device receiving the high resolution video clips transmitted in step d), automatically replacing the low resolution video clips in the video composition
  • the invention can be a method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising: a) receiving, on the first electronic device, a plurality of low resolution media streams of high resolution media clip files from one or more databases, the high resolution media clip files stored on the one or more databases; b) displaying, in the first display device, visual indicia of each of the low resolution media streams being received by the first electronic device; c) activating one or more of the low resolution media streams being received by the first electronic device in response to user input via the first user input means; d) for each low resolution media stream that is activated in step c), recording a low resolution media clip segment of that low resolution media stream in an interim media composition that resides on the first memory device; e) for each low resolution media clip segment recorded in the interim media composition, receiving on the first electronic device a high resolution media clip
  • the invention can be a method of creating a video composition comprising: a) displaying, in a first display device of a first portable electronic device, a first camera view perceived by a first camera lens of the first portable electronic device; b) transmitting, to the first portable electronic device, a plurality of low resolution video streams of high resolution video clips previously stored in one or more databases; c) displaying, in the first display device of the first electronic device, the low resolution video streams, wherein the first camera view and the low resolution video streams are simultaneously displayed in the first display device; d) recording, on the first memory device of the first portable electronic device, a low resolution video clip for each of the low resolution video streams activated by a user as part of a video composition; e) for each low resolution video clip recorded on the first memory device of the first portable electronic device, transmitting corresponding ones of the high resolution clips from the one or more databases to the first portable electronic device; and f) automatically replacing the low resolution
  • the invention can be a method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising: a) displaying, in the first display device, a visual indicia for each of a plurality of electronic media recording devices; b) recording, on each of the electronic media recording devices, a perceived event as a media clip that contains an electronic media recording device identifier; c) selectively activating each of the visual indicia of the plurality of electronic media recording devices during step b) to generate and record a proxy clip segment in an interim video composition on the first memory device, wherein each proxy clip segment is associated with the electronic media recording device whose visual indicia was activated to generate that proxy clip segment and a temporal period; d) for each proxy clip segment recorded in the interim media composition, receiving on
  • the invention can be a non-transitory computer-readable storage medium encoded with instructions which, when executed on a processor of a first electronic device, perform any one of the methods described above.
  • the invention can be an electronic device comprising: a first processor; a first memory device; a first transceiver; and instructions residing on the first memory device, which when executed by the first processor, cause the first processor to perform any of the methods described above.
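The proxy-clip workflow claimed above (record low resolution placeholders while editing live, then swap in the high resolution originals when they arrive) can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the names `ClipSegment`, `Composition`, and the (device, time) matching key are assumptions chosen for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class ClipSegment:
    device_id: str   # which remote electronic device (player) captured the clip
    start: float     # temporal period of the segment within the composition (seconds)
    end: float
    resolution: str  # "low" while a proxy, "high" once replaced

@dataclass
class Composition:
    segments: list = field(default_factory=list)

    def record_proxy(self, device_id: str, start: float, end: float) -> None:
        # Record a low resolution proxy segment in the interim composition
        # as the user switches between remote camera views.
        self.segments.append(ClipSegment(device_id, start, end, "low"))

    def replace_with_high_res(self, high_res_clips: dict) -> None:
        # Later, when the remote devices transmit their high resolution clips,
        # automatically replace each proxy, matched by device and temporal period.
        for seg in self.segments:
            if (seg.device_id, seg.start, seg.end) in high_res_clips:
                seg.resolution = "high"

comp = Composition()
comp.record_proxy("player-1", 0.0, 5.0)
comp.record_proxy("player-2", 5.0, 9.0)
comp.replace_with_high_res({("player-1", 0.0, 5.0): "clip1.mp4",
                            ("player-2", 5.0, 9.0): "clip2.mp4"})
```

In a real implementation the replacement step would rewrite the media file on the first memory device; here only a status flag is flipped to show the bookkeeping.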
  • Figure 1 is a schematic of an electronic device in accordance with an embodiment of the present invention.
  • Figure 2 is a schematic diagram of a system overview in accordance with an embodiment of the present invention.
  • Figure 3 is a schematic diagram illustrating communication between a first electronic device and a plurality of remote electronic devices.
  • Figure 4 is a screen shot of a login page in accordance with an embodiment of the present invention.
  • Figure 5 is a screen shot of a list of remote electronic devices that have initiated a sharing status in accordance with an embodiment of the present invention.
  • Figure 6 is a screen shot of a first electronic device illustrating the streaming of multiple low resolution video clips from a plurality of remote electronic devices in accordance with an embodiment of the present invention.
  • Figure 7 is a screen shot illustrating an edit switching window in accordance with an embodiment of the present invention.
  • Figure 8 is a screen shot illustrating how a user can select audio tracks for the audio for a multi-camera session in accordance with an embodiment of the present invention.
  • Figure 9 is a screen shot illustrating a searching tool for locating video clips based on qualification criteria in accordance with an embodiment of the present invention.
  • the present invention relates to methods and apparatus for recording media clips such as camera views or audio input from one or more remote electronic devices (which may be referred to herein as players) positioned at various different locations throughout the world onto a first electronic device (which may be referred to herein as a stage).
  • the remote electronic devices may be remote media recording devices that are capable of recording any type of media including video, audio, still photos, text graphics or the like.
  • the remote electronic devices are deemed remote due to the remote electronic devices being at a location that is different from the location of the first electronic device that is acting as a stage, regardless of whether one or more of the remote electronic devices is adjacent to the first electronic device or thousands of miles away from the first electronic device.
  • the first electronic device or stage is able to create a media composition with various media clips or feeds from the different remote electronic devices or players.
  • the first electronic device or stage can record media from the remote electronic devices and record/store the media onto its own memory.
  • the media is video
  • the first electronic device or stage can record video based on the camera views perceived from camera lenses of one or more of the remote electronic devices or players and store the recorded video from the one or more remote electronic devices or players onto its own memory.
  • the first electronic device or stage can simultaneously record video based on the camera view perceived from its own camera lens and store that video into its memory (the first electronic device can alternatively store other types of media, such as that listed above).
  • the user of the first electronic device can switch back and forth among and between the various views so that the media composition that is created is a fully edited media composition.
  • the media composition is created and stored into the memory of the first electronic device or stage as a composition of the media recorded from the remote electronic devices/players and/or the media recorded directly by the first electronic device/stage.
  • the user can then use editing techniques to maneuver the different media clips by changing their order in the final media composition and create transitions in the media composition, such as that which is described in U.S. Patent Application Publication No. 2012/0308209, filed September 8, 2011, the entirety of which is incorporated herein by reference.
  • the present invention is an application for an electronic device, which can be a portable electronic device such as a mobile communication device, a camera or a desktop computer.
  • the application is a video composition creation program such that the media described herein above is video.
  • the application is an audio composition creation program such that the media described herein above is audio.
  • the application is an audio/video/still photo composition creation program. Any combination of media can be used with the invention described herein.
  • the first electronic device or stage comprises at least one camera/camcorder lens or at least one audio input sensor.
  • the first electronic device may not comprise a camera/camcorder, but rather may remotely connect to another electronic device that does comprise a camera/camcorder.
  • the first electronic device or stage merely comprises a microphone for detecting and storing audio.
  • the electronic device or mobile communication device may be a smart phone or tablet, such as but not limited to, an iPhone® or an iPad®, or a Blackberry®, Windows®, Mac OS®, bada® or Android® enabled device, that preferably but not necessarily comprises at least one camera/camcorder.
  • the present invention may be an application that can be purchased and downloaded to the electronic device or mobile communication device by the user.
  • the download of the application may be done through a wired or wireless connection to the manufacturer's or service provider's application database.
  • the present invention would reside on a computer readable medium located within the portable electronic device, mobile communication device, desktop computer or mobile camera/camcorder.
  • the electronic device 100 may be a portable electronic device, which includes mobile communication devices such as a smart phone or tablet that comprises a camera/camcorder, whereby the user downloads the present invention as an application and stores the application on a computer readable medium located within the electronic device 100.
  • the electronic device 100 may be manufactured with the features of the present invention built in.
  • the electronic device 100 comprises a display device 101, a lens 102, a flash 103, a processor 104, a power source 105, a memory 106 and a transceiver 107.
  • the lens 102 and the flash 103 may be omitted from the electronic device 100. Further, as discussed in more detail below, the electronic device 100 may comprise any number of lenses 102 or flashes 103.
  • the electronic device 100 is a mobile communication device such as a mobile phone, smart phone or tablet, such as but not limited to, an iPhone®, iPad®, Android®, Blackberry®, bada® or Windows® enabled device.
  • the invention is not so limited and the electronic device 100 may also be a digital camera, camcorder or surveillance camera that has the present invention stored in a computer readable medium therein, or a desktop computer that has an attached or embedded camera and the present invention stored in a computer readable medium therein.
  • the electronic device 100 may also be a camera, camcorder or the like that does not have the present invention stored therein, but rather is in communication (wireless or wired) with another electronic device that does have the present invention stored therein.
  • the electronic device 100 acts as a bridge for an external camera device over WiFi or other communication pathways. It should be noted that in alternate embodiments, the present invention may be stored on a computer readable medium within the electronic device 100 prior to the user purchasing the electronic device 100.
  • the processor 104 is configured to control the operation of the display device 101, lens 102, flash 103, power source 105, memory 106 and transceiver 107.
  • the power source 105, which may be batteries, solar power or the like, is configured to provide power to the display device 101, lens 102, flash 103, processor 104, memory 106 and transceiver 107.
  • the memory 106 is configured to store photographs and/or video clips recorded by the lens 102 of the electronic device 100 or recorded by a lens of a remote or second electronic device that is different from the electronic device 100, as will be better understood from the discussion below.
  • the memory 106 can also be used to store audio, graphics, text or any other type of media perceived by a remote electronic device with which the electronic device 100 is in operable electronic communication.
  • the transceiver 107 is capable of transmitting signals from the electronic device 100 to remote electronic devices and is also capable of receiving signals from the remote electronic devices. In some instances, the transceiver 107 communicates with remote electronic devices through a server. Thus, the transceiver 107 enables communication among and between various different electronic devices.
  • the lens 102 is a standard camera or camcorder lens that is configured to record video clips and photographs in response to a user input.
  • the electronic device 100 of the present invention may include more than one lens 102.
  • the electronic device 100 may comprise a first lens on the front of the electronic device 100 and a second lens on the back of the electronic device 100.
  • the flash 103 is configured to provide light to the area being recorded by the lens 102. In one embodiment where the camera/camcorder of the electronic device 100 comprises more than one lens 102, the electronic device 100 may also include more than one flash 103, each flash 103 corresponding to a lens 102.
  • the invention is not so limited and in alternate embodiments the flash 103 may be omitted. In certain embodiments both the lens 102 and the flash 103 may be omitted.
  • the lens 102 and the flash 103 will not be necessary.
  • the display device 101 is configured to display a view from the perspective of the lens 102 to enable the user to see the area of which they are taking a photograph or video clip.
  • the display device 101 is configured to display an image of a real-world event perceived by the lens 102 of the electronic device 100, prior to, during and after the recording of a video clip or photograph.
  • Alternatively, as will be understood from the description below, the display device 101
  • the display device 101 is a touch-screen that further comprises a graphical user interface (GUI) through the use of an onscreen touch interface configured to receive user inputted commands.
  • GUI graphical user interface
  • the user input means referred to herein is achieved by a user touching the GUI in a desired location to achieve a desired functionality.
  • the electronic device 100 may further comprise a separate, mechanical user interface, such as, for example, buttons, triggers, or scroll wheels.
  • the present invention resides on a computer readable medium within a mobile communication device such as a smart phone or tablet. In such embodiments, the electronic device 100 may be configured such that if a video clip, audio clip, or photograph is being recorded and a composition is being created when the user receives a phone call, text message, system alert, or simply needs to leave the application, the video clip, photograph or audio clip and/or composition is automatically saved or cached in the memory 106 so as not to be lost.
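The auto-save behavior described above (caching the in-progress composition when the application is interrupted) can be sketched as below. This is a minimal illustrative sketch, not the patent's implementation; the file name and JSON format are assumptions, and a real mobile app would invoke `cache_composition` from the platform's lifecycle callback for an incoming call or alert.

```python
import json
import os
import tempfile

AUTOSAVE_NAME = "composition_autosave.json"  # assumed cache file name

def cache_composition(state: dict, cache_dir: str) -> str:
    """Persist the in-progress composition so an interruption (phone call,
    text message, system alert) cannot destroy the user's work."""
    path = os.path.join(cache_dir, AUTOSAVE_NAME)
    with open(path, "w") as f:
        json.dump(state, f)
    return path

def restore_composition(cache_dir: str):
    """Reload the cached composition when the application resumes,
    or return None if no autosave exists."""
    path = os.path.join(cache_dir, AUTOSAVE_NAME)
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)

# usage: simulate an interruption mid-edit, then resume
cache_dir = tempfile.mkdtemp()
cache_composition({"clips": ["player-1:0-5", "player-2:5-9"]}, cache_dir)
restored = restore_composition(cache_dir)
```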
  • the electronic device 100 may further comprise advanced features such as a global positioning system (GPS) chip, a compass, an accelerometer chip, a gyroscope chip, a thermometer chip, a temperature sensor, a facial detection system or service Application Programming Interface ("API"), a voice detection system or service API, a Speech-To-Text (STT) system or service API, a Text-To-Speech (TTS) system or service API, a translation system or service, a pixel-motion detection system or service API, a
  • GPS global positioning system
  • the present invention is further configured to monitor and save any data recorded or obtained by any of the above mentioned chips, sensors, systems and components (collectively referred to hereinafter as "advanced features"). Further, the resulting data recorded or obtained by any of the advanced features may be saved as metadata and incorporated into recorded video clips, photographs or compositions created by the present invention.
  • GPS coordinates, compass headings, accelerometer and gyroscope readings, temperature and altitude data may be recorded and saved into a recorded video clip, photograph or composition.
  • an assisted GPS chip could be utilized within the functionality of the present invention to provide such things as automatic captions or titles with location (Philadelphia, PA) by looking up GPS coordinates in a world city database on the fly. This may allow users to record live video from cameras worldwide, whereby each recorded media segment could show the GPS coordinates or city. GPS could also be used to display a running log of distance traveled from the beginning of the video to the end of the video or, for example, current speed in miles per hour.
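The running distance log described above can be computed from successive GPS fixes with the standard haversine formula. This is an illustrative sketch only; the patent does not specify a formula, and the function names here are assumptions.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two GPS fixes, in kilometres
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def running_distance(fixes):
    """Cumulative distance traveled from the beginning of the video,
    one entry per GPS fix, suitable for burning into frame captions."""
    total, log = 0.0, [0.0]
    for (a, b) in zip(fixes, fixes[1:]):
        total += haversine_km(a[0], a[1], b[0], b[1])
        log.append(total)
    return log
```

Dividing the distance between consecutive fixes by their timestamp difference would likewise give the current speed mentioned in the text.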
  • the digital compass chip could be utilized to optionally display (burn-in) to the video clip or composition the direction the camera is facing, such as SW or NNE 280 degrees. Further, a compass chip could also be used in combination with GPS, gyroscope and a HUD (heads up display) to help a user replicate a video taken years prior at the same exact location. For example, a user could take a video at the same spot every month for two years and use the present invention to load older, previously recorded video clips and then add a newly recorded video clip taken at precisely the same location, direction and angle of view.
  • the 3-axis gyroscope could be used for scientific applications along with accelerometer data and could be burned into a recorded video clip or composition for later analysis. Further, it also could be used to auto-stabilize shaky video clips or photographs recorded by the present invention.
  • An altimeter could be used to burn in altitude information into a recorded media segment. This information could appear at the end of the composition in the credits automatically or could be burned-in and adjusting in real-time on a video clip or composition to show ascent or descent.
  • the temperature sensor could be used to automatically add temperature range to credits or to burn in on video. Further, a heart rate sensor could be used if a user wants heart rate information to be shown on a video clip, for example if the user is on a roller coaster.
  • the Facial Detection system or service API can be used to determine the number of unique persons in the video clip(s), their names and other related information if available locally on the device 100 or via the Internet Information acquired via the facial detection system or service API may be used to automatically add captions, bubbles or applicable information on video clips, photographs, the title screen,, the credits screen or any other portion of the finalized composition.
  • the Voice Detection system or service API can be used to determine the number of unique persons in the video eiip(s), their identities or names and other related information if available locally on the device or via the Internet, information acquired via the voice detection system or service API. may be used, to automatically add captions, bubbles or applicable information on video clips, photographs, the title screen, the credits scree or any other portion of the finalized composition.
  • the Speech- ⁇ ' fext system or service API can. be used to convert the spoken word portions of a recorded audio track of a video clip or the audio track of an audio recording into written text where possible for the purposes of automatically adding subtitles, closed- captionhig or meta-data to a video clip or the final composition.
  • the Texi-To-Speech system or service API can be used to convert textual data either gathered automatically, such as current time, weather, date and location, or inputted by the user, such as titles and credits, into spoken voice audio for the purposes of automatically adding this audio to a recorded video clip or the final composition. This may be used to assisi the visually impaired or in combination with the Translation Service API to convert, the text gathered from the Speech-To-Text service into spoken audio track in an alternate language.
  • the Translation system or service API can be used for the purposes of automatically converting textual data either gathered automatically, such as current time, weather, date and location, or input by the user, such as titles and credits, into another language .for localization or versionhig when sharing over worldwide social networks or in combination with Speech- To-Texi and Text- To- Speech, to provide visual or audible translations of content.
  • a Pixel-Motion Detection system or service API can be used to determine the speed of movement either of the camera or the recording subject for the purposes of smoothing out camera, motion for an individual recorded video clip or the iinal composition.
  • the Pixel-Motion Detection system, or service API may also be used to automatically select a music background or sound FX audio based, on the measured movement for a individual recorded video clip or the final composition.
  • the Pixel-Motion Detection system of service API uses the beats per minute of a song to determine whether it matches the measured movement for a recorded video clip or final composition, in alternate embodiments, the determination of whether a song is "fast” or "slow” may be determined by the user.
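The tempo-matching idea above (comparing a song's beats per minute against measured pixel motion to decide whether the track suits the clip) can be sketched as a simple classifier. This is an illustrative sketch only; the thresholds and function names are assumptions, since the patent leaves the exact criteria open (and even allows the user to decide).

```python
def classify_tempo(bpm: float, fast_threshold: float = 120.0) -> str:
    # a song at or above the threshold BPM counts as "fast"
    return "fast" if bpm >= fast_threshold else "slow"

def classify_motion(pixel_speed: float, fast_threshold: float = 30.0) -> str:
    # pixel_speed: average per-frame pixel displacement reported by
    # the pixel-motion detection service (units are an assumption here)
    return "fast" if pixel_speed >= fast_threshold else "slow"

def song_matches_clip(bpm: float, pixel_speed: float) -> bool:
    # the song suits the clip when its tempo class matches the motion class
    return classify_tempo(bpm) == classify_motion(pixel_speed)
```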
  • a music database system or service API can be a locally or publicly accessible database of songs or tracks with information such as appropriate locations, seasons, times of day, genres and styles for the purposes of using known information about the video composition, and automatically selecting and incorporating a particular music track into a finalized composition based on the known information. For example, such a database might suggest a Holiday song on a snowy day in December in Colorado, USA or a Beach Boys song on a sunny day at the beach in San Diego, USA. In one embodiment, the song would be automatically added to the composition to simplify user input. In alternate embodiments, the user has the ability to selectively choose the variables that determine which songs are to be incorporated into the finalized composition.
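By way of a non-limiting illustration, the context-based lookup described above might be sketched as follows. The tag vocabulary, track entries and overlap scoring are illustrative assumptions, not the specification's data model:

```python
# Sketch: each track carries context tags (season, weather, place type); the
# track whose tags best overlap the composition's known context is chosen.

TRACKS = [
    {"title": "Holiday Song", "tags": {"season": "winter", "weather": "snow"}},
    {"title": "Surf Tune", "tags": {"weather": "sunny", "place": "beach"}},
]

def pick_track(context, tracks=TRACKS):
    """Return the best-matching track, or None if nothing overlaps."""
    def score(track):
        return sum(1 for k, v in track["tags"].items() if context.get(k) == v)
    best = max(tracks, key=score)
    return best if score(best) > 0 else None

december_colorado = {"season": "winter", "weather": "snow", "place": "mountain"}
san_diego_beach = {"season": "summer", "weather": "sunny", "place": "beach"}
```

In the user-driven embodiments, the `context` dictionary would be populated from the variables the user selectively chooses rather than auto-gathered data.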
  • An NFC chip could be used to display on a media segment the information communicated by nearby NFC or RFID chips in products, signs, etc.
  • An ambient light sensor could be used to adjust exposure or to add ambient light data to metadata for later color correction assistance in editing.
  • a proximity sensor could be set somewhere on the face of the mobile device and is intended to detect when the phone is near a user's ear. This may be used to help control the present invention, for example, such as by allowing a user to put their finger over the sensor to zoom in instead of using a touch screen or other user interface.
  • a Wi-Fi chip may be used for higher performance mobile devices and for a live connection to the Internet for city lookups from GPS data, and other information that may be desired in credits or as captions. The Wi-Fi chip could also be used for remote video or audio phone calls and for those calls to be recorded live with permission as a part of the composition.
  • An audio recording microphone may be used to record audio, but could also be used to control the present invention.
  • the microphone could be used for certain functions, such as pause, resume and zoom via voice commands or to auto-trigger recording of the next live clip in surveillance situations. If two microphones are used, they could be used to detect the compass direction of a sound being recorded out of the camera lens's view.
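By way of a non-limiting illustration, one conventional way to estimate direction from two microphones is the time difference of arrival (TDOA) between them; the sketch below is an illustrative assumption about how such an estimate could be computed, and the microphone spacing is a hypothetical value:

```python
# Sketch: estimate the angle of a sound source relative to a two-microphone
# axis from the inter-microphone arrival delay (time difference of arrival).
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def arrival_angle(delay_s, mic_spacing_m=0.15):
    """Angle of the sound source relative to broadside, in degrees.

    delay_s is positive when the sound reaches the second mic later.
    """
    # Path-length difference = c * dt; clamp so measurement noise cannot
    # push the ratio outside asin's [-1, 1] domain.
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delay_s / mic_spacing_m))
    return math.degrees(math.asin(ratio))
```

A sound arriving at both microphones simultaneously lies broadside to the pair (0 degrees); the maximum delay corresponds to a source along the microphone axis (±90 degrees).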
  • a motion sensor could be used to actively control the application without human intervention and to auto-trigger the recording of a next live clip in surveillance situations. Further, a motion sensor could be used to change the shutter speed in real-time to reduce motion blur on a recorded media segment.
  • the electronic device 100 may comprise a three- dimensional (3D) dual-lens camera.
  • the present invention is further configured to record 3D video clips and photographs, and include metadata that comprises depth information obtained from one of the above-mentioned advanced features into a finalized composition.
  • the system 200 comprises a plurality of electronic devices 201A-201D and a server 202 that communicate via the Internet 204.
  • Each of the electronic devices 201A-201D may be a portable electronic device, and in certain embodiments each of the electronic devices 201A-201D includes a camera with a camera lens. More specifically, in certain embodiments each of the electronic devices 201A-201D includes all of the components described above with reference to Figure 1 (i.e., all of the components of the electronic device 100).
  • each one of the electronic devices may be a smart phone, such as but not limited to the iPhone®, a digital camera, a digital video camera, a personal computer or a laptop.
  • only a couple of electronic devices 201A-201D are illustrated; the invention is not so limited and may comprise any number of the above mentioned electronic devices.
  • any number and combination of the different types of electronic devices can be used within the scope of the present invention.
  • the server 202 comprises computer executable programs to perform the tasks and functions described herein and facilitates communication between the various electronic devices 201A-201D.
  • the electronic devices 201A-D may be in operable electronic communication with the server 202 via a satellite network, a common carrier network(s), Wi-Fi, WiMAX or any combination thereof.
  • the server 202 is configured to allow for operable communication between the electronic devices 201A-D.
  • the server 202 may be omitted and the electronic devices 201A-D can be in operable electronic communication with one another directly via the Internet 204, a satellite network, a common carrier network(s), Wi-Fi, WiMAX or any combination thereof.
  • each electronic device 201A-D comprises a computer executable program that allows for the operable electronic communication between the devices 201A-D.
  • any electronic device may be a stage, a player, or both, and their roles may change at any time.
  • multiple electronic devices that are each operating as a stage may be operably electronically communicating with one another
  • any one of the electronic devices 201A-D may be in operable electronic communication with another electronic device 203, which may be a portable electronic device as has been described herein above or a non-portable electronic device.
  • the electronic device 201 A may generate its video feed from the electronic device 203.
  • the electronic device 203 may or may not have the inventive application or program on the device.
  • the electronic device 201 A is acting as a bridge for an external camera device (i.e., electronic device 203) over WiFi
  • the player stores the video clip locally in memory and transmits a low resolution video to the stage for a preview.
  • the transmission of the low resolution video may be done via the server 202 or not, and may be done over the Internet 204, satellite network, common carrier network(s), Wi-Fi, WiMAX or any combination thereof.
  • when the stage receives the high-resolution video clip, the stage replaces the low resolution video clip with the high resolution video clip in its memory and on a timeline, regardless of whether that position on the timeline is before other, subsequently recorded video clips. Therefore, the present invention overcomes packet drop issues that occur when attempting to stream higher resolution clips or situations when an electronic device 201A-D may lose connection altogether. This allows for live, non-linear recording from one electronic device 201A-D by another.
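By way of a non-limiting illustration, the replacement behavior described above might be sketched as follows. The clip identifiers and data structures are illustrative assumptions; the point is that a late-arriving high-resolution clip replaces its low-resolution preview in place, without disturbing the timeline order:

```python
# Sketch: a stage timeline that holds low-resolution preview clips and swaps
# in the matching high-resolution clip whenever it finishes arriving, even if
# later clips were recorded in the meantime (non-linear, late replacement).

class StageTimeline:
    def __init__(self):
        self.clips = []  # kept ordered by recording position

    def add_preview(self, clip_id, position):
        self.clips.append({"id": clip_id, "position": position, "res": "low"})
        self.clips.sort(key=lambda c: c["position"])

    def receive_high_res(self, clip_id):
        """Upgrade the preview in place; timeline order is untouched."""
        for clip in self.clips:
            if clip["id"] == clip_id:
                clip["res"] = "high"
                return True
        return False  # high-res arrived for a clip the stage never previewed

timeline = StageTimeline()
timeline.add_preview("player-A-clip1", position=0)
timeline.add_preview("player-A-clip2", position=1)
timeline.receive_high_res("player-A-clip1")  # earlier clip upgraded late
```

Because only the small preview must arrive in real time, a dropped or slow high-resolution transfer merely delays the in-place upgrade rather than breaking the recording.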
  • the present invention allows for a particular stage to be connected to and record from more than one player at any particular time.
  • the stage may choose which players it would like to be the main display on its application. In embodiments that comprise the server 202, not only does the server 202 regulate the communication between the multiple electronic devices 201A-D, but the server 202 also stores the recorded video clips, audio clips and photographs on a database of the server 202.
  • any electronic device 201 A-D may incorporate a saved video clip, audio clip, or photograph into their composition.
  • These saved video clips, audio clips, and photographs are saved in a library as pre-recorded media that can be used to simulate or emulate players. This feature will be discussed in more detail below.
  • FIG. 3 is a schematic diagram illustrating communication between a first electronic device 300 and a plurality of remote electronic devices 301A, 301B, 301C. In the exemplified embodiment, the first electronic device 300 and the remote electronic devices 301A-C are each illustrated as iPhones®.
  • the invention is not to be so limited and each of the electronic devices 300, 301A-C can be any one of the different types of electronic devices discussed above.
  • the first electronic device 300, which is operating as the stage, is in operable communication with each of the remote electronic devices 301A-C, which are operating as the players.
  • the first electronic device 300 can display a low resolution video (or other media) stream from each of the remote electronic devices 301A-C on its first display device 302.
  • the first electronic device 300 can display on its display screen 302 live low resolution video stream feeds of views that are perceived by camera lenses on the remote electronic devices 301A-C.
  • the first electronic device 300 can then record video that is being perceived by the camera lenses of the remote electronic devices 301A-C.
  • the invention enables a high resolution video clip to be transferred to the first electronic device to replace the low resolution video clip that is streamed during recording.
  • FIG. 4 illustrates a screen shot of a login page of the present invention, which may be a mobile application, on the first electronic device 300. It should be appreciated that in certain embodiments the login page may be omitted and upon launch, the application will go directly to the recording page such as that which is illustrated in Figure 6. Thus, the login page is only used in some, but not all, embodiments of the present invention.
  • an application window 301 will appear on the first display device 302 of the first electronic device 300.
  • the user will be prompted to create an account by entering in a username and a password in the appropriate spaces on the application window 301.
  • Prior to signing in or creating an account, the device is indicated as being "not connected," as shown in the bottom left hand corner of the screen shot of Figure 4.
  • the user has the options to exit remote camera, manage clips, run worldwide remote camera, turn off flash, or reverse the camera direction as illustrated across the top of the first electronic device 300. The user can select any of these options using a user input means, which will be discussed below.
  • the user input means on the first electronic device 300 can simply be the user using his or her finger to select between stage status and player status in response to a prompt (i.e., touch screen).
  • the invention is not to be so limited and in other embodiments the user input means on the first electronic device 300 can be via a mouse utilizing click and point technology, a user pressing a button on a keyboard that is operably coupled to the first electronic device 300 (such as clicking the letter "S" for stage operation and "P" for player operation), or the incorporation of other buttons/toggle switches on the device. Any other technique can be used as the user input means on the first electronic device 300.
  • the inventive application will automatically launch in the stage status, such that the user would then use the user input to opt out of the stage status and into the player status, as described in more detail below with reference to Figure 6.
  • the first display device 302 of the first electronic device 300 will display a list of remote electronic devices that have initiated a sharing status, such as by logging into the application and initiating a player status via user input means on the remote electronic devices.
  • Figure 5 illustrates a screen shot of the first electronic device 300 with a list window 304 overlayed onto the first display device 302 of the electronic device 300.
  • the first display device 302 may display a camera view that is perceived by the camera lens of the first electronic device 300 and the list window 304 can be overlayed on top of the camera view display.
  • the first electronic device 300 is acting as a stage.
  • the user initiates the stage status via user input means on the electronic device 300
  • the first electronic device 300 that acts as the stage is referred to as the first electronic device while the electronic devices that are acting as players are referred to as the remote electronic devices, or the second electronic device.
  • the user does not need to select between stage status and player status. Rather, in certain embodiments upon launching the application, the list window 304 will be displayed on the first display device 302 of the first electronic device 300. In such embodiments, upon launch the application will automatically scan for remote electronic devices that are either active or that meet qualification criteria as discussed below. The user can then select various remote electronic devices to stream a low resolution video clip from, upon which action the first electronic device 300 is automatically deemed a stage. Alternatively, the user can select none of the remote electronic devices and can instead proceed to use the camera on the first electronic device 300, upon which action the first electronic device 300 is automatically deemed a player. Furthermore, it should be appreciated that when the first electronic device 300 is operating as a stage, remote electronic devices can still stream video from the first electronic device 300. Thus, in certain embodiments upon achieving stage status, the first electronic device is both a stage and a player.
  • the list window 304 is a list of the remote electronic devices that have initiated a sharing/player status as indicated above. In certain embodiments that utilize the list window 304, every remote electronic device worldwide that is activated and that has entered a sharing status will be provided in the list. However, in other embodiments a remote electronic device will first have to meet one or more qualification criteria prior to being placed in the list window 304 on the first display device 302 of the first electronic device 300.
  • the qualification criteria can be defined by the first electronic device 300 via user input means of the first electronic device 300.
  • the one or more qualification criteria are selected from a group consisting of local area network connectivity, GPS radius from the first electronic device, location of the remote electronic device (i.e., desired geographical area), desired location (such as a particular venue, monument, stadium, etc.), and pre-defined group status.
  • Still other qualification criteria can include weather conditions, such that remote electronic devices having similar weather conditions to the first electronic device 300 will populate the list window 304.
  • Many other different criteria can be used as the qualification criteria to assist in determining which remote electronic devices should populate the list window 304 on the first electronic device 300.
  • the different types of criteria that can be used as the qualification criteria is not to be limiting of the present invention in all embodiments unless so specified in the claims.
  • the qualification criteria can be auto-extracted criteria. In such embodiments, the program automatically determines certain characteristics of the first electronic device 300, such as the weather, time, altitude, humidity, geographic location, surrounding landscape and the like. As a result, the program can automatically create matches for the first electronic device 300 by finding remote electronic devices that are located at locations with similar weather, time, altitude, humidity, geographic location or surrounding landscape. Thus, in certain embodiments the invention automatically looks for live players nearby or worldwide that match some environmental or situational criteria (either actual or as specified by the stage or stage user).
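By way of a non-limiting illustration, the qualification-criteria filtering described above might be sketched as follows. The criterion names (`same_network`, `radius_km`, `weather`) and the coarse distance approximation are illustrative assumptions; a player appears in the list only when every criterion the stage has set is satisfied:

```python
# Sketch: decide whether a player device qualifies for the stage's list
# window based on the stage's selected criteria.
import math

def within_radius(stage_pos, player_pos, radius_km):
    # Equirectangular approximation: adequate for a coarse radius filter.
    lat1, lon1 = map(math.radians, stage_pos)
    lat2, lon2 = map(math.radians, player_pos)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371.0 * math.hypot(x, y) <= radius_km  # Earth radius in km

def qualifies(player, criteria, stage):
    """True when the player satisfies every criterion the stage selected."""
    if criteria.get("same_network") and player["network"] != stage["network"]:
        return False
    if "radius_km" in criteria and not within_radius(
            stage["gps"], player["gps"], criteria["radius_km"]):
        return False
    if "weather" in criteria and player["weather"] != stage["weather"]:
        return False
    return True

stage = {"network": "lan-1", "gps": (32.7, -117.2), "weather": "sunny"}
nearby = {"network": "lan-1", "gps": (32.71, -117.19), "weather": "sunny"}
far = {"network": "lan-2", "gps": (39.7, -105.0), "weather": "snow"}
```

For the auto-extracted embodiments, the `criteria` dictionary would be populated from the stage's own measured characteristics rather than user selections.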
  • the remote electronic devices provided in the list window 304 will be those that are connected to the same local area network as the first electronic device 300.
  • the remote electronic devices provided in the list window 304 will be those that are located in the specified location. This can be accomplished utilizing GPS or other location finding means on the electronic devices.
  • the user can select to populate in the list window 304 all remote electronic devices within a particular mile/kilometer radius from the first electronic device 300.
  • the user can select a desired location, such as a particular venue, monument, or stadium, and only view remote electronic devices on the list window 304 that are located at that particular desired location.
  • the first electronic device 300 will be able to record video or audio of a sporting event or concert that the user or owner of the first electronic device 300 is not even attending.
  • the user can establish pre-defined groups, such as persons that the user is friends with, and can select to only have the remote electronic devices in those pre-defined groups portrayed in the list window 304.
  • a person may activate an electronic device as a stage and search for live players that are nearby.
  • the present invention may match a live player that is located up the coast at the next beach, or at a beach 1,000 miles away in similar tropical weather and sun conditions.
  • the stage user may believe he is viewing live images that are nearby when in fact they are at quite a distance away.
  • the list window 304 displays information about the various remote electronic devices to enable the user/owner of the first electronic device 300 to determine whether to view and/or record a low resolution video of the camera view (or audio or other media) being perceived by the camera lens (or microphone or other input component) of that particular remote electronic device.
  • the list window 304 illustrates the username, location by city and state, and current time at that location on the list window 304. In other embodiments, the location can be displayed on the list window 304 based on a particular venue name, GPS coordinates, or the like.
  • the username can include the owner/operator of the particular remote electronic device's real name.
  • the remote electronic devices are displayed in the list window 304 in rows with a user selection box 305 next to each username.
  • the user can select one or more of the remote electronic devices.
  • the user selects a remote electronic device by clicking in the user selection box 305 (either via touch screen, mouse point/click techniques, keyboard, or the like).
  • the user will click in the user selection box 305 of each desired remote electronic device from which the user wants to view a live feed of the camera view perceived by those remote electronic devices, and then provide an indication that the user has completed making selections.
  • Upon adding all of the desired remote electronic devices, the user will use the user input to select the done button 307 to proceed to the next window.
  • upon launching the inventive application, the user will be brought directly to the recording page illustrated in Figure 6. In such embodiments, the user will not be prompted to first select from the list window the remote electronic device(s) that the user desires to view camera views from. Rather, in such embodiments the inventive program will automatically match players (i.e., remote electronic devices) for the user of the electronic device 300, and will display the matched players as illustrated in Figure 6 and discussed below.
  • the inventive application can, in some embodiments, be programmed with algorithms that automatically select matches for a user. These matches can be based on any factors or characteristics that are discussed herein including environmental factors and qualification criteria that have been pre-set by a user.
  • a user activating as a player automatically enables stage users to record from that particular player's electronic device as discussed below.
  • a stage will first request access to the player's camera view, and the player will be prompted to "allow" the stage access/connection to the player device.
  • the list window 304 indicates that there are 14 live remote electronic devices from which the user can select to view the camera views perceived by those remote electronic devices. These 14 live remote electronic devices are either the devices that met the user's pre-selected qualification criteria, or they can be the remote electronic devices that meet auto-extracted criteria, or it can be a full list of all active remote electronic devices in the United States.
  • FIG. 6 illustrates a recording and editing page of the inventive application on the first electronic device.
  • upon launching the inventive application, the user is brought directly to the recording and editing page. However, in other embodiments the user will follow the steps described above and then be brought to the recording and editing page.
  • the first display device 302 of the first electronic device 300 displays a first window 308 that overlays a primary window 309 of the first display device 302.
  • the primary window 309 displays the first camera view that is perceived by a first camera lens of the first electronic device 300
  • the primary window 309 displays a low resolution media stream (which can be audio, still photograph, text, graphics, etc.) of the remote electronic device.
  • the media can either be being perceived currently by the remote electronic device, or it can be pre-stored media and the remote electronic device can simply be a library database (described in more detail below).
  • the pre-stored media can be computer-generated, such as a pre-stored media file saved on a server.
  • the invention includes a swap function that, upon being activated, displays one of the remote camera views in the primary window 309.
  • the first window 308 that overlays the primary window 309 displays the plurality of remote camera views perceived by the camera lenses of the remote electronic devices that were selected from the list as discussed above with reference to Figure 5 as well as the first camera view that is perceived by the first camera lens of the first electronic device 300.
  • the first window 308 displays the plurality of remote camera views perceived by the camera lenses of the remote electronic devices as determined by the inventive program's matching algorithms.
  • the first window 308 displays the first camera view of the first electronic device 300 in a first thumbnail 310 of the first window 308, a first remote camera view of a first remote electronic device in a second thumbnail 311 of the first window 308, a second remote camera view of a second remote electronic device in a third thumbnail 312 of the first window 308, a third remote camera view of a third remote electronic device in a fourth thumbnail 313 of the first window 308, and a fourth remote camera view of a fourth remote electronic device in a fifth thumbnail 314 of the first window 308.
  • more or less than four remote camera views can be displayed in the first window 308 depending upon the number of remote cameras/users that are selected from the list window 304 as discussed above.
  • a selected one of the remote camera views can be displayed in the primary window 309 and in its respective thumbnail.
  • the camera views are omitted and it can be any other media perceived by a remote electronic device.
  • each of the thumbnails 310-314 displays a low resolution media stream of the high resolution media clips indicative of the camera view that is perceived by the lens of the particular electronic device. Initially, until activation and recordation are initiated as discussed below, the low resolution video stream of the high resolution media clips indicative of the camera view is merely displayed on the thumbnails 310-314 for the user to preview, but is not recorded or saved into the memory of the first electronic device 300.
  • the first thumbnail 310 depicts a low resolution video stream of the high resolution media clip indicative of the first camera view that is perceived by the first camera lens of the first electronic device 300.
  • the first thumbnail 310 may depict the low resolution video stream of the high resolution media clip indicative of the first camera view regardless of whether the first electronic device 300 is set in a recording mode or not. Thus, in certain embodiments even if the first electronic device 300 is not set to record, the first camera view perceived by the first camera lens of the first electronic device 300 will be displayed as a low resolution video stream in the first thumbnail 310.
  • the second thumbnail 311 depicts a low resolution video stream of a high resolution media clip indicative of the first remote camera view that is perceived by the camera lens of the first remote camera.
  • the second thumbnail 311 may depict the low resolution video clip of the high resolution media clip indicative of the first remote camera view regardless of whether the first remote camera is set in a recording mode or not.
  • the low resolution video stream can merely be a low resolution media stream, which includes audio, photography, graphics, text, or the like.
  • the thumbnails 310-314 are all selected.
  • the user can deselect any one or more of the thumbnails 310-314 by using the user input, such as by tapping on the respective thumbnail 310-314, double tapping on the respective thumbnail 310-314, sliding a finger across the respective thumbnail 310-314, or the like.
  • none of the thumbnails 310-314 are selected and the user uses the user input means discussed above to select those thumbnails.
  • each of the first, second, third and fourth thumbnails 310-313 has been selected as discussed above.
  • each of the first, second, third and fourth thumbnails 310-313 is darkened/grayscaled.
  • the fifth thumbnail 314 has not been selected (or has been deselected), and thus the fifth thumbnail 314 remains white.
  • the particular coloring/grayscale used is not limiting of the present invention, but it is merely a perceivable difference in the colors or other visible features of the thumbnails that indicates to the user which of the camera views is being recorded.
  • the first remote camera view of the first remote electronic device, the second remote camera view of the second remote electronic device, and the third remote camera view of the third remote electronic device are available for use in a video composition that is to be created as discussed above.
  • the low resolution video stream of the high resolution media clip indicative of the fourth camera view of the fourth remote electronic device is merely being displayed in the fifth thumbnail 314, but is not also being stored in the memory device of the first electronic device 300.
  • the low resolution media streams of the high resolution media clips of each of the first, second, third and fourth remote camera views are all displayed and streamed on the thumbnail, regardless of whether or not they are selected.
  • the thumbnails 310-314 are displaying low resolution media streams of high resolution media clips of the camera views of the remote electronic devices (and of the first electronic device in the first thumbnail 310),
  • the invention is not to be so limited.
  • the thumbnails 310-314 may merely be visual indicia of each of the low resolution video streams.
  • the visual indicia may be a username, a person's actual name, a location of the electronic device by GPS coordinates, venue name, city and state or the like or any other type of visual indicia of the remote electronic devices and the first electronic device. In still other embodiments, the visual indicia may be a blank thumbnail that is colored.
  • there is nothing actually streaming on the thumbnails 310-314 but the thumbnails 310 merely represent an electronic device by being visual indicia.
  • the present invention is not limited to video as the media being streamed, recorded and edited.
  • where the media is audio, still photos, text, graphics, music and the like, there would be no low resolution video stream to display on the thumbnails 310-314.
  • the visual indicia noted above can be used.
  • where the media is audio only, if a user double-taps on that particular colored proxy thumbnail to make it active and has headphones attached, the user could hear the audio from that particular device.
  • the user can use the user input means to activate recordation of those low resolution video streams.
  • upon clicking on the record button 320, the first electronic device 300 will begin recording into its memory device an extended low resolution video (or other media) clip from the first electronic device 300 and each of the remote electronic devices that has a low resolution video stream or other visual indicia displayed on one of the selected thumbnails 310-313.
  • upon the user clicking or tapping on the record button 320, the first electronic device 300 will record into its memory an extended low resolution video clip of the high resolution media clips indicative of the camera views of the first electronic device 300, the first remote electronic device, the second remote electronic device and the third remote electronic device.
  • the low resolution video clip of the high resolution media clip indicative of the camera view of the fourth remote electronic device will not be recorded and saved in the memory of the first electronic device 300.
  • the extended low resolution video clips of the high resolution video clips of the camera views from each of several different electronic devices can be recorded into the memory of the first electronic device 300 at the same time, each saved as a separate file. In addition to recording the extended low resolution clips of the high resolution video clips of the camera views of the electronic devices separately, the user can also use the inventive application to create and record into the memory of the first electronic device 300 a single interim video composition that is a combination of separate low resolution media clip segments from each of the extended low resolution video clips recorded as discussed below.
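By way of a non-limiting illustration, the per-device recording described above might be sketched as follows. The device identifiers and frame handling are illustrative assumptions; the essential behavior is that each selected source is stored as its own extended clip while deselected sources are previewed but never saved:

```python
# Sketch: pressing record opens one extended low-resolution clip per selected
# device (each its own file/entry); frames from deselected devices are shown
# in their thumbnails but never stored.

class RecordingSession:
    def __init__(self, selected_devices):
        self.selected = list(selected_devices)
        self.extended_clips = {}    # device id -> recorded frames (one file each)
        self.interim_segments = []  # filled by switching among sources

    def start(self):
        for device in self.selected:
            self.extended_clips[device] = []  # separate clip per device

    def on_frame(self, device, frame):
        # Frames from deselected devices are previewed only, not stored.
        if device in self.extended_clips:
            self.extended_clips[device].append(frame)

session = RecordingSession(["local", "remote-1", "remote-2", "remote-3"])
session.start()
session.on_frame("remote-1", "f0")
session.on_frame("remote-4", "f0")  # deselected fifth source: displayed only
```

Recording each source separately is what later lets the interim composition be re-cut, since every full-length clip remains available alongside the composition.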
  • the video composition that is being created is referred to herein as an interim video (or media) composition.
  • the video composition is referred to herein as a final video (or media) composition. It will be better understood from the description below that the interim video composition is a composition that includes low resolution video (or media) and the final video composition is a composition that includes high resolution video (or media).
  • prior to pressing the record button 320, the user will activate one or more of the thumbnails 310-314 such that the activated thumbnails (or the activated low resolution video streams) will be recorded as a specific temporal portion of a video composition.
  • although the invention is described as activating the thumbnails, it should be appreciated that at certain points the invention is described as activating the low resolution media streams or other visual indicia which are depicted on the thumbnails.
  • the thumbnails are indicators, in certain instances, of the low resolution media streams.
  • the low resolution media stream or other visual indicia that corresponds with that particular thumbnail is also considered activated.
  • the first thumbnail 310 has been activated, as indicated by the activation symbol 325 displayed on the first thumbnail 310.
  • Activating the thumbnails 310-314 can be achieved by clicking or tapping on the thumbnails 310-314, double, triple, quadruple (or more) tapping on the thumbnails 310-314, sliding the user's finger downwardly, upwardly, sideways or the like across the thumbnail 310-314 or by any other user input means.
  • the first electronic device 300 also records to memory an interim video composition that is created by switching back and forth among and between the various low resolution video clips or camera views.
  • the first four thumbnails 310-313 are selected so that upon pressing the record button 320 the extended low resolution video clips corresponding to the camera views of the first electronic device and the first, second and third remote electronic devices will be recorded to the memory of the first electronic device 300 as discussed above.
  • the fifth thumbnail 314 is deselected so that upon pressing the record button 320 the camera view of the fourth remote electronic device will not be recorded to the memory of the first electronic device. Because the fifth thumbnail 314 is deselected, the fifth thumbnail and what it represents (i.e., the camera view of the fourth remote electronic device) is unavailable for inclusion in the video composition.
  • the deselected thumbnail can be later selected even during a recording session.
  • the user can press, tap or otherwise engage an addition button 335 which enables a user to add a new thumbnail/electronic device into the recording session.
  • the user can click the addition button 335, and then tap or click the fifth thumbnail 314 to include the low resolution video stream of the camera view of the fourth remote electronic device in the recording session so that the camera view of the fourth remote electronic device is available for use in the interim video composition.
  • the activation symbol 325 is displayed on the first thumbnail 310 so that the first electronic device 300 is activated for inclusion in the first temporal portion of the video composition.
  • upon pressing the record button 320, the interim video composition will begin being recorded to the memory of the first electronic device 300 with a low resolution media clip segment of the low resolution media stream corresponding to the camera view of the first electronic device 300, because the first thumbnail 310 corresponding to the first electronic device 300 is activated.
  • the user can deactivate the first thumbnail 310 (such as by single, double, triple or the like tapping the first thumbnail 310 or by any other user input means discussed above) and activate the second thumbnail 311 (or the third thumbnail 312 or the fourth thumbnail 313) using any of the user input means discussed herein.
  • the activation symbol 325 will no longer be displayed in the first thumbnail 310, but will instead be displayed in the second thumbnail 311 (or the third thumbnail 312 or the fourth thumbnail 313).
  • the interim video composition will be recorded to the memory of the first electronic device 300 with a low resolution media clip segment of the low resolution media stream corresponding to the camera view of the first remote electronic device (or the second remote electronic device or the third remote electronic device, depending upon which of the thumbnails is activated).
  • the user can continue activating and deactivating the various thumbnails so that different low resolution media clip segments from the different electronic devices are included in the video composition at different temporal points thereof.
  • the user can switch between the different remote camera views by activating and deactivating the various respective thumbnails. For example, if a one minute video composition is being created, the user will tap the record button 320 and all of the low resolution video streams corresponding to the selected thumbnails will be recorded in the memory device of the first electronic device. In the exemplified embodiment, there are three remote electronic devices that are selected (and the first electronic device is selected). At the end of the one minute recording, all of the one-minute extended high resolution media clips (one from each of the remote electronic devices that are selected) are transmitted to the first electronic device 300 and stored in the memory.
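The on-the-fly switching described above can be modeled as an edit decision list: each time the user activates a different thumbnail, the previously active source is closed out as a timed segment. The sketch below is an illustrative model only, not the application's actual implementation; the class and device names (`InterimComposition`, `"device_300"`, `"remote_1"`) are hypothetical.

```python
import dataclasses

@dataclasses.dataclass
class Segment:
    device_id: str   # which camera/device supplied this portion
    start: float     # seconds from the start of the composition
    end: float

class InterimComposition:
    """Tracks which low resolution stream is active at each moment,
    producing a sequential list of clip segments."""

    def __init__(self):
        self.segments = []
        self._active = None
        self._since = 0.0

    def activate(self, device_id, at):
        # Close out the currently active segment, then switch sources.
        if self._active is not None:
            self.segments.append(Segment(self._active, self._since, at))
        self._active = device_id
        self._since = at

    def done(self, at):
        # Pressing the done button closes the final segment.
        if self._active is not None:
            self.segments.append(Segment(self._active, self._since, at))
            self._active = None
        return self.segments

comp = InterimComposition()
comp.activate("device_300", 0.0)   # start with the first electronic device
comp.activate("remote_1", 20.0)    # switch to the first remote camera view
comp.activate("device_300", 45.0)  # switch back
segments = comp.done(60.0)         # a one minute composition, three segments
```

In this model the segment list is exactly what is later needed to replace each low resolution portion with the matching temporal span of a high resolution clip.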
  • the final video composition that is created is an edited video that is one minute long, but that switches between the various selected camera views as often as the user desired, for instance twenty times or more, throughout the one minute.
  • the video composition can be created by the user on the fly during recordation of the various low resolution video clips.
  • the user of the first electronic device 300 can switch between the various low resolution media streams that are being recorded and activate a specific one of the various low resolution video streams being recorded to be used in the video composition at a specific moment in time.
  • the user may be standing on a specific street corner in Paris and can initiate a stage session. The user may find that there are three live players on a street corner in Paris near the user. The user can select all three players and activate a recording.
  • the user can activate only one of the three players at a time to record from for the creation of the video composition.
  • the user can switch back and forth between the three players to obtain various different views from the street corner in Paris.
  • the user can use the user input means to stop recording by clicking the done button 330.
  • the one-minute (or other desired time) high resolution video clip will be transmitted to the memory device of the first electronic device 300.
  • the one-minute high resolution video clip will be a combination of the different camera views from the three different players at different temporal portions throughout the video clip. This negates the need for any later editing and creates a desired scene or video compilation automatically.
  • the invention has been described above such that only one of the selected thumbnails 310-313 is activated at a time, the invention is not to be so limited in all embodiments.
  • more than one of the selected thumbnails 310-313 can be activated and have the activation symbol 325 displayed thereon at a single time.
  • the low resolution media clip segments from more than one of the camera views will be included in the interim video composition at the same temporal point in time of the interim video composition. With video, this can be accomplished by utilizing picture in picture or split screen views in the interim video composition.
  • this can be used to achieve a stereo or surround sound effect: if audio at a concert is being recorded from different vantage points, and all of the vantage points are activated and recorded to create a media composition, a surround sound effect is achieved from the combined sound.
  • a stage can record from each of the players at the same time and can activate each of the players at the same time as discussed above.
  • when the media composition that includes the sound from each of the players together at the same point in time in the media composition is replayed, the sound will be equivalent to surround sound.
  • if the first and second thumbnails 310, 311 corresponding to the low resolution media stream of the first electronic device 300 and to the low resolution media stream of the first remote electronic device are activated sequentially during recording, the low resolution media clip segment of the low resolution media stream of the first electronic device and the low resolution media clip segment of the low resolution media stream of the first remote electronic device will be positioned sequentially in the interim media composition. If the first and second thumbnails 310, 311 corresponding to the low resolution media stream of the first electronic device 300 and to the low resolution media stream of the first remote electronic device are activated concurrently during recording, the low resolution media clip segment of the low resolution media stream of the first electronic device and the low resolution media clip segment of the low resolution media stream of the first remote electronic device will be positioned concurrently in the interim media composition. Concurrent positioning can be achieved as discussed above by using picture in picture or split screen when the media is video, or by concurrent audio to achieve a surround sound effect when the media is sound/audio.
  • upon determining that the interim video composition is complete, such as after recording for five minutes, the user will use the user input to select (tap, click, etc.) the done button 330. This will signal completion of recording of the separate expanded low resolution media clips from each of the electronic devices corresponding to one of the selected thumbnails and completion of recording of the interim video composition.
  • upon tapping the done button 330, the first electronic device 300 will have stored in its memory an extended low resolution media clip (that comprises the low resolution media clip segments) corresponding to the high resolution media clips of the camera views of (or the audio sounds perceived by) the electronic devices that were selected as discussed above.
  • the extended low resolution media clip of each of the camera views will have a length in time delimited by when the user began and ended recording.
  • the memory of the first electronic device 300 will also have stored an interim video composition corresponding to the various low resolution media clip segments that correspond to the thumbnails that were activated at different points in time during the recording session.
  • This interim video composition is essentially a video composition that has been edited "on the fly" during the recording of the various electronic devices.
  • the interim video composition is a completely edited video composition that needs no further editing (although further editing is possible if desired, as discussed below with reference to Figures 7 and 8).
  • the interim video composition includes various segments of time (sequentially or simultaneously such as picture in picture and split views as discussed above) from different camera views from different remote electronic devices compiled into a single video.
  • the present invention overcomes packet drop issues that occur when attempting to stream higher resolution clips, or situations when an electronic device may lose connection altogether. This allows for live, non-linear recording from one electronic device by another.
  • a high resolution media clip of the remote camera view of that remote electronic device is being recorded and stored on the remote electronic device capturing or perceiving that remote camera view.
  • the first remote camera view of the first remote electronic device is selected and thus an extended low resolution media clip of the first remote camera view is both being displayed in the second thumbnail 311 and being stored and recorded into the memory device of the first electronic device 300.
  • an extended high resolution media clip of the first remote camera view is being recorded and stored on the first remote electronic device.
  • the second remote camera view of the second remote electronic device is selected and thus an extended low resolution media clip of the second remote camera view is both being displayed in the third thumbnail 312 and being stored and recorded in the memory device of the first electronic device 300.
  • an extended high resolution media clip of the second remote camera view is being recorded and stored on the second remote electronic device.
  • the same thing occurs with the activated thumbnails that correspond to low resolution media streams.
  • a low resolution media clip segment of the camera view perceived by the electronic device corresponding to the activated thumbnail is recorded into the memory of the first electronic device 300.
  • a high resolution media clip segment corresponding to the low resolution media clip segment is recorded and stored on the remote electronic device capturing or perceiving that remote camera view. It should be appreciated that the high resolution media clip segment is identical to the low resolution media clip segment except that the high resolution media clip segment has a higher resolution than the low resolution media clip segment.
  • the terms high resolution and low resolution are not intended to be limiting of the present invention, but rather are merely used as terms to be understood relative to one another to indicate one resolution that is higher than another resolution.
  • one or more of the thumbnails on the display of the first electronic device 300 can be used as a proxy (i.e., placeholder) for later-added media.
  • the thumbnails will be used as visual indicia for a plurality of electronic media recording devices.
  • the media clip will contain an electronic media recording device identifier. This will enable the media clip to be later incorporated into a media composition.
  • the user will selectively activate the visual indicia of the plurality of electronic media devices to generate and record a proxy clip segment in an interim video composition on the first memory device.
  • Each proxy clip segment is associated with the electronic media recording device whose visual indicia was activated to generate that proxy clip segment.
  • each proxy clip segment is associated with a temporal period.
  • the proxy clip segment is a blank media segment, which can simply be a buzz, silence, a blue screen or any other type of media segment.
  • the proxy clip segment is used as a placeholder so that the media clips recorded on the electronic media recording devices can be later added to the media composition. This can be useful if the electronic media recording devices cannot operably electronically communicate with the first electronic device, but can later be connected thereto or if the media clips can later be downloaded thereto.
  • the first electronic device will receive the media clip that was recorded on the electronic media recording devices. Furthermore, for each media clip received, the media clip will be matched with the corresponding proxy clip segment based on the electronic media recording device identifier. A segment of the media clip that corresponds to the temporal period of that proxy clip segment can then be extracted and automatically used to replace the proxy clip segment, thereby creating a final media composition comprising the media clip segments.
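The proxy matching step described above can be sketched as follows. This is an illustrative model only, under the assumption that each proxy segment stores the device identifier and temporal period; the field names and file names (`device_id`, `clip_A.mp4`) are hypothetical.

```python
# Each proxy clip segment stores the recording device identifier and the
# temporal period it occupies in the interim composition.
proxy_segments = [
    {"device_id": "recorder_A", "start": 0.0, "end": 15.0},
    {"device_id": "recorder_B", "start": 15.0, "end": 30.0},
]

# Media clips later received from the recording devices, keyed by identifier.
received_clips = {
    "recorder_A": {"path": "clip_A.mp4"},
    "recorder_B": {"path": "clip_B.mp4"},
}

def resolve_proxies(proxy_segments, received_clips):
    """Replace each placeholder with the matching span of the real clip."""
    final = []
    for proxy in proxy_segments:
        clip = received_clips.get(proxy["device_id"])
        if clip is None:
            final.append(proxy)  # clip not yet received; keep the placeholder
            continue
        # Extract the span of the clip that matches the proxy's temporal period.
        final.append({
            "source": clip["path"],
            "start": proxy["start"],
            "end": proxy["end"],
        })
    return final

final_composition = resolve_proxies(proxy_segments, received_clips)
```

Because the match is keyed on the device identifier, clips can arrive in any order and any time after recording, which is the point of using proxies for devices that cannot communicate during the session.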
  • the extended high resolution video clip that is recorded on the respective remote electronic devices is transmitted to the first electronic device 300.
  • an extended high resolution video clip corresponding to each one of the extended low resolution video clips that were saved locally on the memory of the first electronic device 300 is transmitted to the first electronic device 300.
  • the extended high resolution video clips replace the extended low resolution video clips in the memory of the first electronic device 300.
  • the replacement of the extended low resolution video clips with the extended high resolution video clips in the memory of the first electronic device 300 occurs automatically upon the first electronic device 300 receiving the extended high resolution video clips from the remote electronic devices.
  • the first electronic device receives a high resolution media clip segment from the remote electronic device that corresponds to that low resolution media clip segment. Furthermore, the inventive application automatically replaces the low resolution media clip segments in the interim media composition with the high resolution media clip segments to create the final media composition comprising the high resolution media clips.
  • the interim media composition and the final media composition are the same singular file.
  • the high resolution media clip segments and the extended high resolution media clips are transmitted from the remote electronic devices to the first electronic device at some time after the user clicks the done button 330 to indicate completion of recording.
  • This later time can be determined in a number of ways. Specifically, in some embodiments the extended high resolution media clips and the high resolution media clip segments of the remote camera views are automatically transmitted to the first electronic device upon the user clicking the done button.
  • the extended high resolution media clips transmitted to the first electronic device 300 will be temporally delimited by the time that the user began recording the extended low resolution media clips and the time the user ended recording of the low resolution media clips.
  • the extended high resolution media clips and the high resolution media clip segments can be transmitted to the first electronic device 300 upon determining that the extended high resolution media clips and the high resolution media clip segments will be transmitted at a data rate that exceeds a predetermined threshold.
  • the remote electronic device will wait until the data rate exceeds the predetermined threshold, and at such time will transmit the extended high resolution media clips and the high resolution media clip segments.
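The threshold-gated transmission described above can be sketched as a probe-then-send loop. This is an illustrative sketch, not the claimed implementation: the probe size, retry count, and function names are hypothetical, and a real device would measure its link by other means.

```python
import time

def measure_data_rate(probe_bytes, send_probe):
    """Estimate the current link rate (bytes/sec) by timing a small probe."""
    start = time.monotonic()
    send_probe(probe_bytes)
    elapsed = time.monotonic() - start
    return probe_bytes / elapsed if elapsed > 0 else float("inf")

def transmit_when_fast_enough(clip, send, threshold_bps, probe,
                              retries=3, wait=0.1):
    """Defer sending the high resolution clip until the measured data
    rate exceeds the predetermined threshold."""
    for _ in range(retries):
        if measure_data_rate(64 * 1024, probe) >= threshold_bps:
            send(clip)
            return True
        time.sleep(wait)  # link too slow; wait and re-measure
    return False          # give up for now; caller may retry later

# Usage with stand-in probe/send callables for demonstration:
sent = []
ok = transmit_when_fast_enough(b"hi-res clip", sent.append,
                               threshold_bps=1.0, probe=lambda n: None)
```

The same gating can equally be applied on the upload leg to the server and on the download leg to the first electronic device, as the surrounding text describes.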
  • transmitting the extended high resolution media clips and the high resolution media clip segments from the remote electronic devices to the first electronic device 300 can be automatically initiated upon determining that the first electronic device 300 is no longer recording.
  • transmitting the extended high resolution media clips and the high resolution media clip segments of the remote camera views from the remote electronic devices to the first electronic device 300 includes wirelessly uploading the extended high resolution media clips and the high resolution media clip segments of the remote camera views from the respective remote electronic devices to the server, and wirelessly downloading the extended high resolution media clips and the high resolution media clip segments of the remote camera views from the server to the first electronic device 300. Furthermore, in some instances uploading the extended high resolution media clips and the high resolution media clip segments to the server is automatically initiated upon determining that the extended high resolution media clips and the high resolution media clip segments of the respective remote camera views will be wirelessly uploaded from the respective remote electronic devices to the server at a data rate that exceeds a predetermined threshold.
  • similarly, downloading from the server is automatically initiated upon determining that the extended high resolution media clips and the high resolution media clip segments of the respective remote camera views will be wirelessly downloaded from the server to the first electronic device 300 at a data rate that exceeds a predetermined threshold.
  • the extended high resolution media clips replace the extended low resolution media clips for that remote camera view that were recorded/stored in the first memory device of the first electronic device 300. Furthermore, the high resolution media clip segments are combined together to form the final media composition and replace the interim media composition which comprises the low resolution media clip segments.
  • the high resolution video clip is transmitted/routed through a server as has been discussed in more detail above.
  • the inventive application extracts the high resolution media clip segments that correspond to the low resolution media clip segments of the interim media composition from the extended high resolution media clips stored/recorded on the memory device of the first electronic device. Then, the inventive application replaces the low resolution media clip segments of the interim media composition with the extracted high resolution media clip segments.
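The extraction-and-replacement step can be sketched as below. This relies on the fact, stated earlier, that each high resolution clip is identical to its low resolution counterpart except for resolution, so the temporal offsets of the interim segments carry over directly. The data layout and names here are hypothetical, not the application's actual format.

```python
def replace_with_high_res(interim_segments, extended_clips):
    """For each low resolution segment of the interim composition, extract
    the matching temporal span from the extended high resolution clip
    recorded by the same device."""
    final = []
    for seg in interim_segments:
        hi = extended_clips[seg["device_id"]]
        final.append({
            "source": hi["path"],
            "in": seg["start"],    # same offsets apply: the clips are
            "out": seg["end"],     # identical except for resolution
            "resolution": hi["resolution"],
        })
    return final

# Interim composition: two low resolution segments recorded on the fly.
interim = [
    {"device_id": "dev1", "start": 0.0, "end": 20.0},
    {"device_id": "remote1", "start": 20.0, "end": 60.0},
]
# Extended high resolution clips received after recording ends.
extended = {
    "dev1": {"path": "dev1_hi.mp4", "resolution": "1080p"},
    "remote1": {"path": "remote1_hi.mp4", "resolution": "1080p"},
}
final = replace_with_high_res(interim, extended)
```

The resulting list is the final media composition: the same cut points the user chose live, now pointing at the high resolution sources.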
  • the video composition is stored in the memory of the first electronic device. In one embodiment, the video composition can be stored as a single file.
  • each of the separate high resolution video clips that are used to form a single video composition is saved as a separate file, such as in the same folder or subfolder.
  • each of the separate high resolution video clips that are used to form a single video composition is given metadata that effectively defines the video composition, such as associating each unique high resolution video clip with a unique video composition identifier that defines the unique high resolution video clips as being a part of a particular video composition and a position identifier that defines the sequential ordering of that particular video clip in the video composition.
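The metadata scheme just described (a composition identifier plus a position identifier per clip) can be illustrated as follows. The field names and file names are hypothetical; the point is only that separate files can be reassembled into one composition from their metadata alone.

```python
# Hypothetical per-clip metadata: each high resolution clip file carries a
# composition identifier and a position identifier.
clips = [
    {"file": "clip_c.mp4", "composition_id": "comp-42", "position": 2},
    {"file": "clip_a.mp4", "composition_id": "comp-42", "position": 0},
    {"file": "clip_b.mp4", "composition_id": "comp-42", "position": 1},
    {"file": "other.mp4", "composition_id": "comp-7", "position": 0},
]

def assemble(clips, composition_id):
    """Collect the clips tagged with one composition identifier and
    order them by their position identifiers."""
    members = [c for c in clips if c["composition_id"] == composition_id]
    return [c["file"] for c in sorted(members, key=lambda c: c["position"])]

ordering = assemble(clips, "comp-42")
```

Storage order and folder layout then no longer matter: the composition is fully defined by the tags on its member files.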
  • the low resolution video clip stored in the memory device of the first electronic device can be replaced with a completely unrelated high resolution clip that was shot with a different, completely disconnected camera that was shooting at the same time. This can be accomplished by manual file replacement.
  • the first display device 302 of the first electronic device 300 also has a switch to player icon 330, a download icon 331 and a number of recorded players icon 332. Upon selecting the switch to player icon 330, the first electronic device 300 is taken out of stage mode and entered into player mode such that the first electronic device 300 can no longer view and record camera views from remote electronic devices.
  • the switch to player icon 330 may be the only mechanism by which the first electronic device 300 can be put into player mode in some embodiments.
  • the download icon 331 enables the user to download desired video clips.
  • the number of recorded players icon 332 provides the user with an indication of the number of camera views from various electronic devices, including the first electronic device 300 and any remote electronic devices, that are currently being recorded.
  • the first display device 302 of the first electronic device 300 comprises a second window 315.
  • the second window 315 is displayed on the display device 302 of the first electronic device 300 simultaneously with the first window 308.
  • the second window 315 may only be displayed at certain times, such as after completion of a recording session.
  • once the first electronic device 300 receives the final media composition and stores it in its memory device, a graphical representation 316 of the final media composition is displayed in the second window 315.
  • the final media composition is a complete video that comprises various media segments obtained from several different electronic devices. If recorded correctly or in a desired manner the first time around, no further editing will be required. However, if the final media composition is not exactly as the user desires based on timing of the different media clip segments in the final media composition, the user can edit the final media composition by user input via the first user input means, such as by tapping or clicking on the graphical representation 316 in the second window 315.
  • the editing feature will be described in more detail below with reference to Figures 7 and 8.
  • the user can click the addition button 335. Clicking the addition button 335 will enable the user to start a new recording session or add media that is already downloaded onto the first electronic device 300 into the second window 315. Pushing or clicking the addition button 335 may bring up a menu screen to enable the user to select what it is he desires to do.
  • the user may desire to add a transition so that when all of the media compositions and other media in the second window are played in succession, there are transitions (fade out, fade in, white screen, black screen, etc.) in between each separate media composition or other media file.
  • Transitions are identified by transition graphics 320.
  • the second window 315 has graphical representations 317, 318, 323 indicative of several media compositions and other media files as well as transition graphics 320 that are saved to the memory of the first electronic device 300.
  • Each of the media compositions or other media files is represented by a different graphical representation 317, 318, 323 that is indicative of that particular media composition or file.
  • a user may edit a final media composition after it has been received and stored in the memory of the first electronic device 300.
  • the user may view the final media composition and determine that the camera switching was not on the timing sequence as desired, or that the audio in the composition is not loud enough or is otherwise deficient.
  • the user can edit the final media composition by user input via the first user input means (i.e., clicking/tapping, double clicking/tapping, sliding a finger along the graphical representation 316, or the like).
  • upon double clicking the graphical representation 316 or otherwise indicating that the user desires to edit the final media composition, the inventive application will bring up an edit switching page, which is illustrated in Figure 7. In the edit switching page, the user can modify the final video composition.
  • the edit switching page has a back button 401, a play all button 402, and a save button 403.
  • the edit switching page has an expanded high resolution media clip window 405. Within the expanded high resolution media clip window 405 are graphical representations of each of the expanded high resolution media clips that are included in the particular final media composition being edited.
  • the final video composition would include high resolution media clip segments from each of the three remote electronic devices.
  • the expanded high resolution media clip window 405 will include the entire expanded high resolution media clip that was recorded and saved in the memory of the first electronic device 300 from each of the three remote electronic devices.
  • the expanded high resolution media clip window 405 includes a first graphical representation 411 of a first expanded high resolution media clip that was recorded from a first remote electronic device, a second graphical representation 412 of a second expanded high resolution media clip that was recorded from a second remote electronic device, and a third graphical representation 413 of a third expanded high resolution media clip that was recorded from a third remote electronic device.
  • Each of the expanded high resolution media clips is limited in length by the time the user hit the record button 320 and the done button 330 (as long as each of the thumbnails corresponding to the first, second and third remote electronic devices was selected when the user hit the record button 320 and the done button 330).
  • the user can edit the final video composition by tapping or clicking the play all button 402. This will begin to play each of the first, second and third expanded high resolution media clips simultaneously. The user will then also activate one or more of the first, second and third expanded high resolution media clips (by tapping or clicking on one of the first, second or third graphical representations 411-413) so that only the activated expanded high resolution media clips are included in the edited video composition 415 at a particular time during the video.
  • the editing feature is similar to the initial recording feature.
  • Each of the first, second and third graphical representations also includes a visual indicator (41 l a, 412a, 413a).
  • each of the visual indicators 411a, 412a, 413a is a different colored box.
  • the edited video composition 415 includes a timing bar 416 that indicates the length in time of the edited video composition 415. The timing bar 416 is colored along its length to indicate which of the first, second and third expanded high resolution media clips is in the edited video composition 415 at a particular time.
  • the user can click/tap the save button to save the edited video composition 415 in the memory device of the first electronic device 300.
  • in addition to editing the video aspect of a final media composition, a user can also edit the audio. Specifically, certain media compositions include both video and audio.
  • the user can select the audio from a particular electronic device to be incorporated into the final/edited media composition along with video from a different electronic device.
  • all of the expanded high resolution media clips are again provided.
  • the user can click the play all button 502 to start playing all of the expanded high resolution media clips simultaneously.
  • the user can then select (by checking the circle as exemplified or clicking on the thumbnail, etc.) which of the expanded high resolution media clips should be playing audio at a particular moment in time in the final or edited media composition.
  • more than one of the expanded high resolution media clips can play audio in the final/edited media composition.
  • a first one of the expanded high resolution media clips can play the left audio
  • a second one of the expanded high resolution media clips can play the right audio
  • a third one of the expanded high resolution media clips can be playing the video.
  • any of the various expanded high resolution media clips and its various components (i.e., audio and media) can be used to create different portions of the final or edited video composition.
  • the invention may include a library feature that includes a library database of pre-recorded videos or audio files that can emulate remote cameras on the server.
  • each player (i.e., each electronic device)
  • Each of the pre-recorded videos and audio files will have descriptive metadata about their original capture (i.e., the original recording of that particular file) that match specified or current live criteria so that the stage user (i.e., the user of the first electronic device 300) is potentially not even aware that the pre-recorded videos are not live.
  • the following criteria may be used to determine if a pre-recorded video or audio file located in the library, which may be on the server, could match a stage user's specified or live criteria: current weather, current time, relative time in terms of sun position, compass direction, temperature, altitude, audio waveform, light meter reading, white balance temperature, contrast of shadows to highlights, humidity, air quality, nearby water temperature and quality, wind speed and direction, season, types of plant and other life nearby or detected, moon phase, general mood detected, gas price, economic and health indicators, per capita income, object recognition such as cars or bicycles or building types detected, population density, traffic volume, nearby transportation methods, facial cues, color similarities, fashion styles.
  • the inventive application can auto-extract data about the user's current environment and compare it with the metadata from the pre-recorded files in the library.
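A minimal sketch of this metadata comparison, assuming each library file carries key/value capture metadata and the live environment is described the same way. The scoring rule (count of exactly matching criteria, with a minimum score) and all names here are hypothetical simplifications; a real matcher could weight criteria or use ranges.

```python
def match_score(live, recorded):
    """Count how many environmental criteria of the live session agree
    with the stored capture metadata of a pre-recorded file."""
    shared = set(live) & set(recorded)
    return sum(1 for k in shared if live[k] == recorded[k])

def best_matches(live, library, min_score=2):
    """Return library file names that match the live conditions well
    enough to be presented as if they were live players, best first."""
    scored = [(match_score(live, f["meta"]), f["name"]) for f in library]
    qualifying = [(s, name) for s, name in scored if s >= min_score]
    return [name for s, name in sorted(qualifying, key=lambda p: -p[0])]

# Auto-extracted data about the user's current environment:
live = {"weather": "blizzard", "season": "winter", "compass": "N"}

library = [
    {"name": "times_square_snow.mp4",
     "meta": {"weather": "blizzard", "season": "winter", "compass": "S"}},
    {"name": "beach_sunset.mp4",
     "meta": {"weather": "clear", "season": "summer", "compass": "W"}},
]

candidates = best_matches(live, library)
```

A blizzard recording scores on weather and season and so qualifies as an emulated player, while the mismatched beach clip is filtered out.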
  • the library database may include media files that were pre-recorded by electronic devices.
  • the library database may include computer generated media files that are generated to emulate a live player or remote electronic device.
  • the media files in the library database can be stored in the server and include all criteria needed to match actual live footage.
  • a computer could generate a pre-recorded media (video) file that is a three-dimensional rendering of Times Square in New York City. If a user enters stage mode and searches for players, the computer generated pre-recorded media file of Times Square can populate on the user's electronic device.
  • the server or computer may be able to generate a media file on the fly in order to match a particular user's live current conditions.
  • the computer may be able to generate a three-dimensional rendering of Times Square with snow falling.
  • the user in the stage mode will not be able to decipher that the computer generated file is not live footage from a live player/electronic device nearby.
  • the computer/server can create a media file to match a user's current environment to mimic a live camera (or other electronic device).
  • these pre-recorded media files can also mimic a user's current conditions.
  • User 1 may record Times Square on Day 1 and that recording will be stored in the library database.
• User 2 may enter as a stage in Times Square on Day 2 (i.e., the following day). Even if there are no live players on Day 2, the pre-recording from User 1 from the previous day can be used to populate the user's player list, and the user may not realize that the pre-recording is not live.
• the players or remote electronic devices described herein throughout can be live electronic devices that are actually currently active, pre-recorded media that were actually recorded by live electronic devices at a previous time, or computer generated media that are either previously created and stored, or created while the user is active as a stage in order to mimic the user's environment.
• the space and time continuum are irrelevant so that live or virtual real-time multicamera sessions can be created by broadcasting files that are archived in the library and stored on the server.
• the archived files are used to emulate live players. As one example, on July 10, 2013 in Central Park, there may be only 2 active players recording between 10:00am and 10:15am.
  • the invention may automatically determine which pre-recorded audio and video files match the user's current conditions.
• the user may be standing in a blizzard, and the present invention can locate pre-recorded video files that were taken during similar blizzard conditions. In certain instances, the pre-recorded video file can be at the same location during a blizzard that occurred years earlier, but the user of the stage or first electronic device 300 may be unaware that the video file he is viewing is pre-recorded and not live.
• if the stage user selects as a player one of the pre-recorded video clips, a low resolution video stream of the pre-recorded video clip will be presented in the thumbnails as discussed above. Then, if the user activates recording of the pre-recorded video clip, the low resolution video clip will be recorded onto the memory of the stage. Then, for each low resolution video clip recorded on the memory of the stage, a corresponding high resolution clip will be transmitted from the library database to the stage to automatically replace the low resolution video clip.
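The low-res proxy workflow described above can be sketched minimally as follows. The `Timeline` class and its fields are assumptions made for illustration; the key point is that only the media reference is swapped, so any edit timing applied to the proxy carries over unchanged.

```python
# Illustrative sketch of the proxy-replacement workflow: the stage
# records a low resolution clip, then the corresponding high
# resolution clip automatically replaces it when it arrives.

class Timeline:
    def __init__(self):
        self.clips = {}  # clip_id -> (resolution, path)

    def record_proxy(self, clip_id, path):
        self.clips[clip_id] = ("low", path)

    def receive_high_res(self, clip_id, path):
        # Swap in the high-res media; switching decisions are kept
        # because they reference the clip id, not the file itself.
        if clip_id in self.clips:
            self.clips[clip_id] = ("high", path)

t = Timeline()
t.record_proxy("cam1", "cam1_proxy.mp4")
t.receive_high_res("cam1", "cam1_full.mp4")
print(t.clips["cam1"])  # ('high', 'cam1_full.mp4')
```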
• the pre-recorded video clips operate exactly as the live players, and the communication between the stage and the library database (whether it be on the server or elsewhere) is exactly the same as the communication between the stage and the live players.
• the stage's location and other data can be emulated or falsified in order to obtain live players that match certain criteria, even if the criteria are different than the stage's current situation.
• a user could launch the inventive application in San Diego and type into a search query for qualification criteria: "I am standing on a street corner in Paris at 7pm in December and it is snowing." If it is currently 7pm in December in Paris and snowing, live cameras from Paris would show for selection. If no live cameras/players match the criteria of the search query, emulated live players (i.e., pre-recorded video files) matching the criteria would be presented for user selection.
• the user could be filming themselves in front of a Green Screen in their home and could launch the inventive application as a stage and type (or say) the search query: "I am standing in Times Square at sunset on a warm summer day facing north with the wind to my back." If the current conditions in New York match that search query, live players in New York would be presented on the stage device for selection. If there are no matches, emulated live players (i.e., pre-recorded video files) would be presented for selection.
• Figure 9 is a screen shot illustrating a searching tool for locating video clips based on qualification criteria.
• a user may request that a list of players be provided that match specific criteria.
  • the user may submit the request either by typing into a query box as illustrated in Figure 9, using voice detection software built into the device, or any other technique.
• the user has typed in a search for "Beach" and representative images of video files that were taken at the beach are displayed for user selection.
• the video files displayed may be only live players, only pre-recorded video files, or a combination of live players and pre-recorded video files. In certain embodiments, the inventive system is programmed to first look for and provide the user of the stage with a list of live players that are nearby or worldwide that match some environmental or situational criteria. However, if there are not enough live players, or if the user wants more options, the pre-recorded video files can be provided to the user. The user can select the pre-recorded video files and stream the low resolution video clips from them in the same manner as with the live players discussed above. Furthermore, high resolution video clips are transmitted corresponding to the low resolution video clips in the same manner for both the live players and the pre-recorded video files.
• the user can simultaneously stream video from both live players (i.e., actual electronic devices that are concurrently operating the inventive application) and pre-recorded library files.
  • video compositions can be created that are a combination of live video feed and pre-recorded video feed.
• the present invention can be used to trigger recording on a second device (or a third device through a proxy player as discussed above) via Bluetooth or infrared or sound or light or other mechanism, or even to manually trigger recording on the camera.
• the stage would only show a temporary blank proxy that can be used for switching, and then the user would replace that blank proxy with the disk file, or would transcode from tape or other means and replace that proxy file either by time stamp, manual sync, audio waveform, or clapboard sync.
• This technique can be used if: (1) an older camera is being used; (2) a camera that does not have WiFi or HDMI out is being used; or (3) when the user cannot afford an additional mobile or streaming device.
• the invention enables the faithful reproduction of left/right stereo sound, multi-channel surround sound or follow sound by automatically detecting via GPS and Compass, or manually via user input, the spatial arrangement of all cameras and audio recording devices in the session. In other words, if there is a concert and the user switches to a camera close to the stage, it may be advantageous via GPS to identify active players in the session or in the vicinity on the left side of the stage and right side of the stage at time of recording. Rather than using stereo audio from the center stage camera mic, the present invention would use left stage camera audio for the left stereo channel and right stage camera audio for the right stereo channel in the final movie. In cases of 7-channel surround sound, the inventive application can pull audio from 7 different devices to create the stereo audio track (via auto or manual mixing) or the true multi-channel Dolby Digital or THX surround audio.
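The left/right channel assignment from device positions might be sketched as below. The geometry is deliberately simplified to a flat plane with x pointing east and y pointing north; the patent only states that GPS and compass data (or manual input) determine the spatial arrangement, so this is an illustrative assumption, not the claimed method.

```python
import math

# Hedged sketch: decide whether an audio device sits to the left or
# right of the stage's facing direction, so its audio can feed the
# corresponding stereo channel. Coordinates: x = east, y = north;
# heading is a compass bearing in degrees (0 = north).

def side_of_stage(stage_pos, stage_heading_deg, device_pos):
    """Return 'left' or 'right' relative to the stage's heading."""
    dx = device_pos[0] - stage_pos[0]
    dy = device_pos[1] - stage_pos[1]
    h = math.radians(stage_heading_deg)
    # z-component of heading x offset: positive means the device
    # lies to the left of the heading vector (sin h, cos h).
    cross = math.sin(h) * dy - math.cos(h) * dx
    return "left" if cross > 0 else "right"

# Stage at origin facing north; one device west, one east.
print(side_of_stage((0, 0), 0, (-5, 0)))  # left
print(side_of_stage((0, 0), 0, (5, 0)))   # right
```

With this kind of classification, the left-side device's audio would feed the left stereo channel and the right-side device's audio the right channel in the final movie.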
• the high-resolution file from the 2nd device can be automatically transferred to the stage, in which case the low res file would be automatically replaced; or both users could decide to defer the transfer, or it may not be possible to transfer at that time, and so the stage user could manually replace the low-res file with the high-res file at a later time or through other means.
• the high-res from the iPhone would be transferred to the stage, but then once the tape from the old camcorder is digitized into an electronic file, the stage user could replace that proxy placeholder or temporary player preview with the desired camcorder footage. All switch edits done to the proxy or temporary preview would be applied to this new camcorder file automatically so that no manual editing would be required in the creation of the final multi-camera composition. In fact, any file in the composition can be replaced with any other file even if not related by time or place. It would take on the original file's edit timing.
• When the stage triggers record, it sends a message to the player and, instead of (or in addition to) recording its own camera input, the player sends a message to the streaming device to begin recording the live preview feed from the RED 4K camera. The streaming device also sends a message to the RED 4K camera to begin recording on its own hard drive in 4X HD resolution or higher. While the stage is recording, the streaming device is sending a low-res stream or proxy thumbnails to the player, and the player, acting as a bridge, is sending the same to the stage.
• When the stage stops recording, it sends a message to the player to stop, and the player sends a message to the Wi-Fi streaming device to stop recording and send the high-res HD or SD file to the player, and the player sends it to the stage.
  • the HD high-res can be replaced with the 4x HD that was recorded locally on the RED ONE hard disk (via time data) to create Cinema quality final movies.
  • a stage user wants to record from three iPhone cameras, two iPod Touch devices with microphone input only and a GoPro Hero 3 Camera.
  • the problem with the GoPro Hero 3 camera is that although it has internal Wi-Fi, it only creates an ad-hoc network.
• An ad-hoc network means it creates its own little Wi-Fi network that one electronic device can connect to, but it is not capable of connecting to a broader WiFi network. This works fine if there is only one stage device that wants to record its camera feed and the GoPro camera feed over Wi-Fi at the same time, but if other devices need to be included in the session, this does not work.
• the Go-Pro Hero 3 is connected to what the present invention calls an offline-player over the GoPro's Ad-Hoc network. Then the stage user taps the + button and adds a Proxy Player to the session which might show up as a solid colored rectangle with a label they define like "Go Pro Hero 3".
  • the stage user might switch back and forth between the three iPhones and sometimes choose the GoPro proxy to be the active camera. The stage might choose one or more of the microphones to be the audio the entire time.
• the Player would receive from the GoPro the medium-res file and the Player would change its Wi-Fi network from the GoPro's Ad-Hoc network to the Wi-Fi network that the stage is connected to.
• the Player would automatically or manually be detected by the stage and would transfer the medium-resolution file from the GoPro to the stage.
• the user could also extract the media card from the GoPro and connect it to the stage device and import the full high-resolution clip recorded locally on the GoPro during the session and replace the proxy or medium resolution clip with that final file. Again, no editing would be required.
  • Sync between the proxy and high res could happen automatically via audio waveform matching or manually by user.
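The audio waveform matching mentioned above can be sketched as a cross-correlation search: slide the proxy's waveform along the high-res recording and pick the offset with the strongest match. This brute-force sketch is illustrative only; a practical implementation would work on decoded audio samples and use FFT-based correlation for speed.

```python
# Minimal sketch of audio-waveform sync: find the sample offset at
# which a short clip's waveform best aligns inside a longer
# reference recording, by maximizing the dot product at each offset.

def best_offset(reference, clip):
    """Offset (in samples) where `clip` best matches `reference`."""
    best, best_score = 0, float("-inf")
    for off in range(len(reference) - len(clip) + 1):
        window = reference[off:off + len(clip)]
        score = sum(r * c for r, c in zip(window, clip))
        if score > best_score:
            best, best_score = off, score
    return best

ref = [0, 0, 0, 1, 4, 2, 0, 0]   # long high-res waveform (toy data)
proxy = [1, 4, 2]                # proxy waveform to align
print(best_offset(ref, proxy))   # 3
```

The returned offset would then be used to line up the high-res file with the proxy's position in the composition, with no manual editing.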
• This method of manual transfer is often referred to as sneakernet. This allows ANY media recording device (photos, videos or sound, etc.) to participate in a multi-camera session and to be eligible for live edit decisions.
• the blank proxy on stage can be manually labeled, like "1985 Sony Betacam" or "8-track recording console".
• the invention enables the faithful reproduction of left/right stereo sound, multi-channel surround sound or follow-sound by automatically detecting via GPS and Compass, or manually via user input, the spatial arrangement of all cameras and audio recording devices in the session.
• the stage could automatically use audio from the device located stage left for the left stereo channel and the device located stage right for the right stereo channel in the final movie.
• The stage user could also manually define this mix of audio input devices before recording or while recording, and could switch audio sources separately from switching visual sources.
• the stage could pull audio from 7 different devices to create the stereo audio track (via auto or manual mixing) or the true multi-channel Dolby Digital or THX surround audio.
• the stage user can elect to manually scan for available remote media capture devices, or can elect to search by keyword or other criteria and choose from a list. One goal is to eliminate as many steps as possible.
• the preferred method is to automatically connect to relevant remote devices and present them with pre-selected thumbnails for each of those devices so that they can just tap record and begin live switch editing. They can deselect auto-connected and selected devices, search for other devices, etc., but the present invention aims to achieve the fewest steps possible.
• One objective is to maximize the number of possible usable remote media capture devices that can be incorporated into the user's movie without them having to manually search for live players at the beach at sunset or search the global roll for those criteria.
• The user should just be able to launch the stage and be presented with a reasonably manageable number of usable matches in descending order of match relevance.
• the stage user only has to launch or activate their device, and available remote devices are automatically identified, connected and selected, and record starts automatically on all devices; then the user taps stop record and all high-res files are transferred and combined into the final edited composition. In this case, the user does not have to scan, does not have to select players, and does not even have to tap record.
• the stage gives users controls in settings to take back control of each of these areas, but many will opt for the most automatic operation possible.
• Scanning is the process where a stage user scans the network or the community for currently available remote media recording devices.
• the stage user might also scan the entire registered community for on-call devices or all devices. In the case of on-call or all community devices, the stage can send a direct request, or a request through the community server, that will send a push notification, email, text message or other notification to all users in hopes that some will choose to become active and available quickly. Once they become active, they would automatically appear in the stage's remote player list or thumbnail view, or the stage could receive a notification and decide to add them manually. Just because a remote device shows up in a list after scanning does not mean it is selected for the session.
• Selecting is when the stage user decides to include that remote media capture device in the currently recording session. This process of selecting happens automatically by default to save the user steps, but the user can change that in settings so that after they see the list or thumbnail views of all available remote devices, they would have to manually select each one to make it part of the session.
  • Switching can happen before or during recording or edit review.
• 5 devices might automatically be selected on stage for inclusion in the session, or the stage user might manually select 5 devices. Once all devices have been selected for inclusion, and before recording, the system automatically makes at least one of those devices ACTIVE for the purpose of the composition.
  • the thumbnail view or list could show 9 available devices but only 5 of those being selected indicated by a red border around them.
• a cross-x icon might indicate the ACTIVE device(s) among the selected devices. An unselected device cannot be ACTIVE.
• more than one device might be active for video and separate devices active for audio. The user can make active more than one audio, photo or video source at one time, for example for Picture in Picture, split screen or stereo audio.
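The scan/select/activate model described in the preceding bullets can be sketched as a small state machine: scanned devices are merely visible, selected devices are part of the session, and only selected devices may become active. The class and names are illustrative assumptions.

```python
# Sketch of the scanning -> selecting -> activating model: a device
# found by scanning is not part of the session until selected, and
# an unselected device cannot be made ACTIVE.

class Session:
    def __init__(self):
        self.scanned, self.selected, self.active = set(), set(), set()

    def scan(self, devices):
        self.scanned.update(devices)

    def select(self, device):
        if device in self.scanned:
            self.selected.add(device)

    def activate(self, device):
        # Enforce the rule: an unselected device cannot be ACTIVE.
        if device in self.selected:
            self.active.add(device)

s = Session()
s.scan({"iPhone-1", "iPod-1", "GoPro"})
s.select("iPhone-1")
s.activate("GoPro")       # ignored: GoPro was never selected
s.activate("iPhone-1")
print(sorted(s.active))   # ['iPhone-1']
```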
• One important aspect of the invention is to emulate remote cameras on the server or on player devices using pre-recorded media files having descriptive meta data about their original capture that match specified or current live criteria, so that the stage user is potentially not even aware that those players (cameras) are not live.
• the present invention defines some of the criteria that may be used to determine if a Pre-Recorded File located on the server could match the stage user's specified or live criteria.
• the present invention only transfers the 1-minute file from each player when done recording. No editing, no post work, just real-time editing and the movie is done.
• the present invention essentially turns every piece of media ever recorded into a million live broadcasting rerun television stations.
• the present invention first looks for live players nearby or worldwide that match some environmental or situational criteria (either actual or specified by the stage), but, if not enough truly live players are found or the user wants more options, the present invention shows them the live stations (emulated players playing reruns of live sessions) that match the criteria, environment or situation so they can choose those and live switch between them while creating their movie.
• The stage user's location and other data can be emulated too.
• The user could launch the app in San Diego and type in "I am standing on a street corner in Paris at 7pm in December and it is snowing" and then, if it is currently 7pm in December in Paris and snowing, live cameras would show for selection; if not, emulated live cameras matching the criteria would show.
• the user could be filming themselves in front of a Green Screen in their home and could launch stage and type or say "I am standing in Times Square at sunset on a warm summer day facing north with the wind to my back" and, if those are the current conditions in New York, live players would show; if not, emulated live players would show.
• When the stage stops recording a one-minute multi-device session, it receives from each device the full one-minute media clip so that the user can edit the switching decisions later.
• a multi-clip package is created containing everything, and that can be shared with the players or others so they can create their own live switch mix.
• the edit switching window recreates the original live recording by playing all players while you re-switch between them as you would have if you had been the stage while recording live. Essentially each file plays back as an emulated live player so that you can edit the live switching decisions using the same methodology. It is like a rerun of the original session that you can live-switch again.
• Although the present invention can transfer the full high-resolution file from each device, that is not required in the invention. In some cases it may be advantageous and expedient to only transfer the portion(s) actually used in the live edit switching session. Although this would restrict the stage's ability to re-switch or re-edit, it would make file transfers much faster and more efficient. At a later time, the full-length file could be added back into the package to expand the editability. This is especially important in the case of emulated live players and the previously mentioned jukebox example (below), where the stage would only want to get the high-res portions used from each.
• In Sequential Mode, when the stage taps each player in the preview, it is just so they can see each camera larger; at the end the three 1-minute high-res recordings are transferred to the stage and placed in sequential order in the timeline.
• the final movie would be 3 minutes long, repeating the same event 3 times. In switching mode, when the stage is tapping a player clip in the preview, they are making a live switching edit decision (i.e., choosing which camera is active at that moment). The present invention still sends all 3 one-minute high res recordings to the stage; then those are combined in a multi-clip package with the live switching timing decisions, and an edited composited movie is created that is one minute long for a one minute event but might switch between each of the three cameras 20 times.
• This package allows the stage user or anyone else (if the stage shares the package) to recreate the live moment and play back all cameras at the same time and live switch between them, or just manually edit to create a new final movie 1 minute in length.
  • This package can also be converted into a sequential sequence or a combination of switching and sequential.
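The live switching decisions stored in the multi-clip package amount to an edit decision list (EDL) that is applied to the full-length clips to render the composited movie. A minimal sketch, with data structures assumed for illustration rather than taken from the patent:

```python
# Illustrative sketch: applying a live-switch edit decision list to
# full-length clips. Each decision records which camera became
# active and at what time; rendering stitches the matching segments.

def render_segments(edl, total_length):
    """edl: list of (time, camera) switch decisions sorted by time.
    Returns (camera, start, end) segments covering the composition."""
    segments = []
    for i, (start, cam) in enumerate(edl):
        end = edl[i + 1][0] if i + 1 < len(edl) else total_length
        segments.append((cam, start, end))
    return segments

# One-minute event with switches at 20s and 45s between three cameras.
edl = [(0, "cam1"), (20, "cam2"), (45, "cam3")]
print(render_segments(edl, 60))
# [('cam1', 0, 20), ('cam2', 20, 45), ('cam3', 45, 60)]
```

Because the EDL references clip identifiers rather than files, replacing a proxy with its high-res counterpart, or re-switching later, leaves the same rendering step applicable.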
• That is important in filmmaking; it's called a multi-take switching mode.
• if the director uses 4 cameras on each 1-minute take and time synchs each recreation of the event with a clapboard, then he would be able to not only live switch and post-edit switch between each camera in a single take but be able to switch between all 16 cameras across the four takes to create the perfect movie. This goes back to the core of our invention not being tied to the typical concept of time and place when talking about live recording and live switching.
  • stage device can also double as a player device for another stage and a player can also act as a stage.
  • a stage user may want to connect to 8 other remote media recording devices and perform live switches between those devices and his or her own local device.
• another stage in the same area also may want to record the remote media feed of that first stage device in its own composition with other unrelated player devices.
• When the stage scans and selects player devices, it may or may not select its own device for the recording session, but it can also at the same time make its own device available to other stages as a player.
  • the stage would be recording its own device locally and recording the low res from all of its connected players onto its device while at the same time transmitting a low-res version of its own feed to one or more separate stages.
  • the recording times could differ.
• the stage might start its own recording composition at 8:31 pm and finish at 8:33 pm whereas the 2nd stage might start recording at 8:32 pm and finish at 8:35 pm.
• the stage is capable of automatically splitting up its own recording into the two necessary pieces, one for its own composition and one for the 2nd stage's composition.
• it would request the high-res file from the first stage and the first stage would send that high-res file as if it were a player.
• the player would always record anytime any stage is actively recording, and the player would keep track of these various connections and various start and stop times, sending each stage the necessary piece of the full recording on its device that corresponds to that stage's recent session; or these transfers could be postponed until after the concert is over, at which point each stage would request, and the player would send, the necessary pieces of the various recordings required for their compositions.
• the invention is not limited to a single stage connecting in a single direction to multiple players. Although that may be a common scenario, the invention and the systems involved also allow for multiple stages connected to the same players, multiple players acting as stages connected to each other in both directions as player and stage, and single players connected to more than one stage. For example, there are situations such as a wedding where every user present would like to be a stage but would also like to make their device available as a player as well. In this case, 8 friends could all connect to each other as both stages and players so that they could all perform their own live switch in real-time, including only the devices they want to include and excluding others.
• each player will have to be constantly recording from the time the first stage sends a record command until the time the last stage sends a stop command. Then the stages would request, or the players would send, each connected stage the portion of the longer recording that corresponds to that stage's start and stop time. Likewise, that player would be able to request from all other stage/players the portions of the files they need for their compositions.
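The per-stage splitting described above reduces to simple interval arithmetic: given the player's continuous recording and each stage's start/stop times, compute the offset and duration of the segment owed to that stage. A hedged sketch, with all units and names chosen for illustration:

```python
# Hedged sketch: a player records continuously while any stage is
# active, then sends each stage only the portion of the long
# recording matching that stage's start/stop times. Times are
# expressed as minutes from the player's recording start.

def segment_for_stage(recording_start, stage_start, stage_stop):
    """Return (offset, duration) into the player's long recording."""
    offset = stage_start - recording_start
    return offset, stage_stop - stage_start

# Player records from t=0; stage A records 31..33, stage B 32..35
# (mirroring the 8:31-8:33 pm and 8:32-8:35 pm example above).
print(segment_for_stage(0, 31, 33))  # (31, 2)
print(segment_for_stage(0, 32, 35))  # (32, 3)
```

Each stage thus receives a distinct, possibly overlapping slice of the same continuous recording, which is why the recording times of the two stages can differ freely.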
• Each player could play a song or sequence of songs from their internal or cloud-based music library, and the stage, acting as the DJ or Mix-Tape maker, would receive medium resolution quality audio from each and could automatically or manually live switch between them (perhaps with a dissolve near the end of each song), and the stage's audio output could be connected to a sound system that would play the live-mix for everyone in the bar.
• each player would either transfer the entire high-resolution audio music file session or just the songs used to the stage for the creation of a full quality, high-resolution mix tape edit that could be shared with others or as a multi-clip package that could be edited.
• The stage device connects to one or more nearby player devices at a concert and taps record and can hear low quality reference mix audio through headphones, and then taps stop, and the player transfers the high quality audio from the second device to be mixed with audio from the first device to automatically create a true stereo (two mic) or surround sound (7 mic) audio file.
• each of the 9 other users receives a low res proxy to be added to their timeline.
• each user can scroll through their timeline and delete any unwanted images and then tap receive, and all devices would send their high-res photos to each of the other devices or to an intermediate server for distribution to each device.
• the end result would be a finished, customized photo slideshow of the birthday party in time sequence order using the desired contributions of all 10 cameras.
  • Stage user connects to one or more devices for the purpose of recording a single camera video that contains video from one device and synced audio from other separate devices.
• a lone reporter may set up as a player a single mobile device camera on a car dashboard shooting video out of the window while the reporter is standing outside of the car in the rain and wind at a distance but within the camera's view, holding a second mobile device to their mouth as a microphone.
• the video from the player device would replace the video on the stage device, creating a single angle camera video with a single mono audio track on the stage that combines video from the player and audio from the stage.
• The stage device sends out a request for players at a surfing contest to an intermediate server or over Wi-Fi. All nearby players receive a push notification to join the live session. Those that join automatically appear as thumbnails on the stage. The stage selects one or more to record from, including but not necessarily its own camera, then taps record. Player devices send low res proxy thumbnails or streaming video to the stage, and the stage either manually or automatically switches between each device while recording to create an edited, switched composition. When the stage user taps stop, each player device sends a high resolution file to the stage immediately or at a later time.
• The stage then generates a multiclip package consisting of all of the received high resolution video and audio and photo files (and proxies for those not yet received) as well as the player details, environmental details and other meta data and the live switching decision edit details, and also creates a final edited movie.
• the final edited movie and the multiclip package are then automatically or manually sent to all participating players so that they can change the editing or change the switch timing or eliminate some clips in the multiclip package and create their own new edited movie to share.
• One component of the invention is that the system ignores the space and time continuums, allowing us to emulate live, or even create impossible virtual real-time live multi-camera sessions, by broadcasting archived files as emulated live players.
• the present invention will replay those first two from 2013 (if the weather matches) so that users think there are 9; by 2020, the present invention could have hundreds of matching cameras from over the years that a user could switch to in their movie.
• the present invention does not have to be limited to the same time and place; the present invention could also show cameras that recorded at 11am on July 15th each year, or cameras that recorded in other parks around the world at different dates and times that match the environmental weather conditions and other criteria and are likely to be undetectably realistic live matches to the stage's actual or defined shooting scenario.
• the stage can manually define a type of PROXY player that shows on the stage-switching preview as a colored solid thumbnail with a user defined name or number.
• a signal could be sent to the capture device either digitally or through analog means (such as the stage playing a beep tone or clap sound), or the stage user could manually, or by instruction to another person verbally or with hand signals, etc., trigger recording on this non-connected device (maybe the stage can send an infrared signal to trigger recording but the device has no means of sending anything back to the stage), and also a clapboard could be shown in front of the camera or used to give a sound cue on the audio microphone recording because the device may or may not have time code.
• the stage would have a red, green and blue proxy thumbnail and the stage would say "record now" and all three cameras would start recording, and the stage could play a beep sound or flash a torch light in view of all three film cameras, or a clapboard could be used.
• the stage user could tap the desired proxy thumbnail during shooting to switch to that camera at that moment, and so on. They might switch back and forth between the proxies 100 times.
• the files would be imported by the stage and automatically or manually synced by timecode, beep tone, clapboard object motion detection or flash frame relative to the other files and relative to the live switching decision list. No editing or post would be required; the stage would automatically generate the final multi-camera movie and save it as a flattened movie file to disk. It is also possible in this use case to use a connected camera player device attached to the top of the lens of the film camera to provide better visual feedback on the stage of that camera's point of view instead of a solid color labeled proxy. The stage would see this preview proxy and that would be replaced with the full res from the film camera later.
• When the stage starts recording, a command would be sent to the proxy player which would in turn send a command either directly to the RED ONE or via the Teradek Cube connected to the RED ONE, or another sync method would be used like a beep tone or clapboard.
• the Player would send the medium resolution to the stage, or would request the high resolution from the Teradek Cube and after receipt transfer that to the stage to replace the low res thumbnails or proxy.
• the original hard disk file from the RED ONE containing the 4K or UltraHD (4 times HD) could be later imported to the stage and replace the proxy or high-res for super high res 4K final output.
• In some cases the intermediate device could be eliminated, and in others both the proxy player and intermediate device could be eliminated, if the RED camera offers the necessary smart connectivity and if network and processing bandwidth is not a constraint.
• a computer generated 3D rendering of the environment surrounding the stage, or of some other environment relevant to the composition.
• a user could be standing near the Eiffel Tower on a cold morning and wanting to create a multi-camera composition, but no players are available and no pre-recorded emulated player files on the server match the current environment closely enough to provide an undetectable match with the stage user's video.
  • the server could have on it a 3D model of Paris with models of all buildings and the Eiffel Tower; when the stage user sends out a request for players, the server could present a live video of the exact environment being created in real-time in the same way a 3D video game would render a live environment view for the player of the game.
  • Environmental cues like weather, time of day, position and cycle of the moon and stars, cloud coverage and wind speed, and detected audio or colors in the stage's video could be used to make the 3D rendering of Paris extremely realistic. Then the stage could select this emulated computer generated live player and start recording, switching back and forth between the two camera views to create a single composition.
  • each camera device or microphone may perceive visual images or sound differently, so it may be necessary for the player, the stage or an interim server to make adjustments to both the low-res thumbnails or stream and the final high-res files transferred from each device so that they match each other in color, brightness, contrast and white balance (in the case of visuals) and match each other in tone, volume level, etc. (in the case of audio).
  • when the stage first selects players, the stage could send out a stream of itself for players to match, or the stage could analyze incoming player streams and send commands to players to color correct or audio correct in real-time.
  • it may also be necessary to apply lens correction, frame rate correction, scale adjustment or motion stabilization to match all footage. This would ensure that live players or emulated live players match the stage and other players' media as closely as possible so that when the event is over, the composition is truly complete and will not require editing or post-correction.
  • a player device may only be able to capture a still photo every few seconds or may have a boring locked-down camera view which makes it seem like a still photo. This could either be mixed into the composition as a frozen image, or the image could be scaled or panned to create the appearance of live video. Filters could also be applied and layering effects could be overlaid, such as snow falling or butterflies or clouds moving, to make the still photo or fixed position video seem more lifelike, realistic and entertaining.
  • the system could instruct the stage to point the camera west and wait 15 minutes for the sun angle to match that in the archived emulated live player shot of a beach in Hawaii so that both shots would match perfectly and create a beautiful composition.
  • the system could have asked that user to record another shot facing in the opposite direction so that the two clips could be later used as backgrounds for a two-person facing dialogue scene where it is necessary to have a shot facing west and a shot facing east. This is almost never possible with traditional stock footage. If the player is actually live in Hawaii at the same time the stage is live in Alaska, the system could provide instructions to both stage and player to make adjustments to their shooting angles so that combining the foreground and background would create an imperceptible match.
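By way of illustration only (this sketch is not part of the original disclosure, and all names in it are hypothetical), one simple form of the automatic color correction described above is to equalize the per-channel mean brightness of a player's frames against the stage's frames:

```python
# Illustrative sketch: match a player's frame to the stage's color balance
# by scaling each RGB channel so its mean equals the stage's channel mean.
# Frames are modeled as lists of (r, g, b) pixel tuples for simplicity.

def channel_means(frame):
    """Average (R, G, B) of a frame given as a list of (r, g, b) pixels."""
    n = len(frame)
    return tuple(sum(p[c] for p in frame) / n for c in range(3))

def match_to_stage(player_frame, stage_frame):
    """Scale each channel of the player frame so its mean matches the stage."""
    p_means = channel_means(player_frame)
    s_means = channel_means(stage_frame)
    gains = tuple(s / p if p else 1.0 for s, p in zip(s_means, p_means))
    return [
        tuple(min(255, int(round(value * gain)))
              for value, gain in zip(pixel, gains))
        for pixel in player_frame
    ]

stage = [(120, 110, 100), (130, 115, 95)]
player = [(60, 110, 200), (70, 115, 190)]   # too blue relative to the stage
corrected = match_to_stage(player, stage)
```

A production system would operate on real decoded frames and likely match full histograms or white balance rather than channel means, but the principle of bringing player media toward the stage's look is the same.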


Description

METHOD OF CREATING A MEDIA COMPOSITION AND APPARATUS
THEREFORE
Cross Reference to Related Patent Applications
[0001] The present application claims the benefit of United States Provisional Patent Application Serial No. 61/931,911, filed January 26, 2012, the entirety of which is hereby incorporated by reference.
Field of the Invention
[0002] The present invention relates to methods and apparatuses for recording and storing media representations perceived by media recording devices.
Background of the Invention
[0003] The proliferation of media recording devices has resulted in many people having a sensory recording device of some kind with them at all times. In many instances, a user will sporadically capture audio, videos and still photographs with their device at various times during one event (e.g. a birthday party, a tourist attraction, a sporting event, etc.). All of the recorded media typically stay on the user's device as separate individual files until, at some later time, the user downloads the media files to a personal computer or loads them into a mobile video editing application one at a time.
[0004] Using current technologies, an individual user can only record media that the user can perceive from his or her own electronic device (i.e., camera, microphone, camcorder, smart phone, etc.). It may be desirable for an individual to record media from a music concert, sporting event or other event that the user is unable to attend, or from a vantage point that is different than the user's vantage point. A user also might want to create a composition in real-time that combines media from his or her device with media from many other devices. Thus, a need exists for a method and/or apparatus for recording and live-editing media onto a user's electronic device from one or more remote media capture device(s).
Summary of the Invention
[0005] The present invention relates to methods and apparatuses for recording media of a remote nature perceived by a remote sensor of a remote media recording device onto a first memory device of a first portable electronic device. In certain embodiments, the first memory device records a low resolution or placeholder version of the remote media recording device input, the low resolution or placeholder version being later replaced by a corresponding high resolution version of the media input.
[0006] In one embodiment, the invention can be a method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising: a) receiving, on the first electronic device, a plurality of low resolution media streams of high resolution media clips from a plurality of remote electronic devices; b) displaying, in the first display device, visual indicia of each of the low resolution media streams being received by the first electronic device; c) activating one or more of the low resolution media streams being received by the first electronic device in response to user input via the first user input means; d) for each low resolution media stream that is activated in step c), recording a low resolution media clip segment of that low resolution media stream in an interim media composition that resides on the first memory device; e) for each low resolution media clip segment recorded in the interim media composition, receiving on the first electronic device a high resolution media clip segment from the remote electronic device that corresponds to that low resolution media clip segment; and f) automatically replacing the low resolution media clip segments in the interim media composition with the high resolution media clip segments to create the final media composition comprising the high resolution media clip segments.
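By way of illustration only (this sketch is not part of the original disclosure; the data layout and all names are hypothetical), the replacement step f) of the method above can be pictured as swapping, segment by segment, the low resolution placeholder file for the high resolution file received from the same remote device:

```python
# Hypothetical sketch of step f): low resolution clip segments recorded in
# the interim composition are replaced by the corresponding high resolution
# clips as they are received; timing metadata is left untouched.

def finalize(interim, high_res_clips):
    """interim: list of dicts with 'device_id', 'start', 'end', 'file'.
    high_res_clips: {device_id: high-res file path} received later.
    A segment keeps its low-res file if no high-res clip has arrived."""
    final = []
    for segment in interim:
        hi = high_res_clips.get(segment["device_id"])
        final.append({**segment, "file": hi if hi else segment["file"]})
    return final

interim = [
    {"device_id": "cam-A", "start": 0.0, "end": 4.2, "file": "a_lowres.mp4"},
    {"device_id": "cam-B", "start": 4.2, "end": 9.0, "file": "b_lowres.mp4"},
]
received = {"cam-A": "a_highres.mp4", "cam-B": "b_highres.mp4"}
final = finalize(interim, received)
```

Because only the file reference changes, the composition's edit decisions (in and out points, ordering) made against the low resolution proxies carry over unchanged to the high resolution result.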
[0007] In another embodiment, the invention can be a method of creating a video composition comprising: a) displaying, in a first display device of a first electronic device, a plurality of remote camera views perceived by a plurality of remote camera lenses of a plurality of remote electronic devices; b) activating one or more of the plurality of the remote camera views displayed in the first display device via user input means of the first electronic device; c) for each remote camera view that is activated in step b), recording, on a first memory device of the first electronic device, a low resolution video clip segment of the remote camera view as part of an interim video composition; d) for each low resolution video clip segment recorded in step c), acquiring from the remote electronic devices a high resolution video clip segment that corresponds to that low resolution video clip segment; and e) automatically replacing the low resolution video clip segments in the video composition recorded on the first memory device of the first electronic device with the high resolution video clip segments. [0008] In yet another embodiment, the invention can be a non-transitory computer-readable storage medium encoded with instructions which, when executed on a processor of a first electronic device, perform a method comprising: a) displaying, in a first display device of a first electronic device, a
plurality of remote camera views perceived by a plurality of remote camera lenses of a plurality of remote electronic devices; b) activating one or more of the plurality of the remote camera views displayed in the first display device in response to user input inputted via user input means of the first electronic device; c) for each remote camera view that is activated in step b): (1) recording, on a first memory device of the first electronic device, a low resolution video clip of the remote camera view as part of a video composition; and (2) generating and transmitting a first record signal to the remote electronic devices, thereby causing a high resolution video clip of the remote camera view to be recorded on the remote electronic device capturing that remote camera view; d) for each high resolution video clip recorded in step c), generating and transmitting a signal that causes the high resolution video clips from the remote electronic devices to be transmitted to the first electronic device; and e) upon the first portable electronic device receiving the high resolution video clips transmitted in step d), automatically replacing the low resolution video clips in the video composition recorded on the first memory device of the first electronic device with the high resolution video clips.
[0009] In still another embodiment, the invention can be a method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising: a) receiving, on the first electronic device, a plurality of low resolution media streams of high resolution media clip files from one or more databases, the high resolution media clip files stored on the one or more databases; b) displaying, in the first display device, visual indicia of each of the low resolution media streams being received by the first electronic device; c) activating one or more of the low resolution media streams being received by the first electronic device in response to user input via the first user input means; d) for each low resolution media stream that is activated in step c), recording a low resolution media clip segment of that low resolution media stream in an interim media composition that resides on the first memory device; e) for each low resolution media clip segment recorded in the interim media composition, receiving on the first electronic device a high resolution media clip segment from the one or more databases that corresponds to that low resolution media clip segment; and f) automatically replacing the low resolution media clip segments in the interim media composition with the high resolution media clip segments to create the final media composition comprising the high resolution media clip segments.
[0010] In a further embodiment, the invention can be a method of creating a video composition comprising: a) displaying, in a first display device of a first portable electronic device, a first camera view perceived by a first camera lens of the first portable electronic device; b) transmitting, to the first portable electronic device, a plurality of low resolution video streams of high resolution video clips previously stored in one or more databases; c) displaying, in the first display device of the first electronic device, the low resolution video streams, wherein the first camera view and the low resolution video streams are simultaneously displayed in the first display device; d) recording, on the first memory device of the first portable electronic device, a low resolution video clip for each of the low resolution video streams activated by a user as part of a video composition; e) for each low resolution video clip recorded on the first memory device of the first portable electronic device, transmitting corresponding ones of the high resolution clips from the one or more databases to the first portable electronic device; and f) automatically replacing the low resolution video clips in the video composition recorded on the first memory device of the first portable electronic device with the high resolution video clips.
[0011] In an even further embodiment, the invention can be a method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising: a) displaying, in the first display device, a visual indicia for each of a plurality of electronic media recording devices; b) recording, on each of the electronic media recording devices, a perceived event as a media clip that contains an electronic media recording device identifier; c) selectively activating each of the visual indicia of the plurality of electronic media recording devices during step b) to generate and record a proxy clip segment in an interim video composition on the first memory device, wherein each proxy clip segment is associated with the electronic media recording device whose visual indicia was activated to generate that proxy clip segment and a temporal period; d) for each proxy clip segment recorded in the interim media composition, receiving on the first electronic device the media clip recorded in step b); e) for each media clip received in step d), matching the media clip with the corresponding proxy clip segment based on the electronic media recording device identifier and automatically extracting a segment of the media clip that corresponds to the temporal period of that proxy clip segment; and f) for each media clip segment extracted in step e), automatically replacing the proxy clip segment to which that media clip segment is matched with that media clip segment to create the final media composition comprising the media clip segments. [0012] In other embodiments, the invention can be a non-transitory computer-readable storage medium encoded with instructions which, when executed on a processor of a first electronic device, perform any one of the methods described above.
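For illustration only (this sketch is not part of the original disclosure; the data shapes and identifiers are hypothetical), steps e) and f) of the preceding embodiment amount to matching each imported clip to its proxy segments by device identifier and cutting out the portion covering each proxy's temporal period:

```python
# Hypothetical sketch of steps e) and f): proxy segments recorded live on
# the stage are matched to the full clips imported later, by device
# identifier, and the matching temporal span of each clip is extracted.

def extract_segments(proxies, clips):
    """proxies: list of (device_id, start_sec, end_sec) recorded live.
    clips: {device_id: total clip length in seconds} imported afterwards.
    Returns (device_id, start, end) extractions, clamped to clip length;
    proxies with no matching imported clip are skipped."""
    out = []
    for device_id, start, end in proxies:
        length = clips.get(device_id)
        if length is None:
            continue                     # no matching clip was imported
        out.append((device_id, start, min(end, length)))
    return out

proxies = [("cam-A", 0.0, 5.0), ("cam-B", 5.0, 12.0), ("cam-C", 12.0, 15.0)]
clips = {"cam-A": 60.0, "cam-B": 10.0}   # cam-C's clip never arrived
segments = extract_segments(proxies, clips)
```

A real implementation would align clip time bases first (e.g. by timecode, beep tone or clapboard, as described earlier) before extracting, rather than assuming all devices share the stage's clock.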
[0013] In yet other embodiments, the invention can be an electronic device comprising: a first processor; a first memory device; a first transceiver; and instructions residing on the first memory device, which when executed by the first processor, cause the first processor to perform any of the methods described above.
Brief Description of the Drawings
[0014] Figure 1 is a schematic of an electronic device in accordance with an embodiment of the present invention.
[0015] Figure 2 is a schematic diagram of a system overview in accordance with an embodiment of the present invention.
[0016] Figure 3 is a schematic diagram illustrating communication between a first electronic device and a plurality of remote electronic devices.
[0017] Figure 4 is a screen shot of a login page in accordance with an embodiment of the present invention.
[0018] Figure 5 is a screen shot of a list of remote electronic devices that have initiated a sharing status in accordance with an embodiment of the present invention.
[0019] Figure 6 is a screen shot of a first electronic device illustrating the streaming of multiple low resolution video clips from a plurality of remote electronic devices in accordance with an embodiment of the present invention.
[0020] Figure 7 is a screen shot illustrating an edit switching window in accordance with an embodiment of the present invention.
[0021] Figure 8 is a screen shot illustrating how a user can select audio tracks for the audio for a multi-camera session in accordance with an embodiment of the present invention.
[0022] Figure 9 is a screen shot illustrating a searching tool for locating video clips based on qualification criteria in accordance with an embodiment of the present invention.
Detailed Description of the Invention
[0023] The present invention relates to methods and apparatus for recording media clips, such as camera views or audio input, from one or more remote electronic devices (which may be referred to herein as players) positioned at various different locations throughout the world onto a first electronic device (which may be referred to herein as a stage). In certain embodiments, the remote electronic devices may be remote media recording devices that are capable of recording any type of media including video, audio, still photos, text, graphics or the like. The remote electronic devices are deemed remote due to the remote electronic devices being at a location that is different from the location of the first electronic device that is acting as a stage, regardless of whether one or more of the remote electronic devices is adjacent to the first electronic device or thousands of miles away from the first electronic device.
[0024] The first electronic device or stage is able to create a media composition with various media clips or feeds from the different remote electronic devices or players. Thus, the first electronic device or stage can record media from the remote electronic devices and record/store the media onto its own memory. In certain embodiments whereby the media is video, the first electronic device or stage can record video based on the camera views perceived from camera lenses of one or more of the remote electronic devices or players and store the recorded video from the one or more remote electronic devices or players onto its own memory. The first electronic device or stage can simultaneously record video based on the camera view perceived from its own camera lens and store that video into its memory (the first electronic device can alternatively store other types of media, such as that listed above). [0025] In creating the media composition, the user of the first electronic device can switch back and forth among and between the various views so that the media composition that is created is a fully edited media composition. The media composition is created and stored into the memory of the first electronic device or stage as a composition of the media recorded from the remote electronic devices/players and/or the media recorded directly by the first electronic device/stage. The user can then use editing techniques to maneuver the different media clips by changing their order in the final media composition and create transitions in the media composition, such as that which is described in U.S. Patent Application Publication No. 2012/0308209, filed September 8, 2011, the entirety of which is incorporated herein by reference.
[0026] In one embodiment, the present invention is an application for an electronic device, which can be a portable electronic device such as a mobile communication device, a camera or a desktop computer. In one embodiment, the application is a video composition creation program such that the media described herein above is video. In another embodiment, the application is an audio composition creation program such that the media described herein above is audio. In other embodiments the application is an audio/video/still photo composition creation program. Any combination of media can be used with the invention described herein.
[0027] In some embodiments, the first electronic device or stage comprises at least one camera/camcorder lens or at least one audio input sensor. However, it should be noted that in alternate embodiments the first electronic device may not comprise a camera/camcorder, but rather may remotely connect to another electronic device that does comprise a camera/camcorder. Furthermore, in certain embodiments the first electronic device or stage merely comprises a microphone for detecting and storing audio. The electronic device or mobile communication device may be a smart phone or tablet, such as but not limited to, an iPhone® or an iPad®, or a Blackberry®, Windows®, Mac OS®, bada® or Android® enabled device, that preferably but not necessarily comprises at least one camera/camcorder. In such embodiments, the present invention may be an application that can be purchased and downloaded to the electronic device or mobile communication device by the user. As understood in the art, the download of the application may be done through a wired or wireless connection to the manufacturer's or service provider's application database. Thereafter, the present invention would reside on a computer readable medium located within the portable electronic device, mobile communication device, desktop computer or mobile camera/camcorder.
[0028] Furthermore, it should be appreciated that although much of the detail of the description of the invention below is directed to streaming, recording and saving video clips, the invention can also operate by streaming, recording and saving audio clips, still photos, text, graphics, music and the like. Thus, although video is the media that is predominantly used in the description, any other type of media can be used, including still photos, text, graphics, audio, music and the like. At any point in which video clips are described as the media of choice in this description, any other media types can be used.
[0029] Referring to Figure 1, a schematic of an electronic device 100 according to one embodiment of the present invention is illustrated. As noted above, the electronic device 100 may be a portable electronic device, which includes mobile communication devices such as a smart phone or tablet that comprises a camera/camcorder, whereby the user downloads the present invention as an application and stores the application on a computer readable medium located within the electronic device 100. In other embodiments, the electronic device 100 may be manufactured with the features of the present invention built in. The electronic device 100 comprises a display device 101, a lens 102, a flash 103, a processor 104, a power source 105, a memory 106 and a transceiver 107. It should be noted that in some alternate embodiments, the lens 102 and the flash 103 may be omitted from the electronic device 100. Further, as discussed in more detail below, the electronic device 100 may comprise any number of lenses 102 or flashes 103.
[0030] In one embodiment of the present invention, the electronic device 100 is a mobile communication device such as a mobile phone, smart phone or tablet, such as but not limited to, an iPhone®, iPad®, Android®, Blackberry®, bada® or Windows® enabled device. The invention, however, is not so limited and the electronic device 100 may also be a digital camera, camcorder or surveillance camera that has the present invention stored in a computer readable medium therein, or a desktop computer that has an attached or embedded camera and the present invention stored in a computer readable medium therein. The electronic device 100 may also be a camera, camcorder or the like that does not have the present invention stored therein, but rather is in communication (wireless or wired) with another electronic device that does have the present invention stored therein. In still other embodiments, the electronic device 100 acts as a bridge for an external camera device over WiFi or other communication pathways. It should be noted that in alternate embodiments, the present invention may be stored on a computer readable medium within the electronic device 100 prior to the user purchasing the electronic device 100.
[0031] The processor 104 is configured to control the operation of the display device 101, lens 102, flash 103, power source 105, memory 106 and transceiver 107. The power source 105, which may be batteries, solar power or the like, is configured to provide power to the display device 101, lens 102, flash 103, processor 104, memory 106 and transceiver 107. The memory 106 is configured to store photographs and/or video clips recorded by the lens 102 of the electronic device 100 or recorded by a lens of a remote or second electronic device that is different from the electronic device 100, as will be better understood from the discussion below. Of course, the memory 106 can also be used to store audio, graphics, text or any other type of media perceived by a remote electronic device with which the electronic device 100 is in operable electronic communication. The transceiver 107 is capable of transmitting signals from the electronic device 100 to remote electronic devices and is also capable of receiving signals from the remote electronic devices. In some instances, the transceiver 107 communicates with remote electronic devices through a server. Thus, the transceiver 107 enables communication among and between various different electronic devices.
[0032] The lens 102 is a standard camera or camcorder lens that is configured to record video clips and photographs in response to a user input. In one embodiment, the electronic device 100 of the present invention may include more than one lens 102. For example, in one embodiment, the electronic device 100 may comprise a first lens on the front of the electronic device 100 and a second lens on the back of the electronic device 100.
[0033] The flash 103 is configured to provide light to the area being recorded by the lens 102. In one embodiment where the camera/camcorder of the electronic device 100 comprises more than one lens 102, the electronic device 100 may also include more than one flash 103, each flash 103 corresponding to a lens 102. However, it should be noted that the invention is not so limited and in alternate embodiments the flash 103 may be omitted. In certain embodiments both the lens 102 and the flash 103 may be omitted. Specifically, in embodiments wherein the media is not video or is not still photography, the lens 102 and the flash 103 will not be necessary.
[0034] In certain embodiments that include the lens 102 and wherein the invention is used to record video or still photographs, the display device 101 is configured to display a view from the perspective of the lens 102 to enable the user to see the area of which they are taking a photograph or video clip. Thus, the display device 101 is configured to display an image of a real-world event perceived by the lens 102 of the electronic device 100, prior to, during and after the recording of a video clip or photograph. Alternatively, as will be understood from the description below, the display device 101 can be configured to display an image of a real-world event perceived by a camera lens of another portable electronic device with which the electronic device 100 is in communication. In one embodiment, the display device 101 is a touch-screen that further comprises a graphical user interface (GUI) through the use of an onscreen touch interface configured to receive user inputted commands. In such embodiments, the user input means referred to herein is achieved by a user touching the GUI in a desired location to achieve a desired functionality. In alternate embodiments, the electronic device 100 may further comprise a separate, mechanical user interface, such as, for example, buttons, triggers, or scroll wheels.
[0035] As noted above, in one embodiment, the present invention resides on a computer readable medium within a mobile communication device such as a smart phone or tablet. In such embodiments, the electronic device 100 may be configured such that if a video clip, audio clip, or photograph is being recorded and a composition is being created when the user receives a phone call, text message, system alert, or simply needs to leave the application, the video clip, photograph or audio clip and/or composition is automatically saved or cached in the memory 106 so as not to be lost.
[0036] In alternate embodiments, the electronic device 100 may further comprise advanced features such as a global positioning system (GPS) chip, a compass, an accelerometer chip, a gyroscope chip, a thermometer chip, a temperature sensor, a facial detection system or service Application Programming Interface ("API"), a voice detection system or service API, a Speech-To-Text (STT) system or service API, a Text-To-Speech (TTS) system or service API, a translation system or service, a pixel-motion detection system or service API, a music database system or service, a heart rate sensor, a near field communication (NFC) chip, a radio frequency identification (RFID) chip, an ambient light sensor, a motion sensor, an audio recording microphone, an altimeter chip, a Wi-Fi chip and/or a cellular chip. The present invention is further configured to monitor and save any data recorded or obtained by any of the above mentioned chips, sensors, systems and components (collectively referred to hereinafter as "advanced features"). Further, the resulting data recorded or obtained by any of the advanced features may be saved as metadata and incorporated into recorded video clips, photographs or compositions created by the present invention. The incorporation of such data may be done in response to a user input or automatically assigned by the video composition creation program via a settings screen. Examples of the functionality of the advanced features of the electronic device 100 are discussed below. It should be understood that the descriptions below are examples and in no way limit the uses or the resulting data obtained via the advanced features in the present invention.
[0037] GPS coordinates, compass headings, accelerometer and gyroscope readings, temperature and altitude data may be recorded and saved into a recorded video clip, photograph or composition. For further example, an assisted GPS chip could be utilized within the functionality of the present invention to provide such things as automatic captions or titles with location (Philadelphia, PA) by looking up GPS coordinates in a world city database on the fly. This may allow users to record live video from cameras worldwide, whereby each recorded media segment could show the GPS coordinates or city. GPS could also be used to display a running log of distance traveled from the beginning of the video to the end of the video or, for example, current speed in miles per hour.
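As an illustration only (this sketch is not part of the original disclosure; the function names are hypothetical), the "running log of distance traveled" described above can be computed from successive GPS fixes saved as metadata, using the standard haversine great-circle formula:

```python
# Illustrative sketch: cumulative distance traveled from GPS fixes recorded
# as clip metadata, suitable for a real-time burn-in overlay.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def running_log(fixes):
    """Cumulative distance after each (lat, lon) fix, for burn-in display."""
    total, log = 0.0, [0.0]
    for (a, b), (c, d) in zip(fixes, fixes[1:]):
        total += haversine_km(a, b, c, d)
        log.append(total)
    return log

track = [(39.9526, -75.1652), (39.9550, -75.1652)]  # two fixes in Philadelphia
log = running_log(track)
```

Dividing the distance between consecutive fixes by the time between them would likewise yield the current-speed readout mentioned above.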
[0038] The digital compass chip could be utilized to optionally display (burn-in) to the video clip or composition the direction the camera is facing, such as SW or NNE 280 degrees. Further, a compass chip could also be used in combination with GPS, gyroscope and a HUD (heads up display) to help a user replicate a video taken years prior at the same exact location. For example, a user could take a video at the same spot every month for two years and use the present invention to load older, previously recorded video clips and then add a newly recorded video clip taken at precisely the same location, direction and angle of view. [0039] The 3-axis gyroscope could be used for scientific applications along with accelerometer data and could be burned into a recorded video clip or composition for later analysis. Further, it also could be used to auto-stabilize shaky video clips or photographs recorded by the present invention. An altimeter could be used to burn in altitude information into a recorded media segment. This information could appear at the end of the composition in the credits automatically or could be burned-in and adjusted in real-time on a video clip or composition to show ascent or descent.
[0040] The temperature sensor could be used to automatically add a temperature range to credits or to burn in on video. Further, a heart rate sensor could be used if a user wants heart rate information to be shown on a video clip, for example if the user is on a roller coaster.

[0041] The Facial Detection system or service API can be used to determine the number of unique persons in the video clip(s), their names and other related information if available locally on the device 100 or via the Internet. Information acquired via the facial detection system or service API may be used to automatically add captions, bubbles or applicable information on video clips, photographs, the title screen, the credits screen or any other portion of the finalized composition.
[0042] Similar to the Facial Detection system or service API, the Voice Detection system or service API can be used to determine the number of unique persons in the video clip(s), their identities or names and other related information if available locally on the device or via the Internet. Information acquired via the voice detection system or service API may be used to automatically add captions, bubbles or applicable information on video clips, photographs, the title screen, the credits screen or any other portion of the finalized composition.
[0043] The Speech-To-Text system or service API can be used to convert the spoken word portions of a recorded audio track of a video clip or the audio track of an audio recording into written text where possible for the purposes of automatically adding subtitles, closed-captioning or metadata to a video clip or the final composition.
[0044] The Text-To-Speech system or service API can be used to convert textual data either gathered automatically, such as current time, weather, date and location, or inputted by the user, such as titles and credits, into spoken voice audio for the purposes of automatically adding this audio to a recorded video clip or the final composition. This may be used to assist the visually impaired or in combination with the Translation Service API to convert the text gathered from the Speech-To-Text service into a spoken audio track in an alternate language.
[0045] The Translation system or service API can be used for the purposes of automatically converting textual data either gathered automatically, such as current time, weather, date and location, or input by the user, such as titles and credits, into another language for localization or versioning when sharing over worldwide social networks, or in combination with Speech-To-Text and Text-To-Speech to provide visual or audible translations of content.
[0046] A Pixel-Motion Detection system or service API can be used to determine the speed of movement either of the camera or the recording subject for the purposes of smoothing out camera motion for an individual recorded video clip or the final composition. Further, the Pixel-Motion Detection system or service API may also be used to automatically select a music background or sound FX audio based on the measured movement for an individual recorded video clip or the final composition. In one embodiment of the present invention, the Pixel-Motion Detection system or service API uses the beats per minute of a song to determine whether it matches the measured movement for a recorded video clip or final composition. In alternate embodiments, the determination of whether a song is "fast" or "slow" may be made by the user.
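One way the beats-per-minute matching above could work is to map the measured motion to a target tempo and pick the closest track. The mapping range, tolerance, and track structure below are all hypothetical assumptions for illustration.

```python
# Hypothetical sketch: matching music tempo to measured pixel motion.

def motion_to_bpm(motion_score, lo=60, hi=180):
    """Map a normalized motion score (0..1) to a target tempo in BPM."""
    return lo + (hi - lo) * max(0.0, min(1.0, motion_score))

def pick_track(tracks, motion_score, tolerance=15):
    """Pick the track whose BPM is closest to the target, within tolerance."""
    target = motion_to_bpm(motion_score)
    best = min(tracks, key=lambda t: abs(t["bpm"] - target))
    return best if abs(best["bpm"] - target) <= tolerance else None

# Illustrative track library.
tracks = [
    {"title": "slow", "bpm": 70},
    {"title": "mid", "bpm": 120},
    {"title": "fast", "bpm": 170},
]
choice = pick_track(tracks, 0.5)  # moderate motion -> target 120 BPM
```

A moderate motion score selects the mid-tempo track; returning `None` when nothing falls within tolerance would let the user choose manually, as the alternate embodiment describes.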
[0047] A music database system or service API can be a locally or publicly accessible database of songs or tracks with information such as appropriate locations, seasons, times of day, genres and styles, for the purposes of using known information about the video composition and automatically selecting and incorporating a particular music track into a finalized composition based on the known information. For example, such a database might suggest a holiday song on a snowy day in December in Colorado, USA or a Beach Boys song on a sunny day at the beach in San Diego, USA. In one embodiment, the song would be automatically added to the composition to simplify user input. In alternate embodiments, the user has the ability to selectively choose the variables that determine which songs are to be incorporated into the finalized composition.
[0048] An NFC chip could be used to display on a media segment the information communicated by nearby NFC or RFID chips in products, signs or the like. An ambient light sensor could be used to adjust exposure or to add ambient light data to metadata for later color correction assistance in editing. A proximity sensor could be set somewhere on the face of the mobile device and is intended to detect when the phone is near a user's ear. This may be used to help control the present invention, for example by allowing a user to put their finger over the sensor to zoom in instead of using the touch screen or other user interface.

[0049] A Wi-Fi chip may be used for higher performance mobile devices and for a live connection to the Internet for city lookups from GPS data and other information that may be desired in credits or as captions. The Wi-Fi chip could also be used for remote video or audio phone calls and for those calls to be recorded live, with permission, as a part of the composition.
[0050] An audio recording microphone may be used to record audio, but could also be used to control the present invention. For example, the microphone could be used for certain functions, such as pause, resume and zoom via voice commands, or to auto-trigger recording of the next live clip in surveillance situations. If two microphones are used, they could be used to detect the compass direction of a sound being recorded out of the camera lens's view.
[0051] A motion sensor could be used to actively control the application without human intervention and to auto-trigger the recording of a next live clip in surveillance situations. Further, a motion sensor could be used to change the shutter speed in real-time to reduce motion blur on a recorded media segment.
[0052] In other alternate embodiments, the electronic device 100 may comprise a three-dimensional (3D) dual-lens camera. In such embodiments, the present invention is further configured to record 3D video clips and photographs, and to include metadata that comprises depth information obtained from one of the above mentioned advanced features into a finalized composition.
[0053] Referring now to Figure 2, a schematic diagram of a system overview of one embodiment of the present invention is illustrated. The system 200 comprises a plurality of electronic devices 201A-201D and a server 202 that communicate via the Internet 204. Each of the electronic devices 201A-201D may be a portable electronic device, and in certain embodiments each of the electronic devices 201A-201D includes a camera with a camera lens. More specifically, in certain embodiments each of the electronic devices 201A-201D includes all of the components described above with reference to Figure 1 (i.e., all of the components of the electronic device 100). For example, each one of the electronic devices may be a smart phone, such as but not limited to the iPhone®, a digital camera, a digital video camera, a personal computer or a laptop. Although only a couple of electronic devices 201A-201D are illustrated, the invention is not so limited and may comprise any number of the above mentioned electronic devices. Furthermore, any various numbers of combinations of the different types of electronic devices can be used within the scope of the present invention. The server 202 comprises computer executable programs to perform the tasks and functions described herein and facilitates communication between the various electronic devices 201A-201D.

[0054] In the exemplified embodiment of Figure 2, the electronic devices 201A-D are in operable electronic communication with the server 202 via the Internet 204. However, in alternate embodiments the electronic devices 201A-D may be in operable electronic communication with the server 202 via a satellite network, a common carrier network(s), Wi-Fi, WiMAX or any combination thereof. In accordance with the present invention, the server 202 is configured to allow for operable communication between the electronic devices 201A-D.
Nonetheless, it should be noted that in one embodiment the server 202 may be omitted and the electronic devices 201A-D can be in operable electronic communication with one another directly via the Internet 204, a satellite network, a common carrier network(s), Wi-Fi, WiMAX or any combination thereof.
[0055] The present invention allows for a particular one of the electronic devices 201A-D, called a stage, to view and record photographs and video clips from the lens of any of the other electronic devices 201A-D, called players. According to one embodiment, each electronic device 201A-D comprises a computer executable program that allows for the operable electronic communication between the devices 201A-D. Further, it should be noted that any electronic device may be a stage, a player, or both, and their roles may change at any time. In certain embodiments, multiple electronic devices that are each operating as a stage may be operably electronically communicating with one another.
[0056] In certain embodiments, any one of the electronic devices 201A-D may be in operable electronic communication with another electronic device 203, which may be a portable electronic device as has been described herein above or a non-portable electronic device. In such embodiments, the electronic device 201A may generate its video feed from the electronic device 203. The electronic device 203 may or may not have the inventive application or program on the device. Specifically, in some embodiments the electronic device 201A is acting as a bridge for an external camera device (i.e., electronic device 203) over Wi-Fi.
[0057] According to one embodiment of the invention, which will be described in more detail below, after a stage decides to record a photo, video clip or other media from a player, the player stores the video clip locally in memory and transmits a low resolution video to the stage for a preview. As noted above, the transmission of the low resolution video (and all other communication) may be done via the server 202 or not, and may be done over the Internet 204, a satellite network, common carrier network(s), Wi-Fi, WiMAX or any combination thereof. Thereafter, when the player is done recording or when the stage is finished recording via the player's camera lens, the high resolution video clip recorded by the player is transmitted to the stage.
[0058] When the stage receives the high resolution video clip, the stage replaces the low resolution video clip with the high resolution video clip in its memory and on a timeline, regardless of whether that position on the timeline is before other, subsequently recorded video clips. Therefore, the present invention overcomes packet drop issues that occur when attempting to stream higher resolution clips, or situations when an electronic device 201A-D may lose connection altogether. This allows for live, non-linear recording from one electronic device 201A-D by another.
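The non-linear replacement described above can be sketched as a timeline whose low resolution previews hold their positions until the matching high resolution clip arrives. The `Timeline` class and clip identifiers below are hypothetical; the disclosure does not specify a data structure.

```python
# Hypothetical sketch: low-res previews hold timeline positions until the
# high-res clip arrives and replaces the preview in place, even if later
# clips were already recorded after it.

class Timeline:
    def __init__(self):
        self.slots = []  # ordered clip slots: {"id": ..., "res": "low"|"high"}

    def add_preview(self, clip_id):
        """Append a low resolution preview at the next timeline position."""
        self.slots.append({"id": clip_id, "res": "low"})

    def deliver_high_res(self, clip_id):
        """Swap in the high-res clip wherever its preview sits on the timeline."""
        for slot in self.slots:
            if slot["id"] == clip_id:
                slot["res"] = "high"
                return True
        return False  # unknown clip: nothing to replace

t = Timeline()
t.add_preview("playerA-1")
t.add_preview("playerB-1")       # recorded later, sits after playerA-1
t.deliver_high_res("playerA-1")  # arrives late, replaces in place
```

Because replacement is keyed by clip identity rather than arrival order, a high resolution clip that arrives after a dropped connection still lands in its original position, which is the non-linear behavior the paragraph describes.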
[0059] Further, the present invention allows for a particular stage to be connected to and record from more than one player at any particular time. In such instances, the stage may choose which players it would like to be the main display on its application. In embodiments that comprise the server 202, not only does the server 202 regulate the communication between the multiple electronic devices 201A-D, but the server 202 also stores the recorded video clips, audio clips and photographs on a database of the server 202. Thereafter, any electronic device 201A-D may incorporate a saved video clip, audio clip, or photograph into their composition. These saved video clips, audio clips, and photographs are saved in a library as pre-recorded media that can be used to simulate or emulate players. This feature will be discussed in more detail below.
[0060] Referring now to Figure 3, a schematic diagram illustrating communication between a first electronic device 300 and a plurality of remote electronic devices 301A, 301B, 301C is provided. In the exemplified embodiment, the first electronic device 300 and the remote electronic devices 301A-C are each illustrated as an iPhone®. However, the invention is not to be so limited and each of the electronic devices 300, 301A-C can be any one of the different types of electronic devices discussed above. As can be seen, the first electronic device 300, which is operating as the stage, is in operable communication with each of the remote electronic devices 301A-C, which are operating as the players. This enables the first electronic device 300 to display a low resolution video (or other media) stream from each of the remote electronic devices 301A-C on its first display device 302. More specifically, utilizing the present invention described herein, the first electronic device 300 can display on its display screen 302 live low resolution video stream feeds of views that are perceived by camera lenses on the remote electronic devices 301A-C. The first electronic device 300 can then record video that is being perceived by the camera lenses of the remote electronic devices 301A-C. Finally, the invention enables a high resolution video clip to be transferred to the first electronic device to replace the low resolution video clip that is streamed during recording.
[0061] Referring now to Figures 4-6, a step-by-step discussion of the methods, techniques and operation of the present invention will be described. Referring first to Figure 4, a screen shot of a login page of the present invention, which may be a mobile application, is illustrated on the first electronic device 300. It should be appreciated that in certain embodiments the login page may be omitted and upon launch, the application will go directly to the recording page such as that which is illustrated in Figure 6. Thus, the login page is only used in some, but not all, embodiments of the present invention.
[0062] In embodiments that utilize the login page, upon a user downloading the inventive application onto the first electronic device 300, an application window 301 will appear on the first display device 302 of the first electronic device 300. In the application window 301, the user will be prompted to create an account by entering in a username and a password in the appropriate spaces on the application window 301. Prior to signing in or creating an account, the device is indicated as being "not connected," as shown in the bottom left hand corner of the screen shot of Figure 4. Furthermore, in the login page the user has the options to exit remote camera, manage clips, run worldwide remote camera, turn off flash, or reverse the camera direction as illustrated across the top of the first electronic device 300. The user can select any of these options using a user input means, which will be discussed below.
[0063] Upon creating an account, the user will be prompted to determine whether to be in a stage status or a player status via user input means on the first electronic device 300. The user input means on the first electronic device 300 can merely be a user using his or her finger to select between stage status and player status in response to a prompt (i.e., touch screen). However, the invention is not to be so limited and in other embodiments the user input means on the first electronic device 300 can be via a mouse utilizing click and point technology, a user pressing a button on a keyboard that is operably coupled to the first electronic device 300 (such as clicking the letter "S" for stage operation and "P" for player operation), or the incorporation of other buttons/toggle switches on the device. Any other technique can be used as the user input means on the first electronic device 300.
[0064] If the user selects to be in a player status, the camera view perceived by the camera lens of the first electronic device 300 will be available for viewing by other electronic devices that are in the stage status. If the user selects to be in the stage status, the camera views perceived by camera lenses of other remote electronic devices will be available for viewing by the first electronic device 300. In certain embodiments, the inventive application will automatically launch in the stage status, such that the user would then use the user input to opt out of the stage status and into the player status, as described in more detail below with reference to Figure 6.
[0065] In certain embodiments, if the user selects to be in the stage status (or is automatically placed into such status), the first display device 302 of the first electronic device 300 will display a list of remote electronic devices that have initiated a sharing status, such as by logging into the application and initiating a player status via user input means on the remote electronic devices. Figure 5 illustrates a screen shot of the first electronic device 300 with a list window 304 overlaid onto the first display device 302 of the electronic device 300. In certain embodiments the first display device 302 may display a camera view that is perceived by the camera lens of the first electronic device 300 and the list window 304 can be overlaid on top of the camera view display. As discussed above, in this embodiment the first electronic device 300 is acting as a stage. The user initiates the stage status via user input means on the electronic device 300. In certain embodiments, the first electronic device 300 that acts as the stage is referred to as the first electronic device while the electronic devices that are acting as players are referred to as the remote electronic devices, or the second electronic device.
[0066] In certain embodiments, the user does not need to select between stage status and player status. Rather, in certain embodiments upon launching the application, the list window 304 will be displayed on the first display device 302 of the first electronic device 300. In such embodiments, upon launch the application will automatically scan for remote electronic devices that are either active or that meet qualification criteria as discussed below. The user can then select various remote electronic devices to stream a low resolution video clip from, upon which action the first electronic device 300 is automatically deemed a stage. Alternatively, the user can decline to select any of the remote electronic devices and can instead proceed to use the camera on the first electronic device 300, upon which action the first electronic device 300 is automatically deemed a player. Furthermore, it should be appreciated that when the first electronic device 300 is operating as a stage, remote electronic devices can still stream video from the first electronic device 300. Thus, in certain embodiments upon achieving stage status, the first electronic device is both a stage and a player.
[0067] The list window 304 is a list of the remote electronic devices that have initiated a sharing/player status as indicated above. In certain embodiments that utilize the list window 304, every remote electronic device worldwide that is activated and that has entered a sharing status will be provided in the list. However, in other embodiments a remote electronic device will first have to meet one or more qualification criteria prior to being placed in the list window 304 on the first display device 302 of the first electronic device 300. The qualification criteria can be defined by the first electronic device 300 via user input means of the first electronic device 300. In certain embodiments, the one or more qualification criteria are selected from a group consisting of local area network connectivity, GPS radius from the first electronic device, location of the remote electronic device (i.e., desired geographical area), desired location (such as a particular venue, monument, stadium, etc.), and pre-defined group status. Still other qualification criteria can include weather conditions, such that remote electronic devices having similar weather conditions to the first electronic device 300 will populate the list window 304. Many other different criteria can be used as the qualification criteria to assist in determining which remote electronic devices should populate the list window 304 on the first electronic device 300. The different types of criteria that can be used as the qualification criteria are not to be limiting of the present invention in all embodiments unless so specified in the claims.
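The qualification filtering above can be sketched as applying each enabled criterion in turn to the set of sharing players. The criterion names, player record fields, and sample data below are all hypothetical illustrations of the kinds of criteria the paragraph lists (LAN connectivity, venue, group membership, weather).

```python
# Hypothetical sketch: filtering sharing players by the stage's enabled
# qualification criteria before populating the list window.

def filter_players(players, stage, criteria):
    """Return usernames of players satisfying every enabled criterion."""
    def qualifies(p):
        if criteria.get("same_lan") and p["lan"] != stage["lan"]:
            return False
        if "venue" in criteria and p.get("venue") != criteria["venue"]:
            return False
        if "group" in criteria and p["user"] not in criteria["group"]:
            return False
        if "weather" in criteria and p.get("weather") != criteria["weather"]:
            return False
        return True
    return [p["user"] for p in players if qualifies(p)]

# Illustrative sharing players and stage state.
players = [
    {"user": "alice", "lan": "home", "venue": "stadium", "weather": "sun"},
    {"user": "bob", "lan": "cafe", "venue": "stadium", "weather": "rain"},
    {"user": "carol", "lan": "home", "venue": "beach", "weather": "sun"},
]
stage = {"lan": "home"}
```

A GPS-radius criterion would fit the same pattern, comparing a great-circle distance against a user-chosen mile/kilometer limit.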
[0068] In other embodiments, the qualification criteria can be auto-extracted criteria. In such embodiments, the program automatically determines certain characteristics of the first electronic device 300, such as the weather, time, altitude, humidity, geographic location, surrounding landscape and the like. As a result, the program can automatically create matches for the first electronic device 300 by finding remote electronic devices that are located at locations with similar weather, time, altitude, humidity, geographic location or surrounding landscape. Thus, in certain embodiments the invention automatically looks for live players nearby or worldwide that match some environmental or situational criteria (either actual or as specified by the stage or stage user).
[0069] If the user uses the user input means and selects to view remote electronic devices based on local area network connectivity, the remote electronic devices provided in the list window 304 will be those that are connected to the same local area network as the first electronic device 300. If the user uses the user input means and selects to view remote electronic devices based on desired geographical area or location of the remote electronic device, the remote electronic devices provided in the list window 304 will be those that are located in the specified location. This can be accomplished utilizing GPS or other location finding means on the electronic devices. Alternatively, the user can select to populate in the list window 304 all remote electronic devices within a particular mile/kilometer radius from the first electronic device 300. Similarly, the user can select a desired location, such as a particular venue, monument, or stadium, and only view remote electronic devices on the list window 304 that are located at that particular desired location. In this manner, the first electronic device 300 will be able to record video or audio of a sporting event or concert that the user or owner of the first electronic device 300 is not even attending. Finally, the user can establish pre-defined groups, such as persons that the user is friends with, and can select to only have the remote electronic devices in those pre-defined groups portrayed in the list window 304.
[0070] In certain embodiments, a person may activate an electronic device as a stage and search for live players that are nearby. The present invention may match a live player that is located up the coast at the next beach, or at a beach 1,000 miles away in similar tropical weather and sun conditions. The stage user may believe he is viewing live images that are nearby when in fact they are at quite a distance away. Features such as this maximize the number of live matches or usable match footage that can be incorporated into the user's video composition without requiring the stage user to manually search for live players at the beach at sunset or search the worldwide list of live players for those criteria. In certain embodiments, users can launch the stage mode and be presented with a reasonably manageable number of usable matches in descending order of relevance.
[0071] The list window 304 displays information about the various remote electronic devices to enable the user/owner of the first electronic device 300 to determine whether to view and/or record a low resolution video of the camera view (or audio or other media) being perceived by the camera lens (or microphone or other input component) of that particular remote electronic device. In the exemplified embodiment, the list window 304 illustrates the username, location by city and state, and current time at that location on the list window 304. In other embodiments, the location can be displayed on the list window 304 based on a particular venue name, GPS coordinates, or the like. Furthermore, the username can include the owner/operator of the particular remote electronic device's real name.
[0072] The remote electronic devices are displayed in the list window 304 in rows with a user selection box 305 next to each username. Using the user input means on the first electronic device 300, the user can select one or more of the remote electronic devices. The user selects a remote electronic device by clicking in the user selection box 305 (either via touch screen, mouse point/click techniques, keyboard, or the like). In certain embodiments, the user will click in the user selection box 305 of each desired remote electronic device from which the user wants to view a live feed of the camera view perceived by those remote electronic devices, and then provide an indication that the user has completed making selections. Upon adding all of the desired remote electronic devices, the user will use the user input to select the done button 307 to proceed to the next window.
[0073] In certain embodiments as discussed above, upon launching the inventive application the user will be brought directly to the recording page illustrated in Figure 6. In such embodiments, the user will not be prompted to first select from the list window the remote electronic device(s) that the user desires to view camera views from. Rather, in such embodiments the inventive program will automatically match players (i.e., remote electronic devices) for the user of the electronic device 300, and will display the matched players as illustrated in Figure 6 and discussed below. Thus, the inventive application can, in some embodiments, be programmed with algorithms that automatically select matches for a user. These matches can be based on any factors or characteristics that are discussed herein, including environmental factors and qualification criteria that have been pre-set by a user.
[0074] In certain embodiments, a user activating as a player automatically enables stage users to record from that particular player's electronic device as discussed below. However, in other embodiments a stage will first request access to the player's camera view, and the player will be prompted to "allow" the stage access/connection to the player device.
[0075] In the exemplified embodiment, the list window 304 indicates that there are 14 live remote electronic devices from which the user can select to view the camera views perceived by those remote electronic devices. These 14 live remote electronic devices are either the devices that met the user's pre-selected qualification criteria, or they can be the remote electronic devices that meet auto-extracted criteria, or it can be a full list of all active remote electronic devices in the United States.
[0076] Referring to Figure 6, a recording and editing page of the inventive application is illustrated on the first electronic device. As discussed above, in certain embodiments, upon launching the inventive application, the user is brought directly to the recording and editing page. However, in other embodiments the user will follow the steps described above and then be brought to the recording and editing page.
[0077] In the recording and editing page, the first display device 302 of the first electronic device 300 displays a first window 308 that overlays a primary window 309 of the first display device 302. In the exemplified embodiment, the primary window 309 displays the first camera view that is perceived by a first camera lens of the first electronic device 300. Of course, in embodiments wherein the media is not video, the primary window 309 displays a low resolution media stream (which can be audio, still photograph, text, graphics, etc.) of the remote electronic device. The media can either be currently perceived by the remote electronic device, or it can be pre-stored media and the remote electronic device can simply be a library database (described in more detail below). Furthermore, in other embodiments the pre-stored media can be computer-generated, such as a pre-stored media file saved on a server. However, the invention includes a swap function that, upon being activated, displays one of the remote camera views in the primary window 309.
[0078] The first window 308 that overlays the primary window 309 displays the plurality of remote camera views perceived by the camera lenses of the remote electronic devices that were selected from the list as discussed above with reference to Figure 5, as well as the first camera view that is perceived by the first camera lens of the first electronic device 300. In embodiments whereby the selection stage is omitted, the first window 308 displays the plurality of remote camera views perceived by the camera lenses of the remote electronic devices as determined by the inventive program's matching algorithms.
[0079] Specifically, in the exemplified embodiment the first window 308 displays the first camera view of the first electronic device 300 in a first thumbnail 310 of the first window 308, a first remote camera view of a first remote electronic device in a second thumbnail 311 of the first window 308, a second remote camera view of a second remote electronic device in a third thumbnail 312 of the first window 308, a third remote camera view of a third remote electronic device in a fourth thumbnail 313 of the first window 308, and a fourth remote camera view of a fourth remote electronic device in a fifth thumbnail 314 of the first window 308. Of course, more or fewer than four remote camera views can be displayed in the first window 308 depending upon the number of remote cameras/users that are selected from the list window 304 as discussed above. As noted above, upon a swap function being activated, such as by user input on the user input means, a selected one of the remote camera views can be displayed in the primary window 309 and in its respective thumbnail. Furthermore, as has been discussed herein, although the invention is being described herein directed to camera views, in other embodiments whereby the media is other than video/photograph, the camera views are omitted and it can be any other media perceived by a remote electronic device.
[0080] In the exemplified embodiment, each of the thumbnails 310-314 displays a low resolution media stream of the high resolution media clips indicative of the camera view that is perceived by the lens of the particular electronic device. Initially, until activation and recordation are initiated as discussed below, the low resolution video stream of the high resolution media clips indicative of the camera view is merely displayed on the thumbnails 310-314 for the user to preview, but is not recorded or saved into the memory of the first electronic device 300. In the exemplified embodiment, the first thumbnail 310 depicts a low resolution video stream of the high resolution media clip indicative of the first camera view that is perceived by the first camera lens of the first electronic device 300. The first thumbnail 310 may depict the low resolution video stream of the high resolution media clip indicative of the first camera view regardless of whether the first electronic device 300 is set in a recording mode or not. Thus, in certain embodiments even if the first electronic device 300 is not set to record, the first camera view perceived by the first camera lens of the first electronic device 300 will be displayed as a low resolution video stream in the first thumbnail 310. Similarly, the second thumbnail 311 depicts a low resolution video stream of a high resolution media clip indicative of the first remote camera view that is perceived by the camera lens of the first remote camera. The second thumbnail 311 may depict the low resolution video clip of the high resolution media clip indicative of the first remote camera view regardless of whether the first remote camera is set in a recording mode or not. The same is true for each of the third, fourth and fifth thumbnails 312-314 in displaying the low resolution video clip of the respective high resolution media clips that are indicative of the remote camera views.
Of course, the low resolution video stream can merely be a low resolution media stream, which includes audio, photography, graphics, text, or the like.
[0081] In certain embodiments, upon the remote cameras that are chosen to be players being selected (such as from the list window 304 or automatically as determined by the inventive application, as discussed above), the thumbnails 310-314 are all selected. The user can deselect any one or more of the thumbnails 310-314 by using the user input, such as by tapping on the respective thumbnail 310-314, double tapping on the respective thumbnail 310-314, sliding a finger across the respective thumbnail 310-314, or the like. Furthermore, in other embodiments upon the remote cameras that are chosen to be players being selected, none of the thumbnails 310-314 are selected and the user uses the user input means discussed above to select those thumbnails.
[0082] In the exemplified embodiment, each of the first, second, third and fourth thumbnails 310-313 has been selected as discussed above. As a result, each of the first, second, third and fourth thumbnails 310-313 is darkened/grayscaled. However, the fifth thumbnail 314 has not been selected (or has been deselected), and thus the fifth thumbnail 314 remains white. The particular coloring/grayscale used is not limiting of the present invention, but is merely a perceivable difference in the colors or other visible features of the thumbnails that indicates to the user which of the camera views is being recorded. In the exemplified embodiment, the low resolution video streams of the high resolution media clips indicative of the first camera view of the first electronic device 300, the first remote camera view of the first remote electronic device, the second remote camera view of the second remote electronic device, and the third remote camera view of the third remote electronic device are available for use in a video composition that is to be created as discussed above. However, the low resolution video stream of the high resolution media clip indicative of the fourth camera view of the fourth remote electronic device is merely being displayed in the fifth thumbnail 314, but is not also being stored in the memory device of the first electronic device 300. However, it should be understood that in the exemplified embodiment, the low resolution media streams of the high resolution media clips of each of the first, second, third and fourth remote camera views are all displayed and streamed on the thumbnails, regardless of whether or not they are selected.
[0083] In the exemplified embodiment, the thumbnails 310-314 are displaying low resolution media streams of high resolution media clips of the camera views of the remote electronic devices (and of the first electronic device in the first thumbnail 310). However, the invention is not to be so limited. The thumbnails 310-314 may merely be visual indicia of each of the low resolution video streams. Thus, the visual indicia may be a username, a person's actual name, a location of the electronic device by GPS coordinates, a venue name, a city and state, or any other type of visual indicia of the remote electronic devices and the first electronic device. In still other embodiments, the visual indicia may be a blank thumbnail that is colored. Thus, in certain embodiments there is nothing actually streaming on the thumbnails 310-314, but the thumbnails 310-314 merely represent an electronic device by being visual indicia.
[0084] As discussed above, the present invention is not limited to video as the media being streamed, recorded and edited. Thus, in embodiments whereby the media is audio, still photos, text, graphics, music and the like, there would be no low resolution video stream to display on the thumbnails 310-314. In such embodiments, the visual indicia noted above can be used. In the case where the media is audio only, if a user double-taps on a particular colored proxy thumbnail to make it active and has headphones attached, the user can hear the audio from that particular device.
[0085] In the exemplified embodiment, while the low resolution video streams of the first camera view and the remote camera views are being displayed on the thumbnails 310-314 of the first window 308 (or in alternative embodiments, while the visual indicia is displayed on the thumbnails 310-314), the user can use the user input means to activate recordation of those low resolution video streams. Specifically, upon clicking on the record button 320, the first electronic device 300 will begin recording into its memory device an extended low resolution video (or other media) clip from the first electronic device 300 and each of the remote electronic devices that has a low resolution video stream or other visual indicia displayed on one of the selected thumbnails 310-313. Thus, in the exemplified embodiment, upon the user clicking or tapping on the record button 320, the first electronic device 300 will record into its memory an extended low resolution video clip of the high resolution media clips indicative of the camera views of the first electronic device 300, the first remote electronic device, the second remote electronic device and the third remote electronic device. Because the fourth remote electronic device has been deselected (i.e., the fifth thumbnail 314 is white), the low resolution video clip of the high resolution media clip indicative of the camera view of the fourth remote electronic device will not be recorded and saved in the memory of the first electronic device 300.
[0086] In the manner discussed above, the extended low resolution video clips of the high resolution video clips of the camera views from each of several different electronic devices, including both the first electronic device 300 and several remote electronic devices, can be recorded into the memory of the first electronic device 300 at the same time, each saved as a separate file. In addition to recording the extended low resolution clips of the high resolution video clips of the camera views of the electronic devices separately, the user can also use the inventive application to create and record into the memory of the first electronic device 300 a single interim video composition that is a combination of separate low resolution media clip segments from each of the extended low resolution video clips recorded as discussed below.
[0087] During the recording process, the video composition that is being created is referred to herein as an interim video (or media) composition. After completion of recording, the video composition is referred to herein as a final video (or media) composition. It will be better understood from the description below that the interim video composition is a composition that includes low resolution video (or media) and the final video composition is a composition that includes high resolution video (or media).
[0088] In the exemplified embodiment, prior to pressing the record button 320, the user will activate one or more of the thumbnails 310-314 such that the activated thumbnails (or the activated low resolution video streams) will be recorded as a specific temporal portion of a video composition. Although the invention is described as activating the thumbnails, it should be appreciated that at certain points the invention is described as activating the low resolution media streams or other visual indicia which are depicted on the thumbnails. The thumbnails are indicators, in certain instances, of the low resolution media streams. Thus, when a particular thumbnail is activated, the low resolution media stream or other visual indicia that corresponds with that particular thumbnail is also considered activated.
[0089] In the exemplified embodiment, the first thumbnail 310 has been activated, as indicated by the activation symbol 325 displayed on the first thumbnail 310. Activating the thumbnails 310-314 can be achieved by clicking or tapping on the thumbnails 310-314, double, triple, quadruple (or more) tapping on the thumbnails 310-314, sliding the user's finger downwardly, upwardly, sideways or the like across the thumbnails 310-314, or by any other user input means. Thus, in addition to separately recording the extended low resolution video clips of the high resolution video clips of the camera views of each of the first electronic device 300 and the remote electronic devices, the first electronic device 300 also records to memory an interim video composition that is created by switching back and forth among and between the various low resolution video clips or camera views.
[0090] Thus, in the exemplified embodiment the first four thumbnails 310-313 are selected so that upon pressing the record button 320 the extended low resolution video clips corresponding to the camera views of the first electronic device and the first, second and third remote electronic devices will be recorded to the memory of the first electronic device 300 as discussed above. Furthermore, in the exemplified embodiment the fifth thumbnail 314 is deselected so that upon pressing the record button 320 the camera view of the fourth remote electronic device will not be recorded to the memory of the first electronic device. Because the fifth thumbnail 314 is deselected, the fifth thumbnail and what it represents (i.e., the camera view of the fourth remote electronic device) is unavailable for inclusion in the video composition. However, in certain embodiments the deselected thumbnail can be later selected even during a recording session. Specifically, in the exemplified embodiment the user can press, tap or otherwise engage an addition button 335 which enables a user to add a new thumbnail/electronic device into the recording session. Thus, the user can click the addition button 335, and then tap or click the fifth thumbnail 314 to include the low resolution video stream of the camera view of the fourth remote electronic device into the recording session so that the camera view of the fourth remote electronic device is available for use in the interim video composition.
[0091] In the exemplified embodiment the activation symbol 325 is displayed on the first thumbnail 310 so that the first electronic device 300 is activated for inclusion in the first temporal portion of the video composition. Thus, in the exemplified embodiment, upon pressing the record button 320, the interim video composition will begin being recorded to the memory of the first electronic device 300 with a low resolution media clip segment of the low resolution media stream corresponding to the camera view of the first electronic device 300 because the first thumbnail 310 corresponding to the first electronic device 300 is activated. At some time after pressing the record button 320, for example ten seconds after pressing the record button 320, the user can deactivate the first thumbnail 310 (such as by single, double, triple or the like tapping the first thumbnail 310 or by any other user input means discussed above) and activate the second thumbnail 311 (or the third thumbnail 312 or the fourth thumbnail 313) using any of the user input means discussed herein. At this point, the activation symbol 325 will no longer be displayed in the first thumbnail 310, but will instead be displayed in the second thumbnail 311 (or the third thumbnail 312 or the fourth thumbnail 313). Thus, starting at ten seconds after recording began, the interim video composition will be recorded to the memory of the first electronic device 300 with a low resolution media clip segment of the low resolution media stream corresponding to the camera view of the first remote electronic device (or the second remote electronic device or the third remote electronic device, depending upon which of the thumbnails is activated). The user can continue activating and deactivating the various thumbnails so that different low resolution media clip segments from the different electronic devices are included in the video composition at different temporal points thereof.
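The live-switching behavior described above can be sketched in Python. This is only an illustrative model, not the application's actual implementation; the class name `SwitchLog`, timestamps in seconds, and the `(device_id, start, end)` segment tuples are all assumptions introduced for the sketch.

```python
class SwitchLog:
    """Illustrative sketch: log which device's low resolution stream is
    active at each moment during recording, yielding the list of clip
    segments that make up the interim composition."""

    def __init__(self):
        self.segments = []      # completed (device_id, start_s, end_s) tuples
        self._active = None     # (device_id, start_s) of the open segment

    def start(self, device_id, t=0.0):
        # Pressing the record button opens a segment for the activated device.
        self._active = (device_id, t)

    def switch_to(self, device_id, t):
        # Activating a different thumbnail closes the previous segment
        # and opens a new one for the newly activated device.
        dev, start = self._active
        self.segments.append((dev, start, t))
        self._active = (device_id, t)

    def stop(self, t):
        # Pressing the done button closes the final segment.
        dev, start = self._active
        self.segments.append((dev, start, t))
        return self.segments
```

For the example in the text, recording starts on the stage device, switches to a remote device at ten seconds, and stops at sixty seconds, producing two sequential segments.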
[0092] Using this technique, the user can switch between the different remote camera views by activating and deactivating the various respective thumbnails. For example, if a one-minute video composition is being created, the user will tap the record button 320 and all of the low resolution video streams corresponding to the selected thumbnails will be recorded in the memory device of the first electronic device. In the exemplified embodiment, there are three remote electronic devices that are selected (and the first electronic device is selected). At the end of the one-minute recording, all of the one-minute extended high resolution media clips (one from each of the remote electronic devices that are selected) are transmitted to the first electronic device 300 and stored in the memory. However, those three one-minute extended high resolution media clips are also combined into a final video composition with the live switching timing decisions that were made during the recording as discussed above. Thus, the final video composition that is created is an edited video that is one minute long, but that switches between the various selected camera views as often as the user desired, for instance twenty times or more, throughout the one minute.
[0093] Thus, in certain embodiments the video composition can be created by the user on the fly during recordation of the various low resolution video clips. Specifically, the user of the first electronic device 300 can switch between the various low resolution media streams that are being recorded and activate a specific one of the various low resolution video streams being recorded to be used in the video composition at a specific moment in time. As one example, the user may be standing on a specific street corner in Paris and can initiate a stage session. The user may find that there are three live players on a street corner in Paris near the user. The user can select all three players and activate a recording. However, in addition to recording three separate expanded low resolution video clips from each of the three players, the user can activate only one of the three players at a time to record from for the creation of the video composition. Thus, the user can switch back and forth between the three players to obtain various different views from the street corner in Paris. After one minute or any other desired time, the user can use the user input means to stop recording by clicking the done button 330. Upon completion of recording, the one-minute (or other desired time) high resolution video clip will be transmitted to the memory device of the first electronic device 300. In this example the one-minute high resolution video clip will be a combination of the different camera views from the three different players at different temporal portions throughout the video clip. This negates the need for any later editing and creates a desired scene or video compilation automatically.
[0094] Although the invention has been described above such that only one of the selected thumbnails 310-313 is activated at a time, the invention is not to be so limited in all embodiments. In certain embodiments, more than one of the selected thumbnails 310-313 can be activated and have the activation symbol 325 displayed thereon at a single time. As a result, the low resolution media clip segments from more than one of the camera views will be included in the interim video composition at the same temporal point in time of the interim video composition. With video, this can be accomplished by utilizing picture-in-picture or split screen views in the interim video composition. With audio, this can be used to achieve a stereo or surround sound effect, such that if audio at a concert is being recorded from different vantage points and all of the vantage points are activated and recorded to create a media composition, a surround sound effect is achieved from the combined sound. In other words, if multiple remote electronic devices are located at different locations in a concert venue, a stage can record from each of the players at the same time and can activate each of the players at the same time as discussed above. Thus, when the media composition that includes the sound from each of the players together at the same point in time in the media composition is replayed, the sound will be equivalent to surround sound.
[0095] If the first and second thumbnails 310, 311 corresponding to the low resolution media stream of the first electronic device 300 and to the low resolution media stream of the first remote electronic device are activated sequentially during recording, the low resolution media clip segment of the low resolution media stream of the first electronic device and the low resolution media clip segment of the low resolution media stream of the first remote electronic device will be positioned sequentially in the interim media composition. If the first and second thumbnails 310, 311 corresponding to the low resolution media stream of the first electronic device 300 and to the low resolution media stream of the first remote electronic device are activated concurrently during recording, the low resolution media clip segment of the low resolution media stream of the first electronic device and the low resolution media clip segment of the low resolution media stream of the first remote electronic device will be positioned concurrently in the interim media composition. Concurrent positioning can be achieved as discussed above by using picture-in-picture or split screen when the media is video, or by concurrent audio to achieve a surround sound effect when the media is sound/audio.
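The sequential versus concurrent positioning rule just described can be sketched as follows. This is an illustrative model only; the function name, the `(device_id, start_s, end_s)` event tuples, and grouping by identical temporal periods are assumptions made for the sketch, not the patent's data format.

```python
def place_segments(events):
    """Illustrative sketch: segments whose activation periods are distinct
    are positioned sequentially in the composition; segments sharing the
    same temporal period are positioned concurrently (picture-in-picture
    or split screen for video, mixed audio for sound).
    `events` is a list of (device_id, start_s, end_s) tuples."""
    timeline = {}
    for device_id, start, end in events:
        # Events with the same temporal period land in the same slot.
        timeline.setdefault((start, end), []).append(device_id)
    # Sort by start time so sequential slots appear in temporal order.
    return sorted(timeline.items())
```

Sequential activation yields two separate slots; concurrent activation of both devices over the same period yields one slot holding both device identifiers.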
[0096] Referring still to Figure 6, upon the user determining that the interim video composition is complete, such as after recording for five minutes, the user will use the user input to select (tap, click, etc.) the done button 330. This will signal completion of recording of the separate expanded low resolution media clips from each of the electronic devices corresponding to one of the selected thumbnails and completion of recording of the interim video composition. Upon tapping the done button 330, the first electronic device 300 will have stored in its memory an extended low resolution media clip (that comprises the low resolution media clip segments) corresponding to the high resolution media clips of the camera views of (or the audio sounds perceived by) the electronic devices that were selected as discussed above. The extended low resolution media clip of each of the camera views will have a length in time equal to the time between the user tapping the record button 320 and the user tapping the done button 330 (as long as the thumbnail corresponding to the camera view was selected during that entire time period). Additionally and separately, the memory of the first electronic device 300 will also have stored an interim video composition corresponding to the various low resolution media clip segments that correspond to the thumbnails that were activated at different points in time during the recording session. This interim video composition is essentially a video composition that has been edited "on the fly" during the recording of the various electronic devices. Thus, the interim video composition is a completely edited video composition that needs no further editing (although further editing is possible if desired, as discussed below with reference to Figures 7 and 8).
The interim video composition includes various segments of time (positioned sequentially or simultaneously, such as picture-in-picture and split views as discussed above) from different camera views from different remote electronic devices compiled into a single video.
[0097] By only streaming and recording low resolution images of the remote electronic devices onto the display and into the memory of the first electronic device 300, the present invention overcomes packet drop issues that occur when attempting to stream higher resolution clips, or situations when an electronic device may lose connection altogether. This allows for live, non-linear recording from one electronic device by another.
[0098] Moreover, for each thumbnail or low resolution media stream that is selected as discussed above, a high resolution media clip of the remote camera view of that remote electronic device is being recorded and stored on the remote electronic device capturing or perceiving that remote camera view. Thus, for example, as noted above the first remote camera view of the first remote electronic device is selected, and thus an extended low resolution media clip of the first remote camera view is both being displayed in the second thumbnail 311 and being stored and recorded into the memory device of the first electronic device 300. At the same time, an extended high resolution media clip of the first remote camera view is being recorded and stored on the first remote electronic device. Similarly, the second remote camera view of the second remote electronic device is selected, and thus an extended low resolution media clip of the second remote camera view is both being displayed in the third thumbnail 312 and being stored and recorded in the memory device of the first electronic device 300. At the same time, an extended high resolution media clip of the second remote camera view is being recorded and stored on the second remote electronic device.
[0099] Furthermore, the same thing occurs with the activated thumbnails that correspond to low resolution media streams. Specifically, during the creation of the video or media composition, when one of the thumbnails is activated, a low resolution media clip segment of the camera view perceived by the electronic device corresponding to the activated thumbnail is recorded into the memory of the first electronic device 300. At the same time, a high resolution media clip segment corresponding to the low resolution media clip segment is recorded and stored on the remote electronic device capturing or perceiving that remote camera view. It should be appreciated that the high resolution media clip segment is identical to the low resolution media clip segment except that the high resolution media clip segment has a higher resolution than the low resolution media clip segment. Thus, the terms high resolution and low resolution are not intended to be limiting of the present invention, but rather are merely used as terms to be understood relative to one another to indicate one resolution that is higher than another resolution.
[00100] In certain embodiments, one or more of the thumbnails on the display of the first electronic device 300 can be used as a proxy (i.e., placeholder) for later-added media. Specifically, in such embodiments the thumbnails will be used as visual indicia for a plurality of electronic media recording devices. Upon initiating recording, a perceived event will be recorded on each of the electronic media recording devices. The media clip will contain an electronic media recording device identifier. This will enable the media clip to be later incorporated into a media composition. The user will selectively activate the visual indicia of the plurality of electronic media devices to generate and record a proxy clip segment in an interim video composition on the first memory device. Each proxy clip segment is associated with the electronic media recording device whose visual indicia was activated to generate that proxy clip segment. Furthermore, each proxy clip segment is associated with a temporal period.
[00101] In such an embodiment, the proxy clip segment is a blank media segment, which can simply be a buzz, silence, a blue screen or any other type of media segment. The proxy clip segment is used as a placeholder so that the media clips recorded on the electronic media recording devices can be later added to the media composition. This can be useful if the electronic media recording devices cannot operably electronically communicate with the first electronic device, but can later be connected thereto, or if the media clips can later be downloaded thereto.
[00102] Then, for each proxy clip segment recorded in the interim media composition, the first electronic device will receive the media clip that was recorded on the electronic media recording devices. Furthermore, for each media clip received, the media clip will be matched with the corresponding proxy clip segment based on the electronic media recording device identifier. A segment of the media clip that corresponds to the temporal period of that proxy clip segment can then be extracted and automatically used to replace the proxy clip segment, thereby creating a final media composition comprising the media clip segments.
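The proxy resolution step, matching received clips to placeholder segments by device identifier and extracting the temporal period, can be sketched as follows. This is an assumption-laden illustration: a "clip" is modeled as a plain list of frames sampled at one frame per second, and the function name and tuple layout are invented for the sketch.

```python
def resolve_proxies(proxy_segments, received_clips):
    """Illustrative sketch: replace each proxy (placeholder) segment with
    the matching slice of the clip recorded on the corresponding device.
    Matching is by the device identifier; extraction uses the segment's
    temporal period.
    proxy_segments: list of (device_id, start_s, end_s)
    received_clips: dict of device_id -> list of frames (1 frame/second)."""
    final = []
    for device_id, start_s, end_s in proxy_segments:
        clip = received_clips[device_id]   # match on the device identifier
        segment = clip[start_s:end_s]      # extract the temporal period
        final.append((device_id, segment))
    return final
```

For example, a composition with a proxy for device "A" over seconds 0-2 and device "B" over seconds 2-4 pulls the corresponding slices from each received clip.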
[00103] At some point in time after the user has clicked the done button 330 to indicate completion of recording, the extended high resolution video clip that is recorded on the respective remote electronic devices is transmitted to the first electronic device 300. Thus, an extended high resolution video clip corresponding to each one of the extended low resolution video clips that were saved locally on the memory of the first electronic device 300 is transmitted to the first electronic device 300. Upon receipt of the extended high resolution video clips by the first electronic device 300, the extended high resolution video clips replace the extended low resolution video clips in the memory of the first electronic device 300. The replacement of the extended low resolution video clips with the extended high resolution video clips in the memory of the first electronic device 300 occurs automatically upon the first electronic device 300 receiving the extended high resolution video clips from the remote electronic devices.
[00104] Similarly, after the user has clicked the done button 330 to indicate completion of recording, for each low resolution media clip segment recorded in the interim media composition, the first electronic device receives a high resolution media clip segment from the remote electronic device that corresponds to that low resolution media clip segment. Furthermore, the inventive application automatically replaces the low resolution media clip segments in the interim media composition with the high resolution media clip segments to create the final media composition comprising the high resolution media clips. In certain embodiments, the interim media composition and the final media composition are the same singular file.
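The automatic replacement of low resolution clips by their high resolution counterparts can be sketched as a keyed overwrite. The dictionary layout, the clip identifiers, and the `"res"` field are all assumptions introduced for this illustration; the patent does not specify a storage format.

```python
def swap_in_high_res(stored, incoming):
    """Illustrative sketch: on arrival, high resolution clips automatically
    overwrite the low resolution clips saved under the same clip
    identifier on the stage device; identifiers with no stored
    counterpart are ignored."""
    for clip_id, high_clip in incoming.items():
        if clip_id in stored:
            stored[clip_id] = high_clip   # replace in place under the same key
    return stored
```

A clip recorded during the session is upgraded in place, while an unrelated incoming clip identifier leaves the store untouched.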
[00105] As discussed above, the high resolution media clip segments and the extended high resolution media clips are transmitted from the remote electronic devices to the first electronic device at some time after the user clicks the done button 330 to indicate completion of recording. This later time can be determined in a number of ways. Specifically, in some embodiments the extended high resolution media clips and the high resolution media clip segments of the remote camera views are automatically transmitted to the first electronic device upon the user clicking the done button. The extended high resolution media clips transmitted to the first electronic device 300 will be temporally delimited by the time that the user began recording the extended low resolution media clips and the time the user ended recording of the low resolution media clips.
[00106] In other embodiments, the extended high resolution media clips and the high resolution media clip segments can be transmitted to the first electronic device 300 upon determining that the extended high resolution media clips and the high resolution media clip segments will be transmitted at a data rate that exceeds a predetermined threshold. Thus, it may be determined that upon the user ending recording, the data rate is too slow for transmission of the extended high resolution media clips and the high resolution media clip segments. In such instances, the remote electronic device will wait until the data rate exceeds the predetermined threshold, and at such time will transmit the extended high resolution media clips and the high resolution media clip segments. In still other embodiments, transmitting the extended high resolution media clips and the high resolution media clip segments from the remote electronic devices to the first electronic device 300 can be automatically initiated upon determining that the first electronic device 300 is no longer recording.
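The deferred-upload rule above reduces to a simple gating condition. The function name, the kbps units, and the boolean recording flag are illustrative assumptions; the patent only specifies that transmission waits for a data rate above a predetermined threshold.

```python
def should_transmit(data_rate_kbps, threshold_kbps, recording_active):
    """Illustrative sketch of the upload gate: a remote device defers its
    high resolution upload until recording has ended AND the measured
    data rate exceeds the predetermined threshold."""
    return (not recording_active) and data_rate_kbps > threshold_kbps
```

A remote device polling this predicate would hold its clips while recording continues or while the connection is too slow, and transmit as soon as both conditions are met.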
[00107] In certain embodiments, transmitting the extended high resolution media clips and the high resolution media clip segments of the remote camera views from the remote electronic devices to the first electronic device 300 includes wirelessly uploading the extended high resolution media clips and the high resolution media clip segments of the remote camera views from the respective remote electronic devices to the server, and wirelessly downloading the extended high resolution media clips and the high resolution media clip segments of the remote camera views from the server to the first electronic device 300. Furthermore, in some instances uploading the extended high resolution media clips and the high resolution media clip segments to the server is automatically initiated upon determining that the extended high resolution media clips and the high resolution media clip segments of the respective remote camera views will be wirelessly uploaded from the respective remote electronic devices to the server at a data rate that exceeds a predetermined threshold. Similarly, in some instances downloading the extended high resolution media clips and the high resolution media clip segments from the server is automatically initiated upon determining that the extended high resolution media clips and the high resolution media clip segments of the respective remote camera views will be wirelessly downloaded from the server to the first electronic device 300 at a data rate that exceeds a predetermined threshold.
[00108] As discussed above, upon the extended high resolution media clips of the remote camera views from the respective remote electronic devices being transmitted to and received by the first electronic device 300, the extended high resolution media clips replace the extended low resolution media clips for that remote camera view that were recorded/stored in the first memory device of the first electronic device 300. Furthermore, the high resolution media clip segments are combined together to form the final media composition and replace the interim media composition which comprises the low resolution media clip segments. In certain embodiments, the high resolution video clip is transmitted/routed through a server as has been discussed in more detail above.
[00109] In certain embodiments, after the extended high resolution media clips corresponding to the extended low resolution media clips are stored in the memory of the first electronic device, the inventive application extracts the high resolution media clip segments that correspond to the low resolution media clip segments of the interim media composition from the extended high resolution media clips stored/recorded on the memory device of the first electronic device. Then, the inventive application replaces the low resolution media clip segments of the interim media composition with the extracted high resolution media clip segments.
[00110] Upon being created, the video composition is stored in the memory of the first electronic device. In one embodiment, the video composition can be stored as a single file. In other embodiments, each of the separate high resolution video clips that are used to form a single video composition is saved as a separate file, such as in the same folder or subfolder. In yet another embodiment, each of the separate high resolution video clips that are used to form a single video composition is given metadata that effectively defines the video composition, such as associating each unique high resolution video clip with a unique video composition identifier that defines the unique high resolution video clips as being a part of a particular video composition, and a position identifier that defines the sequential ordering of that particular video clip in the video composition.
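The metadata scheme in the last alternative, a composition identifier plus a position identifier per clip, can be sketched as follows. The field names and the list-of-dicts representation are hypothetical; the patent leaves the concrete metadata format open.

```python
def tag_clips(clip_files, composition_id):
    """Illustrative sketch: tag each high resolution clip with the
    composition it belongs to (composition identifier) and its
    sequential ordering within that composition (position identifier),
    instead of flattening everything into one file."""
    return [
        {"file": name,
         "composition_id": composition_id,   # which composition this clip is part of
         "position": index}                  # sequential ordering in the composition
        for index, name in enumerate(clip_files)
    ]
```

A player could then reassemble the composition at playback time by filtering on the composition identifier and sorting on the position identifier.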
[00111] In another embodiment, starting with frame 00:00:01, the software takes one frame at a time from each live camera or stored camera file (whichever camera(s) appear at that frame; there could be more than one camera in a given frame, as in the case of Picture-in-Picture) and creates a new frame or a composite frame and writes those frames in sequence to a new digital file that is sometimes referred to as a flattened file because it no longer has multiple layers that can be edited. So for a one-minute final flattened movie at 30 frames per second, this process of accessing the folder of related files and grabbing the needed frames and creating a new composite frame happens 1,800 times. The 1,800 frames are then saved as a single movie file which could contain substantial text (not visible) meta data about each camera or each frame. The same process occurs with audio, photos, text, time overlays and the like.
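The flattening pass above can be sketched as a simple loop: one output frame per time step, composited from whichever cameras are active at that step. The feeds and the string-join "compositing" below are stand-ins for real frame data and real image compositing.

```python
def flatten(active_cameras_at, feeds, total_frames):
    """Build the flattened sequence: for each frame index, read the frame
    from every camera active at that index and merge them into one
    composite frame (Picture-in-Picture when more than one is active)."""
    flattened = []
    for i in range(total_frames):
        layers = [feeds[cam][i] for cam in active_cameras_at(i)]
        flattened.append("+".join(layers))  # stand-in for real compositing
    return flattened

feeds = {"cam1": ["c1f0", "c1f1", "c1f2", "c1f3"],
         "cam2": ["c2f0", "c2f1", "c2f2", "c2f3"]}
# cam1 alone for the first two frames, then Picture-in-Picture with cam2.
active = lambda i: ["cam1"] if i < 2 else ["cam1", "cam2"]
print(flatten(active, feeds, 4))
# ['c1f0', 'c1f1', 'c1f2+c2f2', 'c1f3+c2f3']

# The paragraph's arithmetic: a one-minute movie at 30 fps.
print(30 * 60)  # -> 1800 composite frames
```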
[00112] In certain embodiments, the low resolution video clip stored in the memory device of the first electronic device can be replaced with a completely unrelated high resolution clip that was shot with a different, completely disconnected camera that was shooting at the same time. This can be accomplished by manual file replacement.
[00113] The first display device 302 of the first electronic device 300 also has a switch to player icon 330, a download icon 331 and a number of recorded players icon 332. Upon selecting the switch to player icon 330, the first electronic device 300 is taken out of stage mode and entered into player mode such that the first electronic device 300 can no longer view and record camera views from remote electronic devices. As discussed above, in certain embodiments the application automatically launches in stage mode, so selecting the switch to player icon 330 may be the only mechanism by which the first electronic device 300 can be put into player mode in some embodiments. The download icon 331 enables the user to download desired video clips. The number of recorded players icon 332 provides the user with an indication of the number of camera views from various electronic devices, including the first electronic device 300 and any remote electronic devices, that is currently being recorded.
[00114] In the exemplified embodiment, the first display device 302 of the first electronic device 300 comprises a second window 315. In certain embodiments, the second window 315 is displayed on the display device 302 of the first electronic device 300 simultaneously with the first window 308. However, the second window 315 may only be displayed at certain times, such as after completion of a recording session.
[00115] After the first electronic device 300 receives the final media composition and stores it in its memory device, a graphical representation 316 of the final media composition is displayed in the second window 315. As discussed above, the final media composition is a complete video that comprises various media segments obtained from several different electronic devices. If recorded correctly or in a desired manner the first time around, no further editing will be required. However, if the final media composition is not exactly as the user desires based on timing of the different media clip segments in the final media composition, the user can edit the final media composition by user input via the first user input means, such as by tapping or clicking on the graphical representation 316 in the second window 315. The editing feature will be described in more detail below with reference to Figures 7 and 8.
[00116] If the user desires to add additional media to the second window 315, the user can click the addition button 335. Clicking the addition button 335 will enable the user to start a new recording session or add media that is already downloaded onto the first electronic device 300 into the second window 315. Pushing or clicking the addition button 335 may bring up a menu screen to enable the user to select what he desires to do. In certain embodiments, prior to starting a new recording session or adding other media to the second window 315, the user may desire to add a transition so that when all of the media compositions and other media in the second window are played in succession, there are transitions (fade out, fade in, white screen, black screen, etc.) in between each separate media composition or other media file. Transitions are identified by transition graphics 320.
[00117] In the exemplified embodiment, the second window 315 has graphical representations 317, 318, 323 indicative of several media compositions and other media files as well as transition graphics 320 that are saved to the memory of the first electronic device 300. Each of the media compositions or other media files is represented by a different graphical representation 317, 318, 323 that is indicative of that particular media composition or file.
[00118] As discussed above, in certain embodiments it may be desirable for a user to edit a final media composition after it has been received and stored in the memory of the first electronic device 300. The user may view the final media composition and determine that the camera switching was not on the timing sequence as desired, or that the audio in the composition is not loud enough or is otherwise deficient. The user can edit the final media composition by user input via the first user input means (i.e., clicking/tapping, double clicking/tapping, sliding a finger along the graphical representation, or the like).
[00119] Upon double clicking the graphical representation 316 or otherwise indicating that the user desires to edit the final media composition, the inventive application will bring up an edit switching page, which is illustrated in Figure 7. In the edit switching page, the user can modify the final video composition. Specifically, the edit switching page has a back button 401, a play all button 402, and a save button 403. Furthermore, the edit switching page has an expanded high resolution media clip window 405. Within the expanded high resolution media clip window 405 are graphical representations of each of the expanded high resolution media clips that are included in the particular final media composition being edited. For example, if during the initial recording stage discussed above with reference to Figure 6 the user activated three different thumbnails (correlating to three different remote electronic devices) at different times during the recording session, the final video composition would include high resolution media clip segments from each of the three remote electronic devices. However, in order to enable the user to edit the final media composition, the expanded high resolution media clip window 405 will include the entire expanded high resolution media clip that was recorded and saved in the memory of the first electronic device 300 from each of the three remote electronic devices. Thus, in the exemplified embodiment the expanded high resolution media clip window 405 includes a first graphical representation 411 of a first expanded high resolution media clip that was recorded from a first remote electronic device, a second graphical representation 412 of a second expanded high resolution media clip that was recorded from a second remote electronic device, and a third graphical representation 413 of a third expanded high resolution media clip that was recorded from a third remote electronic device.
Each of the expanded high resolution media clips is limited in length by the time the user hit the record button 320 and the done button 330 (as long as each of the thumbnails corresponding to the first, second and third remote electronic devices was selected when the user hit the record button 320 and the end button 330).
[00120] The user can edit the final video composition by tapping or clicking the play all button 402. This will begin to play each of the first, second and third expanded high resolution media clips simultaneously. The user will then also activate one or more of the first, second and third expanded high resolution media clips (by tapping or clicking on one of the first, second or third graphical representations 411-413) so that only the activated expanded high resolution media clips are included in the edited video composition 415 at a particular time during the video. Thus, the editing feature is similar to the initial recording feature. Each of the first, second and third graphical representations also includes a visual indicator (411a, 412a, 413a). In the exemplified embodiment, each of the visual indicators 411a, 412a, 413a is a different colored box. Of course, other visual indicators can be used. Furthermore, the edited video composition 415 includes a timing bar 416 that indicates the length in time of the edited video composition 415. The timing bar 416 is colored along its length to indicate which of the first, second and third expanded high resolution media clips is in the edited video composition 415 at a particular time. Upon completing creation of the edited video composition 415, the user can click/tap the save button to save the edited video composition 415 in the memory device of the first electronic device 300.
[00121] Referring to Figure 7B, in addition to editing the video aspect of a final media composition, a user can also edit the audio. Specifically, certain media compositions include both video and audio. Thus, using the screen in Figure 7B, the user can select the audio from a particular electronic device to be incorporated into the final/edited media composition along with video from a different electronic device. In this screen, all of the expanded high resolution media clips are again provided. The user can click the play all button 502 to start playing all of the expanded high resolution media clips simultaneously. The user can then select (by checking the circle as exemplified, or clicking on the thumbnail, etc.) which of the expanded high resolution media clips should be playing audio at a particular moment in time in the final or edited media composition. In certain embodiments, more than one of the expanded high resolution media clips can play audio in the final/edited media composition. Specifically, a first one of the expanded high resolution media clips can play the left audio, a second one of the expanded high resolution media clips can play the right audio, and a third one of the expanded high resolution media clips can be playing the video. Thus, any of the various expanded high resolution media clips and its various components (i.e., audio and media) can be used to create different portions of the final or edited video composition.
[00122] In certain embodiments, it may be found that a user logs into the application using his or her first electronic device 300 as a stage, and that there are no remote electronic devices logged in as players that meet the user's criteria at a particular time. Thus, in certain aspects the invention may include a library feature that includes a library database of pre-recorded videos or audio files that can emulate remote cameras on the server. In certain embodiments, each player (i.e., each electronic device) can have a library database of pre-recorded videos or audio files stored thereon. Each of the pre-recorded videos and audio files will have descriptive meta data about their original capture (i.e., the original recording of that particular file) that match specified or current live criteria so that the stage user (i.e., the user of the first electronic device 300) is potentially not even aware that the pre-recorded videos are not live. For example, the following criteria may be used to determine if a pre-recorded video or audio file located in the library, which may be on the server, could match a stage user's specified or live criteria: current weather, current time, relative time in terms of sun position, compass direction, temperature, altitude, audio waveform, light meter reading, white balance temperature, contrast of shadows to highlights, humidity, air quality, nearby water temperature and quality, wind speed and direction, season, types of plant and other life nearby or detected, moon phase, general mood detected, gas price, economic and health indicators, per capita income, object recognition such as cars or bicycles or building types detected, population density, traffic volume, nearby transportation methods, facial cues, color similarities, and fashion styles. The inventive application can auto-extract data about the user's current environment and compare it with the meta data from the pre-recorded files in the library.
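One way the criteria comparison above could work is sketched below. The criteria names, the metadata layout, and the simple count-the-matches scoring rule are all illustrative assumptions; a real implementation would weight and fuzz-match criteria such as temperature or time of day.

```python
def match_score(environment, capture_meta):
    """Count how many criteria of the stage's auto-extracted environment
    the pre-recorded file's capture metadata agrees with."""
    return sum(1 for k, v in environment.items() if capture_meta.get(k) == v)

def rank_library(environment, library):
    """Rank library files so the closest environmental matches come first."""
    return sorted(library,
                  key=lambda f: match_score(environment, f["meta"]),
                  reverse=True)

stage_env = {"weather": "snow", "season": "winter", "compass": "north"}
library = [
    {"file": "times_square_snow.mp4",
     "meta": {"weather": "snow", "season": "winter", "compass": "north"}},
    {"file": "beach_sunset.mp4",
     "meta": {"weather": "clear", "season": "summer", "compass": "west"}},
]
print(rank_library(stage_env, library)[0]["file"])
# times_square_snow.mp4
```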
[00123] In certain embodiments, the library database may include media files that were pre-recorded by electronic devices. However, in other embodiments the library database may include computer generated media files that are generated to emulate a live player or remote electronic device. The media files in the library database can be stored in the server and include all criteria needed to match actual live footage. For example, a computer could generate a pre-recorded media (video) file that is a three-dimensional rendering of Times Square in New York City. If a user enters stage mode and searches for players, the computer generated pre-recorded media file of Times Square can populate on the user's electronic device. The server or computer may be able to generate a media file on the fly in order to match a particular user's live current conditions. Thus, if the user is in Times Square and it is snowing, the computer may be able to generate a three-dimensional rendering of Times Square with snow falling. As a result, the user in the stage mode will not be able to decipher that the computer generated file is not live footage from a live player/electronic device nearby. Thus, the computer/server can create a media file to match a user's current environment to mimic a live camera (or other electronic device).
[00124] Alternatively, using pre-recorded media files that were actually pre-recorded from another live electronic device, these pre-recorded media files can also mimic a user's current conditions. Specifically, User 1 may record Times Square on Day 1 and that recording will be stored in the library database. User 2 may enter as a stage in Times Square on Day 2 (i.e., the following day). Even if there are no live players on Day 2, the pre-recording from User 1 from the previous day can be used to populate the user's player list, and the user may not realize that the pre-recording is not live. Thus, the players or remote electronic devices described herein throughout can be live electronic devices that are actually currently active, pre-recorded media that were actually recorded by live electronic devices at a previous time, or computer generated media that are either previously created and stored, or created while the user is active as a stage in order to mimic the user's environment.
[00125] Thus, in certain embodiments the space and time continuum are irrelevant so that live or virtual real-time multicamera sessions can be created by broadcasting files that are archived in the library and stored on the server. The archived files are used to emulate live players. As one example, on July 10, 2013 in Central Park, there may be only two active players recording between 10:00am and 10:15am. In 2014 on the same date and time there may be seven active players recording. However, in 2014 the two active players from 2013 will be saved and archived in the library as pre-recorded video files, and will be presented to a stage in 2014 along with the seven active players. Thus, the stage users in 2014 at Central Park on July 10 between 10:00am and 10:15am will think that there are nine users. By the year 2020, there will potentially be hundreds of matching video/audio/still photo files from over the years that a user could switch to in creating a video composition.
Furthermore, the invention is not limited to only utilizing pre-recorded video/audio files that are from the same time and place. Rather, pre-recorded video/audio files that were recorded in other parks around the world at different dates and times that match the environmental conditions, weather conditions and other qualification criteria are likely to be undetectably realistic live matches to the stage's actual or defined shooting scenario.
[00126] Thus, the invention may automatically determine which pre-recorded audio and video files match the user's current conditions. For example, the user may be standing in a blizzard, and the present invention can locate pre-recorded video files that were taken during similar blizzard conditions. In certain instances, the pre-recorded video file can be of the same location during a blizzard that occurred years earlier, but the user of the stage or first electronic device 300 may be unaware that the video file he is viewing is pre-recorded and not live.
[00127] If the stage user selects one of the pre-recorded video clips as a player, a low resolution video stream of the pre-recorded video clip will be presented in the thumbnails as discussed above. Then, if the user activates recording of the pre-recorded video clip, the low resolution video clip will be recorded onto the memory of the stage. Then, for each low resolution video clip recorded on the memory of the stage, a corresponding high resolution clip will be transmitted from the library database to the stage to automatically replace the low resolution video clip. Thus, the pre-recorded video clips operate exactly as the live players, and the communication between the stage and the library database (whether it be on the server or elsewhere) is exactly the same as the communication between the stage and the live players.
[00128] In certain embodiments, the stage's location and other data can be emulated or falsified in order to obtain live players that match certain criteria, even if the criteria are different than the stage's current situation. For example, a user could launch the inventive application in San Diego and type into a search query for qualification criteria: "I am standing on a street corner in Paris at 7pm in December and it is snowing." If it is currently 7pm in December in Paris and snowing, live cameras from Paris would show for selection. If no live cameras/players match the criteria of the search query, emulated live players (i.e., pre-recorded video files) matching the criteria would present for user selection. Also, the user could be filming themselves in front of a Green Screen in their home and could launch the inventive application as a stage and type (or say) the search query: "I am standing in Times Square at sunset on a warm summer day facing north with the wind to my back." If the current conditions in New York match that search query, live players in New York would be presented on the stage device for selection. If there are no matches, emulated live players (i.e., pre-recorded video files) would be presented for selection.
[00129] The idea in both live stage mode and emulated stage mode is that users do not have to search for players or stock footage by keyword. Rather, the inventive application automatically detects the environment of the stage, or the user defines an environment, and the inventive application presents all video, photos, audio, etc. that match so the user can "live-switch" to that source while creating a video composition in real-time.
[00130] Referring to Figure 9, a screen shot is shown illustrating a searching tool for locating video clips based on qualification criteria. As discussed above, in certain embodiments a user may request that a list of players be provided that match a specific criteria. The user may submit the request either by typing into a query box as illustrated in Figure 9, using voice detection software built into the device, or any other technique. In the exemplified embodiment, the user has typed in a search for "Beach" and representative images of video files that were taken at the beach are displayed for user selection. The video files displayed may be only live players, only pre-recorded video files, or a combination of live players and pre-recorded video files. In certain embodiments, the inventive system is programmed to first look for and provide the user of the stage with a list of live players that are nearby or worldwide that match some environmental or situational criteria. However, if there are not enough live players, or if the user wants more options, the pre-recorded video files can be provided to the user. The user can select the pre-recorded video files and stream the low resolution video clips from them in the same manner as with the live players discussed above. Furthermore, high resolution video clips are transmitted corresponding to the low resolution video clips in the same manner for both the live players and the pre-recorded video files.
[00131] In certain embodiments, the user can simultaneously stream video from both live players (i.e., actual electronic devices that are concurrently operating the inventive application) and pre-recorded library files. In this manner, video compositions can be created that are a combination of live video feed and pre-recorded video feed.
[00132] Many permutations and variations can be accomplished in view of the above, some of which are described below. In certain embodiments, the present invention can be used to trigger recording on a second device (or a third device thru a proxy player as discussed above) via Bluetooth or infrared or sound or light or other mechanism, or even to manually trigger recording on the camera. In such instances, the stage would only show a temporary blank proxy that can be used for switching, and then the user would replace that blank proxy with the disk file or would transcode from tape or other means and replace that proxy file either by time stamp manual sync, audio waveform sync, or clapboard sync. This technique can be used if: (1) an older camera is being used; (2) a camera that does not have WiFi or HDMI out is being used; or (3) when the user cannot afford an additional mobile or streaming device. This allows any media recording device (photos, videos or sound, etc.) to participate in a multi camera session, and the blank proxy on stage can be manually labeled with an indicator such as "1985 Sony Betacam" or "8-track recording console."
[00133] In certain embodiments, the invention enables the faithful reproduction of left/right stereo sound, multi-channel surround sound or follow sound by automatically detecting via GPS and compass, or manually via user input, the spatial arrangement of all cameras and audio recording devices in the session. In other words, if there is a concert and the user switches to a camera close to the stage, it may be advantageous via GPS to identify active players in the session or in the vicinity on the left side of the stage and right side of the stage at the time of recording. Rather than using stereo audio from the center stage camera mic, the present invention would use the left stage camera audio for the left stereo channel and the right stage camera audio for the right stereo channel in the final movie. In cases of 7-channel surround sound, the inventive application can pull audio from 7 different devices to create the stereo audio track (via auto or manual mixing) or the true multi-channel Dolby Digital or THX surround audio.
[00134] Although the invention has been described whereby the full high resolution video clip (or audio clip or the like) is transferred from each device at a specified time, this is not required in all embodiments. In some embodiments only the portion of a specific clip that is actually used in a live edit switching session is transferred. This will speed up file transfers and increase efficiency of the same. In such embodiments, the full length files can be added back into the package at a later time to expand editability.
EXAMPLES
[00135] The following paragraphs further describe the various embodiments of the invention with detailed explanations or examples in everyday language to assist the reader.
1. AUTOMATIC OR MANUAL CLIP REPLACEMENT
[00136] The replacement of low-resolution or proxy placeholder clips from remote media recording devices with their related high-resolution files can happen either automatically at the end of recording when the files transfer, or can happen manually at some other time. In other words, if a user acting as the stage is recording from the camera or microphone on his or her own device while at the same time remotely recording from a 2nd device acting as a player, a low-res version of the 2nd camera feed is being transferred while recording. When the stage user stops recording, the high-resolution file from the 2nd device can be automatically transferred to the stage, in which case the low-res file would be automatically replaced, or both users could decide to defer the transfer, or it may not be possible to transfer at that time, and so the stage user could manually replace the low-res file with the high-res file at a later time or through other means.
[00137] Either the stage or the player can defer transfer of high-res until a later time, for example if the high-res transfer would take too long or perhaps either device needs to record something else immediately. In cases where only a proxy placeholder is used on the stage because the low-res live feed is either technologically unavailable or because of network bandwidth constraints, once the high-res file has been copied onto the stage user's device, the stage user would select the proxy placeholder and manually import the high-res file as a replacement.
[00138] It is also possible to replace the low-res or proxy placeholder clip with a completely unrelated high-res clip that was shot at the same time with a different (completely disconnected) camera by the manual file replacement process outlined above. One example of this might be that you have an old video camcorder that only records to tape and does not have any sort of preview output or Wi-Fi connectivity. In this case, the stage user can either define a proxy placeholder thumbnail on the stage or an iPhone could be mounted to the front of the camcorder acting as a temporary player preview. At the end of recording, the high-res from the iPhone would be transferred to the stage, but then once the tape from the old camcorder is digitized into an electronic file, the stage user could replace that proxy placeholder or temporary player preview with the desired camcorder footage. All switch edits done to the proxy or temporary preview would be applied to this new camcorder file automatically so that no manual editing would be required in the creation of the final multi-camera composition. In fact, any file in the composition can be replaced with any other file even if not related by time or place. It would take on the original file's edit timing.
2. PLAYER ACTING AS A BRIDGE FOR A 3rd DEVICE CAMERA
[00139] In the case when the player is acting as a bridge for an external camera device over Wi-Fi, there are actually three resolutions: the low-resolution proxy, the high-resolution final recording and the original super-high resolution file recorded locally on the camera.
[00140] In other words, imagine a RED ONE 4K cinema camera connected to an HTTP Streaming device; that device is connected on its input side to the RED ONE 4K via its HDMI preview output and is connected on its output side to the player device via Wi-Fi. When the stage triggers record, it sends a message to the player and instead of (or in addition to) recording its own camera input, the player sends a message to the streaming device to begin recording the live preview feed from the RED 4K camera.
[00141] The streaming device also sends a message to the RED 4K camera to begin recording on its own hard drive in 4X HD resolution or higher. While the stage is recording, the streaming device is sending a low-res stream or proxy thumbnails to the player, and the player acting as a bridge is sending the same to the stage.
[00142] When the stage stops recording, it sends a message to the player to stop, and the player sends a message to the Wi-Fi streaming device to stop recording and send the high-res HD or SD file to the player, and the player sends it to the stage. However, at a later time, the HD high-res can be replaced with the 4X HD that was recorded locally on the RED ONE hard disk (via time data) to create cinema quality final movies.
[00143] In another example, a stage user wants to record from three iPhone cameras, two iPod Touch devices with microphone input only, and a GoPro Hero 3 camera. The problem with the GoPro Hero 3 camera is that although it has internal Wi-Fi, it only creates an ad-hoc network. An ad-hoc network means it creates its own little Wi-Fi network that one electronic device can connect to, but it is not capable of connecting to a broad WiFi network. This works fine if there is only one stage device that wants to record its camera feed and the GoPro camera feed over Wi-Fi at the same time, but if other devices need to be included in the session, this does not work. So what happens in this case is the GoPro Hero 3 is connected to what the present invention calls an offline-player over the GoPro's ad-hoc network. Then the stage user taps the + button and adds a Proxy Player to the session, which might show up as a solid colored rectangle with a label they define like "GoPro Hero 3". During the recording session, the stage user might switch back and forth between the three iPhones and sometimes choose the GoPro proxy to be the active camera. The stage might choose one or more of the microphones to be the audio the entire time. At the end of recording, the Player would receive the medium-res file from the GoPro, and the Player would change its Wi-Fi network from the GoPro's ad-hoc network to the Wi-Fi network that the stage is connected to. Then the Player would automatically or manually be detected by the stage and would transfer the medium-resolution file from the GoPro to the stage. However, the user could also extract the media card from the GoPro and connect it to the stage device and import the full high-resolution clip recorded locally on the GoPro during the session and replace the proxy or medium resolution clip with that final file. Again, no editing would be required.
Sync between the proxy and high-res could happen automatically via audio waveform matching or manually by the user.
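The audio waveform matching mentioned above amounts to finding the offset at which the proxy clip's audio best lines up with the locally recorded take. A minimal sketch, using a toy brute-force correlation over small integer "waveforms" (real implementations would use normalized cross-correlation on actual audio samples):

```python
def best_offset(proxy, full):
    """Slide the short proxy waveform along the full recording and
    return the offset with the highest raw correlation."""
    def corr(a, b):
        return sum(x * y for x, y in zip(a, b))
    offsets = range(len(full) - len(proxy) + 1)
    return max(offsets, key=lambda o: corr(proxy, full[o:o + len(proxy)]))

full_take = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]  # camcorder audio samples
proxy_wave = [1, 5, 9]                      # proxy clip's audio samples
print(best_offset(proxy_wave, full_take))   # 2
```

Once the offset is known, the switch-edit timing recorded against the proxy can be shifted by that offset and applied to the high-res take with no manual editing.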
[00144] It is also possible to trigger recording on a second device (or a third device thru a proxy player) via Bluetooth or infrared or sound or light or other mechanism, or even to manually trigger recording on the camera, and the stage would only show a temporary blank proxy that can be used for switching; then the user would replace that blank proxy with the disk file or would transcode from tape or other means and replace that proxy file either by time stamp manual sync, audio waveform sync, or clapboard sync.
[00145] This may be used when utilizing an older camera or a camera that does not have Wi-Fi or HDMI out, or when the user cannot afford an additional mobile device or streaming device. This method of manual transfer is often referred to as sneakernet. This allows ANY media recording device (photos, videos or sound, etc.) to participate in a multi camera session and to be eligible for live edit decisions. The blank proxy on stage can be manually labeled like "1985 Sony Betacam" or "8-track recording console."
3. MULTI-DEVICE STEREO SOUND REPRODUCTION
[00146] The invention enables the faithful reproduction of left/right stereo sound, multi-channel surround sound or follow-sound by automatically detecting via GPS and compass, or manually via user input, the spatial arrangement of all cameras and audio recording devices in the session. In other words, if there is a concert and the user switches to a media input device close to the stage, it may be advantageous via GPS to identify active players in the session or in the vicinity on the left side of the stage and right side of the stage at the time of recording. Rather than using stereo audio from the center stage microphone, the stage could automatically use the device located stage left for the left stereo channel and the device located stage right for the right stereo channel in the final movie. The stage user could also manually define this mix of audio input devices before recording or while recording, and could switch audio sources separately from switching visual sources. In cases of 7-channel surround sound, the stage could pull audio from 7 different devices to create the stereo audio track (via auto or manual mixing) or the true multi-channel Dolby Digital or THX surround audio.
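The spatial channel assignment described above can be sketched as follows. Representing each device's position as a single lateral offset relative to the stage (negative meaning stage left, positive meaning stage right) is an illustrative simplification of the GPS/compass data the paragraph describes.

```python
def assign_stereo(devices):
    """devices: {device_name: lateral_offset_in_meters}.
    Route the device furthest stage left to the left stereo channel
    and the device furthest stage right to the right channel."""
    left = min(devices, key=devices.get)   # most negative offset
    right = max(devices, key=devices.get)  # most positive offset
    return left, right

session = {"center_cam": 0.0, "left_mic": -12.5, "right_mic": 11.0}
print(assign_stereo(session))
# ('left_mic', 'right_mic')
```

The same positional data could drive a 7-channel assignment by binning devices into the standard surround positions instead of just two extremes.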
4. VARIOUS CRITERIA CAN BE USED TO AUTOMATICALLY OR MANUALLY PRESENT AVAILABLE REMOTE MEDIA DEVICES TO STAGE
[00147] For example, someone might be making a movie on the beach at sunset and scans for live players nearby but none are found, so the invention can automatically match to a live player up the coast at the next beach, or perhaps 1,000 miles away in similar tropical weather and sun conditions. Although the stage user can elect to manually scan for available remote media capture devices, or can elect to search by keyword or other criteria and choose from a list, one goal is to eliminate as many steps as possible. When the stage user activates their session, the preferred method is to automatically connect to relevant remote devices and present them with pre-selected thumbnails for each of those devices so that they can just tap record and begin live switch editing. They can deselect auto-connected and selected devices, search for other devices and so on, but the present invention aims to achieve the fewest steps possible.
[00148] One objective is to maximize the number of possible usable remote media capture devices that can be incorporated into the user's movie without them having to manually search for live players at the beach at sunset or search the global roll for those criteria. The user should just be able to launch the stage and be presented with a reasonably manageable number of usable matches in descending order of match relevance.
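One way to realize this "descending order of match relevance" is a weighted score over whatever context signals are available for each candidate device. The sketch below is hypothetical; the field names and weights are illustrative choices, not defined by the specification.

```python
def score_match(stage_ctx, candidate):
    """Score a candidate remote device against the stage's shooting
    context. Higher means more relevant; weights are illustrative."""
    score = 0.0
    # Proximity: full credit nearby, decaying to zero at ~1,000 km.
    km = candidate.get("distance_km", float("inf"))
    score += max(0.0, 1.0 - km / 1000.0) * 3.0
    # Each matching environmental attribute adds a point.
    for key in ("weather", "time_of_day", "scene_type"):
        if stage_ctx.get(key) and stage_ctx[key] == candidate.get(key):
            score += 1.0
    return score

def rank_candidates(stage_ctx, candidates, limit=9):
    """Return candidates in descending order of match relevance."""
    return sorted(candidates, key=lambda c: score_match(stage_ctx, c),
                  reverse=True)[:limit]
```

The "player 1,000 miles away in similar conditions" case falls out naturally: it loses the proximity points but can still rank highly on environmental matches.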
5. AUTOMATIC SCAN, CONNECT, SELECT AND RECORD
[00149] The most extreme simplification of the process is where the stage is configured to auto-record on launch or on power-up. In other words, the stage user only has to launch or activate their device; available remote devices are automatically identified, connected and selected, and recording starts automatically on all devices. Then the user taps stop record and all high-res files are transferred and combined into the final edited composition. In this case, the user does not have to scan, does not have to select players, and does not even have to tap record. Of course the present invention gives them controls in settings to take back control of each of these areas, but many will opt for the most automatic operation possible.
6. THE DIFFERENCE BETWEEN SCANNING, SELECTING AND SWITCHING
[00150] Scanning is the process where a stage user scans the network or the community for currently available remote media recording devices. The stage user might also scan the entire registered community for on-call devices or all devices. In the case of on-call or all community devices, the stage can send a direct request, or a request through the community server that will send a push notification, email, text message or other notification to all users in hopes that some will choose to become active and available quickly. Once they become active, they would automatically appear in the stage's remote player list or thumbnail view, or the stage could receive a notification and decide to add them manually. Just because a remote device shows up in a list after scanning does not mean it is selected for the session.
[00151] Selecting is when the stage user decides to include that remote media capture device in the currently recording session. This process of selecting happens automatically by default to save the user steps, but the user can change that in settings so that after they see the list or thumbnail views of all available remote devices, they would have to manually select each one to make it part of the session.
[00152] Switching can happen before or during recording or edit review. In other words, after scanning, 5 devices might automatically be selected on stage for inclusion in the session, or the stage user might manually select 5 devices. Once all devices have been selected for inclusion, and before recording, the system automatically makes at least one of those devices ACTIVE for the purpose of the composition. The thumbnail view or list could show 9 available devices with only 5 of those being selected, indicated by a red border around them. A cross-x icon might indicate the ACTIVE device(s) among the selected devices. An unselected device cannot be ACTIVE. In some use cases, more than one device might be active for video, with separate devices active for audio. The user can make active more than one audio, photo or video source at one time, for example for Picture-in-Picture, split screen or stereo audio.
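The scanned / selected / ACTIVE distinction amounts to a small per-device state machine. The class below is a hypothetical sketch (the names are not from the specification); it enforces the stated rule that an unselected device cannot be active.

```python
class SessionDevice:
    """Tracks one remote device through scan -> select -> active."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.selected = False   # included in the recording session
        self.active = False     # currently feeding the composition

class Session:
    def __init__(self):
        self.devices = {}

    def scan_result(self, device_id):
        """A device found by scanning appears in the list, unselected."""
        self.devices.setdefault(device_id, SessionDevice(device_id))

    def select(self, device_id, selected=True):
        dev = self.devices[device_id]
        dev.selected = selected
        if not selected:
            dev.active = False  # an unselected device cannot be active

    def switch_to(self, device_id, exclusive=True):
        """Make a selected device ACTIVE; exclusive=False allows
        multiple active sources (e.g. Picture-in-Picture)."""
        dev = self.devices[device_id]
        if not dev.selected:
            raise ValueError("an unselected device cannot be active")
        if exclusive:
            for d in self.devices.values():
                d.active = False
        dev.active = True
```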
7. EMULATED REMOTE MEDIA RECORDING DEVICES
[00153] One important aspect of the invention is to emulate remote cameras on the server or on player devices using pre-recorded media files having descriptive meta data about their original capture that match specified or current live criteria, so that the stage user is potentially not even aware that those players (cameras) are not live. For example, in number 8 below, titled Comprehensive Meta Data for Player Matching and Search, the present invention defines some of the criteria that may be used to determine whether a pre-recorded file located on the server could match the stage user's specified or live criteria.
[00154] For example, they can restrict to local Wi-Fi only or to a certain GPS area radius. They can also search by keyword such as "beach, sunset" or by scenario using a natural language query such as "I am standing on a street corner in Paris at 4pm on a rainy day in November". This would be converted to a database query used to match live players, emulated live players or archived media. The user could select those to add to a multi-camera session or simply select to load the archived clip.
[00155] One problem with stock media is that, let's say you find three 10-minute clips of a rainy afternoon street scene in Paris near the same corner but from different angles. Currently you would have to download all three entire clips, which could take 20 minutes each, and then you have to insert those clips into your editing timeline and try to trim each down to the 8 seconds you want to use of each. If you want to switch back and forth between them, it becomes much more complex because you have to duplicate the clips and position and trim each duplicate to try to simulate live switching back and forth. This would take a LOT of time.
[00156] By contrast, in the present invention, if you are standing on a street corner in Paris and initiate a stage session and there are three live players on a street corner in the rain in Paris right now near you, you can select all three players, hit the start record button, switch back and forth between them for one minute and hit stop; the present invention only transfers the one minute from each player when done recording. No editing, no post work, just real-time editing and the movie is done.
[00157] Now, let's say there are no live players available and you still want a multi-camera effect for your movie. Well, suppose over the last two years 30 people have stood near that same corner in Paris while it was raining and contributed those clips to the live wall or global roll. The present invention would automatically present those 30 to you as live players (you wouldn't know the difference), but our server would be emulating a live person recording live. Because most scenes in the same weather at the same time of day are indistinguishable, you wouldn't be able to tell the difference between a live or emulated live player. Now, why is this important? Because in the current process, as explained above, you would have to download each full-length archived clip and perform a lot of editing to simulate a multi-camera session. By contrast, in our invention, you just select 3 of the "playing" emulated live camera angles and start recording. You then switch back and forth between each emulated live player (10-15 minute archived clips playing on the server) and when you hit stop one minute later, the emulated live server players transfer the high res of that 1-minute section only, no editing, no trimming. All happens live and in real time.
[00158] The present invention essentially turns every piece of media ever recorded into a million live broadcasting rerun television stations. When you go live as a stage, the present invention first looks for live players nearby or worldwide that match some environmental or situational criteria (either actual or specified by the stage), but if not enough truly live players are found or the user wants more options, the present invention shows them the live stations (emulated players playing reruns of live sessions) that match the criteria, environment or situation so they can choose those and live switch between them while creating their movie. This eliminates the need to search through stock media, play each result all the way through, download each full-length media file, import each file into a timeline, duplicate each for each time you switch to it and trim each duplicate to simulate a live multi-camera switching session.
8. COMPREHENSIVE META DATA FOR PLAYER MATCHING AND SEARCH
[00159] Today, most stock photography is only indexed by keyword. This is extremely limiting. An important aspect of our invention that makes it work so well is the method in which the present invention captures and infers additional information about the device or the recorded file. For example: current weather, current time, relative time in terms of sun position, compass direction, temperature, altitude, audio waveform, light meter reading, white balance temperature, contrast of shadows to highlights, humidity, air quality, nearby water temperature and quality, wind speed and direction, season, types of plant and other life nearby or detected, moon phase, general mood detected, gas price, economic and health indicators, per capita income, object recognition such as cars or bicycles or building types detected, population density, traffic volume, nearby transportation methods, facial cues, color similarities, and fashion styles. It is not required that all of this data be available for each and every remote device or recorded file, but when it is, it will make it that much easier to successfully and intelligently auto-select live and emulated-live players.
[00160] Case in point: currently users can search stock libraries by keyword but not typically by time of day or date shot, let alone searching for footage where the camera was pointed west at 5pm, the sun was at a 36-degree angle to the field of view, and there was light rain in an urban area with a population density of 400 per square block and a wind of 10 south by southwest. It is easy to see how tagging and searching by all of that info would make it much easier to find matching, usable footage, and this is an important aspect of the invention.
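Once clips carry rich meta data like the list above, the "camera pointed west at 5pm in light rain" query reduces to field-by-field matching, with ranges for numeric attributes. The sketch below is hypothetical; the field names and the range convention are illustrative, not specified by the patent.

```python
def matches_query(clip_meta, query):
    """Check an archived clip's meta data against a stage query.
    Numeric fields may be given as (min, max) ranges; other fields
    must match exactly. A field missing from the clip disqualifies it."""
    for field, wanted in query.items():
        value = clip_meta.get(field)
        if value is None:
            return False
        if isinstance(wanted, tuple):
            lo, hi = wanted
            if not (lo <= value <= hi):
                return False
        elif value != wanted:
            return False
    return True
```

A natural language scenario ("I am standing on a street corner in Paris at 4pm on a rainy day") would first be parsed into such a query dictionary, then run against live players and the archived clip index alike.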
9. EMULATED STAGE ENVIRONMENT
[00161] The stage user's location and other data can be emulated too. A user could launch the app in San Diego and type in "I am standing on a street corner in Paris at 7pm in December and it is snowing"; then, if it is currently 7pm in December in Paris and snowing, live cameras would show for selection; if not, emulated live cameras matching the criteria would show. Also, a user could be filming themselves in front of a Green Screen in their home and could launch the stage and type or say "I am standing in Times Square at sunset on a warm summer day facing north with the wind to my back", and if those are current conditions in New York, live players would show; if not, emulated live players would show.
[00162] The idea in both live stage mode and emulated stage mode is that users do NOT have to search for players or stock footage by keyword; the present invention detects your environment, or you define one, and the present invention takes care of the rest to show you all video, photos, audio, etc. that match, so you can "live-switch" to that source while you are creating your movie in real time. Lots of people will record just audio for this (think sound of a subway station at rush hour), and entire sessions and movies could be audio only, as when the user is creating a radio story.
10. MULTICLIP PACKAGE EXPLAINED
[00163] When the stage stops recording a one-minute multi-device session, it receives from each device the full one-minute media clip so that the switching decisions can be edited later. A multi-clip package is created containing everything, and that package can be shared with the players or others so they can create their own live switch mix.
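A multi-clip package is essentially the full-length clips plus the live switch timing. The structure below is a hypothetical sketch of what such a package might contain; none of the field names are defined by the specification.

```python
import json

def build_multiclip_package(session_id, clips, switch_decisions):
    """Bundle the full-length clips and the live switch timing so that
    anyone holding the package can re-edit the session."""
    return {
        "session_id": session_id,
        # One entry per contributing device: media file plus capture data.
        "clips": [
            {"device_id": c["device_id"], "file": c["file"],
             "start_utc": c["start_utc"], "duration_s": c["duration_s"]}
            for c in clips
        ],
        # Each decision: at t seconds into the session, device became active.
        "switch_decisions": sorted(switch_decisions, key=lambda d: d["t"]),
    }
```

Because the package keeps whole clips rather than trimmed segments, a recipient can replay it and make entirely different switching decisions.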
11. EDIT SWITCHING EXPLAINED AND FILE TRANSFER OPTIONS
[00164] The edit switching window recreates the original live recording by playing all players while you re-switch between them as you would have if you had been the stage while recording live. Essentially each file plays back as an emulated live player so that you can edit the live switching decisions using the same methodology. It is like a rerun of the original session that you can live-switch again.
[00165] However, although the present invention can transfer the full high-resolution file from each device, that is not required in the invention. In some cases it may be advantageous and expedient to only transfer the portion(s) actually used in the live edit switching session. Although this would restrict the stage's ability to re-switch or re-edit, it would make file transfers much faster and more efficient. At a later time, the full-length file could be added back into the package to expand the editability. This is especially important in the case of emulated live players and the jukebox example mentioned below, where the stage would only want to get the high-res portions used from each.
12. AT LEAST THREE MODES OF RECORDING
[00166] There are at least three modes of operation or recording: Sequential Mode, Switching Mode and Multi-Take Switching Mode. In sequential mode, when the stage taps each player in the preview, it is just so they can see each camera larger; at the end, the three 1-minute high-res recordings are transferred to the stage and placed in sequential order in the timeline. The final movie would be 3 minutes long, repeating the same event 3 times. In switching mode, when the stage taps a player clip in the preview, they are making a live switching edit decision (i.e. show this camera at this moment in time). At the end of recording, the present invention still sends all 3 one-minute high-res recordings to the stage; those are then combined in a multi-clip package with the live switching timing decisions, and an edited composited movie is created that is one minute long for a one-minute event but might switch between each of the three cameras 20 times. This package allows the stage user or anyone else (if the stage shares the package) to recreate the live moment and play back all cameras at the same time and live switch between them, or just manually edit to create a new final movie 1 minute in length. This package can also be converted into a sequential sequence or a combination of switching and sequential. There is also a third type that is important in filmmaking, called multi-take switching mode. In this mode, you may want to combine the concepts of sequential and switching when you are recreating a scene many times (multiple takes), as when a director is filming a 1-minute scene where two people are having a scripted argument and says "take two, take three, take four" and so on.
If the director uses 4 cameras on each 1-minute take and time-syncs each recreation of the event with a clapboard, then he would be able not only to live switch and post-edit switch between each camera in a single take but to switch between all 16 cameras across the four takes to create the perfect movie. This goes back to the core of our invention not being tied to the typical concept of time and place when talking about live recording and live switching.
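The "live switching timing decisions" referred to above amount to a list of tap timestamps; turning them into contiguous edit segments for the final one-minute movie could look like the following hypothetical sketch (the specification does not define a data format):

```python
def decisions_to_segments(switch_decisions, total_duration):
    """Convert tap timestamps into contiguous edit segments.
    Each decision is {"t": seconds_into_session, "device": id};
    the list should include a decision at t=0 for the opening shot."""
    decisions = sorted(switch_decisions, key=lambda d: d["t"])
    segments = []
    for i, dec in enumerate(decisions):
        end = decisions[i + 1]["t"] if i + 1 < len(decisions) else total_duration
        if end > dec["t"]:  # skip zero-length taps (double taps)
            segments.append({"device": dec["device"],
                             "start": dec["t"], "end": end})
    return segments
```

In multi-take switching mode the device identifier would simply name a (take, camera) pair, so the same structure covers switching across all 16 cameras of a four-take shoot.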
13. MULTI-CLIP PACKAGE VS FINAL FLAT COMPOSITION
[00167] In practice, what actually happens is that starting with frame 00:00:01, the software takes one frame at a time from each live camera or stored camera file (whichever camera(s) appear at that frame; remember, there could be more than one, as in the case of Picture-in-Picture), creates a new frame or a composite frame, and writes those frames in sequence to a new digital file that is sometimes referred to as a flattened file because it no longer has multiple layers that can be edited. So for a one-minute final flattened movie at 30 frames per second, this process of accessing the folder of related files, grabbing the needed frames and creating a new composite frame happens 1,800 times. The 1,800 frames are then saved as a single movie file, which could contain substantial textual (not visible) meta data about each camera or each frame. This also happens with audio, photos, text, time overlays, etc.
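The flattening loop described above can be sketched as follows. This is an illustrative Python sketch only: frames are stood in for by (device, frame_index) tuples, whereas a real implementation would decode source video and re-encode composite frames.

```python
def flatten(segments, fps=30):
    """Emulate writing a flattened movie: for every output frame, pick
    the frame from whichever clip is active at that instant.
    `segments` is a contiguous, sorted list of
    {"device": id, "start": s, "end": s} edit segments."""
    total = segments[-1]["end"]
    frames = []
    for n in range(int(total * fps)):
        t = n / fps
        active = next(s for s in segments if s["start"] <= t < s["end"])
        frames.append((active["device"], n))  # stand-in for a composite frame
    return frames
```

For a one-minute movie at 30 fps this loop runs 1,800 times, matching the figure in the paragraph above; Picture-in-Picture would select more than one segment per instant and composite them.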
14. STAGES CAN DOUBLE AS PLAYERS
[00168] A stage device can also double as a player device for another stage, and a player can also act as a stage. For example, a stage user may want to connect to 8 other remote media recording devices and perform live switches between those devices and his or her own local device. But another stage in the same area also may want to record the remote media feed of that first stage device in its own composition with other unrelated player devices. In other words, when the stage scans and selects player devices, it may or may not select its own device for the recording session, but it can also at the same time make its own device available to other stages as a player. In this case, the stage would be recording its own device locally and recording the low res from all of its connected players onto its device, while at the same time transmitting a low-res version of its own feed to one or more separate stages. The recording times could differ. The stage might start its own recording composition at 8:31 pm and finish at 8:33 pm, whereas the 2nd stage might start recording at 8:32 pm and finish at 8:35 pm. In this case, the stage is capable of automatically splitting up its own recording into the two necessary pieces, one for its own composition and one for the 2nd stage's composition. At the end of the 2nd stage's recording session, it would request the high-res file from the first stage, and the first stage would send that high-res file as if it were a player.
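Splitting one continuous recording into the pieces each stage needs is an interval-overlap computation. The helper below is a hypothetical sketch (names and the seconds-since-epoch convention are assumptions):

```python
def portion_for_stage(player_start, player_stop, stage_start, stage_stop):
    """Return the (offset, duration) within a player's continuous
    recording that covers a given stage's session, or None if the
    two intervals do not overlap. All times in seconds since epoch."""
    start = max(player_start, stage_start)
    stop = min(player_stop, stage_stop)
    if stop <= start:
        return None
    return (start - player_start, stop - start)
```

In the 8:31-8:35 example above, the first stage's continuous recording would be trimmed once for its own 8:31-8:33 composition and once for the 2nd stage's 8:32-8:35 session; the same helper serves the concert scenario where one player feeds many stages.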
15. PLAYERS CAN SERVE MULTIPLE STAGES
[00169] There will be times when more than one stage user desires to connect to and remotely record from a single player at the same time. For example, at a concert, there might be only one camera up on stage, and every stage in the audience would love to include that camera in their composition for the entire duration of the concert. In this case, all stages could connect to and select that same single player. The player device would keep track of these connections and would either send low-res feeds to each, broadcast one low-res feed that all could receive, or, if bandwidth is limited, be represented on each stage as a blank proxy thumbnail. The player would always record anytime any stage is actively recording, and the player would keep track of these various connections and various start and stop times, sending each stage the necessary piece of the full recording on its device that corresponds to that stage's recent session. Alternatively, these transfers could be postponed until after the concert is over, at which point each stage would request, and the player would send, the necessary pieces of the various recordings required for their compositions.
16. EVERYONE CAN BE A STAGE AND A PLAYER AT THE SAME TIME
[00170] The invention is not limited to a single stage connecting in a single direction to multiple players. Although that may be a common scenario, the invention and the systems involved also allow for multiple stages connected to the same players, multiple players acting as stages connected to each other in both directions as player and stage, and single players connected to more than one stage. For example, there are situations such as a wedding where every user present would like to be a stage but would also like to make their device available as a player as well. In this case, 8 friends could all connect to each other as both stages and players so that they could all perform their own live switch in real time, including only the devices they want to include and excluding others. In this case, each player will have to be constantly recording from the time the first stage sends a record command until the time the last stage sends a stop command. Then the stages would request, or the players would send, to each connected stage the portion of the longer recording that corresponds to that stage's start and stop time. Likewise, that player would be able to request from all other stage/players the portions of the files they need for their compositions.
17. USE CASE EXAMPLE - LIVE SWITCHING MUSIC AUDIO ON MULTIPLE PLAYER DEVICES TO MULTIPLE STAGE DEVICES TO CREATE A MULTI-DEVICE MUSIC JUKEBOX
[00171] Let's say that 20 people using our app are at a bar, and instead of using their devices to contribute to an edited composite visual presentation, they want to contribute to an audio presentation (again, the patent should make it clear that the invention makes no distinction between different types of media files: still photos, video, text, graphics, audio, music).
[00172] Each player could play a song or sequence of songs from their internal or cloud-based music library, and the stage, acting as the DJ or mix-tape maker, would receive medium resolution quality audio from each and could automatically or manually live switch between them (perhaps with a dissolve near the end of each song), and the stage's audio output could be connected to a sound system that would play the live mix for everyone in the bar. At the end of the session, each player would either transfer the entire high-resolution audio music file session, or just the songs used, to the stage for the creation of a full-quality, high-resolution mix tape edit that could be shared with others, or as a multi-clip package that could be edited.
18. USE CASE EXAMPLE - STEREO AUDIO RECORDING USING TWO OR MORE CONNECTED DEVICES
[00173] The stage device connects to one or more nearby player devices at a concert, taps record, and can hear low quality reference mix audio through headphones; it then taps stop, and the player transfers the high quality audio from the second device to be mixed with audio from the first device to automatically create a true stereo (two mic) or surround sound (7 mic) audio file.
19. USE CASE EXAMPLE - TIME SYNCED SLIDE SHOW
[00174] 10 users at a birthday party connect to each other as simultaneous stages and players for the purpose of creating a single still photo slideshow of the event. As each user snaps a photo, each of the 9 other users receives a low res proxy to be added to their timeline. At the end of the event, each user can scroll thru their timeline and delete any unwanted images and then tap receive, and all devices would send their high-res photos to each of the other devices or to an intermediate server for distribution to each device. The end result would be a finished, customized photo slideshow of the birthday party in time sequence order using the desired contributions of all 10 cameras.
20. USE CASE EXAMPLE - REMOTE MICROPHONE
[00175] The stage user connects to one or more devices for the purpose of recording a single camera video that contains video from one device and synced audio from other separate devices. For example, in a hurricane, a lone reporter may set up as a player a single mobile device camera on a car dashboard shooting video out of the window, while the reporter stands outside of the car in the rain and wind at a distance, but within the camera's view, holding a second mobile device to their mouth as a microphone. When they tap record on the stage (microphone) device, the video from the player device would be combined with the stage device's audio, creating a single-angle camera video with a single mono audio track on the stage that combines video from the player and audio from the stage.
21. USE CASE EXAMPLE - MULTI-CAMERA RECORDING
[00176] The stage device sends out a request for players at a surfing contest to an intermediate server or over Wi-Fi. All nearby players receive a push notification to join the live session. Those that join automatically appear as thumbnails on the stage. The stage selects one or more to record from, including but not necessarily its own camera, then taps record. Player devices send low res proxy thumbnails or streaming video to the stage, and the stage either manually or automatically switches between each device while recording to create an edited, switched composition. When the stage user taps stop, each player device sends its high resolution file to the stage immediately or at a later time. The stage then generates a multiclip package consisting of all of the received high resolution video, audio and photo files (and proxies for those not yet received) as well as the player details, environmental details and other meta data and the live switching decision edit details, and also creates a final edited movie. The final edited movie and the multiclip package are then automatically or manually sent to all participating players so that they can change the editing or change the switch timing or eliminate some clips in the multiclip package and create their own new edited movie to share.
22. USE CASE EXAMPLE - MULTI-CAMERA RECORDING USING BOTH LIVE AND EMULATED LIVE PLAYERS (META DATA ENCODED ARCHIVED STOCK MEDIA BEING PLAYED AND BROADCAST FROM A CENTRAL SERVER OR PLAYER DEVICES TO EMULATE LIVE PLAYERS)
[00177] One component of the invention is that the system ignores the space and time continuums, allowing us to emulate live, or even create otherwise impossible virtual real-time live multi-camera sessions, by broadcasting archived files as emulated live players. On July 15th 2013 in Central Park, there may be only 2 active players recording from 10am to 10:15am; in 2014 on the same date and time there may be 7, but the present invention will replay those first two from 2013 (if the weather matches) so that users think there are 9. By 2020, the present invention could have hundreds of matching cameras from over the years that a user could switch to in their movie. But the present invention does not have to be limited to the same time and place; the present invention could also show cameras that recorded at 11am on July 15th each year, or cameras that recorded in other parks around the world at different dates and times that match the environmental weather conditions and other criteria and are likely to be undetectably realistic live matches to the stage's actual or defined shooting scenario.
[00178] Let's say there are no live players available and you still want a multi-camera effect for your movie. Well, suppose over the last two years 30 people have stood near that same corner in Paris while it was raining and contributed those clips to the live wall or global roll. The present invention would automatically present those 30 to you as live players (you wouldn't know the difference), but our server would be emulating a live person recording live. Because most scenes in the same weather at the same time of day are indistinguishable, you wouldn't be able to tell the difference between a live or emulated live player. Now, why is this important? Because in the current process, as explained above, you would have to download each full-length archived clip and perform a lot of editing to simulate a multi-camera session. By contrast, in our invention, you just select 3 of the "playing" emulated live camera angles and start recording. You then switch back and forth between each emulated live player (10-15 minute archived clips playing on the server) and when you hit stop one minute later, the emulated live server players transfer the high res of that 1-minute section only, no editing, no trimming. All happens live and in real time.
23. USE CASE EXAMPLE - AUDIO ONLY MULTI-DEVICE SESSION FOR MIX TAPE RECORDING
[00179] On a bus equipped with Wi-Fi, or using Bluetooth in a car, or worldwide thru an intermediate server connecting all devices via cellular, 10 high-school students could all launch the app and act as both stage and player, each user both broadcasting its own camera view or playing its music library and receiving low or medium resolution proxies from all other 9 devices. Having their headphones plugged in, they could then live switch between all of the players, creating their own live composition for their listening pleasure, and when the session is done they could request the full high-res file to replace the low res in the final mix tape. In some situations, it may also be advantageous to include images or video or text from each player device to make a music video mix tape that incorporates both audio and visual imagery.
24. USE CASE - MULTI-CAMERA LIVE SWITCHING WHERE PLAYERS CONSIST OF NON-CONNECTED ANALOG OR DIGITAL RECORDING DEVICES.
[00180] Sometimes it is desirable to create a multi-camera movie or to make live-switching edit decisions while recording from cameras, microphones or other media capture devices that cannot possibly be connected to the stage by any electronic means in either or both a send or receive configuration.
[00181] In these cases, the stage can manually define a type of PROXY player that shows on the stage-switching preview as a solid colored thumbnail with a user-defined name or number. When the stage starts recording, a signal could be sent to the capture device either digitally or thru analog means (such as the stage playing a beep tone or clap sound), or the stage user could, manually or by instructing another person verbally or with hand signals, trigger recording on this non-connected device (maybe the stage can send an infrared signal to trigger recording but the device has no means of sending anything back to the stage). A clapboard could also be shown in front of the camera, or a sound cue given on the audio microphone recording, because the device may or may not have time code.
[00182] Once the stage stops recording, another command is executed to stop recording on the device. Then the film could be developed and scanned to a digital file, or the tape could be ingested and transcoded, or the disk file could be manually transferred to the stage for replacement of the proxy clip. Audio waveforms or cues, clapboard images, time code or other means could be used to automatically and precisely position and sync that media source in the multi-clip timeline relative to the other recordings. No editing would be required. For example, a movie producer could use this method to quickly live switch between three Panavision 35mm film cameras. The stage would have a red, green and blue proxy thumbnail; the stage user would say "record now", all three cameras would start recording, and the stage could play a beep sound or flash a torch light in view of all three film cameras, or a clapboard could be used. Over the course of 30 minutes, by looking at what each film camera is capturing instead of at stage preview thumbnails, the stage user could tap the desired proxy thumbnail during shooting to switch to that camera at that moment, and so on. They might switch back and forth between the proxies 100 times. Then, when the Panavision 35mm film is developed and digitally scanned, the files would be imported by the stage and automatically or manually synced by timecode, beep tone, clapboard object motion detection or flash frame relative to the other files and relative to the live switching decision list. No editing or post would be required; the stage would automatically generate the final multi-camera movie and save it as a flattened movie file to disk.
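The beep-tone and audio-waveform sync mentioned above is, at its core, a cross-correlation search: slide the shorter clip along the reference track and keep the offset where the waveforms line up best. The brute-force sketch below is illustrative only; production systems typically run FFT-based correlation on downsampled audio envelopes.

```python
def best_offset(reference, clip):
    """Find the sample offset of `clip` within `reference` that
    maximizes cross-correlation -- a brute-force sketch of audio
    waveform sync. Inputs are lists of audio samples."""
    best, best_score = 0, float("-inf")
    for offset in range(len(reference) - len(clip) + 1):
        score = sum(r * c for r, c in zip(reference[offset:], clip))
        if score > best_score:
            best, best_score = offset, score
    return best
```

Dividing the winning offset by the sample rate gives the time shift used to position the scanned film or tape transfer in the multi-clip timeline.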
It is also possible in this use case to use a connected camera player device attached to the top of the lens of the film camera to provide better visual feedback on the stage of that camera's point of view instead of a solid color labeled proxy. The stage would see this preview proxy, and it would be replaced with the full res from the film camera later.
25. USE-CASE MULTI-CAMERA LIVE SWITCHING WHERE NON-SMART-DEVICE DIGITAL CAMERAS (third devices) ARE CONNECTED TO THE STAGE THRU AN INTERMEDIATE PROXY PLAYER THAT RECEIVES THE CAMERA OR MIC SIGNAL EITHER DIRECTLY VIA HDMI, VGA, Wi-Fi, ETC OR THRU AN ENCODING OR STREAMING SERVER DEVICE CONNECTED TO BOTH THE CAMERA AND THE PLAYER.
[00183] In this case, imagine a RED ONE digital cinema camera that has no way of directly connecting to a player or stage device, is not a smart device in and of itself, or doesn't have Wi-Fi. An intermediate device like a Teradek Cube could be connected to the RED ONE via its HD-SDI or HDMI out HD preview port. The Teradek Cube would encode and compress that digital preview into thumbnails or a low-res stream that would be sent to the proxy player. This player, instead of (or in addition to) sending its own low-res camera feed to the stage, would instead or also forward the preview of the RED ONE being sent to it by the Teradek Cube to the stage. When the stage starts recording, a command would be sent to the proxy player, which would in turn send a command either directly to the RED ONE or via the Teradek Cube connected to the RED ONE, or another sync method would be used, like beep tone or clapboard. When the recording is stopped, the player would send the medium resolution to the stage, or would request the high resolution from the Teradek Cube and after receipt transfer that to the stage to replace the low-res thumbnails or proxy. In addition, the original hard disk file from the RED ONE containing the 4K or UltraHD (4 times HD) could be later imported to the stage and replace the proxy or high-res for super high-res 4K final output. In some cases with some cameras or devices, the intermediate device could be eliminated, and in others both the proxy player and intermediate device could be eliminated if the RED camera offers the necessary smart connectivity and if network and processing bandwidth is not a constraint.
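The proxy-replacement step that runs when the high-resolution files arrive can be sketched as a pass over the live switching decision list. This is a minimal illustrative sketch, not the disclosed implementation; the field names and file names are hypothetical. Each segment keeps its in/out points and only swaps its media reference; segments whose high-res media has not arrived yet simply keep their proxy.

```python
def replace_proxies(decision_list, high_res_files):
    """Swap each proxy reference in a switching decision list for the
    matching high-resolution file, keeping in/out points intact.
    high_res_files maps a source id to the delivered high-res file."""
    final = []
    for seg in decision_list:
        media = high_res_files.get(seg["source"], seg["media"])
        final.append({**seg, "media": media})
    return final

# Hypothetical two-segment decision list: only the RED ONE's file has arrived
edl = [
    {"source": "red_one", "media": "proxy_red.mov", "in": 0.0, "out": 4.5},
    {"source": "cam_b",   "media": "proxy_b.mov",   "in": 4.5, "out": 9.0},
]
print(replace_proxies(edl, {"red_one": "red_one_4k.r3d"}))
```

Because the switch points are stored by source and time rather than by file, the same decision list can be re-resolved again later, e.g. when the original 4K disk file replaces the intermediate high-res.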
26. COMPUTER GENERATED EMULATED LIVE PLAYER
[00184] In some cases, it may be desirable to include in a composition a computer generated 3D rendering of the environment surrounding the stage or of some other environment relevant to the composition. For example, a user could be standing near the Eiffel Tower on a cold morning and want to create a multi-camera composition, but no players are available and no pre-recorded emulated player files on the server match the current environment closely enough to provide an undetectable match with the stage user's video. The server could have on it a 3D model of Paris with models of all buildings and the Eiffel Tower. When the stage user sends out a request for players, the server could present a live video of the exact environment being created in real-time, in the same way a 3D video game would render a live environment view for the player of the game. Environmental cues like weather, time of day, position and cycle of the moon and stars, cloud coverage and wind speed, and detected audio or colors in the stage's video could be used to make the 3D rendering of Paris extremely realistic. Then the stage could select this emulated computer generated live player and start recording, switching back and forth between the two camera views to create a single composition.
27. COLOR CORRECTION, LENS CORRECTION OR AUDIO CORRECTION TO MATCH MEDIA SOURCES
[00185] Because each camera device or microphone may perceive visual images or sound differently, it may be necessary for the player, the stage or an interim server to make adjustments to both the low-res thumbnails or stream and the final high-res files transferred from each device so that they match each other in color, brightness, contrast and white balance (in the case of visuals) and match each other in tone, volume level, etc. (in the case of audio). This would happen automatically, or manually by user input, during recording to eliminate the need for post editing or correction. When the stage first selects players, the stage could send out a stream of itself for players to match, or the stage could analyze incoming player streams and send commands to players to color correct or audio correct in real-time. It may also be necessary to apply lens correction, frame rate correction, scale adjustment or motion stabilization to match all footage. This would ensure that live players or emulated live players match the stage and other players' media as closely as possible, so that when the event is over, the composition is truly complete and will not require editing or post-correction.
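The simplest form of the automatic matching described above is a linear remap that brings one stream's average level and spread in line with a reference stream. The sketch below is illustrative only (the helper name is hypothetical, and real color correction would work per channel with gamma-aware math); it shows the idea for a flat list of samples, which could equally be luma values or audio amplitudes.

```python
def match_levels(samples, ref_mean, ref_std):
    """Linearly remap samples so their mean and spread match a
    reference stream -- the kind of automatic correction the stage
    could request from each player in real-time."""
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5 or 1.0
    gain = ref_std / std          # stretch/compress to the reference spread
    return [ref_mean + gain * (s - mean) for s in samples]
```

The stage would compute `ref_mean`/`ref_std` from its own feed once, then push just those two numbers to each player, which is far cheaper than streaming corrected video back and forth.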
28. LIVE OR ARCHIVED PHOTOS OR FIXED VIDEO USED AS EMULATED PLAYER VIDEO SOURCES
[00186] In some cases, a player device may only be able to capture a still photo every few seconds, or may have a boring locked-down camera view which makes it seem like a still photo. This could either be mixed into the composition as a frozen image, or the image could be scaled or panned to create the appearance of live video. Filters could also be applied, and layering effects could be overlaid, such as snow falling or butterflies or clouds moving, to make the still photo or fixed position video seem more lifelike, realistic and entertaining.
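The scale-and-pan treatment of a still photo (often called a "Ken Burns" move) amounts to interpolating a crop rectangle across frames. This sketch is illustrative, not part of the disclosure; it assumes a simple center zoom, whereas a real implementation would also drift the crop origin and respect the output aspect ratio.

```python
def pan_and_scale(width, height, frames, zoom_end=1.2):
    """Generate per-frame crop rectangles (x, y, w, h) that slowly zoom
    into a still photo, creating the appearance of live video."""
    rects = []
    for f in range(frames):
        t = f / max(frames - 1, 1)            # 0.0 .. 1.0 across the clip
        zoom = 1.0 + (zoom_end - 1.0) * t     # ease linearly toward zoom_end
        w, h = width / zoom, height / zoom
        x, y = (width - w) / 2, (height - h) / 2  # zoom toward the center
        rects.append((round(x), round(y), round(w), round(h)))
    return rects
```

Each rectangle is then scaled back up to the output resolution, so the photo appears to move even though the source is static.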
29. GREEN SCREEN OR VECTOR BACKGROUND REPLACEMENT AND MULTI-CAMERA RECORDING
[00187] One of the limitations with current green screen recording technology and vector background replacement technology is that stock video backgrounds often do not match the studio lighting or the lighting of the foreground clip. Because live and emulated live players transmit extended meta data about the sun position, temperature, weather, etc., it is possible to instruct the stage or another player recording the foreground subject to reposition the camera pointing in a different direction, or to delay shooting until a different time of day, for example, to match other footage. For example, if a stage user is standing outside on a sunny day in Alaska in front of a green screen and wants to make a movie of an actor standing on a beach in Hawaii, the system could instruct the stage to point the camera west and wait 15 minutes for the sun angle in the archived emulated live player shot of a beach in Hawaii, so that both shots would match perfectly and create a beautiful composition. Also, when the original player was recording the shot of the beach in Hawaii, the system could have asked that user to record another shot facing in the opposite direction so that the two clips could be later used as backgrounds for a two-person facing dialogue scene where it is necessary to have a shot facing west and a shot facing east. This is almost never possible with traditional stock footage. If the player is actually live in Hawaii at the same time the stage is live in Alaska, the system could provide instructions to both stage and player to make adjustments to their shooting angles so that combining the foreground and background would create an imperceptible match.
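The "wait 15 minutes" instruction above can be derived from the sun-angle meta data with simple arithmetic. The sketch below is a rough illustration only: it uses the sun's average apparent motion of about 15 degrees per hour, whereas real solar azimuth depends on latitude, date and time of day, so an actual system would use a proper solar-position model.

```python
def minutes_until_sun_match(current_azimuth, target_azimuth, deg_per_hour=15.0):
    """Rough wait time until the sun reaches the azimuth recorded in an
    archived background clip, assuming ~15 degrees of apparent motion
    per hour (a simplification of real solar geometry)."""
    delta = (target_azimuth - current_azimuth) % 360  # degrees still to travel
    return delta / deg_per_hour * 60                  # minutes

print(minutes_until_sun_match(180.0, 183.75))  # → 15.0
```

The same comparison run in reverse (archived azimuth vs. live azimuth) would tell the system whether to ask the foreground shooter to wait or to re-aim the camera instead.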
[00188] While the foregoing description and drawings represent the exemplary embodiments of the present invention, it will be understood that various additions, modifications and substitutions may be made therein without departing from the spirit and scope of the present invention as defined in the accompanying claims. In particular, it will be clear to those skilled in the art that the present invention may be embodied in other specific forms, structures, arrangements, proportions, sizes, and with other elements, materials, and components, without departing from the spirit or essential characteristics thereof. One skilled in the art will appreciate that the invention may be used with many modifications of structure, arrangement, proportions, sizes, materials, and components and otherwise used in the practice of the invention, which are particularly adapted to specific environments and operative requirements without departing from the principles of the present invention. The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being defined by the appended claims, and not limited to the foregoing description or embodiments.

Claims

What is claimed is:
1. A method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising:
a) receiving, on the first electronic device, a plurality of low resolution media streams of high resolution media clips from a plurality of remote electronic devices;
b) displaying, in the first display device, visual indicia of each of the low resolution media streams being received by the first electronic device;
c) activating one or more of the low resolution media streams being received by the first electronic device in response to user input via the first user input means;
d) for each low resolution media stream that is activated in step c), recording a low resolution media clip segment of that low resolution media stream in an interim media composition that resides on the first memory device;
e) for each low resolution media clip segment recorded in the interim media composition, receiving on the first electronic device a high resolution media clip segment from the remote electronic device that corresponds to that low resolution media clip segment; and
f) automatically replacing the low resolution media clip segments in the interim media composition with the high resolution media clip segments to create the final media composition comprising the high resolution media clip segments.
2. The method according to claim 1 wherein the interim media composition and the final media composition are the same singular file.
3. The method according to claim 1 further comprising:
wherein step d) further comprises:
d-1) for each low resolution media stream that is activated in step c), recording an extended low resolution media clip that comprises the low resolution media clip segment of that low resolution media stream on the first memory device separate from the interim media composition;
wherein step e) further comprises:
e-1) for each extended low resolution media clip recorded on the first memory device, the first electronic device receiving an extended high resolution media clip that corresponds to that extended low resolution media clip; and wherein step f) further comprises:
f-1) recording the extended high resolution media clips on the first memory device;
f-2) extracting the high resolution media clip segments that correspond to the low resolution media clip segments of the interim media composition from the extended high resolution media clips recorded on the first memory device; and
f-3) replacing the low resolution media clip segments of the interim media composition with the extracted high resolution media clip segments.
4. The method according to claim 1 wherein step c) further comprises:
c-1) selecting one or more of the low resolution media streams being received by the first electronic device for recording in response to user input via the first user input means;
c-2) for each of the low resolution media streams selected in step c-1), recording an extended low resolution media clip of that low resolution media stream on the first memory device as a separate file;
c-3) activating one or more of the low resolution media streams selected in step c-1).
5. The method according to claim 1 further comprising:
wherein step c) further comprises: activating a first of the plurality of the low resolution media streams and a second of the plurality of the low resolution media streams; and
wherein step d) further comprises:
d-1) recording the low resolution media clip segment of the first low resolution media stream in the interim media composition; and
d-2) recording the low resolution media clip segment of the second low resolution media stream in the interim media composition.
6. The method according to claim 5 wherein the first and second low resolution media streams are activated sequentially, and wherein the low resolution media clip segment of the first low resolution media stream and the low resolution media clip segment of the second low resolution media stream are positioned sequentially in the interim media composition.
7. The method according to claim 5 wherein the first and second low resolution media streams are activated concurrently, and wherein the low resolution media clip segment of the first low resolution media stream and the low resolution media clip segment of the second low resolution media stream are positioned concurrently in the interim media composition.
8. The method according to claim 1 wherein step e) is automatically initiated upon an end record signal being generated by the first electronic device and being received by the remote electronic device.
9. The method according to claim 1 wherein step e) is automatically initiated upon determining that the high resolution media clip segments will be transmitted at a data rate that exceeds a predetermined threshold.
10. The method according to claim 1 wherein step e) is automatically initiated upon a window of the media composition program being opened or closed.
11. The method according to claim 1 wherein during step e), the high resolution media clip segments are routed through a server.
12. The method according to claim 11 wherein step e) comprises:
e-1) wirelessly uploading the high resolution media clip segments from the remote electronic devices to the server; and
e-2) wirelessly downloading the high resolution media clip segments from the server to the first portable electronic device.
13. The method according to claim 12 wherein step e-1) is automatically initiated upon determining that the high resolution media clip segments will be wirelessly uploaded from the remote electronic devices to the server at a data rate that exceeds a predetermined threshold; and wherein step e-2) is automatically initiated upon determining that the high resolution media clip segments will be wirelessly downloaded from the server to the first electronic device at a data rate that exceeds a predetermined threshold.
14. The method according to claim 1 wherein the low resolution media clip segment is a low resolution audio clip segment and the high resolution media clip segment is a high resolution audio clip segment.
15. The method according to claim 1 wherein the low resolution media clip segment is a low resolution video clip segment and the high resolution media clip segment is a high resolution video clip segment.
16. The method according to claim 1 wherein:
wherein, for step a), the plurality of low resolution media streams comprise low resolution video streams, and wherein the plurality of low resolution video streams comprise a plurality of remote camera views perceived by a plurality of remote camera lenses of the plurality of remote electronic devices; and
wherein, for step b), the visual indicia comprises the remote camera views perceived by the plurality of remote camera lenses of the plurality of remote electronic devices.
17. The method according to claim 16 wherein step a) further comprises: displaying, in the first display device, a first camera view perceived by a first camera lens of the first electronic device, wherein the visual indicia and the first camera view are simultaneously displayed in the first display device.
18. The method according to claim 1 wherein step a) further comprises:
a-1) displaying a list of remote electronic devices that are available for remote connection;
a-2) selecting at least one of the remote electronic devices from the list; and
a-3) for each remote electronic device selected from the list, receiving the low resolution media stream of a high resolution media clip from that remote electronic device.
19. The method according to claim 18 wherein the plurality of low resolution media streams received in step a) are transmitted to the first electronic device over a common carrier network.
20. The method according to claim 18 wherein the remote electronic devices on the list are selected based on one or more qualification criteria defined by the user of the first electronic device via the first user input means.
21. The method according to claim 20 wherein the one or more qualification criteria are selected from a group consisting of local area network connectivity, GPS radius from the first electronic device, location of the remote electronic device and pre-defined group status.
22. A non-transitory computer-readable storage medium encoded with instructions which, when executed on a processor of a first electronic device, perform a method comprising: a) receiving, on the first electronic device, a plurality of low resolution media streams of high resolution media clips from a plurality of remote electronic devices;
b) displaying, in the first display device, visual indicia of each of the low resolution media streams being received by the first electronic device;
c) activating one or more of the low resolution media streams being received by the first electronic device in response to user input via the first user input means;
d) for each low resolution media stream that is activated in step c), recording a low resolution media clip segment of that low resolution media stream in an interim media composition that resides on the first memory device;
e) for each low resolution media clip segment recorded in the interim media composition, receiving on the first electronic device a high resolution media clip segment from the remote electronic device that corresponds to that low resolution media clip segment; and f) automatically replacing the low resolution media clip segments in the interim media composition with the high resolution media clip segments to create the final media composition comprising the high resolution media clip segments.
23. An electronic device comprising:
a first processor;
a first memory device;
a first transceiver; and
instructions residing on the first memory device, which when executed by the first processor, causes the first processor to:
a) receive, on the first electronic device, a plurality of low resolution media streams of high resolution media clips from a plurality of remote electronic devices;
b) display, in the first display device, visual indicia of each of the low resolution media streams being received by the first electronic device;
c) activate one or more of the low resolution media streams being received by the first electronic device in response to user input via the first user input means;
d) for each low resolution media stream that is activated in step c), record a low resolution media clip segment of that low resolution media stream in an interim media composition that resides on the first memory device;
e) for each low resolution media clip segment recorded in the interim media composition, receive on the first electronic device a high resolution media clip segment from the remote electronic device that corresponds to that low resolution media clip segment; and f) automatically replace the low resolution media clip segments in the interim media composition with the high resolution media clip segments to create the final media composition comprising the high resolution media clip segments.
24. A method of creating a video composition comprising:
a) displaying, in a first display device of a first electronic device, a plurality of remote camera views perceived by a plurality of remote camera lenses of a plurality of remote electronic devices;
b) activating one or more of the plurality of the remote camera views displayed in the first display device via user input means of the first electronic device;
c) for each remote camera view that is activated in step b), recording, on a first memory device of the first electronic device, a low resolution video clip segment of the remote camera view as part of an interim video composition; d) for each low resolution video clip segment recorded in step c), acquiring from the remote electronic devices a high resolution video clip segment that corresponds to that low resolution video clip segment; and
e) automatically replacing the low resolution video clip segments in the video composition recorded on the first memory device of the first electronic device with the high resolution video clip segments.
25. The method according to claim 24 wherein step a) further comprises:
a-1) displaying, in the first display device, a list of remote electronic devices that have initiated a sharing status via user input means of the remote electronic devices;
a-2) selecting a plurality of remote electronic devices from the list via user input means of the first electronic device;
a-3) displaying remote camera views of the remote electronic devices selected in step a-2) in the first display device.
26. The method according to claim 25 wherein step a-1) is performed upon initiating a stage status for the first electronic device via user input means of the first electronic device.
27. The method according to claim 26 wherein remote electronic devices selected for the list meet one or more qualification criteria defined by the first electronic device via user input means of the first electronic device.
28. The method according to claim 27 wherein the one or more qualification criteria are selected from a group consisting of local area network connectivity, GPS radius from the first electronic device, location of the remote electronic device and pre-defined group status.
29. The method according to claim 24 wherein step a) further comprises: displaying, in the first display device of the first electronic device, a first camera view perceived by a first camera lens of the first electronic device, wherein the first camera view and the plurality of remote camera views are simultaneously displayed in the first display device.
30. The method according to claim 29 wherein step a) further comprises: displaying the plurality of remote camera views in a first window that overlays a primary window in which the first camera view is displayed.
31. The method according to claim 30 wherein upon a swap function being activated, the first camera view is displayed in the overlay window and a selected one of the remote camera views is displayed in the primary window.
32. A non-transitory computer-readable storage medium encoded with instructions which, when executed on a processor of a first electronic device, perform a method comprising: a) displaying, in a first display device of a first electronic device, a plurality of remote camera views perceived by a plurality of remote camera lenses of a plurality of remote electronic devices;
b) activating one or more of the plurality of the remote camera views displayed in the first display device in response to user input inputted via user input means of the first electronic device;
c) for each remote camera view that is activated in step b): (1) recording, on a first memory device of the first electronic device, a low resolution video clip of the remote camera view as part of a video composition; and (2) generating and transmitting a first record signal to the remote electronic devices, thereby causing a high resolution video clip of the remote camera view to be recorded on the remote electronic device capturing that remote camera view;
d) for each high resolution video clip recorded in step c), generating and transmitting a signal that causes the high resolution video clips from the remote electronic devices to be transmitted to the first electronic device; and
e) upon the first portable electronic device receiving the high resolution video clips transmitted in step d), automatically replacing the low resolution video clips in the video composition recorded on the first memory device of the first electronic device with the high resolution video clips.
33. An electronic apparatus comprising:
a first processor;
a first memory device;
a first transceiver;
a first display device;
first user input means;
a first camera lens; and
instructions residing on the first memory device, which when executed by the first processor, causes the first processor to:
a) display, in the first display device, a plurality of remote camera views perceived by a plurality of remote camera lenses of a plurality of remote electronic devices; b) activate one or more of the plurality of the remote camera views displayed in the first display device in response to user input inputted via the first user input means;
c) for each remote camera view that is activated in step b): (1) record, on the first memory device, a low resolution video clip of the remote camera view as part of a video composition; and (2) generate and transmit a first record signal to the remote electronic devices, thereby causing a high resolution video clip of the remote camera view to be recorded on the remote electronic device capturing that remote camera view;
d) for each high resolution video clip recorded in step c), generating and transmitting a signal that causes the high resolution video clips from the remote electronic devices to be transmitted to the electronic apparatus; and
e) upon the electronic apparatus receiving the high resolution video clips transmitted in step d), automatically replacing the low resolution video clips in the video composition recorded on the first memory device of the electronic apparatus with the high resolution video clips.
34. A method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising:
a) receiving, on the first electronic device, a plurality of low resolution media streams of high resolution media clip files from one or more databases, the high resolution media clip files stored on the one or more databases;
b) displaying, in the first display device, visual indicia of each of the low resolution media streams being received by the first electronic device;
c) activating one or more of the low resolution media streams being received by the first electronic device in response to user input via the first user input means;
d) for each low resolution media stream that is activated in step c), recording a low resolution media clip segment of that low resolution media stream in an interim media composition that resides on the first memory device;
e) for each low resolution media clip segment recorded in the interim media composition, receiving on the first electronic device a high resolution media clip segment from the one or more databases that corresponds to that low resolution media clip segment; and
f) automatically replacing the low resolution media clip segments in the interim media composition with the high resolution media clip segments to create the final media composition comprising the high resolution media clip segments.
35. The method according to claim 34 wherein at least one of the one or more databases resides on a server that is accessible by the first electronic device over a common carrier network.
36. The method according to any one of claims 34 to 35 wherein at least one of the one or more databases resides on a second electronic device.
37. The method according to claim 36 wherein the first and second electronic devices are mobile communication devices.
38. The method according to claim 34 wherein step b) further comprises: displaying, in the first display device, visual indicia of only those low resolution media streams that satisfy qualification criteria.
39. The method according to claim 34 wherein step a) further comprises:
a-1) searching the one or more databases for high resolution video clip files that satisfy qualification criteria;
a-2) selecting a plurality of high resolution video clips from the one or more databases that satisfy the qualification criteria; and
a-3) transmitting, to the first portable electronic device, only those low resolution media streams that correspond to the high resolution video clips determined to satisfy the qualification criteria in step a-2).
40. The method according to claim 39 wherein the qualification criteria comprises user- specified criteria and/or auto-extracted criteria.
41. The method according to claim 40 wherein the user-specified criteria comprises local area network connectivity and GPS radius from the first electronic device and location.
42. The method according to claim 40 wherein the auto-extracted criteria comprises current weather conditions at a GPS location of the first portable electronic device, current time of day, current day and month, and GPS radius from the first electronic device and location.
43. A method of creating a video composition comprising:
a) displaying, in a first display device of a first portable electronic device, a first camera view perceived by a first camera lens of the first portable electronic device;
b) transmitting, to the first portable electronic device, a plurality of low resolution video streams of high resolution video clips previously stored in one or more databases;
c) displaying, in the first display device of the first electronic device, the low resolution video streams, wherein the first camera view and the low resolution video streams are simultaneously displayed in the first display device; d) recording, on the first memory device of the first portable electronic device, a low resolution video clip for each of the low resolution video streams activated by a user as part of a video composition;
e) for each low resolution video clip recorded on the first memory device of the first portable electronic device, transmitting corresponding ones of the high resolution clips from the one or more databases to the first portable electronic device; and
f) automatically replacing the low resolution video clips in the video composition recorded on the first memory device of the first portable electronic device with the high resolution video clips.
44. The method according to claim 43 wherein step a) further comprises:
a-1) displaying, in the first display device, a list of high resolution video clips previously stored in the library database that satisfy qualification criteria;
a-2) selecting a plurality of high resolution video clips from the list via user input means of the first electronic device; and
a-3) transmitting, to the first portable electronic device, a plurality of low resolution video streams of the high resolution video clips selected in step a-2).
45. The method according to claim 44 wherein the qualification criteria comprises user- specified criteria and auto-extracted criteria.
46. The method according to claim 45 wherein the user-specified criteria comprises local area network connectivity, GPS radius from the first electronic device and location.
47. The method according to claim 45 wherein the auto-extracted criteria comprises current weather conditions at a GPS location of the first portable electronic device, current time of day and current day and month.
48. A method of creating a final media composition using a media composition program residing on a first electronic device comprising a first display device, a first memory device, and first user input means, the method comprising:
a) displaying, in the first display device, a visual indicia for each of a plurality of electronic media recording devices;
b) recording, on each of the electronic media recording devices, a perceived event as a media clip that contains an electronic media recording device identifier;
c) selectively activating each of the visual indicia of the plurality of electronic media recording devices during step b) to generate and record a proxy clip segment in an interim video composition on the first memory device, wherein each proxy clip segment is associated with the electronic media recording device whose visual indicia was activated to generate that proxy clip segment and a temporal period;
d) for each proxy clip segment recorded in the interim media composition, receiving on the first electronic device the media clip recorded in step b);
e) for each media clip received in step d), matching the media clip with the corresponding proxy clip segment based on the electronic media recording device identifier and automatically extracting a segment of the media clip that corresponds to the temporal period of that proxy clip segment; and
f) for each media clip segment extracted in step e), automatically replacing the proxy clip segment to which that media clip segment is matched with that media clip segment to create the final media composition comprising the media clip segments.
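The matching and replacement steps of claim 48 (steps d–f) can be sketched as follows. The dictionary-based clip model, frame-index temporal periods, and field names (`device_id`, `start`, `end`, `frames`) are assumptions for illustration; the claim does not prescribe any particular representation.

```python
def assemble_final_composition(proxy_segments, media_clips):
    """Replace each proxy clip segment with the span extracted from the
    full-quality clip recorded on the identified device (claim 48, steps e-f)."""
    # Index the received media clips by device identifier (step d / step e match).
    by_device = {clip["device_id"]: clip for clip in media_clips}
    final = []
    for seg in proxy_segments:
        clip = by_device[seg["device_id"]]      # match on the device identifier
        start, end = seg["start"], seg["end"]   # temporal period of this segment
        extract = clip["frames"][start:end]     # extract the corresponding span
        final.append({"device_id": seg["device_id"], "frames": extract})
    return final

# Interim composition: the director switched between two cameras while recording.
proxies = [
    {"device_id": "cam_a", "start": 0, "end": 3},
    {"device_id": "cam_b", "start": 3, "end": 5},
]
clips = [
    {"device_id": "cam_a", "frames": ["a0", "a1", "a2", "a3", "a4"]},
    {"device_id": "cam_b", "frames": ["b0", "b1", "b2", "b3", "b4"]},
]
composition = assemble_final_composition(proxies, clips)
```

The result keeps the edit decisions made against the low resolution proxies while substituting the full-quality material, which is the central economy of the claimed proxy workflow.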
PCT/US2013/023499 2012-01-26 2013-01-28 Method of creating a media composition and apparatus therefore Ceased WO2013116163A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/374,719 US20150058709A1 (en) 2012-01-26 2013-01-28 Method of creating a media composition and apparatus therefore
US15/599,621 US20170257414A1 (en) 2012-01-26 2017-05-19 Method of creating a media composition and apparatus therefore

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261931911P 2012-01-26 2012-01-26
US61931911 2012-01-26

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US14/374,719 A-371-Of-International US20150058709A1 (en) 2012-01-26 2013-01-28 Method of creating a media composition and apparatus therefore
US15/599,621 Continuation US20170257414A1 (en) 2012-01-26 2017-05-19 Method of creating a media composition and apparatus therefore

Publications (1)

Publication Number Publication Date
WO2013116163A1 true WO2013116163A1 (en) 2013-08-08

Family

ID=48905739

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/023499 Ceased WO2013116163A1 (en) 2012-01-26 2013-01-28 Method of creating a media composition and apparatus therefore

Country Status (1)

Country Link
WO (1) WO2013116163A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110312310A1 (en) * 2005-09-14 2011-12-22 Jorey Ramer System for targeting advertising content to a plurality of mobile communication facilities
WO2007082167A2 (en) * 2006-01-05 2007-07-19 Eyespot Corporation System and methods for storing, editing, and sharing digital video
US20070195203A1 (en) * 2006-02-21 2007-08-23 Qualcomm Incorporated Multi-program viewing in a wireless apparatus
US20100304731A1 (en) * 2009-05-26 2010-12-02 Bratton R Alex Apparatus and method for video display and control for portable device
US20100318647A1 (en) * 2009-06-10 2010-12-16 At&T Intellectual Property I, L.P. System and Method to Determine Network Usage

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"CollabraCam: Live multicamera switching on your iPhone or iPad.", COLLABRACAM: LIVE MULTICAMERA SWITCHING ON YOUR IPHONE OR IPAD., 4 January 2012 (2012-01-04), Retrieved from the Internet <URL:http://www.fcp.co/hardware-and-softwarelconsumer/388-collabracam-live-multicamera-switching-on-your-iphone-or-ipad> [retrieved on 20130321] *
DENYSTKALICH.: "An in depth review of CollabraCam: An iphone/ipad app for multi-camera video production.", AN IN DEPTH REVIEW OF COLLABRACAM: AN IPHONE/IPAD APP FOR MULTI-CAMERA VIDEO PRODUCTION., 11 March 2011 (2011-03-11), Retrieved from the Internet <URL:http://thenextweb.com/apps/2011/03/11/an-in-depth-review-of-collabracam-an-iphoneipad-app-for-multi-camera-video-production> [retrieved on 20120921] *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150084857A1 (en) * 2013-09-25 2015-03-26 Seiko Epson Corporation Image display device, method of controlling image display device, computer program, and image display system
US10451874B2 (en) * 2013-09-25 2019-10-22 Seiko Epson Corporation Image display device, method of controlling image display device, computer program, and image display system
CN104378555A (en) * 2014-11-28 2015-02-25 桂林信通科技有限公司 Intelligent subtitle recording system in emergency command system
CN104378555B (en) * 2014-11-28 2017-09-08 桂林信通科技有限公司 Intelligent caption input system in emergency commanding system
CN107241540A (en) * 2017-08-01 2017-10-10 哈尔滨市舍科技有限公司 A kind of dual resolution design integrated form panorama camera device and method
CN111263093A (en) * 2020-01-22 2020-06-09 维沃移动通信有限公司 A video recording method and electronic device
CN112261425A (en) * 2020-10-20 2021-01-22 成都中科大旗软件股份有限公司 Video live broadcast and video recording playing method and system
CN112261425B (en) * 2020-10-20 2022-07-12 成都中科大旗软件股份有限公司 Video live broadcast and video recording playing method and system
CN113794923A (en) * 2021-09-16 2021-12-14 维沃移动通信(杭州)有限公司 Video processing method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US20170257414A1 (en) Method of creating a media composition and apparatus therefore
US20150058709A1 (en) Method of creating a media composition and apparatus therefore
US9117483B2 (en) Method and apparatus for dynamically recording, editing and combining multiple live video clips and still photographs into a finished composition
US11343594B2 (en) Methods and systems for an augmented film crew using purpose
WO2019128787A1 (en) Network video live broadcast method and apparatus, and electronic device
US20120169855A1 (en) System and method for real-sense acquisition
US20100064239A1 (en) Time and location based gui for accessing media
JP2005341064A (en) Information transmission apparatus, information transmission method, program, recording medium, display control apparatus, and display method
CN106576190A (en) 360 degree space image reproduction method and system therefor
WO2013116163A1 (en) Method of creating a media composition and apparatus therefore
US20140205261A1 (en) Interactive audio/video system and method
US12081865B2 (en) System and method for video recording with continue-video function
US20200104030A1 (en) User interface elements for content selection in 360 video narrative presentations
CN106095881A (en) Method, system and the mobile terminal of a kind of display photos corresponding information
WO2017157135A1 (en) Media information processing method, media information processing device and storage medium
US10453496B2 (en) Methods and systems for an augmented film crew using sweet spots
CN112734937B (en) Tourism system based on panoramic technology
US11398254B2 (en) Methods and systems for an augmented film crew using storyboards
US20090013241A1 (en) Content reproducing unit, content reproducing method and computer-readable medium
CN105765969B (en) Image treatment method, device and equipment and image shooting system
US20150032744A1 (en) Generation of personalized playlists for reproducing contents
US20240388864A1 (en) Environmental artificial intelligence soundscape experience
JP2006005788A (en) Image displaying method and image display system
GROFELNIK FUNDAMENTALS OF VIDEO PRODUCTION FOR MODERN MARKETING
WO2024241106A1 (en) Environmental artificial intelligence soundscape experience

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13743585

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14374719

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 07-11-2014)

122 Ep: pct application non-entry in european phase

Ref document number: 13743585

Country of ref document: EP

Kind code of ref document: A1