WO2016038964A1 - Information processing apparatus and information processing method - Google Patents
Information processing apparatus and information processing method
- Publication number
- WO2016038964A1 (PCT/JP2015/067355)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- image
- time
- viewpoint
- recorded
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2665—Gathering content from different sources, e.g. Internet and satellite
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
- H04N5/93—Regeneration of the television signal or of selected parts thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/87—Regeneration of colour television signals
- H04N9/8715—Regeneration of colour television signals involving the mixing of the reproduced video signal with a non-recorded signal, e.g. a text signal
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/10537—Audio or video recording
Definitions
- The technology disclosed in this specification relates to an information processing apparatus and an information processing method for sharing information such as moving images and sounds between users, and more particularly to an information processing apparatus and an information processing method for reproducing such information without temporal or spatial restrictions.
- Information devices equipped with playback functions for information such as video content and audio content are already in widespread use.
- Examples include mobile phones such as smartphones, tablet terminals, electronic book readers, and portable music players.
- There are also information devices, such as head-mounted displays and wristband-type devices, that are used while worn on a part of the user's body such as the head or an arm.
- These information devices can present various kinds of content, such as content downloaded in advance, content recorded (photographed or audio-recorded) on the spot, content played back via a network (including real-time streaming), and augmented reality (AR) content provided in real time.
- Some information devices having a content playback function provide a chasing playback function that plays back a moving-image file still being recorded from an arbitrary playback position (see, for example, Patent Document 1), or a high-speed playback function for quickly finding a desired scene (see, for example, Patent Document 2). Using these functions, the user can easily view content from an arbitrary point in time. A rough illustrative sketch of chasing playback follows.
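As a rough illustration of the chasing playback idea only (not the patented implementation), the Python sketch below tails a media file that is still being written and starts reading from an arbitrary byte offset; the file name, chunk size, and polling interval are hypothetical.

```python
import time

def chase_playback(path, start_offset=0, chunk=4096, poll=0.1):
    """Yield data from `path` starting at `start_offset`, continuing to
    follow the file while a recorder keeps appending to it."""
    with open(path, "rb") as f:
        f.seek(start_offset)
        while True:
            data = f.read(chunk)
            if data:
                yield data        # hand each chunk to a decoder/player
            else:
                time.sleep(poll)  # writer has not appended yet; wait

# usage (hypothetical file): for seg in chase_playback("recording.ts"): ...
```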
- The purpose of the technology disclosed in this specification is to provide an excellent information processing apparatus and information processing method that enable content to be viewed without temporal or spatial restrictions.
- One aspect of the technology disclosed in this specification is an information processing apparatus including: an information acquisition unit that acquires image or audio information; a sensor information acquisition unit that acquires position/posture information or other sensor information at the time the image or audio information is acquired; and a storage unit that stores the acquired image or audio information in a database together with the sensor information.
- In a further aspect, the storage unit of the information processing apparatus is configured to save the image or audio information in a dedicated database on a network or in a database of a video sharing site.
- In a further aspect, the storage unit of the information processing apparatus is configured to perform blur correction when recording images or sounds.
- The fourth aspect of the technology disclosed in this specification is an information processing apparatus including: an information acquisition unit that acquires image or audio information stored in a database; and an arithmetic processing unit that reproduces information at an arbitrary time point or an arbitrary place from information recorded at different times or in different places.
- In a further aspect, the database stores the image or audio information together with position/posture information or other sensor information.
- In a further aspect, the arithmetic processing unit of the information processing apparatus according to the fourth aspect is configured to reproduce the image or audio information on the basis of the sensor information.
- In a further aspect, the arithmetic processing unit of the information processing apparatus is configured to perform reproduction processing of images or sounds according to the time difference between the desired viewing time and the current time.
- In a further aspect, the arithmetic processing unit of the information processing apparatus is configured to generate a real-time image when the time difference between the desired viewing time and the current time is less than a predetermined threshold.
- In a further aspect, the arithmetic processing unit of the information processing apparatus is configured to generate a real-time image at an arbitrary place/viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold.
- In a further aspect, the arithmetic processing unit of the information processing apparatus is configured to generate a future image when the desired viewing time is a future time whose difference from the current time is equal to or greater than a predetermined threshold.
- In a further aspect, the arithmetic processing unit of the information processing apparatus is configured to generate a future image at an arbitrary time and an arbitrary place/viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold, and to generate a future image at an arbitrary time from a fixed place/viewpoint when the spatial difference is less than the threshold.
- In a further aspect, the arithmetic processing unit of the information processing apparatus is configured to generate a playback image when the desired viewing time is a past time whose difference from the current time is equal to or greater than a predetermined threshold.
- In a further aspect, the arithmetic processing unit of the information processing apparatus is configured to generate a playback image at an arbitrary time and an arbitrary place/viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold, and to generate a playback image at an arbitrary time from a fixed place/viewpoint when the spatial difference is less than the threshold.
- The thirteenth aspect of the technology disclosed in this specification is an information processing method including: an information acquisition step of acquiring image or audio information; a sensor information acquisition step of acquiring position/posture information or other sensor information at the time the image or audio information is acquired; and a storage step of storing the acquired image or audio information in a database together with the sensor information.
- The fourteenth aspect of the technology disclosed in this specification is an information processing method including: an information acquisition step of acquiring image or audio information stored in a database; and an arithmetic processing step of reproducing information at an arbitrary time point or an arbitrary place from information recorded at different times or in different places.
- According to the technology disclosed in this specification, it is possible to provide an excellent information processing apparatus and information processing method capable of reproducing information at an arbitrary place and an arbitrary time point, without temporal or spatial restrictions.
- FIG. 1 is a diagram schematically showing a configuration of an information reproduction system 100 according to an embodiment of the technique disclosed in this specification.
- FIG. 2 is a diagram illustrating an internal configuration example of the information device 101 that records information to be presented to the user.
- FIG. 3 is a diagram showing an internal configuration example of the information input unit 204 for inputting information to be recorded.
- FIG. 4 is a diagram illustrating an internal configuration example of the information device 141 that presents recorded information to the user.
- FIG. 5 is a diagram showing an internal configuration example of the information output unit 205 that reproduces and outputs recorded information and presents it to the user.
- FIG. 6 is a diagram illustrating an internal configuration example of an information device that performs all of stand-alone input of information to be recorded, information recording, and reproduction output of the recorded information.
- FIG. 7 is a flowchart showing an example of basic processing executed by the recording-side information devices 101, 102,..., 10n.
- FIG. 8 is a flowchart showing an example of basic processing executed by the recording-side information devices 101, 102,..., 10n.
- FIG. 9 is a flowchart showing an example of basic processing executed by the recording side information devices 101, 102,..., 10n.
- FIG. 10 is a flowchart showing an example of basic processing executed by the recording-side information devices 101, 102,..., 10n.
- FIG. 11 is a flowchart showing an example of basic processing executed by the viewing-side information devices 141, 142,..., 14m.
- FIG. 12 is a flowchart showing an example of basic processing executed by the viewing-side information devices 141, 142,..., 14m.
- FIG. 13 is a flowchart showing a processing procedure for generating a composite image.
- FIG. 14 is a flowchart showing a processing procedure for generating a composite image.
- FIG. 15 is a flowchart showing a processing procedure for generating a composite image.
- FIG. 16 is a flowchart showing a detailed procedure of the virtual space image synthesis process.
- FIG. 17 is a flowchart showing a detailed procedure of the virtual space image synthesis process.
- FIG. 18 is a flowchart showing a detailed procedure of the synthesis process of the estimated future image at an arbitrary place / viewpoint and at an arbitrary time.
- FIG. 19 is a flowchart showing a detailed procedure of the synthesis process of the estimated future image at an arbitrary place / viewpoint and at an arbitrary time.
- FIG. 20 is a flowchart showing a detailed procedure of the synthesis process of the estimated future image at a certain place / viewpoint and at an arbitrary time.
- FIG. 21 is a flowchart showing a detailed procedure of the synthesis process of the estimated future image at a certain place / viewpoint and at an arbitrary time.
- FIG. 22 is a flowchart showing a detailed procedure of a synthesis process of a playback image at an arbitrary place / viewpoint and at an arbitrary time.
- FIG. 23 is a flowchart showing a detailed procedure of a synthesis process of a playback image at an arbitrary place / viewpoint and at an arbitrary time.
- FIG. 24 is a flowchart showing a detailed procedure of a synthesis process of a playback image at a certain place / viewpoint and at an arbitrary time.
- FIG. 25 is a flowchart showing a detailed procedure of a synthesis process of a playback image at a certain place / viewpoint and at an arbitrary time.
- FIG. 26 is a flowchart showing a detailed procedure of real time image synthesis processing at an arbitrary place / viewpoint and at an arbitrary time.
- FIG. 27 is a flowchart showing a detailed procedure of real time image synthesis processing at an arbitrary place / viewpoint and at an arbitrary time.
- Small information devices that can be carried and used by users on the go, such as mobile phones such as smartphones, tablet terminals, electronic books, and portable music players, have become widespread.
- An increasing number of information devices are worn on a part of the user's body, such as the head or an arm; examples include head-mounted displays and wristband-type devices.
- This type of information device is basically equipped with a playback function for information such as video content and audio content, and can play back videos saved in advance or recorded on the spot, as well as videos delivered over a network (including real-time streaming). Furthermore, AR (Augmented Reality) information can be added to these reproduced moving images.
- The technology disclosed in this specification removes the temporal and spatial restrictions of a content viewing system that uses a head-mounted display or a portable information device equipped with a display: it reproduces one, or a combination of two or more, of images, sounds, and environmental information at an arbitrary point in time.
- Thereby, the user can experience a pseudo movement in time, in space, or in both time and space.
- A content viewing system using the technology disclosed in this specification also provides a function of performing chasing playback up to a desired time when necessary, so the user can avoid missing information at a given point in time.
- The technology disclosed in this specification can also present information to the user effectively by returning to the current state as necessary, or by presenting a combination of multiple pieces of information. Users can deepen their understanding of a situation and experience any point in time and any position/posture, so they can know the situation at a given time and place even if they were not present there.
- In addition, by seamlessly connecting real information and virtual information at an arbitrary time point or an arbitrary place, the virtual space and the real space can be presented without impairing the sense of immersion.
- FIG. 1 schematically shows a configuration of an information reproduction system 100 according to an embodiment of the technology disclosed in this specification.
- The information reproduction system 100 shown in FIG. 1 includes one or more information devices 101, 102, ..., 10n that record information, databases 111, 112, ..., 11j that store the information recorded by the information devices 101, 102, ..., 10n, an arithmetic device 120 that performs arithmetic processing on the recorded information, databases 131, 132, ..., 13k that store the information after the arithmetic processing by the arithmetic device 120, and information devices 141, 142, ..., 14m that reproduce the recorded images, sounds, or environmental information and present them to the user.
- The information devices 101, 102, ..., 10n, the databases 111, 112, ..., 11j, the computing device 120, the databases 131, 132, ..., 13k, and the information devices 141, 142, ..., 14m are connected to each other via the network 150.
- The information devices 101, 102, ..., 10n that record information to be shared between users and presented to them are, for example, image display devices worn on the head or face (head-mounted displays), surveillance cameras, mobile phones, tablet terminals, electronic book readers, and portable imaging devices. They record information at various times and in various places, depending on where the user wearing the head-mounted display moves, where the surveillance camera is installed, or where the user happens to be.
- That is, the information devices 101, 102, ..., 10n record information to be presented to users that is captured or audio-recorded at various times and in various places.
- the information devices 101, 102,..., 10n are equipped with various sensors for acquiring position / posture information, time, environment information, and the like.
- The information devices 101, 102, ..., 10n save the captured or audio-recorded information together with sensor information such as the position/posture of the device at the time of recording (for example, information on the shooting line of sight), the time, and the environment in which the device was placed at the time of recording.
- Image or audio information stored in the databases 111, 112,..., 11j is shared on the network 150.
- The sensor information attached to the recorded image or sound is based on the position/posture information and time information measured when the information devices 101, 102, ..., 10n performed the recording.
- Environmental information at the time of recording, such as illuminance, temperature, humidity, acceleration, ultraviolet intensity, and chemical substances (concentration, type, state, etc.), may also be added to the recorded information.
- The timing at which position/posture information, time, and environmental information are attached to recorded information such as images and sounds may be either before the information devices 101, 102, ..., 10n send the information to the databases 111, 112, ..., 11j, or after it has been sent to the databases 111, 112, ..., 11j.
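To make the attached metadata concrete, here is a minimal sketch of the kind of record the databases 111, 112, ..., 11j might hold, pairing the recorded media with position/posture, time, and environmental readings; every field name is an assumption for illustration, not taken from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class RecordedEntry:
    media_uri: str    # location of the recorded image/audio payload
    timestamp: float  # recording time (UNIX seconds)
    position: tuple   # (latitude, longitude, altitude), e.g. from GPS
    posture: tuple    # (yaw, pitch, roll), e.g. from geomagnetic/acceleration sensors
    environment: dict = field(default_factory=dict)  # e.g. {"illuminance": 320, "temperature": 22.5}
    tags: list = field(default_factory=list)         # marked time points (tagging is described later)
```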
- The information devices 101, 102, ..., 10n perform blur correction when recording images and sounds, thereby improving the reproduction accuracy during subsequent viewing (that is, the image reproducibility, sound reproducibility, and position/posture reproducibility).
- the following can be applied as the blur correction function.
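The specific correction methods are not reproduced in this extract; as a rough illustration of one conventional approach (an assumption, not the patent's method), the sketch below smooths a per-frame camera trajectory with a moving average and returns the per-frame shift to apply, which is the basic idea behind electronic image stabilization.

```python
import numpy as np

def stabilize_trajectory(dx, dy, window=15):
    """Electronic-stabilization sketch: smooth the cumulative camera path
    built from per-frame translations (dx, dy) with a moving average and
    return the correction (smoothed path minus measured path) per frame."""
    path_x, path_y = np.cumsum(dx), np.cumsum(dy)
    kernel = np.ones(window) / window
    smooth_x = np.convolve(path_x, kernel, mode="same")
    smooth_y = np.convolve(path_y, kernel, mode="same")
    return smooth_x - path_x, smooth_y - path_y  # shift each frame by this amount
```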
- the computing device 120 processes the recorded information recorded by the information devices 101, 102,..., 10n based on the sensor information at the time of recording.
- The computing device 120 performs arithmetic processing for reproducing real information at an arbitrary time point or an arbitrary place from information captured or recorded at different times and in different places by the information devices 101, 102, ..., 10n. The information after the arithmetic processing is stored in one of the databases 131, 132, ..., 13k.
- The computing device 120 also performs arithmetic processing for seamlessly connecting from real information at an arbitrary time point or an arbitrary place, stored in one of the databases 111, 112, ..., 11j, to fictitious information, and conversely from fictitious information to real information. The information after this arithmetic processing is likewise stored in one of the databases 131, 132, ..., 13k.
- The information devices 141, 142, ..., 14m that reproduce recorded images, sounds, or environmental information and present them to the user are, for example, image display devices worn on the head or face (head-mounted displays) and personal information devices such as mobile phones (including smartphones), tablet terminals, electronic book readers, portable music players, and portable imaging devices.
- The information devices 141, 142, ..., 14m are carried by the user and used to present the information reproduced by the computing device 120.
- Information stored in the databases 111, 112, ..., 11j and the databases 131, 132, ..., 13k is shared among users.
- When the information devices 141, 142, ..., 14m read information stored in the databases 111, 112, ..., 11j or the databases 131, 132, ..., 13k, they reproduce it and present it to the user.
- When the information devices 141, 142, ..., 14m reproduce and output information read from the databases 131, 132, ..., 13k, they can present it to the user as information reproduced at an arbitrary time point or an arbitrary place.
- When the information devices 141, 142, ..., 14m request reproduction of information together with sensor information such as the user's current position/posture and environmental information, they acquire from one of the databases 131, 132, ..., 13k either information reproduced on the basis of that sensor information (for example, information reconstructed into the image seen in the user's current gaze direction at the current position) or fictitious information seamlessly connected from real information on the basis of the sensor information, and present it to the user.
- In the information reproduction system 100 shown in FIG. 1, the computing device 120 generates image, sound, or environmental information at an arbitrary point in time from the viewpoint at which the information devices 141, 142, ..., 14m perform viewing. An arbitrary point in time includes both times going back from the present and times proceeding from it, so the information devices 141, 142, ..., 14m can reproduce the situation at that arbitrary time.
- The information, such as images and sounds, generated by the arithmetic device 120 is based on the position/posture information of the desired viewing point and the desired time (a quantity indicating when, or how far back or forward in time, to view). It may also be controlled by system control information generated from setting information on the desired reproduction quality of images and sounds, and from environmental information such as illuminance, temperature, humidity, acceleration, ultraviolet intensity, and chemical substances (concentration, type, state, etc.) measured by sensors mounted on the information devices or installed outside them.
- Information generated by the computing device 120 for viewing on the information devices 141, 142, ..., 14m at, or near, the current position/posture and time can be generated in the same way as recording by the information devices 101, 102, ..., 10n. That is, it can be generated by pattern recognition or the like from sensor information (GPS, geomagnetism, acceleration sensor, image, sound) obtained by a sensor unit mounted on, or externally attached to, the device.
- The viewpoint position/posture and time desired for viewing on the information devices 141, 142, ..., 14m may be set arbitrarily by the user, or set automatically according to predetermined conditions. For example, once a setting is made, the user need not set it again each time: according to the direction of the user's line of sight, images and sounds matching the position/posture and the preset amount of time to go back or forward are played.
- Reconstruction of recorded past images and past sounds may be performed not only by the calculation device 120 on the network 150 but also by the recording-side information devices 101, 102, ..., 10n, or by the viewing-side information devices 141, 142, ..., 14m. In that case, the device may directly have an area for storing the image, sound, and position/posture information, or a database may be used.
- As described above, position/posture information and time information are added to the image or audio information stored in the databases 111, 112, ..., 11j.
- The viewing-side information devices 141, 142, ..., 14m transmit their own position/posture information (representing the user's viewpoint) to the computing device 120 on the network 150, either at the moment a viewing event occurs or in advance, in anticipation of a viewing event.
- The computing device 120 reconstructs the image and sound corresponding to the received position/posture information using the image or sound information stored in the databases 111, 112, ..., 11j, and transmits the result to the information devices 141, 142, ..., 14m.
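A minimal sketch of the selection step such a reconstruction might begin with: given the viewer's requested position and gaze direction, rank the stored recordings by spatial distance plus a viewing-direction penalty and keep the best candidates as raw material. The scoring weights and the `RecordedEntry` fields are assumptions carried over from the earlier sketch.

```python
import math

def select_sources(entries, view_pos, view_yaw, k=3, dir_weight=10.0):
    """Rank RecordedEntry objects by distance to the requested viewpoint
    plus a penalty for gaze-direction mismatch; return the k best."""
    def score(e):
        dist = math.dist(e.position[:2], view_pos[:2])
        ang = (e.posture[0] - view_yaw + 180) % 360 - 180  # wrapped yaw difference
        return dist + dir_weight * abs(ang) / 180
    return sorted(entries, key=score)[:k]
```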
- The computing device 120 can also use the image or sound information stored in the databases 111, 112, ..., 11j to estimate a future image or sound, for example from the difference between one time point and another, and transmit it to the viewing-side information devices 141, 142, ..., 14m.
- When the viewing-side information devices 141, 142, ..., 14m have preset the amount of time to go back into the past or forward into the future, or when the user sets an arbitrary time at the moment of viewing, that time information is transmitted to the arithmetic unit 120 on the network 150. Regarding this time, it is assumed that one or more time points may be marked in the recorded images and sounds on either the viewing-side information devices 141, 142, ..., 14m or the recording-side information devices 101, 102, ..., 10n, and that a marked time point can be used as a playback point.
- Such a mark may be set at an arbitrary time point, or the viewing-side information devices 141, 142, ..., 14m or the recording-side information devices 101, 102, ..., 10n may be configured in advance so that marking is automated and a mark is placed automatically at any point that matches the setting.
- As a method for generating the times to mark automatically, it is conceivable to analyze the recorded images and sounds and compile a list according to the purpose. For example, a change point may be detected by analyzing the recorded image and sound, and the point at which a state transition appears in the image or sound-field components may be extracted from the recorded information as information indicating the change point.
- Important points can then be compiled into a time list, for example the times at which the most constituent sound sources are present, the times at which the sound sources are strongest, or the times at which their pitch range is highest, and marks are placed automatically according to the purpose. Instead of building a list, it is also possible to narrow down the purpose in advance and extract only a specific state. A sketch of this kind of automatic marking follows.
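As an illustration of the automatic marking described above (a sketch of one plausible criterion, not the specification's algorithm), the code below scans an audio signal for points where the short-term level jumps sharply and returns them as candidate mark times; the frame size and jump threshold are assumptions.

```python
import numpy as np

def auto_mark(samples, rate, frame=2048, jump_db=6.0):
    """Return candidate mark times (in seconds) where the short-term audio
    level changes by more than `jump_db` between consecutive frames."""
    n = len(samples) // frame
    frames = np.reshape(np.asarray(samples[: n * frame], dtype=float), (n, frame))
    level = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)  # dB per frame
    marks = np.where(np.abs(np.diff(level)) > jump_db)[0] + 1    # frame indices
    return marks * frame / rate
```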
- The computing device 120 loads the image and audio data around the position information and time sent from the viewing-side information devices 141, 142, ..., 14m, and reconstructs the image and sound at the viewing viewpoint on the basis of the position information and posture information.
- The viewing-side information devices 141, 142, ..., 14m play the reconstructed images and sounds while loading them into internal memory via the network 150. In this way, the viewing-side information devices 141, 142, ..., 14m can view images and sounds at arbitrary points in time.
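A rough sketch of this load-while-playing behavior: a background thread fills a bounded queue with reconstructed segments fetched over the network while the playback loop drains it in order. `fetch_segment` and `render` are hypothetical stand-ins, and the buffer depth is an assumption.

```python
import queue
import threading

def stream_playback(fetch_segment, render, n_segments, depth=8):
    """Prefetch reconstructed segments into a bounded buffer while the
    playback loop consumes them in order (playing while loading)."""
    buf = queue.Queue(maxsize=depth)

    def loader():
        for i in range(n_segments):
            buf.put(fetch_segment(i))  # blocks when the buffer is full
        buf.put(None)                  # end-of-stream sentinel

    threading.Thread(target=loader, daemon=True).start()
    while (seg := buf.get()) is not None:
        render(seg)
```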
- In the example above, reconstruction of the images and sounds viewed on the information devices 141, 142, ..., 14m is performed by a single arithmetic device 120, but the series of reconstruction processes may also be distributed across a plurality of devices.
- Any place where there is a user carrying one of the information devices 101, 102, ..., 10n having a recording function, or where such a device is installed, allows access to past image and audio information. Therefore, on the viewing-side information devices 141, 142, ..., 14m, in addition to viewing images and sounds from multiple viewpoints, it becomes possible to view past situations at viewpoints in a wide range of places. Furthermore, by applying a high-speed playback technique (see, for example, Patent Document 2) to the captured video, it is possible to review the past situation and then return to the current one.
- It is also conceivable to present fictitious information generated on the basis of actually recorded information, instead of the actual image and audio information recorded by the recording-side information devices 101, 102, ..., 10n.
- The computing device 120 generates an imaginary scene through computation from an actual image at a certain point in time and a certain position/posture, and combines the imaginary scene and the actual scene without a sense of incongruity.
- By gradually turning a fictitious sight into a real one, whether the transition is temporal, spatial, or both, the user can move between the real space and the virtual space without discomfort, which makes it possible to give a deeper sense of immersion.
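A minimal sketch of this gradual hand-over, assuming the virtual and real frames are same-shape NumPy image arrays: a per-frame alpha blend whose weight moves from fully virtual to fully real over the transition.

```python
import numpy as np

def crossfade(virtual_frames, real_frames):
    """Blend a virtual frame sequence into the corresponding real one so
    the viewer moves between the two spaces without an abrupt cut."""
    n = len(virtual_frames)
    for i, (v, r) in enumerate(zip(virtual_frames, real_frames)):
        alpha = i / max(n - 1, 1)  # 0.0 = fully virtual, 1.0 = fully real
        yield ((1 - alpha) * v + alpha * r).astype(v.dtype)
```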
- The viewing-side information devices 141, 142, ..., 14m may display a view from a particular time or a particular position/posture on the full screen.
- Alternatively, the current view may be displayed in parallel with any one, or a combination of two or more, of the views from past, future, and differently positioned viewpoints.
- The current view may also be displayed with one, or a combination of two or more, of the views from past, future, and differently positioned viewpoints superimposed on it.
- The current information may likewise be exchanged for information from the past, the future, or another position. For example, an image visible from a certain viewpoint may deliberately be combined with a sound image that was not recorded from that viewpoint; conversely, an image viewed from another viewpoint may be combined with the sound image recorded from the original viewpoint. Further, for a certain point in time, sound images recorded before and after it may be presented in combination.
- The one or more databases 111, 112, ..., 11j that store information recorded by the information devices 101, 102, ..., 10n, and the one or more databases 131, 132, ..., 13k, may each be a dedicated database server or the server of a video sharing site.
- the recording-side information devices 101, 102,..., 10n upload the recorded image and sound information together with the position / posture, time, and other sensor information to the server of the video sharing site.
- The computing device 120 obtains the desired image and sound information from the video sharing site server on the basis of the position/posture, time information, and so on, performs arithmetic processing, and then either loads the resulting image and sound information into the viewing-side information devices 141, 142, ..., 14m or uploads it back to the server of the video sharing site.
- Image and sound information for which no position/posture/time information has been set can also be handled, since the timing at which such information is generated is not restricted.
- The computing device 120 can estimate the position, posture, and time from the images and the sound field using image recognition techniques, extract them as position/posture/time information at the time of recording, and attach them to the images and sounds stored in the databases 111, 112, ..., 11j. With this method, even existing image or sound information that was not recorded by the recording-side information devices 101, 102, ..., 10n can be used in the information reproduction system 100 by estimating and attaching position/posture and time information.
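As one plausible realization of this estimation step (an assumption, not the specification's method), untagged footage could be matched against already-georeferenced frames with local feature matching; OpenCV's ORB detector is assumed here, and the match threshold is arbitrary.

```python
import cv2

def estimate_metadata(frame, references, min_matches=30):
    """Match an untagged frame (8-bit grayscale) against (image, metadata)
    reference pairs; return the metadata of the best match, or None."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, desc = orb.detectAndCompute(frame, None)
    best, best_meta = 0, None
    for ref_img, meta in references:
        _, ref_desc = orb.detectAndCompute(ref_img, None)
        if desc is None or ref_desc is None:
            continue
        n = len(matcher.match(desc, ref_desc))
        if n > best:
            best, best_meta = n, meta  # borrow position/posture/time of best match
    return best_meta if best >= min_matches else None
```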
- Some video sharing sites provide a service for writing comments, entered by other users or automatically, and other input information onto an image (see, for example, Patent Document 3).
- Comments written in the image are mainly annotations, explanations, commentary, criticism, and opinions.
- When the calculation device 120 processes image or sound information, or when the viewing-side information devices 141, 142, ..., 14m reproduce it, such input information may be reflected in the images and sounds.
- For example, the computing device 120 can extract viewing points with many user comments from a moving image as recommended playback viewing points and feed them back to the playback-side information devices 141, 142, ..., 14m.
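A minimal sketch of this recommendation step: bucket comment timestamps into fixed-width bins and return the densest bins as candidate viewing points. The bin width and number of results are assumptions.

```python
from collections import Counter

def recommended_points(comment_times, bin_seconds=10, top=5):
    """Return the start times (seconds) of the `top` bins with the most
    user comments, as candidate recommended playback viewing points."""
    bins = Counter(int(t // bin_seconds) for t in comment_times)
    return [b * bin_seconds for b, _ in bins.most_common(top)]
```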
- Comments and other information may also be input on the viewing side, and such input information may be treated in the same way as comments or other input information entered manually or automatically on the video sharing site.
- FIG. 2 shows an internal configuration example of the information device 101 that records information to be presented to the user. It should be understood that the other information devices 102, 103,..., 10n that record information have the same configuration.
- In the figure, constituent elements essential to the information device 101 are drawn with solid lines, optional constituent elements with dotted lines, and groups of which at least one must be present with one-dot chain lines.
- In addition to basic computer components such as the calculation unit 201, the main storage unit 202, and the user input unit 203, the information device 101 includes, as an essential component, an information input unit 204 for inputting the information to be recorded. As an optional component, it may further include an information output unit 205 that reproduces recorded information and presents it to the user. Details of the information input unit 204 and the information output unit 205 will be described later.
- the information device 101 includes an external storage device 206 or a communication unit 207 as an arbitrary component of the computer.
- the external storage device 206 is composed of a large-capacity storage device such as a hard disk drive, and can be used for storing recorded information as the database 111 or 131, for example.
- the communication unit 207 includes, for example, a wired or wireless LAN (Local Area Network) interface, and is used to connect to an external device via the network 150.
- the external devices mentioned here include databases 111, 112,..., 11j, arithmetic device 120, databases 131, 132,..., 13k, information devices 141, 142,.
- In order to record the information input by the information input unit 204 and present it to the user, the information device 101 needs to include at least one of the external storage device 206 and the communication unit 207.
- the information input unit 204 and the information output unit 205 are connected to the external storage device 206 or the communication unit 207 via the buffer 208.
- FIG. 3 shows an internal configuration example of the information input unit 204 for inputting information to be recorded.
- The information input unit 204 includes a presentation information input unit 301 that inputs the information to be presented to the user, a position and orientation detection unit 302 that detects the position and orientation of the device at the time of recording, and an environment detection unit 303 that detects the environment in which the device is placed at the time of recording.
- The information input unit 204 is equipped with at least one of the presentation information input unit 301, the position/orientation detection unit 302, and the environment detection unit 303; in FIG. 3, these components 301 to 303 are drawn with one-dot chain lines.
- The presentation information input unit 301 includes at least one of an image sensor 311 for inputting images to be recorded, a microphone 312 for inputting sound to be recorded, a text input unit 313 for inputting character strings, a motion input unit 314 for inputting motions such as gestures, an odor input unit 315, a tactile input unit 316, and a taste input unit 317, and inputs at least one of image, sound, character information, motion, smell, touch, and taste as the presentation information to be presented to or shared with other users. In FIG. 3, these components 311 to 317 are drawn with one-dot chain lines.
- The position and orientation detection unit 302 includes at least one of a GPS (Global Positioning System) receiver 321, a geomagnetic sensor 322, an acceleration sensor 323, a Doppler sensor 324, and a radio wave intensity sensor 325 in order to detect the position and orientation of the information device 101. In FIG. 3, these components 321 to 325 are drawn with one-dot chain lines.
- Some of the sensors constituting the position/orientation detection unit 302 may be installed outside the information input unit 204 (that is, attached externally to the information device).
- Information such as position and posture can also be generated not from sensor detection results but by pattern recognition of the images or sounds input by the presentation information input unit 301.
- The environment detection unit 303 includes at least one environmental sensor such as a temperature sensor 331, a humidity sensor 332, an infrared sensor 333, an ultraviolet sensor 334, an illuminance sensor 335, a radio wave intensity sensor 336, and a chemical substance (concentration/type/state) sensor 337. In FIG. 3, these components 331 to 337 are drawn with one-dot chain lines.
- FIG. 4 shows an internal configuration example of the information device 141 that presents recorded information to the user. It should be understood that the other information devices 142, 143, ..., 14m that present information have the same configuration. In the figure, the same components as those shown in FIG. 2 are given the same reference numerals; essential constituent elements of the information device 141 are drawn with solid lines, optional constituent elements with dotted lines, and groups of which at least one must be present with one-dot chain lines.
- The information device 141 includes, as an essential component, an information output unit 205 that reproduces recorded information and presents it to the user. As an optional component, it may also include an information input unit 204 for inputting information to be recorded. The information input unit 204 is as described with reference to FIG. 3; details of the information output unit 205 will be described later.
- the information device 141 includes an external storage device 206 or a communication unit 207 as an arbitrary component of the computer.
- the external storage device 206 is composed of a large-capacity storage device such as a hard disk drive, and can be used as a database 111 or 131, for example, to read recorded information.
- the communication unit 207 is configured by, for example, a wired or wireless LAN interface, and is used for connecting to an external device via the network 150.
- the external devices mentioned here include information devices 101, 102,..., 10n, databases 111, 112,..., 11j, arithmetic device 120, databases 131, 132,.
- the information device 141 needs to include at least one of the external storage device 206 and the communication unit 207.
- the information input unit 204 and the information output unit 205 are connected to the external storage device 206 or the communication unit 207 via the buffer 208.
- FIG. 5 shows an internal configuration example of the information output unit 205 that reproduces and outputs recorded information and presents it to the user.
- The information output unit 205 includes at least one of a liquid crystal display 501 that displays presentation information, an organic EL display 502, a retina direct-drawing display 503, a speaker 504 that outputs presentation information as audio, a tactile display 505 that outputs tactile information, an odor display 506 that outputs the odor component of the presentation information, a temperature display 507 that outputs its temperature component, a taste display 508 that presents taste, and a device that applies electrical or physical stimulation directly to the sensory organs or the brain.
- FIG. 6 shows an internal configuration example of an information device 600 that performs all of stand-alone input of information to be recorded, information recording, and reproduction output of the recorded information.
- the information device 600 has both the functions of the information device 101 and the information device 141.
- the same components as those shown in FIG. 2 are given the same reference numerals.
- Essential constituent elements are drawn with solid lines, optional constituent elements with dotted lines, and groups of which at least one must be present with one-dot chain lines.
- The information device 600 includes an information input unit 204 that inputs information to be recorded, and an information output unit 205 that reproduces and outputs the recorded information.
- the information input unit 204 is as described with reference to FIG.
- the information output unit 205 is as described with reference to FIG.
- The calculation unit 201 executes a predetermined application and performs the same processing as the calculation device 120. That is, it processes the information input to the presentation information input unit 301 of the information input unit 204 on the basis of the sensor information from the position/orientation detection unit 302 and the environment detection unit 303 at the time of input, and performs arithmetic processing for reproducing information at an arbitrary time point or an arbitrary place from information captured or recorded at different times and in different places.
- the information after the arithmetic processing is stored in the external storage device 206 in the information device 600 or stored in any of the databases 131, 132,..., 13k on the network 150 via the communication unit 207.
- The calculation unit 201 also performs arithmetic processing for seamlessly connecting from real information stored in the external storage device 206 (or in one of the databases 111, 112, ..., 11j) to fictitious information, and conversely from fictitious information to real information.
- the information device 600 includes an external storage device 206 or a communication unit 207 as an arbitrary component of the computer.
- the external storage device 206 is composed of a large-capacity storage device such as a hard disk drive, and can be used as a database 111 or 131, for example, to read recorded information.
- the communication unit 207 is configured by, for example, a wired or wireless LAN interface, and is used for connecting to an external device via the network 150.
- the external devices mentioned here include information devices 101, 102,..., 10n, databases 111, 112,..., 11j, arithmetic device 120, databases 131, 132,.
- The information device 600 needs to include at least one of the external storage device 206 and the communication unit 207.
- the information input unit 204 and the information output unit 205 are connected to the external storage device 206 or the communication unit 207 via the buffer 208.
- FIGS. 7 to 10 show, in the form of flowcharts, examples of the basic processing executed by the recording-side information devices 101, 102, ..., 10n.
- When the information device 101 performs recording processing, it first checks whether use of image or sound recording for the arbitrary viewpoint/time function is permitted (step S701). When such use is not permitted (No in step S701), the information device 101 saves the image or sound, for example, to a private data folder in the device (step S702), and this processing routine ends.
- When use is permitted, the information device 101 subsequently checks whether manual tagging is to be performed (step S703). When manual tagging is performed (Yes in step S703), the tagging conditions are set by the user (step S704). If manual tagging is not performed (No in step S703), it is checked whether tag extraction is to be performed (step S705). If tag extraction is not performed (No in step S705), a fixed value is loaded (step S706); if it is performed (Yes in step S705), automatic setting is carried out (step S707). The tagging conditions are then set (step S708).
- the information device 101 confirms the image and audio recording modes (step S709).
- the recording mode is determined according to the following conditions.
- In one recording mode, when an end timing or an interrupt occurs (Yes in step S710) and the recording-target state and the external-input state are approximately equal to the tagging conditions set in step S708 (Yes in step S711), the information device 101 records and tags the image and sound (step S712), and then moves to the next data (step S713).
- In another recording mode, when an end timing or an interrupt occurs (Yes in step S714), the information device 101 records the image and sound (step S715). When the EOF (End Of File) of the recorded data is reached (Yes in step S716) and the recording-target state and the external-input state are approximately equal to the tagging conditions set in step S708 (Yes in step S717), the information device 101 tags the recorded image and sound (step S718) and moves to the next data (step S720).
- In yet another recording mode, when an end timing or an interrupt occurs (Yes in step S721), the information device 101 records the image and sound (step S722) and checks the state (step S723). When the recording-target state and the external-input state are approximately equal to the tagging conditions set in step S708 (Yes in step S724), the information device 101 tags the recorded image and sound (step S725) and moves to the next data (step S726).
- Finally, the information device 101 transmits the recorded data to the databases 111, 112, ..., 11j for storage (step S719).
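Condensing the flow of FIGS. 7 to 10 into a rough Python sketch: record until an end event, tag items whose state approximately matches the tagging conditions, then upload. `capture`, `matches`, and `upload` are hypothetical stand-ins, and the item structure is assumed.

```python
def record_loop(capture, matches, upload, tag_conditions, stop_event):
    """Record image/audio items, tagging those whose recording-target or
    external-input state approximately matches `tag_conditions`
    (cf. steps S711-S712), then store everything (cf. step S719)."""
    recorded = []
    while not stop_event.is_set():               # end timing / interrupt (cf. S710)
        item = capture()                         # one unit of image/audio data
        if matches(item.state, tag_conditions):  # "approximately equal" check
            item.tags.append(tag_conditions)     # record and tag (cf. S712)
        recorded.append(item)                    # then move to the next data
    upload(recorded)                             # send to databases 111, ..., 11j
```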
- FIGS. 11 and 12 show, in the form of flowcharts, an example of the basic processing executed by the computing device 120 so that images and sounds can be viewed on the viewing-side information devices 141, 142, ..., 14m. This processing procedure may also be executed directly by each of the information devices 141, 142, ..., 14m.
- When the information device 141 performs reproduction processing, it first checks whether to use the arbitrary viewpoint/time function for recorded images or sounds (step S1101). When the arbitrary viewpoint/time function is not used (No in step S1101), the information device 141 displays a real-time image (step S1106). If there is a subsequent process (Yes in step S1110), the process returns to step S1101; if not (No in step S1110), this processing routine ends.
- When the arbitrary viewpoint/time function is used (Yes in step S1101), the information device 141 checks whether the image and sound are to be generated automatically (step S1102).
- In the automatic case, the information device 141 determines the target generated image, that is, the playback content, from the viewer's preferences and intentions, the recorder's preferences and intentions, and the environmental information (step S1103). Next, the playback start/end timing is determined from the same information, and items are extracted (step S1104).
- Otherwise, the information device 141 selects the target playback image that the user desires to view, that is, determines the playback content (step S1107), then determines the playback start/end timing and sets the items (step S1108).
- Next, it is checked whether a related library exists (step S1105); if none exists, the information device 141 generates a new library (step S1109).
- It is then checked whether the tag types in the library are substantially equal to the types of the timing determination items (step S1111). If not (No in step S1111), the information device 141 generates a new tag type (step S1116). If they are substantially equal (Yes in step S1111), the information device 141 further checks whether the match between the reproduction tag and the data in the library is within a predetermined threshold (step S1112). If the match is not within the threshold (No in step S1112), the information device 141 generates a composite image (step S1117).
- If the match is within the threshold (Yes in step S1112), the information device 141 passes the contents of the reproduction and display expression to the buffer (step S1113), and the desired generated image is output (step S1114). If there is a subsequent process (Yes in step S1115), the process returns to step S1101 and the processing described above is repeated; if not (No in step S1115), this processing routine ends.
- Tags are attached in steps S712, S718, S728, and so on, at points that someone may later want to recall from the time of recording. The tags include, for example, electroencephalogram tags and keyword tags meant to be called up repeatedly. Tags are attached to the necessary locations in the video, and it must be decided how to choose where a tagged segment starts and ends. In addition, a forgery-prevention function is required when an image is reproduced (synthesized) up to the tagged point; details of the synthesis process are described later.
- Meta information related to the image reproduction method, such as highlight playback, standard-speed playback, and double-speed playback, can be attached to a tag. Other meta information is exemplified below.
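As a purely illustrative example (every key name here is an assumption, not from the specification), a tag with attached playback meta information might look like this:

```python
# Hypothetical tag record with attached playback meta information.
tag = {
    "time": 1234.5,                # marked time point in the recording (seconds)
    "kind": "keyword",             # e.g. "eeg" (electroencephalogram) or "keyword"
    "playback": "highlight",       # "highlight" | "standard" | "double_speed"
    "tamper_check": "sha256:...",  # hook for the forgery-prevention function
}
```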
- As described above, if the match between the reproduction tag and the data in the library is not within the predetermined threshold (No in step S1112), a synthesized image is generated in step S1117, thereby catching up to the time of the reproduction tag at high speed.
- This composite processing of the playback image has a forgery-prevention function and thus differs from simple chasing playback.
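A sketch of the "catch up at high speed" idea only (the forgery-prevention part is not modeled here): choose a playback-rate multiplier so the lag to the reproduction tag closes within a target wall-clock interval, capped at a maximum rate. The cap and target interval are assumptions.

```python
def catchup_rate(current_pos, tag_time, close_within=30.0, max_rate=4.0):
    """Return the playback-speed multiplier needed to reach `tag_time`
    from `current_pos` within `close_within` seconds of wall time."""
    lag = tag_time - current_pos
    if lag <= 0:
        return 1.0  # already at or past the tag: play normally
    return min(max_rate, 1.0 + lag / close_within)
```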
- Image processing technologies such as morphing and cross-dissolving are used to transform images, for example from one person to another, or from one character in an animation to another.
- Playback methods such as up/down conversion are also subject to control: for example, DOWN is set in preview mode and UP in zoom mode, and for a point of attention, UP is used to examine details while DOWN shows the overall appearance. The playback method is also controlled for each viewer of the playback object.
- FIGS. 13 to 15 show, in the form of flowcharts, the processing procedure for generating a composite image executed in step S1117 of the flowcharts shown in FIGS. 11 and 12.
- If the information necessary for generating a reproduced image exists (Yes in step S1301), the information device 141 acquires it (step S1304).
- Next, the information device 141 checks whether the generated image that the user of the viewing-side information device desires to view consists only of real images (step S1302).
- If not, the information device 141 executes the virtual space image synthesis process (step S1305); details of this process are described later. The process then proceeds to step S1317, where the information device 141 outputs the generated image.
- When the generated image that the user desires to view consists only of real images (Yes in step S1302), or when the virtual space image is to be combined by another synthesis method (Yes in step S1306), the information device 141 subsequently checks whether the time difference between the generated image and the real image is equal to or greater than a threshold (step S1303). If it is, the information device 141 checks whether the generated image the user desires to view lies in the future or in the past (step S1307).
- the information device 141 further checks whether or not the difference between the generated image and the actual space is greater than or equal to the threshold (step S1308).
- if the difference between the generated image and the actual space is greater than or equal to the threshold (Yes in step S1308), the information device 141 performs processing for generating an estimated future image at an arbitrary place/viewpoint and an arbitrary time (step S1311). Details of this generation processing are described later.
- if the difference between the generated image and the actual space is less than the threshold (No in step S1308), the information device 141 performs processing for generating an estimated future image at a certain place/viewpoint and an arbitrary time (step S1312). Details of this generation processing are described later.
- the information device 141 further checks whether the spatial difference between the generated image and the actual space is greater than or equal to a threshold (step S1309).
- if the difference between the generated image and the actual space is greater than or equal to the threshold (Yes in step S1309), the information device 141 performs playback image generation processing at an arbitrary place/viewpoint and an arbitrary time (step S1313). Details of this generation processing are described later.
- if the difference between the generated image and the actual space is less than the threshold (No in step S1309), the information device 141 performs playback image generation processing at a certain place/viewpoint and an arbitrary time (step S1314). Details of this generation processing are described later.
- the information device 141 further checks whether the spatial difference between the generated image and the real space is greater than or equal to a threshold (step S1310).
- if the difference between the generated image and the actual space is greater than or equal to the threshold (Yes in step S1310), the information device 141 performs real-time image generation processing at an arbitrary place/viewpoint and an arbitrary time (step S1315). Details of this generation processing are described later.
- if the difference between the generated image and the actual space is less than the threshold (No in step S1310), the information device 141 performs real-time image generation processing (step S1316).
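- the branch structure of steps S1303 to S1316 can be summarized as a dispatch on the time difference and the spatial difference. The sketch below is a non-authoritative Python rendering of that decision tree; names and thresholds are placeholders.

```python
def dispatch_generation(time_diff: float, space_diff: float,
                        wants_past: bool,
                        t_thresh: float, s_thresh: float) -> str:
    """Route a viewing request to one of the generation processes."""
    if time_diff < t_thresh:                        # near the current time
        if space_diff >= s_thresh:
            return "realtime_any_place_viewpoint"   # step S1315
        return "realtime"                           # step S1316
    if wants_past:                                  # desired time is past (S1307)
        if space_diff >= s_thresh:                  # spatial check (S1309)
            return "playback_any_place_viewpoint"   # step S1313
        return "playback_fixed_place"               # step S1314
    # Otherwise the desired time lies in the future (S1308).
    if space_diff >= s_thresh:
        return "future_any_place_viewpoint"         # step S1311
    return "future_fixed_place"                     # step S1312
```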
- FIGS. 16 and 17 show, in the form of flowcharts, the detailed procedure of the virtual space image synthesis processing executed in step S1305 of the flowcharts shown in FIGS. 13 to 15.
- the information device 141 first sets the playback base position, viewpoint, time, and user preference information (step S1601).
- the information device 141 detects the change speed of the position, viewpoint, and time (step S1602).
- the information device 141 calculates a computation speed that can keep up with the movement of the position and viewpoint (step S1603).
- the information device 141 sets the definition of the reproduction space/time and the permissible tolerance (step S1604).
- the information device 141 simplifies the image at the base coordinates of one coordinate-unit space, time, or space-time (step S1605).
- the information device 141 extracts the transition state and the feature amount (step S1606).
- the information device 141 checks whether there is enough data to generate a recommended reproduction image (step S1607).
- if there is not enough data to generate the recommended playback image (No in step S1607), the information device 141 moves the space-time base point by one unit step (step S1611), returns to step S1605, and repeats the same processing.
- if there is enough data to generate the recommended playback image (Yes in step S1607), the information device 141 generates a recommended playback image based on the transition state, feature amounts, and preference information (step S1608).
- the information device 141 synthesizes a reproduction image of the past, the present, or the future that serves as the return point (step S1609).
- the synthesis in step S1609 applies a synthesis method whose calculation load is reduced, for example by lowering the definition, compared with the synthesis processing in steps S1611 to S1615.
- the information device 141 checks whether or not the change in the position / viewpoint change speed is within a threshold (step S1610).
- if the change in the position/viewpoint change speed exceeds the threshold (No in step S1610), the process returns to step S1603 and the same processing is repeated.
- if the change in the position/viewpoint change speed is within the threshold (Yes in step S1610), the information device 141 checks whether generation of the estimated image over the position, viewpoint, and time change range of the virtual reality composite image has been completed (step S1612).
- if generation has not been completed (No in step S1612), the information device 141 moves the base point by one step in space and/or time according to the definition (step S1613), returns to step S1604, and repeats the same processing.
- if generation has been completed (Yes in step S1612), the information device 141 ends this processing routine.
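- as a rough, assumed illustration of the simplification in step S1605, a unit-space image could be reduced to the working definition by block-averaging. This is only one plausible reading of "simplification"; the function and parameter names are hypothetical.

```python
import numpy as np

def simplify_unit(frame: np.ndarray, definition: int) -> np.ndarray:
    """Block-average one coordinate-unit image down to `definition`
    pixels per axis, so that transition states and feature amounts
    (step S1606) can be extracted at a computation speed matching the
    viewpoint movement (step S1603). Assumes a square grayscale frame
    whose side length is divisible by `definition`."""
    k = frame.shape[0] // definition
    return frame.reshape(definition, k, definition, k).mean(axis=(1, 3))
```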
- FIGS. 18 and 19 show, in the form of flowcharts, the detailed procedure of the processing for generating an estimated future image at an arbitrary place/viewpoint and an arbitrary time, executed in step S1311 of the flowcharts shown in FIGS. 13 to 15.
- the information device 141 first sets the reproduction base point position and viewpoint information (step S1801).
- the information device 141 detects the change speed of the position and the viewpoint (step S1802).
- the information device 141 calculates a computation speed that can keep up with the movement of the position and viewpoint (step S1803).
- the information device 141 sets the space / time definition for reproduction (step S1804).
- the information device 141 searches for past recorded images whose definition is close to that at the base coordinates of one coordinate-unit space, time, or space-time (step S1805).
- the information device 141 checks whether the number of matching data items found is greater than or equal to a threshold (step S1806). If the number is less than the threshold (No in step S1806), the information device 141 changes the base coordinates in space, time, or space-time, or changes the threshold (step S1811), returns to step S1805, and repeats the search for past recorded images.
- if the number of matching data items is greater than or equal to the threshold (Yes in step S1806), the information device 141 performs simplification processing (step S1807).
- the information device 141 performs difference calculation and transition state extraction (step S1808).
- the information device 141 estimates the state transition amount of the future image according to the definition in space, time, or space-time (step S1809).
- the information device 141 generates, in unit processing, a reproduction image of the estimated future space, time, or space-time, or generates only transition information from which a reproduction image can be generated by post-processing (step S1810).
- the information device 141 checks whether or not the change in the position / viewpoint change speed is within a threshold (step S1812).
- if the change in the position/viewpoint change speed exceeds the threshold (No in step S1812), the process returns to step S1803 and the same processing is repeated.
- if the change in the position/viewpoint change speed is within the threshold (Yes in step S1812), the information device 141 further checks whether generation of the estimated image over the position/viewpoint change range has been completed (step S1813).
- if generation of the estimated image over the position/viewpoint change range has not been completed (No in step S1813), the information device 141 moves the base point by one step in space, time, or both, according to the definition (step S1814), returns to step S1804, and repeats the same processing.
- when generation of the estimated image over the position/viewpoint change range is completed (Yes in step S1813), this processing routine ends.
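- a concrete, simplified sketch of steps S1805 to S1810 follows: past records near the base time are gathered, the search window is widened when too few match (the threshold change of step S1811), and the state transition amount is estimated as a mean frame difference and extrapolated. The linear extrapolation and all names are assumptions for illustration; the specification does not fix an estimation method.

```python
import numpy as np

def estimate_future_frame(records, t_query, k_min=3,
                          window=1.0, max_window=8.0):
    """Estimate a frame at future time `t_query` from past records.

    `records` is a list of (time, frame) pairs with strictly
    increasing times and 2-D grayscale numpy frames of equal shape.
    """
    while True:
        near = [(t, f) for t, f in records
                if t <= t_query and t_query - t <= window]
        if len(near) >= k_min or window >= max_window:
            break
        window *= 2                     # relax the search (cf. step S1811)
    if len(near) < 2:
        raise ValueError("not enough past records to extrapolate")
    near.sort(key=lambda p: p[0])
    times = np.array([t for t, _ in near])
    frames = np.stack([f for _, f in near])
    # Difference calculation / transition extraction (cf. step S1808):
    # average change per unit time between consecutive records.
    rate = (np.diff(frames, axis=0) /
            np.diff(times)[:, None, None]).mean(axis=0)
    # Extrapolate the state transition to t_query (cf. steps S1809-S1810).
    return frames[-1] + rate * (t_query - times[-1])
```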
- FIGS. 20 and 21 show, in the form of flowcharts, the detailed procedure of the processing for generating an estimated future image at a certain place/viewpoint and an arbitrary time, executed in step S1312 of the flowcharts shown in FIGS. 13 to 15.
- the information device 141 first sets the playback position and base point viewpoint information (step S2001).
- the information device 141 detects the change speed of the viewpoint (step S2002).
- the information device 141 calculates a computation speed that can keep up with the movement of the viewpoint (step S2003).
- the information device 141 sets the reproduction space / time definition (step S2004).
- the information device 141 searches for past recorded images whose definition is close to that at the base coordinates of one computation-unit space, time, or space-time (step S2005).
- the information device 141 checks whether the number of matching data items found is greater than or equal to a threshold (step S2006). If the number is less than the threshold (No in step S2006), the information device 141 changes the base coordinates in space, time, or space-time, or changes the threshold (step S2011), returns to step S2005, and repeats the search for past recorded images.
- if the number of matching data items is greater than or equal to the threshold (Yes in step S2006), the information device 141 performs simplification processing (step S2007).
- the information device 141 performs difference calculation and transition state extraction (step S2008).
- the information device 141 estimates the state transition amount of the future image according to the definition in space, time, or space-time (step S2009).
- the information device 141 generates, in unit processing, a reproduction image of the estimated future space, time, or space-time, or generates only transition information from which a reproduction image can be generated by post-processing (step S2010).
- the information device 141 checks whether or not the change in the viewpoint change speed is within a threshold (step S2012).
- if the change in the viewpoint change speed exceeds the threshold (No in step S2012), the process returns to step S2003 and the same processing is repeated.
- if the change in the viewpoint change speed is within the threshold (Yes in step S2012), the information device 141 further checks whether generation of the estimated image over the position/viewpoint change range has been completed (step S2013).
- if generation of the estimated image over the position/viewpoint change range has not been completed (No in step S2013), the information device 141 moves the base point by one step in space, time, or both, according to the definition (step S2014), returns to step S2004, and repeats the same processing.
- when generation of the estimated image over the position/viewpoint change range is completed (Yes in step S2013), this processing routine ends.
- FIGS. 22 and 23 show, in the form of flowcharts, the detailed procedure of the playback image synthesis processing at an arbitrary place/viewpoint and an arbitrary time, executed in step S1313 of the flowcharts shown in FIGS. 13 to 15.
- the information device 141 first sets the reproduction base point position and viewpoint information (step S2201).
- the information device 141 detects the change speed of the position and the viewpoint (step S2202).
- the information device 141 calculates a computation speed that can keep up with the movement of the position and viewpoint (step S2203).
- the information device 141 sets the space / time definition for reproduction (step S2204).
- the information device 141 searches for past recorded images whose definition is close to that at the base coordinates of one computation-unit space, time, or space-time (step S2205).
- the information device 141 checks whether the number of matching data items found is greater than or equal to a threshold (step S2206). If the number is less than the threshold (No in step S2206), the information device 141 changes the base coordinates in space, time, or space-time, or changes the threshold (step S2211), returns to step S2205, and repeats the search for past recorded images.
- if the number of matching data items is greater than or equal to the threshold (Yes in step S2206), the information device 141 performs simplification processing (step S2207).
- the information device 141 performs difference calculation and transition state extraction (step S2208).
- the information device 141 estimates the state transition amount according to the definition in space, time, or space-time (step S2209).
- the information device 141 generates, in unit processing, a playback image of the estimated transitional space, time, or space-time, or generates only transition information from which the playback image can be reconstructed by post-processing (step S2210).
- the information device 141 checks whether or not the change in the position / viewpoint change speed is within a threshold value (step S2212).
- if the change in the position/viewpoint change speed exceeds the threshold (No in step S2212), the process returns to step S2203 and the same processing is repeated.
- if the change in the position/viewpoint change speed is within the threshold (Yes in step S2212), the information device 141 further checks whether generation of the estimated image over the position/viewpoint change range has been completed (step S2213).
- if generation of the estimated image over the position/viewpoint change range has not been completed (No in step S2213), the information device 141 moves the base point by one step in space and/or time according to the definition (step S2214), returns to step S2204, and repeats the same processing.
- FIGS. 24 and 25 show, in the form of flowcharts, the detailed procedure of the playback image synthesis processing at a certain place/viewpoint and an arbitrary time, executed in step S1314 of the flowcharts shown in FIGS. 13 to 15.
- the information device 141 first sets the playback position and base point viewpoint information (step S2401).
- the information device 141 detects the change speed of the viewpoint (step S2402).
- the information device 141 calculates a computation speed that can keep up with the movement of the viewpoint (step S2403).
- the information device 141 sets the reproduction space / time definition (step S2404).
- the information device 141 searches for past recorded images whose definition is close to that at the base coordinates of one computation-unit space, time, or space-time (step S2405).
- the information device 141 checks whether the number of matching data items found is greater than or equal to a threshold (step S2406). If the number is less than the threshold (No in step S2406), the information device 141 changes the base coordinates in space, time, or space-time, or changes the threshold (step S2411), returns to step S2405, and repeats the search for past recorded images.
- if the number of matching data items is greater than or equal to the threshold (Yes in step S2406), the information device 141 performs simplification processing (step S2407).
- the information device 141 performs difference calculation and transition state extraction (step S2408).
- the information device 141 estimates the state transition amount according to the definition in space, time, or space-time (step S2409).
- the information device 141 generates, in unit processing, a playback image of the estimated transitional space, time, or space-time, or generates only transition information from which the playback image can be reconstructed by post-processing (step S2410).
- the information device 141 checks whether or not the change in the viewpoint change speed is within the threshold (step S2412).
- if the change in the viewpoint change speed exceeds the threshold (No in step S2412), the process returns to step S2403 and the same processing is repeated.
- if the change in the viewpoint change speed is within the threshold (Yes in step S2412), the information device 141 further checks whether generation of the estimated image over the position/viewpoint change range has been completed (step S2413).
- if generation of the estimated image over the position/viewpoint change range has not been completed (No in step S2413), the information device 141 moves the base point by one step in space and/or time according to the definition (step S2414), returns to step S2404, and repeats the same processing.
- when generation of the estimated image over the position/viewpoint change range is completed (Yes in step S2413), this processing routine ends.
- FIGS. 26 and 27 show, in the form of flowcharts, the detailed procedure of the real-time image synthesis processing at an arbitrary place/viewpoint and an arbitrary time, executed in step S1315 of the flowcharts shown in FIGS. 13 to 15.
- the information device 141 first sets the reproduction base point position and viewpoint information (step S2601).
- the information device 141 detects the change speed of the position and the viewpoint (step S2602).
- the information device 141 calculates a computation speed that can keep up with the movement of the position and viewpoint (step S2603).
- the information device 141 sets the space / time definition for reproduction (step S2604).
- the information device 141 searches for recorded images whose definition is close to that at the base coordinates of one computation-unit space, within the time resolution of the traced-back time (step S2605).
- the information device 141 checks whether the number of matching data items found is greater than or equal to a threshold (step S2606). If the number is less than the threshold (No in step S2606), the information device 141 changes the base coordinates in space or changes the threshold (step S2611), returns to step S2605, and repeats the search for past recorded images.
- the information device 141 checks whether the deviation between the matching data group and the desired position/posture is greater than or equal to a threshold (step S2607).
- if the deviation between the matching data group and the desired position/posture is greater than or equal to the threshold (Yes in step S2607), the information device 141 performs simplification processing (step S2608).
- the information device 141 performs difference calculation and transition state extraction (step S2609).
- the information device 141 estimates the state transition amount of the real image according to the definition in space, time, or space-time (step S2610).
- the information device 141 synthesizes, in unit processing, a spatial image of the viewpoint-changeable area (step S2612).
- the information device 141 identifies the device recording the corresponding data, or the storage that device records to, and connects it to the storage area of the playback device (step S2613).
- the information device 141 copies the corresponding data to the playback device (step S2614).
- the information device 141 checks whether or not the change in the position / viewpoint change speed is within a threshold (step S2615).
- if the change in the position/viewpoint change speed exceeds the threshold (No in step S2615), the process returns to step S2603 and the same processing is repeated.
- if the change in the position/viewpoint change speed is within the threshold (Yes in step S2615), the information device 141 further checks whether generation of the image over the position/viewpoint change range has been completed (step S2617).
- if generation of the image over the position/viewpoint change range has not been completed (No in step S2617), the information device 141 moves the base point by one step in space, time, or both, according to the definition (step S2616), returns to step S2604, and repeats the same processing.
- when generation of the image over the position/viewpoint change range is completed (Yes in step S2617), this processing routine ends.
- the information devices 101, 102, ..., 10n have at least one of a video recording function and an audio recording function, are scattered in various places in the real world, and continue to record video or audio at multiple points.
- the information devices 101, 102,..., 10n are, for example, a head-mounted display, a wristband type device, a surveillance camera, a mobile phone, a tablet terminal, an electronic book, a mobile imaging device, or the like.
- the information devices 101, 102, ..., 10n attach position/orientation information and time information at the time of recording, obtained from a position and orientation detection unit 302 including a GPS 321, a geomagnetic sensor 322, an acceleration sensor 323, a Doppler sensor 324, a radio wave intensity sensor 325, and the like, to the recorded video or audio, store it in the databases 111, 112, ..., 11j, and make it sharable on the network 150.
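- a minimal sketch of such a record, assuming a hypothetical schema (field names are illustrative; the specification only requires that position/orientation and time side data accompany the recording):

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class RecordedClip:
    """One recorded clip plus the side data attached before sharing."""
    media_uri: str        # where the recorded video/audio is stored
    timestamp: float      # recording time (epoch seconds)
    position: tuple       # e.g. (latitude, longitude, altitude) from GPS 321
    orientation: tuple    # e.g. (yaw, pitch, roll) from the orientation sensors

def make_record(media_uri: str, position: tuple, orientation: tuple) -> dict:
    # The dict form is what would be inserted into a shared database.
    return asdict(RecordedClip(media_uri, time.time(), position, orientation))
```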
- the information devices 101, 102, ..., 10n may also include an environment detection unit 303 consisting of a temperature sensor 331, a humidity sensor 332, an infrared sensor 333, an ultraviolet sensor 334, an illuminance sensor 335, a radio wave intensity sensor 336, a chemical substance (concentration/type/state) sensor 337, and the like.
- the information devices 101, 102, ..., 10n may attach environment information at the time of recording, detected by the environment detection unit 303, instead of (or together with) the position/orientation information and time information, and store the recorded information in the databases 111, 112, ..., 11j.
- the position/orientation information, time information, and environment information may be attached to the recorded information when the information devices 101, 102, ..., 10n transmit it to the network 150, or when it is stored in the databases 111, 112, ..., 11j.
- a blur correction function that performs blur correction at the time of recording may further be provided; blur correction improves the accuracy of image reproduction and sound reproduction and the reproducibility of position and posture. Blur correction methods include a mechanical vibration correction mechanism built around the recording function of the information devices 101, 102, ..., 10n themselves, and automatic electrical correction based on sensor detection results or on detection of periodic changes in the image/audio signals.
- the latter automatic electrical correction can be performed by the information devices 101, 102, ..., 10n themselves, by another information device, or by the arithmetic unit 120.
- the information devices 141, 142, ..., 14m on the playback side, that is, on the image and sound viewing side, reproduce the image and sound of the viewpoint corresponding to the position and orientation of the device and make them available for viewing as if from that spot. A scene at an arbitrary time can also be viewed by playing back image and sound from before the current time, or by generating image and sound for a time ahead of the current time.
- the desired viewing viewpoint need not be the current position/orientation of the information devices 141, 142, ..., 14m detected by the position/orientation detection unit 302; it may be a viewpoint arbitrarily designated by the user through the user input unit 203 or the like.
- likewise, the user can designate an arbitrary time (how far back from the current time, or how far ahead of it) through the user input unit 203 or the like.
- the user need not set the desired viewing viewpoint and desired viewing time each time an image or sound is viewed; the desired viewing viewpoint may be set automatically according to the line of sight directed by the user or according to settings made in advance, and the image and sound may be processed accordingly.
- instead of (or together with) the position/orientation information and time information of the information devices 141, 142, ..., 14m, the image and sound may be reconstructed based on environment information detected by the environment detection unit 303, such as illuminance, temperature, acceleration, ultraviolet intensity, and chemical substances (concentration, type, state, etc.), or on system information generated from that environment information. For example, image and sound from the desired viewing viewpoint based on the current position and orientation of the information devices 141, 142, ..., 14m are reconstructed so as to be further adapted to the current environment information (time zone, season, weather, movement, and so on).
- from the past image and audio information recorded by the information devices 101, 102, ..., 10n, the arithmetic device 120, or the computing unit 201 in the information devices 141, 142, ..., 14m on the reproduction side, reproduces the image and sound from the desired viewing viewpoint.
- the computing device 120 may store information such as the images and sounds reconstructed for the desired viewing viewpoint in a local storage area (not shown), or may store it in the external databases 131, 132, ..., 13k.
- position/orientation information and time information at the time of recording, obtained from the position and orientation detection unit 302, are attached to the recorded image and audio information and stored in the databases 111, 112, ..., 11j.
- the reproduction-side information devices 141, 142, ..., 14m transmit their position/orientation information to the arithmetic device 120 at regular intervals in anticipation of a viewing event, or when a viewing event actually occurs.
- based on the received position/orientation information, the computing device 120 reconstructs image and sound from the information stored in the databases 111, 112, ..., 11j and delivers them to the information devices 141, 142, ..., 14m.
- viewing on the information devices 141, 142, ..., 14m is not limited to playback of past images and sounds; future images and sounds may also be presented. That is, the future can be designated as the desired viewing time.
- the processing of reconstructing future image and sound from the past image and sound stored in the databases 111, 112, ..., 11j is performed by the arithmetic device 120 or by the information devices 141, 142, ..., 14m themselves.
- the reconstruction of the future image and sound is performed by, for example, a process of estimating the future image and sound from the difference in information between certain points in time.
- the reproduction-side information devices 141, 142, ..., 14m transmit to the arithmetic device 120 on the network 150, as the desired viewing time, the amount of time to go back from the viewing time or the amount of time to advance into the future.
- the reproduction-side information devices 141, 142, ..., 14m or the recording-side information devices 101, 102, ..., 10n may set marks on the image and sound in advance, and a marked time point can serve as a reproduction point. Marks may also be set at multiple points in time.
- the recording-side information devices 101, 102, ..., 10n or the reproduction-side information devices 141, 142, ..., 14m may also set marks automatically.
- as a method for automatically generating marking times, recorded video and audio can be analyzed and listed according to purpose. For example, the recorded information is analyzed, and points where a state transition is conspicuous in the constituent elements of the image and the sound field are extracted as change points of the image and sound. More specifically, a time list is created in advance by extracting various types of points through audio analysis, such as points with the largest number of constituent sound sources, points with strong sound source intensity, and points where the sound source is strong on the high-pitch side. At reproduction time, marks are then placed automatically at the types of points that suit the purpose. Alternatively, only a specific state may be extracted by narrowing down the purpose in advance, without creating a list.
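- the following is a deliberately simple sketch of such an automatic time-list builder, using short-term energy as a stand-in for "sound source strength"; a real system would also count sources and analyze the spectrum. All names are illustrative.

```python
import numpy as np

def mark_points(samples: np.ndarray, rate: int,
                frame_len: float = 1.0, top_n: int = 5) -> list:
    """Return candidate mark times (seconds) for a mono audio track.

    Splits the 1-D signal into fixed-length frames, scores each frame
    by mean energy, and lists the start times of the strongest frames.
    """
    n = int(rate * frame_len)
    frames = samples[: len(samples) // n * n].reshape(-1, n)
    energy = (frames.astype(float) ** 2).mean(axis=1)
    top = np.argsort(energy)[::-1][:top_n]
    return sorted(float(i) * frame_len for i in top)
```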
- upon receiving position/orientation information and time information (the desired viewing viewpoint and desired viewing time) from the reproduction-side information devices 141, 142, ..., 14m, the computing device 120 loads image and audio information around that position and time from the databases 111, 112, ..., 11j and reconstructs the image and sound to be reproduced from the desired viewing time at the desired viewing viewpoint. The reconstructed image and sound are played back through the network 150 while being loaded into the main storage unit 202 of the playback-side information devices 141, 142, ..., 14m (streaming playback). In this way, the reproduction-side information devices 141, 142, ..., 14m can present image and sound from a certain point in time.
- fictitious information may also be reconstructed from past information actually recorded by the information devices 101, 102, ..., 10n and presented on the information devices 141, 142, ..., 14m. From the real scenes recorded since the information devices 101, 102, ..., 10n were installed, a fictitious scene can be generated through arithmetic processing, and the fictitious scene and the real scene can be joined seamlessly. In other words, although the fictitious scene is separated from the real scene in time or space, the presentation transitions gradually from the real scene to the fictitious scene (or, conversely, from the fictitious scene to the real scene). A user viewing such scenes can move between the real space presented by the real scene and the virtual space presented by the fictitious scene without discomfort, and can become immersed in the virtual space.
- the information devices 141, 142,..., 14m can reconstruct and display multi-viewpoint images by time, position, or orientation.
- an image of a specific viewpoint at a certain time may be displayed on the entire screen, or images may be displayed in combination: a single viewpoint across times, multiple viewpoints at a certain time, or multiple viewpoints across times.
- the current view at the viewpoint given by the current position and posture of the information devices 141, 142, ..., 14m may be displayed in combination with past and future views, or with views from viewpoints at different positions or postures.
- as methods of displaying a plurality of images in combination, some or all of them can be superimposed, or all can be displayed side by side.
- the information devices 141, 142, ..., 14m may output not only images but also a plurality of sounds and environment information reconstructed according to time, position, or orientation. For example, an image viewed from a certain viewpoint may be combined with a sound image not recorded from that viewpoint and, conversely, with the sound image recorded from that viewpoint. Similarly, for a certain point in time, sound images recorded before and after it may be presented in combination.
- the databases 111, 112, ..., 11j that share the images and sounds recorded by the recording-side information devices 101, 102, ..., 10n, and the databases 131, 132, ..., 13k that share reconstructed images and sounds (for example, reconstructed under changed environment information), may be dedicated data servers.
- video sharing sites that specialize in posting and viewing videos on the Internet are known in the industry.
- the server of a video sharing site may be used as the databases 111, 112, ..., 11j and 131, 132, ..., 13k.
- when information uploaded to a video sharing site is used in the information reproduction system 100, if position, posture, and time information has been set on the image or sound in advance, the computing device 120 uses that position, posture, and time information together with the image and sound to reconstruct, as described above, image and sound appropriate for the viewing time, and loads them onto the information devices 141, 142, ..., 14m.
- the timing for setting position, orientation, and time information on the image and sound is not particularly limited. Even if such information is not set on the image and sound uploaded to the video sharing site, the arithmetic device 120 can, for example, estimate appropriate position, orientation, and time information with image and sound recognition technology and attach it to the image or sound. With this method, not only images and sounds recorded by the recording-side information devices 101, 102, ..., 10n, but also existing images and sounds on video sharing sites can be used in the information reproduction system 100.
- video sharing sites provide a service for writing comments such as annotations, explanations, commentary, criticism, and opinions in images (see, for example, Patent Document 3).
- comments and other information entered by users or by automatic input on the video sharing site may be loaded together, so that the comments are reflected in the image or sound being viewed.
- the comments may also be written to other media, such as an electronic bulletin board, rather than onto the image itself.
- the arithmetic device 120 may collect comments and other input information written on the video sharing site into the databases 131, 132, ..., 13k and the like, generate new control information from that information, and reflect it when images and sounds are reproduced on the information devices 141, 142, ..., 14m.
- for example, the computing device 120 can extract viewing points with many user comments from the moving image as recommended playback viewing points and feed them back to the playback-side information devices 141, 142, ..., 14m.
- comments and other information may also be input when uploading the images and sounds recorded by the recording-side information devices 101, 102, ..., 10n to the video sharing site.
- this input information may be processed in the same way as comments and other information entered by user input or automatic input on the video sharing site.
- when the information reproduction system 100 according to the technology disclosed in this specification is applied to viewing meetings, lectures, and talks, it can provide a smooth and comfortable catch-up function for participants who join midway, as well as a function for promoting understanding from other viewpoints.
- a camera of a video conference system is installed at the place where the meeting, lecture, or talk is held, or shooting is performed with head-mounted displays worn by the participants.
- the cameras installed at the venue and the head-mounted displays worn by the participants correspond to the recording-side information devices 101, 102, ..., 10n.
- the captured moving images are stored in the databases 131, 132, ..., 13k.
- the head-mounted display worn by the participant is also the information device 141, 142, ..., 14m on the playback side.
- midway participants can view past videos taken at the meeting, lecture, or talk on their head-mounted displays.
- a past moving image may be played back using high-speed playback technology so that midway participants can smoothly catch up with the ongoing meeting, lecture, or talk.
- high-speed playback technology does not simply play back the moving image at a constant n-times speed (where n > 1); it classifies playback sections according to importance, shortens and compresses less important sections relative to real time, and plays back highly important sections in real time or at a speed close to real time (see, for example, Patent Document 4).
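- a toy model of this importance-weighted catch-up, with assumed speeds and threshold (the cited technique classifies sections more finely than this sketch does):

```python
def playback_plan(sections, realtime_speed=1.0, skim_speed=4.0,
                  importance_threshold=0.5):
    """Plan variable-speed playback for a list of
    (duration_seconds, importance in [0, 1]) sections.

    Important sections play near real time; the rest are compressed.
    Returns the (speed, playback_seconds) pairs and the total time.
    """
    plan, total = [], 0.0
    for duration, importance in sections:
        speed = realtime_speed if importance >= importance_threshold else skim_speed
        plan.append((speed, duration / speed))
        total += duration / speed
    return plan, total

# Example: a 30-minute meeting where two thirds is important
# catches up in about 22.5 minutes instead of 30.
plan, total = playback_plan([(600, 0.9), (600, 0.2), (600, 0.7)])
```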
- high-speed playback technology makes it easier for midway participants to understand the content of the earlier part of the meeting, lecture, or talk and lets them return to the current time efficiently, giving a sense of presence as if they had participated from the beginning.
- users such as midway participants can not only play the video from the beginning of the meeting, lecture, or talk, but can also designate the playback start time arbitrarily. For example, after looking at the agenda and deciding that the first part need not be viewed, the user may instruct playback to start from the middle (that is, from a time closer to the time of joining).
- a user may want to participate in a meeting, lecture, or talk, or watch the situation, from the viewpoint of another person or another place; for example, from where the user currently is, a specific person such as a speaker or lecturer, or a whiteboard or slide, may not be clearly visible.
- in such a case, the user requests reproduction of the moving image by designating a desired viewpoint instead of the user's current position and posture information. For example, when the viewpoint of another participant is designated, the moving image from that location (shot with the head-mounted display worn by that participant) is read from the databases 111, 112, ..., 11j and played back on the requesting user's head-mounted display. In this case, since no reconstruction processing by the arithmetic device 120 or the like is required, viewing can be performed in real time at low load.
- the arithmetic device 120 may also read moving images shot at locations close to the designated viewpoint from the databases 111, 112, ..., 11j and reconstruct the moving image from the desired viewpoint using the difference between the two.
- when viewing or replaying the recorded video on a head-mounted display or the like, each user who joined the meeting, lecture, or talk late can write comments such as opinions, annotations, explanations, commentary, and criticism about the discussion at a certain point in the past onto the video or an electronic bulletin board.
- input means include character-based input such as a keyboard, pointing devices, voice input systems and similar input systems, and gesture input including sign language.
- participants who join later can use the comments written on the video or the electronic bulletin board as clues and re-experience the discussion more efficiently while understanding the content of the discussion or lecture.
- with the catch-up function and understanding-promotion function of the information reproduction system 100 described above, users who join meetings, lectures, and talks midway can catch up smoothly and comfortably. Furthermore, by viewing images from another person's viewpoint or another viewpoint different from the user's own, the user can deepen understanding (of the content and of other people) in meetings, lectures, and talks.
- an event is photographed with cameras of a video conference system installed at the event site, surveillance cameras, recording cameras, or head-mounted displays worn by event participants.
- the cameras installed at the venue and the head-mounted displays worn by the participants correspond to the recording-side information devices 101, 102, ..., 10n.
- the captured moving images are stored in the databases 131, 132, ..., 13k.
- the head-mounted displays worn by the participants also correspond to the playback-side information devices 141, 142, ..., 14m.
- a participant who joined midway, or a user who wants to review the past, can view past videos on the head-mounted display he or she is wearing.
- when a user plays back a past video on a head-mounted display, the computing device 120, for example, uses the difference between the user's position and posture information and the position and posture information of the camera or head-mounted display that did the shooting to reconstruct the past video into image and sound from the midway participant's viewpoint.
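- the "difference between the two" can be read as a relative pose between the recording device and the viewer. The sketch below computes that relative transform under standard conventions (positions as 3-vectors, rotations as 3x3 world-from-device matrices); the actual reconstruction pipeline is not specified.

```python
import numpy as np

def relative_pose(cam_pos: np.ndarray, cam_rot: np.ndarray,
                  user_pos: np.ndarray, user_rot: np.ndarray):
    """Transform mapping camera coordinates into the viewer's frame.

    The returned (rotation, translation) would drive reprojection of
    the recorded view into the midway participant's viewpoint.
    """
    rot = user_rot.T @ cam_rot                  # camera-to-user rotation
    trans = user_rot.T @ (cam_pos - user_pos)   # camera origin in the user frame
    return rot, trans
```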
- the user can get a sense of realism as if he / she was participating in the event from the beginning without any spatial discomfort.
- alternatively, the user can designate a desired viewpoint instead of his or her current position and posture information and request reproduction of the moving image. For example, when the viewpoint of another participant is designated, the moving image shot from that location may be read from the databases 111, 112, ..., 11j and played back on the requesting user's head-mounted display.
- in this case, since no reconstruction processing by the arithmetic device 120 or the like is required, viewing can be performed in real time at low load.
- a user participating in the event can upload a moving image shot from his or her viewpoint with a head-mounted display or the like to a video sharing site, directly or through the databases 111, 112, ..., 11j.
- the computing device 120 may reconstruct the video to be viewed on a head-mounted display worn by a user participating in the event from videos uploaded to a video sharing site. Comments may be written on videos published on video sharing sites; by reflecting comments written by others in videos reconstructed from the video sharing site, a stronger sense of presence and immersion can be given to users who wear head-mounted displays and participate in the event.
- when playing back the recorded video on a head-mounted display or the like, each user participating in the event can write comments such as opinions, annotations, explanations, commentary, and criticism about the event at a certain point in the past onto the moving image or an electronic bulletin board through a keyboard, a pointing device, voice input, gesture input, or the like. Participants who join later can then use these comments as clues and re-experience more effectively while understanding what has taken place at the event so far.
- a user who joins the event midway can thus catch up smoothly and without a sense of incongruity. Furthermore, by viewing images from another person's viewpoint or another viewpoint different from the user's own, the user can deepen understanding (of the content and of other people) at the event.
- shooting is performed with cameras of video conference systems and surveillance cameras installed in places such as public spaces and street corners, with head-mounted displays worn by people and animals passing through those places, and with electronic devices and imaging devices similar to monitoring systems.
- the cameras, head-mounted displays, and monitoring systems in this case correspond to the recording-side information devices 101, 102, ..., 10n.
- the captured moving images are stored in the databases 131, 132, ..., 13k.
- the head-mounted display worn by a user who wants to watch a past video of a certain point also corresponds to the playback-side information devices 141, 142, ..., 14m.
- the arithmetic device 120 uses the difference between the user's position and posture information and the position and posture information of the camera or head-mounted display that captured the image to reconstruct the past video into image and sound from the user's viewpoint, and presents the scene that occurred in the past through the head-mounted display worn by the user.
- the user can thus feel as if he or she had been at the place from the beginning, and can confirm what happened at, for example, an accident site in the past. The user can check from another viewpoint the circumstances under which the accident occurred during on-site verification, improving the accuracy of the verification.
- on-site verification using the information reproduction system 100 can examine the scene from the past to the present from arbitrary viewpoints, such as those of a party involved (an accident victim or a suspect), surrounding surveillance cameras, or surrounding imaging devices. This is superior to verification methods whose time zones and viewpoints are limited, such as checking recordings of moving images taken by surveillance cameras or drive recorders and conventional on-site verification.
- furthermore, the difference between an image captured in the past and the current scene can be highlighted and displayed, making changes over time easy to recognize visually and facilitating verification work that is difficult to confirm by ordinary visual observation.
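- a minimal sketch of this difference highlighting, assuming aligned grayscale frames of equal shape; thresholding on absolute pixel difference is the simplest possible choice:

```python
import numpy as np

def highlight_changes(past: np.ndarray, current: np.ndarray,
                      threshold: float = 30.0) -> np.ndarray:
    """Mark pixels where the current scene differs from the past image.

    Pixels whose absolute grayscale difference exceeds `threshold`
    are painted at full intensity, making changes over time easy to
    recognize during verification work.
    """
    diff = np.abs(current.astype(float) - past.astype(float))
    out = current.astype(float)
    out[diff > threshold] = 255.0
    return out.astype(np.uint8)
```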
- with the information reproduction system 100, not only can the image and the audible sound seen from the position and posture of the user verifying the accident site be reconstructed, but sound that would not normally be audible at that position can be reproduced at the same time. It is also possible to deepen understanding of the connection between before and after by checking the situation at a certain point in time while listening to sound from before and after that point, which would not normally be audible.
- the information reproduction system 100 can be applied to a time travel system to display a scene at a certain point in time at a certain place.
- images and sounds recorded at a certain point by a video conference system or a surveillance camera, by a head-mounted display worn by a person or animal passing that point, by an electronic device or imaging device similar to a monitoring system, or by such devices mounted on an object, are stored in the databases 111, 112, ..., 11j together with the position/posture information at the time of recording.
- a user sends position information and posture information to the computing device 120 from an information device such as a head-mounted display, a mobile phone such as a smartphone, a tablet terminal, or an electronic book, and requests the image he or she wants to see.
- when the computing device 120 acquires the requested image from one of the databases 111, 112, ..., 11j, it reconstructs the image from the user's viewpoint using the difference in position and posture information and loads it onto the information device the user wears, presenting the user with the scene at that point in time. As a result, the user has the illusion of being at that place at that time.
- the information reproduction system 100 is characterized in that an arbitrary viewpoint that the user is viewing in real time can be switched to, and displayed as, the scene at a certain point in time.
- users can check the situation of a time zone they are interested in, such as the atmosphere of the place at a certain time of day, even when they are there at a different time.
- the user can perceive the historical culture of a place, such as how it developed in the past, with a sense of immersion.
- the user can also experience a predicted future.
- the information reproduction system 100 enables pseudo time travel by virtually entering a 3D map of the earth at a certain point in time. It can be applied, for example, to the evaluation of land and buildings or to travel guides, and the system itself may be regarded as an attraction.
- an imaginary sight may also be generated from an image at a certain time and a certain position/posture to produce a fictitious scene, which may be used in games and the like.
- the real space and the virtual space are joined so that movement between them does not break the user's sense of immersion.
- a head-mounted display is an image display device worn by the user on the head or face.
- for the databases 111, 112, ..., 11j that store information such as the images and sounds recorded by the recording-side information devices 101, 102, ..., 10n, and for the databases that store information such as the images and sounds processed by the arithmetic unit 120, a video sharing site server can also be used.
- (1) An information processing apparatus comprising: an information acquisition unit that acquires image or audio information; a sensor information acquisition unit that acquires position/posture information or other sensor information at the time the image or audio information is acquired; and a storage unit that stores the acquired image or audio information in a database together with the sensor information.
- (2) The information processing apparatus according to (1) above, wherein the storage unit stores the image or audio information in a dedicated database on a network or in a database of a video sharing site.
- (3) The information processing apparatus according to (1) above, wherein the storage unit performs blur correction when recording the image or sound.
- (4) An information processing apparatus comprising: an information acquisition unit that acquires image or audio information stored in a database; and an arithmetic processing unit that reproduces information at an arbitrary time or an arbitrary place from information recorded at different times or in different places.
- (5) The information processing apparatus according to (4) above, wherein the database stores the image or audio information together with position/posture information or other sensor information, and the arithmetic processing unit reproduces the image or audio information for the position/posture information of the desired viewing viewpoint and the desired time.
- (6) The information processing apparatus according to (5) above, wherein the arithmetic processing unit performs image or audio reproduction processing according to the time difference between the desired viewing time and the current time.
- (7) The information processing apparatus according to (6) above, wherein the arithmetic processing unit generates a real-time image when the time difference between the desired viewing time and the current time is less than a predetermined threshold.
- (8) The information processing apparatus according to (7) above, wherein the arithmetic processing unit generates a real-time image at an arbitrary place/viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold.
- (9) The information processing apparatus according to (6) above, wherein the arithmetic processing unit generates a future image when the desired viewing time lies in the future by a time difference from the current time equal to or greater than a predetermined threshold.
- (10) The information processing apparatus according to (9) above, wherein the arithmetic processing unit generates a future image for an arbitrary time at an arbitrary place/viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold, and generates a future image for an arbitrary viewpoint and an arbitrary time at a fixed place when the spatial difference is less than the predetermined threshold.
- (11) The information processing apparatus according to (6) above, wherein the arithmetic processing unit generates a playback image when the desired viewing time lies in the past by a time difference from the current time equal to or greater than a predetermined threshold.
- (12) The information processing apparatus according to (11) above, wherein the arithmetic processing unit generates a playback image for an arbitrary time at an arbitrary place/viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold, and generates a playback image for an arbitrary viewpoint and an arbitrary time at a fixed place when the spatial difference is less than the predetermined threshold.
- (13) An information processing method comprising: an information acquisition step of acquiring image or audio information; a sensor information acquisition step of acquiring position/posture information or other sensor information at the time the image or audio information is acquired; and a storage step of storing the acquired image or audio information in a database together with the sensor information.
- (14) An information processing method comprising: an information acquisition step of acquiring image or audio information stored in a database; and an arithmetic processing step of reproducing information at an arbitrary time or an arbitrary place from information recorded at different times or in different places.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Theoretical Computer Science (AREA)
- Astronomy & Astrophysics (AREA)
- Databases & Information Systems (AREA)
- Television Signal Processing For Recording (AREA)
- Studio Devices (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
An information processing apparatus comprising:
an information acquisition unit that acquires image or audio information;
a sensor information acquisition unit that acquires position/posture information or other sensor information at the time the image or audio information is acquired; and
a storage unit that stores the acquired image or audio information in a database together with the sensor information.
An information processing apparatus comprising:
an information acquisition unit that acquires image or audio information stored in a database; and
an arithmetic processing unit that reproduces information at an arbitrary time or an arbitrary place from information recorded at different times or in different places.
An information processing method comprising:
an information acquisition step of acquiring image or audio information;
a sensor information acquisition step of acquiring position/posture information or other sensor information at the time the image or audio information is acquired; and
a storage step of storing the acquired image or audio information in a database together with the sensor information.
An information processing method comprising:
an information acquisition step of acquiring image or audio information stored in a database; and
an arithmetic processing step of reproducing information at an arbitrary time or an arbitrary place from information recorded at different times or in different places.
(2) Automatic blur correction based on the sensors of the recording information devices 101, 102, ..., 10n themselves or on detection of periodic changes in images and sounds
(3) Automatic blur correction by detecting periodic changes in the correction-target image or sound captured by another recording information device
(2) An edge of a constituent object exceeds a certain threshold.
(3) The car's tires are worn down.
(4) The brakes were applied.
(5) Half time (a predetermined timing: agenda/program)
(6) PowerPoint is being used.
(7) A specific keyword is being repeated.
(8) Someone looked at a clock or watch.
(9) The most frequent keyword.
(2) Sound source change
(3) Manual input
(4) Device operation
(5) Brain wave level change
(6) Psychological state change
(7) Entering or leaving the room
(8) Appearance of a registered keyword
(9) Gaze fixation
(10) Glasses as the base point
(11) Viewer/photographer
(2) A scene in which many people look at their watches indicates how interesting or boring the presentation is.
(3) As with map sites, the granularity of the presented information is controlled according to the scale.
(4) Viewing angle, resolution, how a car approaches and moves.
(5) Predict a future some distance away. For example, information such as weather, for which accuracy demands are not very high, is predicted and used for image and sound reproduction.
(6) How to display temporally multi-viewpoint images (small windows, superimposition, etc.)
(7) Virtual display using actually captured images. For example, the image synthesis method is changed according to the type of terminal, such as an in-vehicle terminal or a wearable terminal.
When playing back a past moving image on the head-mounted display of a midway participant, the difference between that participant's position and posture information and the position and posture information of the camera or head-mounted display that did the shooting is used to reconstruct the past moving image into image and sound from the midway participant's viewpoint and output it from the head-mounted display, giving a sense of presence, without spatial discomfort, as if the participant had been in the meeting on the spot from the beginning.
A user may want to participate in a meeting, lecture, or talk, or watch the situation, from the viewpoint of another person or another place. For example, from where the user currently is, a specific person such as a speaker or lecturer may not be clearly visible, or a whiteboard or slides may not be clearly visible.
(1)画像又は音声の情報を取得する情報取得部と、
画像又は音声の情報を取得したときの位置・姿勢情報又はその他のセンサー情報を取得するセンサー情報取得部と、
取得した画像又は音声の情報をセンサー情報とともにデータベースに保存する保存部と、
を具備する情報処理装置。
(2)前記保存部は、ネットワーク上の専用のデータベース又は動画共有サイトのデータベースに画像又は音声の情報を保存する、
上記(1)に記載の情報処理装置。
(3)前記保存部は、画像や音声の記録に際して、ブレ補正を行なう、
上記(1)に記載の情報処理装置。
(4)データベースに保存されている画像又は音声の情報を取得する情報取得部と、
異なる時間又は異なる場所で録画又は録音された情報から、任意の時点又は任意の場所での情報を再現する演算処理部と、
を具備する情報処理装置。
(5)前記データベースは、画像又は音声の情報を置・姿勢情報又はその他のセンサー情報とともに保存しており、
前記演算処理部は、視聴希望視点の位置・姿勢情報及び時間における画像又は音声の情報を再現する、
上記(4)に記載の情報処理装置。
(6)前記演算処理部は、視聴希望の時間と現在との時間差分に応じた画像又は音声の再現処理を行なう、
上記(5)に記載の情報処理装置。
(7)前記演算処理部は、視聴希望の時間と現在との時間差分が所定の閾値未満のときには、リアルタイム画像を生成する、
上記(6)に記載の情報処理装置。
(8)前記演算処理部は、生成像と現実像の空間差分が所定の閾値以上のときには、任意の場所・視点でのリアルタイム画像を生成する、
上記(7)に記載の情報処理装置。
(9)前記演算処理部は、視聴希望の時間が現在との時間差分が所定の閾値以上となる未来の場合には、未来像を生成する、
上記(6)に記載の情報処理装置。
(10)前記演算処理部は、生成像と現実像の空間差分が所定の閾値以上のときには、任意の場所・視点で任意時間の未来像を生成し、所定の閾値未満のときには一定場所で任意視点及び任意時間の未来像を生成する、
上記(9)に記載の情報処理装置。
(11)前記演算処理部は、視聴希望の時間が現在との時間差分が所定の閾値以上となる過去の場合には、プレイバック像を生成する、
上記(6)に記載の情報処理装置。
(12)前記演算処理部は、生成像と現実像の空間差分が所定の閾値以上のときには、任意の場所・視点で任意時間のプレイバック像を生成し、所定の閾値未満のときには一定場所で任意視点及び任意時間のプレイバック像を生成する、
上記(11)に記載の情報処理装置。
(13) An information processing method comprising:
an information acquisition step of acquiring image or audio information;
a sensor information acquisition step of acquiring position/posture information or other sensor information at the time the image or audio information is acquired; and
a storage step of storing the acquired image or audio information in a database together with the sensor information.
(14) An information processing method comprising:
an information acquisition step of acquiring image or audio information stored in a database; and
a computation step of reproducing information at an arbitrary time or an arbitrary place from information recorded at different times or in different places.
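The branching of embodiments (6) through (12) reduces to two comparisons: a time difference against one threshold and a spatial difference against another. The sketch below mirrors that logic; the concrete threshold values and the Mode names are assumptions for illustration.

```python
from enum import Enum

class Mode(Enum):
    REALTIME = "real-time image"
    FUTURE = "future image"
    PLAYBACK = "playback image"

def select_generation_mode(time_diff_s: float, space_diff_m: float,
                           time_threshold_s: float = 5.0,
                           space_threshold_m: float = 1.0):
    """Choose what to synthesize, following the branching of embodiments
    (6)-(12): the time difference between the desired viewing time and now
    picks real-time / future / playback, and the spatial difference between
    the generated and real images picks free versus fixed place.

    time_diff_s: desired viewing time minus now (negative = past).
    space_diff_m: spatial difference between generated and real images.
    The default threshold values are illustrative, not from the patent.
    """
    if abs(time_diff_s) < time_threshold_s:
        mode = Mode.REALTIME                      # embodiment (7)
    elif time_diff_s > 0:
        mode = Mode.FUTURE                        # embodiment (9)
    else:
        mode = Mode.PLAYBACK                      # embodiment (11)

    if space_diff_m >= space_threshold_m:
        place = "arbitrary place and viewpoint"   # embodiments (8)(10)(12)
    else:
        place = "fixed place, arbitrary viewpoint"
    return mode, place

# Example: a viewer asks for a scene 60 s in the past, 2 m away.
print(select_generation_mode(-60.0, 2.0))
# -> (<Mode.PLAYBACK: 'playback image'>, 'arbitrary place and viewpoint')
```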
101, 102, …, 10n … Information devices (recording side)
111, 112, …, 11j … Databases
120 … Computing device
131, 132, …, 13k … Databases
141, 142, …, 14m … Information devices (playback side)
150 … Network
201 … Computation unit, 202 … Main memory, 203 … User input unit
204 … Information input unit, 205 … Information output unit
206 … External storage device, 207 … Communication unit, 208 … Buffer
301 … Presentation information input unit, 302 … Position/posture detection unit
303 … Environment detection unit, 311 … Image sensor
312 … Microphone, 313 … Text input unit
314 … Motion input unit, 315 … Odor input unit
316 … Tactile input unit, 317 … Taste input unit
321 … GPS sensor, 322 … Geomagnetic sensor
323 … Acceleration sensor, 324 … Doppler sensor
325 … Radio field intensity sensor, 331 … Temperature sensor
332 … Humidity sensor, 333 … Infrared sensor
334 … Ultraviolet sensor, 335 … Illuminance sensor
336 … Radio field intensity sensor
337 … Chemical substance (concentration, type, state, etc.) sensor
501 … Liquid crystal display, 502 … Organic EL display
503 … Direct retinal projection display, 504 … Speaker
505 … Tactile display, 506 … Odor display
507 … Temperature display, 508 … Taste display
509 … Display by electrical or physical stimulation of the sensory organs and brain
Claims (14)
- An information processing apparatus comprising:
an information acquisition unit that acquires image or audio information;
a sensor information acquisition unit that acquires position/posture information or other sensor information at the time the image or audio information is acquired; and
a storage unit that stores the acquired image or audio information in a database together with the sensor information.
- The information processing apparatus according to claim 1, wherein the storage unit stores the image or audio information in a dedicated database on a network or in a database of a video sharing site.
- The information processing apparatus according to claim 1, wherein the storage unit performs shake correction when recording images or audio.
- An information processing apparatus comprising:
an information acquisition unit that acquires image or audio information stored in a database; and
a computation unit that reproduces information at an arbitrary time or an arbitrary place from information recorded at different times or in different places.
- The information processing apparatus according to claim 4, wherein the database stores the image or audio information together with position/posture information or other sensor information, and
the computation unit reproduces image or audio information at the position/posture and time of a desired viewing viewpoint.
- The information processing apparatus according to claim 5, wherein the computation unit performs image or audio reproduction processing according to the time difference between the desired viewing time and the present.
- The information processing apparatus according to claim 6, wherein the computation unit generates a real-time image when the time difference between the desired viewing time and the present is less than a predetermined threshold.
- The information processing apparatus according to claim 7, wherein the computation unit generates a real-time image at an arbitrary place and viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold.
- The information processing apparatus according to claim 6, wherein the computation unit generates a future image when the desired viewing time lies in the future with a time difference from the present equal to or greater than a predetermined threshold.
- The information processing apparatus according to claim 9, wherein the computation unit generates a future image of an arbitrary time at an arbitrary place and viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold, and generates a future image of an arbitrary viewpoint and arbitrary time at a fixed place when the difference is less than the threshold.
- The information processing apparatus according to claim 6, wherein the computation unit generates a playback image when the desired viewing time lies in the past with a time difference from the present equal to or greater than a predetermined threshold.
- The information processing apparatus according to claim 11, wherein the computation unit generates a playback image of an arbitrary time at an arbitrary place and viewpoint when the spatial difference between the generated image and the real image is equal to or greater than a predetermined threshold, and generates a playback image of an arbitrary viewpoint and arbitrary time at a fixed place when the difference is less than the threshold.
- An information processing method comprising:
an information acquisition step of acquiring image or audio information;
a sensor information acquisition step of acquiring position/posture information or other sensor information at the time the image or audio information is acquired; and
a storage step of storing the acquired image or audio information in a database together with the sensor information.
- An information processing method comprising:
an information acquisition step of acquiring image or audio information stored in a database; and
a computation step of reproducing information at an arbitrary time or an arbitrary place from information recorded at different times or in different places.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/507,512 US20170256283A1 (en) | 2014-09-08 | 2015-06-16 | Information processing device and information processing method |
| JP2016547731A JPWO2016038964A1 (ja) | 2014-09-08 | 2015-06-16 | 情報処理装置及び情報処理方法 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2014-182771 | 2014-09-08 | ||
| JP2014182771 | 2014-09-08 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2016038964A1 true WO2016038964A1 (ja) | 2016-03-17 |
Family
ID=55458727
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2015/067355 Ceased WO2016038964A1 (ja) | 2014-09-08 | 2015-06-16 | 情報処理装置及び情報処理方法 |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20170256283A1 (ja) |
| JP (1) | JPWO2016038964A1 (ja) |
| WO (1) | WO2016038964A1 (ja) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2018207517A (ja) * | 2018-08-17 | 2018-12-27 | 株式会社コロプラ | ヘッドマウントデバイスにおける表示を制御するためにコンピュータで実行される方法、当該方法をコンピュータに実行させるプログラム、および情報処理装置 |
| CN112640472A (zh) * | 2018-07-12 | 2021-04-09 | 佳能株式会社 | 信息处理设备、信息处理方法和程序 |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10664975B2 (en) * | 2014-11-18 | 2020-05-26 | Seiko Epson Corporation | Image processing apparatus, control method for image processing apparatus, and computer program for generating a virtual image corresponding to a moving target |
| US9995847B2 (en) * | 2015-06-23 | 2018-06-12 | International Business Machines Corporation | Airborne particulate source detection system |
| US10649233B2 (en) * | 2016-11-28 | 2020-05-12 | Tectus Corporation | Unobtrusive eye mounted display |
| US10217287B2 (en) * | 2016-12-24 | 2019-02-26 | Motorola Solutions, Inc. | Method and apparatus for generating a search pattern for an incident scene |
| JP6869453B2 (ja) | 2019-02-21 | 2021-05-12 | 三菱電機株式会社 | 情報処理装置、情報処理方法及び情報処理プログラム |
| US11315326B2 (en) * | 2019-10-15 | 2022-04-26 | At&T Intellectual Property I, L.P. | Extended reality anchor caching based on viewport prediction |
| US20230027666A1 (en) * | 2021-07-13 | 2023-01-26 | Meta Platforms Technologies, Llc | Recording moments to re-experience |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2006260338A (ja) * | 2005-03-18 | 2006-09-28 | Sony Corp | タイムシフト画像配信システム、タイムシフト画像配信方法、タイムシフト画像要求装置および画像サーバ |
| WO2013009472A2 (en) * | 2011-07-11 | 2013-01-17 | Ning Alice | Conductive composites |
| JP2013175883A (ja) * | 2012-02-24 | 2013-09-05 | Sony Corp | クライアント端末、サーバ、およびプログラム |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000338858A (ja) * | 1999-05-28 | 2000-12-08 | Toshiba Corp | 仮想空間体感装置 |
| JP2003299013A (ja) * | 2002-03-29 | 2003-10-17 | Fuji Photo Film Co Ltd | 体験情報再現装置 |
| JP2004140812A (ja) * | 2002-09-26 | 2004-05-13 | Oki Electric Ind Co Ltd | 体験記録情報処理方法とその通信システム及び情報記録媒体並びにプログラム |
| JP4963105B2 (ja) * | 2007-11-22 | 2012-06-27 | インターナショナル・ビジネス・マシーンズ・コーポレーション | 画像を記憶する方法、装置 |
| JP5493456B2 (ja) * | 2009-05-01 | 2014-05-14 | ソニー株式会社 | 画像処理装置、画像処理方法、プログラム |
| IN2014CN04659A (ja) * | 2011-12-27 | 2015-09-18 | Sony Corp | |
| JP2013161416A (ja) * | 2012-02-08 | 2013-08-19 | Sony Corp | サーバ、クライアント端末、システム、およびプログラム |
| JP2014017775A (ja) * | 2012-07-11 | 2014-01-30 | Nikon Corp | 画像処理装置、撮像装置、頭部装着型情報入出力装置、及び画像処理プログラム |
2015
- 2015-06-16 JP JP2016547731A patent/JPWO2016038964A1/ja active Pending
- 2015-06-16 US US15/507,512 patent/US20170256283A1/en not_active Abandoned
- 2015-06-16 WO PCT/JP2015/067355 patent/WO2016038964A1/ja not_active Ceased
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2006260338A (ja) * | 2005-03-18 | 2006-09-28 | Sony Corp | タイムシフト画像配信システム、タイムシフト画像配信方法、タイムシフト画像要求装置および画像サーバ |
| WO2013009472A2 (en) * | 2011-07-11 | 2013-01-17 | Ning Alice | Conductive composites |
| JP2013175883A (ja) * | 2012-02-24 | 2013-09-05 | Sony Corp | クライアント端末、サーバ、およびプログラム |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112640472A (zh) * | 2018-07-12 | 2021-04-09 | 佳能株式会社 | 信息处理设备、信息处理方法和程序 |
| JP2018207517A (ja) * | 2018-08-17 | 2018-12-27 | 株式会社コロプラ | ヘッドマウントデバイスにおける表示を制御するためにコンピュータで実行される方法、当該方法をコンピュータに実行させるプログラム、および情報処理装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2016038964A1 (ja) | 2017-06-22 |
| US20170256283A1 (en) | 2017-09-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2016038964A1 (ja) | 情報処理装置及び情報処理方法 | |
| US10573351B2 (en) | Automatic generation of video and directional audio from spherical content | |
| AU2019216671B2 (en) | Method and apparatus for playing video content from any location and any time | |
| CN114236837B (zh) | 用于显示交互式增强现实展示的系统、方法和介质 | |
| US20220222028A1 (en) | Guided Collaborative Viewing of Navigable Image Content | |
| US11095947B2 (en) | System for sharing user-generated content | |
| EP2816564B1 (en) | Method and apparatus for smart video rendering | |
| US10298876B2 (en) | Information processing system, control method, and storage medium | |
| EP3384693B1 (en) | Immersive telepresence | |
| EP3014888A1 (en) | Live crowdsourced media streaming | |
| US20140294366A1 (en) | Capture, Processing, And Assembly Of Immersive Experience | |
| US20150213031A1 (en) | Geo-location video archive system and method | |
| JP2023153120A (ja) | 情報処理装置及び情報処理方法 | |
| CN114846808B (zh) | 内容发布系统、内容发布方法以及存储介质 | |
| WO2018135334A1 (ja) | 情報処理装置、および情報処理方法、並びにコンピュータ・プログラム | |
| US20080246841A1 (en) | Method and system for automatically generating personalized media collection for participants | |
| JP2014204411A (ja) | 会議記録システム、会議記録装置、会議記録再生方法およびコンピュータプログラム | |
| GB2530984A (en) | Apparatus, method and computer program product for scene synthesis | |
| US9807350B2 (en) | Automated personalized imaging system | |
| JP2020005150A (ja) | 録画再生装置及びプログラム | |
| JP2023181567A (ja) | 情報処理装置、情報処理方法、情報処理システム、及びデータ生成方法 | |
| KR20180042094A (ko) | 화자의 위치를 기반으로 한 자막 디스플레이 방법 및 이러한 방법을 수행하는 장치 | |
| US12532036B2 (en) | Multi-camera multiview imaging with fast and accurate synchronization | |
| WO2018201195A1 (en) | Devices, systems and methodologies configured to enable generation, capture, processing, and/or management of digital media data | |
| NL2016351B1 (en) | System and method for event reconstruction from image data |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15840852 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2016547731 Country of ref document: JP Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 15507512 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 15840852 Country of ref document: EP Kind code of ref document: A1 |