WO2012102276A1 - Still Image Extraction Device - Google Patents
Still Image Extraction Device
- Publication number
- WO2012102276A1 (PCT/JP2012/051460)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- extraction
- still image
- frame
- condition
- extraction condition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/49—Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/32—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
- G11B27/327—Table of contents
- G11B27/328—Table of contents on a tape [TTOC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
Definitions
- the present invention relates to a still image extraction apparatus that extracts a specific frame as a still image from a plurality of frames such as moving images.
- an object of the present invention is to enable a still image extraction apparatus that extracts a specific frame as a still image from a plurality of frames to extract a still image desired by a user.
- A still image extraction device of a first configuration, made to achieve this object, is a still image extraction device that extracts a specific frame from a plurality of frames as a still image, and comprises: extraction condition registration means for registering, in an extraction condition recording unit, an extraction condition designated by the user of the still image extraction apparatus for extracting a still image; extraction determination means for determining, for each of the plurality of frames, whether the frame matches an extraction condition registered in the extraction condition recording unit; and extraction means for extracting a frame determined to match the extraction condition as a still image.
- With this configuration, a specific still image can be extracted from a plurality of frames in accordance with an extraction condition designated by the user. Therefore, the still image desired by the user can be extracted.
- Specific extraction conditions include the facial expression in which the subject appears most beautiful, the moment of sunrise, the moment of an accident, similarity to previously taken photographs or pictures, the weather, and the like.
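The first configuration can be sketched as a registry of user-selected predicates evaluated against every frame. This is a minimal illustration only; the condition names and the dict-based frame representation are assumptions, not the patent's actual data model:

```python
# Minimal sketch of the first configuration: register user-chosen
# extraction conditions as predicates, test every frame against them,
# and collect the matching frames as "still images".
# Condition names and the dict "frame" representation are illustrative.

def make_extractor():
    conditions = {}  # stands in for the extraction condition recording unit

    def register(name, predicate):
        conditions[name] = predicate   # extraction condition registration means

    def extract(frames):
        # extraction determination means + extraction means combined
        return [f for f in frames
                if any(pred(f) for pred in conditions.values())]

    return register, extract

register, extract = make_extractor()
register("smile", lambda f: f.get("smile_score", 0) > 0.8)

frames = [{"id": 0, "smile_score": 0.3}, {"id": 1, "smile_score": 0.9}]
stills = extract(frames)
```

In this sketch, only the frame whose `smile_score` exceeds the registered threshold is extracted.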
- The device may further comprise: exclusion condition registration means for registering, in an exclusion condition recording unit, an exclusion condition designated by the user indicating a condition under which a still image is not to be extracted; exclusion determination means for determining, for each of the plurality of frames, whether the frame matches an exclusion condition registered in the exclusion condition recording unit; and extraction prohibiting means for prohibiting the extraction means from extracting a frame determined to match the exclusion condition, even among frames determined to match the extraction condition.
- Extraction condition transmission means may be provided for transmitting the extraction condition recorded in the extraction condition recording unit to a server that holds extraction conditions for extracting still images and accepts extraction conditions from outside.
- With this configuration, the extraction condition set by the user can be transmitted to the server, so that, in combination with the third configuration, the extraction condition can be shared with other still image extraction devices.
- The exclusion condition registration means can register, as an exclusion condition, that extraction of similar still images is prohibited.
- When prohibition of extracting similar still images is registered in the exclusion condition recording unit, the exclusion determination means groups the plurality of frames into groups of mutually similar frames and determines, for each group, whether the group contains two or more frames.
- The extraction prohibiting means may then prohibit extraction of two or more frames from any one group.
- Such a still image extraction apparatus suppresses the extraction of multiple similar still images. Therefore, when printing the extracted still images, cost can be reduced and the burden of choosing which still images to print is lessened.
- For example, when a frame in a group has already been selected, subsequently selected frames in the same group may be ignored, or frames in the same group may sequentially overwrite the earlier selection.
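The similar-frame exclusion described above can be sketched as grouping frames by a coarse similarity key and keeping at most one frame per group (the overwrite-within-group variant). The 8-bin brightness histogram used as the grouping key is an illustrative stand-in for a real similarity measure:

```python
# Sketch of the "no similar still images" exclusion: group frames by a
# coarse similarity key and keep only the best-scoring frame per group.
# The histogram key and the per-frame score field are illustrative.

def similarity_key(pixels, bins=8):
    hist = [0] * bins
    for p in pixels:                 # p: 0..255 grayscale value
        hist[p * bins // 256] += 1
    return tuple(hist)

def dedupe(frames, score):
    best = {}                        # one representative frame per group
    for frame in frames:
        key = similarity_key(frame["pixels"])
        if key not in best or score(frame) > score(best[key]):
            best[key] = frame        # overwrite-within-group variant
    return list(best.values())

frames = [
    {"id": 0, "pixels": [10, 10, 200], "score": 0.5},
    {"id": 1, "pixels": [12, 11, 201], "score": 0.9},  # similar to id 0
    {"id": 2, "pixels": [200, 200, 200], "score": 0.4},
]
kept = dedupe(frames, score=lambda f: f["score"])
```

Here frames 0 and 1 fall into the same group, so only the better-scoring frame 1 survives alongside the dissimilar frame 2.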
- This configuration can also be made dependent on the third and fourth configurations, without regard to the configuration described next.
- The extraction means may extract, for each group, the frame that best matches the extraction condition.
- The extraction condition registration means can register, together with a facial photograph of a specific person, an extraction condition specifying that frames containing a face similar to that photograph are to be extracted. When such a condition is registered in the extraction condition recording unit, the extraction determination means determines whether a face included in each frame is similar to the facial photograph recorded in the extraction condition recording unit.
- With such a still image extraction device, the user can register a facial photograph showing a desired expression and extract frames containing a similar expression. Therefore, still images of the facial expressions the user likes can be extracted easily.
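The face-photo matching described above can be sketched as a distance comparison between simple facial-part features (positional relationship, size, spacing, as the text describes) of a frame's face and of the registered comparison photo. The feature names and the threshold value are illustrative assumptions:

```python
# Sketch of face-photo matching: compare simple facial-part features as
# a vector distance against the registered comparison photo.
# Feature names and threshold are illustrative, not from the patent.
import math

def face_features(face):
    # e.g., eye spacing, mouth width, face height - normalized measures
    return (face["eye_spacing"], face["mouth_width"], face["face_height"])

def matches_reference(face, reference, threshold=0.1):
    dist = math.dist(face_features(face), face_features(reference))
    return dist <= threshold

reference = {"eye_spacing": 0.30, "mouth_width": 0.40, "face_height": 1.00}
candidate = {"eye_spacing": 0.31, "mouth_width": 0.41, "face_height": 1.00}
other     = {"eye_spacing": 0.50, "mouth_width": 0.20, "face_height": 0.80}
```

The candidate face lies within the distance threshold of the reference and would be extracted; the dissimilar face would not.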
- The extraction condition registration means can register, as an extraction condition, that the moment an object collides is to be extracted.
- When extraction of the moment of collision is registered in the extraction condition recording unit, the extraction determination means may compare consecutive frames in time series, detect the movement of objects present in each frame, and, by tracking each object, detect whether the shape of the object has started to change.
- In this way, the moment at which an object included in the plurality of frames starts to deform can be extracted as a still image of the moment of collision.
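The collision-moment detection described above can be sketched by tracking one object's shape metric (here, its bounding-box area) across frames and reporting the frame just before the first significant shape change. The per-frame area list and the tolerance are illustrative assumptions:

```python
# Sketch of collision-moment detection: track a shape metric per frame
# and return the frame immediately before the first significant change,
# as the text describes. Area values and tolerance are illustrative.

def collision_frame(areas, tolerance=0.05):
    """Index of the frame immediately before the first frame in which
    the tracked object's area changes by more than `tolerance`
    (relative), or None if no deformation is detected."""
    for i in range(1, len(areas)):
        if abs(areas[i] - areas[i - 1]) / areas[i - 1] > tolerance:
            return i - 1   # frame just before deformation was detected
    return None

# A ball approaching (area constant), then squashing on impact:
areas = [100, 100, 100, 80, 60, 70, 90]
```

For this series the area first changes between frames 2 and 3, so frame 2 is reported as the moment of collision.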
- The extraction condition registration means can register, as an extraction condition, that the frame in which the deformation amount of an object is at its maximum is to be extracted.
- When this condition is registered in the extraction condition recording unit, the extraction determination means may compare consecutive frames in time series, detect the movement of objects present in each frame, and, by tracking each object, determine whether the object has deformed and the deformation has stopped increasing.
- Such a still image extraction apparatus can extract a still image at the moment when the deformation amount of an object included in a plurality of frames is maximized.
- Thus, a so-called decisive-moment photograph, such as the moment a batter hits a home run in a stadium, can easily be extracted from a plurality of frames.
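The maximum-deformation extraction can be sketched as follows: deformation grows after the collision, peaks, then recedes, and the frame just before the first decrease is the maximum-deformation frame. The deformation series (e.g., one minus the ratio of current to resting area) is an illustrative assumption:

```python
# Sketch of maximum-deformation extraction: return the frame
# immediately before the deformation amount first starts to decrease.
# The deformation series is an illustrative per-frame measure.

def max_deformation_frame(deformation):
    for i in range(1, len(deformation)):
        if deformation[i] < deformation[i - 1]:   # decrease detected
            return i - 1   # frame immediately before the decrease
    return None            # deformation never started to recede

deformation = [0.0, 0.0, 0.2, 0.4, 0.3, 0.1]
```

Here the deformation peaks at frame 3 and recedes from frame 4 onward, so frame 3 is reported.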
- Display output means for sequentially outputting a plurality of frames to a display device;
- External command input means through which the user inputs a specific external command;
- When the user inputs a specific external command via the external command input means, the extraction determination means may determine that a frame preceding the frame being output by the display output means at that moment, by a predetermined number of frames corresponding to the user's reaction speed, better matches the extraction condition than the currently displayed frame.
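The reaction-speed correction above amounts to stepping back a fixed number of frames from the frame displayed when the command arrived. The frame rate and the 0.2 s reaction time below are illustrative assumptions:

```python
# Sketch of the reaction-speed correction: when the user presses the
# capture command while watching output, pick the frame shown roughly
# one reaction time earlier. fps and reaction_s are illustrative.

def corrected_frame_index(pressed_index, fps=30, reaction_s=0.2):
    offset = round(fps * reaction_s)     # frames elapsed while reacting
    return max(0, pressed_index - offset)
```

At 30 fps with a 0.2 s reaction time, a command received while frame 100 is displayed selects frame 94; the clamp keeps the index valid near the start of the sequence.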
- The plurality of frames referred to in the present invention includes both a moving image reproduced after capture and a live moving image being monitored during capture.
- The extraction condition registration means can register, as an extraction condition, that the moment an object reaches its highest point is to be extracted.
- In that case, the extraction determination means may compare consecutive frames in time series to detect the movement of objects present in each frame, and determine whether an object has moved upward and then stopped.
- Such a still image extraction apparatus can extract a still image at the moment when an object included in a plurality of frames reaches the highest point. Therefore, for example, the frame at the moment when the animal jumps the highest can be extracted.
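The highest-point detection can be sketched by tracking the object's height per frame and returning the frame where upward motion turns downward. The height series is an illustrative assumption:

```python
# Sketch of highest-point extraction: return the frame at which the
# tracked object stops rising and turns downward (the apex of a jump).
# The per-frame height series is illustrative.

def apex_frame(heights):
    for i in range(1, len(heights) - 1):
        if heights[i] >= heights[i - 1] and heights[i] > heights[i + 1]:
            return i       # moved upward, then turned downward
    return None

heights = [0.0, 0.5, 0.9, 1.1, 1.0, 0.6]  # a jump
```

For this series the object peaks at frame 3, which would be extracted as the still image of the jump's apex.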
- The extraction means may comprise: matching frame extraction means for extracting a matching frame, that is, a frame determined to match the extraction condition; front and rear frame extraction means for extracting front and rear frames, that is, a predetermined number of frames captured immediately before and after the matching frame; and selection extraction means for extracting, as the still image, a frame selected by the user from among the matching frame and the front and rear frames.
- With this configuration, the device first narrows the candidates down to a few frames, the user then selects the optimum frame, and the selected frame is extracted as a still image. Therefore, the user can easily select the best frame.
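This candidate-narrowing step can be sketched as collecting the matching frame plus a few frames captured immediately before and after it, then letting the user pick one. The window size is an illustrative assumption:

```python
# Sketch of candidate selection: the matching frame plus `window`
# neighbors on each side, from which the user chooses one still image.
# The window size and integer frame identifiers are illustrative.

def candidate_frames(frames, match_index, window=2):
    lo = max(0, match_index - window)
    hi = min(len(frames), match_index + window + 1)
    return frames[lo:hi]   # matching frame with its neighbors

frames = list(range(10))            # stand-in frame identifiers
candidates = candidate_frames(frames, match_index=5)
chosen = candidates[1]              # e.g., the user's selection
```

The slice bounds are clamped so the window stays valid at the start and end of the sequence.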
- A still image extraction program of a thirteenth configuration, made to achieve the above object, is a program for causing a computer to function as each means constituting any one of the above still image extraction devices.
- FIG. 1 is a block diagram showing a schematic configuration of an image extraction system 1 to which the present invention is applied.
- FIG. 2A is a flowchart showing the first half of the extraction condition setting process executed by the microcomputer 11 of the imaging apparatus 10, and FIG. 2B is a flowchart showing the latter half of that process.
- FIG. 3 is a flowchart showing the exclusion condition setting process executed by the microcomputer 11 of the imaging apparatus 10. Further flowcharts show the first, second, and third parts of the still image extraction process performed by the microcomputer 11, and the exclusion process within the still image extraction process.
- SYMBOLS 1 ... Image extraction system, 10 ... Imaging device, 11 ... Microcomputer, 12 ... Imaging part, 13 ... Moving image recording part, 14 ... Communication part, 15 ... Condition recording part, 16 ... Still image recording part, 21 ... Operation part, 22 ... Display unit, 23 ... Sound collection unit, 30 ... Server, 40 ... Internet network, 50 ... Base station, 31 ... Extraction condition DB, 111 ... Memory.
- the image extraction system 1 is a system having a function of extracting a specific frame as a still image from a moving image or the like configured with a plurality of frames.
- The image extraction system 1 comprises an imaging device 10 (still image extraction device) and a server 30 connected to the Internet network 40; the imaging device 10 and the server 30 can communicate with each other via the Internet network 40 and a base station 50 functioning as a wireless communication base station.
- the imaging device 10 has a function as a well-known video camera capable of recording a moving image in an arbitrary format such as MPEG4 format. It also has a function to extract still images from moving images.
- The imaging device 10 includes a microcomputer 11, an imaging unit 12, a moving image recording unit 13, a communication unit 14, a condition recording unit 15 (extraction condition recording unit, exclusion condition recording unit), a still image recording unit 16, an operation unit 21 (an example of external command input means), a display unit 22, and a sound collection unit 23.
- The microcomputer 11 is configured as a well-known microcomputer including a CPU, ROM, RAM, and the like, and controls the entire imaging apparatus 10 by executing programs (such as the still image extraction program) stored in a memory 111 such as the ROM.
- the imaging unit 12 is configured as a camera including a lens and an image sensor, and causes the moving image recording unit 13 to record captured moving image data (for example, 30 fps).
- the moving image recording unit 13 is configured as a known memory.
- the communication unit 14 is configured as a communication module that mediates wireless communication between the imaging device 10 and the base station 50. Any method can be adopted as the communication method. For example, it may be configured as a communication module of a wireless LAN or a mobile phone.
- the condition recording unit 15 is configured as a memory for recording various conditions when a still image is extracted from a moving image.
- the still image recording unit 16 is configured as a memory for recording the extracted still image.
- The recording units 13, 15, and 16 may be configured as physically separate memories, or as a single physical memory area divided and used by function.
- the operation unit 21 is configured as an interface for a user to input a command to the imaging device 10. Specifically, the operation unit 21 is configured as a plurality of buttons, a touch panel, and the like.
- The imaging device 10 (microcomputer 11) can switch among a plurality of operation modes via input from the operation unit 21: a recording mode for recording a moving image using the imaging unit 12; a playback mode for reproducing a moving image or still image recorded in the moving image recording unit 13 or still image recording unit 16; a condition setting mode for setting conditions for extracting still images from a moving image; a still image extraction mode for extracting still images from a moving image; and a transmission mode for transmitting extraction conditions and still images to the outside.
- the display unit 22 is configured as a liquid crystal color display or an organic EL display.
- The display unit 22 displays images according to instructions from the microcomputer 11 (for example, the moving image being recorded when recording with the imaging unit 12), and, when a moving image or still image recorded in the moving image recording unit 13 or still image recording unit 16 is played back, displays the reproduced moving image or still image.
- the sound collection unit 23 is configured as a well-known microphone, and sends the detected sound wave signal to the microcomputer 11.
- the server 30 is configured to be connectable to a large number of imaging devices 10 via the Internet network 40 and the base station 50, and is configured to be able to acquire setting contents in each imaging device 10.
- each imaging device 10 is configured to be able to acquire setting contents by other imaging devices 10 recorded in the extraction condition DB 31 and setting contents prepared in the server 30 in the condition setting mode.
- In the imaging device 10, processing for setting conditions for extracting a still image from a moving image and processing for extracting still images from the moving image according to the set conditions are executed.
- The condition setting processing will be described with reference to FIGS. 2A, 2B, and 3. The extraction condition setting process (FIGS. 2A and 2B) sets an extraction condition, that is, a condition for extracting a still image from a moving image, and the exclusion condition setting process (FIG. 3) sets an exclusion condition, that is, a condition under which a still image is not extracted from the moving image even when the extraction condition is met.
- The extraction condition setting process and the exclusion condition setting process each start when the imaging apparatus 10 is set to the condition setting mode via the operation unit 21, and are repeatedly executed until the condition setting mode is switched to another mode.
- In the extraction condition setting process, it is first determined whether the device is set to acquire extraction conditions from the server 30 (S5). Whether extraction conditions are acquired from the server 30 is set in advance by an operation via the operation unit 21.
- the extraction condition is set to be acquired from the server 30 (S5: YES), the extraction condition is acquired from the server 30 and the acquired extraction condition is set (S10: an example of an extraction condition acquisition unit).
- When the server 30 receives a request for extraction conditions from the imaging device 10, it returns extraction conditions to the requesting imaging device 10. At this time, the server 30 may return all the extraction conditions it holds and let the imaging apparatus 10 select which to set, or it may present a plurality of extraction conditions (together with still images corresponding to them) to the imaging device 10 for selection and return only the extraction conditions selected at the imaging device 10.
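The condition exchange in S5 to S10 can be sketched as a simple request/response: the server returns either its whole catalogue or only the conditions the device selected. All message fields and condition identifiers below are illustrative assumptions:

```python
# Sketch of the S5-S10 condition exchange: the device requests
# extraction conditions, the server returns its catalogue (optionally
# with sample still images), and the device sets only the selected
# ones. Field names and condition ids are illustrative.

SERVER_CONDITIONS = [
    {"id": "good_expression", "sample": "smile.jpg"},
    {"id": "collision_moment", "sample": "impact.jpg"},
]

def request_conditions(selected_ids=None):
    if selected_ids is None:
        return SERVER_CONDITIONS            # return everything held
    return [c for c in SERVER_CONDITIONS if c["id"] in selected_ids]

chosen = request_conditions({"collision_moment"})
```

A device that sends no selection receives the full catalogue; one that selects by id receives only the matching conditions.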
- In this case, extraction conditions corresponding to those set in the processes of S55 to S170, described later, are acquired and set.
- Note that "setting" in this process does not finalize the extraction condition; it means that the extraction condition is temporarily recorded in a memory such as the RAM. The extraction condition is fixed when it is registered in S175, described later.
- If the device is set to extract still images of the user's favorite facial expression (S55: YES), the still images (comparison photos) recorded in the still image recording unit 16 are displayed on the display unit 22 so that the user can select a photograph in which the expression is good (S60). Note that the user can, in advance, capture a still image (comparison photo) input from an interface (not shown) by operating the operation unit 21 and register it in the still image recording unit 16.
- Next, the face angle is selected; this allows the user to change the face angle when the expression of the selected still image is to the user's liking but the face angle is not.
- For example, the display unit 22 displays options such as “right +10 degrees” and “down +15 degrees” together with a face at the corresponding angle (a real photograph or a model character), and lets the user select the desired angle.
- If the face angle has not been selected, the process returns to S70. If the face angle has been selected (S75: YES), the facial expression features of the selected still image and the selected face angle are set (S80). The facial expression features represent information such as the positional relationship, size, and spacing of facial parts.
- Next, the user selects whether to extract the moment an object collides, the frame in which the deformation amount becomes maximum after the collision, or frames around a predetermined time of the collision.
- If so, the setting is made so that the moment at the highest point is extracted (S160). Subsequently, it is determined whether the user has set, by operating the operation unit 21, that the moment a sound is detected (a frame linked to sound) is to be extracted (S165).
- the exclusion condition setting process first, it is determined whether or not the exclusion condition is set to be acquired from the server 30 (S205). Note that whether or not the exclusion condition is acquired from the server 30 is set in advance by an operation via the operation unit 21.
- If the exclusion condition is set to be acquired from the server 30 (S205: YES), the exclusion condition is acquired from the server 30 by the same method as for the extraction condition, and the acquired exclusion condition is set (S210).
- When set manually, exclusion conditions as shown in S220 to S235, described later, are set. The exclusion condition that has been set is confirmed when it is registered in the process of S240, described later.
- When the exclusion condition is to be set manually (S215: YES), the exclusion condition is set by executing the processing from S220 onward.
- the still image extraction process is a process that starts when the user sets the imaging apparatus 10 to the still image extraction mode via the operation unit 21 and further selects a moving image from which a still image is to be extracted.
- the first extraction condition among the registered extraction conditions is selected (S305).
- Next, the n = 0th frame, that is, the first frame constituting the moving image, is selected (S310). Subsequently, it is determined which condition the selected extraction condition is (S315). If the extraction condition is to extract a still image of the user's favorite facial expression (good expression) (S315: good expression), the face portion in the selected frame is extracted by well-known image processing (S355), compared with the registered comparison photograph (S360), and the face angle (the angle with respect to a reference direction, e.g., the front) is calculated (S365).
- The degree of coincidence of the expression with the comparison photograph and the degree of coincidence of the angle are scored, and a coincidence score is calculated (S370).
- the face angle can be detected by specifying the position of the face part using a well-known image processing technique.
- The coincidence score is compared with a threshold set such that the comparison photograph and the face in the frame have nearly the same expression and the face direction can be recognized as specified (S375). If the score is equal to or greater than the threshold (S375: YES), the process proceeds to S555, described later; if it is less than the threshold (S375: NO), the process proceeds to S560, described later.
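The scoring in S370 to S375 can be sketched as combining expression similarity and face-angle agreement into one coincidence score compared against a threshold. The weights, the 0 to 100 scale, and the threshold value are illustrative assumptions:

```python
# Sketch of the S370-S375 score: combine expression similarity and
# face-angle agreement into one coincidence score, then threshold it.
# Weights, scale, and threshold are illustrative.

def coincidence_score(expr_similarity, angle_error_deg,
                      max_angle_error=45.0):
    expr_part = 50.0 * expr_similarity                          # 0..50
    angle_part = 50.0 * max(0.0, 1 - angle_error_deg / max_angle_error)
    return expr_part + angle_part                               # 0..100

score = coincidence_score(expr_similarity=0.9, angle_error_deg=9.0)
passed = score >= 80.0    # stands in for the S375 threshold
```

With a near-matching expression and a small angle error, the score clears the threshold and the frame would proceed to flagging.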
- a moving object (and a stationary object) is extracted from the frame (S405).
- an object included in the selected frame is extracted, and by detecting the position of the object in the already selected frame, it is determined whether or not the object is a moving object. Therefore, the moving object is not extracted in the frame immediately after the start of the moving image.
- Here, deformation of a moving object means deformation caused by the moving object colliding with another object (another moving object or a stationary object); deformation not caused by a collision is excluded.
- If the deformation has not started, the process proceeds to S420, described later. If deformation of the moving object has started (S410: YES), the frame immediately before the frame in which the start of deformation was detected is set as the frame at the time of collision (S415). This is because the frame in which deformation is actually detected is a post-collision frame, and the immediately preceding frame can be estimated to be the frame at the moment of collision (the same applies to the processing of S425 and S465, described later).
- Next, the deformation amount due to the collision is compared with that in past frames, and it is determined whether the deformation amount has started to decrease in the current frame (S420). That is, it is determined whether the deformation that started with the collision has reached its maximum and the object is now returning toward its original shape.
- If the deformation amount has not started to decrease, the process proceeds to S555, described later. If it has started to decrease (S420: YES), the frame immediately before the frame in which the start of the decrease was detected is set as the maximum deformation frame (S425).
- In the same way as in S405, moving objects (and stationary objects) are extracted (S455), and it is determined whether a moving object has started to descend (S460).
- If the moving object is jumping, it is determined whether the current frame is the moment at which the object, having moved upward, stops for an instant and turns downward. Specifically, the behavior of the moving object is detected in past frames, and the transition from moving upward to starting to move downward is detected.
- A representative object among the objects present in the frame (for example, the object with the largest area ratio in the frame, or a moving object whose speed has changed) is selected, and the distance to it is detected and set.
- A detection result from an infrared sensor or radar provided in the imaging device 10 may be used, or a detection result from an external distance measurement device may be used.
- When the imaging device 10 is configured as a stereo camera, the distance to the object may be detected by image processing; when the imaging device 10 is fixed in place, for example on a road, the relationship between the position of an object within the frame and its distance may be recorded in the imaging device 10 in advance and used.
- The process of S520 captures the moment a sound is generated.
- A flag corresponding to the type of setting (a flag indicating that the frame is to be extracted as a still image) is attached to the frame set in the above processing (S555: an example of extraction means).
- For example, a flag indicating a collision frame is attached to the frame set as the frame at the time of collision.
- the frame number n is incremented (that is, the next frame is selected) (S570). Then, it is determined whether or not the frame number n is equal to or greater than the number of frames (nmax) constituting the moving image (that is, whether the last frame of the moving image has been selected) (S575).
- If the last frame has not been reached, the first extraction condition is selected again (S580), and the processing from S315 is executed for the next frame. If the frame number n is equal to or greater than the number of frames constituting the moving image (S575: YES), the exclusion process is performed (S585).
- the degree of blurring of the selected extracted frame is detected (S615).
- The degree of blur refers to the degree of focus, the clarity of contours (edges), and the like, digitized by image processing. It is then determined whether the value is within an allowable range (S620).
- Whether the value is within the allowable range can be determined by checking whether it is equal to or greater than a predetermined threshold (a value at which most people would judge a still image to be clear). If the value is outside the allowable range (that is, if the still image is not clear), the frame is excluded from extraction by canceling the flag, set on the selected frame, indicating that it is to be extracted as a still image (S625: an example of extraction prohibiting means).
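The blur check in S615 to S625 can be sketched by quantifying edge clarity as the mean absolute difference between neighboring pixels and dropping frames below a threshold. The one-dimensional pixel rows and the threshold value are illustrative assumptions:

```python
# Sketch of the S615-S625 blur check: digitize edge clarity as the mean
# absolute neighbor difference and keep only frames at or above a
# threshold. Pixel rows and threshold are illustrative.

def sharpness(pixels):
    diffs = [abs(a - b) for a, b in zip(pixels, pixels[1:])]
    return sum(diffs) / len(diffs)

def keep_frame(pixels, threshold=20.0):
    return sharpness(pixels) >= threshold   # S620's allowable range

sharp  = [0, 255, 0, 255, 0]       # strong edges
blurry = [100, 105, 110, 105, 100]  # weak edges
```

The high-contrast row passes the check while the low-contrast row is excluded, mirroring the flag cancellation in S625.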
- Finally, the frames flagged for extraction are extracted as still images and recorded in the still image recording unit 16 (S590: an example of extraction means), and the process ends.
- the upload process is a process that is started when the imaging apparatus 10 is set to the transmission mode and then repeatedly executed until it is changed to another mode.
- the upload process first, as shown in FIG. 6, it is determined whether or not the user has selected a still image (photograph) extracted via the operation unit 21 (S905).
- If no still image has been selected (S905: NO), the upload process is terminated. If a still image has been selected (S905: YES), the extraction conditions (including exclusion conditions) used when this still image was extracted are obtained from the flags attached to the still image (S910). The extraction conditions and the still image data are then transmitted to the server 30 (S915: an example of an extraction condition transmission unit; S920), and the upload process is terminated. Note that the server 30 records the still image uploaded from the imaging device 10 and its extraction conditions in the extraction condition DB 31, together with information identifying the imaging device 10.
- In this way, the still image and its extraction conditions are transmitted to the server 30. Therefore, if other users see the still image recorded in the server 30 and wish to take a similar one, they can, by selecting that still image, acquire its extraction conditions and set them in their own imaging apparatus 10 to take a similar still image.
- The manual extraction process extracts a still image based on the timing at which the user inputs an extraction command to the imaging apparatus 10, irrespective of the extraction conditions described above.
- This manual extraction process is a process that starts when the imaging apparatus 10 is set to the recording mode or the playback mode and is repeatedly executed until the mode is changed to another mode.
- First, it is determined whether the imaging device 10 is recording or reproducing a moving image (S955). If it is not (S955: NO), the manual extraction process is terminated.
- If the imaging device 10 is recording or reproducing a moving image (S955: YES), the moving image being recorded or reproduced is displayed on the display unit 22 (S960: an example of display output means).
- Next, it is determined whether an extraction command has been input (S965: an example of extraction determination means). The extraction command corresponds to a specific operation performed by the user on the operation unit 21, such as an operation to release the shutter.
- the microcomputer 11 of the imaging device 10 registers the extraction conditions for extracting a still image specified by the user of the imaging device 10 in the condition recording unit 15. It is determined for each frame whether or not the plurality of frames meet the extraction condition registered in the condition recording unit 15. Then, the microcomputer 11 extracts a frame determined to meet the extraction condition as a still image.
- a still image can be extracted from a moving image in accordance with an extraction condition specified by the user. Therefore, the still image desired by the user can be extracted.
- In the imaging apparatus 10, the microcomputer 11 also registers, in the condition recording unit 15, an exclusion condition specified by the user of the imaging apparatus 10 and representing a condition under which still images are not to be extracted, and determines for each frame whether the plurality of frames match the exclusion condition registered in the condition recording unit 15. The microcomputer 11 then prohibits extraction of frames determined to match the exclusion condition from among the frames determined to match the extraction condition.
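The extraction/exclusion flow described in the two points above can be sketched as a single per-frame loop. This is an illustrative sketch only, assuming the registered extraction and exclusion conditions are supplied as predicate functions; the names `condition` and `exclusion` are hypothetical, not from the patent.

```python
def extract_stills(frames, condition, exclusion=None):
    """Per-frame determination: keep frames matching the registered
    extraction condition, but prohibit extraction of frames that also
    match the registered exclusion condition."""
    extracted = []
    for frame in frames:
        if condition(frame) and not (exclusion and exclusion(frame)):
            extracted.append(frame)
    return extracted
```

For example, `extract_stills(list(range(10)), lambda f: f % 2 == 0, lambda f: f > 5)` returns `[0, 2, 4]`: even frames match, but those also matching the exclusion predicate are prohibited.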
- The microcomputer 11 acquires extraction conditions from a server that is located outside the imaging apparatus 10 and holds extraction conditions for still image extraction. According to such an imaging apparatus 10, conditions for extracting a favorite still image can be acquired from the server, so anyone can easily set the moment to be captured as a still image.
- The microcomputer 11 holds extraction conditions for still image extraction and, upon receiving an extraction condition from the outside, transmits the extraction conditions recorded in the condition recording unit 15 to the server holding extraction conditions.
- the extraction condition set by the user is transmitted to the server, so that this extraction condition can be shared with other imaging devices 10.
- The microcomputer 11 can register, as an exclusion condition, that extraction of similar still images is prohibited. When this exclusion condition is registered in the condition recording unit 15, the plurality of frames are grouped by similarity, it is determined whether two or more frames exist in each group, and extraction of two or more frames per group is prohibited.
- Such an imaging apparatus 10 can suppress the extraction of a plurality of similar still images. Therefore, when printing the extracted still image, the cost can be reduced and the complexity of selecting the still image to be printed can be reduced.
- the microcomputer 11 extracts a frame that best matches the extraction condition for each group. According to such an imaging apparatus 10, it is possible to extract only the optimum frame desired by the user.
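The grouping-and-best-match behavior above can be sketched as follows. This is a minimal sketch under stated assumptions: frame similarity is judged here by a simple normalized-histogram overlap, which is only one possible measure (the patent does not specify one), and `dedupe_similar`, `scores`, and `sim_threshold` are hypothetical names.

```python
import numpy as np

def dedupe_similar(frames, scores, sim_threshold=0.9):
    """Group similar frames and keep, per group, only the index of the
    frame that best matches the extraction condition (highest score)."""
    def hist(frame):
        h, _ = np.histogram(frame, bins=16, range=(0, 256))
        return h / max(h.sum(), 1)

    groups = []  # each entry: [representative histogram, best index]
    for i, frame in enumerate(frames):
        h = hist(frame)
        for g in groups:
            # histogram intersection as a crude similarity measure
            if np.minimum(g[0], h).sum() >= sim_threshold:
                if scores[i] > scores[g[1]]:
                    g[1] = i
                break
        else:
            groups.append([h, i])
    return sorted(g[1] for g in groups)
```

Two near-identical frames fall into one group, and only the higher-scoring one survives, which is the behavior that reduces printing cost and selection effort.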
- The microcomputer 11 can register, as an extraction condition and together with a face photograph of a specific person, that frames containing a face similar to the face photograph are to be extracted. When this is registered in the condition recording unit 15, it is determined whether the face contained in each frame is similar to the recorded face photograph.
- With such an imaging apparatus 10, it is possible to register a face photograph of the expression to be extracted and extract frames containing a facial expression similar to that photograph. Therefore, still images of the user's favorite facial expressions can be extracted easily.
- The microcomputer 11 can register, as an extraction condition, that the moment an object collides is to be extracted. When this is registered in the condition recording unit 15, the movement of an object present in each frame is detected by comparing successive frames in time series, and by tracking the object it is determined whether the shape of the object has started to change.
- The microcomputer 11 can register, as an extraction condition, that the moment the deformation amount of an object is at its maximum is to be extracted. When this is registered in the condition recording unit 15, the movement of an object present in each frame is detected by comparing successive frames in time series, and by tracking the object it is determined whether the object has deformed and the deformation has stopped.
- With such an imaging apparatus 10, a still image can be extracted at the moment the deformation amount of an object in the moving image is at its maximum.
- Thus, a so-called decisive-moment photograph, such as the moment a batter hits a home run in a stadium, can easily be extracted from a moving image.
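The "deformation grows, then stops" criterion can be sketched as a scan over per-frame deformation measurements. This assumes a tracked object's deformation (e.g., deviation of its silhouette from the undeformed shape) has already been measured per frame; the function name and measurement are hypothetical.

```python
def max_deformation_index(deformations):
    """Return the frame index at which deformation was increasing and
    then stopped increasing: the moment of maximum deformation.
    Falls back to the global maximum if no such turning point exists."""
    for i in range(1, len(deformations) - 1):
        if deformations[i] > deformations[i - 1] and deformations[i] >= deformations[i + 1]:
            return i
    return max(range(len(deformations)), key=deformations.__getitem__)
```

For a sequence like `[0, 1, 3, 5, 4, 2]`, the deformation rises until index 3 and then falls, so frame 3 is the decisive moment.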
- The microcomputer 11 has a function of outputting the moving image to a display device. When a specific external command is received from the user while the moving image is being output, the microcomputer 11 determines that the frame a predetermined number of frames before the frame output at that moment, the offset corresponding to the user's reaction speed, matches the extraction condition.
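The reaction-speed compensation above amounts to subtracting a fixed frame offset from the frame being displayed when the command arrives. A minimal sketch, assuming a typical human reaction time of about 0.2 s (the value and the function name are assumptions, not from the patent):

```python
def commanded_frame(current_index, fps, reaction_time_s=0.2):
    """The frame the user actually meant is the one displayed roughly
    one reaction time before the external command was received."""
    offset = round(reaction_time_s * fps)
    return max(0, current_index - offset)
```

At 30 fps, a command received while frame 100 is displayed selects frame 94; the clamp keeps the result valid near the start of the clip.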
- The microcomputer 11 can register, as an extraction condition, that the highest reachable point has been reached. If this is registered in the condition recording unit 15, the movement of an object present in each frame is detected by comparing successive frames in time series, and it is determined whether the object has moved upward and then stopped.
- Embodiments of the present invention are not limited to the above-described embodiments, and can take various forms as long as they belong to the technical scope of the present invention.
- In the above embodiment, the moment of a collision is detected and still images before and after the collision are extracted. However, when the imaging device 10 is installed at an intersection, still images may be extracted not only at the moment of the collision but also before it, for example at the moment the traffic light changes color or the moment a moving object passes a predetermined position (a stop line, etc.).
- Although the imaging device 10 is configured as a video camera, it may instead be configured as a mobile phone including audio output means such as a speaker and a headphone terminal. Further, in the imaging apparatus 10 described above, reaching of the highest point is determined by detecting that the moving direction of the moving object has changed from upward to downward. However, extraction need not be limited to the highest point: the frame at which the moving direction of a moving object changes in any direction may be extracted. For example, the frame at the moment a moving object moving rightward stops or turns back to the left, or an object moving downward stops or turns back upward, may be extracted.
- In the above configuration, the frame at the moment of a collision or at the moment a sound is generated is extracted; however, the frame at a moment when the hue or light amount changes, such as a light-up or an explosion, may be extracted instead.
- the extraction condition setting process of the modification shown in FIG. 8 and the still image extraction process of the modification shown in FIGS. 9A and 9B may be performed.
- In the extraction condition setting process of the modified example, as shown in FIG. 8, after the above-described processes of S55 to S170 are completed (S1010), it is determined whether the user has selected the optical interlock setting, which indicates that frames corresponding to changes in hue or light amount are to be extracted (S1020).
- the brightness of the currently selected frame and the previous frame are extracted (S1110).
- the luminance herein may be the average luminance of the entire frame, or may be the luminance of a certain part (portion where the object exists).
- If the luminance has increased (S1120: YES), the currently selected frame is set as the luminance-increasing frame (S1130), the setting is recorded in the same manner as in S555 (S1180), and the process proceeds to S560. If the luminance has decreased (S1120: NO, S1140: YES), the frame immediately before the currently selected frame is set as the luminance-decreasing frame (S1150), and the process proceeds to S1180 described above.
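The luminance-change logic of S1110 to S1150 can be sketched as a scan over per-frame luminance values (whole-frame mean or object-region mean, as noted above). The change threshold `delta` and the function name are assumptions for illustration.

```python
def luminance_event_frames(lums, delta=10.0):
    """Mark the current frame when luminance jumps up (cf. S1130) and
    the immediately preceding frame when it drops (cf. S1150)."""
    marked = []
    for i in range(1, len(lums)):
        if lums[i] - lums[i - 1] >= delta:
            marked.append(i)      # luminance-increasing frame
        elif lums[i - 1] - lums[i] >= delta:
            marked.append(i - 1)  # frame just before the decrease
    return marked
```

A flash at frame 2 followed by a fade after frame 3 yields marks at both the light-up and the last bright frame.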
- When this processing is performed during imaging and a rejection command is detected, the focus may be shifted to the background other than the person instead of the person, or the already captured image may be subjected to image processing, such as mosaicing or blurring, that makes the subject person unclear or makes the person look better.
- When this processing is performed on an already captured image, the image correction processing shown in FIG. 10 may be carried out. Specifically, as shown in FIG. 10, when editing an image (for example, immediately after a still image has been extracted from a moving image), rejection frames, that is, frames showing a facial expression or pose that rejects the shooting, are extracted from the frames before and after the extracted target frame (still image) (frames of the same scene) (S1210).
- Rejection expressions and poses include, for example, a displeased facial expression made immediately after noticing the camera, whose characteristic parameters can be registered in the same way as a smile, and poses such as making a cross with the fingers or arms.
- Next, frames showing a woman without makeup, that is, a so-called "suppin" (bare-faced) person, are extracted (S1220).
- Based on the reflectance of the illumination light and the degree of color variation for each region of the skin, it is determined whether the face is bare ("suppin"), and such frames are extracted.
- Next, it is determined whether the frame being selected is a rejection frame or a "suppin" frame (S1230, S1250). If it is (S1230: YES or S1250: YES), processing that blurs the object (here, a human face) is applied to the target frame.
- If the selected frame is a rejection frame, the object is blurred relatively strongly: the hue, contrast, and the like relative to adjacent pixels in the image are reduced to the extent that the person cannot be identified (S1240).
- If the selected frame is a "suppin" frame, the object is blurred relatively weakly: the color, contrast, and the like relative to adjacent pixels are reduced to the extent that the person can still be identified but details are not conspicuous (S1260).
- the image correction processing ends.
- In the above embodiment, the selected still image is uploaded to the server 30; however, still images of accidents and the like may be uploaded to the server 30 forcibly.
- processing such as upload processing shown in FIG. 11 may be performed.
- This upload process is executed by the microcomputer 11 when a still image is extracted. Specifically, as shown in FIG. 11, it is first determined, by image analysis of the extracted still image, whether a vehicle or a person is deformed (S1310, S1320). This determination may also use the frames before and after the extracted still image.
- If neither a vehicle nor a person is deformed (S1320: NO), the forced upload process is terminated. If a vehicle or person is deformed (S1320: YES), the accident flag of this still image is set to ON, position information indicating where the still image was captured is acquired and attached to the still image (S1340), and the processing of S910 to S920 described above is performed.
- In the extraction condition setting process shown in FIG. 8, after the process of S1030, it is determined whether the user has selected the dummy-sound interlock, which indicates that a still image is to be extracted at a timing linked to the dummy sound (S1040). If the dummy-sound interlock is selected (S1040: YES), the dummy-sound interlock setting is made (S1050) and the extraction condition setting process is terminated. If it is not selected (S1040: NO), the extraction condition setting process is terminated.
- dummy sound interlocking extraction processing shown in FIG. 12 is performed.
- the dummy sound interlocking extraction process is, for example, a process that is started when the dummy sound interlocking setting is set and then repeatedly performed until the dummy sound interlocking setting is canceled.
- the operation unit 21 (shutter) has been operated (S1410). If there is no operation of the operation unit 21 (S1410: NO), this process is repeated. If the operation unit 21 is operated (S1410: YES), a shutter sound is output from a speaker (not shown) or the like (S1415), and recording of a moving image (a plurality of frames) is started (S1420).
- the fixed time indicates a time for the person who is the subject to relax after the photographing is completed, and is set to about 3 seconds, for example.
- In the above configuration, recording of the moving image is started immediately after the trigger (operation of the operation unit 21); however, recording may instead be started after a predetermined time has elapsed.
- Alternatively, old moving image data may be overwritten, for example in FIFO fashion. If still image extraction (the still image extraction processing and the like) is performed before an old moving image is overwritten by a new one, an optimal still image can still be stored.
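The FIFO overwrite scheme above can be sketched with a fixed-capacity ring buffer. This is a sketch only; the class name is hypothetical, and real firmware would manage raw frame memory rather than a Python deque.

```python
from collections import deque

class FrameRing:
    """FIFO-style recording buffer: when full, each new frame silently
    overwrites the oldest, so still image extraction must run before a
    wanted frame is evicted."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def record(self, frame):
        self.buf.append(frame)  # oldest frame drops out when full

    def extract(self, predicate):
        """Pull still images matching the extraction condition from
        whatever is still buffered."""
        return [f for f in self.buf if predicate(f)]
```

With capacity 3 and frames 1 through 5 recorded, only frames 3, 4, and 5 remain available for extraction, which is exactly why extraction must not be deferred too long.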
- In the above configuration, one still image is extracted; however, several frames before and after the frame (still image) to be extracted may be extracted, and the user may be allowed to select the still image from among these frames. For example, these frames may be played back at a speed lower than the normal moving image playback speed (that is, a plurality of still images are displayed by slowly switching them one by one every few seconds), and the frame displayed when a shutter release operation is input via the operation unit 21 may be selected as the frame to extract.
- the front and rear frame extraction processing shown in FIG. 13 may be performed.
- In this process, first, predetermined frames before and after the output still image are extracted (S1510). For example, several consecutive frames before the target still image may be extracted, several consecutive frames after it, or several frames both before and after. It is also unnecessary to extract consecutive frames; frames may be extracted at intervals of a predetermined number of frames before or after the target still image.
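The candidate selection of S1510 can be sketched as index arithmetic around the target frame. The function name and parameters (`before`, `after`, `step`) are illustrative assumptions covering the variants described above: consecutive frames on either side, or frames at a fixed stride.

```python
def neighbor_frames(n_frames, target, before=3, after=3, step=1):
    """Indices of candidate frames around the extracted frame: a few
    frames before and/or after the target, optionally every `step`
    frames rather than strictly consecutive ones. Out-of-range indices
    near the clip boundaries are dropped."""
    idxs = list(range(target - before * step, target, step))
    idxs += list(range(target + step, target + after * step + 1, step))
    return [i for i in idxs if 0 <= i < n_frames]
```

`neighbor_frames(100, 10, before=2, after=2)` yields `[8, 9, 11, 12]`; a `step` of 2 spaces the candidates out, matching the "every predetermined frame" variant.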
- In the above configuration, recording of a moving image is started when the power is turned on or when a user operation is performed; however, the timing at which the user intends to capture an image may also be detected, and recording of the moving image may be started at that time.
- the start timing setting process is, for example, a process that starts when the imaging apparatus 10 is turned on.
- the imaging unit 12 is provided with a 3D acceleration sensor, a gyro, and the like, and the microcomputer 11 can detect whether the imaging device 10 has moved.
- The imaging unit 12 also includes the user side (the finder side of the imaging unit 12) within its imaging range, and is configured to detect that the user has looked into the viewfinder by detecting the user's face approaching the imaging range.
- In the start timing setting process, as shown in FIG. 14, it is first determined whether movement of the imaging device 10 has been detected (S1610). If no movement is detected (S1610: NO), the process of S1610 is repeated.
- If movement of the imaging device 10 is detected (S1610: YES), it is determined whether this movement is periodic (that is, periodic vibration) (S1620). If it is periodic vibration (S1620: YES), imaging is started (S1650), and the start timing setting process is terminated.
- the imaging device 10 is not stationary (S1630: NO)
- In the above configuration, the ROM included in the microcomputer 11 serves as the memory 111; however, the memory 111 may instead be configured as a hard disk drive, a rewritable memory such as a flash memory or a RAM, or another known type of memory.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Studio Devices (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
Description
A still image extraction apparatus that extracts a specific frame as a still image from a plurality of frames, comprising:
extraction condition registration means for registering, in an extraction condition recording unit, an extraction condition for still image extraction specified by the user of the still image extraction apparatus;
extraction determination means for determining, for each frame, whether or not the plurality of frames match the extraction condition registered in the extraction condition recording unit; and
extraction means for extracting, as a still image, a frame determined to match the extraction condition.
The apparatus may further comprise:
exclusion condition registration means for registering, in an exclusion condition recording unit, an exclusion condition specified by the user of the still image extraction apparatus and representing a condition under which a still image is not to be extracted;
exclusion determination means for determining, for each frame, whether or not the plurality of frames match the exclusion condition registered in the exclusion condition recording unit; and
extraction prohibition means for prohibiting the extraction means from extracting, among the frames determined to match the extraction condition, any frame determined to match the exclusion condition.
The apparatus may further comprise extraction condition acquisition means for acquiring the extraction condition from a server that is located outside the still image extraction apparatus and holds extraction conditions for still image extraction.
The apparatus may further comprise extraction condition transmission means that holds extraction conditions for still image extraction and, upon receiving an extraction condition from the outside, transmits the extraction condition recorded in the extraction condition recording unit to the server holding extraction conditions.
The exclusion condition registration means may be capable of registering, as the exclusion condition, that extraction of similar still images is prohibited.
When such prohibition is registered as the exclusion condition in the exclusion condition recording unit, the exclusion determination means may group the plurality of frames by similarity and determine, for each group, whether two or more frames exist.
The extraction prohibition means may then prohibit two or more frames from being extracted per group.
The extraction means may extract, for each group, the frame that best matches the extraction condition.
Furthermore, in the still image extraction apparatus, as in a seventh configuration,
the extraction condition registration means may be capable of registering, as the extraction condition, that frames containing a face similar to a face photograph of a specific person are to be extracted, together with that face photograph, and
the extraction determination means may, when this is registered in the extraction condition recording unit, determine whether the face contained in each frame is similar to the face photograph recorded in the extraction condition recording unit.
The extraction condition registration means may be capable of registering, as the extraction condition, that the moment an object collides is to be extracted, and
the extraction determination means may, when this is registered in the extraction condition recording unit, detect the movement of an object present in each frame by comparing successive frames in time series and determine, by tracking the object, whether the shape of the object has started to change.
Furthermore, in the still image extraction apparatus, as in a ninth configuration,
the extraction condition registration means may be capable of registering, as the extraction condition, that the moment the deformation amount of an object is at its maximum is to be extracted, and
the extraction determination means may, when this is registered in the extraction condition recording unit, detect the movement of an object present in each frame by comparing successive frames in time series and determine, by tracking the object, whether the object has deformed and the deformation has stopped.
The apparatus may further comprise:
display output means for sequentially outputting a plurality of frames to a display device; and
external command input means for inputting a specific external command from the user,
wherein, when a specific external command is input via the external command input means while the display output means is outputting the plurality of frames, the extraction determination means determines that a frame a predetermined number of frames before the frame output at that moment, the offset corresponding to the user's reaction speed, matches the extraction condition.
The extraction condition registration means may be capable of registering, as the extraction condition, that the highest reachable point has been reached, and
the extraction determination means may, when this is registered in the extraction condition recording unit, detect the movement of an object present in each frame by comparing successive frames in time series and determine whether the object has moved upward and then stopped.
The extraction means may comprise:
matching frame extraction means for extracting a matching frame, that is, a frame determined to match the extraction condition;
surrounding frame extraction means for extracting surrounding frames, that is, a predetermined number of frames captured immediately before or after the matching frame; and
selective extraction means for extracting, as the still image, a frame selected by the user from among the matching frame and the surrounding frames.
[Configuration of the Present Embodiment]
The image extraction system 1 has a function of extracting a specific frame as a still image from a moving image or the like composed of a plurality of frames. Specifically, as shown in FIG. 1, it comprises an imaging apparatus 10 (still image extraction apparatus), a server 30 connected to the Internet 40, and a base station 50 that can communicate with the server 30 via the Internet 40 and functions as a wireless communication base station for the imaging apparatus 10.
The imaging apparatus 10 executes a process of setting the conditions for extracting still images from a moving image and a process of extracting still images from the moving image in accordance with the set conditions. First, the process of setting the conditions for extracting still images from a moving image is described with reference to FIGS. 2A, 2B, and 3.
If extraction of object collisions is not set (S105: NO), the process proceeds to S155 described later. If extraction of object collisions is set (S105: YES), an image showing the items is displayed on the display unit 22 (S110).
Note that the processing of S355 to S520 in the still image extraction process corresponds to an example of the extraction determination means of the present invention, and the processing of S620, S655, and S665 to S675 in the exclusion process corresponds to an example of the exclusion determination means of the present invention.
If the extraction condition is to extract still images of the user's favorite facial expression (good expression) (S315: good expression), the face portion in the selected frame is extracted by known image processing (S355), compared with the registered comparison photograph (S360), and the angle of the face (the angle relative to a reference direction such as the front) is calculated (S365).
The upload process starts when the imaging apparatus 10 is set to the transmission mode and is then executed repeatedly until the mode is changed. In the upload process, first, as shown in FIG. 6, it is determined whether the user has selected an extracted still image (photograph) via the operation unit 21 (S905).
The manual extraction process extracts a still image based on the timing at which the user inputs an extraction command to the imaging apparatus 10, irrespective of the extraction conditions described above. This process starts when the imaging apparatus 10 is set to the recording mode or the playback mode and is executed repeatedly until the mode is changed.
[Effects of the Present Embodiment]
In the image extraction system 1 described in detail above, the microcomputer 11 of the imaging apparatus 10 registers, in the condition recording unit 15, the extraction conditions for still image extraction specified by the user of the imaging apparatus 10, and determines for each frame whether the plurality of frames match the extraction conditions registered in the condition recording unit 15. The microcomputer 11 then extracts frames determined to match the extraction conditions as still images.
In the imaging apparatus 10, the microcomputer 11 also registers, in the condition recording unit 15, an exclusion condition specified by the user of the imaging apparatus 10 and representing a condition under which still images are not to be extracted, and determines for each frame whether the plurality of frames match the exclusion condition registered in the condition recording unit 15. The microcomputer 11 then prohibits extraction of frames determined to match the exclusion condition from among the frames determined to match the extraction conditions.
According to such an imaging apparatus 10, conditions for extracting a favorite still image can be acquired from the server, so anyone can easily set the moment to be captured as a still image.
In the imaging apparatus 10, the microcomputer 11 can also register, as an exclusion condition, that extraction of similar still images is prohibited; when this is registered in the condition recording unit 15, the plurality of frames are grouped by similarity, it is determined whether two or more frames exist in each group, and extraction of two or more frames per group is prohibited.
According to such an imaging apparatus 10, only the optimal frames desired by the user can be extracted.
Furthermore, in the imaging apparatus 10, the microcomputer 11 can register, as an extraction condition, that the moment the deformation amount of an object is at its maximum is to be extracted; when this is registered in the condition recording unit 15, the movement of an object present in each frame is detected by comparing successive frames in time series, and it is determined, by tracking the object, whether the object has deformed and the deformation has stopped.
The microcomputer 11 can also register, as an extraction condition, that the highest reachable point has been reached; when this is registered in the condition recording unit 15, the movement of an object present in each frame is detected by comparing successive frames in time series, and it is determined whether the object has moved upward and then stopped.
Embodiments of the present invention are in no way limited to the embodiment described above and may take various forms within the technical scope of the present invention.
Further, in the imaging apparatus 10 described above, reaching of the highest point was determined by detecting that the moving direction of the moving object changed from upward to downward; however, extraction is not limited to the highest point, and the frame at which the moving direction of a moving object changes in any direction may be extracted. For example, the moment a moving object moving rightward stops or turns back to the left, or an object moving downward stops or turns back upward, may be detected and the frame at that time extracted.
When imaging a person, the person may, for personal reasons, dislike being photographed. Therefore, a situation in which a person dislikes being imaged, or a gesture or facial expression showing such dislike, may be detected as a rejection command, and when this rejection command is detected, the person in the captured image may be prevented from being captured clearly.
Next, in the above embodiment, the selected still image is uploaded to the server 30; however, still images of accidents and the like may be uploaded to the server 30 forcibly. In this case, for example, a process such as the upload process shown in FIG. 11 may be performed.
When extracting still images from a plurality of frames as described above, in order to save recording space, capture of the moving image may be started when some trigger is input during recording and ended thereafter. For example, a dummy sound such as a shutter sound may be output and moving image capture started at that point, and capture may be ended after the facial expression of a person who, hearing the dummy sound, is relieved that shooting has finished can be extracted.
In the above configuration, one still image is extracted; however, several frames before and after the frame (still image) to be extracted may be extracted, and the user may be allowed to select the still image to extract from among these frames. For example, these frames may be played back at a speed lower than the normal moving image playback speed (that is, a plurality of still images are displayed by slowly switching them one by one every few seconds), and the frame displayed when a shutter release operation is input via the operation unit 21 may be selected as the frame to extract.
For example, several consecutive frames before the target still image may be extracted, several consecutive frames after it, or several frames both before and after. It is also unnecessary to extract consecutive frames; frames may be extracted at intervals of a predetermined number of frames before or after the target still image.
In the present embodiment, the imaging unit 12 is provided with a 3D acceleration sensor, a gyroscope, and the like, so that the microcomputer 11 can detect whether the imaging apparatus 10 has moved. The imaging unit 12 also includes not only the subject side but also the user side (the finder side of the imaging unit 12) within its imaging range, and is configured to detect that the user has looked into the viewfinder by detecting the user's face approaching the imaging range.
Claims (13)
- A still image extraction apparatus that extracts a specific frame as a still image from a plurality of frames, comprising:
extraction condition registration means for registering, in an extraction condition recording unit, an extraction condition for still image extraction specified by the user of the still image extraction apparatus;
extraction determination means for determining, for each frame, whether or not the plurality of frames match the extraction condition registered in the extraction condition recording unit; and
extraction means for extracting, as a still image, a frame determined to match the extraction condition. - The still image extraction apparatus according to claim 1, further comprising:
exclusion condition registration means for registering, in an exclusion condition recording unit, an exclusion condition specified by the user of the still image extraction apparatus and representing a condition under which a still image is not to be extracted;
exclusion determination means for determining, for each frame, whether or not the plurality of frames match the exclusion condition registered in the exclusion condition recording unit; and
extraction prohibition means for prohibiting the extraction means from extracting, among the frames determined to match the extraction condition, any frame determined to match the exclusion condition. - The still image extraction apparatus according to claim 1 or 2, comprising
extraction condition acquisition means for acquiring the extraction condition from a server that is located outside the still image extraction apparatus and holds extraction conditions for still image extraction. - The still image extraction apparatus according to claim 3, comprising
extraction condition transmission means that holds extraction conditions for still image extraction and, upon receiving an extraction condition from the outside, transmits the extraction condition recorded in the extraction condition recording unit to the server holding that extraction condition. - The still image extraction apparatus according to claim 2, wherein
the exclusion condition registration means can register, as the exclusion condition, that extraction of similar still images is prohibited;
the exclusion determination means, when prohibition of extraction of similar still images is registered as the exclusion condition in the exclusion condition recording unit, groups the plurality of frames by similarity and determines, for each group, whether two or more frames exist; and
the extraction prohibition means prohibits two or more frames from being extracted per group. - The still image extraction apparatus according to claim 5, wherein
the extraction means extracts, for each group, the frame that best matches the extraction condition. - The still image extraction apparatus according to any one of claims 1 to 6, wherein
the extraction condition registration means can register, as the extraction condition, that frames containing a face similar to a face photograph of a specific person are to be extracted, together with that face photograph; and
the extraction determination means, when extraction of frames containing a face similar to the face photograph is registered as the extraction condition in the extraction condition recording unit, determines whether the face contained in each frame is similar to the face photograph recorded in the extraction condition recording unit. - The still image extraction apparatus according to any one of claims 1 to 7, wherein
the extraction condition registration means can register, as the extraction condition, that the moment an object collides is to be extracted; and
the extraction determination means, when extraction of the moment an object collides is registered as the extraction condition in the extraction condition recording unit, detects the movement of an object present in each frame by comparing successive frames in time series and determines, by tracking the object, whether the shape of the object has started to change. - The still image extraction apparatus according to any one of claims 1 to 8, wherein
the extraction condition registration means can register, as the extraction condition, that the moment the deformation amount of an object is at its maximum is to be extracted; and
the extraction determination means, when this is registered as the extraction condition in the extraction condition recording unit, detects the movement of an object present in each frame by comparing successive frames in time series and determines, by tracking the object, whether the object has deformed and the deformation has stopped. - The still image extraction apparatus according to any one of claims 1 to 9, comprising:
display output means for sequentially outputting the plurality of frames to a display device; and
external command input means for inputting a specific external command from the user,
wherein, when a specific external command is input via the external command input means while the display output means is sequentially outputting the plurality of frames, the extraction determination means determines that a frame a predetermined number of frames before the frame output at that moment, the offset corresponding to the user's reaction speed, matches the extraction condition. - The still image extraction apparatus according to any one of claims 1 to 10, wherein
the extraction condition registration means can register, as the extraction condition, that the highest reachable point has been reached; and
the extraction determination means, when reaching of the highest point is registered in the extraction condition recording unit, detects the movement of an object present in each frame by comparing successive frames in time series and determines whether the object has moved upward and then stopped. - The still image extraction apparatus according to any one of claims 1 to 11, wherein
the extraction means comprises:
matching frame extraction means for extracting a matching frame, that is, a frame determined to match the extraction condition;
surrounding frame extraction means for extracting surrounding frames, that is, a predetermined number of frames captured immediately before or after the matching frame; and
selective extraction means for extracting, as the still image, a frame selected by the user from among the matching frame and the surrounding frames. - A still image extraction program for causing a computer to function as each of the means constituting the still image extraction apparatus according to any one of claims 1 to 12.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2012554805A JPWO2012102276A1 (ja) | 2011-01-24 | 2012-01-24 | 静止画抽出装置 |
| US13/981,397 US20130308829A1 (en) | 2011-01-24 | 2012-01-24 | Still image extraction apparatus |
| EP12739279.3A EP2670134A4 (en) | 2011-01-24 | 2012-01-24 | STATIC IMAGE EXTRACTION DEVICE |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2011012000 | 2011-01-24 | ||
| JP2011-012000 | 2011-01-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012102276A1 true WO2012102276A1 (ja) | 2012-08-02 |
Family
ID=46580842
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2012/051460 Ceased WO2012102276A1 (ja) | 2011-01-24 | 2012-01-24 | 静止画抽出装置 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20130308829A1 (ja) |
| EP (1) | EP2670134A4 (ja) |
| JP (2) | JPWO2012102276A1 (ja) |
| WO (1) | WO2012102276A1 (ja) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2020092392A (ja) * | 2018-12-07 | 2020-06-11 | 大日本印刷株式会社 | 画像提供システム |
| WO2023181255A1 (ja) * | 2022-03-24 | 2023-09-28 | 日本電気株式会社 | 情報提供装置、事故情報の提供方法及びプログラム記録媒体 |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9501573B2 (en) * | 2012-07-30 | 2016-11-22 | Robert D. Fish | Electronic personal companion |
| WO2016139940A1 (ja) * | 2015-03-02 | 2016-09-09 | 日本電気株式会社 | 画像処理システム、画像処理方法およびプログラム記憶媒体 |
| IL239113A (en) * | 2015-06-01 | 2016-12-29 | Elbit Systems Land & C4I Ltd | A system and method for determining audio characteristics from a body |
| WO2017018012A1 (ja) * | 2015-07-28 | 2017-02-02 | ソニー株式会社 | 情報処理システム、情報処理方法、および記録媒体 |
| JP6768223B2 (ja) * | 2017-07-20 | 2020-10-14 | 京セラドキュメントソリューションズ株式会社 | 画像処理装置、画像処理方法及び画像処理プログラム |
| US11606493B2 (en) * | 2018-11-14 | 2023-03-14 | Samsung Electronics Co., Ltd. | Method for recording multimedia file and electronic device thereof |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002232815A (ja) * | 2000-12-01 | 2002-08-16 | Sharp Corp | ビデオプリントシステムおよび情報配信システムならびに受信装置 |
| JP2004297618A (ja) * | 2003-03-27 | 2004-10-21 | Kyocera Corp | 画像抽出方法および画像抽出装置 |
| JP2005109789A (ja) * | 2003-09-30 | 2005-04-21 | Casio Comput Co Ltd | 画像処理装置及び画像処理プログラム |
| JP2005176274A (ja) * | 2003-12-15 | 2005-06-30 | Canon Inc | 撮像装置及び撮像制御方法 |
| JP2007074276A (ja) * | 2005-09-06 | 2007-03-22 | Fujifilm Corp | 撮像装置 |
| JP2007228453A (ja) * | 2006-02-27 | 2007-09-06 | Casio Comput Co Ltd | 撮像装置、再生装置、プログラム、および記憶媒体 |
| JP2009049667A (ja) * | 2007-08-20 | 2009-03-05 | Sony Corp | 情報処理装置、その処理方法およびプログラム |
| JP2010068180A (ja) * | 2008-09-10 | 2010-03-25 | Sony Corp | 撮像装置及び撮像方法 |
| JP2010109592A (ja) | 2008-10-29 | 2010-05-13 | Canon Inc | 情報処理装置およびその制御方法 |
| JP2010114733A (ja) * | 2008-11-07 | 2010-05-20 | Toshiba Corp | 情報処理装置およびコンテンツ表示方法 |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH09307813A (ja) * | 1996-05-10 | 1997-11-28 | Bridgestone Sports Co Ltd | 高速現象撮影装置 |
| JP3287814B2 (ja) * | 1997-07-11 | 2002-06-04 | 三菱電機株式会社 | 動画再生装置 |
| JP3984175B2 (ja) * | 2003-01-31 | 2007-10-03 | 富士フイルム株式会社 | 写真画像選別装置およびプログラム |
| US7298895B2 (en) * | 2003-04-15 | 2007-11-20 | Eastman Kodak Company | Method for automatically classifying images into events |
| US7796785B2 (en) * | 2005-03-03 | 2010-09-14 | Fujifilm Corporation | Image extracting apparatus, image extracting method, and image extracting program |
| US8169484B2 (en) * | 2005-07-05 | 2012-05-01 | Shai Silberstein | Photography-specific digital camera apparatus and methods useful in conjunction therewith |
| US8125526B2 (en) * | 2006-02-03 | 2012-02-28 | Olympus Imaging Corp. | Camera for selecting an image from a plurality of images based on a face portion and contour of a subject in the image |
| JP4577410B2 (ja) * | 2008-06-18 | 2010-11-10 | ソニー株式会社 | 画像処理装置、画像処理方法およびプログラム |
| JP5423305B2 (ja) * | 2008-10-16 | 2014-02-19 | 株式会社ニコン | 画像評価装置及びカメラ |
| JP2011035837A (ja) * | 2009-08-05 | 2011-02-17 | Toshiba Corp | 電子機器および画像データの表示方法 |
| JP2010141911A (ja) * | 2010-01-29 | 2010-06-24 | Casio Computer Co Ltd | 画像記録方法、画像記録装置、およびプログラム |
2012
- 2012-01-24 WO PCT/JP2012/051460 patent/WO2012102276A1/ja not_active Ceased
- 2012-01-24 JP JP2012554805A patent/JPWO2012102276A1/ja active Pending
- 2012-01-24 US US13/981,397 patent/US20130308829A1/en not_active Abandoned
- 2012-01-24 EP EP12739279.3A patent/EP2670134A4/en not_active Withdrawn
2016
- 2016-11-04 JP JP2016216340A patent/JP2017063463A/ja active Pending
Patent Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002232815A (ja) * | 2000-12-01 | 2002-08-16 | Sharp Corp | Video print system, information distribution system, and receiving device |
| JP2004297618A (ja) * | 2003-03-27 | 2004-10-21 | Kyocera Corp | Image extraction method and image extraction device |
| JP2005109789A (ja) * | 2003-09-30 | 2005-04-21 | Casio Comput Co Ltd | Image processing apparatus and image processing program |
| JP2005176274A (ja) * | 2003-12-15 | 2005-06-30 | Canon Inc | Imaging apparatus and imaging control method |
| JP2007074276A (ja) * | 2005-09-06 | 2007-03-22 | Fujifilm Corp | Imaging apparatus |
| JP2007228453A (ja) * | 2006-02-27 | 2007-09-06 | Casio Comput Co Ltd | Imaging device, playback device, program, and storage medium |
| JP2009049667A (ja) * | 2007-08-20 | 2009-03-05 | Sony Corp | Information processing apparatus, processing method therefor, and program |
| JP2010068180A (ja) * | 2008-09-10 | 2010-03-25 | Sony Corp | Imaging apparatus and imaging method |
| JP2010109592A (ja) | 2008-10-29 | 2010-05-13 | Canon Inc | Information processing apparatus and control method therefor |
| JP2010114733A (ja) * | 2008-11-07 | 2010-05-20 | Toshiba Corp | Information processing apparatus and content display method |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP2670134A4 |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2020092392A (ja) * | 2018-12-07 | 2020-06-11 | Dai Nippon Printing Co Ltd | Image providing system |
| WO2023181255A1 (ja) * | 2022-03-24 | 2023-09-28 | NEC Corp | Information providing device, accident information providing method, and program recording medium |
| JPWO2023181255A1 (ja) * | 2022-03-24 | 2023-09-28 | | |
Also Published As
| Publication number | Publication date |
|---|---|
| EP2670134A4 (en) | 2015-11-25 |
| JP2017063463A (ja) | 2017-03-30 |
| JPWO2012102276A1 (ja) | 2014-06-30 |
| US20130308829A1 (en) | 2013-11-21 |
| EP2670134A1 (en) | 2013-12-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP2017063463A (ja) | Still image extraction device | |
| US8698920B2 (en) | Image display apparatus and image display method | |
| JP4640456B2 (ja) | Image recording apparatus, image recording method, image processing apparatus, image processing method, and program | |
| CN101388965B (zh) | Data processing apparatus and data processing method | |
| JP5025782B2 (ja) | Image search apparatus and image search method | |
| CN105075237A (zh) | Image processing device, image processing method, and program | |
| JP2008206018A (ja) | Imaging apparatus and program | |
| CN108632536B (zh) | Camera control method and apparatus, terminal, and storage medium | |
| CN115529378B (zh) | Video processing method and related apparatus | |
| EP4096211B1 (en) | Image processing method, electronic device and computer-readable storage medium | |
| JP2017538975A (ja) | Music playback method, apparatus, and terminal device based on a face album | |
| JP7058309B2 (ja) | Image capturing method, image capturing apparatus, and storage medium | |
| US9888176B2 (en) | Video apparatus and photography method thereof | |
| US8085997B2 (en) | Imaging apparatus and imaging method | |
| CN113726949B (zh) | Video processing method, electronic device, and storage medium | |
| JP2014050022A (ja) | Image processing device, imaging device, and program | |
| JP5446035B2 (ja) | Imaging apparatus, imaging method, and program | |
| EP3304551B1 (en) | Adjusting length of living images | |
| JP5761323B2 (ja) | Imaging apparatus, imaging method, and program | |
| CN105469107B (zh) | Image classification method and apparatus | |
| JP5488639B2 (ja) | Imaging apparatus, imaging method, and program | |
| JP4885084B2 (ja) | Imaging apparatus, imaging method, and imaging program | |
| JP2018148483A (ja) | Imaging apparatus and imaging method | |
| JP2009212867A (ja) | Captured image processing apparatus, imaging control program, and imaging control method | |
| WO2021237744A1 (zh) | Photographing method and apparatus | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12739279; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2012554805; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 13981397; Country of ref document: US |
| | WWE | Wipo information: entry into national phase | Ref document number: 2012739279; Country of ref document: EP |