
CN101685542A - Electronic device, fuzzy image sorting method and program - Google Patents

Electronic device, fuzzy image sorting method and program

Info

Publication number
CN101685542A
Authority
CN
China
Prior art keywords
image
blur level
zone
subject zone
subject
Prior art date
Legal status
Granted
Application number
CN200910178651.9A
Other languages
Chinese (zh)
Other versions
CN101685542B (en)
Inventor
保坂尚
木村光佑
成瀬国一郎
猪狩一真
番场定道
秋山由希子
奥村光男
菊池章
鹿岛秀文
Current Assignee
Sony Corp
Original Assignee
Sony Corp
Priority date
Filing date
Publication date
Application filed by Sony Corp
Publication of CN101685542A
Application granted
Publication of CN101685542B
Legal status: Expired - Fee Related (current)
Anticipated expiration

Classifications

    • G — Physics
    • G06 — Computing or calculating; counting
    • G06T — Image data processing or generation, in general
    • G06T 5/00, 5/73 — Image enhancement or restoration: deblurring; sharpening
    • G06T 2200/24 — Indexing scheme involving graphical user interfaces [GUIs]
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/10024 — Image acquisition modality: color image
    • G06T 2207/30196, 2207/30201 — Subject of image: human being; person; face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

Provided are an electronic device, a blurred image sorting method, and a program. The electronic device includes: an extraction section that extracts, from an image, a subject region having a predetermined feature in the image; a first calculation section that calculates a first blur degree indicating the degree of blur of each extracted subject region; a second calculation section that calculates a second blur degree indicating the degree of blur of the entire image, based on the first blur degree when the number of subject regions in the image from which the first blur degree is calculated is one, and based on a value obtained by weighted-averaging the plurality of first blur degrees according to the sizes of the plurality of subject regions when the number is plural; and a sorting section that sorts out, from a plurality of images, an image whose calculated second blur degree is equal to or greater than a predetermined threshold as a blurred image.

Description

Electronic device, blurred image sorting method, and program
Technical field
The present invention relates to an electronic device that stores and outputs a plurality of still images, and to a blurred image sorting method and a program used in the electronic device.
Background art
Electronic devices have heretofore existed that have an album creation function for organizing stored images such as captured pictures and a slide-show function for displaying images. When such functions are executed, the user needs to select the images to be stored or displayed, but it is difficult for the user to sort out the desired images from a large number of images.
In this respect, as a technique for selecting a so-called best shot (that is, an image assumed to be worth viewing by the user) from a plurality of images, there is known a technique for selecting a best shot from a plurality of images obtained by continuous shooting (see, for example, Japanese Patent Application Laid-open No. 2006-311340; hereinafter referred to as Patent Document 1).
There is also known a technique for evaluating the quality of an image as a picture based on the sharpness of the image, the quality of a face image, and the presence or absence of flash (see, for example, Japanese Patent Translation Publication No. 2005-521927; hereinafter referred to as Patent Document 2). Further, there is known a technique for detecting a face from an image and selecting and cropping out an optimum composition according to the detection result (see, for example, Japanese Patent Application Laid-open No. 2007-27971; hereinafter referred to as Patent Document 3).
Summary of the invention
However, with the above techniques, it is difficult to reliably evaluate whether an image is worth viewing by the user, that is, whether the subject of the image is blurred, and then to sort the image accordingly.
For example, in the technique of selecting a best shot from images obtained by continuous shooting, that is, the technique described in Patent Document 1, the blur degree and the exposure of the entire image are evaluated. However, when the blur degree and the exposure are evaluated for an ordinary image that has not been obtained by continuous shooting, blurred images cannot be sorted out. In general, a picture in which the subject is in focus and the background is blurred is likely to be an appropriately taken photograph, a so-called best shot, because the subject is captured clearly. However, when the technique of Patent Document 1 is applied to images that have not been obtained by continuous shooting, there is a risk that an image in which the subject is in focus and the background is blurred is judged to be a blurred image.
In the technique described in Patent Document 2, the quality of an image is evaluated on the assumption that the subject is a human face, and when no face is detected in the image, the quality of the image is evaluated based on the sharpness and the presence or absence of flash. Therefore, also in this case, an image in which the subject is in focus and the background is blurred is judged to be a blurred image.
Further, with the technique described in Patent Document 3, an optimum composition can be cropped out, but it is similarly difficult to evaluate whether the image is blurred. Moreover, with this technique, when no human face is present in the image as a subject, it may be impossible to crop out an optimum composition.
In view of the above circumstances, there is a need for an electronic device, a blurred image sorting method, and a program capable of reliably sorting out unwanted blurred images in which the subject that attracts attention is blurred.
According to an embodiment of the present invention, there is provided an electronic device including an extraction means, a first calculation means, a second calculation means, and a sorting means.
The extraction means extracts, from an image, a subject region having a predetermined feature in the image.
The first calculation means calculates a first blur degree indicating the degree of blur of each extracted subject region.
When the number of subject regions in the image from which the first blur degree is calculated is one, the second calculation means calculates a second blur degree indicating the degree of blur of the entire image based on the first blur degree. When the number of subject regions in the image from which the first blur degrees are calculated is plural, the second calculation means calculates the second blur degree based on a value obtained by weighted-averaging the plurality of first blur degrees according to the sizes of the plurality of subject regions.
The sorting means sorts out, from a plurality of images, an image whose calculated second blur degree is equal to or greater than a predetermined threshold as a blurred image.
With this structure, by calculating the blur degree of each subject in an image (the first blur degree) and then the blur degree of the entire image (the second blur degree) based on the size of each subject region, the electronic device can sort out blurred images from a plurality of images. A subject having a predetermined feature may be any of various objects, including a person, an animal, a plant, and a building. The first blur degree is calculated, for example, from the edge strength in the subject region. A subject region having a predetermined feature is a region of the image that attracts the viewer's attention. That is, when there is one region that attracts attention, the electronic device calculates the blur degree of the entire image based on the blur degree of that region. When there are a plurality of regions that attract attention, the electronic device calculates the blur degree of the entire image by weighted-averaging the blur degrees of the respective regions according to their sizes. This is because a larger region is more likely to attract attention. Through this processing, the electronic device can accurately calculate the blur degree of an image and sort out blurred images that the user does not need. According to the sorting result, the user can delete the blurred images or exclude them from the material of an original movie or the like, which improves the user's convenience.
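As a concrete illustration of this rule, the following Python sketch computes a second blur degree from per-region first blur degrees and sorts out images whose second blur degree reaches a threshold. All names, the data layout, and the threshold value are assumptions made for illustration only; the weighting formula actually used in the embodiment is described later with reference to Fig. 10.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SubjectRegion:
    blur: float    # first blur degree of this region (e.g. from edge analysis)
    width: int
    height: int

    @property
    def size(self) -> int:
        return self.width * self.height

def second_blur_degree(regions: List[SubjectRegion]) -> float:
    """Blur degree of the entire image computed from its subject regions."""
    if len(regions) == 1:
        return regions[0].blur
    total = sum(r.size for r in regions)
    # Weighted average of the first blur degrees, weighted by region size:
    # a larger region is assumed to attract more attention.
    return sum(r.blur * (r.size / total) for r in regions)

def sort_blurred(images: dict, threshold: float = 0.5) -> list:
    """Return the names of images judged to be blurred."""
    return [name for name, regions in images.items()
            if second_blur_degree(regions) >= threshold]

# Example: one image with a large sharp region and a small blurred one,
# another whose only attention-attracting region is strongly blurred.
images = {
    "IMG_0001": [SubjectRegion(blur=0.1, width=400, height=400),
                 SubjectRegion(blur=0.9, width=80, height=80)],
    "IMG_0002": [SubjectRegion(blur=0.8, width=300, height=300)],
}
print(sort_blurred(images))   # -> ['IMG_0002']
```

In the embodiment the per-region blur values are the region blur indices obtained from the edge-point analysis described later, and for feature regions the weights are further refined by the recognition scores, as explained below.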
The electronic device may further include an optimization means for optimizing each extracted subject region so that the extracted subject region has a predetermined size sufficient for the calculation of the first blur degree.
Here, the optimization means reducing a region that is so large that calculating its first blur degree would take a long time, and excluding a region that is so small that its first blur degree cannot be calculated accurately. With this optimization, the electronic device can calculate the first blur degree more accurately and, as a result, calculate the second blur degree more accurately.
The extraction means may calculate a score indicating the certainty of the extraction of each subject region.
In this case, when the number of subject regions in the image from which the first blur degrees are calculated is plural, the second calculation means may calculate the second blur degree based on a value obtained by weighted-averaging the plurality of first blur degrees according to the sizes of the plurality of subject regions and the calculated scores.
Here, the score is an evaluation value indicating the degree to which the features of a subject region, including luminance, color, edge (direction), and face, stand out compared with other regions. It is considered that the higher the score, the more likely the subject region is to attract the viewer's attention. When the first blur degrees are calculated from a plurality of subject regions, the electronic device weighted-averages the first blur degrees according to the scores in addition to the sizes of the subject regions. This makes it possible to calculate the second blur degree more accurately and to sort out blurred images more accurately.
Further, the extraction means may include a face recognition means and a feature recognition means.
The face recognition means recognizes a face region of a human face as a subject region and calculates a first score indicating the score of the recognized face region.
The feature recognition means recognizes a visually salient feature region as a subject region and calculates a second score indicating the score of the recognized feature region.
When the number of subject regions in the image from which the first blur degrees are calculated is plural, the second calculation means may calculate the second blur degree by performing the weighted averaging without using the first score when the face regions are recognized as the subject regions by the face recognition means, and by performing the weighted averaging using the second score when the feature regions are recognized as the subject regions by the feature recognition means.
With this structure, when a plurality of face regions are recognized as the subject regions, the electronic device calculates the second blur degree by weighted-averaging the first blur degrees according to only the sizes of the face regions. On the other hand, when a plurality of feature regions are recognized as the subject regions, the electronic device calculates the second blur degree by weighted-averaging the first blur degrees according to the sizes of the feature regions and their scores. That is, when the extracted subject regions are face regions, the weighted averaging is performed according to size only, irrespective of the face scores, whereas when the extracted subject regions are feature regions, the weighted averaging is performed according to the scores of the feature regions as well as their sizes. This distinction is made because a face region is considered to attract the viewer's attention unconditionally more than a feature region does. As a result, the electronic device can distinguish the attention-attracting regions in the image more precisely and can calculate the blur degree of the entire image more accurately, so that the blur degree of the entire image becomes higher as the attention-attracting regions become more blurred.
When no subject region is extracted, the first calculation means may calculate the first blur degree by regarding the entire image as the subject region.
Thus, even when there is no attention-attracting region in the image, the electronic device can calculate the blur degree of the entire image by regarding the entire image as the subject region.
The electronic device may further include an operation reception means for receiving an operation of the user, and a display means for displaying a plurality of images.
In this case, the sorting means may sort out the blurred images in accordance with a predetermined operation of the user.
In this case, when the predetermined operation is received, the display means may display only the sorted-out blurred images among the plurality of displayed images.
Thus, merely by performing the predetermined operation, the user can sort a plurality of images, view only the blurred images, and easily delete or exclude unwanted blurred images. The predetermined operation is, for example, an operation on a GUI item such as a button or an icon, but is not limited thereto.
According to another embodiment of the present invention, there is provided a blurred image sorting method including extracting, from an image, a subject region having a predetermined feature in the image.
In this method, a first blur degree indicating the degree of blur of each extracted subject region is calculated.
When the number of subject regions in the image from which the first blur degree is calculated is one, a second blur degree indicating the degree of blur of the entire image is calculated based on the first blur degree. When the number of subject regions in the image from which the first blur degrees are calculated is plural, the second blur degree is calculated based on a value obtained by weighted-averaging the plurality of first blur degrees according to the sizes of the plurality of subject regions.
Then, an image whose calculated second blur degree is equal to or greater than a predetermined threshold is sorted out from a plurality of images as a blurred image.
According to this method, it is possible to reliably calculate the blur degree of an image and sort out blurred images that the user does not need.
According to still another embodiment of the present invention, there is provided a blurred image sorting method including optimizing a subject region having a predetermined feature, the subject region being extracted from an image, so that the subject region has a predetermined size.
In this method, a first blur degree indicating the degree of blur of the subject region, calculated from the optimized subject region, is obtained.
When the number of subject regions in the image from which the first blur degree is calculated is one, a second blur degree indicating the degree of blur of the entire image is calculated based on the first blur degree. When the number of subject regions in the image from which the first blur degrees are calculated is plural, the second blur degree is calculated based on a value obtained by weighted-averaging the plurality of first blur degrees according to the sizes of the plurality of subject regions.
Then, an image whose calculated second blur degree is equal to or greater than a predetermined threshold is sorted out from a plurality of images as a blurred image.
According to this method, by optimizing the size of each extracted subject region and calculating the blur degree of the entire image based on the blur degrees calculated from the subject regions, blurred images can be sorted out reliably.
According to still another embodiment of the present invention, there is provided a program causing an electronic device to execute an extraction step, a first calculation step, a second calculation step, and a sorting step.
In the extraction step, a subject region having a predetermined feature in an image is extracted from the image.
In the first calculation step, a first blur degree indicating the degree of blur of each extracted subject region is calculated.
In the second calculation step, when the number of subject regions in the image from which the first blur degree is calculated is one, a second blur degree indicating the degree of blur of the entire image is calculated based on the first blur degree. When the number of subject regions in the image from which the first blur degrees are calculated is plural, the second blur degree is calculated based on a value obtained by weighted-averaging the plurality of first blur degrees according to the sizes of the plurality of subject regions.
In the sorting step, an image whose calculated second blur degree is equal to or greater than a predetermined threshold is sorted out from a plurality of images as a blurred image.
According to this program, it is possible to accurately calculate the blur degree of an image and sort out blurred images that the user does not need.
According to still another embodiment of the present invention, there is provided a program causing an electronic device to execute an optimization step, an obtaining step, a calculation step, and a sorting step.
In the optimization step, a subject region having a predetermined feature, extracted from an image, is optimized so that it has a predetermined size.
In the obtaining step, a first blur degree indicating the degree of blur of the subject region, calculated from the optimized subject region, is obtained.
In the calculation step, when the number of subject regions in the image from which the first blur degree is calculated is one, a second blur degree indicating the degree of blur of the entire image is calculated based on the first blur degree. When the number of subject regions in the image from which the first blur degrees are calculated is plural, the second blur degree is calculated based on a value obtained by weighted-averaging the plurality of first blur degrees according to the sizes of the plurality of subject regions.
In the sorting step, an image whose calculated second blur degree is equal to or greater than a predetermined threshold is sorted out from a plurality of images as a blurred image.
According to this program, by optimizing the size of each extracted subject region and calculating the blur degree of the entire image based on the blur degrees calculated from the subject regions, blurred images can be sorted out reliably.
According to still another embodiment of the present invention, there is provided an electronic device including an extraction section, a first calculation section, a second calculation section, and a sorting section.
The extraction section extracts, from an image, a subject region having a predetermined feature in the image.
The first calculation section calculates a first blur degree indicating the degree of blur of each extracted subject region.
When the number of subject regions in the image from which the first blur degree is calculated is one, the second calculation section calculates a second blur degree indicating the degree of blur of the entire image based on the first blur degree, and when the number of subject regions in the image from which the first blur degrees are calculated is plural, the second calculation section calculates the second blur degree based on a value obtained by weighted-averaging the plurality of first blur degrees according to the sizes of the plurality of subject regions.
The sorting section sorts out, from a plurality of images, an image whose calculated second blur degree is equal to or greater than a predetermined threshold as a blurred image.
As described above, according to the embodiments of the present invention, it is possible to reliably sort out unwanted blurred images in which the subject that attracts attention is blurred.
These and other objects, features, and advantages of the present invention will become more apparent in light of the following detailed description of best mode embodiments thereof, as illustrated in the accompanying drawings.
Description of drawings
Fig. 1 is a diagram showing the hardware configuration of a PC according to an embodiment of the present invention;
Fig. 2 is a diagram for describing the blurred image sorting function of the PC according to the embodiment of the present invention;
Fig. 3 is a flowchart showing a rough operation flow of a metadata analysis section of the PC according to the embodiment of the present invention;
Fig. 4 is a flowchart showing the flow of feature recognition processing in the embodiment of the present invention;
Fig. 5 is a flowchart showing in detail the flow of the feature recognition processing executed by a feature recognition engine in the embodiment of the present invention;
Fig. 6 is a flowchart showing the flow of blur determination processing in the embodiment of the present invention;
Fig. 7 is a flowchart showing in detail the flow of the blur determination processing executed by the feature recognition engine in the embodiment of the present invention;
Figs. 8A to 8D are diagrams for describing the types of detected edges in the embodiment of the present invention;
Fig. 9 is a flowchart showing the flow of processing for calculating an image blur index in the embodiment of the present invention;
Fig. 10 is a diagram showing a formula for calculating the image blur index in the embodiment of the present invention;
Fig. 11 is a flowchart showing the flow of processing by a blurred image sorting section and an image display section in the embodiment of the present invention; and
Figs. 12A and 12B are diagrams showing a material selection screen in the embodiment of the present invention.
Embodiment
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings.
(Hardware configuration of PC)
Fig. 1 is a diagram showing the hardware configuration of a PC according to an embodiment of the present invention.
As shown in Fig. 1, the PC 100 includes a CPU (Central Processing Unit) 1, a ROM (Read Only Memory) 2, and a RAM (Random Access Memory) 3, which are connected to one another by a bus 4.
The PC 100 also includes an input/output (I/O) interface 5, an input section 6, an output section 7, a storage section 8, a communication section 9, and a drive 10. In the PC 100, the input section 6, the output section 7, the storage section 8, the communication section 9, and the drive 10 are all connected to the I/O interface 5.
The CPU 1 accesses the RAM 3 and the like as necessary when executing various kinds of processing, and controls the entire PC 100. The ROM 2 is a nonvolatile memory that fixedly stores the OS (Operating System) executed by the CPU 1, programs, firmware, and various parameters. The RAM 3 is used as a work area of the CPU 1 and the like, and temporarily stores the OS, various programs being executed, and various pieces of data being processed.
The input section 6 is a keyboard, a mouse, a touch pad, buttons, or the like, and receives various operations of the user and outputs input operation signals to the CPU 1. The output section 7 includes a display unit, such as an LCD (Liquid Crystal Display) or an OEL (Organic Electro-Luminescence) display, that outputs video signals of various contents, and a speaker that outputs audio signals of various contents.
The storage section 8 is, for example, a nonvolatile memory such as an HDD (Hard Disk Drive) or a flash memory. The storage section 8 stores the OS, various programs and applications, and various pieces of data in its built-in hard disk or memory device. The storage section 8 also reads those programs and pieces of data out to the RAM 3.
In particular, in this embodiment, the storage section 8 stores a movie creation application. The storage section 8 also stores moving image files, still image files, and music files that serve as materials for creating a movie, as well as created movie files. The movie creation application is an application for creating the user's own original moving image (movie) using, as materials, moving image files and still image files of images taken by the user and stored in the storage section 8, together with music files. Specifically, the movie creation application creates a movie by inserting moving images or still images selected by the user into a template having predefined moving image frames, still image frames, and the like, and combining them into a file.
The communication section 9 includes, for example, a network interface card and a modem, and communicates with other devices via a network such as the Internet. The communication section 9 can, for example, receive programs and data from other devices via the network.
The drive 10 loads a removable medium 11, reads programs and data recorded on the removable medium 11, and stores them in the storage section 8 or reproduces them through the output section 7, via the I/O interface 5.
The removable medium 11 is a medium such as an optical disc (e.g., a DVD, a BD, or a CD) or a semiconductor memory (e.g., a memory card).
(Function of the movie creation application)
The movie creation application has a function of sorting out, before a movie is created, blurred images that the user does not need and that are unsuitable as movie material (hereinafter referred to as blurred images) from a plurality of still images stored in the storage section 8. This blurred image sorting function will be described below. Fig. 2 is a diagram for describing the blurred image sorting function.
As shown in Fig. 2, the movie creation application 20 includes a metadata analysis section 21, a metadata accumulation section 22, a blurred image sorting section 23, and an image display section 24. The PC 100 also stores a face recognition engine 120 and a feature recognition engine 130 as engines external to the movie creation application 20.
The face recognition engine 120 recognizes a human face from a material image supplied by the movie creation application 20, and extracts a rectangular region containing the face (a face region) as a subject region. The face recognition engine 120 then outputs metadata to the movie creation application 20, the metadata including data on the face region, its size information (height, width, tilt), a face recognition score indicating the certainty of the recognition, and the like.
The face recognition engine 120 uses various known technologies as its face recognition technique. For example, the face recognition engine 120 may recognize a face using feature filters. A feature filter is a filter that detects specific rectangular portions in an image while masking other rectangular portions. With the feature filters, the positional relations among the eyes, eyebrows, nose, cheeks, and so on are detected as facial features indicating that a face is contained in the image, while the shapes of objects other than faces and the positional relations among their components are detected as non-facial features indicating that no face is contained. The face recognition engine 120 filters the image with the feature filters while changing the size and position of the filter frame. The face recognition engine 120 then takes the size of the feature filter obtained when the most certain detection value is obtained as the size of the face region, and extracts the face region. As the feature filters, besides rectangular filters, separability filters that detect circular features and Gabor filters that detect, by edge detection, the positional relations of the facial parts in particular orientations may also be used. In addition to the feature filters, luminance distribution information and skin color information of the image, for example, may also be used for face recognition.
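The embodiment leaves the concrete face detector open ("various known technologies"), so the following Python sketch only illustrates the multi-scale sliding-window search outlined above. The function filter_response is a placeholder standing in for a real feature filter, and every name and value here is an assumption for illustration rather than the engine's actual implementation.

```python
import numpy as np

def filter_response(patch: np.ndarray) -> float:
    """Placeholder feature filter: a real filter would compare rectangular
    sub-regions (eye/nose/cheek layout). Here a simple contrast between the
    upper and lower halves is returned so the sketch runs end to end."""
    upper = patch[: patch.shape[0] // 2].mean()
    lower = patch[patch.shape[0] // 2 :].mean()
    return float(abs(upper - lower))

def detect_face_region(gray: np.ndarray, sizes=(32, 64, 128), stride=8):
    """Scan the image with square windows of several sizes and return the
    window (x, y, size) and score with the most certain response."""
    best = None
    for size in sizes:
        if size > min(gray.shape):
            continue
        for y in range(0, gray.shape[0] - size + 1, stride):
            for x in range(0, gray.shape[1] - size + 1, stride):
                score = filter_response(gray[y:y + size, x:x + size])
                if best is None or score > best[3]:
                    best = (x, y, size, score)
    return best  # metadata: position, window size, and a recognition score

gray = np.random.rand(256, 256)   # stand-in for a grayscale material image
print(detect_face_region(gray))
```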
The feature recognition engine 130 has a feature region recognition function and a blur determination function. The feature region recognition function recognizes and extracts, from an image given by the movie creation application 20, a visually salient feature region as a subject region. The blur determination function determines the degree of blur in a given image.
For the feature region recognition function, the feature recognition engine 130 recognizes the features of the given image to generate feature maps, and then integrates the feature maps to generate a saliency map, thereby recognizing and extracting a rectangular feature region. The recognized features include a luminance feature, a color feature, and an edge (direction) feature. The feature recognition engine 130 generates a luminance map, a color map, and an edge map from the luminance feature, the color feature, and the edge feature of the image, respectively, and linearly combines these maps to generate the saliency map. The feature recognition engine 130 then extracts a rectangular feature region based on the saliency map, and outputs metadata to the movie creation application 20, the metadata including data on the extracted feature region, its size information (height, width, tilt), a feature recognition score indicating the certainty of the recognition, and the like.
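The following Python sketch shows the kind of linear combination and rectangular-region extraction described here, under the assumption of equal-sized feature maps; the weights and the thresholding rule are illustrative assumptions, since in the embodiment the weights are obtained by learning (see the description of step 54 below).

```python
import numpy as np

def normalize(m: np.ndarray) -> np.ndarray:
    """Scale a feature map to [0, 1] so the maps are comparable."""
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def saliency_map(luminance_map, color_map, edge_map, weights=(0.4, 0.3, 0.3)):
    """Linear combination of the three feature maps into one saliency map.
    The weights are illustrative; the engine learns them from labeled images."""
    maps = [normalize(m) for m in (luminance_map, color_map, edge_map)]
    return sum(w * m for w, m in zip(weights, maps))

def salient_bounding_box(sal: np.ndarray, fraction: float = 0.6):
    """Extract a rectangular feature region: the bounding box of the pixels
    whose saliency is at least a fraction of the maximum saliency."""
    ys, xs = np.where(sal >= fraction * sal.max())
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

h, w = 120, 160
lum, col, edg = (np.random.rand(h, w) for _ in range(3))
sal = saliency_map(lum, col, edg)
print(salient_bounding_box(sal))   # (x_min, y_min, x_max, y_max)
```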
For the blur determination function, the feature recognition engine 130 extracts edge points from a given face region or feature region, using the saliency map as necessary, and analyzes the edge points to calculate a blur index for each region. The feature recognition engine 130 then outputs the calculated blur indices to the movie creation application 20.
The metadata analysis section 21 includes a face recognition plug-in and a feature recognition plug-in that cooperate with the face recognition engine 120 and the feature recognition engine 130, respectively. The metadata analysis section 21 supplies material images (still images) to the face recognition engine 120 and the feature recognition engine 130, and obtains metadata on the face regions and the feature regions of the material images. In addition, the metadata analysis section 21 processes the obtained face regions and feature regions so that they have sizes sufficient for the blur determination processing, supplies them to the feature recognition engine 130, and calculates a blur index of the entire image based on the blur indices of the respective regions obtained as the result of the blur determination. Hereinafter, the blur index of each face region and feature region calculated by the feature recognition engine 130 is referred to as a region blur index, and the blur index of the entire image calculated by the metadata analysis section 21 is referred to as an image blur index.
The metadata accumulation section 22 accumulates the metadata on the face regions and the feature regions obtained from the face recognition engine 120 and the feature recognition engine 130. The metadata accumulation section 22 also accumulates the image blur indices calculated by the metadata analysis section 21.
The blurred image sorting section 23 sorts out blurred images from a plurality of material images based on the above image blur indices, in accordance with the user's operation. The image display section 24 displays a list of the plurality of material images and, in accordance with the user's operation, displays only the blurred images sorted out by the blurred image sorting section 23.
(Operation of PC)
Next, the operation of the PC 100 configured as described above will be described. Hereinafter, the movie creation application 20, the face recognition engine 120, and the feature recognition engine 130, which are pieces of software, are described as the main performers of the operations, but all the operations are executed under the control of hardware such as the CPU 1.
(Operation overview of the metadata analysis section)
Fig. 3 is a flowchart showing a rough operation flow of the metadata analysis section 21 in this embodiment.
As shown in Fig. 3, first, the face recognition plug-in of the metadata analysis section 21 supplies a material image accumulated in the storage section 8 to the face recognition engine 120 (step 31).
The face recognition engine 120 executes face recognition processing on the input material image and outputs, as the result of the processing, various metadata (such as data on the extracted face regions, the sizes of the face regions, and the face recognition scores) to the face recognition plug-in (step 32).
The face recognition plug-in judges whether a rectangular region exists in the metadata supplied from the face recognition engine 120 (step 33). If a rectangular region exists (Yes), the face recognition plug-in registers all the metadata on the rectangular region in the metadata accumulation section 22 (step 34).
Next, the feature recognition plug-in cooperates with the feature recognition engine 130 to execute feature recognition processing on the material image (step 35). The feature recognition processing will be described later in detail.
Subsequently, the feature recognition plug-in judges whether a rectangular region exists in the metadata supplied from the feature recognition engine 130 as a result of the feature recognition processing (step 36). If a rectangular region exists (Yes), the feature recognition plug-in registers all the metadata on the rectangular region in the metadata accumulation section 22 (step 37).
Then, the feature recognition plug-in supplies the rectangular regions registered in the metadata accumulation section 22 to the feature recognition engine 130, and causes blur determination processing to be executed on the rectangular regions (step 38). The blur determination processing will also be described later in detail.
The feature recognition plug-in calculates an image blur index of each image (entire image) based on the result of the blur determination processing (step 39). This processing will be described later in detail. Then, the feature recognition plug-in registers the calculated image blur index of each material image in the metadata accumulation section 22 (step 40).
(Feature recognition processing of the metadata analysis section)
Fig. 4 is a flowchart showing in detail the flow of the feature recognition processing in step 35 described above.
As shown in Fig. 4, the feature recognition plug-in first judges whether the image size of the material image obtained from the storage section 8 is equal to or larger than the minimum effective size of the feature recognition processing of the feature recognition engine 130 (step 41). The minimum effective size is, for example, 256 × 256 (pixels), but is not limited thereto. In this processing, it is judged whether the material image is large enough to tolerate the feature recognition processing.
Next, the feature recognition plug-in judges whether the size of the material image is equal to or smaller than the maximum analysis target size of the feature recognition processing of the feature recognition engine 130 (step 42). The maximum analysis target size is, for example, 3200 × 3200 (pixels), but is not limited thereto. When the size of the material image exceeds the maximum analysis target size (No), the feature recognition plug-in reduces the size of the material image so that it becomes equal to or smaller than the maximum analysis target size (step 43). The size of the material image is reduced for the following reason. When the material image has a size larger than the maximum analysis target size, the feature recognition engine 130 could still execute the feature recognition processing by itself, but it would take so long that the processing might not be completed. In other words, the reduction processing lightens the processing load on the feature recognition engine 130.
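A minimal sketch of the size gating in steps 41 to 43, assuming the example sizes given above (256 × 256 minimum, 3200 × 3200 maximum) and plain block-average downscaling, which the embodiment does not prescribe:

```python
import numpy as np

MIN_EFFECTIVE_SIZE = 256   # minimum effective size of feature recognition (pixels per side)
MAX_ANALYSIS_SIZE = 3200   # maximum analysis target size (pixels per side)

def prepare_for_feature_recognition(image: np.ndarray):
    """Return the image to hand to the feature recognition engine, or None
    when the material image is too small to tolerate the processing."""
    h, w = image.shape[:2]
    if h < MIN_EFFECTIVE_SIZE or w < MIN_EFFECTIVE_SIZE:
        return None                      # step 41: too small, skip analysis
    if h <= MAX_ANALYSIS_SIZE and w <= MAX_ANALYSIS_SIZE:
        return image                     # step 42: already within limits
    # step 43: reduce by an integer factor until both sides fit
    factor = int(np.ceil(max(h, w) / MAX_ANALYSIS_SIZE))
    h2, w2 = h - h % factor, w - w % factor
    cropped = image[:h2, :w2]
    return cropped.reshape(h2 // factor, factor, w2 // factor, factor, -1).mean(axis=(1, 3))

small = np.zeros((100, 100, 3))
large = np.zeros((4000, 6400, 3))
print(prepare_for_feature_recognition(small))          # None
print(prepare_for_feature_recognition(large).shape)    # (2000, 3200, 3)
```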
Then, the feature recognition plug-in supplies the material image, whose size has been reduced as necessary, to the feature recognition engine 130 (step 44). The feature recognition engine 130 executes feature recognition processing on the supplied material image (step 45). The feature recognition plug-in then obtains metadata on the feature regions from the feature recognition engine 130 as the result of the processing (step 46).
(Feature recognition processing of the feature recognition engine)
Fig. 5 is a flowchart showing in detail the flow of the feature recognition processing executed by the feature recognition engine 130 in step 45 described above.
As shown in Fig. 5, the feature recognition engine 130 first generates a luminance map from the supplied material image (step 51).
Specifically, the feature recognition engine 130 generates a luminance image in which the luminance value of each pixel of the material image is used as the pixel value. Then, using the luminance image, the feature recognition engine 130 generates a plurality of luminance images (pyramid images) each having a different resolution. For example, the pyramid images are generated in eight resolution levels L1 to L8. The pyramid image of level L1 has the highest resolution, and the resolution decreases in order from L1 to L8. A pixel value of one pixel contained in the pyramid image of a certain level is set to the average of the pixel values of four adjacent pixels contained in the pyramid image of the level immediately above that level.
Subsequently, the feature recognition engine 130 selects pyramid images of two different levels from the plurality of pyramid images, obtains the difference between the two pyramid images, and generates a difference image for luminance. A pixel value of the difference image indicates the difference between the luminance values in the pyramid images of the different levels, that is, the difference between the luminance of a given pixel in the material image and the average luminance around that pixel. The feature recognition engine 130 then generates the luminance map based on a predetermined number of difference images calculated in this way.
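A sketch, in Python with NumPy, of the pyramid and difference-image construction described above. The choice of level pairs and the nearest-neighbour upsampling used to compare two levels are assumptions; the embodiment only fixes the eight-level pyramid and the 2 × 2 averaging.

```python
import numpy as np

def build_pyramid(luminance: np.ndarray, levels: int = 8):
    """Pyramid images L1..L8: each level halves the resolution by averaging
    2 x 2 blocks of the level above it."""
    pyramid = [luminance.astype(float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(img)
    return pyramid

def upsample_to(img: np.ndarray, shape):
    """Nearest-neighbour upsampling so two levels can be compared pixel-wise."""
    ys = (np.arange(shape[0]) * img.shape[0] // shape[0]).clip(0, img.shape[0] - 1)
    xs = (np.arange(shape[1]) * img.shape[1] // shape[1]).clip(0, img.shape[1] - 1)
    return img[np.ix_(ys, xs)]

def luminance_map(luminance: np.ndarray, level_pairs=((1, 4), (2, 5), (3, 6))):
    """Centre-surround differences between pairs of pyramid levels,
    accumulated into one luminance feature map."""
    pyr = build_pyramid(luminance)
    acc = np.zeros_like(pyr[0])
    for fine, coarse in level_pairs:
        acc += np.abs(upsample_to(pyr[fine], pyr[0].shape)
                      - upsample_to(pyr[coarse], pyr[0].shape))
    return acc / len(level_pairs)

lum = np.random.rand(256, 256)
print(luminance_map(lum).shape)   # (256, 256)
```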
Next, the feature recognition engine 130 generates a color map from the material image (step 52). The color map is generated basically by the same method as the luminance map.
First, the feature recognition engine 130 generates an RG difference image and a BY difference image. In the RG difference image, the difference between the R (red) component and the G (green) component of each pixel of the material image is set as the pixel value. In the BY difference image, the difference between the B (blue) component and the Y (yellow) component of each pixel of the material image is set as the pixel value.
Then, using the RG difference image, the feature recognition engine 130 generates a plurality of RG feature images (pyramid images) each having a different resolution. The feature recognition engine 130 selects pyramid images of two different levels from the plurality of pyramid images, obtains the difference between the pyramid images, and generates a difference image for the RG difference. The same processing is performed on the BY difference image. In this way, the feature recognition engine 130 generates color maps for RG and BY based on a predetermined number of difference images calculated in this way.
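A simplified, self-contained sketch of the color maps: it replaces the full pyramid with a single centre-surround difference and approximates the yellow component Y as the mean of R and G, both of which are assumptions made only for illustration.

```python
import numpy as np

def center_surround(feature: np.ndarray, surround: int = 16) -> np.ndarray:
    """Difference between a feature image and its local average (a coarse
    pyramid level), used here as a stand-in for the pyramid differences."""
    h = (feature.shape[0] // surround) * surround
    w = (feature.shape[1] // surround) * surround
    f = feature[:h, :w].astype(float)
    coarse = f.reshape(h // surround, surround, w // surround, surround).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(coarse, surround, axis=0), surround, axis=1)
    return np.abs(f - coarse)

def color_maps(rgb: np.ndarray):
    """RG difference image (R - G) and BY difference image (B - Y), with Y
    approximated as the mean of R and G, each turned into a color feature map
    by centre-surround differencing."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g
    by = b - (r + g) / 2.0
    return center_surround(rg), center_surround(by)

rgb = np.random.rand(256, 256, 3)
rg_map, by_map = color_maps(rgb)
print(rg_map.shape, by_map.shape)   # (256, 256) (256, 256)
```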
Next, the feature recognition engine 130 generates an edge map from the material image (step 53). The edge map is also generated basically by the same method as the luminance map and the color map.
First, the feature recognition engine 130 filters the material image with Gabor filters and generates edge images in which the edge strengths in respective directions, for example 0 degrees, 45 degrees, 90 degrees, and 135 degrees, are set as the pixel values.
Then, using the edge image for each direction, the feature recognition engine 130 generates a plurality of edge images (pyramid images) each having a different resolution. Subsequently, the feature recognition engine 130 selects pyramid images of two different levels from the plurality of pyramid images, obtains the difference between those pyramid images, and generates a difference image for the edge in each direction. In this way, the feature recognition engine 130 generates edge maps for the respective directions based on a predetermined number of difference images calculated in this way.
Then, the feature recognition engine 130 linearly integrates the luminance map, the color maps, and the edge maps generated from the material image to generate a saliency map. In other words, the feature recognition engine 130 performs, for each region at the same position (overlapping regions), a weighted addition of the information (feature amounts) of the respective regions of the luminance map, the color maps, and the edge maps, thereby generating the saliency map (step 54).
Here, the weights used in the weighted addition are obtained by, for example, neural network learning. Specifically, the feature recognition engine 130 generates a saliency map for a predetermined learning image by the same processing as described above. Then, using the saliency map generated in that processing and an image label, the feature recognition engine 130 obtains weight differences, and adds the weight differences to the weights used in the processing of generating the subject map, thereby updating the weights. The image label is a label in which the presence or absence of an actual feature (subject) in the learning image is indicated by 0 and 1 on a pixel basis. In other words, the image label is an ideal saliency map. The feature recognition engine 130 repeats the processing of updating the weights and the processing of generating the saliency map, thereby finally determining appropriate weights.
Based on the saliency map, the feature recognition engine 130 extracts a region having a high feature amount from the material image as a rectangular region (step 55). The feature recognition engine 130 then outputs metadata (such as data on the rectangular region, its size, and the feature recognition score) to the feature recognition plug-in of the metadata analysis section 21 (step 56).
(Blur determination processing of the metadata analysis section)
Fig. 6 is a flowchart showing in detail the flow of the blur determination processing in step 38 described above.
As shown in Fig. 6, the feature recognition plug-in first judges whether the size of the material image that is the target of the blur determination is equal to or larger than the minimum effective size of the blur determination processing of the feature recognition engine 130 (step 61). The minimum effective size is, for example, 64 × 64 (pixels), but is not limited thereto. It is thereby judged whether the material image is large enough to tolerate the blur determination processing. When the size of the material image has been reduced in the feature recognition processing described above, the reduced image is treated as the target image of the blur determination.
When the size of the material image is smaller than the minimum effective size of the blur determination processing (No), the material image is assumed to be a blurred image that cannot be recognized, and the feature recognition plug-in therefore terminates the blur determination processing for the material image.
When the size of the material image is equal to or larger than the minimum effective size of the blur determination processing (Yes), the feature recognition plug-in obtains the metadata on the face regions from the metadata accumulation section 22 (step 62). Here, when the size of the material image from which the face regions were extracted has been reduced in the feature recognition processing, the sizes of the face regions are changed in accordance with the original size of the material image.
Subsequently, the feature recognition plug-in judges whether each face region is an effective rectangle for the blur determination processing of the feature recognition engine 130 (step 63). Here, an effective rectangle is a rectangle that satisfies the minimum effective size of the blur determination processing of the feature recognition engine 130, or a rectangle in which the number of pixels on its short side is 20% or more of the number of pixels on the short side of the material image from which the rectangle was extracted.
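The effective-rectangle test of step 63 can be written directly; treating the 64-pixel example minimum effective size given above as a per-side minimum is an assumption:

```python
MIN_BLUR_SIDE = 64   # example minimum effective size of blur determination (pixels)

def is_effective_rectangle(rect_w: int, rect_h: int,
                           image_w: int, image_h: int) -> bool:
    """A region is an effective rectangle for blur determination when it meets
    the minimum effective size, or when its short side is at least 20% of the
    short side of the material image it was extracted from."""
    meets_min_size = min(rect_w, rect_h) >= MIN_BLUR_SIDE
    meets_ratio = min(rect_w, rect_h) >= 0.2 * min(image_w, image_h)
    return meets_min_size or meets_ratio

print(is_effective_rectangle(80, 90, 1024, 768))   # True  (meets minimum size)
print(is_effective_rectangle(40, 50, 160, 120))    # True  (40 >= 0.2 * 120)
print(is_effective_rectangle(30, 30, 1024, 768))   # False
```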
When the face region is an effective rectangle (Yes), the feature recognition plug-in supplies the face region to the feature recognition engine 130 (step 67) and causes the feature recognition engine 130 to execute blur determination processing on the face region (step 68). In this case, the blur determination processing of the feature recognition engine 130 is executed without using the saliency map.
When the face region is not an effective rectangle (No), the feature recognition plug-in judges whether the size of the material image from which the face region was extracted is equal to or larger than the minimum effective size (256 × 256 pixels) of the feature recognition processing of the feature recognition engine 130 (step 64).
When the size of the material image is equal to or larger than the minimum effective size of the feature recognition processing (Yes), the feature recognition plug-in obtains the metadata on the feature regions from the metadata accumulation section 22 (step 65). Here, when the size of the material image from which the feature regions were extracted has been reduced in the feature recognition processing, the sizes of the feature regions are changed in accordance with the original size of the material image.
Subsequently, the feature recognition plug-in judges whether each feature region is an effective rectangle for the blur determination processing of the feature recognition engine 130 (step 66). Here, an effective rectangle is a rectangle that satisfies the minimum effective size of the blur determination processing of the feature recognition engine 130.
When the feature region is an effective rectangle (Yes), the feature recognition plug-in supplies the feature region to the feature recognition engine 130 (step 67) and causes the feature recognition engine 130 to execute blur determination processing on the feature region (step 68). In this case, the blur determination processing of the feature recognition engine 130 is executed using the saliency map.
When the feature region is not an effective rectangle (No), the feature recognition plug-in supplies the material image from which the feature region was extracted to the feature recognition engine 130 as the rectangular region (step 69), and causes the feature recognition engine 130 to execute blur determination processing on the material image (step 70). That is, the blur determination processing is executed on the entire material image instead of the feature region. In this case, the blur determination processing of the feature recognition engine 130 is executed using the saliency map.
In step 64 described above, when the size of the material image is smaller than the minimum effective size of the feature recognition processing (No), the feature recognition plug-in supplies the material image to the feature recognition engine 130 (step 71) and causes the feature recognition engine 130 to execute blur determination processing on the entire material image (step 72). In this case, the blur determination processing of the feature recognition engine 130 is executed without using the saliency map.
(Details of the blur determination processing of the feature recognition engine)
Fig. 7 is a flowchart showing in detail the flow of the blur determination processing executed by the feature recognition engine 130 in steps 68, 70, and 72 described above.
As shown in Fig. 7, the feature recognition engine 130 first generates edge maps from the supplied face region, feature region, or material image (step 81). When the supplied image is a face region, the saliency map is not used for generating the edge maps. When the supplied image is a feature region, the saliency map is used for generating the edge maps. When the supplied image is an entire material image, the saliency map is used for generating the edge maps if the material image has a size equal to or larger than the minimum effective size of the feature recognition processing, and is not used if the material image has a size smaller than that minimum effective size. Hereinafter, the images supplied from the feature recognition plug-in, including face regions, feature regions, and entire material images, are collectively referred to as "subject regions".
Specifically, the feature recognition engine 130 divides the supplied subject region into blocks each having a size of 2 × 2 pixels. The feature recognition engine 130 then calculates the absolute values of the differences between the pixel values of the pixels in each block, and calculates the average of these absolute values. This average indicates the average edge strength in the vertical, horizontal, and diagonal directions within the block. The feature recognition engine 130 then arranges the averages calculated in this way in the same order as the corresponding blocks in the subject region, thereby generating an edge map of scale SC1. Further, the feature recognition engine 130 generates an edge map of scale SC2 based on an averaged image in which the average of the pixel values in each block of scale SC1 is set as one pixel value. Similarly, the feature recognition engine 130 generates an edge map of scale SC3 based on an averaged image in which the average of the pixel values in each block is set as one pixel value, the block being a 2 × 2 pixel region obtained by dividing the averaged image of scale SC2. In this way, edge maps of different scales are generated based on blocks of different sizes in order to suppress variations in edge strength.
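A sketch of the three-scale edge maps. The exact pixel pairs whose absolute differences are averaged within each 2 × 2 block are not spelled out in the text, so all six pairs (horizontal, vertical, and diagonal) are used here as an assumption.

```python
import numpy as np

def block_edge_strength(img: np.ndarray) -> np.ndarray:
    """Edge map: for every 2 x 2 pixel block, the average of the absolute
    differences between its pixels."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    a = img[0:h:2, 0:w:2].astype(float)   # top-left
    b = img[0:h:2, 1:w:2].astype(float)   # top-right
    c = img[1:h:2, 0:w:2].astype(float)   # bottom-left
    d = img[1:h:2, 1:w:2].astype(float)   # bottom-right
    diffs = [abs(a - b), abs(c - d),      # horizontal
             abs(a - c), abs(b - d),      # vertical
             abs(a - d), abs(b - c)]      # diagonal
    return sum(diffs) / len(diffs)

def average_2x2(img: np.ndarray) -> np.ndarray:
    """Averaged image: each 2 x 2 block replaced by its mean pixel value."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def edge_maps(subject_region: np.ndarray):
    """Edge maps of scales SC1, SC2 and SC3, built from progressively
    averaged versions of the subject region."""
    sc1 = block_edge_strength(subject_region)
    avg1 = average_2x2(subject_region)
    sc2 = block_edge_strength(avg1)
    avg2 = average_2x2(avg1)
    sc3 = block_edge_strength(avg2)
    return sc1, sc2, sc3

region = np.random.rand(256, 256)
for name, m in zip(("SC1", "SC2", "SC3"), edge_maps(region)):
    print(name, m.shape)    # SC1 (128, 128)  SC2 (64, 64)  SC3 (32, 32)
```

Note that under this reading a pixel of SC1 corresponds to a 2 × 2 block of the subject region, a pixel of SC2 to a 4 × 4 block, and a pixel of SC3 to an 8 × 8 block, which is consistent with the coordinate expressions (1) to (3) given later.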
Subsequently, the feature recognition engine 130 detects the dynamic range of the subject region using the edge maps (step 82). Specifically, the feature recognition engine 130 detects the maximum and minimum pixel values among the edge maps of scales SC1 to SC3 described above, and detects the difference between the maximum value and the minimum value as the dynamic range of the edge strength of the subject region.
After that, the feature recognition engine 130 sets initial values of operation parameters according to the detected dynamic range (step 83). Here, the operation parameters include an edge reference value and an extraction reference value. The edge reference value is used for judging edge points. The extraction reference value is used for judging whether the amount of extracted edge points is appropriate.
In other words, the feature recognition engine 130 classifies the subject region as a low-dynamic-range image or a high-dynamic-range image according to whether the dynamic range exceeds a predetermined threshold, and sets the initial values of the operation parameters for each type of image. The operation parameters for a low-dynamic-range image are assumed to be smaller than those for a high-dynamic-range image. This is because a low-dynamic-range image has fewer edges than a high-dynamic-range image and fewer edge points would be extracted from it, so the smaller parameters allow enough edge points to maintain the accuracy of the blur determination to be extracted from a low-dynamic-range image as well.
Then, the feature recognition engine 130 generates local maxima using the generated edge maps (step 84). Specifically, the feature recognition engine 130 divides the edge map of scale SC1 into blocks each having a size of 2 × 2 pixels. The feature recognition engine 130 extracts the maximum value of each block of the edge map and arranges the extracted maximum values in the same order as the corresponding blocks, thereby generating a local maximum LM1 of scale SC1. That is, the maximum pixel value in each block is extracted.
Similarly, the feature recognition engine 130 divides the edge map of scale SC2 into blocks each having a size of 4 × 4 pixels, extracts the maximum value of each block, and arranges the extracted maximum values in the same order as the corresponding blocks. In this way, a local maximum LM2 of scale SC2 is generated. In the same manner, the feature recognition engine 130 divides the edge map of scale SC3 into blocks each having a size of 8 × 8 pixels, and generates a local maximum LM3 of scale SC3 from the maximum values of the blocks.
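The local maxima LM1 to LM3 are plain block-maximum maps over the edge maps; a minimal sketch, assuming the edge maps of the previous step as input:

```python
import numpy as np

def block_max(edge_map: np.ndarray, block: int) -> np.ndarray:
    """Local maximum map: the maximum pixel value of every block x block
    region of an edge map, arranged in the same order as the blocks."""
    h = (edge_map.shape[0] // block) * block
    w = (edge_map.shape[1] // block) * block
    trimmed = edge_map[:h, :w]
    return trimmed.reshape(h // block, block, w // block, block).max(axis=(1, 3))

def local_maxima(sc1: np.ndarray, sc2: np.ndarray, sc3: np.ndarray):
    """LM1, LM2 and LM3 from the edge maps of scales SC1, SC2 and SC3,
    using 2x2, 4x4 and 8x8 blocks respectively."""
    return block_max(sc1, 2), block_max(sc2, 4), block_max(sc3, 8)

sc1 = np.random.rand(128, 128)
sc2 = np.random.rand(64, 64)
sc3 = np.random.rand(32, 32)
lm1, lm2, lm3 = local_maxima(sc1, sc2, sc3)
print(lm1.shape, lm2.shape, lm3.shape)   # (64, 64) (16, 16) (4, 4)
```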
Then, use the above local maximum that produces, feature identification engine 130 is from subject extracted region marginal point.Extract handling for marginal point, as mentioned above, is which of facial zone and characteristic area used remarkable reflection according to the subject zone.In addition, be under the situation of whole material image in the subject zone, whether have the minimum effectively size that the feature identification of being equal to or greater than is handled according to the subject zone, use significantly reflection together.
Specifically, the feature identification engine 130 selects a pixel in the subject area and sets it as a pixel of interest. When the saliency map is used, the pixel of interest is selected from a feature area within the subject area whose saliency is high, that is, whose pixel value in the saliency map is equal to or higher than a predetermined value.
Then, assuming that the coordinates of the selected pixel of interest in an x-y coordinate system defined on the subject area are (x, y), the feature identification engine 130 obtains the coordinates (x1, y1) of the pixel of the local maximum LM1 corresponding to the pixel of interest by the following expression (1).
(x1,y1)=(x/4,y/4)...(1)
One pixel of the local maximum LM1 is generated from a 4 × 4 pixel block of the subject area. Therefore, the coordinate values of the pixel of the local maximum LM1 that corresponds to the pixel of interest in the subject area are 1/4 of the x and y coordinates of the pixel of interest.
Similarly, the feature identification engine 130 obtains the coordinates (x2, y2) of the pixel of the local maximum LM2 corresponding to the pixel of interest by the following expression (2), and the coordinates (x3, y3) of the pixel of the local maximum LM3 corresponding to the pixel of interest by the following expression (3).
(x2,y2)=(x/16,y/16)...(2)
(x3,y3)=(x/64,y/64)...(3)
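Expressions (1) to (3) are simple coordinate down-scalings; a small helper used in the later sketches (floor division is an assumption about how fractional coordinates are handled):

```python
def lm_coords(x, y):
    """Map subject-area coordinates (x, y) to coordinates in LM1, LM2, and LM3
    per expressions (1)-(3); floor division is assumed."""
    return (x // 4, y // 4), (x // 16, y // 16), (x // 64, y // 64)
```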
When the pixel value of the local maximum LM1 at the coordinates (x1, y1), the pixel value of LM2 at (x2, y2), or the pixel value of LM3 at (x3, y3) is equal to or greater than the edge reference value, the feature identification engine 130 extracts the pixel of interest as an edge point (step 85). The feature identification engine 130 then stores the coordinates (x, y) of the pixel of interest in association with the pixel value of the corresponding local maximum LM1, LM2, or LM3 at (x1, y1), (x2, y2), or (x3, y3). The feature identification engine 130 repeats the above processing until all pixels in the subject area have been set as the pixel of interest.
In this way, based on the local maximum LM1, pixels contained in 4 × 4 pixel blocks of the subject area whose edge strength is equal to or greater than the edge reference value are extracted as edge points.
Similarly, based on the local maximum LM2, pixels contained in 16 × 16 pixel blocks of the subject area whose edge strength is equal to or greater than the edge reference value are extracted as edge points. In addition, based on the local maximum LM3, pixels contained in 64 × 64 pixel blocks of the subject area whose edge strength is equal to or greater than the edge reference value are extracted as edge points.
As a result, a pixel is extracted as an edge point if it is contained in at least one of a 4 × 4 pixel block, a 16 × 16 pixel block, and a 64 × 64 pixel block of the subject area whose edge strength is equal to or greater than the edge reference value.
The feature identification engine 130 generates an edge point table ET1 in which the coordinates (x, y) of each edge point extracted based on the local maximum LM1 are associated with the pixel value of the local maximum LM1 corresponding to that edge point.
Similarly, the feature identification engine 130 generates an edge point table ET2 in which the coordinates (x, y) of each edge point extracted based on the local maximum LM2 are associated with the pixel value of the local maximum LM2 corresponding to that edge point, and an edge point table ET3 in which the coordinates (x, y) of each edge point extracted based on the local maximum LM3 are associated with the pixel value of the local maximum LM3 corresponding to that edge point.
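The extraction of step 85 and the edge point tables ET1 to ET3 can be sketched as follows, continuing the helpers above; representing each table as a dictionary keyed by (x, y), and the optional saliency mask, are assumptions rather than details given in the text.

```python
def extract_edge_points(subject_area, lm1, lm2, lm3, edge_ref, saliency_mask=None):
    """Return edge point tables ET1-ET3 as dicts mapping (x, y) -> LM pixel value."""
    et = [{}, {}, {}]
    h, w = subject_area.shape
    for y in range(h):
        for x in range(w):
            if saliency_mask is not None and not saliency_mask[y, x]:
                continue                      # only salient pixels when a saliency map is used
            coords = lm_coords(x, y)
            for i, (lm, (cx, cy)) in enumerate(zip((lm1, lm2, lm3), coords)):
                if cy < lm.shape[0] and cx < lm.shape[1] and lm[cy, cx] >= edge_ref:
                    et[i][(x, y)] = lm[cy, cx]
    return et
```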
Then, using the edge point tables ET1 to ET3 generated as described above, the feature identification engine 130 judges whether the number of extracted edge points is appropriate (step 86). For example, when the total number of extracted edge points, that is, the total number of data items in the edge point tables ET1 to ET3, is less than the extraction reference value, the number of extracted edge points is judged to be inappropriate.
When the number of extracted edge points is judged to be inappropriate (No), the feature identification engine 130 adjusts the operating parameters (step 87). For example, the feature identification engine 130 sets the edge reference value to a predetermined value smaller than its current setting so that more edge points are extracted than at present. Once the operating parameters have been adjusted, the processing returns to step 85 described above, and the above processing is repeated until the number of extracted edge points is judged to be appropriate.
Through the above processing, for a low-dynamic-range image, edge points are also extracted from blocks with low edge strength so that a sufficient number of edge points can be secured to keep the accuracy of the blur determination at a certain level, thereby improving the accuracy of the blur determination. For a high-dynamic-range image, on the other hand, edge points are extracted from blocks with as high an edge strength as possible, so that edge points constituting stronger edges are extracted.
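A sketch of the step 85 to 87 loop; how much the edge reference value is lowered on each pass is not specified in the text, so the fixed shrink factor below is a placeholder.

```python
def extract_with_adjustment(subject_area, lm1, lm2, lm3, edge_ref, extraction_ref,
                            saliency_mask=None, shrink=0.8, min_ref=1e-3):
    """Repeat edge point extraction, lowering the edge reference value until the
    total number of edge points reaches the extraction reference value."""
    while True:
        et1, et2, et3 = extract_edge_points(subject_area, lm1, lm2, lm3,
                                            edge_ref, saliency_mask)
        if len(et1) + len(et2) + len(et3) >= extraction_ref or edge_ref <= min_ref:
            return (et1, et2, et3), edge_ref
        edge_ref *= shrink    # placeholder for "set to a smaller predetermined value"
```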
When the number of extracted edge points is judged to be appropriate (Yes), the feature identification engine 130 performs edge analysis using the edge reference value, the edge point tables, and the local maxima (step 88).
Specifically, based on the edge point tables ET1 to ET3, the feature identification engine 130 sets one of the edge points extracted from the subject area as the pixel of interest. Then, taking the coordinates of the pixel of interest in the x-y coordinate system as (x, y), the feature identification engine 130 obtains the coordinates (x1, y1) to (x3, y3) of the pixels of the local maxima LM1 to LM3 corresponding to the pixel of interest by the above expressions (1) to (3).
The feature identification engine 130 sets Local Max1(x1, y1) to the maximum pixel value in a block of m × m pixels (for example, 4 × 4 pixels) of the local maximum LM1 whose upper-left corner is the pixel at the coordinates (x1, y1) of LM1. Similarly, the feature identification engine 130 sets Local Max2(x2, y2) to the maximum pixel value in a block of n × n pixels (for example, 2 × 2 pixels) of the local maximum LM2 whose upper-left corner is the pixel at the coordinates (x2, y2) of LM2. In addition, the feature identification engine 130 sets Local Max3(x3, y3) to the pixel value of the pixel at the coordinates (x3, y3) of the local maximum LM3.
Here, the parameter m × m used to set Local Max1(x1, y1) and the parameter n × n used to set Local Max2(x2, y2) are parameters for adjusting the difference in the size of the block of the subject area to which one pixel of each of the local maxima LM1 to LM3 corresponds.
The feature identification engine 130 judges whether Local Max1(x1, y1), Local Max2(x2, y2), and Local Max3(x3, y3) satisfy the following conditional expression (4).
Local Max1(x1, y1) > edge reference value, or
Local Max2(x2, y2) > edge reference value, or
Local Max3(x3, y3) > edge reference value ... (4)
When Local Max1(x1, y1), Local Max2(x2, y2), and Local Max3(x3, y3) satisfy the conditional expression (4), the feature identification engine 130 increments the variable Nedge by one.
An edge point satisfying the conditional expression (4) is assumed to be an edge point constituting an edge of a certain strength or higher, regardless of the edge structure.
Figs. 8A to 8D are diagrams for describing the types of edges. The edge shown in Fig. 8A is a steep impulse-like edge, the edge shown in Fig. 8B is a pulse-like edge with a gentler slope than the edge of Fig. 8A, the edge shown in Fig. 8C is a step-like edge with an almost vertical slope, and the edge shown in Fig. 8D is a step-like edge with a gentler slope than the edge of Fig. 8C.
When Local Max1(x1, y1), Local Max2(x2, y2), and Local Max3(x3, y3) satisfy the conditional expression (4), the feature identification engine 130 further judges whether they satisfy the following conditional expression (5) or (6).
Local Max1(x1, y1) < Local Max2(x2, y2) < Local Max3(x3, y3) ... (5)
Local Max2(x2, y2) > Local Max1(x1, y1), and
Local Max2(x2, y2) > Local Max3(x3, y3) ... (6)
When Local Max1(x1, y1), Local Max2(x2, y2), and Local Max3(x3, y3) satisfy the conditional expression (5) or (6), the feature identification engine 130 increments the variable Nsmallblur by one.
An edge point satisfying the conditional expression (4) and the conditional expression (5) or (6) is assumed to be an edge point constituting an edge that has a certain strength or higher and has the structure of Fig. 8B or 8D, but whose strength is lower than that of the edges of Fig. 8A or 8C.
When Local Max1(x1, y1), Local Max2(x2, y2), and Local Max3(x3, y3) satisfy the conditional expression (4) and the conditional expression (5) or (6), the feature identification engine 130 further judges whether Local Max1(x1, y1) satisfies the following conditional expression (7).
Local Max1(x1, y1) < edge reference value ... (7)
When Local Max1(x1, y1) satisfies the conditional expression (7), the feature identification engine 130 increments the variable Nlargeblur by one.
An edge point satisfying the conditional expression (4), the conditional expression (5) or (6), and the conditional expression (7) is assumed to be an edge point constituting an edge that has the structure of Fig. 8B or 8D and that originally had a certain strength or higher but has lost its sharpness because blur has occurred in it. In other words, blur is assumed to have occurred at that edge point.
The feature identification engine 130 repeats the above processing until all edge points extracted from the subject area have been set as the pixel of interest. Through this processing, the numbers of edge points Nedge, Nsmallblur, and Nlargeblur are obtained from the extracted edge points.
Here, the number Nedge is the number of edge points satisfying the conditional expression (4), and the number Nsmallblur is the number of edge points satisfying the conditional expression (4) and the conditional expression (5) or (6). The number Nlargeblur is the number of edge points satisfying the conditional expression (4), the conditional expression (5) or (6), and the conditional expression (7).
Subsequently, using the calculated numbers Nsmallblur and Nlargeblur and the following expression (8), the feature identification engine 130 calculates a region blur index that serves as an indicator of the blur level of the subject area (step 89).
Region blur index = Nlargeblur / Nsmallblur ... (8)
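The edge analysis of expressions (4) to (7) and the region blur index of expression (8) can be sketched as follows, continuing the earlier helpers. The block sizes m = 4 and n = 2 follow the examples given above; the coordinate clipping and the factor that scales the ratio into the 0 to 1000 range mentioned below are assumptions.

```python
def block_max(lm, cx, cy, k):
    """Max pixel value in a k x k block of an LM whose upper-left corner is (cx, cy);
    coordinates are clipped to the map to keep the sketch simple."""
    cy, cx = min(cy, lm.shape[0] - 1), min(cx, lm.shape[1] - 1)
    return lm[cy:cy + k, cx:cx + k].max()

def region_blur_index(edge_tables, lm1, lm2, lm3, edge_ref, m=4, n=2):
    """Classify edge points by expressions (4)-(7) and return the region blur index."""
    n_edge = n_smallblur = n_largeblur = 0
    points = set().union(*[t.keys() for t in edge_tables])
    for (x, y) in points:
        (x1, y1), (x2, y2), (x3, y3) = lm_coords(x, y)
        v1 = block_max(lm1, x1, y1, m)        # Local Max1(x1, y1)
        v2 = block_max(lm2, x2, y2, n)        # Local Max2(x2, y2)
        v3 = block_max(lm3, x3, y3, 1)        # Local Max3(x3, y3)
        if not (v1 > edge_ref or v2 > edge_ref or v3 > edge_ref):   # expression (4)
            continue
        n_edge += 1
        if (v1 < v2 < v3) or (v2 > v1 and v2 > v3):                 # expression (5) or (6)
            n_smallblur += 1
            if v1 < edge_ref:                                       # expression (7)
                n_largeblur += 1
    # Expression (8), scaled into the 0-1000 range; the x1000 factor is an assumption.
    return 1000.0 * n_largeblur / n_smallblur if n_smallblur else 0.0
```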
In other words, the region blur index is the ratio of the edge points assumed to constitute edges in which blur has occurred to the edge points assumed to constitute edges that have a certain strength or higher and the structure of Fig. 8B or 8D. Accordingly, the blur level of the subject area is assumed to be larger as the region blur index becomes higher, and smaller as the region blur index becomes lower. The region blur index is calculated, for example, as a value ranging from 0 to 1000.
The feature identification engine 130 outputs the calculated region blur index to the feature identification plug-in and ends the blur determination processing.
(Calculation of the image blur index by the metadata analysis unit)
Next, the feature identification plug-in of the metadata analysis unit 21 calculates an image blur index for the entire image based on the region blur indices of the subject areas obtained from the feature identification engine 130 (step 38 of Fig. 3). Fig. 9 is a flowchart showing the flow of the image blur index calculation processing in detail.
As shown in Fig. 9, the feature identification plug-in obtains the region blur indices of the subject areas in a single material image from the feature identification engine 130 (step 91), and then judges whether a valid blur determination result has been obtained (step 92).
When a valid blur determination result has not been obtained (No), the feature identification plug-in sets the image blur index to the invalid value "1" (step 97). Here, the case where a valid blur determination result has not been obtained refers to the case where the region blur index obtained from the feature identification engine 130 is an exceptional value, or the case where no rectangular area was supplied to the feature identification engine 130 in the first place, as in the case of No at step 64 in Fig. 6. The image blur index is registered in the metadata accumulation unit 22.
When a valid blur determination result has been obtained (Yes), the feature identification plug-in judges whether valid blur determination results have been obtained from a plurality of subject areas of the one material image (step 93).
When a valid blur determination result has been obtained from only one subject area of the material image (No), the feature identification plug-in sets the obtained region blur index as the image blur index (step 98). In this case, while the image blur index is calculated as a value ranging from 0 to 100, the region blur index calculated by the feature identification engine 130 ranges from 0 to 1000. The feature identification plug-in therefore sets the value obtained by dividing the region blur index by 10 as the image blur index. This image blur index is also registered in the metadata accumulation unit 22.
When valid blur determination results have been obtained from a plurality of subject areas of the one material image (Yes), the feature identification plug-in judges whether the blur determination results were obtained from face areas or from feature areas (step 94).
When the blur determination results have been obtained from a plurality of face areas (Yes), the feature identification plug-in performs a weighted average of the region blur indices of the face areas in proportion to the sizes of the face areas. As a result, one image blur index is calculated for the one material image from which the region blur indices were calculated (step 95).
When the blur determination results have been obtained from a plurality of feature areas (No), the feature identification plug-in performs a weighted average of the region blur indices of the feature areas in proportion to the sizes of the feature areas and their feature identification scores. As a result, one image blur index is calculated for the one material image from which the feature identification scores were calculated (step 96).
Fig. 10 is a diagram showing the formula for calculating the image blur index in steps 95 and 96 described above.
As shown in Fig. 10, the image blur index is calculated by taking a weighted sum of the region blur indices (Bn) of the subject areas n in the material image, using the size (Sn) of each subject area n and the identification score (Dn) of each subject area n as weights, and dividing the summed result by the sum of the weights. When a subject area is a feature area, Dn is the feature identification score of that feature area; when a subject area is a face area, Dn is fixed at 1. That is, when a subject area is a face area, no weighting proportional to the face recognition score is performed. Here, n is a value for identifying each of the plurality of subject areas recognized in the one material image. Note that, since the image blur index is expressed in the range of 0 to 100 while the region blur indices are expressed in the range of 0 to 1000 as described above, the denominator in this formula is multiplied by 10.
The weighting proportional to the size of each subject area is performed because it is generally considered that the larger a subject area is, the more likely it is to attract the viewer's attention. For face areas, no weighting proportional to the face recognition score is performed. This is because it is generally considered that, when the subject is recognized as a face, the viewer is highly likely to look at the face area unconditionally, regardless of the recognition score (feature amount) of the face area. For feature areas, in contrast, it is difficult for the feature identification plug-in to tell what the subject of the feature area is and whether that subject easily attracts attention, so the weighting proportional to the feature identification score is applied when calculating the image blur index.
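The weighted average of Fig. 10, together with the single-area case of step 98, can be sketched as follows; the SubjectArea container is an assumption introduced for readability.

```python
from dataclasses import dataclass

@dataclass
class SubjectArea:
    size: float           # Sn: size of the subject area
    region_blur: float    # Bn: region blur index, 0-1000
    is_face: bool         # face area or feature area
    score: float = 1.0    # feature identification score (ignored for face areas)

def image_blur_index(areas):
    """Image blur index in the 0-100 range per the formula of Fig. 10."""
    if not areas:
        return None                        # no valid blur determination result
    if len(areas) == 1:
        return areas[0].region_blur / 10.0  # single subject area (step 98)
    num = den = 0.0
    for a in areas:
        dn = 1.0 if a.is_face else a.score  # Dn is fixed at 1 for face areas
        num += a.size * dn * a.region_blur
        den += a.size * dn
    return num / (10.0 * den)               # denominator multiplied by 10
```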
The calculation of the image blur index starts when a material image is loaded (taken in) into the storage unit 8 of the PC 100. The calculated image blur index is registered in the metadata accumulation unit 22 in association with the material image. In the blurred image sorting processing described below, blurred images are sorted based on the image blur indices calculated in this way.
(Blurred image sorting processing)
Next, the blurred image sorting processing based on the image blur index calculated as described above will be described.
Fig. 11 is a flowchart showing the flow of the processing of the blurred image sorting unit 23 and the image display unit 24.
As shown in Fig. 11, the image display unit 24 displays a material selection screen that allows the user to select materials for a movie before the movie creation processing performed by the movie creation application 20 starts (step 101).
Figs. 12A and 12B are diagrams showing the material selection screen. As shown in Figs. 12A and 12B, the material selection screen 110 includes a blurred image sorting button 111 for sorting out and displaying only the blurred images among the plurality of still images that are stored in the storage unit 8 and can become materials for a movie. The material selection screen 110 also includes a person image sorting button 112, a smile image sorting button 113, a sound image sorting button 114, a voice image sorting button 115, a moving image sorting button 116, and a still image sorting button 117. The person image sorting button 112 is a button for sorting out and displaying only still images of people, by the number of people in the image. The smile image sorting button 113 is a button for sorting out and displaying only still images of smiling people. The sound image sorting button 114 is a button for sorting out and displaying only moving images that contain sounds other than human voices. The voice image sorting button 115 is a button for sorting out and displaying only moving images that contain human voices. The moving image sorting button 116 and the still image sorting button 117 are buttons for sorting out and displaying only the moving images or only the still images among the plurality of materials (moving images and still images). Fig. 12A shows a state in which a list of only still images 118 is displayed by the still image sorting button 117. Among the displayed still images 118, the still images 118a and 118b are blurred images.
Returning to Fig. 11, the blurred image sorting unit 23 judges whether the user has pressed the blurred image sorting button 111 (step 102). When it is judged that the blurred image sorting button 111 has been pressed (Yes), the blurred image sorting unit 23 obtains the above-described image blur index of each still image from the metadata accumulation unit 22 (step 103).
Subsequently, the blurred image sorting unit 23 judges, for each of the plurality of still images one by one, whether the obtained image blur index is equal to or greater than a predetermined threshold (step 104). The predetermined threshold is, for example, 60, but is not limited to this.
When the image blur index of a still image is less than the threshold (No), the blurred image sorting unit 23 performs the judgment on the next still image.
When the image blur index of a still image is equal to or greater than the threshold (Yes), the blurred image sorting unit 23 sorts out the still image as a blurred image and instructs the image display unit 24 to display only the sorted-out blurred images (step 105).
Then, in accordance with the instruction from the blurred image sorting unit 23, the image display unit 24 switches the display so that only the blurred images among the plurality of still images displayed so far are shown (step 106).
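Steps 102 to 106 reduce to a threshold filter over the registered image blur indices; a minimal sketch using the example threshold of 60 and an assumed mapping from image ID to index:

```python
def sort_blurred_images(image_blur_indices, threshold=60.0):
    """Return the IDs of images whose image blur index is at or above the threshold."""
    return [image_id for image_id, index in image_blur_indices.items()
            if index is not None and index >= threshold]

# Example: show only the blurred images when the sorting button is pressed.
indices = {"IMG_0001": 12.5, "IMG_0002": 73.0, "IMG_0003": 64.2}
print(sort_blurred_images(indices))   # ['IMG_0002', 'IMG_0003']
```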
Fig. 12B shows a state in which only the blurred images have been sorted out by pressing the blurred image sorting button 111. As shown in Fig. 12B, of the still images 118 shown in Fig. 12A, only the blurred images 118a and 118b are sorted out and displayed.
The user can remove the sorted-out and displayed blurred images 118a and 118b from the materials for the movie by deleting them or by storing them in a storage area different from that of the other still images. In other words, the user can immediately grasp which blurred images are unwanted simply by pressing the blurred image sorting button 111 on the material selection screen 110.
(Summary)
As described above, in the present embodiment, the PC 100 can calculate one image blur index for one material image based on the region blur index or indices of one or more subject areas (face areas or feature areas) in the material image. When valid region blur indices are calculated for a plurality of subject areas in a material image, the region blur indices are subjected to a weighted average in proportion to the sizes (and identification scores) of the subject areas, and the image blur index is thereby calculated. Then, when the blurred image sorting button 111 is pressed on the material selection screen 110, only the blurred images are sorted out and displayed based on the image blur indices. Thus, the PC 100 distinguishes the subject areas in a material image that are more likely to attract the viewer's attention and treats the other subject areas accordingly, so that the blur index of the entire image becomes higher as a subject area that attracts more attention becomes more blurred, and blurred images can therefore be sorted out more accurately.
(Modifications)
The present invention is not limited to the embodiment described above, and various modifications can be made without departing from the gist of the present invention.
In the embodiment described above, the blurred image sorting button 111 is pressed on the material selection screen 110, and when the blurred images are sorted out and displayed, the user performs various processing, such as deletion, as appropriate. However, the movie creation application 20 may execute the image blur index calculation processing and the blurred image sorting processing when material images are loaded into the PC 100, and automatically delete the blurred images. In this case, the movie creation application 20 may also display a confirmation message for the user, for example, "The loaded images include blurred images. Delete the blurred images?", and delete the blurred images according to the user's instruction. The movie creation application 20 may also perform the blurred image sorting processing when material images are loaded and stop taking the blurred images into the PC 100. In this case as well, information on the blurred images whose loading has been stopped may be displayed for the user's confirmation. Furthermore, the movie creation application 20 may perform the blurred image sorting processing periodically, for example, once a day or once a week, rather than at the time material images are loaded, and automatically delete the blurred images according to the user's instruction.
In the embodiment described above, the feature identification engine 130 extracts feature areas by generating a saliency map from the image. However, the extraction of feature areas is not limited to the case of using a saliency map. For example, an object located on a line that divides the image in the so-called golden ratio may be detected as the subject.
In the embodiment described above, the feature identification engine 130 generates the saliency map based on a luminance map, a color map, and an edge map. However, the saliency map may also be generated based on maps of other features, for example, a motion vector map generated from the motion vectors of consecutive images.
In the embodiment described above, the movie creation application 20 performs the blurred image sorting on a plurality of still images. However, the movie creation application 20 may also sort out blurred images from a plurality of moving images in a similar manner. In this case, the movie creation application 20 may treat the frames constituting the moving images as the still images described above and sort out a video containing blurred frames as a blurred video. For example, a moving image in which the proportion of blurred frames among all frames is equal to or higher than a predetermined ratio may be sorted out as a blurred image.
In the embodiment described above, the blurred image sorting function has been described as a function of the movie creation application. However, an application other than the movie creation application may include the above-described blurred image sorting function, or a general-purpose application having only the blurred image sorting function may exist independently. In this case, the face recognition engine 120 and the feature identification engine 130 may exist as external engines separate from the application having the blurred image sorting function, or may exist as internal engines.
In the embodiment described above, the blurred image sorting is performed on still images stored in the storage unit 8, which is a local storage of the PC 100. However, the PC 100 may perform the blurred image sorting processing on still images stored in a storage device on a network connected via the communication unit 9.
In the embodiment described above, each step of the blurred image sorting processing is executed by software. However, each of the processing steps, including the face recognition processing, the feature identification processing, the blur determination processing, and the blurred image sorting processing, may be executed by various kinds of hardware, such as circuit boards that implement those steps.
In the embodiment described above, a PC is shown as an example of the electronic apparatus. However, the present invention can likewise be applied to other electronic apparatuses, including television apparatuses, recording/reproducing apparatuses using recording media such as an HDD (hard disk drive), a DVD, and a BD (Blu-ray Disc), digital cameras, digital video recorders, portable AV devices, portable phones, and game machines.
The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2008-244816 filed in the Japan Patent Office on September 24, 2008, the entire content of which is hereby incorporated by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

Claims (11)

1. An electronic apparatus, comprising:
extraction means for extracting, from an image, a subject area having a predetermined feature in the image;
first calculation means for calculating a first blur level indicating a blur level of the extracted subject area;
second calculation means for calculating a second blur level indicating a blur level of the entire image, the second blur level being calculated based on the first blur level when the number of subject areas in the image from which the first blur level is calculated is one, and based on a value obtained by performing a weighted average of a plurality of first blur levels according to the sizes of a plurality of subject areas when the number of subject areas is plural; and
sorting means for sorting out, from a plurality of images, an image whose calculated second blur level is equal to or greater than a predetermined threshold as a blurred image.
2. The electronic apparatus according to claim 1, further comprising:
optimization means for optimizing the extracted subject area so that the extracted subject area has a predetermined size sufficient for the calculation of the first blur level.
3. The electronic apparatus according to claim 2,
wherein the extraction means calculates a score indicating the certainty of the extraction of the subject area, and
wherein, when the number of subject areas in the image from which the first blur level is calculated is plural, the second calculation means calculates the second blur level based on a value obtained by performing a weighted average of the plurality of first blur levels according to the sizes of the plurality of subject areas and the calculated scores.
4. The electronic apparatus according to claim 3,
wherein the extraction means includes
face recognition means for recognizing a face area of a person's face as the subject area and calculating a first score indicating a score of the recognized face area, and
feature identification means for recognizing a visually salient feature area as the subject area and calculating a second score indicating a score of the recognized feature area, and
wherein, when the number of subject areas in the image from which the first blur level is calculated is plural, the second calculation means calculates the second blur level without using the first score in the weighted average when a face area is recognized as the subject area by the face recognition means, and calculates the second blur level using the second score in the weighted average when a feature area is recognized as the subject area by the feature identification means.
5. The electronic apparatus according to claim 2,
wherein, when no subject area is extracted, the first calculation means calculates the first blur level with the entire image taken as the subject area.
6. The electronic apparatus according to claim 2, further comprising:
operation receiving means for receiving an operation by a user; and
display means for displaying a plurality of images,
wherein the sorting means sorts out the blurred images in accordance with a predetermined operation by the user, and
wherein, when the predetermined operation is received, the display means displays only the sorted-out blurred images among the plurality of displayed images.
7. A blurred image sorting method, comprising:
extracting, from an image, a subject area having a predetermined feature in the image;
calculating a first blur level indicating a blur level of the extracted subject area;
calculating a second blur level indicating a blur level of the entire image, based on the first blur level when the number of subject areas in the image from which the first blur level is calculated is one, and based on a value obtained by performing a weighted average of a plurality of first blur levels according to the sizes of a plurality of subject areas when the number of subject areas is plural; and
sorting out, from a plurality of images, an image whose calculated second blur level is equal to or greater than a predetermined threshold as a blurred image.
8. A blurred image sorting method, comprising:
optimizing a subject area having a predetermined feature, the subject area being extracted from an image, so that the subject area has a predetermined size;
obtaining a first blur level calculated from the optimized subject area and indicating a blur level of the subject area;
calculating a second blur level indicating a blur level of the entire image, based on the first blur level when the number of subject areas in the image from which the first blur level is calculated is one, and based on a value obtained by performing a weighted average of a plurality of first blur levels according to the sizes of a plurality of subject areas when the number of subject areas is plural; and
sorting out, from a plurality of images, an image whose calculated second blur level is equal to or greater than a predetermined threshold as a blurred image.
9. A program causing an electronic apparatus to execute the steps of:
extracting, from an image, a subject area having a predetermined feature in the image;
calculating a first blur level indicating a blur level of the extracted subject area;
calculating a second blur level indicating a blur level of the entire image, based on the first blur level when the number of subject areas in the image from which the first blur level is calculated is one, and based on a value obtained by performing a weighted average of a plurality of first blur levels according to the sizes of a plurality of subject areas when the number of subject areas is plural; and
sorting out, from a plurality of images, an image whose calculated second blur level is equal to or greater than a predetermined threshold as a blurred image.
10. A program causing an electronic apparatus to execute the steps of:
optimizing a subject area having a predetermined feature, the subject area being extracted from an image, so that the subject area has a predetermined size;
obtaining a first blur level calculated from the optimized subject area and indicating a blur level of the subject area;
calculating a second blur level indicating a blur level of the entire image, based on the first blur level when the number of subject areas in the image from which the first blur level is calculated is one, and based on a value obtained by performing a weighted average of a plurality of first blur levels according to the sizes of a plurality of subject areas when the number of subject areas is plural; and
sorting out, from a plurality of images, an image whose calculated second blur level is equal to or greater than a predetermined threshold as a blurred image.
11. An electronic apparatus, comprising:
an extraction unit configured to extract, from an image, a subject area having a predetermined feature in the image;
a first calculation unit configured to calculate a first blur level indicating a blur level of the extracted subject area;
a second calculation unit configured to calculate a second blur level indicating a blur level of the entire image, based on the first blur level when the number of subject areas in the image from which the first blur level is calculated is one, and based on a value obtained by performing a weighted average of a plurality of first blur levels according to the sizes of a plurality of subject areas when the number of subject areas is plural; and
a sorting unit configured to sort out, from a plurality of images, an image whose calculated second blur level is equal to or greater than a predetermined threshold as a blurred image.