CN1293759C - Image processing method and apparatus, image processing system and storage media - Google Patents
- Publication number
- CN1293759C (application CNB01132807XA)
- Authority
- CN
- China
- Prior art keywords
- image
- gradient
- pixel
- reference
- annular region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention discloses a method, apparatus, and system for detecting a human face in an image. The method comprises the steps of: deriving, for a subset of the pixels of the image, a first variable from the gray-level distribution of the image; deriving, for the same pixel subset, a second variable from a preset reference distribution, the reference distribution characterizing an object; evaluating the correspondence between the first variable and the second variable over the pixel subset; and judging, according to the result of the evaluating step, whether the object is contained in the image.
Description
Field of the invention
The present invention relates to an image processing method and apparatus and to an image processing system, and more particularly to a method, apparatus, and system for detecting human faces in an image, as well as to a storage medium.
Background of the invention
Image processing methods that detect or extract a characteristic region from images with diverse backgrounds are very useful. For example, they can be used to locate human faces in a given image. Detecting human faces against a complex background in particular is a highly significant task, with applications in many fields such as teleconferencing, human-computer interfaces, security verification, face tracking in surveillance systems, and image compression.
For humans, whether adult or infant, identifying faces in an image with a complex background is very easy. So far, however, no efficient approach exists for detecting human faces automatically and quickly.
Determining whether a region in an image is a human face is an important step in face detection, and many methods currently exist for this determination. For example, one may exploit the distinctive features of a face (the two eyes, mouth, and nose) and the inherent geometric relations among them, the symmetry of the face, the skin-color characteristics of faces, template matching, or neural networks. For example, Haiyuan Wu, "Face Detection and Rotation Estimation using Color Information", Proceedings of the 5th IEEE International Workshop on Robot and Human Communication, 1996, pp. 341-346, gives a method that confirms a face using facial features (the two eyes and the mouth) and the relations between them. In that method, the image region to be determined is first examined to see whether the required facial features can be extracted from it. If they can, the extracted features are further checked for their degree of match against an existing face model, which describes the common geometric relations among the features of a human face. If the degree of match is high, the region is considered a face; otherwise it is not. However, that method depends heavily on image quality, is strongly affected by illumination conditions, the complexity of the image background, and the subject's ethnicity, and its precision is not high. In particular, it is difficult to determine faces accurately when image quality is poor.
There are also many other prior-art face detection methods, such as:
1. Jyh-Yuan Deng and Peipei Lai, "Region-Based Template Deformation And Masking For Eye-Feature Extraction And Description", Pattern Recognition, Vol. 30, No. 3, pp. 403-419, 1997;
2. C. Kervrann, F. Davoine, P. Perez, R. Forchheimer, and C. Labit, "Generalized likelihood ratio-based face detection and extraction of mouth features", Pattern Recognition Letters 18 (1997) 899-912;
3. Haiyuan Wu, Qian Chen, and Masahiko Yachida, "Face Detection From Color Images Using a Fuzzy Pattern Matching Method", IEEE Transactions On Pattern Analysis And Machine Intelligence, Vol. 21, No. 6, June 1999;
4. Guangzheng Yang and Thomas S. Huang, "Human Face Detection In a Complex Background", Pattern Recognition, Vol. 27, No. 1, pp. 53-63, 1994;
5. Kin-Man Lam, "A Fast Approach for Detecting Human Faces in a Complex Background", Proceedings of the 1998 IEEE International Symposium on Circuits and Systems, ISCAS '98, Vol. 4, pp. 85-88.
Summary of the invention
Accordingly, an object of the present invention is to provide an improved image processing method, apparatus, image processing system, and storage medium that can easily perform image processing so as to detect or determine a specific region in a given image quickly and efficiently.
The invention provides a method for detecting an object in an image having a gray-level distribution, characterized in that the method comprises the steps of:
deriving, for a subset of the pixels in said image, a first variable from said gray-level distribution of said image;
deriving, for said pixel subset, a second variable from a preset reference distribution, said reference distribution characterizing said object;
evaluating the correspondence between said first variable and said second variable over said pixel subset; and
judging, according to the result of the evaluating step, whether said image contains said object.
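As a concrete illustration of the claimed steps, the sketch below derives the first variable as the normalized gray-level gradient over a pixel subset, takes the second variable from a caller-supplied reference vector field, and evaluates their correspondence as the mean cosine between the two fields. The function name, the cosine measure, and the decision threshold are illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def detect_object(gray, pixel_idx, reference_field, threshold=0.5):
    """Sketch of the claimed detection method (names assumed).

    gray            : 2-D array of gray levels
    pixel_idx       : (rows, cols) index arrays selecting the pixel subset
    reference_field : (N, 2) unit vectors of the preset reference
                      distribution at those pixels (characterizes the object)
    """
    # First variable: gradient of the gray-level distribution
    gy, gx = np.gradient(gray.astype(float))
    g = np.stack([gx[pixel_idx], gy[pixel_idx]], axis=1)

    # Normalize; ignore pixels with near-zero gradient
    norm = np.linalg.norm(g, axis=1)
    valid = norm > 1e-6
    if not np.any(valid):
        return False, 0.0
    g[valid] /= norm[valid, None]

    # Correspondence: mean cosine between the two vector fields
    score = float(np.mean(np.sum(g[valid] * reference_field[valid], axis=1)))

    # Judge from the result of the evaluation
    return score > threshold, score
```

For a horizontal gray-level ramp and a reference field of unit vectors pointing right, the two fields coincide and the score is 1.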
In addition, the invention provides a method for detecting an object in an image having a gray-level distribution, characterized in that the method comprises the steps of:
a) determining an image section in said image;
b) selecting, according to said image section, a subset of the pixels in said image;
c) deriving, for this subset, a first variable from said gray-level distribution of said image;
d) deriving, for said pixel subset, a second variable from a preset reference distribution, said reference distribution characterizing said object;
e) evaluating the correspondence between said first variable and said second variable over said pixel subset; and
f) judging, according to the result of the evaluating step, whether said image contains said object.
In the present invention, object detection is performed by evaluating two vector fields against each other, so that adverse and uncertain influences such as uneven illumination can be eliminated. The method of the invention thus lowers the requirements on image quality and can handle a greater variety of images. In addition, the gradient computation is fairly simple, which shortens the time required for detection.
As a specific embodiment of the invention, an image section in which a target object (such as a human face) is to be detected is determined by detecting one or more characteristic features in the image (such as a pair of dark regions expected to be a pair of human eyes), thereby providing an effective and efficient way of detecting a predetermined object in an image.
As a specific embodiment, the method of the invention restricts the evaluation of the degree of correspondence between the two vector fields to a region of the image to be detected, such as an annular region in the case of face detection, within which the correspondence between the two vector fields is more pronounced. This makes the evaluation of the correspondence more effective and at the same time shortens the time required for detection.
As a further embodiment, the method of the invention applies a weighted statistical treatment in the evaluation, so that pixels with larger gray-level gradient magnitudes carry greater weight and contribute more to the evaluation result, making the detection more effective.
As a further embodiment, the method of the invention applies both a weighted and an unweighted statistical treatment, and judges whether the image to be detected contains an object from the results of both treatments, thereby improving the accuracy of detection.
According to the present invention, the above object is achieved by providing an image processing method comprising:
setting a rectangular region to be determined in an image;
setting an annular region surrounding the rectangular region;
calculating the gray-level gradients of the pixels in the annular region;
determining a reference gradient for each of said pixels in the annular region; and
judging, according to the gray-level gradients and the reference gradients of said pixels, the feature contained in said rectangular region to be determined.
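To make the annular-region test concrete, the sketch below sets a rectangular candidate region, builds a surrounding ring of assumed width, and scores the correspondence between the image's gray-level gradients in the ring and an assumed reference gradient field pointing radially outward from the rectangle's centre (approximating a face's roughly elliptical edge), weighting each pixel by its gradient magnitude as in the weighted embodiment. The function and parameter names are illustrative, and the patent's actual reference distribution (Fig. 7) may differ.

```python
import numpy as np

def face_score(gray, rect, ring_width=4):
    """Sketch of the annular-region evaluation (names assumed).

    rect = (top, left, height, width): candidate rectangle in `gray`.
    Returns a weighted average correspondence in [0, 1].
    """
    t, l, h, w = rect
    cy, cx = t + h / 2.0, l + w / 2.0

    gy, gx = np.gradient(gray.astype(float))

    rows, cols = np.indices(gray.shape)
    # Annular region: just outside the rectangle, `ring_width` pixels thick
    inside = (rows >= t) & (rows < t + h) & (cols >= l) & (cols < l + w)
    outer = (rows >= t - ring_width) & (rows < t + h + ring_width) & \
            (cols >= l - ring_width) & (cols < l + w + ring_width)
    ring = outer & ~inside

    # Assumed reference gradient: unit vector from the centre to the pixel
    ry, rx = rows[ring] - cy, cols[ring] - cx
    rn = np.hypot(rx, ry)
    rx, ry = rx / rn, ry / rn

    # Image gradient at the ring pixels
    mag = np.hypot(gx[ring], gy[ring])
    ok = mag > 1e-6
    if not np.any(ok):
        return 0.0
    cosang = (gx[ring][ok] * rx[ok] + gy[ring][ok] * ry[ok]) / mag[ok]

    # Weighted statistics: stronger edges contribute more
    return float(np.sum(np.abs(cosang) * mag[ok]) / np.sum(mag[ok]))
```

For a dark disk on a bright background with the rectangle circumscribing it, the edge gradients in the ring are nearly radial and the score approaches 1.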
According to the present invention, the above object is also achieved by providing an image processing apparatus comprising:
means for setting a rectangular region to be determined in an image;
means for setting an annular region surrounding the rectangular region;
means for calculating the gray-level gradients of the pixels in the annular region;
means for determining a reference gradient for each of said pixels in the annular region; and
means for judging, according to the gray-level gradients and the reference gradients of said pixels, the feature contained in said rectangular region to be determined.
In addition, the invention provides a storage medium holding program code for performing object detection in an image having a gray-level distribution, characterized in that the program code comprises:
code for determining an image section in the image;
code for selecting a pixel subset according to said image section;
code for deriving, for the subset of the pixels of said image, a first variable from said gray-level distribution of said image;
code for deriving, for said pixel subset, a second variable from a preset reference distribution, said reference distribution characterizing said object;
code for evaluating the correspondence between said first variable and said second variable over said pixel subset; and
code for judging, according to the result of the evaluating step, whether said image contains said object.
A further aim of the present invention is to provide a novel image processing method, apparatus, and image processing system. Other objects and features of the invention will become clearer from the following description of embodiments and the accompanying drawings, in which identical reference numerals denote the same or similar parts.
Brief description of the drawings
The accompanying drawings, which form a part of the present specification, serve together with the description to illustrate embodiments of the invention and to explain its principle.
Fig. 1 is a block diagram showing an embodiment of an image processing system in which an image processing apparatus according to the present invention can be used.
Fig. 2 is a block diagram showing the structure of a face determination apparatus according to one embodiment of the invention.
Fig. 3 schematically shows an example of an original image to be detected.
Fig. 4 is a flow chart showing the face determination process according to one embodiment of the invention.
Fig. 5 is a schematic diagram showing an image section to be determined (a rectangular region) in the original image of Fig. 3 and the annular region surrounding it.
Fig. 6 is a schematic diagram showing a plurality of pixels in an image.
Fig. 7 is a schematic diagram illustrating the reference gradient of each pixel in the annular region.
Fig. 8A is a schematic diagram showing another example of an original image to be detected.
Fig. 8B is a schematic diagram showing an image section (rectangular region) in the original image of Fig. 8A.
Fig. 8C is a schematic diagram showing the annular region determined for the rectangular region of Fig. 8B.
Fig. 9 is a flow chart showing the face determination process according to another embodiment of the invention.
Fig. 10 shows a reference distribution used for detecting human faces in an image.
Fig. 11 shows a way of generating the image section used for face detection from a detected pair of dark regions considered as possibly corresponding to human eyes.
Fig. 12 is a block diagram of the human-eye detection apparatus according to an embodiment of the invention.
Fig. 13A is a flow chart representing the process of searching for human-eye regions.
Fig. 13B is an example of an original image to be detected.
Fig. 14A is a flow chart for segmenting each column of an image.
Fig. 14B is an example representing a column of an image.
Fig. 14C is an example representing the gray-level distribution of a column.
Fig. 14D is a schematic diagram representing the gray-level distribution of a column divided into segments.
Fig. 14E is an example representing a column of an image divided into segments.
Fig. 14F is a schematic diagram representing the determination of segment points in a column.
Fig. 15A is a flow chart for merging the valley regions of the columns.
Fig. 15B is a schematic diagram representing the columns of an image and the valley regions and seed regions of each column.
Fig. 15C is an image representing the detected eye candidate regions.
Fig. 16A is a flow chart showing the eye-region determination process according to the invention.
Fig. 16B is a schematic diagram representing an eye candidate region and its bounding rectangle.
Fig. 16C is an image representing the detected eye regions.
Fig. 17A is a flow chart for adjusting segment boundaries.
Fig. 17B is a schematic diagram representing the process of merging a segment point into its adjacent region.
Fig. 17C is a schematic diagram representing the process of merging an intermediate region into an adjacent valley region.
Fig. 18A is a flow chart for judging whether a valley region can be merged into a seed region.
Fig. 18B is a schematic diagram representing the predicted valley region of a seed region.
Fig. 18C is a schematic diagram representing the overlap between two valley regions.
Description of the preferred embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a block diagram of an image processing system using an image processing apparatus according to the first embodiment of the present invention. In this system, a printer 105, for example an ink-jet printer or the like, and a monitor 106 are connected to a host computer 100.
The host computer 100 runs application software programs 101, for example a word processor, a spreadsheet, an Internet browser, and the like; an OS (operating system) 102; a printer driver 103, which processes the various drawing commands (image drawing commands, text drawing commands, and graphics drawing commands specifying an output image) issued by the application programs 101 to the OS 102 and generates print data; and a monitor driver 104, which processes the various drawing commands issued by the application programs 101 and displays data on the monitor 106.
Reference numeral 112 denotes an instruction input device, and 113 denotes its driver. For example, a mouse is connected, with which the user can point at and click on the various items of information displayed on the monitor 106 so as to issue various instructions to the OS 102. Note that other pointing devices, such as a trackball, a pen, or a touch panel, or a keyboard, can be used in place of the mouse.
The host computer 100 comprises, as hardware capable of running these software programs, a central processing unit (CPU) 108, a hard disk (HD) 107, a random-access memory (RAM) 109, a read-only memory (ROM) 110, and the like.
As an example of the face determination system shown in Fig. 1, Microsoft Windows 98 may be installed as the operating system on a widely used IBM PC-AT compatible personal computer, the application programs required for printing may be installed, and the monitor and the printer may be connected to the personal computer.
In the host computer 100, each application program 101 generates output image data using text data classified as text (such as characters), graphics data classified as graphics (such as illustrations), image data classified as natural images, and the like. When printing out image data, an application program 101 issues a print request to the OS 102 together with a group of drawing commands, comprising graphics drawing commands corresponding to the graphics data and image drawing commands corresponding to the image data.
Upon receiving the output request from the application program 101, the OS 102 passes the drawing command group to the printer driver 103 corresponding to the output printer. The printer driver 103 processes the print request and the drawing command group input from the OS 102, produces print data that the printer 105 can print, and transfers the print data to the printer 105. If the printer 105 is a raster printer, the printer driver 103 performs image correction processing in accordance with the drawing commands from the OS 102 and then rasterizes the commands in sequence onto a memory, such as an RGB 24-bit page memory. After all the drawing commands have been rasterized, the printer driver 103 converts the contents of the RGB 24-bit page memory into a data format that the printer 105 can print, for example CMYK data, and transfers the converted data to the printer 105.
Note that a digital camera 111, which senses an image of an object and generates RGB image data, can be connected to the host computer 100, and the sensed image data can be loaded and stored in the HD 107. The image data sensed by the digital camera 111 is encoded, for example by JPEG; it can be transferred as image data to the printer 105 after being decoded by the printer driver 103.
The host computer 100 further comprises a face detection apparatus 114 for determining whether a sensed image contains a human face. Image data stored in the HD 107 or the like is read and processed by the face detection apparatus 114. First, the portions that may be face regions are read, and it is judged whether each such region is a face. Then, under the control of the OS 102, the faces determined in the image can be output on the printer 105 or the monitor 106.
Face detection apparatus
Fig. 2 is a block diagram showing the structure of the face detection apparatus according to the first embodiment of the invention.
In the present embodiment, the face detection apparatus 114 comprises a reading device 210, a human-eye detection device 218, an image-section determination device 219, an annular-region setting device 211, a first calculation device 212, a second calculation device 213, and a determination device 214. In the face detection apparatus 114, the reading device 210 carries out the reading of an image: it reads the gray value of each pixel of the image stored in the HD 107, the RAM 109, or the like.
Face detection process
Fig. 3 schematically shows an example of an original image, containing a human face, on which actual face detection has been performed. The original image 300 can be input to the face detection system by a digital device 111 such as a digital camera or a scanner and stored in a storage device such as the HD 107 or the RAM 109.
Referring to Fig. 3, it can be seen that, in general, the edge of a human face in the original image 300 is approximately elliptical, regardless of the person's race, skin color, age, or sex. A circumscribed rectangular region 300A can be drawn along the edge of the face.
Fig. 4 is a flow chart showing the face detection process according to the first embodiment of the invention.
The detection of faces in the original image is explained below with reference to Figs. 4 and 3.
Reading the original image and determining the image section (rectangular region) to be detected
Referring to Fig. 4, the face detection process starts at step S40. At step S41, the reading device 210 first reads the original image 300 to be detected and obtains the gray value of each pixel in the original image 300. If the original image 300 is encoded, for example by JPEG, the reading device 210 first decodes it before reading its image data. At step S41, the reading device 210 also reads the rectangular region 300A to be determined in the original image 300 and determines the position of this rectangular region 300A within the original image 300.
A method and apparatus according to the present invention for determining the image section 300A of the image to be detected is described below. It should be understood, however, that the eye-region detection approach for determining the image section is not limited to the method of the invention described below. On the contrary, other methods and/or processes, such as those known in the art, can also be used to determine this image section.
Eye detection apparatus
Fig. 12 is a block diagram showing the structure of the eye detection apparatus according to an embodiment of the invention.
Detecting eye regions
The human-eye detection process for an original image is explained below with reference to the flow chart of Fig. 13A. Fig. 13B is an example of an original image to be detected. It is assumed that the original image is stored at a predetermined location such as the HD 107 or the RAM 109.
Referring to Fig. 13A, at step S132 each column of the original image is divided into a plurality of intervals by a segmentation device. Referring to Fig. 14E, the lengths of the intervals I1-1, I1-2, ..., I1-9, I1-10 are variable; for example, interval I1-1 and interval I1-2 are unequal in length. According to the average gray value of their pixels, some of the segmented intervals are marked as valley regions. At step S133, the valley regions of adjacent columns are merged by a merging device 1202 to generate eye candidate regions. Because the lengths of the valley regions differ from column to column, the sizes of the eye candidate regions also differ from one another. At step S134, a determination device 1203 determines the human-eye regions among the eye candidate regions. In this way, the regions of the image corresponding to human eyes can be detected.
Segmenting each column of the image
Fig. 14A is a flow chart showing the processing used at step S132 to segment each column of the image.
The terms "valley region", "peak region", and "intermediate region" are defined as follows.
Fig. 14B is an example representing a column of an image. Referring to Fig. 14B, the reading device 200 reads a column C41 of the original image. Fig. 14C shows the gray-level distribution of column C41, and Fig. 14D shows the gray-level distribution of the column after segmentation. In Fig. 14D, the labels I1-5, I1-6, and I1-9 denote segmented intervals, and the gray value of each interval or segment is the mean gray value of the pixels in the corresponding interval or segment of Fig. 14C.
Fig. 14E shows the segmented column C41 of the image of Fig. 14B. Referring to Fig. 14E, the reading device 200 reads the image data of column C41. For the image of Fig. 14B, column C41 is divided into intervals I1-1, I1-2, ..., I1-9, I1-10. The size of an interval is the number of pixels in the interval; for example, if interval I1-2 comprises 12 pixels, its size is 12. The gray value of an interval is the average gray value of the pixels in it.
Referring to Figure 14 D and 14E, if the gray value in the interval gray value between its adjacent region when young, this interval is referred to as " paddy district " so.If the gray value in an interval is greater than the gray value between its adjacent region, this interval is referred to as " peak district " so.On the other hand, if the gray value in an interval between the gray value between its adjacent region, then this interval is referred to as " relay area ".For the row C41 in the present embodiment, interval I1-1 ..., the gray value of I1-10 is respectively 196,189,190,185,201,194,213,178,188 and 231.For interval I1-6, its gray value is 194, and I1-5 between its adjacent region, the gray value of I1-7 is respectively 201 and 213.Because the gray value of interval I1-6 is less than I1-5 between its adjacent region, I1-7, therefore, interval I1-6 is confirmed as the paddy district.With the same manner, interval I1-2, I1-4 and I1-8 are confirmed as the paddy district respectively.For interval I1-5, its gray value is 201, I1-4 between its adjacent region, and the gray value of I1-6 is respectively 185,194.Because the gray value of interval I1-5 is greater than I1-4 between its adjacent region, I1-6, therefore, interval I1-5 is confirmed as the peak district.With the same manner, interval I1-1, I1-3, I1-7 and I1-10 are confirmed as the peak district respectively.In addition, for interval I1-9, its gray value is 188, and I1-8 between its adjacent region, the gray value of I1-10 is respectively 178 and 231.Because gray value circle I1-8 between its adjacent region of I1-9, between the gray value of I1-10, therefore, interval I1-9 is confirmed as relay area.
Because paddy district also is an interval, thus, gray value and the big or small method of calculating the gray value in paddy district and big or small method thereof and computation interval are the same.This method also is applicable to gray value and the size of calculating peak district and relay area.
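Under these definitions, an interval's type depends only on how its mean gray value compares with those of its neighbours. The sketch below classifies a column's intervals; how the endpoint intervals are handled is not stated in the text, so they are compared with their single neighbour alone, an assumption consistent with I1-1 and I1-10 being classified as peaks.

```python
def classify_intervals(means):
    """Label each interval 'valley', 'peak', or 'transition' by comparing
    its mean gray value with its neighbours (endpoint handling assumed)."""
    labels = []
    last = len(means) - 1
    for i, m in enumerate(means):
        if i == 0:                       # leftmost: single neighbour
            left = right = means[1]
        elif i == last:                  # rightmost: single neighbour
            left = right = means[-2]
        else:
            left, right = means[i - 1], means[i + 1]
        if m < left and m < right:
            labels.append('valley')      # darker than both neighbours
        elif m > left and m > right:
            labels.append('peak')        # brighter than both neighbours
        else:
            labels.append('transition')  # in between: intermediate region
    return labels
```

Applied to the gray values of column C41 (196, 189, 190, 185, 201, 194, 213, 178, 188, 231), this reproduces the classification given in the text: valleys at I1-2, I1-4, I1-6, I1-8, peaks at I1-1, I1-3, I1-5, I1-7, I1-10, and an intermediate region at I1-9.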
Below with reference to Figure 14 A, describe the process of among the step S132 the every row in the image being carried out segmentation in detail.
Referring to Figure 14 A,, read the gray value of each pixel in the 1st row of detected image left side at step S141.For these row are divided into three types interval, promptly Gu Qu, peak district and relay area must be determined waypoint.
At step S142, can determine according to the first derivative values and the second dervative value of the gray value at pixel place whether a picture element in these row is waypoint.Figure 14 F is a schematic diagram, shows to judge whether a pixel is the process of the waypoint of row.Referring to Figure 14 F, in row, two neighboring pixels Pi1 have been provided, Pi2.
Subsequently, utilize any discrete derivative operator, calculate first derivative and second dervative value at these two pixel Pi1, Pi2 place.Suppose that the first derivative at pixel Pi1 and Pi2 place represents with D1f and D2f that respectively the second dervative value at pixel Pi1 and Pi2 place is represented with D1s and D2s respectively.If one of two following conditions are set up:
(D1s 〉=0) and (D2s<0);
(D1s<0) and (D2s 〉=0)
And the absolute value of one of D1f and D2f be greater than a predetermined value (this threshold value can be chosen in the scope of 6-15, and preferred value is 8), and then to be judged as be a waypoint to pixel Pi1.Otherwise pixel Pi1 is judged as and is not waypoint.
Like this, in step S142, can obtain a plurality of waypoint S11, S12 ..., S19.
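The segment-point test above can be sketched as follows. Here `np.gradient` stands in for the unspecified discrete derivative operator, the preferred threshold of 8 is used, and the function name and return convention (indices of segment points) are illustrative.

```python
import numpy as np

def find_segment_points(col, threshold=8.0):
    """Sketch of the step-S142 test: pixel Pi1 (index i) is a segment
    point when the second derivative changes sign between Pi1 and Pi2
    (index i+1) and at least one first derivative there is large enough."""
    col = np.asarray(col, dtype=float)
    d1 = np.gradient(col)   # discrete first derivative
    d2 = np.gradient(d1)    # discrete second derivative
    points = []
    for i in range(len(col) - 1):
        d1f, d2f = d1[i], d1[i + 1]   # first derivatives at Pi1, Pi2
        d1s, d2s = d2[i], d2[i + 1]   # second derivatives at Pi1, Pi2
        sign_change = (d1s >= 0 > d2s) or (d1s < 0 <= d2s)
        if sign_change and max(abs(d1f), abs(d2f)) > threshold:
            points.append(i)
    return points
```

On a column that is flat except for one dark pixel, the test marks the two shoulders of the dip, where the gray level starts falling and stops rising, as segment points.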
After the segment points in a column have been determined, the column is divided into a plurality of intervals at step S143. Then, at step S144, the intervals are classified as valley regions, peak regions, or intermediate regions according to their gray values. At step S145, the segment boundaries are adjusted; the details of step S145 are described later with reference to the drawings. At step S146, it is checked whether all columns of the image to be detected have been segmented. If the column just segmented is not the last column, the flow goes to step S147, where the gray value of each pixel in the next column is read; the flow then returns to step S142, and step S142 and the subsequent steps are repeated. If, at step S146, the column just segmented is the last column of the image to be detected, i.e. all columns have been segmented, the flow ends at step S148.
Alternatively, the above segmentation may start from another column of the image to be detected, for example the rightmost column.
Merging valley regions to produce eye candidate regions
Figure 15A is a flowchart showing the processing of merging the valley regions of the columns at step S133 of Figure 13A. Figure 15B is a schematic diagram showing the columns of an image and the valley regions and seed regions in each column. In Figure 15B, the image is divided into n columns Col1, Col2, ..., Coln.
Referring to Figures 15A and 15B, at step S151 all the valley regions S1, S2, S3, S4 in the first (leftmost) column Col1 of the image to be detected are set as seed regions. A seed region is a set of one or more valley regions; since the gray value of a valley region is lower than that of a peak region or an intermediate region, a seed region is normally a dark area in a column.
At step S152 of Figure 15A, the first valley region V2-1 of the next column Col2 is read. The flow then enters step S153, where the first seed region S1 is read. At step S154, on the basis of valley region V2-1 and seed region S1, it is checked whether the valley region V2-1 of column Col2 can be merged into seed region S1. If it can, the flow enters step S156, where valley region V2-1 is merged into the seed region, so that the valley region becomes part of the seed region. If, however, it is judged at step S154 that valley region V2-1 cannot be merged into seed region S1, the flow enters step S155. In this example, valley region V2-1 of column Col2 cannot be merged into seed region S1, so the flow enters step S155. At step S155, it is judged whether the seed region is the last seed region. If it is not, the next seed region is read at step S157 and the flow returns to step S154, repeating step S154 and the following steps. In this example, seed region S1 is not the last seed region, so the next seed region S2 is read at step S157 and the above steps are repeated. If it is judged at step S155 that the seed region is the last seed region (e.g. seed region S4 shown in Figure 15B), the flow enters step S158, where the valley region that cannot be merged into any seed region is set as a new seed region. Referring to Figure 15B, since valley region V2-1 of column Col2 cannot be merged into seed region S1, S2, S3 or S4, i.e. it cannot be merged into any existing seed region, at step S158 the valley region of column Col2 is set as a new seed region.
At step S159, it is judged whether all valley regions of column Col2 have been processed. If so, the flow enters step S1511, where it is checked whether all columns have been processed. If the column is not the last column of the image, the flow returns to step S152 to repeat step S152 and the subsequent steps. Since column Col2 is not the last column, the flow returns to step S152. If all columns have been processed, e.g. if the column is the last column Coln, the flow enters step S1520, where all seed regions are set as eye candidate regions. The flow then ends at step S1521. Figure 15C is an example showing the result of merging the valley regions of the columns of the image at step S133 to produce eye candidate regions.
Determining eye regions
Figure 16A is a flowchart showing the processing of determining eye regions at step S134.
Referring to Figure 16A, the first eye candidate region is read at step S161. The flow then enters step S162, where the gray value of the eye candidate region is calculated. As mentioned above, an eye candidate region comprises one or more valley regions. If an eye candidate region comprises several valley regions, namely valley region 1, valley region 2, ..., valley region n, the gray value of the eye candidate region is given by the following formula:
DarkGray1 = (Valley1Gray1×Pixels1 + Valley2Gray1×Pixels2 + ... + ValleynGray1×Pixelsn)/TotalPixels (1)
where DarkGray1 is the gray value of the eye candidate region;
Valley1Gray1 is the gray value of valley region 1, and Pixels1 is the number of pixels in valley region 1;
Valley2Gray1 is the gray value of valley region 2, and Pixels2 is the number of pixels in valley region 2;
ValleynGray1 is the gray value of valley region n, and Pixelsn is the number of pixels in valley region n;
TotalPixels is the total number of pixels included in the eye candidate region.
Therefore, if an eye candidate region comprises three valley regions whose gray values are 12, 20 and 30 respectively, and the valley regions contain 5, 6 and 4 pixels respectively, the gray value of the eye candidate region will be (12×5+20×6+30×4)/15 = 20.
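Formula (1) and the worked example above can be reproduced by a short sketch (the function name is illustrative):

```python
def candidate_gray(valley_grays, valley_pixels):
    # Formula (1): pixel-count-weighted mean gray value of the valley
    # regions making up an eye candidate region.
    total = sum(valley_pixels)
    return sum(g * p for g, p in zip(valley_grays, valley_pixels)) / total
```

For the example above, `candidate_gray([12, 20, 30], [5, 6, 4])` gives 20.0.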
At step S162 of Figure 16A, the gray value of the eye candidate region is calculated. If the gray value of the eye candidate region is not less than a first threshold, e.g. 160, the flow enters step S1610. In the present embodiment, the first threshold is between 100 and 200. At step S1610, the eye candidate region is determined to be a false eye region and is rejected. The flow then enters step S168, where it is judged whether all eye candidate regions of the image have been processed. If the region is not the last eye candidate region, the next eye candidate region is read at step S169, the flow enters step S162 and the subsequent steps are repeated. If, however, the eye candidate region examined at step S168 is the last one, then all eye candidate regions of the image have been determined and the flow ends at step S1611.
At step S162, if the gray value of the eye candidate region is less than the first threshold, the flow enters step S163.
At step S163, the background gray level of the eye candidate region is calculated. It is determined by the background gray levels of the valley regions included in the eye candidate region, the background gray level of a valley region being the mean gray value of its adjacent intervals. The background gray level of the eye candidate region calculated at step S163 is given by the following formula:
DarkBGray1 = (Valley1BGray1 + Valley2BGray1 + ... + ValleynBGray1)/n (2)
where DarkBGray1 is the background gray level of the eye candidate region;
Valley1BGray1 is the background gray level of valley region 1;
Valley2BGray1 is the background gray level of valley region 2;
ValleynBGray1 is the background gray level of valley region n;
n is the number of valley regions included in the eye candidate region.
At step S163, the background gray level of the eye candidate region is calculated. If the background gray level of the eye candidate region is not greater than a second threshold (e.g. 30), the flow enters step S1610. In this embodiment, the second threshold is between 20 and 80. At step S1610, the eye candidate region is judged to be a false eye region and is rejected. The flow then enters step S168.
At step S163, if the background gray level of the eye candidate region is greater than the second threshold, the flow enters step S164.
At step S164, the difference between the background gray level of the eye candidate region and its own gray value is calculated. If this difference is not greater than a third threshold (e.g. 20), the flow enters step S1610. In the present embodiment, the third threshold is between 5 and 120. At step S1610, the eye candidate region is judged to be a false eye region and is rejected. The flow then enters step S168.
At step S164, if the difference between the background gray level of the eye candidate region and its own gray value is greater than the third threshold, the flow enters step S165.
At step S165, the ratio of the width to the height of the eye candidate region is calculated.
Regarding the width and height of an eye candidate region, the following definitions are used. The size of a valley region is the number of pixels it contains; for example, a valley region containing 5 pixels has size 5. The size of an eye candidate region is the sum of the sizes of the valley regions it contains. The width of an eye candidate region is the number of valley regions it contains, and the height Hd of an eye candidate region is given by the following formula:
Hd = Sd/Wd (3)
where Hd is the height of the eye candidate region, Sd is the size of the eye candidate region, and Wd is the width of the eye candidate region.
Referring to step S165 of Figure 16A, the ratio of the width to the height of the eye candidate region is calculated. At step S165, if this ratio is not greater than a fourth threshold (e.g. 3.33), the flow enters step S1610. In the present embodiment, the fourth threshold is between 1 and 5. At step S1610, the eye candidate region is judged to be a false eye region and is rejected. The flow then enters step S168.
At step S165, if the ratio of the width to the height of the eye candidate region is greater than the fourth threshold, the flow enters step S166.
At step S166, the ratio of the size of the eye candidate region to the size of its bounding rectangle is calculated. Figure 16B is a schematic diagram showing an eye candidate region and its bounding rectangle; it shows eye candidate region D1 and its bounding rectangle DC1. As can be seen from Figure 16B, the bounding rectangle DC1 of an eye candidate region is the smallest rectangle enclosing the eye candidate region D1. The size of the bounding rectangle is the number of pixels contained within it, and the size of the eye candidate region is the number of pixels the eye candidate region contains.
At step S166, the ratio of the size of the eye candidate region to the size of its bounding rectangle is calculated. If this ratio is not greater than a fifth threshold (e.g. 0.4), the flow enters step S1610. In the present embodiment, the fifth threshold is between 0.2 and 1. At step S1610, the eye candidate region is determined to be a false eye region and is rejected. The flow then enters step S168.
At step S166, if the ratio of the size of the eye candidate region to the size of its bounding rectangle is greater than the fifth threshold, the flow enters step S167, where the eye candidate region is determined to be a real eye region.
After step S167, the flow enters step S168 and judges whether this eye candidate region is the last one. If it is not, the next eye candidate region is read at step S169 and the flow returns to step S162. If it is judged at step S168 to be the last eye candidate region, all eye regions have been determined. Figure 16C is an example showing the eye regions detected in the image at step S134.
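The five checks of steps S162 to S167 form a rejection cascade, which can be sketched as follows (an illustrative sketch; the function name, argument list and default thresholds — the example values quoted above — are assumptions):

```python
def is_real_eye(gray, bg_gray, width, height, size, rect_size,
                th1=160, th2=30, th3=20, th4=3.33, th5=0.4):
    if gray >= th1:                 # step S162: region too bright overall
        return False
    if bg_gray <= th2:              # step S163: background too dark
        return False
    if bg_gray - gray <= th3:       # step S164: too little contrast with background
        return False
    if width / height <= th4:       # step S165: wrong aspect ratio
        return False
    if size / rect_size <= th5:     # step S166: too sparse in its bounding rectangle
        return False
    return True                     # step S167: real eye region
```

A candidate is accepted only if it survives every check; failing any one of them corresponds to the branch to step S1610.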
Adjusting interval boundaries
Figure 17A is a flowchart for adjusting the interval boundaries at step S145 of Figure 14A.
Referring to Figure 17A, at step S171 the gray value of a segmentation point is compared with the gray values of its two adjacent intervals, and the segmentation point is merged into the interval whose gray value is closest to its own. For example, referring to Figure 17B, the gray value of segmentation point S is 80, and its two adjacent intervals are In1 and In2, whose gray values are 70 and 100 respectively. Since the gray value of interval In1 is closer to that of segmentation point S, segmentation point S is merged into interval In1.
The flow then enters step S172, where the first intermediate region is read. Then, at step S173, the gray value of the intermediate region and the gray values of its adjacent valley and peak regions are calculated. After these gray values are calculated, the flow enters step S174, where it is compared and judged whether the following formula holds:
GR < GP×Th6 + Gv×(1−Th6)
where GR is the gray value of the intermediate region,
Gv is the gray value of the valley region adjacent to the intermediate region,
GP is the gray value of the peak region adjacent to the intermediate region, and
Th6 is a sixth threshold, e.g. 0.2. The sixth threshold is between 0 and 0.5.
If the judgment result at step S174 is "No", the flow enters step S176. Otherwise, if the result at step S174 is "Yes", the intermediate region is merged into the valley region at step S175.
Figure 17C is a schematic diagram showing an example of merging an intermediate region into its adjacent valley region. The X axis in Figure 17C represents the position in a column, and the Y axis represents the gray value of each region.
Referring to Figure 17C, the gray value of intermediate region Re1 is 25, the gray value of valley region Va1 is 20, the gray value of peak region Pe1 is 70, and the sixth threshold is set to 0.2, so
GP×Th6 + Gv×(1−Th6)
= 70×0.2 + 20×0.8
= 30 > GR = 25
Therefore, the judgment result at step S174 is "Yes", and intermediate region Re1 is merged into valley region Va1. Further, the gray value of intermediate region Re2 is 40 and the gray value of peak region Pe2 is 60, so
GP×Th6 + Gv×(1−Th6)
= 60×0.2 + 20×0.8
= 28 < GR = 40
Therefore, the judgment result at step S174 is "No", and intermediate region Re2 is not merged into valley region Va1.
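The merging criterion of step S174, together with the two worked examples above, can be sketched as follows (the function name is illustrative):

```python
def merge_into_valley(gr, gp, gv, th6=0.2):
    # Step S174: an intermediate region with gray value gr is merged into
    # its adjacent valley region when gr lies below a blend of the adjacent
    # peak (gp) and valley (gv) gray values.
    return gr < gp * th6 + gv * (1 - th6)
```

For the two examples above, `merge_into_valley(25, 70, 20)` is True (Re1 is merged) and `merge_into_valley(40, 60, 20)` is False (Re2 is not).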
Referring to step S176 of Figure 17A, it is checked whether all intermediate regions of the image have been processed. If the intermediate region is not the last one, the next intermediate region is read at step S177, and the flow then enters step S173 to repeat step S173 and the subsequent steps. If, however, it is judged at step S176 that the intermediate region is the last one, i.e. all intermediate regions have been processed, the flow ends at step S178. In this way, all interval boundaries of the detected image have been adjusted.
Judging whether a valley region can be merged into a seed region
Figure 18A is a flowchart for judging, at step S154 of Figure 15A, whether a valley region can be merged into a seed region.
Figure 18B is a schematic diagram showing the predicted valley region of a seed region. The predicted valley region of a seed region is not a valley region that actually exists in any column of the image. It is taken to lie in the column next to the rightmost column of the seed region, at the same position as the seed region's valley region in that rightmost column. Referring to Figure 18B, valley region Va3 is the rightmost valley region of seed region Se1 and is located in column Col1; column Col2 is the next column after Col1. Thus valley region Va1 is the predicted valley region of seed region Se1: it lies in column Col2, at the same position as valley region Va3 but in a different column.
Figure 18C is a schematic diagram showing the overlap of two valley regions. The overlap of two valley regions is the area whose pixels belong to both valley regions.
Referring to Figure 18C, the interval from point B to point D is valley region Va1, and the interval from point A to point C is valley region Va2; Va1 is the predicted valley region of seed region Se1, and Va2 is a real valley region of column Col2. Thus the interval from point B to point C is the overlap of valley regions Va1 and Va2.
The process of judging whether a valley region can be merged into a seed region is described with reference to Figure 18A. Referring to Figure 18A, at step S181 the overlap of the valley region and the predicted valley region of the seed region is calculated.
After the overlap is calculated, the flow enters step S182, where it is compared and judged whether the following formula holds:
Osize / Max(Vsize, SVsize) > Th7
where Osize is the size of the overlap of the valley region and the predicted valley region of the seed region,
Max(Vsize, SVsize) is the larger of the sizes of the valley region and the predicted valley region of the seed region, and
Th7 is a seventh threshold, e.g. 0.37. The seventh threshold is between 0.2 and 0.75.
If the judgment result at step S182 is "No", the flow enters step S188: the valley region cannot be merged into the seed region, and the flow ends at step S189. Otherwise, if the result at step S182 is "Yes", the flow enters step S183.
At step S183, the gray values of the valley region and the seed region are calculated. The flow then enters step S184, where it is compared and judged whether the following formula holds:
|GValley − GSeed| < Th8
where GValley is the gray value of the valley region,
GSeed is the gray value of the seed region, and
Th8 is an eighth threshold, e.g. 40. The eighth threshold is between 0 and 60.
If the judgment result at step S184 is "No", the flow enters step S188: the valley region cannot be merged into the seed region, and the flow ends at step S189. Otherwise, if the result at step S184 is "Yes", the flow enters step S185.
At step S185, the brightness of the valley region's background, of the seed region's background, and of the valley region and the seed region themselves are calculated respectively.
The brightness of a pixel in the image can be calculated by the following formula:
G = 1.2219×10⁻¹·L + 9.063×10⁻⁴·L² + 3.6833526×10⁻⁵·L³ + 1.267023×10⁻⁷·L⁴ + 1.987583×10⁻¹⁰·L⁵ (4)
Formula (4) represents the nonlinear relation between gray value and brightness value in the Munsell color system, where G is the gray value of a pixel, between 0 and 255, and L is the brightness value of the pixel, also between 0 and 255.
Therefore, the brightness value of an image can be obtained from its gray values, and conversely the gray values of the image can be obtained from its brightness.
For this example, the gray values of pixels Pi1 and Pi2 in Figure 14F are 50 and 150 respectively; from formula (4), the brightness values of pixels Pi1 and Pi2 can be determined to be 128 and 206 respectively.
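Formula (4) can be evaluated, and inverted numerically, as follows (a minimal sketch; the coefficients are taken exactly as printed above, the function names are illustrative, and the bisection inverse is an assumption — the text does not specify how the inverse mapping is computed):

```python
def gray_from_luminance(l):
    # Formula (4): gray value G as a quintic polynomial of brightness L.
    return (1.2219e-1 * l + 9.063e-4 * l ** 2 + 3.6833526e-5 * l ** 3
            + 1.267023e-7 * l ** 4 + 1.987583e-10 * l ** 5)

def luminance_from_gray(g, lo=0.0, hi=255.0):
    # Numeric inverse by bisection; valid because every coefficient is
    # positive, so gray_from_luminance is strictly increasing on [0, 255].
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if gray_from_luminance(mid) < g:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

The bisection converges to any target value within the range spanned by the polynomial on [0, 255].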
Returning to Figure 18A, after step S185 the flow enters step S186, where it is compared and judged whether the following formula holds:
Min((Lvb−Lv), (Lsb−Ls)) / Max((Lvb−Lv), (Lsb−Ls)) > Th9
where Lv is the brightness of the valley region and Ls is the brightness of the seed region;
Lvb is the brightness of the valley region's background and Lsb is the brightness of the seed region's background;
Min((Lvb−Lv), (Lsb−Ls)) is the smaller of (Lvb−Lv) and (Lsb−Ls);
Max((Lvb−Lv), (Lsb−Ls)) is the larger of (Lvb−Lv) and (Lsb−Ls); and
Th9 is a ninth threshold, e.g. 0.58. The ninth threshold is between 0.3 and 1.
If the judgment result at step S186 is "No", the flow enters step S188: the valley region cannot be merged into the seed region, and the flow ends at step S189. Otherwise, if the result at step S186 is "Yes", the flow enters step S187.
At step S187, the valley region is merged into the seed region, and the flow ends at step S189.
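The three-stage decision of steps S181 to S187 can be condensed into one sketch (the function name, argument list and default thresholds — the example values quoted above — are assumptions):

```python
def can_merge(overlap, vsize, svsize, g_valley, g_seed,
              lv, ls, lvb, lsb, th7=0.37, th8=40, th9=0.58):
    # Step S182: the overlap with the predicted valley region must be large.
    if overlap / max(vsize, svsize) <= th7:
        return False
    # Step S184: the gray values of valley region and seed region must be similar.
    if abs(g_valley - g_seed) >= th8:
        return False
    # Step S186: the background-to-region brightness contrasts must be similar.
    dv, ds = lvb - lv, lsb - ls
    if min(dv, ds) / max(dv, ds) <= th9:
        return False
    return True  # step S187: merge the valley region into the seed region
```

Only a valley region that passes all three tests is merged; failing any one corresponds to the branch to step S188.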
As can be seen from the above, the method of the present invention provides a fast way to detect human eyes in an image with a complex background, without requiring the image to be of very high quality, thereby substantially eliminating the possibility of human eyes being overlooked. The method can accurately detect human eyes of different sizes, orientations and brightness. Therefore, an apparatus or system according to the method of the present invention can detect human eyes quickly and efficiently.
The method of the above embodiment is used to detect human eyes; however, the present invention is not limited to detecting human eyes and can also be adapted to other detection tasks, such as detecting defective parts of a circuit board.
In addition, the above method and apparatus for detecting the regions of an image corresponding to human eyes are used to detect a human face contained in an image, but the face detection method and apparatus of the present invention may also employ other methods and apparatus to detect the human eyes in the image to be examined. For example, the method disclosed by Kin-Man Lam in "A Fast Approach for Detecting Human Faces in a Complex Background", Proceedings of the 1998 IEEE International Symposium on Circuits and Systems (ISCAS '98), Vol. 4, pp. 85-88, may be used for eye detection in the face detection method and apparatus of the present invention.
When at least two eye (or dark) regions are detected, any pair of eye regions is selected as a pair of candidate eyes. For each selected pair of eye regions, the distance L between their centers is determined. Then, in the manner shown in Figure 11, a rectangular image segment 300A is determined. For face detection, besides the rectangular image segment shown in Figure 11, another rectangular image segment is also determined, which is symmetrical to the image segment of Figure 11 with respect to the straight line passing through the centers of the pair of eye regions.
It should be understood that the values/ratios shown in Figure 11 need not be strict; on the contrary, values/ratios differing from those shown in Figure 11 within a certain range (e.g. ±20%) are all acceptable for implementing the face detection method of the present invention.
In the case where more than two eye regions are detected, every possible pair of eye regions is selected to determine the corresponding image segments, and for each pair of eye regions in the image, two rectangular image segments are produced in the manner described above.
Then, for each image segment so determined, it is detected whether the image segment corresponds to the image of a human face, as will be described below.
In addition, the shape of the image segment 300A to be examined is not limited to a rectangle; on the contrary, it may be any suitable shape, such as an ellipse.
Furthermore, the position of the face portion (image segment) can be determined not only from the eye regions (dark regions) detected in the image, but also by detecting other facial features in the image, such as the mouth, nose or eyebrows. Besides the eye detection method of the present invention disclosed here, prior art methods may also be used to determine the position of a face by detecting facial features such as the eyes, mouth, nose and eyebrows, e.g. the facial feature detection methods given in the documents listed in the "Background of the Invention" section of this specification.
Generation of the annular region
Then, the flow enters step S42. At step S42, an annular region 300R surrounding the rectangular region 300A is determined by the annular-region determining device 211. The generation of the annular region 300R will be described in detail below with reference to Figure 5.
Figure 5 is a schematic diagram showing the rectangular region 300A of Figure 3 and the annular region 300R determined around it. In the plane coordinate system shown in Figure 5, the upper left corner of the original image 300 is the coordinate origin, and the X and Y coordinates represent, respectively, the horizontal and vertical distances of each pixel from the origin.
With reference to Figure 5, first, by the reading device 210 shown in Figure 2, the original image 300 stored in advance in HD 107 or RAM 109 and schematically shown in Figure 3 is read, and the position of the rectangular region 300A relative to the original image 300 is obtained. In this example, in the given X, Y coordinate system, the coordinates of the four corners of the rectangular region 300A are (230, 212), (370, 212), (230, 387) and (370, 387), and its width and height are 140 and 175 respectively.
The width and height of the rectangular region 300A are then enlarged and reduced to 9/7 and 5/7 of their original values respectively, giving two rectangular regions 300A1 and 300A2. In this X, Y coordinate system, the coordinates of the four corners of the first rectangular region 300A1 are (210, 187), (390, 187), (210, 412) and (390, 412), and the coordinates of the four corners of the second rectangular region 300A2 are (250, 237), (350, 237), (250, 362) and (350, 362). The region between the first and second rectangular regions 300A1 and 300A2 is the annular region 300R determined for the original rectangular region 300A.
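The construction of 300A1 and 300A2 amounts to scaling the rectangle about its own center, which can be sketched as follows (the function name is illustrative; corners are given as top-left and bottom-right):

```python
def scaled_rect(corners, factor):
    # Scale a rectangle about its center by the given factor.
    (x0, y0), (x1, y1) = corners
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w = (x1 - x0) * factor / 2.0
    half_h = (y1 - y0) * factor / 2.0
    return ((cx - half_w, cy - half_h), (cx + half_w, cy + half_h))

rect_300a = ((230, 212), (370, 387))
outer = scaled_rect(rect_300a, 9 / 7)   # 300A1, approx. ((210, 187), (390, 412))
inner = scaled_rect(rect_300a, 5 / 7)   # 300A2, approx. ((250, 237), (350, 362))
```

The annular region 300R is then the set of pixels lying inside `outer` but outside `inner`.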
Calculating the gradient of the gray level, and its weight, at each pixel in the annular region
Returning to Figure 4, after step S42 the flow enters step S43. At step S43, the first calculation device 212 calculates the gradient value of the gray value of each pixel in the annular region 300R, and determines the weight of the gradient value of the gray value of each pixel in the annular region 300R.
With reference to Figures 5 and 6, it is explained below how the face detecting apparatus according to the present invention determines the gradient of the gray value at each pixel in the annular region 300R and the corresponding weight.
In general, an image comprises many pixels, and around each pixel there are many other pixels. The gradient of the gray value at a given pixel can therefore be determined from the gray values of the k² pixels around it, where k is an integer in the range 2 to 15. In the present embodiment, k = 3, and the Sobel operator is used to obtain the gradient value of the gray value of each pixel in the annular region 300R of the original image.
With reference to Figure 6, it is explained below how the gradient value of the gray value of each pixel in the annular region 300R is determined, taking pixel P, whose coordinates in the X, Y coordinate system are (380, 250), as an example.
Referring to Figure 6, the adjacent pixels P1, P2, P3, P4, P5, P6, P7 and P8 of pixel P, i.e. the pixels surrounding pixel P, are selected. In the present embodiment, the gray values of all pixels in the image are between 0 and 255. The gray value of pixel P is 122, and the gray values of pixels P1, P2, P3, P4, P5, P6, P7 and P8 are 136, 124, 119, 130, 125, 132, 124 and 120 respectively.
According to the Sobel operator, on the basis of the gray values of the adjacent pixels P1, P2, ..., P8 of pixel P, the gradient value of the gray value of pixel P can be obtained from the following formulas:
DX1 = (G3 + 2×G5 + G8) − (G1 + 2×G4 + G6);
DY1 = (G6 + 2×G7 + G8) − (G1 + 2×G2 + G3) (5)
where DX1 is the first component of the gradient value of the gray value of pixel P, DY1 is the second component, and G1, G2, ..., G8 represent the gray values of pixels P1, P2, ..., P8 respectively. Therefore, according to formula (5), the gradient value of the gray value of pixel P is (−39, −3).
Similarly, the gradient values of the gray values of the other pixels in the annular region 300R can be obtained. In this example, the gradient values of the gray values of pixels P1, P2, P3, P4, P5, P6, P7 and P8 are (−30, −5), (−36, −4), (−32, −1), (−33, −6), (−30, −4), (−38, −8), (−33, −3) and (−31, 2) respectively. In this way, the gradient values of the gray values of all pixels in the annular region 300R can be obtained.
Returning to Figure 4, at step S43 the first calculation device 212 also determines the weight of the gradient of the gray value of each pixel in the annular region 300R, according to the following formula:
W1 = (|DX1| + |DY1|)/255; (6)
where W1 represents the weight of the gradient of the gray value of a pixel, and (DX1, DY1) is the gradient value of the gray value of that pixel.
For example, in the present embodiment, referring to Figure 6, the gradient value of the gray value of pixel P is known to be (−39, −3); therefore the weight of the gradient of the gray value of pixel P is (|−39| + |−3|)/255 = 0.165.
Likewise, in the same way, the weights of the gradient values of the gray values of the other pixels P1, P2, P3, P4, P5, P6, P7, P8, etc. in the annular region 300R can easily be obtained; they are 0.137, 0.157, 0.129, 0.153, 0.133, 0.180, 0.141, 0.129, etc. respectively.
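The Sobel computation of formulas (5) and (6), together with the worked example for pixel P, can be sketched as follows (the mapping of G1 ... G8 to positions in the 3×3 neighbourhood is assumed to follow Figure 6, which is not reproduced here):

```python
def sobel_gradient(neighbors):
    # Formula (5): neighbors holds the gray values G1..G8 of the pixels
    # P1..P8 surrounding the pixel under consideration.
    g1, g2, g3, g4, g5, g6, g7, g8 = neighbors
    dx = (g3 + 2 * g5 + g8) - (g1 + 2 * g4 + g6)
    dy = (g6 + 2 * g7 + g8) - (g1 + 2 * g2 + g3)
    return dx, dy

def gradient_weight(dx, dy):
    # Formula (6): weight of the gray-value gradient of a pixel.
    return (abs(dx) + abs(dy)) / 255.0
```

With the neighbour gray values of pixel P given above, `sobel_gradient([136, 124, 119, 130, 125, 132, 124, 120])` returns (−39, −3), and the corresponding weight rounds to 0.165.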
Calculating the reference gradient at each pixel in the annular region
Referring to Figure 4, after step S43 the flow proceeds to step S44. At step S44, the second calculation device 213 calculates the gradient of a reference distribution at each pixel in the annular region 300R. This reference distribution represents an ideal model of the human face in region 300A. The gradient of the reference distribution will be used by the determining device 214 to judge how close the image area in the annular region 300R is to the image of a human face.
In the face detection method and apparatus of the present invention, face detection is carried out by evaluating, at each pixel of the relevant portion (the annular region 300R) of the image to be processed, the difference between the direction of the gradient of the gray-level distribution and the direction of a reference gradient derived from the reference distribution.
Specifically, for each determined image segment 300A, a face reference distribution is determined, which can be expressed as:
z = h − (x − x_c)²/a² − (y − y_c)²/b²
where h is a constant, a/b equals the ratio of the width of the image segment to the height of the image segment, and (x_c, y_c) is the center of the image segment. Figure 10 shows a gray-level image having such a reference distribution, and Figure 7 shows the contour lines (E11 and E12) of this reference distribution.
Thus the gradient of this reference distribution can be expressed as:
∇z = (∂z/∂x, ∂z/∂y) = (−2(x − x_c)/a², −2(y − y_c)/b²) (7)
Since in the subsequent processing steps only the direction of the vector (∂z/∂x, ∂z/∂y) is significant, and the opposite direction (−∂z/∂x, −∂z/∂y) will be treated as having the same direction, scaling the x and y components of the vector ∇z by the same factor does not influence the evaluation result. Thus only the ratio a/b is significant; a typical value of a/b is 4/5.
Thus, at step S44, the gradient ∇z is calculated using formula (7), and this gradient is called the reference gradient.
After step S44, the flow proceeds to step S45. At step S45, for each pixel in the annular region 300R, the angle between the gradients g and ∇z is calculated, where g = (DX1, DY1) is the gradient of the gray-level distribution. Of course, g is not limited to the gradient calculated using formula (5); it may be the gradient calculated by any other operator for discrete gradient calculation.
Specifically, at a point (x, y), the angle θ between g and ∇z can be calculated using the following formulas:
cos θ = ∇z·g / (|g|·|∇z|)
θ = cos⁻¹(|∇z·g| / (|g|·|∇z|)) (8)
where ∇z·g is the inner product of the vectors ∇z and g, and |g| and |∇z| represent the magnitudes of the vectors g and ∇z respectively. Note that θ is in the range 0 ≤ θ ≤ π/2.
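Formulas (7) and (8) can be sketched together as follows (the function names are illustrative; a = 4 and b = 5 realize the typical ratio a/b = 4/5):

```python
import math

def reference_gradient(x, y, cx, cy, a=4.0, b=5.0):
    # Formula (7): gradient of the reference distribution at pixel (x, y),
    # where (cx, cy) is the center of the image segment.
    return (-2.0 * (x - cx) / a ** 2, -2.0 * (y - cy) / b ** 2)

def gradient_angle(g, z):
    # Formula (8): angle in [0, pi/2] between the gray-level gradient g and
    # the reference gradient z; opposite directions count as the same
    # direction because of the absolute value on the inner product.
    dot = g[0] * z[0] + g[1] * z[1]
    cos_theta = abs(dot) / (math.hypot(*g) * math.hypot(*z))
    return math.acos(min(1.0, cos_theta))
```

The `min(1.0, ...)` clamp guards against floating-point values marginally above 1, which would otherwise put `acos` outside its domain.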
Therefore, in this example, the angle between the two gradients at pixel P is found to be 0.29. In the same way, the angles between the two gradients at the other pixels P1, P2, P3, P4, P5, P6, P7, P8, etc. are found to be 0.19, 0.26, 0.33, 0.18, 0.23, 0.50, 0.27, and 0.42 (radians), respectively.
Then, also at step S45, the mean value of the angles between the grayscale gradient values Grad of all pixels in the annular region 300R and the corresponding reference gradients is determined according to the following formula:

A1 = S1/C1    (9)

where A1 represents the mean value, over all pixels in the annular region, of the angle between the grayscale gradient and its corresponding reference gradient; S1 represents the sum of the gradient angles of the pixels in the annular region; and C1 represents the total number of pixels in the annular region.
For this example, the mean value of the angles between the grayscale gradients of the pixels in the annular region 300R and the corresponding reference gradients is 0.59.
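Formula (9) is a plain average of the per-pixel angles. The sketch below uses only the nine angles quoted above, so its average differs from the 0.59 obtained over the full annulus; the value 0.61 is the example 11th threshold from the text:

```python
def mean_gradient_angle(angles):
    # A1 = S1 / C1: sum of per-pixel gradient angles over the pixel count.
    return sum(angles) / len(angles)

# The nine per-pixel angles (radians) listed for pixels P, P1..P8 above.
sample = [0.29, 0.19, 0.26, 0.33, 0.18, 0.23, 0.50, 0.27, 0.42]
a1 = mean_gradient_angle(sample)
passes_first_test = a1 < 0.61  # compare against the 11th threshold
```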
Returning to step S45 of Fig. 4, after the mean value of the above gradient angles is obtained, it is further judged at step S45 whether this mean value is less than an 11th threshold, such as 0.61. In the present invention, the 11th threshold lies between 0.01 and 1.5. If the mean value is less than the 11th threshold, the flow enters step S46; otherwise the flow enters step S48, where it is determined that the given image is not a human face, and the flow ends at step S49.
In this example, the mean value of the angles between the grayscale gradients of the pixels in the annular region 300R and the corresponding reference gradients is 0.59, which is less than the 11th threshold; therefore, the flow enters step S46.
At step S46, first, the weighted average of the angles between the grayscale gradients and the reference gradients of the pixels in the annular region 300R, weighted by the weights of the grayscale gradients, is determined according to the following formula:

A2 = S2/C2    (10)

where A2 represents the weighted average, over the pixels of the annular region, of the angle between the grayscale gradient and the reference gradient, weighted by the grayscale-gradient weight; C2 represents the sum of the weights of the grayscale gradients of these pixels; and S2 represents the sum of the products of the gradient angle of each pixel and the weight of its grayscale gradient.
In this example, the weighted average of the gradient angles of the pixels in this annular region is 0.66.
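Formula (10) can be sketched as follows; the four angle/weight pairs are illustrative stand-ins (the patent lists weights such as 0.76, 0.75, ... for its second example), not the actual values behind the 0.66 result:

```python
def weighted_mean_gradient_angle(angles, weights):
    # A2 = S2 / C2: angles weighted by the grayscale-gradient weights, so
    # pixels on strong edges influence the average more than flat pixels.
    s2 = sum(a * w for a, w in zip(angles, weights))
    c2 = sum(weights)
    return s2 / c2

angles = [0.29, 0.19, 0.26, 0.33]
weights = [0.76, 0.75, 0.77, 0.77]
a2 = weighted_mean_gradient_angle(angles, weights)
```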
After the weighted average of the above gradient angles is obtained, it is further judged at step S46 whether this average is less than a 12th threshold, such as 0.68. In the present invention, the 12th threshold lies between 0.01 and 1. If the weighted average of the gradient angles is less than the 12th threshold, the flow enters step S47, where it is determined that the rectangular region to be determined is a human face; the flow then ends at step S49. Otherwise the flow enters step S48, where it is determined that the given image is not a human face, and the flow ends at step S49.
In this example, since the weighted average of the gradient angles of the pixels in the annular region 300R is 0.66, which is less than the 12th threshold, the flow enters step S47, where it is determined that the rectangular region 300A to be determined is a human face; the flow then ends at step S49.
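Steps S45 through S49 form a two-stage test: the plain mean must pass the 11th threshold before the weighted mean is checked against the 12th. A sketch of that decision flow, using the example thresholds 0.61 and 0.68 as defaults (function name illustrative):

```python
def is_face(angles, weights, t11=0.61, t12=0.68):
    # Stage 1 (step S45): plain mean of gradient angles vs. the 11th threshold.
    a1 = sum(angles) / len(angles)
    if a1 >= t11:
        return False  # step S48: not a face
    # Stage 2 (step S46): weighted mean vs. the 12th threshold.
    a2 = sum(a * w for a, w in zip(angles, weights)) / sum(weights)
    return a2 < t12  # step S47 if True, step S48 otherwise
```

With equal weights, a region whose angles average 0.59 passes both stages, while one averaging 0.70 already fails the first; heavy weight on a large-angle pixel can fail the second stage even when the plain mean passes.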
Second example
Fig. 8A is a schematic diagram showing an example of an original image to be detected. In Fig. 8A, the original image 800 was taken with a digital camera. Of course, it could also have been input to the human-face detection system by a digital device 111 such as a scanner and stored in a predetermined location such as the HD 107 or the RAM 109.
Fig. 8B is a schematic diagram showing a rectangular region 800B in the original image 800 of Fig. 8A and its position in the X, Y coordinate system, where the X-axis and Y-axis represent the horizontal and vertical distances, respectively, of each pixel from the origin.
Referring to Fig. 4, first, at step S41, the reading device 210 reads the gray values 250, 251, 251, 251, 250, 249, ... of the pixels of the original image 800, and reads the position of the rectangular region 800B within the original image 800. As shown in Fig. 8A, the upper-left corner of the original image 800 is taken as the origin of the X, Y coordinate system. The coordinates of the four corners of the rectangular region 800B of Fig. 8B in this coordinate system are thus obtained as (203, 219), (373, 219), (203, 423), and (373, 423), respectively. The width and height of the rectangular region 800B are therefore 170 and 204, respectively.
Then, at step S42, the annular-region determining device 211 determines the annular region 800C for the rectangular region 800B of Fig. 8B. Referring to Fig. 8C, the width and height of the rectangular region 800B are scaled to 9/7 and 5/7 of their original values, respectively, to obtain the two rectangular regions 800C1 and 800C2.
Fig. 8C is a schematic diagram showing the annular region 800C determined for the rectangular region 800B and its position in the X, Y coordinate system, where the X-axis and Y-axis represent the horizontal and vertical distances, respectively, of each pixel from the origin. The annular region 800C is bounded by the first rectangular region 800C1 and the second rectangular region 800C2. The coordinates of the four corners of the first rectangular region 800C1 are (179, 190), (397, 190), (179, 452), and (397, 452), respectively, and the coordinates of the four corners of the second rectangular region 800C2 in this coordinate system are (227, 248), (349, 248), (227, 394), and (349, 394), respectively.
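The two bounding rectangles follow from scaling 800B about its center by 9/7 and 5/7. A sketch that reproduces the corner coordinates quoted above (the function name and round-to-nearest convention are assumptions):

```python
def annulus_rectangles(x0, y0, x1, y1, outer=9/7, inner=5/7):
    # Scale the rectangle (x0, y0)-(x1, y1) about its center by each factor;
    # the annular region is the band between the two resulting rectangles.
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    hw, hh = (x1 - x0) / 2, (y1 - y0) / 2
    def scaled(f):
        return (round(cx - f * hw), round(cy - f * hh),
                round(cx + f * hw), round(cy + f * hh))
    return scaled(outer), scaled(inner)

# Rectangle 800B: corners (203, 219) and (373, 423), i.e. 170 x 204 pixels.
outer_rect, inner_rect = annulus_rectangles(203, 219, 373, 423)
```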
Then, at step S43, the first calculation device 212 determines the grayscale gradient values Grad of the pixels in the annular region 800C, which are (188, 6), (183, 8), (186, 10), (180, 6), (180, 6), ..., respectively, and, in the same manner as described above, determines the weights of the grayscale gradient values of the pixels of the annular region 800C, which are 0.76, 0.75, 0.77, 0.77, 0.73, ..., respectively.
Then, at step S44, the second calculation device 213 determines the reference gradient values of the points in the annular region 800C, which are (0.015, -0.012), (0.015, -0.012), (0.015, -0.012), (0.015, -0.012), (0.014, -0.012), ..., respectively. The flow then enters step S45.
At step S45, the gradient angles of the pixels in the annular region 800C are determined, yielding 0.64, 0.63, 0.62, 0.64, 0.64, ..., respectively. Using the same method as above, the mean value of the gradient angles of the pixels in the annular region 800C is found to be 0.56. Since this mean value, 0.56, is less than the 11th threshold, the flow enters step S46.
At step S46, the weighted average of the gradient angles of the pixels in the annular region 800C is determined to be 0.64. Since this average, 0.64, is less than the 12th threshold, the flow enters step S47, where it is determined that the rectangular region 800B to be determined is a human face; the flow then ends at step S49.
Alternative embodiment
An alternative embodiment of the present invention is described below.
Fig. 9 is a flowchart showing the human-face detection process according to an alternative embodiment of the present invention. In Fig. 9, the same labels as in the embodiment of Fig. 4 denote the same processing.
Taking the original image 800 shown in Fig. 8A as an example again, it is explained how to judge whether the rectangular region 800B in the original image 800 is a human face.
First, as in Fig. 4, the flow starts at step S40. At step S41, the reading device 210 reads the gray values 250, 251, 251, 251, 250, 249, ... of the pixels of the original image 800; the eye detection device 218 detects the eye regions in the image; and the image-segment determining device 219 determines the image segment to be detected, such as the rectangular region 800B, according to the result of the eye-region detection. Then, the flow enters step S42, at which, as shown in Fig. 8C, the annular region 800C is determined for the rectangular region 800B. The flow then enters step S43, at which the grayscale gradient values of the pixels in the annular region 800C are determined to be (188, 6), (183, 8), (186, 10), (180, 6), (180, 6), ..., respectively, and the weights of these grayscale gradient values are determined, in the same manner as in the above embodiment, to be 0.76, 0.75, 0.77, 0.77, 0.73, ..., respectively. Then the flow enters step S44, at which the reference gradient values of the pixels in the annular region 800C are obtained as (0.015, -0.012), (0.015, -0.012), (0.015, -0.012), (0.015, -0.012), (0.014, -0.012), ..., respectively.
Then, the flow enters step S95. At step S95, the angle between the grayscale gradient of each pixel of the annular region 800C and its reference gradient is determined, yielding 0.64, 0.63, 0.62, 0.64, 0.64, ..., respectively, and the weighted average of the gradient angles of the pixels of the annular region 800C with their respective weights is determined. It is also judged at step S95 whether this average is less than a 13th threshold, such as 0.68. In the present invention, the 13th threshold lies between 0.01 and 1. If the weighted average of the gradient angles is less than the 13th threshold, the flow enters step S96, at which it is determined that the rectangular region to be determined is a human face. Otherwise the flow enters step S97, at which it is determined that the given image is not a human face, and the flow ends at step S98.
For this example, since the weighted average of the gradient angles of the pixels in the annular region is 0.64, which is less than the 13th threshold, the flow enters step S96, where it is determined that the rectangular region 800B to be determined is a human face; the flow then ends at step S98.
As those skilled in the art will readily understand, it is not necessary to calculate and evaluate the angle between the grayscale-distribution gradient and the reference-distribution gradient at every pixel in the annulus; rather, in the method for detecting human faces (or other objects) according to the present invention, this angle may be calculated and evaluated at only some of the pixels in the annulus while still achieving the objects and effects of the present invention.
In addition, although the above description presents one specific embodiment that evaluates both the mean value of the angle between the grayscale-distribution gradient and the reference-distribution gradient and the weighted average of that angle, and another specific embodiment that evaluates only the weighted average, an embodiment that evaluates only the mean value of this angle can also achieve the objects of the present invention and is therefore likewise an embodiment of the present invention.
Note that the present invention may be applied to a system constituted by a plurality of devices (for example, a host computer, an interface device, a reader, a printer, and the like) or to an apparatus comprising a single device (for example, a photocopier, a facsimile machine, or the like).
The objects of the present invention are also achieved in the following manner: a storage medium recording the program code of software capable of realizing the functions of the above embodiments is supplied to the system or apparatus, and the computer (or the CPU or MPU of the system or apparatus) reads out and executes the program code stored in the storage medium. In this case, the program code itself read from the storage medium realizes the functions of the above embodiments, and the storage medium storing the program code constitutes the present invention.
As the storage medium for supplying the program code, for example, a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a CD-ROM, a CD-R, a magnetic tape, a nonvolatile memory card, a ROM, or the like can be used.
The functions of the above embodiments can be realized not only by a computer executing the program code it has read out, but also by an OS (operating system) running on the computer performing some or all of the actual processing according to the instructions of the program code.
As can be seen from the above, the present invention provides a fast method for detecting human faces in images with complex backgrounds, without requiring the image under detection to be of very high quality, thereby substantially eliminating the possibility that a human face is overlooked. The method can accurately judge whether an image to be detected contains a human face. Therefore, with the method, apparatus, or system of the present invention, human faces can be detected quickly and efficiently.
In addition, the present invention also covers the case in which, after the program code read from the storage medium is written into a function expansion card inserted into the computer, or into a memory provided in a function expansion unit connected to the computer, a CPU or the like contained in the function expansion card or function expansion unit performs part or all of the processing according to the instructions of the program code, thereby realizing the functions of the above embodiments.
When the present invention is applied to the above storage medium, the storage medium stores the program code corresponding to the flowcharts described in the embodiments (Fig. 4, Fig. 9).
The method of the above embodiments is used to detect human faces; however, the present invention is not limited to face detection and can also be applied to other detection tasks, such as detecting defective parts of circuit boards.
Many other changes and modifications can be made without departing from the scope and spirit of the present invention. It should be understood that the present invention is not limited to the specific embodiments, and the scope of the present invention is defined by the following claims.
Claims (36)
1. A method for detecting an object in an image having a grayscale distribution, characterized in that the method comprises the following steps:
a) for a subset of the pixels in the image, deriving a first variable from the grayscale distribution of the image;
b) for the subset of pixels, deriving a second variable from a predetermined reference distribution, the reference distribution characterizing the object;
c) evaluating the correspondence between the first variable and the second variable over the subset of pixels; and
d) judging, according to the result of the evaluating step, whether the image contains the object.
2. The method according to claim 1, characterized by further comprising:
selecting a region in the image,
wherein the subset of pixels is located within the region, and
wherein, for a grayscale distribution corresponding to the object to be detected, the derived first variable and second variable have better correspondence over pixels inside the region than over pixels outside it.
3. The method according to claim 2, characterized in that it further comprises the following steps:
determining an image segment in the image; and
selecting the region according to the position of the image segment.
4. The method according to any one of claims 1-3, wherein the first variable represents the direction of the gradient of the grayscale distribution of the image.
5. The method according to claim 4, wherein the second variable represents the direction of the gradient of the reference distribution.
6. The method according to claim 5, wherein the gradient is a discrete gradient.
7. The method according to claim 5, characterized in that the evaluating step comprises statistical processing over the subset of pixels.
8. The method according to claim 5, characterized in that the evaluating step comprises statistical processing over the subset of pixels, the statistical processing comprising a weighting process wherein the weight of each pixel is determined according to the gradient of the first variable at that pixel, pixels with larger first-variable gradient magnitudes tending to have larger weights.
9. The method according to claim 8, characterized in that the evaluating step comprises a statistical processing employing this weighting and another statistical processing not employing weighting, and whether the image contains the object is judged according to the results of the two statistical processings.
10. The method according to claim 3, characterized in that the region is an annulus, the center of the annulus is located near the center of the image segment, and the pixels of the subset are all located within the annulus.
11. The method according to claim 2, wherein the subset of pixels comprises all pixels in the region.
12. The method according to claim 3, characterized in that the image segment is determined by detecting a predetermined feature in the image.
13. The method according to claim 12, characterized in that the predetermined feature comprises a pair of dark areas.
14. The method according to claim 13, characterized in that the image segment is determined according to the positions of the centers of the pair of dark areas and the distance between them.
15. The method according to claim 13, characterized in that the reference distribution is expressed by:

z(x, y) = -(x²/a² + y²/b²) + h

wherein h is a constant, a/b equals the ratio of the width of the image segment to its height, and the formula takes the center of the image segment as the origin.
16. The method according to claim 1, wherein the evaluating step comprises:
for all pixels in the subset, calculating the mean value of the angle between the gradient of the grayscale distribution of the image and the gradient of the reference distribution;
comparing this mean value with a predetermined value; and
judging that the image segment contains the object when the mean value is equal to or less than the predetermined value.
17. The method according to claim 1, wherein the evaluating step comprises:
for all pixels in the subset, calculating the weighted average of the angle between the grayscale-distribution gradient of the image and the gradient of the reference distribution, pixels with larger grayscale-distribution gradient magnitudes tending to have larger weights;
comparing the weighted average with a predetermined value; and
judging that the image segment contains the object when the weighted average is equal to or less than the predetermined value.
18. The method according to claim 1, wherein the evaluating step comprises:
for all pixels in the subset, calculating the mean value of the angle between the gradient of the grayscale distribution of the image and the gradient of the reference distribution;
for all pixels in the subset, calculating the weighted average of the angle between the gradient of the grayscale distribution of the image and the gradient of the reference distribution, pixels with larger grayscale-distribution gradient magnitudes tending to have larger weights;
comparing the mean value with a first predetermined value;
comparing the weighted average with a second predetermined value; and
judging that the image segment contains the object when the mean value is equal to or less than the first predetermined value and the weighted average is equal to or less than the second predetermined value.
19. The method according to claim 14, wherein the step of determining an image segment in the image comprises:
dividing each column of the image into a plurality of intervals;
labeling each interval as a valley region, a relay region, or a peak region;
merging the valley regions of each column with the valley regions of the adjacent columns, thereby generating candidate dark areas; and
determining a dark area.
20. The method according to claim 19, wherein the merging step comprises:
setting each valley region in the first column of the image as a seed region;
judging whether a valley region in the next column can be merged into the seed region;
merging each mergeable valley region into the seed region;
setting each valley region that cannot be merged into a seed region as a new seed region; and
determining each seed region having no further mergeable valley regions as a dark area.
21. A method for detecting an object in an image having a grayscale distribution, characterized in that the method comprises the following steps:
a) determining an image segment in the image;
b) selecting a subset of the pixels in the image according to the image segment;
c) for the pixels in the subset, deriving a first variable from the grayscale distribution of the image;
d) for the pixels in the subset, deriving a second variable from a predetermined reference distribution, the reference distribution characterizing the object;
e) evaluating the correspondence between the first variable and the second variable over the subset of pixels; and
f) judging, according to the result of the evaluating step, whether the image contains the object.
22. An image processing method for judging a feature in an image, comprising:
a reading step of reading the image and a rectangular region to be determined in the image;
a setting step of setting an annulus surrounding the rectangular region;
a step of calculating the gradient of the gray level at each pixel in the annulus;
a step of determining a reference distribution for the annulus and determining a reference gradient for each pixel in the annulus; and
a judging step of judging the feature contained in the rectangular region according to the gradient of the gray level at each pixel in the annulus and the reference gradient at each pixel.
23. The image processing method according to claim 22, wherein the image containing the rectangular region to be determined is a feature image.
24. The image processing method according to claim 22, wherein the annulus is formed by a first rectangular region and a second rectangular region.
25. The image processing method according to claim 24, wherein the first rectangular region is located within the rectangular region to be determined.
26. The image processing method according to claim 24, wherein the second rectangular region is located outside the rectangular region to be determined.
27. The image processing method according to claim 24, wherein the center of the first rectangular region is located at substantially the same position as the center of the rectangular region to be determined, and the width and length of the first rectangular region are respectively m times the width and length of the rectangular region to be determined, wherein 0 < m < 1.
28. The image processing method according to claim 24, wherein the center of the second rectangular region is located at substantially the same position as the center of the rectangular region to be determined, and the width and length of the second rectangular region are respectively n times the width and length of the rectangular region to be determined, wherein n > 1.
29. The image processing method according to claim 22, wherein the grayscale-gradient calculating step comprises the step of calculating the grayscale gradient at a pixel according to the gray levels of the pixels surrounding that pixel.
30. The image processing method according to claim 29, wherein the number of pixels surrounding the pixel is k², wherein k is an integer and 2 < k < 15.
31. The image processing method according to claim 22, wherein the reference distribution is described in terms of the annulus.
32. The image processing method according to claim 22 or 31, further comprising the step of determining the angle between the gradient of the gray level at each pixel in the annulus and the reference gradient.
33. The image processing method according to claim 22 or 31, further comprising the step of determining the mean value of the angles between the gradient of the gray level at each pixel in the annulus and the reference gradient.
34. The image processing method according to claim 22, further comprising the step of determining the weight of the gradient of the gray level at each pixel in the annulus.
35. The image processing method according to claim 33, further comprising the step of determining whether the mean value of the angles between the gradient of the gray level at each pixel in the annulus and the reference gradient is less than a first threshold, wherein the first threshold is in the range of 0.01 to 1.5.
36. The image processing method according to claim 34, further comprising the step of determining whether the weighted average, weighted by the weight of the gradient at each pixel, of the angles between the gradient of the gray level at each pixel in the annulus and the reference gradient is less than a third threshold, wherein the third threshold is in the range of 0.1 to 1.
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CNB01132807XA CN1293759C (en) | 2001-09-06 | 2001-09-06 | Image processing method and apparatus, image processing system and storage media |
| EP01307827A EP1211640A3 (en) | 2000-09-15 | 2001-09-14 | Image processing methods and apparatus for detecting human eyes, human face and other objects in an image |
| US09/951,458 US6965684B2 (en) | 2000-09-15 | 2001-09-14 | Image processing methods and apparatus for detecting human eyes, human face, and other objects in an image |
| JP2001282283A JP2002183731A (en) | 2000-09-15 | 2001-09-17 | Image processing method and apparatus for detecting human eyes, face and other objects in an image |
| US11/235,132 US7103218B2 (en) | 2000-09-15 | 2005-09-27 | Image processing methods and apparatus for detecting human eyes, human face, and other objects in an image |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CNB01132807XA CN1293759C (en) | 2001-09-06 | 2001-09-06 | Image processing method and apparatus, image processing system and storage media |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN1404312A CN1404312A (en) | 2003-03-19 |
| CN1293759C true CN1293759C (en) | 2007-01-03 |
Family
ID=4671592
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CNB01132807XA Expired - Fee Related CN1293759C (en) | 2000-09-15 | 2001-09-06 | Image processing method and apparatus, image processing system and storage media |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN1293759C (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN100465985C (en) | 2002-12-31 | 2009-03-04 | 佳能株式会社 | Human ege detecting method, apparatus, system and storage medium |
| SE1850158A2 (en) * | 2011-09-23 | 2018-12-11 | Kt Corp | Procedure for selecting a candidate block for fusion as well as a device for applying this procedure |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH09329527A (en) * | 1996-04-08 | 1997-12-22 | Advantest Corp | Image processing method, and apparatus therefor |
| EP1011064A2 (en) * | 1998-11-30 | 2000-06-21 | Canon Kabushiki Kaisha | Image pattern detection method and apparatus |
2001
- 2001-09-06 CN CNB01132807XA patent/CN1293759C/en not_active Expired - Fee Related
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH09329527A (en) * | 1996-04-08 | 1997-12-22 | Advantest Corp | Image processing method, and apparatus therefor |
| EP1011064A2 (en) * | 1998-11-30 | 2000-06-21 | Canon Kabushiki Kaisha | Image pattern detection method and apparatus |
Also Published As
| Publication number | Publication date |
|---|---|
| CN1404312A (en) | 2003-03-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN1213592C (en) | Adaptive two-valued image processing method and equipment | |
| CN1184796C (en) | Image processing method and equipment, image processing system and storage medium | |
| CN1269068C (en) | Header extracting device and method for extracting header from file image | |
| CN1324873C (en) | Boundary detection method between areas having different features in image data | |
| CN1320502C (en) | Object trace device, object trace method, and object trace program | |
| CN1290312C (en) | Image processing device and its method for removing and reading strik-through produced by double side or overlaped master cope | |
| CN1252978C (en) | Image processing device and image processing method | |
| CN1271505C (en) | Image processing apparatus, control method therefor, and program | |
| CN1202065A (en) | Image detection method, image detection device, image processing method, image processing device, and medium | |
| CN1620094A (en) | Image processing apparatus and method for converting image data to predetermined format | |
| CN1867940A (en) | Imaging apparatus and image processing method therefor | |
| CN1881234A (en) | Image processing apparatus, image processing method,computer program, and storage medium | |
| CN1162795A (en) | Pattern Recognition Apparatus and Method | |
| CN1091906C (en) | Pattern recognizing method and system and pattern data processing system | |
| CN1369856A (en) | Image processing method and appts. thereof | |
| CN1940965A (en) | Information processing apparatus and control method therefor | |
| CN1764228A (en) | Image processing device, image forming device, image processing method | |
| CN1588431A (en) | Character extracting method from complecate background color image based on run-length adjacent map | |
| CN1599406A (en) | Image processing method and device and its program | |
| CN1617568A (en) | Compressing and restoring method of image data | |
| CN1993707A (en) | Image processing method and apparatus, image sensing apparatus, and program | |
| CN1202670A (en) | Pattern extraction apparatus | |
| CN1293759C (en) | Image processing method and apparatus, image processing system and storage media | |
| CN1991863A (en) | Medium processing apparatus, medium processing method, and medium processing system | |
| CN1841407A (en) | Image processing apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20070103; Termination date: 20140906 |
| EXPY | Termination of patent right or utility model |