
CN105069466B - Pedestrian's dress ornament color identification method based on Digital Image Processing - Google Patents


Info

Publication number
CN105069466B
CN105069466B CN201510443292.0A CN201510443292A CN105069466B CN 105069466 B CN105069466 B CN 105069466B CN 201510443292 A CN201510443292 A CN 201510443292A CN 105069466 B CN105069466 B CN 105069466B
Authority
CN
China
Prior art keywords
pedestrian
color
image
searched
color space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510443292.0A
Other languages
Chinese (zh)
Other versions
CN105069466A (en)
Inventor
薛晓利
柳斌
朱小军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Gaobo Huike Information Technology Co Ltd
Original Assignee
Chengdu Gaobo Huike Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Gaobo Huike Information Technology Co Ltd filed Critical Chengdu Gaobo Huike Information Technology Co Ltd
Priority to CN201510443292.0A priority Critical patent/CN105069466B/en
Publication of CN105069466A publication Critical patent/CN105069466A/en
Application granted granted Critical
Publication of CN105069466B publication Critical patent/CN105069466B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian clothing color identification method based on digital image processing, comprising: (1) acquiring a pedestrian image using a pedestrian detection method that combines an HOG feature descriptor with an SVM classifier; (2) detecting the pedestrian's edge contour shape with the Sobel operator to obtain an image to be searched; (3) making a pedestrian contour shape template and matching it against the corresponding region of the image to be searched to obtain images of the pedestrian's upper body and lower body; (4) labeling the connected regions of clothing color in the upper-body and lower-body regions using a seed filling method; (5) extracting color features from the color connected regions; (6) performing color classification and judgment with an SVM classifier to obtain the pedestrian's clothing color and outputting the final result. The invention improves the identification accuracy of pedestrian clothing color, strongly supports the enforcement of dress regulations in hazardous areas, and eliminates the corresponding safety risks.

Description

Pedestrian clothing color identification method based on digital image processing
Technical Field
The invention relates to a color recognition method, belongs to the technical field of image processing, and particularly relates to a pedestrian clothing color recognition method based on digital image processing.
Background
Safety is a constant theme in special fields such as electric power and oil and gas. In recent years, various safety accidents have occurred in the oil and gas and electric power industries, and strengthening safe production capacity and raising the management level of oil and gas field and electric power enterprises has become the foremost problem facing the personnel concerned.
One potential safety hazard in fields such as oil and gas and electric power is that workers do not strictly follow the dress requirements of designated working areas and do not wear the prescribed special safety clothing. At the same time, as the country pays ever more attention to safe production, more and more video monitoring systems have appeared in these industries. However, most existing monitoring systems stop at video recording, storage, query and retrieval, so the color of pedestrian clothing is judged with large error, and it is difficult to accurately identify whether the clothing of pedestrians in designated working areas such as oil and gas fields and electric power facilities meets the requirements; the corresponding safety hazards therefore persist.
Disclosure of Invention
The invention aims to provide a pedestrian clothing color identification method based on digital image processing, and mainly solves the problem that potential safety hazards exist due to large identification errors of pedestrian clothing colors in the prior art.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the pedestrian clothing color identification method based on digital image processing comprises the following steps:
(1) acquiring a pedestrian image by adopting a pedestrian detection method combining an HOG feature description operator and an SVM classifier;
(2) detecting the contour shape of the pedestrian edge by using a Sobel operator to obtain an image to be searched;
(3) making a pedestrian outline shape template T according to the common postures of pedestrians, and matching the pedestrian outline shape template with a corresponding region in an image to be searched to obtain images of the upper half body and the lower half body of the pedestrians;
(4) respectively marking the clothing colors of the upper half body and the lower half body of the pedestrian by adopting a seed filling method;
(5) extracting color features of the obtained color connected region;
(6) and according to the extracted color features, carrying out color classification and judgment by using an SVM classifier to obtain the color of the pedestrian garment, and outputting a final result.
Further, in the step (3), a specific process of matching the pedestrian outline shape template with the image to be searched is as follows:
(a) translating and sliding the pedestrian outline shape template T over the image to be searched, from left to right and from top to bottom, to obtain the sub-images S(i,j) covered by the template in the image to be searched, where i and j denote the coordinates of the upper-left corner of the sub-image in the image to be searched;
(b) comparing the degree of match between the pedestrian outline shape template and each sub-image using the following sum-of-squared-differences measure:
D(i,j) = Σ_m Σ_n [S(i,j)(m,n) - T(m,n)]^2
(c) selecting the minimum value of D(i,j); the resulting position (i,j) is the position of the pedestrian in the image, and the width and height of the pedestrian are respectively equal to the width and height of the pedestrian outline shape template T.
Specifically, the step (5) includes the steps of:
(5a) respectively transforming each color connected region marked in the step (4) from an RGB color space to an HSV color space, a YCbCr color space and a Lab color space;
(5b) respectively extracting the mean value, variance, energy and contrast of an HSV color space, an YCbCr color space and an Lab color space, and then connecting the mean value, variance, energy and contrast in series to obtain a color feature vector;
(5c) repeating the steps (5a) and (5b), inputting the color feature vectors of a number of pedestrian training samples into an SVM classifier for training and learning, so as to obtain an SVM classifier model; subsequently extracted color feature vectors then need only be fed into the SVM classifier to be classified and judged.
Compared with the prior art, the invention has the following remarkable effects:
(1) The method combines several existing algorithms: it uses pedestrian detection, upper and lower body shape segmentation and color connected-region labeling, and integrates template matching, color feature extraction and classification judgment into one design. The clothing colors of pedestrians are thus identified effectively and with high accuracy, essentially without color-identification errors, which greatly eases the management of dress in hazardous areas, strongly supports the enforcement of standard dress, and effectively eliminates the corresponding safety hazards.
(2) The invention has reasonable design and clear flow, and is very suitable for popularization and application in the special fields of electric power, oil and gas fields and the like.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Fig. 2 is a schematic diagram of a pedestrian outline shape template according to an embodiment of the invention.
Detailed Description
The present invention is further illustrated by the following figures and examples, which include, but are not limited to, the following examples.
Examples
As shown in figure 1, the invention provides a method suited to detecting and identifying the dress of workers in special settings such as oil and gas fields and transformer substations. It mainly comprises pedestrian detection, segmentation of the upper-body and lower-body shapes of the pedestrian, labeling of the color connected regions of the upper-body and lower-body clothing, color feature extraction, color classification judgment, and result output.
First, pedestrian detection
The pedestrian image is acquired by a pedestrian detection method combining an HOG feature descriptor and an SVM classifier. The HOG (Histogram of Oriented Gradients) feature is a dense descriptor computed over local, overlapping regions of the image, formed from histograms of gradient direction in those local regions. It describes the edges of the human body well while being insensitive to illumination changes and small displacements. The combination of HOG features and an SVM classifier has been widely applied in image recognition and has been particularly successful in pedestrian detection.
The calculation of the HOG features requires the concept of gradient, and the gradient of a pixel point (x, y) in the image is as follows:
Gx(x,y)=H(x+1,y)-H(x-1,y)
Gy(x,y)=H(x,y+1)-H(x,y-1)
In the above formulas, Gx(x,y), Gy(x,y) and H(x,y) denote, respectively, the horizontal gradient, the vertical gradient and the pixel value at pixel (x,y). The gradient magnitude and gradient direction at pixel (x,y) are, respectively:
G(x,y) = sqrt(Gx(x,y)^2 + Gy(x,y)^2)
α(x,y) = arctan(Gy(x,y) / Gx(x,y))
The HOG feature extraction process is as follows: divide the image into cells of several pixels each; divide the gradient direction evenly into 9 bins; within each cell, histogram the gradient directions of all pixels over these bins to obtain a 9-dimensional feature vector; group every 4 adjacent cells into a block and concatenate their feature vectors into a 36-dimensional feature vector; then scan the sample image with the block, with a step size of one cell. Finally, the features of all blocks are concatenated to obtain the human-body feature. For example, for a 64 × 128 image, every 2 × 2 cells (16 × 16 pixels) form a block, with 4 × 9 = 36 features per block; with a step size of 8 pixels there are 7 scan positions in the horizontal direction and 15 in the vertical direction, so a 64 × 128 image yields 36 × 7 × 15 = 3780 features in total.
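As a check on the arithmetic above, the descriptor length can be derived from the window, cell, block and stride sizes. This is a minimal illustrative sketch (the function name is ours, not the patent's):

```python
def hog_descriptor_length(img_w, img_h, cell=8, block_cells=2, bins=9, stride=8):
    """Number of HOG features for a detection window, given square cells of
    `cell` pixels, blocks of block_cells x block_cells cells, `bins` gradient
    direction bins, and a block scan stride in pixels."""
    block_px = block_cells * cell
    # how many block positions fit horizontally and vertically
    wins_x = (img_w - block_px) // stride + 1
    wins_y = (img_h - block_px) // stride + 1
    feats_per_block = block_cells * block_cells * bins   # 2*2*9 = 36
    return wins_x * wins_y * feats_per_block

# 64 x 128 window: 7 x 15 block positions x 36 features = 3780
```

The 3780-dimensional result matches the figure quoted in the text.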
The SVM is a widely used classifier, originally derived from the optimal separating surface in the linearly separable case. "Optimal" means that the classification line (or surface) must separate the two classes without error while maximizing the margin between them. The former keeps the empirical risk minimal, while maximizing the margin minimizes the confidence interval of the generalization bound. Generalized to a high-dimensional space, the optimal classification line becomes the optimal classification surface.
Secondly, the shape of the upper half body and the lower half body of the pedestrian is divided
The detection and image acquisition of the previous step yields the image of the pedestrian in the surveillance video frame. In general, however, the pedestrian is not neatly centered in the image: the upper body may face the camera directly or may be inclined or tilted, and the posture of the lower body is more varied still, with the legs parallel or separated at various angles. Simply taking the upper half of the pedestrian image as the upper body and the lower half as the lower body would therefore introduce large errors into the identification of clothing colors. The shapes of the upper body and the lower body must be segmented accurately to support the subsequent accurate identification and judgment of clothing color.
According to the invention, the shapes of the upper half body and the lower half body of the pedestrian are segmented in a pedestrian edge contour shape detection and template matching mode, and corresponding images of the upper half body and the lower half body are obtained.
Pedestrian edge contour shape detection
Edges are among the most basic features of an image. Commonly used edge contour detection methods include the Roberts, Sobel, Prewitt and Canny operators. The invention uses the Sobel operator to detect the edge contour. The calculation steps are as follows:
the original image is gaussian filtered, where the gaussian kernel is as follows:
Calculate the magnitude and direction of the gradient using convolution templates.
The horizontal gradient convolution template is:
-1 0 1
-2 0 2
-1 0 1
The vertical gradient convolution template is:
 1  2  1
 0  0  0
-1 -2 -1
For convenience of description, the neighborhood of the pixel (i, j) to be processed is labeled as:
a0 a1 a2
a7 (i,j) a3
a6 a5 a4
The horizontal and vertical gradient components at each point can then be written as:
Sx = (a2 + 2a3 + a4) - (a0 + 2a7 + a6)
Sy = (a0 + 2a1 + a2) - (a6 + 2a5 + a4)
with gradient magnitude sqrt(Sx^2 + Sy^2), and the gradient direction can be expressed as:
θ(x,y) = arctan(Sy / Sx)
A threshold is then set according to the specific requirements of the scene, and segmentation yields the edge contour image.
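The neighborhood formulas above can be sketched directly in code. This is a minimal pure-Python illustration (the function name and demo image are ours, not the patent's):

```python
import math

def sobel_gradients(img):
    """Sobel gradient magnitude for the interior pixels of a 2-D list-of-lists
    grayscale image, using the Sx/Sy neighborhood formulas from the text
    (a0..a7 labeled clockwise from the top-left neighbor)."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a0, a1, a2 = img[y-1][x-1], img[y-1][x], img[y-1][x+1]
            a7, a3 = img[y][x-1], img[y][x+1]
            a6, a5, a4 = img[y+1][x-1], img[y+1][x], img[y+1][x+1]
            sx = (a2 + 2*a3 + a4) - (a0 + 2*a7 + a6)   # horizontal component
            sy = (a0 + 2*a1 + a2) - (a6 + 2*a5 + a4)   # vertical component
            mag[y][x] = math.hypot(sx, sy)
    return mag

# A vertical step edge: at the center pixel Sx = 4, Sy = 0, magnitude 4.
edge = [[0, 0, 1],
        [0, 0, 1],
        [0, 0, 1]]
```

Thresholding this magnitude map, as the text describes, produces the binary edge contour.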
Template matching
The invention constructs different pedestrian outline shape templates T according to the common postures of pedestrians, as shown in figure 2. The template is matched against the corresponding region of the image to be searched, that is, a region of the same size and orientation is sought in the image to be searched, and the position of the target is then determined by a definite criterion. The matching process is as follows:
(1) translating and sliding the pedestrian outline shape template T over the image to be searched, from left to right and from top to bottom, to obtain the different sub-images S(i,j) covered by the template in the image to be searched, where i and j denote the coordinates of the upper-left corner of the sub-image in the image to be searched;
(2) comparing the degree of match between the pedestrian outline shape template and each sub-image using the following sum-of-squared-differences measure:
D(i,j) = Σ_m Σ_n [S(i,j)(m,n) - T(m,n)]^2
(3) selecting the minimum value of D(i,j) to obtain the corresponding sub-image S(i,j); the match between the template and this sub-image then gives the precise position and region of the pedestrian.
After matching is completed, corresponding images of the shapes of the upper half body and the lower half body of the pedestrian can be obtained.
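The matching loop above can be sketched as follows. The sum-of-squared-differences measure is an assumption on our part (the patent's formula image is not reproduced here), but it is consistent with selecting the minimum of D(i, j); the names and demo arrays are illustrative:

```python
def match_template(image, template):
    """Slide `template` over `image` (2-D lists of gray values), left to right
    and top to bottom, computing the sum of squared differences D(i, j) for
    each placement, and return the (row, col) of the best (minimal-D) match."""
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best, best_pos = None, None
    for i in range(ih - th + 1):          # top-left row of the sub-image
        for j in range(iw - tw + 1):      # top-left column of the sub-image
            d = sum((image[i + m][j + n] - template[m][n]) ** 2
                    for m in range(th) for n in range(tw))
            if best is None or d < best:
                best, best_pos = d, (i, j)
    return best_pos, best

# The template appears exactly at row 1, column 2, so D there is 0.
image = [[0, 0, 0, 0],
         [0, 0, 5, 6],
         [0, 0, 7, 8],
         [0, 0, 0, 0]]
template = [[5, 6],
            [7, 8]]
```

The returned (i, j) plays the role described in the text: the upper-left corner of the pedestrian region, whose width and height are those of the template.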
Third, color communication area mark of upper half body and lower half body clothes
Although the above step yields images of the upper-body and lower-body shapes of the pedestrian, clothing and trousers are generally not a single color but a mixture of several, for example checked or striped garments, or garments carrying various patterns and designs. It is therefore necessary to label the connected regions of clothing color in the upper-body and lower-body regions obtained above.
The color connected region marking is similar to a connected region detection method of a binary image, and common connected region marking methods include a two-pass scanning method and a seed filling method. The invention adopts a seed filling method to mark a color communication area, and the processing steps are as follows:
Initialize a mask image B of the same size as the image whose color connected regions are to be computed, with every pixel of B assigned the value 0.
Scan the image until an unlabeled pixel, one with B(x, y) = 0, is reached; then:
(1) taking a pixel at the current coordinate (x, y) as a seed pixel, giving the seed pixel a label, and then pushing all similar pixels in 4 neighborhoods of the seed pixel into a stack; the similarity calculation formula between the seed pixel (x1, y1) and the neighborhood pixel (x2, y2) is as follows:
(2) popping up the top pixel of the stack, endowing the same label to the top pixel, and then pressing all foreground pixels adjacent to the top pixel of the stack into the stack;
repeating the step (2) until the stack is empty;
at this point, a connected region in image B is found, and the pixel values in this region are labeled as label;
repeating the step (1) until the scanning is finished; and obtaining the connected regions of all color components in the image after the scanning is finished.
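The scan-seed-flood procedure above can be sketched as follows. The similarity test (Euclidean RGB distance under a tolerance) is an assumption on our part, since the patent's similarity formula is not reproduced here; the names are illustrative:

```python
def label_color_regions(img, tol=30):
    """Stack-based seed filling as in steps (1)-(2) above: scan for an
    unlabeled pixel, seed a new label there, and flood it through
    4-neighbors whose color is similar.  `img` is a 2-D list of RGB tuples."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]   # mask image B, initialized to 0

    def similar(p, q):
        # assumed similarity: Euclidean distance in RGB under `tol`
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 <= tol

    label = 0
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                continue                    # already part of some region
            label += 1                      # new connected region
            labels[y][x] = label
            stack = [(y, x)]
            while stack:                    # pop, then push similar 4-neighbors
                cy, cx = stack.pop()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and not labels[ny][nx] \
                            and similar(img[cy][cx], img[ny][nx]):
                        labels[ny][nx] = label
                        stack.append((ny, nx))
    return labels

# A red top row and a blue bottom row yield two connected regions.
shirt = [[(200, 30, 30), (200, 30, 30)],
         [(30, 30, 200), (30, 30, 200)]]
```

Each distinct label then corresponds to one color connected region to be passed on to feature extraction.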
Four, color feature extraction
Transforming each color connected region marked in the previous step from the RGB color space to the HSV color space, the YCbCr color space and the Lab color space, wherein:
The calculation formula for converting the RGB color space (with R, G and B normalized to [0, 1]) to the HSV color space is as follows:
M = max(R, G, B)
m = min(R, G, B)
C = M - m
H′ = ((G - B)/C) mod 6 if M = R; (B - R)/C + 2 if M = G; (R - G)/C + 4 if M = B (H′ is undefined when C = 0)
H = 60° × H′
V = M
S = C/V (S = 0 when V = 0)
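This hexcone conversion can be sketched as follows. Where the original formula images are missing, the H′ case split and S = C/V are the standard completion and should be read as assumptions:

```python
def rgb_to_hsv(r, g, b):
    """Convert 8-bit RGB to hexcone HSV, following M, m, C, H = 60° * H',
    V = M from the text.  Returns (H in degrees, S in [0,1], V in [0,1])."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    M, m = max(r, g, b), min(r, g, b)
    c = M - m
    if c == 0:
        hp = 0.0                 # hue is undefined for grays; use 0
    elif M == r:
        hp = ((g - b) / c) % 6
    elif M == g:
        hp = (b - r) / c + 2
    else:
        hp = (r - g) / c + 4
    h = 60.0 * hp
    v = M
    s = 0.0 if v == 0 else c / v
    return h, s, v
```

Pure red maps to H = 0°, pure green to H = 120°, both fully saturated.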
the calculation formula for converting the RGB color space to the YCbCr color space is as follows:
Y=0.257R+0.564G+0.098B+16
Cb=-0.148R-0.291G+0.439B+128
Cr=0.439R-0.368G-0.071B+128
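The YCbCr conversion is a direct affine map and can be checked in a few lines; this sketch uses exactly the coefficients given above (the function name is ours):

```python
def rgb_to_ycbcr(r, g, b):
    """RGB (0-255) to YCbCr with the coefficients from the text."""
    y  =  0.257 * r + 0.564 * g + 0.098 * b + 16
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128
    cr =  0.439 * r - 0.368 * g - 0.071 * b + 128
    return y, cb, cr
```

Note that the Cb and Cr coefficient rows each sum to zero, so any gray input lands at the neutral chroma point Cb = Cr = 128.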
the calculation formula for converting the RGB color space to the Lab color space is as follows:
RGB cannot be converted directly to Lab; it must first be converted to XYZ and then to Lab, that is: RGB → XYZ → Lab.
The conversion formula is therefore divided into two parts:
conversion of RGB to XYZ
Wherein,
the gamma function is used for carrying out nonlinear tone editing on the image, and the aim is to improve the contrast of the image.
Conversion of XYZ into Lab
L = 116 f(Y/Yn) - 16
a = 500 [f(X/Xn) - f(Y/Yn)]
b = 200 [f(Y/Yn) - f(Z/Zn)]
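The two-stage RGB → XYZ → Lab path can be sketched as follows. The sRGB gamma expansion, the D65 RGB-to-XYZ matrix and the CIE f() function are the standard definitions, assumed here because the patent's own matrix and f() images are not reproduced:

```python
def rgb_to_lab(r, g, b):
    """8-bit RGB -> XYZ -> CIE Lab, assuming sRGB primaries and D65 white."""
    def gamma(c):                         # sRGB inverse companding (assumed)
        c /= 255.0
        return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92

    rl, gl, bl = gamma(r), gamma(g), gamma(b)
    # linear sRGB -> XYZ (D65 matrix, assumed)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883    # D65 reference white

    def f(t):                             # CIE Lab transfer function
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    bb = 200 * (fy - fz)
    return L, a, bb
```

As a sanity check, black maps to L = 0 and white maps close to L = 100 with near-zero a and b.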
After the RGB color space has been converted to the HSV, YCbCr and Lab color spaces, the mean, variance, energy and contrast of each of the HSV, YCbCr and Lab color spaces are extracted and concatenated to obtain the color feature vector. The mean, variance, energy and contrast are calculated, respectively, as follows:
the mean value calculation formula is as follows:
the variance calculation formula is as follows:
energy calculation formula:
contrast calculation formula:
In the above formulas, h(i,j) denotes the pixel value at coordinate (i, j), and M and N denote the width and height of the image, respectively.
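The four per-channel statistics and their concatenation can be sketched as follows. Since the patent's formula images are not reproduced, the concrete definitions of energy (mean of squared values) and contrast (max minus min) are assumptions on our part, and the names are illustrative:

```python
def channel_features(channel):
    """Mean, variance, energy and contrast for one color channel, given as a
    2-D list of values assumed normalized to [0, 1]."""
    vals = [v for row in channel for v in row]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    energy = sum(v * v for v in vals) / n          # assumed definition
    contrast = max(vals) - min(vals)               # assumed definition
    return [mean, var, energy, contrast]

def color_feature_vector(channels):
    """Concatenate the four statistics over every channel of every color
    space (HSV, YCbCr, Lab -> 9 channels -> a 36-dimensional vector)."""
    vec = []
    for ch in channels:
        vec.extend(channel_features(ch))
    return vec
```

Feeding the 9 channels of the three color spaces through this gives one fixed-length vector per color connected region, ready for the SVM.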
The above steps are repeated, and a number of color feature vectors are input to an SVM classifier for classification training, yielding an SVM classifier model that is then stored. The SVM classifier is trained on the principle of the classification margin, which relies on a preprocessing of the data: the original patterns are expressed in a higher-dimensional space through a suitable nonlinear mapping, and in a space of sufficiently high dimension the original data belonging to the two classes can be separated by a hyperplane.
Fifthly, color classification judgment and result output
Because the SVM classifier has undergone sample learning and training before use, once the SVM classifier model is obtained, subsequently extracted color feature vectors can simply be input to the SVM classifier, classified and judged, and the final result output.
By designing a method for identifying the colors of pedestrians' clothing, the invention analyzes and judges the surveillance video already present in the oil and gas and electric power industries, can identify whether workers in production areas are dressed correctly, and thereby effectively raises the level of safe production management in those areas. Compared with the prior art, the invention represents a clear advance, with prominent substantive features.
The above-mentioned embodiments are only one of the preferred implementations of the present invention, and should not be used to limit the scope of the present invention, and all modifications, additions, or equivalents that can be made to the technical solution of the present invention without departing from the spirit and scope of the present invention should be considered as within the scope of the present invention.

Claims (2)

1. The pedestrian clothing color identification method based on digital image processing is characterized by comprising the following steps of:
(1) acquiring a pedestrian image by adopting a pedestrian detection method combining an HOG feature description operator and an SVM classifier;
(2) detecting the contour shape of the pedestrian edge by using a Sobel operator to obtain an image to be searched;
(3) making a pedestrian outline shape template T according to the common postures of pedestrians, and matching the pedestrian outline shape template with a corresponding region in an image to be searched to obtain images of the upper half body and the lower half body of the pedestrians;
the specific process of matching the pedestrian outline shape template T with the image to be searched is as follows:
(a) translating and sliding the pedestrian outline shape template T over the image to be searched, from left to right and from top to bottom, to obtain the sub-images S(i,j) covered by the template in the image to be searched, where i and j denote the coordinates of the upper-left corner of the sub-image in the image to be searched;
(b) comparing the degree of match between the pedestrian outline shape template and each sub-image using the following sum-of-squared-differences measure:
D(i,j) = Σ_m Σ_n [S(i,j)(m,n) - T(m,n)]^2
(c) selecting the minimum value of D(i,j), the resulting position (i,j) being the position of the pedestrian in the image, and the width and height of the pedestrian being respectively equal to the width and height of the pedestrian outline shape template T;
(4) respectively marking the clothing colors of the upper half body and the lower half body of the pedestrian by adopting a seed filling method;
(5) extracting color features of the obtained color connected region;
(6) and according to the extracted color features, carrying out color classification and judgment by using an SVM classifier to obtain the color of the pedestrian garment, and outputting a final result.
2. The pedestrian apparel color identification method based on digital image processing as claimed in claim 1, wherein the step (5) comprises the steps of:
(5a) respectively transforming each color connected region marked in the step (4) from an RGB color space to an HSV color space, a YCbCr color space and a Lab color space;
(5b) respectively extracting the mean value, variance, energy and contrast of an HSV color space, an YCbCr color space and an Lab color space, and then connecting the mean value, variance, energy and contrast in series to obtain a color feature vector;
(5c) repeating the steps (5a) and (5b), inputting the color feature vectors of a number of pedestrian training samples into an SVM classifier for training and learning, so as to obtain an SVM classifier model; subsequently extracted color feature vectors then need only be fed into the SVM classifier to be classified and judged.
CN201510443292.0A 2015-07-24 2015-07-24 Pedestrian's dress ornament color identification method based on Digital Image Processing Active CN105069466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510443292.0A CN105069466B (en) 2015-07-24 2015-07-24 Pedestrian's dress ornament color identification method based on Digital Image Processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510443292.0A CN105069466B (en) 2015-07-24 2015-07-24 Pedestrian's dress ornament color identification method based on Digital Image Processing

Publications (2)

Publication Number Publication Date
CN105069466A CN105069466A (en) 2015-11-18
CN105069466B true CN105069466B (en) 2019-01-11

Family

ID=54498827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510443292.0A Active CN105069466B (en) 2015-07-24 2015-07-24 Pedestrian's dress ornament color identification method based on Digital Image Processing

Country Status (1)

Country Link
CN (1) CN105069466B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106844394B (en) * 2015-12-07 2021-09-10 北京航天长峰科技工业集团有限公司 Video retrieval method based on pedestrian clothes and shirt color discrimination
CN105631413A (en) * 2015-12-23 2016-06-01 中通服公众信息产业股份有限公司 Cross-scene pedestrian searching method based on depth learning
CN105976354B (en) * 2016-04-14 2019-02-01 广州视源电子科技股份有限公司 Color and gradient based component positioning method and system
CN107463544A (en) * 2016-06-06 2017-12-12 汇仕电子商务(上海)有限公司 Graph visualization collocation method
CN107918944B (en) * 2016-10-09 2021-08-31 北京奇虎科技有限公司 A kind of picture color filling method and device
CN106599781A (en) * 2016-11-08 2017-04-26 国网山东省电力公司威海供电公司 Electric power business hall dressing normalization identification method based on color and Hu moment matching
CN107358242B (en) * 2017-07-11 2020-09-01 浙江宇视科技有限公司 Target area color recognition method, device and monitoring terminal
CN107909580A (en) * 2017-11-01 2018-04-13 深圳市深网视界科技有限公司 A kind of pedestrian wears color identification method, electronic equipment and storage medium clothes
CN107766861A (en) * 2017-11-14 2018-03-06 深圳码隆科技有限公司 The recognition methods of character image clothing color, device and electronic equipment
CN108230297B (en) * 2017-11-30 2020-05-12 复旦大学 Color collocation assessment method based on garment replacement
CN110298893A (en) * 2018-05-14 2019-10-01 桂林远望智能通信科技有限公司 A kind of pedestrian wears the generation method and device of color identification model clothes
CN109145947B (en) * 2018-07-17 2022-04-12 昆明理工大学 Fashion women's dress image fine-grained classification method based on part detection and visual features
CN110263605A (en) * 2018-07-18 2019-09-20 桂林远望智能通信科技有限公司 Pedestrian's dress ornament color identification method and device based on two-dimension human body guise estimation
CN110427808A (en) * 2019-06-21 2019-11-08 武汉倍特威视系统有限公司 Police uniform recognition methods based on video stream data
CN110555393A (en) * 2019-08-16 2019-12-10 北京慧辰资道资讯股份有限公司 method and device for analyzing pedestrian wearing characteristics from video data
CN111428748B (en) * 2020-02-20 2023-06-27 重庆大学 HOG feature and SVM-based infrared image insulator identification detection method
CN111967455A (en) * 2020-10-23 2020-11-20 成都考拉悠然科技有限公司 Method for comprehensively judging specified dressing based on computer vision
CN113628287B (en) * 2021-08-16 2024-07-09 杭州知衣科技有限公司 Single-stage clothing color recognition system and method based on deep learning
CN114401365B (en) * 2021-12-31 2024-05-14 广东省教育研究院 Target person identification method, video switching method and device
CN115309988B (en) * 2022-08-10 2023-07-07 上海迪塔班克数据科技有限公司 Webpage search content matching method, system and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745226A (en) * 2013-12-31 2014-04-23 国家电网公司 Dressing safety detection method for worker on working site of electric power facility
CN104504369A (en) * 2014-12-12 2015-04-08 无锡北邮感知技术产业研究院有限公司 Wearing condition detection method for safety helmets

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8891880B2 (en) * 2009-10-16 2014-11-18 Nec Corporation Person clothing feature extraction device, person search device, and processing method thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745226A (en) * 2013-12-31 2014-04-23 国家电网公司 Dressing safety detection method for worker on working site of electric power facility
CN104504369A (en) * 2014-12-12 2015-04-08 无锡北邮感知技术产业研究院有限公司 Wearing condition detection method for safety helmets

Also Published As

Publication number Publication date
CN105069466A (en) 2015-11-18

Similar Documents

Publication Publication Date Title
CN105069466B (en) Pedestrian's dress ornament color identification method based on Digital Image Processing
Li et al. Robust rooftop extraction from visible band images using higher order CRF
CN107424142B (en) Weld joint identification method based on image significance detection
Wang et al. Character location in scene images from digital camera
CN105913093B (en) A Template Matching Method for Character Recognition Processing
CN104463195B (en) Printing digit recognizing method based on template matches
WO2017190656A1 (en) Pedestrian re-recognition method and device
US8320665B2 (en) Document image segmentation system
Asi et al. A coarse-to-fine approach for layout analysis of ancient manuscripts
CN106874884B (en) Human body re-identification method based on part segmentation
CN105447503B (en) Pedestrian detection method based on rarefaction representation LBP and HOG fusion
CN106203539B (en) Method and device for identifying container number
Zang et al. Traffic sign detection based on cascaded convolutional neural networks
CN109948625A (en) Text image clarity evaluation method and system, computer readable storage medium
CN112232332B (en) Non-contact palm detection method based on video sequence
CN103440035A (en) Gesture recognition system in three-dimensional space and recognition method thereof
KR101742115B1 (en) An inlier selection and redundant removal method for building recognition of multi-view images
CN107066972A (en) Natural scene Method for text detection based on multichannel extremal region
CN108280469A (en) A kind of supermarket's commodity image recognition methods based on rarefaction representation
Chakraborty et al. Bangladeshi road sign detection based on YCbCr color model and DtBs vector
Fitriyah et al. Traffic sign recognition using edge detection and eigen-face: Comparison between with and without color pre-classification based on Hue
CN106548195A (en) A kind of object detection method based on modified model HOG ULBP feature operators
Youlian et al. Face detection method using template feature and skin color feature in rgb color space
Kaur et al. 2-D geometric shape recognition using canny edge detection technique
Alaei et al. Logo detection using painting based representation and probability features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant