WO2012162981A1 - Method and device for video character segmentation - Google Patents
Method and device for video character segmentation
- Publication number
- WO2012162981A1 (application PCT/CN2011/079751, CN2011079751W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- foreground
- background
- image
- probability
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/167—Detection; Localisation; Normalisation using comparisons between temporally consecutive images
Definitions
- The present invention relates to the field of communications technologies, and in particular to a method and apparatus for video character segmentation.

Background Art
- Object segmentation refers to separating the object of interest from the background of a video or image at the pixel level; the segmented object can then be composited onto a new background.
- In one prior approach, a Gaussian mixture model is used to build a background color model, the video frame image is subtracted from that background color model, and threshold segmentation is applied to obtain a color model of the foreground object.
- In another prior approach, the object is segmented automatically by graph cut, and the cut image is smoothed by morphological opening and closing operations to refine the segmentation result.
- RGB: Red, Green, Blue
- HSV: Hue, Saturation, Value
- Embodiments of the present invention provide a method and apparatus for video character segmentation that can be applied to the segmentation of various video character objects and can segment a complete person object in real time.
- A method for segmenting video characters includes: performing face detection on a first frame video image to be processed to obtain a face region of a person; acquiring foreground seed pixels and background seed pixels according to the face region; calculating, according to the seed pixels, the probability that each pixel in the video image is foreground or background; and constructing a graph according to the respective probabilities and performing a graph cut to obtain the person object.
- A device for video character segmentation includes:
- a first acquiring unit configured to perform face detection on the first frame video image to be processed, to obtain a face region of a person;
- a second acquiring unit configured to acquire foreground seed pixels and background seed pixels according to the face region;
- a calculating unit configured to calculate, according to the foreground and background seed pixels, the probability that each pixel in the video image is foreground or background;
- a processing unit configured to construct a graph according to the respective probabilities and perform a graph cut to obtain the person object.
- An embodiment of the present invention provides a method and an apparatus for video character segmentation. The face region of a person is obtained by performing face detection on a first frame video image to be processed; foreground seed pixels and background seed pixels are acquired according to the face region; the probability that each pixel in the video image is foreground or background is calculated from the foreground and background seed pixels; a graph is constructed according to the respective probabilities; and a graph cut is performed to obtain the person object.
- FIG. 1 is a flowchart of a method for video character segmentation according to Embodiment 1 of the present invention;
- FIG. 2 is a block diagram of a device for video character segmentation according to Embodiment 1 of the present invention;
- FIG. 3 is a flowchart of a method for video character segmentation according to Embodiment 2 of the present invention;
- FIG. 4 is a schematic diagram of video character segmentation according to Embodiment 2 of the present invention;
- FIG. 5 is a schematic diagram of graph cutting according to Embodiment 2 of the present invention;
- FIG. 6 is a schematic diagram of contour determination according to Embodiment 2 of the present invention;
- FIG. 7 is a schematic diagram of luminance change detection according to Embodiment 2 of the present invention;
- FIG. 8 is a block diagram of an apparatus for video character segmentation according to Embodiment 2 of the present invention.

Detailed Description
- An embodiment of the present invention provides a method for video character segmentation. As shown in FIG. 1, the method includes:
- Step 101: Perform face detection on a first frame video image to be processed to obtain a face region of a person. The first frame of the video to be processed is handled here; when the current image is not the first frame, it can be segmented quickly according to the correlation between adjacent video frames.
- Step 102: Acquire foreground seed pixels and background seed pixels according to the face region of the person.
- Step 103: Calculate, according to the foreground and background seed pixels, the probability that each pixel in the video image is foreground or background.
- Step 104: Construct a graph according to the respective probabilities and perform a graph cut to obtain the person object.
- In the method for video character segmentation provided by this embodiment, the face region of a person is obtained by performing face detection on the first frame of the video image to be processed, and foreground and background seed pixels are acquired according to that region.
- The prior art used for segmenting video characters adapts poorly to various types of video, and when the segmentation result is optimized by opening and closing operations the complete person object cannot be obtained. The scheme provided by the embodiment of the present invention is suitable for segmenting various video characters and can segment complete person objects in real time.
- The embodiment of the present invention also provides a device for video character segmentation.
- The device includes a first acquiring unit 201, a second acquiring unit 202, a calculating unit 203, and a processing unit 204.
- The first acquiring unit 201 is configured to perform face detection on the first frame video image to be processed, to obtain a face region of a person;
- the second acquiring unit 202 is configured to acquire foreground seed pixels and background seed pixels according to the face region;
- the calculating unit 203 is configured to calculate, according to the foreground and background seed pixels, the probability that each pixel in the video image is foreground or background;
- the processing unit 204 is configured to construct a graph according to the respective probabilities and perform a graph cut to obtain the person object.
- In the device provided by this embodiment, the first acquiring unit performs face detection on the first frame of the video image to be processed and acquires the face region of a person; according to the face region, the second acquiring unit acquires foreground and background seed pixels; the calculating unit then calculates the probability that each pixel in the video image is foreground or background; and the processing unit constructs a graph according to the respective probabilities and performs a graph cut to obtain the person object.
- In the prior art, the number of components of the Gaussian mixture model is set manually, so adaptability to various types of video is weak and the complete person object cannot be segmented. The scheme provided by the embodiment of the present invention can be applied to the segmentation of various video characters, and the complete person object can be segmented in real time.
- An embodiment of the present invention provides another method for video character segmentation. As shown in FIG. 3, the method includes:
- Step 301: Determine whether the video frame image to be processed is the first frame.
- The purpose of this determination is that when the current video frame image is not the first frame, it may be processed according to the segmentation result of the previous frame, that is, according to the correlation between adjacent video frames, which speeds up the processing.
- Step 302: When the video frame image to be processed is the first frame, perform face detection on it to obtain a face region of a person. Specifically, the AdaBoost algorithm is used for face detection.
- AdaBoost is an iterative algorithm. Its core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (a strong classifier).
- To perform face detection, a group of classifiers is trained using face images as positive samples and non-face images as negative samples; each region of the input image to be processed is then searched, and the group of classifiers judges whether it is a face region. The detected face region is shown as the rectangular area in Fig. 4(a). A sketch of such a detector is given below.
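As a concrete illustration (not part of the patent), the following minimal sketch uses OpenCV's Haar cascade face detector, whose cascades are trained with AdaBoost in the spirit described above; the cascade file shipped with opencv-python and the detection parameters are assumptions of this sketch.

```python
# Minimal sketch: AdaBoost-trained Haar cascade face detection with
# OpenCV. The cascade file and scaleFactor/minNeighbors values are
# illustrative assumptions, not values taken from the patent.
import cv2

def detect_face(frame_bgr):
    """Return the first detected face rectangle (x, y, w, h), or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None
```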
- Step 303: Acquire foreground seed pixels and background seed pixels according to the face region.
- The detected face region is moderately adjusted to generate a foreground sample model and a background sample model.
- Specifically, the face region is appropriately reduced; the distance between the face region and the upper-body region is then determined from the height of the face region, and the width of the upper-body region is determined from the ratio of head width to shoulder width, so that the foreground model can be generated.
- The pixels in the area enclosed by the light-colored line in Fig. 4(b) are the foreground seed pixels;
- a background sample model is generated likewise, and the pixels in the area between the dark dotted line in Fig. 4(c) and the image boundary are the background seed pixels. A sketch of this seed construction follows.
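As an illustration only: the sketch below derives foreground/background seed masks from a face rectangle. The shrink factor, head-to-shoulder width ratio, and border margin are assumed values; the patent states only that the face region is appropriately reduced and the upper body sized from the head/shoulder ratio.

```python
# Sketch of seed-region construction from a detected face box.
# shrink, shoulder_ratio, and margin are illustrative assumptions.
import numpy as np

def seed_masks(face, shape, shrink=0.8, shoulder_ratio=3.0, margin=10):
    x, y, w, h = face
    H, W = shape[:2]
    fg = np.zeros((H, W), dtype=bool)
    bg = np.zeros((H, W), dtype=bool)
    # Shrunken face box -> foreground seeds.
    dx, dy = int(w * (1 - shrink) / 2), int(h * (1 - shrink) / 2)
    fg[y + dy : y + h - dy, x + dx : x + w - dx] = True
    # Upper-body box below the face, widened by the shoulder ratio.
    bw = int(w * shoulder_ratio)
    bx = max(0, x + w // 2 - bw // 2)
    fg[min(y + h, H) : H, bx : min(bx + bw, W)] = True
    # A strip along the image border -> background seeds.
    bg[:margin, :] = bg[-margin:, :] = True
    bg[:, :margin] = bg[:, -margin:] = True
    bg &= ~fg  # never let the two seed sets overlap
    return fg, bg
```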
- Step 304: Determine, from the foreground seed pixels, three sets of sample values on the three color components L, a, and b, and likewise determine, from the background seed pixels, three sets of sample values on L, a, and b.
- The solution provided by the embodiment of the present invention converts the video image from RGB (Red, Green, Blue) space to Lab space.
- Lab consists of three channels: the L channel is a luminance channel, while a and b are color channels; a represents the range from magenta to green, and b represents the range from yellow to blue.
- The three color components L, a, and b are independent of one another.
- In this way three sets of sample values are obtained on the three color components: for the foreground seeds, for example, $\{a_1^F, a_2^F, \ldots, a_n^F\}$ on the a channel, with analogous sets on the L and b channels; the same is done for the background seeds.
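A brief sketch of this sampling step, assuming OpenCV's Lab conversion and the seed masks from the previous sketch:

```python
# Sketch: convert to Lab and collect per-channel sample values at the
# seed positions (Step 304). cv2.COLOR_BGR2LAB is used as a stand-in
# for the patent's RGB-to-Lab conversion.
import cv2
import numpy as np

def seed_samples(frame_bgr, fg_mask, bg_mask):
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    fg = [lab[..., c][fg_mask].astype(np.float64) for c in range(3)]
    bg = [lab[..., c][bg_mask].astype(np.float64) for c in range(3)]
    return fg, bg  # three sample sets (L, a, b) per class
```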
- Step 305: Calculate, according to the sample values of the foreground and background seed pixels, a first foreground probability and a first background probability for each pixel in the video image.
- From the foreground and background sample values, per-channel density estimates $f_L^F(x)$, $f_a^F(x)$, $f_b^F(x)$ and $f_L^B(x)$, $f_a^B(x)$, $f_b^B(x)$ are calculated, where $x_i$ denotes the i-th foreground or background seed sample and $x$ denotes any pixel in the video image; the first foreground probability of a pixel combines its three foreground channel densities, and the first background probability combines the three background channel densities.
- Step 306: Normalize the first foreground probability and the first background probability to obtain the probability that each pixel in the video image is foreground or background.
- Each pixel in the first frame video image is processed in this way to obtain its foreground/background probability. In the resulting probability map, the brighter a pixel, the greater the probability that it is foreground; the darker a pixel, the greater the probability that it is background.
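A hedged sketch of Steps 305-306, assuming a Gaussian kernel density estimate per channel and a product combination of the channel densities (the patent specifies a non-parametric model but not the kernel, bandwidth, or combination rule):

```python
# Sketch: per-channel Gaussian KDE and normalized foreground/background
# probabilities. Kernel, bandwidth h, and the product combination are
# assumptions of this sketch. For large seed sets, subsample `samples`
# to keep the (H, W, n) broadcast affordable.
import numpy as np

def kde(samples, values, h=5.0):
    """Gaussian kernel density of `values` (H, W) given 1-D `samples`."""
    d = (values[..., None] - samples[None, None, :]) / h
    return np.exp(-0.5 * d * d).mean(axis=-1)

def fg_bg_probability(lab, fg_samples, bg_samples):
    pf = np.ones(lab.shape[:2])
    pb = np.ones(lab.shape[:2])
    for c in range(3):  # L, a, b channels
        ch = lab[..., c].astype(np.float64)
        pf *= kde(fg_samples[c], ch)
        pb *= kde(bg_samples[c], ch)
    s = pf + pb + 1e-12  # normalize so pf + pb = 1 per pixel
    return pf / s, pb / s
```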
- Step 307: Construct a graph according to the respective probabilities and perform a graph cut to obtain the person object.
- A graph G = (V, E) is defined as follows:
- V (vertex) is the set of vertices of G, with $v_i$ the i-th vertex and $v_j$ the j-th vertex;
- E (edge) is the set of lines joining associated vertices in G, with $e_{ij}$ the edge linking vertices i and j;
- W (weight) assigns a value to each edge connecting two vertices, representing how closely the two vertices are related, with $w_{ij}$ the weight of the edge linking vertices i and j.
- The solution provided by the embodiment of the present invention performs the graph cut using the max-flow min-cut algorithm.
- All vertices of the graph are divided into two subsets, and the edges running between the two subsets constitute a cut of the graph, shown by the dotted line in Fig. 5(b).
- The two subsets contain a virtual source point and a virtual sink point respectively: the source corresponds to the foreground seed pixels and the sink to the background seed pixels. Among all cuts separating the source from the sink, the cut with the smallest total weight is called the minimum cut.
- A basic way to find the minimum cut is to find the maximum flow from the source to the sink: each edge connecting two vertices is regarded as a water pipe whose capacity is the edge weight.
- The maximum flow is then the greatest water flow that can pass from the source to the sink,
- and the pipes that are completely saturated form the minimum cut between the source and the sink.
- The graph is constructed from energy terms between pixels. Specifically, each pixel in the video frame image corresponds to a vertex of the graph, an edge of the graph connects two adjacent pixels, and each edge is assigned
- a weight expressing the relationship between the two pixels it connects, such as the degree of similarity between their colors. Each pixel vertex is also related to the source and the sink:
- the pixel's foreground probability expresses its relationship with the source, and its background probability expresses its relationship with the sink,
- where the source point and the sink point represent the foreground seed pixels and the background seed pixels respectively.
- With B a binary labeling variable assigning each pixel to foreground or background,
- the problem of segmenting the person object in a video frame image is converted into the problem of cutting the constructed graph,
- and the max-flow min-cut algorithm can be used to perform the graph cut, thereby obtaining the person object. A sketch of this construction follows.
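As an illustration (the patent names no implementation), the following sketch builds such a graph with the PyMaxflow library, taking terminal capacities from the normalized probabilities; a uniform smoothness weight stands in for the color-similarity pairwise term a fuller implementation would use.

```python
# Sketch: graph construction and max-flow/min-cut with PyMaxflow
# (library choice and weights are assumptions, not the patent's).
import maxflow
import numpy as np

def graph_cut(pf, pb, lam=2.0):
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(pf.shape)
    # Pairwise term between grid neighbors; a uniform weight stands in
    # for a color-similarity weight here.
    g.add_grid_edges(nodes, lam)
    # Terminal term: edge to source weighted by -log P(background),
    # edge to sink by -log P(foreground), in the usual graph-cut style.
    eps = 1e-9
    g.add_grid_tedges(nodes, -np.log(pb + eps), -np.log(pf + eps))
    g.maxflow()
    # get_grid_segments is True on the sink (background) side.
    return ~g.get_grid_segments(nodes)  # True = foreground pixel
```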
- Steps 302 to 307 describe the processing of the first frame image.
- When the image to be processed is not the first frame, performing steps 302 to 307 on every frame would cost considerable time, so the following process is used for the remainder of the video sequence.
- Step 308: When the video frame image to be processed is not the first frame, perform brightness change detection on it to obtain the brightness difference distance between the current video frame image and the previous frame video image.
- Whether the non-parametric foreground/background model needs to be updated depends mainly on changes in the scene.
- One of the main factors is a change of brightness.
- A change of brightness may be caused by the surrounding environment or by the video capture device, and it results in foreground/background probabilities computed with the current non-parametric model that no longer fit the current video frame well.
- The brightness change detection uses the Bhattacharyya distance between the luminance histogram of the current frame and that of the previous frame; for normalized histograms this distance can be written as $d(H_1, H_0) = \sqrt{1 - \sum_i \sqrt{H_1(i)\,H_0(i)}}$,
- where $H_1(i)$ is the value of the current frame's histogram at gray level i
- and $H_0(i)$ is the value of the previous frame's histogram $H_0$ at gray level i.
- Step 309: Determine whether the brightness difference distance is less than a preset threshold.
- The preset threshold is determined experimentally and can be, for example, 0.1.
- When the brightness difference distance is not less than the preset threshold, the current image is processed in the same way as a first frame video image. FIG. 7 shows the luminance histograms of two consecutive frames whose brightness difference distance exceeds the preset threshold; in that case processing proceeds according to steps 302-307. A sketch of this check follows.
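Assuming OpenCV's histogram utilities, a minimal sketch of Steps 308-309 (the 0.1 threshold follows the experimentally determined value mentioned above):

```python
# Sketch: Bhattacharyya-distance brightness change detection between
# consecutive frames' luminance histograms.
import cv2

def brightness_changed(prev_gray, cur_gray, threshold=0.1):
    h0 = cv2.calcHist([prev_gray], [0], None, [256], [0, 256])
    h1 = cv2.calcHist([cur_gray], [0], None, [256], [0, 256])
    cv2.normalize(h0, h0, 1.0, 0.0, cv2.NORM_L1)
    cv2.normalize(h1, h1, 1.0, 0.0, cv2.NORM_L1)
    d = cv2.compareHist(h1, h0, cv2.HISTCMP_BHATTACHARYYA)
    return d >= threshold  # True: reprocess as a first frame
```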
- Step 310: When the brightness difference distance is less than the preset threshold, determine the person object contour of the current video frame image according to the person object contour of the previous frame video image.
- At least one key point on the contour of the person object of the previous frame video image is extracted: feature points where the contour direction changes abruptly can be extracted and then sampled proportionally to obtain a suitable number of key points, and the starting point, the ending point, and feature points close to the bottom of the image are also selected as key points. The dozens of dark dots in the gray banded area of Fig. 6 are the key points.
- For each key point, let p denote the position of the pixel x and v denote the motion vector estimated for x. Candidate motions are searched within a window of ±4 pixels in each direction, giving (2×4+1) × (2×4+1) = 81 possible motion vectors.
- For each candidate an energy function value E can be calculated, and the motion vector corresponding to the smallest E is selected as the motion vector of pixel x;
- in this way, the corresponding position of pixel x in the current frame is obtained,
- and that point is taken as a target key point.
- The target key points are connected to obtain the person object contour of the current video frame image. A sketch of the key-point search appears below.
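As an illustration, the sketch below performs the brute-force 81-candidate search for one key point; the energy E is taken here as a sum of absolute patch differences, which is an assumption, since the patent does not spell out its energy function:

```python
# Sketch: propagate a contour key point by searching the 81 candidate
# motions in a +/-4 pixel window (Step 310). Patch size and the SAD
# energy are assumptions of this sketch; it assumes the key point lies
# at least patch//2 + radius pixels inside the previous frame.
import numpy as np

def track_keypoint(prev_gray, cur_gray, x, y, radius=4, patch=5):
    r = patch // 2
    ref = prev_gray[y - r : y + r + 1, x - r : x + r + 1].astype(np.int32)
    best, best_e = (0, 0), np.inf
    for dy in range(-radius, radius + 1):      # 9 x 9 = 81 candidates
        for dx in range(-radius, radius + 1):
            cand = cur_gray[y + dy - r : y + dy + r + 1,
                            x + dx - r : x + dx + r + 1].astype(np.int32)
            if cand.shape != ref.shape:
                continue  # candidate window falls off the image
            e = np.abs(cand - ref).sum()  # sum-of-absolute-differences
            if e < best_e:
                best_e, best = e, (dx, dy)
    return x + best[0], y + best[1]  # target key point position
```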
- Step 311: Update, according to the person object contour, the probability that each pixel in the current video frame image is foreground or background.
- The contour determined so far is only approximate: as shown in FIG. 6, the white area is the foreground, the black area is the background, and the gray banded area is uncertain, i.e., pixels in the gray band may be foreground or background.
- Using the non-parametric foreground/background model of the previous frame video image, the foreground/background probability of each pixel in the current video frame image is updated; specifically, the probability of each pixel being foreground or background is calculated according to the methods of steps 305 and 306.
- Step 312: Construct a graph according to the respective probabilities and perform a graph cut to obtain the person object of the current video frame image.
- The person object of the video frame image is cut out according to the method of step 307.
- The video character segmentation method provided by this embodiment of the present invention processes the video frame images as above.
- When the prior art is used for object segmentation, adaptability to various types of video is weak, and when the segmentation result is optimized by opening and closing operations the complete person object cannot be obtained.
- The scheme provided by the embodiment of the present invention can be applied to the segmentation of various video characters, can segment the complete person object in real time, and can quickly segment the entire video based on the correlation between adjacent video frames.
- An embodiment of the present invention provides a device for video character segmentation, shown in FIG. 8.
- The device includes: a determining unit 801, a first acquiring unit 802, a second acquiring unit 803, a calculating unit 804 (containing a determining module 805, a first calculating module 806, and a second calculating module 809), a processing unit 812, a detection acquiring unit 813, a determining unit 814 (containing an extracting module 815, a first determining module 816, a second determining module 817, and an obtaining module 818), and an updating unit 819.
- The determining unit 801 is configured to determine whether the video frame image to be processed is the first frame.
- The first acquiring unit 802 is configured to perform face detection on the first frame of the video image to be processed to obtain a face region of a person. The AdaBoost algorithm is used for the face detection: a group of classifiers is trained using face images as positive samples and non-face images as negative samples; each region of the input image to be processed is searched, and the group of classifiers determines the face region.
- The second acquiring unit 803 acquires foreground seed pixels and background seed pixels.
- The face region acquired by the first acquiring unit 802 is moderately adjusted: the face region is appropriately reduced, and the distance between the face region and the upper-body region is determined from the height of the face region;
- the width of the upper-body region is determined from the ratio of head width to shoulder width, so that the foreground model is generated, wherein the pixels included in the foreground model are the foreground seed pixels;
- a background sample model is generated likewise, wherein the pixels included in the background sample model are the background seed pixels.
- The calculating unit 804 is configured to calculate, according to the foreground and background seed pixels, the probability that each pixel in the video image is foreground or background.
- The determining module 805 of the calculating unit 804 is configured to determine, from the foreground seed pixels, three sets of sample values on the three color components L, a, and b, and to determine, from the background seed pixels, three sets of sample values on L, a, and b.
- The first calculating module 806 is configured to calculate, according to the sample values of the foreground and background seed pixels, a first foreground probability and a first background probability for each pixel in the video image.
- The first calculating submodule 807 of the first calculating module 806 is configured to calculate, from those sample values, the per-channel densities $f_L^F(x)$, $f_a^F(x)$, $f_b^F(x)$ and $f_L^B(x)$, $f_a^B(x)$, $f_b^B(x)$, where $x$ denotes any pixel in the video image; the $f^F$ terms represent the foreground probabilities of the pixel on the three color components L, a, and b, and the $f^B$ terms the corresponding background probabilities.
- The second calculating submodule 808 is configured to calculate the first foreground probability and the first background probability of any pixel in the video image by combining the per-channel densities.
- The second calculating module 809 normalizes the first foreground probability and the first background probability and calculates the probability that each pixel in the video image is foreground or background.
- After the foreground/background probability of each pixel in the current video frame image has been determined, the processing unit 812 constructs a graph according to the respective probabilities and performs a graph cut to obtain the person object.
- The detection acquiring unit 813 is configured to perform brightness change detection on the video frame image to obtain the brightness difference distance between the current video frame image and the previous frame video image.
- The brightness change detection uses the Bhattacharyya distance between the luminance histogram of the current frame and that of the previous frame,
- where $H_1(i)$ is the value of the current frame's histogram at gray level i
- and $H_0(i)$ is the value of the previous frame's histogram $H_0$ at gray level i.
- The determining unit 814 determines the person object contour of the current video frame image according to the person object contour of the previous frame video image.
- The extracting module 815 of the determining unit 814 is configured to extract, according to the binary image of the previous frame's segmentation result, at least one key point on the contour of the person object of the previous frame video image.
- The first determining module 816 determines the corresponding key point of each key point in the current video frame image; according to the distance and slope change between adjacent key points,
- the second determining module 817 determines at least one target key point.
- The obtaining module 818 is configured to connect the at least one target key point to obtain the person object contour of the current video frame image.
- The updating unit 819 updates the probability that each pixel in the current video frame image is foreground or background; the processing unit 812 is further configured to construct a graph according to the updated probabilities and perform a graph cut to obtain the person object of the current video frame image.
- In the device for video character segmentation provided by this embodiment of the present invention, the first acquiring unit performs face detection on the first frame of the video image to be processed to obtain a face region of a person; according to the face region, the second acquiring unit acquires foreground and background seed pixels; the calculating unit then calculates the probability that each pixel in the video image is foreground or background; and the processing unit constructs a graph according to the respective probabilities and performs a graph cut to obtain the person object.
Abstract
The present invention relates to the technical field of communications. It proposes a method and device for video character segmentation, applicable to the segmentation of various video character objects and capable of segmenting a complete person object in real time. In the technical solution provided by the embodiments of the present invention, face detection is performed on a first frame of a video image to be processed so as to obtain the face region of the character; foreground seed pixels and background seed pixels are obtained according to the face region; the probability that each pixel in the video image is the foreground or the background is calculated, respectively, according to the foreground seed pixels and the background seed pixels; a graph is constructed according to each probability; and a graph cut is performed to obtain the character object. The solution provided by the embodiments of the present invention is applicable to the segmentation of video objects.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201180001853.1A CN103119625B (zh) | 2011-09-16 | 2011-09-16 | Method and device for video character segmentation |
| PCT/CN2011/079751 WO2012162981A1 (fr) | 2011-09-16 | 2011-09-16 | Method and device for video character segmentation |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2011/079751 WO2012162981A1 (fr) | 2011-09-16 | 2011-09-16 | Method and device for video character segmentation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012162981A1 (fr) | 2012-12-06 |
Family
ID=47258310
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2011/079751 Ceased WO2012162981A1 (fr) | 2011-09-16 | 2011-09-16 | Method and device for video character segmentation |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN103119625B (fr) |
| WO (1) | WO2012162981A1 (fr) |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108230252B (zh) * | 2017-01-24 | 2022-02-01 | 深圳市商汤科技有限公司 | Image processing method and apparatus, and electronic device |
| CN106846336B (zh) * | 2017-02-06 | 2022-07-15 | 腾讯科技(上海)有限公司 | Method and device for extracting a foreground image and replacing an image background |
| CN106997599B (zh) * | 2017-04-17 | 2019-08-30 | 华东理工大学 | Illumination-sensitive video moving object segmentation method |
| CN107221058A (zh) * | 2017-05-25 | 2017-09-29 | 刘萍 | Intelligent passage blocking system |
| CN107766803B (zh) * | 2017-09-29 | 2021-09-28 | 北京奇虎科技有限公司 | Scene-segmentation-based video character dressing method, device, and computing device |
| CN109035257B (zh) * | 2018-07-02 | 2021-08-31 | 百度在线网络技术(北京)有限公司 | Portrait segmentation method, device, and equipment |
| CN113673270B (zh) * | 2020-04-30 | 2024-01-26 | 北京达佳互联信息技术有限公司 | Image processing method and apparatus, electronic device, and storage medium |
| CN115346233A (zh) * | 2021-05-12 | 2022-11-15 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and storage medium |
| CN119445110B (zh) * | 2024-10-28 | 2025-07-15 | 湘西民族职业技术学院 | Feature-enhancement-based panoramic interactive tooth segmentation method |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101588459A (zh) * | 2009-06-26 | 2009-11-25 | 北京交通大学 | Video matting processing method |
| CN101710418A (zh) * | 2009-12-22 | 2010-05-19 | 上海大学 | Interactive image segmentation method based on geodesic distance |
| CN102129691A (zh) * | 2011-03-22 | 2011-07-20 | 北京航空航天大学 | Video object tracking and segmentation method using a Snake contour model |
| CN102156995A (zh) * | 2011-04-21 | 2011-08-17 | 北京理工大学 | Video moving foreground segmentation method under a moving camera |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4639271B2 (ja) * | 2005-12-27 | 2011-02-23 | 三星電子株式会社 | Camera |
| JP2008123086A (ja) * | 2006-11-09 | 2008-05-29 | Matsushita Electric Ind Co Ltd | Image processing apparatus and image processing method |
| CN100580691C (zh) * | 2007-03-16 | 2010-01-13 | 上海博康智能信息技术有限公司 | Interactive face recognition system and method comprehensively using face and body auxiliary information |
| CN101587541B (zh) * | 2009-06-18 | 2011-02-02 | 上海交通大学 | Person recognition method based on human body contour |
- 2011-09-16: WO application PCT/CN2011/079751 filed; published as WO2012162981A1 (status: not active, Ceased)
- 2011-09-16: CN application CN201180001853.1A filed; granted as CN103119625B (status: not active, Expired - Fee Related)
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111507997A (zh) * | 2020-04-22 | 2020-08-07 | 腾讯科技(深圳)有限公司 | Image segmentation method and apparatus, device, and computer storage medium |
| CN111583292A (zh) * | 2020-05-11 | 2020-08-25 | 浙江大学 | Adaptive image segmentation method for two-photon calcium imaging video data |
| CN111583292B (zh) | 2020-05-11 | 2023-07-07 | 浙江大学 | Adaptive image segmentation method for two-photon calcium imaging video data |
| CN115984378A (zh) * | 2022-12-22 | 2023-04-18 | 浙江大华技术股份有限公司 | Track foreign object detection method, apparatus, device, and medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN103119625B (zh) | 2015-06-03 |
| CN103119625A (zh) | 2013-05-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2012162981A1 (fr) | Method and device for video character segmentation | |
| CN114519808 (zh) | Image fusion method, apparatus, device, and storage medium | |
| CN101375607B (zh) | Method and system for inter-mode region-of-interest video object segmentation | |
| CN107274419B (zh) | Deep learning saliency detection method based on global priors and local context | |
| KR101023733B1 (ko) | Intra-mode region-of-interest video object segmentation | |
| KR100997064B1 (ko) | Multi-mode region-of-interest video object segmentation | |
| Li et al. | Saliency model-based face segmentation and tracking in head-and-shoulder video sequences | |
| US7680342B2 (en) | Indoor/outdoor classification in digital images | |
| US9418426B1 (en) | Model-less background estimation for foreground detection in video sequences | |
| WO2017084204A1 (fr) | Method and system for tracking a human skeleton point in a two-dimensional video stream | |
| US20150125074A1 (en) | Apparatus and method for extracting skin area to block harmful content image | |
| CN108447068B (zh) | Automatic trimap generation method and foreground extraction method using the trimap | |
| JP4098021B2 (ja) | Scene identification method, apparatus, and program | |
| CN107239735A (zh) | Liveness detection method and system based on video analysis | |
| CN105046721B (zh) | Camshift algorithm based on Grabcut and LBP tracking centroid correction model | |
| CN116309607B (zh) | Machine-vision-based boat-type intelligent water rescue platform | |
| CN105868735A (zh) | Preprocessing method for face tracking and video-based smart health monitoring system | |
| CN105550999A (zh) | Video image enhancement method based on background reuse | |
| CN109784216B (zh) | Probability-map-based RoI extraction method for vehicle-mounted thermal imaging pedestrian detection | |
| JP2000348173A (ja) | Lip extraction method | |
| CN109118546A (zh) | Depth-of-field level estimation method based on a single-frame image | |
| Zafarifar et al. | Blue sky detection for picture quality enhancement | |
| CN106327500B (zh) | Depth information acquisition method and device | |
| Jyothisree et al. | Shadow detection using tricolor attenuation model enhanced with adaptive histogram equalization | |
| CN113781330 (zh) | Image processing method, apparatus, and electronic system | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | WWE | Wipo information: entry into national phase | Ref document number: 201180001853.1; Country of ref document: CN |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11866721; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 11866721; Country of ref document: EP; Kind code of ref document: A1 |