
CN108510500A - Method and system for processing the hair layer of a virtual character image based on face skin color detection - Google Patents


Info

Publication number
CN108510500A
CN108510500A (application CN201810138228.5A)
Authority
CN
China
Prior art keywords
skin color
processing
face
area
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810138228.5A
Other languages
Chinese (zh)
Other versions
CN108510500B (en)
Inventor
陈嘉莉
蒋念娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Shuihao Technology Co ltd
Original Assignee
Yun Zhimeng Science And Technology Ltd Of Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yun Zhimeng Science And Technology Ltd Of Shenzhen filed Critical Yun Zhimeng Science And Technology Ltd Of Shenzhen
Priority to CN201810138228.5A priority Critical patent/CN108510500B/en
Publication of CN108510500A publication Critical patent/CN108510500A/en
Application granted granted Critical
Publication of CN108510500B publication Critical patent/CN108510500B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for processing the hair layer of a virtual character image based on face skin color detection, comprising: detecting the face region and feature points; processing the skin color region; obtaining a face mask from the skin color region, the feature-point contour, the head-matting mask and other information; processing the face mask so that its edges transition naturally; and rendering each layer to generate the final image. The invention provides such a method and a corresponding system. Because hair is difficult to matte out of the head region, and the matted hair can look unnatural under the influence of varied backgrounds and clothing, the invention places the head layer behind the body and clothing layers, so that unnatural hair portions are covered and the generated character image looks better.

Description

Method and system for processing hair image layer of virtual character image based on human face skin color detection
Technical Field
The invention relates to the field of computer image processing, in particular to a method and a system for processing a hair image layer of a virtual human image based on human face skin color detection.
Background
With the continuous development of artificial intelligence, image processing plays an increasingly important role in daily life. Hair processing for virtual character images is an important area of image processing; before it can be performed, the face, its feature points and the skin color region must be detected. Face and feature point detection is mainly based on machine-learning techniques. Skin color detection selects the color range in an image that corresponds to the inherent color of skin, i.e. it selects the pixels of the region where human skin is located.
Existing approaches mainly include the following. Scheme one: detect the face information of the current frame, obtain an approximate outline of the face region with an Active Shape Model (ASM) algorithm, estimate the skin region of the face from that outline while avoiding regions that could mislead the estimate (such as the eyes, eyebrows and lips), threshold-segment the estimated skin region using preset empirical skin thresholds, uniformly select a number of skin color seeds in the different skin regions, and spread from the selected seed points through the surrounding connected regions, so that all connected skin color regions are detected. Scheme two: obtain the face region from the grayscale image of the current frame, compute a histogram of the face region, find its approximate valley points, and use those valley points to divide the face region into skin and non-skin areas. Both schemes, however, struggle with hair matting, and the matted hair can look unnatural under the influence of varied backgrounds and clothing.
Disclosure of Invention
The invention provides a method for processing a hair layer of a virtual character based on human face skin color detection, which comprises the following steps,
detecting a face area and feature points;
processing a skin color area;
obtaining a face mask from the skin color region, the feature-point contour, the head-matting mask and other information;
processing the face mask so that its edges transition naturally;
and rendering each layer to generate a final image.
The face region of the picture is detected with machine-learning-based face region detection and feature point detection, yielding feature points for the facial features and the face contour.
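The feature-point index lists used throughout this patent are consistent with the widely used 68-point facial-landmark convention (as produced, e.g., by dlib's landmark predictor). The sketch below names those index groups; the grouping is our assumption for readability, since the patent itself only lists raw indices (and appears to mix 0-based and 1-based lists between steps).

```python
import numpy as np

# Index groups of the 68-point landmark convention (0-based, an assumption;
# the patent's initial-mask step matches it, while its candidate-region
# step appears to use 1-based indices).
LANDMARKS_68 = {
    "jaw":         range(0, 17),   # 0-16: face contour
    "right_brow":  range(17, 22),
    "left_brow":   range(22, 27),
    "nose":        range(27, 36),
    "right_eye":   range(36, 42),
    "left_eye":    range(42, 48),
    "outer_mouth": range(48, 60),
    "inner_mouth": range(60, 68),
}

def group_points(points, group):
    """Select the (x, y) rows of one named group from a 68x2 landmark array."""
    return np.asarray(points)[list(LANDMARKS_68[group])]
```

The index lists in the following steps can then be read against these groups.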
The skin color area processing comprises the following steps of,
combining the characteristic points and the brightness information to obtain an initial skin color area;
combining the brightness and the color information in the initial skin color area to obtain an accurate skin color area;
estimating the ellipsoid space of skin color from the skin color region, following the method of "Skin Color Modeling of Digital Photographic Images";
and solving with the max-flow/min-cut method of graph cuts, based on the skin color ellipsoid distance, to obtain an optimized skin color region.
The initial skin color region is obtained by combining the feature points and luminance information as follows:
fill 255 inside the polygon enclosed by feature points 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,45,46,47,42,27,39,40,41,36, and fill 0 inside the polygon enclosed by feature points 48,49,50,51,52,53,54,55,56,57,58,59; then fill 0 in every region with luminance value L < 30. The result is the initial skin color region.
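The polygon-fill step above can be sketched in pure numpy. `point_in_polygon` is a standard even-odd ray-casting test; the index lists are taken from the patent, and treating 48-59 as the outer-lip ring of a 68-point landmark set is our assumption.

```python
import numpy as np

def point_in_polygon(xs, ys, poly):
    """Vectorised even-odd ray casting: True where (xs, ys) lies inside poly."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    inside = np.zeros(xs.shape, dtype=bool)
    n = len(poly)
    j = n - 1
    for i in range(n):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Toggle wherever a horizontal ray from the point crosses edge (j, i).
        crosses = ((yi > ys) != (yj > ys)) & \
                  (xs < (xj - xi) * (ys - yi) / (yj - yi + 1e-12) + xi)
        inside ^= crosses
        j = i
    return inside

# Index lists copied from the patent (0-based 68-point convention assumed).
FACE_POLY = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
             45, 46, 47, 42, 27, 39, 40, 41, 36]
MOUTH_POLY = list(range(48, 60))

def initial_skin_mask(landmarks, luminance):
    """Fill 255 in the face polygon, 0 in the mouth polygon, 0 where L < 30."""
    h, w = luminance.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[point_in_polygon(xs, ys, landmarks[FACE_POLY])] = 255
    mask[point_in_polygon(xs, ys, landmarks[MOUTH_POLY])] = 0
    mask[luminance < 30] = 0
    return mask
```

In practice a library fill such as OpenCV's `fillPoly` would replace the hand-rolled test; the numpy version is shown to keep the sketch self-contained.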
Refining the initial skin color region with luminance and color information into an accurate skin color region includes,
counting to obtain a brightness point set of a skin color area, wherein the brightness and position information of each pixel in the skin color area are recorded;
sorting the luminance point set by luminance value in ascending order and removing the darkest 15%-20% and the brightest 10%-15% of the pixels;
selecting the mid-luminance points of the skin color region as the seed point set: locate the median position in the point set and take the points within plus or minus 5 of that rank as seeds;
selecting a new candidate skin color region: take feature points 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,27,26,25,24,23,22,21,20,19,18 as the outer contour, shift the eyebrow points 18,19,20,21,22 (left eyebrow) and 23,24,25,26,27 (right eyebrow) downward, and remove the regions enclosed by 31,32,33,34,35,36 (nose), 37,38,39,40,41,42 (right eye) and 43,44,45,46,47,48 (left eye);
traversing the seed point set and taking, as the new skin color region, every pixel in the candidate region of step (d) whose BGR color distance to a seed point is less than 50.
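A compact sketch of the refinement steps above, for a BGR numpy image. The function name, the use of the mean BGR value as the luminance proxy, and the 17.5%/12.5% trim fractions (midpoints of the patent's 15-20% and 10-15% ranges) are our assumptions.

```python
import numpy as np

def refine_skin_region(bgr, init_mask, candidate_mask,
                       dark_frac=0.175, bright_frac=0.125, color_thresh=50.0):
    """Refine the initial skin region by seed points and BGR color distance."""
    ys, xs = np.nonzero(init_mask)
    # Luminance proxy: mean of the B, G, R channels (assumption; the patent
    # refers to a luminance value L without naming the color space).
    lum = bgr[ys, xs].mean(axis=1)
    order = np.argsort(lum)                     # ascending luminance
    n = len(order)
    kept = order[int(n * dark_frac): n - int(n * bright_frac)]
    mid = len(kept) // 2
    seed_idx = kept[max(0, mid - 5): mid + 6]   # median rank +/- 5 as seeds
    seeds = bgr[ys[seed_idx], xs[seed_idx]].astype(float)
    cy, cx = np.nonzero(candidate_mask)
    cand = bgr[cy, cx].astype(float)
    # Keep candidate pixels whose distance to the nearest seed color is small.
    d = np.linalg.norm(cand[:, None, :] - seeds[None, :, :], axis=2).min(axis=1)
    out = np.zeros(init_mask.shape, dtype=np.uint8)
    keep = d < color_thresh
    out[cy[keep], cx[keep]] = 255
    return out
```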
Based on the skin color region, the ellipsoid space of skin color is estimated following the method of "Skin Color Modeling of Digital Photographic Images"; the ellipsoid space model of skin color is:
Φ(X) = [X − Ψ]ᵀ Λ⁻¹ [X − Ψ]
where,
X1, …, Xn are the colors that appear in the skin color region, and f(Xi) is the number of times color Xi appears.
Estimating an ellipsoid spatial model of skin color based on the skin color region;
and substituting each pixel of the matting mask into the ellipsoid space model of the skin color to solve to obtain the ellipsoid space distance between each pixel and the skin color.
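The ellipsoid model above can be sketched as follows, reading Ψ as the frequency-weighted mean color and Λ as the frequency-weighted covariance of the observed skin colors (our interpretation of f(Xi) in the formula):

```python
import numpy as np

def fit_skin_ellipsoid(colors, counts):
    """Fit Phi(X) = (X - Psi)^T Lambda^-1 (X - Psi) from skin colors Xi
    with occurrence counts f(Xi)."""
    colors = np.asarray(colors, float)
    w = np.asarray(counts, float) / np.sum(counts)
    psi = (w[:, None] * colors).sum(axis=0)            # weighted mean color
    d = colors - psi
    # Weighted covariance: sum_i w_i (Xi - Psi)(Xi - Psi)^T
    lam = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0)
    lam_inv = np.linalg.inv(lam)

    def phi(x):
        """Ellipsoid (Mahalanobis-style) distance of color(s) x from skin."""
        dx = np.atleast_2d(x) - psi
        return np.einsum('ij,jk,ik->i', dx, lam_inv, dx)

    return psi, lam, phi
```

Evaluating `phi` at every pixel of the matting mask yields the per-pixel ellipsoid distance used by the graph cut below.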
An optimized, smoother skin color region is obtained by solving with the max-flow/min-cut method of graph cuts, based on the skin color ellipsoid distance, comprising,
setting a smoothness term to represent the weight between adjacent points, where dist1 is the color difference between adjacent pixels, σ1 is set to 15 and α is set to 20;
setting a data term to represent the weight between each point and the source/sink terminals, where dist2 is the ellipsoid distance of the pixel from skin color (step 5c), β is set to 1 and δ2 is set to 15.
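The patent fixes the parameter values (σ1 = 15, α = 20, β = 1, δ2 = 15) but not the functional form of the weights; the Gaussian-kernel form below is a common choice in graph-cut segmentation and is an assumption:

```python
import numpy as np

SIGMA1, ALPHA = 15.0, 20.0   # smoothness-term parameters from the patent
BETA, DELTA2 = 1.0, 15.0     # data-term parameters from the patent

def smooth_weight(dist1):
    """Weight between adjacent pixels; Gaussian-kernel form is an assumption,
    the patent only fixes sigma1 = 15 and alpha = 20."""
    return ALPHA * np.exp(-(np.asarray(dist1, float) ** 2) / (2 * SIGMA1 ** 2))

def data_weight(dist2):
    """Weight between a pixel and the source/sink terminals; same caveat,
    the patent only fixes beta = 1 and delta2 = 15."""
    return BETA * np.exp(-(np.asarray(dist2, float) ** 2) / (2 * DELTA2 ** 2))
```

These weights would populate the n-links and t-links of a max-flow/min-cut graph; the cut then labels each pixel skin or non-skin.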
The face mask is processed so that its edges transition naturally, including,
dilating the face mask by 7 iterations and then eroding it by 7 iterations;
applying a 5×5 Gaussian blur twice so that the face edges transition naturally;
and obtaining the processed face mask and the four-channel face image.
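The mask-smoothing step can be sketched with `scipy.ndimage`; using it in place of the patent's unspecified image library, and approximating the 5×5 Gaussian kernel with `gaussian_filter(sigma=1.0)`, are assumptions.

```python
import numpy as np
from scipy import ndimage

def smooth_face_mask(mask):
    """Close the mask (dilate 7 iterations, erode 7 iterations) and soften
    its edges with two blur passes."""
    m = mask > 0
    m = ndimage.binary_dilation(m, iterations=7)   # "expand" 7 iterations
    m = ndimage.binary_erosion(m, iterations=7)    # "reduce" 7 iterations
    out = np.where(m, 255.0, 0.0)
    for _ in range(2):                             # two blur passes
        out = ndimage.gaussian_filter(out, sigma=1.0)
    return out.astype(np.uint8)
```

The dilate-then-erode pair is a morphological closing: it fills small holes and gaps without changing the overall mask extent.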
Rendering each layer to generate the final image includes,
pasting the real-head layer, the four-channel image matted out with the head mask;
pasting the virtual body and clothing layers;
pasting the real-face layer on top;
and generating the final image.
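The layer order above (real head at the bottom, virtual body and clothes over it to cover unnatural hair edges, real face on top) can be sketched with simple alpha-over compositing:

```python
import numpy as np

def over(dst_bgr, src_bgra):
    """Alpha-over: paint a four-channel BGRA layer onto a BGR canvas."""
    a = src_bgra[..., 3:4].astype(float) / 255.0
    return (src_bgra[..., :3] * a + dst_bgr * (1.0 - a)).astype(np.uint8)

def render(head_bgra, body_bgra, face_bgra):
    """Composite in the patent's order: real head, then virtual body and
    clothes (covering unnatural hair edges), then the real face on top."""
    h, w = head_bgra.shape[:2]
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    canvas = over(canvas, head_bgra)
    canvas = over(canvas, body_bgra)
    canvas = over(canvas, face_bgra)
    return canvas
```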
The invention provides a hair layer processing system of a virtual character based on human face skin color detection, which comprises,
the detection module is used for detecting a face area and feature points;
the area processing module is used for processing the skin color area;
the head-matting module, for obtaining the face mask from the skin color region, the feature-point contour, the head-matting mask and other information;
the face mask processing module, for processing the face mask with natural edge transitions;
and the rendering module is used for rendering each layer to generate a final image.
The invention provides a product for processing the hair layer of a virtual character image based on face skin color detection, applicable to imagery for virtual reality, virtual try-on, virtual social applications, apparel, shoes and accessories, and contactless body measurement.
Advantageous effects:
The invention provides a method and a system for processing the hair layer of a virtual character image based on face skin color detection. Because hair is difficult to matte out of the head region, and the matted hair can look unnatural due to the influence of varied backgrounds and clothing, the invention places the head layer behind the body and clothing layers, so that unnatural hair portions are covered and the generated character image looks better.
Description of the drawings:
FIG. 1 is a schematic diagram of feature points of the five sense organs and facial contours
FIG. 2 is a schematic diagram of a mask
FIG. 3 is a schematic view of an initial skin tone region
FIG. 4 is a schematic diagram of an accurate skin tone region
FIG. 5 is a schematic view of a new skin tone region
FIG. 6 is a schematic view of the ellipsoid spatial distance between each pixel and skin tone
FIG. 7 is a schematic diagram of optimizing skin tone regions
FIG. 8 is a schematic diagram of face mask
FIG. 9 is a schematic diagram of processed face mask
FIG. 10 is a four-channel schematic diagram of a processed face
FIG. 11 is a final image generation effect diagram
Detailed Description
The embodiment provides a method for processing a hair layer of a virtual character based on human face skin color detection, which comprises the following steps,
detecting a face area and feature points;
processing a skin color area;
obtaining a face mask from the skin color region, the feature-point contour, the head-matting mask and other information;
processing the face mask so that its edges transition naturally;
and rendering each layer to generate a final image.
In a preferred embodiment, the face region of the picture is detected with machine-learning-based face region detection and feature point detection, yielding feature points for the facial features and the face contour.
In a preferred embodiment, the skin color region processing includes,
combining the characteristic points and the brightness information to obtain an initial skin color area;
combining the brightness and the color information in the initial skin color area to obtain an accurate skin color area;
estimating the ellipsoid space of skin color from the skin color region, following the method of "Skin Color Modeling of Digital Photographic Images";
and solving with the max-flow/min-cut method of graph cuts, based on the skin color ellipsoid distance, to obtain an optimized skin color region.
In a preferred embodiment, the initial skin color region is obtained by combining the feature points and luminance information: fill 255 inside the polygon enclosed by feature points 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,45,46,47,42,27,39,40,41,36, and fill 0 inside the polygon enclosed by feature points 48,49,50,51,52,53,54,55,56,57,58,59; then fill 0 in every region with luminance value L < 30. The result is the initial skin color region.
In a preferred embodiment, refining the initial skin color region with luminance and color information into an accurate skin color region includes,
counting to obtain a brightness point set of a skin color area, wherein the brightness and position information of each pixel in the skin color area are recorded;
sorting the luminance point set by luminance value in ascending order and removing the darkest 15%-20% and the brightest 10%-15% of the pixels;
selecting the mid-luminance points of the skin color region as the seed point set: locate the median position in the point set and take the points within plus or minus 5 of that rank as seeds;
selecting a new candidate skin color region: take feature points 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,27,26,25,24,23,22,21,20,19,18 as the outer contour, shift the eyebrow points 18,19,20,21,22 (left eyebrow) and 23,24,25,26,27 (right eyebrow) downward, and remove the regions enclosed by 31,32,33,34,35,36 (nose), 37,38,39,40,41,42 (right eye) and 43,44,45,46,47,48 (left eye);
traversing the seed point set and taking, as the new skin color region, every pixel in the candidate region of step (d) whose BGR color distance to a seed point is less than 50.
In a preferred embodiment, based on the skin color region, the ellipsoid space of skin color is estimated following the method of "Skin Color Modeling of Digital Photographic Images"; the ellipsoid space model of skin color is:
Φ(X) = [X − Ψ]ᵀ Λ⁻¹ [X − Ψ]
where,
X1, …, Xn are the colors that appear in the skin color region, and f(Xi) is the number of times color Xi appears.
Estimating an ellipsoid spatial model of skin color based on the skin color region;
and substituting each pixel of the matting mask into the ellipsoid space model of the skin color to solve to obtain the ellipsoid space distance between each pixel and the skin color.
In a preferred embodiment, an optimized, smoother skin color region is obtained by solving with the max-flow/min-cut method of graph cuts, based on the skin color ellipsoid distance, including,
setting a smoothness term to represent the weight between adjacent points, where dist1 is the color difference between adjacent pixels, σ1 is set to 15 and α is set to 20;
setting a data term to represent the weight between each point and the source/sink terminals, where dist2 is the ellipsoid distance of the pixel from skin color (step 5c), β is set to 1 and δ2 is set to 15.
In a preferred embodiment, the face mask is processed so that its edges transition naturally, including,
dilating the face mask by 7 iterations and then eroding it by 7 iterations;
applying a 5×5 Gaussian blur twice so that the face edges transition naturally;
and obtaining the processed face mask and the four-channel face image.
In a preferred embodiment, each layer is rendered to generate the final image, including,
pasting the real-head layer, the four-channel image matted out with the head mask;
pasting the virtual body and clothing layers;
pasting the real-face layer on top;
and generating the final image.
The embodiment provides a hair layer processing system of virtual character based on human face skin color detection, which comprises,
the detection module is used for detecting a face area and feature points;
the area processing module is used for processing the skin color area;
the head-matting module, for obtaining the face mask from the skin color region, the feature-point contour, the head-matting mask and other information;
the face mask processing module, for processing the face mask with natural edge transitions;
and the rendering module is used for rendering each layer to generate a final image.
The embodiment provides a product for processing the hair layer of a virtual character image based on face skin color detection, applicable to imagery for virtual reality, virtual try-on, virtual social applications, apparel, shoes and accessories, and contactless body measurement.

Claims (11)

1. A method for processing the hair layer of a virtual character image based on face skin color detection, characterized by comprising the following steps,
detecting a face area and feature points;
processing a skin color area;
obtaining a face mask from the skin color region, the feature-point contour, the head-matting mask and other information;
processing the face mask so that its edges transition naturally;
and rendering each layer to generate a final image.
2. The method as claimed in claim 1, wherein the face region of the picture is detected with machine-learning-based face region detection and feature point detection to obtain feature points of the facial features and the face contour.
3. The method for processing the hair layer of the virtual character based on the human face skin color detection as claimed in claim 1, wherein the processing of the skin color area comprises,
combining the characteristic points and the brightness information to obtain an initial skin color area;
combining the brightness and the color information in the initial skin color area to obtain an accurate skin color area;
estimating the ellipsoid space of skin color from the skin color region, following the method of "Skin Color Modeling of Digital Photographic Images";
and solving with the max-flow/min-cut method of graph cuts, based on the skin color ellipsoid distance, to obtain an optimized skin color region.
4. The method as claimed in claim 3, wherein said combining the feature points and luminance information to obtain the initial skin color region comprises: filling 255 inside the polygon enclosed by feature points 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,45,46,47,42,27,39,40,41,36, and filling 0 inside the polygon enclosed by feature points 48,49,50,51,52,53,54,55,56,57,58,59; then filling 0 in every region with luminance value L < 30 to obtain the initial skin color region.
5. The method as claimed in claim 3, wherein said combining brightness and color information in the initial skin color region to obtain the accurate skin color region comprises,
counting to obtain a brightness point set of a skin color area, wherein the brightness and position information of each pixel in the skin color area are recorded;
sorting the luminance point set by luminance value in ascending order and removing the darkest 15%-20% and the brightest 10%-15% of the pixels;
selecting the mid-luminance points of the skin color region as the seed point set: locate the median position in the point set and take the points within plus or minus 5 of that rank as seeds;
selecting a new candidate skin color region: take feature points 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,27,26,25,24,23,22,21,20,19,18 as the outer contour, shift the eyebrow points 18,19,20,21,22 (left eyebrow) and 23,24,25,26,27 (right eyebrow) downward, and remove the regions enclosed by 31,32,33,34,35,36 (nose), 37,38,39,40,41,42 (right eye) and 43,44,45,46,47,48 (left eye);
traversing the seed point set and taking, as the new skin color region, every pixel in the candidate region of step (d) whose BGR color distance to a seed point is less than 50.
6. The method for processing the hair layer of a virtual character image based on face skin color detection as claimed in claim 3, wherein the ellipsoid space of skin color is estimated from the skin color region following the method of "Skin Color Modeling of Digital Photographic Images", and the ellipsoid space model of skin color is:
Φ(X) = [X − Ψ]ᵀ Λ⁻¹ [X − Ψ]
where,
X1, …, Xn are the colors that appear in the skin color region, and f(Xi) is the number of times color Xi appears.
Estimating an ellipsoid spatial model of skin color based on the skin color region;
and substituting each pixel of the matting mask into the ellipsoid space model of the skin color to solve to obtain the ellipsoid space distance between each pixel and the skin color.
7. The method for processing the hair layer of a virtual character image based on face skin color detection as claimed in claim 3, wherein a smoother skin color region is obtained by solving with the max-flow/min-cut method of graph cuts, based on the skin color ellipsoid distance, comprising,
setting a smoothness term to represent the weight between adjacent points, where dist1 is the color difference between adjacent pixels, σ1 is set to 15 and α is set to 20;
setting a data term to represent the weight between each point and the source/sink terminals, where dist2 is the ellipsoid distance of the pixel from skin color (step 5c), β is set to 1 and δ2 is set to 15.
8. The method for processing the hair layer of a virtual character image based on face skin color detection as claimed in claim 1, wherein the face mask is processed so that its edges transition naturally, comprising,
dilating the face mask by 7 iterations and then eroding it by 7 iterations;
applying a 5×5 Gaussian blur twice so that the face edges transition naturally;
and obtaining the processed face mask and the four-channel face image.
9. The method of claim 1, wherein rendering each layer to generate the final image comprises,
pasting the real-head layer, the four-channel image matted out with the head mask;
pasting the virtual body and clothing layers;
pasting the real-face layer on top;
and generating the final image.
10. A system for processing the hair layer of a virtual character image based on face skin color detection as claimed in claim 1, comprising,
the detection module is used for detecting a face area and feature points;
the area processing module is used for processing the skin color area;
the head-matting module, for obtaining the face mask from the skin color region, the feature-point contour, the head-matting mask and other information;
the face mask processing module, for processing the face mask with natural edge transitions;
and the rendering module is used for rendering each layer to generate a final image.
11. A product for processing the hair layer of a virtual character image based on face skin color detection, characterized by comprising imagery applicable to virtual reality, virtual try-on, virtual social applications, apparel, shoes and accessories, and contactless body measurement, the product using the method or system for processing the hair layer of a virtual character image based on face skin color detection of any one of claims 1 to 9.
CN201810138228.5A 2018-05-14 2018-05-14 Method and system for processing hair image layer of virtual character image based on human face skin color detection Active CN108510500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810138228.5A CN108510500B (en) 2018-05-14 2018-05-14 Method and system for processing hair image layer of virtual character image based on human face skin color detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810138228.5A CN108510500B (en) 2018-05-14 2018-05-14 Method and system for processing hair image layer of virtual character image based on human face skin color detection

Publications (2)

Publication Number Publication Date
CN108510500A true CN108510500A (en) 2018-09-07
CN108510500B CN108510500B (en) 2021-02-26

Family

ID=63374657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810138228.5A Active CN108510500B (en) 2018-05-14 2018-05-14 Method and system for processing hair image layer of virtual character image based on human face skin color detection

Country Status (1)

Country Link
CN (1) CN108510500B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685876A (en) * 2018-12-21 2019-04-26 北京达佳互联信息技术有限公司 Fur rendering method, apparatus, electronic equipment and storage medium
CN111862290A (en) * 2020-07-03 2020-10-30 完美世界(北京)软件科技发展有限公司 Radial fuzzy-based fluff rendering method and device and storage medium
CN111931908A (en) * 2020-07-23 2020-11-13 北京电子科技学院 Face image automatic generation method based on face contour
CN112270735A (en) * 2020-10-27 2021-01-26 北京达佳互联信息技术有限公司 Virtual image model generation method and device, electronic equipment and storage medium
CN112465734A (en) * 2020-10-29 2021-03-09 星业(海南)科技有限公司 Method and device for separating picture layers
CN113426138A (en) * 2021-05-28 2021-09-24 广州三七极创网络科技有限公司 Edge description method, device and equipment of virtual role
CN114155324A (en) * 2021-12-02 2022-03-08 北京字跳网络技术有限公司 Virtual role driving method and device, electronic equipment and readable storage medium
CN114565507A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Hair processing method and device, electronic equipment and storage medium

Citations (9)

Publication number Priority date Publication date Assignee Title
CN102236786A (en) * 2011-07-04 2011-11-09 北京交通大学 Light adaptation human skin colour detection method
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN103456010A (en) * 2013-09-02 2013-12-18 电子科技大学 Human face cartoon generation method based on feature point localization
CN104902189A (en) * 2015-06-24 2015-09-09 小米科技有限责任公司 Picture processing method and picture processing device
US20160154993A1 (en) * 2014-12-01 2016-06-02 Modiface Inc. Automatic segmentation of hair in images
CN105719234A (en) * 2016-01-26 2016-06-29 厦门美图之家科技有限公司 Automatic gloss removing method and system for face area and shooting terminal
CN106652037A (en) * 2015-10-30 2017-05-10 深圳超多维光电子有限公司 Face mapping processing method and apparatus
CN107562963A (en) * 2017-10-12 2018-01-09 杭州群核信息技术有限公司 A kind of method and apparatus screened house ornamentation design and render figure
CN107730573A (en) * 2017-09-22 2018-02-23 西安交通大学 A kind of personal portrait cartoon style generation method of feature based extraction

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685876A (en) * 2018-12-21 2019-04-26 北京达佳互联信息技术有限公司 Fur rendering method, apparatus, electronic equipment and storage medium
US11393154B2 (en) 2018-12-21 2022-07-19 Beijing Dajia Internet Information Technology Co., Ltd Hair rendering method, device, electronic apparatus, and storage medium
CN111862290A (en) * 2020-07-03 2020-10-30 完美世界(北京)软件科技发展有限公司 Radial fuzzy-based fluff rendering method and device and storage medium
CN111931908A (en) * 2020-07-23 2020-11-13 北京电子科技学院 Face image automatic generation method based on face contour
CN111931908B (en) * 2020-07-23 2024-06-11 北京电子科技学院 Face image automatic generation method based on face contour
CN112270735A (en) * 2020-10-27 2021-01-26 北京达佳互联信息技术有限公司 Virtual image model generation method and device, electronic equipment and storage medium
CN112270735B (en) * 2020-10-27 2023-07-28 北京达佳互联信息技术有限公司 Virtual image model generation method, device, electronic equipment and storage medium
CN112465734A (en) * 2020-10-29 2021-03-09 星业(海南)科技有限公司 Method and device for separating picture layers
CN113426138A (en) * 2021-05-28 2021-09-24 广州三七极创网络科技有限公司 Edge description method, device and equipment of virtual role
CN114155324A (en) * 2021-12-02 2022-03-08 北京字跳网络技术有限公司 Virtual role driving method and device, electronic equipment and readable storage medium
CN114155324B (en) * 2021-12-02 2023-07-25 北京字跳网络技术有限公司 Virtual character driving method and device, electronic equipment and readable storage medium
CN114565507A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Hair processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108510500B (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN108510500B (en) Method and system for processing hair image layer of virtual character image based on human face skin color detection
CN112669447B (en) Model head portrait creation method and device, electronic equipment and storage medium
CN112529999B (en) A training method, device, equipment and storage medium for parameter estimation model
JP7632917B2 (en) Method, system and computer program for generating a 3D head deformation model
Liao et al. Automatic caricature generation by analyzing facial features
US8831379B2 (en) Cartoon personalization
US8913847B2 (en) Replacement of a person or object in an image
CN104834898B (en) 2019-03-08 A kind of quality classification method of personage's photographs
JP7712026B2 (en) Method, electronic device, and program for creating personalized 3D head and face models
CN112633191B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN107316333B (en) A method for automatically generating Japanese cartoon portraits
JP7462120B2 (en) Method, system and computer program for extracting color from two-dimensional (2D) facial images
US20080309662A1 (en) Example Based 3D Reconstruction
US20070031028A1 (en) Estimating 3d shape and texture of a 3d object based on a 2d image of the 3d object
CN107491726A (en) A kind of real-time expression recognition method based on multi-channel parallel convolutional neural networks
WO2009131644A2 (en) Method for creating photo cutouts and collages
CN108197533A (en) A kind of man-machine interaction method based on user's expression, electronic equipment and storage medium
CN104123749A (en) Picture processing method and system
KR101112142B1 (en) Apparatus and method for cartoon rendering using reference image
CN113379623B (en) Image processing method, device, electronic equipment and storage medium
KR100815209B1 (en) Apparatus and method for feature extraction of 2D image for generating 3D image and apparatus and method for generating 3D image using same
Guo Digital anti-aging in face images
Aizawa et al. Do you like sclera? Sclera-region detection and colorization for anime character line drawings
Xia et al. Lazy texture selection based on active learning
KR102555166B1 (en) Method and System for Facial Texture Synthesis with Skin Microelement Structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231117

Address after: Gao Lou Zhen Hong Di Cun, Rui'an City, Wenzhou City, Zhejiang Province, 325200

Patentee after: Wang Conghai

Address before: 10 / F, Yihua financial technology building, 2388 Houhai Avenue, high tech park, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: SHENZHEN CLOUDREAM INFORMATION TECHNOLOGY CO.,LTD.

TR01 Transfer of patent right

Effective date of registration: 20240715

Address after: Building 3, No. 28 Tianshan Road, Xinqiao Street, Xinbei District, Changzhou City, Jiangsu Province, China 213022

Patentee after: Jiangsu Shuihao Technology Co.,Ltd.

Country or region after: China

Address before: Gao Lou Zhen Hong Di Cun, Rui'an City, Wenzhou City, Zhejiang Province, 325200

Patentee before: Wang Conghai

Country or region before: China
