
WO2008126948A1 - System for making caricature and method of the same and recording medium of the same - Google Patents


Info

Publication number
WO2008126948A1
WO2008126948A1 (PCT/KR2007/001788)
Authority
WO
WIPO (PCT)
Prior art keywords
face
caricature
information
user
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2007/001788
Other languages
French (fr)
Inventor
Jin Kook Choi
Kediy Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JDF Group
Original Assignee
JDF Group
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JDF Group filed Critical JDF Group
Priority to JP2009509402A priority Critical patent/JP2009521065A/en
Priority to PCT/KR2007/001788 priority patent/WO2008126948A1/en
Publication of WO2008126948A1 publication Critical patent/WO2008126948A1/en
Anticipated expiration legal-status: Critical
Current legal status: Ceased


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Definitions

  • the present invention relates to a caricature generation system and method, and more particularly, to a caricature generation system and method which can generate a caricature by receiving image data from a user and extracting face information.
  • As Internet game sites have recently increased in number, a tendency for users to identify themselves with characters that represent them on the Internet, for example, avatars, game characters, and the like, is becoming common.
  • Internet sites provide users with various face shapes, hair styles, and shapes of ears, eyes, mouths, noses, facial expressions, and the like, so that users may generate characters similar to their own appearances by combining these components.
  • FIG. 1 is a flowchart illustrating a caricature manufacturing method according to a conventional art.
  • In step 100 of the conventional caricature manufacturing process, a source picture for manufacturing a caricature is inputted into the picture input of an automatic caricature manufacturing device.
  • The picture may be a famous entertainer's picture, and the like, but is generally the user's own face.
  • A face shape, eyes, a nose, a mouth, and the like, i.e. the portions making up a face, are analyzed in step 102, and a model appropriate for the characteristics of each analyzed portion is extracted for use in the caricature, in step 104.
  • Extraction of the model proceeds by using a database previously constructed for each portion of a person. For example, for the face shape, an entire face shape is completed from information extracted by calculating light and shadows on the face; for the eyes, nose, and mouth, the location and ratio of each portion are calculated and corresponding models are extracted from the database.
  • the face is combined in step 106, and a hairstyle appropriate for the combined face is selected from a hairstyle database by inputting data of the combined face, in step 108. Also, the face is completed in step 109.
  • A body of the caricature is selected in step 112, and other portions of the body may also be selected by a similar method.
  • a background image is combined with the caricature in step 114, and the caricature is completed in step 116.
  • The user can easily generate a character similar to the user's own appearance from an automatic device with a built-in program, merely by inputting the user's own picture as the image data.
  • However, since an existing automatic caricature generation method generally cannot accurately determine the location and range of the actual face in the picture, the face contour cannot be accurately identified.
  • In particular, the boundary of the jaw line portion of the face cannot be accurately identified in the image data, since the neck and the jaw generally show a similar color.
  • Also, the user can merely select, from the prestored database, each portion required for generating the caricature, and cannot directly modify the light and shadows on, or the contour of, the user's face. Accordingly, there is a drawback that a caricature similar to the actual person cannot be generated, since the three-dimensional effect and reality are reduced.
  • the present invention provides a caricature generation system and method which enables a user to easily generate a caricature by merely inputting the user's own picture into image data.
  • the present invention also provides a caricature generation system and method which can extract a face contour by using a difference between a face area color and a background area color when the caricature is generated, thereby generating a caricature similar to an actual user.
  • the present invention also provides a caricature generation system and method which can generate a caricature similar to an actual user by accurately extracting a face contour when the caricature is generated.
  • the present invention also provides a caricature generation system and method which can generate a caricature which a user desires by enabling the user to modify an automatically extracted contour with a simple mouse click.
  • the present invention also provides a caricature generation system and method which can extract a contour very similar to an actual user's contour with reference to an estimated face width and a face height by estimating the face width and the face height, based on a pupils location and a lips location when a face contour is extracted in order to generate a caricature.
  • the present invention also provides a caricature generation system and method which can extract a contour in which a face area color and a background area color are similar, such as a jaw line, by extracting a contour in a face range when a face contour is extracted in order to generate a caricature.
  • the present invention also provides a caricature generation system and method which can express a three-dimensional shape of an actual face by using a shadow plate including a face contour and a face curve when a caricature is generated.
  • the present invention also provides a caricature generation system and method which can easily generate a face curve by posterizing a face area in image data inputted from a user.
  • the present invention also provides a caricature generation system and method which can generate a caricature similar to an actual face by using a color of an actual face area when a face area is posterized from image data inputted from a user in order to generate a face curve.
  • the present invention also provides a caricature generation system and method which can generate a caricature having various moods by receiving, from a user, a transparency of a shadow plate and determining a combination ratio when the shadow plate and a character image are combined.
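A minimal sketch of the posterize-and-blend idea described above, written without an imaging library so the arithmetic stays explicit; the bit depth, pixel values, and transparency here are illustrative assumptions, not values from the patent:

```python
# A pixel is modelled as an (R, G, B) tuple; real code would operate on
# whole image arrays rather than single pixels.

def posterize(pixel, bits=2):
    """Keep only the top `bits` bits of each channel, collapsing the face
    area into a few flat tone levels (the 'face curve' of the shadow plate)."""
    mask = 0xFF & ~(0xFF >> bits)
    return tuple(c & mask for c in pixel)

def blend(character_px, shadow_px, transparency):
    """Combine a character-image pixel with a shadow-plate pixel; the
    user-chosen transparency (0.0-1.0) acts as the combination ratio."""
    return tuple(
        round(c * (1 - transparency) + s * transparency)
        for c, s in zip(character_px, shadow_px)
    )

face_px = (180, 140, 120)        # an assumed skin-tone pixel from the photo
shadow_px = posterize(face_px)   # -> (128, 128, 64): one flat tone level
caricature_px = blend((250, 220, 200), shadow_px, transparency=0.4)
```

Raising the transparency weights the shadow plate more heavily, which is how the "various moods" of the resulting caricature would be controlled.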
  • a caricature generation system for generating a caricature by using image data inputted from a user, the system including: a face shape generator which extracts, from the image data, face shape information including face contour information generated by distinguishing a face area from a background area, and generates a face shape; a character image generator which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes; and a caricature generator which combines the face shape and the character images, and generates the caricature.
  • the face shape generator includes: a face component location identifier which identifies pupils location information and lips location information from the image data; a face length estimator which estimates a face width and a face height, based on the identified pupils location and the identified lips location; a face contour information extractor which distinguishes the face area from the background area in an estimated face range by using the estimated face width and the face height, and extracts the face contour information; and a face shape former which processes the face contour information or the image data, and forms the face shape.
  • a caricature generation system for generating a caricature from image data inputted from a user, the system including: a shadow plate generator which generates a shadow plate including a face contour and a face curve generated by dividing a face area and a background area from the image data; a character image generator which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes; and a caricature generator which combines the shadow plate and the character images, and generates the caricature.
  • a caricature generation method of generating a caricature by using image data inputted from a user including: a step of distinguishing a face area from a background area, extracting, from the image data, face shape information including face contour information, and generating a face shape; a step of extracting face component shapes from the image data, and generating character images corresponding to the extracted face component shapes; and a step of combining the face shape and the character images, and generating the caricature.
  • The step of distinguishing includes: identifying pupils location information and lips location information from the image data; estimating a face width and a face height, based on the identified pupils location and the identified lips location; distinguishing the face area from the background area in an estimated face range by using the estimated face width and the face height, and extracting the face contour information; and forming the face shape based on the extracted face contour information, generating images of ears, eyes, mouth, and nose corresponding to the eyes, eyebrows, nose, mouth, or ears shapes of the image data, combining the face shape with the images, and generating the caricature.
  • a caricature generation method of generating a caricature from image data inputted from a user including: a step of generating a shadow plate including a face contour and a face curve generated by dividing a face area and a background area from the image data; a step of extracting face component shapes from the image data, and generating character images corresponding to the extracted face component shapes; and a step of combining the shadow plate and the character images, and generating the caricature.
  • a computer-readable recording medium storing a program for implementing the above-described caricature generation method.
  • FIG. 1 is a flowchart illustrating a caricature manufacturing method according to a conventional art
  • FIG. 2 roughly illustrates operations of a caricature generation system according to an exemplary embodiment of the present invention
  • FIG. 3 is a block diagram illustrating an internal configuration of a caricature generation system according to a first exemplary embodiment of the present invention
  • FIG. 4 is a block diagram illustrating an internal configuration of a face shape generator of FIG. 3;
  • FIG. 5 illustrates a method of estimating face lengths, based on a pupils location and a lips location according to an exemplary embodiment of the present invention
  • FIG. 6 is a diagram illustrating data from experiments for acquiring a correlation between a distance between a female's eyes, and a face width, and a correlation between a distance between the female's eyes and lips, and a face height according to an exemplary embodiment of the present invention
  • FIG. 7 is a diagram illustrating data from experiments for acquiring a correlation between a distance between a male's eyes, and a face width, and a correlation between a distance between the male's eyes and lips, and a face height according to an exemplary embodiment of the present invention
  • FIG. 8 illustrates a method of distinguishing a face area from a background area in a face range according to an exemplary embodiment of the present invention
  • FIG. 9 is a flowchart illustrating flows of a caricature generation method according to a first exemplary embodiment of the present invention.
  • FIG. 10 is a flowchart illustrating step S720 of generating a face shape of FIG. 9 in detail
  • FIG. 11 is a block diagram illustrating an internal configuration of a caricature generation system according to a second exemplary embodiment of the present invention.
  • FIG. 12 illustrates a shadow plate generated by using posterization according to an exemplary embodiment of the present invention
  • FIG. 13 illustrates a caricature changed depending on a transparency of a shadow plate according to an exemplary embodiment of the present invention
  • FIG. 14 is a diagram illustrating a character image database according to an exemplary embodiment of the present invention
  • FIG. 15 illustrates a picture in which a selected hairstyle and selected eye shapes of a caricature are inputted from a user according to an exemplary embodiment of the present invention
  • FIG. 16 illustrates an example of using, in a mobile terminal, a caricature generated according to an exemplary embodiment of the present invention.
  • FIG. 17 is a flowchart illustrating flows of a caricature generation method according to a second exemplary embodiment of the present invention.
  • FIG. 2 illustrates rough operations of a caricature generation system according to a first exemplary embodiment of the present invention.
  • a caricature generation system 120 generates a caricature 130 by using image data 110 inputted from a user
  • the image data 110 may include all data showing a static picture.
  • the image data 110 may be digital data including a form such as a Joint Photographic Experts Group (JPEG) image, a Graphics Interchange Format (GIF) image, a Tagged Image File Format (TIFF), a bitmap (BMP), and the like, or may be analog data such as a printed picture.
  • the digital data may be inputted by using a digital medium such as a floppy disk, a compact disc read-only memory (CD-ROM), a Universal Serial Bus (USB)-memory, a communication network, and the like, and the analog data may be inputted after being converted into the digital data by using a conversion apparatus such as a scanner, and the like.
  • A movie may also be used as the image data 110 by capturing a static picture from it.
  • the caricature generation system 120 may be operated being installed in a user's terminal. Also, the caricature generation system 120 may be installed in a server, receive the image data 110 by using the user's terminal connected with the server via the communication network, and generate the caricature 130.
  • the caricature generation system 120 may transmit the caricature 130 to the user's terminal via the communication network, or may store the caricature 130 in an internal storing apparatus of the server, and enable the user to transmit the caricature 130 to the user's terminal as required.
  • FIG. 3 is a block diagram illustrating an internal configuration of the caricature generation system 120 of FIG. 2.
  • the caricature generation system 120 includes a face shape generator 210 which extracts, from the image data, face shape information including face contour information, and generates a face shape, a character image generator 220 which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes, and a caricature generator 230 which combines the face shape and the character images, and generates the caricature.
  • the face shape information includes extracted pupils location information, lips location information, face width information, and face height information including the face contour information.
  • "Caricature" is a word derived from the Italian word 'caricatura', denoting "exaggeration and distortion", and refers to a satirical drawing, a drawing drawn as a cartoon, and the like.
  • While the caricature was generally used in the past for interestingly and exaggeratedly depicting the appearance of celebrities such as entertainers, politicians, athletes, and the like, the caricature is now widely used on the Internet for images showing users' characteristics.
  • The caricature in the present specification denotes a face image drawn to resemble the actual appearance of a specific person by using that person's facial characteristics, and should be understood broadly, not being limited to a specific form, number of components, or expression form.
  • the face shape generator 210 includes a face component location identifier 210-1, a face length estimator 210-2, a face contour information extractor 210-3, and a face shape former 210-4.
  • a function of each component is described in detail.
  • the face component location identifier 210-1 identifies pupils location information and lips location information from image data.
  • the pupils location information and the lips location information identified by the face component location identifier 210-1 may include a coordinate of a pixel corresponding to a location of a pupil center, and a coordinate of a pixel corresponding to a location of a center of lips in the image data.
  • the pupils location information and the lips location information may be identified by using a difference between a color of pixels corresponding to a pupils location and a lips location, and a color of skin around pupils and lips.
  • a coordinate of a clicked pixel may be identified by providing a user with the image data, and receiving, from the user, the pupils location and the lips location by a click.
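As a toy illustration of locating a component by its colour difference from the surrounding skin, the sketch below simply returns the darkest pixel in a small grid; a real detector would search restricted regions with learned thresholds, and the pixel values here are made-up assumptions:

```python
def luminance(px):
    """Perceptual brightness of an (R, G, B) pixel."""
    r, g, b = px
    return 0.299 * r + 0.587 * g + 0.114 * b

def darkest_point(image):
    """Return the (x, y) of the darkest pixel - a crude stand-in for a
    pupil detector exploiting the pupil/skin colour difference."""
    best, best_xy = float("inf"), None
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            if luminance(px) < best:
                best, best_xy = luminance(px), (x, y)
    return best_xy

skin, pupil = (200, 170, 150), (30, 25, 20)   # assumed colours
image = [[skin] * 5 for _ in range(5)]
image[2][3] = pupil                           # plant a dark "pupil" pixel
pupil_xy = darkest_point(image)               # -> (3, 2)
```

The click-based alternative in the text would simply replace `darkest_point` with the coordinate the user supplies.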
  • The face length estimator 210-2 estimates a face width and a face height, based on the pupils location and the lips location identified by the face component location identifier 210-1. For this estimation, a correlation between the pupils location and the lips location extracted from a plurality of sample image data, and the face width and the face height, may be used. The correlation is acquired separately depending on the sex differentiation of the person in the sample image data, and the face width and the face height may be estimated by using the correlation corresponding to the sex differentiation of the person in the image data inputted from the user.
  • the face length estimator 210-2 may further include a user sex differentiation information database including sex differentiation information.
  • The face length estimator 210-2 may estimate the face width based on the distance between the two pupils, and estimate the face height based on the distance between the pupils and the lips.
  • For example, the face length estimator 210-2 may extract the distance between two pupils and the face width from a plurality of image data, and statistically acquire a correlation between the distance between two pupils and the face width from the extracted values.
  • the face length estimator 210-2 may store, in a separate storing apparatus, the acquired correlation as described above, and when the user inputs the image data, the face length estimator 210-2 may estimate the face width by using the stored correlation, based on the distance between two pupils extracted from the inputted image data.
  • the face height may be estimated by an identical method by using the correlation between pupils and lips.
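As a concrete illustration of this estimation step, the sketch below derives a face width and height from three landmark coordinates; the ratio values are illustrative assumptions (roughly in the male ranges reported later in this description), not figures taken from the patent:

```python
import math

# Assumed, learned-offline correlations (distance / face length).
RATIO_EYE_FACE = 0.45      # width_eyes / width_face
RATIO_HEIGHT_FACE = 0.35   # height_eyemouth / height_face

def estimate_face_size(left_pupil, right_pupil, lips):
    """Estimate (face width, face height) from three landmark points."""
    width_eyes = math.dist(left_pupil, right_pupil)
    eye_centre = ((left_pupil[0] + right_pupil[0]) / 2,
                  (left_pupil[1] + right_pupil[1]) / 2)
    height_eyemouth = math.dist(eye_centre, lips)
    return width_eyes / RATIO_EYE_FACE, height_eyemouth / RATIO_HEIGHT_FACE

w, h = estimate_face_size((100, 120), (190, 120), (145, 190))
# width_eyes = 90 and height_eyemouth = 70, giving roughly a 200 x 200 face
```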
  • the face contour information extractor 210-3 estimates a face range in the image data by using the face width and the face height estimated by the face length estimator 210-2.
  • the face range may be a range of a rectangle having a minimum size including a face, and the face width and the face height may be respectively established as a width and a height of the rectangle.
  • a location of the rectangle may be estimated by using the pupils location information and the lips location information identified by the face component location identifier 210-1.
  • the face contour information extractor 210-3 distinguishes the face area from the background area in the face range, and extracts the face contour information.
  • the face area may be distinguished from the background area by using a difference between color data of a pixel corresponding to the face area, and color data of a pixel corresponding to the background area.
  • information with respect to a boundary line of the distinguished area may be the face contour information.
  • Where the color of the face area and the color of the background area are nearly indistinguishable, for example, at the jaw line, the areas may not be clearly distinguished by using only the difference of the color data. Accordingly, by distinguishing the areas within the estimated face range, the areas may be distinguished more accurately than with an existing method. Therefore, a caricature similar to the actual person can be generated by extracting accurate face contour information.
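A minimal row-wise sketch of this idea, assuming a simple Euclidean colour distance and an arbitrary threshold of 60 (both assumptions, not values from the patent); note the fallback to the face-range boundary where face and background colours are alike:

```python
def color_dist(a, b):
    """Euclidean distance between two (R, G, B) pixels."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def row_contour(row, background, threshold=60):
    """Return the first and last x where a row inside the face range
    differs from the background colour. If no face pixel stands out
    (e.g. at the jaw line), fall back to the face-range boundary."""
    xs = [x for x, px in enumerate(row)
          if color_dist(px, background) > threshold]
    if not xs:
        return 0, len(row) - 1          # face-range boundary fallback
    return xs[0], xs[-1]

bg, skin = (40, 60, 40), (200, 170, 150)   # assumed colours
row = [bg, bg, skin, skin, skin, bg]
left, right = row_contour(row, bg)         # -> (2, 4)
```

Stacking the (left, right) pairs of successive rows traces the face contour within the estimated rectangle.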
  • the face contour information extractor 210-3 may provide the user with the extracted face contour information, receive contour change information from the user, and modify the contour information, based on the contour change information.
  • The method of estimating the face width and the face height by the face length estimator 210-2 is statistical. Since the face contour information extractor 210-3 estimates the face range by using the face width and the face height, and extracts the face contour information in that range, contour information different from the actual face contour may be extracted when a statistical outlier is encountered. In this case, a caricature similar to the actual photo may still be generated by enabling the user to modify the extracted face contour with a simple mouse click so that it matches the actual contour.
  • the face shape former 210-4 forms the face shape, based on the face contour information extracted from the face contour information extractor 210-3.
  • the face shape may be generated by covering, with a color similar to an actual face, an internal portion of the face contour, the face contour being extracted from the face contour information extractor 210-3, or extracting the internal portion of the face contour from the image data inputted from the user and passing through a predetermined process.
  • the character image generator 220 extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes.
  • the face components may denote characteristic components in a face, and may include eyes, eyebrows, a nose, a mouth, ears, and the like.
  • the character images may be images showing characteristic shapes of the face components, and may include a form of a digital image which may be combined with other digital images.
  • the character image generator 220 may extract the face components from an identified location after identifying the location of the face components.
  • the location of the face components may be identified by using a difference between a color of a pixel corresponding to the location of the face components, and a color of surrounding skin.
  • Alternatively, the coordinates of clicked pixels may be identified by providing the user with the image data, and receiving the face component locations from the user by a click.
  • the contour of the face components is extracted by using a color difference between the face components and the surrounding skin in the identified location. Accordingly, the face component shapes may be extracted.
  • the character image generator 220 generates character images corresponding to the extracted face component shapes.
  • the character image generator 220 may extract the character images corresponding to the extracted face component shapes, from a character image list stored in the character image database 240, or may provide the user with the character image list, and receive the selected character images from the user.
  • the character image generator 220 may generate the character images by using user information including sex differentiation information of the user, information concerning whether double eyelids exist, and the like.
  • the user information may be stored in a predetermined user information database, and may be referred when the character images are generated.
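How the character-image lookup might work is sketched below; the feature vectors, file names, and database layout are all hypothetical, since the patent does not specify a matching metric:

```python
def closest_character(shape_features, database):
    """Pick the stored character image whose feature vector is nearest
    (squared Euclidean distance) to the extracted component shape."""
    def dist(entry):
        return sum((a - b) ** 2
                   for a, b in zip(entry["features"], shape_features))
    return min(database, key=dist)["image"]

# Hypothetical eye-shape entries: features are (width/height, lid curve).
eye_db = [
    {"image": "eye_round.png",  "features": (1.0, 0.8)},
    {"image": "eye_narrow.png", "features": (2.2, 0.3)},
]
chosen = closest_character((2.0, 0.4), eye_db)   # -> "eye_narrow.png"
```

User information such as sex differentiation or double eyelids would simply filter which database entries are considered before matching.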
  • the caricature generator 230 combines the face shape and the character images of the face components, and generates the caricature.
  • Face component images may include images showing shapes of eyes, eyebrows, a nose, lips, ears, and the like.
  • a plurality of face component images may be stored in a predetermined storing apparatus, and the caricature generator 230 may provide the user with a list, and receive the selected desirable images from the user.
  • the face components may be identified from the image data inputted from the user, and the face component images similar to the identified face components may be extracted in the storing apparatus.
  • the caricature generator 230 may determine a location to combine the images, based on the pupils location and the lips location identified by the face component location identifier 210-1 when the character images of the face components are combined with the face shape.
  • FIG. 5 illustrates a method of estimating face lengths, based on the pupils location and the lips location, according to the first exemplary embodiment of the present invention including the caricature generation system 120 of FIG. 3.
  • Reference numeral 310 illustrates the method of estimating the face lengths, based on the pupils location and the lips location in an actual picture
  • Reference numeral 320 shows the same illustration as a drawing instead of a photograph, for easier recognition.
  • The caricature generation system estimates a distance between two pupils, e.g. width_eyes, and a distance between pupils and lips, e.g. height_eyemouth, by identifying the pupils location and the lips location from the image data, and estimates a face width, e.g. width_face, and a face height, e.g. height_face, by using the estimated distance between two pupils and the distance between pupils and lips.
  • For this, a correlation between the distance between two pupils (width_eyes) and the face width (width_face), and a correlation between the distance between pupils and lips (height_eyemouth) and the face height (height_face), may be used.
  • the caricature generation system estimates the distance between two pupils, the distance between pupils and lips, the face width, and the face height from a plurality of image data, acquires correlations of estimated values, and stores the correlations in the predetermined storing apparatus.
  • the caricature generation system may estimate the distance between two pupils and the distance between pupils and lips from the image data inputted from the user, and estimate the face width and the face height by inputting the estimated values in the stored correlations.
  • The correlations may be separately acquired and stored depending on sex differentiation, and the correlation corresponding to the sex differentiation of the inputted image data may be applied.
  • FIGS. 6 and 7 are diagrams illustrating data from experiments for acquiring a correlation between a distance between eyes, and a face width, and a correlation between a distance between eyes and lips, and a face height according to an exemplary embodiment of the present invention.
  • FIG. 6 illustrates data of females from experiments
  • FIG. 7 illustrates data of males from experiments.
  • The caricature generation system calculates, for each sample image data, a ratio of the distance between two pupils to the face width, e.g. a ratio eye/face, and a ratio of the distance between pupils and lips to the face height, e.g. a ratio height/face, by measuring the distance between two pupils, the distance between pupils and lips, the face width, and the face height from a plurality of sample image data.
  • Since the ratio eye/face and the ratio height/face have a nearly uniform value across the image data, it may be seen that the distance between two pupils and the face width, and the distance between pupils and lips and the face height, have a linear correlation. Accordingly, when the distance between two pupils and the distance between pupils and lips can be estimated, the face width and the face height may be inferred by applying the estimated ratios.
  • the ratio eye/face and the ratio height/face are different in males and females.
  • the ratio eye/face ranges from 0.4 to 0.5 in males
  • the ratio eye/face ranges from 0.3 to 0.35 in females.
  • the ratio height/face ranges from 0.3 to 0.4 in males
  • The ratio height/face ranges from 0.4 to 0.5 in females. Since the correlations change depending on sex differentiation, the face width and the face height may be more accurately estimated when the sex differentiation information of the image data is used in estimating them from the distance between two pupils and the distance between pupils and lips.
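Using the midpoints of the ranges above as illustrative ratio values (the midpoint choice is an assumption), the sex-dependent estimation can be worked through numerically:

```python
# Midpoints of the reported ranges, used as illustrative ratios.
RATIOS = {
    "male":   {"eye_face": 0.45,  "height_face": 0.35},
    "female": {"eye_face": 0.325, "height_face": 0.45},
}

def estimate(width_eyes, height_eyemouth, sex_differentiation):
    """Infer (face width, face height) from the two measured distances,
    using the ratio table for the given sex differentiation."""
    r = RATIOS[sex_differentiation]
    return (width_eyes / r["eye_face"],
            height_eyemouth / r["height_face"])

# The same measured distances imply different face sizes per sex:
male_size = estimate(90, 70, "male")       # about (200.0, 200.0)
female_size = estimate(90, 70, "female")   # about (276.9, 155.6)
```

The spread between the two results shows why ignoring sex differentiation would bias the estimated face range.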
  • FIG. 8 illustrates a method of distinguishing a face area from a background area in a face range according to an exemplary embodiment of the present invention.
  • the caricature generation system estimates a face range 610 by using a face width and a face height.
  • the face range 610 may be a range of a rectangle having a minimum size that includes a face, and the face width and the face height may be respectively established as a width and a height of the rectangle.
  • the caricature generation system extracts, in the face range 610, a face contour 640 by distinguishing a face area 620 from a background area 630.
  • the face area 620 and the background area 630 may be distinguished by using a difference between the face area color and the background area color; in a portion in which the face contour 640 is not clearly distinguishable, for example, a jaw line, the boundary of the face range may be used as the face contour.
  • FIG. 9 is a flowchart illustrating flows of a caricature generation method according to the first exemplary embodiment of the present invention
  • FIG. 10 is a flowchart illustrating step S720 of FIG. 9 in detail.
  • Image data is inputted from a user, in step S710, and face shape information including face contour information is generated in step S720.
  • pupils location information and lips location information are identified from the image data first inputted from the user, in step S720-1.
  • the identified pupils location information and the lips location information may include a coordinate of a pixel corresponding to a location of a pupil center, and a coordinate of a pixel corresponding to a location of a center of lips in the image data.
  • the pupils location information and the lips location information may be identified by using a difference between a color of pixels corresponding to a pupils location and a lips location, and a color of skin around pupils and lips.
  • a coordinate of a clicked pixel may be identified by providing a user with the image data, and receiving, from the user, the pupils location and the lips location with a click.
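The color-difference identification described above can be sketched minimally; the darkness threshold, the toy grayscale image, and the function name are illustrative assumptions, and a real implementation would need more robust detection.

```python
def find_dark_pixels(gray, threshold=60):
    """Return (row, col) coordinates of pixels darker than surrounding skin,
    a crude stand-in for locating pupils against a lighter face."""
    return [(r, c)
            for r, row in enumerate(gray)
            for c, v in enumerate(row)
            if v < threshold]

# Toy 4x6 grayscale "image": skin (200) with two dark pupils (30).
img = [[200] * 6,
       [200, 30, 200, 200, 30, 200],
       [200] * 6,
       [200] * 6]
print(find_dark_pixels(img))  # [(1, 1), (1, 4)]
```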
  • In step S720-2, a face width and a face height are estimated, based on the pupils location and the lips location identified in step S720-1.
  • a correlation between the pupils location and the lips location extracted from a plurality of sample image data, and the face width and the face height may be used.
  • the correlation is acquired separately depending on the sex differentiation of the person in the sample image data, and the face width and the face height may be estimated by using the correlation corresponding to the sex differentiation of the person in the image data inputted from the user.
  • In step S720-3, a face range in the image data is estimated by using the face width and the face height estimated in step S720-2.
  • the face range may be a range of a rectangle having a minimum size including a face, and the face width and the face height may be respectively established as a width and a height of the rectangle.
  • a location of the rectangle may be estimated by using the pupils location information and the lips location information identified in step S720-1.
  • the face area is distinguished from the background area in the face range, and the face contour information is extracted.
  • the face area may be distinguished from the background area by using a difference between color data of a pixel corresponding to the face area, and color data of a pixel corresponding to the background area.
  • information with respect to a boundary line of the distinguished area may be the face contour information.
  • where a color of the face area and a color of the background area are indistinguishable, for example, along a jaw line, the areas may not be clearly separated by using only the difference of the color data. Accordingly, by performing the distinction within the estimated face range, the areas may be distinguished more accurately than in an existing method. Therefore, there is an effect of generating a caricature similar to the actual person by extracting accurate face contour information.
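The per-pixel color-data comparison can be sketched as follows; the reference skin color and the distance threshold are illustrative assumptions (the disclosure does not specify values), and a real system would derive them from the input image.

```python
def is_face_pixel(rgb, skin_rgb=(220, 180, 150), max_dist=60):
    """Classify a pixel as face area if its Euclidean distance in RGB space
    to an assumed reference skin color is within a threshold."""
    dist = sum((a - b) ** 2 for a, b in zip(rgb, skin_rgb)) ** 0.5
    return dist <= max_dist

print(is_face_pixel((225, 175, 145)))  # True  (near the skin color)
print(is_face_pixel((40, 90, 200)))    # False (blue background)
```

Scanning each pixel of the estimated face range with such a test yields a face/background mask whose boundary serves as the face contour.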
  • In step S720-3, the user may be provided with the extracted face contour information, contour change information may be received from the user, and the contour information may be modified, based on the contour change information.
  • In step S720-4, the face shape is formed, based on the face contour information extracted in step S720-3.
  • the face shape may be generated by filling in, with a color similar to an actual face, the internal portion of the face contour extracted in step S720-3, or by extracting the internal portion of the face contour from the image data inputted from the user and passing it through a predetermined process.
  • face component shapes are extracted from the image data, and character images are generated corresponding to the face component shapes.
  • Face component shapes may include images showing shapes of eyes, eyebrows, a nose, lips, ears, and the like.
  • a plurality of face component images may be stored in a predetermined storing apparatus, the user may be provided with a list, and the user's selection of desirable images may be received.
  • the face components may be identified from the image data inputted from the user, and the face component images similar to the identified face components may be extracted in the storing apparatus.
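The similarity search over the storing apparatus might look like the following sketch, using a sum-of-absolute-differences measure as an assumed similarity metric (the disclosure does not name one); images are flattened pixel lists for brevity.

```python
def image_distance(a, b):
    """Sum of absolute pixel differences between two same-sized images."""
    return sum(abs(x - y) for x, y in zip(a, b))

def most_similar(component, templates):
    """Pick the stored character image whose pixels are closest to the
    extracted face component shape."""
    return min(templates, key=lambda name: image_distance(component, templates[name]))

# Toy example: an extracted eye shape and two stored character images.
eyes = [10, 200, 10, 200]
db = {"round_eyes": [12, 198, 11, 199], "narrow_eyes": [80, 80, 80, 80]}
print(most_similar(eyes, db))  # round_eyes
```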
  • In step S740, the generated character images of the face components are combined with the face shape, and the caricature is generated.
  • the pupils location and the lips location identified in step S720-1 may be used for the combination.
  • FIG. 11 is a block diagram illustrating an internal configuration of a caricature generation system according to the second exemplary embodiment of the present invention.
  • the caricature generation system may include a shadow plate generator 210', a character image generator 220, and a caricature generator 230.
  • the caricature generation system may further include a character image database 240.
  • the shadow plate generator 210' generates a shadow plate including a face contour and a face curve from image data inputted from a user.
  • the image data may be converted into a form which may use data inputted from the user in the shadow plate generator 210'.
  • when the user inputs image data in a compressed form such as a JPEG format, a GIF format, and the like, the image data may be used after being converted into an image form including color data of all pixels, such as a BMP format.
  • analog data such as an actual picture
  • the analog data may be converted into digital data by using a conversion apparatus such as a scanner, and the like.
  • the face contour denotes a boundary line of a face and a background, and shows a plane characteristic of a face.
  • the face curve denotes an extent in which a face is protruded and concaved, and shows a three-dimensional characteristic of the face.
  • the shadow plate generator 210' extracts the face contour from the image data in order to generate the shadow plate.
  • the shadow plate generator 210' may identify the boundary line by dividing a face area and a background area from the image data. For example, a portion in which a color data value is significantly changed may be identified as the boundary line by comparing the color data values for each pixel of the image data.
  • the shadow plate generator 210' generates the face curve from the image data in order to generate the shadow plate.
  • the shadow plate generator 210' may use a posterizing method. Posterizing denotes reducing the range of light-and-shade values which each pixel may have in an image; when the image data passes through a posterizing process, the image data is converted to use only a predetermined number of colors.
  • since the image data having passed through the posterizing process appears coarse and indistinct, a specific shape of each portion is not clearly distinguishable. Therefore, when posterizing of the face area is performed, the face components are largely erased, and only the general face curve remains. Also, since the posterizing uses only the predetermined number of colors, different images may be generated depending on which color the image is based upon when the color is determined. Accordingly, the shadow plate generator 210' may determine a color serving as the basic color of the posterizing, and the color may be determined, based on a color of the face area in the image data inputted from the user. Also, any method of generating the face contour similar to the actual face, based on the image data, may be applied to the present invention.
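The posterizing step can be sketched per channel as follows; the level count is an illustrative choice, and a full implementation would apply this to every pixel of the face area (optionally centering the bands on the skin color, as the text suggests).

```python
def posterize(value, levels=4):
    """Quantize a 0-255 channel value onto one of `levels` evenly spaced
    bands, discarding fine shading so only broad light and shade remain."""
    step = 256 // levels          # width of each band
    return min((value // step) * step, 255)

print([posterize(v) for v in (10, 70, 130, 250)])  # [0, 64, 128, 192]
```

Because nearby values collapse into the same band, small features such as eyes and lips blur away while the large-scale face curve survives, which is exactly what the shadow plate needs.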
  • a method of extracting the face contour in the caricature generation system according to the first exemplary embodiment of the present invention may be applied to the present invention.
  • For the character image generator 220, reference is made to the description of the caricature generation system according to the above-described first exemplary embodiment of the present invention.
  • a caricature generator 230 combines the shadow plate generated by the shadow plate generator 210', and the character images generated by the character image generator 220, and generates the caricature.
  • the shadow plate and the character image may have a BMP format form having color data for each pixel, and may combine two images by respectively adding the color data of the overlapped pixel depending on a predetermined ratio.
  • the color data of the pixel may be divided into three values including red (R), green (G), and blue (B), and combination may be performed by respectively adding the RGB values depending on the predetermined ratio.
  • the caricature generator 230 may combine the shadow plate and the character images, based on a transparency value of the shadow plate by receiving the transparency value of the shadow plate inputted from the user.
  • the transparency value of the shadow plate may denote a ratio of the color data of the pixel corresponding to the shadow plate when the color data of each pixel are added.
  • alpha denotes the transparency value of the shadow plate
  • dstR, dstG, and dstB denote the RGB value of the combined caricature
  • picR, picG, and picB denote the RGB value of the shadow plate
  • chaR, chaG, and chaB denote the RGB value of the character images.
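The blend defined by these variables can be sketched per pixel as follows; this is a direct transcription of the stated formulas, with an illustrative function name.

```python
def blend_pixel(shadow_rgb, char_rgb, alpha):
    """Combine a shadow-plate pixel and a character-image pixel:
    dst = pic * alpha + cha * (1 - alpha), per R, G, and B channel."""
    return tuple(round(p * alpha + c * (1 - alpha))
                 for p, c in zip(shadow_rgb, char_rgb))

print(blend_pixel((200, 150, 100), (100, 50, 0), 0.5))  # (150, 100, 50)
```

At alpha = 0 only the character images remain; at alpha = 1 only the shadow plate shows, matching the transparency behavior described for FIG. 13.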
  • a character image database 240 stores character images corresponding to face components.
  • the character image database 240 stores a plurality of character images.
  • the character image database 240 may be divided into each face component such as eyes, eyebrows, noses, mouths, ears, and the like, and store the character images.
  • FIG. 12 illustrates a shadow plate generated by using a posterizing introduced in a caricature generation system according to the second exemplary embodiment of the present invention.
  • the caricature generation system may generate the shadow plate by posterizing image data by a predetermined color.
  • Reference numeral 301 illustrates a face area extracted from image data inputted from a user
  • reference numeral 302 illustrates a shadow plate generated by posterizing the face area, based on a skin color of the face area in the image data.
  • since the shadow plate generated by the posterizing includes the face curve while the face components are indistinctly shown, even when the shadow plate is combined with character images afterwards, the original face components do not show through the overlapping character images.
  • FIG. 13 illustrates a caricature changed depending on a transparency of a shadow plate according to an exemplary embodiment of the present invention.
  • a caricature generation system may determine a combination ratio, based on the transparency value of the shadow plate when the shadow plate and the character images are combined.
  • the transparency value of the shadow plate may denote a ratio in which the shadow plate occupies in combination, and may have a value from 0 to 1.
  • Reference numeral 401 corresponds to a case where the transparency value of the shadow plate is 0
  • reference numeral 404 corresponds to a case where the transparency value of the shadow plate is 1
  • reference numerals 402 and 403 correspond to cases where the transparency value of the shadow plate has intermediate values.
  • the transparency value of the shadow plate is increased as the caricature progresses from reference numeral 401 to reference numeral 404.
  • when the transparency value of the shadow plate is low, the caricature appears like a drawing, and when the transparency value of the shadow plate is high, the face curve is shown and the caricature feels like an actual photograph.
  • the transparency value of the shadow plate may be received from the user, and may be used.
  • FIG. 14 is a diagram illustrating a character image database 240 according to an exemplary embodiment of the present invention.
  • the character image database 240 may store character images corresponding to face components such as eyes, eyebrows, noses, mouths, and ears, and may separately store character images for each face component.
  • the character image generator 220 may extract character images most similar to the extracted shape of eyes from among records in which a face component field corresponds to eyes in the character image database 240, and may generate the character images.
  • FIG. 15 illustrates a picture in which a selected hairstyle and selected eye shapes of a caricature are inputted from a user according to an exemplary embodiment of the present invention.
  • Reference numeral 610 is a caricature generated in the caricature generation system according to the present invention
  • reference numeral 620 is a picture of selecting eye shapes of the caricature
  • reference numeral 630 is a picture of selecting a hairstyle of the caricature
  • reference numeral 640 illustrates a caricature depending on the hairstyle selected in reference numeral 630.
  • since the caricature generation system according to the present invention enables a user to freely select face component images of the caricature, to select hairstyles, accessories, clothes, and the like, and to freely adorn the user's caricature, there is an effect that the user can express himself/herself as desired by using the caricature.
  • FIG. 16 illustrates an example of using, in a mobile terminal, a caricature generated according to an exemplary embodiment of the present invention.
  • the caricature generated by the present invention may be used for expressing oneself in the mobile terminal, and may be variously used as an image representing a user on Internet communities, personal homepages, Internet game sites, and the like. According to the present invention, since a caricature fully showing one's characteristics may be generated by using the shadow plate including the face curve, the caricature generated by the present invention may be generally used as an image expressing oneself on the wired or wireless Internet.
  • FIG. 17 is a flowchart illustrating flows of a caricature generation method according to the second exemplary embodiment of the present invention.
  • In step S801, a shadow plate including a face contour and a face curve is generated from image data inputted from a user.
  • the face contour of the shadow plate may be generated by using a difference between a face area color and a background area color in the image data, and the face curve may be easily generated by using posterizing.
  • the posterizing denotes reducing a range of a value of light and shade in which each pixel may have in an image, and is performed, based on a predetermined color. Also, the predetermined color may be determined by using a color of the face area.
  • In step S802, face component shapes are extracted from the image data, and character images corresponding to the extracted face component shapes are generated.
  • the face component shapes may be extracted by using a color difference between face components and surrounding skin.
  • the character images corresponding to the extracted face component shapes may be extracted from a character image list, or the user may be provided with the character image list, and the selected character images are received from the user.
  • User information including sex differentiation information of the user, information concerning whether double eyelids exist, and the like may be used when the character images are generated.
  • the user information may be stored in a predetermined user information database, and may be referred to when the character images are generated.
  • In step S803, the shadow plate generated in step S801 and the character images generated in step S802 are combined, and the caricature is generated.
  • the shadow plate and the character image may have a BMP format form having color data for each pixel, and may combine two images by respectively adding the color data of the overlapped pixel depending on a predetermined ratio.
  • the color data of the pixel may be divided into three values including R, G, and B, and combination may be performed by respectively adding the RGB values depending on the predetermined ratio.
  • the shadow plate and the character images may be combined, based on a transparency value of the shadow plate by receiving the transparency value of the shadow plate inputted from the user.
  • the transparency value of the shadow plate may denote a ratio of the color data of the pixel corresponding to the shadow plate when the color data of each pixel are added.
  • the transparency value of the shadow plate may have a value from 0 to 1, and combination may be performed, based on dstR = picR x alpha + chaR x (1 - alpha),
  • dstG = picG x alpha + chaG x (1 - alpha), and
  • dstB = picB x alpha + chaB x (1 - alpha).
  • alpha denotes the transparency value of the shadow plate
  • dstR, dstG, and dstB denote the RGB value of the combined caricature
  • picR, picG, and picB denote the RGB value of the shadow plate
  • chaR, chaG, and chaB denote the RGB value of the character images.
  • a caricature generation method may be recorded in computer-readable media including program instructions to implement various operations embodied by a computer.
  • the media may also include, alone or in combination with the program instructions, data files, data structures, and the like.
  • the media and program instructions may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well-known and available to those having skill in the computer software arts.
  • Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like.
  • the media may also be a transmission medium such as optical or metallic lines, wave guides, and the like, including a carrier wave transmitting signals specifying the program instructions, data structures, and the like.
  • Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
  • the described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments of the present invention.
  • a caricature generation system and method which enables a user to easily generate a caricature by merely inputting the user's own picture into image data.
  • a caricature generation system and method which can generate a caricature similar to an actual user by accurately extracting a face contour when the caricature is generated. Also, according to the present invention, there is provided a caricature generation system and method which can generate a caricature which a user desires by enabling the user to modify an automatically extracted contour with a simple mouse click.
  • a caricature generation system and method which can extract a face contour by using a difference between a face area color and a background area color when the caricature is generated, thereby generating a caricature similar to an actual user.
  • a caricature generation system and method which can extract a contour very similar to an actual user's contour with reference to an estimated face width and a face height by estimating the face width and the face height, based on a pupils location and a lips location when a face contour is extracted in order to generate a caricature.
  • a caricature generation system and method which can extract a contour in which a face area color and a background area color are similar, such as a jaw line, by extracting a contour in a face range when a face contour is extracted in order to generate a caricature. Also, according to the present invention, there is provided a caricature generation system and method which can express a three-dimensional shape of an actual face by using a shadow plate including a face contour and a face curve when a caricature is generated.
  • a caricature generation system and method which can easily generate a face curve by posterizing a face area in image data inputted from a user.
  • a caricature generation system and method which can generate a caricature similar to an actual face by using a color of an actual face area when a face area is posterized from image data inputted from a user in order to generate a face curve.
  • a caricature generation system and method which can generate a caricature having various moods by receiving, from a user, a transparency of a shadow plate and determining a combination ratio when the shadow plate and a character image are combined.

Abstract

A caricature generation system, method, and recording medium of generating a caricature by using image data, the system including: a face shape generator which extracts, from the image data inputted from a user, face shape information including face contour information generated by distinguishing a face area from a background area, and generates a face shape; a character image generator which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes; and a caricature generator which combines the face shape and the character images, and generates the caricature. Alternatively, the system may include: a shadow plate generator which generates a shadow plate including a face contour and a face curve generated by dividing a face area and a background area from the image data; a character image generator; and a caricature generator.

Description

SYSTEM FOR MAKING CARICATURE AND METHOD OF THE SAME AND RECORDING MEDIUM OF THE SAME
Technical Field
The present invention relates to a caricature generation system and method, and more particularly, to a caricature generation system and method which can generate a caricature by receiving image data from a user and extracting face information.
Background Art
As users of Internet community sites, Internet shopping mall sites, and Internet game sites have recently increased, a tendency of users identifying themselves with characters which may represent them on the Internet, for example, avatars, game characters, and the like, is becoming common. In order to fulfill the above-described requirements of the users, Internet sites provide the users with various face shapes, hairstyles, shapes of ears, eyes, mouths, noses, facial expressions, and the like, so that the users may generate characters similar to the users' own appearances by combining each component.
However, a method of generating a character by combining provided components as described above cannot completely fulfill the users' desires even though various components are provided, and the users' characters end up generally similar to each other. Accordingly, there is a limitation in providing the users with characters having strong individuality. Also, Internet sites must provide work such as design manufacturing, image manufacturing, and the like, with much labor and time, in order to provide the users with a maximum variety of components. Accordingly, methods of automatically generating caricatures by using image data including pictures and the like, and generating characters similar to the users by using the generated caricatures, have been introduced. For example, a caricature manufacturing method disclosed in Korean Granted Patent Application No. 10-376760 manufactures a caricature by a method of analyzing each portion of a picture by using a source picture provided by a user, extracting a caricature model of each portion from a database, combining each extracted portion, and subsequently combining the combined portions and a necessary background image. FIG. 1 is a flowchart illustrating a caricature manufacturing method according to a conventional art. Referring to FIG. 1, a source picture for manufacturing a caricature is inputted into a picture input of an automatic caricature manufacturing device, in step 100 of inputting the picture for conventional caricature manufacturing. The picture may be a famous entertainer's picture, and the like, and may generally be the user's own face. When the source picture is inputted, a face shape, eyes, a nose, a mouth, and the like are analyzed as each portion of the picture, i.e. each portion configuring a face, in step 102, and a model appropriate for the character of each analyzed portion is extracted in order to be used for the caricature, in step 104.
Extraction of the model proceeds by using a database previously constructed with respect to each portion of a person. For example, in the case of the face shape, an entire face shape is completed from information extracted by calculating light and shadows on the face; in the case of eyes, a nose, and a mouth, a location and a ratio of each portion are calculated and are extracted from the database. When extraction of each portion of the face is completed, the face is combined in step 106, and a hairstyle appropriate for the combined face is selected from a hairstyle database by inputting data of the combined face, in step 108. Also, the face is completed in step 109. Next, a body of the caricature is selected in step 112, and other portions of the body may also be selected by a similar method. A background image is combined with the caricature in step 114, and the caricature is completed in step 116.
When the above-described method is used, the user can easily generate a character similar to the user's own appearance from an automatic device including a built-in program by merely inputting the user's own picture as the image data. However, since an existing automatic caricature generation method generally cannot accurately determine a location and a range of the actual face in the picture, a face contour cannot be accurately identified. In particular, there is a problem that a boundary of a jaw line portion of the face cannot be accurately identified in the image data, since a neck and a jaw generally show a similar color.
Also, when the automatic device including the built-in program is used, the user can merely select, from the prestored database, each portion required for generating the caricature, and cannot directly modify light and shadows on, or a contour of, the user's face. Accordingly, there is a drawback that a caricature similar to the actual person cannot be generated, since the three-dimensional effect and reality are reduced.
Disclosure of Invention
Technical Goals
The present invention provides a caricature generation system and method which enables a user to easily generate a caricature by merely inputting the user's own picture into image data.
The present invention also provides a caricature generation system and method which can extract a face contour by using a difference between a face area color and a background area color when the caricature is generated, thereby generating a caricature similar to an actual user.
The present invention also provides a caricature generation system and method which can generate a caricature similar to an actual user by accurately extracting a face contour when the caricature is generated. The present invention also provides a caricature generation system and method which can generate a caricature which a user desires by enabling the user to modify an automatically extracted contour with a simple mouse click.
The present invention also provides a caricature generation system and method which can extract a contour very similar to an actual user's contour with reference to an estimated face width and a face height by estimating the face width and the face height, based on a pupils location and a lips location when a face contour is extracted in order to generate a caricature.
The present invention also provides a caricature generation system and method which can extract a contour in which a face area color and a background area color are similar, such as a jaw line, by extracting a contour in a face range when a face contour is extracted in order to generate a caricature.
The present invention also provides a caricature generation system and method which can express a three-dimensional shape of an actual face by using a shadow plate including a face contour and a face curve when a caricature is generated. The present invention also provides a caricature generation system and method which can easily generate a face curve by posterizing a face area in image data inputted from a user. The present invention also provides a caricature generation system and method which can generate a caricature similar to an actual face by using a color of an actual face area when a face area is posterized from image data inputted from a user in order to generate a face curve. The present invention also provides a caricature generation system and method which can generate a caricature having various moods by receiving, from a user, a transparency of a shadow plate and determining a combination ratio when the shadow plate and a character image are combined.
Technical solutions
According to an aspect of the present invention, there is provided a caricature generation system for generating a caricature by using image data inputted from a user, the system including: a face shape generator which extracts, from the image data, face shape information including face contour information generated by distinguishing a face area from a background area, and generates a face shape; a character image generator which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes; and a caricature generator which combines the face shape and the character images, and generates the caricature. The face shape generator includes: a face component location identifier which identifies pupils location information and lips location information from the image data; a face length estimator which estimates a face width and a face height, based on the identified pupils location and the identified lips location; a face contour information extractor which distinguishes the face area from the background area in an estimated face range by using the estimated face width and the face height, and extracts the face contour information; and a face shape former which processes the face contour information or the image data, and forms the face shape.
According to another aspect of the present invention, there is provided a caricature generation system for generating a caricature from image data inputted from a user, the system including: a shadow plate generator which generates a shadow plate including a face contour and a face curve generated by dividing a face area and a background area from the image data; a character image generator which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes; and a caricature generator which combines the shadow plate and the character images, and generates the caricature.
According to still another aspect of the present invention, there is provided a caricature generation method of generating a caricature by using image data inputted from a user, the method including: a step of distinguishing a face area from a background area, extracting, from the image data, face shape information including face contour information, and generating a face shape; a step of extracting face component shapes from the image data, and generating character images corresponding to the extracted face component shapes; and a step of combining the face shape and the character images, and generating the caricature.
The step of distinguishing includes: identifying pupils location information and lips location information from the image data; estimating a face width and a face height, based on the identified pupils location and the identified lips location; distinguishing the face area from the background area in an estimated face range by using the estimated face width and the face height, and extracting the face contour information; and forming the face shape, based on the extracted face contour information, generating an image of the ears, eyes, mouth, and nose corresponding to the eyes, eyebrows, nose, mouth, or ears shapes of the image data, combining the face shape with the image of the ears, eyes, mouth, and nose, and generating the caricature.
According to yet another aspect of the present invention, there is provided a caricature generation method of generating a caricature from image data inputted from a user, the method including: a step of generating a shadow plate including a face contour and a face curve generated by dividing a face area and a background area from the image data; a step of extracting face component shapes from the image data, and generating character images corresponding to the extracted face component shapes; and a step of combining the shadow plate and the character images, and generating the caricature.
According to another aspect of the present invention, there is provided a computer-readable recording medium storing a program for implementing the above-described caricature generation method.
Brief Description of Drawings
FIG. 1 is a flowchart illustrating a caricature manufacturing method according to a conventional art;
FIG. 2 schematically illustrates operations of a caricature generation system according to an exemplary embodiment of the present invention;
FIG. 3 is a block diagram illustrating an internal configuration of a caricature generation system according to a first exemplary embodiment of the present invention;
FIG. 4 is a block diagram illustrating an internal configuration of a face shape generator of FIG. 3;
FIG. 5 illustrates a method of estimating face lengths, based on a pupils location and a lips location according to an exemplary embodiment of the present invention;
FIG. 6 is a diagram illustrating experimental data for acquiring a correlation between the distance between a female's eyes and the face width, and a correlation between the distance between the female's eyes and lips and the face height, according to an exemplary embodiment of the present invention;
FIG. 7 is a diagram illustrating experimental data for acquiring a correlation between the distance between a male's eyes and the face width, and a correlation between the distance between the male's eyes and lips and the face height, according to an exemplary embodiment of the present invention;
FIG. 8 illustrates a method of distinguishing a face area from a background area in a face range according to an exemplary embodiment of the present invention;
FIG. 9 is a flowchart illustrating flows of a caricature generation method according to a first exemplary embodiment of the present invention;
FIG. 10 is a flowchart illustrating step S720 of generating a face shape of FIG. 9 in detail;
FIG. 11 is a block diagram illustrating an internal configuration of a caricature generation system according to a second exemplary embodiment of the present invention;
FIG. 12 illustrates a shadow plate generated by using posterizing according to an exemplary embodiment of the present invention;
FIG. 13 illustrates a caricature changed depending on a transparency of a shadow plate according to an exemplary embodiment of the present invention;
FIG. 14 is a diagram illustrating a character image database according to an exemplary embodiment of the present invention;
FIG. 15 illustrates a picture in which a selected hairstyle and selected eye shapes of a caricature are inputted from a user according to an exemplary embodiment of the present invention;
FIG. 16 illustrates an example of using, in a mobile terminal, a caricature generated according to an exemplary embodiment of the present invention; and
FIG. 17 is a flowchart illustrating flows of a caricature generation method according to a second exemplary embodiment of the present invention.
Best Mode for Carrying Out the Invention
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
FIG. 2 schematically illustrates operations of a caricature generation system according to a first exemplary embodiment of the present invention. As illustrated in FIG. 2, a caricature generation system 120 generates a caricature 130 by using image data 110 inputted from a user. The image data 110 may include any data showing a static picture. For example, the image data 110 may be digital data in a form such as a Joint Photographic Experts Group (JPEG) image, a Graphics Interchange Format (GIF) image, a Tagged Image File Format (TIFF) image, a bitmap (BMP) image, and the like, or may be analog data such as a printed picture. The digital data may be inputted by using a digital medium such as a floppy disk, a compact disc read-only memory (CD-ROM), a Universal Serial Bus (USB) memory, a communication network, and the like, and the analog data may be inputted after being converted into digital data by using a conversion apparatus such as a scanner, and the like. Also, a movie may be used as the image data 110 by capturing a static picture from the movie. The caricature generation system 120 may operate installed in a user's terminal. Also, the caricature generation system 120 may be installed in a server, receive the image data 110 from the user's terminal connected with the server via the communication network, and generate the caricature 130. After generating the caricature 130, the caricature generation system 120 may transmit the caricature 130 to the user's terminal via the communication network, or may store the caricature 130 in an internal storing apparatus of the server, and enable the user to retrieve the caricature 130 to the user's terminal as required.
FIG. 3 is a block diagram illustrating an internal configuration of the caricature generation system 120 of FIG. 2. Referring to FIG. 3, the caricature generation system 120 includes a face shape generator 210 which extracts, from the image data, face shape information including face contour information, and generates a face shape, a character image generator 220 which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes, and a caricature generator 230 which combines the face shape and the character images, and generates the caricature. The face shape information includes the extracted pupils location information, the lips location information, the face width information, and the face height information, as well as the face contour information.
A caricature is a word derived from the Italian word 'caricatura', denoting "exaggeration and distortion", and refers to a satirical drawing, a drawing drawn as a cartoon, and the like. Although the caricature was, in the past, generally used for interestingly and exaggeratedly depicting the shapes of celebrities such as entertainers, politicians, and athletes, the caricature has recently been widely used on the Internet for images showing users' characteristics. The caricature in the present specification denotes a face image drawn to resemble the actual shape of a specific person by using a facial characteristic of the specific person, and may be understood broadly, not being limited to a specific form, number of components, or expression form.
FIG. 4 is a block diagram illustrating, in detail, an internal configuration of the face shape generator in the caricature generation system of FIG. 3. As illustrated in FIG. 4, the face shape generator 210 includes a face component location identifier 210-1, a face length estimator 210-2, a face contour information extractor 210-3, and a face shape former 210-4. Hereinafter, the function of each component is described in detail.
The face component location identifier 210-1 identifies pupils location information and lips location information from image data. The pupils location information and the lips location information identified by the face component location identifier 210-1 may include a coordinate of the pixel corresponding to the location of a pupil center, and a coordinate of the pixel corresponding to the location of the center of the lips in the image data. The pupils location information and the lips location information may be identified by using a difference between the color of pixels corresponding to the pupils location and the lips location, and the color of the skin around the pupils and lips.
Also, a coordinate of a clicked pixel may be identified by providing a user with the image data, and receiving, from the user, the pupils location and the lips location by a click.
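As a rough illustration of identifying a feature location by color difference, the sketch below finds the darkest pixel in a search region of a grayscale image. The function name, pixel values, and toy data are illustrative assumptions only; an actual implementation would operate on color pixels with more robust detection.

```python
def find_darkest_pixel(image, region):
    """Return the (row, col) of the darkest pixel inside a search region.

    image is a 2D list of grayscale values (0 = black, 255 = white);
    region is (top, left, bottom, right), bounds inclusive.
    """
    top, left, bottom, right = region
    best = None
    best_value = 256  # brighter than any valid grayscale value
    for r in range(top, bottom + 1):
        for c in range(left, right + 1):
            if image[r][c] < best_value:
                best_value = image[r][c]
                best = (r, c)
    return best

# Toy 5x5 "face patch": uniform skin tone (200) with one dark pupil pixel.
patch = [[200] * 5 for _ in range(5)]
patch[1][3] = 30  # dark pupil pixel
pupil = find_darkest_pixel(patch, (0, 0, 4, 4))
```

In practice the search region would itself be estimated first (for example, the upper half of a detected face), so that the darkest pixel is likely to be a pupil rather than hair or shadow.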
The face length estimator 210-2 estimates a face width and a face height, based on the pupils location and the lips location identified by the face component location identifier 210-1. For this estimation, a correlation between the pupils location and the lips location extracted from a plurality of sample image data, and the face width and the face height, may be used. The correlations are acquired separately depending on the sex differentiation of the person in the sample image data, and the face width and the face height may be estimated by using the correlation corresponding to the sex differentiation of the person in the image data inputted from the user. For this, the face length estimator 210-2 may further include a user sex differentiation information database including sex differentiation information.
For example, the face length estimator 210-2 may estimate the face width, based on the distance between the two pupils, and estimate the face height, based on the distance between the pupils and the lips. For this estimation, the face length estimator 210-2 may extract the distance between the two pupils and the face width from a plurality of image data, and statistically acquire a correlation between the distance between the two pupils and the face width by using the extracted values. The face length estimator 210-2 may store the acquired correlation in a separate storing apparatus, and when the user inputs image data, the face length estimator 210-2 may estimate the face width by using the stored correlation, based on the distance between the two pupils extracted from the inputted image data. The face height may be estimated by an identical method, by using the correlation between the pupils and the lips.
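The statistical acquisition and reuse of such a correlation might be sketched as follows. The sample measurements and function names are illustrative assumptions, not values from the described experiments.

```python
def fit_ratios(samples):
    """Average the ratios over sample measurements.

    samples: list of (pupil_dist, eye_lip_dist, face_width, face_height).
    Returns (pupil-distance/face-width, eye-lip-distance/face-height).
    """
    n = len(samples)
    eye_face = sum(p / w for p, _, w, _ in samples) / n
    height_face = sum(e / h for _, e, _, h in samples) / n
    return eye_face, height_face

def estimate_face_size(pupil_dist, eye_lip_dist, eye_face, height_face):
    """Invert the stored ratios to estimate face width and height."""
    return pupil_dist / eye_face, eye_lip_dist / height_face

# Hypothetical sample data: (pupil_dist, eye_lip_dist, face_width, face_height).
samples = [(60, 70, 150, 200), (64, 72, 160, 205), (58, 68, 145, 195)]
eye_face, height_face = fit_ratios(samples)
width, height = estimate_face_size(62, 71, eye_face, height_face)
```

The fitted ratios would be stored once and then applied to every new image, as the paragraph above describes.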
The face contour information extractor 210-3 estimates a face range in the image data by using the face width and the face height estimated by the face length estimator 210-2. The face range may be a range of a rectangle having a minimum size including a face, and the face width and the face height may be respectively established as a width and a height of the rectangle. A location of the rectangle may be estimated by using the pupils location information and the lips location information identified by the face component location identifier 210-1.
Also, the face contour information extractor 210-3 distinguishes the face area from the background area in the face range, and extracts the face contour information. The face area may be distinguished from the background area by using a difference between the color data of pixels corresponding to the face area and the color data of pixels corresponding to the background area. Also, information with respect to the boundary line of the distinguished areas may be the face contour information. However, when the color of the face area and the color of the background area are indistinguishable, for example, at the jaw line, the areas may not be clearly distinguishable by using only the difference of the color data. Accordingly, the areas may be distinguished more accurately than in an existing method by distinguishing the areas within the face range. Therefore, there is an effect of generating a caricature similar to the actual person by extracting accurate face contour information.
Also, the face contour information extractor 210-3 may provide the user with the extracted face contour information, receive contour change information from the user, and modify the contour information, based on the contour change information. The method of estimating the face width and the face height by the face length estimator 210-2 is statistical. Since the face contour information extractor 210-3 estimates the face range by using the face width and the face height, and extracts the face contour information in the face range, contour information different from the actual face contour may be extracted when a statistical exception occurs. In this case, a caricature similar to the actual photo may be generated by enabling the user to modify the extracted face contour with a simple mouse click so that it accords with the actual contour.
The face shape former 210-4 forms the face shape, based on the face contour information extracted by the face contour information extractor 210-3. The face shape may be generated by covering, with a color similar to the actual face, the internal portion of the face contour extracted by the face contour information extractor 210-3, or by extracting the internal portion of the face contour from the image data inputted from the user and passing it through a predetermined process.
Referring to FIG. 3 again, the character image generator 220 extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes. The face components may denote characteristic components in a face, and may include eyes, eyebrows, a nose, a mouth, ears, and the like. Also, the character images may be images showing characteristic shapes of the face components, and may be in the form of a digital image which may be combined with other digital images.
For this, the character image generator 220 may extract the face components from an identified location after identifying the location of the face components. The location of the face components may be identified by using a difference between the color of the pixels corresponding to the location of the face components and the color of the surrounding skin. Also, the coordinate of a clicked pixel may be identified by providing the user with the image data, and receiving the face component locations from the user by a click. When the location of the face components is identified as described above, the contour of the face components is extracted by using the color difference between the face components and the surrounding skin at the identified location. Accordingly, the face component shapes may be extracted.
Also, the character image generator 220 generates character images corresponding to the extracted face component shapes. For this, the character image generator 220 may extract the character images corresponding to the extracted face component shapes from a character image list stored in the character image database 240, or may provide the user with the character image list, and receive the selected character images from the user. Also, the character image generator 220 may generate the character images by using user information including sex differentiation information of the user, information concerning whether double eyelids exist, and the like. Also, the user information may be stored in a predetermined user information database, and may be referred to when the character images are generated.
The caricature generator 230 combines the face shape and the character images of the face components, and generates the caricature. Face component images may include images showing shapes of eyes, eyebrows, a nose, lips, ears, and the like. A plurality of face component images may be stored in a predetermined storing apparatus, and the caricature generator 230 may provide the user with a list, and receive the selected desirable images from the user. Also, the face components may be identified from the image data inputted from the user, and the face component images similar to the identified face components may be extracted in the storing apparatus.
Also, the caricature generator 230 may determine a location to combine the images, based on the pupils location and the lips location identified by the face component location identifier 210-1 when the character images of the face components are combined with the face shape.
FIG. 5 illustrates a method of estimating face lengths, based on pupils location and lips location in order to perform the first exemplary embodiment of the present invention including the caricature generation system 120 of FIG. 3. Reference numeral 310 illustrates the method of estimating the face lengths, based on the pupils location and the lips location in an actual picture, and reference numeral 320 expresses an identical illustration with a drawing instead of a photograph, in order to be easily recognized. As illustrated in FIG. 5, the caricature generation system according to the present invention estimates a distance between two pupils, e.g. width_eyes, and a distance between pupils and lips, e.g. height_eyemouth, by identifying the pupils location and the lips location from the image data, and estimates a face width, e.g. width_face, and a face height, e.g. height_face, by using the estimated distance between two pupils and the distance between pupils and lips. For this estimation, a correlation between the distance between two pupils, e.g. width_eyes, and the face width, e.g. width_face, and a correlation between the distance between pupils and lips, e.g. height_eyemouth, and the face height, e.g. height_face, may be used.
For example, the caricature generation system according to the present invention estimates the distance between two pupils, the distance between pupils and lips, the face width, and the face height from a plurality of image data, acquires correlations of the estimated values, and stores the correlations in the predetermined storing apparatus. When the user generates the caricature by inputting the image data, the caricature generation system may estimate the distance between two pupils and the distance between pupils and lips from the image data inputted from the user, and estimate the face width and the face height by inputting the estimated values into the stored correlations. In this instance, the correlations may be acquired and stored separately depending on sex differentiation, and the correlation corresponding to the sex differentiation of the inputted image data may be applied.
FIGS. 6 and 7 are diagrams illustrating data from experiments for acquiring a correlation between a distance between eyes, and a face width, and a correlation between a distance between eyes and lips, and a face height according to an exemplary embodiment of the present invention. FIG. 6 illustrates data of females from experiments, and FIG. 7 illustrates data of males from experiments.
As illustrated in FIGS. 6 and 7, the caricature generation system according to the present invention calculates a ratio of a distance between two pupils, and a face width, e.g. a ratio eye/face, and a ratio of a distance between pupils and lips, and a face height, e.g. a ratio height/face, for each sample image data by calculating the distance between two pupils, the distance between pupils and lips, the face width, and the face height from a plurality of sample image data.
According to the illustrated experiment data, since it is understood that the ratio eye/face and the ratio height/face have a nearly uniform value for each image data, it may be identified that the distance between two pupils and the face width, and the distance between pupils and lips and the face height, have a linear correlation. Accordingly, when the distance between two pupils and the distance between pupils and lips can be estimated, the face width and the face height may be inferred by applying the estimated ratios.
Also, when the data of FIG. 6 and the data of FIG. 7 are compared, it is understood that the ratio eye/face and the ratio height/face differ between males and females. According to the data, the ratio eye/face ranges from 0.4 to 0.5 in males, and from 0.3 to 0.35 in females. The ratio height/face ranges from 0.3 to 0.4 in males, and from 0.4 to 0.5 in females. Since the correlations change depending on sex differentiation, the face width and the face height may be accurately estimated when the sex differentiation information of the image data is used in estimating the face width and the face height from the distance between two pupils and the distance between pupils and lips.
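Applying sex-specific ratios could look like the sketch below; the values are midpoints of the ranges quoted above, chosen purely for illustration, and all names are hypothetical.

```python
# Midpoints of the ratio ranges quoted above (illustrative only).
RATIOS = {
    # sex: (pupil-distance / face-width, eye-to-lips distance / face-height)
    "male": (0.45, 0.35),
    "female": (0.325, 0.45),
}

def estimate_face(pupil_dist, eye_lip_dist, sex):
    """Invert the sex-specific ratios to estimate face width and height."""
    eye_face, height_face = RATIOS[sex]
    return pupil_dist / eye_face, eye_lip_dist / height_face

w, h = estimate_face(63, 70, "male")  # roughly (140.0, 200.0)
```

The sex label would come from the user information database mentioned earlier, so the same pupil and lips measurements yield different face estimates for male and female inputs.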
FIG. 8 illustrates a method of distinguishing a face area from a background area in a face range according to an exemplary embodiment of the present invention. As illustrated in an image 601, the caricature generation system according to the present invention estimates a face range 610 by using a face width and a face height.
The face range 610 may be a range of a rectangle having a minimum size that includes a face, and the face width and the face height may be respectively established as a width and a height of the rectangle.
As illustrated in an image 602, the caricature generation system according to the present invention extracts, in the face range 610, a face contour 640 by distinguishing a face area 620 from a background area 630. The face area 620 and the background area 630 may be distinguished by using a difference between the face area color and the background area color, and the boundary of the face range may serve as the face contour in a portion in which the face contour 640 is not clearly distinguishable, for example, at the jaw line.
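A minimal sketch of this area division, under the simplifying assumptions of grayscale pixels and a known skin tone (all names, values, and the tolerance are illustrative):

```python
def segment_face(image, face_range, skin_value, tol=30):
    """Classify pixels in the face range as face/background by color distance.

    image is a 2D list of grayscale values; face_range is
    (top, left, bottom, right), bounds inclusive. Returns the set of
    (row, col) positions judged to belong to the face area.
    """
    top, left, bottom, right = face_range
    face = set()
    for r in range(top, bottom + 1):
        for c in range(left, right + 1):
            if abs(image[r][c] - skin_value) <= tol:
                face.add((r, c))
    return face

def contour_pixels(face):
    """Face pixels with at least one 4-neighbor outside the face area."""
    contour = set()
    for (r, c) in face:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (r + dr, c + dc) not in face:
                contour.add((r, c))
                break
    return contour

# Toy 5x5 image: dark background (50) with a 3x3 skin-toned face patch (200).
img = [[50] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        img[r][c] = 200
face = segment_face(img, (0, 0, 4, 4), skin_value=200)
contour = contour_pixels(face)
```

Because the search is confined to the face range, pixels at the edge of that range automatically become contour candidates, mirroring the fallback described above for indistinct portions such as the jaw line.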
FIG. 9 is a flowchart illustrating flows of a caricature generation method according to the first exemplary embodiment of the present invention, and FIG. 10 is a flowchart illustrating step S720 of FIG. 9 in detail.
Image data is inputted from a user, in step S710, and face shape information including face contour information is generated in step S720.
When step S720 is described with reference to FIG. 10, first, pupils location information and lips location information are identified from the image data inputted from the user, in step S720-1. The identified pupils location information and lips location information may include a coordinate of the pixel corresponding to the location of a pupil center, and a coordinate of the pixel corresponding to the location of the center of the lips in the image data. The pupils location information and the lips location information may be identified by using a difference between the color of pixels corresponding to the pupils location and the lips location, and the color of the skin around the pupils and lips. Also, the coordinate of a clicked pixel may be identified by providing the user with the image data, and receiving the pupils location and the lips location from the user with a click.
In step S720-2, a face width and a face height are estimated, based on the pupils location and the lips location identified in step S720-1. For this estimation, a correlation between the pupils location and the lips location extracted from a plurality of sample image data, and the face width and the face height, may be used. The correlations are acquired separately depending on the sex differentiation of the person in the sample image data, and the face width and the face height may be estimated by using the correlation corresponding to the sex differentiation of the person in the image data inputted from the user.
In step S720-3, a face range in the image data is estimated by using the face width and the face height estimated in step S720-2. The face range may be a range of a rectangle having a minimum size including the face, and the face width and the face height may be respectively established as the width and the height of the rectangle. The location of the rectangle may be estimated by using the pupils location information and the lips location information identified in step S720-1.
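For illustration, the placement of the face range rectangle from the pupils and lips locations might be sketched as follows. The assumption that the face is vertically centered on the midpoint between the eye line and the lips is a simplification made for this sketch, not a detail of the described method.

```python
def face_range_rect(left_pupil, right_pupil, lips, face_width, face_height):
    """Estimate the minimal rectangle enclosing the face.

    Horizontal center: midpoint between the pupils. Vertical center: an
    assumed midpoint between the eye line and the lips. Points are (x, y);
    returns (left, top, right, bottom).
    """
    cx = (left_pupil[0] + right_pupil[0]) / 2.0
    eye_y = (left_pupil[1] + right_pupil[1]) / 2.0
    cy = (eye_y + lips[1]) / 2.0  # assumption: face roughly centered here
    left = cx - face_width / 2.0
    top = cy - face_height / 2.0
    return (left, top, left + face_width, top + face_height)

rect = face_range_rect((70, 100), (130, 100), (100, 170), 150, 200)
```

The resulting rectangle is then used as the search window for the face/background distinction in step S720-3.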
Also, in step S720-3, the face area is distinguished from the background area in the face range, and the face contour information is extracted. The face area may be distinguished from the background area by using a difference between the color data of pixels corresponding to the face area and the color data of pixels corresponding to the background area. Also, information with respect to the boundary line of the distinguished areas may be the face contour information. However, when the color of the face area and the color of the background area are indistinguishable, for example, at the jaw line, the areas may not be clearly distinguishable by using only the difference of the color data. Accordingly, the areas may be distinguished more accurately than in an existing method by distinguishing the areas within the face range. Therefore, there is an effect of generating a caricature similar to the actual person by extracting accurate face contour information.
Also, in step S720-3, the user may be provided with the extracted face contour information, contour change information may be received from the user, and the contour information may be modified, based on the contour change information.
In step S720-4, the face shape is formed, based on the face contour information extracted in step S720-3. The face shape may be generated by filling in, with a color similar to the actual face, the internal portion of the face contour extracted in step S720-3, or by extracting the internal portion of the face contour from the image data inputted from the user and passing it through a predetermined process.
Referring to FIG. 9 again, in step S730, face component shapes are extracted from the image data, and character images corresponding to the face component shapes are generated. The character images may include images showing the shapes of eyes, eyebrows, a nose, lips, ears, and the like. A plurality of face component images may be stored in a predetermined storing apparatus, the user may be provided with a list, and the user's selection of desirable images may be received. Also, the face components may be identified from the image data inputted from the user, and face component images similar to the identified face components may be retrieved from the storing apparatus.
In step S740, the generated character images of the face components are combined with the face shape, and the caricature is generated. The pupils location and the lips location identified in step S720-1 may be used for the combination.
Hereinafter, a caricature generation system and a caricature generation method according to a second exemplary embodiment of the present invention are described in detail with reference to FIGS. 11 and 12.
FIG. 11 is a block diagram illustrating an internal configuration of a caricature generation system according to the second exemplary embodiment of the present invention. As illustrated in FIG. 11, the caricature generation system may include a shadow plate generator 210', a character image generator 220, and a caricature generator 230. The caricature generation system may further include a character image database 240. The shadow plate generator 210' generates a shadow plate including a face contour and a face curve from image data inputted from a user. The image data inputted from the user may be converted into a form usable by the shadow plate generator 210'. For example, when the user inputs image data in a compressed form such as a JPEG format, a GIF format, and the like, the image data may be used after being converted into an image form including color data of all pixels, such as a BMP format. When the user inputs analog data such as an actual picture, the analog data may be converted into digital data by using a conversion apparatus such as a scanner, and the like. Also, the face contour denotes the boundary line between the face and the background, and shows a planar characteristic of the face. The face curve denotes the extent to which the face protrudes and recedes, and shows a three-dimensional characteristic of the face.
The shadow plate generator 210' extracts the face contour from the image data in order to generate the shadow plate. For extraction, the shadow plate generator 210' may identify the boundary line by dividing a face area and a background area from the image data. For example, a portion in which a color data value is significantly changed may be identified as the boundary line by comparing the color data values for each pixel of the image data.
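Identifying boundary candidates where the color value changes significantly could be sketched, for one grayscale scan line, as follows; the threshold and names are illustrative assumptions.

```python
def boundary_columns(row, threshold=60):
    """Return indices where the grayscale value changes sharply between
    neighboring pixels; such positions are treated as boundary candidates."""
    return [i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) > threshold]

# Toy scan line: background (~50), a face region (~200), then background again.
line = [50, 52, 48, 200, 205, 198, 55]
edges = boundary_columns(line)
```

Running the same comparison over every row and column of the image yields the candidate boundary pixels that together form the face contour.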
Also, the shadow plate generator 210' generates the face curve from the image data in order to generate the shadow plate. In order to generate the face curve, the shadow plate generator 210' may use a posterizing method. Posterizing denotes reducing the range of light-and-shade values that each pixel in an image may have, and when the image data passes through a posterizing process, the image data is converted so as to use only a predetermined number of colors.
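A minimal posterizing sketch for a sequence of grayscale values; the quantization scheme shown (flooring each value to an even step) is one simple choice among several, and is an assumption of this sketch.

```python
def posterize(values, levels):
    """Reduce each 0-255 grayscale value to one of `levels` evenly spaced tones."""
    step = 256 // levels
    return [min((v // step) * step, 255) for v in values]

tones = posterize([0, 30, 100, 200, 255], 4)  # only multiples of 64 survive
```

Applied to every pixel of a face area, this collapses fine detail (eyes, nostrils) into broad tonal bands, leaving only the general face curve, as described above.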
Accordingly, image data that has passed through the posterizing process appears generalized and indistinct, and the specific shape of each portion is not clearly distinguishable. Therefore, when posterizing of the face area is performed, the face components are imperceptibly erased, and only the general face curve remains. Also, since the posterizing uses only a predetermined number of colors, different images may be generated depending on which color the image is based upon when the color is determined. Accordingly, the shadow plate generator 210' may determine a color to be the basic color of the posterizing, and the color may be determined based on a color of the face area in the image data inputted from the user. Also, where there is a method of generating the face contour similar to the actual face, based on the image data, any such method may be applied to the present invention. For example, the method of extracting the face contour in the caricature generation system according to the first exemplary embodiment of the present invention may be applied.
For the character image generator 220, refer to the description of the caricature generation system according to the above-described first exemplary embodiment of the present invention.
The caricature generator 230 combines the shadow plate generated by the shadow plate generator 210' and the character images generated by the character image generator 220, and generates the caricature. The shadow plate and the character images may have a BMP format having color data for each pixel, and the two images may be combined by adding the color data of overlapping pixels according to a predetermined ratio. Also, the color data of a pixel may be divided into three values, red (R), green (G), and blue (B), and the combination may be performed by adding the respective RGB values according to the predetermined ratio.
Also, the caricature generator 230 may combine the shadow plate and the character images based on a transparency value of the shadow plate received from the user. The transparency value of the shadow plate may denote the ratio of the color data of the pixel corresponding to the shadow plate when the color data of each pixel are added. The transparency value may have a value from 0 to 1, and the combination may be performed based on the Equation as follows:

[Equation]
dstR = picR x alpha + chaR x (1 - alpha)
dstG = picG x alpha + chaG x (1 - alpha)
dstB = picB x alpha + chaB x (1 - alpha)

In the Equation, alpha denotes the transparency value of the shadow plate, dstR, dstG, and dstB denote the RGB values of the combined caricature, picR, picG, and picB denote the RGB values of the shadow plate, and chaR, chaG, and chaB denote the RGB values of the character images.
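The Equation maps directly to a per-pixel blend. The following is a minimal sketch of that arithmetic; the names `blend_pixel`, `pic_rgb`, and `cha_rgb` are illustrative, not from the patent:

```python
def blend_pixel(pic_rgb, cha_rgb, alpha):
    """Combine one shadow-plate pixel (picR/G/B) with one character-image
    pixel (chaR/G/B) per the Equation: dst = pic*alpha + cha*(1 - alpha)."""
    return tuple(round(p * alpha + c * (1 - alpha))
                 for p, c in zip(pic_rgb, cha_rgb))

# alpha = 1 keeps only the shadow plate; alpha = 0 keeps only the
# character image; intermediate values mix the two.
print(blend_pixel((180, 140, 120), (90, 60, 200), 0.5))  # → (135, 100, 160)
```

Applying this function to every overlapped pixel of the two BMP images yields the combined caricature.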
A character image database 240 stores a plurality of character images corresponding to face components. When the character image generator 220 requests the character images corresponding to the extracted face component shapes, the matching character images are retrieved from the database and provided. The character image database 240 may store the character images divided by face component, such as eyes, eyebrows, noses, mouths, ears, and the like.
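As a rough illustration of how such a retrieval might work: the matching criterion below (a small feature vector per stored image and nearest-neighbor lookup) is an assumption for the example only; the patent states just that the most similar stored character image is retrieved. All record names and feature values are invented:

```python
import math

# Hypothetical eye records: id -> (width/height ratio, tilt).
EYE_RECORDS = {
    "eye_round":    (1.5, 0.00),
    "eye_narrow":   (3.0, 0.00),
    "eye_upturned": (2.0, 0.30),
}

def closest_character(extracted_shape, records):
    """Return the record id whose feature vector is nearest to the
    shape extracted from the user's image data."""
    return min(records, key=lambda rid: math.dist(extracted_shape, records[rid]))

print(closest_character((2.9, 0.05), EYE_RECORDS))  # → eye_narrow
```

A separate table per face component (eyes, eyebrows, noses, and so on) keeps each lookup restricted to records of the matching component type.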
FIG. 12 illustrates a shadow plate generated by using the posterizing introduced in the caricature generation system according to the second exemplary embodiment of the present invention.
As illustrated in FIG. 12, the caricature generation system according to the present exemplary embodiment may generate the shadow plate by posterizing image data based on a predetermined color. Reference numeral 301 illustrates a face area extracted from image data inputted from a user, and reference numeral 302 illustrates a shadow plate generated by posterizing the face area based on a skin color of the face area in the image data. Although the shadow plate generated by the posterizing includes the face curve, the face components are shown only indistinctly. Accordingly, when the shadow plate is later combined with character images, the original face components do not show through, being overlapped by the character images.
FIG. 13 illustrates a caricature changed depending on a transparency of a shadow plate according to an exemplary embodiment of the present invention.
As illustrated in FIG. 13, a caricature generation system according to the present invention may determine a combination ratio based on the transparency value of the shadow plate when the shadow plate and the character images are combined. The transparency value of the shadow plate may denote the ratio that the shadow plate occupies in the combination, and may have a value from 0 to 1. Reference numeral 401 corresponds to a case where the transparency value of the shadow plate is 0, reference numeral 404 corresponds to a case where the transparency value is 1, and reference numerals 402 and 403 show intermediate transparency values. The transparency value of the shadow plate increases as the caricature progresses from reference numeral 401 to reference numeral 404.
As described above, since the face curve is not shown when the transparency value of the shadow plate is low, the caricature appears like a drawing, and since the face curve is shown when the transparency value is high, the caricature feels like an actual photograph. The transparency value of the shadow plate may be received from the user and used accordingly.
FIG. 14 is a diagram illustrating a character image database 240 according to an exemplary embodiment of the present invention. As illustrated in FIG. 14, the character image database 240 may store character images corresponding to face components such as eyes, eyebrows, noses, mouths, and ears, and may store the character images for each face component separately. As an example, when the character image generator 220 extracts a shape of eyes from image data inputted from a user, the character image generator 220 may retrieve, from among the records whose face component field corresponds to eyes in the character image database 240, the character images most similar to the extracted eye shape, and may generate the character images from them. As another example, when character images corresponding to the eye shape are to be selected by the user, the user is provided with a list of the records whose face component field corresponds to eyes in the character image database 240, and the user's selection from the list is received.

FIG. 15 illustrates a picture in which a hairstyle and eye shapes of a caricature are selected by a user according to an exemplary embodiment of the present invention. Reference numeral 610 is a caricature generated in the caricature generation system according to the present invention, reference numeral 620 is a picture of selecting eye shapes of the caricature, reference numeral 630 is a picture of selecting a hairstyle of the caricature, and reference numeral 640 illustrates the caricature with the hairstyle selected in reference numeral 630.
As illustrated in FIG. 15, since the caricature generation system according to the present invention enables a user to freely select the face component images of the caricature, to select hairstyles, accessories, clothes, and the like, and to freely adorn the user's own caricature, the user can express himself or herself as desired by using the caricature.
FIG. 16 illustrates an example of using, in a mobile terminal, a caricature generated according to an exemplary embodiment of the present invention.
As illustrated in FIG. 16, the caricature generated by the present invention may be used for expressing oneself in the mobile terminal, and may be variously used as an image representing a user on Internet communities, personal homepages, Internet game sites, and the like. According to the present invention, since a caricature fully showing one's characteristics may be generated by using the shadow plate including the face contour and the face curve, the caricature generated by the present invention may be generally used as an image expressing oneself over wired or wireless Internet connections.
FIG. 17 is a flowchart illustrating flows of a caricature generation method according to the second exemplary embodiment of the present invention.
In step S801, a shadow plate including a face contour and a face curve is generated from image data inputted from a user. The face contour of the shadow plate may be generated by using a difference between a face area color and a background area color in the image data, and the face curve may be easily generated by using posterizing. Posterizing denotes reducing the range of light-and-shade values that each pixel in an image may have, and is performed based on a predetermined color. Also, the predetermined color may be determined by using a color of the face area.
In step S802, face component shapes are extracted from the image data, and character images corresponding to the extracted face component shapes are generated. The face component shapes may be extracted by using a color difference between the face components and the surrounding skin. The character images corresponding to the extracted face component shapes may be extracted from a character image list, or the user may be provided with the character image list and the user's selection of character images may be received. User information, including sex differentiation information of the user, information concerning whether double eyelids exist, and the like, may be used when the character images are generated. The user information may be stored in a predetermined user information database and referred to when the character images are generated.
In step S803, the shadow plate generated in step S801 and the character images generated in step S802 are combined, and the caricature is generated. The shadow plate and the character images may be in a BMP format having color data for each pixel, and the two images may be combined by adding the color data of each overlapped pixel depending on a predetermined ratio. Also, the color data of a pixel may be divided into three values, R, G, and B, and the combination may be performed by adding the respective RGB values depending on the predetermined ratio.
Also, in step S803, the shadow plate and the character images may be combined based on a transparency value of the shadow plate received from the user. The transparency value of the shadow plate may denote the ratio of the color data of the pixel corresponding to the shadow plate when the color data of each pixel are added. The transparency value of the shadow plate may have a value from 0 to 1, and the combination may be performed based on the Equation as follows:

[Equation]
dstR = picR x alpha + chaR x (1 - alpha)
dstG = picG x alpha + chaG x (1 - alpha)
dstB = picB x alpha + chaB x (1 - alpha)

In the Equation, alpha denotes the transparency value of the shadow plate, dstR, dstG, and dstB denote the RGB values of the combined caricature, picR, picG, and picB denote the RGB values of the shadow plate, and chaR, chaG, and chaB denote the RGB values of the character images.
A caricature generation method according to the first exemplary embodiment and the second exemplary embodiment of the present invention may be recorded in computer-readable media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The media and program instructions may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. The media may also be a transmission medium such as optical or metallic lines, waveguides, and the like, including a carrier wave transmitting signals specifying the program instructions, data structures, and the like. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments of the present invention.
According to the present invention, there is provided a caricature generation system and method which enables a user to easily generate a caricature by merely inputting the user's own picture into image data.
Also, according to the present invention, there is provided a caricature generation system and method which can generate a caricature similar to an actual user by accurately extracting a face contour when the caricature is generated.

Also, according to the present invention, there is provided a caricature generation system and method which can generate a caricature which a user desires by enabling the user to modify an automatically extracted contour with a simple mouse click.
Also, according to the present invention, there is provided a caricature generation system and method which can extract a face contour by using a difference between a face area color and a background area color when the caricature is generated, thereby generating a caricature similar to an actual user.
Also, according to the present invention, there is provided a caricature generation system and method which can extract a contour very similar to an actual user's contour with reference to an estimated face width and a face height by estimating the face width and the face height, based on a pupils location and a lips location when a face contour is extracted in order to generate a caricature.
Also, according to the present invention, there is provided a caricature generation system and method which can extract a contour in which a face area color and a background area color are similar, such as a jaw line, by extracting a contour in a face range when a face contour is extracted in order to generate a caricature.

Also, according to the present invention, there is provided a caricature generation system and method which can express a three-dimensional shape of an actual face by using a shadow plate including a face contour and a face curve when a caricature is generated.
Also, according to the present invention, there is provided a caricature generation system and method which can easily generate a face curve by posterizing a face area in image data inputted from a user.
Also, according to the present invention, there is provided a caricature generation system and method which can generate a caricature similar to an actual face by using a color of an actual face area when a face area is posterized from image data inputted from a user in order to generate a face curve.
Also, according to the present invention, there is provided a caricature generation system and method which can generate a caricature having various moods by receiving, from a user, a transparency of a shadow plate and determining a combination ratio when the shadow plate and a character image are combined.

Although a few embodiments of the present invention have been shown and described, the present invention is not limited to the described embodiments. Instead, it would be appreciated by those skilled in the art that changes may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims

1. A caricature generation system for generating a caricature by using image data inputted from a user, the system comprising: a face shape generator which extracts, from the image data, face shape information including face contour information generated by distinguishing a face area from a background area, and generates a face shape; a character image generator which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes; and a caricature generator which combines the face shape and the character images, and generates the caricature.
2. The system of claim 1, wherein the face shape information comprises pupils location information, lips location information, face width information, and face height information.
3. The system of claim 1, wherein the face shape generator comprises: a face component location identifier which identifies pupils location information and lips location information from the image data; a face length estimator which estimates a face width and a face height, based on the identified pupils location and the identified lips location; a face contour information extractor which distinguishes the face area from the background area in an estimated face range by using the estimated face width and the face height, and extracts the face contour information; and a face shape former which processes the face contour information or the image data, and forms the face shape.
4. The system of claim 3, wherein the face length estimator estimates the width by using a distance between two pupils, and estimates the length by using a distance between pupils and lips.
5. The system of claim 3, wherein the face length estimator extracts, from a plurality of sample image data, a correlation between the distance between two pupils, and the face width, and a correlation between the distance between pupils and lips, and the face height, and estimates the width and the length by using the correlations.
6. The system of claim 3, wherein the face length estimator further comprises: a user sex differentiation information database which stores user information including sex differentiation information.
7. The system of claim 6, wherein the face length estimator extracts the sex differentiation information of the user with reference to the database, and estimates the width and the length by further considering the extracted sex differentiation information.
8. The system of claim 3, wherein the face contour information extractor distinguishes the face area and the background area by using a difference between a face area color and a background area color.
9. The system of claim 3, wherein the face contour information extractor extracts a face contour from the face area.
10. The system of claim 3, wherein the face contour information extractor provides the user with the extracted contour information, receives contour change information from the user, and modifies the contour information, based on the contour change information.
11. The system of claim 3, wherein the face shape former performs a process of filling in, with a color similar to an actual face, an internal portion of the face contour, the face contour being extracted from the face contour information extractor, and forms the face shape.
12. The system of claim 3, wherein the face shape former extracts, from the image data, an internal portion of the face contour, and forms the face shape.
13. The system according to any one of claims 1 and 3, further comprising: a character image database which stores a plurality of character images having a plurality of face component shapes, wherein the character image generator generates the character image corresponding to the extracted plurality of face component shapes with reference to the character image database.
14. The system according to any one of claims 1 and 3, further comprising: a character image database which stores a plurality of character images having a plurality of face component shapes, wherein the character image generator provides the user with a character image list stored in the character image database, receives a selection of character images from the user, and generates the character images, based on the selected character images.
15. The system according to any one of claims 1 and 3, wherein the character image generator generates the character images by further considering user information including sex differentiation information of the user, or information concerning whether double eyelids exist.
16. The system according to any one of claims 1 and 3, further comprising: a user information database which stores user information including sex differentiation information or information concerning whether double eyelids exist, wherein the character image generator extracts the sex differentiation information of the user, or the information concerning whether double eyelids exist with reference to the user information database, and generates the character images by further considering the extracted sex differentiation information, or the information concerning whether double eyelids exist.
17. The system of claim 3, wherein the caricature generator determines a location to combine the character images, based on the identified pupils location and the identified lips location in the face component location identifier.
18. A caricature generation system for generating a caricature from image data inputted from a user, the system comprising: a shadow plate generator which generates a shadow plate including a face contour and a face curve generated by dividing a face area and a background area from the image data; a character image generator which extracts face component shapes from the image data, and generates character images corresponding to the extracted face component shapes; and a caricature generator which combines the shadow plate and the character images, and generates the caricature.
19. The system of claim 18, wherein the shadow plate generator extracts the face contour, posterizes the face area specified by the face contour, based on a predetermined color, and generates the face curve.
20. The system of claim 18, wherein the predetermined color is determined, based on a color of the face contour area.
21. The system of claim 18, wherein the caricature generator determines a location to combine the character images in the shadow plate, based on a location of face components.
22. The system of claim 18, wherein the caricature generator combines the shadow plate and the character images depending on a predetermined combination ratio, and the combination ratio is determined based on a transparency value of the shadow plate inputted from the user.
23. The system of claim 22, wherein the caricature generator generates the caricature by considering a red, green, and blue (RGB) value of the shadow plate, and the RGB value of the character images.
24. The system of claim 23, wherein the transparency value of the shadow plate, corresponding to alpha, has a value from 0 to 1, and the RGB value of each pixel of the combined caricature is determined by the Equation as follows:
[Equation] dstR = picR x alpha + chaR x (1 - alpha), dstG = picG x alpha + chaG x (1 - alpha), dstB = picB x alpha + chaB x (1 - alpha), where dstR, dstG, and dstB denote the RGB value of the caricature, picR, picG, and picB denote the RGB value of the shadow plate, and chaR, chaG, and chaB denote the RGB value of the character images.
25. The system of claim 18, further comprising: a character image database which stores a plurality of character images having a plurality of face component shapes, wherein the character image generator generates the character images corresponding to the extracted plurality of face component shapes with reference to the character image database.
26. The system of claim 18, further comprising: a character image database which stores a plurality of character images having a plurality of face component shapes, wherein the character image generator provides the user with a character image list stored in the character image database, receives the selected character images from the user, and generates the character images, based on the selected character images.
27. The system of claim 18, wherein the character image generator generates the character images by further considering user information including sex differentiation information of the user, or information concerning whether double eyelids exist.
28. The system of claim 18, further comprising: a user information database which stores user information including sex differentiation information or information concerning whether double eyelids exist, wherein the character image generator extracts the sex differentiation information of the user, or the information concerning whether double eyelids exist with reference to the user information database, and generates the character images by further considering the extracted sex differentiation information, or the information concerning whether double eyelids exist.
29. A caricature generation method of generating a caricature by using image data inputted from a user, the method comprising: a step of distinguishing a face area from a background area, extracting, from the image data, face shape information including face contour information, and generating a face shape; a step of extracting face component shapes from the image data, and generating character images corresponding to the extracted face component shapes; and a step of combining the face shape and the character images, and generating the caricature.
30. The method of claim 29, wherein the step of distinguishing comprises: identifying pupils location information and lips location information from the image data; estimating a face width and a face height, based on the identified pupils location and the identified lips location; distinguishing the face area from the background area in an estimated face range by using the estimated face width and the face height, and extracting the face contour information; and forming the face shape, based on the extracted face contour information, generating an image of ears, eyes, mouth, and nose corresponding to eyes, eyebrows, nose, mouth, or ears shape of the image data, combining the face shape with the image of ears, eyes, mouth, and nose, and generating the caricature.
31. A caricature generation method of generating a caricature from image data inputted from a user, the method comprising: a step of generating a shadow plate including a face contour and a face curve generated by dividing a face area and a background area from the image data; a step of extracting face component shapes from the image data, and generating character images corresponding to the extracted face component shapes; and a step of combining the shadow plate and the character images, and generating the caricature.
32. A computer-readable recording medium storing a program for implementing the method according to any one of claims 29 through 31.
PCT/KR2007/001788 2007-04-12 2007-04-12 System for making caricature and method of the same and recording medium of the same Ceased WO2008126948A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2009509402A JP2009521065A (en) 2007-04-12 2007-04-12 Caricature generation system and method and recording medium thereof
PCT/KR2007/001788 WO2008126948A1 (en) 2007-04-12 2007-04-12 System for making caricature and method of the same and recording medium of the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2007/001788 WO2008126948A1 (en) 2007-04-12 2007-04-12 System for making caricature and method of the same and recording medium of the same

Publications (1)

Publication Number Publication Date
WO2008126948A1 true WO2008126948A1 (en) 2008-10-23

Family

ID=39864034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2007/001788 Ceased WO2008126948A1 (en) 2007-04-12 2007-04-12 System for making caricature and method of the same and recording medium of the same

Country Status (2)

Country Link
JP (1) JP2009521065A (en)
WO (1) WO2008126948A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496174A (en) * 2011-12-08 2012-06-13 中国科学院苏州纳米技术与纳米仿生研究所 Method for generating face sketch index for security monitoring
WO2024140246A1 (en) * 2022-12-28 2024-07-04 中国电信股份有限公司 Digital cartoon character avatar generation method and apparatus, electronic device, and medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110025689A1 (en) * 2009-07-29 2011-02-03 Microsoft Corporation Auto-Generating A Visual Representation
EP2631875A1 (en) 2012-10-29 2013-08-28 Meubook, S.L. Automatic caricaturing system and method maintaining the style of the draftsman
KR102384983B1 (en) * 2020-09-28 2022-04-29 김상철 Electronic terminal device which is able to create a composite image with the face image of a celebrity and the face image of a user and the operating method thereof
CN112991151B (en) * 2021-02-09 2022-11-22 北京字跳网络技术有限公司 Image processing method, image generation method, apparatus, device, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020018595A1 (en) * 2000-07-05 2002-02-14 Eun-Jun Kwak Method for creating caricature
US6885761B2 (en) * 2000-12-08 2005-04-26 Renesas Technology Corp. Method and device for generating a person's portrait, method and device for communications, and computer product
KR20060088625A * 2005-02-02 2006-08-07 LG Electronics Inc. Method and apparatus for providing caricature of mobile terminal
KR20060098730A * 2005-03-07 2006-09-19 LG Electronics Inc. Mobile communication terminal with caricature generation function and generation method using same

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3800652B2 * 1995-12-01 2006-07-26 Casio Computer Co., Ltd. Face image creation device, image generation device, and face image correction method
JP2000155836A (en) * 1998-11-19 2000-06-06 Design Network:Kk Portrait picture formation system and its method
JP2000311248A (en) * 1999-04-28 2000-11-07 Sharp Corp Image processing device
TW200614094A (en) * 2004-10-18 2006-05-01 Reallusion Inc System and method for processing comic character



Also Published As

Publication number Publication date
JP2009521065A (en) 2009-05-28


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2009509402

Country of ref document: JP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07745951

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07745951

Country of ref document: EP

Kind code of ref document: A1