
CN120147460A - A character scene generation system based on AI interaction - Google Patents


Info

Publication number
CN120147460A
Authority
CN
China
Prior art keywords
user
image
age
features
facial
Prior art date
Legal status
Pending
Application number
CN202510295615.XA
Other languages
Chinese (zh)
Inventor
王瑞明
赵华
Current Assignee
Beijing Box Creation Glory Technology Co ltd
Original Assignee
Beijing Box Creation Glory Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Box Creation Glory Technology Co ltd
Priority to CN202510295615.XA
Publication of CN120147460A
Status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/58 Extraction of image or video features relating to hyperspectral data
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752 Contour matching
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract


The present application discloses a character scene generation system based on AI interaction, comprising a user interaction design module, a character image shooting module, a character image analysis and processing module, a character image processing module, an image integration and correction module, an image generation module, and a management database. The character image shooting module obtains a user image and the microstructure of the skin surface with a high-definition camera, where the microstructure comprises a skin gloss reference value, a wrinkle level reference value, pore features, and pigmentation, and the user image contains the hair coverage. An algorithm calculates the pore features, a color conversion of the microstructure yields the pigmentation, and the user age assessment association coefficient is calculated from skin gloss, wrinkle level, pore features, pigmentation, and hair coverage. By improving the age prediction method, the system provides more accurate age assessment results and can more accurately quantify features such as skin texture and pigmentation, thereby improving the accuracy of age assessment.

Description

Character scene generation system based on AI interaction
Technical Field
The invention relates to the technical field of image generation, in particular to a character scene generation system based on AI interaction.
Background
In the present digital age, with the rapid development of artificial intelligence (AI) technology, character scene generation has been widely applied in fields such as virtual fitting, avatar design, and game character creation. However, existing character scene generation systems still face many challenges in user age assessment, hairstyle and garment matching, dynamic effect simulation, and the like.
For example, Chinese patent application No. 202311011914.3 discloses a character image scene generation system based on AI interaction, which includes a user interaction design module, a character image shooting module, a character image analysis processing module, a character image processing module, an image integration correction module, an image generation module, and a management database. The system matches corresponding scene subjects for the user according to keywords selected through a voice recognition interface, performs age evaluation on captured static user images, and, based on that evaluation, screens layer by layer to match the most suitable hairstyles and clothes to the user's character image. This overcomes the low attention paid to age in the prior art, provides choices that better conform to age characteristics and aesthetic preferences, meets the standard of personalized customization of user requirements and preferences, improves visual effect and realism, and enhances the fidelity and quality of the generated images so that they better conform to the user's expectations.
In the prior art, age estimation methods often depend on a single photo or video, making it difficult to comprehensively capture the facial and physical features of a user, so the age estimation result is biased. In particular, when the user makes extreme expressions, the facial expression changes may cause temporary changes in biomarkers such as skin gloss and wrinkles, further reducing the accuracy of age prediction. In addition, factors such as different illumination conditions, shooting angles, and user posture also influence the age evaluation result.
Disclosure of Invention
The application provides a character scene generation system based on AI interaction. By improving the age prediction method, it provides more accurate age assessment results and offers personalized clothing and hairstyle suggestions to the user based on those results. By introducing skin microstructure analysis, color space conversion, and histogram analysis, it can more accurately quantify features such as skin texture and pigmentation, thereby improving the accuracy of age assessment.
The application provides a character scene generation system based on AI interaction, which comprises a user interaction design module, a character image shooting module, a character image analysis processing module, a character image processing module, an image integration correction module, an image generation module, and a management database. The character image shooting module acquires a user image and a microstructure image of the skin surface using a high-definition camera; the microstructure comprises a skin glossiness reference value, a wrinkle level reference value, pore characteristics, and pigment deposition, and the user image comprises the hair coverage rate. An algorithm calculates the pore characteristics, and the microstructure image undergoes color conversion to obtain the pigment deposition. The user age evaluation association coefficient is calculated from the skin glossiness, the wrinkle level, the pore characteristics, the pigment deposition, and the hair coverage rate. The character image analysis processing module acquires user age evaluation association data from the user image and the microstructure of the skin surface in order to predict the user's age and obtain the user age evaluation association coefficient. The image integration correction module integrates the user interaction scene with the user decoration image to obtain a user scene decoration image.
Preferably, the algorithm for calculating the pore characteristics detects edges with severe gray-level change in the image, these edges corresponding to pore contours, and segments the identified pores. Size features are extracted from the segmented pore regions: the diameter of each pore region is measured and the number of pixels in the region is counted to obtain the pore area. Shape features are also extracted: circularity is evaluated from the ratio of the perimeter of the pore region to its area, and for non-circular pores the ratio of the longest diameter to the shortest diameter, namely the aspect ratio, is calculated.
Preferably, the user age estimation association coefficient ρ combines the features below with weighted contributions and a sigmoid on glossiness, in the form:

\rho = \omega_1 \cdot \frac{1}{1 + e^{-\kappa\psi}} + \omega_2\,\xi + \omega_3\,\bar{d} + \omega_4\,\Pi + \omega_5\,\lambda

wherein ψ is the skin glossiness reference value, ξ is the wrinkle level reference value, \bar{d} is the average pore diameter, Π is the quantitative index of pigmentation, λ is the hair coverage, ω1 to ω5 are the age-assessment weight coefficients of the respective features, set and adjusted according to actual data or experience, e is the natural constant, and κ is a regulating parameter of the glossiness influence.
Preferably, the method for predicting the age of the user further comprises:
S101, collecting a face data set, and marking facial key points in the collected face data set;
S102, calculating the displacement of the facial key points with an optical flow method according to the marked facial key points;
S103, constructing feature vectors from the calculated key-point displacements, the feature vectors comprising a first feature vector, a second feature vector, and a third feature vector;
S104, training a regression model on the characteristics of the data in the face data set, inputting the obtained third feature vector into the trained regression model through the character image analysis processing module, and having the regression model output the optimized user age assessment association coefficient.
Preferably, the displacement vectors of the facial key points calculated by the optical flow method are arranged in sequence to form a second feature vector, whose dimension equals the number of facial key points, each dimension corresponding to the displacement vector of one key point. Global features are extracted from the facial image data set and arranged in sequence to form a first feature vector. The first and second feature vectors are compared to obtain a difference value, namely the expression reduction value, from which inherent features and temporary features are identified. The variance of the identified temporary features is calculated, a feature threshold is set according to this variance, and vectors exceeding the feature threshold are removed from the first feature vector. The features remaining after removal of the temporary features are sorted, arranged in sequence, and combined into a third feature vector.
Preferably, the method for optimizing the user age assessment association coefficient further comprises:
S201, acquiring facial and body images of the user through a camera, and extracting the user's facial features and body features with an image processing algorithm;
S202, inputting the extracted facial and body features into an age estimation algorithm and calculating an age estimation value;
S203, collecting images, selecting candidate hairstyles and clothes from the collected hairstyle images according to the user age evaluation association coefficient, matching the candidates with the user's face and body images, and adjusting the matched hairstyle and clothes using the age estimation value.
Preferably, a camera is used to obtain a whole-body image of the user, an algorithm extracts the body lines and contours when standing, and the key joints of the body are located; a distance formula calculates the distance between two joints. For angle measurement, a corresponding joint triplet is selected; for the elbow bending angle, the shoulder, elbow, and wrist joints are selected. Vectors between adjacent joints are calculated: for the shoulder, elbow, and wrist, the shoulder-to-elbow vector and the elbow-to-wrist vector. The dot product and moduli of the vectors give the included angle between the two vectors:

\theta = \arccos\left(\frac{\vec{u} \cdot \vec{v}}{\lVert \vec{u} \rVert \, \lVert \vec{v} \rVert}\right)

wherein \vec{u} and \vec{v} are the vectors between adjacent joints.
Preferably, the steps for adjusting the age estimation value are:
S301, constructing a three-dimensional image model from the acquired facial and body images of the user;
S302, acquiring real motion data using motion capture technology, simulating the change of the three-dimensional image model according to the real motion data, and judging in real time whether the clothes and hairstyle match while the 3D model moves; if not, the age evaluation value is adjusted, and if so, it remains unchanged.
Preferably, while the three-dimensional image model moves, whether the clothes and hairstyle match is judged in real time according to the clothing swing amplitude and the rate of change of the hairstyle shape. Consistency of the clothing swing amplitude refers to the degree of match between the swing amplitude of the clothes on the 3D model and the swing amplitude of similar clothes in reality: the maximum swing distance of the clothes while the 3D model runs is measured and compared with the swing amplitude of similar clothes under the same action in real video; if the difference does not exceed 5% of the set swing amplitude, the clothing swing amplitude is considered consistent with reality, otherwise it is inconsistent. Rationality of the hairstyle shape change rate refers to whether the rate at which the hairstyle's shape changes on the 3D model during motion conforms to the expected physical behavior: the shape change rate of the hairstyle on the 3D model is calculated and compared with that of a similar hairstyle under the same action in real video; if the difference does not exceed 15%, the hairstyle shape change is considered reasonable, and if it exceeds 15%, the age evaluation value is adjusted.
Preferably, the user interaction design module comprises a voice recognition unit and a scene matching unit, wherein the voice recognition unit collects the user's voice information, converts it into text, and extracts keywords, and the scene matching unit matches the extracted keywords with scene keywords in a database and screens them to obtain the user interaction scene.
The technical scheme provided by the application has at least the following technical effects or advantages. By improving the age prediction method and introducing skin microstructure analysis, color space conversion, and histogram analysis, the scheme can more accurately quantify features such as skin texture and pigmentation and improve the accuracy of age assessment. Meanwhile, by combining the analysis and processing of dynamic facial expressions, the optical flow method is used to calculate facial key-point displacements and remove expression-related temporary features, further improving the accuracy and generalization of age prediction. In addition, by adjusting factors that affect age assessment, such as hairline position, eye corner position, and hair color, and by eliminating the effect of shooting shadows, the fit of the hairstyle and garment to the user is improved. Finally, instead of judging the matching degree only from a frontal photo, a three-dimensional image model is built to comprehensively simulate the user's actual image, and the matching degree of clothes and hairstyle is judged in real time in combination with motion capture technology, ensuring harmonious and naturally fluent dynamic effects and providing the user with a more real and natural virtual image experience.
Drawings
FIG. 1 is a schematic diagram of a character scene generation system based on AI interaction;
FIG. 2 is a flow chart of predicting the age of a user according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a process for optimizing correlation coefficients for user age estimation according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a process for adjusting an age estimation value according to an embodiment of the invention.
Detailed Description
In order that the application may be readily understood, a more particular description of the application will be rendered by reference to the specific embodiments illustrated in the appended drawings. The application may, however, be embodied in many different forms and is not limited to the embodiments described herein, which are instead provided for the purpose of a more thorough understanding of the present disclosure.
It should be noted that the terms "vertical", "horizontal", "upper", "lower", "left", "right", and the like are used herein for illustrative purposes only and do not represent the only embodiment.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terms used in this description are for the purpose of describing particular embodiments only and are not intended to limit the invention, and the term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
FIG. 1 is a schematic diagram of a character scene generation system based on AI interaction according to an embodiment of the present invention, which includes a user interaction design module, a character image capturing module, a character image analysis processing module, a character image processing module, an image integration correction module, an image generation module, and a management database.
The user interaction design module comprises a voice recognition unit and a scene matching unit. The voice recognition unit collects the user's voice information, converts it into text, and extracts the corresponding keywords; through the voice recognition unit the user can interact with the system without manually entering text, which improves the convenience and efficiency of interaction and allows the system to understand the user's voice input more accurately and thus provide more precise service. The scene matching unit matches the extracted keywords with scene keywords in the database and screens them to obtain the user interaction scene; it can accurately determine the interaction scene the user requires, enabling the system to provide personalized image generation for different scenes. Through scene matching, the system can better meet the user's specific requirements, provide more personalized and considerate service, and enhance user satisfaction.
The character image shooting module acquires an image of the user and the microstructure of the skin surface using a high-definition camera, wherein the microstructure comprises a skin glossiness reference value, a wrinkle level reference value, the shape, size, and distribution of pores, and the color of pigment spots, and the user image comprises the hair coverage rate.
The character image analysis processing module is used for acquiring user age assessment association data according to the microstructure of the user image and the skin surface, predicting the age of the user, and further acquiring a user age assessment association coefficient;
Further, the captured skin image is preprocessed with the image processing tool Adobe Photoshop; the preprocessing comprises filtering and contrast enhancement. Filtering removes noise and interference to make the image clearer, while contrast enhancement highlights details such as pores and pigment spots so they are easier to observe and analyze. The preprocessed image is then converted into visual graphics with the visualization software MATLAB so that the skin features are more intuitive and easier to understand, and different skin features are given different colors to distinguish them more clearly; for example, pores can be represented by one color and pigment spots by another. A three-dimensional reconstruction technique converts the two-dimensional skin image into a three-dimensional graphic, so an observer can examine and analyze the skin features from different angles and obtain more comprehensive information.
After the obtained microstructure image is processed with the Canny edge detection algorithm, edges with intense gray-level change are detected; these edges correspond to the contours of pores. The identified pores are then segmented using morphological processing. Size features are extracted from the segmented pore regions: the diameter of each pore region is measured and the number of pixels in the region is counted to obtain the pore area. Shape features are also extracted: circularity is evaluated from the ratio of the perimeter of the pore region to its area (the higher the circularity, the closer the pore shape is to a circle), and for non-circular pores the ratio of the longest diameter to the shortest diameter, namely the aspect ratio, is calculated.
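A minimal Python sketch of this pore-extraction step, assuming OpenCV primitives; the Canny thresholds, kernel size, and noise cutoff are illustrative, and the standard 4πA/P² circularity measure stands in for the raw perimeter-to-area ratio:

```python
import cv2
import numpy as np

def pore_features(gray: np.ndarray) -> list:
    """gray: single-channel skin microstructure image."""
    edges = cv2.Canny(gray, 50, 150)                    # edges with intense gray-level change
    kernel = np.ones((3, 3), np.uint8)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # close pore contours
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for c in contours:
        area = cv2.contourArea(c)                       # pixel count of the pore region
        perim = cv2.arcLength(c, True)
        if area < 2 or perim == 0:                      # skip noise specks
            continue
        circ = 4 * np.pi * area / perim ** 2            # assumed circularity measure
        (_, _), (w, h), _ = cv2.minAreaRect(c)          # the two principal diameters
        aspect = max(w, h) / max(min(w, h), 1e-6)       # longest/shortest diameter ratio
        feats.append({"area": area, "circularity": circ, "aspect_ratio": aspect})
    return feats
```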
The obtained microstructure image is converted from the RGB color space to the Lab color space, which separates it into three independent channels: the L channel carries brightness information reflecting how light or dark the image is, while the a and b channels carry different aspects of chromaticity, the a channel corresponding to the red-green opponent axis and the b channel to the yellow-blue opponent axis; this reduces the influence of illumination variation on pigment analysis. Histograms are generated for the chromaticity (a and b channels) and the brightness (L channel in Lab space). In a histogram, the horizontal axis is the pixel value range and the vertical axis the frequency of each pixel value; by counting the number of pixels at different chromaticity or brightness values, the concentration and dispersion of pigments in the image can be obtained. Characteristics such as the shape of the histogram, peak position, and peak width are analyzed to quantify the pigmentation: for example, the sharpness of a peak may reflect the uniformity of pigment deposition, and its position may reveal the type or concentration of the main pigments.
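A brief sketch of the Lab conversion and per-channel histograms, assuming OpenCV; the 256-bin histogram and the peak/spread summaries are illustrative choices:

```python
import cv2
import numpy as np

def pigmentation_histograms(bgr: np.ndarray) -> dict:
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)          # L = brightness, a/b = chromaticity
    stats = {}
    for name, ch in zip("Lab", cv2.split(lab)):
        hist = cv2.calcHist([ch], [0], None, [256], [0, 256]).ravel()
        stats[name] = {
            "hist": hist,
            "peak": int(hist.argmax()),                 # peak position: dominant value
            "spread": float(ch.std()),                  # dispersion of channel values
        }
    return stats
```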
The skin region of the user in the microstructure image is separated from the other regions and divided into several skin subregions for RGB color detection. The red, green, and blue component values of the skin in each subregion are denoted Ri, Gi, and Bi, where i is the number of the i-th skin subregion, i = 1, 2, ..., k, and k is the number of skin subregions. These component values are substituted into a brightness formula of the form

Y = \frac{1}{k}\sum_{i=1}^{k}\left(0.299\,R_i + 0.587\,G_i + 0.114\,B_i\right)

to obtain the user's skin brightness Y. The skin brightness is compared with the skin brightness range corresponding to each preset glossiness in the management database, the glossiness corresponding to the user's skin brightness is screened out, and it is recorded as the skin glossiness reference value ψ.
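A tiny sketch of that averaging, under the same assumed standard 0.299/0.587/0.114 luma weights (the patent's own formula image is not reproduced in the text):

```python
def skin_brightness(subregions) -> float:
    """subregions: list of (R_i, G_i, B_i) mean component values, i = 1..k."""
    k = len(subregions)
    # average the assumed per-subregion luma over the k skin subregions
    return sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in subregions) / k
```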
The specific analysis method of the wrinkle level reference value is as follows. Corresponding wrinkle feature data are extracted from the face portion of the microstructure image, comprising the number of wrinkles, the depth of each wrinkle, and the length of each wrinkle. The number of wrinkles is denoted a, the depth of each wrinkle d_j, and the length of each wrinkle l_j, where j is the number of each wrinkle, j = 1, 2, ..., a. For each wrinkle level q (q = 1, 2, ..., with reference wrinkle number a_q, reference wrinkle depth d_q, and reference wrinkle length l_q), these values are substituted into a matching formula of the form

\mu_q = e^{-\left(\eta_1\,\lvert a - a_q \rvert \,+\, \eta_2\,\lvert \bar{d} - d_q \rvert \,+\, \eta_3\,\lvert \bar{l} - l_q \rvert\right)}

to obtain the feature data matching coefficient μ_q between the user's skin region and the q-th level of wrinkles, where η1, η2, and η3 are the set wrinkle number, wrinkle depth, and wrinkle length correction coefficients, \bar{d} and \bar{l} are the mean wrinkle depth and length, and e is the natural constant. The wrinkle level corresponding to the maximum matching coefficient is selected as the wrinkle level reference value and denoted ξ.
The specific method for the hair coverage rate is as follows. The hair region of the person in the user image is segmented into a person hair region image, whose width w_hair and height h_hair are read. The hair region image is converted into a gray image, the gray values of all pixels in the converted image are detected and compared with the gray value range corresponding to the set standard hair density threshold, and the number of pixels falling in the range, denoted σ, is obtained. Comparing σ with the total number of pixels of the hair region image, by substitution into a formula of the form

\lambda = \frac{\sigma}{w_{hair} \times h_{hair} \times dpi^2}

gives the hair coverage rate λ, where dpi is the pixel density of the image stored in the management database. Comparing the proportion of in-range pixels to the total pixel count provides an accurate hair coverage rate: images of different hair regions can differ in size, and this ratio eliminates that difference, making the evaluation result more accurate and comparable.
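A minimal sketch of the coverage computation; the gray range standing in for the standard density threshold, and the use of the image's own pixel count as the denominator, are assumptions:

```python
import cv2
import numpy as np

def hair_coverage(hair_bgr: np.ndarray, lo: int = 0, hi: int = 90) -> float:
    """hair_bgr: segmented person hair region image; [lo, hi] is an assumed
    gray range matching the standard hair density threshold."""
    gray = cv2.cvtColor(hair_bgr, cv2.COLOR_BGR2GRAY)
    sigma = int(np.count_nonzero((gray >= lo) & (gray <= hi)))  # in-range pixel count
    return sigma / gray.size                                    # coverage rate lambda
```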
The user age evaluation association coefficient is comprehensively analyzed through a weighted combination of skin glossiness, wrinkle level, pore characteristics, pigment deposition, and hair coverage:

\rho = \omega_1 \cdot \frac{1}{1 + e^{-\kappa\psi}} + \omega_2\,\xi + \omega_3\,\bar{d} + \omega_4\,\Pi + \omega_5\,\lambda

wherein ψ is the skin glossiness reference value, reflecting the brightness and compactness of the skin (young people generally have higher skin glossiness); ξ is the wrinkle level reference value, representing the number and depth of wrinkles, which generally increase with age; \bar{d} is the average pore diameter, and larger pore diameters may be associated with age; Π is the quantitative index of pigmentation, such as the area or concentration of pigment spots, which may increase with age; λ is the hair coverage, which, although not strongly related to age directly, can serve as an auxiliary index since hair thinning or hair loss is sometimes age-related; ω1 to ω5 are the age-assessment weight coefficients of the respective features, which can be set and adjusted according to actual data or experience; e is the natural constant, used to construct a sigmoid function that smooths the effect of glossiness on age assessment; and κ is a regulating parameter of the glossiness influence, controlling the steepness of the sigmoid function.
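A small sketch of this coefficient under the weighted-sum reading above; the weight values and κ are placeholders to be fitted from real data:

```python
import math

def age_association(psi, xi, d_bar, pigment, lam,
                    w=(0.3, 0.3, 0.15, 0.15, 0.1), kappa=1.0):
    """Weights w and kappa are illustrative; set and adjust them from actual data."""
    gloss = 1.0 / (1.0 + math.exp(-kappa * psi))    # sigmoid smooths the gloss influence
    return (w[0] * gloss + w[1] * xi + w[2] * d_bar
            + w[3] * pigment + w[4] * lam)
```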
The character image processing module performs clothing and hairstyle matching according to the user's age evaluation association coefficient to obtain a preselected garment and a preselected hairstyle. The user's original hairstyle and original garment in the user image are compared with the preselected hairstyle and garment, a set of candidate hairstyles and a set of candidate garments for the user are screened out, the user's image hairstyle and image garment are then selected from them, and these are imported into the user's image to obtain the user decoration image.
The image generation module reads the brightness-corrected user scene decoration image, records it as the end-user image, and generates and displays the end-user image.
The management database stores the pixel density of images, each scene subject word, the scenes corresponding to each scene subject word, the garments and hairstyles corresponding to each scene, the age evaluation index threshold, the garments and hairstyles corresponding to each age group, the age evaluation index range corresponding to each age group, the glossiness corresponding to each skin brightness, the chromaticity component correction coefficient, the user hair standard density threshold, the wrinkle level data corresponding to the wrinkle feature data matching coefficients, and the wrinkle feature data correction coefficients.
The technical scheme provided by this embodiment has at least the following technical effects or advantages. By improving the age prediction method, it provides a more accurate age assessment result and gives the user personalized clothing and hairstyle suggestions based on that result. By introducing skin microstructure analysis, a gray-level co-occurrence matrix algorithm, color space conversion, and histogram analysis, features such as skin texture and pigmentation can be quantified more accurately, improving the accuracy of age assessment. Based on the more accurate assessment, the character image processing module can suggest clothing and hairstyles that better match the user's actual age; the user interaction scene is obtained through the user interaction design module; the image integration correction module performs brightness correction and related processing; and the finally generated user scene decoration image is more natural and lifelike, improving the user experience.
The second embodiment optimizes the age prediction result of the first embodiment. When a user makes extreme expressions (such as laughing, crying, or frowning) during image acquisition, changes in facial expression may cause temporary changes in biomarkers such as skin gloss and wrinkles, interfering with the age prediction result and reducing its accuracy. This embodiment therefore introduces dynamic facial expression analysis and improves the accuracy of age prediction by removing expression-related temporary features.
As shown in FIG. 2, the method for predicting the age of the user further includes:
S101, collecting a face data set, and marking facial key points in the collected face data set;
Further, images are downloaded from public facial image data sets on the Internet, such as LFW (Labeled Faces in the Wild) and CelebA; these data sets usually contain rich facial images covering different ages, sexes, ethnicities, and expressions. Alternatively, facial images of different people are captured with a camera or mobile phone to ensure the diversity and authenticity of the data. The collected images are screened, and blurred, occluded, or low-quality images are removed to ensure the quality of the data set. MTCNN, a deep-learning face detection algorithm that detects face positions and labels facial key points at the same time, is used for automatic labeling: key points such as eye corners, mouth corners, eyebrow endpoints, nose bridge, and cheeks are determined, the images are input into MTCNN, which automatically outputs the key-point positions, and the labeling results are saved to files when labeling is finished.
S102, calculating the displacement of the facial key points with an optical flow method according to the marked facial key points;
Further, the Lucas-Kanade optical flow method is used to calculate the displacement of the facial key points and output their displacement vectors, capturing the change of dynamic facial expression. Facial key points, generally including feature points such as eye corners, mouth corners, and the nose bridge, are extracted from the image sequence so that the key points have a one-to-one correspondence between consecutive frames. The extracted key points are input into the Lucas-Kanade method, a small window is selected around each key point, and the sum of squared brightness differences of the pixels in the window is computed:

S = \sum_{(x,y)\in W}\left[I(x+dx,\,y+dy,\,t+1) - I(x,y,t)\right]^2

wherein S is the sum of squared brightness differences, W is the set of pixel points within the window, I(x, y, t) is the brightness value of the t-th frame at position (x, y), and dx and dy are the assumed motion vector components (i.e., the x and y components of the displacement vector). The sum of squared differences is linearized by Taylor expansion to obtain a linear equation system in the motion vector components, which is solved with the least squares method. The calculated displacement vectors are drawn on the image to visualize the motion track: the direction of a displacement vector indicates the direction in which the key point moves (rightward, downward, etc.), and its magnitude indicates the distance moved (i.e., the modulus of the vector).
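A minimal sketch of this tracking step with OpenCV's pyramidal Lucas-Kanade implementation; the window size and pyramid depth are illustrative parameters:

```python
import cv2
import numpy as np

def keypoint_displacements(prev_gray, next_gray, pts):
    """pts: (N, 1, 2) float32 array of labeled facial key points in the previous frame."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(15, 15), maxLevel=2)
    ok = status.ravel() == 1                        # keep successfully tracked key points
    disp = (next_pts - pts).reshape(-1, 2)[ok]      # (dx, dy) per key point
    return disp, np.linalg.norm(disp, axis=1)       # vectors and their moduli (distance moved)
```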
S103, constructing feature vectors according to the calculated key-point displacements, the feature vectors comprising a first feature vector, a second feature vector, and a third feature vector;
Specifically, each facial key point has a displacement vector calculated by the Lucas-Kanade optical flow method; the vector comprises direction and magnitude components and represents the motion of the key point between consecutive frames. The displacement vectors of all facial key points are arranged in a fixed order to form a vector whose dimension equals the number of facial key points, each dimension corresponding to the displacement vector of one key point; this is the second feature vector, reflecting the motion characteristics of the facial key points. Global features are extracted from the facial image data set: for skin texture and wrinkles, sagging or folds of the skin surface can be detected through image processing, and for facial shape, the facial contour can be extracted with an edge detection algorithm. The facial image is processed according to the selected feature extraction method so that the extracted features reflect overall attributes of the face, such as skin texture, wrinkles, and facial shape; the extracted global features are arranged in a fixed order to form a vector whose dimension depends on the extracted global features, and this is the first feature vector. The difference between the first and second feature vectors is measured by the Euclidean distance

D(A, B) = \sqrt{\sum_{i=1}^{n}\left(A_i - B_i\right)^2}

which measures the straight-line distance between two n-dimensional feature vectors A and B, where n is the dimension of the feature vectors, A_i is the i-th element of A, and B_i is the i-th element of B. This difference value reflects the degree of change or reduction of the facial expression, namely the expression reduction value: the larger the expression reduction value, the larger the change (or the lower the reduction degree) of the facial expression, and the smaller the value, the smaller the change (or the higher the reduction degree). The expression reduction value is combined with changes of illumination conditions to analyze the correlation of each feature in the first feature vector with the facial expression, and inherent features and temporary features are identified: inherent features are, for example, the facial contour and eye shape, while temporary features are, for example, wrinkles caused by expression changes and shadows caused by illumination changes. The variance of the identified temporary features is calculated, a feature threshold is set according to this variance, and vectors exceeding the feature threshold are removed from the first feature vector. The features remaining after removal of the temporary features are sorted, arranged in order, and combined into a new inherent feature vector, the third feature vector, which reflects the facial features with expression- and illumination-related disturbance factors removed.
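A small sketch of the expression-reduction comparison and the variance-based removal of temporary features; the threshold multiplier and the index bookkeeping are assumptions:

```python
import numpy as np

def third_feature_vector(first_vec, second_vec, temp_idx, factor=1.0):
    """first_vec: global-feature array; second_vec: key-point displacement features;
    temp_idx: indices of features identified as temporary (expression/lighting)."""
    n = min(len(first_vec), len(second_vec))
    reduction = float(np.linalg.norm(first_vec[:n] - second_vec[:n]))  # expression reduction value
    temp = first_vec[temp_idx]
    threshold = factor * temp.var()                 # feature threshold from temporary-feature variance
    keep = np.ones(len(first_vec), dtype=bool)
    keep[temp_idx] = np.abs(temp) <= threshold      # drop temporary features over the threshold
    return reduction, np.sort(first_vec[keep])      # sorted remaining features -> third vector
```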
S104, training a regression model on the characteristics of the data in the face data set, inputting the obtained third feature vector into the trained regression model through the character image analysis processing module, and having the regression model output the optimized user age assessment association coefficient;
Further, random forest regression is used as the regression model. The third feature vector serves as the input feature of the training data, the output target is the optimized user age assessment association coefficient, and the input features and output targets are combined into a training data set. This data set is divided into a training set and a validation set: the training set trains the model, and the validation set tunes the model parameters and evaluates model performance. During training the model learns the mapping between input features and output target, performing better and better on the training set through continuous iteration and parameter adjustment; the validation set is then used to evaluate the model's performance. The third feature vector is input into the trained regression model, which outputs the optimized user age assessment association coefficient. The character image processing module performs clothing and hairstyle matching according to the optimized coefficient to obtain a preselected garment and hairstyle, compares the user's original hairstyle and garment in the user image with the preselected ones, screens out the candidate hairstyle and garment sets, further selects the user's image hairstyle and image garment, and imports them into the user's image.
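A compact sketch of this training step with scikit-learn's random forest; the split ratio and estimator count are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def train_age_regressor(third_vectors: np.ndarray, coefficients: np.ndarray):
    """third_vectors: (N, d) feature matrix; coefficients: (N,) association targets."""
    X_train, X_val, y_train, y_val = train_test_split(
        third_vectors, coefficients, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)                          # learn the feature -> coefficient mapping
    print("validation R^2:", model.score(X_val, y_val))  # evaluate on the held-out set
    return model

# usage: rho_opt = train_age_regressor(X, y).predict(third_vec.reshape(1, -1))
```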
The technical scheme of this embodiment has at least the following technical effects or advantages. The accuracy of age prediction is improved by introducing the analysis and processing of dynamic facial expressions: key-point displacements are calculated with the optical flow method, global features and dynamic expression features are fused, the permanent features of the face are described more accurately, and expression-related temporary features are removed, so the third feature vector more accurately reflects the user's actual age characteristics. Calculating the expression reduction value and correcting for it effectively reduces the interference of facial expression changes with the age prediction result, and training the regression model on the third feature vector allows the method to adapt better to the expression changes of different users, improving the generalization of age prediction.
In the third embodiment, based on the first and second embodiments, when matching hairstyles, the user's hairline position, eye corner position, and hair color can blur the age assessment result; meanwhile, when the user is photographed with the high-definition camera, shadows in the shot can affect the captured body state of the user, and that body state can likewise blur the age assessment result.
As shown in FIG. 3, the method for optimizing the user age assessment association coefficient further includes:
S201, acquiring facial and body images of the user through a camera, and extracting the user's facial features and body features with an image processing algorithm;
Specifically, a high-definition camera acquires the user's facial image, which is preprocessed, including adjusting light and contrast and removing possible noise and shadows, to ensure clear image quality and rich detail. A Canny edge detector processes the preprocessed facial image to identify and extract the boundary line between hair and forehead, namely the hairline, which is then smoothed to remove irregular edges and noise. Key features of the human face are extracted with Haar features: a support vector machine classifier is trained with a large number of labeled face and non-face images, the trained classifier is applied to the image under a sliding-window search, and when the classifier judges a region to be a face, detection and localization of the face are completed. The eye region is located using shape features (such as circles and ellipses), its contour is extracted with the Canny edge detector to further confirm the presence and position of the eyes, and a Harris corner detector identifies corner points on the eye contour to recognize the specific corner shapes; the droop or lift of the outer eye corner relative to the horizontal is analyzed by calculating the included angle. Hair color is analyzed in the HSV (hue, saturation, value) color space: the image is converted from RGB to HSV to extract the hair color more accurately, the hair region is located with threshold segmentation, the color of the located hair region is extracted, the average of all color values in the region is calculated and compared with predefined color information, and the hair color is thus determined.
A camera obtains a whole-body image of the user, and the pose analysis algorithm OpenPose extracts the body lines and contours in standing or sitting posture and locates the key joints of the body, such as shoulders, elbows, and knees. The Euclidean distance formula calculates the distance between pairs of joints. For angle measurement, a corresponding joint triplet is selected; to measure the elbow bending angle, the shoulder, elbow, and wrist joints are chosen. Vectors between adjacent joints are calculated: for the shoulder, elbow, and wrist, the shoulder-to-elbow vector and the elbow-to-wrist vector. The dot product and moduli of the vectors give the included angle between the two vectors:

\theta = \arccos\left(\frac{\vec{u} \cdot \vec{v}}{\lVert \vec{u} \rVert \, \lVert \vec{v} \rVert}\right)

wherein \vec{u} and \vec{v} are the vectors between adjacent joints.
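A minimal sketch of this angle computation for a (shoulder, elbow, wrist) triplet:

```python
import numpy as np

def joint_angle(shoulder, elbow, wrist) -> float:
    u = np.asarray(elbow) - np.asarray(shoulder)    # shoulder-to-elbow vector
    v = np.asarray(wrist) - np.asarray(elbow)       # elbow-to-wrist vector
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))  # included angle in degrees
```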
S202, inputting the extracted facial features and body features into an age estimation algorithm, and calculating an age estimation value;
Specifically, the age estimation algorithm is a machine learning model, for example a neural network. The facial and body features are input into the model, which computes the age estimation value by the formula

A = f\left(\sum_{i} w_i x_i + b\right)

wherein A is the age estimation value, f is a nonlinear function that combines and transforms the feature values, w_i are the per-feature weights obtained through model training, reflecting the importance of each feature to age assessment, x_i are the input facial and body feature values, which may be preprocessed or normalized, and b is a bias term used to adjust the model output to better match the actual age distribution. The age estimation value is calculated according to this formula and output by the model.
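A tiny sketch of this weighted-sum estimate; the choice of f (here a sigmoid squashed onto an age range) is an illustrative assumption, and in practice the weights come from training:

```python
import numpy as np

def age_estimate(x, w, b=0.0, lo=0.0, hi=100.0) -> float:
    """x: preprocessed feature values; w: trained per-feature weights; b: bias term."""
    score = float(np.dot(w, x) + b)
    return lo + (hi - lo) / (1.0 + np.exp(-score))  # assumed f: sigmoid onto [lo, hi]
```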
S203, collecting images, selecting candidate hairstyles and clothes from the collected hairstyle images according to the age evaluation association coefficients of the users, matching the candidate hairstyles and clothes with the face and body images of the users, and adjusting the matched hairstyles and clothes by using age evaluation values;
Further, various hairstyle images are collected from fashion magazines, online picture libraries, and professional hairstyle design websites, and the collected images are classified and archived. Candidate hairstyles and clothes are selected from the collected images according to the user age evaluation association coefficient. Using the image processing software Photoshop, the user's facial image and a hairstyle image are opened, the hairstyle image is added as a new layer over the facial image, and the hairstyle layer is moved so that it roughly aligns with the position of the user's head. The degree of coincidence of the candidate hairstyle image with the user's facial image along the hairline and facial contour is then calculated from the pixel distance between the hairstyle edge and the user's facial contour, and the hairstyle with the highest coincidence is selected as the matched hairstyle. Various garment styles and sizes are collected from brand official websites, e-commerce platforms, and garment design works; the user's body lines and contours in standing or sitting posture are compared one by one with the garment size information, the degree of fit is calculated, and the garment with the highest fit is selected as the matched garment.
The age estimation value is used to adjust the matched hairstyle and clothes. The estimates are divided into age groups, namely young (18-30 years), middle-aged (31-50 years), and old (over 50 years), and hairstyle adjustment rules are set for each. The young group tends toward fashionable and lively looks: hairstyles with popular elements, bolder choices, bright or unique colors, and asymmetric, strongly layered styles focusing on personal expression. The middle-aged group tends toward steady and mature looks: classic, concise hairstyles, mainly natural and low-key colors, with styles chosen chiefly to flatter the face shape and suit the workplace or daily occasions. The old group tends toward comfort and easy upkeep: short or medium-long hair, avoiding overly complex styles, in mainly natural and soft colors (hair dyeing may be considered to cover white hair, but not in overly bright colors), with styles focused on practicality and comfort that are easy to comb and maintain. The user's age group is determined from the age estimation value, a suitable hairstyle, color, and style are selected under the rules for that group, and the hairstyle is fine-tuned to meet personalized requirements. Clothing adjustment rules are set in the same way. The young group's garment styles tend toward fashion: fitted or oversized cuts, bright colors, bold designs such as color blocking or splicing, and figure-showing pieces such as jeans and short skirts. The middle-aged group tends toward steadiness: classic cuts, natural and restrained colors, and garments suited to professional and daily wear. The old group's garment styles tend toward comfort and looseness: loose garments in mainly natural and soft, dark or light colors that are easy to put on, take off, and move in, with emphasis on wearing comfort and convenience. The user's age group is determined from the age estimation value, a suitable garment style, color, and cut are selected under the rules for that group, and the garments are fine-tuned according to the user's personal preferences, physical characteristics, and needs.
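One way to encode these rules is a lookup table keyed by age band; this sketch condenses the bands above, with the rule strings as paraphrases:

```python
AGE_BAND_RULES = [
    ((18, 30), {"hair": "trend styles, bright or unique colours, strong layering",
                "clothes": "fitted or oversized cuts, bright colours, jeans, short skirts"}),
    ((31, 50), {"hair": "classic concise cuts, natural low-key colours, face-framing",
                "clothes": "steady mature office and daily wear"}),
    ((51, 200), {"hair": "short or mid-length, easy to maintain, soft natural colours",
                 "clothes": "loose comfortable garments, easy to put on and take off"}),
]

def rules_for_age(age: float) -> dict:
    for (lo, hi), rules in AGE_BAND_RULES:
        if lo <= age <= hi:
            return rules
    return AGE_BAND_RULES[-1][1]                    # fall back to the oldest band
```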
The technical scheme of this embodiment has at least the following technical effects or advantages. By adjusting the factors that influence age evaluation, the fit of the hairstyle and clothes to the user is improved, making the age evaluation more accurate and improving the user's overall image. By taking factors such as hairline position, eye corner position, and hair color into account and eliminating the influence of photographic shadows, the accuracy of age evaluation is significantly improved. Adjusting the hairstyle and clothes according to the age estimation value makes them fit the user's age and image better, improving user satisfaction and confidence; and providing a personalized hairstyle and clothing matching service meets users' pursuit of beauty and youthfulness and enhances market competitiveness.
In the fourth embodiment, based on the first to third embodiments, judging whether the user's age matches their hairstyle and clothing only from a frontal photo (a single direction) is not accurate enough. In this scheme, the whole outline and center point of the human face are integrated from the obtained facial image and matched with the obtained human body image, and a 3D image model is constructed from the resulting matching data, achieving all-round coordination and natural fluency of the hairstyle, clothing, and dynamic change effects, as shown in FIG. 4.
S301, constructing a three-dimensional image model from the acquired facial and body images of the user;
Specifically, age estimation data is obtained from the acquired facial and body images of the user. The 3D modeling software Blender is used to create a head model, and a body model is created from the acquired body image with the same software. The head and body models are then combined into a complete 3D image model, ensuring that the proportions and connection between head and body are natural and coordinated. The parameters of the created 3D model are adjusted so that the hairstyle and clothing remain harmonious and consistent from every direction (front, side, back, and so on); the adjustments cover facial details, body proportions, skin color, and the like, so that the model closely resembles the user's actual appearance. Finally, the model is adjusted with the age evaluation value, refining skin texture, facial features, and similar attributes so that the user's age is reflected more accurately.
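The embodiment names Blender but gives no script, so the following is only a sketch of step S301 using Blender's Python API (bpy, version 2.8 or later); the primitive sphere and cylinder are stand-ins for the reconstructed head and body meshes, and head_radius and body_height are hypothetical values that would in practice come from the captured images.

    # Sketch of step S301 in Blender: build placeholder head and body
    # meshes, size them from user-derived measurements, and join them
    # into one 3D figure for all-round inspection.
    import bpy

    head_radius = 0.11   # meters, assumed from the facial image
    body_height = 1.70   # meters, assumed from the full-body image

    # Head: a UV sphere placed on top of the torso.
    bpy.ops.mesh.primitive_uv_sphere_add(
        radius=head_radius, location=(0, 0, body_height - head_radius))
    head = bpy.context.active_object
    head.name = "Head"

    # Body: a cylinder as a stand-in torso/limb volume.
    torso_height = body_height - 2 * head_radius
    bpy.ops.mesh.primitive_cylinder_add(
        radius=0.18, depth=torso_height, location=(0, 0, torso_height / 2))
    body = bpy.context.active_object
    body.name = "Body"

    # Join head and body into a single object so hairstyle and clothing
    # can be fitted and checked from front, side, and back on one model.
    bpy.ops.object.select_all(action='DESELECT')
    body.select_set(True)
    head.select_set(True)
    bpy.context.view_layer.objects.active = body
    bpy.ops.object.join()
    bpy.context.active_object.name = "UserFigure"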
S302, acquiring real motion data with motion capture technology and simulating changes of the three-dimensional image model from the real motion data; while the 3D model moves, judging in real time whether the clothing and hairstyle match; if they do not match, adjusting the age evaluation value; if they match, keeping the age evaluation value unchanged.
Specifically, a mechanical motion capture device is selected and marker points are attached at key positions (such as the joints). Data is captured while the user walks and turns, and the captured data is stored. A skeleton system and an animation controller are set up in the three-dimensional image model, the captured data is mapped onto the skeleton of the 3D model, and the model is driven to perform the corresponding motions according to the mapped motion data. Clothing swing-amplitude consistency refers to the degree of match between the swing amplitude of the clothing in the 3D model and the swing amplitude of similar clothing in reality: the maximum swing distance (or angle) of the clothing in the 3D model during walking or turning is measured and compared with the swing amplitude of similar clothing performing the same action in a real video; if the difference does not exceed 5% of the set swing-amplitude difference, the swing amplitude is considered consistent with reality, otherwise it is inconsistent. Rationality of the hairstyle shape change rate refers to whether the rate at which the hairstyle's shape changes during motion conforms to the expected physical behavior: the rate of change of the hairstyle's shape per unit time in the 3D model is calculated and compared with the shape change rate of a similar hairstyle performing the same action in a real video; if the difference from reality does not exceed 15%, the change rate is considered reasonable; if it exceeds 15%, the age evaluation value is adjusted.
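A minimal sketch of the real-time match test follows, assuming the 5% swing-amplitude and 15% shape-change-rate tolerances stated above; the measured quantities are passed in as plain numbers, standing in for the motion-capture and video-analysis pipeline, which is not reproduced here.

    # Sketch of the S302 match test: clothing swing within 5% of the
    # reference video and hairstyle shape-change rate within 15%.
    SWING_TOLERANCE = 0.05   # 5% allowed swing-amplitude difference
    SHAPE_TOLERANCE = 0.15   # 15% allowed shape-change-rate difference

    def relative_difference(model_value: float, real_value: float) -> float:
        """Relative deviation of the 3D-model measurement from the real video."""
        return abs(model_value - real_value) / real_value

    def outfit_matches(model_swing: float, real_swing: float,
                       model_rate: float, real_rate: float) -> bool:
        """True when both the clothing swing and the hairstyle shape-change
        rate stay within tolerance; otherwise the age evaluation value
        should be adjusted."""
        return (relative_difference(model_swing, real_swing) <= SWING_TOLERANCE
                and relative_difference(model_rate, real_rate) <= SHAPE_TOLERANCE)

    # Example: swing of 0.52 m vs a 0.50 m reference (4%) and a shape rate
    # of 1.10 vs 1.00 per second (10%): both within tolerance, no adjustment.
    print(outfit_matches(0.52, 0.50, 1.10, 1.00))   # True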
The technical scheme of this embodiment has at least the following technical effects or advantages. The matching degree between the user's age and the hairstyle and clothing is no longer judged from a front photo alone: by constructing a three-dimensional image model, the user's actual appearance is simulated comprehensively from every direction (front, side, back, and so on), ensuring that the hairstyle and clothing remain harmonious and consistent on the user's 3D image model and improving the accuracy and realism of the simulation. By combining motion capture technology, the user's real motion data is acquired in real time and the changes of the three-dimensional image model are simulated accordingly; while the 3D model moves, whether the clothing and hairstyle match is judged in real time, and if they do not match the age evaluation value is adjusted, ensuring the coordination and natural fluency of the dynamic-change effects. Through this all-round, high-precision image simulation and the real-time judgment and adjustment of dynamic-change effects, the user is given a more realistic and natural virtual-image experience.
The above description covers only the preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A character scene generation system based on AI interaction, characterized in that it comprises: a user interaction design module, a character image shooting module, a character image analysis and processing module, a character image processing module, an image integration and correction module, an image generation module, and a management database; the character image shooting module is used to obtain a user image and a microstructure image of the skin surface with a high-definition camera, the microstructure comprising a skin glossiness reference value, a wrinkle level reference value, pore features, and pigment deposition, and the user image containing hair coverage; the pore features are calculated with an algorithm, and color conversion is applied to the microstructure image to obtain the pigment deposition; the user age assessment correlation coefficient is calculated from the skin glossiness, wrinkle level, pore features, pigment deposition, and hair coverage; the character image analysis and processing module is used to obtain user age assessment correlation data from the user image and the skin-surface microstructure, predict the user's age, and thereby obtain the user age assessment correlation coefficient; the image integration and correction module is used to integrate the user interaction scene and the user decoration image to obtain the user scene decoration image.

2. The AI-interaction-based character scene generation system according to claim 1, characterized in that the specific steps of calculating pore features with an algorithm are: using an algorithm to detect edges in the image where the gray level changes sharply, the edges corresponding to the contours of the pores; segmenting the identified pores; extracting size features from the segmented pore regions, where the pore area is obtained by measuring the diameter of a pore region and counting the number of pixels within it; and extracting shape features from the segmented pore regions, where circularity is evaluated by calculating the ratio of the perimeter of a pore region to its area and, for non-circular pores, the ratio of the longest diameter to the shortest diameter, i.e. the aspect ratio, is calculated.

3. The AI-interaction-based character scene generation system according to claim 1, characterized in that the user age assessment correlation coefficient ρ is calculated from the skin glossiness reference value ψ, the wrinkle level reference value ξ, the average pore diameter, the quantified indicator of pigment deposition, and the hair coverage λ, where the weight coefficients of the individual features for age assessment are set and adjusted according to actual data or experience, e is the natural constant, and an adjustment parameter governs the influence of glossiness.

4. The AI-interaction-based character scene generation system according to claim 1, characterized in that the method for predicting the user's age further comprises: S101, collecting a facial data set and annotating facial key points on the collected data; S102, calculating the displacement of the facial key points with an optical flow method according to the annotated key points; S103, constructing feature vectors from the calculated key-point displacements, the feature vectors comprising a first feature vector, a second feature vector, and a third feature vector; S104, training a regression model according to the characteristics of the data in the facial data set, and inputting the obtained third feature vector into the trained regression model through the character image analysis and processing module, the regression model outputting the optimized correlation coefficient of the user age assessment.

5. The AI-interaction-based character scene generation system according to claim 4, characterized in that the displacement vectors of the facial key points calculated by the optical flow method are arranged in order to form the second feature vector, whose dimension equals the number of facial key points, each dimension corresponding to the displacement vector of one key point; global features are extracted from the facial image data set and arranged in order to form the first feature vector; the first and second feature vectors are compared and the difference value, i.e. the expression restoration value, is calculated to identify inherent features and temporary features; the variance of the identified temporary features is calculated and a feature threshold is set from the calculated variance; the vectors in the first feature vector that exceed the feature threshold are removed; and the features remaining after removal are sorted, arranged in order, and combined into the third feature vector.

6. The AI-interaction-based character scene generation system according to claim 4, characterized in that the method for optimizing the correlation coefficient of the user age assessment further comprises: S201, acquiring the user's facial and body images with a camera and extracting the user's facial features and body features with an image processing algorithm; S202, inputting the extracted facial and body features into an age assessment algorithm to calculate an age assessment value; S203, collecting images, selecting candidate hairstyles and clothing from the collected hairstyle images according to the user age assessment correlation coefficient, matching the candidate hairstyles and clothing with the user's facial and body images, and adjusting the matched hairstyle and clothing with the age assessment value.

7. The AI-interaction-based character scene generation system according to claim 6, characterized in that a camera is used to obtain a full-body image of the user; an algorithm is used to extract the body lines and contour when standing and to locate the key joint points of the body; the distance formula is used to calculate the distance between two joint points; for angle measurement, the corresponding joint-point triplet is selected, so that to measure the elbow bending angle the shoulder, elbow, and wrist joint points are selected; the vectors between adjacent joint points are calculated, namely, for shoulder, elbow, and wrist, the shoulder-to-elbow vector and the elbow-to-wrist vector; and the angle θ between the two vectors is calculated from their dot product and magnitudes as θ = arccos((v₁·v₂)/(|v₁||v₂|)), where v₁ and v₂ are the vectors between adjacent joint points.

8. The AI-interaction-based character scene generation system according to claim 6, characterized in that the steps of adjusting the age assessment value are: S301, constructing a three-dimensional image model from the collected facial and body images of the user; S302, acquiring real motion data with motion capture technology and simulating changes of the three-dimensional image model from the real motion data; while the 3D model moves, judging in real time whether the clothing and hairstyle match; if they do not match, adjusting the age assessment value; if they match, keeping the age assessment value unchanged.

9. The AI-interaction-based character scene generation system according to claim 8, characterized in that, while the three-dimensional image model moves, whether the clothing and hairstyle match is judged in real time from the swing amplitude of the clothing and the rate of change of the hairstyle shape; clothing swing-amplitude consistency refers to the degree of match between the swing amplitude of the clothing in the 3D model and that of similar clothing in reality: the maximum swing distance of the clothing in the 3D model during walking is measured and compared with the swing amplitude of similar clothing performing the same action in a real video, and if the difference does not exceed 5% of the set swing-amplitude difference the swing amplitude is considered consistent with reality, otherwise it is inconsistent; rationality of the hairstyle shape change rate refers to whether the rate at which the hairstyle's shape changes during motion conforms to the expected physical behavior: the rate of change of the hairstyle shape in the 3D model during walking is calculated and compared with the shape change rate of a similar hairstyle performing the same action in a real video, and if the difference from reality does not exceed 15% the change rate is considered reasonable; if it exceeds 15%, the age assessment value is adjusted.

10. The AI-interaction-based character scene generation system according to claim 1, characterized in that the user interaction design module comprises a speech recognition unit and a scene matching unit; the speech recognition unit is used to collect the user's voice information, convert the voice information into text, and extract keywords; and the scene matching unit is used to match the extracted keywords with the scene keywords in the database and screen them to obtain the user interaction scene.
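As a hedged illustration of the pore-feature extraction recited in claim 2 (edge detection on sharp gray-level changes, segmentation, then size and shape features), the following OpenCV sketch computes the pixel-count area, the perimeter-to-area ratio used as circularity, and the aspect ratio; the choice of the Canny operator, its thresholds, the input file name, and the library itself are assumptions, since the claim names no specific operator.

    # Sketch of the pore-feature extraction in claim 2 (assumptions noted above).
    import cv2
    import numpy as np

    def pore_features(gray: np.ndarray) -> list[dict]:
        """Return size and shape features for each segmented pore region."""
        edges = cv2.Canny(gray, 50, 150)              # edges ~ pore contours
        contours, _ = cv2.findContours(
            edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        features = []
        for c in contours:
            area = cv2.contourArea(c)                 # pixel-count area
            if area < 1.0:                            # skip degenerate regions
                continue
            perimeter = cv2.arcLength(c, True)
            (_, _), (w, h), _ = cv2.minAreaRect(c)    # bounding-box sides
            long_d, short_d = max(w, h), max(min(w, h), 1e-6)
            features.append({
                "area": area,
                "circularity": perimeter / area,      # perimeter/area ratio, as in claim 2
                "aspect_ratio": long_d / short_d,     # longest/shortest diameter
            })
        return features

    # Hypothetical input image of a skin patch.
    img = cv2.imread("skin_patch.png", cv2.IMREAD_GRAYSCALE)
    print(pore_features(img)[:3])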
CN202510295615.XA 2025-03-13 2025-03-13 A character scene generation system based on AI interaction Pending CN120147460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510295615.XA CN120147460A (en) 2025-03-13 2025-03-13 A character scene generation system based on AI interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510295615.XA CN120147460A (en) 2025-03-13 2025-03-13 A character scene generation system based on AI interaction

Publications (1)

Publication Number Publication Date
CN120147460A true CN120147460A (en) 2025-06-13

Family

ID=95958504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510295615.XA Pending CN120147460A (en) 2025-03-13 2025-03-13 A character scene generation system based on AI interaction

Country Status (1)

Country Link
CN (1) CN120147460A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180350071A1 (en) * 2017-05-31 2018-12-06 The Procter & Gamble Company Systems And Methods For Determining Apparent Skin Age
CN109300105A (en) * 2017-07-25 2019-02-01 上海中科顶信医学影像科技有限公司 Pore detection method, system, equipment and storage medium
CN107679507A (en) * 2017-10-17 2018-02-09 北京大学第三医院 Facial pores detecting system and method
CN109730637A (en) * 2018-12-29 2019-05-10 中国科学院半导体研究所 A system and method for quantitative analysis of facial images
US10818012B1 (en) * 2020-06-18 2020-10-27 Neo Derm Group Limited Method for facial skin age estimating and electronic device
CN112329607A (en) * 2020-11-03 2021-02-05 齐鲁工业大学 Age prediction method, system and device based on facial features and texture features
CN114445302A (en) * 2022-01-30 2022-05-06 北京字跳网络技术有限公司 Image processing method, device, electronic device and storage medium
CN117033688A (en) * 2023-08-11 2023-11-10 翡梧(上海)创意设计有限公司 Character image scene generation system based on AI interaction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张新林;: "基于人脸图像衰老特征相关性的年龄估计方法", 计算机仿真, no. 09, 15 September 2012 (2012-09-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120543452A (en) * 2025-07-25 2025-08-26 浙江建设职业技术学院 A decoration effect display method and naked-eye VR system for interior design
CN120580520A (en) * 2025-08-04 2025-09-02 西部(重庆)科学城种质创制大科学中心 A hierarchical processing method for mandarin fish images based on appearance feature extraction

Similar Documents

Publication Publication Date Title
JP7598917B2 (en) Virtual facial makeup removal, fast face detection and landmark tracking
CN109690617B (en) System and method for digital cosmetic mirror
US9058765B1 (en) System and method for creating and sharing personalized virtual makeovers
US9142054B2 (en) System and method for changing hair color in digital images
CN106056064B (en) A kind of face identification method and face identification device
CN120147460A (en) A character scene generation system based on AI interaction
CN109376582A (en) An Interactive Face Cartoon Method Based on Generative Adversarial Networks
JP2024500896A (en) Methods, systems and methods for generating 3D head deformation models
JP2020526809A5 (en)
US20100189357A1 (en) Method and device for the virtual simulation of a sequence of video images
CN116648733A (en) Method and system for extracting color from facial image
KR20140033088A (en) Generation of avatar reflecting player appearance
CN108985873A (en) Cosmetics recommended method, the recording medium for being stored with program, the computer program to realize it and cosmetics recommender system
CN117033688B (en) Character image scene generation system based on AI interaction
CN108460398A (en) Image processing method, device, cloud processing equipment and computer program product
CN112819718A (en) Image processing method and device, electronic device and storage medium
AU2019364148A1 (en) Digital character blending and generation system and method
JP5035524B2 (en) Facial image composition method and composition apparatus
CN103714225A (en) Information system of automatic make-up and its method of applying make-up
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
US20190347469A1 (en) Method of improving image analysis
CN113763498A (en) Portrait simple-stroke region self-adaptive color matching method and system for industrial manufacturing
JP4893968B2 (en) How to compose face images
CN114240743A (en) Skin beautifying method based on high-contrast buffing human face image
CN119600654A (en) A method and system for identifying ethnic costumes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination