HK1138662B - Method for simulation of facial skin aging and de-aging - Google Patents
Description
Technical Field
The present invention relates to the field of image processing and simulation, in particular to image generation for portraying simulated skin aging or de-aging.
Background
The effects of skin aging on the appearance of human faces have been well studied and documented in dermatology. The aging progression of the skin of each individual depends on both intrinsic and extrinsic factors. Intrinsic factors such as gender, race, and skin color are genetically controlled, unique to each individual, and affect the rate of epidermal thinning, loss of mechanical elasticity, and other well-characterized histological and biomechanical changes that occur with age. Intrinsic factors affect both the sun-protected and sun-exposed parts of the body. Extrinsic factors include the individual's diet, lifestyle, skin care habits, and sun exposure history. It is well known that long-term exposure to sunlight accelerates the onset and severity of skin aging. All exposed body parts, including the face, exhibit some degree of skin photoaging. (Gilchrest, B., Photodamage, Blackwell Science, Inc., 1995.)
One of the most visually prominent features of photoaged skin is mottled, irregular pigmentation, which appears as dark brown spots on the skin (Griffiths, C.E.M., "The clinical identification and quantification of photodamage", Brit. J. Dermatol., vol. 127, suppl. 41, pp. 37-42, 1992; K. Miyamoto et al., "Utilization of a high-resolution digital imaging system for the objective and quantitative assessment of hyperpigmented spots on the face", Skin Research and Technology, vol. 8, no. 2, pp. 73-78, May 2002, hereinafter the "Miyamoto reference"). These pigmented lesions are called age spots, liver spots, senile lentigines, or solar lentigines. Hyperpigmentation in photodamaged skin is better observed with methods that reveal subsurface pigmentation not visible under standard white light. A method called UV-excited fluorescence photography, originally proposed by Kollias (Kollias et al., "Fluorescence photography in the evaluation of hyperpigmentation in photodamaged skin", J. Am. Acad. Dermatol., vol. 36, pp. 226-230, 1997), involves imaging the skin in a narrow band of UVA centered at 365 nm. Epidermal melanin absorbs strongly in this UVA range, about 3 to 5 times more than in the visible spectrum. Any UVA not absorbed by epidermal melanin enters the dermis, where it is scattered and absorbed by collagen and elastin fibers, which convert some of the absorbed energy into fluorescence. The wavelength of maximum collagen emission occurs in the visible spectrum, centered at 420 nm. Melanin is twice as absorptive in vivo at 420 nm as it is at 540 nm. Thus, the UVA entering the skin and reaching the dermis is attenuated by epidermal melanin about 5-fold, and the resulting visible fluorescence is attenuated by the same epidermal melanin about 2-fold. In other words, epidermal melanin detection using UV-excited fluorescence is about 10 times more sensitive than detection under visible light.
This enhancement in sensitivity allows the detection of pigmented spots that are not visible under normal white-light imaging. A pigmented spot that cannot yet be observed under visible light will, without intervention, darken over the second half of life and become clearly visible under normal visible light.
Other prominent features of aged skin are rough texture and wrinkles, caused in part by the gradual alteration and loss of dermal connective tissue such as collagen (Leyden, J.J., "Clinical features of ageing skin", Br. J. Dermatol., vol. 122, suppl. 35, pp. 1-3, 1990), particularly in the sun-exposed areas of the body (Bailey, A.J., "Molecular mechanisms of ageing in connective tissues", Mech. Ageing Dev., vol. 122, no. 7, pp. 735-755, 2001). Hyperpigmentation, wrinkles and rough texture are visible skin features that play an important role in the overall appearance and health of the skin.
Methods that accurately simulate the aging process therefore have practical value. Aging simulation has several useful applications, such as computer animation, facial recognition, missing person identification, entertainment, medicine, and cosmetics. Realistic simulations of aging faces have been achieved using various models, including geometric, physical, image-based, or biomechanical models (Hussein, K.H., "Toward realistic facial modeling and re-rendering of human skin aging animation", Proceedings of Shape Modeling International 2002, IEEE Computer Society, 2002). Attempts have been made to customize the aging simulation so that it more accurately depicts the appearance of future aging for a particular individual. For example, aging algorithms have been developed to simulate the appearance of aging in individuals based on a cohort of images combined with published data on age-related changes in the face (Hysert, P.E. et al., "'At Face Value': age progression software provides personalised demonstration of the effects of smoking on appearance", Tobacco Control, vol. 12, pp. 238-240, 2003). A limitation of this approach is that the aged image reflects population norms and does not necessarily reflect an individual's unique aging process.
Boissieux et al. developed an image-based model to simulate skin aging, applying a pre-computed generic wrinkle mask as a texture on a 3D model of an individual's face. Eight basic masks are utilized, and the specific mask used is matched to the gender, facial shape, and type of expression to be simulated (Boissieux et al., "Simulation of skin aging and wrinkles with cosmetics insight", Computer Animation and Simulation, pp. 15-27, 2000). Because it relies on population averages, this method is limited in its ability to accurately predict the unique skin characteristics that each person will develop with age.
Zhang et al. describe a method for transferring the geometric details of an aged face onto a young face in order to make the young face appear older (Zhang et al., "System and method for image-based surface detail transfer", US7020347B2, 2006). Conversely, the surface details of a young face may be transferred onto an old face to make the old face look young. This approach is limited by the fact that the aging characteristics of someone else's old face will not be exactly the same as those the young face will actually develop.
Summary of The Invention
The present invention relates to processing methods and apparatus for processing facial images to detect and manipulate skin features such as hyperpigmented spots, wrinkles and fine texture features in order to overcome the aforementioned limitations. In one aspect of the invention, a computer-executable method is provided for detecting and depicting relevant portions of a digital facial image in which the aforementioned skin features are detected. Furthermore, computer-executable methods are employed to detect skin features and to manipulate them, for example by emphasizing or de-emphasizing their appearance, to simulate aging and/or de-aging of the skin.
In yet another aspect of the invention, digital images acquired under ultraviolet illumination are processed to detect the presence of spots that are not visible under standard lighting conditions and to predict their growth and potential visibility.
Methods are provided for discriminating between various types of facial features (e.g., spots, wrinkles, texture, and other features) and correctly simulating the aging and de-aging of those features based on their type.
The above and other aspects and features of the present invention will be apparent from the following drawings and detailed description.
Brief description of the drawings
FIG. 1 is a high-level flow chart showing an exemplary method for aging/de-aging simulation of spots, wrinkles and texture of facial skin in accordance with the present invention.
Fig. 2 is a flow chart showing an exemplary facial skin detection method according to the present invention.
FIG. 3A shows an exemplary facial skin mask generated based on an entire facial oblique view image; FIG. 3B shows an exemplary spot/wrinkle aging mask (area within the black line); and FIG. 3C shows an exemplary texture aging mask (the area under the horizontal black line and to the left of the vertical black line) generated according to an exemplary embodiment of the present invention.
FIG. 4 is a flow diagram of a spot aging simulation process according to an exemplary embodiment of the present invention.
FIG. 5 is a flow chart of an exemplary process of detecting UV spots and calculating contrast according to the present invention.
FIG. 6 is a flow diagram of an exemplary spot de-aging process in accordance with the present invention.
FIGS. 7A and 7B show a flow chart of an exemplary spot detection algorithm according to the invention.
Fig. 8 is a flow chart of an exemplary wrinkle aging and de-aging simulation process in accordance with the present invention.
Fig. 9 is a flow chart of an exemplary wrinkle detection process according to the present invention.
Fig. 10 is a flow chart of an exemplary ridge detection process according to the present invention.
Fig. 11 is a block diagram of an exemplary embodiment of a system for implementing the present invention.
FIG. 12 is a flow diagram of an exemplary texture aging process in accordance with the present invention.
FIG. 13 is a flow diagram of an exemplary texture de-aging process in accordance with the present invention.
FIG. 14 is a flow chart of an exemplary process for combined facial skin aging simulation, combining the skin aging indicated by spots, wrinkles and texture, in accordance with the present invention.
FIG. 15 is a flow chart of an exemplary process for combined facial skin de-aging simulation, combining the skin de-aging indicated by spots, wrinkles and texture, in accordance with the present invention.
Detailed Description
Summary of exemplary embodiments
FIG. 1 is a high-level flow diagram showing an exemplary method for aging/de-aging simulation of spots, wrinkles and texture of facial skin according to the present invention. At 101, a close-up facial photograph taken under standard lighting, for example with a conventional digital camera, is provided as input. At 111, a photograph of the same subject taken in an ultraviolet illumination mode (ultraviolet light source with an ultraviolet filter in front of the camera) is also provided as input. To provide standardized and reproducible illumination conditions and image fiducials, the two images are preferably acquired with an automatically controlled facial image acquisition system, such as the VISIA Complexion Analysis System (hereafter VISIA) available from Canfield Scientific, Inc. Furthermore, the two images should preferably be acquired in an oblique view to better show the cheek area, where large skin spots occur.
Generally, the standard illumination image input at 101 will be represented as an RGB (red, green, blue) color image. It should be noted, however, that the present invention is not limited to any particular format. In step 105, the RGB image is converted into the 1976 CIE L*a*b* color space. Such a color conversion is commonly used in the art to separate the luminance and chrominance components of an image. Hereinafter, the L*a*b* conversion will be referred to as LAB conversion, and the converted image as the LAB image. The L channel of the LAB image represents luminance, while the A and B components represent chrominance. Several of the skin feature analysis and resynthesis operations described herein are performed on LAB images. Although the various embodiments described show the use of the LAB color space, the invention can be implemented in other color spaces comprising luma and chroma components.
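As an illustration of the color conversion at 105, a single 8-bit sRGB pixel can be converted to CIE 1976 L*a*b* as sketched below. This is not the patent's implementation; the sRGB linearization, conversion matrix, and D65 white point are standard textbook values assumed here for the sketch.

```python
# Minimal sketch: 8-bit sRGB pixel -> CIE 1976 L*a*b* (D65 white point).
import math

def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIE L*a*b* under a D65 illuminant."""
    def linearize(c):
        # Undo sRGB gamma encoding
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Linear sRGB -> CIE XYZ (D65)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # Normalize by the D65 reference white
    xn, yn, zn = x / 0.95047, y / 1.0, z / 1.08883

    def f(t):
        # CIE cube-root compression with the linear toe for small t
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

    fx, fy, fz = f(xn), f(yn), f(zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b_ = 200.0 * (fy - fz)
    return L, a, b_
```

Applied per pixel, this yields the L (luminance) and A, B (chrominance) channels referred to throughout the text.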
At 103, facial skin detection is performed, which entails determining which pixels of the facial image represent skin (rather than hair, eyes, lips, nasolabial folds, etc.). The facial skin detection process is described in detail below.
Using the skin pixels determined at 103, the operation then proceeds to 107, where specific areas or "masks" of the face are delineated for performing the aging simulations of spots, wrinkles and texture. A first mask covering certain portions of the face is generated for the spot and wrinkle simulation, and a second mask covering other portions is generated for the texture simulation. The mask generation process is described in detail below.
Aging and de-aging simulations of spots, wrinkles and texture are performed at 113, 115 and 117, respectively. The spot aging/de-aging simulation at 113 receives the LAB-converted standard image (from 105), the UV image in the RGB domain (from 111), and the "spot and wrinkle aging mask" (from 107), and generates a spot-aged image at 121 and a spot-de-aged image at 122.
The wrinkle aging/de-aging simulation in 115 receives the LAB-converted image (from 105) and the "spots and wrinkle aging mask" (from 107) and generates wrinkle-aged and de-aged images in 123 and 124, respectively.
The texture aging/de-aging simulation in 117 receives the LAB-converted image (from 105) and the texture aging mask (from 107) and generates a texture aged image and a de-aged image in 125 and 126, respectively.
The implementation of the aging and de-aging simulation of spots (113), wrinkles (115) and textures (117) and the generation of a composite image in which individual aged and de-aged images are combined is described in greater detail below. An interactive slider application for demonstrating the transition between aged and de-aged images on a computer monitor is also described below.
Fig. 11 is a block diagram of an exemplary embodiment of a system 1100 that may be used to implement the present invention. As shown in FIG. 11, system 1100 includes an image acquisition subsystem 1110, such as the aforementioned VISIA Complexion Analysis System or the like, which is coupled to a general purpose computer 1120, which in turn is coupled to an output device 1130. The computer 1120 may be a personal computer or the like programmed to operate in accordance with the present invention. The output device 1130 may include one or more of a variety of devices, such as a conventional computer monitor, which the computer 1120 controls to display images such as the results of the various simulations performed in accordance with the present invention; a printing device; a storage device; a communication device; and the like. It should be understood that the present invention may be implemented in a wide variety of hardware configurations and is not limited by the system of FIG. 11.
Facial skin detection
An aging simulation based on skin features should be performed only on the skin areas of the face. In an exemplary embodiment of the invention, non-skin areas of the face such as lips, hair, eyes, eyebrows, nostrils, etc. are excluded from the simulation. The skin area of the face is determined from the standard facial image. Several skin detection algorithms have been developed for various purposes, including face detection. (See, e.g., R.L. Hsu et al., "Face detection in color images", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 696-707, May 2002.) Such skin detection algorithms can be used for facial skin aging simulation according to the present invention if they provide a suitable level of granularity.
Alternatively, skin detection (and subsequent mask generation) can be performed manually, i.e., with user input. Given an image of a face, the user may outline the skin area using conventional computer-based drawing techniques; the outline then defines the mask to be used for the aging/de-aging simulation. Although computationally simple, this approach has several disadvantages: it risks including non-skin parts of the face in the simulation, and it introduces the subjectivity of the user, possibly leading to large variations in the results.
In a preferred embodiment, a new skin detection algorithm is employed that extracts only the uniformly illuminated portion of facial skin from an oblique or front view and excludes non-skin areas (eyes, eyebrows, hair, mustache, and beard) and shadowed skin areas (e.g., the neck area). Skin detection is performed according to the Individual Typology Angle (ITA) metric calculated from the L and B measurements. (See G.N. Stamatas et al., "Non-Invasive Measurements of Skin Pigmentation In Situ", Pigment Cell Research, vol. 17, pp. 618-626, 2004.) The ITA is defined for each image pixel (i, j) as arctan((L[i, j] - 50)/B[i, j]) and is related to the melanin concentration of the skin. It is assumed that the ITA values of skin pixels will cluster around a common value, while the ITA values of non-skin pixels will fall significantly further from it.
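The per-pixel ITA formula above can be sketched as follows, assuming the L and B channels are available as floating-point numpy arrays (a hypothetical helper, not the patent's code):

```python
import numpy as np

def ita_image(L, B):
    """Per-pixel Individual Typology Angle, in degrees:
    ITA = arctan((L - 50) / B).
    arctan2 is used instead of arctan to avoid division by zero at B = 0."""
    return np.degrees(np.arctan2(L - 50.0, B))
```

For B > 0 this matches arctan((L - 50)/B) exactly; e.g., a pixel with L = 77 and B = 27 yields ITA = 45 degrees.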
Fig. 2 is a flowchart illustrating an exemplary facial skin detection process according to the present invention, which utilizes the aforementioned ITA scheme. Prior to skin detection, rough face detection is performed to cut the face region out of the full image containing face, hair, neck, and background. The detected face region should include all skin regions of the face, but may also include the facial features (eyes, eyebrows, nostrils, lips, hair). To this end, the face region is segmented from the close-up image using the LUX color space. (See M. Liévin et al., "Nonlinear color space and spatiotemporal MRF for hierarchical segmentation of face features in video", IEEE Transactions on Image Processing, vol. 13, no. 1, January 2004.)
As shown in FIG. 2, the process starts with a standard RGB full-head image, such as that provided at 101 above. The image is converted from RGB to LUX space at 203 using the technique described in the reference cited above.
At 205, the face region is segmented. This may be done, for example, by applying Otsu's thresholding method to the U channel of the LUX image. (See N. Otsu, "A Threshold Selection Method from Gray-Level Histograms", IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979, hereinafter the "Otsu reference".) A face mask delineating the face region is generated at 205. The rest of the facial skin detection process may then be performed only within the face region, thereby reducing the search space and the computational cost.
At 207, the original RGB image, masked according to the segmentation performed at 205, is converted into LAB space. Subsequent ITA calculations are thus performed within the face region to further segment out the non-skin portions of the face. Because the division and arctangent operations of the ITA calculation are sensitive to noise, it is preferable to first smooth the L and B channels. As shown, such smoothing may be performed at 209L and 209B, respectively, by filtering the L and B images with a two-dimensional Gaussian filter or a similar technique. For an operating resolution of 220 PPI, the variance of the filter is chosen to be 5 for the L channel and 1.5 for the B channel.
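The smoothing at 209L and 209B can be sketched with a separable two-dimensional Gaussian filter. The variances (5 for L, 1.5 for B at 220 PPI) follow the text; the kernel truncation radius and edge handling below are assumptions of this sketch.

```python
import numpy as np

def gaussian_kernel1d(variance, radius=None):
    """Normalized 1-D Gaussian kernel, truncated at about 3 sigma."""
    sigma = variance ** 0.5
    if radius is None:
        radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * variance))
    return k / k.sum()

def gaussian_smooth(img, variance):
    """Separable Gaussian filtering with edge-replicated borders."""
    k = gaussian_kernel1d(variance)
    r = len(k) // 2
    # Filter rows, then columns (a 2-D Gaussian is separable)
    padded = np.pad(img, ((r, r), (0, 0)), mode="edge")
    rows = sum(k[i] * padded[i:i + img.shape[0], :] for i in range(len(k)))
    padded = np.pad(rows, ((0, 0), (r, r)), mode="edge")
    return sum(k[i] * padded[:, i:i + img.shape[1]] for i in range(len(k)))
```

The same helper would be applied to L with variance 5 and to B with variance 1.5.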
At 211, the ITA of each pixel within the face region is calculated according to the formula arctan((L[i, j] - 50)/B[i, j]). The ITA image is a grayscale image in the [0, 90] range, where smaller ITA values correspond to skin pixels and larger values correspond to non-skin pixels. This grayscale image is divided into two regions using Otsu thresholding at 213. To this end, the histogram of the ITA image is calculated only within the face region. From the histogram, the Otsu thresholding algorithm returns a threshold that segments the image into two classes with minimal intra-class variance. Furthermore, a priori information about the ratio of the skin region to the whole face image can be incorporated into this thresholding. (See Hu et al., "Supervised range-constrained thresholding", IEEE Transactions on Image Processing, vol. 15, no. 1, pp. 228-240, January 2006, hereinafter the "Hu reference".) For a typical oblique view image, at least 25% of the face pixels should be skin pixels. The Hu reference describes how this information can be incorporated into Otsu-based segmentation methods. After the optimal threshold is calculated by the thresholding algorithm, pixels with an ITA value less than the threshold are classified as skin pixels. A binary (black and white) image is then generated, in which skin pixels are displayed in white and non-skin pixels in black.
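The Otsu thresholding step at 213 can be sketched as below. This is a generic implementation of the cited method (maximizing between-class variance, equivalently minimizing within-class variance), without the Hu-style prior on the skin ratio.

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Return the Otsu threshold for a grayscale numpy array."""
    hist, edges = np.histogram(gray, bins=nbins)
    p = hist.astype(np.float64) / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability at each cut
    mu = np.cumsum(p * np.arange(nbins))      # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        # Between-class variance for every candidate cut
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)
    k = int(np.argmax(sigma_b2))
    return edges[k + 1]  # upper edge of the chosen bin
```

Pixels with an ITA value below the returned threshold would then be classified as skin.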
The segmented skin region generated at 213 may contain isolated non-skin pixels forming small islands. Such non-skin islands may be eliminated at 215 by a morphological closing operation using a disk structuring element or another such technique. For example, for an image resolution of 220 PPI, the diameter of the disk is chosen to be 10 pixels. Conversely, small fragments may be detected as skin within non-skin facial features (e.g., eyebrows, hair, etc.). These small fragments are eliminated by a morphological opening operation using the same disk structuring element. Furthermore, some individuals may have large non-skin fragments due to unique skin features such as large pigmented spots. These can be eliminated by applying a morphological filling operation. The goal is to detect facial skin as a continuous area that includes the cheek, forehead and nose but excludes the nostrils, the shadowed nasolabial folds, the eyes, the eyebrows and hair (including any mustache and beard). An example of an effective facial skin mask is shown in FIG. 3A for an oblique view image acquired with the VISIA system. Such a facial skin mask is well suited for performing aging simulation according to the present invention.
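The morphological clean-up at 215 can be illustrated with plain-numpy binary morphology. For simplicity this sketch uses a square structuring element rather than the disk element named in the text; a real implementation would typically call a library routine with a disk element.

```python
import numpy as np

def dilate(mask, r):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element."""
    out = np.zeros_like(mask)
    padded = np.pad(mask, r)  # pad with False
    h, w = mask.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= padded[r + dy:r + dy + h, r + dx:r + dx + w]
    return out

def erode(mask, r):
    """Erosion as the complement of dilating the complement."""
    return ~dilate(~mask, r)

def closing(mask, r):
    """Closing (dilate then erode) fills small non-skin islands."""
    return erode(dilate(mask, r), r)

def opening(mask, r):
    """Opening (erode then dilate) removes small isolated skin fragments."""
    return dilate(erode(mask, r), r)
```

Closing the skin mask fills pinholes inside skin, while opening removes stray skin specks inside eyebrows or hair, mirroring the two steps described above.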
Design of aging simulation mask
An aging simulation for each skin feature (spots, wrinkles, and texture) may be performed on a smaller subset of the facial skin area that is most relevant to that particular simulation. For example, performing wrinkle and spot simulations on the cheek regions (below eye level and above lip level) is more effective than doing so in other facial skin regions. To this end, as shown in FIG. 1, two different masks are generated at 107 from the entire facial skin mask generated at 103: a spot and wrinkle aging mask and a texture aging mask. Examples of such masks are shown in FIGS. 3B and 3C. These masks are designed according to the eye, lip and nose positions. The spot and wrinkle mask shown in FIG. 3B includes all skin from eye level to lip level and from nose level to the edge of the cheek. The texture mask shown in FIG. 3C may extend from eye level down to the end of the chin. The eye and lip areas are clearly delineated in the facial skin mask. The positions of these features can be calculated using the vertical and horizontal projections of the image. One local minimum of the vertical projection provides the center row of the eyes and a second local minimum provides the center row of the lips. Once these coordinates are determined, the entire facial skin image is cropped accordingly to generate the two aforementioned aging simulation masks.
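The projection-based localization of the eye and lip rows might be sketched as follows: rows crossing the eyes and lips contain fewer skin pixels, so they appear as local minima of the row sums of the skin mask. The simple neighbor-comparison minimum search below is a stand-in assumption; the actual minimum-selection rule is not specified in the text.

```python
import numpy as np

def landmark_rows(skin_mask):
    """Return row indices of local minima of the mask's vertical projection."""
    proj = skin_mask.sum(axis=1).astype(float)  # skin pixels per row
    minima = []
    for i in range(1, len(proj) - 1):
        if proj[i] < proj[i - 1] and proj[i] < proj[i + 1]:
            minima.append(i)
    return minima
```

In the scheme described above, one such minimum would correspond to the eye row and another to the lip row, and the mask boundaries would be cut at those rows.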
Spot aging simulation
FIG. 4 is a flow diagram of a spot aging simulation process according to an exemplary embodiment of the present invention. Given the LAB-converted standard image (from 105, FIG. 1), the UV image (from 111, FIG. 1), and the spot and wrinkle aging mask (from 107, FIG. 1), the process generates a spot-aged image. As shown in FIG. 4, the UV image and the spot/wrinkle mask generated as described above are provided as inputs to the spot aging simulation process. UV images acquired using the fluorescence photography technique show clearly identifiable pigmented spots (see the Miyamoto reference). This illumination modality is commonly used in dermatology to clearly show pigmented lesions that are invisible in standard images. There is strong evidence that these pigmented spots, which are visible only in UV images, will become visible as the pigmentation becomes more severe (i.e., as melanin deposition increases) due to photoaging. It should be noted that although the exemplary embodiments are described with reference to a UV image and "UV spots," UV light is not the only spectrum that enables visualization of subsurface skin features. Generally, this aspect of the invention applies to any subsurface skin spots that cannot readily be seen by the naked eye, regardless of the spectrum of the illumination modality with which they are captured.
The exemplary process of the present invention shown in fig. 4 simulates the process described above. The detected pigmented spots from the uv image and their contrast information can be used to adjust the intensity and color contrast at corresponding locations in the standard image, thereby simulating the development of "aged spots" over time.
As mentioned above, the standard and UV images should ideally be registered with each other prior to simulation. Acquiring the standard image and the ultraviolet image sequentially with minimal delay, for example with a VISIA system, may reduce or eliminate the need for registration. Images that are not properly registered may be aligned using any of several well-known registration techniques. (See, e.g., B. Srinivasa Reddy et al., "An FFT-Based Technique for Translation, Rotation and Scale-Invariant Image Registration", IEEE Transactions on Image Processing, vol. 5, no. 8, August 1996.)
Assuming the images are properly registered, ultraviolet spot detection based on the UV image is performed at 403. An exemplary UV spot detection algorithm according to the present invention is described in detail below. The UV spot detection algorithm returns the pixel coordinates of all UV spots together with their contrast information. The spots are indexed, with each spot assigned a specific label (e.g., a number). Indexing may be performed by scanning a black and white image representing the UV spots row by row or column by column and assigning a number to each spot in order.
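The indexing step can be illustrated with a simple flood-fill connected-component labeling over the binary UV-spot image. The 4-connectivity used below is an assumption; the text does not specify the connectivity.

```python
import numpy as np
from collections import deque

def label_spots(bw):
    """Scan a binary spot image row by row and label each connected
    component (4-connectivity). Returns (labels, n) with labels[i, j]
    holding the spot number (0 = background)."""
    labels = np.zeros(bw.shape, dtype=int)
    n = 0
    h, w = bw.shape
    for i in range(h):
        for j in range(w):
            if bw[i, j] and labels[i, j] == 0:
                n += 1  # new spot found; flood-fill it with label n
                q = deque([(i, j)])
                labels[i, j] = n
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and bw[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = n
                            q.append((ny, nx))
    return labels, n
```

Because the outer scan proceeds row by row, spots are numbered in the scan order described in the text.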
At 405, a UV spot decimation process removes some of the detected spots, reflecting the fact that not all spots in the UV image will become visible in the standard image; only a subset of the UV spots will surface over time. The decimation may be performed by selecting every other spot, or every third spot, in the indexed spot list. This provides a sparse subset of all detected UV spots.
After decimation, a UV contrast image of the remaining spots is generated. The UV contrast image is an intensity image holding the UV contrast value of each pixel in the remaining subset of UV spots. At 409, the UV contrast image of the remaining spots is dilated to enlarge the UV spots. This has a magnifying effect on the actual spots visible in both the standard and UV images. The dilation of the UV spots may be performed by blurring the UV contrast image, for example by filtering it with a two-dimensional Gaussian filter. At the operating resolution, the variance of the Gaussian filter is set to 5 and can be increased or decreased to adjust the dilation effect. Alternatively, dilation may be omitted, as it is possible to simulate spot aging without it.
At 411L, 411A and 411B, the dilated UV spot contrast image is used to modify the luminance component (L channel) and color components (A and B channels) of the original standard image. The UV spot contrast image is weighted before being added to the L, A and B components of the original image. It is well known that pigmented spots manifest as changes in all of the L, A and B channels. In one exemplary embodiment, the UV spot contrast is multiplied by 1.5 before being added to the L channel (i.e., eL = 1.5), and multiplied by -0.5 before being added to each of the A and B channels (i.e., eA = eB = -0.5). The signs and magnitudes of these weights are determined based on research findings and empirical observations. After the aged contrasts are added, the spot-aged image is synthesized by performing a LAB-to-RGB conversion at 413. It should be noted that, as mentioned above, the present invention is not limited to any particular color or image format. For example, if the resulting image is to be printed, the conversion at 413 can be a LAB-to-CMY conversion (i.e., to the well-known cyan-magenta-yellow color space typically used for printing). As will be appreciated, the images generated by the present invention may be printed, displayed, stored, transmitted, or subjected to further processing. Furthermore, the conversion at 413 can be dispensed with or deferred, for example, if the resulting image is to be stored or transmitted in LAB format.
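On a single pixel, the weighted addition at 411 can be sketched as below, using weights eL = 1.5 and eA = eB = -0.5 as read from the (partly garbled) text above; treat the exact values as illustrative. Since the UV contrast c is negative inside a spot, this lowers L and raises A and B, consistent with the pigmentation findings cited below.

```python
def age_pixel(L, A, B, c, eL=1.5, eA=-0.5, eB=-0.5):
    """Apply weighted UV contrast c (c < 0 inside a spot) to one LAB pixel.
    The weight values are illustrative, taken from the surrounding text."""
    return L + eL * c, A + eA * c, B + eB * c
```

For example, with c = -4 a pixel darkens (L drops by 6) while its A and B values each rise by 2, shifting it toward the dark brown of a pigmented spot.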
To this end, the aging simulation is performed in the LAB domain by adding weighted factors of the UV contrast information to the intensity (L) and color (A and B) components. Hyperpigmentation is commonly studied and quantified in the LAB domain with a colorimeter (S. Alaluf et al., "The impact of epidermal melanin on objective measurements of human skin colour", Pigment Cell Research, vol. 15, pp. 119-126, 2002, hereinafter the "Alaluf reference"), and images are analyzed in the LAB domain (N. Kollias et al., "Optical Non-Invasive Approaches to Diagnosis of Skin Diseases", Journal of Investigative Dermatology Symposium Proceedings, vol. 7, no. 1, pp. 64-75, 2002). One study involved color measurements on normal and pigmented areas of human skin using a LAB colorimeter, and showed that the L, A and B values all vary with the degree of pigmentation (melanin content). (See the Alaluf reference.) It is reported that the L value becomes smaller as the melanin content increases, while the A and B values become larger. This explains the dark brown appearance of pigmented spots.
UV speckle detection and contrast calculation
FIG. 5 is a flow chart of an exemplary process for detecting UV spots and calculating their contrast. The process takes the blue channel of the aforementioned UV image (e.g., from 111, FIG. 1) and returns the UV spots and the UV contrast image. The blue channel of the UV image shows the best contrast of the three channels (R, G and B) because the UV fluorescence is strongest in the blue spectrum. The goal of the process of FIG. 5 is to extract the UV spot lesions from this grayscale image.
At 503, the blue-channel UV image is subjected to noise filtering, in which small variations across the image are smoothed. For this purpose, a [5x5] median filter has been found to be very effective for UV images (at a 220 PPI operating resolution). Because of the non-uniform intensity field of the light source and the three-dimensional shape of the face, not all image pixels receive the same amount of light, so the image has varying intensities in different regions of the face. This variation in intensity does not allow a fixed threshold to be used to separate UV spot lesions, which are visually darker than the background. To compensate for the non-uniform intensity, a slowly varying background intensity is estimated and removed from the filtered intensity of each pixel at 505. The slowly varying background intensity can be estimated using a low-pass filter with large filter support. Such a filter may be implemented as a Wiener filter, i.e., an adaptive low-pass filter that estimates the low-frequency two-dimensional intensity surface from local averages and local variances. An exemplary Wiener filter that can be used for this purpose is described by a set of image pixel update formulas in Appendix A-1. The support (size) of the Wiener filter is chosen to be [41x41], large enough to encompass the average size of a large UV spot, assuming an operating resolution of 220 PPI.
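The background-estimation step at 505 can be approximated with a plain local mean over a large support, as sketched below; the Appendix A-1 Wiener filter additionally adapts to local variance, which this simplified stand-in omits.

```python
import numpy as np

def local_mean(img, half):
    """Local mean over a (2*half+1)^2 window, via shifted sums over an
    edge-padded copy of the image."""
    padded = np.pad(img, half, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            out += padded[half + dy:half + dy + h, half + dx:half + dx + w]
    return out / (2 * half + 1) ** 2

def contrast_image(img, half=20):
    """Contrast = intensity minus estimated background.
    Spots, being darker than the background, come out negative."""
    return img - local_mean(img, half)
```

With half = 20 the window is 41x41, matching the filter support given in the text for 220 PPI.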
Subtracting the background intensity level from the noise-filtered intensity image yields a contrast image, which includes both positive and negative components. UV spots are located within a subset of the negative-contrast regions. Thus, at 507, UV spots are obtained by segmenting the negative contrast image with a fixed threshold, selected in an exemplary embodiment in the range of about -3.5 to -5.0. The criterion for a UV spot is that its contrast value should be less than this threshold. This spot segmentation agrees well with average human perception.
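The fixed-threshold segmentation of the negative contrast image can be sketched as follows; this is a minimal illustration in Python with NumPy, and the threshold value is one of the exemplary values named above:

```python
import numpy as np

def segment_uv_spots(contrast, threshold=-4.0):
    """Mark as UV spots all pixels whose contrast (noise-filtered
    intensity minus estimated background) is below the fixed threshold;
    spots are darker than the surrounding skin, so their contrast is
    negative."""
    return contrast < threshold

# Toy contrast image: flat background (contrast ~ 0) with one dark lesion.
contrast = np.zeros((8, 8))
contrast[2:5, 2:5] = -6.0
mask = segment_uv_spots(contrast)
```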
As a result of the segmentation operation at 507, a binary (black and white) image is obtained, where white lesions represent UV spots and black pixels represent the background. This image is smoothed at 509, for example with a [5x5] median filter. At 511, the UV spots are indexed and labeled, and the area (e.g., number of pixels) associated with each UV spot is calculated. At 513, small UV spots having an area less than a threshold (e.g., 150 pixels) and large UV spots having an area greater than a threshold (e.g., 600 pixels) are eliminated. The remaining UV spots are returned, together with the contrast value of each of their pixels, as the UV contrast image (ID). It is important to remember that these contrast values are negative and represent the depth of the contrast. Optionally, a severity score is generated from the UV contrast image (ID) by contrast-weighted scoring at 515: the score is calculated by summing all ID values inside the valid UV spots. The score is related to the degree of pigmentation and can be used to monitor worsening and improvement of pigmentation. In addition, the perimeters of the detected UV spots are calculated at 517 so that they can be overlaid on the UV image to display the spots.
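Steps 511 through 515 (labeling, area filtering, and contrast-weighted scoring) can be sketched as below. The BFS labeling is a stand-in for any connected-component routine, and the toy area limits are illustrative rather than the exemplary pixel thresholds from the text:

```python
import numpy as np
from collections import deque

def label_spots(mask):
    """Label 4-connected components of a binary spot mask (minimal BFS)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < H and 0 <= xx < W
                                and mask[yy, xx] and labels[yy, xx] == 0):
                            labels[yy, xx] = count
                            q.append((yy, xx))
    return labels, count

def valid_spots(labels, count, min_area, max_area):
    """Keep only spots whose area lies within [min_area, max_area]."""
    keep = np.zeros(labels.shape, dtype=bool)
    for k in range(1, count + 1):
        area = int((labels == k).sum())
        if min_area <= area <= max_area:
            keep |= labels == k
    return keep

def severity_score(contrast, keep):
    """Contrast-weighted score: magnitude of the (negative) contrast
    summed over all valid spot pixels."""
    return float(-contrast[keep].sum())

mask = np.zeros((6, 6), dtype=bool)
mask[1:3, 1:3] = True      # 2x2 spot, area 4 (kept)
mask[5, 5] = True          # single-pixel speck, area 1 (rejected)
labels, count = label_spots(mask)
keep = valid_spots(labels, count, min_area=2, max_area=10)
score = severity_score(np.where(mask, -5.0, 0.0), keep)
```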
Spot de-aging simulation
In one exemplary embodiment, the spot de-aging simulation is performed in the LAB color space using the L, A, and B channels of the standard image. Along with pigmented spots, red spots (small inflamed areas due to scarring and skin conditions such as acne) can be discerned in these channels. To make the simulation more realistic, such skin features are preferably also removed according to exemplary embodiments of the present invention.
FIG. 6 is a flow diagram of an exemplary spot de-aging process in accordance with the present invention. Using the LAB image (e.g., from 105, FIG. 1) and the spot and wrinkle mask (e.g., from 107, FIG. 1), spot detection and contrast calculation are performed at 603. An exemplary spot detection and contrast computation algorithm is described in detail below with reference to FIGS. 7A and 7B. Contrast refers to the difference in pixel intensity relative to the low-pass background intensity calculated from the local neighborhood.
At 609L, the contrast values on L within the spot lesions are multiplied by an enhancement factor eL and added to the original L channel at 611L to bring the negative contrast of L flush with the background level. Similarly, the contrast values in A within the spot lesions are multiplied by a factor eA at 609A and added to the original A channel at 611A to bring the color difference of A flush with the background color, and the contrast values in B are multiplied by a factor eB at 609B and added to the original B channel at 611B to bring the color difference of B flush with the background color. It should be noted that the contrast in L is used to modify the darkness of the spots, while the contrasts in A and B are used to modify their color. Removing the L, A, and B contrasts within the spot lesions brings the intensity and color of the lesions flush with those of the background skin. This has the visual effect of removing the spot lesions and giving the facial skin a smoother appearance. Thus, the spot de-aged image can be used to predict the expected outcome of an effective treatment.
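The per-channel update can be sketched as below. The sign convention is an assumption: the factor is applied so that with e = 1 the lesion contrast is exactly cancelled, bringing the lesion flush with the estimated background:

```python
import numpy as np

def remove_contrast(channel, contrast, lesion_mask, e=1.0):
    """Cancel the lesion contrast (filtered value minus background) inside
    the mask; with e=1 the lesion is brought flush with the background."""
    out = channel.astype(float).copy()
    out[lesion_mask] -= e * contrast[lesion_mask]
    return out

# Toy L channel: background 50, one dark spot pixel at 44 (contrast -6).
L = np.full((5, 5), 50.0)
L[2, 2] = 44.0
C = np.where(L < 50, L - 50.0, 0.0)
spot_mask = C < 0
L_deaged = remove_contrast(L, C, spot_mask)
```

The same update, with its own factor, applies to the A and B channels to neutralize the color of the lesion.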
Speckle detection algorithm
FIGS. 7A and 7B illustrate an exemplary spot detection algorithm. At 703, the standard RGB image 701 is converted to LAB color space, and noise filtering is applied separately to the L, A, and B channels at 705L, 705A, and 705B, respectively. In the exemplary embodiment shown, noise filtering is performed with a Wiener filter with a small filter support, e.g., [5x5], as described above. Then, at 707L, 707A, and 707B, Wiener filters with, for example, [61x61] support are applied to the noise-filtered L, A, and B images, respectively, to estimate the background intensity and color of each pixel within the spot and wrinkle simulation mask. For each pixel of the L, A, and B channels, a contrast value is calculated by subtracting the low-pass L, A, and B value from the noise-filtered L, A, and B value at 708L, 708A, and 708B, respectively.
A contrast image (here "contrast image" refers to the set of all image pixels with contrast values as intensities) is a good indicator of spots. It has been confirmed in dermatological studies that the intensity of spot lesions is lower than that of the background skin and that their A and B color components are greater than the color readings of the background skin. (It should be noted that the background skin is considered healthy and smooth herein, and spot lesions are considered sparse within the background.) Based on these criteria, spot lesions are selected at 709L, 709A, and 709B from the negative contrast regions in the L channel and the positive contrast regions in the A and B channels, respectively. In addition, the contrast images obtained at 708L, 708A, and 708B are refined by the 709L, 709A, and 709B operations to produce more meaningful contrast images. After these operations, the contrast images are used for the spot de-aging simulation.
At 711, a spot color difference (DE) is calculated for each pixel from the contrast values of the L, A, and B channels. The CIE L*a*b* perceptual color difference, DE = sqrt(dL² + dA² + dB²), is often used in colorimetry to quantify the sensitivity of human vision to the difference between two color patches. In an exemplary embodiment, this measure is used to distinguish the spot color from the background skin color so that the spot segmentation is consistent with human perception. Generally, a color difference is discernible to the human eye if DE is greater than about 3.5.
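The per-pixel color difference can be sketched as below, built from the L, A, and B contrast images with the standard CIE76-style formula:

```python
import numpy as np

def spot_delta_e(cl, ca, cb):
    """Per-pixel color difference DE = sqrt(dL^2 + dA^2 + dB^2), where the
    differences are the contrast values (filtered minus background)."""
    return np.sqrt(cl ** 2 + ca ** 2 + cb ** 2)

# A pixel darker by 3 L units and redder by 4 A units: DE = 5.
de = spot_delta_e(np.array([3.0]), np.array([4.0]), np.array([0.0]))
```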
Proceeding to FIG. 7B, the spots are segmented at 713 by comparing DE to a threshold, such as 4.5. This threshold may vary from 3.5 to 5 depending on the desired sensitivity. After this thresholding operation, a binary (black and white) image of the spot lesions is obtained, with white islands representing the spots. This binary image is optionally smoothed at 715 to give the spot lesions smooth shapes.
At 717, the segmented objects are labeled by assigning each a number.
A spot segmentation step based on a DE threshold as described above will generally segment out portions of wrinkles and large pores as well as spots. At 719, small objects such as sweat pores (typically smaller than spots) are eliminated by applying a minimum area constraint to the segmented objects. For example, an area threshold of 100 pixels at the working resolution (220 PPI) has been found satisfactory.
At 721, to eliminate wrinkles and wrinkle-like features, certain shape properties of the remaining spot lesions are calculated. Exemplary properties include area, aspect ratio, solidity, major axis length, minor axis length, eccentricity, and extent. These are all two-dimensional shape properties commonly used in the art and are defined in appendix A-3. To eliminate wrinkles and wrinkle-like features, the aspect ratio (minor axis length / major axis length) is used as a criterion: for example, objects with an aspect ratio less than 0.25 may be considered wrinkles and eliminated as spots. An extent threshold of, e.g., 0.3 may also be used to eliminate distorted and ambiguous shapes. (Extent is a compactness measure varying over the range [0, 1]; high values correspond to dense objects.) After these shape and size constraints are applied at 721, the remaining objects and their pixel locations are recorded, together with the contrast values previously calculated for those locations at 709L, 709A, and 709B. These contrast values are used for the spot de-aging simulation. Optionally, a severity score is generated at 723 from the full contrast image (DE) calculated at 711 by summing all DE values within the valid spots. The score is related to the degree of hyperpigmentation and skin non-uniformity and can be used to monitor worsening and improvement of the skin condition. In addition, the perimeters of the detected spots are computed at 725 so that they can be overlaid on the image to display the spots.
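The wrinkle-rejection criteria can be sketched as below. The bounding-box-based aspect ratio and extent are simplifications of the moment-based definitions in appendix A-3; the thresholds are the exemplary values above:

```python
import numpy as np

def is_spot_shaped(obj_mask, min_aspect=0.25, min_extent=0.3):
    """Return True for compact, spot-like objects; elongated
    (wrinkle-like) or diffuse objects fail the aspect-ratio or
    extent test."""
    ys, xs = np.nonzero(obj_mask)
    h = int(ys.max() - ys.min()) + 1
    w = int(xs.max() - xs.min()) + 1
    aspect = min(h, w) / max(h, w)       # minor/major side of bounding box
    extent = obj_mask.sum() / (h * w)    # fill fraction of bounding box
    return aspect >= min_aspect and extent >= min_extent

blob = np.zeros((10, 10), dtype=bool); blob[2:6, 2:6] = True  # compact spot
line = np.zeros((10, 30), dtype=bool); line[5, 2:28] = True   # thin "wrinkle"
```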
Wrinkle aging and de-aging simulation
Fig. 8 is a flow chart of an exemplary wrinkle aging and de-aging simulation process in accordance with the present invention. At 801, a wrinkle detection step is performed using the luminance (L) channel of the LAB image, masked by the spot and wrinkle mask generated above. Color analysis of wrinkle features shows that the color within fine wrinkles does not differ significantly from that of the background skin; the brightness of wrinkles, however, differs significantly from the background brightness. For this reason, the L channel is used to detect and simulate wrinkle aging/de-aging. The wrinkle detection step at 801 provides the wrinkle features and their "wrinkle strength" values. Wrinkle strength is a different measure from contrast and is calculated with oriented filters (see W. T. Freeman et al., "The design and use of steerable filters", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891-906, 1991, hereinafter the "Freeman reference"). An exemplary wrinkle detection algorithm is described in detail below.
After wrinkle detection, a false wrinkle removal step is performed at 803. The candidate wrinkles produced by the wrinkle detection step at 801 are segmented out as white objects on a dark skin background; this black-and-white image is referred to as the ridge object image. Most ridge objects are due to wrinkles and fine lines, but some may come from other facial features such as the boundaries of large spots, aligned pores, dark hairs, thin vascular threads on the skin, and so forth. Most of these spurious features can be eliminated according to a set of shape, size, and color criteria; an exemplary process is described in more detail below. The process returns the valid wrinkles and their strength image. The wrinkle strength image takes the value of the ridge map at valid wrinkle pixels and is zero elsewhere; it is used for the wrinkle aging and de-aging simulations.
For the wrinkle aging simulation, the wrinkle strength (hereinafter referred to as wrinkle contrast) is dilated at 807 to obtain the thickening effect that accompanies aging. The dilation operation may be performed, for example, with a two-dimensional Gaussian filter, e.g., with a filter variance of 2. This step is similar to the UV spot enlargement described above. At 809A, the dilated contrast is then multiplied by an enhancement factor eL (e.g., 2) and added to the L channel. The net effect of these operations is that wrinkles visible in the original image appear darker and thicker, and faint wrinkles not clearly visible in the original image become visible. Finally, the wrinkle-aged image is synthesized by LAB-to-RGB conversion at 811. It is worth noting that wrinkles also grow longer with age; this can be simulated by extending the detected wrinkles, either in addition to or as an alternative to the dilation operation.
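The dilation-and-enhancement step can be sketched as below. The separable Gaussian blur is a stand-in for the two-dimensional Gaussian filter, edge padding is a boundary-handling assumption, and the variance and factor are the exemplary values above:

```python
import numpy as np

def gauss1d(var=2.0, radius=4):
    """Normalized 1-D Gaussian kernel with the given variance."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * var))
    return k / k.sum()

def gauss_blur(img, var=2.0, radius=4):
    """Separable Gaussian smoothing with edge padding (rows, then cols)."""
    k = gauss1d(var, radius)
    def conv(v):
        return np.convolve(np.pad(v, radius, mode="edge"), k, "valid")
    tmp = np.apply_along_axis(conv, 1, img)
    return np.apply_along_axis(conv, 0, tmp)

def age_wrinkles(L, wrinkle_contrast, e_L=2.0, var=2.0):
    """Spread (thicken) the negative wrinkle contrast, scale it by e_L,
    and add it back to L so wrinkles look darker and wider."""
    return L + e_L * gauss_blur(wrinkle_contrast, var)

L = np.full((9, 9), 50.0)
C = np.zeros((9, 9)); C[4, 4] = -10.0   # one deep wrinkle pixel
aged = age_wrinkles(L, C)
```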
For wrinkle de-aging, no contrast dilation is performed; instead, the wrinkle contrast is removed from the L channel at 809D to bring the intensity level of the wrinkles to that of the surrounding background skin. Finally, the wrinkle de-aged image is synthesized by LAB-to-RGB conversion at 813.
Wrinkle detection algorithm
The wrinkle detection process will now be described with reference to FIG. 9. At 901, the standard RGB image, masked by the spot and wrinkle mask, is transformed to obtain an LAB image; in an exemplary embodiment, only the L channel is used to detect wrinkles. At 903, noise filtering with a Wiener filter as described above is applied to the L channel within the wrinkle aging simulation mask; this filter has a support of, e.g., [3x3]. At 905, a further Wiener filter with a support of, e.g., [21x21] is applied to the noise-filtered L channel to estimate the background intensity. Preferably, the support of this filter is large enough to cover wrinkles at the working resolution. At 907, a contrast value for each pixel is calculated by subtracting the low-pass L value from the noise-filtered L value within the wrinkle mask.
At 909, regions with negative contrast values (i.e., dark regions) are selected for wrinkle detection, because fine wrinkles appear darker (lower in L) than the background. At 911, a ridge detection step is applied to the negative contrast image to detect elongated structures. In one exemplary embodiment, described in more detail below, ridge detection is based on steerable filters (see the Freeman reference and J. Staal et al., "Ridge-based vessel segmentation in color images of the retina", IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501-509, April 2004, hereinafter the "Staal reference"). The ridge detection step accepts the contrast image and returns "ridge strength" and "ridge orientation" images. These two images are further processed at 913 to obtain a modified ridge strength image, or "ridge map"; the ridge map calculation is described in detail below. The ridge map is a gray-level image representing curvilinear structures and shows a strong response to wrinkles.
To determine the wrinkle structures from the ridge map image, hysteresis thresholding is applied at 915 (see J. Canny, "A computational approach to edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986). Hysteresis thresholding is a softer form of thresholding that links weak structures to strong structures and involves a low and a high threshold; exemplary values for these thresholds are 4 and 8.
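Hysteresis thresholding with the exemplary low/high values of 4 and 8 can be sketched as:

```python
import numpy as np
from collections import deque

def hysteresis(ridge_map, low=4.0, high=8.0):
    """Keep all pixels above `high`, plus any pixel above `low` that is
    8-connected (directly or transitively) to a kept pixel."""
    strong = ridge_map >= high
    weak = ridge_map >= low
    out = strong.copy()
    H, W = ridge_map.shape
    q = deque(zip(*np.nonzero(strong)))
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                yy, xx = y + dy, x + dx
                if (0 <= yy < H and 0 <= xx < W
                        and weak[yy, xx] and not out[yy, xx]):
                    out[yy, xx] = True
                    q.append((yy, xx))
    return out

# Weak pixels (5) survive only when linked to the strong pixel (9).
rm = np.array([[0.0, 5.0, 9.0, 5.0, 0.0, 5.0]])
kept = hysteresis(rm)
```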
Ridge detection
Wrinkles show themselves as elongated structures in the standard image. They are mostly visible in intensity (the L channel) relative to the background skin intensity level, with little difference in color (the A and B channels) compared to the background skin color. Thus, they can be extracted from the L channel using detectors designed for elongated structures.
Second directional derivatives of a Gaussian kernel are typically used to detect elongated structures in image processing. (See, e.g., the Staal reference; these derivatives are in fact a type of steerable filter as described in the Freeman reference.) These basis filters are sensitive to ridge features and have vertical, horizontal, and diagonal orientations. FIG. 10 is a flow chart of an exemplary ridge detection step using such steerable filters.
As shown in FIG. 10, the contrast image is convolved in two dimensions at 1001A, 1001B, and 1001C with the first, second, and third basis filters, respectively. To analyze the orientation and strength of structures in the image, a Hessian matrix is formed for each pixel at 1003, whose elements are the basis filter responses. Next, at 1005, an eigendecomposition of the [2x2] Hessian matrix is performed for each pixel, returning two eigenvalues (e1, e2) and two mutually orthogonal eigenvectors (v1, v2). The ridge strength is defined as the eigenvalue (say e1) that is positive and greater in absolute value than the second eigenvalue (e2), and the ridge orientation is the second eigenvector (v2). In one variation of this method, the ridge strength is defined as (e1 - e2) when e1 > 0 and |e1| > |e2|; this definition has been observed to better accentuate wrinkle structures.
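The eigenvalue analysis at 1005 can be sketched in closed form for the [2x2] symmetric Hessian, using the (e1 - e2) variant described above:

```python
import numpy as np

def ridge_strength(hxx, hxy, hyy):
    """Per-pixel ridge strength from the Hessian [[hxx, hxy], [hxy, hyy]].
    e1 is the eigenvalue of larger absolute value; the strength is
    (e1 - e2) where e1 > 0 and |e1| > |e2|, and 0 elsewhere."""
    tr = hxx + hyy
    det = hxx * hyy - hxy ** 2
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    a = tr / 2.0 + disc
    b = tr / 2.0 - disc
    e1 = np.where(np.abs(a) >= np.abs(b), a, b)
    e2 = np.where(np.abs(a) >= np.abs(b), b, a)
    return np.where((e1 > 0) & (np.abs(e1) > np.abs(e2)), e1 - e2, 0.0)

# A dark valley has a large positive second derivative across it:
ridge = ridge_strength(np.array([10.0]), np.array([0.0]), np.array([1.0]))
flat = ridge_strength(np.array([-10.0]), np.array([0.0]), np.array([1.0]))
```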
Ridge map calculation
As described above, the exemplary ridge detection process returns two useful quantities for each pixel: the ridge strength, a scalar value indicating how deep a wrinkle is, and the ridge orientation, a vector specifying the direction of the wrinkle at that pixel location. A ridge map image is generated from these two quantities: for each pixel, a new ridge strength is defined that combines the original ridge strength with a directional term that depends on the orientations of the neighboring pixels. This directional term is calculated by summing the inner products of the orientation vector of the current pixel with the orientation vectors of each of its 8-connected neighbors. This process is described by a set of equations in appendix A-2.
False wrinkle removal
The goal of the false wrinkle removal process is to remove false positives (false wrinkles) based on shape and size properties. To this end, all candidate wrinkles remaining after the hysteresis thresholding (915) are labeled and a number of shape properties are calculated for each: minor axis length, major axis length, area, solidity, and eccentricity. The definitions of these standard two-dimensional shape properties are given in appendix A-3.
Based on these shape properties, ridge objects are classified into four classes: short wrinkles, long wrinkles, network wrinkles, and non-wrinkles. To fall into one of the first three classes, a ridge object's properties must satisfy the corresponding set of criteria. For example, for a ridge object to be a short wrinkle, its length must lie between minimum (e.g., 30 pixels) and maximum (e.g., 50 pixels) thresholds; its aspect ratio (minor axis length / major axis length) must be less than an aspect threshold (e.g., 0.25); its eccentricity must be greater than an eccentricity threshold (e.g., 0.97); and its solidity must be greater than a minimum solidity threshold (e.g., 0.49). Similar sets of criteria exist for long wrinkles and for network wrinkles. These thresholds were determined empirically by examining wrinkles on a set of training images. Ridge objects not classified as one of these wrinkle types are classified as non-wrinkles; the remaining ridge objects are called valid wrinkles and are returned by the wrinkle detection algorithm.
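The short-wrinkle criteria can be sketched as below. The thresholds are the exemplary values above; the long- and network-wrinkle tests (omitted) follow the same pattern with their own empirically chosen thresholds:

```python
def classify_ridge_object(length, aspect_ratio, eccentricity, solidity):
    """Classify a ridge object using the short-wrinkle criteria; objects
    failing every wrinkle test are non-wrinkles (false positives)."""
    if (30 <= length <= 50
            and aspect_ratio < 0.25
            and eccentricity > 0.97
            and solidity > 0.49):
        return "short wrinkle"
    return "non-wrinkle"
```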
Texture aging simulation
The term "texture" as used herein refers to small features of the skin that interfere with the overall smoothness of the skin. Texture aging and de-aging simulation are based on detection of texture features and contrast. Textural features include pores, small white spots and small roughness disruptions. Texture aging and de-aging simulations are performed within the texture mask. FIG. 3C illustrates a typical texture mask.
FIG. 12 is a flow diagram of an exemplary texture aging process in accordance with the present invention. The texture aging simulation is performed using the luminance (L) channel of the standard face image (e.g., from 105, FIG. 1) masked by the texture mask (e.g., from 107, FIG. 1). At 1201, the low-pass background intensity is removed from the L channel: the background intensity level is calculated by applying a Wiener filter, e.g., as described above, with a filter support of, e.g., [21x21], and this background term is subtracted from the L channel to generate a contrast image. The contrast image has both positive and negative components. Regions with negative contrast values are called low-texture regions and regions with positive contrast values are called high-texture regions. An example of a low-texture region is a pore; an example of a high-texture region is a very small white spot.
Segmentation of the low-texture regions is performed at 1203L by thresholding the contrast image with a negative threshold (e.g., -2.5), i.e., by selecting pixels whose contrast is less than this threshold. The segmented texture lesions are then labeled and their areas recorded. A small-area threshold (e.g., 10 pixels) is applied to remove very small lesions due primarily to noise, and a large-area threshold (e.g., 120 pixels) is applied to remove large lesions due to small spots and wrinkles.
The remaining texture lesions and their per-pixel contrast values are recorded at 1205L (the low-texture contrast image). In addition, the low-texture contrast image is dilated at 1207 by applying a two-dimensional Gaussian filter with variance 1. The net effect of this dilation operation is to enlarge the pores in the face image; pore enlargement occurs naturally with age or with deterioration of skin health. The variance value may be increased to increase the degree of enlargement.
Similarly, to segment the high-texture regions, a positive threshold (typically 2.5) is applied to the contrast image at 1203H, i.e., pixels greater than this threshold are selected. The segmented texture lesions are labeled and their areas recorded. A small-area threshold (typically 10 pixels) is applied to remove very small lesions due primarily to noise, and a large-area threshold (typically 100 pixels) is applied to remove large lesions on the face due to excessive sun exposure. The remaining texture lesions and their per-pixel contrast values are recorded at 1205H.
At 1209L, the dilated low-texture contrast is multiplied by an enhancement factor el and added to the L channel at 1211. Similarly, the high-texture contrast image is multiplied by an enhancement factor eh and added to the L channel at 1211. Exemplary values of the enhancement factors el and eh are 1.0 and 0.5, respectively. At 1213, the texture-aged image is synthesized by LAB-to-RGB conversion.
Texture de-aging simulation
Exemplary texture de-aging simulations aim to reduce the size and intensity of textural features such as pores and small white spots. Completely removing textural features, as in the spot or wrinkle de-aging simulations, would result in an overly smooth appearance and would not provide a realistic skin image.
FIG. 13 is a flow diagram of an exemplary texture de-aging process in accordance with the present invention. Texture de-aging is also performed in the luminance channel. At 1301, the low-pass background intensity is removed from the L channel: the background intensity level is calculated by applying a Wiener filter as described above with a filter support of, e.g., [21x21], and subtracted from the L channel to generate a contrast image. The contrast image has both negative and positive components. Regions with negative contrast values are called low-texture regions and regions with positive contrast values are called high-texture regions.
At 1303L, to segment the low-texture regions (i.e., large pores), a negative threshold (typically -2.5) is applied to the contrast image. The segmented texture lesions are indexed and labeled, and their areas recorded. A small-area threshold (typically 50 pixels) is applied to remove small pores and a large-area threshold (typically 120 pixels) is applied to remove large lesions due to spots and wrinkles. The remaining texture lesions (most of which are large pores) and their per-pixel contrast values (the low-texture contrast image) are recorded at 1305L. At 1307L, the low-texture regions are shrunk by applying a morphological dilation operation to the low-texture contrast image with, e.g., a disk structuring element of radius 2. (Because the low-texture contrast is negative, grayscale dilation shrinks the dark regions.) The net effect of this operation is pore shrinkage and reduced pore darkness on the facial skin, which is associated with improved skin condition after effective treatment.
Similarly, at 1303H, to segment the high-texture regions (small white spots), a positive threshold (e.g., 2.5) is applied to the contrast image, i.e., pixels above the threshold are selected. The segmented texture lesions are labeled and their areas recorded. A small-area threshold (e.g., 30 pixels) is applied to remove small lesions and a large-area threshold (e.g., 300 pixels) is applied to remove large lesions. The remaining texture lesions and their per-pixel contrast values (the high-texture contrast image) are recorded at 1305H. At 1307H, the high-texture regions are shrunk by applying a morphological erosion operation to the high-texture contrast image with, e.g., a disk structuring element of radius 2. The net effect of this operation is shrinkage of the small white spots and a reduction in their intensity on the facial skin, also associated with improved skin condition after effective treatment.
At 1309L, the shrunken low-texture contrast is multiplied by an enhancement factor el and added to the L channel at 1311. Similarly, the high-texture contrast image is multiplied by an enhancement factor eh and added to the L channel at 1311. Exemplary values of el and eh are 1.0 and 1.0, respectively. At 1313, the texture de-aged image is synthesized by LAB-to-RGB conversion.
Total skin aging and de-aging simulation
The above-described simulations of facial skin aging due to spots, wrinkles, and texture can be combined to simulate overall aging of the facial skin. FIG. 14 is a flow chart of an exemplary process for doing so. The overall aged image is synthesized in the LAB color space by modifying the L, A, and B channels with the aging contrasts of spots, wrinkles, and texture. As shown in FIG. 14, to merge spots, wrinkles, and texture into the overall process, spot-aged contrast images are generated in the L, A, and B channels at 1401S; a wrinkle-aged contrast image is generated in the L channel at 1401W; and a texture-aged contrast image is generated in the L channel at 1401T. The A, B, and L channel spot-aged images are each weighted by a factor ws at 1403SA, 1403SB, and 1403SL, respectively; the L-channel wrinkle-aged image is weighted by a factor ww at 1403W; and the L-channel texture-aged image is weighted by a factor wt at 1403T. The three weighting factors ws, ww, and wt are selected to emphasize or de-emphasize the contributions of the individual components to the overall aged image. The weighted A and B channel spot-aged images are added to the A and B channels of the final image at 1405SA and 1405SB, respectively. The weighted L-channel spot, wrinkle, and texture images are combined at 1405L and added to the L channel of the final image at 1407L. The L, A, and B channels thus modified are subjected to LAB-to-RGB conversion at 1409 to generate the overall aged image in the RGB domain.
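The weighted combination of FIG. 14 can be sketched as below; the argument names mirror the description above, and unit weights are assumed for illustration:

```python
import numpy as np

def overall_age(L, A, B, spot_L, spot_A, spot_B, wrinkle_L, texture_L,
                ws=1.0, ww=1.0, wt=1.0):
    """Add the weighted spot (L/A/B), wrinkle (L), and texture (L) aging
    contrasts to the LAB channels of the original image."""
    L_out = L + ws * spot_L + ww * wrinkle_L + wt * texture_L
    A_out = A + ws * spot_A
    B_out = B + ws * spot_B
    return L_out, A_out, B_out

L2, A2, B2 = overall_age(
    L=np.array([50.0]), A=np.array([10.0]), B=np.array([20.0]),
    spot_L=np.array([-2.0]), spot_A=np.array([2.0]), spot_B=np.array([1.0]),
    wrinkle_L=np.array([-3.0]), texture_L=np.array([-1.0]))
```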
In a similar manner, the above-described simulations of facial skin de-aging due to spots, wrinkles, and texture may be combined to simulate overall de-aging of the facial skin. FIG. 15 is a flow chart of an exemplary process for doing so. The de-aged contrasts in L, A, and B are generated by the respective spot, wrinkle, and texture de-aging simulations at 1501SL, 1501SA, 1501SB, 1501W, and 1501T. At 1503SL, 1503W, and 1503T, the contrast images in L are weighted by the factors ws, ww, and wt, respectively, to emphasize or de-emphasize the contribution of each component to the overall de-aged image; preferred values of these weighting factors are all 1. The weighted L-channel spot, wrinkle, and texture contrast images are combined at 1505 and added to the L channel of the final image at 1507. Similarly, the spot contrast in A is weighted by ws at 1503SA and added to the A channel at 1507SA, and the spot contrast in B is weighted by ws at 1503SB and added to the B channel at 1507SB, to obtain the final A and B channels. LAB-to-RGB conversion at 1509 generates the overall de-aged image in the RGB domain. In the final image, prominent skin features are eliminated and small skin features (pores) are reduced. Such an image is very useful for predicting how the subject's facial skin may look after treatment for hyperpigmentation, wrinkles, or skin texture.
Interactive tool for skin aging/de-aging simulation
Skin aging/de-aging simulation according to an exemplary embodiment of the present invention may be demonstrated on a computer monitor by displaying the original and simulated images side by side and providing an interactive slider control that lets the viewer adjust the degree of aging. Depending on the desired simulation (spots, wrinkles, texture, or any combination thereof), the aged or de-aged image is blended with the original image, where the degree of blending depends on the slider position. When the slider is in the neutral position, the original image is displayed in both the left and right panels. When the user moves the slider up, the de-aged simulated image is displayed on the right panel by alpha-blending the original image with the de-aged image; similarly, when the user moves the slider down, the aged simulated image is displayed by alpha-blending the original image with the aged image. Alpha blending is a linear weighting of two images and is a standard operation commonly used in the art. For the present application, the various aged and de-aged images of spots, wrinkles, and texture may be generated offline, with the alpha blending and image rendering preferably performed in real time.
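The slider-controlled blend is plain alpha blending, which can be sketched as:

```python
import numpy as np

def alpha_blend(original, simulated, alpha):
    """Linear weighting of two images; alpha in [0, 1] maps the slider
    position, with 0 showing the original and 1 the full simulation."""
    return (1.0 - alpha) * original + alpha * simulated

original = np.zeros((2, 2))
simulated = np.full((2, 2), 10.0)
blended = alpha_blend(original, simulated, 0.3)
```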
It should be noted that in each of the aging and de-aging simulations described above, the degree of aging or de-aging to be simulated is preferably user selectable over a suitable period of time, e.g., 5 to 10 years, to demonstrate natural aging, or several months to demonstrate de-aging due to treatment.
It should be understood that the above-described embodiments are illustrative of only a few of the specific embodiments that can represent applications of the invention. Numerous and varied other arrangements can be made by those skilled in the art without departing from the spirit and scope of the invention.
Appendix
A-1. Wiener filter
Given an [MxN] grayscale image g with value g(i, j) at coordinate (i, j), the following steps implement a Wiener filter with a local [KxK] analysis window centered at (i, j), where K is an odd number.
1. In the [KxK] neighborhood centered at the current pixel (i, j), compute the local mean μ(i, j) and the local variance σ²(i, j):
μ(i, j) = (1/K²) Σm Σn g(m, n)
σ²(i, j) = (1/K²) Σm Σn g(m, n)² − μ(i, j)²
where the sums run over m = i − L, ..., i + L and n = j − L, ..., j + L, with L = (K − 1)/2.
2. Compute the noise variance σw² by averaging the local variance across the entire image.
3. Calculate the filtered image pixel value f(i, j) using the following update formula:
if σ²(i, j) > σw²:  f(i, j) = μ(i, j) + ((σ²(i, j) − σw²) / σ²(i, j)) · (g(i, j) − μ(i, j))
otherwise:  f(i, j) = μ(i, j)
4. Repeat step 3 for all pixels in the image.
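The appendix steps can be sketched directly as below (an unoptimized per-pixel loop; edge padding at the image border is a boundary-handling assumption):

```python
import numpy as np

def wiener_filter(g, K=5):
    """Adaptive Wiener filter of appendix A-1: local mean/variance in a
    KxK window, noise variance as the image-wide average local variance,
    then the per-pixel update formula."""
    L = (K - 1) // 2
    g = g.astype(float)
    H, W = g.shape
    gp = np.pad(g, L, mode="edge")
    mu = np.empty_like(g)
    var = np.empty_like(g)
    for i in range(H):
        for j in range(W):
            win = gp[i:i + K, j:j + K]
            mu[i, j] = win.mean()
            var[i, j] = win.var()
    noise_var = var.mean()
    safe_var = np.where(var > 0, var, 1.0)   # avoid division by zero
    gain = np.where(var > noise_var, (var - noise_var) / safe_var, 0.0)
    return mu + gain * (g - mu)

flat = wiener_filter(np.full((6, 6), 7.0))   # constant image is unchanged
img = np.zeros((6, 6)); img[3, 3] = 10.0
smoothed = wiener_filter(img)                # isolated spike is attenuated
```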
A-2. Ridge map generation
For each pixel coordinate (i, j) located in the region of interest (ROI), the following calculation steps are performed:
1. the following numbers were obtained by the ridge detector:
r (i, j): intensity of ridge, positive real number
V (i, j): ridge orientation vector, 2-membered vector with real numbers.
2. From these quantities, the directional strength is calculated as the sum of the inner products of the orientation vectors in eight contiguous neighborhoods:
wherein<.>Represents an inner product operation, and VcThe ridge orientation vector representing the current pixel, and VnRepresenting the ridge orientation vector for the nth pixel in the neighborhood.
3. Add a portion of the directional intensity to the ridge intensity to calculate a ridge map:
rm (i, j) ═ R (i, j) + α Ds (i, j) where α is a weighting factor in the range of [0.20.5 ].
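Steps 1 through 3 can be sketched as follows — a minimal implementation assuming the ridge detector's outputs R and V are already available as arrays; `ridge_map` is an illustrative helper, not the patent's exact implementation:

```python
import numpy as np

def ridge_map(R, V, alpha=0.3):
    """Combine ridge strength and orientation into a ridge map.

    R: (M, N) array of ridge strengths.
    V: (M, N, 2) array of ridge orientation vectors.
    alpha: weighting factor, per the appendix in [0.2, 0.5].
    Neighbors falling outside the image are skipped (an assumption)."""
    M, N = R.shape
    Ds = np.zeros_like(R, dtype=np.float64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for i in range(M):
        for j in range(N):
            vc = V[i, j]
            # Directional strength: sum of inner products <Vc, Vn>
            # over the eight-connected neighborhood.
            for di, dj in offsets:
                ni, nj = i + di, j + dj
                if 0 <= ni < M and 0 <= nj < N:
                    Ds[i, j] += vc @ V[ni, nj]
    return R + alpha * Ds
```

With a perfectly coherent orientation field (all vectors identical unit vectors), an interior pixel gains α · 8 over its raw ridge strength, which is the intended boost for consistently oriented ridges.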
A-3. Definition of shape properties

| Property | Definition |
|---|---|
| Area | Number of pixels in the object. |
| Major axis length | Length (in pixels) of the major axis of the ellipse that has the same normalized second central moments as the object. |
| Minor axis length | Length (in pixels) of the minor axis of the ellipse that has the same normalized second central moments as the object. |
| Extent | Proportion of the pixels in the bounding box that are also in the object. The bounding box is the smallest rectangle containing the object. |
| Eccentricity | Eccentricity of the ellipse that has the same second moments as the object; the ratio of the distance between the foci of the ellipse to the length of its major axis. |
| Solidity | Proportion of the pixels in the convex hull that are also in the object. The convex hull is the smallest convex polygon containing the object. |
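A few of these properties can be computed directly from a binary mask — a minimal sketch covering area, bounding box, and extent; a full implementation (axis lengths, eccentricity, solidity) would typically use a library such as scikit-image's `regionprops`:

```python
import numpy as np

def shape_properties(mask):
    """Compute area, bounding box, and extent for a single-object
    binary mask, per the definitions in the table above."""
    ys, xs = np.nonzero(mask)
    area = len(ys)                       # number of pixels in the object
    h = ys.max() - ys.min() + 1          # bounding-box height
    w = xs.max() - xs.min() + 1          # bounding-box width
    extent = area / (h * w)              # fraction of the box in the object
    return {"area": area, "bbox": (h, w), "extent": extent}
```

For a solid 3×2 rectangle the extent is 1.0, since the object fills its bounding box exactly.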
Claims (5)
1. A method of manipulating a facial image to simulate time-dependent changes, the method comprising:
converting the facial image into a color space format having an intensity component and a color component;
manipulating the facial image to generate a spot-change intensity component;
manipulating the facial image to generate a wrinkle-change intensity component;
manipulating the facial image to generate a texture-change intensity component; and
modifying the intensity component of the facial image according to a combination of the spot-change intensity component, wrinkle-change intensity component, and texture-change intensity component, thereby generating a spot-, wrinkle-, and texture-changed facial image;
wherein a first spot-, wrinkle-, and texture-changed facial image, which is an aged facial image, and a second spot-, wrinkle-, and texture-changed facial image, which is a de-aged facial image, are generated; and wherein the aged or de-aged facial image and the original facial image are selectively blended to generate a composite image, and the composite image is displayed.
2. The method of claim 1, the method comprising:
manipulating the facial image to generate a first spot-change color component and a second spot-change color component;
modifying a first color component of the facial image according to the first spot-change color component; and
modifying a second color component of the facial image according to the second spot-change color component.
3. The method of claim 1, the method comprising:
applying a mask to the facial image prior to manipulating the facial image.
4. The method of claim 3, wherein the mask comprises a first mask for spot and wrinkle simulation and a second mask for texture simulation.
5. The method of claim 1, wherein the color space format having intensity and color components is an LAB format having a luminance component L and color components A and B.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/681,509 | 2007-03-02 | ||
| US11/681,509 US8290257B2 (en) | 2007-03-02 | 2007-03-02 | Method and apparatus for simulation of facial skin aging and de-aging |
| PCT/US2008/055250 WO2008109322A1 (en) | 2007-03-02 | 2008-02-28 | Method and apparatus for simulation of facial skin aging and de-aging |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1138662A1 HK1138662A1 (en) | 2010-08-27 |
| HK1138662B true HK1138662B (en) | 2014-03-07 |