
WO2006003625A1 - Video processing - Google Patents


Info

Publication number
WO2006003625A1
WO2006003625A1
Authority
WO
WIPO (PCT)
Prior art keywords
person
input signal
viewed
feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2005/052162
Other languages
French (fr)
Inventor
Richard P. Kleihorst
Hasan Ebrahimmalek
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV
Publication of WO2006003625A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites


Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A video processor (5) receives an image signal from a camera (3), and a skin detection signal from a skin detection unit (1). The video processor (5) processes the image data to produce an output video signal (9) for display on a display means (not shown). The video processor (5) is configured to automatically adapt the image signal, for example change the skin color, based on an input signal (7). The input (7) relates to a characteristic of the image being viewed. For example, the input signal (7) may relate to an emotional characteristic of the person being viewed. The emotion of the person being viewed can be detected from the tone of voice of that person (e.g. the average pitch of the voice), or by means of a separate infrared camera, which detects heat from the face of the person being viewed. Based on the input signal (7) representing a characteristic of the object being viewed, the video processor (5) is configured to adapt the image signal accordingly. In one example, the skin color of the person is changed according to the emotion of the person. For example, the skin color of the person could be changed to red when an angry tone is detected, or grey when a calm tone is detected.

Description

Video processing
FIELD OF THE INVENTION
The present invention relates to a video processing apparatus and method, and in particular, to a video processing apparatus and method involving cartoonizing.
BACKGROUND OF THE INVENTION
Video communication is increasingly being used in numerous applications such as video telephones, video conferencing, tele-collaboration, shared virtual table environments, and so on. In such systems, face detection and recognition are being actively researched in order to enhance the services provided by applications such as video conferencing. For example, face detection is used in video conferencing systems to create a virtual conference room, whereby participants of the meeting are seated around a virtual table. Numerous approaches have been used to assist in face detection, including feature invariant approaches, appearance-based approaches, and wavelet analysis.
The majority of face detection research aims to find structural features that exist even when lighting and viewpoint vary. Feature extraction methods utilize various properties of the face and skin to isolate and extract data, such as "eye" data. Popular methods include skin color segmentation, principal component analysis, eigenspace modeling, histogram analysis and texture analysis.
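By way of a non-limiting illustration of the skin color segmentation technique mentioned above, the following Python sketch thresholds a frame in YCrCb space; OpenCV and the particular threshold values are assumptions made for illustration and do not form part of the disclosure.

```python
import cv2
import numpy as np

# Illustrative YCrCb skin-tone thresholds; these ranges are an assumption,
# not values taken from the disclosure.
SKIN_LOWER = np.array([0, 133, 77], dtype=np.uint8)
SKIN_UPPER = np.array([255, 173, 127], dtype=np.uint8)

def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask (255 = likely skin) for a BGR video frame."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, SKIN_LOWER, SKIN_UPPER)
```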
As mentioned above, face detection and skin detection methods are currently used in applications such as creating virtual video conferencing systems, or face recognition systems for security applications.
SUMMARY OF THE INVENTION
The aim of the present invention is to provide a video processing apparatus and method that utilizes information received from sources such as face and/or skin detection for cartoon applications.
The invention is defined by the independent claims. The dependent claims define advantageous embodiments. According to a first aspect of the invention, there is provided an apparatus for cartoonizing an image signal having an object of interest. The apparatus comprises detecting means for detecting a feature of the object, and receiving means for receiving an input signal, the input signal relating to a characteristic of the object. The apparatus further comprises image processing means that is configured to automatically adapt the image signal based on the received input signal and/or the detected feature.
The invention has the advantage of being able to automatically adapt an image signal based on an input signal and/or detected feature of the object being viewed.
According to another aspect of the invention, there is provided a method of cartoonizing an image signal having an object of interest. The method comprises the steps of detecting a feature of the object, and receiving an input signal relating to a characteristic of the object. The image signal is automatically adapted based on the received input signal and/or the detected feature.
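The claimed method reduces to three steps, which might be composed as in the following non-limiting Python sketch; the decomposition into injected callables is an assumption for illustration.

```python
def cartoonize_frame(frame, detect_feature, read_input_signal, adapt):
    """One pass of the claimed method: detect a feature, receive an
    input signal, and adapt the image signal accordingly."""
    feature = detect_feature(frame)              # detecting means
    input_signal = read_input_signal()           # receiving means
    return adapt(frame, feature, input_signal)   # image processing means
```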
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the invention, and to show more clearly how it can be carried into effect, reference will now be made, by way of example, to the following drawings in which:
Fig. 1 shows a first embodiment of the present invention; Fig. 2 shows a second embodiment of the present invention; and
Fig. 3 shows a third embodiment of the present invention.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE PRESENT INVENTION
Referring to Fig. 1, a first embodiment of the invention is disclosed in which a skin detection unit 1 receives an image signal from a sensor or camera 3. A video processor 5 receives the image signal from the camera 3, and a skin detection signal from the skin detection unit 1. The video processor 5 processes the image data to produce an output video signal 9 for display on a display means (not shown). According to the first embodiment, the video processor 5 is configured to change the skin color based on an input signal 7. The input signal 7 relates to a characteristic of the image being viewed. For example, the input signal 7 may relate to an emotional characteristic of the person being viewed. The emotion of the person being viewed can be detected from the tone of voice of that person (e.g. the average pitch of the voice). Alternatively, the emotion can be detected by means of a separate infrared camera (not shown), which detects heat from the face of the person being viewed.
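As a non-limiting sketch of the voice-derived input signal 7, the average pitch mentioned above can be estimated and thresholded into an emotion label; the use of librosa's YIN estimator and the 220 Hz threshold are assumptions for illustration.

```python
import numpy as np
import librosa

ANGRY_PITCH_HZ = 220.0  # illustrative threshold; an assumption

def emotion_from_voice(wav_path: str) -> str:
    """Derive an input signal ('angry' or 'calm') from average voice pitch."""
    samples, sr = librosa.load(wav_path, sr=None, mono=True)
    f0 = librosa.yin(samples, fmin=60, fmax=500, sr=sr)  # per-frame pitch, Hz
    return "angry" if float(np.mean(f0)) > ANGRY_PITCH_HZ else "calm"
```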
Based on the input signal 7 representing a characteristic of the object being viewed, the video processor 5 is configured to automatically adapt the image signal accordingly. In one example, the skin color of the person is changed according to the emotion of the person. For example, the skin color of the person could be changed to red when an angry tone is detected, or grey when a calm tone is detected. Likewise, the skin color could be changed to red when the infrared camera detects an increase in heat dissipation, or grey when less heat is detected. Preferably, a user can configure the system such that the adaptation carried out by the video processor 5 is programmable. For example, the user can configure a settings table stored in a memory, to select the input condition that triggers an adaptation by the video processor 5, and a corresponding output condition for each input signal 7. In other words, the settings table maps a received input signal to an adaptation process to be performed by the video processor 5.
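The programmable settings table might be realized as a simple mapping from input condition to a recoloring operation, as in this non-limiting sketch; the BGR tints and the 50/50 blend factor are assumptions.

```python
import numpy as np

# Settings table: input condition -> BGR tint applied to skin pixels,
# mirroring the red/angry and grey/calm example above.
SETTINGS_TABLE = {
    "angry": np.array([0, 0, 255], dtype=np.float64),     # red
    "calm": np.array([128, 128, 128], dtype=np.float64),  # grey
}

def adapt_skin_color(frame_bgr, mask, condition):
    """Recolor masked skin pixels per the settings table entry, if any."""
    tint = SETTINGS_TABLE.get(condition)
    if tint is None:
        return frame_bgr  # no adaptation configured for this input signal
    out = frame_bgr.copy()
    skin = mask > 0
    # Blend rather than overwrite so facial shading survives the recoloring.
    out[skin] = (0.5 * out[skin] + 0.5 * tint).astype(np.uint8)
    return out
```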
In addition to changing the skin color as described above, it is noted that the adaptation carried out by the video processor 5 may comprise other forms of video processing, for example facial texturing may also be applied. Thus, according to the first embodiment, the image signal can be automatically changed in accordance with an input signal relating to a characteristic of the person being viewed. Thus, in this embodiment the cartoonizing involves a form of emotional conditioning.
Fig. 2 shows a second embodiment of the invention. According to the second embodiment, a feature extraction unit 21 is provided for detecting a feature in the object being viewed by a sensor or camera 23. For example, the feature extraction unit 21 may be configured to detect a feature in the face of a person being viewed. A video processor 25 receives the image signal from the camera 23, and a feature extraction signal from the feature extraction unit 21. The feature may be, for example, a left eye, a right eye, a left cheek, a right cheek, a chin, a left ear, a right ear, the top of the head, a left eyebrow, a right eyebrow, a beard, a nose or a mouth, etc. Having detected a particular feature in the image signal, the video processor 25 is configured to alter or adapt the image signal, by superimposing a secondary feature onto the image. The secondary feature is preferably positioned in a predetermined relationship to the feature originally detected, for example on or next to the feature originally detected. The object being superimposed may be, for example, sunglasses, a hat, a beard, a tattoo, or any other feature chosen by a user.
Also, as described above in the first embodiment, a user may select in advance which secondary feature is to be automatically superimposed onto the image signal, for example by configuring a settings table to map detected features with secondary features. For example, the system could be configured to automatically superimpose a pair of spectacles onto the eyes of the person being viewed, or a beard onto the person's chin.
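Superimposing a secondary feature in a predetermined positional relationship to a detected feature might look like the following non-limiting sketch; the sprite file names, the feature-to-sprite table and the alpha blending are assumptions for illustration.

```python
import cv2
import numpy as np

# Detected feature -> secondary feature sprite (RGBA image files);
# both the mapping and the file names are hypothetical.
FEATURE_TABLE = {
    "eyes": "sunglasses.png",
    "chin": "beard.png",
}

def superimpose(frame_bgr, feature_name, box):
    """Alpha-blend the configured sprite over the detected feature's box."""
    sprite_path = FEATURE_TABLE.get(feature_name)
    if sprite_path is None:
        return frame_bgr
    x, y, w, h = box
    sprite = cv2.imread(sprite_path, cv2.IMREAD_UNCHANGED)  # keeps alpha
    sprite = cv2.resize(sprite, (w, h))
    alpha = sprite[:, :, 3:4].astype(np.float64) / 255.0
    patch = frame_bgr[y:y + h, x:x + w].astype(np.float64)
    blended = alpha * sprite[:, :, :3] + (1.0 - alpha) * patch
    frame_bgr[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame_bgr
```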
The second embodiment can also be configured to automatically superimpose a secondary feature according to an emotional characteristic of the person being viewed. For example, the emotion of the person could be determined from the voice of the person, or heat detected from a separate infrared camera. In one example, if an angry emotion is detected, a set of horns could be placed on the head of the person being viewed, or smoke arranged to appear from the person's ears or forehead.
Alternatively, the background of a scene could be automatically changed according to the emotional characteristic of the person being viewed.
Fig. 3 shows a third embodiment of the invention, comprising a visible light camera 33 and an infrared (IR) or near infrared (nIR) camera 34. A face and/or skin detection unit 31 receives the signals from the visible light camera 33 and the IR camera 34, and based on the two received signals, an improved face/skin tone detection is carried out. A video processor 35 receives an image signal from the visible light camera 33, plus a face/skin detection signal from the face/skin detection unit 31. As with the first embodiment, the video processor 35 is configured to change the skin color based on an input signal 37. The input signal 37 relates to a characteristic of the image being viewed. For example, the input signal 37 may relate to an emotional characteristic of the person being viewed. The emotion of the person being viewed can be detected from the tone of voice of that person (e.g. the average pitch of the voice). Alternatively, the emotion can be detected using the infrared camera 34, which detects heat from the face of the person being viewed.
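The disclosure states only that the two camera signals are combined for improved face/skin detection; one plausible, non-limiting combination keeps skin-colored pixels that are also warm in the IR frame, with the normalized heat threshold below being an assumption.

```python
import numpy as np

def fused_skin_mask(color_mask, ir_frame, heat_threshold=0.6):
    """Combine a color-based skin mask with IR heat into one mask.

    heat_threshold applies to min-max normalized IR intensity; the
    value is an assumption, not taken from the disclosure.
    """
    ir = ir_frame.astype(np.float64)
    span = ir.max() - ir.min()
    warm = (ir - ir.min()) / (span + 1e-9) > heat_threshold
    return np.logical_and(color_mask > 0, warm).astype(np.uint8) * 255
```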
Based on the input signal 37 representing a characteristic of the object being viewed, the video processor 35 is configured to adapt the image signal accordingly. In one example, the skin color of the person is changed according to the emotion of the person. For example, the skin color of the person could be changed to red when an angry tone is detected, or grey when a calm tone is detected. Likewise, the skin color could be changed to red when the infrared camera detects an increase in heat dissipation, or grey when less heat is detected. There is therefore provided a cartoon apparatus that automatically adapts an image signal in accordance with an input signal relating to a characteristic of an image being viewed, and/or a feature detected in the image being viewed.
While the preferred embodiments have referred to changing the skin color, for example, it will be appreciated that other features of an image could also be changed, for example hair color, eye color, etc. In addition, although the preferred embodiments are described in relation to a person being the main object in the image signal, it will be appreciated that the invention is equally applicable to any other object or objects.
Thus, it should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The word 'comprising' does not exclude the presence of elements or steps other than those listed in a claim.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims

CLAIMS:
1. An apparatus for cartoonizing an image signal having an object of interest, the apparatus comprising: detecting means (1; 21; 31) for detecting a feature of the object; receiving means (7; 27; 37) for receiving an input signal, the input signal relating to a characteristic of the object; and image processing means (5; 25; 35) configured to automatically adapt the image signal based on the received input signal and/or the detected feature.
2. An apparatus as claimed in claim 1, wherein the detecting means (1; 21; 31) is configured to detect the skin of a person in the image signal.
3. An apparatus as claimed in claim 1, wherein the detecting means (1; 21; 31) is configured to detect the face of a person in the image signal.
4. An apparatus as claimed in claim 1, wherein the detecting means (1; 21; 31) is configured to detect a feature that is at least one of a left eye, a right eye, a left cheek, a right cheek, a chin, a left ear, a right ear, the top of the head, a left eyebrow, a right eyebrow, a beard, a nose and a mouth.
5. An apparatus as claimed in any one of claims 1 to 4, wherein the input signal relates to a characteristic of the object being viewed.
6. An apparatus as claimed in any one of claims 1 to 4, wherein the input signal relates to an emotional characteristic of a person.
7. An apparatus as claimed in claim 6, further comprising means for determining the emotional characteristic from the voice of the person being viewed.
8. An apparatus as claimed in claim 6, further comprising means for determining the emotional characteristic from heat dissipated from the person being viewed.
9. An apparatus as claimed in claim 8, further comprising an infrared camera (34) for detecting heat from the person being viewed.
10. An apparatus as claimed in any one of claims 1 to 9, wherein the image processing means is adapted to change a skin color in response to the input signal.
11. An apparatus as claimed in any one of claims 1 to 9, wherein the image processing means is adapted to change a facial texture in response to the input signal.
12. An apparatus as claimed in any one of claims 1 to 9, wherein the image processing means is adapted to superimpose a secondary feature in the image signal.
13. An apparatus as claimed in claim 12, wherein the image processing means is adapted to superimpose the secondary feature in a predetermined positional relationship to the detected feature.
14. An apparatus as claimed in claim 12 or 13, wherein the image processing means is adapted to superimpose a secondary feature from one of a hat, a pair of sunglasses, a pair of spectacles, a beard, a moustache, a set of horns and a tattoo.
15. An apparatus as claimed in any one of claims 1 to 14, further comprising a memory for mapping a received input signal and/or a detected feature with an adaptation process to be performed by the image processing means.
16. An apparatus as claimed in claim 15, wherein the mapping is programmable by a user.
17. A method of cartoonizing an image signal having an object of interest, the method comprising the steps of: detecting a feature of the object; receiving an input signal, the input signal relating to a characteristic of the object; and automatically adapting the image signal based on the received input signal and/or the detected feature.
PCT/IB2005/052162 2004-07-02 2005-06-29 Video processing Ceased WO2006003625A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04103125 2004-07-02
EP04103125.3 2004-07-02

Publications (1)

Publication Number Publication Date
WO2006003625A1 2006-01-12

Family

ID=34972268

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/052162 Ceased WO2006003625A1 (en) 2004-07-02 2005-06-29 Video processing

Country Status (2)

Country Link
TW (1) TW200617804A (en)
WO (1) WO2006003625A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104410819A (en) * 2014-11-24 2015-03-11 苏州福丰科技有限公司 DSP (digital signal processor) unit for human ear detection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5426460A (en) * 1993-12-17 1995-06-20 At&T Corp. Virtual multimedia service for mass market connectivity
US5710590A (en) * 1994-04-15 1998-01-20 Hitachi, Ltd. Image signal encoding and communicating apparatus using means for extracting particular portions of an object image
WO2001077976A2 (en) * 2000-03-28 2001-10-18 Eyeweb, Inc. Image segmenting to enable electronic shopping for wearable goods and cosmetic services
US20030117485A1 (en) * 2001-12-20 2003-06-26 Yoshiyuki Mochizuki Virtual television phone apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5426460A (en) * 1993-12-17 1995-06-20 At&T Corp. Virtual multimedia service for mass market connectivity
US5710590A (en) * 1994-04-15 1998-01-20 Hitachi, Ltd. Image signal encoding and communicating apparatus using means for extracting particular portions of an object image
WO2001077976A2 (en) * 2000-03-28 2001-10-18 Eyeweb, Inc. Image segmenting to enable electronic shopping for wearable goods and cosmetic services
US20030117485A1 (en) * 2001-12-20 2003-06-26 Yoshiyuki Mochizuki Virtual television phone apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KSHIRSAGAR, S. et al.: "Personalized face and speech communication over the Internet", Proceedings IEEE Virtual Reality 2001 (VR), Yokohama, Japan, 13 March 2001 (2001-03-13), pages 37-44, XP010535482, ISBN: 0-7695-0948-7 *
MAGNENAT-THALMANN, N. et al.: "Face to virtual face", Proceedings of the IEEE, vol. 86, no. 5, May 1998 (1998-05-01), USA, pages 870-883, XP002347421 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104410819A (en) * 2014-11-24 2015-03-11 苏州福丰科技有限公司 DSP (digital signal processor) unit for human ear detection

Also Published As

Publication number Publication date
TW200617804A (en) 2006-06-01

Similar Documents

Publication Publication Date Title
US20210034864A1 (en) Iris liveness detection for mobile devices
US9443307B2 (en) Processing of images of a subject individual
EP3555799B1 (en) A method for selecting frames used in face processing
US9792490B2 (en) Systems and methods for enhancement of facial expressions
EP3236391B1 (en) Object detection and recognition under out of focus conditions
US20060110014A1 (en) Expression invariant face recognition
KR20190038594A (en) Face recognition-based authentication
JP7092108B2 (en) Information processing equipment, information processing methods, and programs
KR20050007427A (en) Face-recognition using half-face images
CN113302907B (en) Photography methods, devices, equipment and computer-readable storage media
US7023454B1 (en) Method and apparatus for creating a virtual video of an object
KR102077887B1 (en) Enhancing video conferences
US11216648B2 (en) Method and device for facial image recognition
Bala et al. Automatic detection and tracking of faces and facial features in video sequences
CN110276308A (en) Image processing method and device
EP3467619A2 (en) Device for influencing virtual objects of augmented-reality
CN103685948A (en) A shooting method and device
KR102439216B1 (en) Mask-wearing face recognition method and server using artificial intelligence deep learning model
CN115082980A (en) Image recognition method, device and computer storage medium
KR102194511B1 (en) Representative video frame determination system and method using same
WO2006003625A1 (en) Video processing
JP3245447U (en) face recognition system
US11182976B2 (en) Device for influencing virtual objects of augmented reality
US20230046710A1 (en) Extracting information about people from sensor signals
CN117041670B (en) Image processing methods and related equipment

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase