CN111046814A - Image processing method and electronic device - Google Patents
- Publication number
- CN111046814A CN111046814A CN201911307334.2A CN201911307334A CN111046814A CN 111046814 A CN111046814 A CN 111046814A CN 201911307334 A CN201911307334 A CN 201911307334A CN 111046814 A CN111046814 A CN 111046814A
- Authority
- CN
- China
- Prior art keywords
- image
- user
- decoration object
- target
- expression package
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the invention disclose an image processing method and an electronic device. The image processing method includes the following steps: receiving a target input; acquiring face information in a first image in response to the target input; acquiring a target decoration object corresponding to the face information according to the face information; and performing image processing on the first image by using the target decoration object to obtain an emoticon. With the embodiments of the invention, the time spent making emoticons can be saved, and the emoticons produced are novel.
Description
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to an image processing method and electronic equipment.
Background
With the development of technology, electronic devices are widely used, and people's social range has greatly expanded. With that expansion, users communicate more and more through the instant-messaging software on their electronic devices.
At present, in instant-messaging software, users enrich their chats by sending text, pictures, voice, emoticons, and other information. Emoticons obtained from the network, however, leave the user with only a fixed set of choices, and many of them are identical and lack novelty. Alternatively, the user can make an emoticon from a given template, for example by manually adding text and pictures, but this consumes time during the chat, so the user experience is poor.
Therefore, there is a need for a way to quickly produce novel emoticons.
Disclosure of Invention
Embodiments of the invention provide an image processing method and an electronic device, aiming to solve the problems that making an emoticon is time-consuming and that the resulting emoticons lack novelty.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, where the image processing method is applied to an electronic device, and the image processing method includes:
receiving a target input of a user;
acquiring face information in a first image in response to a target input;
acquiring a target decoration object corresponding to the face information according to the face information;
and carrying out image processing on the first image by using the target decoration object to obtain the expression package.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
the receiving module is used for receiving target input of a user;
an acquisition module for acquiring face information in a first image in response to a target input;
the acquisition module is further used for acquiring, according to the face information, a target decoration object corresponding to the face information;
and the processing module is used for carrying out image processing on the first image by using the target decoration object to obtain the expression package.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the image processing method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the image processing method according to the first aspect.
In the embodiment of the invention, after a target input of a user is received, face information in a first image is acquired in response to the target input; then, a target decoration object corresponding to the face information is acquired according to the face information; and then image processing is performed on the first image by using the target decoration object to obtain an emoticon. Therefore, a large amount of time need not be spent manually making the emoticon, and because the emoticon is matched to the face information, it is novel and the user experience is improved.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an electronic device according to an embodiment of the present invention;
fig. 3 is a schematic diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the image processing method includes:
S101: receiving a target input of a user;
S102: acquiring face information in a first image in response to the target input;
S103: acquiring a target decoration object corresponding to the face information according to the face information;
S104: performing image processing on the first image by using the target decoration object to obtain an emoticon.
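The four steps S101–S104 can be illustrated with a minimal Python sketch. All helper names, the face attributes, and the decoration catalog below are hypothetical stand-ins for illustration, not details from the patent:

```python
# Hypothetical sketch of the S101-S104 flow; every helper is a placeholder.
def get_face_info(image):
    # S102: stand-in for a real face detector/analyzer
    return {"emotion": "happy", "gender": "female", "age": "young"}

def get_decoration(face_info):
    # S103: pick a decoration object matching the face information
    catalog = {"happy": "party-hat sticker", "sad": "tear sticker"}
    return catalog.get(face_info["emotion"], "plain frame")

def make_emoticon(image, decoration):
    # S104: stand-in for the compositing step
    return f"{image} + {decoration}"

def on_target_input(image):
    # S101: entry point triggered by the user's target input
    face_info = get_face_info(image)
    decoration = get_decoration(face_info)
    return make_emoticon(image, decoration)

print(on_target_input("selfie.png"))  # → selfie.png + party-hat sticker
```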
In the embodiment of the invention, after a target input of a user is received, face information in a first image is acquired in response to the target input; then, a target decoration object corresponding to the face information is acquired according to the face information; and then image processing is performed on the first image by using the target decoration object to obtain an emoticon. Therefore, a large amount of time need not be spent manually making the emoticon, and because the emoticon is matched to the face information, it is novel and the user experience is improved.
In this embodiment of the present invention, before the face information of the first image is obtained in S102, the images in the electronic device may be classified in advance, which specifically includes:
acquiring an image in the electronic equipment;
performing face recognition on an image in the electronic equipment to obtain a second image of at least one user;
extracting, for each of the at least one user, face information of a second image of each user;
classifying the second image of each user according to the face information of the second image of each user to obtain a classified image; wherein the classified image comprises a first image.
Specifically, the images in the electronic device are acquired first. Then the portraits in those images are recognized and extracted, and, in a first classification, images of the same portrait are grouped into the same album, with the albums named in sequence, e.g., "portrait 1", "portrait 2" (these two are used as examples below). Next, the images in "portrait 1" and "portrait 2" are classified a second time according to face information, producing folders such as "face information 1" and "face information 2", which are sub-albums of "portrait 1" and "portrait 2".
In addition, after the images in the electronic device are classified, the user may rename the albums, for example to label the face information with a name, relationship, nickname, and so on.
It should be noted that the image in the electronic device may also be an image frame in a video.
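The two-level classification above (first by portrait, then by face information) can be sketched as a simple grouping; the tags here are illustrative stand-ins for the outputs of real recognition models:

```python
from collections import defaultdict

# Hypothetical photo records; "person" and "face" stand in for the results
# of portrait recognition and face-information extraction.
photos = [
    {"file": "a.jpg", "person": "portrait 1", "face": "smiling"},
    {"file": "b.jpg", "person": "portrait 1", "face": "frowning"},
    {"file": "c.jpg", "person": "portrait 2", "face": "smiling"},
]

# First-level albums keyed by portrait, second-level sub-albums keyed by
# face information.
albums = defaultdict(lambda: defaultdict(list))
for p in photos:
    albums[p["person"]][p["face"]].append(p["file"])

print(dict(albums["portrait 1"]))
```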
In the embodiment of the present invention, the target input in S101 includes, but is not limited to, clicking, dragging, zooming, or sliding input.
In this embodiment of the present invention, the acquiring of the face information in the first image in S102 includes:
identifying face information in a first image;
facial information in the first image is extracted.
The first image is an image stored in the electronic device; it may also be a portrait captured at the moment, or an image frame in a video, where the video may be a stored video or a currently recorded video.
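The identify-then-extract step of S102 can be sketched as cropping a detected face region out of the first image. NumPy arrays stand in for images here, and the bounding box is assumed to come from a separate face detector; both are assumptions for illustration:

```python
import numpy as np

def extract_face(image, box):
    # Crop the face region given a (x, y, width, height) bounding box,
    # which a real implementation would obtain from a face detector.
    x, y, w, h = box
    return image[y:y+h, x:x+w]

# Synthetic 100x100 RGB "first image" whose bright region plays the face.
first_image = np.zeros((100, 100, 3), dtype=np.uint8)
first_image[30:70, 20:60] = 255
face = extract_face(first_image, (20, 30, 40, 40))
print(face.shape)  # → (40, 40, 3)
```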
In the embodiment of the present invention, the acquiring, according to the face information, the target decoration object corresponding to the face information in S103 includes:
and matching the target decoration object corresponding to the face information from the network and/or the electronic equipment according to the face information.
In this embodiment of the present invention, the performing image processing on the first image by using the target decoration object in S104 to obtain an expression package includes:
in one example, when the target decoration object comprises at least one of pictures or texts, the at least one of the pictures or the texts is overlapped with the first image to obtain the expression package.
Specifically, when the target decoration object includes at least one of a picture or a text, the at least one of the picture or the text may be superimposed on the first image to obtain the emoticon.
Note that the superposition adds new elements to the first image. The pictures may include animated icons, such as beer, a basketball, a guitar, a cap, or tears, as well as stickers, such as expression stickers and cartoon-character stickers; the text may include popular network phrases, celebrity names, face characters (kaomoji), holiday greetings, custom text, and the like.
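The superposition step can be sketched as alpha-blending a decoration onto a region of the first image. This is a minimal NumPy sketch under the assumption that the decoration carries an alpha mask; the function name and offsets are illustrative:

```python
import numpy as np

def overlay(base, sticker, alpha, top, left):
    # Blend the sticker into base at (top, left) using its alpha mask.
    h, w = sticker.shape[:2]
    region = base[top:top+h, left:left+w].astype(float)
    blended = alpha[..., None] * sticker + (1 - alpha[..., None]) * region
    base[top:top+h, left:left+w] = blended.astype(np.uint8)
    return base

base = np.full((8, 8, 3), 100, dtype=np.uint8)      # gray "first image"
sticker = np.full((2, 2, 3), 200, dtype=np.uint8)   # bright decoration
alpha = np.ones((2, 2))                             # fully opaque
out = overlay(base, sticker, alpha, 1, 1)
print(out[1, 1])  # → [200 200 200]
```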
In one example, when the target decoration object comprises a picture, the facial information in the first image is replaced or covered by the facial information in the picture, and the expression package is obtained.
Specifically, when the target decoration object comprises a picture, the facial information in the picture can be replaced by the facial information in the first image, so that the expression package is obtained.
It should be noted that replacing or covering the face information in the first image specifically means making the emoticon based on the face information, which includes extracting the facial features and changing the face. The facial features are extracted in order to obtain the face information of the first portrait and separate it out, so that it can then replace or cover the face in the picture included in the target decoration object.
In addition, after the face information of a portrait is extracted, it may be added to an element library to provide material for the face-changing function. When making an emoticon, a face template popular on the network may be applied to the first portrait, and the face-information material added by the user may likewise be used to change the face in an emoticon.
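A crude version of the replace-or-cover step can be sketched as overwriting one face region with another. Real face changing involves alignment and blending; this sketch only shows the region replacement, with the bounding box again assumed to come from a detector:

```python
import numpy as np

def replace_face(first_image, template_face, box):
    # Overwrite the face region of first_image with the template face.
    # A production face-swap would align and blend instead of hard-pasting.
    x, y, w, h = box
    out = first_image.copy()
    out[y:y+h, x:x+w] = template_face[:h, :w]
    return out

img = np.zeros((10, 10, 3), dtype=np.uint8)
template = np.full((4, 4, 3), 50, dtype=np.uint8)
swapped = replace_face(img, template, (2, 2, 4, 4))
print(int(swapped[3, 3, 0]))  # → 50
```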
In one example, when the target decoration object includes audio, the audio is added to the first image, resulting in an emoticon.
In a case where the target decoration object further includes a picture, the picture and the first image are spliced to obtain a spliced image; the audio is then used as the background audio of the spliced image and inserted into it to obtain the emoticon.
Specifically, two or more images are spliced, and the splicing may be performed spatially or temporally. The two or more images may be all the first images, or may include at least one first image and at least one target decoration object.
The first image is a general name of all images in the album. The number of the target decoration objects is not limited to one, and may be a plurality of target decoration objects.
After the images are spliced, the spliced image can be made into a dynamic emoticon: the audio is used as the background audio of the spliced image and inserted into it to obtain the dynamic emoticon.
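The splice-plus-audio step can be sketched as joining frames spatially and recording the background-audio track alongside them. The dictionary result is a stand-in for a real muxing step performed by a video library, and all names are illustrative:

```python
import numpy as np

def make_dynamic_emoticon(images, audio_path):
    # Spatial splice: join the images side by side; a temporal splice
    # would instead keep them as an ordered frame sequence.
    stitched = np.hstack(images)
    return {"frames": stitched, "background_audio": audio_path}

a = np.zeros((4, 4, 3), dtype=np.uint8)
b = np.ones((4, 4, 3), dtype=np.uint8)
pack = make_dynamic_emoticon([a, b], "cheer.mp3")
print(pack["frames"].shape)  # → (4, 8, 3)
```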
Wherein the target decoration object includes: one or more of pictures, text, or audio.
In this embodiment of the present invention, after the emoticon is produced, the image processing method further includes:
storing the emoticon.
Specifically, when the emoticon is stored, it may be stored according to the portrait and the face information; of course, the user may also store emoticons in categories according to the user's own settings. After the emoticons are classified and stored, the user can conveniently and accurately find the emoticon to be sent.
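The classified storage can be sketched as filing each emoticon under a portrait/face-information path so it can be located quickly later; the directory layout and names are hypothetical:

```python
import os
import tempfile

def store_emoticon(root, portrait, face_info, name, data):
    # File the emoticon under root/portrait/face_info/name.
    folder = os.path.join(root, portrait, face_info)
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, name)
    with open(path, "wb") as f:
        f.write(data)
    return path

root = tempfile.mkdtemp()
p = store_emoticon(root, "portrait 1", "smiling", "pack.gif", b"GIF89a")
print(p)
```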
In one example, the face information includes: one or more of an emotional characteristic, a gender characteristic, or an age characteristic.
In addition, the user may upload the classified emoticons to the cloud, so that after changing to a new electronic device, the user can continue to use his or her own emoticons.
Embodiments of the invention thus provide the user with a function for quickly making emoticons based on the album. Designing expressions for the portraits in the album enriches the user experience and adds interest, and storing the user's emoticons in a cloud album lets the user use exclusive emoticons without depending on a particular device.
The electronic devices according to embodiments of the present invention may include various handheld devices, vehicle-mounted devices, Wearable Devices (WD), computing devices or other processing devices connected to a wireless modem, and various User Equipments (UE), Mobile Stations (MS), terminals (terminal), and so on.
Fig. 2 is a schematic diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 2, the electronic device 20 includes:
a receiving module 201, configured to receive a target input of a user;
an acquisition module 202 for acquiring face information in a first image in response to a target input;
the obtaining module 202 is further configured to obtain a target decoration object corresponding to the face information according to the face information;
and the processing module 203 is configured to perform image processing on the first image by using the target decoration object to obtain an expression package.
In the embodiment of the invention, after a target input of a user is received, face information in a first image is acquired in response to the target input; then, a target decoration object corresponding to the face information is acquired according to the face information; and then image processing is performed on the first image by using the target decoration object to obtain an emoticon. Therefore, a large amount of time need not be spent manually making the emoticon, and because the emoticon is matched to the face information, it is novel and the user experience is improved.
Optionally, the electronic device further includes:
the acquisition module 202 is further configured to acquire an image in the electronic device;
the identification module is used for carrying out face identification on the image in the electronic equipment to obtain a second image of at least one user;
an extraction module for extracting, for each of the at least one user, facial information of a second image of each user;
the classification module is used for classifying the second image of each user according to the face information of the second image of each user to obtain a classified image; wherein the classified image comprises a first image.
Optionally, in a case that the target decoration object includes at least one of a picture or a text, the processing module 203 is further configured to:
and superposing at least one of the pictures or the characters and the first image to obtain the expression package.
Optionally, in a case that the target decoration object includes a picture, the processing module 203 is further configured to:
and replacing or covering the facial information in the first image by using the facial information in the picture to obtain the expression package.
Optionally, in a case that the target decoration object includes audio, the processing module 203 is further configured to:
and adding audio to the first image to obtain the expression package.
Optionally, in a case that the target decoration object further includes a picture, the processing module 203 is further configured to:
splicing the picture and the first image to obtain a spliced image;
and taking the audio as the background audio of the spliced image, and inserting the audio into the spliced image to obtain the expression package.
Optionally, the face information includes: one or more of an emotional characteristic, a gender characteristic, or an age characteristic.
Optionally, the target decoration object includes: one or more of pictures, text, or audio.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiment of fig. 1, and is not described herein again to avoid repetition.
In the embodiment of the invention, after a target input of a user is received, face information in a first image is acquired in response to the target input; then, a target decoration object corresponding to the face information is acquired according to the face information; and then image processing is performed on the first image by using the target decoration object to obtain an emoticon. Therefore, a large amount of time need not be spent manually making the emoticon, and because the emoticon is matched to the face information, it is novel and the user experience is improved.
Fig. 3 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 300 includes, but is not limited to: radio frequency unit 301, network module 302, audio output unit 303, input unit 304, sensor 305, display unit 306, user input unit 307, interface unit 308, memory 309, processor 310, and power supply 311. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 3 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 310 is configured to control the user input unit 307 to receive a target input of a user;
a processor 310, further configured to obtain face information in the first image in response to the target input;
acquiring a target decoration object corresponding to the face information according to the face information;
and carrying out image processing on the first image by using the target decoration object to obtain the expression package.
In the embodiment of the invention, after a target input of a user is received, face information in a first image is acquired in response to the target input; then, a target decoration object corresponding to the face information is acquired according to the face information; and then image processing is performed on the first image by using the target decoration object to obtain an emoticon. Therefore, a large amount of time need not be spent manually making the emoticon, and because the emoticon is matched to the face information, it is novel and the user experience is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards it to the processor 310 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 302, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output as sound. Also, the audio output unit 303 may also provide audio output related to a specific function performed by the electronic apparatus 300 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 304 is used to receive audio or video signals. The input unit 304 may include a Graphics Processing Unit (GPU) 3041 and a microphone 3042; the graphics processor 3041 processes image data of still pictures or video obtained by an image capture apparatus (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 306. The image frames processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or transmitted via the radio frequency unit 301 or the network module 302. The microphone 3042 may receive sounds and process them into audio data. In the phone-call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 301 for output.
The electronic device 300 also includes at least one sensor 305, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 3061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 305 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 306 is used to display information input by the user or information provided to the user. The Display unit 306 may include a Display panel 3061, and the Display panel 3061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 307 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 3071 (e.g., operations by a user on or near the touch panel 3071 using a finger, a stylus, or any suitable object or attachment). The touch panel 3071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 307 may include other input devices 3072 in addition to the touch panel 3071. Specifically, the other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 3071 may be overlaid on the display panel 3061, and when the touch panel 3071 detects a touch operation on or near the touch panel, the touch operation is transmitted to the processor 310 to determine the type of the touch event, and then the processor 310 provides a corresponding visual output on the display panel 3061 according to the type of the touch event. Although the touch panel 3071 and the display panel 3061 are shown in fig. 3 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 308 is an interface for connecting an external device to the electronic apparatus 300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 300 or may be used to transmit data between the electronic apparatus 300 and the external device.
The memory 309 may be used to store software programs as well as various data. The memory 309 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device (such as audio data and a phonebook). Further, the memory 309 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 310 is the control center of the electronic device. It connects the various parts of the electronic device through various interfaces and lines, and performs the functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 309 and calling the data stored in the memory 309, thereby monitoring the electronic device as a whole. The processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 310.
The electronic device 300 may further include a power supply 311 (such as a battery) for supplying power to the various components. Preferably, the power supply 311 may be logically connected to the processor 310 through a power management system, so that charging, discharging, and power-consumption management are implemented through the power management system.
In addition, the electronic device 300 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 310, a memory 309, and a computer program stored in the memory 309 and executable on the processor 310. When executed by the processor 310, the computer program implements each process of the above embodiment of the image processing method and can achieve the same technical effects; to avoid repetition, the details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the embodiment of the image processing method and can achieve the same technical effects; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
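As an illustration only, the flow of the image processing method of the embodiments (receive a target input, acquire face information from a first image, obtain a matching target decoration object, and compose the expression package) can be sketched in Python. The face attributes, the decoration lookup table, and the composition rule below are invented stand-ins, not part of the disclosure:

```python
# Purely illustrative sketch of the claimed flow. The image is modeled as a
# dict that already carries face attributes, standing in for real face
# recognition; DECORATIONS is an assumed emotion-to-decoration mapping.

def get_face_info(image: dict) -> dict:
    """Stand-in for extracting face information from the first image."""
    return {"emotion": image["emotion"], "age": image["age"]}


DECORATIONS = {  # hypothetical mapping from an emotional characteristic
    "happy": {"text": "so happy!"},
    "sad": {"text": "cheer up"},
}


def make_expression_package(first_image: dict) -> dict:
    # Acquire face information in response to the target input.
    face_info = get_face_info(first_image)
    # Acquire the target decoration object corresponding to the face info.
    decoration = DECORATIONS[face_info["emotion"]]
    # "Superpose" the decoration on the first image (cf. claims 3 and 11).
    return {"base": first_image["name"], "overlay": decoration["text"]}


pkg = make_expression_package({"name": "selfie.png", "emotion": "happy", "age": 25})
print(pkg)  # {'base': 'selfie.png', 'overlay': 'so happy!'}
```

A real implementation would replace `get_face_info` with an actual face recognition step and the dict composition with pixel-level image processing.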
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware alone, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the invention is not limited to those embodiments, which are illustrative rather than restrictive. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (15)
1. An image processing method applied to an electronic device, comprising:
receiving a target input;
acquiring face information in a first image in response to the target input;
acquiring a target decoration object corresponding to the face information according to the face information;
and carrying out image processing on the first image by using the target decoration object to obtain an expression package.
2. The method of claim 1, wherein, prior to acquiring the face information in the first image, the method further comprises:
acquiring an image in the electronic device;
performing face recognition on the image in the electronic device to obtain a second image of at least one user;
extracting, for each user of the at least one user, face information of the second image of that user; and
classifying the second image of each user according to the face information of the second image of each user to obtain a classified image; wherein the classified image comprises the first image.
3. The method according to claim 1 or 2, wherein in a case that the target decoration object includes at least one of a picture and a text, processing the first image by using the target decoration object to obtain an expression package comprises:
and superposing at least one of the picture or the text on the first image to obtain the expression package.
4. The method according to claim 1 or 2, wherein in a case that the target decoration object includes a picture, the processing the first image by using the target decoration object to obtain an expression package comprises:
and replacing or covering the face information in the first image with the face information in the picture to obtain the expression package.
5. The method according to claim 1 or 2, wherein in the case that the target decoration object includes audio, processing the first image by using the target decoration object to obtain an expression package, comprises:
and adding the audio to the first image to obtain the expression package.
6. The method of claim 5, wherein in the case that the target decoration object further includes a picture, processing the first image with the target decoration object to obtain an expression package comprises:
splicing the picture and the first image to obtain a spliced image;
and inserting the audio into the spliced image as background audio of the spliced image to obtain the expression package.
7. The method according to claim 1 or 2, wherein the face information comprises: one or more of an emotional characteristic, a gender characteristic, or an age characteristic.
8. The method according to claim 1 or 2, wherein the target decoration object comprises: one or more of pictures, text, or audio.
9. An electronic device, comprising:
a receiving module for receiving a target input of a user;
an acquisition module for acquiring face information in a first image in response to the target input, the acquisition module being further used for acquiring a target decoration object corresponding to the face information according to the face information; and
a processing module for carrying out image processing on the first image by using the target decoration object to obtain an expression package.
10. The electronic device of claim 9, wherein the acquisition module is further used for acquiring an image in the electronic device, and the electronic device further comprises:
an identification module for performing face recognition on the image in the electronic device to obtain a second image of at least one user;
an extraction module for extracting, for each user of the at least one user, face information of the second image of that user; and
a classification module for classifying the second image of each user according to the face information of the second image of each user to obtain a classified image; wherein the classified image comprises the first image.
11. The electronic device of claim 9 or 10, wherein in the case that the target decoration object comprises at least one of a picture or a text, the processing module is further configured to:
and superposing at least one of the picture or the text on the first image to obtain the expression package.
12. The electronic device of claim 9 or 10, wherein in the case that the target decorative object comprises a picture, the processing module is further configured to:
and replacing or covering the face information in the first image with the face information in the picture to obtain the expression package.
13. The electronic device of claim 9 or 10, wherein in the case that the target decoration object comprises audio, the processing module is further configured to:
and adding the audio to the first image to obtain the expression package.
14. The electronic device of claim 13, wherein in the case that the target decorative object further comprises a picture, the processing module is further configured to:
splicing the picture and the first image to obtain a spliced image;
and inserting the audio into the spliced image as background audio of the spliced image to obtain the expression package.
15. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 8.
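The image classification of claims 2 and 10 above (face recognition over the images stored in the device, then grouping the results per user) can be sketched as follows. The `face_id` field is a stand-in for real facial features and, like the function names, is an assumption of this sketch:

```python
# Illustrative sketch of the classification step: run face recognition over
# the device's images and group per-user images, excluding images in which
# no face is found.
from collections import defaultdict
from typing import Optional


def recognize_face(image: dict) -> Optional[str]:
    """Stand-in for face recognition: returns a user identifier, or None."""
    return image.get("face_id")


def classify_images(images: list) -> dict:
    groups = defaultdict(list)
    for img in images:
        face = recognize_face(img)
        if face is not None:
            groups[face].append(img["name"])
    return dict(groups)


albums = classify_images([
    {"name": "a.png", "face_id": "user1"},
    {"name": "b.png", "face_id": "user2"},
    {"name": "c.png", "face_id": "user1"},
    {"name": "scenery.png"},  # no face: excluded from the classified images
])
print(albums)  # {'user1': ['a.png', 'c.png'], 'user2': ['b.png']}
```

Each resulting group corresponds to the "classified image" of one user, from which a first image can then be selected.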
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911307334.2A | 2019-12-18 | 2019-12-18 | Image processing method and electronic device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111046814A (en) | 2020-04-21 |
Family
ID=70237448
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911307334.2A (Pending) | Image processing method and electronic device | 2019-12-18 | 2019-12-18 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111046814A (en) |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107153496A (en) * | 2017-07-04 | 2017-09-12 | 北京百度网讯科技有限公司 | Method and device for inputting emoticons |
| CN107240143A (en) * | 2017-05-09 | 2017-10-10 | 北京小米移动软件有限公司 | Bag generation method of expressing one's feelings and device |
| CN107369196A (en) * | 2017-06-30 | 2017-11-21 | 广东欧珀移动通信有限公司 | Expression, which packs, makees method, apparatus, storage medium and electronic equipment |
| CN107704471A (en) * | 2016-08-09 | 2018-02-16 | 中兴通讯股份有限公司 | A kind of information processing method and device and file call method and device |
| CN107977928A (en) * | 2017-12-21 | 2018-05-01 | 广东欧珀移动通信有限公司 | Expression generation method, apparatus, terminal and storage medium |
| CN108737729A (en) * | 2018-05-04 | 2018-11-02 | Oppo广东移动通信有限公司 | Automatic photographing method and device |
| CN109391842A (en) * | 2018-11-16 | 2019-02-26 | 维沃移动通信有限公司 | A kind of dubbing method, mobile terminal |
| CN110458916A (en) * | 2019-07-05 | 2019-11-15 | 深圳壹账通智能科技有限公司 | Expression packet automatic generation method, device, computer equipment and storage medium |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112749357A (en) * | 2020-09-15 | 2021-05-04 | 腾讯科技(深圳)有限公司 | Interaction method and device based on shared content and computer equipment |
| CN112749357B (en) * | 2020-09-15 | 2024-02-06 | 腾讯科技(深圳)有限公司 | Interaction method and device based on shared content and computer equipment |
Similar Documents
| Publication | Title |
|---|---|
| CN108762954B | Object sharing method and mobile terminal |
| CN107846352B | Information display method and mobile terminal |
| CN107943390B | Character copying method and mobile terminal |
| CN109857905B | Video editing method and terminal equipment |
| CN109660728B | Photographing method and device |
| CN109240577B | Screen capturing method and terminal |
| CN107734170B | Notification message processing method, mobile terminal and wearable device |
| CN108334196B | File processing method and mobile terminal |
| WO2020011077A1 | Notification message displaying method and terminal device |
| CN109213416B | Display information processing method and mobile terminal |
| CN108415652A | Text processing method and mobile terminal |
| CN109388456B | Head portrait selection method and mobile terminal |
| CN109189303B | Text editing method and mobile terminal |
| CN109874038A | Terminal display method and terminal |
| CN108460817B | Jigsaw puzzle method and mobile terminal |
| CN109286728B | Call content processing method and terminal equipment |
| CN109166164B | Expression picture generation method and terminal |
| JP2021532492A | Character input method and terminal |
| CN107748640A | Screen-off display method and mobile terminal |
| CN108765522B | Dynamic image generation method and mobile terminal |
| CN109448069B | Template generation method and mobile terminal |
| CN111007980A | Information input method and terminal equipment |
| CN110932964A | Information processing method and device |
| CN111158815A | Dynamic wallpaper blurring method, terminal and computer-readable storage medium |
| CN109669710B | Note processing method and terminal |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication (application publication date: 2020-04-21) |