WO2019201235A1 - Video communication method and mobile terminal - Google Patents

Video communication method and mobile terminal

Info

Publication number
WO2019201235A1
WO2019201235A1 PCT/CN2019/082862 CN2019082862W WO2019201235A1 WO 2019201235 A1 WO2019201235 A1 WO 2019201235A1 CN 2019082862 W CN2019082862 W CN 2019082862W WO 2019201235 A1 WO2019201235 A1 WO 2019201235A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
mobile terminal
positional relationship
video communication
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2019/082862
Other languages
French (fr)
Chinese (zh)
Inventor
刘馨悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Publication of WO2019201235A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions

Definitions

  • the embodiments of the present disclosure relate to the field of communications, and in particular, to a method and a mobile terminal for video communication.
  • However, the video communication application in the related art is functionally limited: the display screen of the mobile terminal shows only the real-time pictures of the two parties to the chat, so the video communication function is monotonous and the user experience is poor.
  • The embodiments of the present disclosure provide a video communication method and a mobile terminal, to address the problem that the video communication function in the related art is limited and the user experience is poor.
  • According to a first aspect, a video communication method is provided, applied to a first mobile terminal, the method including: acquiring a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, where the first image is an image captured by the first mobile terminal or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal; and displaying the first image and the second image according to the positional relationship between the first image and the second image.
  • According to a second aspect, a first mobile terminal is provided, including: a first acquiring module, configured to acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, where the first image is an image captured by the first mobile terminal or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal; and a display module, configured to display the first image and the second image according to the positional relationship between the first image and the second image.
  • According to a third aspect, a mobile terminal is provided, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video communication method described in the first aspect.
  • the first mobile terminal acquires and displays the first image and the second image associated with the first image, which can improve the display effect of the first image, increase the function of the video communication, and improve the user experience.
  • FIG. 1 is a schematic structural diagram of a system for video communication according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart diagram of a video communication method according to an embodiment of the present disclosure
  • FIG. 3 is a second schematic flowchart of a video communication method according to an embodiment of the present disclosure.
  • FIG. 4 is a third schematic flowchart of a video communication method according to an embodiment of the present disclosure.
  • FIG. 5 is an application scenario of a video communication method according to an embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram of a first mobile terminal according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present disclosure.
  • Referring to FIG. 1, an embodiment of the present disclosure provides a system architecture for video communication, which may include a server 11 (e.g., a video communication server), a first mobile terminal 12 (e.g., a mobile phone), and a second mobile terminal 13 (e.g., a mobile phone).
  • In the process of video communication, optionally, the display area of the first mobile terminal 12 includes a first video communication window 121, used to display an image captured by the second mobile terminal 13 during video communication; and the display area of the second mobile terminal 13 includes a third video communication window 131, used to display an image captured by the first mobile terminal 12 during video communication.
  • Optionally, the display area of the first mobile terminal 12 further includes a second video communication window 122, used to display an image captured by the first mobile terminal 12; and the display area of the second mobile terminal 13 further includes a fourth video communication window 132, used to display an image captured by the second mobile terminal 13 during video communication.
  • the first mobile terminal and the second mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, or a wearable device.
  • It should be noted that the video communication in the embodiments of the present disclosure may be provided by a third-party instant messaging application, such as WeChat or QQ, or by an application built into the mobile terminal, for example, FaceTime (a video calling application built into iOS and Mac OS X).
  • an embodiment of the present disclosure provides a method for video communication, which is applied to a first mobile terminal, and the specific steps are as follows:
  • Step 201 Acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image;
  • the first image is an image captured by the first mobile terminal or the second mobile terminal during video communication
  • the second mobile terminal is a mobile terminal that establishes video communication with the first mobile terminal.
  • the video communication may be provided by the first mobile terminal itself, or may be provided by a third-party application (for example, WeChat or QQ, etc.), and the manner in which the video communication is provided in the embodiment of the present disclosure is not specifically limited.
  • The second image may be an image of a facial expression, a character costume, a scene decoration, or the like, and these images may be static or dynamic.
  • The first mobile terminal may select the second image from the image database of the first mobile terminal or from the network, according to an instruction of the user or according to an image recognition result.
  • The second image is, in effect, a fun animated image that improves the display effect of the first image. It should be understood that the manner of selecting the second image is not specifically limited in the embodiments of the present disclosure. For example, a second image named "eight-character mustache" is selected from the image database of the first mobile terminal according to a user instruction, as in the sketch below.
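
To make the selection step concrete, the following is a minimal sketch of resolving a named second image from the terminal's own image database first, falling back to a network source. The directory, the URL, and the function name are illustrative assumptions, not details from the disclosure.

```python
# Illustrative sketch: resolve a named second image from a local image database,
# falling back to a network location. Paths and the URL are assumptions.
import os

LOCAL_STICKER_DIR = "stickers"                          # hypothetical local image database
NETWORK_STICKER_BASE = "https://example.com/stickers"   # hypothetical network source

def resolve_second_image(name):
    """Return a local path if the sticker exists locally, else a network URL."""
    local_path = os.path.join(LOCAL_STICKER_DIR, f"{name}.png")
    if os.path.exists(local_path):
        return local_path
    return f"{NETWORK_STICKER_BASE}/{name}.png"

# resolve_second_image("eight_character_mustache") -> local file path or fallback URL
```
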
  • The positional relationship between the first image and the second image may be determined according to the type of the second image and the recognition result of the first image, or according to the name of the second image and the recognition result of the first image, where the type of the second image includes a character expression, a character costume, a scene decoration, and the like; the name of the second image includes an expression name, a costume name, a scene name, and the like; and the recognition result of the first image may be a face recognition result, a human body recognition result, an environment recognition result, or the like.
  • For example, if the name of the second image is "eight-character mustache" and the recognition result of the first image includes a face recognition result, the position of the second image in the first image is determined to be directly above the upper lip of the face.
  • the manner of determining the position of the second image in the first image is not specifically limited in the embodiment of the present disclosure.
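
As one illustrative way of deriving such a positional relationship, the sketch below detects a face with an off-the-shelf OpenCV Haar cascade and maps the second image's name to an anchor point inside the face box. The rule table, the offset fractions, and the helper names are assumptions for illustration only; the disclosure does not prescribe a particular recognition technique.

```python
# Illustrative sketch: derive the overlay position from a face recognition result
# and the second image's name. Requires opencv-python; the rule table and offset
# fractions are made-up values, not part of the disclosed method.
import cv2

# Anchor expressed as fractions of the detected face bounding box (x, y).
# A mustache sits just above the upper lip; ears and hats sit above the head.
ANCHOR_RULES = {
    "eight_character_mustache": (0.5, 0.68),
    "rabbit_ears": (0.5, -0.15),
    "hat": (0.5, -0.25),
}

def detect_face(frame_bgr):
    """Return the first detected face as (x, y, w, h), or None if no face is found."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None

def overlay_anchor(second_image_name, face_box):
    """Map the second image's name plus the face box to pixel coordinates."""
    fx, fy = ANCHOR_RULES.get(second_image_name, (0.5, 0.0))
    x, y, w, h = face_box
    return int(x + fx * w), int(y + fy * h)
```
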
  • Step 202 Display a first image and a second image according to a positional relationship between the first image and the second image;
  • In the embodiments of the present disclosure, the first image and the second image may be displayed simultaneously in the first video communication window through image layering processing or image composition processing; image layering and image composition are techniques in the related art and are not repeated here.
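
A minimal sketch of the composition alternative in step 202, using Pillow alpha-aware pasting, could look as follows; the function name and the convention of centring the sticker on the anchor are assumptions.

```python
# Illustrative sketch of image composition for step 202, using Pillow.
# Assumes the second image is an RGBA sticker with a transparent background.
from PIL import Image

def compose_frames(first_image, second_image, anchor_xy):
    """Return a copy of first_image with second_image centred on anchor_xy."""
    base = first_image.convert("RGBA")
    sticker = second_image.convert("RGBA")
    x, y = anchor_xy
    top_left = (x - sticker.width // 2, y - sticker.height // 2)
    base.paste(sticker, top_left, mask=sticker)  # mask preserves transparency
    return base

# Hypothetical usage with an anchor computed as in the previous sketch:
# frame = Image.open("first_image.png")
# sticker = Image.open("stickers/eight_character_mustache.png")
# composed = compose_frames(frame, sticker, (320, 260))
# composed.show()  # or hand the composed frame to the video rendering pipeline
```
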
  • the first mobile terminal acquires and displays the first image and the second image associated with the first image, which can improve the display effect of the first image, increase the function of the video communication, and improve the user experience.
  • an embodiment of the present disclosure provides a method for video communication, which is applied to a first mobile terminal, and the specific steps are as follows:
  • Step 301 Acquire a first image.
  • the first image is an image captured by the first mobile terminal or the second mobile terminal during video communication
  • the second mobile terminal is a mobile terminal that establishes video communication with the first mobile terminal.
  • the video communication function may be provided by the mobile terminal itself, or may be provided by a third-party application (for example, WeChat or QQ, etc.), and the manner in which the video communication function is provided in the embodiment of the present disclosure is not specifically limited.
  • Step 302 Acquire a first voice instruction, and then perform step 303.
  • The first voice instruction is an instruction collected by the first mobile terminal or the second mobile terminal.
  • Step 303 Select the second image according to the first voice instruction, and then perform step 310.
  • For example, the first voice instruction is "add rabbit ears" or "change to a pig head"; the second image corresponding to the first voice instruction is determined through voice recognition, and the second image is selected from an image database or a network.
  • Recognition of the first voice instruction can be implemented by voice recognition technology in the related art, and the manner of recognizing the first voice instruction is not specifically limited in the embodiments of the present disclosure.
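
A minimal sketch of steps 302 and 303 follows. It assumes the audio has already been converted to text by whatever speech recognizer the terminal uses, and the keyword-to-asset table is a made-up example rather than anything prescribed by the disclosure.

```python
# Illustrative sketch of selecting the second image from a recognized voice command.
# The keyword table and asset paths are assumptions for illustration only.
VOICE_KEYWORD_TO_ASSET = {
    "rabbit ear": "stickers/rabbit_ears.png",
    "pig head": "stickers/pig_head.png",
    "mustache": "stickers/eight_character_mustache.png",
}

def select_second_image_by_voice(recognized_text):
    """Return the asset matching the first keyword found in the recognized command."""
    text = recognized_text.lower()
    for keyword, asset in VOICE_KEYWORD_TO_ASSET.items():
        if keyword in text:
            return asset
    return None  # no match; fall back to the other selection modes

# select_second_image_by_voice("please add a rabbit ear") -> "stickers/rabbit_ears.png"
```
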
  • Step 304 Identify a first scene in which the first target object in the first image is located, and then perform step 305.
  • Step 305 Select the second image according to the first scene, and then perform step 310.
  • The first target object is a person in the first image, and the first scene is used to indicate the natural environment in which the first target object is located.
  • For example, if the first scene is a snow scene, a second image related to the snow scene, for example an image of a scarf or an image of a cotton cap, is selected from an image database or a network.
  • Identification of the first scene may be implemented by image recognition technology in the related art, and the manner of identifying the first scene is not specifically limited in the embodiments of the present disclosure.
  • Step 306 Identify the facial expression of the first target object in the first image, and then perform step 307.
  • Step 307 Select the second image according to the facial expression of the first target object, and then perform step 310.
  • For example, if the facial expression of the first target object is crying, a second image related to crying, such as an image of a paper towel, is selected.
  • Recognition of the facial expression can be implemented by face recognition technology in the related art, and the manner of recognizing the facial expression is not specifically limited in the embodiments of the present disclosure.
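
Steps 304 to 307 can be read as two further rule-driven selectors built on whatever scene and expression recognizers are available. The sketch below shows one hypothetical rule table covering both signals; the labels and asset paths are assumptions.

```python
# Illustrative sketch of steps 304-307: pick a second image from the recognized
# scene or facial expression. Labels and asset paths are assumptions only.
SCENE_TO_ASSETS = {
    "snow": ["stickers/scarf.png", "stickers/cotton_cap.png"],
    "beach": ["stickers/sunglasses.png"],
}
EXPRESSION_TO_ASSETS = {
    "crying": ["stickers/paper_towel.png"],
    "smiling": ["stickers/sparkles.png"],
}

def select_second_image(scene_label=None, expression_label=None):
    """Prefer an expression-driven sticker, then fall back to a scene-driven one."""
    if expression_label in EXPRESSION_TO_ASSETS:
        return EXPRESSION_TO_ASSETS[expression_label][0]
    if scene_label in SCENE_TO_ASSETS:
        return SCENE_TO_ASSETS[scene_label][0]
    return None
```
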
  • Step 308 Receive first identification information from the second mobile terminal.
  • the first identification information may include a name of the second image or a storage location of the second image, and the first identification information may further include a display position of the second image in the first image.
  • the content of the first identification information is not specifically limited in the embodiment of the present disclosure.
  • Step 309 Select the second image corresponding to the first identification information according to the first identification information, and then perform step 310.
  • For example, if the first identification information includes the name of the second image, "eight-character mustache", the second image named "eight-character mustache" is selected from the image database or the network.
  • The case in which the first identification information includes the storage location of the second image is similar and is not described again.
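
The disclosure leaves the wire format of the first identification information open. The sketch below encodes the fields it mentions (a name or a storage location, plus an optional display position) as JSON, purely as an assumed example of how the two terminals might exchange it.

```python
# Illustrative sketch of the identification information exchanged between terminals.
# JSON is an assumed encoding; the disclosure does not specify a wire format.
import json
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class IdentificationInfo:
    image_name: Optional[str] = None              # e.g. "eight_character_mustache"
    storage_location: Optional[str] = None        # e.g. a URL or database key
    display_position: Optional[Tuple[int, int]] = None  # pixel anchor, if known

def encode(info: IdentificationInfo) -> str:
    return json.dumps(asdict(info))

def decode(payload: str) -> IdentificationInfo:
    data = json.loads(payload)
    position = data.get("display_position")
    return IdentificationInfo(
        image_name=data.get("image_name"),
        storage_location=data.get("storage_location"),
        display_position=tuple(position) if position else None,
    )
```
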
  • Step 310 Obtain a positional relationship between the first image and the second image.
  • The positional relationship may be determined according to the type or name of the second image together with the recognition result of the first image, where the type of the second image includes a character expression, a character costume, a scene decoration, and the like; the name of the second image includes an expression name, a costume name, a scene name, and the like; and the recognition result of the first image may be a face recognition result, a human body recognition result, an environment recognition result, or the like.
  • For example, if the name of the second image is "eight-character mustache" and the recognition result of the first image includes a face recognition result, the position of the second image in the first image is determined to be directly above the upper lip of the face.
  • the manner of determining the position of the second image in the first image is not specifically limited in the embodiment of the present disclosure.
  • Step 311 Combine the first image and the second image into a third image according to a positional relationship between the first image and the second image;
  • The first image and the second image are combined into a third image through image synthesis processing, in combination with the positional relationship between the first image and the second image; image synthesis is a technique in the related art and is not repeated here.
  • Step 312 Display the third image.
  • Step 313 Send the third image to the second mobile terminal, where the third image is displayed by the second mobile terminal.
  • the first mobile terminal sends the synthesized third image to the second mobile terminal, and displays the third image on the second mobile terminal, so that the user of the second mobile terminal can see the improved first image.
  • In this way, during video communication, the first mobile terminal acquires the first image and the second image associated with the first image, and synthesizes the first image and the second image into a third image; while displaying the third image, the first mobile terminal sends the third image to the second mobile terminal, where it is also displayed, so that the users of both the first mobile terminal and the second mobile terminal can see the improved first image, which adds functionality to the video communication and improves the user experience.
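
Under the assumptions of the earlier sketches, the FIG. 3 path (steps 311 to 313) amounts to compositing locally and shipping the finished frame. In the sketch below, compose could be the composition function sketched earlier and send_frame stands in for whatever transport the video communication connection provides; both are hypothetical hooks.

```python
# Illustrative sketch of the FIG. 3 path (steps 311-313): synthesize the third
# image and send it to the peer, which simply displays what it receives.
# `compose` and `send_frame` are hypothetical hooks supplied by the caller.
import io

def run_composite_path(first_image, second_image, anchor_xy, compose, send_frame):
    third_image = compose(first_image, second_image, anchor_xy)  # step 311
    buffer = io.BytesIO()
    third_image.save(buffer, format="PNG")   # assumes a Pillow image object
    send_frame(buffer.getvalue())            # step 313: deliver to the second terminal
    return third_image                       # step 312: also displayed locally
```
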
  • an embodiment of the present disclosure provides a method for video communication, which is applied to a first mobile terminal, and the specific steps are as follows:
  • Step 401 Acquire a first image.
  • the first image is an image captured by the first mobile terminal or the second mobile terminal during video communication
  • the second mobile terminal is a mobile terminal that establishes video communication with the first mobile terminal.
  • the video communication function may be provided by the mobile terminal itself, or may be provided by a third-party application (for example, WeChat or QQ, etc.), and the manner in which the video communication function is provided in the embodiment of the present disclosure is not specifically limited.
  • Step 402 Acquire a first voice instruction, and then perform step 403.
  • The first voice instruction is an instruction collected by the first mobile terminal or the second mobile terminal.
  • Step 403 Select the second image according to the first voice instruction, and then perform step 410.
  • For example, the first voice instruction is "add rabbit ears" or "change to a pig head"; the second image corresponding to the first voice instruction is determined through voice recognition, and the second image is selected from an image database or a network.
  • the identification of the first voice instruction can be implemented by the voice recognition technology in the related art, and the manner of identifying the first voice instruction is not specifically limited in the embodiment of the present disclosure.
  • Step 404 Identify a first scene in which the first target object in the first image is located, and then perform step 405.
  • Step 405 Select the second image according to the first scene, and then perform step 410.
  • The first target object is a person in the first image, and the first scene is used to indicate the natural environment in which the first target object is located.
  • For example, if the first scene is a snow scene, a second image related to the snow scene, for example an image of a scarf or an image of a cotton cap, is selected from an image database or a network.
  • the identification of the first scene may be implemented by the image recognition technology in the related art, and the manner of identifying the first scene is not specifically limited in the embodiment of the present disclosure.
  • Step 406 Identify the facial expression of the first target object in the first image, and then perform step 407.
  • Step 407 Select the second image according to the facial expression of the first target object, and then perform step 410.
  • For example, if the facial expression of the first target object is crying, a second image related to crying, such as an image of a paper towel, is selected.
  • the recognition of the facial expression can be implemented by the face recognition technology in the related art, and the manner of recognizing the facial expression is not specifically limited in the embodiment of the present disclosure.
  • Step 408 Receive first identification information from the second mobile terminal.
  • the first identification information may include a name of the second image or a storage location of the second image, and the first identification information may further include a display position of the second image in the first image.
  • the content of the first identification information is not specifically limited in the embodiment of the present disclosure.
  • Step 409 Select the second image corresponding to the first identification information according to the first identification information, and then perform step 410.
  • For example, if the first identification information includes the name of the second image, "eight-character mustache", the second image named "eight-character mustache" is selected from the image database or the network.
  • The case in which the first identification information includes the storage location of the second image is similar and is not described again.
  • Step 410 Identify location information of the first target object in the first image.
  • the face contour of the first target object in the first image is identified by image recognition, and the location information of the first target object is obtained.
  • Step 411 Determine a positional relationship between the first image and the second image according to the location information.
  • For example, the second image is the "eight-character mustache"; according to the position information of the first target object, the position of the second image in the first image is determined to be directly above the upper lip of the face.
  • Step 412 Display a first image and a second image according to a positional relationship between the first image and the second image;
  • In the embodiments of the present disclosure, the first image and the second image may be displayed simultaneously in the first video communication window through image layering processing or image composition processing; image layering and image composition are techniques in the related art and are not repeated here.
  • Step 413 Send second identification information of the second image to the second mobile terminal, where the second mobile terminal acquires, according to the second identification information, the second image and the positional relationship between the first image and the second image, and displays the first image and the second image.
  • While displaying the first image and the second image, the first mobile terminal sends the identification information of the second image to the second mobile terminal; the identification information instructs the second mobile terminal to select the second image from an image database or a network and to acquire the positional relationship between the first image and the second image, so that, combining the positional relationship, the second mobile terminal can display the first image and the second image simultaneously.
  • the identification information of the second image may include a name of the second image or a storage location of the second image, and the identification information of the second image may further include a display position of the second image in the first image.
  • the identification information of the second image is not specifically limited in the embodiment of the present disclosure.
  • In this way, during video communication, the first mobile terminal acquires the first image and the second image associated with the first image; while displaying the first image and the second image, the first mobile terminal sends the identification information of the second image to the second mobile terminal, and the second mobile terminal can display the first image and the second image simultaneously according to the identification information, so that the users of both the first mobile terminal and the second mobile terminal can see the improved first image.
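
The FIG. 4 path trades bandwidth for local work: only the identification information travels, and the receiving terminal rebuilds the overlay itself. A rough sketch of the receiving side, under the same illustrative assumptions as above, might look like this; asset_lookup and compose are hypothetical hooks.

```python
# Illustrative sketch of the receiving side of the FIG. 4 path: the terminal gets
# identification information, looks the second image up locally (or fetches it from
# the network), and composes it over its own copy of the first image.
# `asset_lookup` and `compose` are hypothetical hooks, not part of the disclosure.

def handle_identification_info(info, first_image, asset_lookup, compose):
    """info is a decoded identification-information dict carrying an image name or
    storage location, plus an optional display position."""
    second_image = asset_lookup(info.get("image_name") or info.get("storage_location"))
    position = info.get("display_position") or (0, 0)  # fall back if unspecified
    return compose(first_image, second_image, tuple(position))
```
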
  • The display area of the first mobile terminal 501 may include a first video communication window 502 and a second video communication window 503, where a fourth image 504 and a fifth image 505 are displayed in the first video communication window 502, and a sixth image 506 and a seventh image 507 are displayed in the second video communication window 503.
  • For ease of description, the images corresponding to the first mobile terminal are referred to as the fourth image and the fifth image (the fourth image is the image captured by the first mobile terminal during video communication, and the fifth image is the image associated with the fourth image), and the images corresponding to the second mobile terminal are referred to as the sixth image and the seventh image (the sixth image is the image captured by the second mobile terminal during video communication, and the seventh image is the image associated with the sixth image).
  • In the display area of the first mobile terminal, only the fourth image and the fifth image corresponding to the user of the first mobile terminal may be displayed; or only the sixth image and the seventh image corresponding to the user of the second mobile terminal may be displayed; or the fourth image, the fifth image, the sixth image, and the seventh image may be displayed simultaneously.
  • the display manner in the display area of the first mobile terminal is not specifically limited in the embodiment of the present disclosure.
  • The first mobile terminal 501 obtains the fourth image 504 by capturing, and receives the sixth image 506 captured by the second mobile terminal.
  • The fourth image 504 is displayed in the first video communication window 502, and the sixth image 506 is displayed in the second video communication window 503.
  • the seventh image 507 of "hat” is selected from the image database or network of the first mobile terminal 501.
  • The positional relationship between the seventh image 507 and the sixth image 506 is determined according to the name of the seventh image 507: the seventh image 507 is located above the head of the person in the sixth image 506, and the seventh image 507 is displayed at the corresponding position of the sixth image 506.
  • The above is the process by which the user of the first mobile terminal adds an animated image for the user of the second mobile terminal. It can be understood that an animated image can be added for the user of the first mobile terminal in the same manner, and details are not described here again.
  • The first mobile terminal sends the identification information of the seventh image 507 to the second mobile terminal, and the second mobile terminal selects the "hat" image from the image database or the network according to the identification information and displays the seventh image 507 at the corresponding position of the sixth image.
  • The first mobile terminal receives the identification information of the fifth image 505 sent by the second mobile terminal; according to the identification information, the first mobile terminal selects the "horn" fifth image 505 from the image database or the network and acquires the positional relationship between the fourth image 504 and the fifth image 505: the fifth image 505 is located above the head of the person in the fourth image 504, and the fifth image 505 is displayed at the corresponding position of the fourth image 504.
  • an embodiment of the present disclosure provides a first mobile terminal 600, including:
  • a first acquiring module 601, configured to acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, where the first image is an image captured by the first mobile terminal or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal;
  • the display module 602 is configured to display the first image and the second image according to a positional relationship between the first image and the second image.
  • the first obtaining module 601 includes:
  • the first obtaining unit 6011 is configured to acquire a first voice instruction, where the first voice instruction is an instruction collected by the first mobile terminal, or an instruction collected by the second mobile terminal;
  • a first selecting unit 6012 configured to select a second image from an image database of the first mobile terminal according to the first voice instruction
  • the first identifying unit 6013 is configured to identify a first scene where the first target object is located in the first image
  • a second selecting unit 6014 configured to select a second image from an image database of the first mobile terminal according to the first scenario
  • a second identifying unit 6015 configured to identify a facial expression of the first target object in the first image
  • a third selecting unit 6016 configured to select a second image from an image database of the first mobile terminal according to the facial expression of the first target object
  • the first receiving unit 6017 is configured to receive first identification information from the second mobile terminal
  • the fourth selecting unit 6018 is configured to select the second image corresponding to the first identifier information according to the first identifier information.
  • the display module 602 includes:
  • a synthesizing unit 6021 configured to synthesize the first image and the second image into a third image according to a positional relationship between the first image and the second image;
  • the display unit 6022 is configured to display the third image.
  • the first mobile terminal 600 further includes:
  • the first sending module 603 is configured to send the third image to the second mobile terminal, where the third image is displayed by the second mobile terminal.
  • the first mobile terminal 600 further includes:
  • a second sending module 604 configured to send second identifier information of the second image to the second mobile terminal, where the second mobile terminal acquires the second image according to the second identifier information, and a positional relationship between the first image and the second image, and displaying the first image and the second image.
  • the first obtaining module 601 further includes:
  • a third identifying unit 6019 configured to identify location information of the first target object in the first image
  • the determining unit 6020 is configured to determine a positional relationship between the first image and the second image according to the location information.
  • the third identifying unit 6019 includes:
  • the identification sub-unit 60191 is configured to identify a face contour in the first image.
  • the first mobile terminal acquires and displays the first image and the second image associated with the first image, which can improve the display effect of the first image, increase the function of the video communication, and improve the user experience.
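
Read as software, the FIG. 6 module breakdown maps naturally onto a class whose methods mirror the units of the first acquiring module and the display module. The skeleton below is only a structural sketch; the method names are paraphrases of the unit descriptions and the bodies are placeholders, not the disclosed implementation.

```python
# Structural sketch of the FIG. 6 modules as a single class. Bodies are placeholders;
# only the module/unit correspondence is taken from the text above.
class FirstMobileTerminal:
    # --- first acquiring module 601 ---
    def acquire_first_voice_instruction(self):            # first obtaining unit 6011
        raise NotImplementedError

    def select_by_voice(self, instruction):               # first selecting unit 6012
        raise NotImplementedError

    def identify_scene(self, first_image):                # first identifying unit 6013
        raise NotImplementedError

    def select_by_scene(self, scene):                     # second selecting unit 6014
        raise NotImplementedError

    def identify_expression(self, first_image):           # second identifying unit 6015
        raise NotImplementedError

    def select_by_expression(self, expression):           # third selecting unit 6016
        raise NotImplementedError

    def receive_identification_info(self):                # first receiving unit 6017
        raise NotImplementedError

    def select_by_identification(self, info):             # fourth selecting unit 6018
        raise NotImplementedError

    def identify_target_location(self, first_image):      # third identifying unit 6019
        raise NotImplementedError

    def determine_positional_relationship(self, location):  # determining unit 6020
        raise NotImplementedError

    # --- display module 602 ---
    def synthesize_third_image(self, first_image, second_image, relation):  # unit 6021
        raise NotImplementedError

    def display(self, image):                              # display unit 6022
        raise NotImplementedError

    # --- sending modules 603 / 604 ---
    def send_third_image(self, third_image):
        raise NotImplementedError

    def send_second_identification_info(self, info):
        raise NotImplementedError
```
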
  • FIG. 7 is a schematic diagram of a hardware structure of a mobile terminal that implements various embodiments of the present disclosure.
  • the mobile terminal 700 includes, but is not limited to, a radio frequency unit 701, a network module 702, an audio output unit 703, and an input unit 704.
  • The mobile terminal structure shown in FIG. 7 does not constitute a limitation on the mobile terminal, and the mobile terminal may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components.
  • the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer, and the like.
  • the processor 710 is configured to acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image;
  • the first image is an image captured by the first mobile terminal, or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal.
  • the processor 710 is further configured to display the first image and the second image according to a positional relationship between the first image and the second image.
  • the first mobile terminal acquires and displays the first image and the second image associated with the first image, which can improve the display effect of the first image, increase the function of the video communication, and improve the user experience.
  • The radio frequency unit 701 can be used to receive and send signals during the process of sending and receiving information or during a call. Specifically, downlink data received from a base station is delivered to the processor 710 for processing, and uplink data is sent to the base station.
  • radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio unit 701 can also communicate with the network and other devices through a wireless communication system.
  • the mobile terminal provides the user with wireless broadband Internet access through the network module 702, such as helping the user to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 703 can convert the audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as a sound. Moreover, the audio output unit 703 can also provide audio output (eg, call signal reception sound, message reception sound, etc.) related to a particular function performed by the mobile terminal 700.
  • the audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 704 is for receiving an audio or video signal.
  • The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042. The graphics processing unit 7041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 706.
  • the image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio unit 701 or the network module 702.
  • the microphone 7042 can receive sound and can process such sound as audio data.
  • In the case of a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 701 for output.
  • the mobile terminal 700 also includes at least one type of sensor 705, such as a light sensor, motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 7061 according to the brightness of the ambient light, and the proximity sensor can close the display panel 7061 when the mobile terminal 700 moves to the ear. / or backlight.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity. It can be used to identify the attitude of the mobile terminal (such as horizontal and vertical screen switching, related games).
  • sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, Infrared sensors and the like are not described here.
  • the display unit 706 is for displaying information input by the user or information provided to the user.
  • the display unit 706 can include a display panel 7061.
  • the display panel 7061 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the user input unit 707 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal.
  • the user input unit 707 includes a touch panel 7071 and other input devices 7072.
  • The touch panel 7071, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations performed by the user on or near the touch panel 7071 using a finger or a stylus).
  • The touch panel 7071 can include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the touch orientation of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 710.
  • the touch panel 7071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the user input unit 707 may also include other input devices 7072.
  • the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control button, a switch button, etc.), a trackball, a mouse, and a joystick, which are not described herein.
  • The touch panel 7071 can be overlaid on the display panel 7061. After the touch panel 7071 detects a touch operation on or near it, the operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event.
  • Although the touch panel 7071 and the display panel 7061 are shown in FIG. 7 as two independent components to implement the input and output functions of the mobile terminal, in some embodiments the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited here.
  • the interface unit 708 is an interface in which an external device is connected to the mobile terminal 700.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
  • The interface unit 708 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 700, or can be used to transfer data between the mobile terminal 700 and an external device.
  • Memory 709 can be used to store software programs as well as various data.
  • the memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.).
  • memory 709 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • The processor 710 is the control center of the mobile terminal; it connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 709 and invoking data stored in the memory 709, thereby performing overall monitoring of the mobile terminal.
  • The processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 710.
  • The mobile terminal 700 may further include a power source 711 (such as a battery) for supplying power to various components. The power source 711 may be logically connected to the processor 710 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the mobile terminal 700 includes some functional modules not shown, and details are not described herein again.
  • An embodiment of the present disclosure further provides a mobile terminal, including a processor 710, a memory 709, and a computer program stored on the memory 709 and executable on the processor 710, where the computer program, when executed by the processor 710, implements the processes of the foregoing video communication method embodiments and can achieve the same technical effects; to avoid repetition, details are not described here again.
  • An embodiment of the present disclosure further provides a computer readable storage medium, where the computer readable storage medium stores a computer program, and when the computer program is executed by a processor, the processes of the foregoing video communication method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not described here again.
  • The computer readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Telephone Function (AREA)

Abstract

The embodiments of the present disclosure provide a video communication method and a mobile terminal, being applied to a first mobile terminal. Said method comprises: acquiring a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, the first image being an image photographed by a first mobile terminal, or an image photographed by a second mobile terminal, the second mobile terminal being a mobile terminal establishing a video communication connection with the first mobile terminal; and displaying the first image and the second image according to the positional relationship between the first image and the second image.

Description

Video communication method and mobile terminal

Cross-reference to related applications

This application claims priority to Chinese Patent Application No. 201810337324.2, filed in China on April 16, 2018, the entire contents of which are incorporated herein by reference.

Technical field

The embodiments of the present disclosure relate to the field of communications, and in particular, to a method and a mobile terminal for video communication.

Background

With the popularity of smart mobile terminals, the ways in which people communicate have gradually increased, and more and more people are beginning to use video communication to talk to others.

However, the video communication application in the related art is functionally limited: the display screen of the mobile terminal shows only the real-time pictures of the two parties to the chat, so the video communication function is monotonous and the user experience is poor.

Summary of the invention

The embodiments of the present disclosure provide a video communication method and a mobile terminal, to address the problem that the video communication function in the related art is limited and the user experience is poor.

According to a first aspect of the embodiments of the present disclosure, a video communication method is provided, applied to a first mobile terminal, the method including: acquiring a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, where the first image is an image captured by the first mobile terminal or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal; and displaying the first image and the second image according to the positional relationship between the first image and the second image.

According to a second aspect of the embodiments of the present disclosure, a first mobile terminal is provided, including: a first acquiring module, configured to acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, where the first image is an image captured by the first mobile terminal or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal; and a display module, configured to display the first image and the second image according to the positional relationship between the first image and the second image.

According to a third aspect of the embodiments of the present disclosure, a mobile terminal is provided, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video communication method described in the first aspect.

In this way, in the process of video communication, the first mobile terminal acquires and displays the first image and the second image associated with the first image, which can improve the display effect of the first image, add functionality to the video communication, and improve the user experience.

Brief description of the drawings

In order to describe the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings used in the description of the embodiments of the present disclosure are briefly introduced below. Obviously, the accompanying drawings in the following description are only some embodiments of the present disclosure, and a person of ordinary skill in the art may obtain other drawings based on these drawings without creative effort.

FIG. 1 is a schematic structural diagram of a system for video communication according to an embodiment of the present disclosure;

FIG. 2 is a first schematic flowchart of a video communication method according to an embodiment of the present disclosure;

FIG. 3 is a second schematic flowchart of a video communication method according to an embodiment of the present disclosure;

FIG. 4 is a third schematic flowchart of a video communication method according to an embodiment of the present disclosure;

FIG. 5 is an application scenario of a video communication method according to an embodiment of the present disclosure;

FIG. 6 is a schematic structural diagram of a first mobile terminal according to an embodiment of the present disclosure;

FIG. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present disclosure.

Detailed description

The technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are a part, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.

Referring to FIG. 1, an embodiment of the present disclosure provides a system architecture for video communication, which may include a server 11 (e.g., a video communication server), a first mobile terminal 12, and a second mobile terminal 13.

In the process of video communication, optionally, the display area of the first mobile terminal 12 includes a first video communication window 121, used to display an image captured by the second mobile terminal 13 during video communication; and the display area of the second mobile terminal 13 includes a third video communication window 131, used to display an image captured by the first mobile terminal 12 during video communication.

Optionally, the display area of the first mobile terminal 12 further includes a second video communication window 122, used to display an image captured by the first mobile terminal 12; and the display area of the second mobile terminal 13 further includes a fourth video communication window 132, used to display an image captured by the second mobile terminal 13 during video communication.

In the embodiments of the present disclosure, the first mobile terminal and the second mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, or the like.

It should be noted that the video communication in the embodiments of the present disclosure may be provided by a third-party instant messaging application, such as WeChat or QQ, or by an application built into the mobile terminal, for example, FaceTime (a video calling application built into iOS and Mac OS X).

参见图2,本公开实施例提供了一种视频通信的方法,应用于第一移动终端,具体步骤如下:Referring to FIG. 2, an embodiment of the present disclosure provides a method for video communication, which is applied to a first mobile terminal, and the specific steps are as follows:

步骤201、获取第一图像、与第一图像关联的第二图像,以及第一图像与第二图像之间的位置关系;Step 201: Acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image;

在本公开实施例中,第一图像是第一移动终端或者第二移动终端视频通信时拍摄的图像,该第二移动终端是与第一移动终端建立视频通信的移动终端。视频通信可以是第一移动终端自身提供的,也可以是第三方应用程序提供的(例如,微信或者QQ等),本公开实施例对视频通信的提供方式不做具体限定。In an embodiment of the present disclosure, the first image is an image captured by the first mobile terminal or the second mobile terminal during video communication, and the second mobile terminal is a mobile terminal that establishes video communication with the first mobile terminal. The video communication may be provided by the first mobile terminal itself, or may be provided by a third-party application (for example, WeChat or QQ, etc.), and the manner in which the video communication is provided in the embodiment of the present disclosure is not specifically limited.

其中,第二图像可以是人物表情、人物服饰、场景装饰等图像,这些图像可以是静态的,也可以是动态的。第一移动终端可以根据用户的指令或者根据图像识别结果从第一移动终端的图像数据库或者网络中选择第二图像。该第二图像相当于改善第一图像显示效果的趣味动画图像,当然可以理解的是,在本公开实施例中对于第二图像的选择方式并不做具体限定。例如:根 据用户指令在从第一移动终端的图像数据库中选择名称为“八字胡”的第二图像。The second image may be an image of a character, a character, a scene decoration, etc., and the images may be static or dynamic. The first mobile terminal may select the second image from the image database or network of the first mobile terminal according to an instruction of the user or according to the image recognition result. The second image is equivalent to a fun animated image that improves the first image display effect. It is to be understood that the manner of selecting the second image in the embodiment of the present disclosure is not specifically limited. For example, a second image named "eight-character" is selected from the image database of the first mobile terminal according to a user instruction.

其中,第一图像与第二图像之间的位置关系可以根据第二图像的类型和第一图像的识别结果确定,或者,第二图像的名称和第一图像的识别结果确定,其中该第二图像的类型包括:人物表情、人物服饰、场景装饰等,该第二图像的名称包括:表情名称、服饰名称、场景名称等,该第一图像的识别结果可以是人脸识别结果、人体识别结果或者环境识别结果等。The positional relationship between the first image and the second image may be determined according to the type of the second image and the recognition result of the first image, or the name of the second image and the recognition result of the first image are determined, wherein the second The image type includes: a character expression, a character costume, a scene decoration, and the like, and the name of the second image includes: an expression name, a costume name, a scene name, and the like, and the recognition result of the first image may be a face recognition result and a human body recognition result. Or environmental recognition results, etc.

例如，第二图像的名称为“八字胡”，第一图像的识别结果中包括第一图像中人脸识别结果，确定第二图像在第一图像中的位置为人脸的上嘴唇正上方。需要说明的是，本公开实施例对第二图像在第一图像中的位置的确定方式不做具体限定。For example, if the name of the second image is "mustache" and the recognition result of the first image includes a face recognition result for the first image, the position of the second image in the first image is determined to be directly above the upper lip of the face. It should be noted that the manner of determining the position of the second image in the first image is not specifically limited in the embodiments of the present disclosure.
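
By way of illustration only, and not as part of the disclosed embodiments, the following Python sketch shows one way such a positional relationship could be derived from a face-recognition result: a named overlay is anchored to a detected face bounding box. The anchor table, field layout, and function names are assumptions introduced here for clarity.

```python
# Illustrative only; not part of the disclosed embodiments. The anchor rules and
# names below are assumptions that show how a named overlay ("mustache", "hat", ...)
# might be placed relative to a detected face bounding box.

def overlay_position(face_box, overlay_name, overlay_size):
    """face_box: (x, y, w, h) of the detected face; overlay_size: (w, h) of the overlay.
    Returns the top-left corner at which to draw the overlay."""
    fx, fy, fw, fh = face_box
    ow, oh = overlay_size
    # Hypothetical anchor points expressed as fractions of the face box.
    anchors = {
        "mustache": (0.5, 0.72),      # centred horizontally, just above the upper lip
        "hat": (0.5, -0.15),          # centred above the head
        "rabbit_ears": (0.5, -0.25),
    }
    ax, ay = anchors.get(overlay_name, (0.5, 0.5))
    cx = fx + ax * fw
    cy = fy + ay * fh
    return int(cx - ow / 2), int(cy - oh / 2)

# Example: a 120x40 mustache for a face detected at (200, 150, 160, 160)
print(overlay_position((200, 150, 160, 160), "mustache", (120, 40)))
```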

步骤202、根据第一图像与第二图像之间的位置关系,显示第一图像和第二图像;Step 202: Display a first image and a second image according to a positional relationship between the first image and the second image;

在本公开实施例中，可以通过图像分层处理或者图像合成处理，在第一视频通信窗口中同时显示第一图像和第二图像，其中图像分层或者图像合成均为相关技术，在此不再赘述。In the embodiments of the present disclosure, the first image and the second image may be displayed simultaneously in the first video communication window through image layering processing or image composition processing. Image layering and image composition are both known in the related art and are not described in detail here.

这样,在视频通信的过程中,第一移动终端获取并显示第一图像以及与第一图像关联的第二图像,能够改善第一图像的显示效果,增加视频通信的功能,改善用户体验。In this way, in the process of video communication, the first mobile terminal acquires and displays the first image and the second image associated with the first image, which can improve the display effect of the first image, increase the function of the video communication, and improve the user experience.

参见图3,本公开实施例提供一种视频通信的方法,应用于第一移动终端,具体步骤如下:Referring to FIG. 3, an embodiment of the present disclosure provides a method for video communication, which is applied to a first mobile terminal, and the specific steps are as follows:

步骤301、获取第一图像;Step 301: Acquire a first image.

在本公开实施例中,第一图像是第一移动终端或者第二移动终端视频通信时拍摄的图像,该第二移动终端是与第一移动终端建立视频通信的移动终端。视频通信功能可以是移动终端自身提供的,也可以是第三方应用程序提供的(例如,微信或者QQ等),本公开实施例对视频通信功能的提供方式不做具体限定。In an embodiment of the present disclosure, the first image is an image captured by the first mobile terminal or the second mobile terminal during video communication, and the second mobile terminal is a mobile terminal that establishes video communication with the first mobile terminal. The video communication function may be provided by the mobile terminal itself, or may be provided by a third-party application (for example, WeChat or QQ, etc.), and the manner in which the video communication function is provided in the embodiment of the present disclosure is not specifically limited.

步骤302、获取第一语音指令,然后执行步骤303;Step 302, obtaining a first voice instruction, and then performing step 303;

在本公开实施例中,第一语音指令为第一移动终端或者第二移动终端采 集的指令。In an embodiment of the present disclosure, the first voice instruction is an instruction collected by the first mobile terminal or the second mobile terminal.

步骤303、根据第一语音指令,选择第二图像,然后执行步骤310;Step 303, according to the first voice instruction, select the second image, and then perform step 310;

例如：接收第一移动终端的用户的第一语音指令，该第一语音指令为“加兔耳朵”、“变猪头”等，通过语音识别功能，确定该第一语音指令对应的第二图像，从图像数据库或者网络中选择出该第二图像。需要说明的是，识别第一语音指令可以通过相关技术中的语音识别技术实现，本公开实施例对第一语音指令的识别方式不做具体限定。For example, a first voice instruction such as "add rabbit ears" or "turn into a pig's head" is received from the user of the first mobile terminal; the second image corresponding to the first voice instruction is determined through a speech recognition function, and the second image is selected from an image database or from the network. It should be noted that recognizing the first voice instruction can be implemented by speech recognition techniques in the related art, and the manner of recognizing the first voice instruction is not specifically limited in the embodiments of the present disclosure.
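
As a rough, non-limiting illustration of this step, the sketch below assumes the voice instruction has already been transcribed to text by whatever speech recognition service the terminal uses, and simply maps recognised keywords to image files; the keyword table and file names are hypothetical.

```python
# Sketch under the assumption that speech recognition has already produced a text
# transcript; the keyword table and file names are hypothetical.

KEYWORD_TO_IMAGE = {
    "兔耳朵": "rabbit_ears.png",   # "rabbit ears"
    "猪头": "pig_head.png",        # "pig's head"
    "八字胡": "mustache.png",      # "mustache"
}

def select_second_image(transcript):
    """Return the file name of the first matching overlay, or None.
    The caller would then load it from the local image database or the network."""
    for keyword, filename in KEYWORD_TO_IMAGE.items():
        if keyword in transcript:
            return filename
    return None

# select_second_image("给我加个兔耳朵") -> "rabbit_ears.png"
```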

步骤304、识别第一图像中第一目标对象所在的第一场景,然后执行步骤305;Step 304, identifying a first scene in the first image where the first target object is located, and then performing step 305;

步骤305、根据第一场景,选择第二图像,然后执行步骤310;Step 305, according to the first scene, select the second image, and then perform step 310;

例如：第一目标对象为第一图像中的人，第一场景用于表示第一目标对象所处的自然环境，例如：第一场景为雪景，从图像数据库或者网络中选择出与雪景相关的第二图像，例如：围巾的图像、棉帽的图像等。需要说明的是，识别第一场景可以通过相关技术中的图像识别技术实现，本公开实施例对第一场景的识别方式不做具体限定。For example, the first target object is a person in the first image, and the first scene represents the natural environment in which the first target object is located. If the first scene is a snow scene, a second image related to snow, such as an image of a scarf or an image of a cotton cap, is selected from an image database or from the network. It should be noted that recognizing the first scene can be implemented by image recognition techniques in the related art, and the manner of recognizing the first scene is not specifically limited in the embodiments of the present disclosure.

步骤306、识别第一图像中第一目标对象的面部表情,然后执行步骤307;Step 306, identifying the facial expression of the first target object in the first image, and then performing step 307;

步骤307、根据第一目标对象的面部表情,选择第二图像,然后执行步骤310;Step 307, according to the facial expression of the first target object, select the second image, and then perform step 310;

例如:识别出第一目标对象的面部表情为哭泣,从图像数据库或者网络中选择出与哭泣相关的第二图像,例如纸巾的图像等。需要说明的是,识别面部表情可以通过相关技术中的人脸识别技术实现,本公开实施例对面部表情的识别方式不做具体限定。For example, it is recognized that the facial expression of the first target object is crying, and a second image related to crying, such as an image of a paper towel, is selected from an image database or a network. It should be noted that the recognition of the facial expression can be implemented by the face recognition technology in the related art, and the manner of recognizing the facial expression is not specifically limited in the embodiment of the present disclosure.

步骤308、接收来自第二移动终端的第一标识信息;Step 308: Receive first identification information from the second mobile terminal.

在本公开实施例中,该第一标识信息中可以包括第二图像的名称或第二图像的存储位置,该第一标识信息中还可以包括第二图像在第一图像中的显示位置。本公开实施例对第一标识信息的内容不做具体限定。In the embodiment of the present disclosure, the first identification information may include a name of the second image or a storage location of the second image, and the first identification information may further include a display position of the second image in the first image. The content of the first identification information is not specifically limited in the embodiment of the present disclosure.
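
The embodiments leave the exact format of the first identification information open. A minimal sketch of what such a message might carry, assuming a JSON encoding and hypothetical field names, is:

```python
import json

# Hypothetical wire format; the embodiment only says the first identification
# information may carry a name or a storage location and, optionally, a display position.
first_identification_info = {
    "image_name": "mustache",                      # or "image_url": "https://example.com/mustache.png"
    "display_position": {"x": 0.42, "y": 0.61},    # optional, normalised to the frame size
}
payload = json.dumps(first_identification_info).encode("utf-8")
```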

步骤309、根据第一标识信息,选择与第一标识信息对应的第二图像,然后执行步骤310;Step 309, according to the first identification information, select a second image corresponding to the first identification information, and then perform step 310;

例如：第一标识信息中包含第二图像的名称为“八字胡”，从图像数据库或者网络中选择“八字胡”的第二图像。对于第一标识信息中包含第二图像的存储位置的情况与此类似，不再赘述。For example, if the first identification information contains the name "mustache" for the second image, the second image "mustache" is selected from the image database or from the network. The case in which the first identification information contains the storage location of the second image is similar and is not described again.

步骤310、获取第一图像与第二图像之间的位置关系;Step 310: Obtain a positional relationship between the first image and the second image.

在本公开实施例中，根据第二图像的类型和第一图像的识别结果，或者，第二图像的名称和第一图像的识别结果，确定第一图像与第二图像之间的位置关系，其中该第二图像的类型包括：人物表情、人物服饰、场景装饰等，该第二图像的名称包括：表情名称、服饰名称、场景名称等，该第一图像的识别结果可以是人脸识别结果、人体识别结果或者环境识别结果等。In the embodiments of the present disclosure, the positional relationship between the first image and the second image is determined according to the type of the second image and the recognition result of the first image, or according to the name of the second image and the recognition result of the first image. The type of the second image includes a facial expression, clothing, a scene decoration, and the like; the name of the second image includes an expression name, a clothing name, a scene name, and the like; and the recognition result of the first image may be a face recognition result, a human body recognition result, an environment recognition result, or the like.

例如，第二图像的名称为“八字胡”，第一图像的识别结果中包括第一图像中人脸识别结果，确定第二图像在第一图像中的位置为人脸的上嘴唇正上方。需要说明的是，本公开实施例对第二图像在第一图像中的位置的确定方式不做具体限定。For example, if the name of the second image is "mustache" and the recognition result of the first image includes a face recognition result for the first image, the position of the second image in the first image is determined to be directly above the upper lip of the face. It should be noted that the manner of determining the position of the second image in the first image is not specifically limited in the embodiments of the present disclosure.

步骤311、根据第一图像与第二图像之间的位置关系,将第一图像和第二图像合成为第三图像;Step 311: Combine the first image and the second image into a third image according to a positional relationship between the first image and the second image;

在本公开实施例中，结合第一图像与第二图像之间的位置关系，通过图像合成处理，将第一图像和第二图像合成为第三图像，其中图像合成为相关技术中的技术，在此不再赘述。In the embodiments of the present disclosure, the first image and the second image are combined into a third image through image composition processing based on the positional relationship between the first image and the second image. Image composition is a technique known in the related art and is not described in detail here.
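
For illustration, a minimal composition step could look like the following Pillow-based sketch; the file paths and paste position are placeholders, and the embodiments do not prescribe any particular imaging library.

```python
from PIL import Image

def compose_third_image(first_path, second_path, position):
    """Paste the second (RGBA) image onto the first at `position` and return the result."""
    first = Image.open(first_path).convert("RGBA")
    second = Image.open(second_path).convert("RGBA")
    third = first.copy()
    third.paste(second, position, mask=second)   # the alpha channel acts as the paste mask
    return third.convert("RGB")

# third = compose_third_image("frame.jpg", "mustache.png", (260, 300))
```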

步骤312、显示第三图像;Step 312: Display a third image.

步骤313、将第三图像发送给第二移动终端,由第二移动终端显示第三图像;Step 313: Send the third image to the second mobile terminal, and display the third image by the second mobile terminal.

在本公开实施例中,第一移动终端将合成后的第三图像发送给第二移动终端,在第二移动终端显示第三图像,使第二移动终端的用户能够看到改善后的第一图像。In the embodiment of the present disclosure, the first mobile terminal sends the synthesized third image to the second mobile terminal, and displays the third image on the second mobile terminal, so that the user of the second mobile terminal can see the improved first image.
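
How the third image is carried to the second mobile terminal is left to the video-call stack in use. As a hedged sketch, one might JPEG-encode each composited frame before handing it to the transport layer; the use of OpenCV here is an assumption for illustration, not part of the disclosure.

```python
import cv2

def encode_for_transmission(third_image_bgr, quality=80):
    """Encode a composited BGR frame as JPEG bytes for the transport layer to carry."""
    ok, buf = cv2.imencode(".jpg", third_image_bgr, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    return buf.tobytes()
```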

这样，在视频通信的过程中，第一移动终端获取第一图像以及与第一图像关联的第二图像；将第一图像和第二图像合成为第三图像，第一移动终端显示第三图像的同时，将第三图像发送给第二移动终端，在第二移动终端中也显示第三图像，使第一移动终端和第二移动终端的用户都能够看到改善后的第一图像，增加视频通信的功能，改善用户体验。In this way, during video communication, the first mobile terminal acquires the first image and the second image associated with the first image, combines them into a third image, and, while displaying the third image, sends it to the second mobile terminal, which also displays it. The users of both the first mobile terminal and the second mobile terminal can therefore see the improved first image, which adds functionality to the video communication and improves the user experience.

参见图4,本公开实施例提供了一种视频通信的方法,应用于第一移动 终端,具体步骤如下:Referring to FIG. 4, an embodiment of the present disclosure provides a method for video communication, which is applied to a first mobile terminal, and the specific steps are as follows:

步骤401、获取第一图像;Step 401: Acquire a first image.

在本公开实施例中,第一图像是第一移动终端或者第二移动终端视频通信时拍摄的图像,该第二移动终端是与第一移动终端建立视频通信的移动终端。视频通信功能可以是移动终端自身提供的,也可以是第三方应用程序提供的(例如,微信或者QQ等),本公开实施例对视频通信功能的提供方式不做具体限定。In an embodiment of the present disclosure, the first image is an image captured by the first mobile terminal or the second mobile terminal during video communication, and the second mobile terminal is a mobile terminal that establishes video communication with the first mobile terminal. The video communication function may be provided by the mobile terminal itself, or may be provided by a third-party application (for example, WeChat or QQ, etc.), and the manner in which the video communication function is provided in the embodiment of the present disclosure is not specifically limited.

步骤402、获取第一语音指令,然后执行步骤403;Step 402, obtaining a first voice instruction, and then performing step 403;

在本公开实施例中,第一语音指令为第一移动终端或者第二移动终端采集的指令。In an embodiment of the present disclosure, the first voice instruction is an instruction collected by the first mobile terminal or the second mobile terminal.

步骤403、根据第一语音指令,选择第二图像,然后执行步骤410;Step 403, according to the first voice instruction, select the second image, and then perform step 410;

例如：接收第一移动终端的用户的第一语音指令，该第一语音指令为“加兔耳朵”、“变猪头”等，通过语音识别功能，确定该第一语音指令对应的第二图像，从图像数据库或者网络中选择出该第二图像。需要说明的是，识别第一语音指令可以通过相关技术中的语音识别技术实现，本公开实施例对第一语音指令的识别方式不做具体限定。For example, a first voice instruction such as "add rabbit ears" or "turn into a pig's head" is received from the user of the first mobile terminal; the second image corresponding to the first voice instruction is determined through a speech recognition function, and the second image is selected from an image database or from the network. It should be noted that recognizing the first voice instruction can be implemented by speech recognition techniques in the related art, and the manner of recognizing the first voice instruction is not specifically limited in the embodiments of the present disclosure.

步骤404、识别第一图像中第一目标对象所在的第一场景,然后执行步骤405;Step 404, identifying the first scene in the first image where the first target object is located, and then performing step 405;

步骤405、根据第一场景,选择第二图像,然后执行步骤410;Step 405, according to the first scene, select the second image, and then perform step 410;

例如：第一目标对象为第一图像中的人，第一场景用于表示第一目标对象所处的自然环境，例如：第一场景为雪景，从图像数据库或者网络中选择出与雪景相关的第二图像，例如：围巾的图像、棉帽的图像等。需要说明的是，识别第一场景可以通过相关技术中的图像识别技术实现，本公开实施例对第一场景的识别方式不做具体限定。For example, the first target object is a person in the first image, and the first scene represents the natural environment in which the first target object is located. If the first scene is a snow scene, a second image related to snow, such as an image of a scarf or an image of a cotton cap, is selected from an image database or from the network. It should be noted that recognizing the first scene can be implemented by image recognition techniques in the related art, and the manner of recognizing the first scene is not specifically limited in the embodiments of the present disclosure.

步骤406、识别第一图像中第一目标对象的面部表情,然后执行步骤407;Step 406, identifying the facial expression of the first target object in the first image, and then performing step 407;

步骤407、根据第一目标对象的面部表情,选择第二图像,然后执行步骤410;Step 407, according to the facial expression of the first target object, select the second image, and then perform step 410;

例如:识别出第一目标对象的面部表情为哭泣,从图像数据库或者网络中选择出与哭泣相关的第二图像,例如纸巾的图像等。需要说明的是,识别 面部表情可以通过相关技术中的人脸识别技术实现,本公开实施例对面部表情的识别方式不做具体限定。For example, it is recognized that the facial expression of the first target object is crying, and a second image related to crying, such as an image of a paper towel, is selected from an image database or a network. It should be noted that the recognition of the facial expression can be implemented by the face recognition technology in the related art, and the manner of recognizing the facial expression is not specifically limited in the embodiment of the present disclosure.

步骤408、接收来自第二移动终端的第一标识信息;Step 408: Receive first identification information from the second mobile terminal.

在本公开实施例中,该第一标识信息中可以包括第二图像的名称或第二图像的存储位置,该第一标识信息中还可以包括第二图像在第一图像中的显示位置。本公开实施例对第一标识信息的内容不做具体限定。In the embodiment of the present disclosure, the first identification information may include a name of the second image or a storage location of the second image, and the first identification information may further include a display position of the second image in the first image. The content of the first identification information is not specifically limited in the embodiment of the present disclosure.

步骤409、根据第一标识信息,选择与第一标识信息对应的第二图像,然后执行步骤410;Step 409, according to the first identification information, select a second image corresponding to the first identification information, and then perform step 410;

例如：第一标识信息中包含第二图像的名称为“八字胡”，从图像数据库或者网络中选择“八字胡”的第二图像。对于第一标识信息中包含第二图像的存储位置的情况与此类似，不再赘述。For example, if the first identification information contains the name "mustache" for the second image, the second image "mustache" is selected from the image database or from the network. The case in which the first identification information contains the storage location of the second image is similar and is not described again.

步骤410、识别第一图像中的第一目标对象的位置信息;Step 410: Identify location information of the first target object in the first image.

在本公开实施例中,通过图像识别对第一图像中的第一目标对象的人脸轮廓进行识别,得到第一目标对象的位置信息。In the embodiment of the present disclosure, the face contour of the first target object in the first image is identified by image recognition, and the location information of the first target object is obtained.
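
As an illustrative stand-in for this recognition step (the disclosure does not mandate a particular detector), OpenCV's bundled Haar cascade can return a bounding box for the first detected face, which then serves as the position information of the first target object:

```python
import cv2

def first_target_position(frame_bgr):
    """Return (x, y, w, h) of the first detected face, or None if no face is found."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None
```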

步骤411、根据位置信息,确定第一图像与第二图像之间的位置关系;Step 411: Determine a positional relationship between the first image and the second image according to the location information.

例如，第二图像为“八字胡”，根据第一目标对象的位置信息，确定第二图像在第一图像中的位置为人脸的上嘴唇正上方。For example, if the second image is a "mustache", the position of the second image in the first image is determined, according to the position information of the first target object, to be directly above the upper lip of the face.

步骤412、根据第一图像与第二图像之间的位置关系,显示第一图像和第二图像;Step 412: Display a first image and a second image according to a positional relationship between the first image and the second image;

在本公开实施例中，可以通过图像分层处理或者图像合成处理，在第一视频通信窗口中同时显示第一图像和第二图像，其中图像分层或者图像合成均为相关技术，在此不再赘述。In the embodiments of the present disclosure, the first image and the second image may be displayed simultaneously in the first video communication window through image layering processing or image composition processing. Image layering and image composition are both known in the related art and are not described in detail here.

步骤413、向第二移动终端发送第二图像的第二标识信息，由第二移动终端根据第二标识信息获取第二图像以及第一图像和第二图像之间的位置关系，并显示第一图像和第二图像。Step 413: Send second identification information of the second image to the second mobile terminal, so that the second mobile terminal acquires, according to the second identification information, the second image and the positional relationship between the first image and the second image, and displays the first image and the second image.

在本公开实施例中，第一移动终端在显示第一图像和第二图像的同时，向第二移动终端发送第二图像的标识信息，该标识信息用于指示第二移动终端从图像数据库或网络中选择出第二图像，并获取第一图像与第二图像之间的位置关系，结合该位置关系，第二移动终端能够同时显示第一图像和第二图像。In the embodiments of the present disclosure, while displaying the first image and the second image, the first mobile terminal sends the identification information of the second image to the second mobile terminal. The identification information instructs the second mobile terminal to select the second image from an image database or from the network and to acquire the positional relationship between the first image and the second image; using this positional relationship, the second mobile terminal can display the first image and the second image simultaneously.
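
A possible receiver-side flow on the second mobile terminal, sketched with hypothetical helper callables for the database lookup, recognition, layout, and display steps, might look like this:

```python
# Receiver-side sketch. fetch_overlay, detect_face, place and render are hypothetical
# callables standing in for the terminal's database lookup, recognition, layout and
# display code; they are passed in so the sketch stays self-contained.

def handle_second_identification_info(info, frame, fetch_overlay, detect_face, place, render):
    overlay = fetch_overlay(info["image_name"])            # local database first, network otherwise
    if "display_position" in info:                         # sender already chose a position
        pos = (info["display_position"]["x"], info["display_position"]["y"])
    else:                                                  # otherwise derive it from the frame
        pos = place(detect_face(frame), info["image_name"], overlay.size)
    render(frame, overlay, pos)                            # show both images in the video window
```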

该第二图像的标识信息中可以包括为第二图像的名称或第二图像的存储位置,该第二图像的标识信息中还可以包括第二图像在第一图像中的显示位置。本公开实施例对第二图像的标识信息不做具体限定。The identification information of the second image may include a name of the second image or a storage location of the second image, and the identification information of the second image may further include a display position of the second image in the first image. The identification information of the second image is not specifically limited in the embodiment of the present disclosure.

这样，在视频通信的过程中，第一移动终端获取第一图像以及与第一图像关联的第二图像；第一移动终端在显示第一图像和第二图像的同时，将第二图像的标识信息发送给第二移动终端，第二移动终端能够根据该标识信息同时显示第一图像和第二图像，使第一移动终端和第二移动终端的用户都能够看到改善后的第一图像，增加视频通信的功能，改善用户体验。In this way, during video communication, the first mobile terminal acquires the first image and the second image associated with the first image; while displaying the first image and the second image, it sends the identification information of the second image to the second mobile terminal, which can then display the first image and the second image simultaneously according to that identification information. The users of both the first mobile terminal and the second mobile terminal can therefore see the improved first image, which adds functionality to the video communication and improves the user experience.

参见图5，图中示出一种本公开实施例的应用场景，第一移动终端501的显示区域中可以包括第一视频通信窗口502和第二视频通信窗口503，其中，在第一视频通信窗口502中显示第四图像504和第五图像505，在第二视频通信窗口503中显示第六图像506和第七图像507。Referring to FIG. 5, an application scenario of an embodiment of the present disclosure is shown. The display area of the first mobile terminal 501 may include a first video communication window 502 and a second video communication window 503, where a fourth image 504 and a fifth image 505 are displayed in the first video communication window 502, and a sixth image 506 and a seventh image 507 are displayed in the second video communication window 503.

应当理解的是，本应用场景适用于本公开的方法实施例，为描述清楚，将第一移动终端对应的图像重新命名为第四图像和第五图像(第四图像为第一移动终端视频通信拍摄得到的图像，第五图像为与第四图像关联的图像)；将第二移动终端对应的图像重新命名为第六图像和第七图像(第六图像为第二移动终端视频通信拍摄得到的图像，第七图像为与第六图像关联的图像)。It should be understood that this application scenario applies to the method embodiments of the present disclosure. For clarity of description, the images corresponding to the first mobile terminal are renamed the fourth image and the fifth image (the fourth image is the image captured by the first mobile terminal during video communication, and the fifth image is the image associated with the fourth image); the images corresponding to the second mobile terminal are renamed the sixth image and the seventh image (the sixth image is the image captured by the second mobile terminal during video communication, and the seventh image is the image associated with the sixth image).

需要说明的是，第一移动终端的显示区域内，可以只显示与第一移动终端的用户对应的第四图像和第五图像；也可以只显示与第二移动终端的用户对应的第六图像和第七图像；还可以同时显示第四图像、第五图像、第六图像和第七图像。本公开实施例对第一移动终端的显示区域内的显示方式不做具体限定。It should be noted that, in the display area of the first mobile terminal, only the fourth image and the fifth image corresponding to the user of the first mobile terminal may be displayed; only the sixth image and the seventh image corresponding to the user of the second mobile terminal may be displayed; or the fourth image, the fifth image, the sixth image, and the seventh image may all be displayed at the same time. The display manner within the display area of the first mobile terminal is not specifically limited in the embodiments of the present disclosure.

图5示出了同时显示第四图像、第五图像、第六图像和第七图像的应用场景，在视频通信过程中，第一移动终端501通过拍摄得到第四图像504，并接收第二移动终端拍摄得到的第六图像506。将第四图像504显示在第一视频通信窗口502中，将第六图像506显示在第二视频通信窗口503中。当第一移动终端501接收到用户发出的语音指令中包含关键字“帽子”时，从第一移动终端501的图像数据库或网络中选择“帽子”的第七图像507。根据第七图像507的名称，确定该第七图像507与第六图像506之间的位置关系，该第七图像507位于第六图像506中人的头部上方，在第六图像506的对应位置显示第七图像507。上述过程为第一移动终端的用户为第二移动终端的用户添加动画图像的过程，可以理解的是，第一移动终端的用户为自己添加动画图像可以采用相同的方式，在此不再赘述。FIG. 5 shows an application scenario in which the fourth image, the fifth image, the sixth image, and the seventh image are displayed at the same time. During video communication, the first mobile terminal 501 captures the fourth image 504 and receives the sixth image 506 captured by the second mobile terminal. The fourth image 504 is displayed in the first video communication window 502, and the sixth image 506 is displayed in the second video communication window 503. When a voice instruction issued by the user and received by the first mobile terminal 501 contains the keyword "hat", the seventh image 507 of a "hat" is selected from the image database of the first mobile terminal 501 or from the network. The positional relationship between the seventh image 507 and the sixth image 506 is determined according to the name of the seventh image 507: the seventh image 507 is located above the head of the person in the sixth image 506, and the seventh image 507 is displayed at the corresponding position of the sixth image 506. The above is the process by which the user of the first mobile terminal adds an animated image for the user of the second mobile terminal; it can be understood that the user of the first mobile terminal can add an animated image for himself or herself in the same manner, which is not described again here.

需要说明的是，根据场景识别结果或者根据人脸识别结果在第四图像中添加第五图像的方式与此类似，在此不再赘述。It should be noted that the manner of adding the fifth image to the fourth image according to a scene recognition result or a face recognition result is similar to the process described above, and details are not described herein again.

以图4所示的方法实施例为例，第一移动终端将第七图像507的标识信息发送给第二移动终端，第二移动终端根据该标识信息，从图像数据库或者网络中选择出“帽子”的第七图像507，并根据该标识信息获取第六图像506和第七图像507之间的位置关系，该第七图像507位于第六图像506中人的头部上方，在第六图像506的对应位置显示第七图像507。Taking the method embodiment shown in FIG. 4 as an example, the first mobile terminal sends the identification information of the seventh image 507 to the second mobile terminal. According to the identification information, the second mobile terminal selects the seventh image 507 of the "hat" from an image database or from the network and acquires the positional relationship between the sixth image 506 and the seventh image 507: the seventh image 507 is located above the head of the person in the sixth image 506, and the seventh image 507 is displayed at the corresponding position of the sixth image 506.

同理，当第二移动终端的用户为第一移动终端的用户添加了动画，则第一移动终端接收第二移动终端发送的第五图像505的标识信息，根据该标识信息，第一移动终端从图像数据库或者网络中选择出“牛角”的第五图像505，并根据该标识信息获取第四图像504和第五图像505之间的位置关系，该第五图像505位于第四图像504中人的头部上方，在第四图像504的对应位置显示第五图像505。Similarly, when the user of the second mobile terminal adds an animation for the user of the first mobile terminal, the first mobile terminal receives the identification information of the fifth image 505 sent by the second mobile terminal. According to the identification information, the first mobile terminal selects the fifth image 505 of the "ox horns" from an image database or from the network and acquires the positional relationship between the fourth image 504 and the fifth image 505: the fifth image 505 is located above the head of the person in the fourth image 504, and the fifth image 505 is displayed at the corresponding position of the fourth image 504.

参见图6,本公开实施例提供了一种第一移动终端600,包括:Referring to FIG. 6, an embodiment of the present disclosure provides a first mobile terminal 600, including:

第一获取模块601，用于获取第一图像、与所述第一图像关联的第二图像，以及所述第一图像与所述第二图像之间的位置关系；其中，所述第一图像为所述第一移动终端拍摄得到的图像，或者，第二移动终端拍摄得到的图像，所述第二移动终端为与所述第一移动终端建立视频通信连接的移动终端；a first acquiring module 601, configured to acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, where the first image is an image captured by the first mobile terminal or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal;

显示模块602,用于根据所述第一图像与所述第二图像之间的位置关系,显示所述第一图像和第二图像。The display module 602 is configured to display the first image and the second image according to a positional relationship between the first image and the second image.

可选地,第一获取模块601包括:Optionally, the first obtaining module 601 includes:

第一获取单元6011,用于获取第一语音指令,所述第一语音指令为所述第一移动终端采集的指令,或者,所述第二移动终端采集的指令;The first obtaining unit 6011 is configured to acquire a first voice instruction, where the first voice instruction is an instruction collected by the first mobile terminal, or an instruction collected by the second mobile terminal;

第一选择单元6012,用于根据第一语音指令,从第一移动终端的图像数 据库中选择第二图像;a first selecting unit 6012, configured to select a second image from an image database of the first mobile terminal according to the first voice instruction;

和/或,第一识别单元6013,用于识别第一图像中第一目标对象所在的第一场景;And/or, the first identifying unit 6013 is configured to identify a first scene where the first target object is located in the first image;

第二选择单元6014,用于根据第一场景,从第一移动终端的图像数据库中选择第二图像;a second selecting unit 6014, configured to select a second image from an image database of the first mobile terminal according to the first scenario;

和/或,第二识别单元6015,用于识别第一图像中第一目标对象的面部表情;And/or a second identifying unit 6015, configured to identify a facial expression of the first target object in the first image;

第三选择单元6016,用于根据第一目标对象的面部表情,从第一移动终端的图像数据库中选择第二图像;a third selecting unit 6016, configured to select a second image from an image database of the first mobile terminal according to the facial expression of the first target object;

和/或,第一接收单元6017,用于接收来自第二移动终端的第一标识信息;And/or, the first receiving unit 6017 is configured to receive first identification information from the second mobile terminal;

第四选择单元6018,用于根据所述第一标识信息,选择与所述第一标识信息对应的所述第二图像。The fourth selecting unit 6018 is configured to select the second image corresponding to the first identifier information according to the first identifier information.

可选地,显示模块602包括:Optionally, the display module 602 includes:

合成单元6021,用于根据所述第一图像与所述第二图像之间的位置关系,将所述第一图像和所述第二图像合成为第三图像;a synthesizing unit 6021, configured to synthesize the first image and the second image into a third image according to a positional relationship between the first image and the second image;

显示单元6022,用于显示所述第三图像。The display unit 6022 is configured to display the third image.

可选地,第一移动终端600还包括:Optionally, the first mobile terminal 600 further includes:

第一发送模块603,用于将所述第三图像发送给所述第二移动终端,由所述第二移动终端显示所述第三图像。The first sending module 603 is configured to send the third image to the second mobile terminal, where the third image is displayed by the second mobile terminal.

可选地,第一移动终端600还包括:Optionally, the first mobile terminal 600 further includes:

第二发送模块604,用于向所述第二移动终端发送所述第二图像的第二标识信息,由所述第二移动终端根据所述第二标识信息获取所述第二图像以及所述第一图像和所述第二图像之间的位置关系,并显示所述第一图像和所述第二图像。a second sending module 604, configured to send second identifier information of the second image to the second mobile terminal, where the second mobile terminal acquires the second image according to the second identifier information, and a positional relationship between the first image and the second image, and displaying the first image and the second image.

可选地,第一获取模块601还包括:Optionally, the first obtaining module 601 further includes:

第三识别单元6019,用于识别所述第一图像中的第一目标对象的位置信息;a third identifying unit 6019, configured to identify location information of the first target object in the first image;

确定单元6020,用于根据所述位置信息,确定所述第一图像与所述第二图像之间的位置关系。The determining unit 6020 is configured to determine a positional relationship between the first image and the second image according to the location information.

可选地,第三识别单元6019包括:Optionally, the third identifying unit 6019 includes:

识别子单元60191,用于识别所述第一图像中的人脸轮廓。The identification sub-unit 60191 is configured to identify a face contour in the first image.

这样,在视频通信的过程中,第一移动终端获取并显示第一图像以及与第一图像关联的第二图像,能够改善第一图像的显示效果,增加视频通信的功能,改善用户体验。In this way, in the process of video communication, the first mobile terminal acquires and displays the first image and the second image associated with the first image, which can improve the display effect of the first image, increase the function of the video communication, and improve the user experience.

图7为实现本公开各个实施例的一种移动终端的硬件结构示意图，如图所示，该移动终端700包括但不限于：射频单元701、网络模块702、音频输出单元703、输入单元704、传感器705、显示单元706、用户输入单元707、接口单元708、存储器709、处理器710、以及电源711等部件。本领域技术人员可以理解，图7中示出的移动终端结构并不构成对移动终端的限定，移动终端可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。在本公开实施例中，移动终端包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。FIG. 7 is a schematic diagram of the hardware structure of a mobile terminal implementing the embodiments of the present disclosure. As shown, the mobile terminal 700 includes, but is not limited to, a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, and a power supply 711. Those skilled in the art will understand that the mobile terminal structure shown in FIG. 7 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. In the embodiments of the present disclosure, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer, and the like.

在一个实施例中,处理器710,用于获取第一图像、与所述第一图像关联的第二图像,以及所述第一图像与所述第二图像之间的位置关系;其中,所述第一图像为所述第一移动终端拍摄得到的图像,或者,第二移动终端拍摄得到的图像,所述第二移动终端为与所述第一移动终端建立视频通信连接的移动终端。In one embodiment, the processor 710 is configured to acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image; The first image is an image captured by the first mobile terminal, or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal.

处理器710,还用于根据所述第一图像与所述第二图像之间的位置关系,显示所述第一图像和第二图像。The processor 710 is further configured to display the first image and the second image according to a positional relationship between the first image and the second image.

这样,在视频通信的过程中,第一移动终端获取并显示第一图像以及与第一图像关联的第二图像,能够改善第一图像的显示效果,增加视频通信的功能,改善用户体验。In this way, in the process of video communication, the first mobile terminal acquires and displays the first image and the second image associated with the first image, which can improve the display effect of the first image, increase the function of the video communication, and improve the user experience.

应理解的是,本公开实施例中,射频单元701可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器710处理;另外,将上行的数据发送给基站。通常,射频单元701包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元701还可以通过无线通信系统与网络和其他设备通信。It should be understood that, in the embodiment of the present disclosure, the radio frequency unit 701 can be used for receiving and transmitting signals during the transmission and reception of information or during a call. Specifically, after receiving downlink data from the base station, the processing is processed by the processor 710; The uplink data is sent to the base station. In general, radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio unit 701 can also communicate with the network and other devices through a wireless communication system.

移动终端通过网络模块702为用户提供了无线的宽带互联网访问,如帮 助用户收发电子邮件、浏览网页和访问流式媒体等。The mobile terminal provides the user with wireless broadband Internet access through the network module 702, such as helping the user to send and receive emails, browse web pages, and access streaming media.

音频输出单元703可以将射频单元701或网络模块702接收的或者在存储器709中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元703还可以提供与移动终端700执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元703包括扬声器、蜂鸣器以及受话器等。The audio output unit 703 can convert the audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as a sound. Moreover, the audio output unit 703 can also provide audio output (eg, call signal reception sound, message reception sound, etc.) related to a particular function performed by the mobile terminal 700. The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.

输入单元704用于接收音频或视频信号。输入单元704可以包括图形处理器(Graphics Processing Unit，GPU)7041和麦克风7042，图形处理器7041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元706上。经图形处理器7041处理后的图像帧可以存储在存储器709(或其它存储介质)中或者经由射频单元701或网络模块702进行发送。麦克风7042可以接收声音，并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元701发送到移动通信基站的格式输出。The input unit 704 is configured to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or another storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 can receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701.

移动终端700还包括至少一种传感器705,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板7061的亮度,接近传感器可在移动终端700移动到耳边时,关闭显示面板7061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别移动终端姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器705还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。The mobile terminal 700 also includes at least one type of sensor 705, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 7061 according to the brightness of the ambient light, and the proximity sensor can close the display panel 7061 when the mobile terminal 700 moves to the ear. / or backlight. As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity. It can be used to identify the attitude of the mobile terminal (such as horizontal and vertical screen switching, related games). , magnetometer attitude calibration), vibration recognition related functions (such as pedometer, tapping), etc.; sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, Infrared sensors and the like are not described here.

显示单元706用于显示由用户输入的信息或提供给用户的信息。显示单元706可包括显示面板7061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板7061。The display unit 706 is for displaying information input by the user or information provided to the user. The display unit 706 can include a display panel 7061. The display panel 7061 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.

用户输入单元707可用于接收输入的数字或字符信息,以及产生与移动终端的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元707包括触控面板7071以及其他输入设备7072。触控面板7071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板7071上或在触控面板7071附近的操作)。触控面板671可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器710,接收处理器710发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板7071。除了触控面板7071,用户输入单元707还可以包括其他输入设备7072。具体地,其他输入设备7072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。The user input unit 707 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, can collect touch operations on or near the user (such as a user using a finger, a stylus, or the like on the touch panel 7071 or near the touch panel 7071. operating). The touch panel 671 can include two parts of a touch detection device and a touch controller. Wherein, the touch detection device detects the touch orientation of the user, and detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into contact coordinates, and sends the touch information. To the processor 710, the command sent by the processor 710 is received and executed. In addition, the touch panel 7071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch panel 7071, the user input unit 707 may also include other input devices 7072. Specifically, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control button, a switch button, etc.), a trackball, a mouse, and a joystick, which are not described herein.

进一步的，触控面板7071可覆盖在显示面板7061上，当触控面板7071检测到在其上或附近的触摸操作后，传送给处理器710以确定触摸事件的类型，随后处理器710根据触摸事件的类型在显示面板7061上提供相应的视觉输出。虽然在图7中，触控面板7071与显示面板7061是作为两个独立的部件来实现移动终端的输入和输出功能，但是在某些实施例中，可以将触控面板7071与显示面板7061集成而实现移动终端的输入和输出功能，具体此处不做限定。Further, the touch panel 7071 may cover the display panel 7061. When the touch panel 7071 detects a touch operation on or near it, the operation is transmitted to the processor 710 to determine the type of the touch event, and the processor 710 then provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although in FIG. 7 the touch panel 7071 and the display panel 7061 are implemented as two independent components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 7071 and the display panel 7061 may be integrated to realize the input and output functions of the mobile terminal, which is not limited here.

接口单元708为外部装置与移动终端700连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元708可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端700内的一个或多个元件或者可以用于在移动终端700和外部装置之间传输数据。The interface unit 708 is an interface in which an external device is connected to the mobile terminal 700. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more. The interface unit 708 can be configured to receive input from an external device (eg, data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 700 or can be used at the mobile terminal 700 and externally Data is transferred between devices.

存储器709可用于存储软件程序以及各种数据。存储器709可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功 能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器709可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。Memory 709 can be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.). Moreover, memory 709 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.

处理器710是移动终端的控制中心,利用各种接口和线路连接整个移动终端的各个部分,通过运行或执行存储在存储器709内的软件程序和/或模块,以及调用存储在存储器709内的数据,执行移动终端的各种功能和处理数据,从而对移动终端进行整体监控。处理器710可包括一个或多个处理单元;优选的,处理器710可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器710中。Processor 710 is the control center of the mobile terminal, connecting various portions of the entire mobile terminal using various interfaces and lines, by running or executing software programs and/or modules stored in memory 709, and recalling data stored in memory 709. The mobile terminal performs various functions and processing data to perform overall monitoring on the mobile terminal. The processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application, etc., and performs modulation and demodulation. The processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 710.

移动终端700还可以包括给各个部件供电的电源711(比如电池),优选的,电源711可以通过电源管理系统与处理器710逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。The mobile terminal 700 may further include a power source 711 (such as a battery) for supplying power to various components. Preferably, the power source 711 may be logically connected to the processor 710 through a power management system to manage charging, discharging, and power management through the power management system. And other functions.

另外,移动终端700包括一些未示出的功能模块,在此不再赘述。In addition, the mobile terminal 700 includes some functional modules not shown, and details are not described herein again.

优选的，本公开实施例还提供一种移动终端，包括处理器710，存储器709，存储在存储器709上并可在所述处理器710上运行的计算机程序，该计算机程序被处理器710执行时实现上述视频通信方法实施例的各个过程，且能达到相同的技术效果，为避免重复，这里不再赘述。Preferably, an embodiment of the present disclosure further provides a mobile terminal, including a processor 710, a memory 709, and a computer program stored in the memory 709 and executable on the processor 710. When executed by the processor 710, the computer program implements the processes of the foregoing video communication method embodiments and achieves the same technical effects, which are not repeated here to avoid redundancy.

本公开实施例还提供一种计算机可读存储介质，计算机可读存储介质上存储有计算机程序，该计算机程序被处理器执行时实现上述视频通信方法实施例的各个过程，且能达到相同的技术效果，为避免重复，这里不再赘述。其中，所述的计算机可读存储介质，如只读存储器(Read-Only Memory，ROM)、随机存取存储器(Random Access Memory，RAM)、磁碟或者光盘等。An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the foregoing video communication method embodiments and achieves the same technical effects, which are not repeated here to avoid redundancy. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、 方法、物品或者装置中还存在另外的相同要素。It is to be understood that the term "comprises", "comprising", or any other variants thereof, is intended to encompass a non-exclusive inclusion, such that a process, method, article, or device comprising a series of elements includes those elements. It also includes other elements that are not explicitly listed, or elements that are inherent to such a process, method, article, or device. An element that is defined by the phrase "comprising a ..." does not exclude the presence of additional elements in the process, method, article, or device that comprises the element.

Claims (13)

1. A video communication method, applied to a first mobile terminal, wherein the method comprises: acquiring a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, wherein the first image is an image captured by the first mobile terminal or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal; and displaying the first image and the second image according to the positional relationship between the first image and the second image.

2. The method according to claim 1, wherein the acquiring a second image associated with the first image comprises: acquiring a first voice instruction, the first voice instruction being an instruction collected by the first mobile terminal or an instruction collected by the second mobile terminal, and selecting the second image according to the first voice instruction; or, identifying a first scene in which a first target object in the first image is located, and selecting the second image according to the first scene; or, identifying a facial expression of a first target object in the first image, and selecting the second image according to the facial expression of the first target object; or, receiving first identification information from the second mobile terminal, and selecting, according to the first identification information, the second image corresponding to the first identification information.

3. The method according to claim 1, wherein the displaying the first image and the second image according to the positional relationship between the first image and the second image comprises: synthesizing the first image and the second image into a third image according to the positional relationship between the first image and the second image; and displaying the third image.

4. The method according to claim 3, wherein after the synthesizing the first image and the second image into a third image, the method further comprises: sending the third image to the second mobile terminal, the third image being displayed by the second mobile terminal.

5. The method according to claim 1, wherein after the acquiring a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, the method further comprises: sending second identification information of the second image to the second mobile terminal, the second mobile terminal acquiring, according to the second identification information, the second image and the positional relationship between the first image and the second image, and displaying the first image and the second image.

6. The method according to claim 1, wherein acquiring the positional relationship between the first image and the second image comprises: identifying position information of a first target object in the first image; and determining the positional relationship between the first image and the second image according to the position information.

7. The method according to claim 6, wherein the identifying position information of a first target object in the first image comprises: identifying a face contour in the first image.

8. A first mobile terminal, comprising: a first acquiring module, configured to acquire a first image, a second image associated with the first image, and a positional relationship between the first image and the second image, wherein the first image is an image captured by the first mobile terminal or an image captured by a second mobile terminal, and the second mobile terminal is a mobile terminal that establishes a video communication connection with the first mobile terminal; and a display module, configured to display the first image and the second image according to the positional relationship between the first image and the second image.

9. The first mobile terminal according to claim 8, wherein the first acquiring module comprises: a first acquiring unit, configured to acquire a first voice instruction, the first voice instruction being an instruction collected by the first mobile terminal or an instruction collected by the second mobile terminal, and a first selecting unit, configured to select the second image according to the first voice instruction; and/or, a first identifying unit, configured to identify a first scene in which a first target object in the first image is located, and a second selecting unit, configured to select the second image according to the first scene; and/or, a second identifying unit, configured to identify a facial expression of a first target object in the first image, and a third selecting unit, configured to select the second image according to the facial expression of the first target object; and/or, a first receiving unit, configured to receive first identification information from the second mobile terminal, and a fourth selecting unit, configured to select, according to the first identification information, the second image corresponding to the first identification information.

10. The first mobile terminal according to claim 8, wherein the display module comprises: a synthesizing unit, configured to synthesize the first image and the second image into a third image according to the positional relationship between the first image and the second image; and a display unit, configured to display the third image.

11. The first mobile terminal according to claim 10, wherein the first mobile terminal further comprises: a first sending module, configured to send the third image to the second mobile terminal, the third image being displayed by the second mobile terminal.

12. The first mobile terminal according to claim 8, wherein the first mobile terminal further comprises: a second sending module, configured to send second identification information of the second image to the second mobile terminal, the second mobile terminal acquiring, according to the second identification information, the second image and the positional relationship between the first image and the second image, and displaying the first image and the second image.

13. A mobile terminal, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein when the computer program is executed by the processor, the steps of the video communication method according to any one of claims 1 to 7 are implemented.
PCT/CN2019/082862 2018-04-16 2019-04-16 Video communication method and mobile terminal Ceased WO2019201235A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810337324.2 2018-04-16
CN201810337324.2A CN108551562A (en) 2018-04-16 2018-04-16 A kind of method and mobile terminal of video communication

Publications (1)

Publication Number Publication Date
WO2019201235A1 true WO2019201235A1 (en) 2019-10-24

Family

ID=63514974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082862 Ceased WO2019201235A1 (en) 2018-04-16 2019-04-16 Video communication method and mobile terminal

Country Status (2)

Country Link
CN (1) CN108551562A (en)
WO (1) WO2019201235A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108551562A (en) * 2018-04-16 2018-09-18 维沃移动通信有限公司 A kind of method and mobile terminal of video communication
CN110022392B (en) * 2019-05-27 2021-06-15 Oppo广东移动通信有限公司 Camera control method and related products
CN111913630B (en) * 2020-06-30 2022-10-18 维沃移动通信有限公司 Video session method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101835023A (en) * 2009-03-10 2010-09-15 英华达(西安)通信科技有限公司 Method for changing background of video call
CN102075727A (en) * 2010-12-30 2011-05-25 中兴通讯股份有限公司 Method and device for processing images in videophone
US8982179B2 (en) * 2012-06-20 2015-03-17 At&T Intellectual Property I, Lp Apparatus and method for modification of telecommunication video content
CN105872438A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Video call method and device, and terminal
CN107613228A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Adding method and terminal equipment of virtual clothes
CN108551562A (en) * 2018-04-16 2018-09-18 维沃移动通信有限公司 A kind of method and mobile terminal of video communication

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101500126A (en) * 2008-01-28 2009-08-05 德信智能手机技术(北京)有限公司 Method for sending special effect scene in video call process
CN101287093B (en) * 2008-05-30 2010-06-09 北京中星微电子有限公司 Method for adding special effect in video communication and video customer terminal
CN103916621A (en) * 2013-01-06 2014-07-09 腾讯科技(深圳)有限公司 Method and device for video communication
CN103220490A (en) * 2013-03-15 2013-07-24 广东欧珀移动通信有限公司 Method for realizing special effects in video communication, and video client
CN104967772B (en) * 2015-05-29 2018-03-30 努比亚技术有限公司 Photographic method and device
CN105657325A (en) * 2016-02-02 2016-06-08 北京小米移动软件有限公司 Method, apparatus and system for video communication
CN105915673B (en) * 2016-05-31 2019-04-02 努比亚技术有限公司 A kind of method and mobile terminal of special video effect switching

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101835023A (en) * 2009-03-10 2010-09-15 英华达(西安)通信科技有限公司 Method for changing background of video call
CN102075727A (en) * 2010-12-30 2011-05-25 中兴通讯股份有限公司 Method and device for processing images in videophone
US8982179B2 (en) * 2012-06-20 2015-03-17 At&T Intellectual Property I, Lp Apparatus and method for modification of telecommunication video content
CN105872438A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Video call method and device, and terminal
CN107613228A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Method for adding virtual clothes and terminal device
CN108551562A (en) * 2018-04-16 2018-09-18 维沃移动通信有限公司 Video communication method and mobile terminal

Also Published As

Publication number Publication date
CN108551562A (en) 2018-09-18

Similar Documents

Publication Publication Date Title
WO2021098678A1 (en) Screencast control method and electronic device
CN109461117B (en) Image processing method and mobile terminal
US20220283828A1 (en) Application sharing method, electronic device and computer readable storage medium
WO2019154181A1 (en) Display control method and mobile terminal
WO2019196707A1 (en) Mobile terminal control method and mobile terminal
WO2019196929A1 (en) Video data processing method and mobile terminal
WO2019144814A1 (en) Display screen control method and mobile terminal
WO2021036536A1 (en) Video photographing method and electronic device
WO2019174628A1 (en) Photographing method and mobile terminal
WO2019223494A1 (en) Screenshot capturing method and mobile terminal
CN111026316A (en) Image display method and electronic equipment
CN107734170B (en) Notification message processing method, mobile terminal and wearable device
CN111666009A (en) Interface display method and electronic device
CN108881782B (en) A video call method and terminal device
WO2019196691A1 (en) Keyboard interface display method and mobile terminal
CN108600089B Method for displaying an expression image and terminal device
WO2019206077A1 (en) Video call processing method and mobile terminal
CN109922294B (en) A video processing method and mobile terminal
CN108989558A (en) Method and device for terminal communication
CN108628515A Multimedia content operation method and mobile terminal
CN108366221A Video call method and terminal
CN110087149A Video image sharing method and device, and mobile terminal
WO2020011080A1 (en) Display control method and terminal device
WO2019154360A1 (en) Interface switching method and mobile terminal
CN109618218B (en) A video processing method and mobile terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19788882

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19788882

Country of ref document: EP

Kind code of ref document: A1