
CN109960813A - A translation method, mobile terminal and computer-readable storage medium - Google Patents


Info

Publication number
CN109960813A
CN109960813A (application CN201910203324.8A)
Authority
CN
China
Prior art keywords
language information
information
translated
language
mobile terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910203324.8A
Other languages
Chinese (zh)
Inventor
焦磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910203324.8A
Publication of CN109960813A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00: Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/16: Constructional details or arrangements
    • G06F 1/1613: Constructional details or arrangements for portable computers
    • G06F 1/1633: Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F 1/1615 - G06F 1/1626
    • G06F 1/1637: Details related to the display arrangement, including those related to the mounting of the display in the housing
    • G06F 1/1641: Details related to the display arrangement, including those related to the mounting of the display in the housing, the display being formed by a plurality of foldable display components
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/58: Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Machine Translation (AREA)

Abstract

Embodiments of the present invention provide a translation method, a mobile terminal, and a computer-readable storage medium. The mobile terminal includes two display screens disposed on the front and the back of the mobile terminal, respectively. The translation method includes: acquiring first to-be-translated language information input by a first user facing a first display screen, where the type of the first to-be-translated language information includes sign language information, voice information, or text information; translating the first to-be-translated language information into first target language information, where the type of the first target language information includes voice information or text information when the type of the first to-be-translated language information includes sign language information, and includes sign language information when the type of the first to-be-translated language information includes voice information or text information; and displaying the first target language information on a second display screen. The embodiments of the present invention can improve the efficiency of the translation method.

Description

Translation method, mobile terminal and computer readable storage medium
Technical Field
The present invention relates to the field of translation technologies, and in particular, to a translation method, a mobile terminal, and a computer-readable storage medium.
Background
With deepening internationalization and rising living standards, more and more people travel, study, or work abroad, and language barriers often reduce the effectiveness of face-to-face communication.
In the related art, a mobile phone is used for translation. For example, text information to be translated is entered into the phone via a text input method, and the phone translates it into text information of another language; or voice information to be translated is input via a voice input method, and the phone translates it into voice information of another language.
However, when mobile-phone translation is applied to face-to-face communication, only one person at a time can input the text or voice to be translated, and the phone must then be handed to the other person to obtain the translated text or voice. Because the phone is passed back and forth between the communicating parties, translation and communication are inefficient.
Disclosure of Invention
The embodiments of the present invention provide a translation method, a mobile terminal, and a computer-readable storage medium, to solve the problem of the low efficiency of translation methods in the prior art.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a translation method, which is applied to a mobile terminal, where the mobile terminal includes two display screens respectively disposed on a front side and a back side of the mobile terminal, and the translation method includes:
acquiring first to-be-translated language information input by a first user facing a first display screen, where the type of the first to-be-translated language information includes sign language information, voice information, or text information;
translating the first to-be-translated language information into first target language information, where the type of the first target language information includes voice information or text information when the type of the first to-be-translated language information includes sign language information, and includes sign language information when the type of the first to-be-translated language information includes voice information or text information;
and displaying the first target language information on a second display screen.
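As a rough illustration only, and not part of the claimed method, the category mapping in the first aspect can be sketched in Python; the enum names and the `prefer_voice` flag are assumptions introduced here:

```python
from enum import Enum, auto

class InfoType(Enum):
    SIGN = auto()   # sign language information
    VOICE = auto()  # voice information
    TEXT = auto()   # text (character) information

def target_type(source: InfoType, prefer_voice: bool = False) -> InfoType:
    """Category mapping from the first aspect: sign language translates to
    voice or text, while voice or text translates to sign language."""
    if source is InfoType.SIGN:
        return InfoType.VOICE if prefer_voice else InfoType.TEXT
    return InfoType.SIGN
```

Under this mapping, `target_type(InfoType.VOICE)` yields `InfoType.SIGN`, matching the second condition of the claim.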
In a second aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes two display screens respectively disposed on a front side and a back side of the mobile terminal, and the mobile terminal further includes:
a first obtaining module, configured to obtain first to-be-translated language information input by a first user facing a first display screen, where the type of the first to-be-translated language information includes sign language information, voice information, or text information;
a first translation module, configured to translate the first to-be-translated language information into first target language information, where the type of the first target language information includes voice information or text information when the type of the first to-be-translated language information includes sign language information, and includes sign language information when the type of the first to-be-translated language information includes voice information or text information;
and a first display module, configured to display the first target language information on a second display screen.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including:
the translation method comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the steps in the translation method provided by the embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the translation method provided by the embodiment of the present invention.
In the embodiments of the present invention, first to-be-translated language information input by a first user facing a first display screen is acquired, where the type of the first to-be-translated language information includes sign language information, voice information, or text information; the first to-be-translated language information is translated into first target language information, where the type of the first target language information includes voice information or text information when the type of the first to-be-translated language information includes sign language information, and includes sign language information when the type of the first to-be-translated language information includes voice information or text information; and the first target language information is displayed on a second display screen. In this way, the information input by the user facing the first display screen can be translated into the first target language information and displayed on the second display screen, so that the second user can read the translated first target language information from the second display screen, which improves the efficiency of the translation method.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a first flowchart of a translation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an application scenario of a translation method according to an embodiment of the present invention;
FIG. 3 is a second flowchart of a translation method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a translation method according to an embodiment of the present invention when translating voice information;
FIG. 5 is a flowchart of a translation method according to an embodiment of the present invention when translating sign language information;
FIG. 6 is a first structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 7 is a second structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 8 is a third structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 9 is a fourth structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In practical applications, the mobile terminal may be any mobile terminal capable of performing translation, such as a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), or a wearable device, where both the front and the back of the mobile terminal are provided with display screens.
Referring to FIG. 1, FIG. 1 is a flowchart of a translation method according to an embodiment of the present invention. The translation method is applied to a mobile terminal that includes two display screens respectively disposed on the front side and the back side of the mobile terminal. As shown in FIG. 1, the translation method of the embodiment of the present invention may include the following steps:
Step 101, acquiring first to-be-translated language information input by a first user facing a first display screen, where the type of the first to-be-translated language information includes sign language information, voice information, or text information.
In practical applications, the first user and the second user may use different languages. For example, the first user may be a hearing- and speech-impaired person who can communicate only in sign language, while the second user can speak and read text normally; alternatively, the first user may use Chinese and the second user English.
As shown in FIG. 2, during communication, user A is located on the side of the mobile terminal facing screen A, and user B is located on the side facing screen B, so that user A and user B can read the translated language information from screen A and screen B, respectively.
In this step, the sign language, voice, or text information input by the first user is acquired so that it can be translated into target language information recognizable by the second user, which facilitates communication between sign language users and hearing users, and between speakers of different languages.
Step 102, translating the first to-be-translated language information into first target language information, wherein the category of the first target language information includes voice information or character information when the category of the first to-be-translated language information includes sign language information, and the category of the first target language information includes sign language information when the category of the first to-be-translated language information includes voice information or character information.
In the actual translation process, the language information to be translated can be translated into language information of a preset type and/or a preset language according to a preset translation rule, for example: sign language information is translated into text information, sign language information into voice information, voice information into sign language information, or text information into sign language information; the present invention is not limited thereto.
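The example rules above can be viewed as a small lookup table of permitted category pairs; this sketch and its string keys are illustrative, not the patent's data model:

```python
# Allowed (source, target) category pairs taken from the examples above.
PRESET_RULES = {
    ("sign", "text"),
    ("sign", "voice"),
    ("voice", "sign"),
    ("text", "sign"),
}

def is_allowed(source: str, target: str) -> bool:
    """Check whether a preset rule permits translating `source` into `target`."""
    return (source, target) in PRESET_RULES
```

A real implementation could extend this set per user configuration, since the patent notes the rules are not limited to these examples.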
Step 103, displaying the first target language information on a second display screen.
The second display screen faces the side where the second user is located, so that the second user can recognize, from the second display screen, the semantics of the first to-be-translated language information input by the first user. In practice, the mobile terminal may be held in the hand or mounted on a stand between the first user and the second user, so that the first display screen faces the first user and the second display screen faces the second user; this is not specifically limited here.
In addition, the displaying of the first target language information may refer to displaying of translated sign language information or text information on the second display screen.
In the embodiment of the present invention, first to-be-translated language information input by a first user facing a first display screen is acquired, where its type includes sign language information, voice information, or text information; the first to-be-translated language information is translated into first target language information, where the type of the first target language information includes voice information or text information when the type of the first to-be-translated language information includes sign language information, and includes sign language information when the type of the first to-be-translated language information includes voice information or text information; and the first target language information is displayed on a second display screen. In this way, the second user can read the translated first target language information from the second display screen, which improves the efficiency of the translation method.
Please refer to FIG. 3, which is a second flowchart of a translation method according to an embodiment of the present invention. The main difference from the foregoing embodiment is that this embodiment adds a process of translating second to-be-translated language information input by a second user on the side where the second display screen is located. Specifically, the method further includes: acquiring second to-be-translated language information input by a second user facing the second display screen, where the type of the second to-be-translated language information includes sign language information, voice information, or text information; translating the second to-be-translated language information into second target language information, where the type of the second target language information includes voice information or text information when the type of the second to-be-translated language information includes sign language information, and includes sign language information when the type of the second to-be-translated language information includes voice information or text information; and displaying the second target language information on the first display screen.
As shown in FIG. 3, the translation method according to the embodiment of the present invention may specifically include the following steps:
Step 301, acquiring first to-be-translated language information input by a first user facing a first display screen, where the type of the first to-be-translated language information includes sign language information, voice information, or text information.
Step 302, translating the first to-be-translated language information into first target language information, wherein the category of the first target language information includes voice information or text information when the category of the first to-be-translated language information includes sign language information, and the category of the first target language information includes sign language information when the category of the first to-be-translated language information includes voice information or text information.
Step 303, displaying the first target language information on a second display screen.
Step 304, acquiring second to-be-translated language information input by a second user facing the second display screen, where the type of the second to-be-translated language information includes sign language information, voice information, or text information.
Step 305, translating the second language information to be translated into second target language information, wherein the type of the second target language information includes voice information or text information when the type of the second language information to be translated includes sign language information, and the type of the second target language information includes sign language information when the type of the second language information to be translated includes voice information or text information.
Step 306, displaying the second target language information on the first display screen.
The first to-be-translated language information and the second to-be-translated language information may belong to different categories; for example, one may be sign language information and the other voice information. They may also belong to different languages; for example, one may be in Chinese and the other in English.
In addition, the language and/or category of the first target language information may be the same as that of the second to-be-translated language information, and the language and/or category of the second target language information may be the same as that of the first to-be-translated language information. For example, when the first user inputs sign language information and the second user inputs Chinese voice information, sign language image information (translated from the second user's voice) is displayed on the first display screen, and Chinese text information (translated from the first user's sign language) is displayed on the second display screen.
It should be noted that the sequence of steps 301 to 303 and steps 304 to 306 is not limited herein, that is, the first language information to be translated may be obtained first and the translated first target language information is displayed on the second display screen, or the second language information to be translated may be obtained first and the translated second target language information is displayed on the first display screen.
The embodiment can be applied to the mutual communication between two users, so that the two users can respectively obtain the semantics of the other side from the two display screens, and the efficiency and the application range of the translation method are improved.
As an optional implementation, before acquiring the first to-be-translated language information input by the first user facing the first display screen, the method further includes:
acquiring a preset translation rule, and determining, according to the preset translation rule, the types of the first to-be-translated language information and the first target language information respectively;
and the step of acquiring the first to-be-translated language information input by the first user facing the first display screen includes:
turning on a camera of the mobile terminal located on the side of the first display screen to collect the sign language information, when the type of the first to-be-translated language information is sign language information; or,
turning on a microphone of the mobile terminal to collect the voice information, when the type of the first to-be-translated language information is voice information.
In a specific implementation, the preset translation rule may be determined according to user input. For example, when the first display screen and/or the second display screen are touch screens, a setting interface is displayed on one or both screens when the user opens the translation application of the mobile terminal, and the user selects, by touch operations on the setting interface, the type and/or language of the first and second to-be-translated language information.
In addition, the type and/or language of the first target language information and the second target language information may also be set. Alternatively, the type and/or language of the first target language information may default to that of the second to-be-translated language information, and the type and/or language of the second target language information may default to that of the first to-be-translated language information; this is not specifically limited here.
Of course, besides the translation rule set by the user, the preset translation rule may also be a default translation rule of the mobile terminal, and the user may also modify the default translation rule.
In this embodiment, only the camera on the side of the first display screen, or only the microphone, is turned on according to the preset translation rule. This avoids the high power consumption that would result from turning on the cameras on both sides of the mobile terminal, or from leaving a camera on during voice translation, thereby reducing the power consumption of the translation method.
In a specific implementation, when the first user inputs text information and the second user inputs sign language information, the first user types the text on the touch screen on one side of the mobile terminal and reads, on that screen, the voice or text information translated from the sign language input by the second user; the second user inputs sign language information through the camera on the other side of the mobile terminal and sees, on that screen, the sign language image information translated from the text information input by the first user.
It should be noted that, in this embodiment, the manner of turning on the camera of the mobile terminal located on the side where the first display screen is located or turning on the microphone of the mobile terminal according to the preset translation rule may also be applied to the process of acquiring the second to-be-translated language information input by the second user facing the second display screen, and no specific details are repeated here to avoid repetition.
Alternatively, the microphone and the cameras on both sides of the mobile terminal may all be turned on when the user opens the translation application, and then: when the camera on one side collects sign language input, the camera on the other side is turned off; when voice input with two different timbres is detected, the cameras on both sides are turned off; or when text input is received on the touch screen on one side, the microphones are turned off and the camera on the opposite side is turned on.
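A minimal sketch of these shut-off heuristics, assuming a simple boolean state dictionary and event names invented here for illustration:

```python
def adjust_sensors(state: dict, event: str) -> dict:
    """Start from everything on, then disable redundant sensors:
    - sign input seen by camera A   -> close camera B
    - two distinct voice timbres    -> close both cameras
    - text typed on touch screen A  -> close microphone, open camera B
    Returns a new state dict; the input state is not mutated."""
    s = dict(state)
    if event == "sign_on_camera_a":
        s["camera_b"] = False
    elif event == "two_voice_timbres":
        s["camera_a"] = False
        s["camera_b"] = False
    elif event == "text_on_screen_a":
        s["microphone"] = False
        s["camera_b"] = True
    return s
```

Each branch corresponds to one of the detection cases above, so the terminal keeps only the sensors the current conversation actually needs.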
This embodiment eliminates the need to set the translation rules manually, which improves the degree of automation and the efficiency of the translation method.
As an optional implementation, when the categories of the first to-be-translated language information and the second to-be-translated language information are both voice information, the step of acquiring the first to-be-translated language information input by the first user facing the first display screen includes:
acquiring voice information to be translated, and determining it as the first to-be-translated language information when its language is the same as that of the first to-be-translated language information;
and the step of acquiring the second to-be-translated language information input by the second user facing the second display screen includes:
acquiring the voice information to be translated, and determining it as the second to-be-translated language information when its language is the same as that of the second to-be-translated language information.
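This attribution step, deciding who spoke from the language of the captured voice, can be sketched as follows; `detected_lang` would come from a language-identification stage that the patent does not specify, and the return values are labels invented here:

```python
def attribute_and_route(detected_lang: str, first_lang: str, second_lang: str):
    """Return (speaker, screen_for_translation), or (None, None) when the
    detected language matches neither configured user, so the translated
    output always goes to the *other* user's display screen."""
    if detected_lang == first_lang:
        return ("first_user", "second_screen")
    if detected_lang == second_lang:
        return ("second_user", "first_screen")
    return (None, None)
```

With `first_lang="zh"` and `second_lang="en"`, Chinese speech is attributed to the first user and its translation routed to the second screen, which is the behavior of steps 402 and 403 in the example below.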
In a specific implementation process, the language information of the first to-be-translated language information and the language information of the second to-be-translated language information may be determined according to a preset translation rule.
For example, as shown in FIG. 2, user A is located on the side of the mobile terminal facing screen A and user B on the side facing screen B; user A speaks Chinese to the mobile terminal and user B speaks English. As shown in FIG. 4, the process of performing speech translation between user A and user B using the mobile terminal may include the following steps:
step 401, two persons face to face, lift the mobile terminal across the middle, and respectively look at the respective facing display screens.
Step 402, user A speaks Chinese to user B.
And step 403, under the condition that the Chinese voice information is acquired, the mobile terminal translates the Chinese voice information into English and displays the English on the screen B.
Step 404, user B sees the translated English displayed on screen B and answers user A in English.
Step 405, in a case that English voice information is acquired, the mobile terminal translates the English voice information into Chinese and displays the Chinese on screen A.
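The routing in steps 402 to 405 can be summarized as a small dispatch rule: the language detected in an utterance determines both the target language and the screen on which the translation is displayed. The following Python sketch is purely illustrative; the function name and the `rule` mapping are hypothetical stand-ins for the terminal's internal preset translation rule, not part of the disclosed implementation.

```python
def route_speech(detected_language, rule):
    """Given the language detected in an utterance, return the target
    language to translate into and the screen to display the result on.

    `rule` encodes the preset translation rule: each user's input
    language maps to (target language, screen facing the other user).
    """
    if detected_language not in rule:
        raise ValueError("utterance matches neither user's preset language")
    return rule[detected_language]


# Preset rule for the example of fig. 4:
# user A speaks Chinese (shown to B in English), user B speaks English.
rule = {"zh": ("en", "screen_B"), "en": ("zh", "screen_A")}
```

With this rule, a Chinese utterance is always rendered on screen B and an English utterance on screen A, which is exactly the behavior that prevents the mixed-up display described below.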
It should be noted that, in addition to the above-mentioned Chinese and English, the voice information and the text information in this embodiment may be language information of other languages, which is not specifically limited herein.
In this embodiment, when the first user and the second user both input voice information, the terminal can determine which user produced a given utterance according to the language to which that utterance belongs, so that the text information obtained by translating the utterance is displayed on the display screen facing the other user. This avoids the situation in which the voice information input by the two users is mixed up and the translated text information is displayed on the display screen facing the user who spoke, rather than on the display screen facing the user who is listening, thereby ensuring the reliability of the translation method.
Of course, the first target language information and the second target language information may be displayed on both the first display screen and the second display screen under the condition that the types of the first language information to be translated and the second language information to be translated are both voice information.
In this embodiment, the first user and the second user may each obtain the first target language information and the second target language information from the first display screen and the second display screen, respectively, so as to learn the language information input by the other user and to check whether the language information each user expressed was correctly input into the mobile terminal.
As an optional implementation manner, in a case that the category of the first to-be-translated language information is sign language information, the step of acquiring the first to-be-translated language information input by the first user facing the first display screen includes:
acquiring sign language image information input by a first user and oriented to a first display screen, and acquiring a moving track of a target anchor point from the sign language image information, wherein the target anchor point comprises a finger, a wrist and a palm of the user;
the step of translating the first to-be-translated language information into first target language information includes:
determining, from a pre-stored sign language information base, a sign language meaning that matches the movement track, and determining first target language information corresponding to that meaning.
In practical applications, the target anchor point may further include other anchor points on the user's body, and is not particularly limited herein.
In a specific implementation process, video information input by the first user may be captured by a camera facing the first user's side. The movement track of the target anchor point through three-dimensional space is identified from the video information and converted into a digital vector, and the digital vector is then matched against target digital vectors pre-stored in the sign language information base. In a case that the matching succeeds, the semantics of the sign language image information can be determined to be the semantics corresponding to the matched target digital vector, and those semantics are output as the text information corresponding to the first target language information, thereby achieving the effect of translating the sign language information.
The successful matching may mean that a distance between the digital vector and the target digital vector is smaller than a preset value.
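A minimal sketch of this vector-matching step, assuming the movement track has already been encoded as a fixed-length numeric vector; the library contents, the threshold value, and the function name are hypothetical illustrations rather than the disclosed implementation.

```python
import math


def match_sign(track_vector, sign_library, threshold=0.5):
    """Return the sign meaning whose stored target vector lies closest to
    track_vector, provided the Euclidean distance is below threshold;
    return None when no stored vector matches closely enough.

    sign_library maps each meaning to its pre-stored target digital vector.
    """
    best_meaning, best_dist = None, float("inf")
    for meaning, target_vector in sign_library.items():
        dist = math.dist(track_vector, target_vector)  # Euclidean distance
        if dist < best_dist:
            best_meaning, best_dist = meaning, dist
    return best_meaning if best_dist < threshold else None
```

The threshold corresponds to the "preset value" mentioned above: a nearest vector that is still too far away yields no match rather than a wrong translation.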
In this embodiment, the semantics of the sign language information are obtained by identifying the movement track of the target anchor point, which can improve the accuracy and continuity of sign language translation.
For example: as shown in fig. 2, during the communication, user A is located on the side of the mobile terminal facing screen A, user B is located on the side of the mobile terminal facing screen B, user A inputs sign language to the mobile terminal, and user B speaks Chinese to the mobile terminal. As shown in fig. 5, the process of language translation between user A and user B by using the mobile terminal may include the following steps:
Step 501, the two users face each other, hold the mobile terminal horizontally between them, and each look at the display screen facing them.
Step 502, user A expresses language information to user B in sign language.
Step 503, the mobile terminal obtains sign language image information through the camera, translates the semantics read from the sign language image information into Chinese text, and displays the text on screen B.
Step 504, user B sees the Chinese text on screen B and answers user A in Chinese speech.
Step 505, in a case that the Chinese voice information is acquired, the mobile terminal translates the Chinese voice information into a sign language animation and displays the animation on screen A.
Of course, while the user inputs the sign language, a sign language picture may also be captured and matched against target pictures stored in the database; in a case that the matching succeeds, the semantics of the sign language picture can be determined to be the semantics corresponding to the matched target picture.
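The picture-based alternative can be sketched the same way, here with a naive mean absolute pixel difference standing in for whatever image-matching criterion the database actually uses; the pixel data, threshold, and function name are purely illustrative assumptions.

```python
def match_sign_picture(picture, target_pictures, threshold=10.0):
    """picture and each stored target are flattened grayscale pixel lists
    of equal length; return the meaning of the closest target under the
    mean absolute pixel difference, or None if no target is close enough."""
    best_meaning, best_diff = None, float("inf")
    for meaning, target in target_pictures.items():
        diff = sum(abs(p - t) for p, t in zip(picture, target)) / len(picture)
        if diff < best_diff:
            best_meaning, best_diff = meaning, diff
    return best_meaning if best_diff < threshold else None
```

As with the trajectory vectors, a threshold keeps a poor nearest match from being misread as a valid sign.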
Compared with the previous embodiment, this embodiment of the present invention also translates the second to-be-translated language information input by the second user facing the second display screen into the second target language information, thereby realizing two-way communication between the first user and the second user and improving the performance of the translation method.
Referring to fig. 6, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes two display screens respectively disposed on a front side and a back side of the mobile terminal. As shown in fig. 6, the mobile terminal 600 further includes:
the first obtaining module 601 is configured to obtain first to-be-translated language information input by a first user facing a first display screen, where a category of the first to-be-translated language information includes sign language information, voice information, or text information;
a first translation module 602, configured to translate the first to-be-translated language information into first target language information, where, in a case that the category of the first to-be-translated language information includes sign language information, the category of the first target language information includes voice information or text information, and in a case that the category of the first to-be-translated language information includes voice information or text information, the category of the first target language information includes sign language information;
a first display module 603, configured to display the first target language information on a second display screen.
Optionally, as shown in fig. 7, the mobile terminal 600 further includes:
a second obtaining module 604, configured to obtain a preset translation rule, and determine types of the first to-be-translated language information and the first target language information according to the preset translation rule, respectively;
the first obtaining module 601 is specifically configured to:
under the condition that the type of the first to-be-translated language information is sign language information, starting a camera of the mobile terminal, which is positioned on the side where the first display screen is positioned, to acquire the sign language information; or,
in a case that the type of the first to-be-translated language information is voice information, turning on a microphone of the mobile terminal to acquire the voice information.
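The device-selection logic of the first obtaining module 601 amounts to a mapping from the category determined by the preset translation rule to the capture device the terminal enables. The following sketch is illustrative only; the category strings and device names are invented for the example and do not correspond to any actual terminal API.

```python
def select_capture_device(category):
    """Map the category of the first to-be-translated language information
    to the input device the mobile terminal should enable."""
    devices = {
        "sign_language": "camera_on_first_screen_side",  # camera on the first display screen's side
        "voice": "microphone",
        "text": "touch_keyboard",
    }
    if category not in devices:
        raise ValueError(f"unknown category: {category}")
    return devices[category]
```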
Optionally, as shown in fig. 8, the mobile terminal 600 further includes:
a third obtaining module 605, configured to obtain second to-be-translated language information input by a second user facing a second display screen, where a type of the second to-be-translated language information includes sign language information, voice information, or text information;
a second translation module 606, configured to translate the second to-be-translated language information into second target language information, where the type of the second target language information includes voice information or text information in a case that the type of the second to-be-translated language information includes sign language information, and the type of the second target language information includes sign language information in a case that the type of the second to-be-translated language information includes voice information or text information;
a second display module 607, configured to display the second target language information on the first display screen.
Optionally, under the condition that the types of the first to-be-translated language information and the second to-be-translated language information are both voice information, the first obtaining module 601 is specifically configured to:
acquiring to-be-translated voice information, and determining the to-be-translated voice information as the first to-be-translated language information under the condition that the language information of the to-be-translated voice information is the same as the language information of the first to-be-translated language information;
the third obtaining module 605 is specifically configured to:
acquiring the voice information to be translated, and determining the voice information to be translated as the second language information to be translated under the condition that the language information of the voice information to be translated is the same as the language information of the second language information to be translated.
Optionally, when the type of the first language information to be translated is sign language information, the first obtaining module 601 is specifically configured to:
acquiring sign language image information input by a first user and oriented to a first display screen, and acquiring a moving track of a target anchor point from the sign language image information, wherein the target anchor point comprises a finger, a wrist and a palm of the user;
the first translation module 602 is specifically configured to:
and determining the sign language meaning matched with the movement track from a pre-stored sign language information base, and determining first target language information corresponding to the meaning.
The mobile terminal provided by the embodiment of the present invention can implement each process implemented by the mobile terminal in the above method embodiments, and is not described herein again to avoid repetition.
Referring to fig. 9, an embodiment of the present invention provides a mobile terminal 900 capable of implementing the foregoing method embodiments. The mobile terminal 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, and a power supply 911. The mobile terminal 900 further includes two display screens respectively disposed on the front and back of the mobile terminal. Those skilled in the art will appreciate that the mobile terminal structure shown in fig. 9 does not limit the mobile terminal, and a mobile terminal may include more or fewer components than shown, or combine some components, or arrange the components differently. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted mobile terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 910 is configured to:
acquiring first to-be-translated language information input by a first user and oriented to a first display screen, wherein the type of the first to-be-translated language information comprises sign language information, voice information or character information;
translating the first to-be-translated language information into first target language information, wherein the category of the first target language information comprises voice information or character information under the condition that the category of the first to-be-translated language information comprises sign language information, and the category of the first target language information comprises sign language information under the condition that the category of the first to-be-translated language information comprises voice information or character information;
and displaying the first target language information on a second display screen.
Optionally, the processor 910 is further configured to:
acquiring a preset translation rule, and respectively determining the types of the first language information to be translated and the first target language information according to the preset translation rule;
the processor 910 performs the steps of obtaining a first to-be-translated language information input by a first user with a first display screen oriented, including:
under the condition that the type of the first to-be-translated language information is sign language information, starting a camera of the mobile terminal, which is positioned on the side where the first display screen is positioned, to acquire the sign language information; or,
in a case that the type of the first to-be-translated language information is voice information, turning on a microphone of the mobile terminal to acquire the voice information.
Optionally, the processor 910 is further configured to:
acquiring second language information to be translated input by a second user and oriented to a second display screen, wherein the type of the second language information to be translated comprises sign language information, voice information or character information;
translating the second language information to be translated into second target language information, wherein the type of the second target language information comprises voice information or character information under the condition that the type of the second language information to be translated comprises sign language information, and the type of the second target language information comprises sign language information under the condition that the type of the second language information to be translated comprises voice information or character information;
and displaying the second target language information on the first display screen.
Optionally, when the categories of the first language information to be translated and the second language information to be translated are both voice information, the step of obtaining the first language information to be translated input by the first user facing the first display screen, executed by the processor 910, includes:
acquiring to-be-translated voice information, and determining the to-be-translated voice information as the first to-be-translated language information under the condition that the language information of the to-be-translated voice information is the same as the language information of the first to-be-translated language information;
the processor 910 performs the step of obtaining second language information to be translated input by a second user with the second display oriented, including:
acquiring the voice information to be translated, and determining the voice information to be translated as the second language information to be translated under the condition that the language information of the voice information to be translated is the same as the language information of the second language information to be translated.
Optionally, in a case that the category of the first to-be-translated language information is sign language information, the step performed by the processor 910 of acquiring the first to-be-translated language information input by the first user facing the first display screen includes:
acquiring sign language image information input by a first user and oriented to a first display screen, and acquiring a moving track of a target anchor point from the sign language image information, wherein the target anchor point comprises a finger, a wrist and a palm of the user;
the step of translating the first to-be-translated language information into first target language information includes:
and determining the sign language meaning matched with the movement track from a pre-stored sign language information base, and determining first target language information corresponding to the meaning.
The mobile terminal 900 has two display screens, so that the translation of voice or sign language in the face-to-face communication process between two users can be realized, the two users are prevented from using the mobile terminal in turn, and the efficiency of the communication translation process is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 may be used for receiving and sending signals during a message transmission and reception process or a call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 910; in addition, the uplink data is transmitted to the base station. Generally, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access via the network module 902, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902 or stored in the memory 909 into an audio signal and output as sound. Also, the audio output unit 903 may also provide audio output related to a specific function performed by the mobile terminal 900 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive audio or video signals. The input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042; the graphics processor 9041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 906. The image frames processed by the graphics processor 9041 may be stored in the memory 909 (or other storage medium) or transmitted via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 901 and then output.
The mobile terminal 900 also includes at least one sensor 905, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 9061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 9061 and/or backlight when the mobile terminal 900 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 905 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 906 is used to display information input by the user or information provided to the user. The Display unit 906 may include a Display panel 9061, and the Display panel 9061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 907 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may collect touch operations performed by a user on or near it (e.g., operations by a user on or near the touch panel 9071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 9071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 910, and receives and executes commands sent by the processor 910. In addition, the touch panel 9071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 9071, the user input unit 907 may include other input devices 9072. Specifically, the other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 9071 may be overlaid on the display panel 9061, and when the touch panel 9071 detects a touch operation on or near the touch panel 9071, the touch panel is transmitted to the processor 910 to determine the type of the touch event, and then the processor 910 provides a corresponding visual output on the display panel 9061 according to the type of the touch event. Although in fig. 9, the touch panel 9071 and the display panel 9061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 908 is an interface through which an external device is connected to the mobile terminal 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within the mobile terminal 900 or may be used to transmit data between the mobile terminal 900 and external devices.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 909 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 910 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, thereby performing overall monitoring of the mobile terminal. Processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 910.
The mobile terminal 900 may also include a power supply 911 (e.g., a battery) for powering the various components. Preferably, the power supply 911 may be logically coupled to the processor 910 via a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
In addition, the mobile terminal 900 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 910, a memory 909, and a computer program stored in the memory 909 and capable of running on the processor 910, where the computer program is executed by the processor 910 to implement each process of the foregoing translation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a mobile terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A translation method is applied to a mobile terminal, and is characterized in that the mobile terminal comprises two display screens which are respectively arranged on the front side and the back side of the mobile terminal, and the translation method comprises the following steps:
acquiring first to-be-translated language information input by a first user and oriented to a first display screen, wherein the type of the first to-be-translated language information comprises sign language information, voice information or character information;
translating the first to-be-translated language information into first target language information, wherein the category of the first target language information comprises voice information or character information under the condition that the category of the first to-be-translated language information comprises sign language information, and the category of the first target language information comprises sign language information under the condition that the category of the first to-be-translated language information comprises voice information or character information;
and displaying the first target language information on a second display screen.
2. The translation method according to claim 1, wherein, before the acquiring of the first to-be-translated language information input by the first user facing the first display screen, the method further comprises:
acquiring a preset translation rule, and respectively determining the types of the first language information to be translated and the first target language information according to the preset translation rule;
the method for acquiring the first to-be-translated language information input by the first user facing the first display screen comprises the following steps of:
under the condition that the type of the first to-be-translated language information is sign language information, starting a camera of the mobile terminal, which is positioned on the side where the first display screen is positioned, to acquire the sign language information; or,
and under the condition that the type of the first language information to be translated is voice information, starting a microphone of the mobile terminal to acquire the voice information.
3. The translation method of claim 1, wherein the method further comprises:
acquiring second language information to be translated input by a second user and oriented to a second display screen, wherein the type of the second language information to be translated comprises sign language information, voice information or character information;
translating the second language information to be translated into second target language information, wherein the type of the second target language information comprises voice information or character information under the condition that the type of the second language information to be translated comprises sign language information, and the type of the second target language information comprises sign language information under the condition that the type of the second language information to be translated comprises voice information or character information;
and displaying the second target language information on the first display screen.
4. The translation method according to claim 3, wherein, under the condition that the types of the first to-be-translated language information and the second to-be-translated language information are both voice information, the step of acquiring the first to-be-translated language information input by the first user facing the first display screen comprises:
acquiring to-be-translated voice information, and determining the to-be-translated voice information as the first to-be-translated language information under the condition that the language of the to-be-translated voice information is the same as the language of the first to-be-translated language information;
and the step of acquiring the second to-be-translated language information input by the second user facing the second display screen comprises:
acquiring the to-be-translated voice information, and determining the to-be-translated voice information as the second to-be-translated language information under the condition that the language of the to-be-translated voice information is the same as the language of the second to-be-translated language information.
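Claim 4 disambiguates two speakers by language: a captured speech segment is attributed to whichever user's configured language it matches. A hedged sketch with hypothetical names; a real system would use a language-identification model to produce `detected_language`:

```python
def route_speech(detected_language: str, first_lang: str, second_lang: str):
    """Attribute a captured speech segment to a user by its language.

    The segment whose language matches the first user's configured
    language becomes the first to-be-translated information, and
    likewise for the second user.
    """
    if detected_language == first_lang:
        return "first"
    if detected_language == second_lang:
        return "second"
    return None  # unmatched language: ignore or prompt the user
```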
5. The translation method according to claim 1, wherein, under the condition that the type of the first to-be-translated language information is sign language information, the step of acquiring the first to-be-translated language information input by the first user facing the first display screen comprises:
acquiring sign language image information input by the first user facing the first display screen, and acquiring a movement track of target anchor points from the sign language image information, wherein the target anchor points comprise the user's fingers, wrist and palm;
and the step of translating the first to-be-translated language information into the first target language information comprises:
determining, from a pre-stored sign language information base, the sign language meaning matched with the movement track, and determining the first target language information corresponding to the sign language meaning.
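The matching step of claim 5 (find the sign meaning whose stored trajectory best matches the observed movement track of the finger/wrist/palm anchor points) can be illustrated with a nearest-neighbour lookup. This is a toy sketch under strong assumptions: trajectories are reduced to equal-length lists of 2-D points, and the database shape is hypothetical; a production system would use a gesture-recognition model.

```python
def trajectory_distance(a, b):
    """Mean point-wise Euclidean distance between equal-length tracks."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)


def match_sign(track, sign_db):
    """Return the meaning whose stored trajectory is closest to `track`.

    `sign_db` maps a meaning (str) to its reference trajectory.
    """
    return min(sign_db, key=lambda m: trajectory_distance(track, sign_db[m]))
```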
6. A mobile terminal, wherein the mobile terminal comprises two display screens respectively disposed on the front and the back of the mobile terminal, and the mobile terminal further comprises:
the first obtaining module is used for acquiring first to-be-translated language information input by a first user facing the first display screen, wherein the type of the first to-be-translated language information comprises sign language information, voice information or character information;
the first translation module is used for translating the first to-be-translated language information into first target language information, wherein the type of the first target language information comprises voice information or character information under the condition that the type of the first to-be-translated language information comprises sign language information, and the type of the first target language information comprises sign language information under the condition that the type of the first to-be-translated language information comprises voice information or character information;
and the first display module is used for displaying the first target language information on a second display screen.
7. The mobile terminal of claim 6, wherein the mobile terminal further comprises:
the second obtaining module is used for obtaining a preset translation rule and respectively determining the types of the first to-be-translated language information and the first target language information according to the preset translation rule;
the first obtaining module is specifically configured to:
under the condition that the type of the first to-be-translated language information is sign language information, starting a camera of the mobile terminal, which is positioned on the side where the first display screen is positioned, to acquire the sign language information; or,
and under the condition that the type of the first language information to be translated is voice information, starting a microphone of the mobile terminal to acquire the voice information.
8. The mobile terminal of claim 6, wherein the mobile terminal further comprises:
the third acquisition module is used for acquiring second to-be-translated language information input by a second user facing a second display screen, wherein the type of the second to-be-translated language information comprises sign language information, voice information or character information;
the second translation module is used for translating the second language information to be translated into second target language information, wherein the type of the second target language information comprises voice information or character information under the condition that the type of the second language information to be translated comprises sign language information, and the type of the second target language information comprises sign language information under the condition that the type of the second language information to be translated comprises voice information or character information;
and the second display module is used for displaying the second target language information on the first display screen.
9. The mobile terminal according to claim 8, wherein, in a case that the types of the first language information to be translated and the second language information to be translated are both voice information, the first obtaining module is specifically configured to:
acquiring to-be-translated voice information, and determining the to-be-translated voice information as the first to-be-translated language information under the condition that the language of the to-be-translated voice information is the same as the language of the first to-be-translated language information;
the third obtaining module is specifically configured to:
acquiring the to-be-translated voice information, and determining the to-be-translated voice information as the second to-be-translated language information under the condition that the language of the to-be-translated voice information is the same as the language of the second to-be-translated language information.
10. The mobile terminal of claim 6, wherein, when the type of the first to-be-translated language information is sign language information, the first obtaining module is specifically configured to:
acquiring sign language image information input by the first user facing the first display screen, and acquiring a movement track of target anchor points from the sign language image information, wherein the target anchor points comprise the user's fingers, wrist and palm;
the first translation module is specifically configured to:
determining, from a pre-stored sign language information base, the sign language meaning matched with the movement track, and determining the first target language information corresponding to the sign language meaning.
11. A mobile terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the translation method according to any one of claims 1 to 5.
12. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the translation method according to any one of claims 1 to 5.
CN201910203324.8A 2019-03-18 2019-03-18 A translation method, mobile terminal and computer-readable storage medium Pending CN109960813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910203324.8A CN109960813A (en) 2019-03-18 2019-03-18 A translation method, mobile terminal and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN109960813A true CN109960813A (en) 2019-07-02

Family

ID=67024502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910203324.8A Pending CN109960813A (en) 2019-03-18 2019-03-18 A translation method, mobile terminal and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN109960813A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090012788A1 (en) * 2007-07-03 2009-01-08 Jason Andre Gilbert Sign language translation system
CN103116576A (en) * 2013-01-29 2013-05-22 安徽安泰新型包装材料有限公司 Voice and gesture interactive translation device and control method thereof
CN106648388A (en) * 2016-11-18 2017-05-10 宇龙计算机通信科技(深圳)有限公司 Communication method and device based on terminal, and terminal
CN106776585A (en) * 2016-11-29 2017-05-31 维沃移动通信有限公司 Instant translation method and mobile terminal
CN107766340A (en) * 2017-10-24 2018-03-06 广东欧珀移动通信有限公司 Method, device and terminal for displaying text
CN108268835A (en) * 2017-12-28 2018-07-10 努比亚技术有限公司 Sign language interpretation method, mobile terminal and computer readable storage medium
CN108664475A (en) * 2018-03-28 2018-10-16 广东欧珀移动通信有限公司 Translation display method, device, mobile terminal and storage medium
CN108920224A (en) * 2018-04-11 2018-11-30 Oppo广东移动通信有限公司 Dialogue information processing method, device, mobile terminal and storage medium

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457716A (en) * 2019-07-22 2019-11-15 维沃移动通信有限公司 A voice output method and mobile terminal
CN110866410A (en) * 2019-11-15 2020-03-06 深圳市赛为智能股份有限公司 Multi-language conversion method, device, computer equipment and storage medium
CN110866410B (en) * 2019-11-15 2023-07-25 深圳市赛为智能股份有限公司 Multilingual conversion method, multilingual conversion device, computer device, and storage medium
CN115066908A (en) * 2019-12-09 2022-09-16 金京喆 User terminal and control method thereof
CN115066907A (en) * 2019-12-09 2022-09-16 金京喆 User terminal, broadcasting apparatus, broadcasting system including the same, and control method thereof
CN111368560A (en) * 2020-02-28 2020-07-03 北京字节跳动网络技术有限公司 Text translation method and device, electronic equipment and storage medium
CN111507115B (en) * 2020-04-12 2021-07-27 北京花兰德科技咨询服务有限公司 Multi-modal language information artificial intelligence translation method, system and equipment
CN111507115A (en) * 2020-04-12 2020-08-07 北京花兰德科技咨询服务有限公司 Multi-modal language information artificial intelligence translation method, system and equipment
CN111797215A (en) * 2020-06-24 2020-10-20 北京小米松果电子有限公司 Dialogue method, dialogue device and storage medium
CN112309370A (en) * 2020-11-02 2021-02-02 北京分音塔科技有限公司 Voice translation method, device and equipment and translation machine
CN112614482A (en) * 2020-12-16 2021-04-06 平安国际智慧城市科技股份有限公司 Mobile terminal foreign language translation method, system and storage medium
CN113011200A (en) * 2021-03-01 2021-06-22 中国工商银行股份有限公司 Multi-language information display method and device, electronic equipment and storage medium
CN113851029A (en) * 2021-07-30 2021-12-28 阿里巴巴达摩院(杭州)科技有限公司 Barrier-free communication method and device
CN113851029B (en) * 2021-07-30 2023-09-05 阿里巴巴达摩院(杭州)科技有限公司 Barrier-free communication method and device
CN113835522A (en) * 2021-09-10 2021-12-24 阿里巴巴达摩院(杭州)科技有限公司 Sign language video generation, translation, customer service method, apparatus and readable medium
WO2024053967A1 (en) * 2022-09-05 2024-03-14 주식회사 바토너스 Display-based communication system

Similar Documents

Publication Publication Date Title
CN109960813A (en) A translation method, mobile terminal and computer-readable storage medium
CN109151180B (en) Object recognition method and mobile terminal
US11429248B2 (en) Unread message prompt method and mobile terminal
CN108459797B (en) Control method of folding screen and mobile terminal
CN109523253B (en) A payment method and device
WO2021057267A1 (en) Image processing method and terminal device
CN109240577B (en) Screen capturing method and terminal
JP7221305B2 (en) Object recognition method and mobile terminal
US20200257433A1 (en) Display method and mobile terminal
CN108376096B (en) A message display method and mobile terminal
CN107734170B (en) Notification message processing method, mobile terminal and wearable device
WO2021115172A1 (en) Display method and electronic device
CN107845057A A photographing preview method and mobile terminal
WO2019206077A1 (en) Video call processing method and mobile terminal
CN110012151B (en) Information display method and terminal device
CN108089801A An information display method and mobile terminal
WO2021082772A1 (en) Screenshot method and electronic device
WO2020024788A1 (en) Text input method and terminal
WO2020063107A1 (en) Screenshot method and terminal
CN107831891A A brightness adjustment method and mobile terminal
CN107783747B (en) Interface display processing method and mobile terminal
WO2019154360A1 (en) Interface switching method and mobile terminal
CN108305342A An attendance-checking method and mobile terminal
CN109544172B (en) A display method and terminal device
CN108833791B (en) A shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190702