CN115131649B - Content identification method and device and electronic equipment
- Publication number
- CN115131649B CN202210745044.1A CN202210745044A
- Authority
- CN
- China
- Prior art keywords
- content
- area
- screen
- input
- electronic device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/96—Management of image or video recognition tasks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/22—Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
Abstract
The present application discloses a content recognition method, device and electronic device, belonging to the field of communication technology. The method includes: receiving a first input from a user to an electronic device; in response to the first input, displaying first content in a first area of a first screen of the electronic device, where the first area is within the shooting range of a first camera of the electronic device; and when the first content is acquired through the first camera, recognizing the first content to obtain first information.
Description
Technical Field
The application belongs to the technical field of communication, and particularly relates to a content identification method, a content identification device and electronic equipment.
Background
Currently, with the popularity of the internet, electronic devices have gradually penetrated into the daily life and work of users, for example, users may recognize some interesting content using a rear camera of the electronic device.
Typically, when a user is watching a movie on an electronic device, if the user does not recognize a plant appearing in a certain frame of the movie, the user may first trigger the electronic device to capture and store a screenshot of that frame, and then trigger the electronic device to run a picture recognition application and upload the screenshot to the server of the picture recognition application for recognition. As such, the manner in which the electronic device recognizes content (e.g., the movie frame described above) is cumbersome.
Disclosure of Invention
The embodiments of the application aim to provide a content identification method, a content identification apparatus, and an electronic device, which can solve the problem that the manner in which an electronic device identifies content is cumbersome.
In a first aspect, an embodiment of the application provides a content identification method. The method includes: receiving a first input of a user to an electronic device; in response to the first input, displaying first content in a first area of a first screen of the electronic device, where the first area is within the shooting range of a first camera of the electronic device; and, when the first content is acquired through the first camera, identifying the first content to obtain first information.
In a second aspect, an embodiment of the present application provides a content identification apparatus, which includes a receiving module, a display module, and a processing module. The receiving module is configured to receive a first input of a user to an electronic device; the display module is configured to display, in response to the first input received by the receiving module, first content in a first area of a first screen of the electronic device, where the first area is within the shooting range of a first camera of the electronic device; and the processing module is configured to identify the first content to obtain first information when the first content is acquired through the first camera.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, a first input of a user to the electronic device is received; in response to the first input, first content is displayed in a first area of a first screen of the electronic device, where the first area is within the shooting range of a first camera of the electronic device; and, when the first content is acquired through the first camera, the first content is identified to obtain first information. With this scheme, after the user triggers, through an input, the display of content in the photographable area of the first screen of the electronic device, the content can be acquired through the camera of the electronic device and identified directly once acquired, yielding identification information corresponding to the content; the user does not need to trigger the electronic device to take a screenshot of the content, nor to upload the content to the server of an image recognition application for identification. In this way, the way in which the electronic device recognizes the content is simplified.
Drawings
Fig. 1 is a schematic diagram of a content identification method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an interface to which the content identification method according to an embodiment of the present application is applied;
Fig. 3 is a schematic structural diagram of a content identification apparatus according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a schematic hardware diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used may be interchanged, where appropriate, so that embodiments of the present application can be implemented in orders other than those illustrated or described herein. In addition, the objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
In the related art, when a user uses an electronic device to watch a movie, if the user does not know the plant appearing on a certain picture of the movie, the user can trigger the electronic device to capture a screen of the picture and store the picture, and then the user can trigger the electronic device to run a picture recognition application and upload the picture to a server of the picture recognition application for recognition. As such, the manner in which the electronic device recognizes the content (e.g., the screen of the movie described above) is cumbersome.
Further, in the above process, the electronic device needs to save the picture, which may cause some intermediate files to be generated and occupy the memory.
In view of the above problems, the embodiments of the present application provide a content identification method: after the user triggers, through an input, the display of certain content in a photographable area of a first screen of the electronic device, the content can be identified directly through a camera of the electronic device to obtain information about the content, without the user needing to trigger the electronic device to take a screenshot of the content or to upload the content to the server of an image recognition application for identification. In this way, the way in which the electronic device recognizes the content is simplified.
Further, since the screen capturing of the content does not need to be triggered, an intermediate file is not generated, and therefore the content stored in the local album of the electronic device is not affected.
The content recognition method, the device and the electronic equipment provided by the embodiment of the application are described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a content recognition method including the following S101 to S103.
S101, the content recognition device receives a first input of a user to the electronic device.
Alternatively, the first input may be a touch input, a voice input, or a gesture input of the user. For example, the touch input is a user folding input to a first screen of the electronic device. Of course, the first input may be other possible inputs, which are not limited by the embodiment of the present application.
S102, the content recognition device responds to the first input and displays first content in a first area of a first screen of the electronic device.
The first area is in a shooting range of a first camera of the electronic device.
Alternatively, the first screen of the electronic device may be a foldable screen.
Optionally, the first camera may be a front camera or a rear camera of the electronic device. Optionally, the electronic device is a folding screen device, including a first screen and a second screen, where the first camera may be a camera on the second screen, and after the electronic device is folded, the first area of the first screen may be located in a shooting range of the second screen camera.
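For illustration only, the following Kotlin sketch shows one way such a device might check that a candidate display area of the folded first screen falls within the region the second-screen camera can capture; the types and the calibrated region are hypothetical assumptions, not part of the patent or of any real device API.

```kotlin
// Hypothetical geometry check: does a candidate first area lie inside the
// portion of the folded first screen that the second-screen camera can photograph?
data class DisplayRect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(other: DisplayRect): Boolean =
        other.left >= left && other.top >= top &&
        other.right <= right && other.bottom <= bottom
}

// Assumed to be calibrated per device: the region the first camera sees once folded.
fun capturableRegionWhenFolded(): DisplayRect = DisplayRect(0, 1200, 1080, 2400)

fun chooseFirstArea(candidate: DisplayRect): DisplayRect? {
    val capturable = capturableRegionWhenFolded()
    // Keep the candidate only if the camera can actually photograph it.
    return if (capturable.contains(candidate)) candidate else null
}
```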
Alternatively, the first camera may be a rotatable camera.
Alternatively, the first content may be from content stored in the electronic device, or from screen content of a first screen of the electronic device.
By way of example, the first content may include any of a web page, an image, text, an icon, an identification code, and the like.
Alternatively, the number of the first contents may be one or more.
Further, in the case where the first content includes a plurality of contents, the plurality of contents may be the same type of content, or different types of content.
Illustratively, it is assumed that the first content includes two contents. One possible case is that both contents are images, and the other possible case is that one content is an image and the other content is text.
Optionally, in the case that the electronic device includes a second screen and the first screen includes a first screen area and a second screen area, before S101 the content identification method provided by the embodiment of the application may further include: the content identification device receives an input of a user to the second screen and, in response to the input, swaps the screen contents displayed in the first screen area and the second screen area. In this way, the display content of the first screen can be updated through an operation on the second screen.
Optionally, in the case that the first screen includes a first screen area and a second screen area, if the first camera is located in the first screen area, the first area is a region of the second screen area. Therefore, while the first area displays the first content, the regions of the second screen area other than the first area may be controlled to be in a screen-off state, or the display color of those regions may be updated to black.
S103, when the first content is acquired through the first camera, the content identification device identifies the first content to obtain first information.
Optionally, when the first input is a folding input and the electronic device detects that the duration of the first screen of the electronic device in the folded state reaches the preset duration, the identification mode of the first camera may be started, and in this mode, the content identification device may control the first camera to acquire the first content.
Alternatively, in the case where the first content is acquired by the first camera, the first content may be identified using a content identification technique. For example, the first content is identified by an AI identification algorithm, so as to obtain identification information corresponding to the first content.
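As a rough illustration of the S101 to S103 flow described above, the following Kotlin sketch strings together the fold input, the display of the first content in the camera's shooting range, and the capture-and-recognize step, including the preset folded-duration check; FrameSource, Recognizer, and the timing value are hypothetical placeholders, not the actual implementation.

```kotlin
// Minimal sketch of the S101–S103 flow. FrameSource and Recognizer are
// hypothetical placeholders, not real device or library APIs.
interface FrameSource { fun captureFrame(): ByteArray }          // the first camera
interface Recognizer { fun recognize(frame: ByteArray): String } // e.g. an AI model

class ContentRecognitionFlow(
    private val camera: FrameSource,
    private val recognizer: Recognizer,
    private val foldHoldMillis: Long = 1_000L // assumed "preset duration"
) {
    // S101/S102: on the first (fold) input, move the first content into the
    // first area of the first screen, i.e. into the camera's shooting range.
    fun onFoldInput(showInFirstArea: () -> Unit) = showInFirstArea()

    // S103: once the screen has stayed folded long enough, capture and recognize.
    fun onFoldedStateHeld(heldMillis: Long): String? {
        if (heldMillis < foldHoldMillis) return null       // recognition mode not started yet
        val frame = camera.captureFrame()                  // first content, via the first camera
        return recognizer.recognize(frame)                 // first information
    }
}
```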
Alternatively, in the case where the first content includes a plurality of contents, the plurality of contents may be sequentially identified in the order of arrangement of the plurality of contents, or the plurality of contents may be simultaneously identified. Specifically, the method can be determined according to actual conditions, and the embodiment of the application is not limited to the method.
The first information is determined according to the first content. For different types of first content, different first information is obtained after identification through the first camera.
For example: if the first content is an image, the first information may be the name of the image; if the first content is an English sentence, the first information may be the Chinese translation of the sentence; if the first content is an icon, the first information may be the name of the icon; and if the first content is an identification code, the first information may be the identity information corresponding to the code.
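A minimal sketch of how the type-dependent first information in the examples above could be dispatched is given below; the sealed types and the stubbed recognition and translation functions are assumptions made purely for illustration.

```kotlin
// Hypothetical dispatch of "first information" by content type, mirroring the
// examples above. Recognition calls are stubbed.
sealed class FirstContent {
    data class Image(val pixels: ByteArray) : FirstContent()
    data class EnglishSentence(val text: String) : FirstContent()
    data class Icon(val pixels: ByteArray) : FirstContent()
    data class IdCode(val pixels: ByteArray) : FirstContent()
}

fun firstInformation(content: FirstContent): String = when (content) {
    is FirstContent.Image -> nameOfImage(content.pixels)            // e.g. name of a garment
    is FirstContent.EnglishSentence -> translateToChinese(content.text)
    is FirstContent.Icon -> nameOfIcon(content.pixels)
    is FirstContent.IdCode -> identityFor(content.pixels)
}

// Stubs standing in for whatever recognition/translation backend is used.
fun nameOfImage(pixels: ByteArray) = "image name"
fun translateToChinese(text: String) = "translation of: $text"
fun nameOfIcon(pixels: ByteArray) = "icon name"
fun identityFor(pixels: ByteArray) = "identity info"
```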
Take as an example that the content recognition device is a folding-screen mobile phone and the first camera is a front camera. The user is browsing an image of Hanfu (traditional Chinese clothing) on the mobile phone. If the user does not recognize the garment in the image, the user may fold the screen (i.e., the first screen) of the mobile phone. After the mobile phone receives the user's folding input on the screen, it may, in response to the input, display the Hanfu image in the first area of the screen that is within the shooting range of the front camera, and the mobile phone may then recognize the Hanfu image through the front camera to obtain the name of the garment, "Tang Fu skirt" (i.e., the first information).
Optionally, in the case where the first content includes a plurality of contents, the plurality of contents may be integrated according to the types of the corresponding pieces of first information.
For example, after a plurality of texts are identified, they can be spliced into one piece for display. For instance, after a text is identified through the first camera and translated to obtain a translation result (i.e., the first information), the translation result can be displayed in correspondence with the original sentences, i.e., one line of Chinese followed by one line of English.
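The "one line of original text, one line of translation" display just described could be assembled as in the following sketch, where the translation callback stands in for whatever backend produces the first information.

```kotlin
// Sketch of the interleaved display: each recognized source line is followed
// by its translation. The translate function is a stub.
fun interleave(sourceLines: List<String>, translate: (String) -> String): String =
    sourceLines.joinToString(separator = "\n") { line ->
        line + "\n" + translate(line)   // original line followed by its translation
    }

fun main() {
    val result = interleave(listOf("Hello world.", "How are you?")) { "译文: $it" }
    println(result)
}
```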
Optionally, the first content includes M sub-contents, M is a positive integer, and after S101 and before S103, the content identifying method provided in the embodiment of the present application may further include receiving, by the content identifying device, a fourth input of a user to a target sub-content in the M sub-contents displayed in the second area. Accordingly, the above S103 can be specifically realized by the following S103A.
S103A, in response to the fourth input, the content recognition device recognizes the target sub-content to obtain the first information when the target sub-content is acquired through the first camera.
Optionally, the fourth input may be a touch input, a voice input, or a gesture input of the user to the target sub-content. For example, the touch input is a user click input of the target sub-content.
Alternatively, the number of the target sub-contents may be one or more.
Further, the first information includes M pieces of sub-information. When the target sub-content includes one sub-content, the first information includes one piece of sub-information; when the target sub-content includes a plurality of sub-contents, the first information includes a plurality of pieces of sub-information, each piece of sub-information corresponding to one of the plurality of sub-contents.
When the target sub-content is identified, the obtained first information is detailed information of the target sub-content.
In connection with the above example, the mobile phone displays the Hanfu image in the first area, and the Hanfu image includes a button image, a pattern image, and the like. If the user wants to know what type of button this garment uses, the user can click on the button image. After receiving the click input, the mobile phone can, in response to the click input, recognize the button image through an AI recognition algorithm to obtain the name of the button (i.e., the first information).
It can be understood that, through an input on a target sub-content of the first content, the user can trigger identification of that target sub-content to obtain the first information, so that specific sub-content within the first content can be identified selectively. In this way, details of the first content can be identified and information about those details obtained.
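As an illustration of the fourth-input handling above, the following sketch hit-tests a tap against the bounding boxes of the M sub-contents and passes only the tapped target sub-content to recognition; the SubContent type and the recognition callback are hypothetical.

```kotlin
// Hypothetical hit test for the fourth input: find which of the M sub-contents
// the user tapped, then recognize only that target sub-content.
data class SubContent(val id: Int, val left: Int, val top: Int, val right: Int, val bottom: Int)

fun targetSubContent(subContents: List<SubContent>, tapX: Int, tapY: Int): SubContent? =
    subContents.firstOrNull { tapX in it.left..it.right && tapY in it.top..it.bottom }

// Only the tapped sub-content (e.g. the button image) is passed to recognition,
// so the rest of the first content is ignored for this request.
fun recognizeTarget(sub: SubContent, recognize: (SubContent) -> String): String = recognize(sub)
```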
The embodiments of the application provide a content identification method in which, after the user triggers, through an input, the display of certain content in a photographable area of the first screen of the electronic device, the content can be acquired through a camera of the electronic device and identified directly once acquired, yielding information about the content, without the user needing to trigger the electronic device to take a screenshot of the content or to upload the content to the server of an image recognition application for identification. In this way, the way in which the electronic device recognizes the content is simplified.
Optionally, before the first content is displayed in the first area of the first screen of the electronic device in S102, the content identifying method provided by the embodiment of the present application may further include S104 to S106.
S104, the content recognition device responds to the first input and displays the first content in a second area of a second screen of the electronic device.
Wherein the second area is a mapping area of the first area.
S105, the content recognition device receives a second input of a user to a second area of the second screen.
S106, the content recognition device responds to the second input and updates the first content displayed in the first area.
Optionally, the electronic device in the embodiment of the present application may be a multi-panel electronic device. Further, the multi-sided screen includes a first screen and a second screen.
Optionally, the second screen is located on a plane opposite to the plane on which the first screen is located, or the second screen is located on the same plane as the plane on which the first screen is located. And in particular, according to actual use conditions, the embodiment of the present application is not limited thereto.
Alternatively, the size of the second region and the size of the first region may be the same or different.
Further, in the case where the size of the second region and the size of the first region are different, the size of the second region is larger than the size of the first region, or the size of the second region is smaller than the size of the first region. Specifically, the method can be determined according to actual use conditions, and the embodiment of the application is not limited to the method.
It should be noted that, after responding to the received first input, the first area may be determined first, and then the mapping area, that is, the second area, of the first area may be determined.
Further, in the case of determining the second area, a frame may be displayed on the second screen, where the area surrounded by the frame is the second area. In this way, the user can preview the photographable area of the first camera.
Optionally, in case the first content comprises a plurality of contents, the plurality of contents is displayed in a second area of a second screen of the electronic device. Thereafter, if the user drags one content of the plurality of contents to another content, the information obtained by identifying the two contents may be triggered to be spliced.
Optionally, the second input may be any feasible input such as a touch input, a voice input, or a gesture input of the user, which is not limited in the embodiment of the present application.
Optionally, the content recognition device receives a second input of the user to the second area, updates the content displayed in the second area, and synchronously updates the first content displayed in the first area. Alternatively, the second input may be an editing input of the user on the first content in the second area. For example, the second area of the second screen displays text A to be recognized; the user edits it into text B, and the content recognition device updates text A in the first area to text B.
Since the second area and the first area have a mapping relationship, after the first input triggers the display of the first content in the second area of the second screen, the first content in the mapping area of the first area may be displayed in the first area of the first screen at the same time.
That is, after the content displayed in the second area is changed, the content displayed in the first area is also changed.
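The mapping between the second area and the first area described above might look like the following sketch, in which an update made in the second area is redrawn in the first area and points can be mapped between the two; the Area type and the rendering callback are assumptions for illustration only.

```kotlin
// Sketch of the mapping between the second area (second screen) and the first
// area (first screen). Rect geometry and rendering are hypothetical placeholders.
data class Area(val left: Int, val top: Int, val width: Int, val height: Int)

class MirroredAreas(private val firstArea: Area, private val secondArea: Area) {
    // Map a point in the second area to the corresponding point in the first area.
    fun mapPoint(x: Int, y: Int): Pair<Int, Int> {
        val sx = firstArea.width.toDouble() / secondArea.width
        val sy = firstArea.height.toDouble() / secondArea.height
        return Pair(
            firstArea.left + ((x - secondArea.left) * sx).toInt(),
            firstArea.top + ((y - secondArea.top) * sy).toInt()
        )
    }

    // Whenever the content shown in the second area changes (the second input),
    // the same content is redrawn in the first area so the camera can see it.
    fun onSecondAreaUpdated(content: String, drawInFirstArea: (String, Area) -> Unit) =
        drawInFirstArea(content, firstArea)
}
```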
According to the content identification method provided by the embodiment of the application, the first content can be displayed in the second area of the second screen of the electronic device and, at the same time, in the first area of the first screen, of which the second area is the mapping area. Therefore, the content displayed in the first area of the first screen can be determined through the content displayed in the second area of the second screen, which makes the operation more convenient.
Optionally, before the first content is displayed in the second area of the second screen of the electronic device in S104, the content identifying method provided by the embodiment of the present application may further include S107 and S108 described below, and accordingly, S104 may be specifically implemented by S104A described below.
S107, the content recognition device displays N pieces of content in a third area of the second screen.
Wherein the N contents include a first content, and N is a positive integer.
Optionally, the third area is a screen area different from the second area in the second screen.
Alternatively, the size of the second region and the size of the third region may be the same or different.
Alternatively, in the case where the size of the second region and the size of the third region are different, the size of the second region is larger than the size of the third region, or the size of the second region is smaller than the size of the third region. Specifically, the method can be determined according to actual use conditions, and the embodiment of the application is not limited to the method.
Alternatively, for the description of each of the N contents, reference may be made to the detailed description in the above embodiment, and the embodiments of the present application are not repeated herein.
S108, the content recognition device receives a third input of the first content by the user.
Alternatively, the third input may be a touch input, a voice input, or a gesture input of the user to the first content. For example, the touch input is an input by which the user drags the first content from the third area to the second area.
Alternatively, in combination with the above S108, the above step S104 may include the following S104A.
S104A, the content recognition device responds to the third input and displays the first content in the second area.
Optionally, after S108, the content identification method provided by the embodiment of the present application may further include: the content recognition device displays, in response to the third input, a to-be-identified list, where the to-be-identified list includes an identifier of the first content. In this way, the user can view the identified content and the content still to be identified.
Take the content recognition device being a mobile phone as an example. As shown in fig. 2, the mobile phone includes a first screen 01 and a second screen 02. The user may fold the first screen 01, and after the mobile phone receives the user's folding input (i.e., the first input), it may display content 04 and content 05 in area 03 of the second screen 02 (i.e., the third area) in response to that input. The user may then drag the content 04 to area 06 of the second screen 02 (i.e., the second area); after the mobile phone receives the drag input, it may display the content 04 in area 06 in response to the drag input, and simultaneously display the content 04 in area 07 of the first screen (i.e., the first area), of which area 06 is the mapping area.
According to the content identification method provided by the embodiment of the application, after the N pieces of content are displayed in the third area of the second screen, the user can trigger, through an input on the N pieces of content, the first content to be displayed in the second area, so that the user can select any one of the N pieces of content as the content to be identified according to actual needs.
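The to-be-identified list mentioned above could be kept as simple bookkeeping, as in the following sketch: dragging one of the N contents into the second area queues it, and queued contents are then recognized in turn; the identifiers and the recognition callback are hypothetical.

```kotlin
// Hypothetical bookkeeping for the to-be-identified list: dragging one of the
// N contents from the third area into the second area queues it for recognition.
class ToBeIdentifiedList {
    private val pending = ArrayDeque<String>()   // identifiers of queued contents
    private val done = mutableListOf<String>()   // identifiers already recognized

    fun onDraggedIntoSecondArea(contentId: String) { pending.addLast(contentId) }

    fun recognizeNext(recognize: (String) -> String): String? {
        val next = pending.removeFirstOrNull() ?: return null
        val info = recognize(next)               // first information for this content
        done.add(next)
        return info
    }
}
```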
Optionally, before the first content is displayed in the first area of the first screen of the electronic device in S102, the content identifying method provided by the embodiment of the present application may further include S109 described below.
S109, the content recognition device takes a screenshot of the screen content displayed in the first screen to obtain N pieces of content.
Optionally, the above S109 specifically includes: the content recognition device takes a screenshot of the screen content displayed in the first screen to obtain at least one image, and processes the at least one image in a preset manner to obtain the N pieces of content.
Optionally, for processing at least one image in the preset manner to obtain N contents, the following two possible implementation manners may be included:
(1) Extract the image elements in the at least one image according to the categories of the image elements to obtain the N contents; that is, the N contents are N images.
Illustratively, the image elements in the at least one image include humans, animals, and plants. People, animals and plants can be extracted from the at least one image according to the category of the image element.
(2) Divide the at least one image according to interface levels to obtain the N contents.
Illustratively, the at least one image includes three interface levels. Dividing the at least one image according to the interface levels yields content 1 (pictures and text) at the first interface level, content 2 (the background) at the second interface level, and content 3 (characters) at the third interface level.
It can be appreciated that N contents can be obtained by screen capturing the screen contents displayed in the first screen, so that the user can select the content to be identified from the N contents.
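The two splitting strategies above, grouping by image-element category and grouping by interface level, can be sketched as follows; element detection itself is stubbed, and the Element type is an assumption made for illustration.

```kotlin
// Two hypothetical splitting strategies for a screenshot of the first screen:
// (1) group extracted elements by category, (2) group them by interface level.
// Only the grouping logic is shown; detection of elements is assumed elsewhere.
data class Element(val category: String, val level: Int, val crop: ByteArray)

fun splitByCategory(elements: List<Element>): Map<String, List<Element>> =
    elements.groupBy { it.category }          // e.g. "person", "animal", "plant"

fun splitByInterfaceLevel(elements: List<Element>): Map<Int, List<Element>> =
    elements.groupBy { it.level }             // e.g. level 1, level 2, level 3

// Each group becomes one of the N contents offered to the user for selection.
```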
Alternatively, S109 described above may be replaced with S109A and S109B described below:
S109A, the content recognition device analyzes the content of the screen displayed in the first screen.
S109B, the content recognition device divides the screen content displayed in the first screen according to the analysis result to obtain N pieces of content.
Optionally, the content recognition device analyzes the screen content of the first screen through an AI algorithm to obtain each content type of the screen content, and then divides the screen content according to each content type to obtain the screen content of each type.
For example, when it is recognized that the screen content of the first screen includes two types of content, namely, a picture and text, the screen content may be divided to obtain the picture and text (i.e., N pieces of content) in the screen content, respectively.
Optionally, before S109 or S109B described above, the content identifying method provided by the embodiment of the present application may further include S110 to S112 described below.
S110, the content recognition device displays a target window on the second screen.
The target window displays the content of a target area of the first screen, where the target area is at least part of the screen area of the first screen other than the first area.
Alternatively, the following two possible embodiments (a) and (b) may be specifically included for the above S110:
(a) The content recognition device displays the target window in a superimposed manner on the second screen.
(b) The content recognition device displays the target window hovering over the second screen.
Alternatively, the display area of the target window may be a preset area in the second screen, the display size of the target window may be a preset display size, and the display shape of the target window may be a circle, an ellipse, a rectangle, or other possible shapes. Specifically, the display mode of the target window can be determined according to actual conditions, and the display mode of the target window is not limited in the embodiment of the application.
S111, the content recognition device receives a fourth input of the user to the target window.
Optionally, the fourth input may be a touch input, a voice input, or a gesture input of the user on the target window. For example, the touch input is a sliding input of the user in the target window.
S112, the content recognition device responds to the fourth input, updates the content displayed in the target window, and updates the content in the target area of the first screen in real time according to the updated content in the target window.
For example, in a scenario where content is identified continuously, such as translating an entire article, a small window is displayed on the second screen, and the small window displays the content of the target area of the first screen in real time. By sliding in the small window, the user can trigger updating of the content displayed in the target window, and the content in the target area of the first screen is updated in real time according to the updated content in the target window.
Alternatively, in combination with the above S110 to S112, the above S109A may include S109A1 described below.
S109A1, the content recognition device performs content analysis on the updated content in the first screen.
Optionally, the N contents are obtained by dividing the updated content in the first screen.
Illustratively, in combination with the above example, when the entire article is translated, after the user slides the text to be translated in the small window of the second screen, the text content displayed in the small window is updated, and the content in the lower half-screen area of the first screen is updated synchronously; that is, the updated text content in the small window is displayed in the lower half-screen area of the first screen. In this case, the content recognition device performs content analysis on the updated text content displayed in the lower half-screen area of the first screen to obtain the N pieces of content.
According to the content identification method provided by the embodiment of the application, the target window is displayed on the second screen, so that the user can, through an input on the target window, trigger updating of the content displayed in the target window, and the content in the target area of the first screen is updated in real time according to the updated content in the target window. In this way, a way for the target window to interact with the first screen is provided.
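The target-window interaction described in this subsection can be sketched as follows: a slide in the small window changes which slice of the document it shows, and the target area of the first screen is re-rendered with the same slice; the line-based model and the rendering callback are illustrative assumptions.

```kotlin
// Sketch of the target-window sync: sliding the small window on the second
// screen changes the visible slice, and the first-screen target area mirrors it.
class TargetWindowSync(
    private val documentLines: List<String>,
    private val windowLineCount: Int,
    private val renderTargetArea: (List<String>) -> Unit   // hypothetical renderer
) {
    private var topLine = 0

    fun onSlide(deltaLines: Int) {
        topLine = (topLine + deltaLines)
            .coerceIn(0, maxOf(0, documentLines.size - windowLineCount))
        renderTargetArea(currentContent())   // first-screen target area updated in real time
    }

    // The content later analyzed (S109A1) is whatever is currently rendered.
    fun currentContent(): List<String> =
        documentLines.subList(topLine, minOf(documentLines.size, topLine + windowLineCount))
}
```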
Optionally, the first content includes L second sub-contents, where L is a positive integer;
optionally, after the first content is displayed in the first area of the first screen of the electronic device in S102, the content identifying method provided by the embodiment of the present application further includes S113 described below.
S113, the content recognition device receives a sixth input of a user to at least two sub-contents of the L second sub-contents displayed in the second area.
Alternatively, in combination with the above S113, the above S103 may include the following steps S103B and S103C.
S103B, in response to the sixth input, the content identification device identifies the at least two sub-contents when they are acquired through the first camera, so as to obtain second information corresponding to each of the at least two sub-contents.
S103C, the content recognition device splices the second information corresponding to each sub-content to obtain the first information.
Optionally, the L second sub-contents are obtained by content division of the first content. Illustratively, the first content is an article, and the second sub-content is a plurality of text paragraphs in the article. Also illustratively, the first content includes an image 1 including a text paragraph a and an image 2 including a text paragraph B, which are obtained by screen capturing, and one of the second sub-contents is the image 1 and the other is the image 2.
Optionally, the sixth input may be any feasible input such as a touch input, a gesture input, or a voice input of the user, which is not limited in any way in the embodiment of the present application.
Optionally, the sixth input is an input that a user drags a plurality of sub-contents of the at least two sub-contents to one sub-content.
Illustratively, the above at least two sub-contents include sub-content 1, sub-content 2, and sub-content 3. After the user drags the sub-content 1 and the sub-content 2 to the sub-content 3, the content recognition device splices the recognition information 1 (i.e. the second information) corresponding to the sub-content 1, the recognition information 2 corresponding to the sub-content 2 and the recognition information 3 corresponding to the sub-content 3 to obtain final recognition information (i.e. the first information).
Take as an example that the first content includes an image 1 containing a text paragraph A and an image 2 containing a text paragraph B, both obtained by screenshot. After the user drags the image 1 onto the image 2 displayed on the second screen, the content recognition device, having acquired the image contents of image 1 and image 2 on the first screen through the camera, recognizes the text in image 1 and image 2 to obtain the text content of image 1 and the text content of image 2 respectively, and then integrates and splices the two to obtain one piece of text content including the text of both image 1 and image 2.
Therefore, by integrating and splicing the recognition result information corresponding to the plurality of sub-contents in the first content, complete recognition result information corresponding to the first content is obtained, which improves the readability of the recognition result information and thus the user experience.
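The splicing of the second information described above reduces to recognizing each dragged sub-content and concatenating the results in drag order, as in the following sketch; the recognition callback stands in for the camera-plus-AI step and is not the actual implementation.

```kotlin
// Hypothetical splicing for the sixth-input flow: each dragged sub-content is
// recognized separately, and the per-piece results (second information) are
// concatenated in drag order to form the first information.
fun spliceRecognitionResults(
    subContentsInDragOrder: List<ByteArray>,
    recognize: (ByteArray) -> String        // stub for the camera + recognition step
): String =
    subContentsInDragOrder
        .map(recognize)                      // second information per sub-content
        .joinToString(separator = " ")       // spliced into one piece of first information
```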
Optionally, after S103 or S103A described above, the content identifying method provided by the embodiment of the present application may further include S114 and S115 described below.
S114, the content recognition device receives a fifth input of the electronic device by the user.
Optionally, the fifth input may be a touch input, a gesture input, or a voice input of the user. For example, the touch input is an input by which the user unfolds the first screen of the electronic device.
S115, the content recognition device responds to the fifth input to display the first information.
Alternatively, for displaying the first information in S115 described above, the following two possible embodiments may be included:
(1) The first information is displayed on a first screen of the electronic device.
(2) The first information is displayed on a second screen of the electronic device.
Optionally, after S114, the content identification method provided by the embodiment of the application may further include: resuming display of second content on the first screen, where the second content is the screen content that was displayed on the first screen before the first input was received. In this way, the user can continue to view the previous content.
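A minimal sketch of the fifth-input handling, under the assumption that the previously displayed second content is cached when the first input arrives, is given below; the presenter type and callbacks are hypothetical.

```kotlin
// Minimal sketch: on the fifth (unfold) input, show the first information and
// resume the second content that was on screen before the first input.
class ResultPresenter(private val show: (String) -> Unit) {
    private var savedSecondContent: String? = null

    fun onFirstInput(currentScreenContent: String) { savedSecondContent = currentScreenContent }

    fun onFifthInput(firstInformation: String) {
        show(firstInformation)        // displayed on the first or second screen
        savedSecondContent?.let(show) // resume the earlier content on the first screen
    }
}
```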
According to the content identification method provided by the embodiment of the application, after the first content is identified through the first camera to obtain the first information, the user can trigger to display the first information through input, so that the user can check the first information obtained after the first content is identified.
According to the content identification method provided by the embodiment of the application, the execution subject can be a content identification device. In the embodiment of the present application, a method for performing content recognition by a content recognition device is taken as an example, and the content recognition device provided by the embodiment of the present application is described.
As shown in fig. 3, an embodiment of the present application provides a content identification apparatus 200, which may include a receiving module 201, a display module 202, and a processing module 203. The receiving module 201 is configured to receive a first input of a user to an electronic device; the display module 202 is configured to display, in response to the first input received by the receiving module 201, first content in a first area of a first screen of the electronic device, where the first area is within the shooting range of a first camera of the electronic device; and the processing module 203 is configured to identify the first content to obtain first information when the first content is acquired through the first camera.
Optionally, in the embodiment of the application, the display module is further configured to display the first content in a second area of a second screen of the electronic device before the first content is displayed in the first area of the first screen of the electronic device, where the second area is a mapping area of the first area, the receiving module is further configured to receive a second input of a user to the second area of the second screen, and the processing module is further configured to update the first content displayed in the first area in response to the second input received by the receiving module.
Optionally, in the embodiment of the application, the display module is further configured to display N pieces of content in a third area of the second screen before the first piece of content is displayed in the second area of the second screen of the electronic device, where N pieces of content includes the first piece of content, N is a positive integer, the receiving module is further configured to receive a third input of the first piece of content by a user, and the display module is specifically configured to display the first piece of content in the second area in response to the third input received by the receiving module.
Optionally, in the embodiment of the application, the processing module is further used for carrying out content analysis on the screen content displayed in the first screen, and the processing module is further used for dividing the screen content displayed in the first screen according to the analysis result to obtain N pieces of content.
Optionally, in the embodiment of the application, the display module is further configured to display a target window on the second screen, where the target window displays content of a target area in the first screen, the target area is at least a part of a screen area except the first area in the first screen, the receiving module is further configured to receive a fourth input of a user to the target window, the processing module is specifically configured to update the content displayed in the target window in response to the fourth input received by the receiving module, and update the content in the target area of the first screen in real time according to the updated content in the target window, and the processing module is specifically configured to perform content analysis on the updated content in the first screen, where the N contents are obtained based on the content division after the update in the first screen.
Optionally, in the embodiment of the present application, the first content includes M first sub-contents, where M is a positive integer, a receiving module further configured to receive a fifth input from a user to a target sub-content in the M first sub-contents displayed in the second area, and a processing module specifically configured to identify the target sub-content to obtain the first information when the target sub-content is obtained through the first camera in response to the fifth input received by the receiving module.
Optionally, in the embodiment of the application, the first content comprises L second sub-contents, wherein L is a positive integer, a receiving module is further used for receiving a sixth input of at least two sub-contents in the L second sub-contents displayed in the second area by a user, a processing module is specifically used for identifying the at least two sub-contents to obtain second information corresponding to each sub-content in the at least two sub-contents under the condition that the at least two sub-contents are acquired through the first camera in response to the sixth input received by the receiving module, and a processing module is specifically used for splicing the second information corresponding to each sub-content to obtain the first information.
According to the content identification device provided by the embodiment of the application, a first input of a user to the electronic device is received; in response to the first input, first content is displayed in a first area of a first screen of the electronic device, where the first area is within the shooting range of a first camera of the electronic device; and, when the first content is acquired through the first camera, the first content is identified to obtain first information. With this scheme, when certain content is acquired by the first camera in a photographable area of the first screen of the electronic device, identification of the acquired content is triggered and identification information corresponding to the content is obtained, without the user needing to trigger a screenshot of the content or to upload the content to the server of an image recognition application for identification. In this way, the way in which the electronic device recognizes the content is simplified.
The content recognition device in the embodiment of the application may be an electronic device or a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); it may also be a server, a network attached storage (NAS) device, a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not particularly limited in the embodiments of the present application.
The content recognition device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The content recognition device provided by the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 and fig. 2, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 4, the embodiment of the present application further provides an electronic device 300, including a processor 301 and a memory 302, where the memory 302 stores a program or an instruction that can be executed on the processor 301, and the program or the instruction implements each step of the embodiment of the content identification method when executed by the processor 301, and the steps achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 5 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to, a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, and a processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The electronic device comprises a user input unit 407 for receiving a first input of the electronic device by a user, a display unit 406 for displaying first content in a first area of a first screen of the electronic device in response to the first input received by the user input unit 407, the first area being in a shooting range of a first camera of the electronic device, and a processor 410 for identifying the first content to obtain first information when the first content is acquired through the first camera.
Optionally, in the embodiment of the present application, the display unit 406 is further configured to display the first content in a second area of a second screen of the electronic device before the first content is displayed in the first area of the first screen of the electronic device, where the second area is a mapped area of the first area, the user input unit 407 is further configured to receive a second input of a user to the second area of the second screen, and the processor 410 is further configured to update the first content displayed in the first area in response to the second input received by the user input unit 407.
Optionally, in the embodiment of the present application, the display unit 406 is further configured to display N pieces of content in a third area of the second screen before the first piece of content is displayed in the second area of the second screen of the electronic device, where N pieces of content include the first piece of content and N is a positive integer, the user input unit 407 is further configured to receive a third input of the first piece of content by a user, and the display unit 406 is specifically configured to display the first piece of content in the second area in response to the third input received by the user input unit 407.
Optionally, in the embodiment of the present application, the processor 410 is further configured to analyze the content of the screen content displayed in the first screen, and the processor 410 is further configured to divide the screen content displayed in the first screen according to the analysis result to obtain N pieces of content.
Optionally, in the embodiment of the present application, the display unit 406 is further configured to display a target window on the second screen, where the target window displays content of a target area in the first screen, where the target area is at least a part of a screen area in the first screen except the first area, the user input unit 407 is further configured to receive a fourth input of a user to the target window, the processor 410 is specifically configured to update the content displayed in the target window in response to the fourth input received by the user input unit 407, and update the content in the target area of the first screen in real time according to the updated content in the target window, and the processor 410 is specifically configured to parse the content of the updated content in the first screen, where the N contents are obtained based on the content division after the update in the first screen.
Optionally, in the embodiment of the present application, the first content includes M first sub-contents, where M is a positive integer, the user input unit 407 is further configured to receive a fifth input of a user to a target sub-content in the M first sub-contents displayed in the second area, and the processor 410 is specifically configured to identify the target sub-content to obtain the first information in response to the fifth input received by the user input unit 407 when the target sub-content is obtained by the first camera.
Optionally, in the embodiment of the present application, the first content includes L second sub-contents, where L is a positive integer, the user input unit 407 is further configured to receive a sixth input of a user to at least two sub-contents of the L second sub-contents displayed in the second area, the processor 410 is specifically configured to identify the at least two sub-contents to obtain second information corresponding to each sub-content in the at least two sub-contents when the at least two sub-contents are obtained through the first camera in response to the sixth input received by the user input unit 407, and the processor 410 is specifically configured to splice the second information corresponding to each sub-content to obtain the first information.
In the electronic device provided by the embodiment of the application, the electronic device receives a first input of a user; in response to the first input, first content is displayed in a first area of a first screen of the electronic device, where the first area is within the shooting range of a first camera of the electronic device; and, when the first content is acquired through the first camera, the first content is identified to obtain first information. With this scheme, when certain content is acquired by the first camera in a photographable area of the first screen of the electronic device, identification of the acquired content is triggered and identification information corresponding to the content is obtained, without the user needing to trigger a screenshot of the content or to upload the content to the server of an image recognition application for identification. In this way, the way in which the electronic device recognizes the content is simplified.
It should be appreciated that in embodiments of the present application, the input unit 404 may include a graphics processor (graphics processing unit, GPU) 4041 and a microphone 4042, with the graphics processor 4041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes at least one of a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 409 may include a volatile memory or a nonvolatile memory, or the memory 409 may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), or a direct Rambus random access memory (DRRAM). The memory 409 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 410 may include one or more processing units. Optionally, the processor 410 integrates an application processor and a modem processor, where the application processor primarily handles operations involving the operating system, the user interface, application programs, and the like, and the modem processor, such as a baseband processor, primarily handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 410.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the above content identification method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement each process of the above content identification method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
An embodiment of the present application provides a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement each process of the above content identification method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or, of course, by hardware alone, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210745044.1A CN115131649B (en) | 2022-06-27 | 2022-06-27 | Content identification method and device and electronic equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210745044.1A CN115131649B (en) | 2022-06-27 | 2022-06-27 | Content identification method and device and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115131649A (en) | 2022-09-30 |
| CN115131649B (en) | 2025-08-01 |
Family
ID=83379291
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210745044.1A Active CN115131649B (en) | 2022-06-27 | 2022-06-27 | Content identification method and device and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115131649B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117176853A (en) * | 2023-09-01 | 2023-12-05 | 深圳传音控股股份有限公司 | Processing method, intelligent terminal and storage medium |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107465868A (en) * | 2017-06-21 | 2017-12-12 | 珠海格力电器股份有限公司 | Object identification method and device based on terminal and electronic equipment |
| CN109432775A (en) * | 2018-11-09 | 2019-03-08 | 网易(杭州)网络有限公司 | A kind of multi-screen display method and device of map |
| CN113542463A (en) * | 2021-06-30 | 2021-10-22 | 惠州Tcl移动通信有限公司 | Video shooting device and method based on folding screen, storage medium and mobile terminal |
| CN114666427A (en) * | 2020-12-08 | 2022-06-24 | 荣耀终端有限公司 | Image display method, electronic equipment and storage medium |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109785816B (en) * | 2017-01-03 | 2021-06-25 | 中兴通讯股份有限公司 | Mobile terminal and display control method thereof |
| CN111026302B (en) * | 2019-11-26 | 2021-02-23 | 维沃移动通信有限公司 | Display method and electronic equipment |
| KR20220007469A (en) * | 2020-07-10 | 2022-01-18 | 삼성전자주식회사 | Electronic device for displaying content and method for operating thereof |
| CN111866392B (en) * | 2020-07-31 | 2021-10-08 | Oppo广东移动通信有限公司 | Shooting prompting method, device, storage medium and electronic device |
| CN112287850B (en) * | 2020-10-30 | 2025-01-03 | 维沃移动通信有限公司 | Item information identification method, device, electronic device and readable storage medium |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107465868A (en) * | 2017-06-21 | 2017-12-12 | 珠海格力电器股份有限公司 | Object identification method and device based on terminal and electronic equipment |
| CN109432775A (en) * | 2018-11-09 | 2019-03-08 | 网易(杭州)网络有限公司 | A kind of multi-screen display method and device of map |
| CN114666427A (en) * | 2020-12-08 | 2022-06-24 | 荣耀终端有限公司 | Image display method, electronic equipment and storage medium |
| CN113542463A (en) * | 2021-06-30 | 2021-10-22 | 惠州Tcl移动通信有限公司 | Video shooting device and method based on folding screen, storage medium and mobile terminal |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115131649A (en) | 2022-09-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111901896A (en) | Information sharing method, information sharing device, electronic equipment and storage medium | |
| CN114564921B (en) | Document editing method and device | |
| CN112099704A (en) | Information display method and device, electronic equipment and readable storage medium | |
| CN111722775A (en) | Image processing method, device, equipment and readable storage medium | |
| CN115291778B (en) | Display control method, device, electronic device and readable storage medium | |
| CN115437736A (en) | Method and device for taking notes | |
| CN114786062A (en) | Information recommendation method and device and electronic equipment | |
| KR20230061519A (en) | Screen capture methods, devices and electronics | |
| CN115131649B (en) | Content identification method and device and electronic equipment | |
| CN114845171B (en) | Video editing method, device and electronic equipment | |
| CN113794831B (en) | Video shooting method, device, electronic equipment and medium | |
| CN115842953A (en) | Shooting method and device thereof | |
| CN107784037B (en) | Information processing method and device, and device for information processing | |
| CN113873168A (en) | Shooting method, shooting device, electronic equipment and medium | |
| CN115242976B (en) | Photographing method, photographing device and electronic equipment | |
| CN116033094B (en) | Video editing method and device | |
| CN114049638B (en) | Image processing method, device, electronic equipment and storage medium | |
| WO2024160133A1 (en) | Image generation method and apparatus, electronic device, and storage medium | |
| CN117311885A (en) | Image viewing methods and devices | |
| CN112765447B (en) | Data searching method and device and electronic equipment | |
| CN114245017A (en) | Shooting method and device and electronic equipment | |
| CN115589457B (en) | A shooting method, device, electronic device and readable storage medium | |
| CN115499610B (en) | Video generation method, video generation device, electronic device and storage medium | |
| CN119172594B (en) | Video processing methods and devices | |
| CN115278378B (en) | Information display method, information display device, electronic device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |