CN111970437B - Text shooting method, wearable device and storage medium - Google Patents
- Publication number
- CN111970437B (application CN202010766614.6A)
- Authority
- CN
- China
- Prior art keywords
- text
- text content
- camera
- size
- photographing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
Abstract
The application discloses a text shooting method, a wearable device, and a storage medium. The text shooting method comprises the following steps: when a preview image containing text content is acquired by a camera, obtaining characteristics of the text content from the preview image, where the characteristics represent the position and size of the text content; controlling the camera to zoom according to the characteristics of the text content; controlling the camera to focus on the text content, and acquiring a text image after the camera completes focusing; and displaying the text image acquired by the camera in a display interface. In this way, a wearable device can be used to shoot text, automatically zooming and focusing to obtain a clearer image.
Description
Technical Field
The invention relates to the technical field of photographing, and in particular to a text photographing method, a wearable device, and a storage medium.
Background
With the development of mobile communication technology, smart terminals such as smartphones, tablet computers, and wearable smart devices have become widespread. As an important component of the smart mobile terminal, the camera module with its photographing function plays a significant role in people's daily lives.
Some wearable devices have also gradually gained a shooting function and can be used to photograph text; for example, a student's smart watch can photograph book content. However, a common implementation is to aim the camera at the text and take the picture directly, and since the display screen of the wearable device is small, text in an out-of-focus image is easily blurred.
Disclosure of Invention
The application provides a text shooting method, a wearable device and a storage medium.
In a first aspect, a text photographing method is provided, including:
when a preview image containing text content is acquired by a camera, acquiring the characteristics of the text content from the preview image, wherein the characteristics of the text content are used for representing the position and the size of the text content;
controlling the camera to zoom according to the characteristics of the text content;
controlling the camera to focus on the text content, and acquiring a text image after the camera completes focusing;
and displaying the text image acquired by the camera in a display interface.
In an optional implementation mode, the characteristics of the text content comprise a peripheral rectangle of the text content and position information of the peripheral rectangle in the display interface;
before controlling the camera to zoom according to the characteristics of the text content, the method further includes:
judging whether the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface;
and if the size of the peripheral rectangle of the text content is not matched with the size of the photographing preview interface, triggering the step of controlling the camera to zoom according to the characteristics of the text content.
In an optional implementation manner, the determining whether the size of the peripheral rectangle of the text content matches the size of the photo preview interface includes:
judging whether at least one side of the peripheral rectangle of the text content has the same length as the parallel side of the photographing preview interface;
if yes, determining that the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface; and if not, determining that the size of the peripheral rectangle of the text content is not matched with the size of the photographing preview interface.
In an optional implementation manner, the controlling the camera to zoom according to the feature of the text content includes:
determining a zooming parameter corresponding to the text content according to the peripheral rectangle of the text content and the position information of the peripheral rectangle in the display interface, wherein the zooming parameter is used for adjusting the focal length of the camera until the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface;
and controlling the camera to zoom according to the zooming parameters corresponding to the text content.
In an optional embodiment, the controlling the camera to focus on the text content includes:
and calculating modulation transfer functions of N points selected on a peripheral rectangle of the text content in the preview image, and controlling the camera to focus on the text content by using the modulation transfer functions, wherein N is a positive integer.
In an optional implementation manner, after obtaining the feature of the text content, the method further includes:
and displaying a text box in a photo preview interface, wherein the text box is determined by the peripheral rectangle of the text content and the position information of the peripheral rectangle in the display interface.
In an optional embodiment, after displaying the text box in the photo preview interface, the method further comprises:
when touch operation on a target position of the photographing preview interface is detected, moving the text box to the target position for displaying;
and when the zooming operation on the text box is detected, zooming out or zooming in the text box according to the zooming operation.
In a second aspect, a wearable device is provided, which includes a camera, a preprocessing module, a control module, and a display module, wherein:
the preprocessing module is used for acquiring the characteristics of the text content from the preview image when the preview image containing the text content is acquired by the camera, wherein the characteristics of the text content are used for representing the position and the size of the text content;
the control module is used for controlling the camera to zoom according to the characteristics of the text content;
the control module is further used for controlling the camera to focus on the text content, and the camera collects a text image after focusing is completed;
and the display module is used for displaying the text image acquired by the camera in a display interface.
In a third aspect, there is provided another wearable device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the first aspect and any of its possible implementations.
In a fourth aspect, there is provided a computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the steps of the first aspect and any possible implementation thereof.
According to the method and the device, when a preview image containing text content is acquired by the camera, the characteristics of the text content are obtained from the preview image; these characteristics represent the position and size of the text content. The camera is controlled to zoom according to the characteristics of the text content and then to focus on the text content, and a text image is acquired after focusing is completed. The text image acquired by the camera is displayed in a display interface. In this way, a wearable device can be used to shoot text: the text content is recognized during camera preview, and automatic zooming and focusing are performed to obtain a clearer image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the background art, the drawings needed in the embodiments or the background art are briefly described below.
Fig. 1 is a schematic flowchart of a text photographing method according to an embodiment of the present application;
fig. 2 is a schematic view of a 9-point focus lens provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of another text photographing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a wearable device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of another wearable device provided in the embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiments of the present application are described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a schematic flowchart of a text shooting method according to an embodiment of the present application. The method may include the following steps:
101. when a preview image containing the text content is acquired by the camera, the characteristics of the text content are acquired from the preview image, and the characteristics of the text content are used for representing the position and the size of the text content.
The execution subject of the embodiments of the present application may be a wearable device, for example one having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad), such as a smart watch. In an optional implementation manner, the method may also be applied to other terminal devices such as a mobile terminal with a camera, which is not limited in this application.
The wearable device in the embodiment of the application is provided with a camera and can acquire images and display them on a screen. Specifically, when the camera is started and enters a photographing mode, it begins to acquire preview images. The preview images mentioned in the embodiments of the present application are images acquired by the camera that can be displayed on the screen of the wearable device; they are the un-captured frames shown before a photo is taken.
The wearable device in the embodiment of the application can identify whether the preview image of the camera contains characters. Specifically, when a frame of preview image is acquired, whether it contains characters can be identified through a preset character recognition algorithm. Optionally, when the current preview image is determined to contain text content, the characteristics of the text content may be obtained from the preview image. These characteristics represent the position and size of the text content and are mainly used to determine the relative position and relative size of the text content in the preview image. The text content area may be determined in combination with an edge detection algorithm so as to extract the characteristics of the text content. The characteristics may include the peripheral rectangle of the text content (its length, width, and the like) and its coordinates in the preview image, specifically the coordinates of the center or the four corners of the peripheral rectangle, which is not limited here.
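As a concrete illustration, the toy sketch below computes the peripheral rectangle of dark "ink" pixels in a grayscale preview frame. The pure-thresholding approach and the `ink_threshold` value are illustrative assumptions, standing in for the edge-detection-based text localization described above.

```python
import numpy as np

def text_bounding_rect(gray, ink_threshold=128):
    """Peripheral rectangle (x, y, w, h) of dark 'ink' pixels in a
    grayscale preview frame. A toy stand-in for the edge-detection
    based text localization described in the embodiment."""
    ys, xs = np.where(gray < ink_threshold)
    if xs.size == 0:
        return None  # no text-like pixels found
    x0, x1 = int(xs.min()), int(xs.max())
    y0, y1 = int(ys.min()), int(ys.max())
    # Width and height of the peripheral rectangle; its center
    # (x0 + w / 2, y0 + h / 2) gives the coordinate feature.
    return (x0, y0, x1 - x0 + 1, y1 - y0 + 1)
```

A real implementation would work on edge maps and connected text regions rather than raw pixel darkness, but the output, a rectangle plus its position, is the same kind of characteristic the method uses.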
Optionally, after obtaining the characteristics of the text content, the method further includes:
and displaying a text box in the photo preview interface, wherein the text box is determined by the peripheral rectangle of the text content and the position information of the peripheral rectangle in the display interface.
A text box marker can be displayed in the photo preview interface of the wearable device's display screen to indicate the text object that needs to be focused.
Further optionally, after the text box is displayed in the photo preview interface, the method further includes:
when touch operation on a target position of the photographing preview interface is detected, moving the text box to the target position for displaying;
and when the zoom operation on the text box is detected, zooming out or zooming in the text box according to the zoom operation.
Specifically, the user may manually adjust a displayed text box. For example, when a touch operation occurs at a target position, the text box moves to the corresponding position; specifically, the target position may become the center of the new text box. A zoom operation may also be performed on the text box, such as sliding two fingers farther apart on the screen to enlarge it or closer together to shrink it. In this way, the user can adjust the focusing object and the size of the focusing area so that the subsequently captured image is clearer.
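A minimal sketch of this interaction logic might look as follows; `TextBox` and its method names are hypothetical, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class TextBox:
    """Text box shown in the photographing preview interface."""
    cx: float  # center x, preview coordinates
    cy: float  # center y, preview coordinates
    w: float
    h: float

    def move_to(self, target_x, target_y):
        # A tap makes the touched point the center of the new text box.
        self.cx, self.cy = target_x, target_y

    def pinch(self, scale):
        # Two fingers sliding apart (scale > 1) enlarge the box;
        # sliding together (scale < 1) shrinks it.
        self.w *= scale
        self.h *= scale
```

The adjusted box then defines the region the camera focuses on in step 103.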
102. And controlling the camera to zoom according to the characteristics of the text content.
Specifically, a text zooming adaptive algorithm can be preset to judge, according to the characteristics of the text content, whether the focal length needs to be increased or decreased relative to the current screen (display interface), so that the text content is photographed completely and clearly. Combined with a magnification algorithm, the text zooming adaptive algorithm can fit the text area to the size of the screen and improve the photographing effect.
The zooming here mainly refers to optical zoom. A common digital camera zooms by means of its optical lens structure: the scene to be photographed is enlarged or reduced by moving the lens. The larger the optical zoom factor, the more distant the scene that can be photographed; compared with a camera with weak zoom capability, a camera that can adjust its focal length captures distant scenes relatively clearly.
The specific implementation manner of step 102 may also refer to the specific description in the embodiment shown in fig. 3, and is not described here again.
103. And controlling the camera to focus on the text content, and acquiring a text image after the camera completes focusing.
In the embodiment of the application, focusing and photographing can be completed through a multi-point focusing algorithm.
In an optional embodiment, the controlling the camera to focus on the text content includes: calculating modulation transfer functions at N points selected on the peripheral rectangle of the text content in the preview image, and controlling the camera to focus on the text content using these modulation transfer functions, wherein N is a positive integer.
For example, N may be 9; that is, a 9-point focusing algorithm is adopted. 9-point focusing is defined relative to central focusing; its basic principle is to calculate the modulation transfer function (MTF) at 9 specific points selected on the image and finally determine the lens position to complete focusing. MTF curves are often used to describe the capability of an individual camera lens. The focus points referred to in the embodiments of the present application form an array visible in the viewfinder and can be understood as detection samples of the focusing sensor; when the photographed object falls on a focus point, the lens can automatically focus to that position.
Specifically, reference may be made to the schematic diagram of a 9-point focusing lens picture shown in fig. 2. In the camera focusing picture of fig. 2, the central circle A is the light-metering region; the picture further contains 9 focus points, one at the center and the other 8 surrounding it. The focus points may also be arranged in other forms, such as a 3×3 array, and the photographing preview interface corresponding to the lens may have different sizes depending on the terminal device, which is not limited in the embodiments of the present application.
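The selection logic of multi-point focusing can be sketched as below: a 3×3 grid of focus points is scored at each candidate lens position, and the position with the highest mean sharpness wins. The `sharpness_at` callback, which would return an MTF-like contrast score from the camera pipeline, is a hypothetical placeholder.

```python
def nine_focus_points(width, height):
    """Centers of a 3x3 grid of focus points over the preview frame,
    one at the center and eight around it."""
    xs = [width // 4, width // 2, 3 * width // 4]
    ys = [height // 4, height // 2, 3 * height // 4]
    return [(x, y) for y in ys for x in xs]

def autofocus(sharpness_at, lens_positions, points):
    """Pick the lens position whose mean sharpness over the focus
    points is highest. sharpness_at(pos, point) is assumed to return
    an MTF-like contrast score measured at that focus point."""
    return max(
        lens_positions,
        key=lambda pos: sum(sharpness_at(pos, p) for p in points) / len(points),
    )
```

In the method described above, the N points would lie on the peripheral rectangle of the text content rather than on a fixed grid, but the scan-and-score structure is the same.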
After the camera completes focusing, the text image can be captured by the camera, and step 104 can then be executed. The zooming and focusing processes may be performed automatically or controlled manually by the user. Optionally, if the user performs a manual operation during one photographing session, subsequent steps may follow that manual operation, which may include manual zooming, manual focusing, and the like.
104. And displaying the text image acquired by the camera in a display interface.
The text image contains the text content the user wants to shoot. Through the above zooming and focusing, the text content in the captured text image reaches a suitable size, adapts to the screen size of the wearable device, and is clear and complete.
Optionally, after obtaining the text image, the user may perform a series of optional editing operations or other processing on the text image, such as sharing the text image in an application program, and the like, which is not limited in this embodiment of the application.
In the embodiment of the application, when the camera acquires a preview image containing text content, the characteristics of the text content are obtained from the preview image and are used to represent the position and size of the text content. The camera is controlled to zoom according to these characteristics and then to focus on the text content, and a text image is acquired after focusing is completed. The text image collected by the camera is displayed in the display interface. In this way, the wearable device can be used to shoot text: the text content is recognized during camera preview, and automatic zooming and focusing are performed to obtain a clearer image. This can alleviate the problems that a wearable device such as a watch, whose screen is too small for convenient operation, cannot shoot text well or clearly, and improves the user experience.
For more clearly explaining a text photographing method provided by the present application, please refer to fig. 3, and fig. 3 is a schematic flow chart of another text photographing method provided by an embodiment of the present application. As shown in fig. 3, the method specifically includes:
301. when the camera acquires a preview image containing the text content, the peripheral rectangle of the text content and the position information of the peripheral rectangle in the display interface are acquired from the preview image.
The execution subject of the embodiments of the present application may be a wearable device, for example one having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad), such as a smart watch. In an optional implementation manner, the method may also be applied to other terminal devices such as a mobile terminal with a camera, which is not limited in this application.
The wearable device in the embodiment of the application is provided with a camera and can acquire images and display them on a screen. Specifically, when the camera is started and enters a photographing mode, it begins to acquire preview images. The preview images mentioned in the embodiments of the present application are images acquired by the camera that can be displayed on the screen of the wearable device; they are the un-captured frames shown before a photo is taken.
The wearable device in the embodiment of the application can identify whether the preview image of the camera contains characters. Specifically, when a frame of preview image is acquired, whether it contains characters can be identified through a preset character recognition algorithm. Optionally, when the current preview image is determined to contain text content, the characteristics of the text content may be acquired from the preview image so as to determine the relative position and relative size of the text content in the preview image. Specifically, the text content area may be determined in combination with an edge detection algorithm, and the peripheral rectangle of the area (which may include parameters such as length and width) and the coordinates of its center or four corners in the preview image are obtained, which is not limited here.
Step 301 may also refer to the detailed description in step 101 in the embodiment shown in fig. 1, and is not described herein again.
302. And judging whether the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface.
In order to fit the text to the size of the screen, it can be judged whether the size of the peripheral rectangle of the text content matches the size of the photographing preview interface. In an optional implementation manner, step 302 specifically includes:
judging whether at least one side of the peripheral rectangle of the text content has the same length as the parallel side of the photographing preview interface;
if yes, determining that the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface; and if not, determining that the size of the peripheral rectangle of the text content is not matched with the size of the photographing preview interface.
Specifically, whether the peripheral rectangle of the text content is already at its maximum in the photographing preview interface (that is, in the image) may be determined by comparing each side of the peripheral rectangle with the length of the parallel side of the preview interface. If the two side lengths are the same, the peripheral rectangle already fills the preview interface along that direction, and the photo can be taken; that is, the size of the peripheral rectangle of the text content matches the photographing preview interface, zooming is not needed, step 303 is not executed, and step 305 may be executed directly.
If no side of the peripheral rectangle of the text content has the same length as the parallel side of the photo preview interface, that is, every side of the peripheral rectangle is smaller than the corresponding interface side, then the rectangle has not reached its maximum and there may still be room for enlargement. In that case, the size of the peripheral rectangle of the text content does not match the size of the photo preview interface, and step 303 is triggered.
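The match check of step 302 reduces to a single comparison; the function name and the exact-equality criterion (rather than a tolerance) are simplifying assumptions.

```python
def rect_matches_preview(rect_w, rect_h, view_w, view_h):
    """True when the peripheral rectangle of the text content already
    fills the photo preview interface along at least one parallel
    side, i.e. no further zoom-in is useful."""
    return rect_w == view_w or rect_h == view_h
```

For a 320x240 preview, a 320x120 rectangle matches (the widths coincide), while a 200x100 rectangle does not and triggers the zooming of step 303.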
303. And determining a zooming parameter corresponding to the text content according to the peripheral rectangle of the text content and the position information of the peripheral rectangle in the display interface, wherein the zooming parameter is used for adjusting the focal length of the camera until the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface.
304. And controlling the camera to zoom according to the zooming parameters corresponding to the text content.
When the size of the peripheral rectangle of the text content does not match the size of the photo preview interface, it can be understood that the text area occupies only a small range of the photo, and zooming can make the text shot more clearly. The zooming parameter corresponding to the text content is the camera parameter that makes the size of the peripheral rectangle of the text content match the size of the photographing preview interface. A preset text zooming adaptive algorithm computes this parameter from the peripheral rectangle of the text content and its position information in the display interface; the parameter is then used to control and adjust the focal length of the camera so that the peripheral rectangle of the text content is displayed at its maximum in the photographing preview interface, and a clearer and more complete text image can be photographed.
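As a hedged stand-in for the text zooming adaptive algorithm, the magnification that makes one rectangle side equal the parallel preview side without overflowing the other can be computed as follows; the centered-rectangle assumption is a simplification.

```python
def zoom_factor(rect_w, rect_h, view_w, view_h):
    """Magnification to apply so the peripheral rectangle fills the
    preview along its limiting axis. A simplified model of the
    adaptive zoom parameter, assuming a centered rectangle."""
    if rect_w <= 0 or rect_h <= 0:
        raise ValueError("rectangle must have positive size")
    # The axis that would overflow first limits the magnification.
    return min(view_w / rect_w, view_h / rect_h)
```

After zooming by this factor, at least one side of the (scaled) peripheral rectangle equals the parallel preview side, which is exactly the match condition tested in step 302.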
305. And controlling the camera to focus on the text content, and acquiring a text image after the camera completes focusing.
306. And displaying the text image acquired by the camera in a display interface.
Step 305 and step 306 may also refer to the detailed descriptions in step 103 and step 104 in the embodiment shown in fig. 1, which are not described herein again.
In the embodiment of the application, when the camera acquires a preview image containing text content, the peripheral rectangle of the text content and its position information in the display interface are obtained from the preview image, and whether the size of the peripheral rectangle matches the size of the photographing preview interface is judged. If not, the zooming parameter corresponding to the text content is determined according to the peripheral rectangle and its position information; the zooming parameter is used to adjust the focal length of the camera so that the size of the peripheral rectangle matches the size of the photographing preview interface. The camera is controlled to zoom according to this parameter, then controlled to focus on the text content, and a text image is acquired after focusing is completed. If the size of the peripheral rectangle already matches the size of the photographing preview interface, focusing and photographing can be performed directly. The text image collected by the camera is then displayed in the display interface. In this way, the wearable device can be used to shoot text: the text content is recognized during camera preview, and adaptive zooming and focusing are performed according to how the peripheral rectangle of the text content matches the size of the photographing preview interface, so that a clearer image is obtained. This can particularly alleviate the problems that a wearable device such as a watch, whose screen is too small for convenient operation, cannot shoot text well or clearly, and improves the user experience.
Based on the description of the foregoing text photographing method embodiments, an embodiment of the application further discloses a wearable device. Referring to fig. 4, the wearable device 400 includes a camera 410, a preprocessing module 420, a control module 430 and a display module 440, wherein:
the preprocessing module 420 is configured to, when a preview image including text content is acquired by the camera 410, obtain features of the text content from the preview image, where the features of the text content are used to indicate a position and a size of the text content;
the control module 430 is configured to control the camera 410 to zoom according to the characteristics of the text content;
the control module 430 is further configured to control the camera 410 to focus on the text content, and the camera 410 collects a text image after the focusing is completed;
the display module 440 is configured to display the text image collected by the camera 410 in a display interface.
Optionally, the characteristics of the text content include a peripheral rectangle of the text content and position information of the peripheral rectangle in the display interface;
the preprocessing module 420 is further configured to, before the control module 430 controls the camera to zoom according to the characteristics of the text content, judge whether the size of the peripheral rectangle of the text content matches the size of the photographing preview interface;
if the size of the peripheral rectangle of the text content does not match the size of the photographing preview interface, the control module 430 is triggered to control the camera to zoom according to the characteristics of the text content.
Optionally, the preprocessing module 420 is specifically configured to:
judging whether the peripheral rectangle of the text content has at least one side equal in length to the parallel side of the photographing preview interface;
if yes, determining that the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface; and if not, determining that the size of the peripheral rectangle of the text content is not matched with the size of the photographing preview interface.
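The matching criterion above reduces to a one-line predicate. A minimal sketch (the tuple representation of sizes is an assumption for illustration):

```python
def sizes_match(rect_size, view_size):
    """The criterion described above: the sizes match when the peripheral
    rectangle has at least one side equal in length to the parallel side
    of the photographing preview interface."""
    (rect_w, rect_h), (view_w, view_h) = rect_size, view_size
    return rect_w == view_w or rect_h == view_h
```

When this returns False, the control module is triggered to zoom; when it returns True, the device can proceed directly to focusing.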
Optionally, the control module 430 is specifically configured to:
determining a zooming parameter corresponding to the text content according to the peripheral rectangle of the text content and the position information of the peripheral rectangle in the display interface, wherein the zooming parameter is used for adjusting the focal length of the camera until the size of the peripheral rectangle of the text content matches the size of the photographing preview interface;
and controlling the camera to zoom according to the zooming parameters corresponding to the text content.
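One natural way to derive such a zooming parameter is the scale factor that enlarges the peripheral rectangle until one side equals the parallel side of the preview interface without the other side overflowing. The linear mapping from rectangle size to focal-length multiplier is an assumption made for this sketch, not something the patent specifies:

```python
def zoom_parameter(rect_size, view_size):
    """Focal-length multiplier under which the peripheral rectangle grows
    until one side matches the parallel side of the photographing preview
    interface; taking the minimum keeps the other side inside the view."""
    (rect_w, rect_h), (view_w, view_h) = rect_size, view_size
    return min(view_w / rect_w, view_h / rect_h)
```

For a 160x120 rectangle in a 320x240 interface this yields 2.0, i.e. doubling the focal length fills the preview with the text.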
Optionally, the control module 430 is further specifically configured to:
calculating modulation transfer functions at N points selected on the peripheral rectangle of the text content in the preview image, and controlling the camera to focus on the text content using the modulation transfer functions, where N is a positive integer.
Optionally, the display module 440 is further configured to display a text box in a photo preview interface after the preprocessing module 420 obtains the characteristics of the text content, where the text box is determined by the peripheral rectangle of the text content and the position information of the peripheral rectangle in the display interface.
Optionally, the display module 440 is further configured to, after displaying the text box in the photo preview interface, move the text box to a target position for display when a touch operation on the target position of the photo preview interface is detected, and/or,
when a zoom operation on the text box is detected, zoom the text box out or in according to the zoom operation.
According to an embodiment of the present application, each step involved in the methods shown in fig. 1 and fig. 3 may be performed by each module in the wearable device 400 shown in fig. 4, and is not described herein again.
In the embodiment of the application, when the camera 410 acquires a preview image containing text content, the wearable device 400 obtains the characteristics of the text content from the preview image, where the characteristics indicate the position and size of the text content. The camera 410 is controlled to zoom according to the characteristics of the text content and then to focus on the text content; after focusing is completed, the camera 410 acquires a text image, which is displayed in the display interface. The wearable device 400 can thus photograph text: the text content is recognized during camera preview, and automatic zooming and focusing produce a clearer image. This particularly addresses the problems that the watch screen is too small for convenient operation, making text hard to photograph and the resulting shots unclear, and it improves the user experience.
Based on the descriptions of the method embodiments and the device embodiments, an embodiment of the application further provides a wearable device. Referring to fig. 5, the wearable device 500 includes at least a processor 501, an input device 502, an output device 503, and a computer storage medium 504. The processor 501, the input device 502, the output device 503, and the computer storage medium 504 in the wearable device 500 may be connected by a bus or in other ways.
The computer storage medium 504 may be located in the memory of the device and is used for storing a computer program comprising program instructions; the processor 501 is used for executing the program instructions stored in the computer storage medium 504. The processor 501 (central processing unit, CPU) is the computing core and control core of the device; it is adapted to implement one or more instructions, and specifically to load and execute the one or more instructions so as to implement a corresponding method flow or function. In one embodiment, the processor 501 described in the embodiments of the present application may be used to perform a series of processing, including the methods in the embodiments shown in fig. 1 and fig. 3.
An embodiment of the present application further provides a computer storage medium (memory), which is a memory device in the wearable device and is used to store programs and data. It can be understood that the computer storage medium here may include a built-in storage medium of the device, and may also include an extended storage medium supported by the device. The computer storage medium provides storage space that stores the operating system of the device. One or more instructions suitable for loading and execution by the processor 501 are also stored in this storage space; these may be one or more computer programs (including program code). The computer storage medium may be a high-speed RAM or a non-volatile memory, such as at least one disk memory; optionally, it may be at least one computer storage medium located remotely from the processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by processor 501 to perform the corresponding steps in the above embodiments; in particular implementations, one or more instructions in the computer storage medium may be loaded by processor 501 and executed to perform any step of the method in fig. 1 and/or fig. 3, which is not described herein again.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the division of the module is only one logical division, and other divisions may be possible in actual implementation, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not performed. The shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some interfaces, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are wholly or partially generated. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on, or transmitted over, a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic medium such as a floppy disk, hard disk, magnetic tape, or magnetic disk, an optical medium such as a digital versatile disc (DVD), or a semiconductor medium such as a solid state disk (SSD).
Claims (8)
1. A text photographing method, comprising:
when a camera acquires a frame of preview image, identifying whether the preview image contains text content or not through a preset text recognition algorithm;
when the preview image contains the text content, acquiring the characteristics of the text content from the preview image, wherein the characteristics of the text content are used for representing the position and the size of the text content;
controlling, according to the characteristics of the text content, the camera to zoom, comprising: determining a zooming parameter corresponding to the text content according to the peripheral rectangle of the text content and the position information of the peripheral rectangle in a photographing preview interface, wherein the zooming parameter is used for adjusting the focal length of the camera until the size of the peripheral rectangle of the text content matches the size of the photographing preview interface, and controlling the camera to zoom according to the zooming parameter corresponding to the text content;
calculating a modulation transfer function of N points selected on a peripheral rectangle of the text content in the preview image, controlling the camera to focus the text content by using the modulation transfer function, and acquiring a text image after the camera finishes focusing, wherein N is a positive integer;
and displaying the text image acquired by the camera in the photographing preview interface.
2. The text photographing method according to claim 1, wherein the characteristics of the text content include a peripheral rectangle of the text content and position information of the peripheral rectangle in the photographing preview interface;
before controlling the camera to zoom according to the characteristics of the text content, the method further includes:
judging whether the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface;
and if the size of the peripheral rectangle of the text content is not matched with the size of the photographing preview interface, triggering the step of controlling the camera to zoom according to the characteristics of the text content.
3. The method of claim 2, wherein the determining whether the size of the peripheral rectangle of the text content matches the size of the photographing preview interface comprises:
judging whether the peripheral rectangle of the text content has at least one side with the same length as the parallel side of the photographing preview interface;
if yes, determining that the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface; and if not, determining that the size of the peripheral rectangle of the text content is not matched with the size of the photographing preview interface.
4. The text photographing method according to claim 1, wherein after the obtaining of the feature of the text content, the method further comprises:
and displaying a text box in the photo preview interface, wherein the text box is determined by the peripheral rectangle of the text content and the position information of the peripheral rectangle in the photo preview interface.
5. The text photographing method of claim 4, wherein after the text box is displayed in the photographing preview interface, the method further comprises:
when the touch operation of the target position of the photographing preview interface is detected, moving the text box to the target position for displaying, and/or,
and when the zooming operation on the text box is detected, zooming out or zooming in the text box according to the zooming operation.
6. A wearable device, comprising a camera, a preprocessing module, a control module and a display module, wherein:
the preprocessing module is used for identifying whether a preview image contains text content through a preset text recognition algorithm when the camera acquires a frame of preview image;
the preprocessing module is further used for acquiring the characteristics of the text content from the preview image when the preview image contains the text content, wherein the characteristics of the text content are used for representing the position and the size of the text content;
the control module is configured to control the camera to zoom according to the feature of the text content, and the control module is specifically configured to: determining a zooming parameter corresponding to the text content according to the peripheral rectangle of the text content and the position information of the peripheral rectangle in a photographing preview interface, wherein the zooming parameter is used for adjusting the focal length of the camera until the size of the peripheral rectangle of the text content is matched with the size of the photographing preview interface, and controlling the camera to zoom according to the zooming parameter corresponding to the text content;
the control module is further configured to calculate a modulation transfer function of N points selected on a peripheral rectangle of the text content in the preview image, control the camera to focus the text content using the modulation transfer function, and acquire a text image after the camera finishes focusing, where N is a positive integer;
and the display module is used for displaying the text image acquired by the camera in the photographing preview interface.
7. A wearable device, characterized by comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of the text photographing method according to any one of claims 1 to 5.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes the processor to carry out the steps of the text photographing method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010766614.6A CN111970437B (en) | 2020-08-03 | 2020-08-03 | Text shooting method, wearable device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010766614.6A CN111970437B (en) | 2020-08-03 | 2020-08-03 | Text shooting method, wearable device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111970437A CN111970437A (en) | 2020-11-20 |
CN111970437B true CN111970437B (en) | 2022-08-09 |
Family
ID=73363815
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010766614.6A Active CN111970437B (en) | 2020-08-03 | 2020-08-03 | Text shooting method, wearable device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111970437B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113905153A (en) * | 2021-08-26 | 2022-01-07 | 秒针信息技术有限公司 | Copying equipment for adjusting focusing position and method for adjusting focusing position |
CN113784119B (en) * | 2021-09-26 | 2023-05-02 | 联想(北京)有限公司 | Focusing detection method and device and electronic equipment |
CN116935391A (en) * | 2022-04-08 | 2023-10-24 | 广州视源电子科技股份有限公司 | A camera-based text recognition method, device, equipment and storage medium |
CN119937769A (en) * | 2023-11-03 | 2025-05-06 | 珠海莫界科技有限公司 | Text display method, device, wearable device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005269072A (en) * | 2004-03-17 | 2005-09-29 | Fuji Xerox Co Ltd | Image processing apparatus |
JP2006094082A (en) * | 2004-09-24 | 2006-04-06 | Casio Comput Co Ltd | Image photographing apparatus and program |
CN100382572C (en) * | 2004-04-26 | 2008-04-16 | 卡西欧计算机株式会社 | digital camera |
CN101753846A (en) * | 2008-12-05 | 2010-06-23 | 三星电子株式会社 | Device and method for automatically adjusting character size using camera |
WO2016154806A1 (en) * | 2015-03-27 | 2016-10-06 | 华为技术有限公司 | Automatic zooming method and device |
CN106845472A (en) * | 2016-12-30 | 2017-06-13 | 深圳仝安技术有限公司 | A kind of novel intelligent wrist-watch scans explanation/interpretation method and novel intelligent wrist-watch |
CN110933293A (en) * | 2019-10-31 | 2020-03-27 | 努比亚技术有限公司 | Shooting method, terminal and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111970437A (en) | 2020-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111970437B (en) | Text shooting method, wearable device and storage medium | |
KR102598109B1 (en) | Electronic device and method for providing notification relative to image displayed via display and image stored in memory based on image analysis | |
CN107205125B (en) | An image processing method, device, terminal and computer-readable storage medium | |
CN111866392B (en) | Shooting prompting method, device, storage medium and electronic device | |
US20180131869A1 (en) | Method for processing image and electronic device supporting the same | |
CN108076278B (en) | A kind of automatic focusing method, device and electronic equipment | |
WO2017124899A1 (en) | Information processing method, apparatus and electronic device | |
EP2577951A1 (en) | Camera system and method for taking photographs that correspond to user preferences | |
EP3627822A1 (en) | Focus region display method and apparatus, and terminal device | |
CN108668086A (en) | Automatic focusing method and device, storage medium and terminal | |
WO2010128579A1 (en) | Electron camera, image processing device, and image processing method | |
CN108776800B (en) | Image processing method, mobile terminal and computer readable storage medium | |
CN110717452B (en) | Image recognition method, device, terminal and computer readable storage medium | |
CN108288044A (en) | Electronic device, face identification method and Related product | |
CN106791416A (en) | A kind of background blurring image pickup method and terminal | |
CN113411498B (en) | Image shooting method, mobile terminal and storage medium | |
US20150187056A1 (en) | Electronic apparatus and image processing method | |
CN118396863A (en) | Image processing method, device, electronic device and computer readable storage medium | |
CN113014798A (en) | Image display method and device and electronic equipment | |
CN111277758A (en) | Photographing method, terminal and computer-readable storage medium | |
JP6283329B2 (en) | Augmented Reality Object Recognition Device | |
CN103179349A (en) | Automatic photographing method and device | |
CN112188097A (en) | Photographing method, photographing apparatus, terminal device, and computer-readable storage medium | |
CN108769538B (en) | Automatic focusing method and device, storage medium and terminal | |
CN106851099B (en) | A kind of method and mobile terminal of shooting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||