CN112861861B - Method and device for recognizing nixie tube text and electronic equipment - Google Patents
- Publication number
- CN112861861B (application CN202110053318.6A)
- Authority
- CN
- China
- Prior art keywords
- text
- model
- equipment
- image
- recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Character Discrimination (AREA)
Abstract
The application relates to a method and a device for recognizing nixie tube text, and to electronic equipment, and belongs to the technical field of character recognition. An equipment model diagram matched with the target equipment is selected from a pre-constructed model diagram library, and a field-acquired image is recognized and cropped according to the equipment model diagram to obtain an area image to be recognized; text recognition is performed on the area image to be recognized using a pre-constructed and trained nixie tube text recognition model to obtain a text recognition result; and matching and combination processing is carried out according to the text attribute configuration information associated with the equipment model diagram and the text recognition result, yielding structured data as the final recognition result. Recognition of nixie tube displays is thereby better realized.
Description
Technical Field
The application belongs to the technical field of character recognition, and particularly relates to a method and device for recognizing a nixie tube text and electronic equipment.
Background
Optical character recognition (OCR) generally refers to the process in which an electronic device (scanner, digital camera, etc.) examines characters printed on paper, determines their shapes by detecting light and dark patterns, and translates the shapes into text using character recognition techniques. Traditional OCR applications use Tesseract-OCR, software developed by the Hewlett-Packard Bristol laboratory between 1984 and 1994, originally as the text recognition engine for Hewlett-Packard flatbed scanners. It attracted wide attention after placing among the leaders in the 1995 UNLV OCR character recognition accuracy test. Development stopped after 1994 because Hewlett-Packard abandoned the OCR market. In 2005 Hewlett-Packard contributed Tesseract-OCR to the open-source community; Google later obtained the source code and began extending and optimizing its functionality.
In complex scenes, OCR recognition (such as brand recognition in commodity photos, web page information recognition, road sign recognition for automated driving, standard certificate recognition, license plate recognition and the like) relies on a core algorithm consisting of three parts: text detection, character segmentation and character recognition (some neural network approaches do not need character segmentation).
Text detection methods are divided into graphic image positioning and machine learning positioning. Graphic positioning is further divided into color positioning, texture positioning, edge detection and mathematical morphology, but graphic image positioning is easily disturbed by external interference information and then fails. For example, if the color of the background is similar to that of the license plate, the license plate is difficult to extract from the background; the edge detection method likewise fails easily when the target edge is contaminated. External interference can also deceive the positioning algorithm, causing it to generate too many false candidate regions to be recognized and increasing the system load. For character segmentation, license plates and standard certificates generally adopt a vertical projection method: the projection of characters in the vertical direction necessarily reaches a local minimum near the gaps between characters (or inside a character), and these positions satisfy the character writing format, character set, size restrictions and other conditions of the license plate, so the vertical projection method segments characters in vehicle images well even in complex environments. Character recognition methods mainly comprise template matching algorithms and artificial neural network algorithms. A template matching algorithm first matches each segmented character against all templates and finally selects the best match as the result. Artificial neural networks are used in two ways: one splits the text into single characters and trains a neural network classifier with those characters as input, thereby realizing recognition; the other feeds the text directly into a trained neural network, which recognizes the whole text quickly through feature extraction. The latter is widely applied, with network structures such as CRNN, CNN+CTC and DenseNet+CTC.
As described above, conventional OCR and complex-scene OCR have been well solved and implemented in practical applications, but in one specific scenario, the recognition of nixie tube display text (including text displayed in a nixie tube font) on the display screen of a device, the related art does not perform well.
Specifically, traditional OCR mainly targets character recognition of printed paper, for simple scenes with a large difference between characters and background, and works remarkably well on binarizable scenes. The actual application scene here contains many interference factors: the original picture includes the whole device and its operating environment, recognition is affected by brightness, angle, color and other factors, irrelevant information is often recognized as useful information, the character targets cannot be effectively extracted, and existing models do not support seven-segment nixie tube character recognition.
Complex-scene OCR recognition could in principle support seven-segment nixie tube target positioning, but there is no ready algorithm model: designing, training and optimizing a network takes a great deal of time, training requires a large number of device types and effectively labeled pictures under different brightness, angle and color conditions, and this work consumes considerable manpower and material resources that the field cannot supply. The trained network must be continuously verified and optimized, and whenever a new device type is added the network must be retrained, re-verified and re-optimized, a workload that cannot be estimated when rapid delivery is required. Meanwhile, existing models do not support seven-segment nixie tube text recognition, so supporting the complex-scene OCR function requires a great deal of work from design through optimization, the recognition effect cannot be guaranteed, and the recognition result is merely a segment of text without any data attribute or business information.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
In order to overcome the problems in the related art at least to a certain extent, the present application provides a method and a device for recognizing nixie tube text, and an electronic device, which help avoid the defects of the prior art and better realize recognition of nixie tube displays.
In order to achieve the above purpose, the present application adopts the following technical scheme:
in a first aspect of the present invention,
the application provides a method for identifying a nixie tube text, which comprises the following steps:
acquiring a field acquisition image of target equipment;
selecting an equipment model diagram matched with the target equipment from a pre-constructed model diagram library, and recognizing and cropping the field acquisition image according to the equipment model diagram to obtain an area image to be identified;
based on the region image to be identified, performing text identification by adopting a pre-constructed and trained nixie tube text identification model to obtain a text identification result;
and carrying out matching combination processing according to text attribute configuration information associated with the equipment model diagram and the text recognition result, and combining to obtain structured data as a final recognition result.
Optionally, selecting an equipment model diagram matched with the target equipment from a pre-constructed model diagram library, which specifically includes:
and selecting an equipment model diagram matched with the target equipment from a pre-constructed model diagram library according to the model information of the target equipment contained in the field acquisition image.
Optionally, the recognizing and cropping of the field collected image according to the equipment model diagram to obtain an area image to be identified is specifically:
extracting feature points from the equipment model graph by adopting a model graph matching algorithm, recognizing and cropping the field acquisition image according to the feature points, and correcting the size and angle of the cropped area image to obtain the area image to be identified.
Optionally, the feature points include corner points, edge points, bright points of dark areas.
Optionally, based on the area image to be identified, performing text identification by using a pre-constructed and trained nixie tube text identification model to obtain a text identification result, including:
determining and extracting a text unit image from the region image to be identified according to text position configuration information associated with the equipment model image;
preprocessing the text unit image, and adopting the nixie tube text recognition model to perform text recognition on the processed image so as to obtain the text recognition result.
Optionally, the preprocessing includes binarization, dilation, and erosion optimization of the image.
Optionally, the process of pre-constructing the model gallery includes:
collecting equipment pictures, and cropping from the equipment pictures images that contain the screen display content area and extend outwards by a certain margin, to serve as equipment model pictures;
respectively marking the display units of each equipment model diagram, box-selecting the region of the maximum displayable image, and generating a configuration file associated with the corresponding equipment model diagram according to the position information obtained from the box selection and the attribute information corresponding to the box-selected region;
and taking the equipment model number as an identification field, and warehousing each equipment model drawing and the corresponding associated configuration file thereof to obtain the model drawing library.
Optionally, based on a text recognition model in Tesseract-OCR, nixie tube font custom packaging is carried out on the model to construct the nixie tube text recognition model.
In a second aspect of the present invention,
the application provides a device for recognizing nixie tube text, which comprises:
the acquisition module is used for acquiring an on-site acquisition image of the target equipment;
the first recognition processing module is used for selecting an equipment model diagram matched with the target equipment from a pre-constructed model diagram library, and recognizing and cropping the field acquisition image according to the equipment model diagram to obtain an area image to be recognized;
the second recognition processing module is used for carrying out text recognition by adopting a pre-constructed and trained nixie tube text recognition model based on the region image to be recognized to obtain a text recognition result;
and the combination processing module is used for carrying out matching combination processing according to the text attribute configuration information associated with the equipment model diagram and the text recognition result, and combining the obtained structured data as a final recognition result.
In a third aspect of the present invention,
the application provides an electronic device, comprising:
a memory having an executable program stored thereon;
and a processor for executing the executable program in the memory to implement the steps of the method described above.
The application adopts the technical scheme, possesses following beneficial effect at least:
according to the technical scheme, in the text recognition process, the model diagram is firstly adopted to carry out matching recognition on the field acquisition image to determine the area image to be recognized, further the subsequent recognition processing is carried out, and the realization difficulty of text position recognition in the whole realization is reduced. The text attribute configuration associated with the model diagram is established to structure the identification text data, so that the data availability is ensured. And the user-defined recognition model is adopted, so that the recognition accuracy of the display text of the nixie tube is ensured.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the technical solution or prior art of the present application and constitute a part of this specification. The drawings illustrate the technical solution of the present application together with the embodiments of the present application, but do not limit the technical solution of the present application.
FIG. 1 is a flow chart of a method for recognizing a nixie tube text according to one embodiment of the present application;
FIG. 2 is a schematic illustration of a device model diagram in one embodiment of the present application;
FIG. 3 is a schematic illustration of the marking of a device model diagram display unit during the construction of a model gallery in one embodiment of the application;
FIG. 4 is a schematic structural diagram of an apparatus for recognizing a nixie tube text according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without creative effort fall within the scope of protection of the present application.
In the prior art, the recognition effect on the nixie tube display text on the display screen of the device is not ideal, and for this reason, the application provides a method for recognizing the nixie tube text, which is helpful for better realizing recognition on the nixie tube display.
In one embodiment, as shown in fig. 1, the method for recognizing the nixie tube text provided in the present application includes the following steps:
step S110, acquiring an on-site acquired image of the target device.
For example, the application scenario of this embodiment is equipment inspection, where the field-acquired image is an image obtained by an inspector photographing a target device (for example, a power cabinet of a certain device) through an image acquisition device (for example, PDA).
Then, step S120 is carried out, in which an equipment model diagram matched with the target equipment is selected from a pre-constructed model diagram library, and the on-site acquired image is recognized and cropped according to the equipment model diagram to obtain an area image to be identified.
Specifically, in step S120, according to model information of the target device contained in the field acquisition image, a device model diagram matched with the target device is selected from a model diagram library constructed in advance; for example, the name of the image file carrying the field collected image information contains the model code of the target device, the model information of the target device is obtained by analyzing the name of the image file, and then the device model diagram corresponding to the target device is retrieved and selected from the model diagram library. It should be noted that, the device model diagram in the present application refers to a picture (as illustrated in fig. 2 for example) that includes a display content area of a device screen and extends a certain area outwards.
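By way of illustration only, the lookup described above might look like the following sketch, which parses the model code from the image file name and returns the matching model diagram and its configuration file; the file naming convention, the directory layout and the file names (model.png, config.json) are assumptions made for the example rather than details specified by this application.

```python
import os

MODEL_GALLERY_DIR = "model_gallery"  # hypothetical location of the model diagram library

def select_model_diagram(field_image_path: str) -> tuple[str, str]:
    """Parse the device model code from the field image filename and return the
    paths of the matching model diagram and its associated configuration file."""
    filename = os.path.basename(field_image_path)
    model_code = filename.split("_")[0]  # assumed "<model>_<timestamp>.jpg" convention
    diagram = os.path.join(MODEL_GALLERY_DIR, model_code, "model.png")
    config = os.path.join(MODEL_GALLERY_DIR, model_code, "config.json")
    if not os.path.exists(diagram):
        raise FileNotFoundError(f"no model diagram registered for model {model_code}")
    return diagram, config

# e.g. diagram, config = select_model_diagram("PWRCAB01_20210115_093000.jpg")
```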
After the equipment model diagram is selected, the on-site acquired image is matched, recognized and cropped according to the equipment model diagram, so that the position of the target to be recognized can be located efficiently.
In step S120 of this embodiment, a model graph matching algorithm (e.g., the SIFT matching algorithm) is used to extract feature points (typically corner points, edge points, bright points in dark areas, etc.) from the equipment model graph, the field acquired image is recognized and cropped according to the feature points, and the size and angle of the cropped area image are corrected to obtain the area image to be identified.
In this way, the feature points extracted from the model diagram serve as the reference for matching and cropping, which avoids heavy preparation of labeled data; to recognize a new equipment type it suffices to add a model diagram of the new type, and no further training is required.
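A minimal sketch of this feature-point matching and cropping step is given below, using OpenCV's SIFT implementation and a RANSAC homography to warp the matched region to the model diagram's size and angle; the ratio-test threshold and the minimum match count are illustrative choices, not values fixed by this application.

```python
import cv2
import numpy as np

def crop_region_to_recognize(model_path: str, field_path: str) -> np.ndarray:
    """Locate the model-diagram region inside the field image via SIFT feature
    matching and warp it to the model diagram's own size and orientation."""
    model = cv2.imread(model_path, cv2.IMREAD_GRAYSCALE)
    field = cv2.imread(field_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_m, des_m = sift.detectAndCompute(model, None)
    kp_f, des_f = sift.detectAndCompute(field, None)

    # Keep only reliable correspondences via Lowe's ratio test.
    matches = cv2.BFMatcher().knnMatch(des_f, des_m, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.75 * n.distance]
    if len(good) < 10:
        raise RuntimeError("not enough feature matches to locate the device")

    field_pts = np.float32([kp_f[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    model_pts = np.float32([kp_m[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Homography from field to model coordinates: warping with it both crops
    # the matched region and corrects its size and viewing angle.
    H, _ = cv2.findHomography(field_pts, model_pts, cv2.RANSAC, 5.0)
    h, w = model.shape[:2]
    return cv2.warpPerspective(field, H, (w, h))
```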
Step S130 is then carried out, and based on the region image to be identified obtained in step S120, a pre-constructed and trained nixie tube text identification model is adopted for text identification, so that a text identification result is obtained;
in the embodiment, based on a text recognition model in Tesseact-OCR, a nixie tube font custom package is carried out on the model, so that the construction of the nixie tube text recognition model is realized.
First, the principle of Tesseract-OCR recognition is introduced; the Tesseract-OCR recognition process can be divided roughly into four steps.
The first step: connected component analysis, in which character regions (outline shapes) and sub-outlines are detected and, at this stage, the outlines are merged into block regions;
The second step: text lines are derived from the character outlines and block regions. Text lines are analyzed in two cases, fixed pitch and proportional. Fixed-pitch text is cut into individual characters by character cells, while proportional text is segmented using definite spaces and fuzzy spaces;
The third step: the characters are analyzed and recognized one by one using an adaptive classifier with learning capability. Characters that satisfy the confidence conditions are recognized first and simultaneously serve as training samples, so that characters encountered later (such as those at the bottom of a page) are recognized more accurately, while recognition accuracy at the top of a page is lower; the algorithm therefore recognizes the poorly recognized characters a second time to improve accuracy, so this step consists of two passes;
The fourth step: ambiguous spaces are resolved, the x-height is checked to locate small-cap text, and the remaining text is recognized using other methods.
The official Tesseract-OCR project provides text recognition models for Chinese, English, numerals and the like, but no seven-segment nixie tube font recognition model. Custom packaging is therefore carried out on the basis of a text recognition model in Tesseract-OCR to obtain the required model:
specifically, the optimized picture (the digital tube font display) is packaged by a tool, a single text position and a text value are marked, the text position marking requirement comprises a complete text, the distance between a frame and the text picture is as small as possible, unnecessary abnormal information is reduced, and the identification accuracy is ensured.
After the text recognition model is packaged, it can be tested. Text samples that are recognized abnormally can be annotated and added to the model according to the actual use effect, so as to strengthen the recognition performance of the model.
Preferably, in step S130, a text unit image is determined and extracted from the region image to be recognized according to text position configuration information associated with the device model diagram (the configuration information is generated when the model diagram library is constructed, and related content will be described in detail later), where the text unit image refers to a character display region image (for example, a box selection region as shown in fig. 3) in the region image to be recognized;
and preprocessing the text unit image, for example by binarization, dilation and erosion optimization, and performing text recognition on the processed image with the nixie tube text recognition model to obtain a text recognition result.
Adding text unit extraction on top of the text position makes the recognition object more precise, reduces the amount of information to recognize, prevents the result from being affected by extraneous information, facilitates the subsequent association of data with business meaning and the structuring of the result data, and simplifies the logic processing flow. Preprocessing the text unit images converts pictures of different colors and backgrounds into black-and-white pictures, further reducing the influence of noise on recognition accuracy.
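The preprocessing and recognition described above could be sketched as follows with OpenCV and pytesseract; the custom traineddata name ("ssd"), the character whitelist and the page segmentation mode are assumptions for illustration, and a display with bright digits on a dark background may need an inverted threshold.

```python
import cv2
import numpy as np
import pytesseract

def recognize_text_unit(unit_bgr: np.ndarray, lang: str = "ssd") -> str:
    """Binarize, dilate and erode a text-unit crop, then run Tesseract on it
    with a custom seven-segment traineddata file (assumed to be named 'ssd')."""
    gray = cv2.cvtColor(unit_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding converts displays of different colours and backgrounds
    # to black and white (use THRESH_BINARY_INV if the digits are the brighter part).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    # Dilation bridges the gaps between the segments of each digit;
    # erosion then shrinks strokes back and removes small noise specks.
    processed = cv2.erode(cv2.dilate(binary, kernel, iterations=1), kernel, iterations=1)
    # --psm 7 treats the crop as a single text line; the whitelist restricts the
    # output to characters a seven-segment display can plausibly show.
    config = "--psm 7 -c tessedit_char_whitelist=0123456789.-"
    return pytesseract.image_to_string(processed, lang=lang, config=config).strip()
```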
Continuing to refer to fig. 1, after step S130, performing step S140, performing matching and combining processing according to the text attribute configuration information and the text recognition result associated with the device model diagram, and taking the combined structured data as a final recognition result.
The text recognition result obtained in step S130 is only the text of the digits and symbols displayed by the nixie tube, and from a data processing perspective its specific meaning is unknown. In step S140, the text recognition result is combined with the corresponding data attribute based on the text attribute configuration information associated with the equipment model diagram, and the combined structured data is used as the final recognition result.
For example, the text recognition result for a text unit image is "56", the text unit image corresponds to the real-time temperature display item in the first row of the model diagram, and the structured data obtained by combining is the real-time temperature of 56 degrees.
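The matching and combination step might be sketched as follows; the configuration layout (an ordered list of text units, each with a name, a data type and a unit) is an assumed illustration of the configuration file generated when the model gallery is built, not a format fixed by this application.

```python
# Assumed configuration: one entry per text unit, mirroring the attribute
# information created during model gallery construction.
TEXT_ATTR_CONFIG = [
    {"name": "real_time_temperature", "type": "float", "unit": "degC"},
    {"name": "set_temperature", "type": "float", "unit": "degC"},
]

def combine_results(config: list[dict], texts: list[str]) -> dict:
    """Attach each configured attribute to its recognized text value so the
    output is structured data rather than a bare text segment."""
    structured = {}
    for attr, raw in zip(config, texts):
        value = float(raw) if attr["type"] == "float" else raw
        structured[attr["name"]] = {"value": value, "unit": attr.get("unit")}
    return structured

# combine_results(TEXT_ATTR_CONFIG, ["56", "60"])
# -> {"real_time_temperature": {"value": 56.0, "unit": "degC"},
#     "set_temperature": {"value": 60.0, "unit": "degC"}}
```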
According to the above technical scheme, in the text recognition process the model diagram is first used to match the field acquisition image and determine the area image to be recognized before subsequent recognition processing, which reduces the difficulty of text position recognition in the overall implementation. The text attribute configuration associated with the model diagram is used to structure the recognized text data, which ensures data usability. And the custom recognition model ensures the recognition accuracy of nixie tube display text.
The following describes how to construct a model gallery in advance in the technical scheme in the present application.
First, pictures of the relevant equipment are collected, at least one per device, and the region to be recognized is cropped from each picture with image processing software to serve as the equipment model picture (the model picture must contain the complete screen display content region and also extend outwards by a certain margin; distinctive feature information must be included outside the screen display region so that, in practical application, the algorithm can obtain more feature information for locating the target).
Then, the display units of each equipment model diagram are marked respectively: the region of the maximum displayable image is box-selected, and a configuration file associated with the corresponding equipment model diagram is generated according to the position information obtained from the box selection and the attribute information corresponding to the box-selected region;
the reason for marking the display unit is that in practice, if the picture extracted by the text target is directly identified, the picture contains all texts of the equipment, the equipment is different and the layout is different, the text identification is directly carried out, the abnormal information is more, and the identification difficulty is high; and the recognized result is a text segment, which lacks data attributes, and if judged according to code logic, standard specifications cannot be defined.
Based on the above, to reduce recognition difficulty and construct a structured recognition result, text unit information is annotated on each model diagram during construction of the model gallery: self-developed marking software is used to box-select the position of each recognition object (shown in fig. 3), and the box must cover the maximum displayable image range so that the recognition range is not clipped even at the display extremes; meanwhile, attribute information such as the object name and data type is created, establishing a structured data model of the device display information (which can be realized with a configuration file) for convenient subsequent business processing.
After marking is completed, storing the model diagram and the configuration file in a certain format, establishing an association relation through code logic, and extracting text units, names and data type information through position information in the configuration file when the type equipment is required to be identified, so that a structured text identification result is conveniently constructed;
for example, the device model number may be used as an identification field, and each device model graph and its corresponding association configuration file may be put in storage to obtain a model graph library.
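As an illustration of how a model diagram and its associated configuration file might be registered in such a gallery, the following sketch stores them keyed by device model; the JSON schema, field names and directory layout are assumptions made for the example.

```python
import json
import os

def register_model(gallery_dir: str, model_code: str,
                   diagram_path: str, text_units: list[dict]) -> None:
    """Store one device model diagram reference plus its text-unit configuration
    (box positions, object names, data types) in the gallery, keyed by model code."""
    entry_dir = os.path.join(gallery_dir, model_code)
    os.makedirs(entry_dir, exist_ok=True)
    config = {
        "model_code": model_code,
        "diagram": diagram_path,
        "text_units": text_units,  # e.g. {"name": ..., "type": ..., "box": [x, y, w, h]}
    }
    with open(os.path.join(entry_dir, "config.json"), "w", encoding="utf-8") as f:
        json.dump(config, f, ensure_ascii=False, indent=2)

# register_model("model_gallery", "PWRCAB01", "model_gallery/PWRCAB01/model.png",
#                [{"name": "real_time_temperature", "type": "float",
#                  "box": [120, 40, 90, 32]}])
```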
Fig. 4 is a schematic structural diagram of an apparatus 400 for recognizing a nixie text according to an embodiment of the present application, and as shown in fig. 4, the apparatus 400 for recognizing a nixie text includes:
an acquisition module 401, configured to acquire an on-site acquired image of a target device;
the first recognition processing module 402 is configured to select an equipment model diagram matched with the target equipment from a pre-constructed model gallery, and to recognize and crop the on-site acquired image according to the equipment model diagram to obtain an area image to be recognized;
the second recognition processing module 403 is configured to perform text recognition by using a pre-constructed and trained nixie tube text recognition model based on the region image to be recognized, so as to obtain a text recognition result;
and the combination processing module 404 is configured to perform matching and combination processing according to the text attribute configuration information and the text recognition result associated with the device model diagram, and use the combined structured data as a final recognition result.
The specific manner in which the various modules perform the operations of the apparatus 400 for recognizing a nixie tube text in the related embodiments described above has been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application, as shown in fig. 5, the electronic device 500 includes:
a memory 501 on which an executable program is stored;
a processor 502 for executing an executable program in the memory 501 to implement the steps of the above method.
The specific manner in which the processor 502 executes the program in the memory 501 of the electronic device 500 in the above embodiment has been described in detail in the embodiment related to the method, and will not be described in detail here.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.
Claims (6)
1. A method of identifying a nixie tube text, comprising:
acquiring a field acquisition image of target equipment;
selecting an equipment model diagram matched with the target equipment from a pre-constructed model diagram library, and recognizing and cropping the field acquisition image according to the equipment model diagram to obtain an area image to be identified; wherein the recognizing and cropping of the field acquisition image according to the equipment model diagram to obtain the area image to be identified is specifically: extracting feature points from the equipment model graph by adopting a model graph matching algorithm, recognizing and cropping the field acquisition image according to the feature points, and correcting the size and angle of the cropped area image to obtain the area image to be identified;
based on the region image to be identified, performing text identification by adopting a pre-constructed and trained nixie tube text identification model to obtain a text identification result, wherein the method comprises the following steps: determining and extracting a text unit image from the region image to be identified according to text position configuration information associated with the equipment model image; preprocessing the text unit image, and carrying out text recognition by adopting the nixie tube text recognition model according to the processed image so as to obtain the text recognition result;
matching and combining according to text attribute configuration information associated with the equipment model diagram and the text recognition result, and combining to obtain structured data as a final recognition result;
the process of the pre-constructed model gallery comprises the following steps:
collecting equipment pictures, and cropping from the equipment pictures images that contain the screen display content area and extend outwards by a certain margin, to serve as equipment model pictures;
respectively marking the display units of each equipment model diagram, box-selecting the region of the maximum displayable image, and generating a configuration file associated with the corresponding equipment model diagram according to the position information obtained from the box selection and the attribute information corresponding to the box-selected region;
taking the equipment model as an identification field, and warehousing each equipment model graph and a corresponding associated configuration file thereof to obtain the model graph library;
based on a text recognition model in Tesseract-OCR, the model is subjected to nixie tube font custom packaging to construct the nixie tube text recognition model.
2. The method according to claim 1, wherein the selecting a device model map matched with the target device from a pre-constructed model map library specifically comprises:
and selecting an equipment model diagram matched with the target equipment from a pre-constructed model diagram library according to the model information of the target equipment contained in the field acquisition image.
3. The method according to claim 1, wherein the feature points comprise corner points, edge points, bright spots of dark areas.
4. The method of claim 1, wherein the preprocessing comprises binarization, dilation, and erosion optimization of the image.
5. An apparatus for recognizing a nixie tube text, comprising:
the acquisition module is used for acquiring an on-site acquisition image of the target equipment;
the first recognition processing module is used for selecting an equipment model diagram matched with the target equipment from a pre-constructed model diagram library, and recognizing and cropping the field acquisition image according to the equipment model diagram to obtain an area image to be recognized; specifically: extracting feature points from the equipment model graph by adopting a model graph matching algorithm, recognizing and cropping the field acquisition image according to the feature points, and correcting the size and angle of the cropped area image to obtain the area image to be identified;
the second recognition processing module is used for carrying out text recognition by adopting a pre-constructed and trained nixie tube text recognition model based on the region image to be recognized to obtain a text recognition result; the method is particularly used for: determining and extracting a text unit image from the region image to be identified according to text position configuration information associated with the equipment model image; preprocessing the text unit image, and carrying out text recognition by adopting the nixie tube text recognition model according to the processed image so as to obtain the text recognition result;
the combination processing module is used for carrying out matching combination processing according to text attribute configuration information associated with the equipment model diagram and the text recognition result, and combining the obtained structured data as a final recognition result;
the process of the pre-constructed model gallery comprises the following steps:
collecting equipment pictures, and cropping from the equipment pictures images that contain the screen display content area and extend outwards by a certain margin, to serve as equipment model pictures;
respectively marking the display units of each equipment model diagram, box-selecting the region of the maximum displayable image, and generating a configuration file associated with the corresponding equipment model diagram according to the position information obtained from the box selection and the attribute information corresponding to the box-selected region;
taking the equipment model as an identification field, and warehousing each equipment model graph and a corresponding associated configuration file thereof to obtain the model graph library;
based on a text recognition model in Tesseract-OCR, the model is subjected to nixie tube font custom packaging to construct the nixie tube text recognition model.
6. An electronic device, comprising:
a memory having an executable program stored thereon;
a processor for executing the executable program in the memory to implement the steps of the method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110053318.6A CN112861861B (en) | 2021-01-15 | 2021-01-15 | Method and device for recognizing nixie tube text and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110053318.6A CN112861861B (en) | 2021-01-15 | 2021-01-15 | Method and device for recognizing nixie tube text and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112861861A CN112861861A (en) | 2021-05-28 |
CN112861861B (en) | 2024-04-09
Family
ID=76006554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110053318.6A Active CN112861861B (en) | 2021-01-15 | 2021-01-15 | Method and device for recognizing nixie tube text and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112861861B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113610082B (en) * | 2021-08-12 | 2024-09-06 | 北京有竹居网络技术有限公司 | Character recognition method and related equipment thereof |
CN114399768A (en) * | 2022-01-11 | 2022-04-26 | 南京工业大学 | Method, device and system for identifying serial number of workpiece product based on Tesseract-OCR engine |
CN114863085B (en) * | 2022-04-25 | 2025-07-25 | 河北省特种设备技术检查中心 | Automatic positioning and automatic calibration method for elevator fault code identification |
CN116416636A (en) * | 2023-04-06 | 2023-07-11 | 珠海读书郎软件科技有限公司 | Method, storage medium and equipment for mapping topic and frame topic coordinates |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE2146074A1 (en) * | 1970-09-21 | 1972-03-30 | Searle Medidata Inc | Data interpretation terminal |
CN103065146A (en) * | 2012-12-24 | 2013-04-24 | 广东电网公司电力调度控制中心 | Character recognition method for power communication machine room dumb equipment signboards |
CN103984930A (en) * | 2014-05-21 | 2014-08-13 | 南京航空航天大学 | Digital meter recognition system and method based on vision |
WO2019184524A1 (en) * | 2018-03-27 | 2019-10-03 | 杭州欧镭激光技术有限公司 | Detection system and detection method for detecting vehicle external environment information |
CN109919014A (en) * | 2019-01-28 | 2019-06-21 | 平安科技(深圳)有限公司 | OCR recognition methods and its electronic equipment |
CN111582262A (en) * | 2020-05-07 | 2020-08-25 | 京源中科科技股份有限公司 | Segment type liquid crystal picture content identification method, device, equipment and storage medium |
CN111738264A (en) * | 2020-05-08 | 2020-10-02 | 上海允登信息科技有限公司 | An intelligent collection method of display panel data of equipment room equipment |
CN111832565A (en) * | 2020-07-24 | 2020-10-27 | 桂林电子科技大学 | A digital tube recognition method based on decision tree |
Non-Patent Citations (2)
Title |
---|
A template-matching-based character recognition method for digital instruments; 卢卫娜; 刘长荣; 郑玉才; 王海芳; Modern Computer (Professional Edition), No. 03, pp. 70-72, 86 *
Research on automatic recognition methods for nixie tube digital instruments; 郭爽; Communications Technology, No. 08, pp. 91-93 *
Also Published As
Publication number | Publication date |
---|---|
CN112861861A (en) | 2021-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112861861B (en) | Method and device for recognizing nixie tube text and electronic equipment | |
CN110363102B (en) | Object identification processing method and device for PDF (Portable document Format) file | |
CN109753953B (en) | Method and device for positioning text in image, electronic equipment and storage medium | |
CN110874618B (en) | OCR template learning method and device based on small sample, electronic equipment and medium | |
CN108764352B (en) | Method and device for detecting repeated page content | |
CN114663904B (en) | A PDF document layout detection method, device, equipment and medium | |
CN113657162A (en) | Bill OCR recognition method based on deep learning | |
CN109784342B (en) | OCR (optical character recognition) method and terminal based on deep learning model | |
CN110598566A (en) | Image processing method, device, terminal and computer readable storage medium | |
CN111738252B (en) | Text line detection method, device and computer system in image | |
CN114463767B (en) | Letter of credit identification method, device, computer equipment and storage medium | |
CN111414889B (en) | Financial statement identification method and device based on character identification | |
CN111259891B (en) | Method, device, equipment and medium for identifying identity card in natural scene | |
CN111915635A (en) | Test question analysis information generation method and system supporting self-examination paper marking | |
CN117437651A (en) | Table data extraction method, apparatus, terminal device and storage medium | |
CN115019310A (en) | Image-text identification method and equipment | |
CN111008635A (en) | OCR-based multi-bill automatic identification method and system | |
CN114627457A (en) | A method and device for identifying face information | |
CN115546219B (en) | Detection plate type generation method, plate card defect detection method, device and product | |
CN112101356A (en) | Method and device for positioning specific text in picture and storage medium | |
CN114202761B (en) | Information batch extraction method based on picture information clustering | |
CN119888234B (en) | A method and device for intelligent identification and restoration of ancient books based on machine learning | |
CN119723528B (en) | Traffic sign damage detection method and system, storage medium and computer system | |
Bharadwaj et al. | Web Application Based on Optical Character Recognition | |
CN118334402A (en) | Identification card recognition method and system based on DBNet multi-classification network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||