Disclosure of Invention
Based on this, it is necessary to provide a number identification method, apparatus, computer device, and storage medium.
A number identification method, comprising:
turning on a camera, turning on a light source, and acquiring images of a target area a plurality of times through the camera to obtain a plurality of images to be analyzed;
analyzing each image to be analyzed to obtain a depth value of each image to be analyzed;
calculating a depth change value of the image to be analyzed based on the depth value of each image to be analyzed;
determining a numbering region of the image to be analyzed according to the depth change value; and
analyzing the image to be analyzed based on the numbering region to obtain the number of the target area.
In one embodiment, the step of analyzing the image to be analyzed based on the numbering region to obtain the number of the target area includes:
creating a reference surface image;
performing binarization processing on the image to be analyzed based on the numbering region to obtain a binarized image;
inputting each binarized image into the reference surface image, and storing the result as an integrated image in a preset format; and
filtering and restoring the integrated image to obtain the number of the target area.
In one embodiment, the preset format is the TIFF (.tif) format.
In one embodiment, the step of turning on the camera and turning on the light source to obtain the images of the target area multiple times by the camera to obtain multiple images to be analyzed includes:
turning on the camera, turning on a light source facing a preset direction of the target area, acquiring an image of the target area through the camera to obtain an image to be analyzed corresponding to the light source in the preset direction, turning off the camera, and turning off the light source in the preset direction.
In one embodiment, the steps of turning on the camera, turning on a light source facing the preset direction of the target area, obtaining an image of the target area through the camera, obtaining an image to be analyzed corresponding to the light source in the preset direction, turning off the camera, and turning off the light source in the preset direction include:
starting the camera and starting a light source facing the preset direction of the target area; and
starting a slow-motion mode of the camera, controlling the camera to acquire an image of the target area in the slow-motion mode to obtain an image to be analyzed corresponding to the light source in the preset direction, closing the camera, and closing the light source in the preset direction.
In one embodiment, the steps of turning on the camera, turning on a light source facing the preset direction of the target area, obtaining an image of the target area through the camera, obtaining an image to be analyzed corresponding to the light source in the preset direction, turning off the camera, and turning off the light source in the preset direction include:
starting the camera, starting a first light source facing the first direction of the target area, acquiring at least one image of the target area through the camera to obtain a first image to be analyzed, closing the camera, and closing the first light source;
starting the camera, starting a second light source facing the second direction of the target area, acquiring at least one image of the target area through the camera to obtain a second image to be analyzed, closing the camera, and closing the second light source;
starting the camera, starting a third light source facing a third direction of the target area, acquiring at least one image of the target area through the camera to obtain a third image to be analyzed, closing the camera, and closing the third light source;
starting the camera, starting a fourth light source facing the fourth direction of the target area, acquiring an image of the target area at least once through the camera to obtain a fourth image to be analyzed, closing the camera, and closing the fourth light source;
and taking the first image to be analyzed, the second image to be analyzed, the third image to be analyzed and the fourth image to be analyzed as the images to be analyzed.
In one embodiment, the number of each of the first image to be analyzed, the second image to be analyzed, the third image to be analyzed, and the fourth image to be analyzed is six.
A number identification device comprising:
an image-to-be-analyzed acquisition module, configured to turn on the camera, turn on the light source, and acquire images of the target area a plurality of times through the camera to obtain a plurality of images to be analyzed;
a depth value obtaining module, configured to analyze each image to be analyzed to obtain the depth value of each image to be analyzed;
a depth change value obtaining module, configured to calculate the depth change value of the image to be analyzed based on the depth value of each image to be analyzed;
a shadow area determining module, configured to determine the numbering region of the image to be analyzed according to the depth change value; and
a target area number obtaining module, configured to analyze the image to be analyzed based on the numbering region to obtain the number of the target area.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the number identification method described in any of the embodiments above when the computer program is executed.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the number identification method described in any of the embodiments above.
The beneficial effects of the method are as follows: the number in the target area is obtained by analyzing the depth values of the images and then determining the shadow area in the target area. The identification precision and identification efficiency of numbers on a same-color metal surface are effectively improved, thereby achieving quick warehousing of articles.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It will be appreciated that the number identification method of the present application is applicable to identifying the numbers of articles in an article warehouse. The articles may be production materials such as bottles, caps, boxes, and casings, and may also be firearms. Embodiments of the application are not limited in this regard.
As shown in fig. 1, a number identification method according to an embodiment of the present invention is used for identifying numbers on same-color metal surfaces, and includes:
Step 110, turning on a camera, turning on a light source, and obtaining images of a target area through the camera for multiple times to obtain multiple images to be analyzed.
In this embodiment, the target area is a position with a number on the article, and when the image is acquired, the camera is aligned and faces the target area, and the light source emits light in a direction towards the target area. Therefore, when the camera is started, the light source is synchronously started to irradiate the target area, and then the camera shoots the target area to obtain an image to be analyzed.
It should be understood that the article is placed on a detection table for number identification. Because the article is placed manually, its position may be random. In order to accurately capture the target area, in one embodiment, the camera and the light source are mounted on separate mechanical arms. The mechanical arms can drive the camera and the light source to move, so that their positions can be adjusted according to where the article is placed. In this way, even when the user places the article at a random position, the camera can be accurately aimed at the target area and the light source can accurately illuminate the target area.
And 120, analyzing each image to be analyzed to obtain a depth value of each image to be analyzed.
In this embodiment, the number in the target area has a raised or recessed feature; for example, the number may protrude from the surface of the target area, or be recessed into the surface. The number of the target area can therefore be identified by analyzing the depth of the number. In this embodiment, the plurality of images to be analyzed are analyzed to obtain the depth value of each image to be analyzed.
Specifically, when the number is recessed into the surface of the target area, the depth value of the image to be analyzed is the depth of the recess; when the number protrudes from the surface of the target area, the depth value is the height by which the number rises above the surface.
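The patent does not prescribe how a depth value is computed from a captured image, so the following is only an illustrative sketch under one assumption: under oblique illumination, deeper recesses cast darker shadows, so the mean darkness of a grayscale image can serve as a simple depth proxy. All names here are invented for illustration.

```python
def depth_value(image):
    """Return a depth proxy for a grayscale image given as a 2D list.

    Assumption (not from the patent): darker pixels correspond to
    stronger shadows and therefore to deeper relief, so depth is taken
    as the mean inverted intensity (0 = white/flat, 255 = black/deep).
    """
    pixels = [p for row in image for p in row]
    return sum(255 - p for p in pixels) / len(pixels)

flat_surface = [[255, 255], [255, 255]]   # no shadow at all
half_shadowed = [[255, 0], [255, 0]]      # one column in full shadow

print(depth_value(flat_surface))   # 0.0
print(depth_value(half_shadowed))  # 127.5
```

A real implementation would more likely compute a per-pixel depth map (e.g. by photometric methods); the scalar proxy above is only meant to make the notion of "depth value of an image" concrete.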
Step 130, calculating a depth variation value of the image to be analyzed based on the depth value of each image to be analyzed.
In this embodiment, the depth values of the images to be analyzed are compared with one another to obtain a depth change value of the images to be analyzed, where the depth change value can be regarded as the difference between the depths of the images to be analyzed. In one embodiment, an average depth value may also be calculated from the depth values of the images to be analyzed.
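Step 130 can be sketched as follows, again under assumptions not stated in the patent: the depth change value is taken per pixel as the range (max minus min) of depth across the stack of images captured under different light directions. Flat regions barely change between lighting directions, while raised or recessed strokes do.

```python
def depth_change_map(depth_maps):
    """Per-pixel max-min range over a list of equally sized 2D depth maps.

    Illustrative only: the patent speaks of a 'depth change value'
    without fixing a formula; the per-pixel range is one plausible choice.
    """
    rows, cols = len(depth_maps[0]), len(depth_maps[0][0])
    return [
        [
            max(m[r][c] for m in depth_maps) - min(m[r][c] for m in depth_maps)
            for c in range(cols)
        ]
        for r in range(rows)
    ]

lit_from_left = [[10, 80], [10, 10]]
lit_from_right = [[10, 10], [10, 70]]
print(depth_change_map([lit_from_left, lit_from_right]))
# [[0, 70], [0, 60]]
```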
And step 140, determining a shadow area of the image to be analyzed according to the depth change value.
In this embodiment, the shadow area is a numbered area, and the shape of the shadow area is matched with the shape of the numbered area. In this embodiment, according to the depth change value, it may be determined that a region with a depth value unchanged or a change value smaller than a first preset depth value is a blank region, and a region with a depth value changed or a change value larger than a second preset depth value is a numbered region.
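The two-threshold classification described above can be sketched like this. The function name and threshold values are assumptions for illustration; the patent only specifies that small or zero changes mark blank regions and large changes mark the numbered region.

```python
def classify_region(change_map, t_blank=5, t_number=20):
    """Label each pixel of a depth-change map.

    Pixels whose change is at most t_blank (a first preset depth value)
    are blank; pixels whose change exceeds t_number (a second preset
    depth value) belong to the numbered region. Values in between are
    left as 'uncertain' here; the patent does not say how to treat them.
    """
    def label(change):
        if change <= t_blank:
            return "blank"
        if change > t_number:
            return "number"
        return "uncertain"

    return [[label(c) for c in row] for row in change_map]

print(classify_region([[0, 70], [10, 60]]))
# [['blank', 'number'], ['uncertain', 'number']]
```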
And step 150, analyzing the image to be analyzed based on the shadow area to obtain the number of the target area.
In this embodiment, the number of the target area is obtained by analyzing the image to be analyzed based on the shadow area.
In one embodiment, after the number of the target area is obtained, it is stored in association with personnel information. In this embodiment, the personnel information is the information of the person returning the article. By associating the personnel information with the number of the article, quick warehousing of the article is achieved.
In this embodiment, the number in the target area is obtained by analyzing the depth values of the images and then determining the shadow area in the target area. The identification precision and identification efficiency of the numbers are effectively improved, thereby achieving quick warehousing of articles.
In one embodiment, the image to be analyzed is input to a neural network, and the image to be analyzed is analyzed based on the shadow area through the neural network to obtain the number of the target area.
In this embodiment, a large number of sample images with depth values, shadow areas and numbers of target areas determined are input to a neural network for learning, and a neural network model for analyzing the numbers of the target areas is obtained.
In order to obtain the number of the target area by analysis, in one embodiment, the step of analyzing the image to be analyzed based on the shadow area to obtain the number of the target area includes: creating a reference surface image; performing binarization processing on the image to be analyzed based on the shadow area to obtain a binarized image; inputting each binarized image into the reference surface image and storing the result as an integrated image in a preset format; and filtering and restoring the integrated image to obtain the number of the target area.
In this embodiment, the reference surface image is a blank image serving as a reference surface. Binarization processing is performed on the image to be analyzed based on the shadow area, converting the image into a black-and-white image; each black-and-white image is then placed into the reference surface image and stored as an image in a preset format. In this embodiment, the preset format is the TIFF (Tag Image File Format, .tif) format. The reference surface image into which the binarized images have been placed is converted into a TIFF image, and the TIFF image is then filtered and restored, so that the number of the target area is obtained by analysis.
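A minimal sketch of the binarize-and-composite step follows. In practice a library such as Pillow could save the composited result as a .tif file (e.g. `Image.save("out.tif")`); here only the compositing logic is shown with plain lists, and all function names are illustrative rather than taken from the patent.

```python
def binarize(image, region_mask, threshold=128):
    """Mark 1 (a black stroke) where a masked pixel is darker than threshold."""
    return [
        [1 if mask and pixel < threshold else 0
         for pixel, mask in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, region_mask)
    ]

def composite(reference, binarized_images):
    """OR each binarized image onto a blank reference surface image."""
    out = [row[:] for row in reference]
    for img in binarized_images:
        for r, row in enumerate(img):
            for c, v in enumerate(row):
                out[r][c] = out[r][c] or v
    return out

mask = [[1, 1], [0, 1]]              # shadow (numbered) region
img_a = [[30, 200], [40, 40]]        # lit from one direction
img_b = [[220, 40], [40, 40]]        # lit from another direction
reference = [[0, 0], [0, 0]]         # blank reference surface

bin_a = binarize(img_a, mask)        # [[1, 0], [0, 1]]
bin_b = binarize(img_b, mask)        # [[0, 1], [0, 1]]
print(composite(reference, [bin_a, bin_b]))  # [[1, 1], [0, 1]]
```

Combining binarized images taken under different light directions fills in strokes that any single direction leaves in shadow, which is presumably why the patent composites them before filtering and restoration.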
In order to more accurately calculate the depth value of the target area, in one embodiment, the steps of starting the camera, starting the light source, acquiring the images of the target area through the camera for multiple times, and obtaining a plurality of images to be analyzed include starting the camera, starting the light source facing the preset direction of the target area, acquiring the images of the target area through the camera, obtaining the images to be analyzed corresponding to the light source in the preset direction, closing the camera, and closing the light source in the preset direction.
In this embodiment, the light source is used to illuminate the target area and increase its brightness. Because the number in the target area protrudes or is recessed, it casts shadows under the illumination, which facilitates analysis of the depth value of the target area. When the camera is turned on, the light source is triggered to turn on synchronously; the light source illuminates the target area, and the camera captures an image of it. Since the target area produces local shadows under the light source, depth detection on the captured image is more accurate. After the image is captured, the camera is turned off, which triggers the light source to turn off. In this embodiment, there are a plurality of preset directions, each corresponding to one light source whose emitting direction faces that preset direction. Each time an image of the target area is captured, the light source of one preset direction is turned on; after the image is captured, the camera and that light source are turned off. The camera is then turned on again and the light source of another preset direction is triggered, and so on. The light sources of the preset directions are turned on in sequence, and one or more images to be analyzed are captured of the target area under each light source, thereby obtaining the plurality of images to be analyzed.
Therefore, the images to be analyzed under the irradiation of the light sources with different angles are obtained through the irradiation of the light sources with different angles, and the depth value of the images to be analyzed can be obtained through accurate analysis.
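The capture sequence described above can be sketched with stub hardware classes. The real camera and light-source interfaces are not given in the patent, so these classes and method names are invented purely to illustrate the on/capture/off ordering.

```python
class StubLight:
    """Stand-in for a light source facing one preset direction."""
    def __init__(self, direction):
        self.direction, self.on = direction, False
    def turn_on(self):
        self.on = True
    def turn_off(self):
        self.on = False

class StubCamera:
    """Stand-in for the camera; a frame records which light was on."""
    def __init__(self):
        self.on = False
    def turn_on(self):
        self.on = True
    def turn_off(self):
        self.on = False
    def capture(self, lights):
        lit = [l.direction for l in lights if l.on]
        return {"lit_from": lit[0]}

def capture_sequence(camera, lights):
    """Turn each light on in turn, capture one frame, then turn both off."""
    frames = []
    for light in lights:
        camera.turn_on()
        light.turn_on()
        frames.append(camera.capture(lights))
        camera.turn_off()
        light.turn_off()
    return frames

lights = [StubLight(d) for d in ("up", "down", "left", "right")]
frames = capture_sequence(StubCamera(), lights)
print([f["lit_from"] for f in frames])
# ['up', 'down', 'left', 'right']
```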
In one embodiment, the steps of starting the camera, starting the light source in the preset direction towards the target area, acquiring an image of the target area through the camera, obtaining an image to be analyzed corresponding to the light source in the preset direction, closing the camera, and closing the light source in the preset direction comprise the steps of starting the camera, starting the light source in the preset direction towards the target area, starting a slow motion mode of the camera, controlling the camera to acquire the image of the target area in the slow motion mode, obtaining the image to be analyzed corresponding to the light source in the preset direction, closing the camera, and closing the light source in the preset direction.
In this embodiment, by turning on the slow motion mode, the captured image can be fully exposed, so that the sharpness of the obtained image to be analyzed can be improved.
In one embodiment, the steps of turning on the camera, turning on a light source facing the preset direction of the target area, obtaining an image of the target area through the camera, obtaining an image to be analyzed corresponding to the light source in the preset direction, turning off the camera, and turning off the light source in the preset direction include:
starting the camera, starting a first light source facing the first direction of the target area, acquiring at least one image of the target area through the camera to obtain a first image to be analyzed, closing the camera, and closing the first light source;
starting the camera, starting a second light source facing the second direction of the target area, acquiring at least one image of the target area through the camera to obtain a second image to be analyzed, closing the camera, and closing the second light source;
starting the camera, starting a third light source facing a third direction of the target area, acquiring at least one image of the target area through the camera to obtain a third image to be analyzed, closing the camera, and closing the third light source;
starting the camera, starting a fourth light source facing the fourth direction of the target area, acquiring an image of the target area at least once through the camera to obtain a fourth image to be analyzed, closing the camera, and closing the fourth light source;
and taking the first image to be analyzed, the second image to be analyzed, the third image to be analyzed and the fourth image to be analyzed as the images to be analyzed.
In this embodiment, the first direction is the upper end or upper portion of the target area, the second direction is the lower end or lower portion, the third direction is the left side, and the fourth direction is the right side. The four positions of the target area are thus illuminated by the light sources in sequence, and images are captured at the four positions in turn to obtain a plurality of images to be analyzed; the depth change value of each image to be analyzed is obtained, and the shadow area is determined by analysis according to the depth change value.
In one embodiment, the number of each of the first image to be analyzed, the second image to be analyzed, the third image to be analyzed, and the fourth image to be analyzed is six.
In this embodiment, the light sources illuminate the target area in sequence from the four directions of the upper portion, lower portion, left side, and right side, and images are captured cyclically at the four positions for six rounds, so that six first images to be analyzed, six second images to be analyzed, six third images to be analyzed, and six fourth images to be analyzed are obtained. This improves the calculation precision of the depth value and the precision of the shadow area, so that the number can be extracted accurately.
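The six-round cyclic capture above can be sketched as follows. The directions and counts come from the text; the function and its `capture` callback are illustrative stand-ins for the actual hardware control.

```python
def cyclic_capture(directions, rounds, capture):
    """Call capture(direction) once per direction per round, cycling
    through all directions before starting the next round, and group
    the resulting frames by direction."""
    images = {d: [] for d in directions}
    for _ in range(rounds):
        for d in directions:
            images[d].append(capture(d))
    return images

directions = ("up", "down", "left", "right")
images = cyclic_capture(directions, 6, lambda d: f"frame:{d}")
print({d: len(v) for d, v in images.items()})
# {'up': 6, 'down': 6, 'left': 6, 'right': 6}
print(sum(len(v) for v in images.values()))  # 24
```

Cycling through the directions (rather than taking six shots per direction back to back) means the six samples for each direction are spread over the whole session, which may average out drift in the scene or lighting; the patent itself only states the cyclic order.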
In one embodiment, as shown in fig. 2, there is provided a number recognition apparatus including:
the image to be analyzed acquisition module 210 is configured to turn on a camera, turn on a light source, and acquire images of a target area through the camera multiple times to obtain multiple images to be analyzed;
A depth value obtaining module 220, configured to parse each of the images to be analyzed to obtain a depth value of each of the images to be analyzed;
a depth change value obtaining module 230, configured to calculate a depth change value of the image to be analyzed based on the depth value of each image to be analyzed;
a shadow area determining module 240, configured to determine a shadow area of the image to be analyzed according to the depth change value;
the target area number obtaining module 250 is configured to parse the image to be analyzed based on the shadow area to obtain the number of the target area.
In one embodiment, the target area number obtaining module includes:
a surface image creating unit, configured to create a reference surface image;
a binarization processing unit, configured to perform binarization processing on the image to be analyzed based on the shadow area to obtain a binarized image;
an integrated image storage unit, configured to input each binarized image into the reference surface image and store the result as an integrated image in a preset format; and
a target area number obtaining unit, configured to filter and restore the integrated image to obtain the number of the target area.
In one embodiment, the preset format is the TIFF (.tif) format.
In one embodiment, the image obtaining module to be analyzed is further configured to turn on the camera, turn on a light source facing the target area in a preset direction, obtain an image of the target area through the camera, obtain an image to be analyzed corresponding to the light source in the preset direction, turn off the camera, and turn off the light source in the preset direction.
In one embodiment the image acquisition module to be analyzed comprises:
The synchronous starting unit is used for starting the camera and starting a light source facing the preset direction of the target area;
The slow motion mode shooting unit is used for starting a slow motion mode of the camera, controlling the camera to acquire an image of the target area in the slow motion mode, obtaining an image to be analyzed corresponding to the light source in the preset direction, closing the camera, and closing the light source in the preset direction.
In one embodiment, the image acquisition module to be analyzed includes:
the first capturing and collecting unit is used for starting the camera, starting a first light source facing the first direction of the target area, acquiring an image of the target area at least once through the camera to obtain a first image to be analyzed, closing the camera, and closing the first light source;
the second capturing and collecting unit is used for starting the camera, starting a second light source facing the second direction of the target area, acquiring an image of the target area at least once through the camera to obtain a second image to be analyzed, closing the camera, and closing the second light source;
the third capturing and collecting unit is used for starting the camera, starting a third light source facing the third direction of the target area, acquiring an image of the target area at least once through the camera to obtain a third image to be analyzed, closing the camera, and closing the third light source;
the fourth capturing and collecting unit is used for starting the camera, starting a fourth light source facing the fourth direction of the target area, acquiring an image of the target area at least once through the camera to obtain a fourth image to be analyzed, closing the camera, and closing the fourth light source;
and the output unit is used for taking the first image to be analyzed, the second image to be analyzed, the third image to be analyzed and the fourth image to be analyzed as the images to be analyzed.
In one embodiment, the number of each of the first image to be analyzed, the second image to be analyzed, the third image to be analyzed, and the fourth image to be analyzed is six.
For specific limitations of the number recognition means, reference may be made to the above limitations of the number recognition method, and no further description is given here. The respective modules in the above number identifying means may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and holds a database of images and numbers. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with other computer devices on which application software is deployed. The computer program, when executed by the processor, implements a number identification method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the number identification method described in any of the embodiments above when the computer program is executed by the processor.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the number identification method described in any of the embodiments above.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.