Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, exemplary embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
In addition, in the present specification and the drawings, steps and elements that are substantially the same or similar are denoted by the same or similar reference numerals, and repeated descriptions of those steps and elements are omitted.
Furthermore, in the present specification and drawings, elements are described in the singular or plural form according to the embodiments. However, the singular or plural form is chosen merely for convenience of explanation and is not intended to limit the present disclosure. Accordingly, the singular may include the plural and the plural may include the singular unless the context clearly indicates otherwise.
Furthermore, in the present specification and the drawings, the terms "first" and "second" merely distinguish between similar objects and do not denote a particular order of the objects. It should be understood that, where permitted, "first" and "second" may be interchanged in a particular order or sequence, so that the embodiments of the disclosure described herein can be practiced in an order other than that illustrated or described herein.
Furthermore, in the present specification and the drawings, terms indicating an orientation or positional relationship, such as "upper", "lower", "vertical", and "horizontal", are used merely for convenience in describing embodiments according to the present disclosure and should not be construed as limiting the present disclosure.
In addition, in the present description and in the drawings, unless explicitly stated otherwise, "connected" does not necessarily mean "directly connected" or "in direct contact"; "connected" may mean either fixedly connected or electrically connected.
For purposes of describing the present disclosure, the following presents concepts related to the present disclosure.
Image processing technology processes image information by computer and mainly comprises image target detection, image enhancement and restoration, image data encoding, image segmentation, image recognition, and the like. Image enhancement methods include histogram equalization, contrast enhancement, sharpening, noise removal, color enhancement, super-resolution reconstruction, color correction, and the like.
Object detection (also referred to as target detection) is an important application of artificial intelligence that involves identifying and locating semantic objects of certain classes (e.g., persons, buildings, cars) in digital images and video. It is an important task in computer vision, relates to technologies such as image segmentation, object tracking, and key point detection, and is widely applied in many fields of modern life, such as the security, military, traffic, medical, and everyday fields. For example, security monitoring, intelligent transportation, medical image analysis, and unmanned driving may all be implemented using target detection techniques.
The basic flow of target detection generally comprises the following steps: first, preprocessing the image, including noise removal, enhancement, and the like; second, extracting features capable of representing the target object from the image by deep learning or other methods; third, designing a classifier and training it with training data so that different target objects can be identified; and finally, determining the position and size of the target object in the image by target positioning and bounding box regression.
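For illustration only, this generic flow can be sketched as follows; the stage functions are hypothetical placeholders, since the disclosure does not fix any particular method for them:

```python
def detect_objects(image, preprocess, extract_features, classify, localize):
    """Skeleton of the generic target detection flow described above.

    Each stage is supplied as a callable because the concrete method
    (deep learning or otherwise) is left open by the disclosure.
    """
    image = preprocess(image)            # noise removal, enhancement, ...
    features = extract_features(image)   # features representing the target
    labels = classify(features)          # classifier trained on labeled data
    boxes = localize(features, labels)   # target positioning / bbox regression
    return labels, boxes
```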
In summary, the present disclosure relates to techniques of image processing, object detection, and the like. Embodiments of the present disclosure will be further described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram illustrating a method 100 for image processing according to an embodiment of the present disclosure.
It should be appreciated that the method 100 for image processing of the present disclosure may be used to implement image enhancement in a variety of application scenarios, such as security monitoring, automatic driving, traffic condition monitoring, unmanned aerial vehicle (UAV) scene analysis, robotic vision, medical image analysis, and the like.
In step S110, a first tone scale distribution corresponding to the input image is obtained, where a tone scale range of the first tone scale distribution is a first range.
It should be appreciated that the method 100 for image processing of the present disclosure may process both grayscale images and color images.
In particular, according to embodiments of the present disclosure, the input image may be a gray scale image and the first tone scale distribution may be a gray scale distribution (e.g., a gray scale histogram).
According to a further embodiment of the present disclosure, the input image may also be a color image. In this case, a distribution corresponding to one channel of the input image may be acquired and used as the first tone scale distribution (for example, a tone scale histogram of that channel), where the channel is one of the red channel, the green channel, and the blue channel.
Note that, the tone scale distribution in the present disclosure may be in the form of a tone scale histogram (for example, the abscissa represents a tone scale value, and the ordinate represents the number of pixels corresponding to the tone scale value), or may be a tone scale probability distribution (for example, the abscissa represents a tone scale value, and the ordinate represents a probability corresponding to the tone scale value).
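As a minimal sketch of step S110, assuming 8-bit images and NumPy (the function names are illustrative, not part of the disclosure):

```python
import numpy as np

def tone_scale_histogram(image, channel=None):
    """Count the pixels at each tone scale value 0-255.

    `image` is an H x W (grayscale) or H x W x 3 (color) uint8 array;
    for a color image, `channel` selects the red, green, or blue channel.
    """
    values = image if image.ndim == 2 else image[:, :, channel]
    return np.bincount(values.ravel(), minlength=256)

def tone_scale_probability(hist):
    """Normalize a histogram into a tone scale probability distribution."""
    return hist / hist.sum()
```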
In step S120, a second range and a third range other than the second range are determined in the first range, wherein the third range includes a range of tone levels located on the left and/or right of the second range.
According to an embodiment of the present disclosure, the third range may be determined based on a product of a total number of pixels corresponding to the input image and a predetermined coefficient, wherein a number of pixels corresponding to a tone scale distribution within the third range is not greater than the product of the total number of pixels corresponding to the input image and the predetermined coefficient.
It should be appreciated that the predetermined coefficients may be determined empirically. Alternatively, a predetermined coefficient may be determined based on a neural network model. For example, the neural network model may be trained to determine its parameters, with processed image samples in which the target object is shown more clearly serving as positive samples. Suitable values of the predetermined coefficients may then be determined based on the trained neural network model, so that the target object in the processed image is displayed more clearly.
Alternatively, in a case where the third range includes tone scale ranges located on both the left and right sides of the second range: a first number of pixels may be determined based on a product of the total number of pixels corresponding to the input image and a first predetermined coefficient, and a fourth range located on the left side of the second range may be determined based on the first number of pixels, wherein the number of pixels corresponding to the tone scale distribution in the fourth range is not greater than the first number of pixels; and/or a second number of pixels may be determined based on a product of the total number of pixels corresponding to the input image and a second predetermined coefficient, and a fifth range located on the right side of the second range may be determined based on the second number of pixels, wherein the number of pixels corresponding to the tone scale distribution in the fifth range is not greater than the second number of pixels. The third range includes the fourth range and the fifth range.
For example, assuming that the input image has M pixels on the long side and N pixels on the wide side, the total number of pixels corresponding to the input image is M×N. The first number of pixels M×N×C1 may be determined based on a product of the total number of pixels M×N and a first predetermined coefficient C1, and the number of pixels in the fourth range located on the left side of the second range is not greater than the first number of pixels M×N×C1. The second number of pixels M×N×C2 may be determined based on a product of the total number of pixels M×N and a second predetermined coefficient C2, and the number of pixels in the fifth range located on the right side of the second range is not greater than the second number of pixels M×N×C2.
Alternatively, in a case where the third range includes only a tone scale range located on the left side of the second range, a third number of pixels may be determined based on a product of the total number of pixels corresponding to the input image and a third predetermined coefficient, and the third range located on the left side of the second range may be determined based on the third number of pixels, wherein the number of pixels corresponding to the tone scale distribution within the third range is not greater than the third number of pixels. For example, assuming that the input image has M pixels on the long side and N pixels on the wide side, the total number of pixels corresponding to the input image is M×N. The third number of pixels M×N×C3 may be determined based on a product of the total number of pixels M×N and a third predetermined coefficient C3, and the number of pixels in the third range located on the left side of the second range is not greater than the third number of pixels M×N×C3.
Alternatively, in a case where the third range includes only a tone scale range located on the right side of the second range, a fourth number of pixels may be determined based on a product of the total number of pixels corresponding to the input image and a fourth predetermined coefficient, and the third range located on the right side of the second range may be determined based on the fourth number of pixels, wherein the number of pixels corresponding to the tone scale distribution within the third range is not greater than the fourth number of pixels. For example, assuming that the input image has M pixels on the long side and N pixels on the wide side, the total number of pixels corresponding to the input image is M×N. The fourth number of pixels M×N×C4 may be determined based on a product of the total number of pixels M×N and the fourth predetermined coefficient C4, and the number of pixels in the third range located on the right side of the second range is not greater than the fourth number of pixels M×N×C4.
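Putting these cases together, the second range can be located by clipping pixel counts from the ends of the first range. The following is a sketch under the assumptions above (an 8-bit histogram; `c1` and `c2` play the role of the first and second predetermined coefficients, and the default values are illustrative only):

```python
import numpy as np

def determine_ranges(hist, c1=0.04, c2=0.05):
    """Find the second range [b, c] inside the first range [a, d].

    The fourth range [a, b) on the left holds at most total*c1 pixels and
    the fifth range (c, d] on the right holds at most total*c2 pixels.
    """
    total = hist.sum()                 # M * N for an M x N image
    nonzero = np.flatnonzero(hist)
    a, d = int(nonzero[0]), int(nonzero[-1])

    # Grow the fourth range rightward while it stays within its budget.
    left_budget, b = total * c1, a
    while b <= d and left_budget - hist[b] >= 0:
        left_budget -= hist[b]
        b += 1

    # Grow the fifth range leftward while it stays within its budget.
    right_budget, c = total * c2, d
    while c >= b and right_budget - hist[c] >= 0:
        right_budget -= hist[c]
        c -= 1

    return a, b, c, d                  # the second range is [b, c]
```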
In step S130, the first tone scale distribution in the second range is subjected to distribution stretching to obtain a second tone scale distribution whose tone scale minimum value is a first threshold value and whose tone scale maximum value is a second threshold value.
According to an embodiment of the disclosure, the tone scale values of the tone scale distribution in the second range may be linearly transformed to obtain a second tone scale distribution having a tone scale minimum value being a first threshold value and a tone scale maximum value being a second threshold value, where the first threshold value is smaller than the minimum value of the second range and the second threshold value is larger than the maximum value of the second range.
It should be appreciated that distribution stretching based on a linear transformation is described here only as an example; in fact, the distribution stretching may be implemented in other ways (e.g., by various functions that map the second range onto a sixth range whose minimum value is the first threshold value and whose maximum value is the second threshold value).
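For example, the linear variant of the stretch might look like this (a sketch; `t1` and `t2` stand for the first and second threshold values):

```python
import numpy as np

def stretch_second_range(values, b, c, t1=0, t2=255):
    """Linearly map tone scale values from the second range [b, c] onto
    [t1, t2], where t1 < b and t2 > c so that the stretch widens the range."""
    return t1 + (values.astype(np.float64) - b) * (t2 - t1) / (c - b)
```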
In step S140, a third tone scale distribution is obtained based on the tone scale distribution in the third range.
According to an embodiment of the present disclosure, in a case where the third range includes a tone scale range located on the left side of the second range, a tone scale value of a tone scale distribution within the third range located on the left side of the second range may be set as the first threshold value, and in a case where the third range includes a tone scale range located on the right side of the second range, a tone scale value of a tone scale distribution within the third range located on the right side of the second range may be set as the second threshold value.
It should be understood that, in order to make the enhancement effect on the image as noticeable as possible, the first threshold value may be set to 0 and the second threshold value to 255. In practice, however, the first threshold value may be any value less than the minimum value of the second range, and the second threshold value may be any value greater than the maximum value of the second range.
According to an embodiment of the present disclosure, in a case where the third range includes a tone scale range located on the left side of the second range, a tone scale value of the tone scale distribution within the third range located on the left side of the second range may instead be set as a third threshold value, and in a case where the third range includes a tone scale range located on the right side of the second range, a tone scale value of the tone scale distribution within the third range located on the right side of the second range may be set as a fourth threshold value, wherein the third threshold value is smaller than the first threshold value and the fourth threshold value is larger than the second threshold value.
In step S150, an output image is determined based on the second tone scale distribution and the third tone scale distribution.
According to an embodiment of the disclosure, when the input image is a color image, the distribution corresponding to the red channel is taken as the first tone scale distribution to determine a first output image, the distribution corresponding to the green channel is taken as the first tone scale distribution to determine a second output image, and the distribution corresponding to the blue channel is taken as the first tone scale distribution to determine a third output image. The first, second, and third output images may then be superimposed to determine an optimized color image.
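A sketch of this per-channel treatment, assuming NumPy and an RGB channel order (`enhance_channel` stands for the full pipeline of steps S110 to S150 applied to a single channel; a concrete version appears after the formula further below):

```python
import numpy as np

def enhance_color_image(image):
    """Enhance each RGB channel of an H x W x 3 uint8 image independently,
    then recombine the three output images into the optimized color image."""
    enhanced = [enhance_channel(image[:, :, k]) for k in range(3)]
    return np.stack(enhanced, axis=2)
```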
According to an embodiment of the present disclosure, a synthesized tone scale distribution may be obtained based on the second tone scale distribution and the third tone scale distribution, and the output image may be determined based on the synthesized tone scale distribution.
Optionally, the synthesized tone scale distribution may be further adjusted, and the output image may be determined based on the adjusted tone scale distribution. For example, the brightness range of the synthesized tone scale distribution may be adjusted: if the distribution is concentrated in a dark region (i.e., the corresponding image brightness is low), it may be shifted as a whole toward a brighter region to increase the brightness of the image; if the distribution is concentrated in a bright region (i.e., the corresponding image brightness is high), it may be shifted as a whole toward a darker region to prevent overexposure. The noise of the synthesized tone scale distribution may also be adjusted (e.g., denoised), and so forth.
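One possible form of the brightness adjustment (a sketch; the region boundaries and shift amount are illustrative assumptions, since the disclosure does not fix concrete values):

```python
import numpy as np

def adjust_brightness(image, dark=85, bright=170, shift=32):
    """Shift the tone scale distribution of a uint8 image toward a brighter
    region if it is concentrated in a dark region, or toward a darker
    region if it is concentrated in a bright region."""
    mean = image.mean()
    if mean < dark:        # concentrated in a dark region: brighten
        image = np.clip(image.astype(np.int16) + shift, 0, 255)
    elif mean > bright:    # concentrated in a bright region: darken
        image = np.clip(image.astype(np.int16) - shift, 0, 255)
    return image.astype(np.uint8)
```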
By processing an image with the method 100, the contrast of an image with any tone scale distribution can be effectively improved. That is, the method of the present disclosure can enhance contrast whether the tone scale distribution spans the entire range of 0-255 or occupies a narrower sub-range (e.g., the range of 100-200), whereas conventional contrast enhancement methods can only enhance contrast for images whose tone scale distribution occupies a sub-range of 0-255 (e.g., the range of 100-200) and cannot enhance contrast for images whose tone scale distribution already spans the entire range of 0-255. The processing of the method 100 makes the outlines and shapes in the image clearer and the colors more vivid, thereby enhancing the sense of layering of the picture. Further, when target object recognition is performed based on the processed image, more accurate results can be obtained more easily.
Since the tone scale values of the tone scale distribution in the third range are directly set to predetermined values (for example, the first threshold value or the second threshold value) in step S140 of the method 100, no further transformation is required for those tone scale values, so that the method 100 can reduce the complexity of image processing and increase the speed of image processing while improving the image contrast.
Fig. 2A is a schematic diagram illustrating a first tone scale distribution corresponding to an input image according to an embodiment of the present disclosure.
As shown in Fig. 2A, the first tone scale distribution of the input image ranges from a minimum value a to a maximum value d, and tone scale values b and c lie within this range, satisfying 0 < a < b < c < d < 255.
Alternatively, the range between the tone scale values b and c may be determined as the second range in the first range, and the range other than the second range as the third range (i.e., the third range includes a fourth range between a and b and a fifth range between c and d).
According to an embodiment of the present disclosure, the first number of pixels M×N×C1 may be determined based on a product of the total number of pixels corresponding to the input image (assuming that the long side of the input image has M pixels and the wide side has N pixels, the total number of pixels is M×N) and a first predetermined coefficient C1 (for example, 4%), and the fourth range may be determined based on the first number of pixels, wherein the number of pixels corresponding to the tone scale distribution in the fourth range is not greater than the first number of pixels. Similarly, a second number of pixels M×N×C2 may be determined based on a product of the total number of pixels M×N and a second predetermined coefficient C2 (for example, 5%), and the fifth range may be determined based on the second number of pixels, wherein the number of pixels corresponding to the tone scale distribution in the fifth range is not greater than the second number of pixels.
By performing distribution stretching on the first tone scale distribution in the second range, a second tone scale distribution (as shown in Fig. 2B) having a tone scale minimum value of 0 and a tone scale maximum value of 255 can be obtained.
By performing distribution stretching (for example, but not limited to, by linear transformation) on the first tone scale distribution in the second range, the tone scale distribution is no longer concentrated in the range between b and c but spread over a larger range between 0 and 255, thereby realizing contrast stretching.
Meanwhile, the tone scale values of the tone scale distribution located in the fourth range may be set to 0, and the tone scale values of the tone scale distribution located in the fifth range may be set to 255. The tone scale distribution of the third range (including the fourth range and the fifth range) is thereby converted into the tone scale distribution shown in Fig. 2C.
That is, the transformation of the first tone scale distribution can be achieved by the following formula:

$$f(x) = \begin{cases} 0, & a \le x < b \\ \dfrac{255\,(x - b)}{c - b}, & b \le x \le c \\ 255, & c < x \le d \end{cases}$$

where x represents the tone scale value before conversion, and f(x) represents the tone scale value after conversion.
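Applied to a whole channel, the piecewise transform might be implemented as follows (a sketch reusing `determine_ranges` from the earlier sketch; it assumes the strongest setting, with the first and second threshold values at 0 and 255, and b < c):

```python
import numpy as np

def enhance_channel(values, c1=0.04, c2=0.05):
    """Apply f(x) to one uint8 channel: the fourth range maps to 0, the
    second range [b, c] is stretched onto [0, 255], and the fifth range
    maps to 255."""
    hist = np.bincount(values.ravel(), minlength=256)
    a, b, c, d = determine_ranges(hist, c1, c2)
    x = values.astype(np.float64)
    out = np.where(x < b, 0.0,
          np.where(x > c, 255.0,
                   (x - b) * 255.0 / (c - b)))
    return out.astype(np.uint8)
```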
By superimposing the tone scale distribution shown in Fig. 2B and the tone scale distribution shown in Fig. 2C, the synthesized tone scale distribution shown in Fig. 2D can be obtained. Further, an output image may be determined based on the synthesized tone scale distribution. The output image has improved contrast compared to the input image.
Fig. 3 is a schematic diagram illustrating a method 300 for identifying a target object according to an embodiment of the present disclosure.
In step S310, a first grayscale image is acquired that includes the target object.
It should be appreciated that the identification of the target object may also be performed on the basis of a color image. However, in a case where the color of the target object is not a key distinguishing feature, target recognition based on a gray scale image can further reduce the complexity of image processing and increase the image processing speed.
In step S320, a first gray scale distribution corresponding to the input first gray scale image is obtained, where a gray scale range of the first gray scale distribution is a first range.
According to embodiments of the present disclosure, the gray distribution may be in the form of a gray histogram, i.e., the abscissa of the gray histogram represents the gray scale value (e.g., ranging from 0 to 255), and the ordinate represents the number of pixels to which the gray scale value corresponds.
It should be appreciated that the gray scale distribution of the image may be distributed throughout the range of 0-255, or may be distributed in other ranges between 0-255 (e.g., in the range of 100-200). For images with gray scale distribution in any range, the method 300 can effectively improve the contrast of the images and more accurately identify the target object.
In step S330, a second range and a third range other than the second range are determined in the first range, wherein the third range includes a gray scale range located on the left side and/or the right side of the second range.
That is, the second range may be a low-luminance range, a high-luminance range, or a medium-luminance range. In the case that the second range is any one of the three ranges, the method 300 can effectively improve the contrast of the image and more accurately identify the target object.
In step S340, the gray scale distribution in the second range is subjected to distribution stretching to obtain a second gray scale distribution with a gray scale minimum value being the first threshold value and a gray scale maximum value being the second threshold value.
It should be appreciated that the first threshold should be less than the minimum value of the second range and the second threshold should be greater than the maximum value of the second range.
In step S350, a third gray scale distribution is obtained based on the gray scale distribution in the third range.
According to an embodiment of the present disclosure, in a case where the third range includes a gray scale range located on the left side of the second range, a gray scale value of the gray scale distribution within the third range located on the left side of the second range may be set as the first threshold value, and in a case where the third range includes a gray scale range located on the right side of the second range, a gray scale value of the gray scale distribution within the third range located on the right side of the second range may be set as the second threshold value.
In step S360, a second gray scale image is determined based on the second gray scale distribution and the third gray scale distribution.
For example, a synthesized gray scale distribution may be obtained based on the second gray scale distribution and the third gray scale distribution, and the second gray scale image may be determined based on the synthesized gray scale distribution.
It should be understood that the processing procedure of steps S320 to S360 in fig. 3 may be similar to steps S110 to S150 in fig. 1, and will not be repeated here.
In step S370, the target object is identified based on the second gray level image.
It should be appreciated that the process of identifying a target object may be implemented based on a variety of target recognition methods. For example, the target recognition may be implemented based on a neural network model: the trained model identifies the target object in the second gray scale image, and the identification result may be displayed, where the result may include the position and size of the target object in the image (e.g., represented by a box around the position of the target object).
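As a sketch of this step (assuming OpenCV for drawing the result; `detect_fn` stands in for any trained recognition model returning (x, y, w, h) boxes, since the disclosure does not name a particular one):

```python
import cv2

def identify_target(gray_image, detect_fn):
    """Enhance the grayscale image (steps S320-S360, via enhance_channel
    from the earlier sketch), run the detector on the result, and draw a
    box at the position of each detected target object."""
    enhanced = enhance_channel(gray_image)
    boxes = detect_fn(enhanced)
    shown = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)
    for (x, y, w, h) in boxes:
        cv2.rectangle(shown, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return boxes, shown
```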
Fig. 4A is a view showing an image to be processed (here, a gray-scale image is illustrated as an example) according to an embodiment of the present disclosure.
As shown in Fig. 4A, a camera on the vehicle captures an image of a garage. Because the whole image is gray, the garage line closely resembles its surroundings; if parking were performed based on Fig. 4A, the garage line could not be clearly identified, and misjudgment would easily occur.
Fig. 4B is a schematic diagram of a tone scale distribution (here, a gray level histogram is illustrated as an example) corresponding to the image shown in fig. 4A.
Since the image shown in fig. 4A is entirely gray, as can be seen from fig. 4B, the gray scale values of the pixels corresponding to the image shown in fig. 4A are all concentrated in the middle.
The gray-level histogram shown in fig. 4B is processed by the method shown in fig. 1, and a processed gray-level histogram (i.e., as shown in fig. 4C) can be obtained.
As can be seen from fig. 4C, for the processed gray-scale histogram, the gray-scale values of the pixels can be distributed over a larger range. The processed image obtained based on fig. 4C is shown in fig. 4D. As can be seen from a comparison of fig. 4A and 4D, the contrast of the image is significantly improved and the sharpness of the garage line in the image is significantly increased after processing. Therefore, operations such as parking can be more easily realized based on the processed image, misjudgment is prevented, and driving safety is improved.
Fig. 5 is a composition diagram showing an image processing apparatus 500 according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the image processing apparatus 500 may include a tone scale distribution acquisition module 510, a range determination module 520, a tone scale distribution optimization module 530, and an image output module 540.
The image processing apparatus 500 may process a grayscale image or may process a color image.
Specifically, the tone scale distribution acquisition module 510 may be configured to acquire a first tone scale distribution corresponding to an input image, where a tone scale range of the first tone scale distribution is a first range.
The range determination module 520 may be configured to determine, in the first range, a second range and a third range other than the second range, wherein the third range includes a tone scale range located on the left side and/or the right side of the second range.
The tone scale distribution optimizing module 530 may be configured to perform distribution stretching on the tone scale distribution in the second range to obtain a second tone scale distribution having a tone scale minimum value of a first threshold value and a tone scale maximum value of a second threshold value, and obtain a third tone scale distribution based on the tone scale distribution in the third range, wherein in a case where the third range includes a tone scale range located at the left side of the second range, a tone scale value of the tone scale distribution in the third range located at the left side of the second range is set as the first threshold value, and in a case where the third range includes a tone scale range located at the right side of the second range, a tone scale value of the tone scale distribution in the third range located at the right side of the second range is set as the second threshold value.
The image output module 540 may be configured to determine an output image based on the second and third tone scale distributions.
It should be appreciated that the image processing apparatus 500 shown in fig. 5 may implement various image processing methods as described with respect to fig. 1, and will not be described in detail herein. The image processing apparatus 500 may be located on a server (for example, acquiring an input image transmitted from a user device via a network and transmitting a processed output image to the user device), or may be located on a user terminal (for example, acquiring an input image via a camera on the user device or acquiring an image in a memory of the user device as an input image and providing the processed output image to a display of the user device).
Fig. 6 is a composition diagram illustrating an apparatus 600 for identifying a target object according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the apparatus 600 for identifying a target object may include an image acquisition module 610, a gray distribution acquisition module 620, a range determination module 630, a gray distribution optimization module 640, an image output module 650, and a target identification module 660.
In particular, the image acquisition module 610 may be configured to acquire a first grayscale image containing the target object.
The gray distribution acquisition module 620 may be configured to acquire a first gray distribution corresponding to the input first gray image, wherein a gray range of the first gray distribution is a first range.
The range determination module 630 may be configured to determine, in the first range, a second range and a third range other than the second range, wherein the third range includes a gray scale range located on the left side and/or the right side of the second range.
The gray scale distribution optimizing module 640 may be configured to perform distribution stretching on the gray scale distribution in the second range to obtain a second gray scale distribution having a gray scale minimum value of a first threshold value and a gray scale maximum value of a second threshold value, and obtain a third gray scale distribution based on the gray scale distribution in the third range.
The image output module 650 may be configured to output a second gray scale image based on the second gray scale distribution and the third gray scale distribution.
The target recognition module 660 may be configured to recognize the target object based on the second gray scale image.
It should be noted that, the apparatus 600 for identifying a target object shown in fig. 6 may implement various image processing methods as described with respect to fig. 3, and will not be described herein. The apparatus 600 for identifying a target object may be located on a server or on a user terminal.
It should be appreciated that the image processing apparatus 500 and the apparatus 600 for identifying a target object may be deployed in scenarios such as security monitoring, automatic driving, traffic condition monitoring, unmanned aerial vehicle (UAV) scene analysis, robotic vision, and medical image analysis.
Fig. 7 is a composition diagram showing a vehicle assisted driving system 700 according to an embodiment of the present disclosure.
It should be appreciated that different types of vehicle assistance systems may be provided on a vehicle depending on the application scenario. For example, the vehicle assisted driving system 700 in the present disclosure may be an automatic driving (AD) system or an assisted driving (AP) system of a vehicle, which may be used to implement functions such as automatic tracking, automatic parking, assisted parking, and lane-line-crossing warning.
According to an embodiment of the present disclosure, the vehicle assisted driving system 700 may include an image acquisition module 710, an image processing module 720, and an image analysis module 730.
Specifically, the image acquisition module 710 may be configured to acquire an image of the surroundings of the vehicle and/or the interior of the vehicle and take the image as an input image to the image processing module.
It should be appreciated that the image acquisition module 710 may acquire images of the surroundings of the vehicle (e.g., lane lines, vehicles in front and back, etc.) and/or images of the interior of the vehicle (e.g., driver, dashboard, etc.) via a camera. For a low-speed application scene and a high-speed application scene, cameras with different performances can be selected.
The image acquired by the image acquisition module 710 may be either a gray scale image or a color image, depending on the desired performance.
The image processing module 720 may be configured to process the input image to obtain an optimized output image.
The image processing module 720 may process the image by any of the methods shown in fig. 1.
The image analysis module 730 may be configured to determine an assisted driving strategy of the vehicle based on the output image.
For example, the image analysis module 730 may provide an automatic parking service when a garage is detected based on the output image. Or, when it is recognized that the car is crossing a lane line, the image analysis module 730 may issue an alarm to alert the driver.
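For illustration, the decision logic of the image analysis module might be sketched as follows (the class labels and action names are hypothetical, and `detect_fn` again stands for a trained recognition model):

```python
def determine_driving_strategy(frame, detect_fn):
    """Map detections in the enhanced camera frame to assisted-driving
    actions; enhance_channel is reused from the earlier sketch."""
    enhanced = enhance_channel(frame)
    for label, box in detect_fn(enhanced):
        if label == "garage":
            return "start_automatic_parking"
        if label == "lane_line_crossed":
            return "alert_driver"
    return "no_action"
```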
By applying the method of the present disclosure in a vehicle assisted driving system, the images around and/or inside the vehicle can be analyzed more accurately to provide a more accurate assisted driving service, thereby making vehicle intelligence possible.
In general, the various example embodiments of the disclosure may be implemented in hardware or special purpose circuits, software, firmware, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While aspects of the embodiments of the present disclosure are illustrated or described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
For example, a method or apparatus according to embodiments of the present disclosure may also be implemented by means of the architecture of computing device 3000 shown in fig. 8. As shown in fig. 8, computing device 3000 may include a bus 3010, one or more CPUs 3020, a Read Only Memory (ROM) 3030, a Random Access Memory (RAM) 3040, a communication port 3050 connected to a network, an input/output component 3060, a hard disk 3070, and the like. A storage device in the computing device 3000, such as a ROM 3030 or hard disk 3070, may store various data or files for processing and/or communication of the methods provided by the present disclosure and program instructions for execution by the CPU. The computing device 3000 may also include a user interface 3080. Of course, the architecture shown in FIG. 8 is merely exemplary, and one or more components of the computing device shown in FIG. 8 may be omitted as may be practical in implementing different devices.
According to yet another aspect of the present disclosure, a computer-readable storage medium is also provided. The computer storage medium has computer-readable instructions stored thereon. When the computer-readable instructions are executed by a processor, the method according to the embodiments of the present disclosure described with reference to the above figures may be performed. The computer-readable storage medium in embodiments of the present disclosure may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. Volatile memory can be random access memory (RAM), which acts as external cache memory. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM), and direct Rambus random access memory (DR RAM). It should be noted that the memory of the methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Embodiments of the present disclosure also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. A processor of a computer device reads the computer instructions from a computer-readable storage medium, the processor executing the computer instructions, causing the computer device to perform a method according to an embodiment of the present disclosure.
In summary, the embodiments of the disclosure provide a method for image processing, which includes: obtaining a first tone scale distribution corresponding to an input image, wherein a tone scale range of the first tone scale distribution is a first range; determining, in the first range, a second range and a third range other than the second range, wherein the third range includes a tone scale range located on the left side and/or the right side of the second range; performing distribution stretching on the first tone scale distribution in the second range to obtain a second tone scale distribution whose tone scale minimum value is a first threshold value and whose tone scale maximum value is a second threshold value; obtaining a third tone scale distribution based on the tone scale distribution in the third range, wherein, in a case where the third range includes a tone scale range located on the left side of the second range, a tone scale value of the tone scale distribution within the third range located on the left side of the second range is set as the first threshold value, and in a case where the third range includes a tone scale range located on the right side of the second range, a tone scale value of the tone scale distribution within the third range located on the right side of the second range is set as the second threshold value; and determining an output image based on the second tone scale distribution and the third tone scale distribution.
The method for image processing can effectively improve the contrast of an image with any tone scale distribution while reducing the complexity of image processing, thereby increasing the speed of image processing. After an image is processed by this method, its contrast is improved, so that the outlines and shapes in the image are clearer and the colors more vivid, enhancing the sense of layering of the picture; therefore, more accurate results can be obtained more easily when a target object is identified based on the processed image.
It is noted that the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises at least one executable instruction for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The present disclosure uses specific words to describe embodiments of the disclosure. Terms such as "an embodiment," "one embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic is associated with at least one embodiment of the present disclosure. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments of the present disclosure may be combined as appropriate.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.