
CN107784631B - Image deblurring method and device


Info

Publication number
CN107784631B
Authority
CN
China
Prior art keywords
region
image
size
processed
pixel point
Prior art date
Legal status
Active
Application number
CN201610718848.7A
Other languages
Chinese (zh)
Other versions
CN107784631A (en)
Inventor
李芳
陈兵
王军
Current Assignee
Shenzhen Longhorn Security and Technology Co Ltd
Original Assignee
Shenzhen Longhorn Security and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Longhorn Security and Technology Co Ltd filed Critical Shenzhen Longhorn Security and Technology Co Ltd
Priority to CN201610718848.7A priority Critical patent/CN107784631B/en
Publication of CN107784631A publication Critical patent/CN107784631A/en
Application granted granted Critical
Publication of CN107784631B publication Critical patent/CN107784631B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/73 — Deblurring; Sharpening
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20021 — Dividing image into blocks, subimages or windows
    • G06T2207/20212 — Image combination
    • G06T2207/20221 — Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an image deblurring method and device: an image to be processed is divided into several regions based on the depth information of its pixel points, a blur kernel is calculated for each region, each region is deblurred using its own blur kernel, and the deblurred regions are edge-fused to obtain the deblurred image, overcoming the ringing and distortion problems of deblurred images.

Description

Image deblurring method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image deblurring method and an image deblurring device.
Background
Image blur typically results from relative motion between the camera and the scene (e.g., camera shake) during exposure. The arbitrariness of that motion, the diversity of natural images, and the complexity of various noises make image deblurring very difficult.
The purpose of image deblurring is to re-estimate the original image from the observed blurred image. Image deblurring falls into two main categories: non-blind and blind. If the degradation process of the image is known, i.e., the blur kernel is known, the problem is called non-blind image deblurring; it has been well studied, and many techniques obtain very sharp solutions. If the blur kernel is unknown, the problem is called blind image deblurring. Blind deblurring is harder because less prior knowledge is available, but it better matches practical requirements, which has made it a focus of current research.
Since both the original sharp image and the blur kernel are unknown in blind deblurring, all practical solutions must make assumptions in advance about the blur kernel or the sharp image to be restored. The inventors found that existing blind deblurring methods model the blur kernel with a single simple parametric model, so the deblurring effect in certain areas of the image is poor and the deblurred image often exhibits ringing and distortion.
Disclosure of Invention
The invention aims to provide an image deblurring method and device that overcome the ringing and distortion problems of deblurred images.
To this end, the invention provides the following technical solution:
an image deblurring method, comprising:
estimating the depth of each pixel point of the image to be processed;
dividing the image to be processed into a plurality of areas based on the depth;
respectively calculating a blur kernel for each region;
for each region, deblurring the region based on the blur kernel of the region;
and performing edge fusion on the deblurred region to obtain a deblurred image.
In the above method, preferably, the estimating the depth of each pixel point of the image to be processed includes:
obtaining a gray level image of an image to be processed;
calculating the gradient of each pixel point in the gray level image;
for each pixel point, the depth of the pixel point is calculated based on the gradient of the pixel point.
The above method, preferably, the dividing the image to be processed into several regions based on the depth includes:
roughly dividing the image to be processed based on the depth of each pixel point to obtain a plurality of first-level regions;
if a first-level region satisfying a preset condition exists among the first-level regions, acquiring several pixel points on the outermost periphery of the first-level region satisfying the preset condition; and subdividing the first-level region satisfying the preset condition based on the several outermost pixel points to obtain a plurality of second-level regions.
In the method, preferably, the roughly dividing the image to be processed based on the depth of each pixel point includes:
and marking the pixel points with the same depth in the neighborhood as a region.
In the above method, preferably, a first-level region satisfies the preset condition if:
the ratio of the number of pixels of the first-level region to the number of pixels of the image to be processed is greater than a preset ratio threshold; or,
the first-level region includes specific identification information, the specific identification information being manually labeled.
The above method, preferably, the calculating a blur kernel for each region respectively includes:
calculating the degree of blur of each region;
for each region, if the blur degree of the region is smaller than a first threshold, determining the blur kernel size of the region to be a first size; if the blur degree of the region is greater than a second threshold, determining the blur kernel size of the region to be a second size; if the blur degree of the region is greater than or equal to the first threshold and less than or equal to the second threshold, determining the blur kernel size of the region to be a third size; and calculating the blur kernel of the region based on the blur kernel size of the region;
wherein the first threshold is less than the second threshold; the first size is greater than the third size, which is greater than the second size.
An image deblurring apparatus, comprising:
the estimation module is used for estimating the depth of each pixel point of the image to be processed;
the region dividing module is used for dividing the image to be processed into a plurality of regions based on the depth;
the calculation module is used for calculating a blur kernel for each region respectively;
the processing module is used for deblurring each region based on the blur kernel of the region;
and the fusion module is used for carrying out edge fusion on the deblurred region to obtain a deblurred image.
The above apparatus, preferably, the estimating module includes:
an obtaining unit configured to obtain a grayscale image of an image to be processed;
the first calculating unit is used for calculating the gradient of each pixel point in the gray level image;
and the second calculation unit is used for calculating the depth of each pixel point based on the gradient of the pixel point.
Preferably, in the above apparatus, the region dividing module includes:
a first dividing unit, configured to roughly divide the image to be processed based on the depth of each pixel point to obtain a plurality of first-level regions;
a second dividing unit, configured to, if a first-level region satisfying a preset condition exists among the first-level regions, acquire several pixel points on the outermost periphery of the first-level region satisfying the preset condition, and subdivide that region based on the several outermost pixel points to obtain a plurality of second-level regions.
Preferably, in the device, the first dividing unit is specifically configured to mark the pixel points with the same depth in the neighborhood as a region.
The above apparatus, preferably, the calculation module includes:
a third calculation unit, configured to calculate the degree of blur of each region;
a fourth calculation unit, configured to, for each region, determine the blur kernel size of the region to be a first size if the blur degree of the region is smaller than a first threshold; a second size if the blur degree of the region is greater than a second threshold; and a third size if the blur degree of the region is greater than or equal to the first threshold and less than or equal to the second threshold; and to calculate the blur kernel of the region based on the blur kernel size of the region;
wherein the first threshold is less than the second threshold; the first size is greater than the third size, which is greater than the second size.
According to the above scheme, the image to be processed is divided into several regions based on pixel depth, a blur kernel is calculated for each region, each region is deblurred based on its blur kernel, and the deblurred regions are edge-fused to obtain the deblurred image, overcoming the ringing and distortion problems of deblurred images.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an implementation of an image deblurring method according to an embodiment of the present invention;
fig. 2 is a flowchart of an implementation of estimating depths of pixel points of an image to be processed according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image deblurring apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an estimation module according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a region partitioning module according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computing module according to an embodiment of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an implementation of an image deblurring method according to an embodiment of the present invention, which may include:
step S11: estimating the depth of each pixel point of the image to be processed;
step S12: dividing an image to be processed into a plurality of regions based on the depth of the pixel points;
in the embodiment of the invention, when the region of the image to be processed is divided, the division is carried out according to the depth of the pixel points.
Step S13: respectively calculating a blur kernel for each region;
In the embodiment of the invention, a single blur kernel is not calculated for the whole image to be processed; instead, a blur kernel is calculated per region, and the blur kernels of different regions may be the same or different. Since the region division is based on pixel depth, the blur kernel of a region is associated with the depth of that region.
Step S14: for each region, deblurring the region based on the blur kernel of the region;
In the embodiment of the invention, deblurring is performed region by region. Optionally, each region may be deblurred based on a total-variation (TV) deconvolution algorithm.
Step S15: and performing edge fusion on the deblurred region to obtain a deblurred image.
The deblurred regions may be edge-fused based on commonly used edge-fusion algorithms.
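The patent defers to commonly used edge-fusion algorithms without naming one; a simple option is distance-weighted (feathered) blending of the per-region deblurred results, sketched below under that assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_regions(deblurred, masks, sigma=3.0):
    """Blend per-region deblurred images with Gaussian-feathered weights.

    deblurred: list of full-size images, each deblurred with one region's
    blur kernel; masks: the matching boolean region masks. The feathering
    width sigma is an illustrative choice, not a value from the patent.
    """
    weights = [gaussian_filter(m.astype(np.float64), sigma) for m in masks]
    total = np.sum(weights, axis=0) + 1e-12        # avoid division by zero
    fused = np.zeros_like(deblurred[0], dtype=np.float64)
    for img, w in zip(deblurred, weights):
        fused += img * (w / total)                 # soft transitions at region edges
    return fused
```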
The image deblurring method provided by the embodiment of the invention divides the image to be processed into several regions based on pixel depth, calculates a blur kernel for each region, deblurs each region based on its blur kernel, and edge-fuses the deblurred regions to obtain the deblurred image, overcoming the ringing and distortion problems of deblurred images.
Optionally, an implementation flowchart for estimating the depth of each pixel point of the image to be processed, provided in the embodiment of the present invention, is shown in fig. 2, and may include:
step S21: obtaining a gray level image of an image to be processed;
and if the image to be processed is a color space image, converting the image to be processed into a gray image. For example, if the image to be processed is an RGB color image, a gray scale image of the image to be processed can be obtained by using formula (1).
I(i,j)=0.299*R(i,j)+0.578G(i,j)+0.114*B(i,j) (1)
Wherein I (I, j) represents the gray level of a pixel point at (I, j) in the image to be processed; r (i, j) represents the red component of the pixel point at (i, j) in the image to be processed; g (i, j) represents the green component of the pixel point at (i, j) in the image to be processed; and B (i, j) represents the blue component of the pixel point at (i, j) in the image to be processed.
If the image to be processed is an image in other color space, the calculation can be performed according to the corresponding published formula for calculating the gray scale of the image.
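For concreteness, a minimal NumPy sketch of the formula (1) conversion, assuming an H×W×3 array with channels in R, G, B order:

```python
import numpy as np

def to_grayscale(rgb):
    """Formula (1): luminance from an (H, W, 3) array in R, G, B order."""
    rgb = rgb.astype(np.float64)
    # standard BT.601 luma weights, matching formula (1)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```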
If the image to be processed is a gray scale image, step S22 may be directly performed.
Step S22: calculating the gradient of each pixel point in the gray level image;
For each pixel point, the partial derivatives in the x direction, the y direction, the 45° direction and the 135° direction are calculated from the gray values of the pixels in its 8-neighborhood; wherein,
the partial derivative of the pixel point at (i,j) in the x direction is: Px[i,j] = I[i+1,j] - I[i-1,j];
the partial derivative of the pixel point at (i,j) in the y direction is: Py[i,j] = I[i,j+1] - I[i,j-1];
the partial derivative of the pixel point at (i,j) in the 45° direction is: P45°[i,j] = I[i-1,j+1] - I[i+1,j-1];
the partial derivative of the pixel point at (i,j) in the 135° direction is: P135°[i,j] = I[i+1,j+1] - I[i-1,j-1].
The gradient of the pixel point is then the 2-norm of the partial derivatives in the four directions, as shown in formula (2):
M(i,j) = sqrt( Px[i,j]^2 + Py[i,j]^2 + P45°[i,j]^2 + P135°[i,j]^2 )    (2)
where M(i,j) is the gradient of the pixel point at (i,j).
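The four directional differences and the formula (2) gradient can be computed in vectorized form; the sketch below replicates edge pixels at the borders, a detail the patent leaves unspecified.

```python
import numpy as np

def gradient_map(I):
    """Formula (2): 2-norm of the four directional differences.

    Borders are handled by edge replication, an assumption the
    patent does not specify.
    """
    Ip = np.pad(I.astype(np.float64), 1, mode='edge')
    Px   = Ip[2:, 1:-1] - Ip[:-2, 1:-1]    # I[i+1,j] - I[i-1,j]
    Py   = Ip[1:-1, 2:] - Ip[1:-1, :-2]    # I[i,j+1] - I[i,j-1]
    P45  = Ip[:-2, 2:]  - Ip[2:, :-2]      # I[i-1,j+1] - I[i+1,j-1]
    P135 = Ip[2:, 2:]   - Ip[:-2, :-2]     # I[i+1,j+1] - I[i-1,j-1]
    return np.sqrt(Px**2 + Py**2 + P45**2 + P135**2)
```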
Step S23: for each pixel point, the depth of the pixel point is calculated based on the gradient of the pixel point.
Optionally, since the scene has a certain depth and scene details mainly reside in the gray-level gradient information of the image, the depth of the pixel point at (i,j) may be calculated based on formula (3).
[Formula (3) appears only as an image in the original document; per the surrounding text, it computes the depth d(i,j) from the gradient M(i,j) using two gradient correction coefficients c1 and c2.]
where d(i,j) is the depth of the pixel point at (i,j); M(i,j) is the gradient of the pixel point at (i,j); c1 and c2 are gradient correction coefficients with 0 < c1 < 1 and 0 < c2 < 1, and the values of c1 and c2 may be the same or different; i = 1, 2, ..., m; j = 1, 2, ..., n; m is the number of rows of pixels in the image to be processed; n is the number of columns of pixels in the image to be processed.
The specific values of c1 and c2 can be determined according to the actual application scenario, customer requirements and experimental results.
Optionally, the division of the image to be processed into several regions based on pixel depth may be implemented as follows:
roughly dividing the image to be processed based on the depth of each pixel point to obtain a plurality of first-level regions;
if a first-level region satisfying a preset condition exists among the first-level regions, acquiring several pixel points on the outermost periphery of that region, and subdividing it based on those outermost pixel points to obtain a plurality of second-level regions.
That is, in the embodiment of the invention, the image to be processed is first divided into regions once based on pixel depth; if any resulting region satisfies the preset condition, that region is subdivided a second time. If no region satisfies the preset condition after the first division, no second division is performed. Therefore, after region division, the image to be processed contains only first-level regions, only second-level regions, or both first-level and second-level regions.
Optionally, an implementation manner for roughly dividing the image to be processed based on the depth of each pixel point provided by the embodiment of the present invention may be:
and marking the pixel points with the same depth in the neighborhood as a region.
Wherein, the depth of two pixel points is the same can include: the depth values of the two pixel points are absolutely equal, or the difference value of the depth values of the two pixel points is within a preset range.
The degree of blur is the same for pixels at the same depth.
Optionally, the pixel point in the neighborhood of one pixel point may refer to an 8-neighborhood pixel point of the pixel point, where the neighborhood of the edge pixel point refers to a pixel point adjacent to the pixel point in the horizontal direction, the vertical direction, and the diagonal direction.
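A minimal sketch of this coarse division: breadth-first region growing over the 8-neighborhood, with a tolerance tol standing in for the unspecified "preset range" (tol = 0 requires exactly equal depth values).

```python
import numpy as np
from collections import deque

def coarse_regions(depth, tol=0.0):
    """Label 8-connected pixels whose depths differ by at most tol.

    tol stands in for the patent's unspecified "preset range"; note that
    grouping is transitive along chains of neighbouring pixels.
    """
    h, w = depth.shape
    labels = np.full((h, w), -1, dtype=np.int32)
    n_regions = 0
    for si in range(h):
        for sj in range(w):
            if labels[si, sj] != -1:
                continue
            labels[si, sj] = n_regions
            queue = deque([(si, sj)])
            while queue:                         # breadth-first region growing
                i, j = queue.popleft()
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < h and 0 <= nj < w
                                and labels[ni, nj] == -1
                                and abs(depth[ni, nj] - depth[i, j]) <= tol):
                            labels[ni, nj] = n_regions
                            queue.append((ni, nj))
            n_regions += 1
    return labels, n_regions
```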
The inventors found that, for rigid objects, pixels at the same depth share the same degree of blur; for non-rigid objects such as pedestrians, pixels at different positions may have different degrees of blur even at the same depth. Regions containing non-rigid objects (especially pedestrians) are therefore subdivided further.
Optionally, an implementation manner that the first-level region provided in the embodiment of the present invention satisfies the preset condition may be:
the ratio of the number of pixels of the first-stage area to the number of pixels of the image to be processed is larger than a preset ratio threshold.
In the embodiment of the present invention, if the ratio of the number of pixels in the first-level region to the number of pixels in the image to be processed is greater than the preset ratio threshold, it indicates that the region is large, and further partitioning is required.
Optionally, another implementation manner that the first-level region provided in the embodiment of the present invention satisfies the preset condition may be:
the first-level region includes specific identification information, and the specific identification information is calibrated by human, for example, when a relevant person observes that a certain first-level region contains a relatively large non-rigid object (e.g., a person, a tree, etc.) by naked eyes, the specific identification information is calibrated for the first-level region.
Optionally, the several outermost pixel points of the first-level region satisfying the preset condition may be obtained as follows:
relevant personnel manually select several pixel points from the outermost pixels of that region;
or, a computer automatically selects several of the outermost pixels of that region according to a certain rule.
Optionally, 7 pixel points can be selected from the outermost pixels of the first-level region satisfying the preset condition; for convenience of description, the 7 pixel points are denoted (x1,y1), (x2,y2), (x3,y3), (x4,y4), (x5,y5), (x6,y6), (x7,y7). These 7 pixel points correspond, in order, to the top (top of head), upper left (left shoulder), upper right (right shoulder), lower left (left hand), lower right (right hand), bottom left (left foot) and bottom right (right foot) of a human body. Based on these 7 coordinates, the first-level region is divided into at least 6 regions: the head is one region, the upper torso one region, the left and right arms one region each, and the left and right legs one region each. The corresponding division, implemented as shown in the sketch below, is as follows:
the area between row x1 and row (x2+x3)/2 is taken as a new region, namely the head region;
the area whose rows lie between (x2+x3)/2 and (x6+x7)/4+(x2+x3)/4 and whose columns lie between y2 and y3 is taken as a new region, namely the upper torso region;
the area whose columns lie between y4 and y2 is taken as a new region, namely the left arm region;
the area whose columns lie between y3 and y5 is taken as a new region, namely the right arm region;
the area whose rows lie between (x6+x7)/4+(x2+x3)/4 and x6 and whose columns lie between y6 and (y2+y3)/2 is taken as a new region, namely the left leg region;
the area whose rows lie between (x6+x7)/4+(x2+x3)/4 and x7 and whose columns lie between (y2+y3)/2 and y7 is taken as a new region, namely the right leg region.
If a new region overlaps an already confirmed region, the overlapping part is removed from the newly determined region; that is, the overlapping part belongs to the already confirmed region.
After the subdivision by the above method, if any area of the first-level region satisfying the preset condition remains, the remaining area is taken as a new region.
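The six-way split can be expressed directly as row/column masks over the region. The sketch below assumes rows grow downward and landmarks are given as (row, column) pairs in the order listed above; it applies the stated overlap rule (earlier regions win) and the catch-all rule for leftover pixels.

```python
import numpy as np

def subdivide_person(region_mask, pts):
    """Split a first-level region into the six body sub-regions.

    region_mask: boolean mask of the region; pts: the seven landmarks
    [(x1,y1), ..., (x7,y7)] as (row, column) pairs ordered head top,
    left/right shoulder, left/right hand, left/right foot.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5), (x6, y6), (x7, y7) = pts
    rows, cols = np.indices(region_mask.shape)
    shoulder = (x2 + x3) / 2                     # head / torso boundary row
    hip = (x6 + x7) / 4 + (x2 + x3) / 4          # torso / legs boundary row
    mid = (y2 + y3) / 2                          # column between the legs

    candidates = [
        (rows >= x1) & (rows <= shoulder),                                # head
        (rows > shoulder) & (rows <= hip) & (cols >= y2) & (cols <= y3),  # torso
        (cols >= y4) & (cols < y2),                                       # left arm
        (cols > y3) & (cols <= y5),                                       # right arm
        (rows > hip) & (rows <= x6) & (cols >= y6) & (cols <= mid),       # left leg
        (rows > hip) & (rows <= x7) & (cols > mid) & (cols <= y7),        # right leg
    ]
    taken = np.zeros_like(region_mask)
    sub_regions = []
    for cand in candidates:
        sub = region_mask & cand & ~taken        # overlap stays with earlier region
        sub_regions.append(sub)
        taken |= sub
    leftover = region_mask & ~taken              # any remaining area is a new region
    if leftover.any():
        sub_regions.append(leftover)
    return sub_regions
```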
Optionally, the blur kernel of each region may be calculated as follows in the embodiment of the present invention:
calculating the degree of blur of each region;
for each region, the sum of squares of the image differences in the row direction and the column direction of the region can be used as the blur measure; the calculation is shown in formula (4):
s = Σ_{n=1}^{N} [ (I(i+1,j) − I(i,j))^2 + (I(i,j+1) − I(i,j))^2 ]    (4)
where s represents the calculated blur measure of the region; N represents the number of pixel points in the region; and (i,j) represents the coordinates of the nth pixel point in the image to be processed.
For each region, if the blur measure of the region is smaller than a first threshold, the blur kernel size of the region is determined to be a first size; if the blur measure is greater than a second threshold, the blur kernel size is determined to be a second size; if the blur measure is greater than or equal to the first threshold and less than or equal to the second threshold, the blur kernel size is determined to be a third size; the blur kernel of the region is then calculated based on its blur kernel size;
wherein the first threshold is less than the second threshold; the first size is greater than the third size, which is greater than the second size.
As can be seen from formula (4), the larger the degree of gray-level change, the larger the blur measure s and the sharper the image; the smaller the degree of gray-level change, the smaller the blur measure s and the more blurred the image.
When the blur measure s is smaller than the first threshold, the blur kernel size of the region is k1×k1; when s is larger than the second threshold, the blur kernel size is k2×k2; when s is greater than or equal to the first threshold and less than or equal to the second threshold, the blur kernel size is k3×k3.
Here the first threshold is less than the second threshold, and k1 > k3 > k2. The specific values of the two thresholds can be determined according to the actual application scenario, customer requirements and experimental results.
The blur kernel calculation thus proceeds by computing the blur measure, determining the blur kernel size from it, and then estimating the blur kernel at that size. Unlike the prior art, the blur kernel is computed per region in the embodiment of the invention: regions with different blur measures may receive different kernel sizes, and hence different blur kernels.
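As a concrete illustration, the following minimal sketch computes the formula (4) blur measure for one region and maps it to a kernel size. The thresholds and sizes (t1, t2, k1, k2, k3) are application-tuned values the patent leaves open, so they appear here as arguments.

```python
import numpy as np

def blur_measure(I, mask):
    """Formula (4): sum of squared row- and column-direction differences
    over one region (differences at the last row/column are zero)."""
    If = I.astype(np.float64)
    d_row = np.diff(If, axis=0, append=If[-1:, :])   # I[i+1,j] - I[i,j]
    d_col = np.diff(If, axis=1, append=If[:, -1:])   # I[i,j+1] - I[i,j]
    return float(np.sum((d_row ** 2 + d_col ** 2)[mask]))

def kernel_size(s, t1, t2, k1, k2, k3):
    """Map the blur measure s to a kernel side length, with t1 < t2
    and k1 > k3 > k2 as required by the patent."""
    if s < t1:
        return k1    # small s: heavily blurred region, largest kernel
    if s > t2:
        return k2    # large s: sharp region, smallest kernel
    return k3        # intermediate blur: medium kernel
```

With the size in hand, the kernel itself is then estimated at that support by a standard kernel-estimation routine, which the patent does not further specify.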
Corresponding to the method embodiment, an embodiment of the present invention further provides an image deblurring apparatus, and a schematic structural diagram of the image deblurring apparatus provided in the embodiment of the present invention is shown in fig. 3, and may include:
an estimation module 31, a region division module 32, a calculation module 33, a processing module 34 and a fusion module 35; wherein,
the estimation module 31 is configured to estimate depths of each pixel point of the image to be processed;
the region dividing module 32 is configured to divide the image to be processed into a plurality of regions based on the depths of the pixel points;
the calculation module 33 is configured to calculate a blur kernel for each region respectively;
the processing module 34 is configured to, for each region, deblur the region based on the blur kernel of the region;
the fusion module 35 is configured to perform edge fusion on the deblurred region to obtain a deblurred image.
The image deblurring device provided by the embodiment of the invention divides the image to be processed into several regions based on pixel depth, calculates a blur kernel for each region, deblurs each region based on its blur kernel, and edge-fuses the deblurred regions to obtain the deblurred image, overcoming the ringing and distortion problems of deblurred images.
Optionally, a schematic structural diagram of the estimation module 31 provided in the embodiment of the present invention is shown in fig. 4, and may include: an obtaining unit 41, a first calculating unit 42 and a second calculating unit 43; wherein,
the obtaining unit 41 is configured to obtain a grayscale image of an image to be processed;
the first calculating unit 42 is configured to calculate gradients of each pixel point in the grayscale image;
the second calculating unit 43 is configured to calculate, for each pixel point, a depth of the pixel point based on a gradient of the pixel point.
Optionally, as shown in fig. 5, a schematic structural diagram of the area dividing module 32 according to the embodiment of the present invention may include: a first dividing unit 51 and a second dividing unit 52; wherein,
the first dividing unit 51 is configured to roughly divide the image to be processed based on the depth of each pixel point to obtain a plurality of first-level regions;
the second dividing unit 52 is configured to, if a first-level region satisfying a preset condition exists among the first-level regions, acquire several pixel points on the outermost periphery of the first-level region satisfying the preset condition, and subdivide that region based on the several outermost pixel points to obtain a plurality of second-level regions.
Optionally, the first dividing unit 51 may be specifically configured to mark pixel points with the same depth in the neighborhood as a region.
Optionally, an implementation manner that the first-level region provided in the embodiment of the present invention satisfies the preset condition may be:
the ratio of the number of pixels of the first-stage area to the number of pixels of the image to be processed is larger than a preset ratio threshold.
In the embodiment of the present invention, if the ratio of the number of pixels in the first-level region to the number of pixels in the image to be processed is greater than the preset ratio threshold, it indicates that the region is large, and further partitioning is required.
Optionally, another implementation manner that the first-level region provided in the embodiment of the present invention satisfies the preset condition may be:
the first-level region includes specific identification information that is manually labeled; for example, when relevant personnel observe with the naked eye that a first-level region contains a relatively large non-rigid object (e.g., a person or a tree), the specific identification information is assigned to that region.
Optionally, a schematic structural diagram of the calculation module 33 provided in the embodiment of the present invention is shown in fig. 6, and may include: a third calculation unit 61 and a fourth calculation unit 62; wherein,
the third calculation unit 61 is configured to calculate the degree of blur of each region;
the fourth calculation unit 62 is configured to, for each region, determine the blur kernel size of the region to be a first size if the blur measure of the region is smaller than a first threshold; a second size if the blur measure is greater than a second threshold; and a third size if the blur measure is greater than or equal to the first threshold and less than or equal to the second threshold; and to calculate the blur kernel of the region based on the blur kernel size of the region;
wherein the first threshold is less than the second threshold; the first size is greater than the third size, which is greater than the second size.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems (if any), apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An image deblurring method, comprising:
estimating the depth of each pixel point of the image to be processed;
dividing the image to be processed into a plurality of areas based on the depth;
respectively calculating a blur kernel for each region;
for each region, deblurring the region based on the blur kernel of the region;
performing edge fusion on the deblurred region to obtain a deblurred image;
wherein the dividing the image to be processed into a number of regions based on the depth comprises:
roughly dividing the image to be processed based on the depth of each pixel point to obtain a plurality of first-level regions;
if a first-level region satisfying a preset condition exists among the plurality of first-level regions, acquiring a plurality of pixel points on the outermost periphery of the first-level region satisfying the preset condition; and subdividing the first-level region satisfying the preset condition based on the plurality of outermost pixel points to obtain a plurality of second-level regions;
wherein the first-level region satisfying the preset condition comprises:
the ratio of the number of pixels of the first-level region to the number of pixels of the image to be processed being greater than a preset ratio threshold; or the first-level region including specific identification information, the specific identification information being manually labeled; the specific identification information comprising identification information labeled for the first-level region when the first-level region contains a non-rigid object meeting a predetermined condition.
2. The method of claim 1, wherein estimating the depth of each pixel point of the image to be processed comprises:
obtaining a gray level image of an image to be processed;
calculating the gradient of each pixel point in the gray level image;
for each pixel point, the depth of the pixel point is calculated based on the gradient of the pixel point.
3. The method according to claim 1, wherein the coarsely dividing the image to be processed based on the depth of each pixel point comprises:
and marking the pixel points with the same depth in the neighborhood as a region.
4. The method of claim 1, wherein the separately calculating the blur kernel for each region comprises:
calculating the degree of blur of each region; for each region, taking the sum of squares of the image differences in the row direction and the column direction of the region as the blur measure, calculated as:
s = Σ_{n=1}^{N} [ (I(i+1,j) − I(i,j))^2 + (I(i,j+1) − I(i,j))^2 ]    (4)
where s represents the calculated blur measure of the region; N represents the number of pixel points in the region; (i,j) represents the coordinates of the nth pixel point in the image to be processed; and I(i,j) represents the gray level of the pixel point at (i,j) in the image to be processed;
for each region, if the blur measure of the region is smaller than a first threshold, determining the blur kernel size of the region to be a first size; if the blur measure of the region is greater than a second threshold, determining the blur kernel size of the region to be a second size; if the blur measure of the region is greater than or equal to the first threshold and less than or equal to the second threshold, determining the blur kernel size of the region to be a third size; and calculating the blur kernel of the region based on the blur kernel size of the region;
wherein the first threshold is less than the second threshold; the first size is greater than the third size, which is greater than the second size.
5. An image deblurring apparatus, comprising:
the estimation module is used for estimating the depth of each pixel point of the image to be processed;
the region dividing module is used for dividing the image to be processed into a plurality of regions based on the depth;
the calculation module is used for calculating a blur kernel for each region respectively;
the processing module is used for deblurring each region based on the blur kernel of the region;
the fusion module is used for carrying out edge fusion on the deblurred region to obtain a deblurred image;
wherein the region dividing module comprises:
a first dividing unit, configured to roughly divide the image to be processed based on the depth of each pixel point to obtain a plurality of first-level regions;
a second dividing unit, configured to, if a first-level region satisfying a preset condition exists among the plurality of first-level regions, acquire a plurality of pixel points on the outermost periphery of the first-level region satisfying the preset condition, and subdivide the first-level region satisfying the preset condition based on the plurality of outermost pixel points to obtain a plurality of second-level regions;
wherein the first-level region satisfying the preset condition comprises:
the ratio of the number of pixels of the first-level region to the number of pixels of the image to be processed being greater than a preset ratio threshold; or the first-level region including specific identification information, the specific identification information being manually labeled; the specific identification information comprising identification information labeled for the first-level region when the first-level region contains a non-rigid object meeting a predetermined condition.
6. The apparatus of claim 5, wherein the estimation module comprises:
an obtaining unit configured to obtain a grayscale image of an image to be processed;
the first calculating unit is used for calculating the gradient of each pixel point in the gray level image;
and the second calculation unit is used for calculating the depth of each pixel point based on the gradient of the pixel point.
7. The apparatus according to claim 5, wherein the first partition unit is specifically configured to mark pixels with the same depth in a neighborhood as a region.
8. The apparatus of claim 5, wherein the calculation module comprises:
a third calculation unit, configured to calculate the degree of blur of each region, taking, for each region, the sum of squares of the image differences in the row direction and the column direction of the region as the blur measure, calculated as:
s = Σ_{n=1}^{N} [ (I(i+1,j) − I(i,j))^2 + (I(i,j+1) − I(i,j))^2 ]    (4)
where s represents the calculated blur measure of the region; N represents the number of pixel points in the region; (i,j) represents the coordinates of the nth pixel point in the image to be processed; and I(i,j) represents the gray level of the pixel point at (i,j) in the image to be processed;
a fourth calculation unit, configured to, for each region, determine the blur kernel size of the region to be a first size if the blur measure of the region is smaller than a first threshold; a second size if the blur measure of the region is greater than a second threshold; and a third size if the blur measure of the region is greater than or equal to the first threshold and less than or equal to the second threshold; and to calculate the blur kernel of the region based on the blur kernel size of the region;
wherein the first threshold is less than the second threshold; the first size is greater than the third size, which is greater than the second size.
CN201610718848.7A 2016-08-24 2016-08-24 Image deblurring method and device Active CN107784631B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610718848.7A CN107784631B (en) 2016-08-24 2016-08-24 Image deblurring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610718848.7A CN107784631B (en) 2016-08-24 2016-08-24 Image deblurring method and device

Publications (2)

Publication Number Publication Date
CN107784631A CN107784631A (en) 2018-03-09
CN107784631B true CN107784631B (en) 2020-05-05

Family

ID=61388726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610718848.7A Active CN107784631B (en) 2016-08-24 2016-08-24 Image deblurring method and device

Country Status (1)

Country Link
CN (1) CN107784631B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109727254B (en) * 2018-11-27 2021-03-05 深圳市重投华讯太赫兹科技有限公司 Human body scanning image processing method, human body scanning image processing equipment and computer readable storage medium
CN110503611A (en) * 2019-07-23 2019-11-26 华为技术有限公司 Method and device for image processing
WO2021081903A1 (en) * 2019-10-31 2021-05-06 深圳先进技术研究院 Method for denoising image, apparatus, and computer readable storage medium
CN111062878B (en) * 2019-10-31 2023-04-18 深圳先进技术研究院 Image denoising method and device and computer readable storage medium
CN111835968B (en) * 2020-05-28 2022-02-08 北京迈格威科技有限公司 Image definition restoration method and device and image shooting method and device
CN113160103B (en) * 2021-04-22 2022-08-12 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal
CN115564687A (en) * 2022-11-09 2023-01-03 阿里巴巴(中国)有限公司 Training method of image processing model, electronic device and computer storage medium
CN120220077B (en) * 2025-05-27 2025-08-08 西安博奥电力工程有限公司 Remote monitoring method of power equipment status in substation based on image analysis

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436639A (en) * 2011-09-02 2012-05-02 清华大学 Image acquisition method and image acquisition system for removing image blur
CN103426144A (en) * 2012-05-17 2013-12-04 佳能株式会社 Method and device for deblurring image having perspective distortion
CN103514582A (en) * 2012-06-27 2014-01-15 郑州大学 Visual saliency-based image deblurring method
CN104867111A (en) * 2015-03-27 2015-08-26 北京理工大学 Block-blur-kernel-set-based heterogeneous video blind deblurring method


Also Published As

Publication number Publication date
CN107784631A (en) 2018-03-09

Similar Documents

Publication Publication Date Title
CN107784631B (en) Image deblurring method and device
CN105144232B (en) Image de-noising method and system
CN103198463B (en) Spectrum image panchromatic sharpening method based on fusion of whole structure and space detail information
CN101902547B (en) Image processing method and image apparatus
Duran et al. Self-similarity and spectral correlation adaptive algorithm for color demosaicking
US8374428B2 (en) Color balancing for partially overlapping images
WO2013168618A1 (en) Image processing device and image processing method
KR102481882B1 (en) Method and apparaturs for processing image
Liu et al. Spatial-Hessian-feature-guided variational model for pan-sharpening
CN113744142B (en) Image restoration method, electronic device and storage medium
US11145032B2 (en) Image processing apparatus, method and storage medium for reducing color noise and false color
CN107451976B (en) A kind of image processing method and device
WO2017096814A1 (en) Image processing method and apparatus
US20150350576A1 (en) Raw Camera Noise Reduction Using Alignment Mapping
JP7741654B2 (en) Learning device, image processing device, learning processing method, and program
JP2011095861A5 (en) Image processing apparatus, image processing method, and program
CN106651783A (en) Image filtering method and device
US7751641B2 (en) Method and system for digital image enhancement
CN113744294A (en) Image processing method and related device
Apdilah et al. A study of Frei-Chen approach for edge detection
EP3070670B1 (en) Using frequency decomposition for better color consistency in a synthesized region
WO2015198368A1 (en) Image processing device and image processing method
CN104318518A (en) Projection-onto-convex-sets image reconstruction method based on SURF matching and edge detection
CN112541853A (en) Data processing method, device and equipment
JP2015093131A5 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518135 fourth buildings and fifth first floor of Wan Dai Heng Guangming hi tech Industrial Park, Guangming New District, Shenzhen, Guangdong

Applicant after: Shenzhen Haoen Safety Technology Co., Ltd.

Address before: 518107 fourth buildings and fifth first floor of Wan Dai Heng Guangming hi tech Industrial Park, Guangming New District, Shenzhen, Guangdong

Applicant before: Zhong An (Shenzhen) Co., Ltd.

GR01 Patent grant