
CN110084752B - Image super-resolution reconstruction method based on edge direction and K-means clustering - Google Patents


Info

Publication number
CN110084752B
CN110084752B (application number CN201910371191.5A)
Authority
CN
China
Prior art keywords
low
resolution image
resolution
image
image block
Prior art date
Legal status
Expired - Fee Related
Application number
CN201910371191.5A
Other languages
Chinese (zh)
Other versions
CN110084752A (en)
Inventor
李晓峰
李爽
周宁
许埕秸
傅志中
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910371191.5A priority Critical patent/CN110084752B/en
Publication of CN110084752A publication Critical patent/CN110084752A/en
Application granted granted Critical
Publication of CN110084752B publication Critical patent/CN110084752B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image super-resolution reconstruction method based on edge direction and K-means clustering, belonging to the technical fields of machine vision and image processing. The processing steps of the invention are: S1: build a training set of high- and low-resolution image blocks; S2: extract edge-direction feature vectors from the low-resolution image blocks; S3: cluster the edge-direction features with K-means; S4: classify the high- and low-resolution image block pairs; S5: train a linear mapping matrix for each class with ridge regression; S6: upscale the input low-resolution image. The invention makes full use of the edge magnitude and direction of the image blocks; K-means clustering classifies the blocks more accurately, which guarantees the quality of the reconstructed image, while the reconstruction process has low computational complexity and lends itself to fast implementation.

Description

An image super-resolution reconstruction method based on edge direction and K-means clustering

Technical Field

The invention belongs to the fields of computer vision and image processing, and in particular relates to an image super-resolution reconstruction method based on edge direction and K-means clustering.

Background Art

With the development of technology, more and more playback devices that support ultra-high-definition images have appeared on the market. However, because acquisition equipment is expensive, the application market lacks a sufficient supply of ultra-high-definition images. Image super-resolution is a technique that increases image resolution in software at low cost and can solve this problem well; it therefore has important research value and application prospects in many fields, including multimedia, medicine, satellite remote sensing, and military applications.

At present, many researchers at home and abroad have studied image super-resolution reconstruction. Depending on the theoretical basis chosen, super-resolution methods can be divided into three categories: interpolation-based, reconstruction-based, and learning-based methods.

Interpolation-based super-resolution algorithms are the most intuitive and basic class of methods; they generally infer the value of the pixel to be interpolated from the values of its neighboring pixels. Interpolation-based methods usually have very small computational overhead and are therefore common in everyday software for image enlargement, but the quality of the reconstructed image is not high and image edges are prone to jagged artifacts.

Because interpolation offers only limited improvement in image resolution and cannot meet the requirements of some applications, reconstruction-based super-resolution techniques appeared. Reconstruction-based methods usually exploit the correlated information among multiple low-resolution images and add suitable prior knowledge for super-resolution. Researchers have introduced theories from disciplines such as set theory and probability theory into super-resolution algorithms in order to obtain high-resolution images of relatively high quality.

In recent years, learning-based super-resolution has become a research hotspot. Many learning-based algorithms have been proposed, all with better image reconstruction quality than traditional interpolation and reconstruction techniques. Unlike the former, machine-learning methods learn the correspondence between high- and low-resolution image blocks by training on large sets of high- and low-resolution images; the prior knowledge obtained this way is more accurate than artificial assumptions, and the trained mapping better reflects the relationship between high- and low-resolution images. However, learning-based methods usually have high computational overhead, which makes fast implementation in hardware difficult.

Interpolation methods focus on reconstruction speed, while learning methods pursue reconstruction quality; existing super-resolution methods find it difficult to strike a good balance between reconstruction quality and computational overhead.

Summary of the Invention

In view of the above problems, the object of the present invention is to provide an image super-resolution reconstruction method that achieves a good balance between reconstruction quality and computational overhead.

The image super-resolution reconstruction method based on edge direction and K-means clustering of the present invention comprises the following steps:

Step one: acquire a high-resolution image data set;

degrade the high-resolution images in the data set to obtain the corresponding low-resolution image data set;

convert the high- and low-resolution images into YUV images and segment the Y-channel high- and low-resolution images into the same number of blocks, obtaining a high-resolution image block set {h_i^t}_{i=1}^n and a low-resolution image block set {l_i^t}_{i=1}^n, where i is the image block index, t is the image identifier, and n is the total number of blocks produced by the segmentation, determined by the number and size of the acquired high-resolution images; corresponding blocks form high/low-resolution image block pairs (h_i^t, l_i^t);

the high-resolution images are segmented as follows: based on a preset block size (for example 2×2, 3×3, etc.), each high-resolution image is divided into n blocks of identical size that are mutually adjacent;

the low-resolution images are segmented as follows: based on a preset block size (for example 3×3, 5×5, 7×7, 9×9, etc.), each low-resolution image is divided into n blocks of identical size that mutually overlap;
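The two segmentation schemes above (adjacent high-resolution blocks, overlapping low-resolution blocks) reduce to the same sliding-window loop with different strides; the sketch below is illustrative, and the function name and toy images are not part of the patent:

```python
import numpy as np

def extract_patches(img, size, step):
    """Slide a size x size window over img with the given stride.

    step == size yields adjacent (non-overlapping) blocks, as used for
    the high-resolution images; step < size yields the mutually
    overlapping blocks used for the low-resolution images.
    """
    h, w = img.shape
    blocks = []
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            blocks.append(img[y:y + size, x:x + size])
    return np.stack(blocks)

# Example: 2x2 adjacent HR blocks and 3x3 overlapping LR blocks.
hr = np.arange(64, dtype=float).reshape(8, 8)
hr_blocks = extract_patches(hr, size=2, step=2)   # adjacent blocks
lr = np.arange(16, dtype=float).reshape(4, 4)
lr_blocks = extract_patches(lr, size=3, step=1)   # overlapping blocks
```

With matching image scales and strides, both loops produce the same number n of blocks, as the training set requires.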

Step two: extract the edge-direction feature vector of each low-resolution image block:

further segment each low-resolution image block from step one into low-resolution image sub-blocks, where r denotes the total number of sub-blocks produced, determined by the size of the low-resolution block; compute the horizontal and vertical gradient values of each sub-block with a first-order gradient operator, use them to obtain the edge magnitude m_j and edge direction a_j of the sub-block, and combine the edge magnitudes and directions of all sub-blocks of the same low-resolution block into the edge-direction feature vector f = [a_1 m_1 … a_r m_r]^T;

Step three: cluster the edge-direction feature vectors of the low-resolution image blocks with the K-means clustering method; each class yields a center point c_k, k = 1, …, K. The number of centers K is determined from the super-resolution results: select the cluster count that achieves the best reconstruction quality and whose centers are maximally dispersed in feature space, then save the center-point set;
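The clustering step can be realized with plain Lloyd's iterations; the NumPy sketch below is minimal, and its initialization by random sampling and fixed iteration count are illustrative choices rather than the patent's prescription:

```python
import numpy as np

def kmeans(F, K, iters=20, seed=0):
    """Cluster the feature rows of F into K classes.

    Returns the center points c_k and the class label of each row.
    """
    rng = np.random.default_rng(seed)
    centers = F[rng.choice(len(F), K, replace=False)].astype(float)
    labels = np.zeros(len(F), dtype=int)
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        d = np.linalg.norm(F[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its class.
        for k in range(K):
            if np.any(labels == k):
                centers[k] = F[labels == k].mean(axis=0)
    return centers, labels

# Two well-separated groups of toy feature vectors.
F = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
centers, labels = kmeans(F, K=2)
```

In practice a library implementation (e.g. scikit-learn's KMeans) with K on the order of several hundred, as in the embodiment below, would serve the same role.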

Step four: for each high/low-resolution image block pair (h_i^t, l_i^t), compute the distances between the edge-direction feature vector of the low-resolution block and the K center points, and assign the pair to the class whose center point is nearest;

Step five: for each class, compute (for example, by ridge regression) a linear mapping matrix m_k, k = 1, …, K, that converts low-resolution image blocks into high-resolution image blocks, and save the matrices;
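Ridge regression for this step has a closed form. With the vectorized low-resolution blocks of one class as the columns of L and the corresponding high-resolution blocks as the columns of H, minimizing ||H - ML||^2 + lam*||M||^2 gives M = H L^T (L L^T + lam*I)^{-1}; the regularization weight lam below is an illustrative value, not one fixed by the patent:

```python
import numpy as np

def train_mapping(L, H, lam=0.01):
    """Closed-form ridge regression so that H ≈ M @ L.

    L: (d_l, n) vectorized LR blocks of one class (one block per column).
    H: (d_h, n) corresponding vectorized HR blocks.
    """
    d_l = L.shape[0]
    return H @ L.T @ np.linalg.inv(L @ L.T + lam * np.eye(d_l))

# Sanity check: recover a known mapping from synthetic block pairs.
rng = np.random.default_rng(0)
M_true = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # d_h=3, d_l=2
L = rng.normal(size=(2, 50))
H = M_true @ L
M = train_mapping(L, H, lam=1e-8)
```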

Step six: input the low-resolution image to be reconstructed, apply border padding to it, and convert it into a YUV image;

following the low-resolution segmentation of step one, divide the Y-channel image into low-resolution image blocks and extract the edge-direction feature vector of each block; based on the distances between the feature vector and the K center points, assign each low-resolution block to the class whose center point is nearest;

based on the linear mapping matrix m_k of each class, perform super-resolution reconstruction on the Y-channel low-resolution blocks to obtain Y-channel high-resolution blocks, and combine them into the Y-channel high-resolution image; for the UV-channel low-resolution images, apply bicubic interpolation at the same magnification; finally, convert the high-resolution YUV image into an RGB image to obtain the reconstruction result.

Further, in step six, the super-resolution reconstruction of a Y-channel low-resolution block with the linear mapping matrix m_k of its class is specifically h_i = m_k l_i, where l_i is the column vector obtained by vectorizing the i-th low-resolution block, m_k is the linear mapping matrix of the k-th class, and h_i is the column vector obtained by vectorizing the i-th high-resolution block.

Further, in step six, the border padding is specifically: add (e-1)/2 rows/columns of border on each side of the low-resolution image, where e is the side length of a low-resolution block; the added border values are zero or the values of the outermost pixels of the low-resolution image.
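Both border variants described here map directly onto `numpy.pad`; the example assumes e = 7, i.e. (e-1)/2 = 3 added rows/columns per side, matching the 7×7 low-resolution blocks of the embodiment:

```python
import numpy as np

e = 7                                   # side length of an LR block
pad = (e - 1) // 2                      # 3 rows/columns per side
img = np.arange(1.0, 17.0).reshape(4, 4)

padded_zero = np.pad(img, pad, mode="constant")  # added border is zero
padded_edge = np.pad(img, pad, mode="edge")      # replicate outermost pixels
```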

In the present invention, to significantly reduce the running time of the reconstruction while keeping a good reconstruction effect, only the Y channel is reconstructed with the method of stage S2 to obtain the corresponding Y-channel high-resolution image, while the other two channels (the UV channels) are upscaled by the same factor with the existing bicubic interpolation method. Of course, for a further improved result, the UV channels can also be reconstructed in the same way as the Y-channel high-resolution image: segment the high- and low-resolution U- and V-channel images to obtain block pairs with the same number of blocks, extract the edge-direction feature vectors of the low-resolution blocks and cluster them with K-means, assign the block pairs to classes by their distances to the cluster centers (center points), and build the linear mapping matrices of the U and V channels for each class; then, for the low-resolution image to be reconstructed, assign each U- and V-channel block to the class with the nearest center point by its edge-direction feature vector and apply the corresponding linear mapping matrix to obtain the reconstructed high-resolution U- and V-channel images. This variant incurs additional computational overhead, but the overhead remains lower than that of existing learning-based super-resolution algorithms.

In summary, owing to the adoption of the above technical solution, the beneficial effects of the present invention are:

by computing the edge magnitude and direction of the sub-blocks and combining them into a joint feature vector, the present invention makes full use of the edge-direction information of the sub-blocks, and this feature extraction has low complexity; clustering the edge-direction feature vectors with K-means allows the number of clusters to be chosen flexibly for a better reconstruction; and the computational cost of the method is small, which makes fast implementation in hardware convenient.

Brief Description of the Drawings

Fig. 1 is a flowchart of the training stage of the image super-resolution algorithm based on edge direction and K-means clustering of the present invention;

Fig. 2 is a flowchart of the low-resolution image reconstruction of the present invention;

Fig. 3 is the low-resolution image used in the embodiment; its width is 144 and its height is 144;

Fig. 4 is the high-resolution image used in the embodiment; its width is 288 and its height is 288.

Detailed Description of the Embodiments

To make the purpose, technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to the embodiments and the accompanying drawings.

The image super-resolution reconstruction method based on edge direction and K-means clustering of the present invention comprises two parts: a training stage and low-resolution image reconstruction. Referring to Fig. 1, the training stage (step S1) proceeds as follows:

Step S101: acquire a high-resolution image data set, generate the corresponding low-resolution data set with a preselected image degradation model, convert the high- and low-resolution images from RGB to YUV, and segment the Y-channel high- and low-resolution images; the concrete segmentation scheme is set from the preset number of blocks n (determined by the number and size of the high-resolution images) and the size of the image to be segmented.

In this embodiment, each high-resolution image in the data set is segmented into a set of adjacent 2×2 image blocks {h_i}, and each low-resolution image is segmented with a sliding window into a set of mutually overlapping 7×7 image blocks {l_i}, so that the segmented high- and low-resolution images yield the same number of blocks;

Step S102: the central 3×3 region of each low-resolution image block from step S101 is further divided by a sliding window into four 2×2 low-resolution sub-blocks; compute the horizontal and vertical gradients of each sub-block with a first-order gradient operator, derive the edge magnitude m_j and edge direction a_j from them, and combine the magnitudes and directions of all four sub-blocks into the edge-direction feature vector f = [a_1 m_1 … a_4 m_4]^T, which serves as the feature vector f of the low-resolution block.

Step S103: set the cluster count of the K-means method; in this embodiment, K = 512. Then cluster the features of all image blocks obtained in step S102 (the low-resolution block feature vectors f) with K-means; each class yields a center point c_k, k = 1, …, 512, the cluster centers are maximally dispersed in the feature space, and the center-point set is saved;

Step S104: for the high/low-resolution block pairs obtained in step S101, extract the edge-direction feature vector of the low-resolution block of each pair with the method of step S102, compute the distances between this feature vector and the center points of step S103, and assign each pair to the class with the nearest center;

Step S105: for each class, based on the high/low-resolution block pairs it contains, compute by ridge regression the linear mapping matrix m_k, k = 1, …, 512, that converts low-resolution blocks into high-resolution blocks, and save the matrices.

Step S2: low-resolution image reconstruction.

Referring to Fig. 2, first input the low-resolution image to be reconstructed, shown in Fig. 3 with size 144×144; after border padding, the original image is converted into a YUV image.

In this embodiment, the border padding adds three rows/columns of border on each side of the original image; the added values are zero or the values of the outermost pixels of the low-resolution image.

Then divide the Y-channel image into low-resolution blocks l_i, i = 1, …, 144², following step S101; classify the blocks following steps S102 and S104; and reconstruct each Y-channel low-resolution block with the linear mapping function of its class (i.e. the linear mapping matrix of that class) to obtain the Y-channel high-resolution blocks h_i, i = 1, …, 144². The high-resolution block of the corresponding channel is computed as h_i = m_k l_i, where l_i is the column vector obtained by vectorizing the i-th low-resolution block, m_k is the k-th class linear mapping matrix obtained in step five, and h_i is the column vector obtained by vectorizing the i-th high-resolution block.

Finally, combine the high-resolution blocks into the Y-channel high-resolution image; for the UV-channel low-resolution images, apply bicubic interpolation at the same magnification, and convert the high-resolution YUV image into an RGB image, yielding the reconstructed high-resolution image shown in Fig. 4 with size 288×288; the reconstructed image has a good effect.

The above is only a specific embodiment of the present invention. Unless otherwise stated, any feature disclosed in this specification may be replaced by an alternative feature that is equivalent or serves a similar purpose; all disclosed features, or all steps of any method or process, may be combined in any manner, except for mutually exclusive features and/or steps.

Claims (6)

1. The image super-resolution reconstruction method based on the edge direction and the K-means clustering is characterized by comprising the following steps of:
step one, collecting a high-resolution image data set;
performing degradation processing on the high-resolution images in the high-resolution image dataset to obtain a corresponding low-resolution image dataset;
converting the high- and low-resolution images into YUV images, and segmenting the Y-channel high- and low-resolution images to obtain a high-resolution image block set {h_i^t}_{i=1}^n and a low-resolution image block set {l_i^t}_{i=1}^n with the same number of blocks, wherein i denotes the image block index, t denotes the image identifier, and n denotes the total number of blocks obtained by the segmentation; corresponding blocks form high/low-resolution image block pairs (h_i^t, l_i^t);
The segmentation mode of the high-resolution image is as follows: dividing the high-resolution image into n image blocks which are identical in size and mutually adjacent based on a preset image block size;
the segmentation mode of the low resolution image is as follows: dividing the low-resolution image into n image blocks which are identical in size and overlap each other based on a preset image block size;
step two, extracting the edge-direction feature vector of each low-resolution image block:
further dividing each low-resolution image block in the set {l_i^t} into low-resolution image sub-blocks, wherein r denotes the total number of sub-blocks obtained by the division;
calculating horizontal and vertical gradient values of each low-resolution sub-block with a gradient operator, calculating the edge magnitude m_j and edge direction a_j of the sub-block from the horizontal and vertical gradient values, and combining the edge magnitudes and edge directions of all sub-blocks of the same low-resolution block into an edge-direction feature vector f = [a_1 m_1 … a_r m_r]^T;
step three, clustering the edge-direction feature vectors of the low-resolution image blocks by the K-means clustering method, computing for each class a center point c_k, k = 1, …, K, and storing the center-point set, wherein K is the preset number of center points;
step four, for each high/low-resolution image block pair (h_i^t, l_i^t), calculating the distances between the edge-direction feature vector of the low-resolution block and the K center points respectively, and assigning the pair to the class with the nearest center point;
step five, for each class, calculating a linear mapping matrix m_k, k = 1, …, K, capable of converting low-resolution image blocks into high-resolution image blocks, and saving it;
step six, inputting a low-resolution image to be reconstructed, and converting the low-resolution image into a YUV image after edge processing;
dividing the Y channel image into low-resolution image blocks according to the dividing mode of the low-resolution image in the first step, extracting edge direction feature vectors of the low-resolution image blocks, and distributing the current low-resolution image blocks into categories closest to the center points based on the distances between the edge direction feature vectors and K center points;
based on the linear mapping matrix m_k corresponding to each class, performing super-resolution reconstruction on the Y-channel low-resolution image blocks to obtain Y-channel high-resolution image blocks, combining the high-resolution blocks to obtain the Y-channel high-resolution image, performing super-resolution of the same multiple on the UV-channel low-resolution image by a bicubic interpolation method, and converting the high-resolution YUV image into an RGB image to obtain the reconstruction result.
2. The method of claim 1, wherein in step six, the super-resolution reconstruction of a Y-channel low-resolution image block based on the linear mapping matrix m_k of its class is specifically h_i = m_k l_i, wherein l_i is the column vector formed by vectorizing the i-th low-resolution image block, m_k is the linear mapping matrix of the k-th class, and h_i is the column vector formed by vectorizing the i-th high-resolution image block.
3. The method according to claim 1 or 2, wherein in step six the edge processing is specifically: a border of width (e-1)/2 is added around the low-resolution image, the added border pixels being either zero or the values of the outermost pixels of the low-resolution image, where e is the side length of a low-resolution image block.
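The border addition of claim 3 maps directly onto `numpy.pad`: `mode="constant"` gives the zero-valued border and `mode="edge"` replicates the outermost pixels, the two options the claim names. A minimal sketch:

```python
import numpy as np

def pad_for_blocking(img, e, mode="edge"):
    # Add (e-1)//2 pixels on each side; e is the LR block side length
    # (assumed odd here so the border width is an integer).
    p = (e - 1) // 2
    return np.pad(img, ((p, p), (p, p)), mode=mode)

img = np.arange(16.0).reshape(4, 4)
padded = pad_for_blocking(img, e=5)
print(padded.shape)  # (8, 8)
```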
4. The method according to claim 1, wherein in step six the high-resolution reconstruction of the UV channels of the low-resolution image to be reconstructed is instead performed with the same procedure used for reconstructing the Y-channel high-resolution image.
5. The method of claim 1, wherein the edge direction feature vector of a low-resolution image block is extracted by dividing the middle region of the block into r low-resolution image sub-blocks of the same size.
6. The method of claim 5, wherein the middle region of the low-resolution image block is divided into r low-resolution image sub-blocks of the same size that overlap one another.
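The division of claims 5 and 6 into r equal, mutually overlapping sub-blocks can be sketched with a strided slice; the sub-block size and step below are illustrative, since the claims fix neither (a step smaller than the sub-block size is what produces the overlap):

```python
import numpy as np

def split_overlapping(region, sub, step):
    # Slide a sub x sub window over the region with the given step;
    # step < sub makes consecutive sub-blocks overlap.
    h, w = region.shape
    return [region[i:i + sub, j:j + sub]
            for i in range(0, h - sub + 1, step)
            for j in range(0, w - sub + 1, step)]

# A 6x6 middle region split into r = 4 overlapping 4x4 sub-blocks.
subs = split_overlapping(np.arange(36.0).reshape(6, 6), sub=4, step=2)
print(len(subs))  # 4
```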
CN201910371191.5A 2019-05-06 2019-05-06 Image super-resolution reconstruction method based on edge direction and K-means clustering Expired - Fee Related CN110084752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910371191.5A CN110084752B (en) 2019-05-06 2019-05-06 Image super-resolution reconstruction method based on edge direction and K-means clustering


Publications (2)

Publication Number Publication Date
CN110084752A CN110084752A (en) 2019-08-02
CN110084752B true CN110084752B (en) 2023-04-21

Family

ID=67418759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910371191.5A Expired - Fee Related CN110084752B (en) 2019-05-06 2019-05-06 Image super-resolution reconstruction method based on edge direction and K-means clustering

Country Status (1)

Country Link
CN (1) CN110084752B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801879B (en) * 2021-02-09 2023-12-08 咪咕视讯科技有限公司 Image super-resolution reconstruction method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077505A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure clustering
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms
CN104036519A (en) * 2014-07-03 2014-09-10 中国计量学院 Partitioning compressive sensing reconstruction method based on image block clustering and sparse dictionary learning
CN104699781A (en) * 2015-03-12 2015-06-10 西安电子科技大学 Specific absorption rate image retrieval method based on double-layer anchor chart hash
CN105321156A (en) * 2015-11-26 2016-02-10 三维通信股份有限公司 Multi-structure-based image restoration method
CN107341776A (en) * 2017-06-21 2017-11-10 北京工业大学 Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping
CN107392855A (en) * 2017-07-19 2017-11-24 苏州闻捷传感技术有限公司 Image Super-resolution Reconstruction method based on sparse autoencoder network Yu very fast study
CN108648147A (en) * 2018-05-08 2018-10-12 北京理工大学 A kind of super-resolution image acquisition method and system of human eye retina's mechanism
CN108805814A (en) * 2018-06-07 2018-11-13 西安电子科技大学 Image Super-resolution Reconstruction method based on multiband depth convolutional neural networks

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200407799A (en) * 2002-11-05 2004-05-16 Ind Tech Res Inst Texture partition and transmission method for network progressive transmission and real-time rendering by using the wavelet coding algorithm
US7187811B2 (en) * 2003-03-18 2007-03-06 Advanced & Wise Technology Corp. Method for image resolution enhancement
US20060291750A1 (en) * 2004-12-16 2006-12-28 Peyman Milanfar Dynamic reconstruction of high resolution video from low-resolution color-filtered video (video-to-video super-resolution)
US8081256B2 (en) * 2007-03-20 2011-12-20 Samsung Electronics Co., Ltd. Method and system for edge directed deinterlacing in video image processing
CN101727568B (en) * 2008-10-10 2013-04-17 索尼(中国)有限公司 Foreground action estimation device and foreground action estimation method
JP5388779B2 (en) * 2009-09-28 2014-01-15 京セラ株式会社 Image processing apparatus, image processing method, and image processing program
US8861853B2 (en) * 2010-03-19 2014-10-14 Panasonic Intellectual Property Corporation Of America Feature-amount calculation apparatus, feature-amount calculation method, and program
US20120075440A1 (en) * 2010-09-28 2012-03-29 Qualcomm Incorporated Entropy based image separation
US8755636B2 (en) * 2011-09-14 2014-06-17 Mediatek Inc. Method and apparatus of high-resolution image reconstruction based on multi-frame low-resolution images
CN102800094A (en) * 2012-07-13 2012-11-28 南京邮电大学 Fast color image segmentation method
CN103049750B (en) * 2013-01-11 2016-06-15 广州广电运通金融电子股份有限公司 Character identifying method
CN103984946B (en) * 2014-05-23 2017-04-26 北京联合大学 High resolution remote sensing map road extraction method based on K-means
CN105761207B (en) * 2015-05-08 2018-11-16 西安电子科技大学 Image Super-resolution Reconstruction method based on the insertion of maximum linear block neighborhood
CN104992407B (en) * 2015-06-17 2018-03-16 清华大学深圳研究生院 A kind of image super-resolution method
KR101845476B1 (en) * 2015-06-30 2018-04-05 한국과학기술원 Image conversion apparatus and image conversion method thereof
CN106558022B (en) * 2016-11-30 2020-08-25 重庆大学 A single image super-resolution reconstruction method based on edge difference constraints
CN108335265B (en) * 2018-02-06 2021-05-07 上海通途半导体科技有限公司 Rapid image super-resolution reconstruction method and device based on sample learning
CN108764368B (en) * 2018-06-07 2021-11-30 西安邮电大学 Image super-resolution reconstruction method based on matrix mapping
CN109712153A (en) * 2018-12-25 2019-05-03 杭州世平信息科技有限公司 A kind of remote sensing images city superpixel segmentation method
CN112801879B (en) * 2021-02-09 2023-12-08 咪咕视讯科技有限公司 Image super-resolution reconstruction method and device, electronic equipment and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077505A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure clustering
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms
CN104036519A (en) * 2014-07-03 2014-09-10 中国计量学院 Partitioning compressive sensing reconstruction method based on image block clustering and sparse dictionary learning
CN104699781A (en) * 2015-03-12 2015-06-10 西安电子科技大学 Specific absorption rate image retrieval method based on double-layer anchor chart hash
CN105321156A (en) * 2015-11-26 2016-02-10 三维通信股份有限公司 Multi-structure-based image restoration method
CN107341776A (en) * 2017-06-21 2017-11-10 北京工业大学 Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping
CN107392855A (en) * 2017-07-19 2017-11-24 苏州闻捷传感技术有限公司 Image Super-resolution Reconstruction method based on sparse autoencoder network Yu very fast study
CN108648147A (en) * 2018-05-08 2018-10-12 北京理工大学 A kind of super-resolution image acquisition method and system of human eye retina's mechanism
CN108805814A (en) * 2018-06-07 2018-11-13 西安电子科技大学 Image Super-resolution Reconstruction method based on multiband depth convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zeng-Wei Ju et al. Image segmentation based on edge detection using K-means and an improved ant colony optimization. 2013 International Conference on Machine Learning and Cybernetics, Tianjin, China, 2013, pp. 297-303. *
Kang Kai. Research on image super-resolution reconstruction. China Doctoral Dissertations Full-text Database, Information Science and Technology, 2016, No. 9, pp. I138-18. *
Zhao Zhihui, Zhao Ruizhen, Cen Yigang, Zhang Fengzhen. Fast image super-resolution reconstruction based on sparse representation and linear regression. CAAI Transactions on Intelligent Systems, 2017, Vol. 12, No. 1, pp. 8-14. *

Also Published As

Publication number Publication date
CN110084752A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
Deng et al. Lau-net: Latitude adaptive upscaling network for omnidirectional image super-resolution
CN110992262B (en) Remote sensing image super-resolution reconstruction method based on generation countermeasure network
CN109064396B (en) Single image super-resolution reconstruction method based on deep component learning network
Xia et al. Knowledge distillation based degradation estimation for blind super-resolution
CN103093444B (en) Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN101976435B (en) Combination learning super-resolution method based on dual constraint
CN103218776B (en) Non-local depth map super resolution ratio reconstruction method based on minimum spanning tree
WO2018205676A1 (en) Processing method and system for convolutional neural network, and storage medium
CN110136062A (en) A Super-resolution Reconstruction Method for Joint Semantic Segmentation
Li et al. A two-channel convolutional neural network for image super-resolution
CN101872472A (en) A face image super-resolution reconstruction method based on sample learning
CN102800094A (en) Fast color image segmentation method
CN103455988A (en) Super-resolution image reconstruction method based on structure self-similarity and sparse representation
CN105631807A (en) Single-frame image super resolution reconstruction method based on sparse domain selection
CN108550111B (en) Residual error example regression super-resolution reconstruction method based on multi-level dictionary learning
CN106910215B (en) Super-resolution method based on fractional order gradient interpolation
CN113837946A (en) A lightweight image super-resolution reconstruction method based on progressive distillation network
CN102831581A (en) Method for reconstructing super-resolution image
CN104036468A (en) Super-resolution reconstruction method for single-frame images on basis of pre-amplification non-negative neighbor embedding
Zhu et al. Image interpolation based on non-local geometric similarities
CN110084752B (en) Image super-resolution reconstruction method based on edge direction and K-means clustering
CN103020905A (en) Sparse-constraint-adaptive NLM (non-local mean) super-resolution reconstruction method aiming at character image
CN116563167A (en) Face image reconstruction method, system, device and medium based on self-adaptive texture and frequency domain perception
CN104331883B (en) A kind of image boundary extraction method based on asymmetric inversed placement model
CN110443754B (en) Method for improving resolution of digital image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230421
