CN107145855A - No-reference quality blurred-image prediction method, terminal, and storage medium - Google Patents
No-reference quality blurred-image prediction method, terminal, and storage medium
- Publication number
- CN107145855A CN107145855A CN201710296834.5A CN201710296834A CN107145855A CN 107145855 A CN107145855 A CN 107145855A CN 201710296834 A CN201710296834 A CN 201710296834A CN 107145855 A CN107145855 A CN 107145855A
- Authority
- CN
- China
- Prior art keywords
- image
- predicted
- picture
- eigenvector
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a no-reference quality blurred-image prediction method. The method includes: converting an image to be predicted into a grayscale image; obtaining a first feature vector and a second feature vector of a Gaussian-blur-distorted version of a reference image, where the first feature vector is composed of the texture features of the reference image and the second feature vector is composed of the texture features of the reference image after low-pass filtering; extracting the structural features of the image to be predicted, and computing the structural similarity between the image to be predicted and the reference image as well as the texture similarity between the first feature vector and the second feature vector; and, according to a preset neural-network prediction model, taking the structural similarity and the texture similarity as input samples and taking the resulting output as the predicted image of the image to be predicted. The invention also discloses a terminal and a computer-readable medium. Embodiments of the invention can perform prediction for incomplete images to be predicted.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a no-reference quality blurred-image prediction method, terminal, and storage medium.
Background art
Fingerprint recognition is currently developing rapidly, and fingerprint-recognition devices and applications are becoming more and more numerous. Suppose a user injures a finger in an accident and loses part of a fingerprint, and the original complete fingerprint of that finger cannot be obtained; the user will then encounter obstacles when using many such devices. The post-injury fingerprint image is the image to be predicted.
In the prior art, there is no effective technique for predicting such an image to be predicted; realizing the prediction of the image to be predicted is therefore a technical problem urgently to be solved.
Summary of the invention
It is a primary object of the present invention to propose a no-reference quality blurred-image prediction method, terminal, and storage medium, intended to solve the technical problem of predicting incomplete images to be predicted.
To achieve the above object, the no-reference quality blurred-image prediction method provided by the present invention includes:
converting the image to be predicted into a grayscale image; and obtaining a first feature vector and a second feature vector of a Gaussian-blur-distorted version of a reference image, where the first feature vector is composed of the texture features of the reference image and the second feature vector is composed of the texture features of the reference image after low-pass filtering;
extracting, from the grayscale image, the structural features of the image to be predicted, and computing the structural similarity between the image to be predicted and the reference image as well as the texture similarity between the first feature vector and the second feature vector;
according to a preset neural-network prediction model, taking the structural similarity and the texture similarity as input samples and taking the resulting output as the predicted image of the image to be predicted.
Optionally, converting the image to be predicted into a grayscale image includes: converting the image to be predicted into a grayscale image of size 512×512.
Optionally, computing the texture similarity between the first feature vector and the second feature vector includes: computing the Euclidean distance between the first feature vector and the second feature vector, and taking the computed Euclidean distance as the texture similarity.
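As an illustration only (the patent itself contains no code), the Euclidean-distance texture similarity described above can be sketched in Python:

```python
import numpy as np

def texture_similarity(v1, v2):
    """Euclidean distance between two texture feature vectors,
    used here as the texture-similarity measure (smaller = more similar)."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    return float(np.sqrt(np.sum((v1 - v2) ** 2)))
```

Per the optional claim above, the two vectors would each be 34-dimensional in practice.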
Optionally, the number of hidden-layer nodes of the preset neural network prediction model is calculated as: h = √(n + m) + a,
where n and m denote the numbers of input and output nodes respectively, and a is any value between 1 and 10.
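The hidden-layer sizing rule can be sketched as follows. Note that the formula h = √(n + m) + a is a common empirical rule consistent with the description of n, m, and a above, but the exact formula is not reproduced in this text, so it should be treated as an assumption:

```python
import math

def hidden_node_count(n, m, a):
    """Empirical hidden-layer size h = sqrt(n + m) + a, where n and m are
    the numbers of input and output nodes and a is an adjustment constant
    between 1 and 10.  The formula is a reconstruction, not quoted text."""
    if not (1 <= a <= 10):
        raise ValueError("a must lie between 1 and 10")
    return round(math.sqrt(n + m) + a)
```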
Optionally, the first feature vector and the second feature vector are each 34-dimensional feature vectors.
Compared with the prior art, the terminal proposed by the present invention computes the texture similarity between the image to be evaluated and the reference image as a measure of local image detail, and finally feeds the two similarity indices into the neural-network prediction model as inputs to obtain the predicted image, thereby completing the picture with missing information.
In addition, to achieve the above object, the present invention also proposes a terminal. The terminal includes a memory, a processor, and a communication bus;
the communication bus is used to realize connection and communication between the processor and the memory;
the processor is used to execute the no-reference quality blurred-image prediction program stored in the memory, to realize the following steps:
converting the image to be predicted into a grayscale image; and obtaining a first feature vector and a second feature vector of a Gaussian-blur-distorted version of a reference image, where the first feature vector is composed of the texture features of the reference image and the second feature vector is composed of the texture features of the reference image after low-pass filtering;
extracting, from the grayscale image, the structural features of the image to be predicted, and computing the structural similarity between the image to be predicted and the reference image as well as the texture similarity between the first feature vector and the second feature vector;
according to a preset neural-network prediction model, taking the structural similarity and the texture similarity as input samples and taking the resulting output as the predicted image of the image to be predicted.
Optionally, the processor executes the no-reference quality blurred-image prediction program to realize the following step: converting the image to be predicted into a grayscale image of size 512×512.
Optionally, the processor executes the no-reference quality blurred-image prediction program to realize the following steps: computing the Euclidean distance between the first feature vector and the second feature vector, and taking the computed Euclidean distance as the texture similarity.
Optionally, the processor executes the no-reference quality blurred-image prediction program to realize the calculation of the number of hidden-layer nodes; the specific formula is h = √(n + m) + a,
where n and m denote the numbers of input and output nodes respectively, and a is any value between 1 and 10.
Optionally, the processor executes the no-reference quality blurred-image prediction program such that the first feature vector and the second feature vector are each 34-dimensional feature vectors.
Compared with the prior art, the terminal proposed by the present invention computes the texture similarity between the image to be evaluated and the reference image as a measure of local image detail, and finally feeds the two similarity indices into the neural-network prediction model as inputs to obtain the predicted image, thereby completing the picture with missing information.
To achieve the above object, the present invention also proposes a computer-readable storage medium. The computer-readable storage medium stores one or more programs, which can be executed by one or more processors to realize the following steps:
converting the image to be predicted into a grayscale image; and obtaining a first feature vector and a second feature vector of a Gaussian-blur-distorted version of a reference image, where the first feature vector is composed of the texture features of the reference image and the second feature vector is composed of the texture features of the reference image after low-pass filtering;
extracting, from the grayscale image, the structural features of the image to be predicted, and computing the structural similarity between the image to be predicted and the reference image as well as the texture similarity between the first feature vector and the second feature vector;
according to a preset neural-network prediction model, taking the structural similarity and the texture similarity as input samples and taking the resulting output as the predicted image of the image to be predicted.
Optionally, the one or more programs can also be executed by the one or more processors to realize the following step: converting the image to be predicted into a grayscale image of size 512×512.
Optionally, the one or more programs can also be executed by the one or more processors to realize the following steps: computing the Euclidean distance between the first feature vector and the second feature vector, and taking the computed Euclidean distance as the texture similarity.
Optionally, the one or more programs can also be executed by the one or more processors to realize the following calculation: h = √(n + m) + a,
where n and m denote the numbers of input and output nodes respectively, and a is any value between 1 and 10.
Optionally, the one or more programs can also be executed by the one or more processors such that the first feature vector and the second feature vector are each 34-dimensional feature vectors.
Compared with the prior art, the computer-readable storage medium proposed by the present invention computes the texture similarity between the image to be evaluated and the reference image as a measure of local image detail, and finally feeds the two similarity indices into the neural-network prediction model as inputs to obtain the predicted image, thereby completing the picture with missing information.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for realizing the embodiments of the invention;
Fig. 2 is a schematic diagram of the wireless communication system of the mobile terminal shown in Fig. 1;
Fig. 3 is a schematic flowchart of the no-reference quality blurred-image prediction method of the invention;
Fig. 4 is a schematic diagram of an incomplete fingerprint acquired in an embodiment of the invention;
Fig. 5 is a schematic diagram of the output after prediction of the incomplete fingerprint in an embodiment of the invention;
Fig. 6 is a schematic implementation flowchart of one embodiment of the no-reference quality blurred-image prediction method of the invention;
Fig. 7 is a schematic module diagram of the terminal of the invention;
Fig. 8 is a schematic module diagram of the computer-readable storage medium of the invention.
The realization, functional characteristics, and advantages of the object of the invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part", or "unit" used to denote elements are only for facilitating the description of the invention and have no specific meaning in themselves; therefore "module", "part", and "unit" may be used interchangeably.
Terminals can be implemented in various forms. For example, the terminals described in the present invention can include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDAs), portable media players (PMPs), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example; those skilled in the art will understand that, except for elements specially used for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal for realizing the embodiments of the invention, the mobile terminal 100 can include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation of the mobile terminal; the mobile terminal can include more or fewer parts than illustrated, combine some parts, or arrange the parts differently.
The various parts of the mobile terminal are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 can be used for receiving and sending signals during messaging or a call. Specifically, it receives downlink information from a base station and passes it to the processor 110 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices by wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), and so on.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and can be omitted as needed within the scope that does not change the essence of the invention.
The audio output unit 103 can, when the mobile terminal 100 is in a mode such as call-signal reception, call, recording, speech recognition, or broadcast reception, convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 can include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 can include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames can be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operational modes such as telephone call mode, recording mode, and speech recognition mode, and can process such sound into audio data. In telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference produced while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, or other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when static, the magnitude and direction of gravity; it can be used for applications that identify the posture of the phone (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The phone can also be configured with other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which will not be described here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 can include a display panel 1061, which can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 can be used to receive input numeric or character information and to produce key-signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (such as operations performed on or near the touch panel 1071 with a finger, stylus, or any other suitable object or accessory) and drives the corresponding connected apparatus according to a preset program. The touch panel 1071 can include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 can be realized in various types such as resistive, capacitive, infrared, and surface-acoustic-wave. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick; this is not specifically limited here.
Further, the touch panel 1071 can cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 realize the input and output functions of the mobile terminal as two independent parts, in certain embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal; this is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device can include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 can be used to receive input (for example, data information or electric power) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 can mainly include a program storage area and a data storage area: the program storage area can store the operating system and the application programs needed for at least one function (such as a sound playback function or an image playback function), while the data storage area can store data created according to the use of the phone (such as audio data and a phone book). In addition, the memory 109 can include high-speed random access memory and can also include non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage component.
The processor 110 is the control center of the mobile terminal. It connects the various parts of the whole mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 can include one or more processing units; preferably, the processor 110 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 can also include a power supply 111 (such as a battery) that supplies power to the various parts. Preferably, the power supply 111 can be logically connected to the processor 110 through a power management system, thereby realizing functions such as charging management, discharging management, and power consumption management through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 can also include a Bluetooth module and the like, which will not be described here.
To facilitate understanding of the embodiments of the invention, the communication network system on which the mobile terminal of the invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the invention. The communication network system is an LTE system of the universal mobile communication technology, and the LTE system includes, connected in communication in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and the operator's IP services 204.
Specifically, the UE 201 can be the terminal 100 described above, which will not be repeated here.
The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022, and so on. The eNodeB 2021 can be connected with the other eNodeBs 2022 by backhaul (such as an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 can include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 is used to provide registers to manage functions such as a home location register (not shown) and to preserve user-specific information about service features, data rates, and the like. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 can include the Internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the invention is not only applicable to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communication network system, the embodiments of the method of the invention are proposed.
As shown in Fig. 3, which is a schematic flowchart of the no-reference quality blurred-image prediction method of the invention, the method is applied to a mobile terminal 100 that includes a memory and a controller. In this embodiment, the method can be divided into the following steps. Step S310 is a conversion step: converting the image to be predicted into a grayscale image. Step S320 is an acquisition step: obtaining the first feature vector and the second feature vector of the Gaussian-blur-distorted version of the reference image, where the first feature vector is composed of the texture features of the reference image and the second feature vector is composed of the texture features of the reference image after low-pass filtering. Step S330 is a calculation step: extracting, from the grayscale image, the structural features of the image to be predicted, and computing the structural similarity between the image to be predicted and the reference image as well as the texture similarity between the first feature vector and the second feature vector. Step S340 is a prediction step: according to a preset neural-network prediction model, taking the structural similarity and the texture similarity as input samples and taking the resulting output as the predicted image of the image to be predicted. The order of the above steps can be adjusted according to different demands, some steps can be omitted, or other steps can be supplemented as required.
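The document names a "structural similarity" in step S330 without giving its formula. A common choice for this quantity is the SSIM index; the sketch below computes a global (unwindowed) SSIM in Python as an illustration under that assumption, not as the patent's own implementation:

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global structural similarity between two equal-size grayscale images,
    following the standard SSIM definition with the usual stabilizing
    constants c1 and c2 for an 8-bit dynamic range."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    return num / den
```

In practice SSIM is usually computed over local windows and averaged; the global form above keeps the sketch short.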
In this embodiment, the mobile terminal 100 is a terminal device, including, but not limited to, a mobile phone, a tablet, an e-book reader, a PC, and the like.
S310, converting the image to be predicted into a grayscale image.
In the embodiment of the present invention, the image to be predicted is an incomplete image with missing information. The image to be predicted is first pre-processed and converted into a grayscale image; specifically, the image to be predicted may be converted into a grayscale image of size 512 × 512.
It should be noted that the original, complete image is generally a colour image containing the three RGB primaries. Processing the original image directly would require analysing three groups of primary-colour information, whereas after converting the colour image to a grayscale image only one group of grayscale information needs to be analysed, which improves the efficiency of image processing.
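As an illustrative sketch of the conversion described above (not the embodiment's actual implementation), the common ITU-R BT.601 luma weights can be applied to each RGB pixel; the nested-list image representation and the helper name `to_gray` are assumptions for illustration, and a real system would typically also resize the result to 512 × 512 with an image library.

```python
def to_gray(rgb_image):
    """Convert an H x W image of (r, g, b) tuples to grayscale values
    using the common ITU-R BT.601 luma weights 0.299 / 0.587 / 0.114."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

# Three channels of colour information reduce to one grayscale channel:
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = to_gray(img)
```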
S320, obtaining the first feature vector and the second feature vector of the Gaussian-blur distorted image of the reference image, wherein the first feature vector is composed of the texture features of the reference image, and the second feature vector is composed of the texture features of the reference image after low-pass filtering.
In the embodiment of the present invention, the texture features of multiple Gaussian-blur distorted images in an image database may be extracted using the gray-level co-occurrence matrix method and the wavelet transform method. The texture features are represented by a feature vector, i.e. the first feature vector; specifically, the first feature vector may be a 34-dimensional feature vector.
It should be noted that an 8-dimensional texture feature vector of the image may first be extracted using the gray-level co-occurrence matrix method; then a 4-level wavelet decomposition is performed on the image using bior4.4 wavelets, and the mean and variance of the energy distribution of each subband on each decomposition level are extracted to form a 26-dimensional texture feature vector. Combined with the 8-dimensional texture feature vector extracted by the gray-level co-occurrence matrix method, a final 34-dimensional vector is formed.
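The assembly of the 34-dimensional vector can be sketched as follows, assuming the 8 GLCM features and the wavelet subband coefficients have already been computed; the helper names and the flat-list subband representation are illustrative assumptions. With 13 subbands from a 4-level decomposition (4 levels × 3 detail bands + 1 approximation), each contributing a mean and a variance of its energy distribution, the result is 8 + 26 = 34 dimensions as described above.

```python
def subband_energy_stats(coeffs):
    """Mean and variance of the energy (squared coefficients) of one subband."""
    energies = [c * c for c in coeffs]
    mean = sum(energies) / len(energies)
    var = sum((e - mean) ** 2 for e in energies) / len(energies)
    return mean, var

def texture_vector(glcm_features, subbands):
    """Concatenate the 8 GLCM features with (mean, variance) per wavelet
    subband; 13 subbands give 8 + 26 = 34 dimensions."""
    vec = list(glcm_features)
    for sb in subbands:
        vec.extend(subband_energy_stats(sb))
    return vec

# 8 placeholder GLCM features and 13 toy subbands -> a 34-dim vector
features = texture_vector([0.1] * 8, [[1.0, -1.0, 2.0]] * 13)
```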
The gray-level co-occurrence matrix is a common method of describing texture by studying the spatial correlation characteristics of gray levels. Since texture is formed by gray-level distributions recurring at spatial positions, a certain gray-level relationship exists between two pixels separated by a given distance in the image space, i.e. the spatial correlation characteristic of gray levels in an image. Whereas the gray-level histogram is the result of counting the gray level of each single pixel in an image, the gray-level co-occurrence matrix is obtained by counting the cases in which two pixels a certain distance apart have particular pairs of gray levels.
By way of example, take any point (x, y) in an (N × N) image block and another point (x+a, y+b) offset from it, and let this pair of gray values be (g1, g2). Moving the point (x, y) over the whole image yields various (g1, g2) values; if the number of gray levels is k, there are k² possible combinations of (g1, g2). For the whole image, count the number of occurrences of each (g1, g2) value, arrange the counts in a square matrix, and normalise them by the total number of occurrences into probabilities P(g1, g2); such a matrix is called a gray-level co-occurrence matrix. Taking different values of the offset (a, b) yields joint probability matrices for different situations. The value of (a, b) should be selected according to the periodicity of the texture distribution; for finer textures, small offsets such as (1, 0), (1, 1) and (2, 0) are chosen. When a=1 and b=0, the pixel pair is horizontal, i.e. a 0-degree scan; when a=0 and b=1, the pixel pair is vertical, i.e. a 90-degree scan; when a=1 and b=1, the pixel pair lies on the right diagonal, i.e. a 45-degree scan; when a=-1 and b=1, the pixel pair lies on the left diagonal, i.e. a 135-degree scan. In this way the probability that two pixel gray levels occur simultaneously converts the spatial coordinates (x, y) into a description of the "gray pair" (g1, g2), forming the gray-level co-occurrence matrix.
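The counting-and-normalising procedure just described can be sketched directly in Python; this is a toy illustration rather than the embodiment's implementation, and the 3 × 3 two-level image is an assumption chosen to keep the counts easy to verify by hand.

```python
def glcm(img, a, b, levels):
    """Gray-level co-occurrence matrix for offset (a, b): count each
    pair (g1, g2) of a pixel and its (x+a, y+b) neighbour, then
    normalise the counts into probabilities P(g1, g2)."""
    h, w = len(img), len(img[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + a, y + b
            if 0 <= x2 < w and 0 <= y2 < h:
                counts[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

# Two gray levels, horizontal offset (a=1, b=0): a 0-degree scan
img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
p = glcm(img, 1, 0, 2)
```

For this image the six horizontal pairs are (0,0), (0,1), (0,1), (1,1), (1,1), (1,0), so P(0,1) = 2/6 and the matrix entries sum to 1.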
The wavelet transform is a powerful time-frequency analysis tool that can be applied to fields such as signal processing, image processing and pattern recognition; it has good local characteristics in both the time domain and the frequency domain. Briefly, it is a mathematical transform that localises a function in time and space using a certain wavelet basis function: temporal information about the original function is obtained by translating the wavelet basis, and frequency information is obtained by scaling the wavelet basis. It mainly computes the approximation coefficients of the wavelet against the local signal. The discrete wavelet transform finally yields the approximation and detail signals of the original signal in the time domain at different frequency scales.
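By way of a toy illustration of the approximation/detail split just described (the embodiment uses 4-level bior4.4 decompositions, typically via a wavelet library; the one-level Haar transform below is only the simplest instance of the same idea):

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: approximation
    coefficients (scaled pairwise sums, the low-pass part) and detail
    coefficients (scaled pairwise differences, the high-pass part)."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# approx carries the coarse shape; detail carries local variation (texture)
approx, detail = haar_dwt([4.0, 4.0, 6.0, 2.0])
```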
In the embodiment of the present invention, the Gaussian-blur image is low-pass filtered; a median filter with a 9 × 9 filtering window is selected as the low-pass filter, and the texture feature vector of the low-pass filtered image, i.e. the second feature vector, is extracted. Specifically, the second feature vector may be a 34-dimensional feature vector.
It should be noted that a sharp image contains richer high-frequency information than a blurred image; a sharp image loses many components when passed through a low-pass filter, whereas a blurred image loses few. The no-reference structural sharpness (NRSS) image quality evaluation method constructs a reference image using a low-pass filter and evaluates the quality of a blur-distorted image by calculating the structural similarity between the image to be predicted and this reference image: the smaller the structural similarity, the sharper the image; conversely, the larger the structural similarity, the blurrier the image.
It should be noted that in the embodiment of the present invention S310 may be performed before S320; in practical applications S320 may also be performed before S310, or S310 and S320 may be performed in parallel, which is not specifically limited herein.
S330, extracting the structural features of the image to be predicted from the grayscale image, and calculating the structural similarity between the image to be predicted and the reference image, as well as the texture similarity between the first feature vector and the second feature vector.
It should be noted that, from the gradient information of the grayscale image corresponding to the image to be predicted, the N blocks with the richest gradient information can be found. It can be understood that the image to be predicted is divided into multiple blocks and the variance of each block is calculated; the larger the variance, the richer the gradient information, so the N blocks with the largest variance are taken as the N blocks with the richest gradient information. The size of N directly affects the evaluation result and also the running time of the algorithm, so the selection of blocks is crucial and directly affects the result of the picture prediction.
Specifically, the no-reference structural sharpness NRSS of the image to be predicted is calculated by the formula:

NRSS = 1 - (1/N) · Σ_{i=1}^{N} SSIM(x_i, y_i)

wherein N is the number of blocks with the richest gradient information, and x_i and y_i are the corresponding image blocks, i.e. the regions of pixels taken from the image to be predicted and from the reference image, respectively.
SSIM is the structural similarity of images:

SSIM(x, y) = l^α(x, y) · c^β(x, y) · s^γ(x, y)

For simplicity, α = β = γ = 1 may be taken, where l is the luminance comparison, c is the contrast comparison, s is the structural comparison, and x, y are the pixel values. Specifically, the image sharpness can be obtained by calculating the structural similarity between the image to be predicted and the reference image.
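A minimal sketch of the two quantities above, assuming global (whole-block) statistics and the usual stabilising constants C1 = (0.01·255)² and C2 = (0.03·255)²; the per-block gradient extraction is omitted, and the function names are illustrative assumptions rather than the embodiment's implementation.

```python
def ssim(x, y, c1=6.5025, c2=58.5225):
    """Single-scale SSIM over two equal-length lists of gray values,
    with alpha = beta = gamma = 1 folded into the standard two-term form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def nrss(blocks_x, blocks_y):
    """NRSS = 1 - (1/N) * sum of SSIM over the N richest-gradient blocks."""
    n = len(blocks_x)
    return 1.0 - sum(ssim(bx, by) for bx, by in zip(blocks_x, blocks_y)) / n

block = [10.0, 20.0, 30.0, 40.0]
identical = ssim(block, block)   # identical blocks give an SSIM of 1
```

Identical blocks yield SSIM = 1 and hence NRSS = 0, matching the rule that a larger similarity to the low-pass reference means a blurrier image.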
Those skilled in the art will appreciate that the texture similarity of the first feature vector and the second feature vector can be calculated by computing the Euclidean distance between the first feature vector and the second feature vector, and taking the calculated Euclidean distance as the texture similarity:

D(X, Y) = sqrt( Σ_{i=1}^{d} (X(i) - Y(i))² )

wherein X(i) denotes the first feature vector, extracted from the Gaussian-blur image, Y(i) denotes the second feature vector, extracted from the filtered Gaussian-blur image, i denotes the dimension index of the vectors, and d denotes the number of dimensions of the first and second feature vectors.
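The texture-similarity computation above reduces to a one-liner; the sketch below assumes plain Python lists for the two feature vectors, with a smaller distance indicating more similar textures.

```python
import math

def texture_similarity(x, y):
    """Euclidean distance D(X, Y) = sqrt(sum_i (X(i) - Y(i))^2)
    between the first and second feature vectors."""
    assert len(x) == len(y)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

d = texture_similarity([0.0, 0.0], [3.0, 4.0])   # the classic 3-4-5 triangle
```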
S340, according to the preset neural network prediction model, taking the structural similarity and the texture similarity as input samples, and taking the resulting output as the predicted image for the image to be predicted.
A BP neural network is a multi-layer feed-forward network trained by the error back-propagation algorithm, and is one of the most widely used neural network models at present. A BP network can learn and store a large number of input-output mapping relationships without the mathematical equation describing these relationships being disclosed in advance. Its learning rule is gradient descent: the weights and thresholds of the network are continually adjusted through back-propagation so as to minimise the sum of squared errors of the network. The topology of a BP neural network model includes an input layer, a hidden layer and an output layer.
In the embodiment of the present invention, the process of establishing the BP neural network prediction model may be as follows. The network takes the texture similarity and the structural similarity of images as input samples, and the subjective evaluation values DMOS of the distorted blurred images in the database as output samples, and a BP neural network prediction model with a single hidden layer is established. The number of hidden-layer nodes follows the empirical formula:

h = sqrt(m + n) + a

wherein m and n denote the numbers of input and output nodes respectively, and a is any value between 1 and 10; specifically, the numbers of input and output nodes and the value of a may also be determined experimentally.
Multiple experiments show that when the number of hidden-layer nodes is 9, the number of iterations is smallest and the error between the predicted values and the output sample values is smallest; the number of hidden-layer nodes of the network is therefore determined to be 9.
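The empirical sizing rule above can be sketched as follows, assuming the usual reading of the formula with the square root rounded to an integer. With m = 2 inputs (structural and texture similarity), n = 1 output (the DMOS value) and a = 7, the rule reproduces the 9 hidden nodes found experimentally; the choice a = 7 is an illustrative assumption, since the document only states that a lies between 1 and 10.

```python
import math

def hidden_nodes(m, n, a):
    """Rule-of-thumb hidden-layer size: round(sqrt(m + n)) + a,
    where m and n are the numbers of input and output nodes."""
    return round(math.sqrt(m + n)) + a

# Two inputs (structural + texture similarity), one output (DMOS score)
h = hidden_nodes(2, 1, 7)
```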
Compared with the prior art, the present invention measures the local detail information of an image by calculating the texture similarity between the image to be evaluated and the reference image, and finally takes the two similarity indices as the input of the neural network prediction model to obtain the predicted image, thereby completing a picture with missing information.
As shown in Fig. 4 and Fig. 5, the mobile terminal 100 is a terminal device, including but not limited to a mobile phone, a tablet, an e-book reader, a PC and the like, as long as it can store and execute the no-reference-quality blurred image prediction program. Fig. 4 shows the picture to be processed obtained by the no-reference-quality blurred image prediction program, i.e. an incomplete fingerprint; Fig. 5 shows the fingerprint output after the no-reference-quality blurred image prediction program has predicted the picture to be processed.
An embodiment is described in detail below.
As shown in Fig. 6, which is a schematic flow chart of the first embodiment of the no-reference-quality blurred image prediction method of the present invention. In this embodiment, the execution order of the steps in the flow chart shown in Fig. 6 may be changed according to different demands, and some steps may be omitted.
Step S610, the mobile terminal 100 obtains the incomplete fingerprint image to be processed and performs grayscale processing on the incomplete fingerprint.
It can be understood that the incomplete fingerprint images that need to be predicted are placed into the image library to be predicted by the user; the no-reference-quality blurred image prediction program may retrieve from it the images requiring fingerprint restoration either at random or in a certain order.
Step S620, detecting whether the obtained incomplete fingerprint grayscale image meets the preset requirement; if so, performing step S630; otherwise, returning to step S610 to reacquire the incomplete fingerprint image.
Step S630, randomly obtaining a Gaussian-blur distorted image from the preset image database, and analysing the Gaussian-blur distorted image to obtain the feature vector corresponding to its texture features.
In practical applications, S630 may be performed after S610 and S620, S630 may also be performed before S610 and S620, or S630 may be carried out in parallel with S610 and S620; the embodiment of the present invention is merely exemplary and imposes no specific limitation.
Step S640, extracting the structural features of the image to be predicted, filtering the Gaussian-blur distorted image with a low-pass filter, and obtaining the vector representing the texture features of the filtered image.
Those skilled in the art will appreciate that a certain difference exists between the picture after low-pass filtering and the picture before filtering; the specific difference can therefore be obtained from their respective texture features, from which the texture similarity can also be obtained.
Step S650, calculating the texture similarity of the feature vectors corresponding to the Gaussian-blur distorted image before and after the low-pass filter.
Step S660, taking the structural similarity and the texture similarity as the input to the BP neural network prediction model, and outputting the predicted image.
It should be added that in this embodiment the mobile terminal 100 executes steps S610-S660; in other embodiments, the above steps may be omitted, reordered or replaced, for example the texture similarity may be calculated first and the structural similarity afterwards.
Further, the present invention proposes a terminal 700.
As shown in Fig. 8, which is a module diagram of an embodiment of the terminal of the present invention.
In this embodiment, the terminal 700 can be divided into one or more modules, which are stored in the memory 710 and executed by one or more processors 720 (the controller 120 in this embodiment) to carry out the present invention. Specifically, the terminal includes: a memory 710, a processor 720 and a communication bus 730;
the communication bus is used to realise connection and communication between the processor and the memory;
the processor is used to execute the no-reference-quality blurred image prediction program stored in the memory, so as to realise the following steps:
converting the image to be predicted into a grayscale image; and obtaining the first feature vector and the second feature vector of the Gaussian-blur distorted image of the reference image, wherein the first feature vector is composed of the texture features of the reference image, and the second feature vector is composed of the texture features of the reference image after low-pass filtering;
extracting the structural features of the image to be predicted from the grayscale image, and calculating the structural similarity between the image to be predicted and the reference image, as well as the texture similarity between the first feature vector and the second feature vector;
according to the preset neural network prediction model, taking the structural similarity and the texture similarity as input samples, and taking the resulting output as the predicted image for the image to be predicted.
Specifically, the processor is used to execute the no-reference-quality blurred image prediction program, so as to realise the following step:
converting the image to be predicted into a grayscale image of size 512 × 512.
Specifically, the processor is used to execute the no-reference-quality blurred image prediction program, so as to realise the following steps:
calculating the Euclidean distance between the first feature vector and the second feature vector;
taking the calculated Euclidean distance as the texture similarity.
Specifically, the processor is used to execute the no-reference-quality blurred image prediction program, so as to realise the calculation of the number of hidden-layer nodes by the formula:

h = sqrt(m + n) + a

wherein m and n denote the numbers of input and output nodes, and a is any value between 1 and 10.
Specifically, the processor is used to execute the no-reference-quality blurred image prediction program, wherein the first feature vector and the second feature vector are each a 34-dimensional feature vector.
Compared with the prior art, the terminal proposed by the present invention measures the local detail information of an image by calculating the texture similarity between the image to be evaluated and the reference image, and finally takes the two similarity indices as the input of the neural network prediction model to obtain the predicted image, thereby completing a picture with missing information.
Referring to Fig. 8, the present invention also proposes a computer-readable storage medium 800; the computer-readable storage medium stores one or more programs 810 (program 1 to program n), and the one or more programs can be executed by one or more processors 820 (processor 1 to processor m) to realise the following steps:
converting the image to be predicted into a grayscale image; and obtaining the first feature vector and the second feature vector of the Gaussian-blur distorted image of the reference image, wherein the first feature vector is composed of the texture features of the reference image, and the second feature vector is composed of the texture features of the reference image after low-pass filtering;
extracting the structural features of the image to be predicted from the grayscale image, and calculating the structural similarity between the image to be predicted and the reference image, as well as the texture similarity between the first feature vector and the second feature vector;
according to the preset neural network prediction model, taking the structural similarity and the texture similarity as input samples, and taking the resulting output as the predicted image for the image to be predicted.
Further, the one or more programs can also be executed by the one or more processors to realise the following step:
converting the image to be predicted into a grayscale image of size 512 × 512.
Further, the one or more programs can also be executed by the one or more processors to realise the following steps:
calculating the Euclidean distance between the first feature vector and the second feature vector;
taking the calculated Euclidean distance as the texture similarity.
Further, the one or more programs can also be executed by the one or more processors to realise the following calculation of the number of hidden-layer nodes:

h = sqrt(m + n) + a

wherein m and n denote the numbers of input and output nodes, and a is any value between 1 and 10.
Further, the one or more programs can also be executed by the one or more processors, wherein the first feature vector and the second feature vector are each a 34-dimensional feature vector.
Compared with the prior art, the computer-readable storage medium proposed by the present invention measures the local detail information of an image by calculating the texture similarity between the image to be evaluated and the reference image, and finally takes the two similarity indices as the input of the neural network prediction model to obtain the predicted image, thereby completing a picture with missing information.
The above embodiment numbers of the present invention are for description only and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk or optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (10)
1. A no-reference-quality blurred-image prediction method, characterized in that the method comprises:
converting an image to be predicted into a grayscale image; and obtaining a first feature vector and a second feature vector of a Gaussian-blur distorted image of a reference image, wherein the first feature vector is a feature vector composed of texture features of the reference image, and the second feature vector is a feature vector composed of texture features of the reference image after low-pass filtering;
according to the grayscale image, extracting structural features of the image to be predicted, and computing the structural similarity between the image to be predicted and the reference image, as well as the texture similarity between the first feature vector and the second feature vector;
according to a preset neural-network prediction model, taking the structural similarity and the texture similarity as an input sample, and using the resulting output as the prediction image of the image to be predicted.
2. The no-reference-quality blurred-image prediction method according to claim 1, characterized in that converting the image to be predicted into a grayscale image comprises:
converting the image to be predicted into a grayscale image of size 512*512.
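The conversion recited in claim 2 can be sketched in plain Python as follows. The ITU-R BT.601 luma weights and nearest-neighbour resampling are assumptions for illustration; the claim specifies only the 512*512 target size.

```python
def to_gray_512(rgb):
    """Convert an H x W image, given as nested lists of (R, G, B)
    tuples, into a 512*512 grayscale image (nested lists of floats).

    Assumed details: BT.601 luma weights and nearest-neighbour
    resampling, neither of which the claim specifies.
    """
    h, w = len(rgb), len(rgb[0])
    # Weighted sum of the colour channels gives the gray level.
    gray = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]
    # Nearest-neighbour index mapping onto the 512*512 target grid.
    return [[gray[i * h // 512][j * w // 512] for j in range(512)]
            for i in range(512)]

# Example: a tiny 4x4 white image is resampled up to 512*512.
out = to_gray_512([[(255, 255, 255)] * 4 for _ in range(4)])
```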
3. The no-reference-quality blurred-image prediction method according to claim 1 or 2, characterized in that computing the texture similarity between the first feature vector and the second feature vector comprises:
calculating the Euclidean distance between the first feature vector and the second feature vector; and
using the calculated Euclidean distance as the texture similarity.
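The texture-similarity computation of claim 3 reduces to a Euclidean distance between the two feature vectors; a minimal plain-Python sketch (the helper name is illustrative):

```python
import math

def texture_similarity(v1, v2):
    """Euclidean distance between two texture feature vectors.

    Per claim 3, the distance itself serves as the texture
    similarity measure (a smaller distance means more similar).
    """
    if len(v1) != len(v2):
        raise ValueError("feature vectors must have the same dimension")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

# Classic 3-4-5 example; the patent's vectors are 34-dimensional.
d = texture_similarity([0.0, 0.0, 0.0], [3.0, 4.0, 0.0])
```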
4. The no-reference-quality blurred-image prediction method according to claim 1, characterized in that the number of hidden nodes of the preset neural-network prediction model is calculated as:
N = √(n + m) + a
where n and m respectively denote the numbers of input and output nodes, and a is any value between 1 and 10.
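The sizing rule of claim 4 can be evaluated directly; a minimal sketch (the function name is illustrative, and the claim does not say whether N is rounded to an integer):

```python
import math

def hidden_node_count(n, m, a):
    """Hidden-layer size per claim 4: N = sqrt(n + m) + a, where n and m
    are the numbers of input and output nodes and a is an empirically
    chosen constant between 1 and 10."""
    if not 1 <= a <= 10:
        raise ValueError("a should lie between 1 and 10")
    return math.sqrt(n + m) + a

# Example: 7 input nodes, 2 output nodes, a = 4 gives N = 3 + 4 = 7.
N = hidden_node_count(7, 2, 4)
```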
5. The no-reference-quality blurred-image prediction method according to claim 1, characterized in that the first feature vector and the second feature vector are each 34-dimensional feature vectors.
6. A terminal, characterized in that the terminal comprises: a memory, a processor, and a communication bus;
the communication bus is configured to realize connection and communication between the processor and the memory; and
the processor is configured to execute a no-reference-quality blurred-image prediction program stored in the memory to implement the following steps:
converting an image to be predicted into a grayscale image; and obtaining a first feature vector and a second feature vector of a Gaussian-blur distorted image of a reference image, wherein the first feature vector is a feature vector composed of texture features of the reference image, and the second feature vector is a feature vector composed of texture features of the reference image after low-pass filtering;
according to the grayscale image, extracting structural features of the image to be predicted, and computing the structural similarity between the image to be predicted and the reference image, as well as the texture similarity between the first feature vector and the second feature vector;
according to a preset neural-network prediction model, taking the structural similarity and the texture similarity as an input sample, and using the resulting output as the prediction image of the image to be predicted.
7. The terminal according to claim 6, characterized in that the processor is configured to execute the no-reference-quality blurred-image prediction program to implement the following step:
converting the image to be predicted into a grayscale image of size 512*512.
8. The terminal according to claim 6 or 7, characterized in that the processor is configured to execute the no-reference-quality blurred-image prediction program to implement the following steps:
calculating the Euclidean distance between the first feature vector and the second feature vector; and
using the calculated Euclidean distance as the texture similarity.
9. The terminal according to claim 6, characterized in that the processor is configured to execute the no-reference-quality blurred-image prediction program to implement the following step:
the first feature vector and the second feature vector are each 34-dimensional feature vectors.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, the one or more programs being executable by one or more processors to implement the following steps:
converting an image to be predicted into a grayscale image; and obtaining a first feature vector and a second feature vector of a Gaussian-blur distorted image of a reference image, wherein the first feature vector is a feature vector composed of texture features of the reference image, and the second feature vector is a feature vector composed of texture features of the reference image after low-pass filtering;
according to the grayscale image, extracting structural features of the image to be predicted, and computing the structural similarity between the image to be predicted and the reference image, as well as the texture similarity between the first feature vector and the second feature vector;
according to a preset neural-network prediction model, taking the structural similarity and the texture similarity as an input sample, and using the resulting output as the prediction image of the image to be predicted.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710296834.5A CN107145855B (en) | 2017-04-28 | 2017-04-28 | No-reference quality blurred image prediction method, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107145855A true CN107145855A (en) | 2017-09-08 |
CN107145855B CN107145855B (en) | 2020-10-09 |
Family
ID=59774149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710296834.5A Active CN107145855B (en) | 2017-04-28 | 2017-04-28 | Reference quality blurred image prediction method, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107145855B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107680051A (en) * | 2017-09-18 | 2018-02-09 | 维沃移动通信有限公司 | A kind of image filtering method and mobile terminal |
CN109919894A (en) * | 2017-12-07 | 2019-06-21 | 航天信息股份有限公司 | A kind of non-reference picture quality appraisement method and system based on human visual system |
WO2020172999A1 (en) * | 2019-02-28 | 2020-09-03 | 苏州润迈德医疗科技有限公司 | Quality evaluation method and apparatus for sequence of coronary angiogram images |
CN112149566A (en) * | 2020-09-23 | 2020-12-29 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN114419446A (en) * | 2022-01-26 | 2022-04-29 | 国网浙江省电力有限公司超高压分公司 | Flow pattern identification method and device for oil-water two-phase flow, storage medium and electronic device |
WO2022105197A1 (en) * | 2020-11-17 | 2022-05-27 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image detection |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020131641A1 (en) * | 2001-01-24 | 2002-09-19 | Jiebo Luo | System and method for determining image similarity |
US20100195926A1 (en) * | 2009-02-02 | 2010-08-05 | Olympus Corporation | Image processing apparatus and image processing method |
US20100322518A1 (en) * | 2009-06-23 | 2010-12-23 | Lakshman Prasad | Image segmentation by hierarchial agglomeration of polygons using ecological statistics |
CN103854268A (en) * | 2014-03-26 | 2014-06-11 | 西安电子科技大学 | Image super-resolution reconstruction method based on multi-core gaussian process regression |
2017-04-28: application CN201710296834.5A granted as patent CN107145855B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN107145855B (en) | 2020-10-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107145855A (en) | No-reference quality blurred image prediction method, terminal and storage medium | |
CN107508994A (en) | Touch-screen report-point-rate processing method, terminal and computer-readable storage medium | |
CN108037893A (en) | Display control method and device for a flexible screen, and computer-readable storage medium | |
CN107172364A (en) | Image exposure compensation method and device, and computer-readable storage medium | |
CN107133092A (en) | Multi-thread synchronization processing method, terminal and computer-readable storage medium | |
CN107256530A (en) | Method for adding a picture watermark, mobile terminal and readable storage medium | |
CN107748856A (en) | Two-dimensional-code identification method, terminal and computer-readable storage medium | |
CN107566635A (en) | Screen-brightness setting method, mobile terminal and computer-readable storage medium | |
CN106953684A (en) | Satellite-search method, mobile terminal and computer-readable storage medium | |
CN107844231A (en) | Interface display method, mobile terminal and computer-readable storage medium | |
CN107493426A (en) | Information acquisition method, device and computer-readable storage medium | |
CN106953989A (en) | Incoming-call reminding method and device, terminal, and computer-readable storage medium | |
CN108230270A (en) | Noise-reduction method, terminal and computer-readable storage medium | |
CN107291334A (en) | Method and device for determining an icon font color | |
CN107273035A (en) | Application recommendation method and mobile terminal | |
CN107295270A (en) | Method, device, terminal and computer-readable storage medium for determining an image brightness value | |
CN108682040A (en) | Sketch-image generation method, terminal and computer-readable storage medium | |
CN107124531A (en) | Image processing method and mobile terminal | |
CN107104886A (en) | Information indication method, device and computer-readable storage medium | |
CN108170817A (en) | Differentiated video acquisition method and device for a photographed subject, and readable storage medium | |
CN107656644A (en) | Grip recognition method and corresponding mobile terminal | |
CN108182664A (en) | Image processing method, mobile terminal and computer-readable storage medium | |
CN109300099A (en) | Image processing method, mobile terminal and computer-readable storage medium | |
CN108600325A (en) | Method for determining push content, server and computer-readable storage medium | |
CN108196777A (en) | Flexible-screen application method, device and computer-readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||