CN106529606A - Method of improving image recognition accuracy - Google Patents
- Publication number
- CN106529606A CN106529606A CN201611099552.8A CN201611099552A CN106529606A CN 106529606 A CN106529606 A CN 106529606A CN 201611099552 A CN201611099552 A CN 201611099552A CN 106529606 A CN106529606 A CN 106529606A
- Authority
- CN
- China
- Prior art keywords
- word vector
- image recognition
- sequence
- noun
- degree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method of improving image recognition accuracy. The method applies word-embedding techniques to the text associated with an image in order to improve recognition accuracy. The method comprises the steps of: extracting an image and its text description; recognizing the image with a neural network trained using deep-learning techniques; taking the top m recognition results for subsequent processing; identifying the noun sequence in the text with a noun-recognition technique; training word-vector models and computing word vectors; and filtering the image recognition results by word-vector proximity to improve recognition accuracy. In the technical scheme of the invention, the text associated with the image is analysed with word-embedding techniques, so that image recognition accuracy is improved.
Description
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to a method for improving image recognition accuracy.
Background technology
Since 2006, "deep learning" (Deep Learning) based on deep neural networks (DNNs) has drawn wide attention in academia and has become a major trend in today's Internet big-data and artificial-intelligence work. By building hierarchical model structures similar to the human brain, deep learning extracts features from input data layer by layer, from low level to high level, and thus establishes a good mapping from low-level signals to high-level semantics. It is the biggest advance in the field of machine intelligence in the last ten years, and has produced the greatest breakthroughs in image recognition.
When deep learning is used to classify images, a picture is, as a rule, given a series of recognition results together with their probabilities. However, an image does not exist in isolation. A news picture is accompanied by a news headline and news content, and both are closely related to what the picture shows. An e-commerce picture is usually accompanied by a title describing the product on sale, and this text description is closely related to the product picture.
Prior-art methods recognize images and obtain recognition results using techniques such as deep learning. News images and e-commerce images are usually accompanied by text descriptions, but existing recognition techniques make no use of this text. The invention uses the text-description information to remove, from the candidate results produced by deep-learning recognition, the candidates that are weakly associated with the text description, thereby improving recognition accuracy.
Summary of the invention
An object of the invention is to provide a method for improving image recognition accuracy. It is intended to solve the problem that existing recognition techniques do not use the text descriptions that accompany news images or e-commerce images, cannot remove from the prior-art recognition candidates those that are weakly associated with the text description, and therefore cannot improve recognition accuracy.
The invention is realized as follows: a method for improving image recognition accuracy, which uses word-embedding techniques to calculate the degree of association (characterized by the minimum distance) between the candidate results of image recognition and the accompanying text, removes candidates with a low degree of association, and thereby improves the accuracy of image recognition;
The method comprises the following steps:
Extracting picture/text-description pairs;
For the image, a neural network trained with deep-learning techniques performs recognition; the recognition result is a class-probability sequence (C1, P1), (C2, P2), ..., (Cn, Pn), sorted by probability so that P1 ≥ P2 ≥ ... ≥ Pn; usable neural networks include, but are not limited to, AlexNet, GoogLeNet, VGG, Inception and ResNet;
From the image recognition result, the top m items (C1, P1), (C2, P2), ..., (Cm, Pm) (m ≤ n) are taken for subsequent processing;
For the descriptive text, the noun sequence in the text is identified with a noun-recognition technique; after duplicate nouns are removed, the resulting noun sequence is denoted N1, N2, ..., Nk (a possible implementation is sketched after this list);
Training and computing word vectors;
Filtering the image recognition results by word-vector proximity to improve recognition accuracy.
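The patent does not name a specific noun-recognition tool, so the following sketch is only one possible illustration of the noun-sequence step, assuming the jieba part-of-speech tagger; the library and its tag set are assumptions, not part of the disclosure.

```python
# Hypothetical sketch: extract the de-duplicated noun sequence N1, N2, ..., Nk
# from a text description, assuming the jieba POS tagger.
import jieba.posseg as pseg

def extract_noun_sequence(text):
    """Return the nouns of `text` in order of first appearance, without duplicates."""
    nouns, seen = [], set()
    for word, flag in pseg.cut(text):
        # jieba noun tags start with 'n' (n, nr, ns, nt, nz, ...)
        if flag.startswith("n") and word not in seen:
            seen.add(word)
            nouns.append(word)
    return nouns
```

Any other part-of-speech tagger that labels nouns could be substituted; only the resulting noun sequence matters for the later steps.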
Further, the extraction of picture/text-description pairs includes:
a) for a news picture, extracting the news headline and using it as the text description of the news picture;
b) for an e-commerce picture, extracting the product description and using it as the text description of the e-commerce picture.
News articles and e-commerce product data are obtained by web crawling; the crawled content is HTML, and the news headline and the product description can be extracted by structured analysis of the HTML content.
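As a concrete illustration of the structured analysis mentioned above, the sketch below extracts an image/description pair from a crawled HTML page. The patent does not specify a parser or page layout; BeautifulSoup and the CSS selectors here are assumptions chosen for illustration only.

```python
# Hypothetical sketch: extract a picture / text-description pair from crawled HTML.
# The selectors depend entirely on the target site and are placeholders.
import requests
from bs4 import BeautifulSoup

def extract_image_text_pair(url, title_selector="h1.title", image_selector="img.main"):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title_node = soup.select_one(title_selector)    # news headline or product description
    image_node = soup.select_one(image_selector)    # the associated picture
    title = title_node.get_text(strip=True) if title_node else ""
    image_url = image_node.get("src", "") if image_node else ""
    return image_url, title
```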
Further, the training and computation of word vectors includes:
1) training a news word-vector model on a news corpus and an e-commerce word-vector model on an e-commerce corpus, and selecting the corresponding word-vector model to compute the word vectors of the noun sequence;
2) computing the word vectors of the noun sequence N1, N2, ..., Nk, denoted Vn1, Vn2, ..., Vnk;
3) computing the word vectors of the category sequence C1, C2, ..., Cm, denoted Vc1, Vc2, ..., Vcm.
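The patent does not prescribe a word-embedding implementation. A minimal sketch, assuming gensim's Word2Vec with the 60-dimension setting mentioned later, might look as follows; the corpus preparation and the library choice are assumptions.

```python
# Hypothetical sketch: train a word-vector model on a tokenized corpus and look up
# the vectors Vn1..Vnk / Vc1..Vcm for a list of words. gensim is an assumed tool.
from gensim.models import Word2Vec

def train_vector_model(tokenized_sentences, dim=60):
    # tokenized_sentences: list of token lists from the news or e-commerce corpus
    return Word2Vec(sentences=tokenized_sentences, vector_size=dim, min_count=1)

def vectors_for(model, words):
    """Return {word: vector} for the words present in the model's vocabulary."""
    return {w: model.wv[w] for w in words if w in model.wv}
```

Two separate models would be trained, one on the news corpus and one on the e-commerce corpus, and the model matching the image's source would be selected at lookup time.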
Further, filtering the image recognition results by word-vector proximity to improve recognition accuracy includes:
(1) denoting the distance between two word vectors V1 and V2 as d_v1,v2; the smaller the distance, the closer the meanings of the two words; for the word vector Vci of each category, computing the nearest noun word vector Vnj, with the nearest distance denoted d_vci,nj;
(2) setting a distance threshold t; when d_vci,nj exceeds t, the category is weakly associated with the image description text and is discarded;
(3) sorting the remaining categories by nearest distance, smallest first, to obtain the final image recognition result.
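A minimal sketch of steps (1)-(3), assuming Euclidean distance and a numpy implementation; the threshold t is a tunable parameter (the embodiment later uses t = 10 with 60-dimension vectors).

```python
# Hypothetical sketch: keep only categories whose nearest noun vector is within
# distance t, then re-rank the survivors by that nearest distance.
import numpy as np

def rerank_by_text(category_vectors, noun_vectors, t=10.0):
    """category_vectors: {category Ci: Vci}; noun_vectors: {noun Nj: Vnj}."""
    kept = []
    for category, vc in category_vectors.items():
        # minimum distance from Vci to any noun vector Vnj
        d_min = min(np.linalg.norm(vc - vn) for vn in noun_vectors.values())
        if d_min <= t:                      # categories beyond t are weakly associated
            kept.append((d_min, category))
    kept.sort(key=lambda pair: pair[0])     # nearest distance first
    return [category for _, category in kept]
```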
Further, in step (1), computing the noun word vector Vnj nearest to Vci and the nearest distance d_vci,nj specifically includes:
selecting the cosine-distance method or the Euclidean-distance method to compute the distance between word vectors; the word vectors of the noun sequence N1, N2, ..., Nk are denoted Vn1, Vn2, ..., Vnk; assuming the word vector of candidate result Cm is Vcm, the distances between Vcm and Vn1, Vn2, ..., Vnk are computed and denoted d_vcm,n1, d_vcm,n2, ..., d_vcm,nk; the nearest distance is then d_vcm,vn = min(d_vcm,n1, d_vcm,n2, ..., d_vcm,nk).
Further, in step (3), the remaining sequence is obtained as follows: assuming the candidate results are C1, C2, ..., Cm with word vectors Vc1, Vc2, ..., Vcm, and the word vectors of the noun sequence N1, N2, ..., Nk are Vn1, Vn2, ..., Vnk, the degree of association between each candidate result and the noun sequence is computed; results with a low degree of association are discarded, and those that remain form the remaining sequence. For example, given three candidate results C1, C2 and C3, if C2 turns out to have a low degree of association with the noun sequence, the remaining sequence is C1, C3.
Further, the Euclidean-distance method is: given two vectors V1(x11, x12, ..., x1n) and V2(x21, x22, ..., x2n), the Euclidean distance is d(V1, V2) = sqrt((x11 − x21)² + (x12 − x22)² + ... + (x1n − x2n)²);
the cosine-distance method is: the cosine distance is the cosine of the angle between the two vectors, cos(V1, V2) = (V1 · V2) / (|V1| · |V2|).
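For illustration, the two distance choices can be written as small numpy helpers; this is only a sketch of the standard definitions given above.

```python
# Minimal sketch of the two distance measures named in the text.
import numpy as np

def euclidean_distance(v1, v2):
    return float(np.linalg.norm(np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)))

def cosine_of_angle(v1, v2):
    # the "cosine distance" of the text: the cosine of the angle between v1 and v2
    v1, v2 = np.asarray(v1, dtype=float), np.asarray(v2, dtype=float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```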
The method for improving image recognition accuracy provided by the invention uses word-embedding techniques to analyse the text associated with a picture and thereby improves the accuracy of image recognition.
The invention uses the text-description information to remove, from the candidate results of deep-learning recognition, the candidates that are weakly associated with the text description, achieving the purpose of improving recognition accuracy.
The invention uses an e-commerce corpus to train a 60-dimension word-vector model. Using the Euclidean distance with a distance threshold of 10, categories with a low degree of association are excluded, and after re-ranking by minimum distance, an accurate final recognition result is obtained.
Description of the drawings
Fig. 1 is a flowchart of the method for improving image recognition accuracy provided by an embodiment of the invention.
Specific embodiments
To make the objects, technical solutions and advantages of the invention clearer, the invention is further described below with reference to the embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
The application principle of the invention is described in detail below with reference to the accompanying drawing.
As shown in Fig. 1, the method for improving image recognition accuracy provided by an embodiment of the invention uses word-embedding techniques to analyse the text associated with a picture and improves the accuracy of image recognition;
The method comprises the following steps:
S101: extracting picture/text-description pairs;
S102: for the image, a neural network trained with deep-learning techniques performs recognition; the recognition result is a class-probability sequence (C1, P1), (C2, P2), ..., (Cn, Pn), sorted by probability so that P1 ≥ P2 ≥ ... ≥ Pn; usable neural networks include, but are not limited to, AlexNet, GoogLeNet, VGG, Inception and ResNet;
S103: from the image recognition result, the top m items (C1, P1), (C2, P2), ..., (Cm, Pm) (m ≤ n) are taken for subsequent processing (see the sketch after this list);
S104: for the descriptive text, the noun sequence in the text is identified with a noun-recognition technique; after duplicate nouns are removed, the resulting noun sequence is denoted N1, N2, ..., Nk;
S105: training and computing word vectors;
S106: filtering the image recognition results by word-vector proximity to improve recognition accuracy.
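As an illustration of steps S102/S103, the sketch below turns a classifier's probability output into the top-m class-probability sequence (C1, P1), ..., (Cm, Pm); the probability array and label list are assumed inputs, and the network itself is not shown.

```python
# Hypothetical sketch: select the top-m (class, probability) pairs from a
# softmax output vector and a parallel list of class names.
import numpy as np

def top_m_candidates(probabilities, labels, m=20):
    probs = np.asarray(probabilities, dtype=float)
    order = np.argsort(probs)[::-1][:m]     # indices sorted by descending probability
    return [(labels[i], float(probs[i])) for i in order]
```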
Further, the extraction of picture/text-description pairs includes:
a) for a news picture, extracting the news headline and using it as the text description of the news picture;
b) for an e-commerce picture, extracting the product description and using it as the text description of the e-commerce picture.
Further, the training and computation of word vectors includes:
1) training a news word-vector model on a news corpus and an e-commerce word-vector model on an e-commerce corpus, and selecting the corresponding word-vector model to compute the word vectors of the noun sequence;
2) computing the word vectors of the noun sequence N1, N2, ..., Nk, denoted Vn1, Vn2, ..., Vnk;
3) computing the word vectors of the category sequence C1, C2, ..., Cm, denoted Vc1, Vc2, ..., Vcm.
Further, filtering the image recognition results by word-vector proximity to improve recognition accuracy includes:
(1) denoting the distance between two word vectors V1 and V2 as d_v1,v2; the smaller the distance, the closer the meanings of the two words; for the word vector Vci of each category, computing the nearest noun word vector Vnj, with the nearest distance denoted d_vci,nj;
(2) setting a distance threshold t; when d_vci,nj exceeds t, the category is weakly associated with the image description text and is discarded;
(3) sorting the remaining categories by nearest distance, smallest first, to obtain the final image recognition result.
In step (1), computing the noun word vector Vnj nearest to Vci and the nearest distance d_vci,nj specifically includes:
selecting the cosine-distance method or the Euclidean-distance method to compute the distance between word vectors; the word vectors of the noun sequence N1, N2, ..., Nk are denoted Vn1, Vn2, ..., Vnk; assuming the word vector of candidate result Cm is Vcm, the distances between Vcm and Vn1, Vn2, ..., Vnk are computed and denoted d_vcm,n1, d_vcm,n2, ..., d_vcm,nk; the nearest distance is then d_vcm,vn = min(d_vcm,n1, d_vcm,n2, ..., d_vcm,nk).
In step (3), the remaining sequence is obtained as follows: assuming the candidate results are C1, C2, ..., Cm with word vectors Vc1, Vc2, ..., Vcm, and the word vectors of the noun sequence N1, N2, ..., Nk are Vn1, Vn2, ..., Vnk, the degree of association between each candidate result and the noun sequence is computed; results with a low degree of association are discarded, and those that remain form the remaining sequence. For example, given three candidate results C1, C2 and C3, if C2 turns out to have a low degree of association with the noun sequence, the remaining sequence is C1, C3.
The Euclidean-distance method is: given two vectors V1(x11, x12, ..., x1n) and V2(x21, x22, ..., x2n), the Euclidean distance is d(V1, V2) = sqrt((x11 − x21)² + (x12 − x22)² + ... + (x1n − x2n)²);
the cosine-distance method is: the cosine distance is the cosine of the angle between the two vectors, cos(V1, V2) = (V1 · V2) / (|V1| · |V2|).
The method for improving image recognition accuracy provided by the invention uses word-embedding techniques to analyse the text associated with a picture and thereby improves the accuracy of image recognition.
The invention uses the text-description information to remove, from the candidate results of deep-learning recognition, the candidates that are weakly associated with the text description, achieving the purpose of improving recognition accuracy.
The invention uses an e-commerce corpus to train a 60-dimension word-vector model. Using the Euclidean distance with a distance threshold of 10, categories with a low degree of association are excluded, and after re-ranking by minimum distance, an accurate final recognition result is obtained.
Embodiment:
The text description is an e-commerce product title: a 2016 new-style, fashionable Korean-style large PU backpack/school bag from the brand "Ou Shina". Using Inception network recognition, the top 20 candidate results are:
Mailbag (0.1564)
Knapsack (0.0818)
Ice hockey (0.0596)
Button (0.0332)
Knee cap (0.0270)
Cuirass (0.0180)
Corset (0.0169)
Military uniform (0.0169)
Satchel (0.0150)
T-shirt (0.0110)
Shield (0.0104)
Radix Ipomoeae/Tao Di (0.0103)
Apron (0.0101)
Leather sheath (0.0098)
Sport shirt (0.0096)
Football helmet (0.0092)
Bullet-proof vest (0.0067)
Maillot/tights (0.0063)
Using an e-commerce corpus, a 60-dimension word-vector model is trained. Using the Euclidean distance with a distance threshold of 10, categories with a low degree of association are excluded, and after re-ranking by minimum distance the final recognition result is:
Knapsack
Mailbag
Knee cap
Sport shirt
Apron
Leather sheath
Bullet-proof vest
Nightwear
Corset
Cuirass
T-shirt
Military uniform
Button
The candidate results after this treatment are more accurate.
The foregoing is only a preferred embodiment of the invention and is not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within the scope of protection of the invention.
Claims (7)
1. A method for improving image recognition accuracy, characterized in that the method uses word-embedding techniques to calculate the degree of association between the candidate results of image recognition and the accompanying text description, removes candidates with a low degree of association, and improves the accuracy of image recognition;
the method comprises the following steps:
extracting picture/text-description pairs;
for the image, a neural network trained with deep-learning techniques performs recognition; the recognition result is a class-probability sequence (C1, P1), (C2, P2), ..., (Cn, Pn), sorted by probability so that P1 ≥ P2 ≥ ... ≥ Pn; usable neural networks include, but are not limited to, AlexNet, GoogLeNet, VGG, Inception and ResNet;
from the image recognition result, taking the top m items (C1, P1), (C2, P2), ..., (Cm, Pm) (m ≤ n) for subsequent processing;
for the descriptive text, identifying the noun sequence in the text with a noun-recognition technique; after duplicate nouns are removed, the resulting noun sequence is denoted N1, N2, ..., Nk;
training and computing word vectors;
filtering the image recognition results by word-vector proximity to improve recognition accuracy.
2. The method for improving image recognition accuracy as claimed in claim 1, characterized in that the extraction of picture/text-description pairs includes:
a) for a news picture, extracting the news headline and using it as the text description of the news picture;
b) for an e-commerce picture, extracting the product description and using it as the text description of the e-commerce picture.
3. The method for improving image recognition accuracy as claimed in claim 1, characterized in that the training and computation of word vectors includes:
1) training a news word-vector model on a news corpus and an e-commerce word-vector model on an e-commerce corpus, and selecting the corresponding word-vector model to compute the word vectors of the noun sequence;
2) computing the word vectors of the noun sequence N1, N2, ..., Nk, denoted Vn1, Vn2, ..., Vnk;
3) computing the word vectors of the category sequence C1, C2, ..., Cm, denoted Vc1, Vc2, ..., Vcm.
4. The method for improving image recognition accuracy as claimed in claim 1, characterized in that filtering the image recognition results by word-vector proximity to improve recognition accuracy includes:
(1) denoting the distance between two word vectors V1 and V2 as d_v1,v2; the smaller the distance, the closer the meanings of the two words; for the word vector Vci of each category, computing the nearest noun word vector Vnj, with the nearest distance denoted d_vci,nj;
(2) setting a distance threshold t; when d_vci,nj exceeds t, the category is weakly associated with the image description text and is discarded;
(3) sorting the remaining categories by nearest distance, smallest first, to obtain the final image recognition result.
5. The method for improving image recognition accuracy as claimed in claim 4, characterized in that, in step (1), computing the noun word vector Vnj nearest to Vci and the nearest distance d_vci,nj specifically includes:
selecting the cosine-distance method or the Euclidean-distance method to compute the distance between word vectors; the word vectors of the noun sequence N1, N2, ..., Nk are denoted Vn1, Vn2, ..., Vnk; assuming the word vector of candidate result Cm is Vcm, the distances between Vcm and Vn1, Vn2, ..., Vnk are computed and denoted d_vcm,n1, d_vcm,n2, ..., d_vcm,nk; the nearest distance is then d_vcm,vn = min(d_vcm,n1, d_vcm,n2, ..., d_vcm,nk).
6. The method for improving image recognition accuracy as claimed in claim 4, characterized in that, in step (3), the remaining sequence is obtained as follows: assuming the candidate results are C1, C2, ..., Cm with word vectors Vc1, Vc2, ..., Vcm, and the word vectors of the noun sequence N1, N2, ..., Nk are Vn1, Vn2, ..., Vnk, the degree of association between each candidate result and the noun sequence is computed; results with a low degree of association are discarded, and those that remain form the remaining sequence.
7. The method for improving image recognition accuracy as claimed in claim 5, characterized in that the Euclidean-distance method is: given two vectors V1(x11, x12, ..., x1n) and V2(x21, x22, ..., x2n), the Euclidean distance is d(V1, V2) = sqrt((x11 − x21)² + (x12 − x22)² + ... + (x1n − x2n)²);
the cosine-distance method is: the cosine distance is the cosine of the angle between the two vectors, cos(V1, V2) = (V1 · V2) / (|V1| · |V2|).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611099552.8A CN106529606A (en) | 2016-12-01 | 2016-12-01 | Method of improving image recognition accuracy |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611099552.8A CN106529606A (en) | 2016-12-01 | 2016-12-01 | Method of improving image recognition accuracy |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106529606A true CN106529606A (en) | 2017-03-22 |
Family
ID=58354732
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611099552.8A Pending CN106529606A (en) | 2016-12-01 | 2016-12-01 | Method of improving image recognition accuracy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106529606A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107464260A (en) * | 2017-07-06 | 2017-12-12 | 山东农业大学 | A kind of rice canopy image processing method using unmanned plane |
CN107679484A (en) * | 2017-09-28 | 2018-02-09 | 辽宁工程技术大学 | A kind of Remote Sensing Target automatic detection and recognition methods based on cloud computing storage |
CN107861972A (en) * | 2017-09-15 | 2018-03-30 | 广州唯品会研究院有限公司 | The method and apparatus of the full result of display of commodity after a kind of user's typing merchandise news |
CN108804530A (en) * | 2017-05-02 | 2018-11-13 | 达索系统公司 | To the region captioning of image |
CN109657710A (en) * | 2018-12-06 | 2019-04-19 | 北京达佳互联信息技术有限公司 | Data screening method, apparatus, server and storage medium |
CN109740671A (en) * | 2019-01-03 | 2019-05-10 | 北京妙医佳信息技术有限公司 | An image recognition method and device |
CN110490240A (en) * | 2019-08-09 | 2019-11-22 | 北京影谱科技股份有限公司 | Image-recognizing method and device based on deep learning |
CN110895602A (en) * | 2018-09-13 | 2020-03-20 | 中移(杭州)信息技术有限公司 | Authentication method, device, electronic device and storage medium |
CN111291594A (en) * | 2018-12-07 | 2020-06-16 | 中国移动通信集团山东有限公司 | Image identification method and system |
CN111931621A (en) * | 2020-07-31 | 2020-11-13 | 青岛大学 | Vehicle window system for reducing traffic accidents based on human body recognition and control method |
CN112149653A (en) * | 2020-09-16 | 2020-12-29 | 北京达佳互联信息技术有限公司 | Information processing method, information processing device, electronic equipment and storage medium |
CN119379263A (en) * | 2024-12-25 | 2025-01-28 | 国网浙江省电力有限公司 | Intelligent operation and maintenance method and system for power grid dispatching automation based on multi-modal large model |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473327A (en) * | 2013-09-13 | 2013-12-25 | 广东图图搜网络科技有限公司 | Image retrieval method and image retrieval system |
CN103955543A (en) * | 2014-05-20 | 2014-07-30 | 电子科技大学 | Multimode-based clothing image retrieval method |
CN104298749A (en) * | 2014-10-14 | 2015-01-21 | 杭州淘淘搜科技有限公司 | Commodity retrieval method based on image visual and textual semantic integration |
US20150032719A1 (en) * | 2008-06-05 | 2015-01-29 | Enpulz, L.L.C. | Search system employing multiple image based search processing approaches |
CN104376105A (en) * | 2014-11-26 | 2015-02-25 | 北京航空航天大学 | Feature fusing system and method for low-level visual features and text description information of images in social media |
- 2016-12-01: application CN201611099552.8A, publication CN106529606A (en), status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150032719A1 (en) * | 2008-06-05 | 2015-01-29 | Enpulz, L.L.C. | Search system employing multiple image based search processing approaches |
CN103473327A (en) * | 2013-09-13 | 2013-12-25 | 广东图图搜网络科技有限公司 | Image retrieval method and image retrieval system |
CN103955543A (en) * | 2014-05-20 | 2014-07-30 | 电子科技大学 | Multimode-based clothing image retrieval method |
CN104298749A (en) * | 2014-10-14 | 2015-01-21 | 杭州淘淘搜科技有限公司 | Commodity retrieval method based on image visual and textual semantic integration |
CN104376105A (en) * | 2014-11-26 | 2015-02-25 | 北京航空航天大学 | Feature fusing system and method for low-level visual features and text description information of images in social media |
Non-Patent Citations (2)
Title |
---|
Dong Haiying: "Intelligent Control Theory and Applications" (《智能控制理论及应用》), Beijing: China Railway Publishing House, 30 September 2016 *
Zhao Shouxiang et al.: "Internet Data Analysis and Application" (《互联网数据分析与应用》), Beijing: Tsinghua University Press, 30 September 2015 *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108804530A (en) * | 2017-05-02 | 2018-11-13 | 达索系统公司 | To the region captioning of image |
CN108804530B (en) * | 2017-05-02 | 2024-01-12 | 达索系统公司 | Add subtitles to areas of the image |
CN107464260A (en) * | 2017-07-06 | 2017-12-12 | 山东农业大学 | A kind of rice canopy image processing method using unmanned plane |
CN107861972A (en) * | 2017-09-15 | 2018-03-30 | 广州唯品会研究院有限公司 | The method and apparatus of the full result of display of commodity after a kind of user's typing merchandise news |
CN107861972B (en) * | 2017-09-15 | 2022-02-22 | 广州唯品会研究院有限公司 | A method and device for displaying the full results of a product after a user enters product information |
CN107679484A (en) * | 2017-09-28 | 2018-02-09 | 辽宁工程技术大学 | A kind of Remote Sensing Target automatic detection and recognition methods based on cloud computing storage |
CN110895602B (en) * | 2018-09-13 | 2021-12-14 | 中移(杭州)信息技术有限公司 | Authentication method, device, electronic device and storage medium |
CN110895602A (en) * | 2018-09-13 | 2020-03-20 | 中移(杭州)信息技术有限公司 | Authentication method, device, electronic device and storage medium |
CN109657710A (en) * | 2018-12-06 | 2019-04-19 | 北京达佳互联信息技术有限公司 | Data screening method, apparatus, server and storage medium |
CN111291594A (en) * | 2018-12-07 | 2020-06-16 | 中国移动通信集团山东有限公司 | Image identification method and system |
CN109740671B (en) * | 2019-01-03 | 2021-02-23 | 北京妙医佳信息技术有限公司 | An image recognition method and device |
CN109740671A (en) * | 2019-01-03 | 2019-05-10 | 北京妙医佳信息技术有限公司 | An image recognition method and device |
CN110490240A (en) * | 2019-08-09 | 2019-11-22 | 北京影谱科技股份有限公司 | Image-recognizing method and device based on deep learning |
CN111931621A (en) * | 2020-07-31 | 2020-11-13 | 青岛大学 | Vehicle window system for reducing traffic accidents based on human body recognition and control method |
CN112149653A (en) * | 2020-09-16 | 2020-12-29 | 北京达佳互联信息技术有限公司 | Information processing method, information processing device, electronic equipment and storage medium |
CN112149653B (en) * | 2020-09-16 | 2024-03-29 | 北京达佳互联信息技术有限公司 | Information processing method, information processing device, electronic equipment and storage medium |
CN119379263A (en) * | 2024-12-25 | 2025-01-28 | 国网浙江省电力有限公司 | Intelligent operation and maintenance method and system for power grid dispatching automation based on multi-modal large model |
CN119379263B (en) * | 2024-12-25 | 2025-06-27 | 国网浙江省电力有限公司 | Multi-mode large model-based power grid dispatching automation intelligent operation and maintenance method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106529606A (en) | Method of improving image recognition accuracy | |
Zahisham et al. | Food recognition with resnet-50 | |
Pandey et al. | FoodNet: Recognizing foods using ensemble of deep networks | |
Tudor Ionescu et al. | How hard can it be? Estimating the difficulty of visual search in an image | |
Gupta et al. | Wearable sensors for evaluation over smart home using sequential minimization optimization-based random forest | |
CN107944559B (en) | Method and system for automatically identifying entity relationship | |
US20180357258A1 (en) | Personalized search device and method based on product image features | |
CN107766894A (en) | Remote sensing images spatial term method based on notice mechanism and deep learning | |
Singha et al. | Effect of variation in gesticulation pattern in dynamic hand gesture recognition system | |
Yeh et al. | Intelligent mango fruit grade classification using AlexNet-SPP with mask R-CNN-based segmentation algorithm | |
CN103366160A (en) | Objectionable image distinguishing method integrating skin color, face and sensitive position detection | |
CN112183198A (en) | Gesture recognition method for fusing body skeleton and head and hand part profiles | |
CN108205684A (en) | Image disambiguation method, device, storage medium and electronic equipment | |
CN106650694A (en) | Human face recognition method taking convolutional neural network as feature extractor | |
Hasan et al. | Classification of sign language characters by applying a deep convolutional neural network | |
CN109213853A (en) | A kind of Chinese community's question and answer cross-module state search method based on CCA algorithm | |
CN107808113A (en) | A kind of facial expression recognizing method and system based on difference depth characteristic | |
Thongtawee et al. | A novel feature extraction for American sign language recognition using webcam | |
Shekhar | Domain-specific semantics guided approach to video captioning | |
CN107451565A (en) | A kind of semi-supervised small sample deep learning image model classifying identification method | |
CN116798093A (en) | A two-stage facial expression recognition method based on course learning and label smoothing | |
Jolly et al. | How do convolutional neural networks learn design? | |
CN109145944A (en) | A kind of classification method based on longitudinal depth of 3 D picture learning characteristic | |
CN106570170A (en) | Text classification and naming entity recognition integrated method and system based on depth cyclic neural network | |
Çaylı et al. | Auxiliary classifier based residual rnn for image captioning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
CB02 | Change of applicant information |
Address after: 16th floor, Railway Building, Shijingshan District, Beijing 100040
Applicant after: Chinese translation language through Polytron Technologies Inc
Address before: 16th floor, Railway Building, Shijingshan District, Beijing 100040
Applicant before: Mandarin Technology (Beijing) Co., Ltd.
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170322 |