[go: up one dir, main page]

CN118690039B - A graphical display method for search engine retrieval results - Google Patents

A graphical display method for search engine retrieval results Download PDF

Info

Publication number
CN118690039B
CN118690039B CN202411186934.9A CN202411186934A CN118690039B CN 118690039 B CN118690039 B CN 118690039B CN 202411186934 A CN202411186934 A CN 202411186934A CN 118690039 B CN118690039 B CN 118690039B
Authority
CN
China
Prior art keywords
image
search
graph
knowledge graph
search engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411186934.9A
Other languages
Chinese (zh)
Other versions
CN118690039A (en
Inventor
贾新志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Quanfang Technology Co ltd
Original Assignee
Jinan Quanfang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Quanfang Technology Co ltd filed Critical Jinan Quanfang Technology Co ltd
Priority to CN202411186934.9A priority Critical patent/CN118690039B/en
Publication of CN118690039A publication Critical patent/CN118690039A/en
Application granted granted Critical
Publication of CN118690039B publication Critical patent/CN118690039B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/54Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/535Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

本发明公开了一种搜索引擎检索结果的图形化显示方法,属于搜索引擎技术领域,其具体包括:通过图形数据库构建图形知识图谱,利用深度学习预训练图像提取高级特征,用户可通过搜索引擎以关键词或图像样本发起搜索请求,系统运用NLP和深度学习技术对查询结果进行语义分析和图像识别,实现与图形知识图谱的实时匹配,搜索结果通过轮播图展示,并提供多维度信息查看,系统还具备智能颜色筛选功能,结合形状、纹理特征进行图形化展示,系统收集用户反馈,通过机器学习优化搜索结果排序算法,并构建图形社区,实现了资源的分享、评论和智能推荐,提高了搜索的精准度。

The present invention discloses a graphical display method for search engine retrieval results, which belongs to the technical field of search engines, and specifically includes: constructing a graphical knowledge graph through a graphical database, extracting advanced features by using deep learning pre-trained images, and users can initiate search requests with keywords or image samples through search engines. The system uses NLP and deep learning technologies to perform semantic analysis and image recognition on query results to achieve real-time matching with the graphical knowledge graph. The search results are displayed through a carousel and multi-dimensional information viewing is provided. The system also has an intelligent color screening function and performs graphical display in combination with shape and texture features. The system collects user feedback, optimizes the search result sorting algorithm through machine learning, and builds a graphical community, thereby achieving resource sharing, commenting and intelligent recommendation, and improving the accuracy of search.

Description

Graphical display method for search engine retrieval result
Technical Field
The invention belongs to the technical field of search engines, and particularly relates to a graphical display method of search engine search results.
Background
With the rapid increase of information volume, users need search engines not only to find related information, but also to understand and screen results quickly and intuitively, and graphical displays can provide more intuitive and easily understood search results, so that the user needs to acquire information efficiently. Graphical display technologies, such as a clustering method, a super-linking method and a semantic content method, provide possibility for the graphical display of search results of a search engine, and the methods display the search results to a user in a graphical mode through different logic structures and expression forms, so that the efficiency of information acquisition of the user is improved, the development of deep learning and natural language processing technologies is improved, and the search engine can more accurately understand the query intention of the user and more accurately match and sort the search results.
The patent with publication number CN111339450A discloses a graphical display method of search engine search results, which comprises the steps of storing all information in a server, searching information to be displayed from all information, selecting common attributes of all information to be displayed by standardizing and perfecting all attributes of the information to be displayed, selecting coordinate attributes from the common attributes, and displaying graphics on a browser by taking the coordinate attributes as coordinate axes. Through the technical scheme, the user can intuitively see the search result display which is ordered according to the plurality of attributes, the user can quickly know the attribute of each search result, the required search result can be quickly found, more search results can be displayed in a visual interface without turning pages or pulling down a brace or a scroll bar, and thousands of search results can be displayed in the visual interface without influencing resolution.
The above prior art has the problems of 1) lack of advanced feature extraction, 2) mention that the user can view the search results through a graphical interface only, limiting the interaction between the user and the search results, and 3) lack of real-time matching and intelligent recommendation.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a graphical display method of search engine search results, which constructs a graphical knowledge graph through a graphical database, extracts advanced features by utilizing deep learning pre-training images, enables a user to initiate a search request through a search engine by using keywords or image samples, enables a system to carry out semantic analysis and image recognition on query results by using NLP and deep learning technology, realizes real-time matching with the graphical knowledge graph, enables the search results to be displayed through a carousel graph, provides multi-dimensional information viewing, and also has an intelligent color screening function, combines shape and texture features to carry out graphical display, enables the system to collect user feedback, optimizes a search result ordering algorithm through machine learning, builds a graphical community, realizes sharing, commenting and intelligent recommendation of resources, and improves the searching accuracy.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A graphical display method of search engine search results comprises the following steps:
Step S1, a graphic database is established, a graphic knowledge graph is established according to the graphic database, the images, labels and metadata information in the graphic database are organized into a graph structure, the images in the graphic database are pre-trained through a deep learning method, advanced features of the images are extracted, intelligent indexes are established, a user inputs query keywords or uploads image samples through a search engine, and a search request is initiated;
Step S2, a search engine receives a user request, performs semantic analysis on query keywords by using an NLP method, recognizes uploaded images by using deep learning, performs real-time matching with the images in the graph knowledge graph, and determines the sequence of search results according to the matching results and a ranking algorithm;
step S3, carrying out result display by using a carousel graph according to the type of the search result, displaying multi-dimensional information of an image while displaying the search result and allowing a user to view and switch different dimensional information through interactive operation, wherein the multi-dimensional information of the image comprises sources, click quantity and comment quantity of the image;
Step S4, providing an intelligent color selector on the search interface, allowing a user to select or input colors on the search interface for screening, screening images by the search engine according to the color feature vectors, and combining the shape and texture features of the graphics for graphical display;
s5, collecting feedback and suggestions of a user, carrying out emotion analysis on feedback data by using a machine learning algorithm, and carrying out iteration and optimization on a graph knowledge graph and a search result ordering algorithm according to the feedback of the user;
And S6, constructing a graphic community, allowing users to share, comment and praise graphic resources, introducing a social graph analysis method, obtaining the relevance among the users, and performing intelligent recommendation according to the relevance.
Specifically, the specific steps of the step S2 include:
S2.1, a search engine receives a query keyword input by a user or an uploaded image sample;
s2.2, for the query keywords, encoding the query keywords by using an NLP method to generate word vectors, and calculating the similarity between the word vectors and labels or metadata in the graphic knowledge graph by using an improved semantic similarity algorithm, wherein the formula is as follows:
Wherein, Representing the similarity of word vector a and word vector B, w represents the same weight vector as the a and B dimensions,The sparsity factor is represented by a factor of sparsity,Indicating the sensitivity adjustment parameter(s),Representing a 2-norm.
Specifically, the specific steps of the step S2 further include:
s2.3, extracting features of the uploaded image sample by using a deep learning model, generating feature vectors of the image, matching the extracted feature vectors of the image with pre-trained image features in a graph knowledge graph by using a graph searching strategy, and finding similar images;
And S2.4, sorting the search results by using a weighted sorting algorithm according to the relevance of the matching results, and returning the sorted search results to the user.
Specifically, the specific steps of the graph searching strategy in S2.3 include:
s2.31, receiving an image sample uploaded by a user and preprocessing the image sample;
s2.32, extracting features of the preprocessed image by using a pre-trained ResNet deep learning model, and performing pooling operation on the extracted feature images to convert the extracted feature images into feature vectors with the length of h;
S2.33, loading the pre-trained image feature vector from the graph knowledge graph, and calculating the similarity between the feature vector C of the uploaded image and each pre-trained feature vector D in the graph knowledge graph according to the similarity algorithm calculation formula in S2.2
Specifically, the specific steps of the graph searching strategy in S2.3 further include:
S2.34 defining a graph search strategy according to Searching an image most similar to the uploaded image in the graph knowledge graph;
S2.35, setting the similarity threshold as H according to Sorting the search results;
If it is Discarding the mixture;
If it is And outputting the corresponding image as a query result.
Specifically, the formula of the graph search strategy in S2.34 is:
Wherein r represents the similarity coefficient of the uploaded image and the image in the graphic knowledge graph, d represents the color channel of the uploaded image, Representing different color channels d and different spatial positions in an uploaded imageThe weight parameter of the pixel is used to determine,Representing the non-linear transformation function of the color channel d,Representing the position of a template image in a graphic knowledge graph in a color channel dIs used for the display of the display panel,Representing the pixel mean value of the template image in the color channel d in the graph knowledge graph,Representing the uploaded image in color channel d and positionIs used for the display of the display panel,Representing the pixel mean of the uploaded image at color channel d,Representing the offset parameter of the template image in the color channel d in the graph knowledge graph,An offset parameter representing the uploaded image in color channel d.
The electronic equipment comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of a graphical display method of search engine search results when executing the computer program.
Specifically, a computer readable storage medium having stored thereon computer instructions which when executed perform the steps of a method for graphically displaying search results of a search engine.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention provides a graphical display method of search engine search results, which is characterized in that images, labels and metadata are organized through a graph structure, a search engine can be rapidly positioned to related image resources, the combination of NLP and deep learning enables the search of keywords and image samples to be more intelligent, the search accuracy is improved, the multi-dimensional information display mode of a carousel graph is used, detailed information of the search results, such as the labels and the metadata of the images, can be intuitively known, meanwhile, an intelligent color selector allows a user to select according to color characteristics, and the search results are more in accordance with the personalized requirements of the user by combining the shape and texture characteristics of the images.
2. The invention provides a graphical display method of search engine retrieval results, which optimizes a search result ordering algorithm and a graphical knowledge graph by collecting feedback and suggestions of users and carrying out emotion analysis by using a machine learning algorithm, wherein the construction of a graphical community allows users to share, comment and praise graphical resources, promotes communication and interaction among the users, and simultaneously, the social graph analysis method is introduced, so that the system can obtain the relevance among the users and carry out intelligent recommendation according to the relevance.
Drawings
FIG. 1 is a schematic diagram of a method for graphically displaying search results of a search engine according to the present invention;
FIG. 2 is a flow chart of a method for graphically displaying search results of a search engine according to the present invention;
FIG. 3 is a flow chart of a method for graphically displaying search results of a search engine according to the present invention;
FIG. 4 is a flow chart of a method for graphically displaying search results of a search engine according to the present invention.
Detailed Description
Example 1
Referring to fig. 1-2, an embodiment of the present invention provides a graphical display method for search results of a search engine, including the following steps:
step S1, a graphic database is established, a graphic knowledge graph is established according to the graphic database, images, labels and metadata information in the graphic database are organized into a graph structure, the images in the graphic database are pre-trained through a deep learning method, advanced features of the images are extracted, intelligent indexes are constructed, a user inputs query keywords or uploads image samples through a search engine, and a search request is initiated, wherein the pre-training of the images through the deep learning method is the prior art content in the field, and the deep learning method is not an inventive scheme of the application and is not repeated herein;
The specific steps for establishing the graphic database comprise:
(1) Extracting key features of the image, such as color, texture, and shape, from the internet or local resources by using image processing and computer vision technologies, such as SURF algorithm, which is the prior art in the field and is not an inventive scheme of the present application, and is not described herein;
(2) These feature information are stored with the metadata of the image, such as title, description, tags, into a graphic database, which uses Neo4j, to form a structured image data repository.
The specific steps of constructing the graphic knowledge graph comprise:
(1) Determining the target and the range of the knowledge graph;
(2) Firstly, designing an ontology construction layer, which comprises defining entities, relations, attributes and types thereof, and then extracting knowledge from a structured data source based on the designs;
(3) Identifying named entities from the text by using a rule-based information extraction method, and extracting association relations among the entities from the text to form a netlike knowledge structure;
(4) Carrying out knowledge fusion on the extracted information, and storing the knowledge graph in a graph form by using a graph database;
(5) The knowledge graph is updated and expanded periodically as new data sources and knowledge are added.
Step S2, a search engine receives a user request, performs semantic analysis on query keywords by using an NLP method, recognizes uploaded images by using deep learning, performs real-time matching with the images in the graph knowledge graph, and determines the sequence of search results according to the matching results and a ranking algorithm;
Step S3, performing result display by using a carousel graph according to the type of the search result, displaying multi-dimensional information of an image while displaying the search result, and allowing a user to view and switch different dimensional information through interactive operation, wherein the multi-dimensional information comprises a source, a click amount and a comment number, and the interactive operation comprises clicking and sliding to view picture details, save pictures and share operation;
Step S4, providing an intelligent color selector on the search interface, allowing a user to select or input colors on the search interface for screening, screening images by the search engine according to the color feature vectors, and combining the shape and texture features of the graphics for graphical display;
Step S5, collecting feedback and suggestions of a user, carrying out emotion analysis on feedback data by using a machine learning algorithm, iterating and optimizing a graph knowledge graph and a search result ordering algorithm according to the feedback of the user, wherein in carrying out emotion analysis on the feedback data by using the machine learning algorithm, the machine learning algorithm uses a naive Bayesian algorithm which is the prior art content in the field, and is not an inventive scheme of the application and is not repeated herein;
further, the specific steps of iterating and optimizing the graph knowledge graph and the search result ordering algorithm include:
(1) Carrying out statistics and analysis on the emotion analysis result, and knowing the satisfaction degree and opinion of the user on the graph knowledge graph and the search result;
(2) Determining an optimization direction of a graph knowledge graph and a search result ordering algorithm according to user feedback;
(3) The method for extracting the entity relationship of the knowledge graph is used for iterating and optimizing a construction algorithm and a search result ordering algorithm of the graph knowledge graph, so that the accuracy and the integrity of the knowledge graph are improved, wherein the method for extracting the entity relationship of the knowledge graph is the prior art content in the field and is not an inventive scheme of the application, and details are not repeated here;
(4) Optimizing a search result ordering algorithm by using a feature selection algorithm, so as to improve the accuracy of the search result, wherein the feature selection algorithm is the prior art in the field and is not an inventive scheme of the application, and is not repeated here;
(5) Testing and evaluating the optimized graph knowledge graph and search result ordering algorithm by using user feedback as an evaluation index;
(6) And continuously iterating and optimizing the graph knowledge graph and search result ordering algorithm according to the user feedback and the test result.
And S6, constructing a graphic community, allowing users to share, comment and praise graphic resources, introducing a social graph analysis method, obtaining the relevance among the users, and performing intelligent recommendation according to the relevance.
Further, the specific steps of constructing the graphic community include:
The method comprises the steps of (1) determining main purposes of communities, such as sharing graphic design resources and exchanging knowledge graph construction experience, defining core functions, (2) designing community structures and interfaces, (3) developing community platforms, (4) collecting user behavior data including browsing records, praise and comments of users, (5) constructing social networks among users based on the user behavior data, (6) storing user social network data by using a graphic database, designing query and analysis methods, (7) integrating social graph analysis results into collaborative filtering recommendation algorithms according to community characteristics and user requirements, and continuously optimizing and improving the collaborative filtering recommendation algorithms according to user feedback and data analysis results, (8) comprehensively testing the functions of the community platforms, and collecting user feedback through questionnaires and user interviews.
Example 2
Referring to fig. 3-4, the specific steps of step S2 in the present embodiment include:
S2.1, a search engine receives a query keyword input by a user or an uploaded image sample;
s2.2, for the query keywords, encoding the query keywords by using an NLP method to generate word vectors, and calculating the similarity between the word vectors and labels or metadata in the graphic knowledge graph by using an improved semantic similarity algorithm, wherein the formula is as follows:
Wherein, Representing the similarity of word vector a and word vector B, w represents the same weight vector as the a and B dimensions, for adjusting the importance of the different dimensions,The sparsity factor is expressed and used for processing the situation of vector sparsity, when the vector is very sparse, namely contains a plurality of zero elements, the denominator becomes very small, the similarity value is abnormally high, and a non-zero element is addedThe value, which can be avoided,Representing sensitivity adjustment parameters for controlling sensitivity of the degree of similarity whenWhen larger, the similarity value is more sensitive to changes in the vector length whenAt smaller times, the similarity value is insensitive to changes in vector length,Represents 2 norms andThe manner of arrangement of (c) is determined by a person skilled in the art through a number of experiments.
Further, the NLP preprocessing is mainly to divide the text into single words or phrases by using a word segmentation method based on dictionary matching, and label the words according to the context or form of the words by using a predefined part-of-speech labeling rule, wherein the word segmentation method and the part-of-speech labeling rule based on dictionary matching are prior art contents in the field and are not inventive schemes of the application and are not repeated herein.
The specific steps of step S2 further include:
s2.3, extracting features of the uploaded image sample by using a deep learning model, generating feature vectors of the image, matching the extracted feature vectors of the image with pre-trained image features in a graph knowledge graph by using a graph searching strategy, and finding similar images;
And S2.4, sorting the search results by using a weighted sorting algorithm according to the relevance of the matching results, and returning the sorted search results to the user.
The specific formula of the weighted sorting algorithm is:
Wherein, Representing the kth correlation metric,Representation ofThe corresponding weights can be adjusted according to actual conditions to reflect the importance of different metrics on the final sorting result.
The specific steps of the graph searching strategy in S2.3 comprise:
s2.31, receiving an image sample uploaded by a user and preprocessing the image sample;
s2.32, extracting features of the preprocessed image by using a pre-trained ResNet deep learning model, and performing pooling operation on the extracted feature images to convert the extracted feature images into feature vectors with the length of h;
S2.33, loading the pre-trained image feature vector from the graph knowledge graph, and calculating the similarity between the feature vector C of the uploaded image and each pre-trained feature vector D in the graph knowledge graph according to the similarity algorithm calculation formula in S2.2
The specific steps of the graph searching strategy in S2.3 further comprise:
S2.34 defining a graph search strategy according to Searching an image most similar to the uploaded image in the graph knowledge graph;
S2.35, setting the similarity threshold as H according to Sorting the search results;
If it is Discarding the mixture;
If it is And outputting the corresponding image as a query result.
The formula of the graph search strategy in S2.34 is:
Wherein r represents the similarity coefficient of the uploaded image and the image in the graphic knowledge graph, d represents the color channel of the uploaded image, Representing different color channels d and different spatial positions in an uploaded imageThe weight parameter of the pixel is used to determine,Representing the non-linear transformation function of the color channel d,Representing the position of a template image in a graphic knowledge graph in a color channel dIs used for the display of the display panel,Representing the pixel mean value of the template image in the color channel d in the graph knowledge graph,Representing the uploaded image in color channel d and positionIs used for the display of the display panel,Representing the pixel mean of the uploaded image at color channel d,Representing the offset parameter of the template image in the color channel d in the graph knowledge graph,An offset parameter representing the uploaded image in color channel d.
Example 3
An electronic device comprising a memory storing a computer program and a processor implementing the steps of a method for graphically displaying search results of a search engine when the computer program is executed.
A computer readable storage medium having stored thereon computer instructions which when executed perform the steps of a method of graphically displaying search engine search results.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and variations, modifications, substitutions and alterations can be made to the above-described embodiments by those having ordinary skill in the art without departing from the spirit and scope of the present invention, and these are all within the protection of the present invention.

Claims (8)

1. A method for graphically displaying search results of a search engine, comprising:
Step S1, a graphic database is established, a graphic knowledge graph is established according to the graphic database, the images, labels and metadata information in the graphic database are organized into a graph structure, the images in the graphic database are pre-trained through a deep learning method, advanced features of the images are extracted, intelligent indexes are established, a user inputs query keywords or uploads image samples through a search engine, and a search request is initiated;
Step S2, a search engine receives a user request, performs semantic analysis on query keywords by using an NLP method, recognizes uploaded images by using deep learning, performs real-time matching with the images in the graph knowledge graph, and determines the sequence of search results according to the matching results and a ranking algorithm;
step S3, carrying out result display by using a carousel graph according to the type of the search result, displaying multi-dimensional information of an image while displaying the search result and allowing a user to view and switch different dimensional information through interactive operation, wherein the multi-dimensional information of the image comprises sources, click quantity and comment quantity of the image;
Step S4, providing an intelligent color selector on the search interface, allowing a user to select or input colors on the search interface for screening, screening images by the search engine according to the color feature vectors, and combining the shape and texture features of the graphics for graphical display;
s5, collecting feedback and suggestions of a user, carrying out emotion analysis on feedback data by using a machine learning algorithm, and carrying out iteration and optimization on a graph knowledge graph and a search result ordering algorithm according to the feedback of the user;
And S6, constructing a graphic community, allowing users to share, comment and praise graphic resources, introducing a social graph analysis method, obtaining the relevance among the users, and performing intelligent recommendation according to the relevance.
2. The method for graphically displaying search results of a search engine according to claim 1, wherein the specific steps of step S2 include:
S2.1, a search engine receives a query keyword input by a user or an uploaded image sample;
s2.2, for the query keywords, encoding the query keywords by using an NLP method to generate word vectors, and calculating the similarity between the word vectors and labels or metadata in the graphic knowledge graph by using an improved semantic similarity algorithm, wherein the formula is as follows:
;
Wherein, Representing the similarity of word vector a and word vector B, w represents the same weight vector as the a and B dimensions,The sparsity factor is represented by a factor of sparsity,Indicating the sensitivity adjustment parameter(s),Representing a 2-norm.
3. The method for graphically displaying search results of a search engine according to claim 2, wherein the specific step of step S2 further comprises:
s2.3, extracting features of the uploaded image sample by using a deep learning model, generating feature vectors of the image, matching the extracted feature vectors of the image with pre-trained image features in a graph knowledge graph by using a graph searching strategy, and finding similar images;
And S2.4, sorting the search results by using a weighted sorting algorithm according to the relevance of the matching results, and returning the sorted search results to the user.
4. A method for graphically displaying search results of a search engine according to claim 3, wherein the specific step of the graph search strategy in S2.3 includes:
s2.31, receiving an image sample uploaded by a user and preprocessing the image sample;
s2.32, extracting features of the preprocessed image by using a pre-trained ResNet deep learning model, and performing pooling operation on the extracted feature images to convert the extracted feature images into feature vectors with the length of h;
S2.33, loading the pre-trained image feature vector from the graph knowledge graph, and calculating the similarity between the feature vector C of the uploaded image and each pre-trained feature vector D in the graph knowledge graph according to the similarity algorithm calculation formula in S2.2
5. The method for graphically displaying search results of a search engine according to claim 4, wherein the specific step of searching the policy in S2.3 further comprises:
S2.34 defining a graph search strategy according to Searching an image most similar to the uploaded image in the graph knowledge graph;
S2.35, setting the similarity threshold as H according to Sorting the search results;
If it is Discarding the mixture;
If it is And outputting the corresponding image as a query result.
6. The method for graphically displaying search results of a search engine according to claim 5, wherein the formula of the graph search strategy in S2.34 is:
;
Wherein r represents the similarity coefficient of the uploaded image and the image in the graphic knowledge graph, d represents the color channel of the uploaded image, Representing different color channels d and different spatial positions in an uploaded imageThe weight parameter of the pixel is used to determine,Representing the non-linear transformation function of the color channel d,Representing the position of a template image in a graphic knowledge graph in a color channel dIs used for the display of the display panel,Representing the pixel mean value of the template image in the color channel d in the graph knowledge graph,Representing the uploaded image in color channel d and positionIs used for the display of the display panel,Representing the pixel mean of the uploaded image at color channel d,Representing the offset parameter of the template image in the color channel d in the graph knowledge graph,An offset parameter representing the uploaded image in color channel d.
7. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of a method for graphically displaying search results of a search engine according to any one of claims 1-6.
8. A computer readable storage medium having stored thereon computer instructions which when executed perform the steps of a method of graphically displaying search results of a search engine as claimed in any one of claims 1 to 6.
CN202411186934.9A 2024-08-28 2024-08-28 A graphical display method for search engine retrieval results Active CN118690039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411186934.9A CN118690039B (en) 2024-08-28 2024-08-28 A graphical display method for search engine retrieval results

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411186934.9A CN118690039B (en) 2024-08-28 2024-08-28 A graphical display method for search engine retrieval results

Publications (2)

Publication Number Publication Date
CN118690039A CN118690039A (en) 2024-09-24
CN118690039B true CN118690039B (en) 2025-01-21

Family

ID=92764935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411186934.9A Active CN118690039B (en) 2024-08-28 2024-08-28 A graphical display method for search engine retrieval results

Country Status (1)

Country Link
CN (1) CN118690039B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116467291A (en) * 2023-03-10 2023-07-21 北京无代码科技有限公司 Knowledge graph storage and search method and system
CN117932074A (en) * 2023-12-08 2024-04-26 北京国电通网络技术有限公司 Audit knowledge mapping system based on digital audit platform

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11914674B2 (en) * 2011-09-24 2024-02-27 Z Advanced Computing, Inc. System and method for extremely efficient image and pattern recognition and artificial intelligence platform
CN113191858A (en) * 2021-06-10 2021-07-30 数贸科技(北京)有限公司 Commodity display method and device based on picture search
CN116701357A (en) * 2023-06-15 2023-09-05 深圳市象无形信息科技有限公司 IFC data management method and device based on semantic network
CN117763158A (en) * 2023-12-01 2024-03-26 上海市大数据股份有限公司 Knowledge graph system based on graph algorithm and text search engine
CN118467851B (en) * 2024-07-15 2024-10-25 北京蜂窝科技有限公司 Artificial intelligent data searching and distributing method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116467291A (en) * 2023-03-10 2023-07-21 北京无代码科技有限公司 Knowledge graph storage and search method and system
CN117932074A (en) * 2023-12-08 2024-04-26 北京国电通网络技术有限公司 Audit knowledge mapping system based on digital audit platform

Also Published As

Publication number Publication date
CN118690039A (en) 2024-09-24

Similar Documents

Publication Publication Date Title
CN118467851B (en) Artificial intelligent data searching and distributing method and system
CN111581510B (en) Shared content processing method, device, computer equipment and storage medium
CN119377433B (en) Commodity information processing and inquiring method and system
CN112966091B (en) A knowledge graph recommendation system integrating entity information and popularity
US7962500B2 (en) Digital image retrieval by aggregating search results based on visual annotations
WO2020211566A1 (en) Method and device for making recommendation to user, computing apparatus, and storage medium
WO2023108980A1 (en) Information push method and device based on text adversarial sample
US8527564B2 (en) Image object retrieval based on aggregation of visual annotations
CN105005578A (en) Multimedia target information visual analysis system
US20170371965A1 (en) Method and system for dynamically personalizing profiles in a social network
CN101520785A (en) Information retrieval method and system therefor
CN117556118B (en) Visual recommendation system and method based on scientific research big data prediction
CN116010696A (en) A news recommendation method, system and medium that integrates knowledge graphs and users' long-term and short-term interests
CN119128277A (en) A product recommendation method, device and medium based on intelligent agent
CN117891939A (en) Text classification method based on particle swarm optimization algorithm combined with CNN convolutional neural network
JP2022035314A (en) Information processing unit and program
CN118964590A (en) A method, device and computer-readable storage medium for recommending an intelligent agent
CN118626727A (en) A personalized recommendation method based on dynamic user portrait
CN114610913B (en) Multimedia data recommendation method, recommendation model training method and related equipment
Zhu et al. Multimodal sparse linear integration for content-based item recommendation
CN119782494B (en) Search optimization method, device and medium based on large model
CN120086425A (en) A method for fast retrieval of e-commerce information
CN118690039B (en) A graphical display method for search engine retrieval results
CN116580120A (en) A system and method for image generation and processing based on user interest analysis
Fakhfakh et al. Fuzzy User Profile Modeling for Information Retrieval.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant