CN119513377B - Enterprise information query method and device - Google Patents
- Publication number
- CN119513377B CN119513377B CN202510064282.XA CN202510064282A CN119513377B CN 119513377 B CN119513377 B CN 119513377B CN 202510064282 A CN202510064282 A CN 202510064282A CN 119513377 B CN119513377 B CN 119513377B
- Authority
- CN
- China
- Prior art keywords
- query
- enterprise information
- user
- preference
- type
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/9032—Query formulation
- G06F16/90332—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/90335—Query processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/9038—Presentation of query results
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The application provides an enterprise information query method and device. The method combines a time sequence model (such as a hidden Markov model) with information foraging theory and constructs a preference function (such as a user taste function) to identify a user's hidden query intention and degree of preference for each type of enterprise information, and maps each preference degree to a weight for the corresponding type, thereby realizing personalized reordering of query results. In addition, the embodiments of the application introduce a large language model as a real-time online adaptive adjustment layer to cope with temporary changes in user demand, thereby continuously improving query precision and user satisfaction.
Description
Technical Field
The application relates to the field of Internet technology, and in particular to a method and device for querying enterprise information.
Background
In the related art, most enterprise information query platforms are static presentations with insufficient capability to recognize users' implicit query intentions and make personalized recommendations. For example, the related art cannot make full use of user behavior sequence data (such as access order, page dwell time, etc.) to accurately predict a user's potential query intent. Meanwhile, enterprise information query scenarios involve the multi-dimensional, vertical business requirements of both individual (i.e., C-end) users (such as job hunting, litigation rights protection, financial fraud prevention, etc.) and enterprise (i.e., B-end) users (such as customer due diligence, anti-money laundering, supplier due diligence, compliance auditing, bidding, risk assessment, etc.), which requires the platform to flexibly rearrange information types under different business situations. In addition, the related art provides query schemes based on offline model training, which cannot respond quickly when a specific high-value user (e.g., a VIP user) or a specific organization (e.g., a bank, insurer, fund, etc.) temporarily changes its search preferences.
Disclosure of Invention
The embodiments of the application provide an enterprise information query method, device, electronic equipment, computer-readable storage medium, and computer program product that can identify a user's potential query intention and degree of preference for each type of enterprise information, thereby realizing personalized reordering of search results, while also coping with temporary changes in user demand and thus continuously improving query precision and user satisfaction.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a query method of enterprise information, which is applied to an enterprise information query platform, wherein an enterprise information base of the enterprise information query platform comprises a plurality of types of sets, one set comprises enterprise information of a plurality of enterprises of one type, and the query method comprises the following steps:
Receiving a query request sent by terminal equipment, wherein the query request carries keywords;
Acquiring first operation data of a user of the terminal equipment on the enterprise information query platform, wherein the first operation data is generated before the query request is received;
Based on the first operation data, respectively predicting a plurality of candidate states included in a state set through a pre-trained time sequence model to obtain a selected probability of each candidate state, wherein each candidate state corresponds to a query intention, and the query intention represents a potential query target of a user when accessing the enterprise information query platform;
Sorting the plurality of candidate states in order of selected probability from high to low, and taking the query intention corresponding to at least one top-ranked candidate state in the sorting result as the query intention of the user;
constructing a preference function based on the plurality of types and the state set, and determining preference parameters of the user for each type under the query intention through the preference function;
Mapping the multiple preference parameters into weights of corresponding types respectively, and filling the multiple weights and the keywords into a preset query data structure to obtain a query expression;
Querying the enterprise information base based on the query expression to obtain a plurality of enterprise information matched with the keywords, and sorting the plurality of enterprise information based on a plurality of weights;
when no preference adjustment instruction sent by the terminal equipment is received, sending the sorted plurality of enterprise information to the terminal equipment;
and when a preference adjustment instruction sent by the terminal equipment is received, re-ordering the sorted plurality of enterprise information through a large language model, and returning the re-ordered plurality of enterprise information to the terminal equipment.
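As an illustrative sketch only (not the patented implementation), the claimed flow — pick the most probable intent state, derive per-type preference parameters, map them to weights, and fill a query structure — might look as follows. The function names, the softmax weight mapping, and all example values are assumptions.

```python
# Hedged sketch of the claimed query flow; the softmax mapping and
# all names/values below are illustrative assumptions.
import math

def softmax(xs):
    # Map raw preference parameters to normalized per-type weights.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def build_query(keywords, type_weights):
    # Fill weights and keywords into a preset query data structure.
    return {"keywords": keywords, "type_boosts": type_weights}

def query_pipeline(keywords, state_probs, preference_fn, info_types):
    # Take the candidate state with the highest selected probability
    # as the user's query intention.
    intent = max(state_probs, key=state_probs.get)
    # Preference parameter per information type under that intention.
    prefs = [preference_fn(intent, t) for t in info_types]
    # Map preference parameters to weights of the corresponding types.
    weights = dict(zip(info_types, softmax(prefs)))
    return build_query(keywords, weights)

q = query_pipeline(
    keywords=["ExampleCo"],                       # invented keyword
    state_probs={"job_hunting": 0.7, "due_diligence": 0.3},
    preference_fn=lambda intent, t: 1.0 if t == "recruitment" else 0.2,
    info_types=["recruitment", "litigation", "finance"],
)
print(q["type_boosts"])
```

The sketch fixes one intent and one toy preference function; the patent leaves both to the trained HMM and the constructed preference function.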
The embodiment of the application provides a query device of enterprise information, which is applied to an enterprise information query platform, wherein an enterprise information base of the enterprise information query platform comprises a plurality of types of sets, one set comprises enterprise information of a plurality of enterprises of one type, and the query device comprises:
the receiving module is used for receiving a query request sent by the terminal equipment, wherein the query request carries keywords;
The acquisition module is used for acquiring first operation data of a user of the terminal equipment on the enterprise information query platform, wherein the first operation data is generated before the query request is received;
The prediction module is used for respectively predicting a plurality of candidate states included in a state set through a pre-trained time sequence model based on the first operation data to obtain a selected probability of each candidate state, wherein each candidate state corresponds to a query intention, and the query intention represents a potential query target of a user when the user accesses the enterprise information query platform;
The sorting module is used for sorting the candidate states according to the order of the selected probability from high to low;
The determining module is used for taking the query intention corresponding to at least one candidate state which is ranked at the front in the ranking result as the query intention of the user;
a building module for building a preference function based on the plurality of types and the set of states;
The determining module is further configured to determine, by using the preference function, preference parameters for each of the types of users under the query intention;
the mapping module is used for mapping the plurality of preference parameters into weights of corresponding types respectively;
the filling module is used for filling the weights and the keywords into a preset query data structure to obtain a query expression;
the query module is used for querying the enterprise information base based on the query expression to obtain a plurality of enterprise information matched with the keywords;
The sorting module is further configured to sort the plurality of enterprise information based on a plurality of weights;
a sending module, configured to send the ordered plurality of enterprise information to the terminal device when the preference adjustment instruction sent by the terminal device is not received;
the ordering module is further configured to re-order, through a large language model, the sorted plurality of enterprise information when a preference adjustment instruction sent by the terminal equipment is received;
and the sending module is also used for returning the reordered plurality of enterprise information to the terminal equipment.
The embodiment of the application provides an enterprise information query platform, which comprises the following components:
a memory for storing computer executable instructions;
And the processor is used for realizing the enterprise information query method provided by the embodiment of the application when executing the computer executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores computer executable instructions for realizing the enterprise information query method provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises a computer program or computer executable instructions and is used for realizing the enterprise information query method provided by the embodiment of the application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
Based on the user's historical operation data on the enterprise information query platform, the user's current query intention is identified through a pre-trained time sequence model. A preference function is constructed to determine the user's preference parameters for different types of enterprise information under that query intention, and the preference parameters are mapped to weights of the corresponding types, so that the plurality of enterprise information matching the keywords can be ordered based on the weights to meet the user's personalized requirements. In addition, the embodiments of the present application introduce a large language model as a real-time online adaptive adjustment layer to cope with temporary changes in user demand and further improve the user's query experience.
Drawings
FIG. 1 is a schematic architecture diagram of an enterprise information query system 100 according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an enterprise information query platform 200 according to an embodiment of the present application;
FIG. 3 is a first flowchart of a method for querying enterprise information according to an embodiment of the present application;
FIG. 4 is a second flowchart of a method for querying enterprise information according to an embodiment of the present application;
FIG. 5 is a third flowchart of a method for querying enterprise information according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It can be appreciated that the embodiments of the present application involve user-related data (e.g., a user's operation data and log data on the enterprise information query platform). When the embodiments of the present application are applied to specific products or technologies, user permission or consent must be obtained, and the collection, use, and processing of the related data must comply with relevant laws, regulations, and standards.
In the following description, the terms "first" and "second" are merely used to distinguish similar objects and do not represent a specific ordering of objects. It is understood that "first" and "second" may be interchanged in a specific order or sequence, where permitted, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing the embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application are explained; the following explanations apply to these terms wherever they appear.
1) "In response to": represents a condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more performed operations may be carried out in real time or with a set delay; unless otherwise specified, there is no limitation on the order in which they are performed.
2) Time sequence model: a theory and method for establishing a mathematical model, through curve fitting and parameter estimation, from time series data obtained by observing a system. An example is the hidden Markov model (HMM, Hidden Markov Model), a statistical model describing a Markov process with hidden, unknown parameters: it generates an unobservable state sequence through a hidden Markov chain and generates the observable sequence from these hidden states.
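To make the HMM definition concrete, the following is a minimal forward-algorithm sketch that computes the probability of an observed action sequence under hypothetical model parameters. All state names, observation names, and probabilities are invented for illustration.

```python
# Minimal HMM forward algorithm; every parameter value below is an
# invented example, not data from the patent.
def forward(obs, states, start_p, trans_p, emit_p):
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())  # total probability of the sequence

states = ("job_hunting", "due_diligence")        # hidden query intents
obs = ("view_jobs", "view_jobs", "view_risk")    # observed user actions
start_p = {"job_hunting": 0.6, "due_diligence": 0.4}
trans_p = {
    "job_hunting":   {"job_hunting": 0.8, "due_diligence": 0.2},
    "due_diligence": {"job_hunting": 0.3, "due_diligence": 0.7},
}
emit_p = {
    "job_hunting":   {"view_jobs": 0.7, "view_risk": 0.3},
    "due_diligence": {"view_jobs": 0.2, "view_risk": 0.8},
}
print(forward(obs, states, start_p, trans_p, emit_p))  # → 0.119
```

In the patent's setting, the observable sequence would come from user operation data (clicks, dwell times discretized into actions), and the hidden states would be the candidate query intentions.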
3) Information foraging theory (IFT, Information Foraging Theory): a theory about human information-seeking behavior proposed by computer scientists and psychologists. It models information seeking on animal foraging behavior, holding that during information acquisition humans adopt strategies similar to those animals use to acquire food.
4) Large language models (LLM, Large Language Model): deep learning models trained on large amounts of text data so that the model can generate natural language text or understand the meaning of language text. By training on vast datasets, these models can provide in-depth knowledge and language generation about various topics. The key idea is to learn the patterns and structure of natural language through large-scale unsupervised training, so as to simulate, to a certain extent, the human processes of language cognition and generation.
5) Query domain-specific language (DSL, Domain Specific Language): a language dedicated to building queries, providing rich query syntax and functionality that enables users to flexibly define a variety of complex queries. Queries in a query DSL can be divided into two categories, queries and filters: a query computes a relevance score for each document according to the specified conditions and returns results ordered by relevance, while a filter screens out documents meeting the specified conditions without computing a relevance score.
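A hedged sketch of what such a query-DSL expression could look like, in the Elasticsearch style the definition suggests. The index and field names (`enterprise_name`, `info_type`) are invented examples, not the patent's actual schema.

```python
# Hypothetical query-DSL body: a scored keyword match plus per-type
# boost clauses; field names are invented for illustration.
import json

def make_query_dsl(keyword, type_boosts):
    return {
        "query": {
            "bool": {
                # "must" clauses contribute to the relevance score.
                "must": [{"match": {"enterprise_name": keyword}}],
                # "should" clauses boost documents of preferred types.
                "should": [
                    {"term": {"info_type": {"value": t, "boost": b}}}
                    for t, b in type_boosts.items()
                ],
            }
        }
    }

dsl = make_query_dsl("ExampleCo", {"recruitment": 2.0, "litigation": 0.5})
print(json.dumps(dsl, indent=2))
```

This shape matches the patent's step of filling weights and keywords into a preset query data structure: the per-type weights become boost values, so preferred types rank higher without filtering the others out.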
The applicant found in the practice of the embodiments of the present application that the related art has the following problems:
1. Failure to effectively identify users' hidden query intent
Most schemes provided by the related art rely on static keyword matching or preconfigured field weights, and lack dynamic identification of users' real requirements at different moments and in different contexts (such as C-end job hunting, or B-end customer due diligence and supplier due diligence). Furthermore, even though user behavior data (e.g., click sequences, dwell times, etc.) exists in large quantities, it is not fully utilized to infer users' deeper query intent.
2. Lack of adaptive personalized ranking capability
The schemes provided by the related art can only score query results based on fixed weights, and cannot adjust the ordering of query results in real time according to temporary changes in user demand (such as a sudden focus on "supplier information" or "anti-money-laundering clues"). In addition, for high-value users (such as VIP users or financial institution users), the system responds and adjusts slowly when they search for specific in-depth information, resulting in a poor user experience.
3. No closed-loop model updating and long-term evolution mechanism
The related art lacks closed-loop collection of user feedback (e.g., satisfaction, dissatisfaction, browsing time, etc.) and corresponding model retraining, and thus cannot continuously evolve at the system level. Meanwhile, it cannot automatically capture shifts in user preference or update the index strategy or weight configuration accordingly, so that in long-term use the query results gradually become disconnected from users' real requirements.
4. Lack of online intelligent scheduling capability
The schemes provided by the related art do not involve secondary semantic analysis or contextual understanding of results through a large language model; they can only return query results based on string matching, and lack deep semantic recommendation in complex search scenarios. Meanwhile, they cannot respond efficiently and intelligently to instantaneous preference changes.
In view of this, the embodiments of the present application identify the user's hidden query intention by fusing an HMM with information foraging theory: for example, an HMM is constructed and combined with the user's click sequence, dwell time, and so on, on the enterprise information query platform to dynamically infer the user's hidden query intention. Meanwhile, drawing on the concept of "information scent" in information foraging theory, the user's "tastes" (i.e., preference degrees) for different types of enterprise information are quantified and weighted by preference in real time during retrieval, so that multi-scenario intentions at the C end, B end, and so on can be automatically detected and distinguished, avoiding simple dependence on static weights or manual selection. In addition, under specific conditions (such as a sudden temporary shift in user demand), the embodiments of the present application can introduce a large language model to perform secondary sorting on the primary sorting results; the large language model can capture finer semantic clues and preference shifts to form an immediately presented, well-matched result, thereby overcoming the HMM's long training cycle, rapidly responding to short-term preference mutations, and improving the user experience. Furthermore, the embodiments of the present application can collect user feedback on query results (such as clicks, dwell time, dissatisfaction labels, etc.), have the large language model record preference-shift events, and periodically retrain the HMM and adjust the search strategy to realize long-term adaptive evolution, thereby overcoming the "static model + no feedback update" limitation of the related art and making the enterprise information query platform understand its users better and better.
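Purely as an illustrative sketch of the closed-loop feedback idea described above (not the patented implementation), feedback events could be buffered and used to trigger periodic retraining. The class name, feedback fields, and retraining threshold are all assumptions.

```python
# Sketch of a feedback loop that buffers user feedback and triggers
# periodic retraining; all names, fields, and thresholds are invented.
class FeedbackLoop:
    def __init__(self, retrain_every=1000):
        self.events = []
        self.retrain_every = retrain_every  # invented threshold
        self.retrain_count = 0

    def record(self, user_id, clicked, dwell_seconds, dissatisfied=False):
        # Collect clicks, dwell time, and dissatisfaction labels.
        self.events.append(
            {"user": user_id, "clicked": clicked,
             "dwell": dwell_seconds, "dissatisfied": dissatisfied}
        )
        if len(self.events) >= self.retrain_every:
            self._retrain()

    def _retrain(self):
        # Placeholder for periodically retraining the HMM on buffered
        # feedback and adjusting the search strategy.
        self.retrain_count += 1
        self.events.clear()

loop = FeedbackLoop(retrain_every=2)
loop.record("u1", clicked=True, dwell_seconds=12.5)
loop.record("u1", clicked=False, dwell_seconds=1.0, dissatisfied=True)
print(loop.retrain_count)  # → 1
```

The `_retrain` body is deliberately empty of modeling logic; the patent leaves the retraining schedule and strategy adjustment to the platform.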
That is, the embodiments of the present application provide an enterprise information query method, device, enterprise information query platform, computer-readable storage medium, and computer program product that can identify a user's potential query intention and degree of preference for each type, thereby realizing personalized reordering of search results, while also coping with temporary changes in user demand and thus continuously improving query precision and user satisfaction. The enterprise information query method provided by the embodiments of the present application is described below with reference to FIG. 1.
Referring to FIG. 1, FIG. 1 is a schematic architecture diagram of an enterprise information query system 100 provided by an embodiment of the present application. As shown in FIG. 1, the enterprise information query system 100 includes an enterprise information query platform 200, a network 300, a terminal device 400, and an enterprise information library 500. The network 300 may be a local area network, a wide area network, or a combination of the two. The terminal device 400 is a terminal device associated with a user, on which a client 410 (e.g., a client or browser of the enterprise information query platform) runs. The enterprise information library 500 is connected to the enterprise information query platform 200 and may include multiple types of sets, where each set may include enterprise information of multiple enterprises of one type.
In some embodiments, a search box may be displayed in the human-computer interaction interface provided by the client 410, and the user may input a keyword (for example, the name of a certain enterprise) to query in the search box. The terminal device 400 may then send a query request to the enterprise information query platform 200 through the network 300, where the query request may carry the at least one keyword input by the user. After receiving the query request, the enterprise information query platform 200 may first obtain first operation data (i.e., historical operation data) of the user associated with the terminal device 400 on the enterprise information query platform 200, where the first operation data was generated before the query request was received. Based on the first operation data, the enterprise information query platform 200 may then predict, through a pre-trained time sequence model (e.g., an HMM), each of a plurality of candidate states included in a state set to obtain a selection probability for each candidate state, where each candidate state may correspond to one type of query intention, and the query intention may characterize the user's potential query target or task type when accessing the enterprise information query platform 200.
After obtaining the selection probability corresponding to each candidate state, the enterprise information query platform 200 may rank the plurality of candidate states in order of selection probability from high to low, and use the query intention corresponding to at least one top-ranked candidate state in the ranking result as the user's current query intention. The enterprise information query platform 200 may then construct a preference function (e.g., a user taste function) based on the plurality of types and the state set, and determine, through the preference function, the user's preference parameter (e.g., preference degree or preference value) for each type under the query intention. After obtaining the preference parameters, the enterprise information query platform 200 may map the preference parameters to weights of the corresponding types, and fill the weights and keywords into a preset query data structure to obtain a query expression (e.g., query DSL). The enterprise information query platform 200 may then query the enterprise information library 500 based on the generated query expression to obtain a plurality of enterprise information matching the keywords from the enterprise information library 500, and rank the plurality of enterprise information based on the plurality of weights. Finally, the enterprise information query platform 200 may determine whether a preference adjustment instruction sent by the terminal device 400 has been received (i.e., whether the user has temporarily adjusted his or her preference). If a preference adjustment instruction has been received (i.e., the user has temporarily adjusted the preference, for example, by manually selecting a certain type in the type filtering panel), the enterprise information query platform 200 may reorder the ordered plurality of enterprise information through the large language model and return the reordered plurality of enterprise information to the terminal device 400 through the network 300 for presentation via the client 410. If no preference adjustment instruction has been received (i.e., the user has not temporarily adjusted the preference), the enterprise information query platform 200 may directly send the ordered plurality of enterprise information to the terminal device 400 through the network 300.
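One way the branch between the weight-based first-pass ordering and the LLM reranking layer described above might be organized is sketched below. The `llm_rerank` callable is a placeholder assumption, not a real API, and the record fields are invented.

```python
# Sketch of the dispatch between first-pass weight ordering and LLM
# reranking; `llm_rerank` is a placeholder, not a real library call.
def respond(results, weights, preference_adjusted, llm_rerank=None):
    # First-pass ordering by the per-type weights from the preference
    # function (higher weight types come first).
    ordered = sorted(
        results, key=lambda r: weights.get(r["type"], 0.0), reverse=True
    )
    if preference_adjusted and llm_rerank is not None:
        # A preference adjustment instruction was received: hand the
        # ordered list to the LLM layer for secondary sorting.
        return llm_rerank(ordered)
    # No adjustment instruction: return the first-pass ordering.
    return ordered

hits = [{"name": "A", "type": "litigation"},
        {"name": "B", "type": "recruitment"}]
print(respond(hits, {"recruitment": 0.9, "litigation": 0.1}, False))
```

Keeping the LLM on the conditional branch matches the patent's design: the expensive reranking layer runs only when the user's temporary preference change makes the HMM-derived ordering stale.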
It should be noted that the enterprise information query platform 200 in FIG. 1 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN, Content Delivery Network), big data, and artificial intelligence platforms. The terminal device 400 may be, but is not limited to, a smart phone, tablet computer, notebook computer, desktop computer, smart speaker, smart watch, vehicle-mounted terminal, or the like. The terminal device 400 and the enterprise information query platform 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
The enterprise information query method provided by the embodiments of the present application has wide application scenarios. For example, it can serve C-end users, providing accurate, personalized enterprise information search results for personal scenarios such as job hunting, litigation rights protection, and financial fraud prevention. It can also serve B-end users, meeting the vertical requirements of professional users such as banks, securities firms, insurers, funds, and Internet financial institutions (such as customer due diligence, supply chain analysis, risk control, and anti-money-laundering investigation), significantly improving their information acquisition efficiency and decision quality. It can further serve member users (such as VIP users), realizing a customized, highly flexible search experience that responds immediately to preference mutations and provides data support for subsequent long-term policy optimization, thereby enhancing product stickiness and competitive advantage.
The structure of the enterprise information query platform provided by the embodiments of the present application is described below. Referring to FIG. 2, FIG. 2 is a schematic structural diagram of an enterprise information query platform 200 provided by an embodiment of the present application. The enterprise information query platform 200 shown in FIG. 2 includes at least one processor 210, a memory 240, and at least one network interface 220. The various components in the enterprise information query platform 200 are coupled together via a bus system 230, which is used to enable connected communication between these components. In addition to a data bus, the bus system 230 includes a power bus, a control bus, and a status signal bus; however, for clarity of illustration, the various buses are labeled in FIG. 2 as the bus system 230.
The processor 210 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (e.g., a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 240 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 240 optionally includes one or more storage devices that are physically located remote from processor 210.
The memory 240 includes volatile memory or non-volatile memory, and may include both. The non-volatile memory may be a read-only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory). The memory 240 described in the embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 240 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 241 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
A network communication module 242 for accessing other computing devices via one or more (wired or wireless) network interfaces 220; exemplary network interfaces 220 include Bluetooth, Wireless Fidelity (Wi-Fi), universal serial bus (USB, Universal Serial Bus), and the like;
In some embodiments, the apparatus provided by the embodiments of the present application may be implemented in software. Fig. 2 shows a query apparatus 243 of enterprise information stored in the memory 240, which may be software in the form of programs, plug-ins, etc., including the following software modules: a receiving module 2431, an obtaining module 2432, a predicting module 2433, a sorting module 2434, a determining module 2435, a constructing module 2436, a mapping module 2437, a filling module 2438, a query module 2439, a transmitting module 24310, an input module 24311, a training module 24312, and a weighting model 24313. These modules are logical, and thus may be arbitrarily combined or further split according to the functions implemented. It should be noted that, for convenience of expression, all of the above modules are shown at once in fig. 2, but this should not be taken as excluding an implementation in which the query apparatus 243 of enterprise information includes only the receiving module 2431, the obtaining module 2432, the predicting module 2433, the sorting module 2434, the determining module 2435, the constructing module 2436, the mapping module 2437, the filling module 2438, the query module 2439, and the transmitting module 24310. The functions of each module will be described below.
The method for querying enterprise information provided by the embodiment of the application will be specifically described with reference to fig. 3.
It should be noted that, the method for querying enterprise information provided by the embodiment of the present application may be applied to an enterprise information query platform, where an enterprise information base of the enterprise information query platform may include multiple types of sets, where each set may include enterprise information of multiple enterprises of one type.
Referring to fig. 3, an exemplary embodiment of a first flowchart of a method for querying enterprise information according to the present application will be described with reference to the steps shown in fig. 3.
In step 101, a query request sent by a terminal device is received.
Here, the query request may carry at least one keyword, for example, the keyword may be a name of an enterprise, a name of a legal person, or the like.
For example, a client may run on the terminal device, and a search box may be displayed in the man-machine interaction interface of the client. The user may input a keyword to be queried in the search box; for example, the user may input the name of an enterprise to be queried, or a natural language description (for example, "check risk information and vendor status of XX enterprise"). The terminal device may then send a query request carrying the user's input to the enterprise information query platform.
In some embodiments, prior to performing step 101, the enterprise information query platform may divide in advance the enterprise information (including, for example, enterprise names, enterprise acronyms, trademarks, trade names, unified social credit codes, etc.) of a plurality of enterprises stored in the enterprise information base into a plurality of business types (i.e., into a plurality of sets of types). Suppose the divided sets of types are denoted C = {c_1, c_2, ..., c_N}. For example, c_1 may be the set of the basic information type (i.e., c_1 includes basic-class information of a plurality of enterprises), c_2 may be the set of the financial information type (i.e., c_2 includes financial information of a plurality of enterprises), c_3 may be the set of the customer due diligence type (i.e., c_3 includes customer due diligence information of a plurality of enterprises), c_4 may be the set of the risk early warning type (i.e., c_4 includes risk early warning information of a plurality of enterprises), c_5 may be the set of the vendor information type (i.e., c_5 includes vendor information of a plurality of enterprises), and c_6 may be the set of the recruitment information type (i.e., c_6 includes recruitment information of a plurality of enterprises). During searching and browsing, the user clicks or accesses certain types of enterprise information according to his or her needs. Furthermore, at discrete time steps t = 1, 2, ..., T, the observable behavior of the user may be recorded as o_t; for example, o_1 = c_1 indicates that the user accessed enterprise information of the basic information type in the first step.
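As a minimal, purely illustrative sketch of the division described above (the labels and the sample sequence are assumptions, not the patent's actual data), the type sets and an observed behavior sequence might be encoded as:

```python
# Hypothetical encoding of the N = 6 type sets c_1..c_6 and a user's
# observable behavior o_t (each o_t records which type was accessed at step t).
TYPES = ["basic", "financial", "customer_dd", "risk", "vendor", "recruitment"]
TYPE_INDEX = {name: i for i, name in enumerate(TYPES)}

# Example: the user first viewed basic info, then risk info, then vendor info.
observations = [TYPE_INDEX["basic"], TYPE_INDEX["risk"], TYPE_INDEX["vendor"]]
```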
In step 102, first operation data of a user of the terminal device on an enterprise information query platform is obtained.
Here, the first operation data is generated before the query request is received, i.e., the first operation data is historical operation data of the user.
In some embodiments, after receiving the query request sent by the terminal device, the enterprise information query platform may obtain, from the system log, operation data of the user of the terminal device, such as previously input keywords, the sequence of clicked enterprise information, dwell time, bounce rate, and historical access data.
It should be noted that if enough historical behavior data of the user cannot be obtained in some scenarios (for example, the current user is a new user), the large language model may also be directly used as a main query scheduler to intelligently sort the query results, which is not particularly limited in the embodiment of the present application.
In addition, it should be noted that, when the first operation data cannot be obtained, a default value set in advance may be used, that is, a default value may be used to infer a potential query intention of the user, where the default value corresponds to an initial value in mathematical modeling. For example, for a new user that cannot obtain the first operation data, the new user may default to query the underlying information of the enterprise when accessing the enterprise information query platform.
In step 103, based on the first operation data, a plurality of candidate states included in the state set are respectively predicted by a pre-trained time sequence model, so as to obtain a selected probability of each candidate state.
Here, each candidate state may correspond to a query intent, respectively, where each query intent characterizes a potential query goal or task type (e.g., job hunting, enterprise profiling, or vendor profiling, etc.) of the user when accessing the enterprise information query platform.
In some embodiments, the first operation data may include the user's click sequence, input keywords, and dwell time on the enterprise information query platform. In this case, step 103 may be implemented as follows: determine the user's historical query intent based on the click sequence, the input keywords, and the dwell time; input the historical query intent into the pre-trained time-series model, so that the model determines, for the current moment, the posterior probability of the query intent corresponding to each candidate state included in the state set; and take each posterior probability as the selected probability of the corresponding candidate state.
By way of example, taking the time-series model as an HMM: in the technical solution provided in the embodiment of the present application, a hidden state set may be created in advance, assumed to be denoted S = {s_1, s_2, ..., s_M}, wherein each candidate state s_i corresponds to a particular query intent. For example, s_1 may correspond to a C-end user job application scenario, s_2 to a C-end user litigation rights-protection scenario, s_3 to a C-end user financial anti-fraud scenario, s_4 to a B-end user customer due diligence scenario, s_5 to a B-end user anti-money-laundering review scenario, and s_6 to a B-end user vendor due diligence scenario. It should be noted that the different candidate states point to potential decision targets or task types of the user when accessing the enterprise information query platform. These states are hidden, i.e., the user's actual query intent is not directly visible and needs to be inferred by observing user behavior. Then, behavior data such as the user's latest query keywords, the sequence of types of clicked enterprise information, and dwell time can be obtained from the cache or the log; the user's previous query intent (i.e., the query intent before the current query) can be determined based on the obtained behavior data; and the previous query intent can then be input into the pre-trained HMM, so that the pre-trained HMM calculates the posterior probability of the user under the query intent corresponding to each candidate state, taking each posterior probability as the selected probability of the corresponding candidate state.
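The inference step described above can be sketched with the standard forward algorithm of an HMM; the three states, four types, and all parameter values below are invented for illustration only:

```python
# Hypothetical parameters: 3 hidden intent states, 4 enterprise-information types.
PI = [0.5, 0.3, 0.2]                     # initial state probabilities
A = [[0.7, 0.2, 0.1],                    # state transition matrix a_ij
     [0.1, 0.8, 0.1],
     [0.2, 0.2, 0.6]]
B = [[0.6, 0.2, 0.1, 0.1],               # observation probabilities b_i(c_j)
     [0.1, 0.1, 0.6, 0.2],
     [0.1, 0.2, 0.2, 0.5]]

def forward_posterior(obs):
    """Filtered posterior P(q_T = s_i | o_1..o_T) via the forward algorithm."""
    alpha = [PI[i] * B[i][obs[0]] for i in range(len(PI))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(PI))) * B[j][o]
                 for j in range(len(PI))]
    z = sum(alpha)                       # normalizer = P(o_1..o_T)
    return [a / z for a in alpha]

post = forward_posterior([0, 0, 2])      # user viewed types c1, c1, c3
best = max(range(len(post)), key=post.__getitem__)
```

In a real deployment the posterior would be computed over the platform's full state set S with trained parameters; here `best` simply picks the candidate state with the maximum posterior probability.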
In some embodiments, referring to fig. 4, fig. 4 is a second flowchart of the enterprise information query method provided in the embodiment of the present application, as shown in fig. 4, before step 103 shown in fig. 3 is performed, steps 111 to 115 shown in fig. 4 may also be performed, and will be described with reference to the steps shown in fig. 4.
In step 111, an initialized timing model is constructed.
Here, the parameters of the initialized time-series model (e.g., HMM) may include the state set (i.e., S), the initial state probabilities (assumed to be denoted π), the state transition probability matrix (assumed to be denoted A), and the observation probability distribution (assumed to be denoted B).
In step 112, log data of the user on the enterprise information query platform is obtained and an observation sequence is created based on the log data.
It should be noted that, the log data and the first operation data may be behavior data of the user on the enterprise information query platform at different periods, for example, the log data may be behavior data of the user on the enterprise information query platform for one month in the past, and the first operation data may be behavior data of the user on the enterprise information query platform for one week in the past.
In step 113, a first probability is determined that each point in time of the observed sequence exists in a respective candidate state comprised by the state set, and a second probability is determined that the observed sequence occurs in the respective candidate state from the current point in time to the end of the sequence.
Here, the first probability is a forward probability, and the second probability is a backward probability.
In step 114, an expected number of transitions between the candidate states and an expected number of observations in each candidate state are determined based on the first probability and the second probability.
In step 115, the initial state distribution, the state transition probability matrix, and the observation probability distribution are updated based on the expected number of transitions and the expected number of observations.
In some embodiments, taking the time-series model as an HMM as an example, the embodiment of the present application can take the user's log data (such as the sequence of accessed types, dwell time, click behavior, etc.) as training samples of the model, supplemented with query intents labeled manually or by rules. Meanwhile, in training the HMM, a log-likelihood function can be adopted as the loss function, so as to maximize the probability of the observation sequence under the model. In addition, forward-backward algorithm + expectation maximization (EM) iteration can be adopted during training, so as to update the initial state probabilities, the state transition probability matrix, and the observation probability distribution included in the HMM.
In other embodiments, after the hidden state set S is defined, embodiments of the present application may also introduce a preference function (e.g., a user taste function, assumed to be denoted f(s_i, c_j)) for measuring, under the query intent corresponding to state s_i, the attraction of type c_j to the user (similar to the information scent in IFT). Its initial values can then be estimated from data such as the user's access frequency and dwell time under the labeled query intents, together with domain-expert experience. Thus, the embodiment of the present application can define the probability that the user accesses enterprise information of type c_j under the query intent corresponding to state s_i as:

P(o_t = c_j | q_t = s_i) = exp(η·f(s_i, c_j)) / Σ_{k=1..N} exp(η·f(s_i, c_k)) (1)

where η is an adjusting parameter, exp represents the exponential operation, and N is the total number of the divided types.
That is, embodiments of the present application may train the initial state probabilities, the state transition probability matrix, and the observation probability distribution using the user's historical behavior data through the forward-backward-based Baum-Welch algorithm of the HMM. Through iterative optimization, the HMM can infer the user's potential query intent from the user's current behavior at any time, and thereby calculate preference parameters (e.g., preference values) for the various types of enterprise information under that query intent.
For example, in the technical solution provided in the embodiment of the present application, the HMM may be composed of the following parameters:

hidden state set: S = {s_1, s_2, ..., s_M};

initial state probabilities: π = (π_1, ..., π_M), where π_i = P(q_1 = s_i) and Σ_i π_i = 1;

state transition probability matrix: A = [a_ij], where a_ij = P(q_{t+1} = s_j | q_t = s_i), a_ij ≥ 0, and Σ_j a_ij = 1;

observation probability distribution: B = {b_i(c_j)}, where b_i(c_j) = P(o_t = c_j | q_t = s_i).

In an embodiment of the present application, o_t indicates the type accessed by the user at time step t. The HMM assumes that state transitions between time steps satisfy the Markov property and that observations depend only on the current state.
By way of example, embodiments of the present application may start from the user taste function f(s_i, c_j) to obtain the probability b_i(c_j) that the user accesses type c_j under the query intent corresponding to state s_i. For example, assuming that the user's preference for type c_j under the query intent corresponding to state s_i depends on its relative taste value, a normalization function (e.g., the Softmax function) may be applied to f(s_i, c_j) for this purpose, namely:

b_i(c_j) = exp(η·f(s_i, c_j)) / Σ_{k=1..N} exp(η·f(s_i, c_k)) (2)

where η is an adjusting parameter greater than 0, used to tune the sensitivity to taste differences: the larger η, the sharper the user preference distribution; the smaller η, the smoother the user preference distribution.

In the HMM, considering that the embodiment of the present application has already translated the user's type preferences into a probability distribution, the observation probability can directly be:

P(o_t = c_j | q_t = s_i) = b_i(c_j) (3)

In addition, if a dwell time d_t or other features exist in the actual service, this can be further extended, for example, to:

b_i(c_j, d_t) ∝ exp(η·f(s_i, c_j) + λ·g(d_t)) (4)

where λ is a weighting coefficient and g(·) is a feature transform of the dwell time.
For example, in training the HMM, initial values of π, A, and B may first be preset. The initial values of π can be set to a uniform distribution, or the user's initial access preferences can be counted from historical data. The initial values of A may be derived from prior business logic (e.g., the probability of a user turning from a basic-query state to a risk-assessment state) or counted from transition frequencies in the data. The initial values of B may come from statistical data; e.g., f(s_i, c_j) may be initialized to the access frequency or average dwell time of type c_j when the user is under the query intent corresponding to state s_i in a labeled dataset (e.g., user query intents labeled manually or by rules). When no domain-expert knowledge is available, the embodiment of the present application can also take the logarithmic frequency ratio as the initial value, namely:

f(s_i, c_j) = log((n_ij + α) / (Σ_k n_ik + β)) (5)

where n_ij is the number of times the user accesses type c_j under state s_i, α and β are smoothing parameters, and log represents the logarithmic operation. If expert experience is available, linear weighted adjustment can also be performed on this basis.
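The smoothed log-frequency initialization described above can be sketched as follows; the access counts and the smoothing values α and β are assumptions for illustration:

```python
import math

def init_taste(counts, alpha=1.0, beta=6.0):
    """Initial taste values per the log-frequency-ratio scheme:
    f(s_i, c_j) = log((n_ij + alpha) / (sum_k n_ik + beta)).
    alpha and beta are smoothing parameters; the defaults are assumptions."""
    total = sum(counts) + beta
    return [math.log((n + alpha) / total) for n in counts]

# Hypothetical access counts n_ij for six types under one labelled intent.
f0 = init_taste([40, 5, 0, 3, 1, 1])
```

The smoothing keeps never-accessed types (count 0) at a finite, low initial taste instead of negative infinity.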
The process of updating the parameters of the HMM using the Baum-Welch algorithm is described further below.
In training the HMM, an observation sequence may first be defined, assumed to be denoted O = (o_1, o_2, ..., o_T). Meanwhile, the forward variable α_t(i) can be defined as:

α_t(i) = P(o_1, o_2, ..., o_t, q_t = s_i | π, A, B) (6)

and the backward variable β_t(i) can be defined as:

β_t(i) = P(o_{t+1}, ..., o_T | q_t = s_i, π, A, B) (7)

In this way, the forward-backward algorithm can be used to calculate α_t(i) and β_t(i) given π, A, and B.
After obtaining α_t(i) and β_t(i), the posterior probability that the state is s_i at time t (assumed to be denoted γ_t(i)) can be calculated, namely:

γ_t(i) = α_t(i)·β_t(i) / Σ_{j=1..M} α_t(j)·β_t(j) (8)

Likewise, the posterior probability of a state transition (assumed to be denoted ξ_t(i, j)) is:

ξ_t(i, j) = α_t(i)·a_ij·b_j(o_{t+1})·β_{t+1}(j) / P(O | π, A, B) (9)
After obtaining γ_t(i) and ξ_t(i, j), they can be used to update the parameters of the HMM. For example, the updated initial state probability (assumed to be denoted π'_i) can be:

π'_i = γ_1(i) (10)

and the updated state transition probability (assumed to be denoted a'_ij) is:

a'_ij = Σ_{t=1..T-1} ξ_t(i, j) / Σ_{t=1..T-1} γ_t(i) (11)
For the update of the observation probability distribution part, f(s_i, c_j) needs to be updated, thereby influencing the parameters of B. Since b_i(c_j) contains the exp(η·f(s_i, c_j)) term, the update needs to maximize the log-likelihood, possibly using iterative optimization (e.g., gradient ascent or a nested EM procedure), where the log-likelihood function is as follows:

L = Σ_{t=1..T} Σ_{i=1..M} γ_t(i)·log b_i(o_t) (12)

where b_i(o_t) is given by formula (2).
In the M-step, updating f(s_i, c_j) requires a weighted statistic of γ_t(i) for a particular c_j. For a given state s_i, the probability that the observation is c_j is closely related to b_i(c_j), namely:

b_i(c_j) = exp(η·f(s_i, c_j)) / Σ_{k=1..N} exp(η·f(s_i, c_k)) (13)

Letting:

u_ij = η·f(s_i, c_j) (14)

then:

b_i(c_j) = exp(u_ij) / Σ_{k=1..N} exp(u_ik) (15)
Counting over all (t, i) of the training set, the expected frequency that the observation is c_j when the user is in state s_i (assumed to be denoted E_ij) is:

E_ij = Σ_{t=1..T} γ_t(i)·1[o_t = c_j] (16)

where 1[·] is the indicator function. When updating f(s_i, c_j), it is desired under maximum likelihood estimation that:

∂L / ∂f(s_i, c_j) = 0 (17)

where ∂ represents the partial derivative.
Dropping the terms of the log-likelihood that do not depend on f(s_i, c_j), the partial derivative of L with respect to f(s_i, c_j) is:

∂L / ∂f(s_i, c_j) = η·(E_ij − b_i(c_j)·Σ_{t=1..T} γ_t(i)) (18)
Thus, f(s_i, c_j) can be continuously corrected by iterative methods (e.g., gradient ascent) until convergence. In practical applications, fixed-point iteration or numerical methods such as Newton's method can also be adopted.
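The whole update loop (E-step via forward-backward variables, re-estimation of π and A, and a gradient-ascent step on the taste values) can be sketched as follows. This is a didactic, unoptimized reconstruction; the state/type dimensions, learning rate, and toy observation sequence are assumptions:

```python
import math

def softmax(row, eta=1.0):
    """b_i(c_j) as a Softmax of taste values, per formula (2)."""
    m = max(row)
    e = [math.exp(eta * (x - m)) for x in row]
    z = sum(e)
    return [x / z for x in e]

def em_step(obs, pi, A, F, eta=1.0, lr=0.5):
    """One Baum-Welch-style iteration with Softmax-parameterised emissions:
    the E-step computes gamma/xi from the forward/backward variables
    (formulas (6)-(9)); the M-step re-estimates pi and A (formulas (10)-(11))
    and takes one gradient-ascent step on the taste values F (formula (18))."""
    M, T, N = len(pi), len(obs), len(F[0])
    B = [softmax(F[i], eta) for i in range(M)]
    # forward variables alpha_t(i)
    al = [[0.0] * M for _ in range(T)]
    for i in range(M):
        al[0][i] = pi[i] * B[i][obs[0]]
    for t in range(1, T):
        for j in range(M):
            al[t][j] = sum(al[t - 1][i] * A[i][j] for i in range(M)) * B[j][obs[t]]
    # backward variables beta_t(i)
    be = [[1.0] * M for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(M):
            be[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * be[t + 1][j] for j in range(M))
    pO = sum(al[T - 1])
    # posteriors gamma_t(i) and xi_t(i, j)
    gamma = [[al[t][i] * be[t][i] / pO for i in range(M)] for t in range(T)]
    xi = [[[al[t][i] * A[i][j] * B[j][obs[t + 1]] * be[t + 1][j] / pO
            for j in range(M)] for i in range(M)] for t in range(T - 1)]
    # M-step re-estimates
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1)) for j in range(M)]
             for i in range(M)]
    # gradient ascent: dL/df(i,j) = eta * (E_ij - b_i(c_j) * sum_t gamma_t(i))
    new_F = [row[:] for row in F]
    for i in range(M):
        g_i = sum(gamma[t][i] for t in range(T))
        for j in range(N):
            e_ij = sum(gamma[t][i] for t in range(T) if obs[t] == j)
            new_F[i][j] += lr * eta * (e_ij - B[i][j] * g_i)
    return new_pi, new_A, new_F

# Toy run: 2 hidden states, 3 types, a 5-step observation sequence.
pi0, A0 = [0.6, 0.4], [[0.8, 0.2], [0.3, 0.7]]
F0 = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
new_pi, new_A, new_F = em_step([0, 0, 1, 2, 0], pi0, A0, F0)
```

Iterating `em_step` until the log-likelihood stops improving corresponds to the convergence criterion described in the text.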
After training the HMM by the above training method, an updated set of initial state probabilities, state transition probability matrix, and user taste function f(s_i, c_j) can be obtained. Thus, given the observation sequences of the user's current and recent steps (such as the set of accessed types, the latest click actions, etc.), the posterior probabilities of the user under the query intents corresponding to the respective candidate states can be rapidly calculated using the forward-backward algorithm; the most probable candidate state (e.g., the candidate state with the maximum posterior probability) can then be selected, and the user taste function f can be used to calculate the user's preference parameters for the various types of enterprise information under the query intent corresponding to that candidate state, such as b_i(c_j). Subsequently, the user's preference strength (i.e., weight, equivalent to a quantitative measure of information scent) for each type at that moment can also be determined according to b_i(c_j), providing a basis for the subsequent ES (short for Elasticsearch, an open-source, distributed, highly scalable real-time search and analytics engine) query weighting and result ranking. That is, the embodiment of the present application can combine user behavior, business scenario requirements, and per-type preferences into a computable and trainable probability model, thereby providing a solid theoretical basis for subsequent personalized result ranking, dynamic tuning, and large language model auxiliary strategies.
In addition to using an HMM to infer the user's hidden query intent, a recurrent neural network (RNN), a long short-term memory network (LSTM), or the like may be used for sequence prediction when sufficient data and computing power are available, learning the user's access patterns on the enterprise information query platform to identify the potential query intent. A Bayesian network or another graph model can also be used instead of the HMM to make probabilistic inferences about the user's potential query intent (or business intent). For example, multi-dimensional user feature nodes (e.g., including access type frequency, dwell time, context category, etc.) may be defined and then collectively directed to a "query intent" node. For more complex scenarios or multi-factor combinations, a Bayesian network may possess more flexible extensibility than an HMM; e.g., when the sources of user behavioral cues are diverse and the structure is more diffuse, using a graph model to infer the user's potential query intent may be considered.
In step 104, the plurality of candidate states are ranked in order of the probability of being selected from high to low, and the query intention corresponding to at least one candidate state ranked earlier in the ranking result is used as the query intention of the user.
In some embodiments, after the selected probabilities corresponding to the candidate states are obtained, the candidate states may be ranked in descending order of selected probability, and the query intent corresponding to at least one candidate state ranked earlier in the ranking result taken as the user's current query intent. For example, taking 10 candidate states as an example, assume they are states s_1 to s_10, and assume that the selected probability of state s_1 is 89%, of state s_2 is 86%, of state s_3 is 91%, of state s_4 is 94%, of state s_5 is 80%, of state s_6 is 84%, of state s_7 is 96%, of state s_8 is 81%, of state s_9 is 95%, and of state s_10 is 85%. Ranking the 10 candidate states in descending order of selected probability gives the ranking result s_7, s_9, s_4, s_3, s_1, s_2, s_10, s_6, s_8, s_5, so the query intent corresponding to state s_7 (e.g., assuming a job application scenario) can serve as the user's current query intent.
In step 105, a preference function is constructed based on the plurality of types and state sets, and preference parameters for each type under query intent by the user are determined by the preference function.
In some embodiments, the above-described construction of a preference function (e.g., the user taste function f) based on the plurality of types and the state set may be implemented as follows: obtain the labeled query intents among the plurality of query intents respectively corresponding to the plurality of candidate states included in the state set; obtain the types of enterprise information the user interacted with (clicked or browsed) under the labeled query intents, and the interaction parameters for that enterprise information (such as click frequency or browsing duration); and construct a preference function from independent variables to dependent variable, taking the labeled query intents and the types as the independent variables and the interaction parameters as the dependent variable.
By way of example, embodiments of the present application may employ information foraging theory, which treats the user as an "information forager" searching for valuable information in an information environment. In an embodiment of the application, each type c_j holds a certain appeal for the user under the query intent corresponding to state s_i, like an "information scent" (Information Scent). In view of this, embodiments of the present application can extend and define this concept as a "user taste function" f (i.e., the preference function):

f : S × C → ℝ (19)

where ℝ represents the real number field. The higher the function value of the user taste function f(s_i, c_j), the stronger the user's potential preference for type c_j under the query intent corresponding to state s_i; that is, the more interested the user is in enterprise information of type c_j.
It should be noted that the construction process of the user taste function f is described above, and is not repeated here. In addition, collaborative filtering or matrix decomposition methods can be adopted directly, treating "user × type" as a scoring matrix and inferring the user's preference degree for each type of enterprise information by analyzing the user's historical clicks, dwell time, and the like. Furthermore, if each type of enterprise information has specific topic labels at the text level (e.g., risk, finance, recruitment, etc.), a "type appeal" score may also be derived by topic modeling or simple keyword statistics, replacing the user taste function in the IFT.
In some embodiments, the above-described determination, through the preference function, of the user's preference parameter for each type under the query intent may be implemented by performing the following processing for each type: input the query intent and the type into the preference function to obtain the function value output by the preference function; normalize the function value by a normalization function (e.g., the Softmax function); and take the normalized function value as the user's preference parameter (e.g., preference degree) for the type under the query intent.
Illustratively, taking the preference function as the user taste function f as an example: after constructing f, the query intent (e.g., a C-end user job application scenario) and a type c_j can be input into f, and the output function value taken as the user's potential preference degree for enterprise information of type c_j in the job application scenario; the larger the output function value, the higher the user's potential preference for enterprise information of type c_j in the job application scenario. The output function values can then be normalized by a normalization function (e.g., the Softmax function), and the normalization result taken as the user's preference parameter for the target type c_j in the job application scenario; that is, the preference parameter has a value ranging from 0 to 1.
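The normalization step described above amounts to a Softmax over the taste values; in the sketch below, the taste values and η are assumptions:

```python
import math

def preference_params(taste_row, eta=1.0):
    """Softmax-normalise taste values f(s_i, c_j) into preference parameters
    in (0, 1) that sum to 1; a larger eta sharpens the distribution."""
    m = max(taste_row)
    e = [math.exp(eta * (x - m)) for x in taste_row]
    z = sum(e)
    return [x / z for x in e]

# Hypothetical taste values for four types under one query intent.
p = preference_params([2.0, 0.5, 1.0, 0.1], eta=1.0)
```

The type with the highest taste value receives the largest preference parameter, and raising η concentrates more of the probability mass on it.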
That is, the embodiment of the present application lays a rigorous foundation for the personalized ranking strategy through the HMM + IFT fusion model, mapping complex user behavior and service requirements into a set of trainable, optimizable probability models and parameter-update mechanisms.
In some embodiments, a type filtering panel may also be displayed in the man-machine interaction interface provided by the client, and a plurality of candidate types may be displayed in the panel for the user to select. If the user manually selects or excludes certain types in the type filtering panel during the query, the preference parameters calculated by the preference function can be corrected according to the user's selection or exclusion results. For example, if the user manually selects "job-seeking recruitment" in the panel, the user's preference parameter for "job-seeking recruitment" calculated by the preference function can be increased, so that a greater weight is allocated to enterprise information of the "job-seeking recruitment" type in subsequent searches; if the user manually excludes "financial information", the user's preference parameter for "financial information" can be reduced, i.e., a smaller weight is allocated to enterprise information of the "financial information" type in subsequent searches, thereby accommodating the user's temporary preference adjustments.
In other embodiments, when sufficient behavior data is available (i.e., when the data amount of the first operation data is greater than a data amount threshold), the embodiment of the present application may further combine algorithms such as collaborative filtering (CF) and graph neural networks (GNN) to perform multi-dimensional modeling of user interests, and then fuse the result with the HMM result, so as to improve the accuracy of type preference prediction.
In step 106, the multiple preference parameters are mapped to the weights of the corresponding types, and the multiple weights and the keywords are filled into a preset query data structure to obtain a query expression.
In some embodiments, the above mapping of the plurality of preference parameters to the weights of the corresponding types may be implemented as follows: determine the weight-difference amplitude matched with the service requirement; select, from a plurality of mapping manners (e.g., mapping formulas), a target mapping manner corresponding to that weight-difference amplitude, where the plurality of mapping manners may include linear mapping, logarithmic mapping, and exponential mapping; and, for each preference parameter, map the preference parameter through the target mapping manner to obtain the weight of the corresponding type.
Taking the time-series model as an HMM as an example: through the HMM and preference-function (e.g., user taste function) modeling, the user's preference degree for each type of enterprise information under the query intent corresponding to the currently most probable state (e.g., the candidate state with the highest selected probability) can be obtained at real-time query, i.e., the function values output by f. These are then normalized by the Softmax function to obtain b_i(c_j), i.e., the user's preference parameters for the various types of enterprise information under the query intent corresponding to the currently most probable state s_i; e.g., the relative preference probabilities of accessing the various types of enterprise information under that query intent. To facilitate use of the weight or boost parameter in ES (a parameter for modifying document relevance, defaulting to 1), the values of b_i(c_j) (i.e., the preference parameters) may be mapped to positive weights suitable for ES processing.
It should be noted that, in ES, the boost or weight parameter is usually multiplied with or added to the original relevance score: if the interval is too large, the influence of "preference" on the ranking result may be too strong; if too small, it may not effectively differentiate the original relevance scores. The embodiment of the present application may therefore adjust the upper and lower limits of the interval according to test results or experimental feedback, so as to achieve the desired ranking effect. For example, since b_i(c_j) ∈ (0, 1), in order to make the weights differ significantly in ES, embodiments of the present application may define a linear mapping that converts b_i(c_j) into a range, for example between 1 and 5; that is, the larger the value of b_i(c_j), the closer the mapped weight (assumed to be denoted w_j) is to 5, and the smaller the value of b_i(c_j), the closer w_j is to 1. In addition, the embodiment of the present application can adjust the mapping formula according to service requirements; for example, logarithmic mapping or exponential mapping can be used to modify the difference amplitude between different weights. For example, when a more significant difference between different weights is desired, an exponential mapping may be used to map the values of b_i(c_j).
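The interval mapping described above might be sketched as follows; the [1, 5] range matches the example in the text, while the exact logarithmic and exponential formulas are illustrative assumptions:

```python
import math

def to_boost(p, mode="linear", lo=1.0, hi=5.0):
    """Map a preference parameter p in [0, 1] to an ES boost weight in [lo, hi].
    All three modes hit lo at p = 0 and hi at p = 1; 'log' (concave) narrows
    the gaps among high preferences, while 'exp' (convex) widens them."""
    if mode == "linear":
        x = p
    elif mode == "log":
        x = math.log1p(p) / math.log(2.0)
    elif mode == "exp":
        x = (math.exp(p) - 1.0) / (math.e - 1.0)
    else:
        raise ValueError("unknown mapping mode: " + mode)
    return lo + (hi - lo) * x

w = to_boost(0.8)  # linear: 1 + 4 * 0.8 = 4.2
```

Switching `mode` per the service's desired weight-difference amplitude corresponds to selecting the target mapping manner described in the text.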
In other embodiments, the enterprise information query platform may divide the enterprise information in the enterprise information base into multiple types according to business requirements, for example: a basic information type, including basic business registration information, business registration, equity structure, etc.; a financial information type, including financial statements, annual business conditions, credit rating data, etc.; a risk information type, including litigation, enforcement, dishonest-executee records, administrative penalties, associated-enterprise risk cues, etc.; a recruitment information type, including recruitment posts, employee evaluations, team structure information, etc.; a vendor information type, including supply chains, upstream and downstream business relationships, compliance records, etc.; a customer due diligence type, including due diligence reports, associations, collaboration records, etc.; and an anti-money-laundering type, including suspected money laundering records, cross-border fund transaction anomaly data, and the like. Of course, other types may be extended depending on other business conditions. For example, the preference parameter of the user for the basic information type under the query intention, obtained based on the HMM prediction, may be mapped to obtain the weight of the basic information type, and the preference parameter for the risk information type under the query intention may likewise be mapped to obtain the weight of the risk information type. In this way, weights corresponding to the respective types can be obtained. After obtaining the weights, the keyword input by the user and the weights corresponding to each type are filled into a preset query data structure to obtain a corresponding query expression (such as a query DSL).
By way of example, an ES query generally comprises three parts: a basic query (query) that matches the user's search keywords, for example using match, bool, filter, and must conditions; a custom scoring strategy (e.g., a function_score scoring function) applied on top of the basic match results; and a score-combination control (e.g., score_mode and boost_mode). For example, in the query DSL, the basic query condition may be the enterprise name (i.e., the keyword) input by the user: a must condition may be added to the bool query to match the name, while the functions array of function_score is filled with the weights corresponding to each type. Taking score_mode as an example, after a plurality of enterprise documents matching the enterprise name are retrieved, a basic relevance score is first calculated, the type of each enterprise document is then determined, and the relevance score is weighted (e.g., multiplied) by the weight corresponding to that type; the better a document's type matches the user's taste, the higher its final score.
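The three-part query structure just described can be sketched as a Python dict that would be serialized to the ES query DSL. The field names "name" and "type" are assumptions about the index mapping, not specified in the source:

```python
def build_query(keyword, type_weights):
    """Assemble a function_score query: a bool/must clause matches the
    enterprise name, and filter-scoped weight functions boost documents
    by the type they belong to."""
    functions = [
        {"filter": {"term": {"type": t}}, "weight": w}
        for t, w in type_weights.items()
    ]
    return {
        "query": {
            "function_score": {
                "query": {"bool": {"must": [{"match": {"name": keyword}}]}},
                "functions": functions,
                "score_mode": "multiply",  # how the function weights combine
                "boost_mode": "multiply",  # weight * base relevance score
            }
        }
    }

dsl = build_query("Acme Ltd", {"basic": 1.2, "risk": 4.5})
```

With score_mode and boost_mode set to "multiply", a matching document's final score is its base relevance score multiplied by the weight of its type, which is exactly the weighting behavior described above.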
In step 107, the enterprise information base is queried based on the query expression, a plurality of enterprise information matching the keyword is obtained, and the plurality of enterprise information is ranked based on the plurality of weights.
In some embodiments, the enterprise information base may include a plurality of sub-bases, each of which stores enterprise information of a plurality of enterprises of one type. In this case, querying the enterprise information base based on the query expression to obtain a plurality of enterprise information matching the keyword may be implemented as follows: the plurality of sub-bases are queried based on the query expression to obtain the enterprise information matching the keyword from each sub-base, and the enterprise information obtained from the plurality of sub-bases is merged to obtain the plurality of enterprise information matching the keyword.
For example, the enterprise information query platform may store different types of enterprise information in different sub-libraries after dividing the plurality of enterprise information in the enterprise information library into a plurality of service types according to service requirements, for example, may store the enterprise documents of the basic information class in the sub-library 1, store the enterprise documents of the financial information class in the sub-library 2, store the enterprise documents of the risk information class in the sub-library 3, and so on, and each sub-library stores only one type of enterprise document. In addition, corresponding indexes can be respectively constructed for a plurality of sub-libraries, so that a plurality of indexes (namely, a multi-index strategy) can be queried simultaneously during searching, and weighting processing is performed in the query result merging stage.
It should be noted that if different types of enterprise information are not stored separately, a single-index multi-type marking policy may be used: the same index is used, but a field is added to each enterprise document to mark the type to which it belongs. In this index, a type field may take values such as basic (basic information), risk (risk information), finance (financial information), and supply (vendor information) to distinguish the type of the enterprise document. In practical applications, the two methods can be chosen flexibly; one recommended scheme is to use a single index distinguished by a type field, so as to simplify the search request structure.
In other embodiments, the sorting of the plurality of enterprise information based on the plurality of weights may be implemented as follows: for each piece of enterprise information, determine the type to which it belongs, weight the relevance between the enterprise information and the keyword by the weight corresponding to that type to obtain a weighted relevance, and sort the plurality of enterprise information in descending order of weighted relevance. That is, by mapping the preference parameters (preference values) for each type under the user's predicted query intention, obtained from the time sequence model (such as an HMM), into the ES indexing process, it can be ensured that different types of enterprise information are assigned corresponding weights in the search stage, so that after weight adjustment the query results returned by ES better match the user's hidden query intention and taste preference.
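The weighted-relevance sort described above can be sketched in a few lines; the document tuples and weight values here are illustrative placeholders:

```python
def rank(docs, type_weights):
    """docs: list of (doc_id, doc_type, base_relevance) tuples. Multiply
    each base relevance score by the weight of the document's type, then
    sort descending by the weighted score. Types without an explicit
    weight default to 1.0 (no adjustment)."""
    scored = [
        (doc_id, base * type_weights.get(doc_type, 1.0))
        for doc_id, doc_type, base in docs
    ]
    return sorted(scored, key=lambda x: x[1], reverse=True)

docs = [("e1", "basic", 3.0), ("e2", "risk", 2.0), ("e3", "finance", 2.5)]
ranked = rank(docs, {"risk": 4.0, "basic": 1.5})
```

Note how the risk-type document "e2", despite the lowest base relevance, rises to the top once the user's strong risk preference is applied.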
When calculating the relevance between enterprise information and the keyword, if the enterprise information is large (for example, its text length exceeds a word-count threshold) and more accurate semantic search is required, the enterprise information may first be vectorized and the relevance then calculated by a vector engine.
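A common choice for the vector-based relevance just mentioned is cosine similarity between embeddings. A minimal sketch — the vectors here are illustrative placeholders, since a real system would obtain them from an embedding model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors, usable as the
    semantic relevance score between a long document and the query."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Parallel vectors score 1.0; orthogonal vectors score 0.0.
sim = cosine([1.0, 2.0, 0.0], [2.0, 4.0, 0.0])
```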
That is, after the query request sent by the user arrives at the enterprise information query platform, the enterprise information query platform may firstly infer through the pre-trained HMM according to the user session information and the latest click behavior (i.e., the first operation data), calculate the posterior probability of the user in each candidate state, select the candidate state corresponding to the maximum posterior probability, use the query intention corresponding to the candidate state as the current query intention of the user, and then calculate the preference parameter of the user for each type of enterprise information under the query intention through the user taste function, and map the preference parameter into the corresponding type of weight. Then, the ES query DSL (i.e., query expression) may be dynamically constructed, for example, the function array of the function_score may be filled with the weights described above, and the generated query DSL is submitted to the ES for execution, and finally, the ES returns a query result, and sequences according to a plurality of weights and returns the query result to the user.
In order to improve performance, in the technical solution provided in the embodiment of the present application, the enterprise information query platform may also cache repeated requests in a short time, for example, if the user queries the same keyword multiple times in a short time, the calculated query intention and weight may be multiplexed. In addition, the parameters of the HMM and the user taste function can be updated periodically to cope with long-period preference changes of the user.
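The short-term caching of computed intents and weights described above can be sketched with a simple TTL cache; the 60-second window and cache key shape are assumptions for illustration:

```python
import time

class QueryCache:
    """Reuse a computed query intention and weight set for repeated
    queries from the same user within a TTL window."""
    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_time, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.time():
            return entry[1]
        self._store.pop(key, None)  # drop expired or missing entries
        return None

    def put(self, key, value):
        self._store[key] = (time.time() + self.ttl, value)

cache = QueryCache(ttl_seconds=60.0)
cache.put(("user1", "Acme"), {"risk": 4.0})   # (user, keyword) -> weights
hit = cache.get(("user1", "Acme"))
```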
In step 108, it is determined whether a preference adjustment instruction sent by the terminal device is received, if yes, step 109 is executed, and if no, step 110 is executed.
In some embodiments, the preference adjustment instruction may be sent by the terminal device after receiving a selection operation of the user for a plurality of candidate types, where the preference adjustment instruction may carry a preference type selected by the user from the plurality of candidate types.
For example, a type option panel may be displayed in the human-computer interaction interface of a client running on the terminal device, in which a plurality of candidate types (including, for example, basic information, financial information, risk information, recruitment information, vendor due diligence, customer due diligence, anti-money-laundering investigation, etc.) are displayed for the user to select. When making a query, the user may query directly without making a selection, or may select a preferred type in the type option panel; that is, the user may switch his or her preference for certain types. For example, assuming that the user selects "risk information" in the type option panel, the terminal device sends a temporary preference adjustment signal to the enterprise information query platform, i.e., the terminal device sends a preference adjustment instruction to the enterprise information query backend, where the preference adjustment instruction carries the preference type (e.g., "risk information") selected by the user in the panel.
In step 109, the sorted plurality of enterprise information is reordered by the large language model, and the reordered plurality of enterprise information is returned to the terminal device.
In some embodiments, when a preference adjustment instruction sent by a terminal device is received, the received preference adjustment instruction may be parsed first to obtain a preference type carried by the preference adjustment instruction, then, for each enterprise information in the plurality of enterprise information, a similarity between a summary of the enterprise information and the preference type may be determined through a large language model, and finally, the sorted plurality of enterprise information may be reordered according to a sequence from high to low in similarity.
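The similarity-based reordering described above can be sketched as follows. The similarity function here is a crude keyword-overlap stand-in for the large language model call, purely for illustration:

```python
def reorder_by_preference(docs, preferred_type, similarity_fn):
    """docs: list of (doc_id, summary) pairs. similarity_fn scores how
    closely a summary matches the user's selected preference type; in
    practice this would be computed by a large language model."""
    return sorted(docs,
                  key=lambda d: similarity_fn(d[1], preferred_type),
                  reverse=True)

def toy_similarity(summary, preferred_type):
    """Stand-in for an LLM similarity judgment: 1.0 if the preference
    type appears in the summary, else 0.0."""
    return 1.0 if preferred_type in summary.lower() else 0.0

docs = [("e1", "Annual financial statements"),
        ("e2", "Pending litigation and risk alerts")]
reordered = reorder_by_preference(docs, "risk", toy_similarity)
```

Because Python's sort is stable, documents with equal similarity keep their original (relevance-weighted) order, so the LLM pass only perturbs the ranking where the temporary preference actually discriminates.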
By way of example, the embodiment of the present application may introduce a large language model to respond immediately to temporary demand changes of VIP users or users of specific institutions. For example, after receiving a preference adjustment instruction sent by the terminal device, the enterprise information query platform may input the preference type carried in the instruction into the large language model, so that the large language model performs additional ranking fine-tuning according to the user's temporary preference. That is, the natural language understanding capability of the large language model can be used to re-rank the initial ranking result according to the user's current temporary intention change, presenting a better-matched intelligent ranking result to the user.
In the technical solution provided by the embodiment of the present application, before the large language model is used, instruction fine-tuning or domain fine-tuning may be performed for the enterprise information query scenario, so that the large language model becomes more familiar with the semantics of keywords such as finance, vendor, and risk. In addition, besides using the large language model to re-rank the sorted enterprise information, a lightweight classifier or sentence-vector model may also be used to identify the user's temporary demand shift, for example in a resource-constrained environment or a latency-sensitive scenario, where a lightweight small model can be used more flexibly.
In addition, it should be noted that when the terminal device invokes the client to present the reordered plurality of enterprise information, it may also guide the user to view other possibly relevant types of enterprise information through guided question-and-answer prompts. For example, a real-time feedback component may be presented in the human-computer interaction interface of the client, through which the user can give satisfied/dissatisfied or like/dislike feedback on the current ranking result; the enterprise information query platform can feed the user's feedback data into the large language model or the HMM to trigger short-term or long-term optimization. In addition, when the user needs to go deeper or dynamically adjust the query intent, the large language model may also give intelligent question-and-answer prompts, such as asking whether the user would also like to view vendor due diligence information.
In step 110, the ordered plurality of enterprise information is returned to the terminal device.
In some embodiments, following the above examples, when the user does not select any of the candidate types in the search option panel at query time, i.e., the user does not temporarily adjust his or her preferences, the enterprise information query platform may directly return the sorted plurality of enterprise information to the user's terminal device.
In other embodiments, in some highly vertical B2B scenarios, such as those serving only a specific financial institution with stable rules, an expert system or rule engine may also be used to secondarily sort the sorted plurality of enterprise information. For example, when it is detected that the user has queried the keyword "provider risk" multiple times in the current session, the ranking priority of enterprise information of the vendor due diligence type may be raised.
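A rule-engine secondary sort of the kind just described can be sketched as a keyword-count trigger. The rule table, type labels, and boost factor are illustrative assumptions:

```python
from collections import Counter

# Rule table: session keyword -> (type to boost, boost factor). Illustrative.
RULES = {"provider risk": ("vendor_dd", 2.0)}

def apply_rules(session_keywords, ranked_docs, min_hits=2):
    """ranked_docs: list of (doc_id, doc_type, score). If a rule keyword
    appears at least min_hits times in the session, multiply the scores
    of that rule's target type by the boost factor, then re-sort."""
    counts = Counter(session_keywords)
    boosts = {t: f for kw, (t, f) in RULES.items() if counts[kw] >= min_hits}
    rescored = [(d, t, s * boosts.get(t, 1.0)) for d, t, s in ranked_docs]
    return sorted(rescored, key=lambda x: x[2], reverse=True)

session = ["provider risk", "Acme", "provider risk"]
docs = [("e1", "basic", 3.0), ("e2", "vendor_dd", 2.0)]
result = apply_rules(session, docs)
```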
In some embodiments, referring to fig. 5, fig. 5 is a third flow chart of the enterprise information query method provided in the embodiment of the present application, as shown in fig. 5, after step 109 or step 110 shown in fig. 3 is performed, steps 116 to 118 shown in fig. 5 may also be performed, and the description will be made with reference to the steps shown in fig. 5.
In step 116, second operational data of the user on the enterprise information query platform is obtained.
Here, the second operation data is generated by the enterprise information query platform returning the sequenced plurality of enterprise information to the terminal device or returning the reordered plurality of enterprise information. For example, the second operation data may include keywords of the last several queries of the user, the type of clicked results, the page stay time, etc. In addition, if the user is a VIP user or a specific institution user, the user's past interaction history and long-term query preference characteristics can be provided.
In step 117, the second operation data is input into the large language model so that the large language model judges whether or not the user's preference is shifted based on the second operation data.
In some embodiments, the second operation data may include operation data for a plurality of observation periods, and step 117 may be implemented by inputting the second operation data into the large language model so that the large language model performs the following processing: for the operation data of each observation period, obtain the type to which the enterprise information whose interaction frequency (e.g., click frequency) is greater than an interaction frequency threshold belongs (i.e., the user's preference type within that observation period); compare this type with the user's preference type determined based on the first operation data, or with the type whose preference parameter is greater than a parameter threshold (i.e., the user's preference type predicted based on the time sequence model and the preference function); and generate a preference transfer signal when the comparison results are inconsistent, where the preference transfer signal characterizes a change in the user's preference during that observation period.
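The per-period comparison described above can be sketched without the LLM itself: find each period's dominant interaction type and compare it against the predicted preference. The period data and thresholds are illustrative assumptions:

```python
from collections import Counter

def detect_shift(periods, predicted_type, threshold=3):
    """periods: list of per-period operation data, each a list of
    (clicked_type, interaction_count) pairs. Emit a preference transfer
    signal for each period whose dominant type exceeds the interaction
    threshold and differs from the model-predicted preference type."""
    signals = []
    for ops in periods:
        counts = Counter()
        for t, n in ops:
            counts[t] += n
        if not counts:
            continue
        top_type, top_n = counts.most_common(1)[0]
        if top_n > threshold and top_type != predicted_type:
            signals.append(top_type)
    return signals

periods = [[("risk", 5), ("basic", 1)],  # still matches the prediction
           [("vendor_dd", 6)],           # shifted
           [("vendor_dd", 4)]]           # shifted again
shifts = detect_shift(periods, predicted_type="risk")
```

Counting the signals that name the same type then feeds directly into the times-threshold check of step 118.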
For example, embodiments of the present application may introduce a large language model as an online adapter for particular users (e.g., VIP users or specific institutions), which can detect preference changes in real time, through contextual understanding of the user's recent query keywords, click behavior, HMM state, and ES results, when the user deviates from the HMM-predicted query intention (e.g., shifts from attending to risk information to attending to vendor information). In addition, the large language model may store detected preference transfer information (e.g., the user turning from "risk assessment" to "vendor due diligence") in the user profile repository, and if the change exhibits persistence (e.g., the number of preference changes reaches a threshold), the HMM may be retrained.
Specifically, taking the timing model as an HMM as an example, in most cases the HMM and the ES weight adjustment policy can meet the user's personalized requirements. But for particular users (e.g., VIP users, key partner banks, insurance agencies, or fund companies), query behavior may exhibit more complex and variable features. For example, such users may focus on one type (e.g., risk information) for a period of time and then suddenly shift to deep queries on another type (e.g., vendor information). At this point, HMM state prediction and offline-optimized ES policies may not capture such fast shifts in time. In view of this, the embodiment of the present application uses the large language model as an online, intelligent enhancement layer, providing more timely and fine-grained personalized responses for special or high-value users. The task of the large language model is to analyze the user's recent behavior data and determine whether the current user has undergone a preference shift. For example, if the user previously clicked risk-type enterprise documents frequently but after a query continuously clicks and attends to vendor-information-type documents, the large language model can conclude that the user's temporary demand has shifted from the risk type to the vendor due diligence type. Or, if the HMM-predicted type is "risk assessment" but the user's recent actual behavior leans toward another type (such as customer due diligence), the large language model can understand, at the semantic level, the keywords input by the user, the click sequence, document summaries, and user profile data to derive a preference transfer signal.
It should be noted that after the large language model concludes that the user's preference has shifted, it can perform semantic understanding and matching-degree evaluation on each enterprise document according to the current temporary context. For example, if the user is paying short-term attention to vendor-type enterprise information, the large language model can give higher ranking priority to vendor-type entries among the sorted plurality of enterprise information, or it can use natural language reasoning to promote entries whose summaries contain text features highly related to the user's latest preference. After this process is completed, the large language model may output a new ranked list or a weight distribution scoring each piece of enterprise information. The enterprise information query platform can then re-rank the sorted plurality of enterprise information according to these results to obtain the final ranking, which is returned to the user, thereby achieving an immediate, high-precision personalized response.
In other embodiments, the large language model may record preference transfer information of the user, for example, when the large language model determines that the preference of the user changes significantly, the large language model may output a preference transfer signal, and after receiving the signal, the enterprise information query platform may record the preference transfer information into a user portrait repository, where the repository may be an independent data storage, for example, a key value pair or graph database structure may be used to store a search history, a preference transfer track, a requirement diversion at a specific time point, and corresponding context information of the user.
In step 118, when the user's preferences are shifted and the number of times shifted to the same type is greater than the number of times threshold, the timing model is retrained based on the second operation data.
In some embodiments, taking 10 observation periods as an example, assume that the large language model generates 8 preference transfer signals in total across the 10 observation periods, that is, the user's preference changed in 8 of the observation periods. Assuming the preset threshold is 7, it can be determined that the user's current preference has changed significantly (i.e., the user's preference has shifted similarly multiple times over a sustained period), and the time sequence model can then be retrained.
For example, for a user temporary preference transfer, the large language model may rank the current query requests twice in time, thereby presenting the user with results that better meet the temporary needs. The user can enjoy the customized result based on the latest preference transfer without waiting for offline retraining of the HMM or ES policy when querying next time. In addition, when the user's preference transfer records accumulate to some extent (e.g., the number of times the user's preferences change is greater than a threshold number of times), the enterprise information query platform may perform periodic evaluations, such as when the large language model indicates that the user is biased from "HMM predicted status" to another type multiple times, and this situation continues to occur or is more frequent (e.g., the user does not agree with the HMM predicted results multiple times during the past two weeks), then the retraining process may be initiated. For example, the enterprise information query platform can import the actual user access records and the preference transfer events in the period into a training pipeline of the HMM, and optimize the model parameters again through a Baum-Welch algorithm, so that the state transfer probability and the user taste function are updated, and the HMM can be better adapted to the new long-term demand characteristics of the user.
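The retraining step can be illustrated with a deliberately simplified, count-based re-estimation of the HMM state-transition probabilities from decoded intent sequences. This is only a sketch of the idea: a production system would, as the text states, run the full Baum-Welch algorithm over the raw observation sequences (e.g., via an HMM library) rather than counting decoded states:

```python
from collections import defaultdict

def reestimate_transitions(state_sequences, smoothing=1.0):
    """Laplace-smoothed, count-based re-estimation of HMM transition
    probabilities from sequences of decoded user-intent states.
    Simplified stand-in for Baum-Welch, for illustration only."""
    states = sorted({s for seq in state_sequences for s in seq})
    counts = defaultdict(lambda: defaultdict(float))
    for seq in state_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    trans = {}
    for a in states:
        total = sum(counts[a].values()) + smoothing * len(states)
        trans[a] = {b: (counts[a][b] + smoothing) / total for b in states}
    return trans

# Recent sessions show the user drifting from risk queries to vendor
# due diligence (illustrative decoded state sequences).
seqs = [["risk", "risk", "vendor_dd"], ["risk", "vendor_dd", "vendor_dd"]]
A = reestimate_transitions(seqs)
```

After re-estimation, each row of the transition matrix is a valid probability distribution, and the risk-to-vendor transition has grown to reflect the user's new long-term behavior.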
In addition, when the user exhibits strong long-term preferences for certain types, the enterprise information query platform may also update the ES tokenizer, adjust the dictionary, enhance the weight initialization parameters of relevant fields, and so on. Preset optimizations may also be made in the scoring function (function_score) or the base weight policy of the corresponding type, so that the user's new basic preferences are satisfied more quickly without relying on the large language model for every intervention. For example, if long-term statistics indicate that a bank user's request frequency for anti-money-laundering and vendor due diligence information has increased substantially, higher initial ES weights may be preset for that user, so that the baseline ranking of ES is closer to the determined demand characteristics of such users.
An adaptive, evolving closed loop can be formed by fusing the long-term and short-term strategies: (1) when a preference change occurs in the short term, the large language model performs immediate secondary ranking and the user immediately enjoys a better experience; (2) multiple preference-change records are collected, the HMM is retrained and the ES policy is adjusted, and the optimized result is solidified for the long term; (3) the optimized HMM and ES policies make future queries more efficient and reduce dependence on immediate intervention by the large language model; and (4) if the user again undergoes a new preference shift, the large language model captures and corrects it again, forming a continuously iterating, evolving positive feedback loop.
It can be seen that this multi-level optimization can both meet individual users' temporary, in-depth needs and maintain preference adaptation for the entire user population or specific important users over the long term. For VIP users, results can be quickly optimized by means of the real-time understanding capability of the large language model, avoiding user frustration caused by unsatisfactory search results. Meanwhile, by continuously feeding user behavior back into offline retraining of the HMM and ES policies, the basic capability of the enterprise information query platform also evolves continuously; when similar demand changes occur in the future, frequent intervention by the large language model is no longer needed, reducing complexity and computational cost while improving overall query efficiency and user satisfaction. In long-term operation, the accumulated behavior data in the user profile repository, the continuous update iterations of the HMM, and the refined weight distribution of the ES policy, combined with the large language model's strengths in natural language and context understanding, give the enterprise information query platform adaptive and self-evolving characteristics, in line with the development trend and application value of future intelligent search and recommendation systems. That is, the user's short-term preference shifts can be adapted to in real time by the large language model, while long-term trends are solidified by retraining the HMM and updating the ES policy after preference changes are recorded.
In addition, the temporary adjustment data accumulated by multiple sessions of the user can be used as a reference for subsequent HMM training and ES optimization, so that the enterprise information query platform shows higher matching degree and intelligent level for specific users and scenes in long-term evolution.
In other embodiments, besides retraining the time sequence model after multiple similar preference shifts of the user occur, an online learning or reinforcement learning scheme may be adopted, and small-step iteration is performed after feedback data of the user is received each time, that is, the feedback of the user can influence model parameters in time, and convergence is faster.
In summary, the method for querying enterprise information provided by the embodiment of the application has the following beneficial effects:
1. By combining the hidden Markov model with information foraging theory, the embodiment of the present application can finely characterize the user's implicit query intention during enterprise information queries and the user's preference for each type of enterprise information, significantly improving the accuracy of predicting the user's preferred types in diversified search scenarios;
2. After calculating preference parameters of a user aiming at different types of enterprise information under the current most possible query intention through a preference function, mapping a plurality of preference parameters into weights of corresponding types respectively, realizing type-level fine reordering of the search result, and helping the user to efficiently find required contents in enterprise credit data with complex information quantity;
3. The embodiment of the present application introduces a large language model as an online enhancement layer, which can respond immediately when a specific user (such as a VIP user or a financial institution) undergoes a sudden preference shift and perform secondary intelligent ranking of the initial ranking result, thereby avoiding damage to the user experience caused by untimely policy updates;
4. In the technical scheme provided by the embodiment of the application, the large language model can continuously feed back dynamic change information such as user preference transfer and the like to a user portrait warehouse, a bottom HMM, an ES strategy optimization module and the like. In the subsequent iteration, the system automatically retrains and updates the core model and the index strategy to form a closed-loop optimization flow from short-term remediation to long-term evolution, and the overall intelligence and the matching degree are continuously improved.
That is, the embodiment of the application constructs the user taste function model by fusing the HMM and the information foraging theory, and based on the model, realizes personalized reordering at the type level in the ES search. And then, the real-time self-adaptive adjustment and the long-term strategy closed-loop optimization of the large language model to the specific user are assisted, so that the enterprise information query platform has the advantages of quick response, multi-level evolution and continuous improvement of the user experience. The technical scheme provided by the embodiment of the application can effectively meet diversified and dynamic user requirements in the field of enterprise information query, and provides a high-value intelligent information service for business decision and risk control.
Continuing with the description of an exemplary architecture of the enterprise information query apparatus 243 provided by embodiments of the present application implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the enterprise information query apparatus 243 of the memory 240 may include a receiving module 2431, an obtaining module 2432, a predicting module 2433, a sorting module 2434, a determining module 2435, a constructing module 2436, a mapping module 2437, a populating module 2438, a querying module 2439, and a transmitting module 24310.
A receiving module 2431, configured to receive a query request sent by a terminal device, where the query request carries a keyword; an obtaining module 2432, configured to obtain first operation data of a user of the terminal device on the enterprise information query platform, where the first operation data is generated before the query request is received; a predicting module 2433, configured to predict, based on the first operation data and through a pre-trained time sequence model, each of a plurality of candidate states included in a state set, to obtain a selected probability of each candidate state, where each candidate state corresponds to a query intent and the query intent represents a potential query target of the user when accessing the enterprise information query platform; a sorting module 2434, configured to sort the plurality of candidate states in descending order of selected probability; a determining module 2435, configured to use the query intent corresponding to the candidate state ranked first in the sorting result as the query intent of the user; a constructing module 2436, configured to construct a preference function based on a plurality of types and the state set, where the determining module 2435 is further configured to determine, based on the preference function, a preference parameter of the user for each type under the query intent; a mapping module 2437, configured to map each preference parameter to a weight of the corresponding type; a populating module 2438, configured to populate the keyword into a query template to obtain a query expression; a querying module 2439, configured to query an enterprise information base based on the query expression to obtain a plurality of pieces of enterprise information matched with the keyword, where the sorting module 2434 is further configured to sort the plurality of pieces of enterprise information based on the weights; and a sending module 24310, configured to return the sorted plurality of pieces of enterprise information to the terminal device when no preference adjustment instruction sent by the terminal device is received. The sorting module 2434 is further configured to re-sort the sorted plurality of pieces of enterprise information through a large language model when the preference adjustment instruction sent by the terminal device is received, and the sending module 24310 is further configured to return the re-sorted plurality of pieces of enterprise information to the terminal device.
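The prediction-and-selection flow of modules 2433 to 2435 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the "time sequence model" is replaced by a stand-in probability table, and the state names and probabilities are invented for the example.

```python
def predict_intent(candidate_states, score_fn):
    """Score each candidate state, sort high-to-low by selected
    probability, and take the top-ranked state's intent as the
    user's query intent."""
    scored = [(state, score_fn(state)) for state in candidate_states]
    ranking = sorted(scored, key=lambda pair: pair[1], reverse=True)
    return ranking, ranking[0][0]

# Stand-in for the pre-trained time sequence model's output
# (hypothetical intents and probabilities).
probs = {"credit_check": 0.6, "competitor_research": 0.3, "recruitment": 0.1}
ranking, intent = predict_intent(list(probs), probs.get)
# intent is "credit_check", the candidate state with the highest
# selected probability.
```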
In some embodiments, the preference adjustment instruction is sent after the terminal device receives a selection operation of the user on a plurality of candidate types, where the preference adjustment instruction carries the preference type selected by the user from the plurality of candidate types. The sorting module 2434 is further configured to parse the preference adjustment instruction to obtain the preference type it carries, determine, for each piece of enterprise information in the plurality of pieces of enterprise information, a similarity between a summary of the enterprise information and the preference type through the large language model, and re-sort the sorted plurality of pieces of enterprise information in descending order of similarity.
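A minimal sketch of this similarity-based re-sorting: here a toy word-overlap function stands in for the large language model's similarity score, and the enterprise records are invented for illustration.

```python
def rerank_by_preference(enterprises, preference_type, similarity):
    """Re-sort enterprise records in descending order of the similarity
    between each record's summary and the user's preference type."""
    return sorted(
        enterprises,
        key=lambda e: similarity(e["summary"], preference_type),
        reverse=True,
    )

def word_overlap(summary, pref):
    # Placeholder for an LLM-produced similarity: shared-word count.
    return len(set(summary.split()) & set(pref.split()))

items = [
    {"name": "A", "summary": "steel manufacturing and export"},
    {"name": "B", "summary": "software development services"},
]
ordered = rerank_by_preference(items, "software services", word_overlap)
# "B" now ranks first because its summary is closer to the preference type.
```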
In some embodiments, after the sending module 24310 returns the sorted or re-sorted plurality of pieces of enterprise information to the terminal device, the obtaining module 2432 is further configured to obtain second operation data of the user on the enterprise information query platform, where the second operation data is generated after the sorted or re-sorted plurality of pieces of enterprise information is returned. The enterprise information query apparatus 243 further includes an input module 24311 and a training module 24312. The input module 24311 is configured to input the second operation data into the large language model, so that the large language model determines, based on the second operation data, whether the preference of the user has shifted; the training module 24312 is configured to retrain the time sequence model based on the second operation data when the preference of the user has shifted and the number of shifts to the same type is greater than a number-of-times threshold.
In some embodiments, the second operation data includes operation data of a plurality of observation periods. The input module 24311 is further configured to input the second operation data into the large language model, so that, for the operation data of each observation period, the large language model obtains the type to which the enterprise information whose interaction frequency is greater than an interaction frequency threshold belongs, compares that type with the preference type of the user determined based on the first operation data, or with the type whose preference parameter is greater than a parameter threshold, and generates a preference transfer signal when the comparison result is inconsistent, where the preference transfer signal characterizes that the preference of the user has shifted in the observation period.
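The per-period comparison above can be sketched as follows. This is an illustrative interpretation, not the patent's code: the interaction counts, the frequency threshold, and the data shape (one dict of type-to-count per observation period) are assumptions made for the example.

```python
def detect_shift(period_interactions, current_pref, freq_threshold):
    """For each observation period, find types whose interaction count
    exceeds the frequency threshold and flag the period as a shift when
    any such type differs from the user's current preference type."""
    signals = []
    for period in period_interactions:  # each period: {type: interaction_count}
        hot_types = [t for t, n in period.items() if n > freq_threshold]
        shifted = any(t != current_pref for t in hot_types)
        signals.append(shifted)
    return signals

# Two hypothetical observation periods: the user first interacts mostly
# with "finance" enterprises, then moves to "tech".
periods = [{"finance": 9, "tech": 2}, {"tech": 8}]
signals = detect_shift(periods, current_pref="finance", freq_threshold=5)
# signals is [False, True]: the second period triggers a preference
# transfer signal.
```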
In some embodiments, the mapping module 2437 is further configured to determine a weight difference magnitude matching a business requirement, select a target mapping manner corresponding to the weight difference magnitude from a plurality of mapping manners, where the plurality of mapping manners include linear mapping, logarithmic mapping, and exponential mapping, and map each preference parameter according to the target mapping manner to obtain the weight of the corresponding type. The determining module 2435 is further configured to determine, for each piece of enterprise information in the plurality of pieces of enterprise information, the type to which the enterprise information belongs. The enterprise information query apparatus 243 further includes a weighting module 24313, configured to weight the relevance between the enterprise information and the keyword based on the weight corresponding to the type to obtain a weighted relevance, and to sort the plurality of pieces of enterprise information in descending order of weighted relevance.
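The three mapping manners and the weighted-relevance sort can be sketched as below. The specific functions chosen for each manner (identity, `log1p`, `exp - 1`) and all sample values are illustrative assumptions; the patent names only the manners, not their formulas.

```python
import math

# Hypothetical concrete choices for the three mapping manners:
# linear keeps weight differences small, logarithmic compresses large
# preference parameters, exponential amplifies differences.
MAPPINGS = {
    "linear": lambda p: p,
    "logarithmic": lambda p: math.log1p(p),
    "exponential": lambda p: math.exp(p) - 1,
}

def weights_from_preferences(pref_params, manner):
    """Map each type's preference parameter to a type weight."""
    mapper = MAPPINGS[manner]
    return {t: mapper(p) for t, p in pref_params.items()}

def rank_weighted(items, weights):
    """items: (name, type, keyword relevance). Sort in descending order
    of weighted relevance = relevance * weight of the item's type."""
    return sorted(items, key=lambda it: it[2] * weights[it[1]], reverse=True)

w = weights_from_preferences({"tech": 0.8, "finance": 0.2}, "exponential")
ordered = rank_weighted([("A", "finance", 0.9), ("B", "tech", 0.5)], w)
# The exponential mapping amplifies the "tech" preference enough that
# "B" outranks "A" despite its lower raw relevance.
```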
In some embodiments, the obtaining module 2432 is further configured to obtain a marked query intent among the plurality of query intents respectively corresponding to the plurality of candidate states included in the state set, and to obtain the type of enterprise information with which the user interacts under the marked query intent as well as the interaction parameter for that enterprise information. The constructing module 2436 is further configured to construct a preference function from arguments to a dependent variable, using the marked query intent and the type as the arguments and the interaction parameter as the dependent variable.
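One minimal reading of this construction is a lookup table from the argument pair (marked query intent, type) to the observed interaction parameter; the sample records below are invented for illustration.

```python
def build_preference_function(records):
    """Build a preference function f(intent, type) -> interaction
    parameter from marked (intent, type, parameter) records; pairs
    never observed default to 0.0."""
    table = {(intent, typ): param for intent, typ, param in records}
    return lambda intent, typ: table.get((intent, typ), 0.0)

pref = build_preference_function([
    ("credit_check", "finance", 0.7),
    ("credit_check", "tech", 0.1),
])
# pref("credit_check", "finance") evaluates to the recorded interaction
# parameter 0.7; unseen pairs evaluate to 0.0.
```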
In some embodiments, the determining module 2435 is further configured to perform, for each of the types, the following processing: inputting the query intent and the type into the preference function to obtain a function value output by the preference function, normalizing the function value through a normalization function, and using the normalized function value as the preference parameter of the user for the type under the query intent.
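The patent does not name a specific normalization function; a softmax over the function values of all types under one query intent is one common choice, sketched here with invented values.

```python
import math

def normalize(values):
    """Softmax normalization: map raw preference-function values for all
    types under one query intent to parameters that sum to 1."""
    exps = [math.exp(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical function values for three types under one query intent.
params = normalize([2.0, 1.0, 0.1])
# The normalized parameters preserve the ordering of the raw values
# and sum to 1.
```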
In some embodiments, the first operation data includes a click sequence, input keywords, and dwell times of the user on the enterprise information query platform. The determining module 2435 is further configured to determine historical query intents of the user based on the click sequence, the input keywords, and the dwell times; the predicting module 2433 is further configured to input the historical query intents into the pre-trained time sequence model, so that the pre-trained time sequence model determines the posterior probability, at the current time, of the query intent corresponding to each candidate state included in the state set. The training module 24312 is further configured to perform, before the predicting module 2433 predicts the plurality of candidate states included in the state set through the pre-trained time sequence model, the following processing: constructing an initialized time sequence model, where parameters of the time sequence model include the state set, an initial state distribution, a state transition probability matrix, and an observation probability distribution; and training the initialized time sequence model based on historical observation sequences, namely iteratively computing, for each candidate state, the expected number of times the state occurs and the expected number of transitions between candidate states, and updating the initial state distribution, the state transition probability matrix, and the observation probability distribution accordingly until convergence.
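A time sequence model parameterized by a state set, an initial state distribution, a transition matrix, and an observation distribution is a hidden Markov model, and its posterior over current states given an observation sequence is produced by the forward algorithm. The sketch below shows that prediction step with toy two-state matrices; all values are invented, and the real model's state set and parameters come from training as described above.

```python
def forward_posterior(obs_seq, pi, A, B):
    """HMM forward algorithm: pi is the initial state distribution,
    A[p][s] the transition probability from state p to s, and B[s][o]
    the probability of observing intent o in state s. Returns the
    normalized posterior over states at the current (last) time step."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs_seq[0]] for s in range(n)]
    for o in obs_seq[1:]:
        alpha = [
            sum(alpha[p] * A[p][s] for p in range(n)) * B[s][o]
            for s in range(n)
        ]
    total = sum(alpha)
    return [a / total for a in alpha]

# Toy two-state model; observations 0/1 are two historical query intents.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.8, 0.2], [0.3, 0.7]]
posterior = forward_posterior([0, 0], pi, A, B)
# After observing intent 0 twice, state 0 (which emits intent 0 with
# high probability) dominates the posterior.
```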
In some embodiments, the enterprise information base includes a plurality of sub-bases, wherein each of the sub-bases is used for storing enterprise information of a plurality of enterprises of one type, the query module 2439 is further used for querying the plurality of sub-bases based on the query expression to obtain enterprise information matched with the keywords from the plurality of sub-bases, and merging the enterprise information obtained from the plurality of sub-bases to obtain a plurality of enterprise information matched with the keywords.
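The sub-base lookup and merge can be sketched as below; plain substring matching stands in for evaluating the real query expression, and the sub-base contents are invented.

```python
def query_sub_bases(sub_bases, keyword):
    """Query every per-type sub-base for records matching the keyword
    and merge the partial results into one list."""
    merged = []
    for records in sub_bases.values():
        merged.extend(r for r in records if keyword in r)
    return merged

# Hypothetical sub-bases, one per enterprise type.
bases = {
    "tech": ["Acme Software", "ByteWorks"],
    "finance": ["Acme Capital"],
}
hits = query_sub_bases(bases, "Acme")
# Matches from both sub-bases are merged into the final result set.
```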
It should be noted that the description of the apparatus in the embodiments of the present application is similar to the description of the method embodiments above and has similar beneficial effects; therefore, details are not repeated here. The technical details of the enterprise information query apparatus provided in the embodiments of the present application may be understood from the description of any one of fig. 3, fig. 4, or fig. 5.
Embodiments of the present application provide a computer program product comprising a computer program or computer-executable instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer executable instructions from the computer readable storage medium, and the processor executes the computer executable instructions, so that the computer device executes the enterprise information query method according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, cause the processor to perform a method of querying enterprise information provided by embodiments of the present application, for example, the method of querying enterprise information as illustrated in fig. 3, 4, or 5.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM, or various devices including one or any combination of the above.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202510064282.XA CN119513377B (en) | 2025-01-15 | 2025-01-15 | Enterprise information query method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119513377A CN119513377A (en) | 2025-02-25 |
CN119513377B (en) | 2025-04-04
Family
ID=94666216
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426528A (en) * | 2015-12-15 | 2016-03-23 | 中南大学 | Retrieving and ordering method and system for commodity data |
CN107180078A (en) * | 2017-04-21 | 2017-09-19 | 河海大学 | A kind of method for vertical search based on user profile learning |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7562068B2 (en) * | 2004-06-30 | 2009-07-14 | Microsoft Corporation | System and method for ranking search results based on tracked user preferences |
US20240311559A1 (en) * | 2023-03-15 | 2024-09-19 | AIble Inc. | Enterprise-specific context-aware augmented analytics |
CN117708270A (en) * | 2023-12-11 | 2024-03-15 | 中移动信息技术有限公司 | Enterprise data query method, device, equipment and storage medium |
CN119003891B (en) * | 2024-10-25 | 2025-02-11 | 北森云计算有限公司 | Method, device and equipment for generating employee search recommended content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||