
CN118193174B - Service plug-in calling method based on large language model - Google Patents

Service plug-in calling method based on large language model

Info

Publication number
CN118193174B
CN118193174B (application CN202410571594.5A)
Authority
CN
China
Prior art keywords
language model
large language
module
service
chain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410571594.5A
Other languages
Chinese (zh)
Other versions
CN118193174A (en)
Inventor
李子星
张浩港
刘晨男
陈鑫凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Linji Zhiyun Technology Suzhou Co ltd
Original Assignee
Linji Zhiyun Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Linji Zhiyun Technology Suzhou Co ltd filed Critical Linji Zhiyun Technology Suzhou Co ltd
Priority to CN202410571594.5A priority Critical patent/CN118193174B/en
Publication of CN118193174A publication Critical patent/CN118193174A/en
Application granted granted Critical
Publication of CN118193174B publication Critical patent/CN118193174B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F9/44526Plug-ins; Add-ons

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a service plug-in calling method based on a large language model, which comprises the following steps: (a) deploying middleware between the large language model and the user side, where the middleware first receives the user's input, performs semantic retrieval to obtain context information, forms a semantic vector, and feeds it to the large language model; (b) using the large language model to decompose the semantic vector into the question and the entity parameters entered by the user; (c) having the middleware locate the service to be called according to the decomposed question and pass the entity parameters to that service, after which the large language model assembles the information returned by the service into a complete answer and returns it to the user side. By deploying middleware that connects the large language model with the user side, the large language model can be connected to different data sources and allowed to interact with other environments, achieving automatic adaptation to application scenarios.

Description

Service plug-in calling method based on large language model
Technical Field
The invention belongs to the technical field of service plug-in invocation, and in particular relates to a service plug-in calling method based on a large language model.
Background
With the release of large language models (LLMs) such as ChatGPT, application developers are increasingly inclined to integrate LLMs into their own applications. However, because of the uncertainty and inaccuracy of LLM outputs, it is not yet possible to provide intelligent services by means of an LLM alone. Current providers such as Hugging Face, OpenAI, and Cohere offer base models and API interfaces, but integrating and using them in a product still requires significant effort. Practical artificial-intelligence applications today mainly rely on LoRA (Low-Rank Adaptation) plug-in fine-tuning to train the model so that the large language model fits the actual application scenario; this approach addresses the adaptation problems encountered in most scenarios of AI application development, especially data sharing across scenarios. However, it still faces challenges in standardizing the workflows for model training and invocation, requires dedicated developers to write and debug configuration code when switching between scenarios, and lacks both a unified standardized method and integration and adaptation with third-party libraries.
Existing large language model offerings such as Hugging Face, OpenAI, and Cohere are suited only to training the central "brain"; in practice they lack the "limbs" needed to solve problems in concrete application scenarios. Existing large language models have largely been developed as academic research projects and lack the technical means for flexible practical application. For the actual productization of large models, most existing adaptation schemes are fine-tuning based: a professional engineer must design specifically for each different application scenario, there are as yet no unified standards or specifications for adapting the large-language-model "brain" to real applications, a large amount of bespoke adaptation code is required, and modular means for automatic integration and application adaptation are lacking.
Disclosure of Invention
In view of the above deficiencies, the invention provides a service plug-in calling method based on a large language model.
In order to achieve the above object, the present invention provides a service plug-in calling method based on a large language model, comprising the following steps:
(a) deploying middleware between the large language model and the user side, where the middleware first receives the user's input, performs semantic retrieval to obtain context information, forms a semantic vector, and feeds it to the large language model;
(b) using the large language model to decompose the semantic vector into the question and the entity parameters entered by the user;
(c) having the middleware locate the service to be called according to the decomposed question, pass the entity parameters to that service, assemble the information returned by the service into a complete answer through the large language model, and return the complete answer to the user side.
Preferably, in step (a), the middleware comprises:
an agent module, which uses the large language model to determine the actions the application takes and their order, and returns the resulting information to the user;
a chain module, which combines multiple components into a single, coherent application, or combines multiple chains, or chains with other components, to build more complex chains;
an indexing module, which structures documents so that the large language model can interact with them;
a memory storage module, which passes data within a single session or acquires and updates information across multiple sessions;
a model module, which integrates various models and provides a simplified, unified interface to them;
and a prompt template module, which normalizes and formats the parameter information input by the user and passes it to the large language model to be called, enabling the large model to handle more complex semantic information.
Further, the indexing module comprises a cooperating document loader for combining the large language model with text data, a text splitter for dividing text into multiple small fragments, a vector store for storing the vectors created by embedding, and a retriever for combining documents with the large language model.
Further, the agent module operates as follows: enter the agent chain, select the agent type, provide the agent input, observe the recorded information, perform model reasoning, return the result, and exit the agent chain.
Still further, the agent module services a query by using a set of tools or resources at its disposal, including encyclopedia access, web search, databases, and/or LLMs.
Further, the memory object is passed as a parameter in the chain, allowing the chain to persist data across multiple calls and making the chain a stateful object.
Further, the prompt template module also receives a custom knowledge base, which is split and embedded and then stored in a vector store; the vector store supports semantic retrieval, retrieving text fragments from a long document according to the user input; the prompt template combines the text fragments and the user input into a prompt, which is passed to the large language model; the large language model infers the result, which is parsed and then output.
By deploying middleware that connects the large language model with the user side, the service plug-in calling method based on a large language model can connect the large language model to different data sources and allow it to interact with other environments, thereby achieving automatic adaptation to application scenarios; it thus supports various large language models and the class libraries that depend on them, while providing a standard memory interface for maintaining semantic context state information.
Based on a call-chain design around the language model, the invention provides a standard chain interface and realizes an end-to-end calling interface between application programs.
The invention implements a program agent interface; the agent covers the flow in which the LLM makes action decisions, executes those actions, and checks the results, and it can generate different call chains according to different user inputs.
Drawings
FIG. 1 is a flowchart of the sequence of events and steps followed by the agent module of the present invention;
FIG. 2 is a flowchart of the whole chained-call system in the chain module of the present invention;
FIG. 3 is a schematic diagram of a usage scenario of the large language model of the present invention;
FIG. 4 is a flowchart illustrating the execution of the prompt template module of the present invention;
FIG. 5 is a data flow diagram of an agent application of the present invention;
FIG. 6 is a data flow diagram of a chain application of the present invention.
Detailed Description
In order that the present invention may be better understood, a more particular description of the invention will be rendered by reference to the specific embodiments illustrated in the appended drawings. All other embodiments obtained by those skilled in the art through equivalent changes and modifications based on the embodiments of the present invention shall fall within the scope of the present invention.
The invention relates to a service plug-in calling method based on a large language model, which comprises the following steps:
(a) deploying middleware between the large language model and the user side, where the middleware first receives the user's input, performs semantic retrieval to obtain context information, forms a semantic vector, and feeds it to the large language model;
(b) using the large language model to decompose the semantic vector into the question and the entity parameters entered by the user;
(c) having the middleware locate the service to be called according to the decomposed question, pass the entity parameters to that service, assemble the information returned by the service into a complete answer through the large language model, and return the complete answer to the user side.
In this embodiment, the middleware mainly comprises an agent module, a chain module, an indexing module, a memory storage module, a model module, and a prompt template module, which cooperate with one another. Specifically:
The agent module uses the large language model to determine the actions the application takes and their order, and returns the resulting information to the user. In practical AI applications, particularly those built on large language models, the application needs not only a predetermined large language model and the tool class libraries it depends on, but also the ability to call different plug-ins and services according to the user's input. Through the agent module, longer and more complex user interactions can be realized, processes can run serially or in parallel, and user prompt information can be programmed, shared, stored, and templated. An agent services a query by using a set of tools or resources at its disposal; these tools may include encyclopedia access, web search, databases, LLMs, and so on. The agent dynamically composes a series of steps as it cycles through its available tools to serve the request. FIG. 1 illustrates the sequence of events and the flow of steps followed by the agent module of the present invention; a minimal sketch of this loop is given below.
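For illustration only (the patent publishes no source code), the following Python sketch mirrors the agent loop of FIG. 1: enter the agent chain, let the model choose a tool, observe the result, reason again, and exit with an answer. The tool functions and the llm_decide heuristic are hypothetical stand-ins for real services and a real LLM.

```python
from typing import Callable, Dict

def web_search(query: str) -> str:
    # Hypothetical tool: a real agent would call a search API here.
    return f"search results for: {query}"

def database_lookup(query: str) -> str:
    # Hypothetical tool: a real agent would query a database here.
    return f"database rows matching: {query}"

TOOLS: Dict[str, Callable[[str], str]] = {
    "search": web_search,
    "database": database_lookup,
}

def llm_decide(query: str, observations: list) -> str:
    """Stand-in for the model's action decision: returns the next tool
    name, or 'finish' once the observations look sufficient."""
    return "search" if not observations else "finish"

def run_agent(query: str) -> str:
    observations = []                                # observed record information
    while True:
        action = llm_decide(query, observations)     # model reasoning
        if action == "finish":
            break                                    # exit the agent chain
        observations.append(TOOLS[action](query))    # agent input -> tool call
    return " | ".join(observations)                  # result returned to the user

print(run_agent("What is the capital of France?"))
```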
The chain module combines multiple components into a single, coherent application, or combines multiple chains, or chains with other components, to build more complex chains. A chain structure in the chain module can use the output of one language-model service as the input of another. Using a standalone service may suffice for simple applications, but more complex AI applications must link different language-model services into a series of chained calls; FIG. 2 shows a flowchart of the whole chained-call system. That is, in the present invention, multiple components can be combined through the chain module to create a single, coherent application, for example a chain that receives user input, formats it with a prompt template, and then passes the formatted information to the large language model. More complex chains can be built by combining multiple chains together, or by combining chains with other components. Within a chain, the memory object can be passed as a parameter, allowing the chain to persist data across multiple calls and making the chain a stateful object. The invention defines an interface for a long-chain memory store that allows stored data to be read through a load_memory_variable method and new data to be stored through a save_context method. The step beyond a single language-model call is to make a series of such calls. The present invention accomplishes this with a sequential chain, a chain that executes its links in a predefined order: each step has one input and one output, and the output of one step is the input of the next. A sketch of this memory interface and of a sequential chain follows.
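The sketch below, a plain-Python approximation rather than the patent's actual implementation, shows a memory object exposing the load_memory_variable and save_context methods named above, passed into a sequential chain whose steps run in a fixed order. The normalize and fake_llm steps are hypothetical.

```python
class Memory:
    """Stateful store passed into a chain so data persists across calls."""
    def __init__(self):
        self._variables = {}

    def load_memory_variable(self, key: str):
        return self._variables.get(key)

    def save_context(self, key: str, value: str) -> None:
        self._variables[key] = value

def normalize(text: str) -> str:
    # First link: format/normalize the user input.
    return text.strip().lower()

def fake_llm(prompt: str) -> str:
    # Second link: stand-in for the large language model call.
    return f"answer({prompt})"

class SequentialChain:
    """Executes its links in a predefined order; each step has one input
    and one output, and each output feeds the next step."""
    def __init__(self, steps, memory: Memory):
        self.steps = steps
        self.memory = memory

    def run(self, user_input: str) -> str:
        value = user_input
        for step in self.steps:
            value = step(value)
        self.memory.save_context("last_output", value)  # chain is stateful
        return value

chain = SequentialChain([normalize, fake_llm], Memory())
print(chain.run("  Hello World  "))
print(chain.memory.load_memory_variable("last_output"))
```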
The indexing module structures documents so that the large language model can interact with them. This module contains utility functions for processing documents and different types of indexes. In the chains of the present invention, an index is used in the "retrieval" step, which accepts a user query and returns the most relevant documents. The two notions are kept distinct because an index can be used for purposes other than retrieval, and retrieval can use logic other than an index to find relevant documents.
The indexing module mainly comprises a document loader, a text splitter, a vector store, and a retriever.
The document loader combines the large language model with the user's own text data by first loading the data into a Document object. The document loaders are largely built on the unstructured package, a Python package that can convert various types of files into text; to use one, only the corresponding loader tool needs to be imported. A minimal loader sketch is given below.
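A minimal sketch of a document loader; the Document shape is an illustrative assumption mirroring the description above, not the exact class used.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Loaded text plus metadata, ready to be split and embedded."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def load_text_file(path: str) -> Document:
    # Read a plain-text file into a Document object.
    with open(path, encoding="utf-8") as f:
        return Document(page_content=f.read(), metadata={"source": path})

doc = Document(page_content="example text", metadata={"source": "inline"})
print(doc.metadata["source"], "->", doc.page_content)
```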
The text splitter handles long text: because the model limits the length of its input, long text must be divided into several small fragments. The simplest conventional method splits the text by character length, but this causes problems; if the text is a piece of code, for example, a function may be cut in two, leaving meaningless fragments. The overall principle is therefore to keep semantically related pieces of text together, where what counts as "semantically related" depends on the text type. At a high level, the text splitter works as follows: 1. split the text into small, semantically meaningful blocks (typically sentences); 2. merge the small blocks into a larger block until a certain size is reached (as measured by a length function); 3. once that size is reached, treat the block as its own text chunk and start a new chunk that shares some overlap with the previous one (to preserve context between chunks). In the present invention, the most basic text splitter is CharacterTextSplitter, which splits on a specified separator (default "\n\n") while respecting the maximum fragment length. The vector store is one of the important components for constructing an index: essentially a special type of database whose role is to store the vectors created by embedding and to provide similarity queries. The retriever interface is a generic interface that makes it easy to combine documents with language models; the interface defined by the present invention exposes a get_relevant_documents method that accepts a query (a string) and returns a list of documents. A simplified sketch of the splitting behaviour follows.
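The sketch below re-implements the described behaviour in plain Python: split on a separator, merge pieces up to a maximum chunk size, and carry a small overlap between chunks. It approximates, rather than reproduces, CharacterTextSplitter.

```python
def split_text(text: str, separator: str = "\n\n",
               chunk_size: int = 200, chunk_overlap: int = 20) -> list[str]:
    pieces = [p for p in text.split(separator) if p]
    chunks, current = [], ""
    for piece in pieces:
        if current and len(current) + len(piece) > chunk_size:
            chunks.append(current)
            # Keep the tail of the previous chunk as overlap for context.
            current = current[-chunk_overlap:] if chunk_overlap else ""
        current = (current + separator + piece) if current else piece
    if current:
        chunks.append(current)
    return chunks

doc = ("First paragraph about chains.\n\n"
       "Second paragraph about agents.\n\n"
       "Third paragraph about memory.")
for chunk in split_text(doc, chunk_size=60, chunk_overlap=10):
    print(repr(chunk))
```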
The memory storage module passes data within a single session or acquires and updates information across multiple sessions. Large language models are typically stateless, meaning they process each incoming query independently; like traditional chat models, they do not retain the content of the previous interaction. ChatGPT, for example, can hold a normal conversation only because a wrapping layer passes the conversation history back to the model. In AI chat applications it is important to remember previous interactions, at both the short-term and the long-term level; to solve this problem, the present invention provides a memory component (i.e., the memory storage module). There are two types of memory: short-term and long-term. Short-term memory generally refers to passing data within a single session, while long-term memory refers to acquiring and updating information across multiple sessions. The invention also provides conventional utilities for managing and manipulating previous chat messages; these tools are designed to be modular and useful however they are used, and methods are provided for incorporating these components into chains. A sketch of short-term conversational memory follows.
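A sketch of short-term memory under the assumptions above: a buffer replays the conversation history into each new prompt, as a ChatGPT-style wrapping layer does. The prompt formatting is illustrative.

```python
class ConversationBuffer:
    """Short-term memory: keeps the turns of a single session."""
    def __init__(self):
        self.turns: list[tuple[str, str]] = []   # (role, text)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def as_prompt(self, new_input: str) -> str:
        # Replay the history so the stateless model sees prior context.
        history = "\n".join(f"{role}: {text}" for role, text in self.turns)
        prefix = history + "\n" if history else ""
        return f"{prefix}human: {new_input}"

memory = ConversationBuffer()
memory.add("human", "My name is Alice.")
memory.add("ai", "Nice to meet you, Alice.")
print(memory.as_prompt("What is my name?"))  # history travels with the query
```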
The model module integrates various models and provides a simplified, unified interface to them. The models supported by the invention fall into three types; their usage scenarios differ, as do their inputs and outputs, and the developer selects the appropriate type (namely large language model, chat model, or text embedding model) according to project requirements.
LLMs: a large language model (LLM) takes a text string as input and returns a text string as output. Most scenarios use this type of model, as shown in FIG. 3.
Chat model: the chat model is a variant of the language model. While chat models use a language model internally, the interface they expose is slightly different: rather than a "text in, text out" API, they expose an interface whose inputs and outputs are "chat messages". Chat messages come in several types, and appropriate values must be supplied by convention:
AIMessage: holds the response of the large language model so that it can be passed back to the model with the next request; HumanMessage: the prompt sent to the large language model, such as the user input "implement a quick sort in C"; SystemMessage: sets the behaviour and goal of the large language model, where the user may give specific instructions such as "act as a code expert" or "return JSON format"; ChatMessage: can accept a value of any form, although most of the time the three types above should be used as the standard. A plain-class sketch of these types follows.
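A plain-class sketch of the four message types; the actual chat-model interface may differ, and only the role/content pairing is illustrated here.

```python
from dataclasses import dataclass

@dataclass
class BaseMessage:
    content: str
    role: str = "chat"      # ChatMessage: free-form role

class HumanMessage(BaseMessage):
    def __init__(self, content: str):
        super().__init__(content, "human")

class AIMessage(BaseMessage):
    def __init__(self, content: str):
        super().__init__(content, "ai")

class SystemMessage(BaseMessage):
    def __init__(self, content: str):
        super().__init__(content, "system")

messages = [
    SystemMessage("You are a code expert; return JSON format."),
    HumanMessage("Implement a quick sort in C."),
]
for m in messages:
    print(m.role, "->", m.content)
```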
Text embedding model: this model takes text as input and returns a list of floating-point numbers. The present invention defines an Embedding class for interacting with embeddings, adapts multiple embedding providers (such as OpenAI, Cohere, and Hugging Face), and provides a standard interface. Embedding creates a vector representation of the text, so the text can be treated as a point in vector space, and operations such as semantic search, which finds the most similar text fragments in vector space, become possible. On the Embedding class the invention defines two methods, embed_documents and embed_query. The biggest difference between them is the interface: one takes multiple documents, the other a single document. A further reason for keeping them as two separate methods is that some embedding providers embed the documents to be searched differently from the query itself. A sketch of this interface follows.
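A sketch of the Embedding interface with the two methods named above; the hash-based "embedding" is a deterministic stand-in so the example runs without any model provider.

```python
import hashlib

class Embedding:
    def _embed(self, text: str) -> list[float]:
        # Deterministic toy vector; a real provider returns model embeddings.
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        return [b / 255.0 for b in digest[:8]]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        return [self._embed(t) for t in texts]

    def embed_query(self, text: str) -> list[float]:
        # Kept separate because some providers embed the documents being
        # searched differently from the query itself.
        return self._embed(text)

emb = Embedding()
print(emb.embed_query("service plug-in"))
print(len(emb.embed_documents(["doc one", "doc two"])))
```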
The prompt template module normalizes and formats the parameter information input by the user and passes it to the large language model to be called, enabling the large model to handle more complex semantic information. A large language model takes a list of chat messages as input (this list is commonly referred to as a prompt). These chat messages differ from a raw string (the string a user would pass to an LLM) in that each message is associated with a role.
Although the large language model accepts natural language, the prompt usually requires many optimizing adjustments before the model produces the output the user wants; this adjustment process is known as prompt engineering. The module normalizes the prompt information (parameters) input by the user, formats it through a prompt template, and passes it to the large language model to be called, so that the large model can handle more complex semantic information. The execution flow of the invention is shown in FIG. 4: the application receives two inputs, a custom knowledge base and a user input. The custom knowledge base is split and embedded and then stored in a vector store; the vector store supports semantic retrieval, retrieving text fragments from a long document according to the user input; the prompt template combines the text fragments and the user input into a prompt, which is passed to the large language model; the large language model infers the result, which is parsed and then output. An end-to-end sketch of this flow follows.
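An end-to-end sketch of the FIG. 4 flow under simplifying assumptions: a bag-of-words "embedding" and Jaccard similarity stand in for real embeddings and vector search, and the template wording is illustrative.

```python
def embed(text: str) -> set:
    return set(text.lower().split())          # toy bag-of-words "vector"

def similarity(a: set, b: set) -> float:
    return len(a & b) / (len(a | b) or 1)     # Jaccard similarity stand-in

knowledge_base = [                            # the custom knowledge base
    "The agent module decides which tool to call.",
    "The chain module links services into a call sequence.",
]
vector_store = [(embed(t), t) for t in knowledge_base]   # split + embed

def retrieve(query: str) -> str:
    # Semantic retrieval: return the most similar fragment.
    q = embed(query)
    return max(vector_store, key=lambda pair: similarity(q, pair[0]))[1]

PROMPT_TEMPLATE = "Context: {context}\nQuestion: {question}\nAnswer:"

def answer(user_input: str) -> str:
    prompt = PROMPT_TEMPLATE.format(context=retrieve(user_input),
                                    question=user_input)
    return f"LLM({prompt!r})"                 # stand-in for the model call

print(answer("What does the chain module do?"))
```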
Because the knowledge of large language models is limited, many applications require an additional knowledge base, which may include not only txt, word, pdf, and other document types, but also databases, APIs, web pages, and even audio and video files, depending on whether a corresponding document loader component exists.
The present invention already supports a large number of document loader components, which can be divided into three categories by function:
Conversion loaders: these convert files of a particular format, such as csv, pdf, and markdown, into text; the conversion is implemented on the basis of the unstructured package;
Public dataset loaders: for publicly available datasets that require no authorization;
Proprietary dataset loaders: for datasets that require authorization.
The agent component and the chain component serve similar purposes: both schedule business processes, deciding which actions to take and in what order. The difference is that the execution flow of a chain is deterministic, whereas an agent relies on the large language model to decide the flow direction. The action controlled by an agent can be any tool that supports input and output, such as a search engine, a database, a model, a chain, or even another agent. The data flows of an agent application and a chain application are shown in FIG. 5 and FIG. 6, respectively; the contrast is sketched below.
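The sketch contrasts the two scheduling styles, with a keyword heuristic standing in for the LLM's routing decision; both the steps and the tools are hypothetical.

```python
def fixed_chain(user_input: str) -> str:
    # Chain: a deterministic, predefined step order.
    steps = [str.strip, str.lower, lambda s: f"processed: {s}"]
    value = user_input
    for step in steps:
        value = step(value)
    return value

def agent_route(user_input: str) -> str:
    # Agent: the "model" picks the tool based on the input at run time.
    tool = "database" if "order" in user_input else "search"
    return f"{tool}({user_input})"

print(fixed_chain("  Hello AGENTS  "))
print(agent_route("find my order status"))
```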
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit its scope; other and further embodiments may be devised without departing from the basic scope of the invention, which is determined by the claims that follow.

Claims (6)

1. A service plug-in calling method based on a large language model is characterized by comprising the following steps:
(a) deploying middleware between a large language model and a user side, where the middleware first receives the user's input, performs semantic retrieval to obtain context information, forms a semantic vector, and feeds it to the large language model;
(b) using the large language model to decompose the semantic vector into the question and the entity parameters entered by the user;
(c) having the middleware locate the service to be called according to the decomposed question, pass the entity parameters to that service, assemble the information returned by the service into a complete answer through the large language model, and return the complete answer to the user side;
in step (a), the middleware includes:
an agent module, which uses the large language model to determine the actions the application takes and their order, and returns the resulting information to the user;
a chain module, which combines multiple components into a single, coherent application, or combines multiple chains, or chains with other components, to build more complex chains;
an indexing module, which structures documents so that the large language model can interact with them;
a memory storage module, which passes data within a single session or acquires and updates information across multiple sessions;
a model module, which integrates various models and provides a simplified, unified interface to them;
and a prompt template module, which normalizes and formats the parameter information input by the user and passes it to the large language model to be called, enabling the large model to handle more complex semantic information.
2. The service plug-in calling method based on a large language model according to claim 1, wherein: the indexing module comprises a document loader for combining the large language model with text data, a text splitter for dividing text into multiple small fragments, a vector store for storing the vectors created by embedding, and a retriever for combining documents with the large language model.
3. The service plug-in calling method based on a large language model according to claim 1, wherein the agent module operates as follows: enter the agent chain, select the agent type, provide the agent input, observe the recorded information, perform model reasoning, return the result, and exit the agent chain.
4. The service plug-in calling method based on a large language model according to claim 3, wherein: the agent module services a query by using a set of tools or resources at its disposal, including encyclopedia access, web search, databases, and/or LLMs.
5. The service plug-in calling method based on a large language model according to claim 1, wherein: the chain passes the memory object as a parameter, allowing the chain to persist data across multiple calls and making the chain a stateful object.
6. The service plug-in calling method based on a large language model according to claim 1, wherein: the prompt template module also receives a custom knowledge base, which is split and embedded and then stored in a vector store; the vector store supports semantic retrieval, retrieving text fragments from a long document according to the user input; the prompt template combines the text fragments and the user input into a prompt, which is passed to the large language model; the large language model infers the result, which is parsed and then output.
CN202410571594.5A 2024-05-10 2024-05-10 Service plug-in calling method based on large language model Active CN118193174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410571594.5A CN118193174B (en) 2024-05-10 2024-05-10 Service plug-in calling method based on large language model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410571594.5A CN118193174B (en) 2024-05-10 2024-05-10 Service plug-in calling method based on large language model

Publications (2)

Publication Number Publication Date
CN118193174A (en) 2024-06-14
CN118193174B (en) 2024-08-02

Family

ID=91400096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410571594.5A Active CN118193174B (en) 2024-05-10 2024-05-10 Service plug-in calling method based on large language model

Country Status (1)

Country Link
CN (1) CN118193174B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119476260B (en) * 2024-08-12 2025-11-04 西安电子科技大学 Large Language Model Hint Methods that Hybridize Natural Language and Control Text

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240062016A1 (en) * 2020-04-17 2024-02-22 Auditoria.AI, Inc. Systems and Methods for Textual Classification Using Natural Language Understanding Machine Learning Models for Automating Business Processes
US11954102B1 (en) * 2023-07-31 2024-04-09 Intuit Inc. Structured query language query execution using natural language and related techniques
CN117251553B (en) * 2023-11-15 2024-02-27 知学云(北京)科技股份有限公司 Intelligent learning interaction method based on custom plug-in and large language model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116795968A (en) * 2023-07-05 2023-09-22 珠海市卓轩科技有限公司 A knowledge expansion and QA system based on Chat LLM technology
CN117112065A (en) * 2023-08-30 2023-11-24 北京百度网讯科技有限公司 Large model plug-in calling method, device, equipment and medium

Also Published As

Publication number Publication date
CN118193174A (en) 2024-06-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant