
CN119323254A - Unified argumentation mining method and system based on instruction learning - Google Patents

Unified argumentation mining method and system based on instruction learning

Info

Publication number
CN119323254A
CN119323254A (application number CN202410564145.8A)
Authority
CN
China
Prior art keywords
arguments
unified
types
uni
mining method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410564145.8A
Other languages
Chinese (zh)
Inventor
张伟男
肖瑞宇
刘挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN202410564145.8A
Publication of CN119323254A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/02 - Knowledge representation; Symbolic representation
    • G06N5/022 - Knowledge engineering; Knowledge acquisition
    • G06N5/025 - Extracting rules from data

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Machine Translation (AREA)

Abstract

The invention belongs to the technical field of computational argumentation and argumentation mining, and in particular relates to a unified argumentation mining method and system based on instruction learning. Step 1: uniformly model the three types of AM as instruction learning tasks, i.e., construct the Uni-AM framework. Step 2: train an LLM-based generative model using the Uni-AM framework constructed in Step 1. Step 3: use the LLM-based generative model of Step 2 to process multiple types of AM simultaneously. The invention is used to solve the problem that the prior art cannot handle all three types of argumentation mining and their respective subtasks.

Description

Unified argumentation mining method and system based on instruction learning
Technical Field
The invention belongs to the technical field of computational argumentation and argumentation mining, and in particular relates to a unified argumentation mining method and system based on instruction learning.
Background
Argumentation Mining (AM) aims to analyze the logical structure of argumentative text. According to the logical structure of the argumentation, AM can be divided into three types: tree AM, generic AM, and argument-pair AM. Existing approaches typically handle the three types of AM independently, ignoring the fact that in real-life scenarios the three types often mix together and share the same context and background. This creates difficulties when applying existing methods to real-world scenarios and also causes potential performance loss. In the present invention, a framework that unifies the three types of AM into instruction learning tasks, called Uni-AM, is presented. To the inventors' knowledge, Uni-AM is the first attempt to effectively address all three types of AM and their three subtasks in a unified manner. Uni-AM explicitly exploits the information gain between different AM types, and experimental results on three widely used datasets show the effectiveness of Uni-AM on all AM types and subtasks, with high scalability. Furthermore, it is found that joint learning of different AM types significantly improves performance, which further verifies the necessity of unified modeling.
The analysis of argumentative text provides valuable insight across a broad range of fields, from predicting financial market trends to public relations, showing the importance of automated argument analysis. Argumentation mining is a text mining and understanding task that extracts and analyzes the logical structure of argumentative text.
Given an argumentative text, such as a speech advocating a certain viewpoint or a post expressing a particular opinion, argumentation mining (AM) first identifies, extracts, and classifies all argument components (ACs), and then organizes the structure of the text and classifies the argumentative relations (ARs) between ACs. An AC is a claim or argument in the text (e.g., "euthanasia should be legal"); an AR represents a relation (e.g., support, attack) between ACs.
According to the logical structure of the argumentative text, AM can be classified into three categories: tree AM, generic AM, and argument-pair AM. The core difference between the types of AM is the structure of the argumentative relations in the text.
For tree AM, the argument components form a tree structure; in generic AM, they form a directed acyclic graph (DAG); and in argument-pair AM, they form a bipartite graph.
To explain the distinction and connection between the different types of AM more intuitively, take a social media scenario as an example.
On social media, a user may publish a post containing various claims and arguments; analyzing such a post matches the characteristics of generic AM. The comments of other users under the post naturally form a tree, which can be treated as tree AM. If two users exchange views and debate with each other on social media, then analyzing their interaction is argument-pair AM. Fig. 1 shows an example of the three types of AM and illustrates the argumentative relations corresponding to these examples; a minimal illustrative sketch of the three relation structures follows below.
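To make the structural distinction concrete, the following is a minimal illustrative sketch in Python (not part of the original disclosure): the argument components, relations, and debate sides are invented examples, and the edge lists simply encode the tree, DAG, and bipartite structures described above.

```python
# Illustrative only: hypothetical argument components (ACs) and argumentative
# relations (ARs) encoding the three structures described above.

# Tree AM: every AC except the root relates to exactly one parent (a comment thread).
tree_ars = [("AC2", "AC1"), ("AC3", "AC1"), ("AC4", "AC2")]

# Generic AM: an AC may relate to several others, forming a directed acyclic graph.
dag_ars = [("AC2", "AC1"), ("AC3", "AC1"), ("AC3", "AC2")]

# Argument-pair AM: relations only cross between the two sides of a debate (bipartite).
side_a = {"AC1", "AC2"}
side_b = {"AC3", "AC4"}
pair_ars = [("AC3", "AC1"), ("AC4", "AC2")]

def is_bipartite_across(ars, left, right):
    """Check that every relation connects one side of the debate to the other."""
    return all((src in left and dst in right) or (src in right and dst in left)
               for src, dst in ars)

if __name__ == "__main__":
    print(is_bipartite_across(pair_ars, side_a, side_b))  # True for this toy example
    print(is_bipartite_across(dag_ars, side_a, side_b))   # False: AC2 -> AC1 stays on one side
```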
Previous work has made many notable attempts. In these works, it is widely recognized that AM has three basic subtasks: Argument Component Type Classification (ACTC) classifies extracted argument components into categories (e.g., value, policy); Argumentative Relation Identification (ARI) determines whether two argument components form an argumentative relation; and Argumentative Relation Type Classification (ARTC) classifies all identified relations between argument components (e.g., reason, evidence).
However, current methods are generally task-specific and difficult to transfer to other types of AM. As the above examples show, in real-world scenarios such as social media discussions and televised debates, the different types of AM and the different AM subtasks are not isolated. They tend to be mixed together and share the same context and background knowledge. This close association calls for the ability to handle multiple types of AM simultaneously, and also indicates a potential information gain between different AM types and subtasks. However, a practical framework for unified modeling of the three AM types is still lacking. In addition, most current efforts fail to fully utilize the powerful text modeling capability and rich world knowledge of LLMs. These methods tend to use the hidden states of the model output to build embeddings of argument components, which is both ineffective and hard to train with current large language models.
Disclosure of Invention
The invention provides a unified argumentation mining method and system based on instruction learning, which are used to solve the problem that the prior art cannot handle all three types of argumentation mining and their respective subtasks.
The invention is realized by the following technical scheme:
A unified argumentation mining method based on instruction learning, the unified argumentation mining method comprising the following steps:
Step 1: uniformly model the three types of AM as instruction learning tasks, i.e., construct the Uni-AM framework;
Step 2: train an LLM-based generative model using the Uni-AM framework constructed in Step 1;
Step 3: use the LLM-based generative model from Step 2 to process multiple types of AM simultaneously.
Further, in Step 1, the three types of AM are tree AM, generic AM, and argument-pair AM, and an instruction shared across tasks is constructed for the three types of AM to guide the model in understanding the argumentative text.
Further, an input query is constructed to determine the category of each AC, completing the ACTC subtask.
Further, an input query is constructed to determine the type of relation between any two argument components AC_i and AC_j, implementing the ARI and ARTC subtasks.
Further, for the ACTC task, a query is constructed for each AC in the form: Instruction: read the following paragraph and answer the question. [argument text]; Input: [AC] is an argument component of which type? [list of AC types].
Further, for the ARI and ARTC tasks, every combination of two ACs is considered, and a single query is used to solve ARI and ARTC together.
Further, for each AC pair, the query is constructed in the form: Instruction: read the following article and answer the question. [argument text]; Input: what is the relation between [AC1] and [AC2]? [list of AR types]. If the model outputs "no relation", the two ACs are considered unrelated; otherwise, the output is taken as the ARTC result.
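As a concrete illustration of the two query forms described above, the following Python sketch shows how such queries might be assembled; the template wording, placeholder names, and label lists are assumptions for illustration only, not the verbatim prompts of the invention.

```python
# Hedged sketch of Uni-AM style query construction. The template wording and
# label sets below are illustrative assumptions, not the exact prompts.

AC_TYPES = ["MajorClaim", "Claim", "Premise"]        # example AC label set
AR_TYPES = ["Support", "Attack", "No relation"]      # example AR label set

def build_actc_query(argument_text: str, ac: str) -> str:
    """One query per argument component: ask the model for its type."""
    instruction = ("Read the following paragraph and answer the question.\n"
                   f"{argument_text}")
    query = f"Input: [{ac}] is an argument component of which type? {AC_TYPES}"
    return instruction + "\n" + query

def build_relation_query(argument_text: str, ac_i: str, ac_j: str) -> str:
    """One query per AC pair: ARI and ARTC are answered together."""
    instruction = ("Read the following article and answer the question.\n"
                   f"{argument_text}")
    query = (f"Input: what is the relation between [{ac_i}] and [{ac_j}]? "
             f"{AR_TYPES}")
    return instruction + "\n" + query

if __name__ == "__main__":
    text = "Euthanasia should be legal. It relieves unbearable suffering."
    print(build_actc_query(text, "Euthanasia should be legal"))
    print(build_relation_query(text,
                               "It relieves unbearable suffering",
                               "Euthanasia should be legal"))
```

In this sketch the same shared instruction prefix (the argumentative text) is reused across subtasks, mirroring the task-shared instruction described above.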
A unified argumentation mining system based on instruction learning, using the unified argumentation mining method based on instruction learning described above, the unified argumentation mining system comprising:
a Uni-AM framework construction module, used to uniformly model the three types of AM as instruction learning tasks; and
a training module, used to train an LLM-based generative model using the Uni-AM framework constructed by the Uni-AM framework construction module;
wherein the LLM-based generative model is used to process multiple types of AM simultaneously.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a method as described above when executing the computer program.
A computer readable storage medium having stored therein a computer program which when executed by a processor implements the method described above.
The beneficial effects of the invention are as follows:
The present invention combines the ARI and ARTC queries, greatly improving computational performance.
The invention can process all three types of argumentation mining and their respective subtasks simultaneously.
Drawings
Fig. 1 is a schematic diagram of the algorithm principle of the present invention.
FIG. 2 shows the effect of Uni-AM on the PE dataset.
FIG. 3 shows the effect of Uni-AM on the CDCP dataset.
FIG. 4 shows the effect of Uni-AM on the RR dataset.
Fig. 5 is a flow chart of the method of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The following description of the embodiments of the present application, taken in conjunction with the accompanying drawings, will clearly and fully illustrate the technical aspects of the embodiments of the present application, and it will be apparent that the embodiments described are only some, but not all, embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present application is not limited to the specific embodiments disclosed below.
Example 1
As shown in Fig. 1, a unified argumentation mining method based on instruction learning comprises the following steps:
Step 1: uniformly model the three types of AM as instruction learning tasks, i.e., construct the Uni-AM framework;
Step 2: train an LLM-based generative model using the Uni-AM framework constructed in Step 1, wherein the argumentative text is treated as the instruction and the different types of AM subtasks are modeled as inputs;
Step 3: use the LLM-based generative model from Step 2 to process multiple types of AM simultaneously.
Further, in Step 1, the three types of AM are tree AM, generic AM, and argument-pair AM, and an instruction shared across tasks is constructed for the three types of AM to guide the model in understanding the argumentative text.
Further, an input query is constructed to determine the category of each AC, completing the ACTC subtask.
Further, an input query is constructed to determine the type of relation between any two argument components AC_i and AC_j, implementing the ARI and ARTC subtasks.
Further, for the ACTC task, a query is constructed for each AC in the form: Instruction: read the following paragraph and answer the question. [argument text]; Input: [AC] is an argument component of which type? [list of AC types].
Further, for the ARI and ARTC tasks, every combination of two ACs is considered, and a single query is used to solve ARI and ARTC together.
Further, for each AC pair, the query is constructed in the form: Instruction: read the following article and answer the question. [argument text]; Input: what is the relation between [AC1] and [AC2]? [list of AR types]. If the model outputs "no relation", the two ACs are considered unrelated; otherwise, the output is taken as the ARTC result.
Modeling ARI and ARTC as the same question-answer pair has two motivations. First, it reduces the computation time of the model: combining ARI and ARTC into a single query saves computational overhead for long texts with many ACs, because the number of pairwise queries grows with the square of the number of ACs. Second, ARI and ARTC are both pairwise relation learning processes between argument components, and their results are naturally strongly correlated (e.g., when the ARI result between two components is None, the ARTC result should also be None). Therefore, based on the experimental results, combining the two queries greatly improves computational performance; a minimal sketch of this combined decoding is given below.
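The following sketch illustrates the combined decoding described above: all AC pairs are enumerated (quadratic in the number of ACs), a single relation query is issued per pair, and a "no relation" answer is read as a negative ARI result while any other answer is read as the ARTC label. The `ask_model` function is a hypothetical stand-in for the trained LLM.

```python
from itertools import permutations
from typing import Callable, Dict, List, Tuple

def mine_relations(acs: List[str],
                   argument_text: str,
                   ask_model: Callable[[str], str]) -> Dict[Tuple[str, str], str]:
    """Run the combined ARI+ARTC query over every ordered AC pair.

    ask_model is assumed to return one label from the AR type list,
    e.g. 'Support', 'Attack', or 'No relation'.
    """
    relations = {}
    # The number of queries grows with the square of the number of ACs,
    # which is why ARI and ARTC are merged into a single query per pair.
    for ac_i, ac_j in permutations(acs, 2):
        query = ("Read the following article and answer the question.\n"
                 f"{argument_text}\n"
                 f"Input: what is the relation between [{ac_i}] and [{ac_j}]?")
        answer = ask_model(query).strip()
        if answer.lower() == "no relation":
            continue                      # ARI: the pair is unrelated
        relations[(ac_i, ac_j)] = answer  # otherwise the answer is the ARTC label
    return relations

if __name__ == "__main__":
    # Toy stand-in for the LLM: every pair is unrelated except one hard-coded pair.
    def fake_model(query: str) -> str:
        return "Support" if "[AC2] and [AC1]" in query else "No relation"

    print(mine_relations(["AC1", "AC2", "AC3"], "toy text", fake_model))
```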
Specifically, experiments were performed with Uni-AM on the PE, CDCP, and RR datasets to analyze its impact on tree AM, generic AM, and argument-pair AM.
From Fig. 2, it is observed that Uni-AM achieves the best results on all subtasks. For the ACTC subtask, Uni-AM is 1.2 percentage points higher than the current best result in macro F1 score. For the ARI subtask, Uni-AM is 1.2 percentage points better than the SOTA in macro F1, with better results on both the Rel and No-Rel classes. For the ARTC subtask, Uni-AM performs 1.7 percentage points better than the SOTA. These results demonstrate the effectiveness of Uni-AM, since the joint training approach yields gains across the different AM subtasks.
From Fig. 3, it is observed that Uni-AM outperforms the SOTA results on all subtasks. For the ACTC subtask, the macro F1 score is improved by 2.5% over the current SOTA. For the ARI subtask, Uni-AM is 5.0% better than T5-Trans in macro F1 score, with better results on both the Rel and No-Rel classes. For the ARTC subtask, Uni-AM is the first work to report results on this task for the CDCP dataset. This is because the unified modeling scheme is highly scalable across all AM types and all AM subtasks.
From Fig. 4, it is observed that for the argument-pair task, the experimental results still show an improvement of 0.8 percentage points in macro F1 over the current state-of-the-art MLMC, although only the ARI test was performed due to data annotation limitations. This is because the instruction learning method and the joint training framework better understand the argumentative context, and therefore perform well on ARI tasks that require deep semantic understanding.
Overall, the experimental results show that some state-of-the-art models cannot handle the ARTC subtask and some models cannot handle tree AM and generic AM simultaneously. More importantly, no existing model other than the one presented in this invention is capable of handling all three types of AM and their respective subtasks simultaneously. This further demonstrates the scalability of the proposed unified framework.
Example 2
This embodiment provides a unified argumentation mining system based on instruction learning, which uses the unified argumentation mining method based on instruction learning described in Example 1 and comprises a Uni-AM framework construction module and a training module;
the Uni-AM framework construction module is used to uniformly model the three types of AM as instruction learning tasks;
the training module trains an LLM-based generative model using the Uni-AM framework constructed by the framework construction module, wherein the argumentative text is treated as the instruction and the different types of AM subtasks are modeled as inputs (a minimal training sketch is given after this list);
the LLM-based generative model is used to process multiple types of AM simultaneously.
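The training module could, for example, be realized by supervised fine-tuning of a causal language model on the (instruction, input, answer) triples produced by the framework construction module. The sketch below illustrates this under the assumption of a Hugging Face-style causal LM; the checkpoint name, hyperparameters, and data format are placeholders and are not specified by the patent.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "placeholder/base-llm"  # placeholder checkpoint name, not specified by the patent

def format_example(example: dict) -> str:
    # The argumentative text serves as the instruction; the subtask query is the input.
    return f"{example['instruction']}\n{example['input']}\nAnswer: {example['answer']}"

def finetune(examples, epochs: int = 1, lr: float = 2e-5):
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # many causal LMs ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    texts = [format_example(e) for e in examples]
    loader = DataLoader(texts, batch_size=2, shuffle=True)

    model.train()
    for _ in range(epochs):
        for batch in loader:
            enc = tokenizer(list(batch), return_tensors="pt",
                            padding=True, truncation=True, max_length=1024)
            # Standard causal-LM objective; for brevity, padding positions are
            # not masked out of the loss here.
            out = model(**enc, labels=enc["input_ids"])
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model, tokenizer
```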
Further, the Uni-AM framework construction module is specifically configured to construct an instruction shared across tasks for the three types of AM, namely tree AM, generic AM, and argument-pair AM, so as to guide the model in understanding the argumentative text.
Further, an input query is constructed to determine the category of each AC, completing the ACTC subtask.
Further, an input query is constructed to determine the type of relation between any two argument components AC_i and AC_j, implementing the ARI and ARTC subtasks.
Further, for the ACTC task, a query is constructed for each AC in the form: Instruction: read the following paragraph and answer the question. [argument text]; Input: [AC] is an argument component of which type? [list of AC types].
Further, for the ARI and ARTC tasks, every combination of two ACs is considered, and a single query is used to solve ARI and ARTC together.
Further, for each AC pair, the query is constructed in the form: Instruction: read the following article and answer the question. [argument text]; Input: what is the relation between [AC1] and [AC2]? [list of AR types]. If the model outputs "no relation", the two ACs are considered unrelated; otherwise, the output is taken as the ARTC result.
Modeling ARI and ARTC as the same question-answer pair has two motivations. First, it reduces the computation time of the model: combining ARI and ARTC into a single query saves computational overhead for long texts with many ACs, because the number of pairwise queries grows with the square of the number of ACs. Second, ARI and ARTC are both pairwise relation learning processes between argument components, and their results are naturally strongly correlated (e.g., when the ARI result between two components is None, the ARTC result should also be None). Therefore, based on the experimental results, combining the two queries greatly improves computational performance.
In summary, this embodiment of the invention uniformly models the three types of AM as instruction learning tasks and trains an LLM-based generative model to learn all of these instruction inputs jointly, so that all AM subtasks can benefit from each other and multiple types of AM can be processed simultaneously. Experimental results show that combining the two queries in this method greatly improves computational performance.
Example 3
The embodiment of the invention provides an electronic device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the memory is used for storing the software program and a module, and the processor executes various functional applications and data processing by running the software program and the module stored in the memory. The memory and the processor are connected by a bus. In particular, the processor implements any of the steps of the above-described embodiment by running the above-described computer program stored in the memory.
It should be appreciated that, in embodiments of the present invention, the processor may be a Central Processing Unit (CPU), and may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may include read-only memory, flash memory, and random access memory, and provides instructions and data to the processor. Some or all of the memory may also include non-volatile random access memory.
From the above, the electronic device provided by this embodiment of the present invention can implement the unified argumentation mining method described in Example 1 by running a computer program, obtaining a new framework, called Uni-AM, that uniformly models the three types of AM as instruction learning tasks. Training an LLM-based generative model to jointly learn all of these instruction inputs enables all AM subtasks to benefit from each other. Combining the two queries greatly improves computational performance.
It should be appreciated that the above-described integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by instructing related hardware by a computer program, where the computer program may be stored in a computer-readable storage medium, and the computer program may implement the steps of each of the method embodiments described above when executed by a processor. The computer program comprises computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form, and the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. The content of the computer-readable storage medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
It should be noted that, the method and the details thereof provided in the foregoing embodiments may be combined into the apparatus and the device provided in the embodiments, and are referred to each other and are not described in detail.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the various embodiments described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of modules or elements described above is merely a logical functional division, and may be implemented in other ways, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed.
The foregoing embodiments are merely for illustrating the technical solution of the present invention, but not for limiting the same, and although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that the technical solution described in the foregoing embodiments may be modified or substituted for some of the technical features thereof, and that such modifications or substitutions do not depart from the spirit and scope of the technical solution of the embodiments of the present invention and are intended to be included in the scope of the present invention.

Claims (10)

1. A unified argumentation mining method based on instruction learning, characterized by comprising the following steps:
Step 1: uniformly model the three types of AM as instruction learning tasks, i.e., construct the Uni-AM framework;
Step 2: train an LLM-based generative model using the Uni-AM framework constructed in Step 1;
Step 3: use the LLM-based generative model from Step 2 to process multiple types of AM simultaneously.
2. The unified argumentation mining method according to claim 1, wherein in Step 1 the three types of AM are tree AM, generic AM, and argument-pair AM, respectively, and an instruction shared across tasks is constructed for the three types of AM to guide the model in understanding the argumentative text.
3. The unified argumentation mining method according to claim 2, wherein an input query is constructed to determine the category of each AC, completing the ACTC subtask.
4. The unified argumentation mining method according to claim 3, wherein an input query is constructed to determine the type of relation between any two argument components AC_i and AC_j to implement the ARI and ARTC subtasks.
5. The unified argumentation mining method according to claim 2, wherein for the ACTC task, a query is constructed for each AC in the form: Instruction: read the following paragraph and answer the question. [argument text]; Input: [AC] is an argument component of which type? [list of AC types].
6. The unified argumentation mining method according to claim 4, wherein for the ARI and ARTC tasks, every combination of two ACs is considered and a single query is used to solve ARI and ARTC together.
7. The unified argumentation mining method according to claim 6, wherein for each AC pair, the query is constructed in the form: Instruction: read the following article and answer the question. [argument text]; Input: what is the relation between [AC1] and [AC2]? [list of AR types]; if the model outputs "no relation", the two ACs are considered unrelated; otherwise, the output is taken as the ARTC result.
8. A unified argumentation mining system based on instruction learning, characterized in that it uses the unified argumentation mining method based on instruction learning according to any one of claims 1-7, and comprises:
a Uni-AM framework construction module, used to uniformly model the three types of AM as instruction learning tasks; and
a training module, used to train an LLM-based generative model using the Uni-AM framework constructed by the Uni-AM framework construction module;
wherein the LLM-based generative model is used to process multiple types of AM simultaneously.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any of claims 1-7 when executing the computer program.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the method of any of claims 1-7.
CN202410564145.8A 2024-05-08 2024-05-08 Unified argumentation mining method and system based on instruction learning Pending CN119323254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410564145.8A CN119323254A (en) 2024-05-08 2024-05-08 Unified argumentation mining method and system based on instruction learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410564145.8A CN119323254A (en) 2024-05-08 2024-05-08 Unified argumentation mining method and system based on instruction learning

Publications (1)

Publication Number Publication Date
CN119323254A true CN119323254A (en) 2025-01-17

Family

ID=94227870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410564145.8A Pending CN119323254A (en) 2024-05-08 2024-05-08 Unified argumentation mining method and system based on instruction learning

Country Status (1)

Country Link
CN (1) CN119323254A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008015907A1 (en) * 2006-08-03 2008-02-07 Nec Corporation Text mining device, text mining method, and text mining program
CN107967257A (en) * 2017-11-20 2018-04-27 哈尔滨工业大学 A kind of tandem type composition generation method
CN110941700A (en) * 2019-11-22 2020-03-31 福州大学 A debate mining system based on multi-task joint learning and its working method
CN110929024A (en) * 2019-12-10 2020-03-27 哈尔滨工业大学 Extraction type text abstract generation method based on multi-model fusion
CN112651853A (en) * 2020-11-17 2021-04-13 四川大学 Judgment and opinion mining method and system based on referee document
CN113312464A (en) * 2021-05-28 2021-08-27 北京航空航天大学 Event extraction method based on conversation state tracking technology
CN116306582A (en) * 2022-09-06 2023-06-23 哈尔滨工业大学(深圳) Text argumentation strategy analysis method, analysis device and computer-readable storage medium
CN117390151A (en) * 2023-10-10 2024-01-12 哈尔滨工业大学 Structural health diagnosis visual-language basic model and establishment method of multi-modal interactive system
CN117407589A (en) * 2023-10-30 2024-01-16 复旦大学 Counter-argument generation model, model training and inference methods, and evaluation criteria based on large models
CN117788164A (en) * 2024-01-26 2024-03-29 国金证券股份有限公司 Large language model multi-agent collaborative control algorithm and system for securities and futures industry

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEINAN ZHANG, et al.: "Building Dialogue Understanding Models for Low-resource Language Indonesian from Scratch", Association for Computing Machinery, vol. 22, no. 4, 6 April 2023 (2023-04-06), pages 1-20 *
LI Yongze, et al.: "A Review of Argumentation Mining Research", Library and Information Service, vol. 64, no. 19, 16 October 2020 (2020-10-16), pages 128-139 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination