
CN108717406A - Text mood analysis method, device and storage medium - Google Patents


Info

Publication number
CN108717406A
CN108717406A
Authority
CN
China
Prior art keywords
sentence
word
target text
analyzed
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810443238.XA
Other languages
Chinese (zh)
Other versions
CN108717406B (en)
Inventor
李正洋
李海疆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201810443238.XA priority Critical patent/CN108717406B/en
Priority to PCT/CN2018/107725 priority patent/WO2019214145A1/en
Publication of CN108717406A publication Critical patent/CN108717406A/en
Application granted granted Critical
Publication of CN108717406B publication Critical patent/CN108717406B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/211 Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides a text sentiment analysis method. The method includes: receiving a text sentiment analysis request carrying a target text, pre-processing the target text, performing word segmentation on the pre-processed target text using a predetermined sequence labelling method, and obtaining the candidate word set corresponding to the target text; determining the sentences to be analyzed in the target text, obtaining the candidate word set corresponding to each sentence to be analyzed, and calculating a sentence vector for each sentence to be analyzed according to a preset computation rule, where a sentence vector comprises a sentence-level vector and a word-level vector; and inputting the sentence vector of each sentence to be analyzed into a pre-trained sentiment classification model, and judging the sentiment polarity of the target text from the model output. The present invention also provides an electronic device and a computer storage medium. With the present invention, the accuracy of sentiment analysis of the target text can be improved.

Description

Text mood analysis method, device and storage medium
Technical field
The present invention relates to the technical field of data processing, and more particularly to a text sentiment analysis method, an electronic device, and a computer-readable storage medium.
Background technology
For investment institutions and investors formulating trading strategies, analyzing the text associated with a particular event or person is both necessary and useful. For example, by analyzing everything a given person (say, Mr. X) has posted on Twitter about the China-US trade war, we can largely determine his attitude toward the event, which is very useful for the Chinese side in preparing accordingly in advance. At present, however, the analysis of such events and persons relies mainly on the manual work of professionals; this approach depends on the professional ability and personal experience of the researcher and easily yields one-sided conclusions.
Summary of the invention
In view of the foregoing, the present invention provides a text sentiment analysis method, an electronic device, and a computer-readable storage medium, whose main purpose is to improve the accuracy and efficiency of sentiment analysis of a target text.
To achieve the above object, the present invention provides a text sentiment analysis method, the method comprising:
S1: receiving a text sentiment analysis request carrying a target text, pre-processing the target text, performing word segmentation on the pre-processed target text using a predetermined sequence labelling method, and obtaining the candidate word set corresponding to the target text;
S2: determining the sentences to be analyzed in the target text, obtaining the candidate word set corresponding to each sentence to be analyzed from the candidate word set of the target text, and calculating a sentence vector for each sentence to be analyzed according to a preset computation rule, wherein a sentence vector comprises a sentence-level vector and a word-level vector; and
S3: inputting the sentence vector of each sentence to be analyzed into a pre-trained sentiment classification model, and judging the sentiment polarity of the target text according to the model output.
In addition, the present invention also provides an electronic device, the device comprising a memory and a processor, the memory storing a text sentiment analysis program executable on the processor, which, when executed by the processor, implements the following steps:
A1: receiving a text sentiment analysis request carrying a target text, pre-processing the target text, performing word segmentation on the pre-processed target text using a predetermined sequence labelling method, and obtaining the candidate word set corresponding to the target text;
A2: determining the sentences to be analyzed in the target text, obtaining the candidate word set corresponding to each sentence to be analyzed from the candidate word set of the target text, and calculating a sentence vector for each sentence to be analyzed according to a preset computation rule, wherein a sentence vector comprises a sentence-level vector and a word-level vector; and
A3: inputting the sentence vector of each sentence to be analyzed into a pre-trained sentiment classification model, and judging the sentiment polarity of the target text according to the model output.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium containing a text sentiment analysis program which, when executed by a processor, implements any of the steps of the text sentiment analysis method described above.
The text sentiment analysis method, electronic device, and computer-readable storage medium proposed by the present invention segment the target text to be analyzed, determine the sentences to be analyzed according to the length of the target text, and calculate a sentence vector of a preset kind for each sentence to be analyzed, so that the resulting sentence vectors express the information of the sentences more accurately. Using the sentence vector of each sentence to be analyzed together with a pre-trained sentiment classification model, the sentiment polarity of each sentence is judged more accurately; judging the polarity of the target text comprehensively from the polarities of all its sentences to be analyzed helps improve the accuracy of the analysis. By filtering out of the target text only those sentences that fully reflect its viewpoint, the computational load of the sentiment analysis is reduced and its efficiency improved.
Description of the drawings
Fig. 1 is the flow chart of a preferred embodiment of the text sentiment analysis method of the present invention;
Fig. 2 is the schematic diagram of a preferred embodiment of the electronic device of the present invention;
Fig. 3 is a module diagram of the text sentiment analysis program in Fig. 2 of the present invention.
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described here merely illustrate the present invention and are not intended to limit it.
The present invention provides a text sentiment analysis method. Referring to Fig. 1, which is the flow chart of a preferred embodiment of the text sentiment analysis method of the present invention: the method may be executed by an apparatus, and the apparatus may be realized in software and/or hardware.
In the present embodiment, the text sentiment analysis method includes steps S1-S3:
S1: receiving a text sentiment analysis request carrying a target text, pre-processing the target text, performing word segmentation on the pre-processed target text using a predetermined sequence labelling method, and obtaining the candidate word set corresponding to the target text.
The target text is text about a particular event or person, and its content may be either Chinese or English. Consider, for example, a target text consisting of research reports issued by domestic brokerages and institutions. Unlike English and other Western languages, Chinese runs characters tightly together with no obvious word boundaries apart from punctuation, so words are difficult to extract simply and accurately. In Chinese, the single character is the most basic semantic unit; although a character carries meaning of its own, that meaning is comparatively weak and diffuse, whereas a multi-character word has stronger expressive power and describes a thing more precisely. In natural language processing, therefore, the word (including single-character words) is normally the basic processing unit, and the text must first be segmented into words accurately. Conversely, when the content of the target text is English, for example, everything Mr. X has posted on Twitter about the China-US trade war, there are obvious word boundaries (spaces) between words, and no segmentation of the text is needed.
Word segmentation methods fall roughly into two kinds: dictionary-based mechanical matching, and sequence-labelling segmentation based on a statistical model.
In the present embodiment, a Long Short-Term Memory recurrent neural network (LSTM) model is trained by the sequence labelling method and used as the segmentation model. The training process of the segmentation model is as follows:
Obtain a preset number (for example, 100,000) of sample sentences, in which the words have been annotated in a preset corpus using the predetermined sequence labelling method. Here the corpus may use the Microsoft Research segmentation corpus from the classic bakeoff2005 benchmark, taking its training-set portion for training and its test-set portion for the final test. The predetermined sequence labelling method tags each character according to its position within its word; the tag types are: word-initial, word-medial, word-final, and single-character. In a passage of text, each character can be tagged by its position in its word with the common labels: B (Begin), meaning the character starts a word; M (Middle), meaning the character lies inside a word; E (End), meaning the character ends a word; and S (Single), meaning the character is a word by itself. Segmentation then consists of feeding a span of characters into the model to obtain the corresponding tag sequence and recovering the segmentation from that sequence. For example, for a sentence meaning roughly "<X> is an enterprise big-data service provider", the ideal tag sequence obtained from the model is "BMMESBEBMEBME", from which the final segmentation "<X> / is / enterprise / big data / service provider" is recovered.
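The tagging-and-recovery procedure just described (feed characters to the model, obtain B/M/E/S tags, restore words from the tags) can be sketched as follows. This is a minimal illustrative decoder, not the patent's implementation; how it treats malformed tag sequences is an assumption.

```python
def decode_bmes(chars, tags):
    """Recover word segments from a BMES tag sequence.

    B marks a word-initial character, M a word-medial one,
    E a word-final one, and S a single-character word.
    Dangling partial words (malformed sequences) are flushed
    as-is, an assumption the patent does not address.
    """
    words, current = [], []
    for ch, tag in zip(chars, tags):
        if tag == "S":
            if current:
                words.append("".join(current))
                current = []
            words.append(ch)
        elif tag == "B":
            if current:
                words.append("".join(current))
            current = [ch]
        else:  # "M" or "E" extend the current word
            current.append(ch)
            if tag == "E":
                words.append("".join(current))
                current = []
    if current:
        words.append("".join(current))
    return words
```

For the 13-character example above, the tags "BMMESBEBMEBME" decode into five words of lengths 4, 1, 2, 3, and 3.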
During training, at preset intervals, the segmentation model obtained so far is used to tag each character of the sample sentences in the test set, and the predicted tags are compared against the reference tags produced by the predetermined sequence labelling method, in order to assess the model's tagging error. If the tagging error diverges, the preset training parameters are adjusted and training restarted (for example, the model error is computed with the back-propagation algorithm and the model parameters adjusted according to the error) until the tagging error converges. Once the tagging error converges, training ends, and the resulting model is taken as the segmentation model.
It should be noted that, to ensure that the segmentation step proceeds smoothly, before the segmentation operation this step further includes: converting the target text from its original format into a target format, the target format being one on which segmentation can be performed. For example, when the received target text is a research report issued by a brokerage or institution, the report is generally in PDF, on which segmentation cannot be performed directly; software is therefore used to convert the PDF report into a format that supports segmentation, such as Word.
Further, before the segmentation operation, the format-converted target text needs to be pre-processed: for example, the target text is split into multiple sentences at the full stops it contains, and segmentation is then performed on each sentence.
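The full-stop splitting just described can be sketched as a small helper. Treating the ASCII period like the Chinese full stop, and ignoring complications such as decimals and abbreviations, are simplifying assumptions.

```python
import re

def split_sentences(text):
    """Split pre-processed text into candidate sentences at full stops.

    Splits on the Chinese full stop (U+3002) and the ASCII period
    alike (a simplifying assumption); empty fragments are dropped.
    """
    parts = re.split(r"[\u3002.]", text)
    return [p.strip() for p in parts if p.strip()]
```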
S2: determining the sentences to be analyzed in the target text, obtaining the candidate word set corresponding to each sentence to be analyzed from the candidate word set of the target text, and calculating a sentence vector for each sentence to be analyzed according to a preset computation rule, wherein a sentence vector comprises a sentence-level vector and a word-level vector.
The step of "determining the sentences to be analyzed in the target text" includes:
Count the number of words in the target text. When the count is below a preset threshold, take every sentence of the target text as a sentence to be analyzed. When the count is greater than or equal to the preset threshold, obtain the statistical features of each word in the candidate word set of the target text, compute a score for each word according to preset scoring rules, select the top-scoring words as the keywords of the target text, filter out of the target text the sentences containing a keyword, and take those sentences as the sentences to be analyzed.
In the present embodiment, the length of the target text is determined by counting its words.
When the word count is below the preset threshold (for example, 300), the target text is short and each of its sentences is likely to be a key sentence; every sentence of the target text can therefore be treated as a sentence to be analyzed. When the word count is greater than or equal to the preset threshold, the target text is long and may contain many noise sentences that express no key information and would distort the subsequent sentiment analysis; the sentences that represent the key information of the target text must therefore be filtered out for the subsequent operations.
Preferably, the keywords are extracted by scoring and ranking each word with an unsupervised statistical method. Specifically, duplicate and meaningless words are first removed from the segmentation result of the target text. For example, all the words in the candidate word sets of all the sentences of the target text are merged into one large candidate word set (each word appears only once in the set), and meaningless words such as "I", "you", and "is" are then deleted. Such pronouns, prepositions, and other words without concrete meaning are compiled in advance, from experience, into a list, and the words on that list are deleted.
Specifically, the statistical features include: word frequency, positional information, and word span.
Word frequency is how often a word occurs in the text. In general, the more frequently a word occurs in the text, the more likely it is to be a core word of the article; the higher the frequency, therefore, the higher the word-frequency score.
Positional information: in general, the positions at which a word appears are highly informative about the word. The title and the abstract, for instance, are the author's own summary of the central idea, so words appearing there are to some extent representative and more likely to be keywords.
In the present embodiment, a 5:5:1 ratio is used to set the importance of the beginning, the ending, and the middle of the text for a word's positional information, and the text is divided into beginning, ending, and middle in the proportion 1:1:8. For example, if a target text consists of 10,000 words in order, the words in the first X percent of positions count as the beginning, the words in the last X percent of positions count as the ending, and the rest count as the middle, with importance 5:5:1. Suppose the word "pilot zone" occurs 5 times in the target text: twice at the beginning, once at the ending, and the remaining twice in the middle; its positional score is then 5 × 2 + 5 × 1 + 1 × 2 = 17.
Word span is the distance between the first and last occurrence of a word or phrase in the text. The larger the word span, the more important the word is to the text and the better it reflects the text's theme; the larger the span, therefore, the higher the word-span score. Specifically, the word span is computed as:
span_i = (last_i - first_i + 1) / sum
where last_i is the position of the last occurrence of word i in the target text, first_i is the position of its first occurrence, and sum is the total number of words in the target text. Word span is used as a keyword-extraction feature because real text always contains much noise (words that are not keywords), which the word span helps to suppress.
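The word-span feature can be expressed directly in code. This is an illustrative sketch; taking positions as 0-based token indices is an assumption the patent does not pin down.

```python
def word_span(words, target):
    """Word span per span_i = (last_i - first_i + 1) / sum.

    `words` is the tokenized target text; positions are 0-based
    token indices (an assumption).  Returns 0.0 for an absent word.
    """
    positions = [i for i, w in enumerate(words) if w == target]
    if not positions:
        return 0.0
    return (positions[-1] - positions[0] + 1) / len(words)
```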
The above statistical features are combined to compute the score of each word in the candidate word set. Specifically, the score of each word is computed by the formula:
S=α * X1+β*X2+γ*X3
Wherein, X1For the word frequency scoring for the frequency that word occurs in the target text, α is preset word frequency weight, X2 Occurs the location score of position in the target text for word, β is preset position weight, X3It is word in the target Word span scoring in text, γ are preset word span weight.
The words are sorted by score, and the top K (K may be set freely as required) are taken as the keywords of the target text. These steps take each word's frequency, positional information, and span into account, improving the accuracy of keyword extraction. The sentences containing these keywords are then filtered out of the target text as the sentences to be analyzed. Selecting only the keyword-bearing sentences from a long target text helps reduce the computational load and improves the efficiency of the text sentiment analysis.
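The composite score S = α·X1 + β·X2 + γ·X3, with the 5:5:1 positional weighting over the 1:1:8 head/tail/middle split, can be sketched as follows. Unit weights α = β = γ = 1 and raw frequency as X1 are illustrative assumptions; the patent leaves the weights as preset parameters.

```python
from collections import Counter

def keyword_scores(words, alpha=1.0, beta=1.0, gamma=1.0):
    """Composite keyword score S = alpha*X1 + beta*X2 + gamma*X3.

    X1 is raw frequency, X2 the positional score (5 points per
    occurrence in the first or last 10% of the text, 1 point in
    the middle, per the 5:5:1 / 1:1:8 scheme above), X3 the word
    span.  Unit weights are illustrative assumptions.
    """
    n = len(words)
    freq = Counter(words)
    first, last, pos_score = {}, {}, Counter()
    for i, w in enumerate(words):
        first.setdefault(w, i)
        last[w] = i
        in_head = i < n * 0.1
        in_tail = i >= n * 0.9
        pos_score[w] += 5 if (in_head or in_tail) else 1
    return {w: alpha * freq[w]
               + beta * pos_score[w]
               + gamma * (last[w] - first[w] + 1) / n
            for w in freq}

def top_k_keywords(words, k):
    """Top-K words by score: the K keywords of the target text."""
    scores = keyword_scores(words)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```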
After the sentences to be analyzed have been determined, their sentence vectors must be computed from the candidate word set of each sentence. First, the word vector of each word in the candidate word set of each sentence to be analyzed is computed. Specifically, this step includes:
Each word is input into a pre-trained word-vector model (a word2vec model) to generate a word-level vector r_wrd; the letters/characters making up the word are input into a pre-trained Convolutional Neural Network (CNN) model to generate the word's character-level vector r_wch; and the word-level and character-level vectors are concatenated into a new word vector u_n = [r_wrd, r_wch], which serves as the word vector of the word.
Here r_wrd is the vector obtained by word2vec training, handled exactly as in the existing word2vec model and therefore not described again. r_wch is the vector produced by one convolutional layer, obtained as follows:
Suppose word w consists of M letters. Each letter is converted through a character embedding matrix into a vector r_chr, i.e. r_chr = W_chr · v_c, where v_c is a one-hot vector (an array of length n with exactly one element equal to 1.0 and all others 0.0). After its letters are processed in turn, word w can be represented as a d_chr × M matrix of vectors. A filter of convolution width k_chr is then convolved over this matrix, and a max-pooling layer pools the result into a vector of fixed length, namely r_wch. It should be noted that the convolution in the present scheme is somewhat unlike traditional convolution: several adjacent vectors are spliced together and converted by a linear computation into a vector of fixed dimension, which unifies the dimensionality of words of different lengths.
Obtaining both the word-level vector and the character-level vector of a word in this way captures the word's semantic information and its morphological information at the same time, laying the foundation for the subsequent step of computing the sentence vector of each sentence to be analyzed.
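The character-level vector r_wch (one convolution of width k over the word's letter embeddings, then max pooling over positions) and the concatenation u_n = [r_wrd, r_wch] can be sketched as follows. Random arrays stand in for the trained character embedding matrix and filters, and all dimensions here are illustrative assumptions.

```python
import numpy as np

def char_level_vector(word, char_dim=8, k=3, n_filters=16, seed=0):
    """Character-level vector r_wch via one conv layer + max pooling.

    Random parameters stand in for the trained character embedding
    matrix W_chr and the convolution filters (an assumption);
    char_dim, k, and n_filters are illustrative.
    """
    rng = np.random.default_rng(seed)
    emb = {c: rng.standard_normal(char_dim) for c in sorted(set(word))}
    rows = [emb[c] for c in word]
    while len(rows) < k:                      # pad very short words
        rows.append(np.zeros(char_dim))
    mat = np.stack(rows)                      # (M, char_dim)
    W = rng.standard_normal((n_filters, k * char_dim))
    # splice k adjacent letter vectors and project them linearly,
    # as the scheme above describes
    windows = np.stack([mat[i:i + k].ravel()
                        for i in range(len(mat) - k + 1)])
    return (windows @ W.T).max(axis=0)        # max-pool -> (n_filters,)

def word_vector(r_wrd, r_wch):
    """u_n = [r_wrd, r_wch]: concatenate word-level and
    character-level parts into the final word vector."""
    return np.concatenate([r_wrd, r_wch])
```

Max pooling over window positions is what makes the output length independent of the word's letter count, which is the point of the fixed-dimension remark above.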
After the word vectors of the words in each sentence to be analyzed have been obtained by the above steps, the vector of each sentence must further be determined. Specifically, this step includes: inputting each sentence to be analyzed into the word2vec model to generate a sentence-level vector;
obtaining the word vectors u_1, u_2, ..., u_n of the words in the candidate word set of each sentence to be analyzed, and inputting the words composing each sentence into the convolutional neural network model to generate the word-level vector of the sentence; and
concatenating the sentence-level and word-level vectors into a new sentence vector, which serves as the sentence vector of the sentence to be analyzed.
The word-level vector of each sentence to be analyzed is computed in much the same way as r_wch above and is not repeated here.
Computing the sentence vector of each sentence to be analyzed by the above steps yields sentence vectors that express the information of each sentence more accurately, laying the foundation for judging the sentiment polarity of the target text below.
S3: inputting the sentence vector of each sentence to be analyzed into a pre-trained sentiment classification model, and judging the sentiment polarity of the target text according to the model output.
Specifically, a sample database is built in advance and used to train a preset deep neural network model (for example, a three-layer neural network); the model parameters are determined, and the neural network with its parameters fixed is taken as the sentiment classification model. The training of the sentiment classification model includes the following steps:
Obtain a preset number of sample sentences and label each according to its sentiment polarity, producing the sample data. The labels are: "1", indicating that the sentiment of the sample sentence leans positive; "0", indicating that it leans neutral; and "-1", indicating that it leans negative.
Based on cross-validation, divide the preset number of sample sentences (for example, 100,000) in a preset ratio (for example, 7:1:2) into three parts: a training set, a validation set, and a test set, where the test set takes no part in model training at all and serves only to observe the training effect. The sample data of the training set are fed into the three-layer neural network model to train it and preliminarily determine the model parameters. To judge relatively objectively how well the preliminarily determined parameters fit sample data outside the training set, the sample data of the test set are fed into the trained neural network model to test it. When the trained neural network model satisfies the preset verification condition (for example, a prediction accuracy greater than or equal to a preset threshold such as 95%), training is complete, and the trained neural network model is taken as the sentiment classification model.
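The 7:1:2 split of the sample sentences into training, validation, and test sets can be sketched as follows. Plain slicing of an already-shuffled list is an assumption; the patent names only the ratio, not the mechanics.

```python
def split_samples(samples, ratios=(0.7, 0.1, 0.2)):
    """Split labelled sample sentences into training, validation,
    and test sets at the example 7:1:2 ratio.

    Assumes `samples` was shuffled beforehand; the test slice is
    held out of training entirely, as the text above requires.
    """
    n_train = int(len(samples) * ratios[0])
    n_val = int(len(samples) * ratios[1])
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```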
In the present embodiment, the labels of the sample data used in training are discrete values; therefore, after each sentence to be analyzed of the target text is fed into the sentiment classification model, the model's output is likewise a discrete value.
Further, the sentiment polarity of the target text must be judged from the model output. Specifically, this step includes:
determining the sentiment polarity of each sentence to be analyzed from the model output, counting the number of sentences to be analyzed corresponding to each sentiment polarity, and taking the polarity with the most sentences as the sentiment polarity of the target text.
In the present embodiment, the sentence vector of each sentence to be analyzed is the input of the sentiment classification model, and the output is the sentiment label of each sentence, for example "1", "0", or "-1", from which the polarity of each sentence is determined. The outputs for all the sentences to be analyzed are then fused to obtain the sentiment polarity of the target text: the number of sentences corresponding to each polarity is counted, and whichever polarity has the most sentences becomes the polarity of the target text. For example, suppose the above steps determine the sentiment polarity of every sentence in the texts Mr. X has published about the China-US trade war, and the sentences corresponding to each polarity are counted; if the "negative" polarity has the most sentences, Mr. X's attitude toward the China-US trade war is judged to be negative.
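The fusion step above, counting sentences per polarity and taking the majority, can be sketched in a few lines. Returning neutral for an empty input, and breaking ties by first occurrence, are assumptions.

```python
from collections import Counter

def document_polarity(sentence_labels):
    """Fuse per-sentence polarity labels (1 positive, 0 neutral,
    -1 negative) into a document polarity by majority vote.

    Empty input yields neutral, and ties go to the label seen
    first; both choices are assumptions the patent leaves open.
    """
    if not sentence_labels:
        return 0
    return Counter(sentence_labels).most_common(1)[0][0]
```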
The text sentiment analysis method proposed by the above embodiment segments the target text to be analyzed, determines the sentences to be analyzed according to the length of the target text, and computes a sentence vector of the preset kind for each sentence to be analyzed in the target text, so that the resulting sentence vectors express the information of the sentences more accurately. Using the sentence vector of each sentence to be analyzed together with the pre-trained sentiment classification model, the sentiment polarity of each sentence is judged more accurately; judging the polarity of the target text comprehensively from the polarities of all its sentences helps improve the accuracy of the analysis. Filtering out of the target text only the sentences that fully reflect its viewpoint helps reduce the computational load of the sentiment analysis and improves its efficiency.
The present invention also provides an electronic device. Fig. 2 is a schematic diagram of a preferred embodiment of the electronic device 1 of the present invention.
In the present embodiment, electronic device 1 can be server, smart mobile phone, tablet computer, pocket computer, on table Type computer etc. has the terminal device of data processing function, and the server can be rack-mount server, blade type service Device, tower server or Cabinet-type server.
The electronic device 1 includes a memory 11, a processor 12, a communication bus 13 and a network interface 14.
The memory 11 includes at least one type of readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (for example, an SD or DX memory), a magnetic memory, a magnetic disk or an optical disk. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, for example a hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1.
The memory 11 can be used not only to store the application software installed on the electronic device 1 and various kinds of data, such as the text mood analysis program 10, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor or another data processing chip, and is used to run the program code or process the data stored in the memory 11, for example to execute the text mood analysis program 10.
The communication bus 13 is used to implement connection and communication between these components.
The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
Fig. 2 shows only the electronic device 1 with the components 11-14. Those skilled in the art will understand that the structure shown in Fig. 2 does not constitute a limitation of the electronic device 1, which may include fewer or more components than illustrated, or combine certain components, or adopt a different arrangement of components.
Optionally, the electronic device 1 may further include a user interface. The user interface may include a display and an input unit such as a keyboard, and the optional user interface may also include a standard wired interface and a wireless interface.
Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-control liquid crystal display, an organic light-emitting diode (OLED) touch device, or the like. The display may also be called a display screen or a display unit, and is used to display the information processed in the electronic device 1 and to display a visualized user interface.
In the embodiment of the electronic device 1 shown in Fig. 2, the memory 11, which is a kind of computer storage medium, stores the program code of the text mood analysis program 10, and the processor 12 implements the following steps when executing this program code:
A1, receiving a text mood analysis request carrying a target text, preprocessing the target text, and performing word segmentation on the preprocessed target text using a preset sequence labelling method to obtain the available word set corresponding to the target text;
The target text is a text about a particular event or particular person, and its content may be either Chinese or English. For example, when the target text is a research report issued by a domestic broker or institution, accurate word segmentation needs to be performed on the text; conversely, when the content of the target text is English, for example all the remarks about the Sino-U.S. trade war published by Mr. X on Twitter, there are obvious word boundaries (spaces) between the English words, and no word segmentation is required.
Word segmentation methods fall roughly into two kinds: mechanical segmentation based on a dictionary, and sequence-labelling segmentation based on a statistical model.
In the present embodiment, a long short-term memory recurrent neural network (LSTM) model trained with the sequence labelling method is used as the word segmentation model. The training process of the word segmentation model is as follows:
A preset number (for example, 100,000) of sample sentences is obtained, wherein the words in the sample sentences are words in a preset corpus labelled with the preset sequence labelling method. The preset corpus uses the segmentation corpus of Microsoft Research from the classical bakeoff2005 task; its training-set part is taken for training, and its test-set part serves as the final test. The preset sequence labelling method labels each character according to its position within a word, and the label types include: word-initial label, word-middle label, word-final label and single-character label. That is, in a passage, each character is labelled according to its position in its word, with the common labels being: B (Begin), indicating the first character of a word; M (Middle), indicating a character in the middle of a word; E (End), indicating the last character of a word; and S (Single), indicating a single character that forms a word by itself. The segmentation process then consists in feeding a piece of text into the model, obtaining the corresponding label sequence, and segmenting according to that label sequence. For example, for the example sentence "it is big data service provider of enterprise to take things philosophically data", the ideal label sequence obtained from the model is "BMMESBEBMEBME", and the segmentation restored from it is "take things philosophically data / is / enterprise / big data / service provider".
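The BMES labelling scheme above can be illustrated with a short decoding sketch that restores a segmentation from a tag sequence (a hypothetical helper for illustration; the trained LSTM model is what actually produces the tags):

```python
def decode_bmes(chars, tags):
    """Restore words from per-character tags:
    B = word-initial, M = word-middle, E = word-final, S = single-character word."""
    words, current = [], []
    for ch, tag in zip(chars, tags):
        if tag == "S":
            words.append(ch)        # a single character forms a word by itself
        elif tag == "B":
            current = [ch]          # start a new multi-character word
        elif tag == "M":
            current.append(ch)
        elif tag == "E":
            current.append(ch)      # close the current word
            words.append("".join(current))
            current = []
    return words

# A 7-character toy sequence: tags "BME" + "S" + "BME" yield three words.
print(decode_bmes(list("ABCDEFG"), list("BMESBME")))  # ['ABC', 'D', 'EFG']
```
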
During training, at intervals of a preset time, the word segmentation model obtained by training is used to identify the label of each character of the sample sentences in the test set, and the identified labels are compared with the labels given by the preset sequence labelling method in the sample sentences, so as to assess the labelling error of the model. If the labelling error of the trained model diverges, the preset training parameters are adjusted and training is restarted (for example, the model error is computed with the back-propagation algorithm and the model parameters are adjusted according to the error) until the labelling error of the trained model can converge. If the labelling error of the trained model converges, the model training is ended, and the generated model is used as the word segmentation model.
It should be noted that, in order to ensure that the above word segmentation step proceeds smoothly, before the word segmentation operation this step further includes: converting the target text from its original format into a target format, wherein the target format is a format in which the word segmentation operation can be performed. For example, when the received target text is a research report issued by a broker or institution, the research report is generally in pdf format and cannot be segmented directly; therefore, the report in pdf format is converted by software into a format in which the segmentation operation can be performed, for example word.
Further, before the word segmentation operation, the target text after the above format conversion needs to be preprocessed; for example, the target text is divided into multiple sentences according to the full stops in the target text, and the word segmentation operation is then performed on each sentence.
A2, determining the sentences to be analyzed of the target text, obtaining the available word set corresponding to each sentence to be analyzed in the target text from the available word set corresponding to the target text, and calculating the sentence vector of each sentence to be analyzed according to a preset calculation rule, wherein the sentence vector includes a sentence-level vector and a word-level vector;
Wherein, the step of "determining the sentences to be analyzed of the target text" includes:
counting the number of words of the target text; when the number of words is less than a preset threshold value, taking each sentence in the target text as a sentence to be analyzed; when the number of words is greater than or equal to the preset threshold value, obtaining the statistical features of each word in the available word set corresponding to the target text, calculating the score of each word according to a preset scoring rule, selecting the words ranked highest by score as the keywords of the target text, filtering out from the target text the sentences containing the keywords, and taking the sentences containing the keywords as the sentences to be analyzed.
In the present embodiment, the length of the target text is determined by counting its number of words.
When the number of words is less than the preset threshold value (for example, 300), the target text is short and each of its sentences is likely to be a key sentence; therefore, each sentence of the target text can be taken as a sentence to be analyzed. When the number of words is greater than or equal to the preset threshold value, the target text is long and may contain many noise sentences that express no key information and would affect the subsequent emotion analysis result; therefore, the sentences that represent the key information of the target text need to be filtered out of the target text for the subsequent operations.
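Under the assumptions above (word count as the length measure, a keyword list already extracted), this branching logic might be sketched as follows; the whitespace split used to count words is an illustrative simplification:

```python
def select_sentences(sentences, keywords, threshold=300):
    """Short text: analyse every sentence. Long text: keep only the
    sentences containing at least one keyword. The threshold of 300
    follows the example in the description."""
    total_words = sum(len(s.split()) for s in sentences)
    if total_words < threshold:
        return list(sentences)
    return [s for s in sentences if any(k in s for k in keywords)]

short = ["trade talks resumed", "markets fell"]
print(select_sentences(short, ["trade"]))  # both kept: the text is short
```
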
Preferably, each word is scored and ranked according to an unsupervised statistical method to extract the keywords. Specifically, duplicate words and meaningless words are first filtered out of the word segmentation result of the target text; for example, all the words in the available word sets of the sentences of the target text are extracted into one large available word set (in which no word is repeated, i.e. each word occurs only once in this set), and then meaningless words such as "I", "you" and "is" are deleted. Such pronouns, prepositions and other words without specific meaning are compiled in advance, based on experience, into a list, according to which the meaningless words are deleted.
Specifically, the statistical features include: word frequency, location information and word span.
The word frequency indicates the frequency with which a word occurs in the text; the higher the frequency, the higher the word-frequency score.
As for the location information, under normal circumstances the positions at which a word occurs are of great value for the word. In the present embodiment, the importance levels of the beginning, the ending and the middle of the text are set for the location information of a word in the ratio 5 : 5 : 1, and the text is divided into beginning, ending and middle portions in the proportion 1 : 1 : 8.
The word span refers to the distance in the text between the first occurrence and the last occurrence of a word or phrase. The larger the word span, the more important the word is to the text, and the better it can reflect the theme of the text; therefore, the larger the word span, the higher the word-span score. Specifically, the calculation formula of the word span is:
span_i = (last_i - first_i + 1) / sum
wherein last_i indicates the position of the last occurrence of word i in the target text, first_i indicates the position of the first occurrence of word i in the target text, and sum indicates the total number of words in the target text. The word span is used as a method of extracting keywords because in reality a text always contains much noise (words that are not keywords), and the word span can reduce this noise.
Taking the above statistical features into account, the score of each word in the available word set is calculated. Specifically, the calculation formula of the score of each word is:
S = α*X1 + β*X2 + γ*X3
wherein X1 is the word-frequency score for the frequency with which the word occurs in the target text, α is a preset word-frequency weight, X2 is the location score for the positions at which the word occurs in the target text, β is a preset location weight, X3 is the word-span score of the word in the target text, and γ is a preset word-span weight.
The words are sorted according to their scores, and the top K words (the range of K can be set freely according to demand) are selected as the keywords of the target text. The above steps take into account the word frequency, location information and word span of each word and improve the accuracy of keyword extraction. Then, the sentences containing the above keywords are filtered out of the target text as the sentences to be analyzed. Selecting the sentences containing keywords out of a long target text as the sentences to be analyzed helps to reduce the amount of computation and improves the efficiency of the text mood analysis.
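A minimal sketch of the scoring and ranking above, combining word frequency (X1), location (X2, with the 5:5:1 importance over a 1:1:8 beginning/ending/middle split) and word span (X3). The concrete values of α, β, γ and the normalisations are illustrative assumptions — the embodiment leaves these weights preset but unspecified:

```python
from collections import defaultdict

def extract_keywords(tokens, k=3, alpha=0.4, beta=0.3, gamma=0.3):
    """Score each word as S = alpha*X1 + beta*X2 + gamma*X3 and return
    the k highest-scoring words."""
    n = len(tokens)
    positions = defaultdict(list)
    for i, tok in enumerate(tokens):
        positions[tok].append(i)

    def loc_weight(i):
        # Beginning and ending tenths (1:1:8 split) weighted 5, middle 1.
        return 5.0 if i < n / 10 or i >= 9 * n / 10 else 1.0

    scores = {}
    for tok, pos in positions.items():
        x1 = len(pos) / n                           # word-frequency score
        x2 = max(loc_weight(i) for i in pos) / 5.0  # location score
        x3 = (max(pos) - min(pos) + 1) / n          # word span: (last-first+1)/sum
        scores[tok] = alpha * x1 + beta * x2 + gamma * x3
    return sorted(scores, key=scores.get, reverse=True)[:k]

tokens = "trade war hurts growth while trade policy shapes trade".split()
print(extract_keywords(tokens, k=1))  # ['trade']
```
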
After the sentences to be analyzed of the target text are determined, the sentence vector corresponding to each sentence to be analyzed needs to be calculated according to its available word set. First, the word vector of each word in the available word set corresponding to each sentence to be analyzed is calculated. Specifically, this step includes:
inputting each word into a pre-trained word vector model (a word2vec model) to generate a word-level vector r_wrd; inputting the letters/characters forming each word into a pre-trained convolutional neural network (CNN) model to generate the letter/character-level vector r_wch corresponding to the word; and combining the word-level vector and the letter/character-level vector into a new word vector u_n = [r_wrd, r_wch], which is used as the word vector of each word.
Wherein, r_wrd indicates the vector obtained by training the word2vec model; the processing is consistent with the existing word2vec model and is not described here again. r_wch indicates the vector trained by a one-layer convolutional neural network, obtained as follows:
Suppose a word w consists of M letters, each of which is converted into a vector r_chr through a character embedding matrix, i.e. r_chr = W_chr * v_c, wherein v_c is a one-hot vector (an array of length n in which only one element is 1.0 and the other elements are 0.0). After each letter is processed in turn, the word w can be expressed as a vector matrix of size d_chr × M. A filter of convolution length k_chr is then used to perform convolution on this matrix, and a max-pooling layer is used for pooling, yielding a fixed-length vector, namely r_wch. It should be noted that the convolution in the present solution is not quite the same as traditional convolution: several adjacent vectors are spliced together and converted by a linear calculation into a vector of fixed dimension, thereby unifying the dimension of words of different lengths.
The above steps yield the word-level vector and the character-level vector of a word, which helps to capture the semantic information and the morphological information of the word at the same time and lays the foundation for the subsequent step of calculating the sentence vector of each sentence to be analyzed.
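The character-level branch above — one-hot characters, a character embedding matrix W_chr, splicing k_chr adjacent embeddings through a linear map, then max pooling — can be sketched in NumPy. The random weights and all dimensions (d_chr, k_chr, the output size) are illustrative assumptions standing in for trained parameters:

```python
import numpy as np

def char_cnn_vector(word, d_chr=8, k_chr=3, out_dim=16,
                    alphabet="abcdefghijklmnopqrstuvwxyz", seed=0):
    """Map a word of any length M to a fixed-length vector r_wch:
    embed each character (r_chr = W_chr @ v_c), splice k_chr adjacent
    embeddings, apply a linear map, then max-pool over positions."""
    rng = np.random.default_rng(seed)
    W_chr = rng.standard_normal((d_chr, len(alphabet)))     # character embedding matrix
    W_conv = rng.standard_normal((out_dim, k_chr * d_chr))  # linear map over a spliced window
    embeds = [W_chr[:, alphabet.index(c)] for c in word]    # one-hot pick = matrix column
    while len(embeds) < k_chr:                              # pad very short words
        embeds.append(np.zeros(d_chr))
    windows = [np.concatenate(embeds[i:i + k_chr])
               for i in range(len(embeds) - k_chr + 1)]
    conv = np.stack([W_conv @ w for w in windows])
    return conv.max(axis=0)                                 # max pooling -> length out_dim

# Words of different lengths map to vectors of the same dimension.
print(char_cnn_vector("data").shape, char_cnn_vector("enterprise").shape)  # (16,) (16,)
```
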
After the word vectors of the different words in each sentence to be analyzed are obtained by the above steps, the sentence vector corresponding to each sentence needs to be further determined. Specifically, this step includes:
inputting each sentence to be analyzed into the word2vec model to generate a sentence-level vector; obtaining the word vectors u_1, u_2, ..., u_n of the words in the available word set corresponding to each sentence to be analyzed, and inputting the words forming each sentence to be analyzed into the convolutional neural network model to generate the word-level vector corresponding to each sentence to be analyzed; and combining the sentence-level vector and the word-level vector into a new sentence vector, which is used as the sentence vector of each sentence to be analyzed.
Wherein, the calculation of the word-level vector corresponding to each sentence to be analyzed is roughly the same as the steps for r_wch above and is not repeated here.
The sentence vector of each sentence to be analyzed is calculated by the above steps, so that the obtained sentence vector expresses the information of each sentence to be analyzed more accurately and lays the foundation for judging the emotional polarity of the target text below.
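The combination step — splicing the sentence-level vector from word2vec with the word-level vector from the CNN — is itself simple; a sketch under the assumption that both vectors are already computed:

```python
import numpy as np

def combine_sentence_vector(sentence_level_vec, word_level_vec):
    """Splice the sentence-level and word-level vectors into the final
    sentence vector fed to the emotion judgment model."""
    return np.concatenate([sentence_level_vec, word_level_vec])

v = combine_sentence_vector(np.ones(100), np.zeros(16))
print(v.shape)  # (116,)
```
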
A3, respectively inputting the sentence vectors of the sentences to be analyzed into a pre-trained emotion judgment model, and judging the emotional polarity of the target text according to the output results of the model.
Specifically, a sample database is built in advance, a preset deep neural network model (for example, a three-layer neural network) is trained with the sample database to determine the model parameters, and the neural network model with the determined parameters is used as the emotion judgment model. The training steps of the emotion judgment model include:
obtaining a preset number of sample sentences and labelling each sample sentence according to its emotional polarity to obtain sample data, wherein the labels include "1", "0" and "-1": "1" indicates that the emotional polarity of the sample sentence leans positive, "0" indicates that it leans neutral, and "-1" indicates that it leans negative.
Based on cross-validation, the preset number (for example, 100,000) of sample sentences is divided according to a preset ratio (for example, 7 : 1 : 2) into three parts: a training set, an evaluation set and a test set, wherein the test set consists of data that do not participate in the model training at all and are used purely to observe the training effect. The sample data of the training set are input into the three-layer neural network model to train it and preliminarily determine the model parameters. In order to judge relatively objectively how well the preliminarily determined model parameters fit sample data outside the training set, the sample data of the test set are input into the trained neural network model to test it; when the trained neural network model meets a preset verification condition (for example, the prediction accuracy of the model is greater than or equal to a preset threshold value, e.g. 95%), the training is completed, and the trained neural network model is used as the emotion judgment model.
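The 7:1:2 split described above can be sketched as follows (illustrative; the embodiment does not fix a shuffling scheme):

```python
import random

def split_dataset(samples, ratios=(0.7, 0.1, 0.2), seed=42):
    """Shuffle labelled samples and divide them into training, evaluation
    and test sets in the preset ratio (7:1:2 here); the test set never
    participates in training."""
    data = list(samples)
    random.Random(seed).shuffle(data)
    n_train = int(len(data) * ratios[0])
    n_eval = int(len(data) * ratios[1])
    return data[:n_train], data[n_train:n_train + n_eval], data[n_train + n_eval:]

train, evaluation, test = split_dataset(range(100))
print(len(train), len(evaluation), len(test))  # 70 10 20
```
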
In the present embodiment, the sample data used in training the model are labelled with discrete values; therefore, after each sentence to be analyzed of the target text is input into the emotion judgment model, the output results of the model are also discrete values.
Further, the emotional polarity of the target text needs to be judged according to the output results of the model. Specifically, this step includes:
determining the emotional polarity of each sentence to be analyzed according to the output results of the model, counting the number of sentences to be analyzed corresponding to each emotional polarity, and selecting the emotional polarity with the largest number of sentences to be analyzed as the emotional polarity corresponding to the target text.
The sentence vector of each sentence to be analyzed is used as the input of the emotion judgment model, which outputs an emotion label, for example "1", "0" or "-1", for each sentence to be analyzed, and the emotional polarity of each sentence to be analyzed is determined from its emotion label. Then, the output results of all sentences to be analyzed are merged to obtain the emotional polarity of the target text. In the present embodiment, the number of sentences to be analyzed corresponding to each emotional polarity is counted, and the emotional polarity with the largest number of corresponding sentences is taken as the emotional polarity of the target text. For example, the above steps determine the emotional polarity of each sentence of the text published by Mr. X about the trade war and count the sentences corresponding to each polarity; if the "negative" polarity corresponds to the most sentences, Mr. X's attitude toward the Sino-U.S. trade war is judged to be "negative".
In the electronic device 1 proposed by the above embodiment, the target text to be analyzed is segmented into words, the sentences to be analyzed are determined according to the length of the target text, and a sentence vector of the preset kind is calculated for each sentence to be analyzed in the target text, so that the obtained sentence vectors express the information of the sentences to be analyzed more accurately. Using the sentence vector of each sentence to be analyzed and the pre-trained emotion judgment model, the emotional polarity of each sentence to be analyzed is judged more accurately, and the emotional polarity of the target text is comprehensively judged from the emotional polarities of its sentences, which helps to improve the accuracy of the emotion analysis. Filtering out from the target text the sentences to be analyzed that best embody the viewpoint of the target text helps to reduce the amount of computation and to improve the efficiency of the emotion analysis.
Optionally, in other embodiments, the text mood analysis program 10 may also be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in the present embodiment) to carry out the present invention. A module in the present invention refers to a series of computer program instruction segments that can complete a specific function. Fig. 3 is a module diagram of the text mood analysis program 10 of Fig. 2. In this embodiment, the text mood analysis program 10 can be divided into a word segmentation module 110, a vector calculation module 120 and an emotion analysis module 130. The functions or operation steps realized by the modules 110-130 are similar to those above and are not described here in detail; illustratively:
the word segmentation module 110 is used to receive a text mood analysis request carrying a target text, preprocess the target text, and perform word segmentation on the preprocessed target text using the preset sequence labelling method to obtain the available word set corresponding to the target text;
the vector calculation module 120 is used to determine the sentences to be analyzed of the target text, obtain the available word set corresponding to each sentence to be analyzed in the target text from the available word set corresponding to the target text, and calculate the sentence vector of each sentence to be analyzed according to the preset calculation rule, wherein the sentence vector includes a sentence-level vector and a word-level vector; and
the emotion analysis module 130 is used to respectively input the sentence vectors of the sentences to be analyzed into the pre-trained emotion judgment model and judge the emotional polarity of the target text according to the output results of the model.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium, which includes a text mood analysis program 10 that, when executed by a processor, realizes the following operations:
A1, receiving a text mood analysis request carrying a target text, preprocessing the target text, and performing word segmentation on the preprocessed target text using a preset sequence labelling method to obtain the available word set corresponding to the target text;
A2, determining the sentences to be analyzed of the target text, obtaining the available word set corresponding to each sentence to be analyzed in the target text from the available word set corresponding to the target text, and calculating the sentence vector of each sentence to be analyzed according to a preset calculation rule, wherein the sentence vector includes a sentence-level vector and a word-level vector; and
A3, respectively inputting the sentence vectors of the sentences to be analyzed into a pre-trained emotion judgment model, and judging the emotional polarity of the target text according to the output results of the model.
The specific implementation of the computer-readable storage medium of the present invention is roughly the same as the specific implementation of the above text mood analysis method and is not described here again.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
It should be noted that, herein, the terms "include", "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, device, article or method including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, device, article or method. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, device, article or method that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product, which is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disk) and includes several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and the accompanying drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A text mood analysis method applied to an electronic device, characterized in that the method includes:
S1, receiving a text mood analysis request carrying a target text, preprocessing the target text, and performing word segmentation on the preprocessed target text using a preset sequence labelling method to obtain the available word set corresponding to the target text;
S2, determining the sentences to be analyzed of the target text, obtaining the available word set corresponding to each sentence to be analyzed in the target text from the available word set corresponding to the target text, and calculating the sentence vector of each sentence to be analyzed according to a preset calculation rule, wherein the sentence vector includes a sentence-level vector and a word-level vector; and
S3, respectively inputting the sentence vectors of the sentences to be analyzed into a pre-trained emotion judgment model, and judging the emotional polarity of the target text according to the output results of the model.
2. The text mood analysis method according to claim 1, characterized in that the step of "judging the emotional polarity of the target text according to the output results of the model" includes:
determining the emotional polarity of each sentence to be analyzed according to the output results of the model, and counting the number of sentences to be analyzed corresponding to each emotional polarity; and
selecting the emotional polarity with the largest number of sentences to be analyzed as the emotional polarity corresponding to the target text.
3. The text mood analysis method according to claim 2, characterized in that the step of "calculating the sentence vector of each sentence to be analyzed" includes:
inputting the sentence to be analyzed into the word vector model to generate a sentence-level vector;
obtaining the word vector of each word in the available word set corresponding to the sentence to be analyzed, and inputting the words forming each sentence to be analyzed into the convolutional neural network model to generate the word-level vector corresponding to each sentence; and
combining the sentence-level vector and the word-level vector into a new sentence vector, which is used as the sentence vector of each sentence to be analyzed.
4. The text mood analysis method according to claim 3, characterized in that the step of "obtaining the word vector of each word in the available word set corresponding to the sentence to be analyzed" includes:
inputting each word into a pre-trained word vector model to generate a word-level vector;
inputting the letters/characters forming each word into a pre-trained convolutional neural network model to generate the letter/character-level vector corresponding to the word; and
combining the word-level vector and the letter/character-level vector into a new word vector, which is used as the word vector of each word.
5. The text mood analysis method according to claim 4, characterized in that the step of "performing word segmentation on the preprocessed target text using a preset sequence labelling method" includes:
labelling each character of the preprocessed target text according to its position within a word, wherein the label types include a word-initial label, a word-middle label, a word-final label and a single-character label; and
determining the word segmentation result of the preprocessed target text according to the label type of each character.
6. The text mood analysis method according to any one of claims 1 to 5, characterized in that the step of "determining the sentences to be analyzed of the target text" includes:
counting the number of words of the target text;
when the number of words is less than a preset threshold value, taking each sentence in the target text as a sentence to be analyzed; or
when the number of words is greater than or equal to the preset threshold value, obtaining the statistical features of each word in the available word set corresponding to the target text, calculating the score of each word according to a preset scoring rule, selecting the words ranked highest by score as the keywords of the target text, filtering out from the target text the sentences containing the keywords, and taking the sentences containing the keywords as the sentences to be analyzed.
7. The text sentiment analysis method according to claim 6, wherein:
the statistical features comprise word frequency, position information, and word span;
the score of each word is calculated by the formula:
S = α·X1 + β·X2 + γ·X3
where X1 is the word-frequency score for how often the word occurs in the target text, and α is the preset word-frequency weight; X2 is the position score for the positions at which the word appears in the target text, and β is the preset position weight; and X3 is the word-span score of the word in the target text, and γ is the preset word-span weight.
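Once the three feature scores are in hand, the scoring formula of claim 7 reduces to a weighted sum. In the sketch below the weights α, β, γ and the per-word feature scores are made-up example values; the patent does not fix them:

```python
def word_score(freq_score, pos_score, span_score, alpha=0.5, beta=0.3, gamma=0.2):
    """S = alpha*X1 + beta*X2 + gamma*X3 from claim 7 (illustrative weights)."""
    return alpha * freq_score + beta * pos_score + gamma * span_score

def keywords(stats, top_k=2):
    """Rank words by score and keep the top-ranked ones as keywords."""
    ranked = sorted(stats, key=lambda w: word_score(*stats[w]), reverse=True)
    return ranked[:top_k]

stats = {                  # word -> (X1 frequency, X2 position, X3 span) scores
    "保险": (0.9, 0.8, 0.7),
    "今天": (0.2, 0.9, 0.1),
    "理赔": (0.7, 0.6, 0.8),
}
print(keywords(stats))     # ['保险', '理赔']
```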
8. An electronic device, comprising a memory and a processor, the memory storing a text sentiment analysis program executable on the processor, the text sentiment analysis program, when executed by the processor, implementing the following steps:
A1. receiving a text sentiment analysis request carrying a target text, pre-processing the target text, and performing word segmentation on the pre-processed target text using a predetermined sequence labeling method to obtain the candidate word set corresponding to the target text;
A2. determining the sentences to be analyzed of the target text, obtaining the candidate word set corresponding to each sentence to be analyzed according to the candidate word set corresponding to the target text, and calculating the sentence vector of each sentence to be analyzed according to a preset computation rule, the sentence vector comprising a sentence-level vector and word-level vectors; and
A3. inputting the sentence vector of each sentence to be analyzed into a pre-trained emotion judgment model, and judging the sentiment polarity of the target text according to the model's output.
9. The electronic device according to claim 8, wherein the step of "determining the sentence to be analyzed of the target text" comprises:
counting the number of words of the target text;
when the word count is less than a preset threshold, taking each sentence in the target text as a sentence to be analyzed; or
when the word count is greater than or equal to the preset threshold, obtaining the statistical features of each word in the candidate word set corresponding to the target text, calculating a score for each word according to preset scoring rules, selecting the top-ranked words as the keywords of the target text, filtering out the sentences that contain a keyword from the target text, and taking those sentences as the sentences to be analyzed.
10. A computer-readable storage medium storing a text sentiment analysis program which, when executed by a processor, implements the steps of the text sentiment analysis method according to any one of claims 1 to 7.
CN201810443238.XA 2018-05-10 2018-05-10 Text emotion analysis method and device and storage medium Active CN108717406B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810443238.XA CN108717406B (en) 2018-05-10 2018-05-10 Text emotion analysis method and device and storage medium
PCT/CN2018/107725 WO2019214145A1 (en) 2018-05-10 2018-09-26 Text sentiment analyzing method, apparatus and storage medium

Publications (2)

Publication Number Publication Date
CN108717406A true CN108717406A (en) 2018-10-30
CN108717406B CN108717406B (en) 2021-08-24

Family

ID=63899574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810443238.XA Active CN108717406B (en) 2018-05-10 2018-05-10 Text emotion analysis method and device and storage medium

Country Status (2)

Country Link
CN (1) CN108717406B (en)
WO (1) WO2019214145A1 (en)

Families Citing this family (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818114B (en) * 2019-11-15 2024-05-24 阿里巴巴集团控股有限公司 Information classification method, detection method, computing device and storage medium
CN110929528B (en) * 2019-11-21 2023-09-05 腾讯科技(深圳)有限公司 Method, device, server and storage medium for sentence sentiment analysis
CN110991163B (en) * 2019-11-29 2023-09-19 达观数据有限公司 A document comparison and analysis method, device, electronic equipment and storage medium
CN111126066B (en) * 2019-12-13 2023-05-02 北京因特睿软件有限公司 Method and device for determining Chinese congratulation technique based on neural network
CN111062204B (en) * 2019-12-13 2023-08-22 北京因特睿软件有限公司 Text punctuation use error identification method and device based on machine learning
CN111209748B (en) * 2019-12-16 2023-10-24 合肥讯飞数码科技有限公司 Misspelled word recognition method, related equipment and readable storage media
CN111178068B (en) * 2019-12-25 2023-05-23 华中科技大学鄂州工业技术研究院 Method and device for evaluating furcation violence tendency based on dialogue emotion detection
CN111199150B (en) * 2019-12-30 2024-04-16 科大讯飞股份有限公司 Text segmentation method, related device and readable storage medium
CN115062338B (en) * 2019-12-31 2025-03-14 北京懿医云科技有限公司 Data desensitization method and device, electronic device and storage medium
CN111192692B (en) * 2020-01-02 2023-12-08 上海联影智能医疗科技有限公司 A method, device, electronic equipment and storage medium for determining entity relationships
CN111241290B (en) * 2020-01-19 2023-05-30 车智互联(北京)科技有限公司 Comment tag generation method and device and computing equipment
CN113254573B (en) * 2020-02-12 2024-08-20 北京嘀嘀无限科技发展有限公司 Text abstract generation method and device, electronic equipment and readable storage medium
CN111428467B (en) * 2020-02-19 2024-05-07 平安科技(深圳)有限公司 Method, device, equipment and storage medium for generating problem questions for reading and understanding
CN111414758B (en) * 2020-02-21 2023-10-20 平安科技(深圳)有限公司 Zero-pointer position detection method, device, equipment and computer-readable storage medium
CN111444339B (en) * 2020-02-29 2024-05-03 平安国际智慧城市科技股份有限公司 Text question difficulty labeling method and device and computer readable storage medium
CN111506726B (en) * 2020-03-18 2023-09-22 大箴(杭州)科技有限公司 Short text clustering method and device based on part-of-speech coding and computer equipment
CN111428496B (en) * 2020-03-24 2023-08-15 北京小米松果电子有限公司 Training method of text word segmentation model, word segmentation processing method and device and medium
CN111695337B (en) * 2020-04-29 2024-11-08 平安科技(深圳)有限公司 Method, device, equipment and medium for extracting professional terms in intelligent interviews
CN113590768B (en) * 2020-04-30 2023-10-27 北京金山数字娱乐科技有限公司 Training method and device for text relevance model, question answering method and device
CN113626587B (en) * 2020-05-08 2024-03-29 武汉金山办公软件有限公司 Text category identification method and device, electronic equipment and medium
CN113742478B (en) * 2020-05-29 2023-09-05 国家计算机网络与信息安全管理中心 Directional screening device and method for massive text data
CN111639177B (en) * 2020-06-04 2023-06-02 虎博网络技术(上海)有限公司 Text extraction method and device
CN111639185B (en) * 2020-06-04 2023-06-02 虎博网络技术(上海)有限公司 Relation information extraction method, device, electronic equipment and readable storage medium
CN113761904B (en) * 2020-06-05 2025-04-25 阿里巴巴集团控股有限公司 Text recognition model training method, device, electronic device and storage medium
CN111767728A (en) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 Short text classification method, apparatus, device and storage medium
CN111783469A (en) * 2020-06-29 2020-10-16 中国计量大学 A method for extracting text sentence features
CN111814453B (en) * 2020-07-09 2023-08-08 重庆大学 Fine-grained sentiment analysis method based on BiLSTM-TextCNN
CN111858933B (en) * 2020-07-10 2024-08-06 暨南大学 Hierarchical text emotion analysis method and system based on characters
CN111966827B (en) * 2020-07-24 2024-06-11 大连理工大学 Conversation Sentiment Analysis Method Based on Heterogeneous Bipartite Graph
CN112016309B (en) * 2020-09-04 2024-03-08 平安科技(深圳)有限公司 Extraction drug combination method, device, apparatus and storage medium
CN112069309B (en) * 2020-09-14 2024-03-15 腾讯科技(深圳)有限公司 Information acquisition method, information acquisition device, computer equipment and storage medium
CN112084769B (en) * 2020-09-14 2024-07-05 深圳前海微众银行股份有限公司 Dependency syntax model optimization method, apparatus, device and readable storage medium
CN112131888B (en) * 2020-09-23 2023-11-14 平安科技(深圳)有限公司 Methods, devices, equipment and storage media for analyzing semantic sentiments
CN112183053B (en) * 2020-10-10 2024-11-08 湖南快乐阳光互动娱乐传媒有限公司 A data processing method and device
CN112347790B (en) * 2020-11-06 2024-01-16 北京乐学帮网络技术有限公司 Text processing method, device, computer equipment and storage medium
CN112330408B (en) * 2020-11-13 2024-09-24 上海络昕信息科技有限公司 Product recommendation method and device and electronic equipment
CN112580366B (en) * 2020-11-30 2024-02-13 科大讯飞股份有限公司 Emotion recognition method, electronic device and storage device
CN112507082B (en) * 2020-12-16 2024-08-16 作业帮教育科技(北京)有限公司 Method and device for intelligently identifying improper text interaction and electronic equipment
CN112527963B (en) * 2020-12-17 2024-05-03 深圳市欢太科技有限公司 Dictionary-based multi-label emotion classification method and device, equipment and storage medium
CN112668343B (en) * 2020-12-22 2024-04-30 科大讯飞股份有限公司 Text rewriting method, electronic device, and storage device
CN112686018B (en) * 2020-12-23 2024-08-23 中国科学技术大学 Text segmentation method, device, equipment and storage medium
CN114662487A (en) * 2020-12-23 2022-06-24 苏州国双软件有限公司 Text segmentation method and device, electronic equipment and readable storage medium
CN113822514A (en) * 2020-12-23 2021-12-21 常州中吴网传媒有限公司 A method for quality control of all-media manuscripts
CN112541476B (en) * 2020-12-24 2023-09-29 西安交通大学 A method for identifying malicious web pages based on semantic feature extraction
CN112735428A (en) * 2020-12-27 2021-04-30 科大讯飞(上海)科技有限公司 Hot word acquisition method, voice recognition method and related equipment
CN112765444B (en) * 2021-01-08 2025-02-07 深圳前海微众银行股份有限公司 Method, device, equipment and storage medium for extracting target text fragments
CN112860887B (en) * 2021-01-18 2023-09-05 北京奇艺世纪科技有限公司 Text labeling method and device
CN113220887B (en) * 2021-05-31 2022-03-15 华南师范大学 A sentiment classification method using target knowledge to enhance the model
CN113204964B (en) * 2021-05-31 2024-03-08 平安科技(深圳)有限公司 Data processing method, system, electronic equipment and storage medium
CN113515630B (en) * 2021-06-10 2024-04-09 深圳数联天下智能科技有限公司 Triplet generation and verification method, device, electronic device and storage medium
CN113434630B (en) * 2021-06-25 2023-07-25 平安科技(深圳)有限公司 Customer service evaluation method, customer service evaluation device, terminal equipment and medium
CN113468292B (en) * 2021-06-29 2024-06-25 中国银联股份有限公司 Aspect-level emotion analysis method, device and computer-readable storage medium
CN113535813B (en) * 2021-06-30 2023-07-28 北京百度网讯科技有限公司 Data mining method and device, electronic equipment and storage medium
CN113407679B (en) * 2021-06-30 2023-10-03 竹间智能科技(上海)有限公司 Text topic mining methods, devices, electronic equipment and storage media
CN113536772A (en) * 2021-07-15 2021-10-22 浙江诺诺网络科技有限公司 Text processing method, device, equipment and storage medium
CN113658577B (en) * 2021-08-16 2024-06-14 腾讯音乐娱乐科技(深圳)有限公司 Speech synthesis model training method, audio generation method, equipment and medium
CN113919340B (en) * 2021-08-27 2024-08-13 北京邮电大学 Self-media language emotion analysis method based on unsupervised unregistered word recognition
CN113792541B (en) * 2021-09-24 2023-08-11 福州大学 An Aspect-Level Sentiment Analysis Method Introducing Mutual Information Regularizer
CN114239595B (en) * 2021-12-15 2024-05-10 平安科技(深圳)有限公司 Intelligent return visit list generation method, device, equipment and storage medium
CN114547234A (en) * 2022-01-17 2022-05-27 特斯联科技集团有限公司 Method, device, electronic device and medium for recognizing emotional sentences in text
CN114818685B (en) * 2022-04-21 2023-06-20 平安科技(深圳)有限公司 Keyword extraction method, device, electronic equipment and storage medium
CN114969345B (en) * 2022-06-16 2024-12-13 平安科技(深圳)有限公司 Intelligent news topic sentiment analysis method, device, equipment and storage medium
CN115080701B (en) * 2022-07-05 2025-08-01 上海找钢网信息科技股份有限公司 Unstructured statement analysis processing method, device, equipment and storage medium
CN115374276A (en) * 2022-08-09 2022-11-22 北京百度网讯科技有限公司 Emotion polarity determination method, device, equipment, storage medium and program product
CN115563987B (en) * 2022-10-17 2023-07-04 北京中科智加科技有限公司 Comment text analysis processing method
CN115600646B (en) * 2022-10-19 2023-10-03 北京百度网讯科技有限公司 Language model training methods, devices, media and equipment
CN115906835B (en) * 2022-11-23 2024-02-20 之江实验室 A method for learning text representation of Chinese questions based on clustering and contrastive learning
CN117150025B (en) * 2023-10-31 2024-01-26 湖南锦鳞智能科技有限公司 A data service intelligent identification system
CN117422071B (en) * 2023-12-19 2024-03-15 中南大学 Text term multiple segmentation annotation conversion method and device
CN117787270B (en) * 2023-12-27 2025-10-10 金叶天成(北京)科技有限公司 A lightweight Chinese keyword extraction method based on statistical features and word graph
CN118210880B (en) * 2024-05-21 2024-07-26 北京心企领航科技有限公司 AI emotion visual recognition method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014189400A1 (en) * 2013-05-22 2014-11-27 Axon Doo A method for diacritisation of texts written in latin- or cyrillic-derived alphabets
CN106569998A (en) * 2016-10-27 2017-04-19 浙江大学 Text named entity recognition method based on Bi-LSTM, CNN and CRF
CN106776581A (en) * 2017-02-21 2017-05-31 浙江工商大学 Subjective texts sentiment analysis method based on deep learning
US20170308523A1 (en) * 2014-11-24 2017-10-26 Agency For Science, Technology And Research A method and system for sentiment classification and emotion classification
CN107423284A (en) * 2017-06-14 2017-12-01 中国科学院自动化研究所 Merge the construction method and system of the sentence expression of Chinese language words internal structural information
CN107609009A (en) * 2017-07-26 2018-01-19 北京大学深圳研究院 Text sentiment analysis method, device, storage medium and computer equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106294845B (en) * 2016-08-19 2019-08-09 清华大学 Multi-emotion classification method and device based on weight learning and multi-feature extraction
CN106919673B (en) * 2017-02-21 2019-08-20 浙江工商大学 Text sentiment analysis system based on deep learning
CN107239439A (en) * 2017-04-19 2017-10-10 同济大学 Public sentiment sentiment classification method based on word2vec
CN107403017A (en) * 2017-08-09 2017-11-28 上海数旦信息技术有限公司 A kind of method that real-time news of intellectual analysis influences on financial market
CN107944014A (en) * 2017-12-11 2018-04-20 河海大学 A kind of Chinese text sentiment analysis method based on deep learning

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259138A (en) * 2018-11-15 2020-06-09 航天信息股份有限公司 Tax field short text emotion classification method and device
WO2020107840A1 (en) * 2018-11-28 2020-06-04 平安科技(深圳)有限公司 Sentence distance mapping method and apparatus based on machine learning, and computer device
CN109783800B (en) * 2018-12-13 2024-04-12 北京百度网讯科技有限公司 Emotion keyword acquisition method, device, equipment and storage medium
CN109829152A (en) * 2018-12-13 2019-05-31 深圳壹账通智能科技有限公司 Head portrait replacing options, device, computer equipment and storage medium
CN109783800A (en) * 2018-12-13 2019-05-21 北京百度网讯科技有限公司 Acquisition methods, device, equipment and the storage medium of emotion keyword
CN109918641A (en) * 2019-01-17 2019-06-21 平安城市建设科技(深圳)有限公司 Article theme ingredient breakdown method, apparatus, equipment and storage medium
CN111651996A (en) * 2019-03-04 2020-09-11 北京嘀嘀无限科技发展有限公司 Abstract generation method and device, electronic equipment and storage medium
CN109979592A (en) * 2019-03-25 2019-07-05 广东邮电职业技术学院 Mental health method for early warning, user terminal, server and system
CN111816211A (en) * 2019-04-09 2020-10-23 Oppo广东移动通信有限公司 Emotion recognition method, device, storage medium and electronic device
CN110222331B (en) * 2019-04-26 2024-05-14 平安科技(深圳)有限公司 Lie recognition method and device, storage medium and computer equipment
CN110222331A (en) * 2019-04-26 2019-09-10 平安科技(深圳)有限公司 Lie recognition methods and device, storage medium, computer equipment
CN110222182B (en) * 2019-06-06 2022-12-27 腾讯科技(深圳)有限公司 Statement classification method and related equipment
CN110222182A (en) * 2019-06-06 2019-09-10 腾讯科技(深圳)有限公司 A kind of statement classification method and relevant device
CN110334342B (en) * 2019-06-10 2024-02-09 创新先进技术有限公司 Word importance analysis method and device
CN110334342A (en) * 2019-06-10 2019-10-15 阿里巴巴集团控股有限公司 The analysis method and device of word importance
CN110442857B (en) * 2019-06-18 2024-05-10 平安科技(深圳)有限公司 Emotion intelligent judging method and device and computer readable storage medium
WO2020253042A1 (en) * 2019-06-18 2020-12-24 平安科技(深圳)有限公司 Intelligent sentiment judgment method and device, and computer readable storage medium
CN110442857A (en) * 2019-06-18 2019-11-12 平安科技(深圳)有限公司 Emotion intelligent determination method, device and computer readable storage medium
CN110263344A (en) * 2019-06-25 2019-09-20 名创优品(横琴)企业管理有限公司 A text sentiment analysis method, device and equipment based on a mixed model
WO2021012684A1 (en) * 2019-07-23 2021-01-28 中译语通科技股份有限公司 Method and system for establishing market sentiment monitoring system
CN110400173A (en) * 2019-07-23 2019-11-01 中译语通科技股份有限公司 Market sentiment monitoring system method for building up and system
CN110597961A (en) * 2019-09-18 2019-12-20 腾讯科技(深圳)有限公司 Text category labeling method and device, electronic equipment and storage medium
CN110597961B (en) * 2019-09-18 2023-10-27 腾讯云计算(北京)有限责任公司 Text category labeling method and device, electronic equipment and storage medium
CN110796565A (en) * 2019-10-14 2020-02-14 广州供电局有限公司 Analysis method and analysis system for supervision logs
CN111047353A (en) * 2019-11-27 2020-04-21 泰康保险集团股份有限公司 Data processing method and system and electronic equipment
CN111177308B (en) * 2019-12-05 2023-07-18 上海云洽信息技术有限公司 A Method for Identifying Emotions in Text Content
CN111177308A (en) * 2019-12-05 2020-05-19 上海云洽信息技术有限公司 Emotion recognition method for text content
CN111177402B (en) * 2019-12-13 2023-09-22 中移(杭州)信息技术有限公司 Evaluation method, device, computer equipment and storage medium based on word segmentation processing
CN111177402A (en) * 2019-12-13 2020-05-19 中移(杭州)信息技术有限公司 Evaluation method, device, computer equipment and storage medium based on word segmentation processing
CN111144127A (en) * 2019-12-25 2020-05-12 科大讯飞股份有限公司 Text semantic recognition method and model acquisition method thereof and related device
CN111666588B (en) * 2020-05-14 2023-06-23 武汉大学 A sentimental differential privacy protection method based on generative adversarial networks
CN111666588A (en) * 2020-05-14 2020-09-15 武汉大学 Emotion difference privacy protection method based on generation countermeasure network
CN111782803A (en) * 2020-06-05 2020-10-16 京东数字科技控股有限公司 Work order processing method and device, electronic equipment and storage medium
CN111832282A (en) * 2020-07-16 2020-10-27 平安科技(深圳)有限公司 External knowledge fused BERT model fine adjustment method and device and computer equipment
CN112036175A (en) * 2020-07-17 2020-12-04 苏宁金融科技(南京)有限公司 Domain text emotion recognition method and device, computer equipment and storage medium
CN112016296A (en) * 2020-09-07 2020-12-01 平安科技(深圳)有限公司 Sentence vector generation method, device, equipment and storage medium
CN112016296B (en) * 2020-09-07 2023-08-25 平安科技(深圳)有限公司 Sentence vector generation method, sentence vector generation device, sentence vector generation equipment and sentence vector storage medium
CN112528628B (en) * 2020-12-18 2024-02-02 北京一起教育科技有限责任公司 A text processing method, device and electronic equipment
CN112528628A (en) * 2020-12-18 2021-03-19 北京一起教育科技有限责任公司 Text processing method and device and electronic equipment
CN112732910A (en) * 2020-12-29 2021-04-30 华南理工大学 Cross-task text emotion state assessment method, system, device and medium
CN112732910B (en) * 2020-12-29 2024-04-16 华南理工大学 Cross-task text emotional state assessment method, system, device and medium
CN114911922A (en) * 2021-01-29 2022-08-16 中国移动通信有限公司研究院 Emotion analysis method, emotion analysis device and storage medium
CN114120978A (en) * 2021-11-29 2022-03-01 中国平安人寿保险股份有限公司 Emotion recognition model training and voice interaction method, device, equipment and medium
CN114120978B (en) * 2021-11-29 2025-04-25 中国平安人寿保险股份有限公司 Emotion recognition model training, voice interaction method, device, equipment and medium
CN115049018A (en) * 2022-07-21 2022-09-13 浙江极氪智能科技有限公司 Emotion analysis model training method, device, equipment and medium
CN115049018B (en) * 2022-07-21 2025-08-01 浙江极氪智能科技有限公司 Training method, device, equipment and medium of emotion analysis model
CN116205749A (en) * 2023-05-06 2023-06-02 深圳市秦保科技有限公司 Electronic insurance policy information data management method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
WO2019214145A1 (en) 2019-11-14
CN108717406B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN108717406A (en) Text mood analysis method, device and storage medium
CN113051356B (en) Open relation extraction method and device, electronic equipment and storage medium
CN107729309B (en) Deep learning-based Chinese semantic analysis method and device
CN113312453B (en) Model pre-training system for cross-language dialogue understanding
CN111709242B (en) Chinese punctuation mark adding method based on named entity recognition
CN111177326A (en) Key information extraction method and device based on fine labeling text and storage medium
CN108629043A (en) Extracting method, device and the storage medium of webpage target information
CN112417854A (en) Chinese document abstraction type abstract method
CN111177374A (en) Active learning-based question and answer corpus emotion classification method and system
CN109325165A (en) Internet public opinion analysis method, apparatus and storage medium
CN102929861B (en) Method and system for calculating text emotion index
CN109670041A (en) * 2018-11-29 2019-04-23 Noisy illegal short-text recognition method based on a dual-channel text convolutional neural network
CN109992782A (en) Legal documents name entity recognition method, device and computer equipment
CN108563638B (en) Microblog emotion analysis method based on topic identification and integrated learning
CN108363790A (en) For the method, apparatus, equipment and storage medium to being assessed
CN109299271A (en) Training sample generation, text data, public sentiment event category method and relevant device
CN110415071B (en) A method of comparing automotive competing products based on opinion mining analysis
CN110532563A (en) The detection method and device of crucial paragraph in text
CN104008091A (en) Sentiment value based web text sentiment analysis method
CN104850617B (en) Short text processing method and processing device
CN113486174B (en) Model training, reading understanding method and device, electronic equipment and storage medium
CN111291566A (en) An event subject identification method, device and storage medium
CN108228569A (en) A kind of Chinese microblog emotional analysis method based on Cooperative Study under the conditions of loose
CN111462752A (en) Client intention identification method based on attention mechanism, feature embedding and BI-L STM
CN109582788A (en) Comment spam training, recognition methods, device, equipment and readable storage medium storing program for executing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant