
CN109199411A - Case insider identification method based on model fusion - Google Patents

Case insider identification method based on model fusion Download PDF

Info

Publication number
CN109199411A
CN109199411A (application CN201811135018.7A; granted as CN109199411B)
Authority
CN
China
Prior art keywords
saccade
duration
fixation
time
total
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811135018.7A
Other languages
Chinese (zh)
Other versions
CN109199411B (en)
Inventor
唐闺臣
梁瑞宇
谢跃
徐梦圆
叶超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN201811135018.7A priority Critical patent/CN109199411B/en
Publication of CN109199411A publication Critical patent/CN109199411A/en
Application granted granted Critical
Publication of CN109199411B publication Critical patent/CN109199411B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103: Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B 5/1103: Detecting muscular movement of the eye, e.g. eyelid movement
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; Feature extraction

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Psychiatry (AREA)
  • Physiology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Fuzzy Systems (AREA)
  • Human Computer Interaction (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Dentistry (AREA)

Abstract

The invention discloses a case-insider identification method based on model fusion, comprising the following steps: extract 32-dimensional eye-movement features from each test subject while viewing a single picture; train support vector machine (SVM) model A on the 32-dimensional eye-movement features to identify each subject's verbal confidence for the single picture, and output the probabilities f1(xi) and f2(xi) for each subject; extract 110-dimensional eye-movement features from each subject while viewing a combined picture; train SVM model B on the 110-dimensional eye-movement features to identify each subject's verbal confidence for the combined picture, and output the probabilities g1(xi) and g2(xi); fuse the classifier probabilities of SVM models A and B with the product rule to obtain joint probabilities, and take the class with the largest joint probability for each subject as the final decision. The invention can effectively resist lie-detection countermeasures and improves algorithm efficiency.

Description

Case insider identification method based on model fusion
Technical field
The present invention relates to the technical field of interrogation analysis in criminal investigation, and in particular to a case-insider identification method based on model fusion.
Background art
In a criminal-investigation setting, the key to interrogating a suspect is to assess the suspect's abnormal emotional state, i.e. so-called "lie detection". An interrogator judges the suspect's psychological state by observing his or her behavior, and uses loopholes in the suspect's statements together with interrogation techniques to break down the suspect's psychological defenses and compel the truth. However, an ordinary person's ability to detect lies is close to guesswork and usually relies on intuition, so its accuracy is only slightly above chance; moreover, interrogation usually also depends on a small number of highly experienced hearing experts, which is clearly time-consuming and inefficient.
Because the psychological changes that occur when a person lies cause changes in certain physiological parameters (such as skin conductance, heart rate, blood pressure, respiration, brain waves, and voice), detecting these changes to assess whether a subject has knowledge of a case is an effective auxiliary means. In early research, using a polygraph was one of the most common methods of testing whether a suspect has knowledge of a case. However, the physiological indicators used by a polygraph are easily affected by many factors, including the subject's bodily functions and psychological state, the intensity of the task stimulus, and the examiner's skill.
In recent years, with the development of cognitive neuroscience, researchers can directly observe the neural activity of the relevant internal brain regions while a mental process occurs. Compared with traditional lie-detection techniques based on changes in external physiological activity, this is more objective and better reveals the intrinsic laws of lying, and it has become one of the development directions of lie-detection technology. However, the professional equipment required by such techniques is bulky and expensive, which limits their practicality, and corresponding countermeasures exist that can influence the test results.
It follows that lie-detection techniques based on the above physiological signals still have shortcomings in practice, mainly because: 1) the subject's cooperation is required: most physiological lie-detection methods, when collecting physiological parameters such as ECG, skin conductance, blood pressure, or brain waves, require the electrodes or sensor patches of a contact instrument to be attached somewhere on the subject's body, so the subject must cooperate; otherwise the subject can use covert countermeasures (such as wiggling the toes or letting the mind wander) to disturb the test results; 2) the measurement means are not covert: emotional stress is of great research significance in lie detection, but visible test equipment inevitably puts extra pressure on the subject, and the influence of the resulting emotional fluctuations on the measurement is hard to estimate. Although voice-based lie detection has some covertness, speech is easily affected by the external environment (for example dialects, accents, and interruptions), its technical difficulty is high, and research is still at an early stage. Therefore, an effective lie-detection method should be contactless and highly covert, and the analyzed signal should be easy to collect and process.
It can be seen that the existing case-insider identification methods still have inconveniences and defects and urgently need further improvement. To solve these problems, those skilled in the relevant arts have painstakingly sought solutions, but for a long time no applicable method has been developed, and general case-insider identification algorithms cannot properly solve the above problems; this is clearly an issue that practitioners urgently need to resolve. Compared with other lie-detection countermeasures, some eye-movement indicators cannot be consciously controlled, and deliberately controlling certain eye-movement indicators instead produces abnormal values of those indicators. Therefore, case-insider identification based on eye-movement indicators is feasible, and how to realize it is the problem to be solved.
Summary of the invention
To overcome the inconveniences and defects of the case-insider identification methods of the prior art, the case-insider identification method based on model fusion of the present invention solves the technical problems that prior-art methods are restricted by the subject's degree of cooperation, that the test method is not covert, and that test efficiency is low. It performs case-insider identification using eye-movement data, can effectively resist lie-detection countermeasures, and, by fusing models built on 32-dimensional and 110-dimensional eye-movement features, effectively exploits the subject's psychological manifestations under different modes and improves algorithm efficiency. The method is ingenious and novel, its identification accuracy is high, and it has good application prospects.
To achieve the above object, the technical scheme adopted by the invention is as follows:
A case-insider identification method based on model fusion comprises the following steps:
Step (A): extract the 32-dimensional eye-movement features of each subject while viewing a single picture.
Step (B): train support vector machine (SVM) model A on the 32-dimensional eye-movement features to identify each subject's verbal confidence for the single picture, and output the probabilities f1(xi) and f2(xi) for each subject, where xi denotes the i-th subject and f1 and f2 respectively denote the probabilities that the i-th subject is, or is not, an insider in the single-picture mode.
Step (C): extract the 110-dimensional eye-movement features of each subject while viewing a combined picture.
Step (D): train SVM model B on the 110-dimensional eye-movement features to identify each subject's verbal confidence for the combined picture, and output the probabilities g1(xi) and g2(xi), where g1 and g2 respectively denote the probabilities that the i-th subject is, or is not, an insider in the combined-picture mode.
Step (E): fuse the classifier probabilities of SVM models A and B with the product rule to obtain the joint probabilities f1(xi)g1(xi) and f2(xi)g2(xi), and take the class with the larger joint probability for each subject as the final decision.
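The product-rule fusion of step (E) can be sketched as follows. This is an illustrative implementation under the assumption that each model outputs one two-column probability row per subject; the function and variable names are mine, not the patent's.

```python
import numpy as np

def fuse_product_rule(probs_a, probs_b):
    """Product-rule fusion of two classifiers' class probabilities.

    probs_a, probs_b: arrays of shape (n_subjects, 2); column 0 holds
    f1(xi) / g1(xi) (insider) and column 1 holds f2(xi) / g2(xi)
    (not insider), as output by SVM models A and B.
    Returns the fused class index per subject (0 = insider, 1 = not).
    """
    joint = np.asarray(probs_a) * np.asarray(probs_b)  # f_k(x_i) * g_k(x_i), k = 1, 2
    return np.argmax(joint, axis=1)                    # class with the larger joint probability

# Illustrative probabilities for two subjects (not real data)
pa = np.array([[0.7, 0.3], [0.4, 0.6]])  # model A: single-picture mode
pb = np.array([[0.6, 0.4], [0.2, 0.8]])  # model B: combined-picture mode
print(fuse_product_rule(pa, pb))  # -> [0 1]
```

For the first subject the joint probabilities are 0.42 vs 0.12, so the fused decision is "insider"; for the second they are 0.08 vs 0.48, so "not an insider".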
In the aforementioned case-insider identification method based on model fusion, in step (A) the 32-dimensional eye-movement features comprise 6 blink statistics: blink count, blink frequency, total blink duration, average blink duration, maximum blink duration, and minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation duration, average fixation duration, maximum fixation duration, minimum fixation duration, total fixation deviation, average fixation deviation, maximum fixation deviation, minimum fixation deviation, and scan-path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade duration, average saccade duration, maximum saccade duration, minimum saccade duration, total saccade amplitude, average saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, average saccade velocity, maximum saccade velocity, minimum saccade velocity, and average saccade latency.
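Each of the three groups above repeats the same count-and-duration statistics (count, frequency, total, mean, maximum, and minimum duration). A minimal sketch of how they could be computed from a list of event durations is shown below; the patent does not specify the eye tracker or the frequency normalisation, so normalising by viewing time is my assumption.

```python
import numpy as np

def event_stats(durations_ms, viewing_time_s):
    """Count / frequency / total / mean / max / min duration for one
    event type (blinks, fixations, or saccades).
    Frequency is taken as events per second of viewing time
    (an assumed convention, not stated in the patent)."""
    d = np.asarray(durations_ms, dtype=float)
    return {
        "count": int(d.size),
        "freq_hz": d.size / viewing_time_s,
        "total_ms": float(d.sum()),
        "mean_ms": float(d.mean()),
        "max_ms": float(d.max()),
        "min_ms": float(d.min()),
    }

# Made-up blink durations (ms) for one subject over a 10 s picture
stats = event_stats([120, 90, 150], viewing_time_s=10)
print(stats["count"], stats["total_ms"])  # -> 3 360.0
```

Applying the same helper to fixation and saccade event lists, plus the deviation, amplitude, and velocity statistics, yields the full 6 + 11 + 15 = 32-dimensional vector.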
In the aforementioned case-insider identification method based on model fusion, in step (C) the 110-dimensional eye-movement features mean that the combined picture is divided into 10 areas of interest (AOIs), each with 11 features: the total fixation time within the AOI (net dwell time); the sum of fixation and saccade time within the AOI (dwell time); the sum of the entry-saccade time and the dwell time (glance duration); the sum of the exit-saccade time and the glance duration; the duration of the first fixation; the number of saccades entering the AOI from other areas; the fixation count; the ratio of net dwell time to total time; the ratio of dwell time to total time; the total fixation duration; and the ratio of total fixation time to total time.
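Several of the per-AOI quantities (net dwell time, dwell time, glance duration, and the time ratios) can be sketched as below. The argument names and the exact composition of glance duration follow my reading of the description and should be treated as assumptions.

```python
def aoi_features(fixations_ms, in_saccades_ms, out_saccades_ms, total_ms):
    """Sketch of several of the 11 per-AOI features.
    fixations_ms: durations of fixations inside the AOI (ms)
    in_saccades_ms: durations of saccades entering the AOI (ms)
    out_saccades_ms: durations of saccades leaving the AOI (ms)
    total_ms: total viewing time of the combined picture (ms)
    """
    net_dwell = sum(fixations_ms)            # fixation time only (net dwell time)
    dwell = net_dwell + sum(in_saccades_ms)  # fixations + saccades inside the AOI
    glance = sum(in_saccades_ms) + dwell     # entry-saccade time + dwell time
    return {
        "net_dwell_ms": net_dwell,
        "dwell_ms": dwell,
        "glance_duration_ms": glance,
        "exit_plus_glance_ms": sum(out_saccades_ms) + glance,
        "entry_count": len(in_saccades_ms),  # saccades into this AOI from other areas
        "fixation_count": len(fixations_ms),
        "net_dwell_ratio": net_dwell / total_ms,
        "dwell_ratio": dwell / total_ms,
    }

# Made-up events for one AOI over a 1 s viewing window
f = aoi_features([100, 200], [50], [30], total_ms=1000)
print(f["net_dwell_ms"], f["dwell_ratio"])  # -> 300 0.35
```

Computing these 11 values for each of the 10 AOIs and concatenating gives the 110-dimensional vector fed to model B.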
The beneficial effects of the present invention are as follows. The case-insider identification method based on model fusion solves the technical problems that prior-art methods are restricted by the subject's degree of cooperation, are not covert, and have low test efficiency. It performs case-insider identification with eye-movement data, effectively resists lie-detection countermeasures, and, by fusing models built on the 32-dimensional and 110-dimensional eye-movement features, effectively exploits the subject's psychological manifestations under different modes and improves algorithm efficiency. The method is ingenious and novel, its identification accuracy is high, and it has good application prospects.
Brief description of the drawings
Fig. 1 is a flow chart of the abnormal-emotion recognition method based on eye-movement data analysis of the present invention.
Detailed description of the embodiments
The present invention is further described below in conjunction with the accompanying drawing.
As shown in Fig. 1, the case-insider identification method based on model fusion of the present invention comprises the following steps:
Step (A): extract the 32-dimensional eye-movement features of each subject while viewing a single picture. The 32-dimensional features comprise 6 blink statistics: blink count, blink frequency, total blink duration, average blink duration, maximum blink duration, and minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation duration, average fixation duration, maximum fixation duration, minimum fixation duration, total fixation deviation, average fixation deviation, maximum fixation deviation, minimum fixation deviation, and scan-path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade duration, average saccade duration, maximum saccade duration, minimum saccade duration, total saccade amplitude, average saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, average saccade velocity, maximum saccade velocity, minimum saccade velocity, and average saccade latency.
Step (B): train support vector machine (SVM) model A on the 32-dimensional eye-movement features to identify each subject's verbal confidence for the single picture, and output the probabilities f1(xi) and f2(xi) for each subject, where xi denotes the i-th subject and f1 and f2 respectively denote the probabilities that the i-th subject is, or is not, an insider in the single-picture mode.
Step (C): extract the 110-dimensional eye-movement features of each subject while viewing a combined picture. The 110-dimensional features mean that the combined picture is divided into 10 areas of interest (AOIs), each with 11 features: the total fixation time within the AOI (net dwell time); the sum of fixation and saccade time within the AOI (dwell time); the sum of the entry-saccade time and the dwell time (glance duration); the sum of the exit-saccade time and the glance duration; the duration of the first fixation; the number of saccades entering the AOI from other areas; the fixation count; the ratio of net dwell time to total time; the ratio of dwell time to total time; the total fixation duration; and the ratio of total fixation time to total time.
Step (D): train SVM model B on the 110-dimensional eye-movement features to identify each subject's verbal confidence for the combined picture, and output the probabilities g1(xi) and g2(xi), where g1 and g2 respectively denote the probabilities that the i-th subject is, or is not, an insider in the combined-picture mode.
Step (E): fuse the classifier probabilities of SVM models A and B with the product rule to obtain the joint probabilities f1(xi)g1(xi) and f2(xi)g2(xi), and take the class with the larger joint probability for each subject as the final decision; this final decision is the case-insider identification result.
The recognition performance of the case-insider identification method based on model fusion of the invention is shown in Table 1. The compared algorithms are the support vector machine (SVM), artificial neural network (ANN), decision tree (DT), and random forest (RF). As Table 1 shows, for single pictures the RF algorithm scores highest and the ANN algorithm lowest, while for combined pictures the ANN algorithm scores highest and the RF algorithm lowest; the recognition rates of the SVM and DT algorithms are moderate in both modes, making them suitable for both models. After applying the model-fusion strategy, the recognition rate of the support vector machine (SVM) reaches 86.1%, an improvement of 9.2% and 17.6% respectively over the highest recognition rates of the two modes. Therefore, fusing the classifier probabilities of SVM models A and B with the product rule to obtain the joint probabilities f1(xi)g1(xi) and f2(xi)g2(xi), and taking the class with the larger joint probability for each subject as the final decision, greatly improves the accuracy of the decision. Moreover, the selection of the 32-dimensional and 110-dimensional eye-movement features of the invention is very reasonable and can accurately reflect all indicators of the eye-movement process.
Table 1: Comparison of algorithm recognition rates under the two modes
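A comparison of the four baseline classifiers like the one summarised in Table 1 could be run as follows. This is a hedged sketch using scikit-learn and synthetic stand-in data, since the patent's eye-movement dataset is not public; the printed accuracies are therefore not the values reported in Table 1.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))    # stand-in for 32-dim single-picture features
y = rng.integers(0, 2, size=120)  # insider / non-insider labels

models = {
    "SVM": SVC(probability=True),  # probability=True enables predict_proba, needed for fusion
    "ANN": MLPClassifier(max_iter=500),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold cross-validated accuracy
    print(f"{name}: {acc:.3f}")
```

With the real features, the same loop would reproduce the per-mode comparison, and the two fitted SVMs would supply the probability arrays consumed by the product-rule fusion of step (E).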
In conclusion case insider's recognition methods of the invention based on Model Fusion, solves case in the prior art The technical problems such as the examined people's degree of cooperation of part insider's recognition methods restricts, test method is not concealed, and testing efficiency is low, use Eye movement data carries out case insider identification, can effectively inhibit anti-means of detecting a lie, and using 32 eye movement characteristics and 110 dimension eye movements Characteristic model blending algorithm, the subject Psychological Manifestations being effectively utilized under different mode improve efficiency of algorithm, and method is ingenious new Grain husk, identification accuracy is high, has a good application prospect.
The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments, which, together with the description, merely illustrate the principles of the invention. Various changes and improvements may be made to the invention without departing from its spirit and scope, and all such changes and improvements fall within the protection scope of the claimed invention. The claimed scope of the invention is defined by the appended claims and their equivalents.

Claims (3)

1. A case-insider identification method based on model fusion, characterized by comprising the following steps:
Step (A): extracting the 32-dimensional eye-movement features of each subject while viewing a single picture;
Step (B): training support vector machine model A on the 32-dimensional eye-movement features to identify each subject's verbal confidence for the single picture, and outputting the probabilities f1(xi) and f2(xi) for each subject, where xi denotes the i-th subject and f1 and f2 respectively denote the probabilities that the i-th subject is, or is not, an insider in the single-picture mode;
Step (C): extracting the 110-dimensional eye-movement features of each subject while viewing a combined picture;
Step (D): training support vector machine model B on the 110-dimensional eye-movement features to identify each subject's verbal confidence for the combined picture, and outputting the probabilities g1(xi) and g2(xi), where xi denotes the i-th subject and g1 and g2 respectively denote the probabilities that the i-th subject is, or is not, an insider in the combined-picture mode;
Step (E): fusing the classifier probabilities of support vector machine models A and B with the product rule to obtain the joint probabilities f1(xi)g1(xi) and f2(xi)g2(xi), and taking the class with the larger joint probability for each subject as the final decision.

2. The case-insider identification method based on model fusion according to claim 1, characterized in that in step (A) the 32-dimensional eye-movement features comprise 6 blink statistics: blink count, blink frequency, total blink duration, average blink duration, maximum blink duration, and minimum blink duration; 11 fixation statistics: fixation count, fixation frequency, total fixation duration, average fixation duration, maximum fixation duration, minimum fixation duration, total fixation deviation, average fixation deviation, maximum fixation deviation, minimum fixation deviation, and scan-path length; and 15 saccade statistics: saccade count, saccade frequency, total saccade duration, average saccade duration, maximum saccade duration, minimum saccade duration, total saccade amplitude, average saccade amplitude, maximum saccade amplitude, minimum saccade amplitude, total saccade velocity, average saccade velocity, maximum saccade velocity, minimum saccade velocity, and average saccade latency.

3. The case-insider identification method based on model fusion according to claim 1, characterized in that in step (C) the 110-dimensional eye-movement features mean that the combined picture is divided into 10 areas of interest, each with 11 features: the total fixation time within the area of interest (net dwell time); the sum of fixation and saccade time within the area of interest (dwell time); the sum of the entry-saccade time and the dwell time (glance duration); the sum of the exit-saccade time and the glance duration; the duration of the first fixation; the number of saccades entering the area of interest from other areas; the fixation count; the ratio of net dwell time to total time; the ratio of dwell time to total time; the total fixation duration; and the ratio of total fixation time to total time.
CN201811135018.7A 2018-09-28 2018-09-28 Case insider identification method based on model fusion Active CN109199411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811135018.7A CN109199411B (en) 2018-09-28 2018-09-28 Case insider identification method based on model fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811135018.7A CN109199411B (en) 2018-09-28 2018-09-28 Case insider identification method based on model fusion

Publications (2)

Publication Number Publication Date
CN109199411A true CN109199411A (en) 2019-01-15
CN109199411B CN109199411B (en) 2021-04-09

Family

ID=64981889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811135018.7A Active CN109199411B (en) 2018-09-28 2018-09-28 Case insider identification method based on model fusion

Country Status (1)

Country Link
CN (1) CN109199411B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110327061A * 2019-08-12 2019-10-15 北京七鑫易维信息技术有限公司 Personality determination apparatus, method, and device based on eye-tracking technology
CN110693509A (en) * 2019-10-17 2020-01-17 中国人民公安大学 Case correlation determination method and device, computer equipment and storage medium
CN110956143A (en) * 2019-12-03 2020-04-03 交控科技股份有限公司 Abnormal behavior detection method and device, electronic equipment and storage medium
CN111568367A * 2020-05-14 2020-08-25 中国民航大学 Method for identifying and quantifying saccadic intrusions

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005047211A1 (en) * 2005-10-01 2007-04-05 Carl Zeiss Meditec Ag Mammal or human eye movement detecting system, has detection device generating independent detection signal using radiation from spot pattern, and control device evaluating signal for determining data about movement of eyes
CN101098241A (en) * 2006-06-26 2008-01-02 腾讯科技(深圳)有限公司 Method and system for implementing virtual image
WO2009116043A1 (en) * 2008-03-18 2009-09-24 Atlas Invest Holdings Ltd. Method and system for determining familiarity with stimuli
US7792335B2 (en) * 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
CN202060785U (en) * 2011-03-31 2011-12-07 上海天岸电子科技有限公司 Human eye pupil lie detector
CN202472688U * 2011-12-03 2012-10-03 辽宁科锐科技有限公司 Interrogation-assisting judgment and analysis instrument based on eyeball characteristics
CN103116763A (en) * 2013-01-30 2013-05-22 宁波大学 Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics
CN103211605A (en) * 2013-05-14 2013-07-24 重庆大学 Psychological testing system and method
CN203379122U (en) * 2013-07-26 2014-01-08 蔺彬涛 Wireless electroencephalogram and eye movement polygraph
CN105147248A (en) * 2015-07-30 2015-12-16 华南理工大学 Physiological information-based depressive disorder evaluation system and evaluation method thereof
US20160132726A1 (en) * 2014-05-27 2016-05-12 Umoove Services Ltd. System and method for analysis of eye movements using two dimensional images
CN206285117U * 2016-08-31 2017-06-30 北京新科永创科技有限公司 Intelligent interrogation terminal
CN106999111A (en) * 2014-10-01 2017-08-01 纽洛斯公司 System and method for detecting invisible human emotion
CN107480716A (en) * 2017-08-15 2017-12-15 安徽大学 Method and system for identifying saccade signal by combining EOG and video
WO2018005594A1 (en) * 2016-06-28 2018-01-04 Google Llc Eye gaze tracking using neural networks
CN108108715A (en) * 2017-12-31 2018-06-01 厦门大学 It is inspired based on biology and depth attribute learns the face aesthetic feeling Forecasting Methodology being combined
US20180268733A1 (en) * 2017-03-15 2018-09-20 International Business Machines Corporation System and method to teach and evaluate image grading performance using prior learned expert knowledge base
CN109063551A (en) * 2018-06-20 2018-12-21 新华网股份有限公司 Speech authenticity testing method and system

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005047211A1 (en) * 2005-10-01 2007-04-05 Carl Zeiss Meditec Ag Mammal or human eye movement detecting system, has detection device generating independent detection signal using radiation from spot pattern, and control device evaluating signal for determining data about movement of eyes
US7792335B2 (en) * 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
CN101098241A (en) * 2006-06-26 2008-01-02 Tencent Technology (Shenzhen) Co., Ltd. Method and system for implementing virtual image
WO2009116043A1 (en) * 2008-03-18 2009-09-24 Atlas Invest Holdings Ltd. Method and system for determining familiarity with stimuli
CN202060785U (en) * 2011-03-31 2011-12-07 Shanghai Tian'an Electronic Technology Co., Ltd. Human eye pupil lie detector
CN202472688U (en) * 2011-12-03 2012-10-03 Liaoning Kerui Technology Co., Ltd. Interrogation-assisting judgment and analysis meter based on eyeball characteristics
CN103116763A (en) * 2013-01-30 2013-05-22 Ningbo University Live-face detection method based on HSV (hue, saturation, value) color space statistical characteristics
CN103211605A (en) * 2013-05-14 2013-07-24 Chongqing University Psychological testing system and method
CN203379122U (en) * 2013-07-26 2014-01-08 Lin Bintao Wireless electroencephalogram and eye movement polygraph
US20160132726A1 (en) * 2014-05-27 2016-05-12 Umoove Services Ltd. System and method for analysis of eye movements using two dimensional images
CN106999111A (en) * 2014-10-01 2017-08-01 Nuralogix Corporation System and method for detecting invisible human emotion
CN105147248A (en) * 2015-07-30 2015-12-16 South China University of Technology Physiological information-based depressive disorder evaluation system and evaluation method thereof
WO2018005594A1 (en) * 2016-06-28 2018-01-04 Google Llc Eye gaze tracking using neural networks
CN206285117U (en) * 2016-08-31 2017-06-30 Beijing Xinke Yongchuang Technology Co., Ltd. Intelligent interrogation terminal
US20180268733A1 (en) * 2017-03-15 2018-09-20 International Business Machines Corporation System and method to teach and evaluate image grading performance using prior learned expert knowledge base
CN107480716A (en) * 2017-08-15 2017-12-15 Anhui University Method and system for identifying saccade signal by combining EOG and video
CN108108715A (en) * 2017-12-31 2018-06-01 Xiamen University Facial aesthetics prediction method combining biologically inspired features and deep attribute learning
CN109063551A (en) * 2018-06-20 2018-12-21 Xinhuanet Co., Ltd. Speech authenticity testing method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C. Meng and X. Zhao: "Webcam-Based Eye Movement Analysis Using CNN", IEEE Access *
David A. Leopold et al.: "Multistable phenomena: changing views in perception", Trends in Cognitive Sciences *
Ren Yantao et al.: "Construction of a lie-detection model based on eye-movement tracking technology", Journal of Criminal Investigation Police University of China *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110327061A (en) * 2019-08-12 2019-10-15 Beijing 7invensun Technology Co., Ltd. Personality determination device, method and apparatus based on eye-tracking technology
CN110693509A (en) * 2019-10-17 2020-01-17 People's Public Security University of China Case correlation determination method and device, computer equipment and storage medium
CN110693509B (en) * 2019-10-17 2022-04-05 People's Public Security University of China Case correlation determination method and device, computer equipment and storage medium
CN110956143A (en) * 2019-12-03 2020-04-03 Traffic Control Technology Co., Ltd. Abnormal behavior detection method and device, electronic equipment and storage medium
CN111568367A (en) * 2020-05-14 2020-08-25 Civil Aviation University of China Method for identifying and quantifying saccadic intrusions
CN111568367B (en) * 2020-05-14 2023-07-21 Civil Aviation University of China A method to identify and quantify saccadic intrusions

Also Published As

Publication number Publication date
CN109199411B (en) 2021-04-09

Similar Documents

Publication Publication Date Title
Zhang et al. Multimodal depression detection: Fusion of electroencephalography and paralinguistic behaviors using a novel strategy for classifier ensemble
CN112259237B (en) Depression evaluation system based on multi-emotion stimulus and multi-stage classification model
Sharma et al. Objective measures, sensors and computational techniques for stress recognition and classification: A survey
Zhu et al. Detecting emotional reactions to videos of depression
CN109199412B (en) Abnormal emotion recognition method based on eye movement data analysis
Sulaiman et al. EEG-based stress features using spectral centroids technique and k-nearest neighbor classifier
CN109199411A (en) Case insider's recognition methods based on Model Fusion
Sengupta et al. A multimodal system for assessing alertness levels due to cognitive loading
Berbano et al. Classification of stress into emotional, mental, physical and no stress using electroencephalogram signal analysis
Moscato et al. Automatic pain assessment on cancer patients using physiological signals recorded in real-world contexts
WO2024108669A1 (en) System and method for wakeful state detection
CN117770821A Human-factors-based intelligent cockpit driver fatigue detection method, device, vehicle and medium
Anandhi et al. Time domain analysis of heart rate variability signals in valence recognition for children with autism spectrum disorder (asd)
Ali et al. LSTM-based electroencephalogram classification on autism spectrum disorder
Dharia et al. Multimodal deep learning model for subject-independent EEG-based emotion recognition
Altaf et al. Machine learning approach for stress detection based on alpha-beta and theta-beta ratios of EEG signals
CN114366102A (en) Multi-mode nervous emotion recognition method, device, equipment and storage medium
CN112155577B (en) A social pressure detection method, device, computer equipment and storage medium
CN119272125A (en) A multimodal fusion feature emotion recognition method
CN113729729A (en) Early detection system of schizophrenia based on graph neural network and brain network
Shing et al. Multistage anxiety state recognition based on eeg signal using safe-level smote
Reches et al. A novel ERP pattern analysis method for revealing invariant reference brain network models
Nirde et al. EEG mental arithmetic task levels classification using machine learning and deep learning algorithms
Chow et al. Classifying document categories based on physiological measures of analyst responses
Zhu et al. Visceral versus verbal: Can we see depression?

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant