
CN111222837A - Intelligent interviewing method, system, equipment and computer storage medium - Google Patents


Info

Publication number
CN111222837A
CN111222837A
Authority
CN
China
Prior art keywords
candidate
information
text information
resume
quality model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910968962.9A
Other languages
Chinese (zh)
Inventor
刘志龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201910968962.9A priority Critical patent/CN111222837A/en
Publication of CN111222837A publication Critical patent/CN111222837A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/105Human resources
    • G06Q10/1053Employment or hiring

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An embodiment of the invention provides an intelligent interviewing method comprising the following steps: acquiring a candidate's resume information, verifying the authenticity of the resume information through the application programming interface of a preset website, and recording the verification result; receiving the candidate's voice data returned from the interview site, recognizing the text information corresponding to the voice data, annotating the text information with emotion labels, and analyzing the annotated text information to obtain a quality model of the candidate; receiving the test-question answers uploaded by the candidate, checking their correctness, obtaining answer scores, and recording the professional grade corresponding to the scores; and performing a weighted operation over the recorded verification results for the candidate to obtain the candidate's final evaluation result. The invention saves the time cost of manual interviews and improves the accuracy with which candidates are evaluated.

Description

Intelligent interviewing method, system, equipment and computer storage medium
Technical Field
The embodiment of the invention relates to the field of human-computer interaction, in particular to an intelligent interview method, an intelligent interview system, computer equipment and a computer-readable storage medium.
Background
Recruitment is an important part of human-resource management. During each hiring peak, recruiters must screen, from a large pool of candidates, the talents suited to the corresponding posts of a company, and enterprises bear considerable time and labor costs in doing so. At present, an enterprise can use software to apply coarse screening conditions such as education and work experience, eliminating some applicants and reducing the corresponding time cost. However, a candidate's personality, communication ability, and the stress tolerance required by some posts cannot be truly reflected in a resume, so manual interviews are still needed to confirm a candidate's communication skills and stress tolerance, and labor costs remain high.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide an intelligent interviewing method, system, computer device and computer-readable storage medium, which can more accurately evaluate the matching degree of interviewing candidates with respect to the positions to be interviewed, so as to replace the manual interviewing link and save labor cost.
In order to achieve the above object, an embodiment of the present invention provides an intelligent interview method, including the following steps:
acquiring candidate resume information uploaded by an interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and recording a resume information verification result;
receiving voice data of the candidate person returned by the interview site terminal, identifying text information corresponding to the voice data, carrying out emotion marking on the text information, and analyzing the marked text information to obtain a quality model of the candidate person;
receiving test question answers uploaded by the candidate terminal, checking the correctness of the test question answers, obtaining answer scores, and recording professional grades corresponding to the scores;
and performing weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate, and sending the evaluation result to the interviewer terminal.
Preferably, the steps of acquiring candidate resume information uploaded by the interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information, and recording the resume information verification result include:
acquiring candidate resume information uploaded by the interviewer terminal;
calling the real historical data corresponding to the candidate in a preset website database through the application programming interface of the preset website;
and extracting specified entries from the candidate's resume information, verifying the correctness of each specified entry against the real historical data, and storing the verification result of each entry.
Preferably, the step of recognizing text information corresponding to the voice data and performing emotion labeling on the text information includes:
recognizing the voice data and generating corresponding text information;
performing word segmentation processing on the generated text information, and calculating the emotion score of each word segmentation;
and taking a single sentence as a unit, carrying out statistical calculation on the emotion scores of the word segments in the sentence to obtain the emotion score of each sentence and endowing the emotion score of each sentence with corresponding labels.
Preferably, the step of recognizing text information corresponding to the voice data and performing emotion labeling on the text information further includes:
recognizing the voice data and generating corresponding text information;
identifying an emotional tendency field in the text information, and performing frequency calculation on the emotional tendency field;
and assigning weights to the emotional-tendency fields and their frequency values with reference to a preset corpus-analysis library, calculating a final emotion value, and giving the text information a corresponding emotion label according to the final emotion value.
Preferably, the step of emotion labeling the text information includes:
and searching a relation table between the prestored emotion scores and the emotion labels to obtain the emotion labels corresponding to the sentences, and adding storage address pointers of the corresponding emotion labels to the head or tail of the data of each sentence for storage.
Preferably, the step of analyzing the labeled text information to obtain the quality model of the candidate includes:
searching a quality-model set in a preset quality-model library according to the interview question corresponding to the text information;
and calculating the degree of match between the labeled text information and each model in the set, and selecting the model with the highest matching degree as the candidate's quality model.
Preferably, performing the weighted operation according to the resume information verification result, the quality model, and the professional grade to obtain the candidate's final evaluation result, and sending the evaluation result to the interviewer terminal, includes:
assigning weight values to the resume verification result, the quality-model information, and the professional-grade information, and calculating a final evaluation score, wherein the resume verification weight is greater than the quality-model weight, and the quality-model weight is greater than the professional-grade weight;
and judging whether the final evaluation score is larger than a preset threshold value or not, and if so, defining the candidate as being capable of being recorded.
In order to achieve the above object, an embodiment of the present invention further provides an intelligent interview system, including:
the resume verification module is used for acquiring candidate resume information uploaded by the interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and recording a resume information verification result;
the quality model screening module is used for receiving the voice data of the candidate person returned by the interview site terminal, identifying text information corresponding to the voice data, carrying out emotion marking on the text information, and analyzing the marked text information to obtain a quality model of the candidate person;
the answer rating module is used for receiving the test question answers uploaded by the candidate terminal, checking the correctness of the test question answers, obtaining answer scores and recording professional grades corresponding to the scores;
and the weighted evaluation module is used for carrying out weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate and sending the evaluation result to the interviewer terminal.
In order to achieve the above object, an embodiment of the present invention further provides a computer device, where the computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the computer program is executed by the processor, the computer device implements the steps of the intelligent interview method as described above.
To achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, where the computer program is executable by at least one processor, so as to cause the at least one processor to execute the steps of the intelligent interview method.
In addition, in the intelligent interviewing method, intelligent interviewing system, computer equipment, and computer-readable storage medium, emotion analysis and candidate quality-model selection are configured for the question-and-answer step, so that the candidate's evaluation result matches the post applied for (for example, regarding work stress tolerance). The evaluation result is thereby more accurate, the time cost of manual interviews is saved, and the accuracy of candidate selection is improved.
Drawings
FIG. 1 is a flow chart of the steps corresponding to the embodiment of the intelligent interviewing method of the invention;
FIG. 2 is a schematic flowchart of step S100 in the first embodiment of the intelligent interview method according to the present invention;
FIG. 3 is a schematic flowchart of step S200 according to a first embodiment of the intelligent interview method;
fig. 4 is a schematic flow chart of another embodiment of the step S200 in the first embodiment of the intelligent interview method of the invention;
FIG. 5 is a flowchart illustrating a step S200 according to an embodiment of the intelligent interviewing method;
FIG. 6 is a flowchart illustrating a step S400 of the intelligent interview method according to the present invention;
FIG. 7 is a schematic diagram of a second program module of the intelligent interview system according to the embodiment of the invention;
fig. 8 is a schematic diagram of a hardware structure of a third embodiment of the computer apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that although the terms first, second, etc. may be used to describe the designated key in embodiments of the present invention, the designated key should not be limited to these terms. These terms are only used to distinguish specified keywords from each other. For example, the first specified keyword may also be referred to as the second specified keyword, and similarly, the second specified keyword may also be referred to as the first specified keyword, without departing from the scope of embodiments of the present invention.
The word "if" as used herein may be interpreted as referring to "at … …" or "when … …" or "corresponding to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if determined" or "if detected (a stated condition or time)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
Example one
With reference to FIG. 1:
step S100, candidate resume information uploaded by an interviewer terminal is obtained, information on a preset website is obtained through an application programming interface of the preset website to verify the authenticity of the resume information, and resume information verification results are recorded.
In the recruitment process, phenomena such as resume falsification are inevitable, so it is necessary to verify information such as the educational background in a candidate's resume.
For example, the current interview candidate is Zhang San, who has uploaded resume information claiming a bachelor's degree from Peking University. The name "Zhang San" is extracted from the resume information as a retrieval element, the API (application programming interface) query interface of the student-information network is called, the real student-status information corresponding to Zhang San is acquired from the student-information-network database, and it is checked against the resume uploaded by the candidate.
Step S200, receiving the voice data of the candidate returned by the interview site terminal, recognizing the text information corresponding to the voice data, performing emotion annotation on the text information, and analyzing the annotated text information to obtain a quality model of the candidate.
Based on the idea of replacing manual interviews, communication with the candidate is realized through intelligent interaction. An audio collector such as a microphone and an audio player such as a loudspeaker are arranged at the interview site. The processing unit sends preset basic questions to the interview site and presents them to the candidate by displaying text on a screen or playing the questions through the loudspeaker. When the candidate answers, the microphone feeds the collected voice data back to the processing unit over a transmission line, and the processing unit restores the analog-signal voice data and recognizes the text information in it.
After the voice text information is recognized, emotion analysis is performed on it and emotion annotations are added. Emotion analysis, also called emotion recognition in the computing field, is the process of analyzing, processing, generalizing, and reasoning over subjective, emotionally colored text. The invention provides three emotion-recognition modes tailored to an intelligent interview system; they are explained in the following paragraphs.
After the text information is subjected to emotion analysis, emotion marks are added to the text information according to the analysis result, and the emotion marks are added to the text information by data such as specific numbers, characters and expression pictures so as to assist other processing units or modules in identifying emotion content contained in the text information. Emotion annotations are typically only a few bytes. The basic unit of emotion labeling may be a word, a sentence, a paragraph, or even the entire text, which is not limited by the present invention.
The emotion annotations serve as a reference for selecting the candidate's quality model, which is a set of data reflecting non-physical attributes of the candidate such as psychological traits and emotional tendencies.
step 300, receiving the test question answers uploaded by the candidate terminal, checking the correctness of the test question answers, obtaining answer scores, and recording the professional grades corresponding to the scores.
Professional competence is also an indispensable criterion for every post, and the best way to check it is a written test.
In the test-answering step, the candidate answers a preset test paper on computer equipment and uploads the answers when finished. The processing unit obtains the uploaded answers, checks the candidate's answer data against preset reference data, and calculates the score of the answer data.
Corresponding professional-grade ratings can be set for the test scores. As an example of a rating policy, a score of 90-100 is rated A, 70-89 is rated B, and 40-69 is rated C. Whether this policy is applied is determined by the actual demand scenario; the policy is offered only to cover more demand scenarios and is not limiting.
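The example rating policy above can be sketched as follows. The treatment of the 90 and 70 boundaries, and the grade 'D' for scores below 40, are assumptions made for illustration:

```java
public class GradeRating {
    // Map an answer score to a professional grade using the example
    // policy from the text: 90-100 -> A, 70-89 -> B, 40-69 -> C.
    // Scores below 40 fall into grade 'D' here, an added assumption.
    public static char grade(int score) {
        if (score >= 90) return 'A';
        if (score >= 70) return 'B';
        if (score >= 40) return 'C';
        return 'D';
    }

    public static void main(String[] args) {
        System.out.println(grade(95)); // prints A
    }
}
```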
In addition, the sequence of step 300, step 200 and step 100 may be disordered, and the present invention does not limit the sequence of these three steps.
Step S400, carrying out weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate, and sending the evaluation result to the interviewer terminal.
Specifically, the verification results may include the resume verification result, the selected candidate quality model, and the professional-grade result from the previous steps, and may further include other additional results, for example: whether the candidate's salary expectation meets the budget, or whether the candidate is stable as judged from the reason for leaving a previous job. A technician can add verification steps and verification parameters according to the demand scenario.
A weight value is assigned to each verification result, and a weighted operation generates the candidate's final evaluation result.
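A minimal sketch of this weighted operation. The specific weight values and threshold are assumptions; only their ordering (resume > quality model > professional grade) is taken from the preferred embodiment:

```java
public class WeightedEvaluation {
    // Illustrative weights respecting the stated ordering:
    // resume weight > quality-model weight > grade weight.
    static final double W_RESUME = 0.5, W_MODEL = 0.3, W_GRADE = 0.2;

    // Weighted sum of the three per-step scores (each on a 0-100 scale here).
    public static double finalScore(double resumeScore,
                                    double modelScore,
                                    double gradeScore) {
        return W_RESUME * resumeScore + W_MODEL * modelScore
                + W_GRADE * gradeScore;
    }

    // A candidate is admissible when the final score exceeds the threshold.
    public static boolean admissible(double score, double threshold) {
        return score > threshold;
    }

    public static void main(String[] args) {
        double s = finalScore(90, 80, 70);
        System.out.println(s + " admissible=" + admissible(s, 75));
    }
}
```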
The invention replaces the traditional manual interview link by utilizing the design of computer equipment and intelligent interaction, and in addition, the invention is provided with the steps of sentiment analysis and candidate quality model selection aiming at the question-answer link, so that the evaluation result of the candidate is more matched with the corresponding applied post, for example, the working pressure resistance and the like, the evaluation result is more accurate, the time cost of manual interview is saved, and the selection accuracy of the candidate is improved.
Optionally, referring to fig. 2, the step S100 of obtaining candidate resume information uploaded by the interviewer terminal, obtaining information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information, and recording a resume information verification result includes:
step S110 obtains candidate resume information uploaded by the interviewer terminal.
The candidate's resume information may be obtained from the candidate's own submission or pulled from a related recruitment website. Generally, however, the resume a candidate uploads for an interview is more comprehensive than the one on a recruitment website, so the resume information may also be obtained by scanning the paper resume submitted at the interview to generate a PDF (portable document format) file; the processing unit then recognizes each text field in the PDF file for the subsequent resume-verification step.
Step S120, calling real historical data corresponding to the candidate in a preset website database through an application programming interface of a preset website;
Verifying the authenticity of a candidate's resume requires real reference data for comparison, for example from officially certified database websites such as the student-information network (CHSI). An application programming interface is also called an API in computing. Specifically, the candidate's name is used as the query condition, and the student-information-network database interface is called to acquire the educational-background information corresponding to that name.
Step S130 extracts a designated entry in the candidate resume information, verifies the correctness of the designated entry based on the real resume data, and stores the verification result of each entry.
Illustratively, the candidate's resume states a bachelor's degree from Peking University. The field "Peking University" is extracted, each text field in the reference information pulled from the student-information network is traversed, and the field's authenticity is verified. Simple code is provided to aid understanding:
if ("Peking University".equals(value))
    System.out.println("candidate resume entry is true");
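A slightly fuller sketch of the same comparison. The real system would query the CHSI API; here the authoritative record is mocked as an in-memory map, and the class, field, and value names are illustrative:

```java
import java.util.Map;

public class ResumeCheck {
    // Compare one claimed resume entry against the authoritative record
    // (mocked here; in practice this record comes from the CHSI API).
    public static boolean verifyEntry(Map<String, String> official,
                                      String field, String claimed) {
        return claimed.equals(official.get(field));
    }

    public static void main(String[] args) {
        Map<String, String> official = Map.of(
                "name", "Zhang San",
                "degree", "Bachelor, Peking University");
        // Claimed entry matches the reference record, so it verifies.
        System.out.println(verifyEntry(official, "degree",
                "Bachelor, Peking University"));
    }
}
```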
Alternatively, and with reference to figure 3,
step S200, receiving voice data of the candidate person returned by an interviewing site, identifying text information corresponding to the voice data, carrying out emotion marking on the text information, and analyzing the marked text information to obtain a quality model of the candidate person, wherein the step S comprises the following steps:
step S210, recognizing the voice data and generating corresponding text information;
the voice data collected by the microphone and other audio is in an analog model mode, the processing unit converts the voice data into a digital signal which can be recognized by a computer, or converts the voice data into a digital signal which can be recognized by the computer through other modules such as a signal processing module, and after the conversion is finished, the processing unit reads text information in the digital signal and loads the text information into a cache region so as to carry out subsequent emotion marking processing.
Step S220, performing word segmentation processing on the generated text information, and calculating the emotion score of each word segmentation;
Specifically, the minimum granularity of an emotion-analysis object is a word, but the basic unit that expresses an emotion is a sentence. Although a word carries basic emotional information, a single word lacks an object and a degree of association, and different word combinations yield different emotion degrees or even opposite emotional tendencies. Taking the sentence as the basic granularity of emotion analysis is therefore reasonable and highly accurate.
Illustratively, the word segmentation process is as follows:
i/am/go/company/welfare/may also/,/but/overtime/too much/.
For each word segment, a technician presets a corpus database in which the weight of each emotion word is defined according to its emotional degree.
Specifically, "okay" is a positive word, while "too much" is a negative word with a very high emotional degree, reflecting the candidate's strong complaint about overtime. This implies that the candidate's stress tolerance is not high; if the post applied for involves frequent overtime, hiring this candidate may lead to instability, and the candidate may resign soon after joining.
In a sentence, positive words are recorded as positive numbers and negative words as negative numbers; in the database, "okay" is "+1" and "too much" is "-2".
Step S230 is to take a single sentence as a unit, perform statistical calculation on the emotion scores of the participles in the sentence to obtain the emotion score of each sentence, and assign corresponding labels to the emotion scores.
Continuing the example, the emotion score of a single sentence is calculated by summing the emotion scores of its word segments. The sentence "I / feel / the company / welfare / is okay / , / but / overtime / too much / ." contains only two emotion words, "okay" and "too much"; "okay" scores "+1" and "too much" scores "-2", so the sentence's emotion score is 1 - 2 = -1, and the emotional content of the sentence is negative.
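The per-sentence scoring just described can be sketched as follows. The two-entry lexicon is a toy stand-in for the preset corpus database:

```java
import java.util.Map;

public class SentenceEmotion {
    // Toy sentiment lexicon (an assumption): "okay" is +1, "too much"
    // is -2, matching the example; unknown words score 0.
    static final Map<String, Integer> LEXICON =
            Map.of("okay", 1, "too much", -2);

    // Sum the scores of the segmented words of one sentence.
    public static int sentenceScore(String[] tokens) {
        int score = 0;
        for (String t : tokens) score += LEXICON.getOrDefault(t, 0);
        return score;
    }

    // Map a score to an emotion label for the sentence.
    public static String label(int score) {
        return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
    }

    public static void main(String[] args) {
        String[] tokens = {"I", "feel", "the company", "welfare", "is",
                "okay", ",", "but", "overtime", "too much", "."};
        int s = sentenceScore(tokens);
        System.out.println(s + " -> " + label(s)); // -1 -> negative
    }
}
```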
After the emotion information is analyzed, it is set as the label of the text information, and the mapping relation between the emotion information and the sentence is stored.
Alternatively, and with reference to figure 4,
step S200, receiving voice data of the candidate person returned by the interviewing site, identifying text information corresponding to the voice data, carrying out emotion marking on the text information, and analyzing the marked text information to obtain a quality model of the candidate person, further comprising the following steps:
step S240 identifies the voice data, and generates text information corresponding to the voice data.
Step S250, identifying the emotional tendency field in the text information, and performing frequency calculation on the emotional tendency field.
Frequency calculation over emotional-tendency fields is the second embodiment of accurate emotion recognition of text information in the present invention. To explain it, first consider how a person interprets a sentence.
For example: "He looks very tired; he worked an extra shift yesterday." The mention of overtime only takes on meaning from the concrete object of the conversation; the real emotional effect here is "tired", so the sentence can be evaluated as negative, and "tired" can be defined as an emotional-tendency field in the text information.
And step S260, giving weights to the emotional tendency fields and the frequency values thereof by referring to a preset expectation analysis library, calculating to generate a final emotional value, and giving corresponding emotional labels to the text information according to the final emotional value.
Following this idea, the frequency of each emotional-tendency field in a single sentence or in the whole text is calculated, and the emotion score of the field is computed from the frequency value and its preset weight. The emotion scores of all emotional-tendency fields are then aggregated to obtain the final emotion value.
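A sketch of this frequency-based calculation, with a toy lexicon standing in for the preset corpus-analysis library (the words and weights are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class FrequencyEmotion {
    // Count how often each emotional-tendency field occurs in the text.
    public static Map<String, Integer> frequencies(
            String[] tokens, Map<String, Integer> lexicon) {
        Map<String, Integer> freq = new HashMap<>();
        for (String t : tokens)
            if (lexicon.containsKey(t)) freq.merge(t, 1, Integer::sum);
        return freq;
    }

    // Final emotion value: sum of (field weight x field frequency).
    public static int finalEmotionValue(
            String[] tokens, Map<String, Integer> lexicon) {
        int total = 0;
        for (Map.Entry<String, Integer> e :
                frequencies(tokens, lexicon).entrySet())
            total += lexicon.get(e.getKey()) * e.getValue();
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> lexicon = Map.of("tired", -2, "good", 1);
        String[] tokens = {"he", "looks", "tired", "so", "tired", "today"};
        System.out.println(finalEmotionValue(tokens, lexicon)); // prints -4
    }
}
```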
Optionally, the step of performing emotion annotation on the text information in step 220 includes:
and searching a relation table between the prestored emotion scores and the emotion labels to obtain the emotion labels corresponding to the sentences, and adding storage address pointers of the corresponding emotion labels to the head or tail of the data of each sentence for storage. .
Since emoticons can represent emotion information, the emotion information can be matched to a corresponding emoticon. Illustratively, a preset database stores a number of emoticons and the emotion information corresponding to each; during matching, the emoticon corresponding to the emotion score calculated in the previous steps is looked up in that database. For example, for the phrase "more work before work shifts", a negative emotion is recognized, and an emoticon representing that negative emotion is mapped to the phrase and stored as its tag.
In another embodiment, a label may also be represented by a symbol or number, as follows:
Example document:
<none> This time we chose to stay in a five-star hotel.
<+S> It is really good.
<N> As for lunch, no matter how many people go, no extra food is added.
(Table 1, listing each label symbol and the emotional tendency it marks, is provided as an image in the original publication and is not reproduced here.)
TABLE 1
As shown in Table 1, the labels are used to mark the emotional tendency of each sentence. Each label is placed at the beginning of its sentence and is represented as described above.
Specifically, a relation table between the prestored emotion scores and emotion labels is searched to obtain emotion labels corresponding to the sentences, and storage address pointers of the corresponding emotion labels are added to the head or tail of data of the sentences to be stored together.
Take "<none> This time we chose a five-star hotel" as an example. Analysis determines that the emotion of "This time we chose a five-star hotel" is none. Suppose the storage address of the "none" label is 0010; then 0010 is added to the head or tail of the sentence data, giving "0010 This time we chose a five-star hotel" or "This time we chose a five-star hotel 0010". In practice the sentence is of course stored as byte data composed of 0s and 1s; characters are used here for convenience of explanation. Since the pointer occupies fewer bytes than the data representing the actual emotion label, storing the pointer reduces the occupied storage space.
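The pointer-based storage can be sketched as follows; the tag addresses and the one-byte pointer width are illustrative assumptions, not values from the patent:

```python
# Sketch: store a short tag-address pointer with each sentence instead of
# the full label data (addresses and pointer width are illustrative).
TAG_ADDRESSES = {"none": 0b0010, "positive": 0b0100, "negative": 0b1000}

def store_with_pointer(sentence, tag, at_head=True):
    """Prepend (or append) the 1-byte tag address to the UTF-8 sentence data."""
    pointer = TAG_ADDRESSES[tag].to_bytes(1, "big")
    data = sentence.encode("utf-8")
    return pointer + data if at_head else data + pointer

def read_tag(record):
    """Recover the tag from the head pointer of a stored record."""
    address = record[0]
    for tag, addr in TAG_ADDRESSES.items():
        if addr == address:
            return tag
    return None
```

Because the pointer is a single byte while a spelled-out label would take several, each stored sentence saves space, as the paragraph above notes.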
Optionally, referring to fig. 5, the step of analyzing the labeled text information to obtain the quality model of the candidate in step S200 includes:
Step S270, searching a quality model set in a preset quality model library according to the interview question corresponding to the text information;
Step S280, calculating the matching degree between the labeled text information and each model in the set, and selecting the item with the highest matching degree as the quality model of the candidate.
Specifically, for the labeled text information, a matching quality model can be found in the pre-constructed quality model library, yielding a model that corresponds to the candidate and reflects the candidate's qualities. The parameters characterized by the quality model include the candidate's personality, verbal expression ability, communication ability, stress resistance, and the like.
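One way to sketch the matching-degree calculation is with a simple token-overlap (Jaccard) similarity standing in for whatever metric the quality model library actually uses; the model names and representative texts below are hypothetical:

```python
# Sketch: pick the quality model with the highest matching degree,
# using token-overlap (Jaccard) similarity as a stand-in metric.
def jaccard(a, b):
    """Token-overlap similarity between two texts."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def best_quality_model(labeled_text, model_set):
    """model_set maps model name -> representative text; return best match."""
    return max(model_set, key=lambda name: jaccard(labeled_text, model_set[name]))
```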
Alternatively, referring to fig. 6, step S400 includes:
S410, assigning weight values to the resume verification result, the quality model information and the professional grade information and calculating a final evaluation score, wherein the resume verification weight is greater than the quality model information weight, and the quality model information weight is greater than the professional grade information weight;
S420, judging whether the final evaluation score is greater than a preset threshold; if it is, the candidate is defined as admissible.
For example, suppose the preset weighting algorithm assigns the candidate's quality model score a weight of 10%, the professional score 80%, and the remaining resume elements (e.g., educational background) 10%. If a candidate's quality model score is 30, professional score 100, and remaining-resume-elements score 50, the evaluation result is 30 × 10% + 100 × 80% + 50 × 10% = 88. The interviewer can then select candidates preferentially based on the evaluation result that the processing unit gives for each candidate. In another embodiment, the evaluation result can be presented as a report; for example, the candidate's quality model is presented as a hexagon model and combined with the specific professional score and the remaining resume elements to form a visual report. The proportions used by the weighting algorithm, as well as the algorithm itself, may be adjusted by the developer.
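The weighting example above can be sketched directly; the 10%/80%/10% split comes from the example, while the admission threshold is an illustrative assumption (the patent leaves both adjustable by the developer):

```python
# Sketch of the weighted evaluation: quality model 10%, professional score 80%,
# remaining resume elements 10%. Threshold is an assumed example value.
WEIGHTS = {"quality": 0.10, "professional": 0.80, "resume": 0.10}

def final_score(quality, professional, resume, weights=WEIGHTS):
    """Weighted sum of the three component scores."""
    return (quality * weights["quality"]
            + professional * weights["professional"]
            + resume * weights["resume"])

def admissible(score, threshold=60.0):
    """Candidates above the preset threshold are marked as admissible."""
    return score > threshold
```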
In addition, in another embodiment, the present invention provides two further approaches to emotion recognition as a supplement, including:
one method is to define emotion information by using emotion of a candidate when speaking, namely extracting acoustic features in voice data, and analyzing and identifying emotion information corresponding to the voice data, wherein parameters corresponding to the acoustic features can be extracted by referring to LPCC linear prediction cepstrum coefficients, MFCC parameters, formant parameters, fundamental frequency parameters based on prosodic features, characteristic parameters in energy aspect, speaking duration and amplitude parameters.
The second defines emotion information from the semantics of the candidate's words. For example, sentences that lack overtly strong emotion words, such as complaints about heavy workload or extra shifts, are assembled into a preset comparison template; when the processing unit detects that the candidate speaks such a sentence in a normal tone, it can still recognize that the candidate is expressing a negative emotion. In other embodiments, the processing unit may further extract the candidate's facial feature points through a camera in the conference room, analyze details such as expressions and laryngeal movement, and thereby complete emotion information recognition.
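A sketch of the template comparison described here, with illustrative templates (the real template set would be preset by the developer):

```python
# Sketch: flag a sentence as negative when it matches a preset comparison
# template, even if no overtly emotional word appears. Templates are illustrative.
NEGATIVE_TEMPLATES = [
    "worked an extra shift",
    "no time for lunch",
]

def matches_negative_template(sentence):
    """Substring match against the preset comparison templates."""
    s = sentence.lower()
    return any(t in s for t in NEGATIVE_TEMPLATES)
```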
Example two
Referring to fig. 7, a schematic diagram of the program modules of a second embodiment of the intelligent interview system of the invention is shown. In this embodiment, the intelligent interview system 20 may include, or be divided into, one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the present invention and the intelligent interview method described above. The program modules referred to in the embodiments of the present invention are a series of computer program instruction segments capable of performing specific functions, and are better suited than the program itself to describing the execution process of the intelligent interview system 20 in the storage medium. The following description specifically introduces the functions of the program modules of this embodiment:
the resume verification module 200 is used for acquiring candidate resume information uploaded by the interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and recording a resume information verification result;
in an exemplary embodiment, the resume verification module 200 is further configured to obtain resume information submitted by the candidate;
calling real historical data corresponding to the candidate in a preset website database through an application programming interface of a preset website;
and extracting specified items in the candidate resume information, verifying the correctness of the specified items by taking the real resume data as a reference, and storing the verification result of each item.
The quality model screening module 210 is configured to receive the candidate's voice data returned by the interview site terminal, identify the text information corresponding to the voice data, perform emotion annotation on the text information, and analyze the annotated text information to obtain a quality model of the candidate;
In an exemplary embodiment, the quality model screening module 210 is further configured to recognize the voice data and generate corresponding text information;
perform word segmentation on the generated text information, and calculate the emotion score of each segmented word;
and, taking a single sentence as a unit, statistically calculate the emotion scores of the segmented words within each sentence to obtain the emotion score of each sentence and assign it a corresponding label.
In an exemplary embodiment, the quality model screening module 210 is further configured to recognize the voice data and generate corresponding text information;
identify the emotional tendency fields in the text information, and calculate the frequency of the emotional tendency fields;
and, with reference to a preset corpus analysis library, assign weights to the emotional tendency fields and their frequency values, calculate a final emotion value, and assign a corresponding emotion label to the text information according to the final emotion value.
Optionally, according to the definition form of the emotion annotation, the labels used by the quality model screening module 210 include emoticon labels, Arabic numeral labels, and letter labels.
In an exemplary embodiment, the quality model screening module 210 is further configured to search a quality model set in a preset quality model library according to the interview question corresponding to the text information;
and calculate the matching degree between the annotated text information and each model in the set, selecting the item with the highest matching degree as the quality model of the candidate.
The answer rating module 220 is configured to receive the test question answers uploaded by the candidate terminal, check correctness of the test question answers, obtain answer scores, and record professional grades corresponding to the scores;
and the weighted evaluation module 230 is configured to perform weighted operation according to the resume information verification result, the quality model, and the professional grade to obtain a final evaluation result of the candidate, and send the evaluation result to the interviewer terminal.
In an exemplary embodiment, the weighted evaluation module 230 is further configured to assign weight values to the resume verification result, the quality model information and the professional grade information and calculate a final evaluation score, wherein the resume verification weight is greater than the quality model information weight, and the quality model information weight is greater than the professional grade information weight;
and judge whether the final evaluation score is greater than a preset threshold; if it is, the candidate is defined as admissible.
EXAMPLE III
Fig. 8 is a schematic diagram of a hardware architecture of a computer device according to a third embodiment of the present invention. In the present embodiment, the computer device 2 is a device capable of automatically performing numerical calculation and/or information processing in accordance with a preset or stored instruction. The computer device 2 may be a personal computer, a tablet computer, a mobile phone, or the like, or may be a cloud device for providing a virtual client, such as a rack server, a blade server, a tower server, or a rack server (including an independent server or a server cluster composed of a plurality of servers). As shown, the computer device 2 includes, but is not limited to, at least a memory 21, a processor 22, a network interface 23, and an intelligent interview-based system 20 communicatively coupled to each other via a system bus. Wherein:
in this embodiment, the memory 21 includes at least one type of computer-readable storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 21 may be an internal storage unit of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the computer device 2. Of course, the memory 21 may also comprise both internal and external memory units of the computer device 2. In this embodiment, the memory 21 is generally used for storing an operating system installed in the computer device 2 and various application software, such as a program code of the intelligent interview method in the first embodiment. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 2. In this embodiment, the processor 22 is configured to run the program code stored in the memory 21 or process data, for example to run the intelligent interview system 20, so as to implement the intelligent interview method of the first embodiment.
The network interface 23 may comprise a wireless network interface or a wired network interface, and the network interface 23 is generally used for establishing communication connection between the computer device 2 and other electronic apparatuses. For example, the network interface 23 is used to connect the computer device 2 to an external terminal through a network, establish a data transmission channel and a communication connection between the computer device 2 and the external terminal, and the like. The network may be a wireless or wired network such as an Intranet (Intranet), the Internet (Internet), a Global System of Mobile communication (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth (Bluetooth), Wi-Fi, and the like.
It is noted that fig. 8 only shows the computer device 2 with components 20-23, but it is to be understood that not all shown components are required to be implemented, and that more or less components may be implemented instead.
In this embodiment, the intelligent interview system 20 stored in the memory 21 can be further divided into one or more program modules, and the one or more program modules are stored in the memory 21 and executed by one or more processors (in this embodiment, the processor 22) to complete the present invention.
For example, the figure shows a schematic diagram of the program modules implementing the fourth embodiment of the intelligent interview system 20, in which the intelligent interview system 20 can be divided into a resume verification module 200, a quality model screening module 210, an answer rating module 220, and a weighted evaluation module 230. The program modules referred to in the present invention are a series of computer program instruction segments capable of performing specific functions, and are better suited than a program to describing the execution process of the intelligent interview system 20 in the computer device 2. The specific functions of the program modules 200-230 have been described in detail in the second embodiment and are not repeated here.
Example four
This embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an app store, and the like, on which a computer program is stored that implements the corresponding functions when executed by a processor. The computer-readable storage medium of this embodiment is used to store the intelligent interview system 20, which, when executed by the processor, implements the intelligent interview method of the first embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An intelligent interview method, characterized by comprising:
obtaining candidate resume information uploaded by an interviewer terminal, and obtaining information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and record a resume information verification result;
receiving candidate voice data returned by an interview site terminal, identifying text information corresponding to the voice data, performing emotion annotation on the text information, and analyzing the annotated text information to obtain a quality model of the candidate;
receiving test question answers uploaded by a candidate terminal, checking the correctness of the answers, obtaining an answer score, and recording the professional grade corresponding to the score;
performing a weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate, and sending the evaluation result to the interviewer terminal.
2. The intelligent interview method according to claim 1, characterized in that the step of obtaining the candidate resume information uploaded by the interviewer terminal and obtaining the information on the preset website through the application programming interface of the preset website to verify the authenticity of the resume information and record the resume information verification result comprises:
obtaining the candidate resume information uploaded by the interviewer terminal;
calling, through the application programming interface of the preset website, the real resume history data corresponding to the candidate in the database of the preset website;
extracting specified items from the candidate resume information, verifying the correctness of the specified items against the real resume history data, and storing the verification result of each item.
3. The intelligent interview method according to claim 1, characterized in that the step of identifying the text information corresponding to the voice data and performing emotion annotation on the text information comprises:
identifying the voice data and generating its corresponding text information;
performing word segmentation on the generated text information and calculating the emotion score of each segmented word;
taking a single sentence as a unit, statistically calculating the emotion scores of the segmented words within the sentence to obtain the emotion score of each sentence and assigning it a corresponding label.
4. The intelligent interview method according to claim 1, characterized in that the step of identifying the text information corresponding to the voice data and performing emotion annotation on the text information further comprises:
identifying the voice data and generating its corresponding text information;
identifying emotional tendency fields in the text information and calculating the frequency of the emotional tendency fields;
with reference to a preset corpus analysis library, assigning weights to the emotional tendency fields and their frequency values, calculating a final emotion value, and assigning a corresponding emotion label to the text information according to the final emotion value.
5. The intelligent interview method according to claim 3, characterized in that the step of performing emotion annotation on the text information comprises:
searching a pre-stored relation table between emotion scores and emotion labels to obtain the emotion label corresponding to each sentence, and adding the storage address pointer of the corresponding emotion label to the head or tail of each sentence's data for storage.
6. The intelligent interview method according to claim 1, characterized in that the step of analyzing the annotated text information to obtain the quality model of the candidate comprises:
searching a quality model set in a preset quality model library according to the interview question corresponding to the text information;
calculating the matching degree between the annotated text information and each model in the set, and selecting the item with the highest matching degree as the quality model of the candidate.
7. The intelligent interview method according to claim 1, characterized in that the step of performing a weighted operation according to the resume information verification result, the quality model and the professional grade to obtain the final evaluation result of the candidate and sending the evaluation result to the interviewer terminal comprises:
assigning weight values to the resume verification result, the quality model information and the professional grade information and calculating a final evaluation score, wherein the resume verification weight is greater than the quality model information weight, and the quality model information weight is greater than the professional grade information weight;
judging whether the final evaluation score is greater than a preset threshold, and if so, defining the candidate as admissible.
8. An intelligent interview system, characterized by comprising:
a resume verification module, configured to obtain candidate resume information uploaded by an interviewer terminal, and obtain information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and record a resume information verification result;
a quality model screening module, configured to receive candidate voice data returned by an interview site terminal, identify text information corresponding to the voice data, perform emotion annotation on the text information, and analyze the annotated text information to obtain a quality model of the candidate;
an answer rating module, configured to receive test question answers uploaded by a candidate terminal, check the correctness of the answers, obtain an answer score, and record the professional grade corresponding to the score;
a weighted evaluation module, configured to perform a weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate, and send the evaluation result to the interviewer terminal.
9. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the computer program, when executed by the processor, implements the steps of the intelligent interview method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, the computer program being executable by at least one processor to cause the at least one processor to perform the steps of the intelligent interview method according to any one of claims 1 to 7.
CN201910968962.9A 2019-10-12 2019-10-12 Intelligent interviewing method, system, equipment and computer storage medium Pending CN111222837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910968962.9A CN111222837A (en) 2019-10-12 2019-10-12 Intelligent interviewing method, system, equipment and computer storage medium


Publications (1)

Publication Number Publication Date
CN111222837A true CN111222837A (en) 2020-06-02

Family

ID=70828954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910968962.9A Pending CN111222837A (en) 2019-10-12 2019-10-12 Intelligent interviewing method, system, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN111222837A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833010A (en) * 2020-06-12 2020-10-27 北京网聘咨询有限公司 An intelligent interview method, system, device and storage medium
CN112786054A (en) * 2021-02-25 2021-05-11 深圳壹账通智能科技有限公司 Intelligent interview evaluation method, device and equipment based on voice and storage medium
CN112786054B (en) * 2021-02-25 2024-06-11 深圳壹账通智能科技有限公司 Intelligent interview evaluation method, device, equipment and storage medium based on voice
CN114418366A (en) * 2022-01-06 2022-04-29 北京博瑞彤芸科技股份有限公司 Data processing method and device for intelligent cloud interview
CN114418366B (en) * 2022-01-06 2022-08-26 北京博瑞彤芸科技股份有限公司 Data processing method and device for intelligent cloud interview
CN117114475A (en) * 2023-08-21 2023-11-24 广州红海云计算股份有限公司 Comprehensive capability assessment system based on multidimensional talent assessment strategy
CN118365296A (en) * 2024-04-27 2024-07-19 北京神州光大科技有限公司 An AI video interview information conversion system and method
CN118505177A (en) * 2024-05-21 2024-08-16 北京位来教育科技有限公司 Virtual interview management method and system based on artificial intelligence
CN119273313A (en) * 2024-09-20 2025-01-07 埃摩森网络科技(上海)有限公司 Statistical methods and systems for human resource management based on digitalization

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991172A (en) * 2017-04-05 2017-07-28 安徽建筑大学 Method for establishing multi-mode emotion interaction database
CN107909339A (en) * 2017-11-01 2018-04-13 平安科技(深圳)有限公司 Job candidates verify grading approach, application server and computer-readable recording medium
CN109298779A (en) * 2018-08-10 2019-02-01 济南奥维信息科技有限公司济宁分公司 Virtual training system and method based on virtual agent interaction
CN109325124A (en) * 2018-09-30 2019-02-12 武汉斗鱼网络科技有限公司 A kind of sensibility classification method, device, server and storage medium
CN109766917A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Interview video data processing method, device, computer equipment and storage medium
CN109960725A (en) * 2019-01-17 2019-07-02 平安科技(深圳)有限公司 Text classification processing method, device and computer equipment based on emotion
CN110162599A (en) * 2019-04-15 2019-08-23 深圳壹账通智能科技有限公司 Personnel recruitment and interview method, apparatus and computer readable storage medium
CN110211591A (en) * 2019-06-24 2019-09-06 卓尔智联(武汉)研究院有限公司 Interview data analysing method, computer installation and medium based on emotional semantic classification


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周鸣争: "《大数据导论》", 31 March 2018, 中国铁道出版社, pages: 114 *
昵称败给了备注: ""根据某某数量加权,是什么意思?"", pages 1, Retrieved from the Internet <URL:https://www.zhihu.com/question/24656722> *


Similar Documents

Publication Publication Date Title
CN112346567B (en) Virtual interaction model generation method and device based on AI (Artificial Intelligence) and computer equipment
CN111222837A (en) Intelligent interviewing method, system, equipment and computer storage medium
CN109767787B (en) Emotion recognition method, device and readable storage medium
US12282928B2 (en) Method and apparatus for analyzing sales conversation based on voice recognition
WO2021218028A1 (en) Artificial intelligence-based interview content refining method, apparatus and device, and medium
CN110689261A (en) Service quality evaluation product customization platform and method
KR102280490B1 (en) Training data construction method for automatically generating training data for artificial intelligence model for counseling intention classification
CN107256428B (en) Data processing method, data processing device, storage equipment and network equipment
CN111276148A (en) Return visit method, system and storage medium based on convolutional neural network
CN114911929B (en) Classification model training method, text mining method, device and storage medium
KR102476099B1 (en) METHOD AND APPARATUS FOR GENERATING READING DOCUMENT Of MINUTES
CN109960790B (en) Summary generation method and device
CN117592470A (en) Low-cost bulletin data extraction method driven by large language model
CN115641101A (en) Intelligent recruitment method, device and computer readable medium
CN113868271B (en) Knowledge base updating method, device, electronic device and storage medium for intelligent customer service
CN118134049A (en) Conference decision support condition prediction method, device, equipment, medium and product
CN112434144A (en) Method, device, electronic equipment and computer readable medium for generating target problem
CN115033675B (en) Conversation method, conversation device, electronic device and storage medium
CN109408175B (en) Real-time interaction method and system in general high-performance deep learning calculation engine
CN119724196B (en) Character separation method, device, equipment and medium based on voice
CN116127011B (en) Intent recognition method, device, electronic device and storage medium
CN117690413A (en) Audio processing method, apparatus, device, medium, and program product
CN116127037A (en) Method for intelligently screening resume from human resources
CN116303942A (en) Intelligent question answering method, device, equipment and storage medium
CN113609833A (en) Dynamic generation method and device of file, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200602