
CN108334850A - A kind of intelligent interactive system - Google Patents


Info

Publication number
CN108334850A
CN108334850A
Authority
CN
China
Prior art keywords
image
region
saliency
module
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810138519.4A
Other languages
Chinese (zh)
Inventor
钟建明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huitong Intelligent Technology Co Ltd
Original Assignee
Shenzhen Huitong Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huitong Intelligent Technology Co Ltd filed Critical Shenzhen Huitong Intelligent Technology Co Ltd
Priority to CN201810138519.4A
Publication of CN108334850A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/008 Artificial life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern by matching or filtering
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Robotics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an intelligent interactive system, including: a camera module, for acquiring user images and sending the images to a central processing module; the central processing module, for processing the images and identifying the information expressed by the user; a human-computer interaction module, for providing the user with a corresponding response according to the information identified by the central processing module and a designed human-computer interaction mode; and a memory module, for storing the pre-designed human-computer interaction modes. The invention collects and analyzes images of the user and gives specific, humanized responses, improving the intelligence and interactivity of the machine.

Description

Intelligent interaction system
Technical Field
The invention relates to the technical field of intelligent interaction, in particular to an intelligent interaction system.
Background
Nowadays, people increasingly depend on computers, mobile phones, tablet computers and other devices in work and daily life, and the time spent facing these devices can even exceed the time spent communicating with other people. If these devices are not user-friendly, users easily feel lonely and fatigued, their sense of happiness and their working efficiency decline, and in serious cases mental or psychological problems may arise.
In the prior art, an electronic device usually receives the user's instructions in only a single, fixed way; it cannot understand the user's expressions or behaviour like a "natural person" and give a humanized response, so its interaction performance needs to be improved.
Disclosure of Invention
In view of the above problems, the present invention aims to provide an intelligent interactive system.
The purpose of the invention is realized by adopting the following technical scheme:
an intelligent interactive system is provided, comprising:
the camera module is used for acquiring a user image and sending the image to the central processing module;
the central processing module is used for processing the image and identifying the information expressed by the user;
the man-machine interaction module provides corresponding responses to the user according to the information identified by the central processing module and the designed man-machine interaction mode;
and the storage module is used for storing a pre-designed human-computer interaction mode.
In one embodiment, the human-computer interaction module comprises:
the searching module is used for searching a corresponding human-computer interaction mode and corresponding data according to the information identified by the central processing module, wherein the modes comprise audio playing, text displaying and video playing, and the data comprise audio, text and video data;
the multimedia module comprises an audio playing module, a text display module and a video playing module and is used for displaying or playing the corresponding audio, text and video data;
the storage module is further used for storing the audio, text and video data.
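As an illustration of how the searching module could map the identified information to an interaction mode and the associated data, the following minimal Python sketch uses a lookup table; the expression labels, mode names and file paths are hypothetical and are not specified by the patent.

    # Hypothetical mapping from identified information (here, an expression label)
    # to an interaction mode and the media data to present.
    INTERACTION_TABLE = {
        "happy":   ("video_play",   "media/cheerful_clip.mp4"),
        "sad":     ("audio_play",   "media/soothing_music.mp3"),
        "angry":   ("text_display", "media/calming_message.txt"),
        "neutral": ("text_display", "media/daily_tip.txt"),
    }

    def lookup_interaction(identified_info: str):
        """Return (mode, data) for the identified information, with a default fallback."""
        return INTERACTION_TABLE.get(identified_info, ("text_display", "media/default.txt"))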
In one embodiment, the camera module includes a camera and an auxiliary lighting device.
In one embodiment, the central processing module includes an image processing module, and specifically includes:
the image processing unit is used for processing the acquired user image, including graying, level adjustment and size adjustment, and detecting a face part in the image;
the expression recognition unit is used for carrying out expression recognition processing on the detected face part, automatically recognizing the facial expression and judging the type of the expression;
the search module is further configured to search for a corresponding human-computer interaction mode and corresponding data according to the identified expression type.
In an embodiment, the system further includes a statistical module, configured to obtain corresponding psychological states according to facial expression analysis, store the expression types and the psychological states of the user at different times, and provide the user with the expression types and the psychological states for query.
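A minimal sketch of how such a statistical module could store and query per-time expression and psychological-state records is given below; the class and field names are assumptions made for illustration.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class ExpressionRecord:
        timestamp: datetime
        expression_type: str   # e.g. "happy", "sad"
        mental_state: str      # psychological state inferred from the expression

    class StatisticsModule:
        """Stores expression/psychological-state records over time and supports queries."""
        def __init__(self):
            self._records: List[ExpressionRecord] = []

        def add(self, expression_type: str, mental_state: str) -> None:
            self._records.append(ExpressionRecord(datetime.now(), expression_type, mental_state))

        def query_since(self, since: datetime) -> List[ExpressionRecord]:
            return [r for r in self._records if r.timestamp >= since]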
The invention has the beneficial effects that: the intelligent interaction system is used for equipment such as computers, mobile phones and tablet computers, the images of the users are collected and analyzed, specific humanized responses are given, and the intelligence and the interactivity of the machine are improved.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
FIG. 1 is a block diagram of the overall framework of the present invention;
FIG. 2 is a block diagram of the human-computer interaction module of the present invention;
FIG. 3 is a block diagram of the central processing module of the present invention.
Reference numerals:
camera module 10, central processing module 20, image processing module 21, image processing unit 211, expression recognition unit 212, human-computer interaction module 30, search module 31, multimedia module 32, storage module 40 and statistics module 50
Detailed Description
The invention is further described in connection with the following application scenarios.
Referring to FIG. 1, there is shown an intelligent interactive system comprising:
a camera module 10 for acquiring a user image and transmitting the image to a central processing module 20;
a central processing module 20, configured to process the image and identify information expressed by the user;
the human-computer interaction module 30 provides corresponding responses to the user according to the information identified by the central processing module 20 and the designed human-computer interaction mode;
and the storage module 40 is used for storing a pre-designed human-computer interaction mode.
In one embodiment, the human-computer interaction module 30 comprises:
the searching module 31 is configured to search a corresponding human-computer interaction mode and corresponding data according to the information identified by the central processing module 20, where the mode includes audio playing, text displaying, and video playing, and the data includes audio, text, and video data.
The multimedia module 32 includes an audio playing module, a text displaying module and a video playing module, and is configured to display or play the corresponding audio, text and video data.
The storage module 40 is further configured to store the audio, text, and video data.
In one embodiment, the camera module 10 includes a camera and an auxiliary lighting device.
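To make the division of labour between these modules concrete, the following Python sketch wires together the camera module 10, the central processing module 20, the human-computer interaction module 30 and the storage module 40. All class and method names, and the stub behaviour, are assumptions made for illustration only; the patent does not prescribe an implementation.

    # Illustrative wiring of the modules described above (reference numerals 10-40).
    class CameraModule:                      # camera module 10
        def capture(self):
            """Acquire a user image (stub: a real implementation would grab a camera frame)."""
            return None

    class CentralProcessingModule:           # central processing module 20
        def identify(self, image):
            """Process the image and return the recognized expression type (stub)."""
            return "neutral"

    class StorageModule:                     # storage module 40
        def lookup(self, expression_type):
            """Return (interaction mode, data) for an expression type (stub table)."""
            return ("text_display", f"pre-designed response for '{expression_type}'")

    class HumanComputerInteractionModule:    # human-computer interaction module 30
        def __init__(self, storage: StorageModule):
            self.storage = storage

        def respond(self, expression_type):
            mode, data = self.storage.lookup(expression_type)   # lookup module 31
            self.play(mode, data)                                # multimedia module 32

        def play(self, mode, data):
            print(f"[{mode}] {data}")

    def interaction_loop():
        camera, cpu, storage = CameraModule(), CentralProcessingModule(), StorageModule()
        hci = HumanComputerInteractionModule(storage)
        image = camera.capture()
        expression = cpu.identify(image)
        hci.respond(expression)

    if __name__ == "__main__":
        interaction_loop()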
In one embodiment, referring to fig. 2, the central processing module 20 includes an image processing module 21, which specifically includes:
an image processing unit 211, configured to process the acquired user image, including graying, level adjustment, and size adjustment, and detect a face portion in the image;
an expression recognition unit 212, configured to perform expression recognition processing on the detected face portion, automatically recognize a facial expression, and determine the type of the expression;
the lookup module 31 is further configured to search for a corresponding human-computer interaction mode and corresponding data according to the identified expression type.
In one embodiment, the statistical module 50 is further included for obtaining corresponding psychological states according to facial expression analysis, storing the facial expression types and psychological states of the user at different times, and providing the user with a query.
In this embodiment, the camera acquires the user's facial image, the facial image undergoes expression recognition processing to determine the user's expression type, and the corresponding human-computer interaction mode and data are selected according to the user's expression for playing or display, thereby realizing emotion-based interaction with the user and improving the user experience. When the system is applied to electronic devices such as mobile phones, notebook computers and tablet computers, the machine acquires image information of the user through the camera, performs dynamic expression analysis on the user and obtains the user's current psychological state, and then responds in forms such as animation, voice and text. This makes the machine more humanized, provides the user with a good human-machine operating environment, and improves the level of service the machine offers to people.
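A minimal sketch of the preprocessing performed by the image processing unit (graying and size adjustment) using OpenCV is shown below; the target size is an assumption, and the level (rotation) adjustment mentioned above is omitted for brevity.

    import cv2

    def preprocess(image_bgr, target_size=(256, 256)):
        """Grayscale conversion and resizing as a stand-in for the preprocessing steps."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        resized = cv2.resize(gray, target_size, interpolation=cv2.INTER_AREA)
        return resized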
In one embodiment, the detection of a face part in an image by the image processing unit 211 comprises:
processing the adjusted face image by adopting an R-layer Gaussian pyramid to obtain a multi-scale saliency map of the face image;
wherein, each saliency map is processed by adopting the following steps:
(1) Divide the image T into G image regions {N_g}, g = 1, 2, …, G.
(2) According to the color features and the texture features of the image T, acquire a color feature image and a texture feature image of T, respectively.
(3) Obtain the color saliency and the texture saliency of each region N_g. The saliency of a pixel x ∈ N_g is accumulated from the region-level saliencies of the neighbouring regions N_gq ∈ G(N_g), each weighted by the distance between pixel x and the centre coordinates of N_gq. In these expressions:
G(N_g) denotes the set of all regions adjacent to N_g, Q = |G(N_g)|, and o_{N_gq} denotes the coordinates of region N_gq.
XZ_r_λ(N_gq) denotes the color region-level saliency of N_gq, with XZ_r_λ(N_gq) = D_1_λ(N_gq) * D_2_λ(N_gq) * D_3(N_gq), where:
D_1_λ(N_gq) is the color global region contrast, computed from the differences between the color features of N_gq and of every other region N_b, each weighted by a global control factor ω(N_gq, N_b) defined in terms of A(N_gq, N_b), the Euclidean distance between regions N_gq and N_b;
D_2_λ(N_gq) is the color background contrast, computed from the differences between the color features of N_gq and of the border regions B_p (the upper, lower, left and right border areas of the image T);
D_3(N_gq) is the center prior of region N_gq, computed from the distance between the region coordinates o_{N_gq} and the coordinates o_{T_zx} of the center of the image T.
XZ_r_γ(N_gq) denotes the texture region-level saliency of N_gq, with XZ_r_γ(N_gq) = D_1_γ(N_gq) * D_2_γ(N_gq) * D_3(N_gq), where D_1_γ(N_gq) is the texture global region contrast (computed from the texture features of N_gq and N_b) and D_2_γ(N_gq) is the texture background contrast (computed from the texture features of N_gq and of the border regions B_p).
Accumulating these region-level saliencies over the image then gives XZ_λ and XZ_γ, the saliency of the image T with respect to its color features and its texture features, respectively.
(4) Incorporate the central prior of the salient region to obtain the single-scale saliency for the color and texture features:
XZ_s_λ(q) = Q_c(q) · XZ_λ(q)
XZ_s_γ(q) = Q_c(q) · XZ_γ(q)
where o_q denotes the coordinates of pixel q, o_{s_zx} denotes the coordinates of the center of the salient region, Q_c(q) is a center-prior weight computed from the distance between o_q and o_{s_zx}, XZ_s_λ(q) is the single-scale saliency with respect to the color features after the central prior of the salient region is incorporated, and XZ_s_γ(q) is the corresponding single-scale saliency with respect to the texture features.
The multi-scale color saliency XZ_e_λ(g) and the multi-scale texture saliency XZ_e_γ(g) are obtained by combining the single-scale color saliency XZ_s_λ_r and the single-scale texture saliency XZ_s_γ_r over the R scales, each scale being weighted according to the information entropy U(XZ_s_λ_r) of the r-th layer saliency image, where H denotes the size of the saliency image and P_r(g) denotes the probability distribution of pixel value g in the r-th layer saliency image.
Finally, the multi-feature multi-scale global region contrast saliency is obtained as
XZ_f = XZ_e_λ * XZ_e_γ
where XZ_f denotes the multi-feature multi-scale global region contrast saliency of the image.
Adaptive segmentation is then performed according to the final saliency image to detect the face and determine the face part in the image T.
In this embodiment, the above method is adopted for face detection: the image is first processed at multiple scales to obtain its multi-scale saliency maps, each saliency map is analyzed for both color and texture saliency to obtain the saliency characteristics of the image with respect to color and texture, and the face part in the image is then obtained by adaptive segmentation. This effectively avoids the inaccurate segmentation results that traditional face detection methods produce when affected by brightness, occlusion and similar factors, improves the precision of face detection and segmentation, and lays a foundation for the subsequent recognition of facial expressions by the system.
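The following Python sketch illustrates the flavour of this detection step: region-level colour contrast saliency with a background (border) term and a centre prior, fused over the levels of a Gaussian pyramid with entropy-based weights. It is not the patented formula: SLIC superpixels stand in for the unspecified region segmentation, the Lab colour space stands in for the unspecified colour feature, the texture channel is omitted, and all weighting forms and parameter values are assumptions.

    import numpy as np
    import cv2
    from skimage.segmentation import slic

    def region_color_saliency(image_bgr, n_segments=100):
        """Single-scale region saliency: global contrast * border (background) contrast * centre prior."""
        lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        labels = slic(image_bgr, n_segments=n_segments, start_label=0)
        h, w = labels.shape
        ids = np.unique(labels)
        id_to_idx = {int(i): k for k, i in enumerate(ids)}
        feats = np.array([lab[labels == i].mean(axis=0) for i in ids])        # mean colour per region
        ys, xs = np.mgrid[0:h, 0:w]
        pos = np.array([[ys[labels == i].mean(), xs[labels == i].mean()] for i in ids])
        border_ids = np.unique(np.concatenate([labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
        border_idx = np.array([id_to_idx[int(b)] for b in border_ids])
        center = np.array([h / 2.0, w / 2.0])
        diag = np.hypot(h, w)
        sal = np.zeros(len(ids))
        for k in range(len(ids)):
            color_d = np.linalg.norm(feats - feats[k], axis=1)
            spatial_w = np.exp(-np.linalg.norm(pos - pos[k], axis=1) / diag)  # nearby regions weigh more
            global_contrast = float((color_d * spatial_w).sum())
            bg_contrast = float(np.linalg.norm(feats[border_idx] - feats[k], axis=1).mean())
            center_prior = float(np.exp(-np.linalg.norm(pos[k] - center) / diag))
            sal[k] = global_contrast * bg_contrast * center_prior
        sal = (sal - sal.min()) / (np.ptp(sal) + 1e-8)
        lut = np.zeros(labels.max() + 1, dtype=np.float32)
        lut[ids] = sal
        return lut[labels]                                                    # per-pixel saliency map

    def multiscale_saliency(image_bgr, levels=3):
        """Entropy-weighted fusion of saliency maps over a Gaussian pyramid (weighting is an assumption)."""
        maps, weights, img = [], [], image_bgr
        for _ in range(levels):
            s = region_color_saliency(img)
            s = cv2.resize(s, (image_bgr.shape[1], image_bgr.shape[0]))
            hist, _ = np.histogram(s, bins=32, range=(0.0, 1.0))
            p = hist / max(hist.sum(), 1)
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            maps.append(s)
            weights.append(1.0 / (entropy + 1e-8))
            img = cv2.pyrDown(img)
        weights = np.array(weights) / np.sum(weights)
        return np.sum([wt * m for wt, m in zip(weights, maps)], axis=0)

Thresholding the fused map (for example with Otsu's method) would then stand in for the adaptive segmentation step.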
In one embodiment, before the adaptive segmentation is performed according to the saliency image, the face image is filtered according to the final saliency image to remove the non-salient areas in the image, which specifically includes:
filtering the face image according to the final saliency image, wherein the filtered gray value g(x, y) of a face pixel (x, y) is a weighted average of the gray values F(α) of the pixels α in a rectangular window centred on (x, y), with weight coefficients μ(x, y, α);
the weight coefficient combines a domain (spatial) kernel with kernel factor σ_e, a range (gray-value) kernel with kernel factor σ_z, and the saliency values H(x, y) and H(α) of the two pixels in the final saliency map, the saliency values being normalized so that H(x, y) ∈ [0, 1].
The face part in the image is then segmented from the filtering result by threshold segmentation.
In this embodiment, filtering the image in this way removes the non-salient regions while preserving the salient region; the face part is then segmented from the filtered image, which improves the precision of face segmentation.
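A slow but self-contained Python sketch of such a saliency-guided filter follows: each pixel is replaced by a weighted average of its neighbourhood, with weights that combine a spatial (domain) kernel, a gray-value (range) kernel and a saliency-similarity term. The Gaussian kernel shapes, the window radius and the sigma values are assumptions, since only the ingredients of the weight are stated above.

    import numpy as np

    def saliency_weighted_filter(gray, saliency, radius=3, sigma_e=3.0, sigma_z=0.1, sigma_s=0.1):
        """Bilateral-style filtering of a gray image guided by a normalized saliency map."""
        gray = gray.astype(np.float32) / 255.0
        h, w = gray.shape
        out = np.zeros_like(gray)
        g = np.pad(gray, radius, mode="reflect")
        s = np.pad(saliency.astype(np.float32), radius, mode="reflect")
        yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_e ** 2))                      # domain kernel
        for y in range(h):
            for x in range(w):
                win_g = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                win_s = s[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                range_k = np.exp(-((win_g - gray[y, x]) ** 2) / (2 * sigma_z ** 2))      # range kernel
                sal_k = np.exp(-((win_s - saliency[y, x]) ** 2) / (2 * sigma_s ** 2))    # saliency similarity
                wgt = spatial * range_k * sal_k
                out[y, x] = float((wgt * win_g).sum() / (wgt.sum() + 1e-8))
        return out

The face region could then be extracted from the filtered result by a simple threshold, standing in for the threshold segmentation mentioned above.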
In one embodiment, the expression recognition unit 212 is configured to:
perform expression recognition processing on the face part with a sparse-representation expression recognition algorithm to obtain the expression classification;
before expression recognition is performed with the sparse-representation algorithm, an expression dictionary needs to be constructed; the algorithm computes the sparse coefficients of the face image over this dictionary and then determines the expression recognition result from the reconstruction errors. The expression dictionary is constructed as follows:
obtaining the feature matrix C = {c_1, c_2, …, c_N} of the expression training samples of the different classes, and setting the maximum number of iterations M, the sparsity Q_0 and the training sample feature dimension u, i.e. c_i ∈ δ^u, where N denotes the number of expression training samples and δ^u denotes the feature space of dimension u;
randomly selecting L training samples to initialize the sparse dictionary matrix X^(0), X^(0) ∈ δ^(u×L), l2-normalizing each column of the matrix, and setting i = 1, where X = [x_1, x_2, …, x_L] denotes the sparse dictionary, L denotes the number of dictionary atoms, and δ^(u×L) indicates that the sparse dictionary matrix X^(0) has size u×L;
calculating, with a pursuit algorithm, the sparse representation vector r_i of each training sample c_i by minimizing the representation error ||c_i − X r_i||^2 subject to ||r_i||_0 ≤ Q_0, i = 1, 2, …, N, where X denotes the sparse dictionary matrix, c_i denotes the feature of the i-th expression training sample, N denotes the number of expression training samples, and Q_0 denotes the sparsity;
codebook updating: updating each column x_l of X^(i−1), l = 1, 2, …, L, which specifically includes:
defining the set Ω_l of sample indices that use the l-th dictionary atom, namely the columns in which the l-th row of the sparse coefficient matrix R = [r_1, r_2, …, r_N] has non-zero elements;
calculating the overall representation error τ_l = C − Σ_{j≠l} x_j r^j, where x_j denotes the j-th column of the sparse dictionary X and r^j denotes the j-th row of the sparse coefficient matrix R;
selecting, from the representation error τ_l, the columns corresponding to the indices in the set Ω_l to form the error matrix E_l;
performing a singular value decomposition of the error matrix, E_l = U Δ V^T, selecting the first column of U as the updated dictionary column x_l, and updating the corresponding sparse coefficients to Δ(1, 1) times the first column of V;
updating the iteration count i = i + 1;
repeating the codebook updating until the maximum number of iterations M is reached, and outputting the sparse dictionary X_M = [x_1, x_2, …, x_L].
In this embodiment, an expression recognition algorithm based on sparse representation is applied to the detected face part: the sparse representation coefficients of the face image are computed over the expression dictionary, and the expression recognition result is determined from the reconstruction error, which gives high accuracy. The expression dictionary required by the method classifies the expression library accurately, effectively reduces redundancy and comprehensively captures the data characteristics of the expression training data, which indirectly improves the performance of the expression recognition algorithm in this unit, raises the accuracy of expression recognition, and lays a foundation for the human-computer interaction module to select the most suitable interaction mode according to the recognition result.
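For orientation, the following Python sketch implements a simplified dictionary learning and recognition loop in the spirit of the algorithm above: orthogonal matching pursuit provides the pursuit step, each atom is updated from an SVD of the residual restricted to the samples that use it, and a test sample is classified by the class whose dictionary yields the smallest reconstruction error. Training one dictionary per expression class, and all parameter values, are assumptions made for the sketch.

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def ksvd_train(C, n_atoms, sparsity, n_iter=10, seed=0):
        """Simplified K-SVD-style dictionary learning; C holds one training feature per column."""
        rng = np.random.default_rng(seed)
        u, n = C.shape
        X = C[:, rng.choice(n, n_atoms, replace=False)].astype(np.float64)
        X /= np.linalg.norm(X, axis=0, keepdims=True) + 1e-12               # l2-normalize the atoms
        for _ in range(n_iter):
            R = orthogonal_mp(X, C, n_nonzero_coefs=sparsity)               # sparse codes, shape (n_atoms, n)
            for l in range(n_atoms):
                users = np.flatnonzero(R[l, :])                             # samples that use atom l
                if users.size == 0:
                    continue
                R[l, users] = 0.0
                E = C[:, users] - X @ R[:, users]                           # residual without atom l
                U, S, Vt = np.linalg.svd(E, full_matrices=False)
                X[:, l] = U[:, 0]                                           # updated dictionary atom
                R[l, users] = S[0] * Vt[0, :]                               # updated coefficients
        return X

    def classify_by_reconstruction(class_dicts, sample, sparsity):
        """Return the class whose dictionary reconstructs the sample with the smallest error."""
        errors = {}
        for cls, X in class_dicts.items():
            r = orthogonal_mp(X, sample, n_nonzero_coefs=sparsity)
            errors[cls] = float(np.linalg.norm(sample - X @ r))
        return min(errors, key=errors.get)

In practice, ksvd_train would be run once per expression class on that class's training features, and the resulting dictionaries collected into class_dicts before calling classify_by_reconstruction.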
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit its protection scope. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope.

Claims (6)

1. An intelligent interactive system, comprising:
the camera module is used for acquiring a user image and sending the image to the central processing module;
the central processing module is used for processing the image and identifying the information expressed by the user;
the man-machine interaction module provides corresponding responses to the user according to the information identified by the central processing module and the designed man-machine interaction mode;
and the storage module is used for storing a pre-designed human-computer interaction mode.
2. The intelligent interaction system of claim 1, wherein the human-computer interaction module comprises:
the searching module is used for searching a corresponding human-computer interaction mode and corresponding data according to the information identified by the central processing module, wherein the modes comprise audio playing, text displaying and video playing, and the data comprise audio, text and video data;
the multimedia module comprises an audio playing module, a text display module and a video playing module and is used for displaying or playing the corresponding audio, text and video data;
the storage module is further used for storing the audio, text and video data.
3. The intelligent interactive system of claim 1, wherein the camera module comprises a camera and an auxiliary lighting device.
4. The intelligent interactive system according to claim 2, wherein the central processing module comprises an image processing module, and specifically comprises:
the image processing unit is used for processing the acquired user image, including graying, level adjustment and size adjustment, and detecting a face part in the image;
the expression recognition unit is used for carrying out expression recognition processing on the detected face part, automatically recognizing the facial expression and judging the type of the expression;
the search module is further configured to search for a corresponding human-computer interaction mode and corresponding data according to the identified expression type.
5. The intelligent interactive system according to claim 4, further comprising a statistical module, configured to obtain corresponding mental states according to facial expression analysis, store the facial expression types and the mental states of the user at different times, and provide the stored mental states and facial expression types for the user to query.
6. The intelligent interactive system of claim 4, wherein the image processing unit detecting the face portion in the image comprises:
processing the adjusted face image by adopting an R-layer Gaussian pyramid to obtain a multi-scale saliency map of the face image;
wherein, each saliency map is processed by adopting the following steps:
(1) dividing the image T into G image regions {N_g}, g = 1, 2, …, G;
(2) acquiring, according to the color features and the texture features of the image T, a color feature image and a texture feature image of T, respectively;
(3) obtaining the color saliency and the texture saliency of each region N_g, wherein the saliency of a pixel x ∈ N_g is accumulated from the region-level saliencies of the neighbouring regions N_gq ∈ G(N_g), each weighted by the distance between pixel x and the centre coordinates of N_gq, and wherein:
G(N_g) denotes the set of all regions adjacent to N_g, Q = |G(N_g)|, and o_{N_gq} denotes the coordinates of region N_gq;
XZ_r_λ(N_gq) denotes the color region-level saliency of N_gq, with XZ_r_λ(N_gq) = D_1_λ(N_gq) * D_2_λ(N_gq) * D_3(N_gq), where D_1_λ(N_gq) is the color global region contrast, computed from the differences between the color features of N_gq and of every other region N_b, each weighted by a global control factor ω(N_gq, N_b) defined in terms of A(N_gq, N_b), the Euclidean distance between regions N_gq and N_b; D_2_λ(N_gq) is the color background contrast, computed from the differences between the color features of N_gq and of the border regions B_p (the upper, lower, left and right border areas of the image T); and D_3(N_gq) is the center prior of region N_gq, computed from the distance between the region coordinates o_{N_gq} and the coordinates o_{T_zx} of the center of the image T;
XZ_r_γ(N_gq) denotes the texture region-level saliency of N_gq, with XZ_r_γ(N_gq) = D_1_γ(N_gq) * D_2_γ(N_gq) * D_3(N_gq), where D_1_γ(N_gq) is the texture global region contrast (computed from the texture features of N_gq and N_b) and D_2_γ(N_gq) is the texture background contrast (computed from the texture features of N_gq and of the border regions B_p);
accumulating these region-level saliencies over the image then gives XZ_λ and XZ_γ, the saliency of the image T with respect to its color features and its texture features, respectively;
(4) obtaining the single-scale saliency with respect to the color and texture features after the central prior of the salient region is incorporated:
XZ_s_λ(q) = Q_c(q) · XZ_λ(q)
XZ_s_γ(q) = Q_c(q) · XZ_γ(q)
wherein o_q denotes the coordinates of pixel q, o_{s_zx} denotes the coordinates of the center of the salient region, Q_c(q) is a center-prior weight computed from the distance between o_q and o_{s_zx}, XZ_s_λ(q) is the single-scale saliency with respect to the color features after the central prior is incorporated, and XZ_s_γ(q) is the corresponding single-scale saliency with respect to the texture features;
obtaining the multi-scale color saliency XZ_e_λ(g) and the multi-scale texture saliency XZ_e_γ(g) by combining the single-scale color saliency XZ_s_λ_r and the single-scale texture saliency XZ_s_γ_r over the R scales, each scale being weighted according to the information entropy U(XZ_s_λ_r) of the r-th layer saliency image, wherein H denotes the size of the saliency image and P_r(g) denotes the probability distribution of pixel value g in the r-th layer saliency image;
obtaining the multi-feature multi-scale global region contrast saliency
XZ_f = XZ_e_λ * XZ_e_γ
wherein XZ_f denotes the multi-feature multi-scale global region contrast saliency of the image;
and performing adaptive segmentation according to the final saliency image, detecting the face in the image, and determining the face part in the image T.
CN201810138519.4A 2018-02-10 2018-02-10 A kind of intelligent interactive system Withdrawn CN108334850A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810138519.4A CN108334850A (en) 2018-02-10 2018-02-10 A kind of intelligent interactive system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810138519.4A CN108334850A (en) 2018-02-10 2018-02-10 A kind of intelligent interactive system

Publications (1)

Publication Number Publication Date
CN108334850A true CN108334850A (en) 2018-07-27

Family

ID=62929179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810138519.4A Withdrawn CN108334850A (en) 2018-02-10 2018-02-10 A kind of intelligent interactive system

Country Status (1)

Country Link
CN (1) CN108334850A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113634356A (en) * 2021-08-12 2021-11-12 长春市九台区侬富米业有限公司 Intelligent system and method for multifunctional grinding of rice and corn to prepare rice and flour


Similar Documents

Publication Publication Date Title
CN112162930B (en) Control identification method, related device, equipment and storage medium
CN113159147B (en) Image recognition method and device based on neural network and electronic equipment
CN109961009B (en) Pedestrian detection method, system, device and storage medium based on deep learning
CN112348117B (en) Scene recognition method, device, computer equipment and storage medium
CN112784810B (en) Gesture recognition method, gesture recognition device, computer equipment and storage medium
CN109657533B (en) Pedestrian re-identification method and related product
CN108399386B (en) Method and device for extracting information in pie chart
CN111160335A (en) Image watermarking processing method and device based on artificial intelligence and electronic equipment
CN109343920B (en) Image processing method and device, equipment and storage medium thereof
CN107977633A (en) Age recognition methods, device and the storage medium of facial image
CN110765860A (en) Tumble determination method, tumble determination device, computer apparatus, and storage medium
CN111597884A (en) Facial action unit identification method and device, electronic equipment and storage medium
US20120119984A1 (en) Hand pose recognition
CN113255557B (en) Deep learning-based video crowd emotion analysis method and system
CN111612822B (en) Object tracking method, device, computer equipment and storage medium
CN112380978B (en) Multi-face detection method, system and storage medium based on key point positioning
CN112001362A (en) Image analysis method, image analysis device and image analysis system
CN112560857B (en) Character area boundary detection method, equipment, storage medium and device
CN113971742A (en) Key point detection method, model training method, model live broadcasting method, device, equipment and medium
CN112784691A (en) Target detection model training method, target detection method and device
CN106682669A (en) Image processing method and mobile terminal
CN120107598A (en) Plate recognition method, device, equipment, storage medium and computer program product
CN113705511A (en) Gesture recognition method and device
CN108334850A (en) A kind of intelligent interactive system
CN113435531A (en) Zero sample image classification method and system, electronic equipment and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20180727)