CN111222569A - Method, device, electronic equipment and medium for identifying food - Google Patents


Info

Publication number
CN111222569A
Authority
CN
China
Prior art keywords
food
identified
characteristic parameter
information
obtaining
Prior art date
Legal status
Withdrawn
Application number
CN202010009628.3A
Other languages
Chinese (zh)
Inventor
章文文
Current Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN202010009628.3A
Publication of CN111222569A
Legal status: Withdrawn


Classifications

    • G06F 18/24 — Pattern recognition; Classification techniques
    • G06F 18/214 — Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G16H 20/60 — Healthcare informatics; ICT specially adapted for therapies or health-improving plans relating to nutrition control, e.g. diets
    • G06V 20/68 — Image or video recognition or understanding; Scenes; Type of objects; Food, e.g. fruit or vegetables


Abstract

The application discloses a method, a device, electronic equipment and a medium for identifying food. After a target image including the food to be identified is obtained, a first characteristic parameter of the food to be identified can be extracted by using a preset neural network detection model, and the calorie information corresponding to the food to be identified is obtained based on that first characteristic parameter. By applying the technical solution of the present application, after a user photographs an image containing the food to be identified, a neural network model can determine information such as the type and cooking manner of the food, automatically generate the corresponding calorie information, and label it on the image, thereby sparing the user the inconvenience of having to query the calorie content of food manually.

Description

Method, device, electronic equipment and medium for identifying food
Technical Field
The present application relates to image processing technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for identifying food.
Background
With the development of the information age and society, people's living standards have steadily improved. As more and more people now pursue fitness, weight loss, or a healthy diet, they generally look up the calorie information of each food before eating it, so as to ensure that their daily energy intake stays within a standard range.
However, such identification is usually done by manual query, which makes the operation cumbersome for the user and degrades the user experience.
Disclosure of Invention
The embodiment of the application provides a method, a device, electronic equipment and a medium for identifying food.
According to an aspect of an embodiment of the present application, there is provided a method for identifying food, including:
acquiring a target image, wherein the target image comprises food to be identified;
extracting a first characteristic parameter of the food to be identified by using a preset neural network detection model;
and obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified.
Optionally, in another embodiment based on the above method of the present application, the obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified includes:
acquiring a first characteristic parameter of the food to be identified;
determining the quantity of the food to be identified by using the first characteristic parameter;
and obtaining the calorie information corresponding to the food to be identified based on the number of the food to be identified.
Optionally, in another embodiment based on the above method of the present application, the obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified includes:
acquiring a first characteristic parameter of the food to be identified;
matching the first characteristic parameters with characteristic parameters in a first matching database, and determining attribute information of the food to be identified, wherein the attribute information comprises at least one of food types and cooking manners;
and obtaining the calorie information corresponding to the food to be identified based on the attribute information of the food to be identified.
Optionally, in another embodiment based on the foregoing method of the present application, after obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified, the method further includes:
acquiring a second characteristic parameter of the food to be identified;
matching the second characteristic parameters with characteristic parameters in a second matching database, and determining the nutritional information of the food to be identified;
and marking the caloric information and the nutritional information of the food to be identified in the target image.
Optionally, in another embodiment based on the foregoing method of the present application, after obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified, the method further includes:
acquiring a third characteristic parameter of the food to be identified;
matching the third characteristic parameters with characteristic parameters in a third matching database to determine the fat information of the food to be identified;
and labeling the caloric information and the fat information of the food to be identified in the target image.
Optionally, in another embodiment based on the foregoing method of the present application, after obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified, the method further includes:
acquiring data parameters of a target user, wherein the data parameters are used for reflecting at least one of weight information, posture information and face information of the target user;
generating a push message for the target user based on the calorie information corresponding to the food to be identified and the data parameter, wherein the push message is used for representing whether the target user is suitable for eating the food to be identified;
and displaying the push message in the target image.
Optionally, in another embodiment based on the foregoing method of the present application, after the acquiring the target image, the method further includes:
detecting whether a food recognition function is started;
and when the food identification function is determined to be started, extracting a first characteristic parameter of the food to be identified by using a preset neural network detection model.
According to another aspect of the embodiments of the present application, there is provided an apparatus for identifying food, including:
an acquisition module configured to acquire a target image including food to be identified;
an extraction module configured to extract a first characteristic parameter of the food to be identified by using a preset neural network detection model;
a generation module configured to obtain the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a display configured to cooperate with the memory to execute the executable instructions so as to perform the operations of any one of the above methods of identifying food.
According to a further aspect of embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions which, when executed, perform the operations of any one of the above-mentioned methods for identifying food.
In the present application, after the target image including the food to be identified is obtained, the preset neural network detection model can be used to extract the first characteristic parameter of the food to be identified, and the calorie information corresponding to the food to be identified is obtained based on that first characteristic parameter. By applying this technical solution, after a user photographs an image containing the food to be identified, a neural network model can determine information such as the type and cooking manner of the food, automatically generate the corresponding calorie information, and label it on the image, thereby sparing the user the need to query the calorie content of food manually.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the system architecture for recognizing food according to the present application;
FIG. 2 is a schematic diagram of a method of identifying food as set forth herein;
FIG. 3 is a schematic diagram of a method of identifying food in accordance with the present application;
FIG. 4 is a schematic structural diagram of the apparatus for identifying food according to the present application;
fig. 5 is a schematic view of an electronic device according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, technical solutions between the various embodiments of the present application may be combined with each other, but it must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should be considered to be absent and not within the protection scope of the present application.
It should be noted that all the directional indicators (such as up, down, left, right, front, and rear) in the embodiments of the present application are only used to explain the relative positional relationship, motion situation, etc. between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
A method for identifying food according to an exemplary embodiment of the present application is described below in conjunction with fig. 1-3. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which the method or apparatus for identifying food of an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and the like.
The terminal apparatuses 101, 102, 103 in the present application may be terminal apparatuses that provide various services. For example, through the terminal device 103 (or the terminal device 101 or 102), a user may acquire a target image, wherein the target image comprises food to be identified; extract a first characteristic parameter of the food to be identified by using a preset neural network detection model; and obtain the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified.
It should be noted that the method of identifying food provided in the embodiments of the present application may be executed by one or more of the terminal devices 101, 102, and 103, and/or the server 105; accordingly, the apparatus for identifying food provided in the embodiments of the present application is generally disposed in the corresponding terminal device and/or the server 105, but the present application is not limited thereto.
The application also provides a method, a device, a target terminal and a medium for identifying food.
Fig. 2 schematically shows a flow diagram of a method of identifying food according to an embodiment of the present application. As shown in fig. 2, the method includes:
s101, acquiring a target image, wherein the target image comprises food to be identified.
It should be noted that the device for acquiring the target image is not specifically limited in the present application, and may be, for example, an intelligent device or a server. The smart device may be a PC (Personal Computer), a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, or another mobile terminal device having a display function.
Further, when eating, the user needs to obtain the various nutrients and calories required by the body from food in order to maintain normal bodily functions. People of different age groups and health conditions have different requirements for calories and nutrient content. At the same time, the variety of foods is enormous, and the quality and quantity of energy and nutrients contained in each food differ.
Still further, how to reasonably select the food most suitable for the user becomes a problem of great concern to the user. With the continuous improvement of living standard, people pay more and more attention to diet health. Improper or excessive diet may lead to food allergies and the development of some chronic diseases such as diabetes, hypertension, obesity, etc.
Further, for a user whose energy intake is excessive over a long period, obesity and similar conditions are likely to result. Obesity is a chronic metabolic disease caused by multiple factors: long-term energy intake exceeding energy expenditure leads to excessive accumulation and/or abnormal distribution of body fat, to a degree that endangers health. In other words, obesity results from an imbalance in energy metabolism; when energy intake exceeds energy expenditure over a long period, the excess energy is converted into fat and stored in the body, and the excessive fat stores cause obesity. Given this cause, reducing body fat requires controlling total energy intake and expending as much energy as possible. Controlling total energy intake requires choosing food sensibly rather than resorting to an extremely low-energy diet; on the basis of limiting total energy, a reasonable nutritional mix should satisfy the body's basic nutritional needs as far as possible. Energy expenditure mainly consists of the basal metabolism that sustains life, the thermic effect of food, and physical activity.
To avoid the above-mentioned problem of long-term excessive energy intake, users generally enter information such as dish names into a search box on the Internet to query the calorie information and other information, such as nutrition, of the food. As can be appreciated, this approach costs the user extra query time and thus detracts from the eating experience.
Further, the method for acquiring the target image is not specifically limited in the present application, and for example, the target image may be an image shot by a user for food to be recognized, or the image of the food to be recognized may be transmitted to an intelligent device by a server after being obtained.
In addition, the number of foods to be identified is not specifically limited, and may be one or more.
In addition, the food to be identified is not specifically limited in the present application, that is, the food to be identified may be any food. For example, the food can be fruits, dishes, staple food, uncooked food, cooked food and the like.
In addition, the number of target images is not particularly limited, and may be, for example, one or more.
S102, extracting a first characteristic parameter of the food to be identified by using a preset neural network detection model.
After the target image containing the food to be recognized is obtained, the first characteristic parameters of the food to be recognized corresponding to the target image can be extracted by using a preset neural network image detection model. The first characteristic parameter is not specifically limited in the present application, and for example, the first characteristic parameter may be a characteristic parameter corresponding to color information of food to be recognized, or may also be a characteristic parameter corresponding to shape information of food to be recognized, or may also be a characteristic parameter corresponding to quantity information of food to be recognized, or may also be a characteristic parameter corresponding to size information of food to be recognized, or the like.
Further, the present application also does not specifically limit the neural network detection model, for example, the neural network detection model may be a MobileNet (computer vision neural network) model. The MobileNet model is a general computer vision neural network designed for mobile devices, and therefore can support image classification and detection. Running deep networks on personal mobile devices in general can enhance user experience, improve flexibility of access, and gain additional advantages in security, privacy, and energy consumption. Furthermore, with the advent of new applications, users can interact with the real world in real time, and thus there can be a great need for more efficient neural networks.
Alternatively, the neural network detection model in the present application may be another convolutional neural network. Convolutional Neural Networks (CNNs) are a class of feedforward neural networks that involve convolution computations and have a deep structure, and are among the representative algorithms of deep learning. A convolutional neural network has a representation learning capability and can perform translation-invariant classification of input information according to its hierarchical structure. Owing to the strong feature characterization capability of the CNN model for images, it has achieved remarkable results in fields such as image classification, object detection, and semantic segmentation.
Further, the method can use the neural network detection model to extract the first characteristic parameter of the food to be identified in the target image. At least one target image is input into the preset neural network detection model, and the output of its last fully connected layer (FC) is used as the first characteristic parameter corresponding to the target image, so that the recognition result corresponding to the food to be identified can subsequently be obtained from this characteristic parameter.
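As an illustration only, the following minimal sketch shows what this feature-extraction step could look like, assuming a PyTorch/torchvision environment and a pretrained MobileNetV2 backbone; the framework, input size, and preprocessing are not specified by the present application and are assumptions here.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained MobileNetV2 and replace its classifier with an identity
# layer so the forward pass returns the pooled features feeding the last
# fully connected layer.
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_first_characteristic_parameter(image_path):
    # Returns the feature vector used here as the "first characteristic parameter".
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        features = backbone(batch)           # shape: (1, 1280)
    return features.squeeze(0)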
S103, obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified.
Further, after the characteristic parameter of the food to be identified is obtained, a preset database storing the calorie information corresponding to each food can be used to obtain the calorie information corresponding to the food to be identified.
For example, a food calorie table may be stored in the database. A food calorie table is a tabulated list of the energy contained in a unit quantity (e.g., 100 grams) of each food, used as a dietary health reference. Different foods provide different amounts of energy. For users with chronic diseases such as obesity, diabetes, or hypertension, controlling the body's daily energy intake is one way to limit the progression of their condition. It is therefore necessary to record the energy contained in each food so that the user can calculate or plan the total energy contained in the foods consumed at each meal.
Further, a calorie is defined as the amount of energy (approximately 4.187 joules) required to raise the temperature of 1 ml (1 gram) of water by 1 °C. The food may include staple foods such as rice, porridge, and noodles; it may also be meat, bean curd, eggs, and the like; and it may also be vegetables, seafood, fruits, and other foods.
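As a simple illustration of the table lookup described above, the sketch below estimates calories from a per-100-gram calorie table; the table entries and the weight argument are placeholder assumptions, not values from the present application.

# Per-100-gram calorie table (illustrative placeholder values).
FOOD_CALORIE_TABLE_KCAL_PER_100G = {
    "rice": 116,
    "noodles": 110,
    "tofu": 81,
    "apple": 52,
}

def calorie_info(food_name, weight_g):
    # Scale the per-100-gram energy by the estimated portion weight.
    per_100g = FOOD_CALORIE_TABLE_KCAL_PER_100G[food_name]
    return per_100g * weight_g / 100.0

# Example: roughly 174 kcal for 150 g of cooked rice.
print(calorie_info("rice", 150))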
Furthermore, after the calorie information corresponding to the food to be identified is obtained, the calorie information can be displayed on the target image, so that the user can view it directly and conveniently. Alternatively, after the calorie information corresponding to the food to be identified is obtained, it can be sent to the user's smart device in another textual form, which is also convenient for the user to view.
In the present application, after the target image including the food to be identified is obtained, the preset neural network detection model can be used to extract the first characteristic parameter of the food to be identified, and the calorie information corresponding to the food to be identified is obtained based on that first characteristic parameter. By applying this technical solution, after a user photographs an image containing the food to be identified, a neural network model can determine information such as the type and cooking manner of the food, automatically generate the corresponding calorie information, and label it on the image, thereby sparing the user the need to query the calorie content of food manually.
In another possible embodiment of the present application, in S103 (obtaining the calorie information corresponding to the food to be recognized based on the first characteristic parameter of the food to be recognized), the following two ways may be implemented:
the first mode is as follows:
acquiring a first characteristic parameter of food to be identified;
determining the quantity of the food to be identified by using the first characteristic parameter;
and obtaining the calorie information corresponding to the food to be identified based on the quantity of the food to be identified.
Further, in the process of obtaining the calorie information corresponding to the food to be identified, the calorie information can be determined based on the number of food items to be identified. It can be understood that, when there are multiple food items to be identified, the application may determine the calorie information corresponding to the multiple food items based on their combination.
It should be noted that, for multiple food items to be identified, the total energy is not simply the sum of the energy corresponding to each item; it can be appreciated that there may be mutual-consumption and/or combined effects between different foods. Therefore, the present application can comprehensively determine the calorie information corresponding to each food item to be identified in the target image based on the category of each of the multiple food items.
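The sketch below illustrates the first mode under stated assumptions: per-item calorie values and an optional combination factor stand in for the quantity-based estimate and the combined effect mentioned above, and all numbers are placeholders.

# Calories per single item (illustrative placeholder values).
CALORIES_PER_ITEM_KCAL = {"dumpling": 45, "boiled egg": 78, "apple": 52}

def quantity_based_calories(food_name, count, combination_factor=1.0):
    # Multiply per-item calories by the detected count; the optional factor
    # stands in for any combined-effect adjustment across multiple foods.
    return CALORIES_PER_ITEM_KCAL[food_name] * count * combination_factor

# Example: six dumplings -> roughly 270 kcal before any adjustment.
print(quantity_based_calories("dumpling", 6))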
The second mode is as follows:
acquiring a first characteristic parameter of food to be identified;
matching the first characteristic parameters with the characteristic parameters in the first matching database, and determining attribute information of the food to be identified, wherein the attribute information comprises at least one of food type and cooking mode;
and obtaining calorie information corresponding to the food to be identified based on the attribute information of the food to be identified and the quantity of the food to be identified.
Further, in the process of obtaining the calorie information corresponding to the food to be identified, the calorie information can also be determined based on the food type and the cooking manner of the food to be identified. It can be understood that, when the food type of the food to be identified is a high-calorie food material, the corresponding calorie information is relatively high; examples include popcorn, fatty foods, and carbonated beverages such as cola. Furthermore, when the cooking manner of the food to be identified is a high-calorie cooking manner, the corresponding calorie information is also relatively high; examples include deep-frying, stir-frying, and the like.
For example, for chicken serving as the food to be identified, after the first characteristic parameter of the chicken is acquired, this characteristic parameter can be matched with the characteristic parameters in the first matching database to determine the cooking manner of the chicken. Further, when the cooking manner of the chicken is determined to be boiling, the corresponding calorie information, 30 kcal, can be generated; when the cooking manner of the chicken is determined to be frying, the corresponding calorie information, 230 kcal, can be generated.
It should be noted that, in the process of acquiring the first characteristic parameter, a camera of the smart device may first be used to capture a target image containing the food to be identified. Characteristic information of each part of the food to be identified is then determined through a preset food characteristic map; this characteristic information may correspond to the color information, size information, shape information, density information, and the like of the food to be identified. After the first characteristic parameter of the food to be identified is determined, Euclidean distances are respectively calculated between the first characteristic parameter and the pre-stored characteristic parameter data of each food in a preset matching database. It can be understood that, when multiple Euclidean distances are obtained, the smallest value is selected. This value is compared with a preset first threshold, and if the difference between the value and the preset first threshold is smaller than a first set value, the target characteristic parameter corresponding to the smallest Euclidean distance is determined to be the characteristic data corresponding to the food to be detected. Furthermore, the food information corresponding to the target characteristic parameter can be retrieved from the database to obtain the attribute information of the food to be identified, so that the food type and cooking manner of the food are identified in a targeted manner and the corresponding calorie information is then obtained.
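Purely as an illustration of the matching step just described, the sketch below compares an extracted feature vector against pre-stored vectors by Euclidean distance and accepts the closest entry only when it falls within a threshold; the database contents, vector size, and threshold value are assumptions.

import numpy as np

# Pre-stored characteristic parameters keyed by (food type, cooking manner);
# random vectors are used here purely as placeholders.
first_matching_database = {
    ("chicken", "boiled"): np.random.rand(1280),
    ("chicken", "fried"): np.random.rand(1280),
    ("rice", "steamed"): np.random.rand(1280),
}

def match_attribute_info(feature, threshold=0.5):
    # Compute the Euclidean distance to every stored entry, keep the smallest,
    # and accept it only if it falls under the preset threshold.
    distances = {
        attrs: np.linalg.norm(feature - stored)
        for attrs, stored in first_matching_database.items()
    }
    best_attrs = min(distances, key=distances.get)
    if distances[best_attrs] < threshold:
        return best_attrs
    return None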
In addition, before the first characteristic parameter of the food to be identified is obtained, the detection network architecture for food can be defined using a deep convolutional neural network based on a cascade of a region proposal network, a region regression network, and a key-point regression network. In the deep convolutional neural network adopted, the input of the region proposal network is 16 × 16 × 3 image data, the network consists of a fully convolutional architecture, and its output is the confidence and the rough vertex positions of the food region proposal box; the input of the region regression network is 32 × 32 × 3 image data, and the network consists of convolutional and fully connected architectures.
In one possible implementation manner of the present application, before extracting the first characteristic parameter of the food to be recognized by using a preset neural network image detection model, the present application further needs to first obtain the neural network image detection model by the following steps:
obtaining a sample image, wherein the sample image includes at least one sample feature;
and training a preset neural network image detection model by using the sample image to obtain a convolutional neural network detection model meeting preset conditions.
Further, the present application may identify, through a neural network image detection model, a sample feature (for example, a shape feature, a size feature, a color feature, and the like) of at least one object included in the sample image. Furthermore, the neural network image detection model may classify each sample feature in the sample image, and classify the sample features belonging to the same category into the same type, so that a plurality of sample features obtained after semantic segmentation of the sample image may be sample features composed of a plurality of different types.
It should be noted that, when the neural network image detection model performs semantic segmentation processing on the sample image, the more accurate the classification of the pixel points in the sample image is, the higher the accuracy rate of identifying the labeled object in the sample image is. It should be noted that the preset condition may be set by a user.
For example, the preset condition may be set as: the classification accuracy of the pixel points reaches 70% or more. The sample images are then used to repeatedly train the neural network image classification model, and once the model's classification accuracy on the pixel points reaches 70% or more, it can be applied in the embodiments of the present application to perform semantic segmentation on the target image.
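The following is a hedged sketch of the training loop implied above, assuming PyTorch, labeled sample images provided through data loaders, and a cross-entropy objective; the model keeps training until its classification accuracy on held-out samples reaches the preset condition (70% here). All of these choices are assumptions for illustration.

import torch
import torch.nn as nn

def train_until_accuracy(model, train_loader, val_loader, target_acc=0.70,
                         max_epochs=50, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(max_epochs):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

        # Evaluate classification accuracy on held-out sample images.
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
        if correct / total >= target_acc:   # preset condition satisfied
            break
    return model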
In another possible embodiment of the present application, after S103 (obtaining the calorie information corresponding to the food to be recognized based on the first characteristic parameter of the food to be recognized), the following two cases may be further performed:
in the first case:
acquiring a second characteristic parameter of the food to be identified;
matching the second characteristic parameters with the characteristic parameters in the second matching database to determine the nutritional information of the food to be identified;
and marking the caloric information and the nutritional information of the food to be identified in the target image.
Further, after obtaining the calorie information corresponding to the food to be identified, the present application can also determine the corresponding nutritional information of the food to be identified based on the second characteristic parameter of the food to be identified. It is understood that the nutritional information is not specifically limited in this application; for example, it may be the vitamin content, the protein content, or the sugar content corresponding to the food. The calorie information and the nutritional information of the food to be identified can be labeled together in the target image so that the user can view them conveniently.
In the second case:
acquiring a third characteristic parameter of the food to be identified;
matching the third characteristic parameters with the characteristic parameters in the third matching database to determine the fat information of the food to be identified;
and labeling the caloric information and the fat information of the food to be identified in the target image.
Further, after obtaining the calorie information corresponding to the food to be identified, the present application may also determine the fat information corresponding to the food to be identified based on the third characteristic parameter of the food to be identified. It will be appreciated that for some obese users, high-fat foods should be restricted in the diet. Therefore, the third characteristic parameter of the food to be identified can be matched with the characteristic parameters in the third matching database to determine the fat content information of the food to be identified.
Furthermore, the calorie information and the fat information of the food to be identified can be labeled together in the target image so that the user can view them conveniently.
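As an illustrative sketch of labeling the obtained information in the target image, the snippet below draws text lines onto the image with Pillow; the font, position, and example strings are assumptions rather than details given in the present application.

from PIL import Image, ImageDraw

def label_image(image_path, lines, out_path):
    # Draw each text line near the top-left corner of the target image.
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    y = 10
    for line in lines:
        draw.text((10, y), line, fill=(255, 255, 255))
        y += 20
    image.save(out_path)

# Example: label_image("dish.jpg", ["Calories: 230 kcal", "Fat: 12 g"], "dish_labeled.jpg")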
In another possible embodiment of the present application, after S103 (obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified), the following may be further performed:
acquiring data parameters of a target user, wherein the data parameters are used for reflecting at least one of weight information, posture information and face information of the target user;
generating a push message for the target user based on the calorie information corresponding to the food to be identified and the data parameter, wherein the push message is used for representing whether the target user is suitable for eating the food to be identified;
and displaying the push message in the target image.
Further, after obtaining the calorie information corresponding to the food to be identified, the present application may also determine, based on the data parameter of the target user, whether the food to be identified is suitable for the target user to eat. It will be appreciated that for users with dietary restrictions (e.g., users suffering from obesity or diabetes, or users who are allergic to certain foods), the foods they consume need to be strictly limited. Therefore, in the present application, the data parameter of the target user can be obtained first and used to determine whether the food to be identified contains ingredients that the target user should not eat.
The data parameter of the target user is not specifically limited in the present application; it may be, for example, at least one of weight information, posture information, and face information, or alternatively fingerprint information, iris information, or the like. It can be appreciated that any parameter that can assist the device in determining the identity or dietary profile of the user may fall within the scope of the present application.
For example, for some obese users, foods containing fat should be restricted in the diet. Therefore, the present application can determine the food to be identified as a high-calorie food when its fat content information is determined to exceed a threshold value. Further, when the target user is judged to be an obese user according to information such as the weight information or posture information of the target user, a push message indicating that the food to be identified is not suitable for the target user to eat can be generated and displayed in the target image to prompt the user.
Further, for some egg-allergic users, foods containing egg should be avoided. Therefore, when the food to be identified is determined to contain an egg ingredient, it can be determined to be a food containing egg. Further, when the target user is identified as a pre-stored egg-allergic user according to information such as the face information and posture information of the target user, a push message indicating that the food to be identified is not suitable for the target user to eat can be generated and displayed in the target image to prompt the user.
Still further, for some users with diabetes, foods containing large amounts of sugar should be avoided. Therefore, the present application can determine the food to be identified as a high-sugar food when its sugar content information is determined to exceed a threshold value. Further, when the target user is identified as a pre-stored diabetic user according to information such as the face information and posture information of the target user, a push message indicating that the food to be identified is not suitable for the target user to eat can be generated and displayed in the target image to prompt the user.
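The rule-based sketch below illustrates how the push message in the three examples above could be decided; the profile fields, thresholds, and message wording are assumptions rather than part of the present application.

def generate_push_message(food_info, user_profile):
    # Compare the recognized food's composition against the user's data parameters.
    if user_profile.get("is_obese") and food_info.get("fat_g", 0) > 20:
        return "High-fat food: not recommended for your weight-management plan."
    if "egg" in user_profile.get("allergies", []) and food_info.get("contains_egg"):
        return "Contains egg: not suitable due to your recorded allergy."
    if user_profile.get("has_diabetes") and food_info.get("sugar_g", 0) > 15:
        return "High-sugar food: not recommended for diabetes management."
    return "No dietary conflicts detected for this food."

# Example: a fried dish with 25 g of fat for an obese user triggers the first message.
print(generate_push_message({"fat_g": 25}, {"is_obese": True}))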
Further optionally, in an embodiment of the present application, after S103 (obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified), a specific embodiment is further included, as shown in fig. 3, including:
s201, acquiring a target image.
S202, detecting whether the food recognition function is turned on.
S203, when the food recognition function is determined to be started, extracting a first characteristic parameter of the food to be recognized by using a preset neural network detection model.
Further, in the application, after the target image is acquired, whether a food recognition function for the food to be recognized is started in the current device is detected, and when the food recognition function is determined to be started, the first characteristic parameter of the food to be recognized is extracted by using a preset neural network detection model.
For example, to avoid unnecessary consumption of system resources by the device, the present application extracts the first characteristic parameter of the food to be identified with the preset neural network detection model only after determining that the user has taken an image and actively turned on the food recognition function, and subsequently obtains the calorie information corresponding to the food to be identified according to this characteristic parameter.
S204, extracting a first characteristic parameter of the food to be identified by using a preset neural network detection model.
S205, obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified.
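The sketch below outlines the gating flow of steps S201-S205, in which recognition runs only when the food recognition function is switched on; the settings dictionary and the two stub helpers are assumptions (the feature extraction could reuse the earlier MobileNet sketch).

def extract_feature(target_image):
    # Placeholder for the neural-network feature extraction (S203/S204).
    return [0.0]

def lookup_calorie_info(feature):
    # Placeholder for the database lookup that yields calorie information (S205).
    return {"calories_kcal": 0}

def recognize_food(target_image, settings):
    if not settings.get("food_recognition_enabled", False):   # S202: check the switch
        return None                                           # recognition skipped
    feature = extract_feature(target_image)
    return lookup_calorie_info(feature)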
In the present application, after the target image including the food to be identified is obtained, the preset neural network detection model can be used to extract the first characteristic parameter of the food to be identified, and the calorie information corresponding to the food to be identified is obtained based on that first characteristic parameter. By applying this technical solution, after a user photographs an image containing the food to be identified, a neural network model can determine information such as the type and cooking manner of the food, automatically generate the corresponding calorie information, and label it on the image, thereby sparing the user the need to query the calorie content of food manually.
In another embodiment of the present application, as shown in fig. 4, the present application further provides a device for identifying food. The device comprises an acquisition module 301, an extraction module 302, and a generation module 303, wherein:
an acquisition module configured to acquire a target image including food to be identified;
an extraction module configured to extract a first characteristic parameter of the food to be identified by using a preset neural network detection model;
a generation module configured to obtain the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified.
In the present application, after the target image including the food to be identified is obtained, the preset neural network detection model can be used to extract the first characteristic parameter of the food to be identified, and the calorie information corresponding to the food to be identified is obtained based on that first characteristic parameter. By applying this technical solution, after a user photographs an image containing the food to be identified, a neural network model can determine information such as the type and cooking manner of the food, automatically generate the corresponding calorie information, and label it on the image, thereby sparing the user the need to query the calorie content of food manually.
In another embodiment of the present application, the obtaining module 301 further includes:
an obtaining module 301 configured to obtain a first characteristic parameter of the food to be identified;
an obtaining module 301 configured to determine the number of the food to be identified by using the first characteristic parameter;
an obtaining module 301 configured to obtain calorie information corresponding to the food to be identified based on the number of the food to be identified.
In another embodiment of the present application, the obtaining module 301 further includes:
an obtaining module 301 configured to obtain a first characteristic parameter of the food to be identified;
an obtaining module 301, configured to match the first characteristic parameter with a characteristic parameter in a first matching database, and determine attribute information of the food to be identified, where the attribute information includes at least one of a food type and a cooking manner;
an obtaining module 301 configured to obtain calorie information corresponding to the food to be identified based on the attribute information of the food to be identified.
In another embodiment of the present application, the apparatus further includes an obtaining module 301, where:
an obtaining module 301 configured to obtain a second characteristic parameter of the food to be identified;
matching the second characteristic parameters with characteristic parameters in a second matching database, and determining the nutritional information of the food to be identified;
an obtaining module 301 configured to label the caloric information and the nutritional information of the food to be identified in the target image.
In another embodiment of the present application, the obtaining module 301 further includes:
an obtaining module 301 configured to obtain a third characteristic parameter of the food to be identified;
an obtaining module 301, configured to match the third characteristic parameter with a characteristic parameter in a third matching database, and determine fat information of the food to be identified;
an obtaining module 301 configured to label the calorie information and the fat information of the food to be identified in the target image.
In another embodiment of the present application, the apparatus further includes an obtaining module 301, where:
an obtaining module 301, configured to obtain a data parameter of a target user, where the data parameter is used to reflect at least one of weight information, posture information, and face information of the target user;
an obtaining module 301, configured to generate a push message for the target user based on the calorie information corresponding to the food to be identified and the data parameter, where the push message is used to represent whether the target user is suitable for eating the food to be identified;
an obtaining module 301 configured to display the push message in the target image.
In another embodiment of the present application, the apparatus further includes an obtaining module 301, where:
an acquisition module 301 configured to detect whether a food recognition function is turned on;
an obtaining module 301 configured to extract a first characteristic parameter of the food to be recognized by using a preset neural network detection model when the food recognition function is determined to be turned on.
Fig. 5 is a block diagram illustrating a logical structure of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, electronic device 400 may include one or more of the following components: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 402 is configured to store at least one instruction for execution by the processor 401 to implement the interactive special effect calibration method provided by the method embodiments of the present application.
In some embodiments, the electronic device 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, providing the front panel of the electronic device 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the electronic device 400 or in a folded design; in still other embodiments, the display screen 405 may be a flexible display screen disposed on a curved surface or a folded surface of the electronic device 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display screen 405 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to determine the current geographic location of the electronic device 400 to implement navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components of the electronic device 400. The power supply 409 may use alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 409 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the electronic device 400 further includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: an acceleration sensor 411, a gyroscope sensor 412, a pressure sensor 413, a fingerprint sensor 414, an optical sensor 415, and a proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration along the three coordinate axes of a coordinate system established with respect to the electronic device 400. For example, the acceleration sensor 411 may be used to detect the components of gravitational acceleration along the three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used to collect game or user motion data.
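As an illustrative aside (not part of the claimed subject matter), the landscape/portrait decision described above can be sketched as a simple comparison of the gravity components reported by the accelerometer. The function name, axis convention, and Python form below are hypothetical; a real device would use its platform sensor and window-management APIs.

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a UI orientation from the gravity components (in m/s^2) along the
    device's x axis (short edge) and y axis (long edge), as they might be
    reported by an accelerometer such as the acceleration sensor 411.

    Hypothetical sketch: the axis convention and the bare magnitude
    comparison are assumptions made for illustration only.
    """
    # Gravity mostly along the long edge -> the device is held upright.
    if abs(gy) >= abs(gx):
        return "portrait"
    # Gravity mostly along the short edge -> the device lies on its side.
    return "landscape"


# Example: gravity almost entirely on the x axis -> landscape view.
print(choose_orientation(gx=9.5, gy=1.2))  # prints "landscape"
```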
The gyroscope sensor 412 may detect the body orientation and rotation angle of the electronic device 400, and may cooperate with the acceleration sensor 411 to capture the user's 3D motion with respect to the electronic device 400. Based on the data collected by the gyroscope sensor 412, the processor 401 may implement the following functions: motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side frame of the electronic device 400 and/or beneath the touch display screen 405. When the pressure sensor 413 is disposed on the side frame of the electronic device 400, it can detect the user's holding signal on the electronic device 400, and the processor 401 performs left-hand/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed beneath the touch display screen 405, the processor 401 controls an operability control on the UI according to the pressure applied by the user to the touch display screen 405. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
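The left-hand/right-hand recognition mentioned above can likewise be sketched as a comparison of grip pressure on the two side frames. The helper below and its "palm presses harder" heuristic are assumptions for illustration only and are not prescribed by this application.

```python
def guess_holding_hand(left_edge_pressures: list[float],
                       right_edge_pressures: list[float]) -> str:
    """Guess which hand holds the device from pressure samples (arbitrary
    units) collected along the left and right side frames, in the spirit of
    the holding-signal handling around the pressure sensor 413.

    Hypothetical heuristic: the palm usually presses the frame harder than
    the fingertips, so the side with the higher mean pressure is treated as
    the palm side.
    """
    left_mean = sum(left_edge_pressures) / len(left_edge_pressures)
    right_mean = sum(right_edge_pressures) / len(right_edge_pressures)
    # A stronger press on the right edge suggests the palm is on the right,
    # i.e. the device is held in the right hand.
    return "right" if right_mean > left_mean else "left"


print(guess_holding_hand([0.2, 0.3, 0.25], [0.8, 0.7, 0.9]))  # prints "right"
```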
The fingerprint sensor 414 is used to collect the user's fingerprint, and the processor 401 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the user's identity according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 414 may be disposed on the front, back, or side of the electronic device 400. When a physical button or a vendor logo is provided on the electronic device 400, the fingerprint sensor 414 may be integrated with the physical button or the vendor logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is decreased. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
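One way to read the brightness rule above is as a monotonic mapping from ambient light to a brightness level. The linear ramp, the lux cap, and the function below are assumptions for illustration, not a required implementation.

```python
def adjust_brightness(ambient_lux: float,
                      min_level: float = 0.1,
                      max_level: float = 1.0,
                      max_lux: float = 1000.0) -> float:
    """Map an ambient light reading (in lux) to a display brightness level in
    [min_level, max_level], so that a brighter environment yields a brighter
    screen, as described for the optical sensor 415 and the touch display
    screen 405.

    The linear ramp and the max_lux saturation point are illustrative
    assumptions only.
    """
    clamped = min(max(ambient_lux, 0.0), max_lux)
    return min_level + (clamped / max_lux) * (max_level - min_level)


print(adjust_brightness(50.0))   # dim room  -> a low brightness level
print(adjust_brightness(900.0))  # daylight  -> close to maximum brightness
```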
The proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the electronic device 400. The proximity sensor 416 is used to detect the distance between the user and the front of the electronic device 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front of the electronic device 400 gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 416 detects that the distance between the user and the front of the electronic device 400 gradually increases, the processor 401 controls the touch display screen 405 to switch from the dark-screen state to the bright-screen state.
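The screen-state switching above can be sketched as a small state machine driven by the reported distance; using two thresholds instead of one gives a hysteresis band so the screen does not flicker around a single cutoff. The function, the threshold values, and the state names below are hypothetical.

```python
def update_screen_state(distance_cm: float,
                        current_state: str,
                        near_threshold: float = 3.0,
                        far_threshold: float = 6.0) -> str:
    """Switch between the "bright" and "dark" screen states based on the
    distance reported by a front-facing proximity sensor such as the
    proximity sensor 416.

    Hypothetical sketch: the two-threshold hysteresis and the numeric values
    are assumptions for illustration only.
    """
    if current_state == "bright" and distance_cm <= near_threshold:
        return "dark"    # the user moved close, e.g. lifted the phone to the ear
    if current_state == "dark" and distance_cm >= far_threshold:
        return "bright"  # the user moved away again
    return current_state


print(update_screen_state(2.0, "bright"))  # prints "dark"
print(update_screen_state(8.0, "dark"))    # prints "bright"
```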
Those skilled in the art will appreciate that the structure shown in fig. 4 does not constitute a limitation on the electronic device 400, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as a memory including instructions executable by the processor 401 of the electronic device 400 to perform the above-described method of identifying food, the method including: acquiring a target image, wherein the target image includes food to be identified; extracting a first characteristic parameter of the food to be identified by using a preset neural network detection model; and obtaining calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified. Optionally, the instructions may also be executable by the processor 401 of the electronic device 400 to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
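To make the three enumerated steps concrete, the sketch below shows one possible shape of such instructions in Python. The detector interface, the class labels, and the calorie table are hypothetical placeholders; this application does not prescribe a particular network architecture, database schema, or programming language.

```python
import numpy as np

# Hypothetical calorie table (kcal per 100 g). The matching databases
# described in the embodiments could additionally hold cooking manner,
# nutrition, and fat information.
CALORIE_TABLE = {"apple": 52, "fried rice": 163, "steamed fish": 105}


def identify_food(image: np.ndarray, detector) -> dict:
    """Illustrate the three claimed steps: a target image is acquired by the
    caller, a first characteristic parameter is extracted with a preset
    neural network detection model, and calorie information is derived from
    that parameter.

    `detector` stands in for any pre-trained detection model exposing a
    `predict(image)` call returning (label, count, confidence); this
    interface is an assumption made purely for illustration.
    """
    # Extract the first characteristic parameter (here: a label and a count).
    label, count, confidence = detector.predict(image)

    # Look up the calorie information corresponding to the recognized food.
    per_100g = CALORIE_TABLE.get(label)
    if per_100g is None:
        return {"label": label, "calorie_info": None, "confidence": confidence}
    return {
        "label": label,
        "count": count,
        "calories_per_100g": per_100g,
        "confidence": confidence,
    }
```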
In an exemplary embodiment, there is also provided an application/computer program product including one or more instructions executable by the processor 401 of the electronic device 400 to perform the above-described method of identifying food, the method including: acquiring a target image, wherein the target image includes food to be identified; extracting a first characteristic parameter of the food to be identified by using a preset neural network detection model; and obtaining calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified. Optionally, the instructions may also be executable by the processor 401 of the electronic device 400 to perform other steps involved in the exemplary embodiments described above.

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow the general principles of the application and include such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of identifying food, comprising:
acquiring a target image, wherein the target image comprises food to be identified;
extracting a first characteristic parameter of the food to be identified by using a preset neural network detection model;
and obtaining calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified.
2. The method of claim 1, wherein the obtaining of the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified comprises:
acquiring a first characteristic parameter of the food to be identified;
determining the quantity of the food to be identified by using the first characteristic parameter;
and obtaining the calorie information corresponding to the food to be identified based on the quantity of the food to be identified.
3. The method of claim 1, wherein the obtaining of the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified comprises:
acquiring a first characteristic parameter of the food to be identified;
matching the first characteristic parameter with characteristic parameters in a first matching database, and determining attribute information of the food to be identified, wherein the attribute information comprises at least one of a food type and a cooking manner;
and obtaining the calorie information corresponding to the food to be identified based on the attribute information of the food to be identified.
4. The method of claim 1, wherein after obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified, the method further comprises:
acquiring a second characteristic parameter of the food to be identified;
matching the second characteristic parameter with characteristic parameters in a second matching database, and determining the nutritional information of the food to be identified;
and labeling the calorie information and the nutritional information of the food to be identified in the target image.
5. The method of claim 1 or 4, wherein after obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified, the method further comprises:
acquiring a third characteristic parameter of the food to be identified;
matching the third characteristic parameter with characteristic parameters in a third matching database to determine the fat information of the food to be identified;
and labeling the calorie information and the fat information of the food to be identified in the target image.
6. The method of claim 5, wherein after obtaining the calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified, the method further comprises:
acquiring data parameters of a target user, wherein the data parameters are used for reflecting at least one of weight information, posture information and face information of the target user;
generating a push message for the target user based on the calorie information corresponding to the food to be identified and the data parameter, wherein the push message is used for indicating whether the target user is suitable for eating the food to be identified;
and displaying the push message in the target image.
7. The method of claim 5, further comprising, after said acquiring a target image:
detecting whether a food identification function is enabled;
and when it is determined that the food identification function is enabled, extracting a first characteristic parameter of the food to be identified by using a preset neural network detection model.
8. An apparatus for identifying food, comprising:
an acquisition module, configured to acquire a target image, wherein the target image includes food to be identified;
an extraction module, configured to extract a first characteristic parameter of the food to be identified by using a preset neural network detection model;
a generating module, configured to obtain calorie information corresponding to the food to be identified based on the first characteristic parameter of the food to be identified.
9. An electronic device, comprising:
a memory for storing executable instructions; and
a processor for communicating with the memory to execute the executable instructions to perform the operations of the method of identifying food of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the method of identifying food of any one of claims 1 to 7.
CN202010009628.3A 2020-01-06 2020-01-06 Method, device, electronic equipment and medium for identifying food Withdrawn CN111222569A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010009628.3A CN111222569A (en) 2020-01-06 2020-01-06 Method, device, electronic equipment and medium for identifying food

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010009628.3A CN111222569A (en) 2020-01-06 2020-01-06 Method, device, electronic equipment and medium for identifying food

Publications (1)

Publication Number Publication Date
CN111222569A true CN111222569A (en) 2020-06-02

Family

ID=70829331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010009628.3A Withdrawn CN111222569A (en) 2020-01-06 2020-01-06 Method, device, electronic equipment and medium for identifying food

Country Status (1)

Country Link
CN (1) CN111222569A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198188A (en) * 2017-12-28 2018-06-22 北京奇虎科技有限公司 Food nutrition analysis method, device and computing device based on picture
CN110021404A (en) * 2018-01-08 2019-07-16 三星电子株式会社 For handling the electronic equipment and method of information relevant to food
CN108766527A (en) * 2018-04-20 2018-11-06 拉扎斯网络科技(上海)有限公司 Method and device for determining food calorie
CN109034196A (en) * 2018-06-21 2018-12-18 北京健康有益科技有限公司 Model generating method and device, food recognition methods and device
CN110084663A (en) * 2019-03-14 2019-08-02 北京旷视科技有限公司 A kind of item recommendation method, device, terminal and storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797754A (en) * 2020-06-30 2020-10-20 上海掌门科技有限公司 Image detection method, device, electronic device and medium
WO2022052021A1 (en) * 2020-09-11 2022-03-17 京东方科技集团股份有限公司 Joint model training method, object information processing method, apparatus, and system
US12147887B2 (en) 2020-09-11 2024-11-19 Beijing Boe Technology Development Co., Ltd. Method for training joint model, object information processing method, apparatus, and system
CN112528941A (en) * 2020-12-23 2021-03-19 泰州市朗嘉馨网络科技有限公司 Automatic parameter setting system based on neural network
CN112528941B (en) * 2020-12-23 2021-11-19 芜湖神图驭器智能科技有限公司 Automatic parameter setting system based on neural network
WO2022179382A1 (en) * 2021-02-25 2022-09-01 山东英信计算机技术有限公司 Object recognition method and apparatus, and device and medium
US20240071066A1 (en) * 2021-02-25 2024-02-29 Shandong Yingxin Computer Technologies Co., Ltd. Object recognition method and apparatus, and device and medium
CN113283364A (en) * 2021-06-04 2021-08-20 青岛海尔科技有限公司 Recipe determination method and apparatus, storage medium, and electronic apparatus
CN120015292A (en) * 2025-01-22 2025-05-16 清华大学 Home dining calorie and blood sugar monitoring method, system and storage medium based on digital information technology
CN120015292B (en) * 2025-01-22 2025-12-19 清华大学 Methods, systems, and storage media for home-based catering calorie and blood glucose monitoring based on digital information technology.

Similar Documents

Publication Publication Date Title
US10803315B2 (en) Electronic device and method for processing information associated with food
CN111161035B (en) Dish recommendation method and device, server, electronic equipment and storage medium
CN113039585B (en) Electronic device for displaying additional object in augmented reality image and method for driving the electronic device
CN111222569A (en) Method, device, electronic equipment and medium for identifying food
CN111652678B (en) Method, device, terminal, server and readable storage medium for displaying article information
KR102209741B1 (en) Electronic device and method for processing information associated with food
US9953248B2 (en) Method and apparatus for image analysis
CN112970026B (en) A method for estimating object parameters and electronic device
CN110163066B (en) Multimedia data recommendation method, device and storage medium
CN111476632A (en) Method and device for displaying resources, electronic equipment and readable storage medium
CN109256191A (en) The electronic device and method of the digestibility information of dietary intake for providing
CN109300526A (en) A recommendation method and mobile terminal
CN113420217A (en) Method and device for generating file, electronic equipment and computer readable storage medium
CN110765525B (en) Methods, devices, electronic equipment and media for generating scene pictures
CN112269559B (en) Volume adjustment method and device, electronic equipment and storage medium
CN112000264B (en) Dish information display method and device, computer equipment and storage medium
US20250139663A1 (en) Search Method, Terminal, Server, and System
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
KR20100037842A (en) Apparatus and method for managing health condition
CN113704621A (en) Object information recommendation method, device, equipment and storage medium
CN112986159A (en) Method, device, electronic equipment and medium for detecting meat
CN112084041A (en) Resource processing method and device, electronic equipment and storage medium
CN117372748A (en) Meat identification method, meat identification device, computer equipment and storage medium
CN114297493B (en) Object recommendation method, object recommendation device, electronic equipment and storage medium
CN115204973A (en) Article determination method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200602
