
US20180189636A1 - Deep Learning Ingredient and Nutrient Identification Systems and Methods - Google Patents


Info

Publication number
US20180189636A1
US20180189636A1
Authority
US
United States
Prior art keywords
food
ingredient
user
recipes
menu items
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/859,126
Inventor
Victor Chapela
Ricardo Corral Corral
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suggestic Inc
Original Assignee
Suggestic Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suggestic Inc filed Critical Suggestic Inc
Priority to US15/859,126 priority Critical patent/US20180189636A1/en
Assigned to Suggestic, Inc. reassignment Suggestic, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHAPELA, VICTOR, CORRAL CORRAL, RICARDO
Publication of US20180189636A1 publication Critical patent/US20180189636A1/en
Abandoned legal-status Critical Current

Classifications

    • A23L33/40 Complete food formulations for specific consumer groups or specific purposes, e.g. infant formula
    • A61B5/0205 Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/4806 Sleep evaluation
    • A61B5/486 Biofeedback
    • A61B5/4866 Evaluating metabolism
    • A61B5/6803 Head-worn items, e.g. helmets, masks, headphones or goggles
    • A61B5/681 Wristwatch-type devices
    • A61M5/14244 Pressure infusion, e.g. using pumps, adapted to be carried by the patient, e.g. portable on the body
    • G06F1/163 Wearable computers, e.g. on a belt
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system
    • G06F16/9024 Graphs; Linked lists
    • G06F17/30283
    • G06F17/30958
    • G06N20/00 Machine learning
    • G06N3/02 Neural networks
    • G06N3/042 Knowledge-based neural networks; Logical representations of neural networks
    • G06N3/0427
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • G06N5/022 Knowledge engineering; Knowledge acquisition
    • G06N5/025 Extracting rules from data
    • G06N5/045 Explanation of inference; Explainable artificial intelligence [XAI]
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks
    • G06T11/26
    • G06T11/60 Editing figures and text; Combining figures or text
    • G06T19/006 Mixed reality
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/0092 Nutrition
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G16H20/60 ICT specially adapted for therapies or health-improving plans relating to nutrition control, e.g. diets
    • G16H50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/30 ICT specially adapted for calculating health indices; for individual health risk assessment
    • A23V2002/00 Food compositions, function of food ingredients or processes for food or foodstuffs
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A61B5/02438 Measuring pulse rate or heart rate with portable devices, e.g. worn by the patient
    • A61B5/08 Measuring devices for evaluating the respiratory organs
    • A61B5/14532 Measuring characteristics of blood in vivo for measuring glucose, e.g. by tissue impedance measurement
    • A61B5/14551 Measuring characteristics of blood in vivo using optical sensors for measuring blood gases
    • A61M2005/14208 Pressure infusion with a programmable infusion control system, characterised by the infusion program
    • A61M5/1723 Means for controlling or metering media flow to the body, electrical or electronic, using feedback of body parameters, e.g. blood-sugar, pressure

Definitions

  • the present technology relates generally to a dynamic and feedback-based ecosystem that provides tailored solutions for users.
  • the systems and methods provided herein comprise a plurality of individual feedback loops that provide users with adaptive health, wellness, productivity, activity and/or longevity programs that are constantly adapted based on coded rulesets generated from empirical studies, personal biomarkers, genome, microbiome, blood test analysis, preferences, restrictions, beliefs and goals, as well as sensor feedback, user feedback, external sources and input from multivariate causation analysis.
  • exemplary methods and systems provided herein include: a deep learning ingredient and nutrient identification system that can learn from websites, databases, ontologies, recipes, food lists and restaurant menu items to identify the ingredients, ingredient quantities, nutrient composition and diet adherence of different products, recipes or restaurant menu items; a propagation algorithm that can enhance or correct ingredient lists, ingredient quantities, nutrient composition and diet adherence for similar items in the food or activity databases; an adaptive ontology that can be learned from the internet and encoded to help the machine learning or artificial intelligence algorithms better learn the relationships between different food elements and groupings; and a contextual filtering and adherence scoring system that identifies and selects recipes, food products, supplements, medications and restaurant menu items according to a personalized plan.
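The patent does not disclose the identification model itself; a minimal sketch of the core idea — mapping menu-item or recipe text to independent per-ingredient probabilities via sigmoid outputs, so one dish can carry many ingredient labels at once — might look like the following (the vocabulary, label set, and random weights are all illustrative, not the patent's):

```python
import numpy as np

VOCAB = ["cheese", "tomato", "basil", "dough", "beef", "lettuce"]
INGREDIENTS = ["mozzarella", "tomato", "basil", "wheat_flour", "ground_beef"]

def featurize(text: str) -> np.ndarray:
    """Bag-of-words vector over a toy vocabulary."""
    tokens = text.lower().split()
    return np.array([float(tokens.count(w)) for w in VOCAB])

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

class MultiLabelIngredientModel:
    """One sigmoid output per ingredient: labels are not mutually
    exclusive, unlike a softmax classifier."""

    def __init__(self, n_features: int, n_labels: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_labels, n_features))
        self.b = np.zeros(n_labels)

    def predict_proba(self, x: np.ndarray) -> np.ndarray:
        return sigmoid(self.W @ x + self.b)

model = MultiLabelIngredientModel(len(VOCAB), len(INGREDIENTS))
probs = model.predict_proba(featurize("margherita: tomato basil cheese dough"))
```

A trained system would replace the random weights with parameters fitted on recipe corpora; the per-label sigmoid head is what makes the multi-label formulation possible.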
  • Further exemplary embodiments include: the contextual filtering and adherence scoring system providing augmented reality selections of food products, supplements, medications, recipes and menu items using images obtained from a restaurant menu; the system using deep learning, machine learning or artificial intelligence to correctly identify and characterize food products, supplements, medications, recipes and menu items; and the system using feedback from a user to fine-tune image recognition, as well as to increase content accuracy and data quality.
  • FIG. 1 illustrates an example ecosystem of the present disclosure.
  • FIG. 2 illustrates an example causation sequence performed by the multivariate causation system.
  • FIG. 3 shows an exemplary architecture for a multi-model, multi-ontology, multi-label deep neural network, otherwise known as “mLOM.”
  • FIG. 4 is a table illustrating ROC curves for ingredient prediction. Each ROC curve represents the output for each meal in a validation set.
  • FIG. 5 is a diagrammatic representation of an example machine in the form of a computer system.
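The per-meal ROC curves of FIG. 4 summarize a ranking metric that can be computed directly from predicted ingredient scores. As a reference point (the data below are hypothetical, not the patent's validation set), the area under a ROC curve can be computed with the rank-sum formulation:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One validation "meal": 1 = ingredient truly present, paired with scores.
labels = [1, 0, 1, 0, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.1]
auc = roc_auc(labels, scores)  # perfect ranking here, so AUC = 1.0
```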
  • FIG. 1 illustrates an example ecosystem 100 of the present disclosure.
  • the ecosystem generally comprises a ruleset generator system 102 , a personalized program generation system 104 , a contextual filtering and adherence scoring system 106 , a passive and active feedback system 108 , and a multivariate causation system 110 .
  • These various systems can be executed using, for example, a server or within a cloud-based computing environment.
  • each of the various systems of the ecosystem 100 can be consolidated into a single system.
  • the ruleset generator system 102 obtains input from two separate sub-systems.
  • the ruleset generator system 102 obtains lifestyle, dietary, nutrition and empirical evidence-based diets and programs from various sources 112 . For example, this could include peer-reviewed or other similar publications or data regarding diets and exercise such as ketogenic, paleo, vegan, low carbohydrate, low-fat, or even specific dietary plans such as whole 30, Daniel Diet, and so forth. These programs or diets can also be submitted directly by researchers, healthcare professionals, user groups or individual users. These data are received by the ruleset generator system 102 and converted into rulesets that can be applied to a personal program for a user.
  • the ruleset can include rules for those persons with specific biomarkers or characteristics, with specific ratios of macronutrients that would be found in a particular dietary plan, as well as restricted or promoted food items.
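The patent does not specify a concrete ruleset encoding; one plausible sketch, with illustrative field names and values, is a structure holding macronutrient ratio targets plus restricted and promoted food items:

```python
from dataclasses import dataclass, field

@dataclass
class Ruleset:
    name: str
    # Target macronutrient ratios as fractions of calories (illustrative).
    macro_targets: dict = field(default_factory=dict)
    restricted: set = field(default_factory=set)
    promoted: set = field(default_factory=set)

    def allows(self, ingredient: str) -> bool:
        """True unless the ingredient is explicitly restricted."""
        return ingredient not in self.restricted

keto = Ruleset(
    name="ketogenic",
    macro_targets={"fat": 0.70, "protein": 0.25, "carbohydrate": 0.05},
    restricted={"sugar", "bread", "rice"},
    promoted={"avocado", "olive_oil"},
)
```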
  • the ruleset generator system 102 can also obtain input from the multivariate causation system 110 , as will be described in greater detail below.
  • the multivariate causation system 110 can generate both updated lifestyle, dietary, nutrition and empirical evidence-based data for the various sources 112 , as well as ruleset updates that are each based on output of the passive and active feedback system 108 .
  • the passive and active feedback system 108 combines passive feedback from sensors and external sources with active feedback from user input, measuring empirically, via biometric resources or other applications, exactly how the user is behaving or acting based on the information given and the plan established by the ecosystem 100 .
  • the multivariate causation system 110 can assess the lack of progress and determine a change to the ruleset(s) that might positively correlate with an improvement towards the goal. For example, if a user desires to lose weight and has not done so, the multivariate causation system 110 might suggest that the user reduce caloric intake or potentially remove a food item from their diet or substitute one item for another in their diet to improve their outcome.
  • the multivariate causation system 110 can utilize artificial intelligence techniques such as machine learning, deep learning, or big data that include information from other users having similar genetics, biomarkers, profile, activities, background, clinical data or other demographic or personal information.
  • the user is not only analyzed in context of their own personal goals and personal information, but the multivariate causation system 110 can also derive or infer new rules based on what has worked or not worked for other similarly situated individuals.
  • the multivariate causation engine also ingests any type of data stream or log data to derive or infer new rules based on the sequences and patterns found.
  • the sequence or log data can include, but is not limited to, sensor readings, test results, biomarkers, activities, symptoms, supplements, medicine intake, food, beverages or locations.
  • the multivariate causation engine can also determine the likelihood that each pattern or sequence of events will have a predicted outcome.
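The estimator behind this likelihood is not disclosed; a simple frequency-count sketch over hypothetical log data (all event and outcome names invented for illustration) could look like:

```python
def outcome_likelihood(logs, pattern, outcome):
    """Fraction of logged sequences containing `pattern` as an ordered
    (not necessarily contiguous) subsequence that end in `outcome`."""
    def contains(seq, pat):
        it = iter(seq)  # consume seq left-to-right, enforcing order
        return all(any(e == p for e in it) for p in pat)

    matching = [(seq, out) for seq, out in logs if contains(seq, pattern)]
    if not matching:
        return 0.0
    hits = sum(1 for _, out in matching if out == outcome)
    return hits / len(matching)

logs = [
    (["late_meal", "poor_sleep"], "high_glucose"),
    (["exercise", "late_meal", "poor_sleep"], "high_glucose"),
    (["exercise", "early_meal"], "normal_glucose"),
]
p = outcome_likelihood(logs, ["late_meal", "poor_sleep"], "high_glucose")  # 1.0
```

A production system would presumably add significance testing and confounder handling; the count-based ratio above is only the base estimate.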
  • the personalized program generation system 104 can utilize ruleset stacking to create a converging solution for a set of dietary considerations or limitations for a user. For example, a user may have the dietary considerations of being a vegetarian, as well as being on a reduced-salt diet. The user also does not like certain gluten products.
  • the personalized program generation system 104 can overlap these diets for a single user. In other embodiments the personalized program generation system 104 can overlap the dietary restrictions of multiple users to create a single converging solution for multiple parties.
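Ruleset stacking is not formalized in the text; one natural sketch for the vegetarian, reduced-salt, gluten-averse example — union of all restrictions, intersection of promotions — is shown below (food names are illustrative):

```python
def stack_rulesets(rulesets):
    """Combine rulesets into one converging solution: anything restricted
    by any ruleset stays restricted; a promoted item must suit every
    ruleset and must not be restricted by any of them."""
    restricted = set().union(*(r["restricted"] for r in rulesets))
    promoted = set.intersection(*(r["promoted"] for r in rulesets)) - restricted
    return {"restricted": restricted, "promoted": promoted}

vegetarian = {"restricted": {"beef", "chicken"},
              "promoted": {"lentils", "salted_nuts", "tofu"}}
low_salt = {"restricted": {"salted_nuts", "soy_sauce"},
            "promoted": {"lentils", "tofu", "fruit"}}
no_gluten = {"restricted": {"wheat_bread"},
             "promoted": {"lentils", "tofu", "rice"}}

combined = stack_rulesets([vegetarian, low_salt, no_gluten])
```

The same operation works unchanged when the input rulesets belong to different users, giving the single converging solution for multiple parties mentioned above.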
  • the personalized program generation system 104 can also implement an adaptive program algorithm and create a personalized program for a user.
  • the personalized program generation system 104 receives one or more rulesets that are applicable to the user based on information known about the user. For example, if a user prefers a paleo diet, the personalized program generation system 104 will obtain rulesets for paleo adherents and will further personalize each of the rules based on the user's information.
  • the rulesets obtained from the ruleset generator system 102 can be selectively adjusted based on other information such as a user's genetic information, their microbiome, their biomarkers, their clinical, medical or health data, activities, their age, weight, height, ethnic background, other demographic information, and so forth.
  • the personalized program generation system 104 can implement a data gathering process whereby a user is questioned using a chatbot or other means to obtain information that is used to select any appropriate ruleset and goal for the user. To be sure, this can be augmented with more detailed information about the user such as specific objective and subjective demographic information, genetic test information, microbiome testing, and so forth. This information can also be obtained from medical records, including electronic medical records.
  • An example method for collecting user information comprises the use of a chatbot that is programmed to interact with a user to request diet preferences and health conditions, as well as a target goal(s). Another example is to obtain the user's permission to connect to their health application or service that will further enhance the personalization of their program.
  • the contextual filtering and adherence scoring system 106 is configured to execute dynamic adherence scoring algorithms to determine the adherence level of any meal or activity against the personalized program, diet or plan. It obtains information from external and internal sources such as restaurant and recipe sub-systems or databases (e.g. nutrient identification system 114 ). In some embodiments, the contextual filtering and adherence scoring system 106 obtains recipe, supplement, grocery, and restaurant menu information using deep learning and artificial intelligence information gathering and processing techniques. The contextual filtering and adherence scoring system 106 can also obtain feedback on these types of information from user interaction with the ecosystem 100 . This user feedback assists in resolving errors or inconsistencies with data.
  • the contextual filtering and adherence scoring system 106 can use specific techniques to examine menus, recipes, and ingredient lists from a wide variety of sources and correlate and/or harmonize what is known about individual meals, activities or places. In this way, the contextual filtering and adherence scoring system 106 can select appropriate meals or activities for the user based on their goals and personalized program.
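The dynamic adherence scoring algorithm is not spelled out in the text; as an illustration, a meal could be scored by the share of its ingredients that comply with the personalized plan, with a small bonus for promoted items (the weighting scheme and all names here are assumptions):

```python
def adherence_score(meal_ingredients, restricted, promoted):
    """Score in [0, 1]: the fraction of non-restricted ingredients,
    plus a small bonus per promoted ingredient, capped at 1.0."""
    if not meal_ingredients:
        return 0.0
    ok = [i for i in meal_ingredients if i not in restricted]
    base = len(ok) / len(meal_ingredients)
    bonus = 0.05 * sum(1 for i in ok if i in promoted)
    return min(1.0, base + bonus)

score = adherence_score(
    ["salmon", "avocado", "white_rice"],
    restricted={"white_rice", "sugar"},
    promoted={"avocado", "olive_oil"},
)
```

Ranking candidate recipes or menu items by such a score is one way the system could surface the highest-adherence options first.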
  • the contextual filtering and adherence scoring system 106 provides personalized programs to the user through their computing device 116 .
  • the contextual filtering and adherence scoring system 106 provides the user with a personalized program that is tailored based on selection of lifestyle, dietary, nutrition and empirical evidence-based programs that are converted to rulesets and applied to a user's goals, preferences, and demographics.
  • Contextual filtering is applied in some embodiments to selectively tailor the recipe or menu suggestions provided to the user in accordance with their personalized plan generated by the personalized program generation system 104 .
  • the computing device 116 executes a client-side application that provides personalized plans and receives both passive and active feedback, in some embodiments.
  • the passive and active feedback system 108 receives data from the user through the computing device 116 .
  • the user can create a food log or record their exercise.
  • the user can also take pictures of food, menus, ingredient lists, and so forth.
  • This information can be fed back into the restaurant and recipe sub-systems or databases (e.g. nutrient identification system 114 ).
  • This gathered information can also be redirected back to the passive and active feedback system 108 for further analysis by the multivariate causation system 110 .
  • the passive and active feedback system 108 collects information from external sensors 118 , such as wearables (e.g., smart glasses, watches, etc.), sleep sensors, blood pressure monitors, glucose monitors and insulin pumps, respiration monitors, pulse oximeters, heart rate meters, and so forth—just to name a few.
  • the multivariate causation system 110 is configured to receive empirical feedback about the user and their behavior from the computing device 116 and the external sensors 118 .
  • the multivariate causation system 110 uses the specific information known about the user and those users who are similar in one way or another (goals, biometrics, biomarkers, genetics, demographics, lifestyle, and so forth), as well as feedback from the external sensors 118 to selectively modify how a user's diet is prioritized and potentially if rulesets are adjusted for the user. For example, as different users progress towards a goal, their passive and active feedback is analyzed by the multivariate causation system 110 that determines what has worked.
  • the multivariate causation system 110 can adjust priority and/or rules for the diets and programs to more closely align with the goals of the successful users.
  • the multivariate causation system 110 receives streams of data from user passive and active feedback, as well as the programs, goals and personal data and adjusts the rulesets on the fly or periodically.
  • the multivariate causation system 110 can also act as a data producing system that reports back information for use in the lifestyle, dietary, nutrition and empirical evidence-based diets and programs from various sources 112 .
  • the multivariate causation system 110 can deeply analyze user feedback, determine specific variations in behaviors, and determine how those variations affect the desired outcomes. For example, the multivariate causation system 110 may determine that the user moves closer to achieving a goal when they restrict carbohydrate consumption in the morning and evening, or if they eat vegetables as snacks as opposed to combining them with protein sources.
  • FIG. 2 illustrates an example causation sequence performed by the multivariate causation system 110 .
  • the causal event sequence discovery is a process performed on events gathered from user behaviors via the external sensors 118 and the computing device 116 .
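For illustration only, and not as the patented methodology, one greatly simplified approximation of causal event sequence discovery is to count how often each ordered pair of logged events precedes a desired outcome. The function, event names, and logs below are invented assumptions:

```python
from collections import Counter

def sequence_outcome_stats(event_logs, outcome):
    """Estimate how often each ordered pair of events precedes an outcome.

    event_logs: list of event lists, each ending with an observed outcome
    label. Hypothetical sketch, not the claimed multivariate causation system.
    """
    pair_hits = Counter()
    pair_total = Counter()
    for events in event_logs:
        *sequence, observed = events
        for i in range(len(sequence) - 1):
            pair = (sequence[i], sequence[i + 1])
            pair_total[pair] += 1
            if observed == outcome:
                pair_hits[pair] += 1
    # Fraction of times each event pair was followed by the desired outcome
    return {p: pair_hits[p] / pair_total[p] for p in pair_total}

logs = [
    ["walk", "salad", "weight_down"],
    ["walk", "salad", "weight_down"],
    ["walk", "dessert", "weight_up"],
]
stats = sequence_outcome_stats(logs, "weight_down")
```

In this toy log, the pair (walk, salad) always precedes the weight-loss outcome while (walk, dessert) never does; a real system would consider longer sequences, confounders, and event timing.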
  • Other example methodologies that can be implemented by the multivariate causation system 110 are described in Attachments A (Human Activity Language) and B (Low entropy approaches for causal inference), which are incorporated by reference herein in their entireties, including all references cited therein.
  • the multivariate causation system 110 can also implement a deep multi-label, multi-ontology, multi-model network architecture as described in Attachment C, which is incorporated by reference herein in its entirety, including all references cited therein.
  • the ecosystem 100 uses various feedback loops (represented by the individual systems of the ecosystem 100 ) to create an environment that learns based on empirical feedback and fine tunes a plan for a user based on this information.
  • the ecosystem 100 can comprise a comprehensive knowledgebase of recipes and menu items from the restaurant and recipe sub-systems (e.g. nutrient identification system 114 ).
  • the knowledgebase is created and updated by searching the Internet for recipes, grocery products, meal plans, ingredient lists, and restaurant menus—just to name a few.
  • the ecosystem 100 generates nutritional and ingredient information for a recipe, grocery product, or restaurant menu items based on several different algorithms.
  • the algorithms include but are not limited to: deep learning algorithms, dynamic food ontologies and propagation algorithms. For example, when a new restaurant menu is incorporated into the system, it will automatically populate the expected ingredients and nutrients for each of the menu items by reading the menu and comparing it to recipes, other restaurants and the ontology it has learned.
  • A Caesar salad, for instance, most probably contains romaine lettuce and croutons dressed with Parmesan cheese, lemon juice, olive oil, egg, Worcestershire sauce, garlic, and black pepper. The system will also infer the expected amount of each ingredient as well as the nutrients. If a user were to send feedback stating that this particular Caesar salad also contains chicken, that feedback would not only modify the ingredient list for that specific restaurant's Caesar salad as containing chicken, but would also increase the probability that all other similar Caesar salads, at similar restaurants, also contain chicken.
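For illustration only, the feedback-propagation behavior in the Caesar salad example might be sketched as follows; the item structure, the similarity rule (same dish name), and the boost value are hypothetical assumptions rather than the claimed propagation algorithm:

```python
def propagate_feedback(items, item_id, ingredient, similar_boost=0.15):
    """Apply user feedback to one menu item and propagate a smaller
    probability boost to similar items. Illustrative sketch only."""
    target = items[item_id]
    target["ingredients"][ingredient] = 1.0  # confirmed by user feedback
    for other_id, other in items.items():
        if other_id == item_id:
            continue
        if other["name"] == target["name"]:  # crude similarity: same dish name
            prob = other["ingredients"].get(ingredient, 0.0)
            other["ingredients"][ingredient] = min(1.0, prob + similar_boost)
    return items

menu = {
    "r1-caesar": {"name": "Caesar Salad",
                  "ingredients": {"romaine": 0.95, "chicken": 0.3}},
    "r2-caesar": {"name": "Caesar Salad",
                  "ingredients": {"romaine": 0.9, "chicken": 0.3}},
}
menu = propagate_feedback(menu, "r1-caesar", "chicken")
```

After the update, the confirmed item lists chicken with certainty, while the similar item's chicken probability rises only modestly.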
  • Prior to searching and recommending items to a user, the knowledge base is built using deep learning or other machine learning or artificial intelligence methodologies. For example, deep learning algorithms are used to make inferences about a menu at a restaurant by learning from menu items from other restaurants, or from recipes that match the menu items at the restaurant in question.
  • the inferences can be affected by user input as well. For example, user feedback or input can be used that provides corrections or context for a restaurant menu, a menu item, or recipe—just to name a few. The user can specify, for example, if a restaurant is a vegan restaurant. Thus, even if the menu items are purposefully or unintentionally mis-descriptive, the ecosystem 100 can correctly identify a menu item based on ingredients or contextual information.
  • the contextual filtering and adherence scoring system 106 can determine the actual ingredients from those listed on the menu, a restaurant website, or other similar recipes for vegan cheeseburgers.
  • the deep learning algorithms learn from different food and recipe datasets which ingredients or nutrients are common for a specific type of meal and then infer these ingredients and nutrients for restaurant menu items.
  • An example methodology that can be implemented by the deep learning ingredient and nutrient identification system 114 is described in Attachment C, which is incorporated by reference herein in its entirety, including all references cited therein.
  • food and activity ontologies are built based on a continuous relation of descriptions and data such as those contained in recipes, menu items, dictionaries, databases and Web pages, among others. These food and activity ontologies are then used to generate a complex network (graph) representation of the different activities, places, ingredients, menu items, recipes and food products. With this representation the deep learning algorithms learn to distinguish the relations of different activity, place, food, nutrition, ingredient groups and classifications.
  • An example methodology that can be implemented to create activity, place, food, nutrition and ingredient classifications used by the deep learning ingredient and nutrient identification system 114 is described in Attachment A, which is incorporated by reference herein in its entirety, including all references cited therein. These examples involve what is referred to as human activity analysis methods.
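As a minimal, purely illustrative sketch of the graph representation described above (not the ontologies of Attachment A), one could build an ingredient co-occurrence network from a handful of recipes; the recipes and ingredient names are invented:

```python
from collections import defaultdict

def build_food_graph(recipes):
    """Build an undirected co-occurrence graph over ingredients.

    recipes: iterable of ingredient sets. A toy stand-in for the complex
    network (graph) representation of foods described in the text.
    """
    graph = defaultdict(set)
    for ingredients in recipes:
        for a in ingredients:
            for b in ingredients:
                if a != b:
                    graph[a].add(b)  # edge between co-occurring ingredients
    return graph

recipes = [
    {"romaine", "parmesan", "croutons"},
    {"romaine", "tomato"},
]
graph = build_food_graph(recipes)
```

Edges in such a graph let downstream algorithms reason about which ingredients plausibly belong together in a meal.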
  • the propagation algorithm allows for corrections or enhancements to be propagated to all similar menu items, recipes or products.
  • the corrections or enhancements can be automatically generated by other algorithms, can be imported from external databases or can be fed back into the system through user feedback.
  • FIG. 3 shows an exemplary architecture for a multi-model, multi-ontology, multi-label deep neural network, otherwise known as “mLOM.”
  • mLOM represents artificial intelligence (“AI”) technology that learns to predict accurately every food ingredient, nutrient, flavor, portion, course, cuisine and quantity.
  • mLOM has achieved a median AUC (area under the curve) of over 98.5% in a composite ROC curve (an indicator of accuracy), and when compared to humans, mLOM is almost twice as effective at identifying the ingredients in a menu item.
  • FIG. 4 shows a table illustrating ROC curves for ingredient prediction. Each ROC curve represents the output for one meal in a validation set.
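For reference, the per-meal AUC accuracy indicator reported above can be computed from predicted ingredient scores with the standard rank-sum (Mann-Whitney) formulation; this snippet is illustrative and uses made-up scores:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.

    labels: 1 if the ingredient is truly present in the meal, else 0.
    scores: the model's predicted score for each ingredient.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([1, 1, 0, 0], [0.9, 0.6, 0.7, 0.1])
```

The composite median AUC cited above would then be the median of such per-meal AUCs across the validation set.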
  • mLOM handles multi-label learning, which comprises learning label subsets of particular objects from a base label set. This task is different from multi-class learning, where an output is expected to be only one of many mutually exclusive labels. Note that multi-class learning is a particular case of multi-label learning, thus, mLOM is also able to handle any multi-class setup if needed.
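The distinction can be illustrated with a toy sketch: multi-label prediction thresholds an independent sigmoid score per label, so any subset may be active, whereas multi-class prediction selects a single winning label. The logits and label names below are invented for illustration:

```python
import math

def multilabel_predict(logits, threshold=0.5):
    """Multi-label: each label gets an independent sigmoid score,
    and every label above the threshold is returned."""
    return [name for name, z in logits.items()
            if 1 / (1 + math.exp(-z)) >= threshold]

def multiclass_predict(logits):
    """Multi-class: labels are mutually exclusive, so only the
    highest-scoring label is returned."""
    return max(logits, key=logits.get)

logits = {"lettuce": 2.0, "parmesan": 1.0, "chocolate": -3.0}
```

On these logits, multi-label prediction returns both lettuce and parmesan, while multi-class prediction is forced to pick only lettuce, showing why multi-class is the restricted special case.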
  • Multi-ontology label learning capabilities implemented in mLOM solve two apparently opposite challenges in contrast with simple multi-label learning. First, knowledge from different ontological domains is transferred to others. Second, labeling under a particular domain is fine-tuned to concentrate on specific domain characteristics.
  • mLOM can synergistically combine different heterogeneous approaches. Each approach is referred to as a base model; thus, mLOM can be composed of an ensemble of any number of base models, depending on the combination that offers the overall superior predictive performance under any chosen performance criteria.
  • the mLOM architecture incorporates knowledge from m base models, each performing multi-label learning over ontological domains.
  • mLOM's principal objective is to transfer multi-ontology knowledge inside each model and across different models.
  • Base models might be any neural network able to perform multi-domain multi-label learning; this means that output neurons should represent label scores for the entire label set.
  • a multi-ontology merging block takes label scores from different domains from a base model as input and outputs the same labels after some transformation layers to optimize with respect to expected output.
  • The particular architecture of the multi-ontology merging block may vary depending on the particular dataset to be analyzed.
  • Viable Auto Encoder (VAE) refers to any distribution estimation technique.
  • Those skilled in the art will recognize such distribution estimation techniques as autoregressive autoencoders, masked autoencoder distribution estimators, variational autoencoders, and generative adversarial networks, among others.
  • the VAE is trained with observed label sets for each ontology. In this way, any given label set is more consistent with observed ones. For example, if lettuce, tomato, onion and chocolate are combined as a set, the chocolate might be removed from this label set, as there are not examples of meals containing chocolate and lettuce as ingredients.
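For illustration only, the label-set consistency idea (chocolate being dropped because it never appears with lettuce) can be approximated with a crude co-occurrence heuristic; a real system would score label sets with a trained distribution estimator such as a VAE, and everything below is an invented stand-in:

```python
from itertools import combinations

def consistent_subset(label_set, observed_sets):
    """Drop labels that never co-occur with the rest of the set in any
    observed meal. A toy proxy for the distribution-estimation step."""
    def cooccurs(a, b):
        return any(a in s and b in s for s in observed_sets)

    def support(x, kept):
        # How many other kept labels this label has been seen with
        return sum(cooccurs(x, y) for y in kept if y != x)

    kept = set(label_set)
    for a, b in combinations(sorted(label_set), 2):
        if not cooccurs(a, b):
            # remove whichever label is less supported by the remaining set
            kept.discard(min((a, b), key=lambda x: support(x, kept)))
    return kept

observed = [{"lettuce", "tomato", "onion"}, {"chocolate", "flour", "egg"}]
labels = consistent_subset({"lettuce", "tomato", "onion", "chocolate"},
                           observed)
```

Here chocolate is discarded because no observed meal contains it alongside lettuce, tomato, or onion, mirroring the example in the text.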
  • An example mLOM architecture for two base models and two ontologies is shown in FIG. 3 .
  • the aforementioned mLOM architecture requires that an initial target object be transformed with an appropriate input encoder for each base model.
  • Such an initial object can be any digital piece holding a meal representation, such as a text, audio or image file, among others.
  • An input encoder is used to transform this raw initial object into an adequate representation for each particular chosen base model.
  • Raw output from each base model among different domains is taken as input for a Multi Ontology Merging Block (MOMB), which comprises additional layers that can refine independent domain predictions taking into account information from all domains being considered.
  • a fine-tuning step consists of a stack of layers for each domain. Corresponding domain outputs from different base models are connected by gated layers; such gated layers are dynamically adjusted to learn how much information from previous layers should be remembered for upcoming layers.
  • Those skilled in the art will recognize particular implementations of such a strategy as highway networks, or even more restricted setups such as residual networks.
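A single highway-style gated connection, as referenced above, can be sketched in a few lines; this toy element-wise version with made-up inputs only illustrates the y = g * H(x) + (1 - g) * x pattern, not any particular layer of mLOM:

```python
import math

def highway_layer(x, transform, gate_logits):
    """One highway-style gated connection: y = g * H(x) + (1 - g) * x,
    where the gate g decides how much transformed versus carried-over
    information flows to the next layer. Toy element-wise version."""
    out = []
    for xi, hi, gz in zip(x, transform, gate_logits):
        g = 1 / (1 + math.exp(-gz))  # sigmoid gate in [0, 1]
        out.append(g * hi + (1 - g) * xi)
    return out

# A gate logit of 0 gives g = 0.5: an even blend of input and transform
y = highway_layer([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])
```

Large positive gate logits pass mostly the transformed signal; large negative logits pass mostly the original input, which is how such layers learn how much to remember.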
  • mLOM has the ability to understand portion sizes by restaurant depending on the price, type of restaurant, menu name, etc. This allows specific nutrient quantities, which are required in some diets and nutrition plans, to be added. Other ontologies and labels may be added to create better and more accurate experiences for users.
  • FIG. 5 is a diagrammatic representation of an example machine in the form of a computer system 1 , within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
  • the machine operates as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • The term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the example computer system 1 includes a processor or multiple processor(s) 5 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory 10 and static memory 15 , which communicate with each other via a bus 20 .
  • the computer system 1 may further include a video display 35 (e.g., a liquid crystal display (LCD)).
  • the computer system 1 may also include alpha-numeric input device(s) 30 (e.g., a keyboard), a cursor control device (e.g., a mouse), a voice recognition or biometric verification unit (not shown), a drive unit 37 (also referred to as disk drive unit), a signal generation device 40 (e.g., a speaker), and a network interface device 45 .
  • the computer system 1 may further include a data encryption module (not shown) to encrypt data.
  • the disk drive unit 37 includes a computer or machine-readable medium 50 on which is stored one or more sets of instructions and data structures (e.g., instructions 55 ) embodying or utilizing any one or more of the methodologies or functions described herein.
  • the instructions 55 may also reside, completely or at least partially, within the main memory 10 and/or within the processor(s) 5 during execution thereof by the computer system 1 .
  • the main memory 10 and the processor(s) 5 may also constitute machine-readable media.
  • the instructions 55 may further be transmitted or received over a network via the network interface device 45 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)).
  • While the machine-readable medium 50 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions.
  • The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions.
  • the term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like.
  • the example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware.
  • the Internet service may be configured to provide Internet access to one or more computing devices that are coupled to the Internet service, and that the computing devices may include one or more processors, buses, memory devices, display devices, input/output devices, and the like.
  • the Internet service may be coupled to one or more databases, repositories, servers, and the like, which may be utilized in order to implement any of the embodiments of the disclosure as described herein.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • A hyphenated term (e.g., “on-demand”) may occasionally be used interchangeably with its non-hyphenated version (e.g., “on demand”), a capitalized entry (e.g., “Software”) with its non-capitalized version (e.g., “software”), a plural term may be indicated with or without an apostrophe (e.g., PE's or PEs), and an italicized term (e.g., “N+1”) may be used interchangeably with its non-italicized version (e.g., “N+1”). Such occasional interchangeable uses shall not be considered inconsistent with each other.
  • a “means for” may be expressed herein in terms of a structure, such as a processor, a memory, an I/O device such as a camera, or combinations thereof.
  • the “means for” may include an algorithm that is descriptive of a function or method step, while in yet other embodiments the “means for” is expressed in terms of a mathematical formula, prose, or as a flow chart or signal diagram.


Abstract

Provided herein are exemplary methods and systems that include a deep learning ingredient and nutrient identification system that can learn from websites, databases, ontologies, recipes, food lists and restaurant menu items to identify ingredients, ingredient quantity, nutrient composition and diet adherence of different products, recipes or restaurant menu items, a propagation algorithm that can enhance or correct ingredient lists, ingredient quantity, nutrient composition and diet adherence for similar items in the food or activity databases, an adaptive ontology that can be learned from the internet and can be encoded to aid the machine learning or artificial intelligence algorithms to better learn the relationships between different food elements and groupings; and a contextual filtering and adherence scoring system that identifies and selects recipes, food products, supplements, medications and restaurant menu items according to a personalized plan.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present patent application claims the benefit of U.S. Provisional Patent Application No. 62/440,924, filed Dec. 30, 2016, and titled “Personalized Program Generation System with Plan and Ruleset Stacking”, U.S. Provisional Patent Application No. 62/440,689, filed Dec. 30, 2016, and titled “Dynamic and Feedback-Based Ecosystem”, U.S. Provisional Patent Application No. 62/440,982, filed Dec. 30, 2016, and titled “Personalized Program Generation System with Adaptive Program Engine”, U.S. Provisional Patent Application No. 62/440,801, filed Dec. 30, 2016, and titled “Contextual Filtering and Adherence Scoring Systems and Methods”, U.S. Provisional Patent Application No. 62/441,014, filed Dec. 30, 2016, and titled “Deep Learning and Ingredient Identification Systems and Methods”, and U.S. Provisional Patent Application No. 62/441,043, filed Dec. 30, 2016, and titled “Multivariate Causation Systems and Methods”. The present patent application is related to Non-Provisional U.S. patent application Ser. No. 15/858,713, filed Dec. 29, 2017, and entitled “Augmented Reality Systems Based on a Dynamic feedback-based Ecosystem and Multivariate Causation System” (Attorney Docket No. PA9009US). The present patent application is also related to Non-Provisional U.S. patent application Ser. No. 15/859,062, filed Dec. 29, 2017, and entitled “Augmented Reality and Blockchain Technology for Decision Augmentation Systems and Methods Using Contextual Filtering and Personalized Program Generation” (Attorney Docket No. PA9010US). All of the aforementioned disclosures are hereby incorporated by reference herein in their entireties including all references and appendices cited therein.
  • FIELD OF THE PRESENT TECHNOLOGY
  • The present technology relates generally to a dynamic and feedback-based ecosystem that provides tailored solutions for users. The systems and methods provided herein comprise a plurality of individual feedback loops that provide users with adaptive health, wellness, productivity, activity and/or longevity programs that are being constantly adapted based on coded rulesets generated from empirical studies, personal biomarkers, genome, microbiome, blood test analysis, preferences, restrictions, beliefs and goals, as well as, sensor feedback, user feedback, external sources and input from multivariate causation analysis.
  • SUMMARY OF THE PRESENT TECHNOLOGY
  • Provided herein are exemplary methods and systems that include a deep learning ingredient and nutrient identification system that can learn from websites, databases, ontologies, recipes, food lists and restaurant menu items to identify ingredients, ingredient quantity, nutrient composition and diet adherence of different products, recipes or restaurant menu items, a propagation algorithm that can enhance or correct ingredient lists, ingredient quantity, nutrient composition and diet adherence for similar items in the food or activity databases, an adaptive ontology that can be learned from the internet and can be encoded to aid the machine learning or artificial intelligence algorithms to better learn the relationships between different food elements and groupings; and a contextual filtering and adherence scoring system that identifies and selects recipes, food products, supplements, medications and restaurant menu items according to a personalized plan.
  • Further exemplary embodiments include the contextual filtering and adherence scoring system providing augmented reality selections of food products, supplements, medications, recipes and menu items using images obtained from a restaurant menu, the contextual filtering and adherence scoring system using deep learning, machine learning or artificial intelligence to correctly identify and characterize, food products, supplements, medications, recipes and menu items, the contextual filtering and adherence scoring system also using feedback from a user to fine tune image recognition, as well as increase content accuracy and data quality.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed disclosure, and explain various principles and advantages of those embodiments.
  • The methods and systems disclosed herein have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
  • FIG. 1 illustrates an example ecosystem of the present disclosure.
  • FIG. 2 illustrates an example causation sequence performed by the multivariate causation system.
  • FIG. 3 shows an exemplary architecture for a multi-model, multi-ontology, multi-label deep neural network, otherwise known as “mLOM.”
  • FIG. 4 is a table illustrating ROC curves for ingredient prediction. Each ROC curve represents the output for each meal in a validation set.
  • FIG. 5 is a diagrammatic representation of an example machine in the form of a computer system.
  • DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • FIG. 1 illustrates an example ecosystem 100 of the present disclosure. The ecosystem generally comprises a ruleset generator system 102, a personalized program generation system 104, a contextual filtering and adherence scoring system 106, a passive and active feedback system 108, and a multivariate causation system 110. These various systems can be executed using, for example, a server or within a cloud-based computing environment. In some embodiments, each of the various systems of the ecosystem 100 can be consolidated into a single system.
  • In one embodiment, the ruleset generator system 102 obtains input from two separate sub-systems. First, the ruleset generator system 102 obtains lifestyle, dietary, nutrition and empirical evidence-based diets and programs from various sources 112. For example, this could include peer-reviewed or other similar publications or data regarding diets and exercise such as ketogenic, paleo, vegan, low-carbohydrate, low-fat, or even specific dietary plans such as Whole30, the Daniel Diet, and so forth. These programs or diets can also be submitted directly by researchers, healthcare professionals, user groups or individual users. These data are received by the ruleset generator system 102 and converted into rulesets that can be applied to a personal program for a user. For example, the ruleset can include rules for those persons with specific biomarkers or characteristics, with specific ratios of macronutrients that would be found in a particular dietary plan, as well as restricted or promoted food items.
  • The ruleset generator system 102 can also obtain input from the multivariate causation system 110, as will be described in greater detail below. In general, the multivariate causation system 110 can generate both updated lifestyle, dietary, nutrition and empirical evidence-based data for the various sources 112, as well as ruleset updates that are each based on output of the passive and active feedback system 108. The passive feedback from sensors and external sources, and the active feedback from user input, collected in system 108 measure empirically how the user is behaving or acting based on the information given and the plan established by the ecosystem 100. For example, if the user is following a prescribed plan that is designed to reduce weight but the user is not achieving results based on scale feedback received by the passive and active feedback system 108, the multivariate causation system 110 can assess the lack of progress and determine a change to the ruleset(s) that might positively correlate with an improvement towards the goal. For example, if a user desires to lose weight and has not done so, the multivariate causation system 110 might suggest that the user reduce caloric intake or potentially remove a food item from their diet or substitute one item for another in their diet to improve their outcome.
  • To be sure, the multivariate causation system 110 can utilize artificial intelligence techniques such as machine learning, deep learning, or big data that include information from other users having similar genetics, biomarkers, profile, activities, background, clinical data or other demographic or personal information. Thus, the user is not only analyzed in context of their own personal goals and personal information, but the multivariate causation system 110 can also derive or infer new rules based on what has worked or not worked for other similarly situated individuals. In other embodiments, the multivariate causation engine also includes any type of data streams or log data to derive or infer new rules based on the sequences and patterns found. The sequence or log data can include, but is not limited to, sensors, test results, biomarkers, activities, symptoms, supplements, medicine intake, food, beverages or locations. The multivariate causation engine can also determine the likelihood that each pattern or sequence of events will have a predicted outcome.
  • The personalized program generation system 104 can utilize ruleset stacking to create a converging solution for a set of dietary considerations or limitations for a user. For example, a user may have the dietary considerations of being a vegetarian, as well as being on a reduced-salt diet. The user also does not like certain gluten products. The personalized program generation system 104 can overlap these diets for a single user. In other embodiments the personalized program generation system 104 can overlap the dietary restrictions of multiple users to create a single converging solution for multiple parties.
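For illustration only, the ruleset-stacking idea might be sketched as set operations over restricted and promoted foods; the field names and example diets below are assumptions, not the claimed system:

```python
def stack_rulesets(*rulesets):
    """Overlap several dietary rulesets into one converging ruleset:
    a food is restricted if any stacked ruleset restricts it, and
    promoted only if no stacked ruleset restricts it."""
    restricted = set().union(*(r["restricted"] for r in rulesets))
    promoted = set().union(*(r["promoted"] for r in rulesets)) - restricted
    return {"restricted": restricted, "promoted": promoted}

# Hypothetical rulesets for a vegetarian on a reduced-salt diet
vegetarian = {"restricted": {"beef", "chicken"},
              "promoted": {"lentils", "cheese"}}
low_salt = {"restricted": {"soy sauce", "cheese"},
            "promoted": {"fresh herbs"}}
combined = stack_rulesets(vegetarian, low_salt)
```

Note that cheese, promoted by one diet but restricted by the other, ends up restricted in the stacked ruleset, which is the conservative resolution when diets for one user (or multiple users) are overlapped.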
  • The personalized program generation system 104 can also implement an adaptive program algorithm and create a personalized program for a user. The personalized program generation system 104 receives one or more rulesets that are applicable to the user based on information known about the user. For example, the user prefers a paleo diet. Thus, the personalized program generation system 104 will obtain rulesets for paleo adherents and will further personalize each of the rules based on the user's information. The rulesets obtained from the ruleset generator system 102 can be selectively adjusted based on other information such as a user's genetic information, their microbiome, their biomarkers, their clinical, medical or health data, activities, their age, weight, height, ethnic background, other demographic information, and so forth.
  • In some embodiments, the personalized program generation system 104 can implement a data gathering process whereby a user is questioned using a chatbot or other means to obtain information that is used to select any appropriate ruleset and goal for the user. To be sure, this can be augmented with more detailed information about the user such as specific objective and subjective demographic information, genetic test information, microbiome testing, and so forth. This information can also be obtained from medical records, including electronic medical records. An example method for collecting user information comprises the use of a chatbot that is programmed to interact with a user to request diet preferences and health conditions, as well as a target goal(s). Another example is to obtain the user's permission to connect to their health application or service that will further enhance the personalization of their program.
  • The contextual filtering and adherence scoring system 106 is configured to execute dynamic adherence scoring algorithms to determine the adherence level of any meal or activity against the personalized program, diet or plan. It obtains information from external and internal sources such as restaurant and recipe sub-systems or databases (e.g. nutrient identification system 114). In some embodiments, the contextual filtering and adherence scoring system 106 obtains recipe, supplement, grocery, and restaurant menu information using deep learning and artificial intelligence information gathering and processing techniques. The contextual filtering and adherence scoring system 106 can also obtain feedback on these types of information from user interaction with the ecosystem 100. This user feedback assists in resolving errors or inconsistencies with data.
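A dynamic adherence score of the kind described here can be sketched as a weighted fraction of satisfied program rules. This is a minimal sketch, assuming each rule is a predicate with a weight; the rule contents and meal fields shown are hypothetical.

```python
def adherence_score(meal, rules):
    """Score a meal in [0, 1] as the weighted fraction of personalized
    program rules it satisfies. Each rule is a (predicate, weight) pair."""
    total = sum(weight for _, weight in rules)
    if total == 0:
        return 1.0  # an empty program constrains nothing
    met = sum(weight for rule, weight in rules if rule(meal))
    return met / total

# hypothetical meal representation and program rules
meal = {"ingredients": {"salmon", "spinach"}, "sodium_mg": 300}
rules = [
    (lambda m: "red meat" not in m["ingredients"], 2.0),
    (lambda m: m["sodium_mg"] < 500, 1.0),
    (lambda m: "spinach" in m["ingredients"], 1.0),
]
```

Contextual filtering would then rank candidate recipes or menu items by this score against the user's personalized program.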
  • According to some embodiments, the contextual filtering and adherence scoring system 106 can use specific techniques to examine menus, recipes, and ingredient lists from a wide variety of sources and correlate and/or harmonize what is known about individual meals, activities or places. In this way, the contextual filtering and adherence scoring system 106 can select appropriate meals or activities for the user based on their goals and personalized program.
  • The contextual filtering and adherence scoring system 106 provides personalized programs to the user through their computing device 116. The contextual filtering and adherence scoring system 106 provides the user with a personalized program that is tailored based on selection of lifestyle, dietary, nutrition and empirical evidence-based programs that are converted to rulesets and applied to a user's goals, preferences, and demographics. Contextual filtering is applied in some embodiments to selectively tailor the recipe or menu suggestions provided to the user in accordance with their personalized plan generated by the personalized program generation system 104.
  • The computing device 116 executes a client side application that provides personalized plans and receives both passive and active feedback, in some embodiments.
  • In some embodiments, the passive and active feedback system 108 receives data from the user through the computing device 116. For example, the user can create a food log or record their exercise. The user can also take pictures of food, menus, ingredient lists, and so forth. This information can be fed back into the restaurant and recipe sub-systems or databases (e.g. nutrient identification system 114). This gathered information can also be redirected back to the passive and active feedback system 108 for further analysis by the multivariate causation system 110.
  • In some embodiments, the passive and active feedback system 108 collects information from external sensors 118, such as wearables (e.g., smart glasses, watches, etc.), sleep sensors, blood pressure monitors, glucose monitors and insulin pumps, respiration monitors, pulse oximeters, and heart rate meters—just to name a few.
  • The multivariate causation system 110 is configured to receive empirical feedback about the user and their behavior from the computing device 116 and the external sensors 118. The multivariate causation system 110 uses the specific information known about the user and those users who are similar in one way or another (goals, biometrics, biomarkers, genetics, demographics, lifestyle, and so forth), as well as feedback from the external sensors 118, to selectively modify how a user's diet is prioritized and, potentially, whether rulesets should be adjusted for the user. For example, as different users progress towards a goal, their passive and active feedback is analyzed by the multivariate causation system 110, which determines what has worked. It then modifies and reprioritizes the program rulesets so that the patterns and activity sequences that work best are suggested, and those patterns or sequences that do not work are reduced or avoided. The multivariate causation system 110 can adjust priority and/or rules for the diets and programs to more closely align with the goals of the successful users. The multivariate causation system 110 receives streams of data from user passive and active feedback, as well as the programs, goals, and personal data, and adjusts the rulesets on the fly or periodically.
  • The multivariate causation system 110 can also act as a data producing system that reports back information for use in the lifestyle, dietary, nutrition and empirical evidence-based diets and programs from various sources 112. The multivariate causation system 110 can deeply analyze user feedback and determine specific variations on behaviors and determine how they affect the desired outcomes. For example, the multivariate causation system 110 may determine that the user moves closer to achieving a goal when they restrict carbohydrate consumption in the morning and evening, or if they eat vegetables as snacks as opposed to combining them with protein sources.
  • FIG. 2 illustrates an example causation sequence performed by the multivariate causation system 110. The causal event sequence discovery is a process performed on events gathered from user behaviors, the external sensors 118, and the computing device 116. Other example methodologies that can be implemented by the multivariate causation system 110 are described in Attachments A (Human Activity Language) and B (Low entropy approaches for causal inference), which are incorporated by reference herein in their entireties, including all references cited therein. The multivariate causation system 110 can also implement a deep multi-label, multi-ontology, multi-model network architecture as described in Attachment C, which is incorporated by reference herein in its entirety, including all references cited therein.
  • These systems of the ecosystem 100 work together in a collective and synergistic manner to provide a user with empirical and evidence-based plans for improving their health. The ecosystem 100 uses various feedback loops (represented by the individual systems of the ecosystem 100) to create an environment that learns based on empirical feedback and fine tunes a plan for a user based on this information.
  • In some embodiments, the ecosystem 100 can comprise a comprehensive knowledgebase of recipes and menu items from the restaurant and recipe sub-systems (e.g. nutrient identification system 114).
  • In some embodiments, the knowledgebase is created and updated by searching the Internet for recipes, grocery products, meal plans, ingredient lists, and restaurant menus—just to name a few. In addition to obtaining recipes and menu items, the ecosystem 100 generates nutritional and ingredient information for a recipe, grocery product, or restaurant menu item based on several different algorithms. The algorithms include, but are not limited to: deep learning algorithms, dynamic food ontologies, and propagation algorithms. For example, when a new restaurant menu is incorporated into the system, it will automatically populate the expected ingredients and nutrients for each of the menu items by reading the menu and comparing it to recipes, other restaurants, and the ontology it has learned. With that accumulated knowledge, the system would infer that a Caesar Salad most probably contains romaine lettuce and croutons dressed with Parmesan cheese, lemon juice, olive oil, egg, Worcestershire sauce, garlic, and black pepper. It will also infer the expected amount of each ingredient as well as the nutrients. If a user were to send feedback stating that this particular Caesar Salad also contains chicken, that feedback would not only modify the ingredient list for that specific restaurant's Caesar Salad to include chicken, but would also increase the probability that other similar Caesar Salads at similar restaurants contain chicken.
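The chicken-feedback example can be sketched as a simple probability-propagation update: confirming an ingredient in one item raises its probability for similar items in proportion to their similarity. The menu items, similarity values, and update rate below are hypothetical, and a production propagation algorithm would be considerably more sophisticated.

```python
def propagate_feedback(menu_items, similarity, source, ingredient, rate=0.3):
    """Set the confirmed ingredient's probability to 1.0 for the item the
    user reported on, then nudge the same ingredient's probability upward
    for every similar item, proportional to how similar it is."""
    menu_items[source][ingredient] = 1.0
    for name, probs in menu_items.items():
        if name == source:
            continue
        sim = similarity(source, name)
        if sim > 0.0:
            old = probs.get(ingredient, 0.0)
            # move a fraction of the remaining gap toward certainty
            probs[ingredient] = old + rate * sim * (1.0 - old)
    return menu_items

# hypothetical menu items with inferred ingredient probabilities
items = {
    "Bistro A Caesar": {"romaine": 0.95, "chicken": 0.40},
    "Bistro B Caesar": {"romaine": 0.90, "chicken": 0.40},
    "Bistro C Miso Soup": {"tofu": 0.90},
}
sims = {frozenset({"Bistro A Caesar", "Bistro B Caesar"}): 0.8}
similarity = lambda a, b: sims.get(frozenset({a, b}), 0.0)

propagate_feedback(items, similarity, "Bistro A Caesar", "chicken")
```

Dissimilar items (the miso soup) are untouched, so one user's correction spreads only to plausibly related dishes.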
  • Prior to searching and recommending items to a user, the knowledge base is built using deep learning or other machine learning or artificial intelligence methodologies. For example, deep learning algorithms are used to make inferences about a menu at a restaurant by learning from menu items from other restaurants or recipes that match the menu items at the restaurant in question. The inferences can be affected by user input as well. For example, user feedback or input can be used that provides corrections or context for a restaurant menu, a menu item, or recipe—just to name a few. The user can specify, for example, if a restaurant is a vegan restaurant. Thus, even if the menu items are purposefully or unintentionally mis-descriptive, the ecosystem 100 can correctly identify a menu item based on ingredients or contextual information. If the restaurant serves only vegan dishes (as listed on the menu), this knowledge is used to specify that a menu item of cheeseburger is likely to have vegan ingredients. The contextual filtering and adherence scoring system 106 can determine the actual ingredients from those listed on the menu, a restaurant website, or other similar recipes for vegan cheeseburgers.
  • In some embodiments the deep learning algorithms learn from different food and recipe datasets which ingredients or nutrients are common for a specific type of meal and then infer these ingredients and nutrients for restaurant menu items. An example methodology that can be implemented by the deep learning ingredient and nutrient identification system 114 is described in Attachment C, which is incorporated by reference herein in its entirety, including all references cited therein.
  • In some embodiments, food and activity ontologies are built based on a continuous relation of descriptions and data such as those contained in recipes, menu items, dictionaries, databases, and Web pages, among others. These food and activity ontologies are then used to generate a complex network (graph) representation of the different activities, places, ingredients, menu items, recipes, and food products. With this representation, the deep learning algorithms learn to distinguish the relations of different activity, place, food, nutrition, and ingredient groups and classifications. An example methodology that can be implemented to create the activity, place, food, nutrition, and ingredient classifications used by the deep learning ingredient and nutrient identification system 114 is described in Attachment A, which is incorporated by reference herein in its entirety, including all references cited therein. These examples involve what are referred to as human activity analysis methods.
  • In some embodiments the propagation algorithm allows for corrections or enhancements to be propagated to all similar menu items, recipes or products. The corrections or enhancements can be automatically generated by other algorithms, can be imported from external databases or can be fed back into the system through user feedback.
  • FIG. 3 shows an exemplary architecture for a multi-model, multi-ontology, multi-label deep neural network, otherwise known as “mLOM.”
  • To be able to search restaurant menus, one needs to know the ingredients and nutrients each menu item has, out of thousands of possible options. Restaurants do not publish the ingredients they use, and only around 800 chain restaurants in the US are required to make available a reduced set of 11 nutrients out of the 200 that they may use.
  • To query over half a million restaurant menus, mLOM represents artificial intelligence (“AI”) technology that learns to accurately predict every food ingredient, nutrient, flavor, portion, course, cuisine, and quantity. mLOM has achieved a median AUC (area under the curve, an indicator of accuracy) of over 98.5% on a composite ROC curve, and, when compared to humans, mLOM is almost twice as effective at identifying the ingredients in a menu item.
  • FIG. 4 shows a table illustrating ROC curves for ingredient prediction. Each ROC curve represents the output for each meal in a validation set.
  • One of the problems solved herein is inferring food-related labels from meal names and their descriptions. Such relevant labels can belong to different ontological domains, like courses and cuisine type, in addition to ingredients, nutrients, substances, and other characteristics. Thus, mLOM handles multi-label learning, which comprises learning label subsets of particular objects from a base label set. This task is different from multi-class learning, where the output is expected to be exactly one of many mutually exclusive labels. Note that multi-class learning is a particular case of multi-label learning; thus, mLOM is also able to handle any multi-class setup if needed.
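The multi-label versus multi-class distinction can be illustrated with independent sigmoid scores (any subset of labels may be on) versus a softmax (exactly one label, probabilities summing to 1). The label names and raw scores below are hypothetical.

```python
import math

def softmax(scores):
    # multi-class: labels are mutually exclusive, probabilities sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def multi_label_probs(scores):
    # multi-label: each label is scored independently with a sigmoid,
    # so any subset of the base label set may be predicted
    return [1.0 / (1.0 + math.exp(-s)) for s in scores]

def predict_label_set(scores, labels, threshold=0.5):
    """Return the subset of labels whose independent probability
    exceeds the threshold."""
    return {l for l, p in zip(labels, multi_label_probs(scores))
            if p > threshold}

labels = ["lettuce", "chocolate", "parmesan"]
```

With scores `[2.0, -1.0, 1.5]`, the multi-label head keeps both lettuce and parmesan, which a single softmax could never do.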
  • Multi-ontology label learning capabilities implemented in mLOM solve two apparently opposite challenges relative to simple multi-label learning. First, knowledge from different ontological domains is transferred to the others; second, labeling under a particular domain is fine-tuned to concentrate on specific domain characteristics.
  • In some embodiments, mLOM can synergistically combine different heterogeneous approaches. Each approach is referred to as a base model; thus, mLOM can be composed of an ensemble of any number of base models, depending on the combination that offers the best overall predictive performance under any chosen performance criteria.
  • Accordingly, the mLOM architecture incorporates knowledge from m base models, each performing multi-label learning over ontological domains. mLOM's principal objective is to transfer multi-ontology knowledge both inside each model and across different models. Base models may be any neural network able to perform multi-domain multi-label learning; this means that the output neurons should represent label scores for the entire label set.
  • To extrapolate knowledge from inferences made inside a domain to any other domain, a multi-ontology merging block is employed. A multi-ontology merging block takes label scores for the different domains from a base model as input and outputs the same labels after some transformation layers, optimized with respect to the expected output.
  • The particular architecture of the multi-ontology merging block may vary depending on the particular dataset to be analyzed.
  • To combine knowledge from different models and fine-tune single-domain label estimates, all layers from the different models for each domain are merged. Then m+1 additional layers are added to a fine-tune layer stack. At each fine-tune layer, a highway to each model is merged in such a way that the raw model output can be recovered before additional transformations. This fine-tune step obtains a consensus from the different base models considered.
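The highway behavior described here, where raw model output can be recovered before additional transformations, can be sketched elementwise as a gated combination of the input and its transform. This is a minimal illustration; the transform and gate values below are hypothetical, and a real layer would learn the gates.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def highway_fine_tune(x, transform, gate_logits):
    """One gated fine-tune layer: output = g * transform(x) + (1 - g) * x,
    elementwise. When a gate closes (g -> 0) the corresponding raw
    base-model score passes through unchanged, i.e. it is 'recovered'."""
    g = [sigmoid(t) for t in gate_logits]
    h = transform(x)
    return [gi * hi + (1.0 - gi) * xi for gi, hi, xi in zip(g, h, x)]

# hypothetical label scores and a toy transform that doubles (capped at 1)
x = [0.2, 0.9, 0.5]
double = lambda v: [min(1.0, 2.0 * vi) for vi in v]
```

With strongly negative gate logits the layer is the identity; with strongly positive logits it applies the transform fully, and training picks a point in between per dimension.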
  • As an additional input for the fine-tune step, the output from a Viable Auto Encoder (“VAE”) is used. Here, Viable Auto Encoder refers to any distribution estimation technique. Those skilled in the art will recognize such distribution estimation techniques as autoregressive autoencoders, masked autoencoder distribution estimators, variational autoencoders, and generative adversarial networks, among others.
  • The VAE is trained with the observed label sets for each ontology. In this way, any given label set is made more consistent with observed ones. For example, if lettuce, tomato, onion, and chocolate are combined as a set, the chocolate might be removed from this label set, as there are no examples of meals containing both chocolate and lettuce as ingredients.
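The lettuce-and-chocolate example can be illustrated with a crude stand-in for the learned density model: observed pairwise co-occurrence. Labels whose pairings never appear in any observed label set are dropped. The label sets below are hypothetical, and a real VAE would score whole sets rather than pairs.

```python
from itertools import combinations

def consistent_label_set(labels, observed_sets):
    """Keep a label only if every pair it forms with the rest of the set
    has been seen together in at least one observed label set; otherwise
    drop whichever member of the offending pair co-occurs least overall."""
    def pair_seen(a, b):
        return any(a in s and b in s for s in observed_sets)

    def support(label):
        return sum(label in s for s in observed_sets)

    keep = set(labels)
    for a, b in combinations(sorted(labels), 2):
        if not pair_seen(a, b):
            keep.discard(min((a, b), key=support))
    return keep

# hypothetical observed label sets (salads and a dessert)
observed = [{"lettuce", "tomato", "onion"},
            {"lettuce", "tomato"},
            {"chocolate", "flour"}]
```

Chocolate never co-occurs with the salad ingredients, so it is the label removed, matching the example in the text.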
  • An example mLOM architecture for two base models and two ontologies is shown in FIG. 3.
  • The aforementioned mLOM architecture requires that an initial target object be transformed with an appropriate input encoder for each base model.
  • Such an initial object can be any digital artifact holding a meal representation, such as a text, audio, or image file, among others. An input encoder is used to transform this raw initial object into an adequate representation for each particular chosen base model.
  • Raw output from each base model across the different domains is taken as input for a Multi Ontology Merging Block (MOMB), which comprises additional layers that can refine independent domain predictions by taking into account information from all domains being considered. A fine-tuning step consists of a stack of layers for each domain. Corresponding domain outputs from different base models are connected by gated layers; such gated layers are dynamically adjusted to learn how much of the previous layers' information should be remembered for upcoming layers. Those skilled in the art will recognize particular implementations of such a strategy as highway networks, or even more restricted setups such as residual networks. mLOM also has the ability to understand portion sizes by restaurant, depending on the price, type of restaurant, menu name, and so forth. This allows specific nutrient quantities that are required in some diets and nutrition plans to be added. Other ontologies and labels may be added to create better and more accurate experiences for users.
  • Because of the multiple-model nature of mLOM, new “points of view” may be added to analyze food. Therefore, the ability to recognize food-plate images through a model, when combined with the knowledge mLOM already has of food ontologies and food components, allows the ingredients and nutrients within images of food plates to be predicted.
  • FIG. 5 is a diagrammatic representation of an example machine in the form of a computer system 1, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In various example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer system 1 includes a processor or multiple processor(s) 5 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory 10 and static memory 15, which communicate with each other via a bus 20. The computer system 1 may further include a video display 35 (e.g., a liquid crystal display (LCD)). The computer system 1 may also include an alpha-numeric input device(s) 30 (e.g., a keyboard), a cursor control device (e.g., a mouse), a voice recognition or biometric verification unit (not shown), a drive unit 37 (also referred to as disk drive unit), a signal generation device 40 (e.g., a speaker), and a network interface device 45. The computer system 1 may further include a data encryption module (not shown) to encrypt data.
  • The disk drive unit 37 includes a computer or machine-readable medium 50 on which is stored one or more sets of instructions and data structures (e.g., instructions 55) embodying or utilizing any one or more of the methodologies or functions described herein. The instructions 55 may also reside, completely or at least partially, within the main memory 10 and/or within the processor(s) 5 during execution thereof by the computer system 1. The main memory 10 and the processor(s) 5 may also constitute machine-readable media.
  • The instructions 55 may further be transmitted or received over a network via the network interface device 45 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)). While the machine-readable medium 50 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like. The example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware.
  • One skilled in the art will recognize that the Internet service may be configured to provide Internet access to one or more computing devices that are coupled to the Internet service, and that the computing devices may include one or more processors, buses, memory devices, display devices, input/output devices, and the like. Furthermore, those skilled in the art may appreciate that the Internet service may be coupled to one or more databases, repositories, servers, and the like, which may be utilized in order to implement any of the embodiments of the disclosure as described herein.
  • The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present technology has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the present technology in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present technology. Exemplary embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, and to enable others of ordinary skill in the art to understand the present technology for various embodiments with various modifications as are suited to the particular use contemplated.
  • Aspects of the present technology are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present technology. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • While this technology is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the technology and is not intended to limit the technology to the embodiments illustrated.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the technology. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • It will be understood that like or analogous elements and/or components, referred to herein, may be identified throughout the drawings with like reference characters. It will be further understood that several of the figures are merely schematic representations of the present technology. As such, some of the components may have been distorted from their actual scale for pictorial clarity.
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present technology. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular embodiments, procedures, techniques, etc. in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details.
  • Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) at various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Furthermore, depending on the context of discussion herein, a singular term may include its plural forms and a plural term may include its singular form. Similarly, a hyphenated term (e.g., “on-demand”) may be occasionally interchangeably used with its non-hyphenated version (e.g., “on demand”), a capitalized entry (e.g., “Software”) may be interchangeably used with its non-capitalized version (e.g., “software”), a plural term may be indicated with or without an apostrophe (e.g., PE's or PEs), and an italicized term (e.g., “N+1”) may be interchangeably used with its non-italicized version (e.g., “N+1”). Such occasional interchangeable uses shall not be considered inconsistent with each other.
  • Also, some embodiments may be described in terms of “means for” performing a task or set of tasks. It will be understood that a “means for” may be expressed herein in terms of a structure, such as a processor, a memory, an I/O device such as a camera, or combinations thereof. Alternatively, the “means for” may include an algorithm that is descriptive of a function or method step, while in yet other embodiments the “means for” is expressed in terms of a mathematical formula, prose, or as a flow chart or signal diagram.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • It is noted at the outset that the terms “coupled,” “connected”, “connecting,” “electrically connected,” etc., are used interchangeably herein to generally refer to the condition of being electrically/electronically connected. Similarly, a first entity is considered to be in “communication” with a second entity (or entities) when the first entity electrically sends and/or receives (whether through wireline or wireless means) information signals (whether containing data information or non-data/control information) to the second entity regardless of the type (analog or digital) of those signals. It is further noted that various figures (including component diagrams) shown and discussed herein are for illustrative purpose only, and are not drawn to scale.
  • While specific embodiments of, and examples for, the system are described above for illustrative purposes, various equivalent modifications are possible within the scope of the system, as those skilled in the relevant art will recognize. For example, while processes or steps are presented in a given order, alternative embodiments may perform routines having steps in a different order, and some processes or steps may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative combinations or sub-combinations. Each of these processes or steps may be implemented in a variety of different ways. Also, while processes or steps are at times shown as being performed in series, these processes or steps may instead be performed in parallel, or may be performed at different times.
  • While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the invention to the particular forms set forth herein. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments.

Claims (1)

What is claimed is:
1. A system, comprising:
a deep learning nutrient and ingredient identification system that can learn from websites, databases, ontologies, recipes, food lists and restaurant menu items to identify ingredients, ingredient quantity, nutrient composition and diet adherence of different food or nutrition products, recipes or restaurant menu items;
a propagation algorithm that can enhance or correct ingredient lists, ingredient quantity, nutrient composition and diet adherence for similar items in the food or activity databases; and
an adaptive ontology that can be learned from the internet and can be encoded to aid the machine learning or artificial intelligence algorithms to better learn the relationships between different food elements and groupings.
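The claim above recites a “propagation algorithm” that enhances or corrects ingredient lists for similar items only at a high level. As an illustrative sketch (not the patented implementation), one simple way to propagate known ingredient labels to similar, un-annotated items is to compare item names with a set-similarity measure and copy labels from the closest annotated match; the function names, data layout, and Jaccard similarity here are all assumptions for illustration:

```python
# Illustrative sketch only: propagate known ingredient labels to similar,
# un-annotated food items, in the spirit of the "propagation algorithm"
# in claim 1. The similarity measure and threshold are assumptions.

def jaccard(a: set, b: set) -> float:
    """Similarity between two token sets (0.0 .. 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def propagate_ingredients(items, threshold=0.5):
    """For items missing an ingredient list, copy ingredients from the
    most similar annotated item if similarity clears the threshold."""
    annotated = [it for it in items if it["ingredients"]]
    for it in items:
        if it["ingredients"]:
            continue  # already annotated; nothing to propagate
        tokens = set(it["name"].lower().split())
        best, best_sim = None, 0.0
        for ref in annotated:
            sim = jaccard(tokens, set(ref["name"].lower().split()))
            if sim > best_sim:
                best, best_sim = ref, sim
        if best is not None and best_sim >= threshold:
            it["ingredients"] = list(best["ingredients"])
    return items

menu = [
    {"name": "Margherita Pizza",
     "ingredients": ["dough", "tomato", "mozzarella", "basil"]},
    {"name": "Margherita Pizza Slice", "ingredients": []},
]
propagate_ingredients(menu)
print(menu[1]["ingredients"])  # inherits the annotated pizza's ingredients
```

In a production system the name-token similarity would more plausibly be replaced by learned embeddings from the claimed deep learning model, with the same propagation structure.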
US15/859,126 2016-12-30 2017-12-29 Deep Learning Ingredient and Nutrient Identification Systems and Methods Abandoned US20180189636A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/859,126 US20180189636A1 (en) 2016-12-30 2017-12-29 Deep Learning Ingredient and Nutrient Identification Systems and Methods

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201662440801P 2016-12-30 2016-12-30
US201662440982P 2016-12-30 2016-12-30
US201662441014P 2016-12-30 2016-12-30
US201662440924P 2016-12-30 2016-12-30
US201662440689P 2016-12-30 2016-12-30
US201662441043P 2016-12-30 2016-12-30
US15/859,126 US20180189636A1 (en) 2016-12-30 2017-12-29 Deep Learning Ingredient and Nutrient Identification Systems and Methods

Publications (1)

Publication Number Publication Date
US20180189636A1 true US20180189636A1 (en) 2018-07-05

Family

ID=62709046

Family Applications (4)

Application Number Title Priority Date Filing Date
US15/859,126 Abandoned US20180189636A1 (en) 2016-12-30 2017-12-29 Deep Learning Ingredient and Nutrient Identification Systems and Methods
US15/859,062 Expired - Fee Related US10360495B2 (en) 2016-12-30 2017-12-29 Augmented reality and blockchain technology for decision augmentation systems and methods using contextual filtering and personalized program generation
US15/858,713 Expired - Fee Related US10685576B2 (en) 2016-12-30 2017-12-29 Augmented reality systems based on a dynamic feedback-based ecosystem and multivariate causation system
US16/898,318 Abandoned US20200334999A1 (en) 2016-12-30 2020-06-10 Augmented Reality Systems Based on a Dynamic Feedback-Based Ecosystem and Multivariate Causation System

Family Applications After (3)

Application Number Title Priority Date Filing Date
US15/859,062 Expired - Fee Related US10360495B2 (en) 2016-12-30 2017-12-29 Augmented reality and blockchain technology for decision augmentation systems and methods using contextual filtering and personalized program generation
US15/858,713 Expired - Fee Related US10685576B2 (en) 2016-12-30 2017-12-29 Augmented reality systems based on a dynamic feedback-based ecosystem and multivariate causation system
US16/898,318 Abandoned US20200334999A1 (en) 2016-12-30 2020-06-10 Augmented Reality Systems Based on a Dynamic Feedback-Based Ecosystem and Multivariate Causation System

Country Status (1)

Country Link
US (4) US20180189636A1 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190172575A1 (en) * 2017-08-18 2019-06-06 Serotonin, Inc. Method for populating user accounts with profiles of supplements for consumption
CN109871279A (en) * 2019-03-11 2019-06-11 京东方科技集团股份有限公司 Consensus task coordination method and device, blockchain system, storage medium
US10360495B2 (en) 2016-12-30 2019-07-23 Suggestic, Inc. Augmented reality and blockchain technology for decision augmentation systems and methods using contextual filtering and personalized program generation
US10515715B1 (en) 2019-06-25 2019-12-24 Colgate-Palmolive Company Systems and methods for evaluating compositions
WO2020186114A1 (en) 2019-03-12 2020-09-17 Inculab Llc Systems and methods for personal taste recommendation
US10832172B1 (en) 2019-08-22 2020-11-10 Kpn Innovations, Llc. Systems and methods for arranging transport of adapted nutrimental artifacts with user-defined restriction requirements using artificial intelligence
US20210042637A1 (en) * 2019-08-05 2021-02-11 Kenneth Neumann Methods and systems for generating a vibrant compatibility plan using artificial intelligence
US11114193B2 (en) 2019-07-03 2021-09-07 Kpn Innovations, Llc Methods and systems for optimizing dietary levels utilizing artificial intelligence
US20210343393A1 (en) * 2018-10-15 2021-11-04 Shinshu University Health management system
US11182815B1 (en) * 2018-08-21 2021-11-23 Sarath Chandar Krishnan Methods and apparatus for a dish rating and management system
US11342060B2 (en) 2020-05-12 2022-05-24 Bender, LLC Lifestyle preference management system and method
US20220207628A1 (en) * 2020-01-01 2022-06-30 Rockspoon, Inc. Biomarker-based food item design system and method
US11551185B2 (en) 2020-08-19 2023-01-10 Walmart Apollo, Llc Automated food selection using hyperspectral sensing
US11929161B2 (en) 2019-08-22 2024-03-12 Kpn Innovations, Llc Systems and methods for displaying nutrimental artifacts on a user device
US12374439B2 (en) 2019-08-05 2025-07-29 Kpn Innovations Llc Methods and systems for generating a vibrant compatibility plan using artificial intelligence

Families Citing this family (42)

Publication number Priority date Publication date Assignee Title
US10424121B1 (en) 2016-11-06 2019-09-24 Oded Melinek Generated offering exposure
US20190108287A1 (en) * 2017-10-11 2019-04-11 NutriStyle Inc Menu generation system tying healthcare to grocery shopping
WO2019146205A1 (en) * 2018-01-23 2019-08-01 ソニー株式会社 Information processing device, information processing method, and recording medium
US20210097610A1 (en) * 2018-02-08 2021-04-01 2Bc Innovations, Llc Utilizing blockchain-encoded records for rived longevity-contingent instruments
US20210035217A1 (en) * 2018-02-08 2021-02-04 2Bc Innovations, Llc Updating blockchain-encoded records of rived longevity-contingent instruments
US11243810B2 (en) * 2018-06-06 2022-02-08 The Bank Of New York Mellon Methods and systems for improving hardware resiliency during serial processing tasks in distributed computer networks
US11195143B1 (en) * 2018-07-24 2021-12-07 Staples, Inc. Interactive machine learning assistant
CN109448817A (en) * 2018-09-28 2019-03-08 小伍健康科技(上海)有限责任公司 A kind of recipe recommendation method and apparatus based on deep neural network
WO2020086055A1 (en) 2018-10-22 2020-04-30 Hewlett-Packard Development Company, L.P. Displaying data related to objects in images
CN109447530B (en) * 2018-12-25 2020-11-13 北京食安链科技有限公司 Automatic analysis and risk early warning system and method for food safety
US11399031B2 (en) 2019-02-05 2022-07-26 Centurylink Intellectual Property Llc Tracking or storing of equipment configuration data using immutable ledger functionality of blockchains
US11599935B2 (en) * 2019-04-29 2023-03-07 Kyndryl, Inc. Computer program product, computer implemented method, and system for cognitive item selection with data mining
US11748613B2 (en) * 2019-05-10 2023-09-05 Baidu Usa Llc Systems and methods for large scale semantic indexing with deep level-wise extreme multi-label learning
US11094124B1 (en) * 2019-05-31 2021-08-17 Walgreen Co. Augmented reality pharmaceutical interface
US20220039755A1 (en) 2020-08-06 2022-02-10 Medtronic Minimed, Inc. Machine learning-based system for estimating glucose values
US11883208B2 (en) * 2019-08-06 2024-01-30 Medtronic Minimed, Inc. Machine learning-based system for estimating glucose values based on blood glucose measurements and contextual activity data
US11107568B2 (en) * 2019-08-30 2021-08-31 MyFitnessPal, Inc. Versatile data structure for workout session templates and workout sessions
US11688504B2 (en) 2019-11-30 2023-06-27 Kpn Innovations, Llc. Methods and systems for informing food element decisions in the acquisition of edible materials from any source
WO2021176742A1 (en) 2020-03-03 2021-09-10 パナソニックIpマネジメント株式会社 Control method, information terminal, program, and recording medium
JP2021157390A (en) 2020-03-26 2021-10-07 東芝テック株式会社 Food inquiry system, food inquiry method, and food inquiry program
US11594317B2 (en) 2020-05-28 2023-02-28 Kpn Innovations, Llc. Methods and systems for determining a plurality of nutritional needs to generate a nutrient supplementation plan using artificial intelligence
US20230282331A1 (en) * 2020-05-28 2023-09-07 Per Södersten Virtual Reality Eating Behavior Training Systems and Methods
US11308432B2 (en) 2020-06-05 2022-04-19 International Business Machines Corporation Augmented reality virtual order assistant
US11823785B2 (en) 2020-07-02 2023-11-21 Kpn Innovations, Llc. Methods and systems for calculating nutritional requirements in a display interface
US11688506B2 (en) 2020-08-03 2023-06-27 Kpn Innovations, Llc. Methods and systems for calculating an edible score in a display interface
US11687813B2 (en) 2020-08-24 2023-06-27 Kpn Innovations, Llc. Systems and methods for ranking alimentary combinations using machine-learning
US11437147B2 (en) 2020-08-31 2022-09-06 Kpn Innovations, Llc. Method and systems for simulating a vitality metric
US12223530B2 (en) * 2020-09-24 2025-02-11 International Business Machines Corporation Method, system, and computer program product for representational machine learning for product formulation
US12475504B2 (en) * 2020-09-29 2025-11-18 Ncr Voyix Corporation Methods and a system of item nutrition information processing
US11080939B1 (en) * 2020-10-20 2021-08-03 Charter Communications Operating, Llc Generating test cases for augmented reality (AR) application testing
US11862322B2 (en) 2020-11-30 2024-01-02 Kpn Innovations, Llc. System and method for generating a dynamic weighted combination
US12417833B2 (en) * 2020-12-29 2025-09-16 Kpn Innovations, Llc. Systems and methods for generating a cancer alleviation nourishment plan
JP7614998B2 (en) * 2021-09-22 2025-01-16 株式会社東芝 Work estimation device, work estimation method, and program
US20230123341A1 (en) * 2021-10-16 2023-04-20 Vivek Singh Methods and systems for scoring foods and recipes
JP7166506B1 (en) * 2022-02-15 2022-11-07 三菱電機株式会社 Image filter generation system, image filter generation device, learning device, learning method and program
EP4475008A4 (en) * 2022-03-29 2025-11-19 Kikkoman Corp FOOD INFORMATION PRESENTATION SYSTEM, FOOD INFORMATION PRESENTATION METHOD, FOOD INFORMATION PRESENTATION DEVICE, FOOD INFORMATION PRESENTATION PROGRAM AND STORAGE MEDIUM WITH PROGRAM RECORDED ON IT
US11893150B2 (en) 2022-06-28 2024-02-06 Bank Of America Corporation Systems and methods for multi-point validation in communication network with associated virtual reality application layer
US12340408B2 (en) 2022-10-12 2025-06-24 Tata Consultancy Services Limited Method and system for generation of descriptive copy of grocery products
US12053300B2 (en) 2022-12-13 2024-08-06 GrowthWell LLC System and method for instituting wellness practices
CN116362337A (en) * 2023-02-24 2023-06-30 国电南瑞科技股份有限公司 An auxiliary decision-making method for low-carbon operation of power generation based on data-driven causal inference
CN118078230B (en) * 2024-04-24 2024-07-23 北京大学第三医院(北京大学第三临床医学院) Cardiovascular disease risk prediction method and device
CN118916617B (en) * 2024-09-30 2024-12-06 中国人民解放军国防科技大学 Information grading early warning method, device and equipment based on multi-label causality relation

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
WO2010138975A1 (en) * 2009-05-29 2010-12-02 Sk Telecom Americas, Inc. System and method for motivating users to improve their wellness
US9338622B2 (en) * 2012-10-04 2016-05-10 Bernt Erik Bjontegard Contextually intelligent communication systems and processes
US9412121B2 (en) * 2012-10-05 2016-08-09 Sap Se Backend support for augmented reality window shopping
US9189021B2 (en) * 2012-11-29 2015-11-17 Microsoft Technology Licensing, Llc Wearable food nutrition feedback system
US20140220516A1 (en) * 2013-02-01 2014-08-07 FoodCare Inc. System and method for food item search with nutritional insight analysis using big data infrastructure
CA2906002A1 (en) * 2013-03-14 2014-10-02 Andrew H. Gibbs Providing food-portion recommendations to faciliate dieting
US9818150B2 (en) * 2013-04-05 2017-11-14 Digimarc Corporation Imagery and annotations
US9582913B1 (en) * 2013-09-25 2017-02-28 A9.Com, Inc. Automated highlighting of identified text
US20150193853A1 (en) * 2014-01-06 2015-07-09 Leonid Ayzenshtat Methods and Apparatus to Generate Product Recommendations
US20180344239A1 (en) * 2015-11-13 2018-12-06 Segterra, Inc. Managing Evidence-Based Rules
US20180189636A1 (en) 2016-12-30 2018-07-05 Suggestic, Inc. Deep Learning Ingredient and Nutrient Identification Systems and Methods

Cited By (33)

Publication number Priority date Publication date Assignee Title
US10360495B2 (en) 2016-12-30 2019-07-23 Suggestic, Inc. Augmented reality and blockchain technology for decision augmentation systems and methods using contextual filtering and personalized program generation
US20190172575A1 (en) * 2017-08-18 2019-06-06 Serotonin, Inc. Method for populating user accounts with profiles of supplements for consumption
US20220181002A1 (en) * 2017-08-18 2022-06-09 Serotonin, Inc. Method for populating user accounts with profiles of supplements for consumption
US11182815B1 (en) * 2018-08-21 2021-11-23 Sarath Chandar Krishnan Methods and apparatus for a dish rating and management system
US20210343393A1 (en) * 2018-10-15 2021-11-04 Shinshu University Health management system
EP4465303A3 (en) * 2018-10-15 2025-01-22 Wellnas.Co.,Ltd Health management system
EP3869518A4 (en) * 2018-10-15 2022-07-06 Wellnas.Co.,Ltd HEALTH MANAGEMENT SYSTEM
CN109871279A (en) * 2019-03-11 2019-06-11 京东方科技集团股份有限公司 Consensus task coordination method and device, blockchain system, storage medium
WO2020186114A1 (en) 2019-03-12 2020-09-17 Inculab Llc Systems and methods for personal taste recommendation
US11776037B2 (en) 2019-03-12 2023-10-03 Inculab Llc Systems and methods for personal taste recommendation
EP3931786A4 (en) * 2019-03-12 2022-11-23 Inculab LLC PERSONAL TASTE RECOMMENDATION SYSTEMS AND METHODS
US10861588B1 (en) 2019-06-25 2020-12-08 Colgate-Palmolive Company Systems and methods for preparing compositions
US11728012B2 (en) 2019-06-25 2023-08-15 Colgate-Palmolive Company Systems and methods for preparing a product
US11315663B2 (en) 2019-06-25 2022-04-26 Colgate-Palmolive Company Systems and methods for producing personal care products
US12165749B2 (en) 2019-06-25 2024-12-10 Colgate-Palmolive Company Systems and methods for preparing compositions
US11342049B2 (en) 2019-06-25 2022-05-24 Colgate-Palmolive Company Systems and methods for preparing a product
US10515715B1 (en) 2019-06-25 2019-12-24 Colgate-Palmolive Company Systems and methods for evaluating compositions
US10839941B1 (en) 2019-06-25 2020-11-17 Colgate-Palmolive Company Systems and methods for evaluating compositions
US10839942B1 (en) 2019-06-25 2020-11-17 Colgate-Palmolive Company Systems and methods for preparing a product
US11114193B2 (en) 2019-07-03 2021-09-07 Kpn Innovations, Llc Methods and systems for optimizing dietary levels utilizing artificial intelligence
US20210042637A1 (en) * 2019-08-05 2021-02-11 Kenneth Neumann Methods and systems for generating a vibrant compatibility plan using artificial intelligence
US11610683B2 (en) * 2019-08-05 2023-03-21 Kpn Innovations, Llc. Methods and systems for generating a vibrant compatibility plan using artificial intelligence
US20230207136A1 (en) * 2019-08-05 2023-06-29 Kpn Innovations, Llc. Methods and systems for generating a vibrant compatbility plan using artificial intelligence
US12014834B2 (en) * 2019-08-05 2024-06-18 Kpn Innovations Llc Methods and systems for generating a vibrant compatbility plan using artificial intelligence
US12374439B2 (en) 2019-08-05 2025-07-29 Kpn Innovations Llc Methods and systems for generating a vibrant compatibility plan using artificial intelligence
US10832172B1 (en) 2019-08-22 2020-11-10 Kpn Innovations, Llc. Systems and methods for arranging transport of adapted nutrimental artifacts with user-defined restriction requirements using artificial intelligence
US11929161B2 (en) 2019-08-22 2024-03-12 Kpn Innovations, Llc Systems and methods for displaying nutrimental artifacts on a user device
US20220207628A1 (en) * 2020-01-01 2022-06-30 Rockspoon, Inc. Biomarker-based food item design system and method
US11741557B2 (en) * 2020-01-01 2023-08-29 Rockspoon, Inc. Biomarker-based food item design system and method
US11615877B2 (en) 2020-05-12 2023-03-28 Bender, LLC Lifestyle preference management system and method
US11342060B2 (en) 2020-05-12 2022-05-24 Bender, LLC Lifestyle preference management system and method
US11853965B2 (en) 2020-08-19 2023-12-26 Walmart Apollo, Llc Automated food selection using hyperspectral sensing
US11551185B2 (en) 2020-08-19 2023-01-10 Walmart Apollo, Llc Automated food selection using hyperspectral sensing

Also Published As

Publication number Publication date
US10685576B2 (en) 2020-06-16
US20180190147A1 (en) 2018-07-05
US10360495B2 (en) 2019-07-23
US20200334999A1 (en) 2020-10-22
US20180190375A1 (en) 2018-07-05

Similar Documents

Publication Publication Date Title
US20180189636A1 (en) Deep Learning Ingredient and Nutrient Identification Systems and Methods
Bolaños et al. Food ingredients recognition through multi-label learning
US20200098466A1 (en) Machine learning implementations for a menu generation platform
US12405959B2 (en) Methods and systems for arranging and displaying guided recommendations via a graphical user interface based on biological extraction
Anderson A survey of food recommenders
US11200814B2 (en) Methods and systems for self-fulfillment of a dietary request
US20240071598A1 (en) Methods and systems for ordered food preferences accompanying symptomatic inputs
Mazlan et al. Exploring the impact of hybrid recommender systems on personalized mental health recommendations
Maia et al. Context-aware food recommendation system
Yadav et al. Predicting depression from routine survey data using machine learning
El Bouhissi et al. Towards an Efficient Knowledge-based Recommendation System.
US20210057077A1 (en) Systems and methods for arranging transport of adapted nutrimental artifacts with user-defined restriction requirements using artificial intelligence
Gaikwad et al. Precision Nutrition through Smart Wearable Technology Tailored Solutions for Personalized Health Enhancement
Su et al. Do recommender systems function in the health domain: a system review
Rout et al. Machine learning model for awareness of diet recommendation
Zhang Innovative food recommendation systems: a machine learning approach
Al-Chalabi et al. Food recommendation system based on data clustering techniques and user nutrition records
Merchant et al. ConvFood: a CNN-based food recognition mobile application for obese and diabetic patients
Morales-Garzón et al. Adaptafood: an intelligent system to adapt recipes to specialised diets and healthy lifestyles
Khan et al. Investigating health-aware smart-nudging with machine learning to help people pursue healthier eating-habits
Nasrin et al. Impact of emotional state on food preference by students: a machine learning approach
US20230230674A1 (en) Systems and methods for arranging transport of adapted nutrimental artifacts with user-defined restriction requirements using artificial intelligence
DJELIL Development and Evaluation of a Food Recommendation System to Promote Healthy Eating
Perera et al. Multiple objective optimization based dietary recommender system
Pawade et al. RecommenDiet: A System to Recommend a Dietary Regimen Using Facial Features

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: SUGGESTIC, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAPELA, VICTOR;CORRAL CORRAL, RICARDO;REEL/FRAME:045361/0017

Effective date: 20180322

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION