
WO2024158873A1 - Systems and methods for generating digital representation of a subject for rendering a service - Google Patents


Info

Publication number
WO2024158873A1
WO2024158873A1 (PCT/US2024/012713)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional models
dimensional
subjects
processors
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2024/012713
Other languages
French (fr)
Inventor
Samantha WEST
Kaarthikeyan SUBRAMANIAM
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mars Inc
Original Assignee
Mars Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mars Inc filed Critical Mars Inc
Publication of WO2024158873A1

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • G06Q10/40
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0279Fundraising management
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/22Social work or social welfare, e.g. community support activities or counselling services

Definitions

  • the present disclosure generally relates to video games and virtual worlds created by electronic means, and in particular to computer-enabled games that display character(s) in the virtual world for achieving an objective.
  • the conventional systems are technically challenged in (i) optimizing the application process by utilizing data tracking and documenting capabilities, (ii) broadcasting live feeds of adoptable pets to prevent the users from wasting time, energy, and money traveling to the rescue shelters, and/or (iii) maintaining electronic medical records of the pets and facilitating the users looking to foster or adopt pets by providing access to such medical records.
  • the present disclosure solves the technical challenges typically encountered during the use of conventional systems while fostering or adopting pets, such as those discussed in this disclosure. Specifically, the present disclosure solves these challenges by providing a centralized system that generates animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service.
  • a computer-implemented method includes: receiving, by one or more processors, data associated with one or more subjects; processing, by the one or more processors, the data to generate one or more three-dimensional models of the one or more subjects; generating, by the one or more processors, a presentation of the one or more three-dimensional models in a virtual environment; receiving, by the one or more processors, a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining, by the one or more processors, at least one action for the at least one selected three-dimensional model of the subject.
  • a system includes: one or more processors; a non-transitory computer readable medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations including: receiving data associated with one or more subjects; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
  • a non-transitory computer readable medium stores instructions which, when executed by one or more processors of a computing system, cause the one or more processors to perform operations including: receiving data associated with one or more subjects, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
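The claimed sequence of operations (receive data → generate models → present → select → determine action) can be sketched as a minimal pipeline. This is an illustrative sketch only; all function and type names are hypothetical, and the model-generation step is stubbed rather than performing actual 3D reconstruction:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a generated 3D model; not defined in the patent text.
@dataclass
class Model3D:
    subject_id: str
    mesh: list = field(default_factory=list)  # placeholder geometry

def receive_subject_data(source):
    """Step 1: receive data (images/videos/audio) associated with subjects."""
    return list(source)

def generate_models(records):
    """Step 2: process the data into three-dimensional models (stubbed)."""
    return [Model3D(subject_id=r["id"]) for r in records]

def present_in_virtual_environment(models):
    """Step 3: generate a presentation of the models in a virtual environment."""
    return {m.subject_id: m for m in models}

def select_model(environment, subject_id):
    """Step 4: receive a selection of one three-dimensional model."""
    return environment[subject_id]

def determine_action(model):
    """Step 5: determine at least one action for the selected model."""
    return f"schedule-care:{model.subject_id}"

records = [{"id": "dog-1"}, {"id": "cat-2"}]
env = present_in_virtual_environment(generate_models(receive_subject_data(records)))
action = determine_action(select_model(env, "dog-1"))
```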
  • FIG. 1 is a diagram showing an example of a system for generating animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service, according to aspects of the disclosure.
  • FIG. 2 is a flowchart of a process for generating animated three-dimensional model(s) of a subject in a virtual environment for gamification of fostering or adoption, according to aspects of the disclosure.
  • FIG. 3 is a diagram that illustrates an interactive session between a user and a three-dimensional model (e.g., avatar) of an animal for rendering a service, according to one example embodiment.
  • FIG. 4 is a diagram that illustrates interactions between registered users in a virtual environment for rendering a service, according to one example embodiment.
  • FIG. 5 is a user interface diagram that illustrates the steps for fostering or adopting three-dimensional model(s) in a virtual environment, according to one example embodiment.
  • FIG. 6 is a user interface diagram that illustrates various stages of the pet care transactions web platform for fostering or adopting three-dimensional model(s) in a virtual environment, according to one example embodiment.
  • FIG. 7 shows an example machine learning training flow chart.
  • FIG. 8 illustrates an implementation of a computer system that executes techniques presented herein.
  • When users enquire about an animal, service providers may refuse to provide details about the animal until the application for fostering or adoption has been submitted and approved.
  • the users may find the application process to be extensive and lengthy with strict requirements (e.g., background search, home checks, references, etc.).
  • users may feel overwhelmed by fostering a dog in their home until it is adopted, because not every dog lover has the means to take a dog into their home, and an application for fostering or adoption may be rejected upon determining that the user does not have a yard or works long hours. Such difficulties discourage users from fostering or adopting pets.
  • System 100 overcomes the technical shortcomings of the current technologies by providing methods and systems for fostering or adopting pets in a digital medium so that users can virtually foster or adopt real-life pets from any location.
  • the system 100 provides digital representations of animals in the metaverse (e.g., three-dimensional avatars of the animals that encapsulate their appearance and/or mannerisms).
  • the gamification of fostering or adoption motivates users to foster or adopt without having to worry about making time to care for the pets, spending on pet food or veterinary care, the health and age of the pets, or the tedious application process.
  • Such a virtual platform for fostering or adopting real-life pets may intensify the attachment the users feel for their pets and may increase the likelihood of fostering or adopting additional real-life pets and reduce pet homelessness.
  • the system 100 provides real-time recommendations to the users based on the current conditions of the pets (e.g., health conditions, food requirements, veterinary expenses, etc.), and each user can make decisions for the upkeep of their pets based on the real-time recommendations while remaining closely connected to the pets.
  • the present disclosure taps the potential of video games to bring about positive change toward fostering or adopting animals. The sophistication of modern game engines can not only reach vast audiences but also engage them on a whole new interactive level.
  • FIG. 1 is a diagram showing an example of a system for generating animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service, according to aspects of the disclosure.
  • FIG. 1 includes the system 100 that comprises user equipment (UE) 101a-101n (collectively referred to as UE 101) that includes applications 103a-103n (collectively referred to as an application 103) and sensors 105a-105n (collectively referred to as a sensor 105), a communication network 107, a third-party data source(s) 109, an animation generation platform 111, and a database 123.
  • the UE 101 includes, but is not restricted to, any type of mobile terminal, wireless terminal, fixed terminal, or portable terminal.
  • Examples of the UE 101 include, but are not restricted to, a mobile handset, a wireless communication device, a station, a unit, a device, a multimedia computer (e.g., computer system 800), a multimedia tablet, an Internet node, a communicator, a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet computer, a Personal Communication System (PCS) device, a personal navigation device, a Personal Digital Assistant (PDA), a digital camera/camcorder, an infotainment system, a dashboard computer, a television device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof.
  • the UE 101 facilitates various input means for receiving information, including, but not restricted to, a touch screen capability, a keyboard and keypad data entry, a voice-based input mechanism, and the like.
  • the UE 101 is configured with different features for generating, sharing, and viewing of visual content. Any known and future implementations of the UE 101 are also applicable.
  • the application 103 includes various applications such as, but not restricted to, content provisioning applications, multimedia applications, media player applications, camera/imaging applications, notification services, software applications, networking applications, storage services, contextual information determination services, and the like.
  • one of the application 103 at the UE 101 acts as a client for the animation generation platform 111 and performs one or more functions associated with the functions of the animation generation platform 111 by interacting with the animation generation platform 111 over the communication network 107.
  • each sensor 105 includes any type of sensor.
  • the sensors 105 include, for example, a camera/imaging sensor for gathering image data and/or video data, an audio recorder for gathering audio data, a network detection sensor for detecting wireless signals or receivers for different short-range communications (e.g., Bluetooth, Wi-Fi, Li-Fi, near field communication (NFC), etc.) from the communication network 107, a global positioning sensor for gathering location data, and the like.
  • various elements of the system 100 communicate with each other through the communication network 107.
  • the communication network 107 supports a variety of different communication protocols and communication techniques.
  • the communication network 107 allows the UE 101 and the third-party data source(s) 109 to communicate with the animation generation platform 111.
  • the communication network 107 of the system 100 includes one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof.
  • the data network is any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network, and the like, or any combination thereof.
  • the wireless network is, for example, a cellular communication network and employs various technologies including 5G (5th Generation), 4G, 3G, 2G, Long Term Evolution (LTE), wireless fidelity (Wi-Fi), Bluetooth®, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), vehicle controller area network (CAN bus), and the like, or any combination thereof.
  • the third-party data source(s) 109 includes various databases (e.g., rescue center databases, veterinary hospitals databases, etc.) that store a plurality of data (e.g., image data, video data, sound recordings, behavioral data, and/or medical records) associated with one or more subjects (e.g., real-life animals) for transmission to participating entities (e.g., the animation generation platform 111).
  • the third-party data source(s) 109 includes various cloud storage services, video monitoring services (e.g., security camera, pet camera, etc.), and/or other sources of personal recordings associated with one or more subjects.
  • the animation generation platform 111 is a platform with multiple interconnected components.
  • the animation generation platform 111 includes one or more servers, intelligent networking devices, computing devices, components, and corresponding software for generating animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service.
  • the animation generation platform 111 generates three-dimensional models of the animals that encapsulate their appearance and/or mannerisms in a virtual environment, such as the metaverse.
  • the animation generation platform 111 enables users to foster or adopt animals via an end-to-end pet care transactions web platform.
  • the animation generation platform 111 generates three-dimensional models of a plurality of animals (e.g., dogs, cats, etc.) that are available for adoption or foster care.
  • the user may select at least one three-dimensional model that represents a rescued animal for fostering or adoption.
  • the user may donate money for the maintenance of the selected three-dimensional model or purchase a virtual property (e.g., a virtual land) in the virtual environment to accommodate the selected three-dimensional model.
  • Such actions by the user in the pet care transactions web platform result in the service providers taking care of the rescued animals in real locations (e.g., animal sanctuaries, animal adoption centers, animal shelters, veterinarian clinics, etc.). For example, a donation from the user for the upkeep of the three-dimensional model is utilized to take care of the rescued animal.
  • the users may also host a meet and greet of the three-dimensional models on their property for other registered users.
  • Such online interaction increases the likelihood of other registered users fostering or adopting the rescued animals (e.g., other registered users may select three-dimensional models of the rescued animal for fostering or adoption, and may accommodate them on their virtual land in the pet care transactions web platform).
  • the animation generation platform 111 comprises a data collecting module 113, a learning module 115, a model creation engine 117, an animation engine 119, a presentation module 121, or any combination thereof.
  • terms such as “component” or “module” generally encompass hardware and/or software, e.g., software that a processor or the like executes to implement the associated functionality. It is contemplated that the functions of these components may be combined in one or more components or performed by other components of equivalent functionality.
  • the data collecting module 113 collects, in real-time or near real-time, relevant data associated with one or more subjects (e.g., animals) through various data collection techniques.
  • the relevant data includes image data, video data, audio data, behavioral data, and/or medical records associated with the animals available for foster or adoption.
  • the image data includes pictures and/or drawings that correspond to rescued animals.
  • the medical data includes clinical data, genetic data, or diagnostic data associated with the rescued animals.
  • the data collecting module 113 uses a web-crawling component to access various data sources (e.g., third-party data source(s) 109, database 123) to collect the relevant data.
  • the data collecting module 113 includes various software applications (e.g., data mining applications in Extensible Markup Language (XML)) that automatically search for and return relevant data associated with one or more subjects.
  • the data collecting module 113 collects images (e.g., images of animals) uploaded by the users via the user interface of their respective UE 101.
  • the data collecting module 113 also collects images and/or videos of the users (e.g., registered users) participating in the service program.
  • the data collecting module 113 transmits the collected data to the learning module 115 for further processing.
  • the learning module 115 analyzes the collected data (e.g., images and/or videos of the animals) to learn visual characteristics or appearance details of the animals.
  • the learning module 115 utilizes a neural network model that applies style transfer or extraction techniques to alter a generic three-dimensional model to match the animal in the images and/or videos.
  • the learning module 115 utilizes a generative adversarial network (GAN) to learn mappings from input images to output images, and also learn a loss function to train this mapping.
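An image-to-image GAN objective of the kind described above (e.g., pix2pix-style) pairs an adversarial term, which rewards the generator for fooling the discriminator, with an L1 term that keeps the generated image close to its paired target. The sketch below computes only that combined generator loss; the weighting `lam` and the function name are illustrative assumptions, not values from the patent:

```python
import numpy as np

def generator_loss(disc_out_on_fake, fake_img, target_img, lam=100.0):
    """Pix2pix-style generator objective (sketch).

    disc_out_on_fake: discriminator probabilities (0..1) assigned to the
    generated images; higher means the discriminator was fooled.
    """
    eps = 1e-12
    # Non-saturating adversarial term: minimized when the discriminator
    # scores the fake images as real (probability near 1).
    adv = -np.mean(np.log(disc_out_on_fake + eps))
    # L1 reconstruction term: keeps the output near the paired target.
    l1 = np.mean(np.abs(fake_img - target_img))
    return adv + lam * l1
```

A perfect generator (discriminator fully fooled, output identical to the target) drives this loss toward zero, while any mismatch in either term raises it.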
  • the learning module 115 analyzes the collected data (e.g., identify actions performed by the animals in the videos) to learn the behaviors of the animals, and generates a behavior model that represents the behavior of the animals.
  • the learning module 115 performs semantic analysis of the collected data to extract or determine traits of the animals.
  • the learning module 115 analyzes the collected data (e.g., audio recording) to learn auditory information associated with the animal to generate a sound model that represents the sound or vocal traits of the animals.
  • the learning module 115 also analyzes images and/or videos of the users to learn their visual characteristics or appearance details. The learning module 115 transmits the analyzed data to the model creation engine 117 for further processing.
  • the model creation engine 117 generates three- dimensional models based on visual characteristics or appearance details of the subjects (e.g., animals) in the images and/or videos.
  • the images are two-dimensional images of the subjects from different viewpoints.
  • the model creation engine 117 utilizes the photogrammetry technique to analyze images for identifying common points. The identified points serve as reference markers that are utilized to calculate the distance and angle between different elements in the images to build three-dimensional models.
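The common-point computation described above can be illustrated with linear (DLT) triangulation, a standard photogrammetric step that recovers a 3D point from one matched image point in each of two calibrated views. This is a minimal sketch with toy cameras; the patent does not specify this particular algorithm:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: recover a 3D point from a matched image point
    in each of two views with known 3x4 projection matrices P1, P2."""
    # Each view contributes two linear constraints of the form
    # u*(p2 . X) - (p0 . X) = 0 and v*(p2 . X) - (p1 . X) = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last row of Vt).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: identity pose, and the same camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])    # homogeneous 3D point
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]      # project into view 1
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]      # project into view 2
X_rec = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the recovered point matches the original exactly; with real matched features the same least-squares formulation gives the best-fit 3D point.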
  • the model creation engine 117 modifies the three-dimensional models based on user inputs.
  • the user may input instructions for modifying the three-dimensional models via the graphical user interface elements of the UE 101.
  • the model creation engine 117 transmits the three- dimensional models to the animation engine 119.
  • the model creation engine 117 also generates three-dimensional models based on visual characteristics or appearance details of the users.
  • the animation engine 119 generates animation for three- dimensional models to move in a manner that approximates specific movements performed by the real-life animal in the detected videos.
  • the animation engine 119 utilizes deep neural networks or other machine learning models for determining kinematic data, skeletal movements, or similar information from the videos of real-life animals to generate animation that can be applied to three-dimensional models.
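One building block of the kinematic extraction described above is turning tracked keypoints into joint angles that can drive a skeletal rig. The sketch below computes the interior angle at one joint from three 2D keypoints (e.g., a hip–knee–ankle triple tracked across video frames); it is an illustrative fragment, not the patent's pipeline:

```python
import math

def joint_angle(a, b, c):
    """Interior angle at keypoint b (in degrees) formed by keypoints a-b-c,
    e.g., a hip-knee-ankle triple from a pose-estimation model."""
    v1 = (a[0] - b[0], a[1] - b[1])   # limb vector b -> a
    v2 = (c[0] - b[0], c[1] - b[1])   # limb vector b -> c
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for acos
    return math.degrees(math.acos(cos_t))

# A straight limb reads ~180 degrees; a right-angle bend reads ~90.
straight = joint_angle((0, 0), (0, 1), (0, 2))
bent = joint_angle((0, 0), (0, 1), (1, 1))
```

A sequence of such angles per frame forms a rotation curve that can be retargeted onto the corresponding joint of a three-dimensional model.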
  • the animation engine 119 creates a virtual environment (e.g., a virtual rescue shelter, a virtual island, etc.) that is populated by the animated three-dimensional models; such a virtual environment may be based on real locations (e.g., a rescue shelter, a veterinary clinic, etc.).
  • a user can sponsor one or more animated three-dimensional models at a virtual rescue shelter and pay for their expenses (e.g., food, shelter, medical, etc.).
  • the user can purchase land on the virtual island, select animated three-dimensional models for fostering or adoption, and accommodate the selected animated three-dimensional models on the purchased land.
  • Such actions performed by the users in the virtual world of video games are reflected in the real world; for example, by sponsoring an animated three-dimensional model at a virtual rescue shelter, the user is supporting a real-life animal at an actual rescue shelter.
  • the presentation module 121 enables display of the virtual environment and the animated three-dimensional models in the UE 101.
  • the presentation module 121 is configured to operate in connection with augmented reality (AR) processing techniques, wherein the virtual environment, the animated three-dimensional models, graphic elements, and various applications interact.
  • the presentation module 121 also comprises a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like.
  • the presentation module 121 generates real-time notifications regarding conditions of the animated three-dimensional models (e.g., health, food requirement, water requirements, etc.) in the UE 101.
  • the user may provide inputs for the maintenance of the animated three-dimensional models (e.g., instructions to take the virtual pet to a veterinarian by providing financial support).
  • the presentation module 121 updates the display of the virtual environment and the animated three-dimensional models based on the user input.
  • the presentation module 121 employs various application programming interfaces (APIs) or other function calls corresponding to the application 103 on the UE 101, thus enabling the display of graphics primitives such as menus, buttons, data entry fields, etc.
  • the presentation module 121 also causes interfacing of guidance information to include, at least in part, one or more annotations, audio messages, video messages, or a combination thereof in the UE 101 to guide the users.
  • the above-presented modules and components of the animation generation platform 111 are implemented in hardware, firmware, software, or a combination thereof. Though depicted as a separate entity in FIG. 1, it is contemplated that the animation generation platform 111 may also be implemented for direct operation by the respective UE 101. As such, the animation generation platform 111 generates direct signal inputs by way of the operating system of the UE 101. In another embodiment, one or more of the modules 113-121 are implemented for operation by the respective UEs, as the animation generation platform 111.
  • the various executions presented herein contemplate any and all arrangements and models.
  • the database 123 is any type of database, such as relational, hierarchical, object-oriented, and/or the like, wherein data are organized in any suitable manner, including data tables or lookup tables.
  • the database 123 accesses various data sources, stores content associated with the subjects (e.g., animals available for fostering or adoption), and manages multiple types of information that provide means for aiding in the content provisioning and sharing process.
  • the database 123 stores three-dimensional models of the subject(s) generated by the model creation engine 117 and/or the animations generated by the animation engine 119. It is understood that any other suitable data may be included in the database 123.
  • the database 123 includes a machine learning based training database with a pre-defined mapping defining a relationship between various input parameters and output parameters based on various statistical methods.
  • the training database includes machine learning algorithms to learn mappings between input parameters related to the subject(s).
  • the training database is routinely updated and/or supplemented based on the machine learning methods.
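The training database described above, which maps input parameters to output parameters and is routinely supplemented, can be sketched as a simple store queried by nearest neighbor. The class, its methods, and the example labels are hypothetical; the patent does not specify a data structure or learning method:

```python
class TrainingDatabase:
    """Sketch of a mapping store: (input parameters, output parameter)
    pairs, queried by 1-nearest-neighbor and supplemented over time."""

    def __init__(self):
        self.pairs = []  # list of (input_vector, output_label)

    def supplement(self, x, y):
        """Routinely add a new observed mapping to the database."""
        self.pairs.append((tuple(x), y))

    def predict(self, x):
        """Return the output paired with the closest stored input
        (squared Euclidean distance)."""
        def dist(pair):
            return sum((a - b) ** 2 for a, b in zip(pair[0], x))
        return min(self.pairs, key=dist)[1]

db = TrainingDatabase()
db.supplement([0.9, 0.1], "high-energy breed")   # illustrative labels
db.supplement([0.1, 0.8], "calm breed")
label = db.predict([0.85, 0.2])
```

In practice the same interface could back any statistical model; the lookup-table form simply makes the input-to-output mapping explicit.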
  • a protocol includes a set of rules defining how the network nodes within the communication network 107 interact with each other based on information sent over the communication links.
  • the protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information.
  • the conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model.
  • Each packet typically comprises (1) header information associated with a particular protocol, and (2) payload information that follows the header information and contains information that may be processed independently of that particular protocol. In some protocols, the packet includes (3) trailer information following the payload and indicating the end of the payload information.
  • the header includes information such as the source of the packet, its destination, the length of the payload, and other properties used by the protocol.
  • the data in the payload for the particular protocol includes a header and payload for a different protocol associated with a different, higher layer of the OSI Reference Model.
  • the header for a particular protocol typically indicates a type for the next protocol contained in its payload.
  • the higher layer protocol is said to be encapsulated in the lower layer protocol.
  • the headers included in a packet traversing multiple heterogeneous networks, such as the Internet, typically include a physical (layer 1) header, a data-link (layer 2) header, an internetwork (layer 3) header and a transport (layer 4) header, and various application (layer 5, layer 6 and layer 7) headers as defined by the OSI Reference Model.
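The header/payload encapsulation described above can be illustrated with a toy protocol in which each layer prepends a fixed header recording the type of the next protocol and the payload length. The header layout and protocol numbers here are invented for illustration, not taken from any real protocol stack:

```python
import struct

# Toy 4-byte header: 2-byte type of the encapsulated protocol,
# then 2-byte payload length, both big-endian ("network order").
HDR = struct.Struct("!HH")

def encapsulate(proto_type, payload):
    """Prepend a header describing the payload, as each lower OSI layer
    does to the layer above it."""
    return HDR.pack(proto_type, len(payload)) + payload

def decapsulate(packet):
    """Strip one header, returning (next_proto_type, payload)."""
    proto_type, length = HDR.unpack_from(packet)
    return proto_type, packet[HDR.size:HDR.size + length]

APP, TRANSPORT = 7, 4                       # illustrative protocol numbers
app_data = b"adopt:dog-1"
segment = encapsulate(APP, app_data)        # transport layer wraps app data
frame = encapsulate(TRANSPORT, segment)     # lower layer wraps the segment

t, inner = decapsulate(frame)               # peel the outer header...
a, data = decapsulate(inner)                # ...then the inner one
```

Peeling the headers in order recovers the original application payload, mirroring how each layer's header announces the protocol encapsulated in its payload.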
  • FIG. 2 is a flowchart of a process for generating animated three-dimensional model(s) of a subject in a virtual environment for gamification of fostering or adoption, according to aspects of the disclosure.
  • the animation generation platform 111 and/or any of the modules 113-121 performs one or more portions of the process 200 and are implemented using, for instance, a chip set including a processor and a memory as shown in FIG. 8.
  • the animation generation platform 111 and/or any of modules 113-121 provide means for accomplishing various parts of the process 200, as well as means for accomplishing embodiments of other processes described herein in conjunction with other components of the system 100.
  • Although the process 200 is illustrated and described as a sequence of steps, it is contemplated that various embodiments of the process 200 are performed in any order or combination and need not include all of the illustrated steps.
  • the animation generation platform 111 receives, via processor 802 (which may include one or more processors), data associated with the subject(s) (e.g., animals available for fostering or adoption).
  • the data includes images, videos, and/or audio recordings of the subject(s) captured by various sensors (e.g., sensor 105). It is understood that the data may include any other relevant data associated with the subject(s).
  • the animation generation platform 111 processes, via processor 802, the data to generate three-dimensional models of the subject(s).
  • the animation generation platform 111 applies computer vision techniques (e.g., a neural network model or a classification model) to the images and/or videos to learn the visual characteristics of the subject(s) for generating the three-dimensional model(s).
  • the animation generation platform 111 also applies a photogrammetry technique to analyze the image(s) for identifying common points as reference markers that are utilized to calculate the distance and angle between different elements in the images to build the three-dimensional model(s).
  • the three-dimensional models are then stored in database 123.
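The photogrammetry step — using common reference points across images to recover distances — can be sketched with the classic stereo relation Z = f·B/d for a rectified image pair. The focal length, baseline, and pixel coordinates below are illustrative assumptions, not the platform's calibration.

```python
def triangulate_depth(x_left: float, x_right: float,
                      focal_px: float, baseline_m: float) -> float:
    """Depth of a common reference point seen in two rectified images:
    Z = f * B / d, where d = x_left - x_right is the disparity in pixels."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("expected positive disparity for a rectified pair")
    return focal_px * baseline_m / disparity

# A reference marker at x=320 px (left image) and x=300 px (right image),
# with an 800 px focal length and a 0.1 m baseline between the two views:
depth_m = triangulate_depth(320.0, 300.0, 800.0, 0.1)
```

Repeating this for many matched markers yields the point distances from which a three-dimensional model can be assembled.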
  • the animation generation platform 111 generates, via processor 802, a presentation of the one or more three-dimensional models in a virtual environment.
  • the virtual environment represents real locations (e.g., animal sanctuaries, animal adoption centers, animal shelters, etc.).
  • the animation generation platform 111 generates animations for the three-dimensional model(s) in the virtual environment.
  • the animated three-dimensional model(s) executes specific movements performed by the animal(s) in the video(s).
  • the animated three-dimensional model(s) is also configured to communicate with the UE 101 associated with a user, and an action is determined for the animated three-dimensional model(s) based on the communication.
  • the animated three-dimensional models are then stored in database 123.
  • the animation generation platform 111 receives training data correlating the data associated with the subject(s) to the three-dimensional model(s) and/or the animated three-dimensional model(s).
  • the animation generation platform 111 inputs the training data to a machine learning model to configure the machine learning model to output the three-dimensional model(s) and/or the animated three-dimensional model(s) for the data associated with the subject(s).
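The training step above can be sketched as a toy supervised fit, assuming the training data reduces to numeric (input, output) pairs. The gradient-descent linear fit, the measurement-to-scale mapping, and the learning rate are illustrative stand-ins for the actual machine learning model.

```python
def train_linear(pairs, lr=0.05, epochs=2000):
    """Fit y ~ w*x + b by gradient descent on mean squared error, a toy
    stand-in for configuring a model that maps subject data to
    three-dimensional model parameters."""
    w, b = 0.0, 0.0
    n = len(pairs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in pairs) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in pairs) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# hypothetical training pairs: a body-length measurement from the images
# correlated with a mesh scale factor for the three-dimensional model
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, b = train_linear(pairs)
```

Once configured, the fitted parameters produce outputs for new subject data the same way the trained model outputs three-dimensional models for new inputs.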
  • the animation generation platform 111 receives, via processor 802, a selection of a three-dimensional model from the plurality of three-dimensional models of one or more subjects.
  • the animation generation platform 111 receives a request to purchase a property in the virtual environment (e.g., a virtual land in a virtual rescue shelter) from the UE 101 associated with the user, wherein the request includes a transaction amount for the property.
  • the animation generation platform 111 superimposes the selected three-dimensional model on the purchased property for fostering or adoption.
  • the animation generation platform 111 continuously monitors the condition of the selected three-dimensional model on the purchased property, and generates real-time notifications regarding the condition of the selected three-dimensional model in the UE 101 associated with the user.
  • the condition of the selected three-dimensional model represents the condition of an animal in real locations (e.g., a dog in a rescue shelter).
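The continuous monitoring step could be sketched as a rule-based check that maps condition readings of the fostered model (standing in for the real animal) to user-facing notifications. The field names and thresholds below are illustrative assumptions, not the platform's actual schema.

```python
def check_condition(readings: dict) -> list[str]:
    """Map monitored condition readings to real-time user notifications.
    Field names and thresholds are illustrative assumptions."""
    alerts = []
    if readings["weight_kg"] < 0.9 * readings["healthy_weight_kg"]:
        alerts.append("Weight below healthy range - veterinary check recommended")
    if readings["days_since_checkup"] > 180:
        alerts.append("Routine checkup overdue")
    return alerts

alerts = check_condition({"weight_kg": 8.0,
                          "healthy_weight_kg": 10.0,
                          "days_since_checkup": 200})
```

In practice such checks would run periodically, pushing any returned messages to the UE 101 associated with the user.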
  • the animation generation platform 111 determines, via processor 802, an action for the at least one selected three-dimensional model of the subject.
  • the action includes fostering or adopting the subject (e.g., pets) in the virtual environment; such fostering or adopting of the subject in the virtual environment causes service providers (e.g., rescue shelters, animal hostels, animal foster homes, veterinary hospitals, etc.) to take care of the animals in real locations.
  • the action involves the user sponsoring the subject(s) by paying for their expenses; such payments are utilized by the service providers to take care of the animals in real locations.
  • FIG. 3 is a diagram that illustrates an interactive session between a user and a three-dimensional model (e.g., avatar) of an animal for rendering a service, according to one example embodiment.
  • the animation generation platform 111 generates a display 300 (e.g., pet care transactions web platform) in a user interface of the UE 101 associated with the user.
  • the display 300 includes three-dimensional models 301 and 303 that represent the user and the animal that may be fostered or adopted by the user, respectively.
  • the three-dimensional models 301 and 303 are realistic reproductions of the user and the animal (e.g., lifelike) or some fanciful alter egos (e.g., cartoons).
  • the animation generation platform 111 utilizes conversational artificial intelligence (AI) for creating human-like interactions and conversations between the three-dimensional models 301 and 303 (e.g., chatbots 305 to answer questions and provide support or generative AI).
  • Conversational AI uses a combination of natural language processing (NLP), foundation models, and machine learning (ML) to understand and process human language. Accordingly, the three-dimensional models understand natural language input, maintain context for coherent conversations, and provide contextually relevant responses for engaging the users in dynamic and interactive dialogues.
  • the animation generation platform 111 is constantly learning from such interactions and improving response quality over time.
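One minimal sketch of such an interaction loop, assuming simple keyword-based intent matching in place of the full NLP/foundation-model stack; the intent keywords and replies are hypothetical.

```python
class PetChatbot:
    """Keyword-based intent matching with retained context - a toy
    stand-in for the conversational AI stack; intents and replies
    are hypothetical."""

    INTENTS = {
        "breed": "I'm a rescue beagle!",
        "adopt": "You can adopt me through the shelter page.",
    }

    def __init__(self):
        self.history = []  # context kept for coherent conversations

    def reply(self, message: str) -> str:
        self.history.append(message)
        for keyword, answer in self.INTENTS.items():
            if keyword in message.lower():
                return answer
        return "Tell me more!"

bot = PetChatbot()
greeting = bot.reply("What breed are you?")
follow_up = bot.reply("How do I adopt you?")
```

The retained history is what a real system would feed back into the model to maintain context and to improve response quality over time.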
  • Although FIG. 3 depicts a single user interacting with a three-dimensional model 303, it should be understood that multiple users may interact with a plurality of three-dimensional models to foster or adopt.
  • a virtual approach enables users to foster or adopt rescued animals anywhere in the world.
  • For example, user Bruce (represented by the three-dimensional model 301) may foster or adopt a dog (e.g., the three-dimensional model 303) in a virtual environment (e.g., the pet care transactions web platform).
  • FIG. 4 is a diagram that illustrates interactions between registered users in a virtual environment for rendering a service, according to one example embodiment.
  • the animation generation platform 111 generates a display 400 (e.g., pet care transactions web platform) in a user interface of the UE 101 associated with the user.
  • the display 400 includes a three-dimensional model 401 of an animal that may be fostered or adopted by the user.
  • the display 400 also includes annotations, audio messages, and/or video messages to guide the users.
  • Although FIG. 4 depicts a single three-dimensional model 401, it should be understood that a plurality of three-dimensional models of animals available for fostering or adoption may be presented in the display 400.
  • the display 400 is a pet care transactions web platform for adopting or fostering animals.
  • the three-dimensional model 401 is a non-player character (NPC) based on real rescued animals that are available for fostering or adoption.
  • the animation generation platform 111 utilizes a trained machine-learning model for generating the three-dimensional models; for example, the trained machine-learning model learns associations between images, videos, audio, three-dimensional models, and/or animated three-dimensional models.
  • the three-dimensional model 401 performs mannerisms similar to those of the animal it corresponds to (e.g., sitting style, barking tonalities, running style, jumping style, walking style, playing style, etc.).
  • the three-dimensional model 401 performs habits unique to the animal it corresponds to, such as tricks taught to the animal.
  • the display 400 includes annotation 403 that informs the users to donate money for maintenance of the three-dimensional model 401, via the pet care transactions web platform, so that a rescued animal is saved in real life.
  • the user may donate money or purchase a virtual property (e.g., a virtual land 405) in the virtual environment to accommodate the three-dimensional model 401.
  • the user can foster the three-dimensional model 401 on the virtual property and host a meet and greet on the property for other registered users.
  • the registered users may interact in the metaverse using virtual reality and/or augmented reality technologies.
  • the other registered users may donate for the upkeep or adopt the three-dimensional model 401 and accommodate them on their virtual land 407 in the virtual environment.
  • the animation generation platform 111 generates digital token 409 indicating the contribution of the users (e.g., foster care providers, adopters, donors, etc.) in fostering or adopting rescued animals.
  • the users may share the digital token 409 and their experience on social media.
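A digital token of this kind could be sketched as a content-addressed record, with a hash digest serving as the token id. This is a simplified, off-chain stand-in for an NFT or blockchain-minted token; the field names are illustrative.

```python
import hashlib
import json

def mint_token(user_id: str, animal_id: str, contribution: str) -> dict:
    """Create a shareable token record whose id is a digest of its
    contents - a simplified, off-chain stand-in for a digital token/NFT."""
    record = {"user": user_id, "animal": animal_id, "contribution": contribution}
    token_id = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {**record, "token_id": token_id}

token = mint_token("bruce", "dog-303", "foster")
```

Because the id is derived deterministically from the record, two parties can independently verify that a shared token matches the claimed contribution.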
  • FIG. 5 is a user interface diagram that illustrates the steps for fostering or adopting three-dimensional model(s) in a virtual environment, according to one example embodiment.
  • the animation generation platform 111 generates a display 500 (e.g., pet care transactions web platform) in a user interface of the UE 101 associated with the user.
  • a user (e.g., a registered user) logs into the pet care transactions web platform by entering his/her credential information via the UE 101.
  • the user may search for rescue animals (e.g., a dog, a cat, or any other animals) looking for a foster home.
  • the animation generation platform 111 processes historical information of the user to determine their preferences and may recommend one or more three-dimensional model(s) that represent the pets (e.g., specific breeds of dogs or cats) that the user likes.
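The recommendation step can be sketched as a frequency-based ranking over the user's interaction history; the breed names, model ids, and scoring rule below are hypothetical, not the platform's actual recommender.

```python
from collections import Counter

def recommend(history: list[str], available: dict[str, str], k: int = 2) -> list[str]:
    """Rank available three-dimensional models by how often their breed
    appears in the user's interaction history."""
    breed_counts = Counter(history)
    ranked = sorted(available,
                    key=lambda model: breed_counts[available[model]],
                    reverse=True)
    return ranked[:k]

# hypothetical history and catalogue of models awaiting fostering
history = ["beagle", "beagle", "siamese"]
models = {"model-1": "beagle", "model-2": "persian", "model-3": "siamese"}
top = recommend(history, models)
```

A production system would fold in richer preference signals (age, size, location), but the ranking structure is the same.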
  • the user selects a three-dimensional model of his choice, whereupon the three-dimensional model is downloaded for fostering in the virtual environment.
  • the user purchases land in the virtual environment of the pet care transactions web platform, and the selected three-dimensional model is uploaded and fostered on the purchased land. As discussed, such fostering of the three-dimensional model causes the fostering of real rescued animals.
  • in step 507, the user receives digital tokens or non-fungible tokens (NFTs) for his/her contribution to the fostering or adoption of rescued animals.
  • the user may share the digital tokens, the NFTs, and his/her experience on social media (step 509).
  • other registered users log in to the pet care transactions web platform by entering their credential information via their respective UE 101.
  • the other registered users looking to adopt a pet may interact with the user fostering the three-dimensional model on the purchased land.
  • the other registered users interact with the three-dimensional model and receive relevant information that helps them in determining whether or not to adopt (e.g., breed of the animal, location of the animal, age of the animal, health records of the animal, etc.).
  • the other registered users may donate for the upkeep of the animal or adopt the three-dimensional model 401 and accommodate them on their virtual land in the virtual environment. Such adoption of the three-dimensional model in the virtual environment results in the adoption of real rescued animals.
  • the other registered users may redeem the experience for digital tokens or NFTs.
  • the other registered users may share the digital tokens, the NFTs, and their experience on social media.
  • FIG. 6 is a user interface diagram that illustrates various stages of the pet care transactions web platform for fostering or adopting three-dimensional model(s) in a virtual environment, according to one example embodiment.
  • the first stage includes the animation generation platform 111 generating a display of a homepage of the pet care transactions web platform in a user interface of the UE 101 associated with the user.
  • the user logs into the web platform and is navigated to pages 601, 603, and 605 which provide information relating to fostering or adopting three-dimensional model(s) in a virtual environment that causes fostering or adoption of the real rescued animals.
  • the user is navigated to the main page.
  • the system determines that the user completed reading through the pages based on sensor data (e.g., touch detection sensors indicating the user wishes to move to the next page, gaze detection sensors that detect eye movements of the user, etc.).
  • sensor data e.g., touch detection sensors indicating the user wishes to move to the next page, gaze detection sensors that detect eye movements of the user, etc
  • the user is navigated to pages 607, 609, and 611.
  • the user searches for rescued animals or is provided with recommendations on the three-dimensional model(s) that represent the pets (e.g., specific breeds of dogs or cats) that the user likes based on his/her historical information and/or preference information.
  • the user selects a three-dimensional model of his choice, whereupon the user may either foster the selected three-dimensional model (e.g., donating money utilizing virtual currency (e.g., cryptocurrency 613) or credit cards 615) or engage in an interaction with the three-dimensional model.
  • Such donation by the user is utilized for fighting pet homelessness or combating barriers to adoption.
  • the donation is also used to support service providers (e.g., rescue shelters, veterinary clinics, etc.) registered with the pet care transactions web platform.
  • the user is navigated to pages 617, 619, and 621 for interacting with the three-dimensional model.
  • the user personalizes the three-dimensional model by modifying its visual appearances, color, clothes, and/or voice.
  • the user engages in real-time communication with the three-dimensional model to understand the real rescued animal he/she is fostering or adopting (e.g., name of the dog, breed of the dog, location of the dog, animal shelter that is fostering the dog, health-related information, etc.).
  • the three-dimensional model also stores relevant information associated with the user during the conversation (e.g., date of birth, appointments, etc.), and timely reminds the user about the upcoming appointments.
  • the three-dimensional model also acts as a friend by wishing the user a happy birthday.
  • the user while interacting with the three-dimensional model may play various video games (e.g., go on a quest together, solve a puzzle together, compete against each other, etc.).
  • the user may choose to foster the three-dimensional model.
  • the user may donate money utilizing virtual currency (e.g., cryptocurrency 613) or credit cards 615 to purchase land in the virtual environment and foster the three-dimensional model on the purchased land.
  • the user hosts a meet and greet on the property for the other registered users.
  • the other registered users interact with the three-dimensional model and may either donate or adopt and accommodate the three-dimensional model on their virtual land.
  • Such fostering or adoption of the three-dimensional model in the virtual environment results in the fostering or adoption of real rescued animals.
  • the user and the other registered users may redeem their experience for digital tokens or NFTs (e.g., 623, 625, and 627), such digital tokens or NFTs include pictures of the users and/or the pets they are fostering or adopting.
  • the user and the other registered users may share their digital tokens, NFTs, and experience on social media.
  • One or more implementations disclosed herein include and/or are implemented using a machine learning model.
  • one or more of the modules of the animation generation platform 111 are implemented using a machine learning model and/or are used to train the machine learning model.
  • a given machine learning model is trained using the training flow chart 700 of FIG. 7.
  • Training data 712 includes one or more of stage inputs 714 and known outcomes 718 related to the machine learning model to be trained.
  • Stage inputs 714 are from any applicable source including text, visual representations, data, values, comparisons, and stage outputs, e.g., one or more outputs from one or more steps from FIG. 2.
  • the known outcomes 718 are included for the machine learning models generated based on supervised or semi-supervised training.
  • An unsupervised machine learning model is not trained using known outcomes 718.
  • Known outcomes 718 includes known or desired outputs for future inputs similar to or in the same category as stage inputs 714 that do not have corresponding known outputs.
  • the training data 712 and a training algorithm 720 (e.g., one or more of the modules implemented using the machine learning model and/or used to train the machine learning model) are provided to a training component 730 that applies the training data 712 to the training algorithm 720 to generate the machine learning model.
  • the training component 730 is provided comparison results 716 that compare a previous output of the corresponding machine learning model to apply the previous result to re-train the machine learning model.
  • the comparison results 716 are used by training component 730 to update the corresponding machine learning model.
  • the training algorithm 720 utilizes machine learning networks and/or models including, but not limited to, a deep learning network such as Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Fully Convolutional Networks (FCN) and Recurrent Neural Networks (RNN), probabilistic models such as Bayesian Networks and Graphical Models, classifiers such as K-Nearest Neighbors, and/or discriminative models such as Decision Forests and maximum margin methods, the model specifically discussed herein, or the like.
  • the machine learning model used herein is trained and/or used by adjusting one or more weights and/or one or more layers of the machine learning model. For example, during training, a given weight is adjusted (e.g., increased, decreased, removed) based on training data or input data. Similarly, a layer is updated, added, or removed based on training data/and or input data. The resulting outputs are adjusted based on the adjusted weights and/or layers.
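The weight-adjustment loop described above can be sketched with a perceptron: each comparison of a prediction against a known outcome (cf. comparison results 716) drives a weight update. The toy dataset is illustrative, and the perceptron stands in for whichever model the training component generates.

```python
def train_perceptron(samples, epochs=10):
    """Adjust weights from the comparison of each prediction against a
    known outcome, echoing the training component's update loop."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w[0] * x0 + w[1] * x1 + bias > 0 else 0
            error = target - pred      # the "comparison result"
            w[0] += error * x0         # weight adjustments
            w[1] += error * x1
            bias += error
    return w, bias

# linearly separable toy data: label 1 when x0 + x1 is large
samples = [((0.0, 0.0), 0), ((1.0, 1.0), 1), ((0.0, 1.5), 1), ((0.2, 0.1), 0)]
weights, bias = train_perceptron(samples)
```

Re-training with new comparison results is just another pass through the same loop with the updated samples.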
  • any process or operation discussed in this disclosure is understood to be computer-implementable; for example, the process illustrated in FIG. 2 is performed by one or more processors of a computer system as described herein.
  • a process or process step performed by one or more processors is also referred to as an operation.
  • the one or more processors are configured to perform such processes by having access to instructions (e.g., software or computer-readable code) that, when executed by one or more processors, cause one or more processors to perform the processes.
  • the instructions are stored in a memory of the computer system.
  • a processor is a central processing unit (CPU), a graphics processing unit (GPU), or any suitable type of processing unit.
  • a computer system such as a system or device implementing a process or operation in the examples above, includes one or more computing devices.
  • One or more processors of a computer system are included in a single computing device or distributed among a plurality of computing devices.
  • One or more processors of a computer system are connected to a data storage device.
  • a memory of the computer system includes the respective memory of each computing device of the plurality of computing devices.
  • FIG. 8 illustrates an implementation of a computer system that executes techniques presented herein.
  • the computer system 800 includes a set of instructions that are executed to cause the computer system 800 to perform any one or more of the methods or computer-based functions disclosed herein.
  • the computer system 800 operates as a standalone device or is connected, e.g., using a network, to other computer systems or peripheral devices.
  • processor refers to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., is stored in registers and/or memory.
  • a “computer,” a “computing machine,” a “computing platform,” a “computing device,” or a “server” includes one or more processors.
  • the computer system 800 operates in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment.
  • the computer system 800 is also implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a landline telephone, a control system, a camera, a scanner, a facsimile machine, a printer, a pager, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the computer system 800 is implemented using electronic devices that provide voice, video, or data communication.
  • the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
  • the computer system 800 includes a processor 802, e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both.
  • the processor 802 is a component in a variety of systems.
  • the processor 802 is part of a standard personal computer or a workstation.
  • the processor 802 is one or more processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data.
  • the processor 802 implements a software program, such as code generated manually (i.e., programmed).
  • the computer system 800 includes a memory 804 that communicates via bus 808.
  • Memory 804 is a main memory, a static memory, or a dynamic memory.
  • Memory 804 includes, but is not limited to computer-readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like.
  • the memory 804 includes a cache or random-access memory for the processor 802.
  • the memory 804 is separate from the processor 802, such as a cache memory of a processor, the system memory, or other memory.
  • Memory 804 is an external storage device or database for storing data. Examples include a hard drive, compact disc (“CD”), digital video disc (“DVD”), memory card, memory stick, floppy disc, universal serial bus (“USB”) memory device, or any other device operative to store data.
  • the memory 804 is operable to store instructions executable by the processor 802.
  • the functions, acts, or tasks illustrated in the figures or described herein are performed by processor 802 executing the instructions stored in memory 804.
  • the functions, acts, or tasks are independent of the particular type of instruction set, storage media, processor, or processing strategy and are performed by software, hardware, integrated circuits, firmware, micro-code, and the like, operating alone or in combination.
  • processing strategies include multiprocessing, multitasking, parallel processing, and the like.
  • the computer system 800 further includes a display 810, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information.
  • a display 810 acts as an interface for the user to see the functioning of the processor 802, or specifically as an interface with the software stored in the memory 804 or in the drive unit 806.
  • the computer system 800 includes an input/output device 812 configured to allow a user to interact with any of the components of the computer system 800.
  • the input/output device 812 is a number pad, a keyboard, a cursor control device, such as a mouse, a joystick, touch screen display, remote control, or any other device operative to interact with the computer system 800.
  • the computer system 800 also includes the drive unit 806 implemented as a disk or optical drive.
  • the drive unit 806 includes a computer-readable medium 822 in which one or more sets of instructions 824, e.g. software, are embedded. Further, the sets of instructions 824 embody one or more of the methods or logic as described herein. The instructions 824 reside completely or partially within memory 804 and/or within processor 802 during execution by the computer system 800.
  • the memory 804 and the processor 802 also include computer-readable media as discussed above.
  • computer-readable medium 822 includes the set of instructions 824 or receives and executes the set of instructions 824 responsive to a propagated signal so that a device connected to network 830 communicates voice, video, audio, images, or any other data over network 830. Further, the sets of instructions 824 are transmitted or received over the network 830 via the communication port or interface 820, and/or using the bus 808.
  • the communication port or interface 820 is a part of the processor 802 or is a separate component.
  • the communication port or interface 820 is created in software or is a physical connection in hardware.
  • the communication port or interface 820 is configured to connect with the network 830, external media, display 810, or any other components in the computer system 800, or combinations thereof.
  • connection with network 830 is a physical connection, such as a wired Ethernet connection, or is established wirelessly as discussed below.
  • the additional connections with other components of the computer system 800 are physical connections or are established wirelessly.
  • Network 830 may alternatively be directly connected to the bus 808.
  • While the computer-readable medium 822 is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions.
  • the term “computer-readable medium” also includes any medium that is capable of storing, encoding, or carrying a set of instructions for execution by a processor or that causes a computer system to perform any one or more of the methods or operations disclosed herein.
  • the computer-readable medium 822 is non-transitory, and may be tangible.
  • the computer-readable medium 822 includes a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories.
  • the computer-readable medium 822 is a random-access memory or other volatile re-writable memory. Additionally or alternatively, the computer-readable medium 822 includes a magneto-optical or optical medium, such as a disk or tapes or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium.
  • a digital file attachment to an e-mail or other self-contained information archive or set of archives is considered a distribution medium that is a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions are stored.
  • dedicated hardware implementations, such as application-specific integrated circuits, programmable logic arrays, and other hardware devices, are constructed to implement one or more of the methods described herein.
  • Applications that include the apparatus and systems of various implementations broadly include a variety of electronic and computer systems.
  • One or more implementations described herein implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that are communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
  • Network 830 includes one or more wired or wireless networks.
  • the wireless network is a cellular telephone network, an 802.11, 802.16, 802.20, or WiMAX network.
  • networks include a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols.
  • Network 830 includes wide area networks (WAN), such as the Internet, local area networks (LAN), campus area networks, metropolitan area networks, a direct connection such as through a Universal Serial Bus (USB) port, or any other networks that allows for data communication.
  • Network 830 is configured to couple one computing device to another computing device to enable communication of data between the devices.
  • Network 830 is generally enabled to employ any form of machine-readable media for communicating information from one device to another.
  • Network 830 includes communication methods by which information travels between computing devices.
  • Network 830 is divided into sub-networks. The sub-networks allow access to all of the other components connected thereto or the sub-networks restrict access between the components.
  • Network 830 is regarded as a public or private network connection and includes, for example, a virtual private network or an encryption or other security mechanism employed over the public Internet, or the like.
  • the methods described herein are implemented by software programs executable by a computer system. Further, in an example, non-limiting implementation, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.
  • Standards for Internet and other packet-switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) represent examples of the state of the art.
  • Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed herein are considered equivalents thereof.
  • an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
  • The present disclosure furthermore relates to the following aspects.
  • Example 1 A computer-implemented method comprising: receiving, by one or more processors, data associated with one or more subjects; processing, by the one or more processors, the data to generate one or more three-dimensional models of the one or more subjects; generating, by the one or more processors, a presentation of the one or more three-dimensional models in a virtual environment; receiving, by the one or more processors, a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining, by the one or more processors, at least one action for the at least one selected three-dimensional model of the subject.
  • Example 2 The computer-implemented method of example 1, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors.
  • Example 3 The computer-implemented method of example 2, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying, by the one or more processors, computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing, by the one or more processors, the one or more three-dimensional models in a database.
  • Example 4 The computer-implemented method of example 3, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating, by the one or more processors, animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.
  • Example 5 The computer-implemented method of example 4, wherein the one or more animated three-dimensional models are configured to communicate with a device associated with a user, and wherein the at least one action is based on the communication between the one or more animated three-dimensional models and the user.
  • Example 6 The computer-implemented method of examples 1-5, further comprising: receiving, by the one or more processors, training data correlating the data associated with the one or more subjects to the one or more three-dimensional models and/or one or more animated three-dimensional models; and inputting, by the one or more processors, the training data to a machine learning model to configure the machine learning model to output the one or more three-dimensional models and/or the one or more animated three-dimensional models for the data associated with the one or more subjects.
  • Example 7 The computer-implemented method of examples 1-6, wherein the at least one action for the at least one selected three-dimensional model includes fostering or adopting the subject in the virtual environment.
  • Example 8 The computer-implemented method of example 7, wherein receiving the selection of the at least one three-dimensional model further comprises: receiving, by the one or more processors, a request to purchase a property in the virtual environment from a device associated with a user, wherein the request includes a transaction amount for the property; and superimposing, by the one or more processors, the at least one selected three-dimensional model on the purchased property.
  • Example 9 The computer-implemented method of example 8, further comprising: monitoring, by the one or more processors, condition of the at least one selected three-dimensional model on the purchased property; and generating, by the one or more processors, a real-time notification regarding the condition of the at least one selected three-dimensional model in the device associated with the user.
  • Example 10 The computer-implemented method of example 9, wherein fostering or adopting the subject in the virtual environment causes service providers to foster or adopt animals in real locations, and wherein the condition of the at least one selected three-dimensional model represents the condition of the animals in the real locations.
  • Example 11 A system comprising: one or more processors; a non-transitory computer readable medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving data associated with one or more subjects; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
  • Example 12 The system of example 11, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors.
  • Example 13 The system of example 12, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing the one or more three-dimensional models in a database.
  • Example 14 The system of example 13, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.
  • Example 15 The system of example 14, wherein the one or more animated three-dimensional models are configured to communicate with a device associated with a user, and wherein the at least one action is based on the communication between the one or more animated three-dimensional models and the user.
  • Example 16 The system of examples 11-15, further comprising: receiving training data correlating the data associated with the one or more subjects to the one or more three-dimensional models and/or one or more animated three-dimensional models; and inputting the training data to a machine learning model to configure the machine learning model to output the one or more three-dimensional models and/or the one or more animated three-dimensional models for the data associated with the one or more subjects.
  • Example 17 The system of examples 11-16, wherein receiving the selection of the at least one three-dimensional model further comprises: receiving a request to purchase a property in the virtual environment from a device associated with a user, wherein the request includes a transaction amount for the property; and superimposing the at least one selected three-dimensional model on the purchased property, wherein the at least one action for the at least one selected three-dimensional model includes fostering or adopting the subject in the virtual environment.
  • Example 18 A non-transitory computer readable medium, the non-transitory computer readable medium storing instructions which, when executed by one or more processors of a computing system, cause the one or more processors to perform operations, comprising: receiving data associated with one or more subjects, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
  • Example 19 The non-transitory computer readable medium of example 18, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing the one or more three-dimensional models in a database.
  • Example 20 The non-transitory computer readable medium of example 19, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.


Abstract

Systems and methods are disclosed for generating animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service. The method includes receiving data associated with the one or more subjects. The data are processed to generate one or more three-dimensional models of the one or more subjects. A presentation of the one or more three-dimensional models is generated in a virtual environment. At least one three-dimensional model of a subject is selected from the one or more three-dimensional models of the one or more subjects. An action is determined for the at least one selected three-dimensional model of the subject.

Description

SYSTEMS AND METHODS FOR GENERATING DIGITAL REPRESENTATION OF A SUBJECT FOR RENDERING A SERVICE
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims the benefit of priority to U.S. Provisional Application No. 63/481,555, filed on January 25, 2023, the entirety of which is incorporated herein by reference.
TECHNICAL FIELD
[0002] The present disclosure generally relates to video games and virtual worlds created by electronic means, and in particular to computer-enabled games that display character(s) in the virtual world for achieving an objective.
BACKGROUND
[0003] The lack of time to take care of pets, the cost of pet ownership (e.g., spending on pet food, veterinary care, etc.), and the health and age of the pets are some of the reasons pets are abandoned. In addition, the time-consuming and tedious application process discourages users from fostering or adopting pets; for example, users desiring to foster or adopt pets must inquire at individual shelters and, upon availability, travel to the shelter to meet the pets. The conventional systems are technically challenged in (i) optimizing the application process by utilizing data tracking and documenting capabilities, (ii) broadcasting live feeds of adoptable pets to prevent the users from wasting time, energy, and money traveling to the rescue shelters, and/or (iii) maintaining electronic medical records of the pets and facilitating the users looking to foster or adopt pets by providing access to such medical records. There is a need for a method that makes it easy for the users to take care of pets with simple actions, for example, a virtual world with various games that allow varied and interactive means for fostering or adopting real-life pets.
SUMMARY OF THE DISCLOSURE
[0004] The present disclosure solves the technical challenges typically encountered during the use of conventional systems while fostering or adopting pets, such as those discussed in this disclosure. Specifically, the present disclosure solves the technical challenges by providing a centralized system that generates animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service.
[0005] In some embodiments, a computer-implemented method includes: receiving, by one or more processors, data associated with one or more subjects; processing, by the one or more processors, the data to generate one or more three-dimensional models of the one or more subjects; generating, by the one or more processors, a presentation of the one or more three-dimensional models in a virtual environment; receiving, by the one or more processors, a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining, by the one or more processors, at least one action for the at least one selected three-dimensional model of the subject.
[0006] In some embodiments, a system includes: one or more processors; a non-transitory computer readable medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations including: receiving data associated with one or more subjects; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
[0007] In some embodiments, a non-transitory computer readable medium, the non-transitory computer readable medium storing instructions which, when executed by one or more processors of a computing system, cause the one or more processors to perform operations including: receiving data associated with one or more subjects, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
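The sequence of operations recited in these embodiments can be sketched as a minimal pipeline. All names below (Model3D, generate_models, and so on) are illustrative assumptions introduced for this sketch and do not appear in the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Model3D:
    subject_id: str
    mesh: dict = field(default_factory=dict)  # placeholder geometry

def generate_models(subject_data):
    # Step 2: process received data into three-dimensional models (stubbed)
    return [Model3D(subject_id=sid) for sid in subject_data]

def present(models):
    # Step 3: presentation in a virtual environment, stubbed as a lookup table
    return {m.subject_id: m for m in models}

def select(presentation, subject_id):
    # Step 4: the user selects one three-dimensional model
    return presentation[subject_id]

def determine_action(model):
    # Step 5: an action for the selected model, e.g. fostering
    return f"foster:{model.subject_id}"

data = {"dog-1": b"...", "cat-2": b"..."}   # Step 1: received subject data (stubbed)
models = generate_models(data)
chosen = select(present(models), "dog-1")
action = determine_action(chosen)
```

Each function body here is a placeholder for the corresponding module described later in the detailed description.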
[0008] It is to be understood that both the foregoing general description and the following detailed description are example and explanatory only and are not restrictive of the detailed embodiments, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate various example embodiments and together with the description, serve to explain the principles of the disclosed embodiments.
[0010] FIG. 1 is a diagram showing an example of a system for generating animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service, according to aspects of the disclosure.
[0011] FIG. 2 is a flowchart of a process for generating animated three-dimensional model(s) of a subject in a virtual environment for gamification of fostering or adoption, according to aspects of the disclosure.
[0012] FIG. 3 is a diagram that illustrates an interactive session between a user and a three-dimensional model (e.g., avatar) of an animal for rendering a service, according to one example embodiment.
[0013] FIG. 4 is a diagram that illustrates interactions between registered users in a virtual environment for rendering a service, according to one example embodiment.
[0014] FIG. 5 is a user interface diagram that illustrates the steps for fostering or adopting three-dimensional model(s) in a virtual environment, according to one example embodiment.
[0015] FIG. 6 is a user interface diagram that illustrates various stages of the pet care transactions web platform for fostering or adopting three-dimensional model(s) in a virtual environment, according to one example embodiment.
[0016] FIG. 7 shows an example machine learning training flow chart.
[0017] FIG. 8 illustrates an implementation of a computer system that executes techniques presented herein.
DETAILED DESCRIPTION OF EMBODIMENTS
[0018] While principles of the present disclosure are described herein with reference to illustrative embodiments for particular applications, it should be understood that the disclosure is not limited thereto. Those having ordinary skill in the art and access to the teachings provided herein will recognize that additional modifications, applications, embodiments, and substitutions of equivalents all fall within the scope of the embodiments described herein. Accordingly, the embodiments are not to be considered as limited by the foregoing description.
[0019] Various non-limiting embodiments of the present disclosure will now be described to provide an overall understanding of the principles of the structure, function, and use of systems and methods disclosed herein for generating animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service.
[0020] As discussed, conventional methods for fostering or adopting pets have been associated with person-to-person contact and/or required one’s presence in a particular physical location. The conventional techniques for fostering or adopting rescued animals are isolated, inefficient, and fail to provide a single destination for the data management needs of the users. For example, the conventional methods rely on the users manually inputting terms describing an animal they want to foster or adopt into a search; this process is inefficient because the users may not find detailed information about the animal (e.g., images, medical records, behavior histories or descriptions of pets’ personalities, and/or other types of animal-specific records). While inquiring about the animal, service providers (e.g., rescue shelters) may refuse to provide details about the animal until the application for fostering or adoption has been submitted and approved. Furthermore, while applying to foster or adopt an animal, the users may find the application process to be extensive and lengthy with strict requirements (e.g., background search, home checks, references, etc.). In one example, users may feel overwhelmed fostering a dog in their homes until it gets adopted because not every dog lover has the means to take a dog into their home, and the application for fostering or adoption may be rejected upon determining the user does not have a yard or works long hours. Such difficulties discourage users from fostering or adopting pets.
[0021] System 100 overcomes the technical shortcomings of the current technologies by providing methods and systems for fostering or adopting pets in a digital medium so that users can virtually foster or adopt real-life pets from any location. In one example, as the metaverse increases in popularity, digital representations of animals in the metaverse (e.g., three-dimensional avatars of the animals that encapsulate their appearance and/or mannerisms) may be utilized to make fostering or adopting real-life pets easier, entertaining, and efficient. The gamification of fostering or adopting pets, without having to worry about making time to care for the pets, spending on pet food or veterinary care, the health and age of the pets, or the tedious application process, motivates users to foster or adopt. Such a virtual platform for fostering or adopting real-life pets may intensify the attachment the users feel for their pets and may increase the likelihood of fostering or adopting additional real-life pets and reduce pet homelessness. For example, the system 100 provides real-time recommendations to the users based on the current conditions of the pets (e.g., health conditions, food requirements, veterinary expenses, etc.), and each user can make decisions for the upkeep of their pets based on the real-time recommendations while remaining closely connected to the pets. The present disclosure taps the potential of video games to bring about positive change toward fostering or adopting animals. The sophistication of modern game engines can not only reach vast audiences but can engage on a whole new interactive level.
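As one sketch of how the real-time recommendations described above might be derived from a pet's current condition, the following maps condition fields to upkeep suggestions. The field names and thresholds are illustrative assumptions for this sketch, not values from the disclosure:

```python
def recommend(condition):
    """Map a pet's reported condition to upkeep recommendations.

    `condition` is a dict of illustrative fields; the thresholds below
    are assumptions, not values specified by the system.
    """
    recs = []
    if condition.get("food_level", 1.0) < 0.25:
        recs.append("order pet food")
    if condition.get("days_since_checkup", 0) > 180:
        recs.append("schedule veterinary visit")
    if condition.get("balance", 0.0) < condition.get("monthly_cost", 0.0):
        recs.append("top up care donation")
    return recs or ["no action needed"]

# A pet low on food and overdue for a checkup triggers two recommendations
recs = recommend({"food_level": 0.1, "days_since_checkup": 200})
```

A production system would feed such recommendations from live sensor and service-provider data rather than a static dict.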
[0022] FIG. 1 is a diagram showing an example of a system for generating animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service, according to aspects of the disclosure. FIG. 1 includes the system 100 that comprises user equipment (UE) 101a-101n (collectively referred to as UE 101) that includes applications 103a-103n (collectively referred to as an application 103) and sensors 105a-105n (collectively referred to as a sensor 105), a communication network 107, a third-party data source(s) 109, an animation generation platform 111, and a database 123.
[0023] In one embodiment, the UE 101 includes, but is not restricted to, any type of mobile terminal, wireless terminal, fixed terminal, or portable terminal. Examples of the UE 101 include, but are not restricted to, a mobile handset, a wireless communication device, a station, a unit, a device, a multimedia computer (e.g., computer system 800), a multimedia tablet, an Internet node, a communicator, a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet computer, a Personal Communication System (PCS) device, a personal navigation device, a Personal Digital Assistant (PDA), a digital camera/camcorder, an infotainment system, a dashboard computer, a television device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. The UE 101 facilitates various input means for receiving information, including, but not restricted to, a touch screen capability, a keyboard and keypad data entry, a voice-based input mechanism, and the like. In addition, the UE 101 is configured with different features for generating, sharing, and viewing of visual content. Any known and future implementations of the UE 101 are also applicable.
[0024] In one embodiment, the application 103 includes various applications such as, but not restricted to, content provisioning applications, multimedia applications, media player applications, camera/imaging applications, notification services, software applications, networking applications, storage services, contextual information determination services, and the like. In one embodiment, one of the application 103 at the UE 101 acts as a client for the animation generation platform 111 and performs one or more functions associated with the functions of the animation generation platform 111 by interacting with the animation generation platform 111 over the communication network 107.
[0025] By way of example, each sensor 105 includes any type of sensor. In one embodiment, the sensors 105 include, for example, a camera/imaging sensor for gathering image data and/or video data, an audio recorder for gathering audio data, a network detection sensor for detecting wireless signals or receivers for different short-range communications (e.g., Bluetooth, Wi-Fi, Li-Fi, near field communication (NFC), etc.) from the communication network 107, a global positioning sensor for gathering location data, and the like.
[0026] In one embodiment, various elements of the system 100 communicate with each other through the communication network 107. The communication network 107 supports a variety of different communication protocols and communication techniques. In one embodiment, the communication network 107 allows the UE 101 and the third-party data source(s) 109 to communicate with the animation generation platform 111. The communication network 107 of the system 100 includes one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof. It is contemplated that the data network is any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network, and the like, or any combination thereof. In addition, the wireless network is, for example, a cellular communication network and employs various technologies including 5G (5th Generation), 4G, 3G, 2G, Long Term Evolution (LTE), wireless fidelity (Wi-Fi), Bluetooth®, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), vehicle controller area network (CAN bus), and the like, or any combination thereof.
[0027] In one embodiment, the third-party data source(s) 109 includes various databases (e.g., rescue center databases, veterinary hospitals databases, etc.) that store a plurality of data (e.g., image data, video data, sound recordings, behavioral data, and/or medical records) associated with one or more subjects (e.g., real-life animals) for transmission to participating entities (e.g., the animation generation platform 111 ). In one embodiment, the third-party data source(s) 109 includes various cloud storage services, video monitoring services (e.g., security camera, pet camera, etc.), and/or other sources of personal recordings associated with one or more subjects.
[0028] In one embodiment, the animation generation platform 111 is a platform with multiple interconnected components. The animation generation platform 111 includes one or more servers, intelligent networking devices, computing devices, components, and corresponding software for generating animated three-dimensional model(s) of one or more subjects in a virtual environment for rendering a service. As the metaverse increases in popularity, digital representation of rescued animals is valuable to make the adoption or fostering process more entertaining, popular, and efficient. The animation generation platform 111 generates three-dimensional models of the animals that encapsulate their appearance and/or mannerisms in a virtual environment, such as the metaverse.
[0029] In one example, the animation generation platform 111 enables users to foster or adopt animals via an end-to-end pet care transactions web platform. The animation generation platform 111 generates three-dimensional models of a plurality of animals (e.g., dogs, cats, etc.) that are available for adoption or foster care. The user may select at least one three-dimensional model that represents a rescued animal for fostering or adoption. The user may donate money for the maintenance of the selected three-dimensional model or purchase a virtual property (e.g., a virtual land) in the virtual environment to accommodate the selected three-dimensional model. Such actions by the user in the pet care transactions web platform result in the service providers taking care of the rescued animals in real locations (e.g., animal sanctuaries, animal adoption centers, animal shelters, veterinarian clinics, etc.). For example, a donation from the user for the upkeep of the three-dimensional model is utilized to take care of the rescued animal. The users may also host a meet and greet of the three-dimensional models on their property for other registered users. Such online interaction increases the likelihood of other registered users fostering or adopting the rescued animals (e.g., other registered users may select three-dimensional models of the rescued animals for fostering or adoption, and may accommodate them on their virtual land in the pet care transactions web platform).
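The property-purchase, model-placement, and donation flow described in this paragraph could be modeled, in outline, as follows. The class and method names are illustrative assumptions, not part of the disclosed platform:

```python
class PetCarePlatform:
    """Minimal sketch of the pet care transactions flow described above.

    All names and rules here (e.g., requiring a property before
    placement) are assumptions made for illustration.
    """

    def __init__(self):
        self.properties = {}   # user -> purchased virtual property
        self.placements = {}   # user -> model ids placed on that property
        self.donations = {}    # model id -> total donated for upkeep

    def purchase_property(self, user, amount):
        # Request includes a transaction amount for the virtual property
        self.properties[user] = {"price": amount}

    def place_model(self, user, model_id):
        # Superimpose a selected three-dimensional model on the property
        if user not in self.properties:
            raise ValueError("user must purchase a property first")
        self.placements.setdefault(user, []).append(model_id)

    def donate(self, model_id, amount):
        # Funds earmarked for the real animal's upkeep by the provider
        self.donations[model_id] = self.donations.get(model_id, 0) + amount

platform = PetCarePlatform()
platform.purchase_property("alice", 100)
platform.place_model("alice", "dog-1")
platform.donate("dog-1", 25)
```

In the disclosed system these operations would additionally trigger notifications to service providers caring for the corresponding real animals.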
[0030] In one embodiment, the animation generation platform 111 comprises a data collecting module 113, a learning module 115, a model creation engine 117, an animation engine 119, a presentation module 121, or any combination thereof. As used herein, terms such as “component” or “module” generally encompass hardware and/or software, e.g., software that a processor or the like uses to implement associated functionality. It is contemplated that the functions of these components are combined in one or more components or performed by other components of equivalent functionality.
[0031] In one embodiment, the data collecting module 113 collects, in real-time or near real-time, relevant data associated with one or more subjects (e.g., animals) through various data collection techniques. In one embodiment, the relevant data includes image data, video data, audio data, behavioral data, and/or medical records associated with the animals available for foster or adoption. The image data includes pictures and/or drawings that correspond to rescued animals. The medical data includes clinical data, genetic data, or diagnostic data associated with the rescued animals. The data collecting module 113 uses a web-crawling component to access various data sources (e.g., third-party data source(s) 109, database 123) to collect the relevant data. In one embodiment, the data collecting module 113 includes various software applications (e.g., data mining applications in Extensible Markup Language (XML)) that automatically search for and return relevant data associated with one or more subjects. In another embodiment, the data collecting module 113 collects images (e.g., images of animals) uploaded by the users via the user interface of their respective UE 101. In one instance, the data collecting module 113 also collects images and/or videos of the users (e.g., registered users) participating in the service program. The data collecting module 113 transmits the collected data to the learning module 115 for further processing.
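A standard-library sketch of the kind of extraction such a web-crawling component performs is shown below, pulling animal image URLs out of an already-fetched shelter listing page. The HTML structure is an assumed example, and a real crawler would also fetch pages over the network and handle many more record types:

```python
from html.parser import HTMLParser

class AnimalImageCollector(HTMLParser):
    """Collect image URLs from a shelter listing page.

    A stdlib-only stand-in for the web-crawling component; the page
    markup below is a made-up example for illustration.
    """

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

page = ('<html><body>'
        '<img src="/pets/dog-1.jpg"><img src="/pets/cat-2.jpg">'
        '</body></html>')
collector = AnimalImageCollector()
collector.feed(page)
```

The collected URLs would then be fetched and passed, with any video and audio data, to the learning module.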
[0032] In one embodiment, the learning module 115 analyzes the collected data (e.g., images and/or videos of the animals) to learn visual characteristics or appearance details of the animals. The learning module 115 utilizes a neural network model that applies style transfer or extraction techniques to alter a generic three-dimensional model to match the animal in the images and/or videos. In one example, the learning module 115 utilizes a generative adversarial network (GAN) to learn mappings from input images to output images, and also to learn a loss function to train this mapping. In one embodiment, the learning module 115 analyzes the collected data (e.g., identifies actions performed by the animals in the videos) to learn the behaviors of the animals, and generates a behavior model that represents the behavior of the animals. In one example, the learning module 115 performs semantic analysis of the collected data to extract or determine traits of the animals. In one embodiment, the learning module 115 analyzes the collected data (e.g., audio recordings) to learn auditory information associated with the animals to generate a sound model that represents the sound or vocal traits of the animals. In one instance, the learning module 115 also analyzes images and/or videos of the users to learn their visual characteristics or appearance details. The learning module 115 transmits the analyzed data to the model creation engine 117 for further processing.
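The semantic trait-extraction step mentioned above can be sketched, in grossly simplified form, as a keyword scan over free-text descriptions. The trait names and keyword lists are invented for illustration; the platform as described would use learned models (e.g., a GAN for appearance and a behavior model for actions) rather than fixed keyword rules.

```python
# Illustrative trait vocabulary; not from the specification.
TRAIT_KEYWORDS = {
    "playful": {"fetch", "plays", "toy", "energetic"},
    "calm": {"quiet", "naps", "gentle", "relaxed"},
    "vocal": {"barks", "howls", "meows", "talkative"},
}

def extract_traits(description):
    """Return the sorted traits whose keywords appear in the text."""
    words = set(description.lower().split())
    return sorted(t for t, kws in TRAIT_KEYWORDS.items() if words & kws)

traits = extract_traits("Gentle dog that barks at squirrels and naps a lot")
# -> ["calm", "vocal"]
```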
[0033] In one embodiment, the model creation engine 117 generates three-dimensional models based on visual characteristics or appearance details of the subjects (e.g., animals) in the images and/or videos. In one instance, the images are two-dimensional images of the subjects from different viewpoints. In one example, the model creation engine 117 utilizes a photogrammetry technique to analyze images for identifying common points. The identified points serve as reference markers that are utilized to calculate the distance and angle between different elements in the images to build three-dimensional models. In one embodiment, the model creation engine 117 modifies the three-dimensional models based on user inputs. In one example, the user may input instructions for modifying the three-dimensional models via the graphical user interface elements of the UE 101. The model creation engine 117 transmits the three-dimensional models to the animation engine 119. In one instance, the model creation engine 117 also generates three-dimensional models based on visual characteristics or appearance details of the users.
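The photogrammetry step above recovers 3D positions from matched points seen from different viewpoints. A minimal stand-in is rectified stereo triangulation, where depth follows from the disparity between the two views; the camera parameters below (focal length, baseline) are hypothetical and the general multi-view case is considerably more involved.

```python
def triangulate(x_left, x_right, y, focal, baseline, cx=0.0, cy=0.0):
    """Recover a 3D point from a feature matched across two
    horizontally offset, rectified camera views — a simplified
    stand-in for the multi-view photogrammetry described above.
    """
    disparity = x_left - x_right  # pixel shift between the two views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = focal * baseline / disparity   # depth from similar triangles
    x = (x_left - cx) * z / focal
    y3d = (y - cy) * z / focal
    return (x, y3d, z)

# A feature at pixel x=120 in the left image and x=100 in the right,
# with cameras 0.1 m apart and a 500-pixel focal length:
point = triangulate(120.0, 100.0, 50.0, focal=500.0, baseline=0.1)
# depth z = 500 * 0.1 / 20 = 2.5 metres
```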
[0034] In one embodiment, the animation engine 119 generates animation for three-dimensional models to move in a manner that approximates specific movements performed by the real-life animal in the collected videos. In one example, the animation engine 119 utilizes deep neural networks or other machine learning models for determining kinematic data, skeletal movements, or similar information from the videos of real-life animals to generate animation that can be applied to three-dimensional models. In one embodiment, the animation engine 119 creates a virtual environment (e.g., a virtual rescue shelter, a virtual island, etc.) that is populated by the animated three-dimensional models; such a virtual environment may be based on real locations (e.g., a rescue shelter, a veterinary clinic, etc.). In one example, a user can sponsor one or more animated three-dimensional models at a virtual rescue shelter and pay for their expenses (e.g., food, shelter, medical, etc.). In another example, the user can purchase land on the virtual island, select animated three-dimensional models for fostering or adoption, and accommodate the selected animated three-dimensional models on the purchased land. Such actions performed by the users in the virtual world of video games are reflected in the real world; for example, by sponsoring an animated three-dimensional model at a virtual rescue shelter, the user supports a real-life animal at an actual rescue shelter.
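One building block of skeletal animation derived from video is in-betweening: interpolating joint angles between keyframe poses. The sketch below assumes a flat dict of joint angles and plain linear interpolation; production animation would blend quaternions over a full skeleton using the learned kinematic data described above.

```python
def lerp_pose(pose_a, pose_b, t):
    """Linearly interpolate joint angles between two keyframe poses.

    Poses are dicts of joint name -> angle in degrees (an
    illustrative representation, not the platform's actual format).
    """
    return {j: (1 - t) * pose_a[j] + t * pose_b[j] for j in pose_a}

# Two hypothetical keyframes for a dog rising from a sit:
sit = {"hip": 90.0, "knee": 120.0}
stand = {"hip": 0.0, "knee": 0.0}
frames = [lerp_pose(sit, stand, t / 4) for t in range(5)]
# frames[2]["hip"] -> 45.0 (halfway between sitting and standing)
```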
[0035] In one embodiment, the presentation module 121 enables display of the virtual environment and the animated three-dimensional models in the UE 101. In one example, the presentation module 121 is configured to operate in connection with augmented reality (AR) processing techniques, wherein the virtual environment, the animated three-dimensional models, graphic elements, and various applications interact. The presentation module 121 also comprises a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. In one example, the presentation module 121 generates real-time notifications regarding conditions of the animated three-dimensional models (e.g., health, food requirements, water requirements, etc.) in the UE 101. The user may provide inputs for the maintenance of the animated three-dimensional models (e.g., instructions to take the virtual pet to a veterinarian by providing financial support). The presentation module 121 updates the display of the virtual environment and the animated three-dimensional models based on the user input. In one embodiment, the presentation module 121 employs various application programming interfaces (APIs) or other function calls corresponding to the application 103 on the UE 101, thus enabling the display of graphics primitives such as menus, buttons, data entry fields, etc. The presentation module 121 also causes guidance information to be presented, including, at least in part, one or more annotations, audio messages, video messages, or a combination thereof, in the UE 101 to guide the users.
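The real-time condition notifications described above can be sketched as a threshold check over the tracked state of a virtual pet. The condition names and thresholds are illustrative; the specification mentions health, food, and water as examples of monitored conditions.

```python
def condition_alerts(model_state, thresholds):
    """Return user-facing notifications for any tracked condition of
    an animated three-dimensional model that falls below its minimum.
    """
    return [
        f"{name} is low ({model_state[name]}); attention needed"
        for name, minimum in thresholds.items()
        if model_state.get(name, 0) < minimum
    ]

# Hypothetical state and thresholds for one virtual pet:
state = {"health": 80, "food": 10, "water": 55}
alerts = condition_alerts(state, {"health": 50, "food": 30, "water": 30})
# -> ["food is low (10); attention needed"]
```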
[0036] The above presented modules and components of the animation generation platform 111 are implemented in hardware, firmware, software, or a combination thereof. Though depicted as a separate entity in FIG. 1 , it is contemplated that the animation generation platform 111 is also implemented for direct operation by the respective UE 101. As such, the animation generation platform 111 generates direct signal inputs by way of the operating system of the UE 101. In another embodiment, one or more of the modules 113-121 are implemented for operation by the respective UEs, as the animation generation platform 111. The various executions presented herein contemplate any and all arrangements and models.
[0037] The database 123 is any type of database, such as relational, hierarchical, object-oriented, and/or the like, wherein data are organized in any suitable manner, including data tables or lookup tables. In one embodiment, the database 123 accesses various data sources, stores content associated with the subjects (e.g., animals available for fostering or adoption), and manages multiple types of information that provide means for aiding in the content provisioning and sharing process. In one example, the database 123 stores three-dimensional models of the subject(s) generated by the model creation engine 117 and/or the animations generated by the animation engine 119. It is understood that any other suitable data may be included in the database 123. In another embodiment, the database 123 includes a machine learning based training database with a pre-defined mapping defining a relationship between various input parameters and output parameters based on various statistical methods. For example, the training database includes machine learning algorithms to learn mappings between input parameters related to the subject(s). The training database is routinely updated and/or supplemented based on the machine learning methods.
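The training database's pre-defined mapping between input parameters and output parameters can be pictured as a store of observed pairs queried by similarity. The sketch below uses nearest-neighbour lookup purely for illustration; the names, the distance metric, and the flat parameter scheme are all assumptions, and the specification contemplates learned statistical methods instead.

```python
class MappingStore:
    """Toy stand-in for the training database's input->output mapping."""

    def __init__(self):
        self.pairs = []  # list of (input parameters, output) pairs

    def add(self, params, output):
        self.pairs.append((params, output))

    def lookup(self, params):
        # Nearest neighbour by squared distance over the query's keys.
        def dist(stored):
            return sum((stored[k] - params[k]) ** 2 for k in params)
        stored, output = min(self.pairs, key=lambda p: dist(p[0]))
        return output

store = MappingStore()
store.add({"size": 1.0, "age": 2.0}, "small-young-model")
store.add({"size": 9.0, "age": 8.0}, "large-senior-model")
result = store.lookup({"size": 8.5, "age": 7.0})
# -> "large-senior-model"
```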
[0038] By way of example, the UE 101 , the third party data source(s) 109, and the animation generation platform 111 communicate with each other and other components of the communication network 107 using well known, new or still developing protocols. In this context, a protocol includes a set of rules defining how the network nodes within the communication network 107 interact with each other based on information sent over the communication links. The protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information. The conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model.
[0039] Communications between the network nodes are typically effected by exchanging discrete packets of data. Each packet typically comprises (1) header information associated with a particular protocol, and (2) payload information that follows the header information and contains information that may be processed independently of that particular protocol. In some protocols, the packet includes (3) trailer information following the payload and indicating the end of the payload information. The header includes information such as the source of the packet, its destination, the length of the payload, and other properties used by the protocol. Often, the data in the payload for the particular protocol includes a header and payload for a different protocol associated with a different, higher layer of the OSI Reference Model. The header for a particular protocol typically indicates a type for the next protocol contained in its payload. The higher layer protocol is said to be encapsulated in the lower layer protocol. The headers included in a packet traversing multiple heterogeneous networks, such as the Internet, typically include a physical (layer 1) header, a data-link (layer 2) header, an internetwork (layer 3) header and a transport (layer 4) header, and various application (layer 5, layer 6 and layer 7) headers as defined by the OSI Reference Model.
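The header/payload/trailer structure and the encapsulation of a higher-layer packet inside a lower-layer payload can be sketched as below. The field layout (a two-byte protocol id, a two-byte length, and a fixed trailer marker) is invented for illustration and matches no real protocol.

```python
import struct

def build_packet(protocol_id, payload):
    """Frame a payload with a minimal header and trailer — the
    three-part packet structure described above."""
    header = struct.pack("!HH", protocol_id, len(payload))  # id + length
    trailer = b"\xff\xff"  # end-of-payload marker
    return header + payload + trailer

def parse_packet(packet):
    """Read the header back and slice out the payload."""
    protocol_id, length = struct.unpack("!HH", packet[:4])
    return protocol_id, packet[4:4 + length]

# Encapsulation: a higher-layer packet rides as the payload of a
# lower-layer one, mirroring the OSI layering described above.
inner = build_packet(7, b"hello")
outer = build_packet(1, inner)
pid, recovered = parse_packet(outer)
# pid -> 1; parse_packet(recovered) -> (7, b"hello")
```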
[0040] FIG. 2 is a flowchart of a process for generating animated three-dimensional model(s) of a subject in a virtual environment for gamification of fostering or adoption, according to aspects of the disclosure. In various embodiments, the animation generation platform 111 and/or any of the modules 113-121 performs one or more portions of the process 200 and are implemented using, for instance, a chip set including a processor and a memory as shown in FIG. 8. As such, the animation generation platform 111 and/or any of modules 113-121 provide means for accomplishing various parts of the process 200, as well as means for accomplishing embodiments of other processes described herein in conjunction with other components of the system 100. Although the process 200 is illustrated and described as a sequence of steps, it is contemplated that various embodiments of the process 200 are performed in any order or combination and need not include all of the illustrated steps.
[0041] In step 201, the animation generation platform 111 receives, via processor 802 (which may include one or more processors), data associated with the subject(s) (e.g., animals available for fostering or adoption). In one instance, the data includes images, videos, and/or audio recordings of the subject(s) captured by various sensors (e.g., sensor 105). It is understood that the data may include any other relevant data associated with the subject(s).
[0042] In step 203, the animation generation platform 111 processes, via processor 802, the data to generate three-dimensional models of the subject(s). In one embodiment, the animation generation platform 111 applies computer vision techniques (e.g., a neural network model or a classification model) to the images and/or videos to learn the visual characteristics of the subject(s) for generating the three-dimensional model(s). The animation generation platform 111 also applies a photogrammetry technique to analyze the image(s) for identifying common points as reference markers that are utilized to calculate the distance and angle between different elements in the images to build the three-dimensional model(s). The three-dimensional models are then stored in database 123.
[0043] In step 205, the animation generation platform 111 generates, via processor 802, a presentation of the one or more three-dimensional models in a virtual environment. In one instance, the virtual environment represents real locations (e.g., animal sanctuaries, animal adoption centers, animal shelters, etc.). In one embodiment, the animation generation platform 111 generates animations for the three-dimensional model(s) in the virtual environment. The animated three-dimensional model(s) executes specific movements performed by the animal(s) in the video(s). The animated three- dimensional model(s) is also configured to communicate with the UE 101 associated with a user, and an action is determined for the animated three-dimensional model(s) based on the communication. The animated three-dimensional models are then stored in database 123.
[0044] In one embodiment, the animation generation platform 111 receives training data correlating the data associated with the subject(s) to the three-dimensional model(s) and/or the animated three-dimensional model(s). The animation generation platform 111 inputs the training data to a machine learning model to configure the machine learning model to output the three-dimensional model(s) and/or the animated three-dimensional model(s) for the data associated with the subject(s).
[0045] In step 207, the animation generation platform 111 receives, via processor 802, a selection of a three-dimensional model from the plurality of three-dimensional models of one or more subjects. In one instance, the animation generation platform 111 receives a request to purchase a property in the virtual environment (e.g., a virtual land in a virtual rescue shelter) from the UE 101 associated with the user, wherein the request includes a transaction amount for the property. The animation generation platform 111 superimposes the selected three-dimensional model on the purchased property for fostering or adoption. The animation generation platform 111 continuously monitors the condition of the selected three-dimensional model on the purchased property, and generates real-time notifications regarding the condition of the selected three-dimensional model in the UE 101 associated with the user. In one instance, the condition of the selected three-dimensional model represents the condition of an animal in real locations (e.g., a dog in a rescue shelter).
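The step 207 flow — deducting the transaction amount for a virtual property and superimposing the selected model on it — can be sketched as below. The flat-price scheme, the balance check, and all names are illustrative assumptions, not the platform's actual transaction logic.

```python
def place_model(user_balance, property_price, model_id, environment):
    """Deduct the property's transaction amount and superimpose the
    selected three-dimensional model on the purchased property."""
    if user_balance < property_price:
        raise ValueError("insufficient funds for the property")
    environment.setdefault("properties", []).append(model_id)
    return user_balance - property_price

# Hypothetical purchase: 40 units for a plot in a virtual shelter.
env = {}
remaining = place_model(100, 40, "model-dog-7", env)
# remaining -> 60; env["properties"] -> ["model-dog-7"]
```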
[0046] In step 209, the animation generation platform 111 determines, via processor 802, an action for the at least one selected three-dimensional model of the subject. The action includes fostering or adopting the subject (e.g., pets) in the virtual environment, such fostering or adopting the subject in the virtual environment cause service providers (e.g., rescue shelters, animal hostels, animal foster homes, veterinary hospitals, etc.) to take care of the animals in real locations. In one instance, the action involves the user sponsoring the subject(s) by paying for their expenses, such payments is utilized by the service providers to take care of the animals in real locations.
[0047] FIG. 3 is a diagram that illustrates an interactive session between a user and a three-dimensional model (e.g., avatar) of an animal for rendering a service, according to one example embodiment. In one example, the animation generation platform 111 generates a display 300 (e.g., pet care transactions web platform) in a user interface of the UE 101 associated with the user. The display 300 includes three-dimensional models 301 and 303 that represent the user and the animal that may be fostered or adopted by the user, respectively. The three-dimensional models 301 and 303 are realistic reproductions of the user and the animal (e.g., lifelike) or some fanciful alter egos (e.g., cartoons). In one instance, the animation generation platform 111 utilizes conversational artificial intelligence (AI) for creating human-like interactions and conversations between the three-dimensional models 301 and 303 (e.g., chatbots 305 to answer questions and provide support, or generative AI). Conversational AI uses a combination of natural language processing (NLP), foundation models, and machine learning (ML) to understand and process human language. Accordingly, the three-dimensional models understand natural language input, maintain context for coherent conversations, and provide contextually relevant responses for engaging the users in dynamic and interactive dialogues. The animation generation platform 111 is constantly learning from such interactions and improving response quality over time.
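A drastically simplified stand-in for the conversational session above is a rule-based chatbot that keeps a short history and answers from a keyword table. The response table is invented for illustration; the platform as described would use NLP and foundation models rather than keyword rules.

```python
# Hypothetical canned answers; a real system would generate these.
RESPONSES = {
    "breed": "This pup is a beagle mix from the downtown shelter.",
    "age": "She is about two years old.",
    "adopt": "You can start the adoption from the foster page.",
}

class ChatSession:
    def __init__(self):
        self.history = []  # kept so later replies can use prior turns

    def reply(self, message):
        self.history.append(message)
        for keyword, answer in RESPONSES.items():
            if keyword in message.lower():
                return answer
        return "Could you tell me more about what you'd like to know?"

session = ChatSession()
answer = session.reply("What breed is she?")
# -> "This pup is a beagle mix from the downtown shelter."
```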
[0048] Though FIG. 3 depicts a single user interacting with a three-dimensional model 303, it should be understood that multiple users may interact with a plurality of three-dimensional models to foster or adopt. Such a virtual approach enables users to foster or adopt rescued animals anywhere in the world. In this example, user Bruce (represented by the three-dimensional model 301) is located in New York, and he may choose to virtually adopt or foster a dog (e.g., the three-dimensional model 303) in a virtual environment (e.g., pet care transactions web platform); such action by user Bruce results in service providers (e.g., rescue shelters) fostering or adopting rescued animals in real locations.
[0049] FIG. 4 is a diagram that illustrates interactions between registered users in a virtual environment for rendering a service, according to one example embodiment. In one example, the animation generation platform 111 generates a display 400 (e.g., pet care transactions web platform) in a user interface of the UE 101 associated with the user. The display 400 includes a three-dimensional model 401 of an animal that may be fostered or adopted by the user. The display 400 also includes annotations, audio messages, and/or video messages to guide the users. Though FIG. 4 depicts a single three-dimensional model 401, it should be understood that a plurality of three-dimensional models of animals available for fostering or adoption may be presented in the display 400.
[0050] In one instance, the display 400 is a pet care transactions web platform for adopting or fostering animals. The three-dimensional model 401 is a non-player character (NPC) based on real rescued animals that are available for fostering or adoption. In one instance, the animation generation platform 111 utilizes a trained machine-learning model for generating the three-dimensional models; for example, the trained machine-learning model learns associations between images, videos, audio, three-dimensional models, and/or animated three-dimensional models. The three-dimensional model 401 performs mannerisms similar to those of the animal it corresponds to (e.g., sitting style, barking tonalities, running style, jumping style, walking style, playing style, etc.). The three-dimensional model 401 performs habits unique to the animal it corresponds to, such as tricks taught to the animal.
[0051] In this example, the display 400 includes annotation 403 that informs the users to donate money for maintenance of the three-dimensional model 401, via the pet care transactions web platform, so that a rescued animal is saved in real life. The user may donate money or purchase a virtual property (e.g., a virtual land 405) in the virtual environment to accommodate the three-dimensional model 401. The user can foster the three-dimensional model 401 on the virtual property and host a meet and greet on the property for other registered users. The registered users may interact in the metaverse using virtual reality and/or augmented reality technologies. The other registered users may donate for the upkeep or adopt the three-dimensional model 401 and accommodate it on their virtual land 407 in the virtual environment. Such actions of fostering or adopting three-dimensional models in the virtual environment result in the fostering or adoption of real rescued animals. The animation generation platform 111 generates a digital token 409 indicating the contribution of the users (e.g., foster care providers, adopters, donors, etc.) in fostering or adopting rescued animals. The users may share the digital token 409 and their experience on social media.
[0052] FIG. 5 is a user interface diagram that illustrates the steps for fostering or adopting three-dimensional model(s) in a virtual environment, according to one example embodiment. In one example, the animation generation platform 111 generates a display 500 (e.g., pet care transactions web platform) in a user interface of the UE 101 associated with the user. In step 501, a user (e.g., a registered user) logs into the pet care transactions web platform by entering his/her credential information via his/her UE 101. Upon authentication, the user may search for rescue animals (e.g., a dog, a cat, or any other animals) looking for a foster home. In one instance, the animation generation platform 111 processes historical information of the user to determine their preferences and may recommend one or more three-dimensional model(s) that represent the pets (e.g., specific breeds of dogs or cats) that the user likes. [0053] In step 503, the user selects a three-dimensional model of his/her choice, whereupon the three-dimensional model is downloaded for fostering in the virtual environment. In step 505, the user purchases land in the virtual environment of the pet care transactions web platform, and the selected three-dimensional model is uploaded and fostered on the purchased land. As discussed, such fostering of the three-dimensional model causes the fostering of real rescued animals. In step 507, the user receives digital tokens or non-fungible tokens (NFTs) for his/her contribution to the fostering or adoption of rescued animals. The user may share the digital tokens, the NFTs, and his/her experience on social media (step 509).
[0054] In one instance, other registered users (e.g., an adopter or a donor) log in to the pet care transactions web platform by entering their credential information via their respective UE 101. Upon authentication, the other registered users looking to adopt a pet may interact with the user fostering the three-dimensional model on the purchased land. In step 511, the user hosts a meet and greet on the property for the other registered users. The other registered users interact with the three-dimensional model and receive relevant information that helps them in determining whether or not to adopt (e.g., breed of the animal, location of the animal, age of the animal, health records of the animal, etc.). In step 517, the other registered users may donate for the upkeep of the animal or adopt the three-dimensional model 401 and accommodate it on their virtual land in the virtual environment. Such adoption of the three-dimensional model in the virtual environment results in the adoption of real rescued animals. In step 519, the other registered users may redeem the experience for digital tokens or NFTs. In step 521, the other registered users may share the digital tokens, the NFTs, and their experience on social media.
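The fostering/adoption workflow of FIG. 5 can be sketched as a small state machine. The state names and transitions below are an illustrative reading of the steps above, not a normative description of the platform.

```python
# Hypothetical states and actions distilled from the FIG. 5 steps.
TRANSITIONS = {
    ("available", "select"): "selected",            # step 503
    ("selected", "purchase_land"): "fostered",      # step 505
    ("fostered", "meet_and_greet"): "fostered",     # step 511
    ("fostered", "adopt"): "adopted",               # step 517
}

def advance(state, action):
    """Apply an action to the current state, rejecting invalid moves."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"cannot {action!r} while {state!r}")

state = "available"
for action in ["select", "purchase_land", "meet_and_greet", "adopt"]:
    state = advance(state, action)
# state -> "adopted"
```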
[0055] FIG. 6 is a user interface diagram that illustrates various stages of the pet care transactions web platform for fostering or adopting three-dimensional model(s) in a virtual environment, according to one example embodiment. In one example, the first stage includes the animation generation platform 111 generating a display of a homepage of the pet care transactions web platform in a user interface of the UE 101 associated with the user. The user logs into the web platform and is navigated to pages 601, 603, and 605, which provide information relating to fostering or adopting three-dimensional model(s) in a virtual environment that causes fostering or adoption of the real rescued animals. Once the user finishes glancing through the pages, the user is navigated to the main page. The system determines that the user has completed reading through the pages based on sensor data (e.g., touch detection sensors indicating the user wishes to move to the next page, gaze detection sensors that detect eye movements of the user, etc.).
[0056] In the second stage, the user is navigated to pages 607, 609, and 611. The user searches for rescued animals or is provided with recommendations on the three-dimensional model(s) that represent the pets (e.g., specific breeds of dogs or cats) that the user likes based on his/her historical information and/or preference information. The user selects a three-dimensional model of his/her choice, whereupon the user may either foster the selected three-dimensional model (e.g., donating money utilizing virtual currency (e.g., cryptocurrency 613) or credit cards 615) or engage in an interaction with the three-dimensional model. Such donation by the user is utilized for fighting pet homelessness or combating barriers to adoption. The donation is also used to support service providers (e.g., rescue shelters, veterinary clinics, etc.) registered with the pet care transactions web platform.
[0057] In the third stage, the user is navigated to pages 617, 619, and 621 for interacting with the three-dimensional model. In one example, the user personalizes the three-dimensional model by modifying its visual appearances, color, clothes, and/or voice. In one example, the user engages in real-time communication with the three-dimensional model to understand the real rescued animal he/she is fostering or adopting (e.g., name of the dog, breed of the dog, location of the dog, animal shelter that is fostering the dog, health-related information, etc.). The three-dimensional model also stores relevant information associated with the user during the conversation (e.g., date of birth, appointments, etc.), and timely reminds the user about the upcoming appointments. The three-dimensional model also acts as a friend by wishing the user a happy birthday. In one example, the user while interacting with the three-dimensional model may play various video games (e.g., go on a quest together, solve a puzzle together, compete against each other, etc.). Pursuant to the interaction, the user may choose to foster the three-dimensional model. The user may donate money utilizing virtual currency (e.g., cryptocurrency 613) or credit cards 615 to purchase land in the virtual environment and foster the three-dimensional model on the purchased land. The user hosts a meet and greet on the property for the other registered users. The other registered users interact with the three-dimensional model and may either donate or adopt and accommodate the three-dimensional model on their virtual land. Such fostering or adoption of the three-dimensional model in the virtual environment results in the fostering or adoption of real rescued animals.
[0058] In one instance, the user and the other registered users may redeem their experience for digital tokens or NFTs (e.g., 623, 625, and 627); such digital tokens or NFTs include pictures of the users and/or the pets they are fostering or adopting. The user and the other registered users may share their digital tokens, NFTs, and experience on social media.
[0059] One or more implementations disclosed herein include and/or are implemented using a machine learning model. For example, one or more of the modules of the animation generation platform 111 are implemented using a machine learning model and/or are used to train the machine learning model. A given machine learning model is trained using the training flow chart 700 of FIG. 7. Training data 712 includes one or more of stage inputs 714 and known outcomes 718 related to the machine learning model to be trained. Stage inputs 714 are from any applicable source including text, visual representations, data, values, comparisons, and stage outputs, e.g., one or more outputs from one or more steps from FIG. 2. The known outcomes 718 are included for the machine learning models generated based on supervised or semi-supervised training. An unsupervised machine learning model is not trained using known outcomes 718. Known outcomes 718 include known or desired outputs for future inputs similar to or in the same category as stage inputs 714 that do not have corresponding known outputs.
[0060] The training data 712 and a training algorithm 720, e.g., one or more of the modules implemented using the machine learning model and/or used to train the machine learning model, are provided to a training component 730 that applies the training data 712 to the training algorithm 720 to generate the machine learning model. According to an implementation, the training component 730 is provided comparison results 716 that compare a previous output of the corresponding machine learning model to apply the previous result to re-train the machine learning model. The comparison results 716 are used by training component 730 to update the corresponding machine learning model. The training algorithm 720 utilizes machine learning networks and/or models including, but not limited to, a deep learning network such as Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Fully Convolutional Networks (FCN) and Recurrent Neural Networks (RNN), probabilistic models such as Bayesian Networks and Graphical Models, classifiers such as K-Nearest Neighbors, and/or discriminative models such as Decision Forests and maximum margin methods, the model specifically discussed herein, or the like.
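The flow of FIG. 7 — apply training data to an algorithm, compare previous outputs with known outcomes, and use the comparison results to re-train — can be sketched with a deliberately tiny model: a single decision threshold. The model, learning rate, and data are all illustrative; nothing about this toy corresponds to the platform's actual networks.

```python
import random

def train_threshold(data, epochs=50, lr=0.1, seed=0):
    """Fit a one-parameter threshold classifier from (input, label)
    pairs, nudging the threshold by each comparison result."""
    random.seed(seed)
    threshold = random.random()
    for _ in range(epochs):
        for x, label in data:          # stage inputs with known outcomes
            prediction = 1 if x > threshold else 0
            error = label - prediction  # the "comparison result"
            threshold -= lr * error     # re-train from the comparison
    return threshold

# Inputs above 0.5 are labelled 1, below are labelled 0.
data = [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
t = train_threshold(data)
# t settles between the two classes (roughly 0.64 with this seed)
```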
[0061] The machine learning model used herein is trained and/or used by adjusting one or more weights and/or one or more layers of the machine learning model. For example, during training, a given weight is adjusted (e.g., increased, decreased, removed) based on training data or input data. Similarly, a layer is updated, added, or removed based on training data and/or input data. The resulting outputs are adjusted based on the adjusted weights and/or layers.
[0062] In general, any process or operation discussed in this disclosure is understood to be computer-implementable, such that the process illustrated in FIG. 2 is performed by one or more processors of a computer system as described herein. A process or process step performed by one or more processors is also referred to as an operation. The one or more processors are configured to perform such processes by having access to instructions (e.g., software or computer-readable code) that, when executed by one or more processors, cause one or more processors to perform the processes. The instructions are stored in a memory of the computer system. A processor is a central processing unit (CPU), a graphics processing unit (GPU), or any suitable type of processing unit.
[0063] A computer system, such as a system or device implementing a process or operation in the examples above, includes one or more computing devices. One or more processors of a computer system are included in a single computing device or distributed among a plurality of computing devices. One or more processors of a computer system are connected to a data storage device. A memory of the computer system includes the respective memory of each computing device of the plurality of computing devices.
[0064] FIG. 8 illustrates an implementation of a computer system that executes techniques presented herein. The computer system 800 includes a set of instructions that are executed to cause the computer system 800 to perform any one or more of the methods or computer based functions disclosed herein. The computer system 800 operates as a standalone device or is connected, e.g., using a network, to other computer systems or peripheral devices.
[0065] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “analyzing,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.
[0066] In a similar manner, the term "processor" refers to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., is stored in registers and/or memory. A “computer,” a “computing machine,” a "computing platform," a “computing device,” or a “server” includes one or more processors.
[0067] In a networked deployment, the computer system 800 operates in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 800 is also implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a landline telephone, a control system, a camera, a scanner, a facsimile machine, a printer, a pager, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. In a particular implementation, the computer system 800 is implemented using electronic devices that provide voice, video, or data communication. Further, while the computer system 800 is illustrated as a single system, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
[0068] As illustrated in FIG. 8, the computer system 800 includes a processor 802, e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor 802 is a component in a variety of systems. For example, the processor 802 is part of a standard personal computer or a workstation. The processor 802 is one or more processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data. The processor 802 implements a software program, such as code generated manually (i.e., programmed).
[0069] The computer system 800 includes a memory 804 that communicates via a bus 808. Memory 804 is a main memory, a static memory, or a dynamic memory. Memory 804 includes, but is not limited to, computer-readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media, and the like. In one implementation, the memory 804 includes a cache or random-access memory for the processor 802. In alternative implementations, the memory 804 is separate from the processor 802, such as a cache memory of a processor, the system memory, or other memory. Memory 804 is an external storage device or database for storing data. Examples include a hard drive, compact disc (“CD”), digital video disc (“DVD”), memory card, memory stick, floppy disc, universal serial bus (“USB”) memory device, or any other device operative to store data. The memory 804 is operable to store instructions executable by the processor 802. The functions, acts, or tasks illustrated in the figures or described herein are performed by processor 802 executing the instructions stored in memory 804. The functions, acts, or tasks are independent of the particular type of instruction set, storage media, processor, or processing strategy and are performed by software, hardware, integrated circuits, firmware, micro-code, and the like, operating alone or in combination. Likewise, processing strategies include multiprocessing, multitasking, parallel processing, and the like.
[0070] As shown, the computer system 800 further includes a display 810, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information. The display 810 acts as an interface for the user to see the functioning of the processor 802, or specifically as an interface with the software stored in the memory 804 or in the drive unit 806.
[0071] Additionally or alternatively, the computer system 800 includes an input/output device 812 configured to allow a user to interact with any of the components of the computer system 800. The input/output device 812 is a number pad, a keyboard, a cursor control device, such as a mouse, a joystick, touch screen display, remote control, or any other device operative to interact with the computer system 800.

[0072] The computer system 800 also includes the drive unit 806 implemented as a disk or optical drive. The drive unit 806 includes a computer-readable medium 822 in which one or more sets of instructions 824, e.g., software, are embedded. Further, the sets of instructions 824 embody one or more of the methods or logic as described herein. Instructions 824 reside completely or partially within memory 804 and/or within processor 802 during execution by the computer system 800. The memory 804 and the processor 802 also include computer-readable media as discussed above.
[0073] In some systems, computer-readable medium 822 includes the set of instructions 824 or receives and executes the set of instructions 824 responsive to a propagated signal so that a device connected to network 830 communicates voice, video, audio, images, or any other data over network 830. Further, the sets of instructions 824 are transmitted or received over the network 830 via the communication port or interface 820, and/or using the bus 808. The communication port or interface 820 is a part of the processor 802 or is a separate component. The communication port or interface 820 is created in software or is a physical connection in hardware. The communication port or interface 820 is configured to connect with the network 830, external media, display 810, or any other components in the computer system 800, or combinations thereof. The connection with network 830 is a physical connection, such as a wired Ethernet connection, or is established wirelessly as discussed below. Likewise, the additional connections with other components of the computer system 800 are physical connections or are established wirelessly. Network 830 may alternatively be directly connected to the bus 808.
[0074] While the computer-readable medium 822 is shown to be a single medium, the term "computer-readable medium" includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term "computer-readable medium" also includes any medium that is capable of storing, encoding, or carrying a set of instructions for execution by a processor or that causes a computer system to perform any one or more of the methods or operations disclosed herein. The computer-readable medium 822 is non-transitory, and may be tangible.
[0075] The computer-readable medium 822 includes a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. The computer-readable medium 822 is a random-access memory or other volatile re-writable memory. Additionally or alternatively, the computer-readable medium 822 includes a magneto-optical or optical medium, such as a disk or tape, or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium. A digital file attachment to an e-mail or other self-contained information archive or set of archives is considered a distribution medium that is a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions are stored.
[0076] In an alternative implementation, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays, and other hardware devices, are constructed to implement one or more of the methods described herein. Applications that include the apparatus and systems of various implementations broadly include a variety of electronic and computer systems. One or more implementations described herein implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that are communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
[0077] Computer system 800 is connected to network 830. Network 830 defines one or more networks including wired or wireless networks. The wireless network is a cellular telephone network, an 802.11, 802.16, 802.20, or WiMAX network. Further, such networks include a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols. Network 830 includes wide area networks (WAN), such as the Internet, local area networks (LAN), campus area networks, metropolitan area networks, a direct connection such as through a Universal Serial Bus (USB) port, or any other networks that allow for data communication. Network 830 is configured to couple one computing device to another computing device to enable communication of data between the devices. Network 830 is generally enabled to employ any form of machine-readable media for communicating information from one device to another. Network 830 includes communication methods by which information travels between computing devices. Network 830 is divided into sub-networks. The sub-networks allow access to all of the other components connected thereto or the sub-networks restrict access between the components. Network 830 is regarded as a public or private network connection and includes, for example, a virtual private network or an encryption or other security mechanism employed over the public Internet, or the like.
[0078] In accordance with various implementations of the present disclosure, the methods described herein are implemented by software programs executable by a computer system. Further, in an example, non-limiting implementation, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.

[0079] Although the present specification describes components and functions that are implemented in particular implementations with reference to particular standards and protocols, the disclosure is not limited to such standards and protocols. For example, standards for Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) represent examples of the state of the art. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed herein are considered equivalents thereof.
[0080] It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the disclosure is not limited to any particular implementation or programming technique and that the disclosure is implemented using any appropriate techniques for implementing the functionality described herein. The disclosure is not limited to any particular programming language or operating system.
[0081] It should be appreciated that in the above description of example embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of the present disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the present disclosure.
[0082] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.

[0083] Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.

[0084] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present disclosure are practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
[0085] Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications are made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as falling within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present disclosure.

[0086] The above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other implementations, which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description. While various implementations of the disclosure have been described, it will be apparent to those of ordinary skill in the art that many more implementations are possible within the scope of the disclosure. Accordingly, the disclosure is not to be restricted except in light of the attached claims and their equivalents.
[0087] The present disclosure furthermore relates to the following aspects.
[0088] Example 1. A computer-implemented method comprising: receiving, by one or more processors, data associated with one or more subjects; processing, by the one or more processors, the data to generate one or more three-dimensional models of the one or more subjects; generating, by the one or more processors, a presentation of the one or more three-dimensional models in a virtual environment; receiving, by the one or more processors, a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining, by the one or more processors, at least one action for the at least one selected three-dimensional model of the subject.
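By way of non-limiting illustration, the sequence of operations recited in Example 1 can be sketched as follows. All class names, data shapes, and the chosen action below are illustrative assumptions introduced for this sketch and are not part of the disclosure:

```python
# Hypothetical sketch of the Example 1 pipeline: receive subject data,
# generate per-subject 3D models, present them, select one, determine an action.
from dataclasses import dataclass, field

@dataclass
class SubjectData:
    subject_id: str
    images: list = field(default_factory=list)
    videos: list = field(default_factory=list)
    audio: list = field(default_factory=list)

@dataclass
class Model3D:
    subject_id: str
    mesh: dict

def generate_models(data: list) -> list:
    # Placeholder for the model-generation step; a real system would build
    # geometry from the images/videos rather than an empty mesh.
    return [Model3D(d.subject_id, mesh={"vertices": [], "faces": []}) for d in data]

def present(models: list) -> dict:
    # The "presentation" is reduced here to an index of models by subject id.
    return {m.subject_id: m for m in models}

def determine_action(selected: Model3D) -> str:
    # Example 7 names fostering or adopting as possible actions.
    return "foster"

data = [SubjectData("dog-1"), SubjectData("cat-2")]
scene = present(generate_models(data))
selected = scene["dog-1"]
print(determine_action(selected))  # -> foster
```

The sketch fixes one action for brevity; the disclosure contemplates determining the action from user interaction and context.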
[0089] Example 2. The computer-implemented method of example 1, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors.
[0090] Example 3. The computer-implemented method of example 2, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying, by the one or more processors, computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing, by the one or more processors, the one or more three-dimensional models in a database.
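A toy, non-limiting stand-in for the "classification model" branch of Example 3 is sketched below. A deployed system would use a trained neural network; the feature (mean pixel intensity), the class centroids, and all names here are assumptions made for illustration only:

```python
# Hypothetical nearest-centroid classifier over a trivially simple image feature,
# standing in for "learning visual characteristics of the one or more animals".

def extract_features(image):
    # image: flat list of pixel intensities in [0, 1]; the mean is the feature.
    return sum(image) / len(image)

CENTROIDS = {"dark-coated": 0.2, "light-coated": 0.8}  # assumed classes

def classify(image):
    f = extract_features(image)
    return min(CENTROIDS, key=lambda c: abs(CENTROIDS[c] - f))

def build_model_record(subject_id, images):
    # "Learning visual characteristics" is reduced here to a majority label,
    # which would then be stored in a database keyed by subject.
    labels = [classify(img) for img in images]
    return {"subject": subject_id, "coat": max(set(labels), key=labels.count)}

record = build_model_record("dog-1", [[0.1, 0.2, 0.3], [0.15, 0.25, 0.2]])
print(record)  # {'subject': 'dog-1', 'coat': 'dark-coated'}
```

The database-storage step is represented only by the returned record; persistence would be handled separately.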
[0091] Example 4. The computer-implemented method of example 3, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating, by the one or more processors, animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.

[0092] Example 5. The computer-implemented method of example 4, wherein the one or more animated three-dimensional models are configured to communicate with a device associated with a user, and wherein the at least one action is based on the communication between the one or more animated three-dimensional models and the user.
[0093] Example 6. The computer-implemented method of examples 1-5, further comprising: receiving, by the one or more processors, training data correlating the data associated with the one or more subjects to the one or more three-dimensional models and/or one or more animated three-dimensional models; and inputting, by the one or more processors, the training data to a machine learning model to configure the machine learning model to output the one or more three-dimensional models and/or the one or more animated three-dimensional models for the data associated with the one or more subjects.
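The training step of Example 6 pairs subject data with target model outputs. As a minimal, non-limiting sketch, a lookup table stands in for the machine learning model; the feature keys, mesh names, and fallback value are all assumptions for illustration:

```python
# Hypothetical sketch: training pairs correlate subject data (features) with
# three-dimensional model outputs; a memorizing lookup stands in for a real
# trained model that would generalize to unseen subjects.

training_pairs = [
    ({"species": "dog", "size": "large"}, "large-dog-mesh"),
    ({"species": "cat", "size": "small"}, "small-cat-mesh"),
]

class LookupModel:
    def __init__(self):
        self.table = {}

    def fit(self, pairs):
        # "Inputting the training data to a machine learning model" is reduced
        # here to memorizing each (features -> target model) correlation.
        for features, target in pairs:
            self.table[tuple(sorted(features.items()))] = target

    def predict(self, features):
        return self.table.get(tuple(sorted(features.items())), "default-mesh")

model = LookupModel()
model.fit(training_pairs)
print(model.predict({"species": "dog", "size": "large"}))  # -> large-dog-mesh
```

A real implementation would replace the lookup with a parameterized model so that novel subject data still yields a plausible three-dimensional model.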
[0094] Example 7. The computer-implemented method of examples 1-6, wherein the at least one action for the at least one selected three-dimensional model includes fostering or adopting the subject in the virtual environment.
[0095] Example 8. The computer-implemented method of example 7, wherein receiving the selection of the at least one three-dimensional model further comprises: receiving, by the one or more processors, a request to purchase a property in the virtual environment from a device associated with a user, wherein the request includes a transaction amount for the property; and superimposing, by the one or more processors, the at least one selected three-dimensional model on the purchased property.
[0096] Example 9. The computer-implemented method of example 8, further comprising: monitoring, by the one or more processors, a condition of the at least one selected three-dimensional model on the purchased property; and generating, by the one or more processors, a real-time notification regarding the condition of the at least one selected three-dimensional model on the device associated with the user.
[0097] Example 10. The computer-implemented method of example 9, wherein fostering or adopting the subject in the virtual environment causes service providers to foster or adopt animals in real locations, and wherein the condition of the at least one selected three-dimensional model represents the condition of the animals in the real locations.
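The monitoring-and-notification loop of Examples 9-10 can be sketched as below. The condition values, the change-detection policy (notify only on change), and the message format are illustrative assumptions, not part of the disclosed method:

```python
# Hypothetical sketch of Examples 9-10: the condition of a selected model on a
# purchased property is monitored, and a real-time notification is generated
# for the user's device whenever the condition changes.

def monitor(conditions, notify):
    last = None
    for condition in conditions:  # e.g., condition readings streamed over time
        if condition != last:
            notify(f"condition is now: {condition}")
            last = condition

notifications = []  # stands in for delivery to the device associated with a user
monitor(["healthy", "healthy", "hungry"], notifications.append)
print(notifications)
```

Because the model's condition mirrors a real animal's condition per Example 10, the same notification path could carry updates originating from real-location service providers.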
[0098] Example 11. A system comprising: one or more processors; a non-transitory computer readable medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving data associated with one or more subjects; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
[0099] Example 12. The system of example 11, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors.
[00100] Example 13. The system of example 12, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing the one or more three-dimensional models in a database.
[00101] Example 14. The system of example 13, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.
[00102] Example 15. The system of example 14, wherein the one or more animated three-dimensional models are configured to communicate with a device associated with a user, and wherein the at least one action is based on the communication between the one or more animated three-dimensional models and the user.
[00103] Example 16. The system of examples 11-15, further comprising: receiving training data correlating the data associated with the one or more subjects to the one or more three-dimensional models and/or one or more animated three-dimensional models; and inputting the training data to a machine learning model to configure the machine learning model to output the one or more three-dimensional models and/or the one or more animated three-dimensional models for the data associated with the one or more subjects.
[00104] Example 17. The system of examples 11-16, wherein receiving the selection of the at least one three-dimensional model further comprises: receiving a request to purchase a property in the virtual environment from a device associated with a user, wherein the request includes a transaction amount for the property; and superimposing the at least one selected three-dimensional model on the purchased property, wherein the at least one action for the at least one selected three-dimensional model includes fostering or adopting the subject in the virtual environment.
[00105] Example 18. A non-transitory computer readable medium, the non-transitory computer readable medium storing instructions which, when executed by one or more processors of a computing system, cause the one or more processors to perform operations, comprising: receiving data associated with one or more subjects, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
[00106] Example 19. The non-transitory computer readable medium of example 18, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing the one or more three-dimensional models in a database.
[00107] Example 20. The non-transitory computer readable medium of example 19, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.


CLAIMS

What is claimed is:
1. A computer-implemented method comprising: receiving, by one or more processors, data associated with one or more subjects; processing, by the one or more processors, the data to generate one or more three-dimensional models of the one or more subjects; generating, by the one or more processors, a presentation of the one or more three-dimensional models in a virtual environment; receiving, by the one or more processors, a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining, by the one or more processors, at least one action for the at least one selected three-dimensional model of the subject.
2. The computer-implemented method of claim 1, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors.
3. The computer-implemented method of claim 2, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying, by the one or more processors, computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing, by the one or more processors, the one or more three-dimensional models in a database.
4. The computer-implemented method of claim 3, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating, by the one or more processors, animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.
5. The computer-implemented method of claim 4, wherein the one or more animated three-dimensional models are configured to communicate with a device associated with a user, and wherein the at least one action is based on the communication between the one or more animated three-dimensional models and the user.
6. The computer-implemented method of claim 1, further comprising: receiving, by the one or more processors, training data correlating the data associated with the one or more subjects to the one or more three-dimensional models and/or one or more animated three-dimensional models; and inputting, by the one or more processors, the training data to a machine learning model to configure the machine learning model to output the one or more three-dimensional models and/or the one or more animated three-dimensional models for the data associated with the one or more subjects.
7. The computer-implemented method of claim 1, wherein the at least one action for the at least one selected three-dimensional model includes fostering or adopting the subject in the virtual environment.
8. The computer-implemented method of claim 7, wherein receiving the selection of the at least one three-dimensional model further comprises: receiving, by the one or more processors, a request to purchase a property in the virtual environment from a device associated with a user, wherein the request includes a transaction amount for the property; and superimposing, by the one or more processors, the at least one selected three-dimensional model on the purchased property.
9. The computer-implemented method of claim 8, further comprising: monitoring, by the one or more processors, a condition of the at least one selected three-dimensional model on the purchased property; and generating, by the one or more processors, a real-time notification regarding the condition of the at least one selected three-dimensional model on the device associated with the user.
10. The computer-implemented method of claim 9, wherein fostering or adopting the subject in the virtual environment causes service providers to foster or adopt animals in real locations, and wherein the condition of the at least one selected three-dimensional model represents the condition of the animals in the real locations.
11. A system comprising: one or more processors; a non-transitory computer readable medium storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving data associated with one or more subjects; processing the data to generate one or more three-dimensional models of the one or more subjects; generating a presentation of the one or more three-dimensional models in a virtual environment; receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and determining at least one action for the at least one selected three-dimensional model of the subject.
12. The system of claim 11, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors.
13. The system of claim 12, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing the one or more three-dimensional models in a database.
14. The system of claim 13, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.
15. The system of claim 14, wherein the one or more animated three-dimensional models are configured to communicate with a device associated with a user, and wherein the at least one action is based on the communication between the one or more animated three-dimensional models and the user.
16. The system of claim 11, further comprising: receiving training data correlating the data associated with the one or more subjects to the one or more three-dimensional models and/or one or more animated three-dimensional models; and inputting the training data to a machine learning model to configure the machine learning model to output the one or more three-dimensional models and/or the one or more animated three-dimensional models for the data associated with the one or more subjects.
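Claim 16's training step — feeding pairs that correlate subject data with model identifiers into a learner so it can later output the right three-dimensional model for new data — can be sketched with any supervised classifier. The claim does not name the machine learning model; a nearest-centroid classifier over a single scalar feature is used here purely as a stand-in.

```python
from collections import defaultdict

class NearestCentroid:
    """Toy learner mapping a scalar feature to a 3-D model identifier."""

    def fit(self, features, model_ids):
        # Accumulate per-label sums and counts from the training pairs.
        sums = defaultdict(lambda: [0.0, 0])
        for x, y in zip(features, model_ids):
            sums[y][0] += x
            sums[y][1] += 1
        self.centroids = {y: s / n for y, (s, n) in sums.items()}
        return self

    def predict(self, x):
        # Output the model id whose centroid is closest to the input.
        return min(self.centroids, key=lambda y: abs(self.centroids[y] - x))
```

Training data here is the claimed correlation: each feature value is paired with the identifier of the three-dimensional (or animated) model it should map to.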
17. The system of claim 11, wherein receiving the selection of the at least one three-dimensional model further comprises: receiving a request to purchase a property in the virtual environment from a device associated with a user, wherein the request includes a transaction amount for the property; and superimposing the at least one selected three-dimensional model on the purchased property, wherein the at least one action for the at least one selected three-dimensional model includes fostering or adopting the subject in the virtual environment.
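The flow in claim 17 — validate the transaction amount against the property's price, then superimpose the selected model on the purchased property with a "foster" or "adopt" action — could look roughly like this. All class and field names (`Property`, `model_ids`, the event dictionary) are assumptions made for the sketch, not terms defined in the application.

```python
from dataclasses import dataclass, field

@dataclass
class Property:
    property_id: str
    price: int
    model_ids: list = field(default_factory=list)  # models placed here

def purchase_property(properties, property_id, payment):
    """Check the request's transaction amount against the listing price."""
    prop = properties[property_id]
    if payment < prop.price:
        raise ValueError("insufficient transaction amount")
    return prop

def place_model(prop, model_id, action):
    """Superimpose the selected model on the purchased property."""
    if action not in ("foster", "adopt"):
        raise ValueError("unsupported action")
    prop.model_ids.append(model_id)
    return {"property": prop.property_id, "model": model_id, "action": action}
```

Separating purchase validation from model placement keeps the claimed two-part flow explicit: the request is accepted first, and only then is the model superimposed and the action recorded.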
18. A non-transitory computer readable medium, the non-transitory computer readable medium storing instructions which, when executed by one or more processors of a computing system, cause the one or more processors to perform operations comprising:
receiving data associated with one or more subjects, wherein the data associated with the one or more subjects includes one or more images, one or more videos, and/or one or more audio recordings of one or more animals captured by one or more sensors;
processing the data to generate one or more three-dimensional models of the one or more subjects;
generating a presentation of the one or more three-dimensional models in a virtual environment;
receiving a selection of at least one three-dimensional model of a subject from the one or more three-dimensional models of the one or more subjects; and
determining at least one action for the at least one selected three-dimensional model of the subject.
19. The non-transitory computer readable medium of claim 18, wherein processing the data to generate the one or more three-dimensional models of the one or more subjects comprises: applying computer vision techniques to the one or more images and/or the one or more videos to learn visual characteristics of the one or more animals for generating the one or more three-dimensional models, wherein the computer vision techniques include a neural network model or a classification model; and storing the one or more three-dimensional models in a database.
20. The non-transitory computer readable medium of claim 19, wherein generating the presentation of the one or more three-dimensional models in the virtual environment comprises: generating animations for the one or more three-dimensional models in the virtual environment, wherein the one or more animated three-dimensional models execute specific movements performed by the one or more animals in the one or more videos.
PCT/US2024/012713 2023-01-25 2024-01-24 Systems and methods for generating digital representation of a subject for rendering a service Ceased WO2024158873A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363481555P 2023-01-25 2023-01-25
US63/481,555 2023-01-25

Publications (1)

Publication Number Publication Date
WO2024158873A1 true WO2024158873A1 (en) 2024-08-02

Family

ID=90139857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/012713 Ceased WO2024158873A1 (en) 2023-01-25 2024-01-24 Systems and methods for generating digital representation of a subject for rendering a service

Country Status (1)

Country Link
WO (1) WO2024158873A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140285339A1 (en) * 2012-01-26 2014-09-25 Squishycute.Com Llc Computer-implemented animal shelter management system
US20200312003A1 (en) * 2019-03-27 2020-10-01 Electronic Arts Inc. Virtual animal character generation from image or video data


Similar Documents

Publication Publication Date Title
US12354023B2 (en) Private artificial intelligence (AI) model of a user for use by an autonomous personal companion
US20240267344A1 (en) Chatbot for interactive platforms
US20210295579A1 (en) Systems and Methods for Generating an Interactive Avatar Model
US11568265B2 (en) Continual selection of scenarios based on identified tags describing contextual environment of a user for execution by an artificial intelligence model of the user by an autonomous personal companion
CN120752659A (en) Determine user intent from chatbot interactions
US20240355065A1 (en) Dynamic model adaptation customized for individual users
CN121002544A (en) Using model adaptation to overlay visual content
Brown The Innovation Ultimatum: How six strategic technologies will reshape every business in the 2020s
CN118036694B (en) Method, device and equipment for training intelligent agent and computer storage medium
KR102694719B1 (en) Method and system for training companion dogs based on artificial intelligence
JP7505208B2 (en) MATCHING SYSTEM, MATCHING METHOD, AND MATCHING PROGRAM
KR102493062B1 (en) Server and method for managing for decentralized metaverse operation, and program stored in computer readable medium performing the same
WO2024158873A1 (en) Systems and methods for generating digital representation of a subject for rendering a service
KR102720656B1 (en) Method, device and system for providing advertising content matching automation solution through digital advertising media
KR102619388B1 (en) Method, device and system for providing interface that implement visual effect depending on access related to metaverse memorial hall service based on constellation
KR102619389B1 (en) Method, device and system for providing interface of metaverse memorial hall service based on constellation
Martin et al. Teaching and Learning Computer-Intelligent Games
KR102621308B1 (en) Method, device and system for providing platform service of metaverse memorial hall based on constellation
US20250384608A1 (en) Generative ai pet avatar generation
US12548226B2 (en) Systems and methods for a three-dimensional digital pet representation platform
US20250383759A1 (en) Virtual pet features within a map
JP2026024979A (en) system
JP2026024795A (en) system
JP2026024527A (en) system
JP2026024950A (en) system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24709231

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 24709231

Country of ref document: EP

Kind code of ref document: A1