WO2016014597A2 - Translating emotions into electronic representations - Google Patents
Translating emotions into electronic representations
- Publication number
- WO2016014597A2 (PCT/US2015/041419)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- emotion
- input data
- computer
- user
- electronic representation
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/12—Messaging; Mailboxes; Announcements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/535—Tracking the activity of the user
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/30—Profiles
- H04L67/306—User profiles
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Innovative emotion translation technologies are provided. Using the technologies, which may be embodied in various systems, methods, and computer-readable storage media, a user may custom-tailor emoticons and other electronic representations of emotions to convey a human emotion experienced by the user. The technologies may involve receiving emotion input data from the user with the active participation of the user, or they may involve automatically collecting such data. The technologies may involve continuous feedback functionalities based on machine learning.
Description
TRANSLATING EMOTIONS INTO ELECTRONIC REPRESENTATIONS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims the benefit of U.S. provisional patent application number 62/026,795 filed July 21, 2014 and entitled "Translation of Emotions to an Audio/Visual Representation," the content of which is incorporated herein by reference in its entirety for all purposes.
BACKGROUND OF THE INVENTION
Field of the Invention
[0002] The present invention generally relates to emotional translation. More specifically, the present invention relates to translating human emotions into visual and/or audio representations.
Related Art
[0003] Emoticons are simplistic images representative of mood. Emoticons may be used to convey the tenor of a message or communication that might otherwise lack emotional context, such as an email or text message. Emojis, too, may convey a human emotion.
[0004] Conventional emoticons and emojis are typically selected from a
predetermined palette of static icons, such as a 'happy face' or a 'frowning face.' Such static icons fail to accurately convey the nuances of mood or feeling.
SUMMARY OF THE CLAIMED INVENTION
[0005] A computer-implemented method for translating a user emotion into an electronic representation includes a step of receiving emotion input data associated with the user emotion. The emotion input data is received at a computing device. The method includes executing instructions stored in memory of the computing device. Upon execution of the instructions by a processor of the computing device, the computing device analyzes the emotion input data. The computing device then generates the electronic representation based on the emotion input data.
[0006] A non-transitory computer-readable storage medium has a computer program embodied thereon. The computer program is executable to perform a method for translating a user emotion into an electronic representation. The method includes a step of analyzing received emotion input data associated with the user emotion and generating the electronic representation based on the emotion input data.
[0007] A system for translating a user emotion into an electronic representation includes a sender computing device communicatively coupled to a server by a communications network. The sender computing device receives emotion input data from a user. The emotion input data is associated with the user emotion. The server receives the emotion input data from the sender computing device over the communications network and executes instructions stored in memory of the server. Upon execution of the instructions by a processor of the server, the server analyzes the emotion input data. The server then generates the electronic representation based on the emotion input data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] FIG. 1 illustrates an exemplary network environment in which a system for translating a user emotion into an electronic representation may be implemented.
[0009] FIG. 2 illustrates an exemplary application.
[0010] FIG. 3 illustrates an exemplary method for translating a user emotion into an electronic representation.
[0011] FIG. 4 illustrates an exemplary electronic representation resulting from the translation of a user emotion.
[0012] FIG. 5 illustrates an exemplary slider interface for translating a user emotion.
[0013] FIG. 6 illustrates another exemplary method for translating a user emotion into an electronic representation that includes machine learning.
[0014] FIG. 7 illustrates an exemplary method for the comparison of voluntary and involuntary data.
[0015] FIG. 8 illustrates a plurality of exemplary facial expressions created using a variety of emotion translation sliders.
[0016] FIG. 9 illustrates an exemplary system for implementing a computing device.
DETAILED DESCRIPTION
[0017] Innovative emotion translation technologies are provided. Using the technologies, a user may custom-tailor emoticons and other electronic
representations of emotions to convey a human emotion experienced by the user. The technologies may involve receiving emotion input data from the user with the active participation of the user, or they may involve automatically collecting such data (e.g., by way of hardware-based capture technologies). The technologies may involve continuous feedback functionalities based on machine learning. Although the technologies are illustrated herein by way of various systems, methods, and computer-readable storage media, persons of ordinary skill in the art will readily appreciate that the embodiments discussed are merely exemplary and in no way limit the scope of the present disclosure.
[0018] FIG. 1 illustrates an exemplary network environment 100 in which a system for translating a user emotion into an electronic representation may be implemented. As illustrated in FIG. 1, exemplary environment 100 may include a sender computing device 110 communicatively coupled to a communications network 120. Sender computing device 110 may be communicatively coupled to a receiver computing device 130 through network 120 and, in some cases, one or more intermediate computing devices (e.g., network server 140). Network server 140 may be communicatively coupled to an application server 150. Application server 150 may include a database 160 stored in memory. Alternatively, application server 150 may be communicatively coupled to one or more separate and distinct database servers (not shown) that maintain database 160 and include executable instructions associated with managing the database (e.g., performing lookups). Each of the foregoing computing devices or systems may include a processor, a network interface that permits the device to exchange data with other devices, and memory storing executable instructions and data. When executed, the executable
instructions, which may be arranged as software modules or any other suitable structure, perform one or more defined functionalities. In various embodiments, some or all of the foregoing devices may collectively embody the innovative emotion
translation technologies disclosed herein.
[0019] Network 120 may be implemented as a private network, a public network, an intranet, the Internet, or any suitable combination of the foregoing. Although FIG. 1 illustrates certain computing devices communicatively coupled to a single network 120 (e.g., sender computing device 110, receiver computing device 130, and network server 140), persons of ordinary skill in the art will readily recognize and appreciate that all of the devices depicted in FIG. 1 may be communicatively coupled together through either a single network 120 or a series of such networks.
[0020] Network server 140 may receive and respond to requests transmitted over network 120 between the various computing devices depicted in FIG. 1 (e.g., between sender computing device 110 and application server 150). Although network server 140 is depicted between network 120 and application server 150 in FIG. 1, persons of ordinary skill in the art will appreciate that the environment illustrated in FIG. 1 may include additional network servers between other computing devices. In one embodiment, for example, network 120 may be the Internet and network server 140 may be a web server. In various possible embodiments, network server 140 and application server 150 may be incorporated into a single computing device or, alternatively, may function as standalone computing devices as illustrated in FIG. 1.
[0021] Application server 150 may communicate with multiple computing devices, including for example network server 140, sender computing device 110, and receiver computing device 130. Application server 150 may host and maintain an executable application in memory. When executed, the application may perform a method for translating a user emotion into an electronic representation (e.g., an emoticon or emoji). As noted above, network server 140 and application server 150 may be incorporated as a single computing device or, alternatively, they may function as standalone computing devices. Database 160 may store data, process data, and resolve queries received from application server 150 (e.g., requests for historical or otherwise previously stored emotion input data).
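The server-side flow described in the preceding paragraph can be illustrated with a minimal Python sketch (not part of the patent): an application-server function receives emotion input data, stores it alongside previously received data in a stand-in for database 160, and returns a generated representation. The function and variable names, and the trivial translate() placeholder, are assumptions made for illustration only.

```python
# Hypothetical sketch of the application-server flow: receive emotion input,
# store it with historical data, and return a generated representation.
from collections import defaultdict

HISTORY_DB = defaultdict(list)  # stand-in for database 160 (historical emotion input data)

def translate(emotion_input):
    """Placeholder for the emotion translation engine (see FIG. 2 and FIG. 3)."""
    valence = emotion_input.get("happy_sad", 0.0)  # -1.0 (sad) .. 1.0 (happy)
    return {"type": "emoticon", "mouth_curve": valence}

def handle_emotion_request(user_id, emotion_input):
    """Server-side handler: record the input, then generate a representation."""
    HISTORY_DB[user_id].append(emotion_input)   # store for later historical queries
    return translate(emotion_input)             # hand off to the translation engine

if __name__ == "__main__":
    print(handle_emotion_request("alice", {"happy_sad": 0.7}))
```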
[0022] Sender computing device 110 and receiver computing device 130 may each be communicatively coupled to network 120 at a network interface. Computing devices 110 and 130 may each be coupled either directly to network 120 or through any
number of intermediate network servers, gateways, or other suitable computing devices. Computing devices 110 and 130 may each be a device that includes a processor, memory, and network interface (e.g., a desktop computer, workstation, laptop, smart phone, tablet, e-reader, smart watch, or other suitable computing device). Computing devices 110 and 130 may each include one or more locally stored applications, such as a network browser application through which a user may access network-based applications (e.g., Chrome™ application, FireFox™
application, Safari™ application, Opera™ application, or Internet Explorer™ application). The network browser may permit the operator to view content provided to computing devices 110 and 130 by application server 150. In some embodiments, computing devices 110 and 130 may each be a mobile device and, rather than viewing content provided with a network browser application, a user of computing device 110 or 130 may do so through a custom mobile application downloaded and locally installed on computing device 110 or 130. Through one or more user interfaces displayed at computing device 110 or 130, the user may communicate with application server 150 to operate the executable application stored in memory of application server 150.
[0023] FIG. 2 illustrates an exemplary application 200. Application 200 may be maintained and operated at an application server, such as application server 150 of FIG. 1, or it may be downloaded to a computing device associated with a user and maintained as a local application. Application 200 may include a plurality of executable instructions arranged as objects, modules, or other structures depending on the selected programming language, other design considerations, and the preference of the programmer. The executable instructions may be responsible for effectuating one or more functionalities that contribute to the translation of a user emotion into an electronic representation.
[0024] As shown in FIG. 2, application 200 may include executable instructions 210 that, when executed by a processor, render and display one or more user interfaces (e.g., a graphical user interface or a textual user interface). Application 200 may further include an emotion translation engine 220 that, when executed by a processor, translates received emotion input data into an electronic representation.
Application 200 may also include a user feedback engine 230 that, when executed by a processor, processes user feedback with machine learning functionalities.
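As a structural illustration only, the three components named above (the user-interface instructions, emotion translation engine 220, and user feedback engine 230) might be organized roughly as follows; the class names, method signatures, and sample values are assumptions rather than the patent's implementation.

```python
# Illustrative organization of an application like application 200 of FIG. 2.

class UserInterface:                      # cf. the user-interface instructions of FIG. 2
    def collect_input(self):
        # A real interface would render sliders or capture sensor data;
        # this sketch just returns a fixed sample.
        return {"happy_sad": 0.5, "angry_worried": -0.2}

class EmotionTranslationEngine:           # cf. emotion translation engine 220
    def translate(self, emotion_input):
        return {"mouth_curve": emotion_input.get("happy_sad", 0.0)}

class UserFeedbackEngine:                 # cf. user feedback engine 230
    def record(self, representation, satisfied):
        # A real engine would feed this into machine-learning updates.
        print("feedback:", representation, "satisfied =", satisfied)

class Application:
    def __init__(self):
        self.ui = UserInterface()
        self.engine = EmotionTranslationEngine()
        self.feedback = UserFeedbackEngine()

    def run_once(self):
        data = self.ui.collect_input()
        representation = self.engine.translate(data)
        self.feedback.record(representation, satisfied=True)
        return representation

if __name__ == "__main__":
    Application().run_once()
```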
[0025] FIG. 3 illustrates an exemplary method for translating a user emotion into an electronic representation. At step 310, emotion input data is received from a user. At step 320, the emotion input data is then analyzed and translated into a corresponding electronic representation (e.g., a visual and/or audible file), which is then output at step 330.
[0026] The emotion input data may be received or collected either automatically or with active participation from the user. The emotion input data may be received, for instance, as sensor data, image data, facial recognition data, vocal recognition data, or data associated with an emotions-based interface like that illustrated and discussed in the context of FIG. 4. Sensors are inclusive of cameras, microphones, temperature sensors, and other forms of sensors that capture biometric data.
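For illustration, emotion input data arriving from these different sources could be normalized into a single record before translation; the field names and value ranges below are assumptions introduced for this sketch, not terms from the patent.

```python
# Hypothetical container normalizing the input sources mentioned above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EmotionInput:
    source: str                      # e.g., "slider", "camera", "microphone", "sensor"
    valence: Optional[float] = None  # assumed scale: -1.0 (sad) .. 1.0 (happy)
    arousal: Optional[float] = None  # assumed scale: 0.0 (calm) .. 1.0 (agitated)
    raw: dict = field(default_factory=dict)  # untranslated sensor or recognition data

print(EmotionInput(source="sensor", arousal=0.8, raw={"heart_rate_bpm": 110}))
```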
[0027] Referring back to step 320, the emotion input data is analyzed and translated by a translation engine (e.g., emotion translation engine 220 of FIG. 2). The translation engine may use audio and/or graphic principles to translate the emotion input data. The translation engine may use a mapping function to associate the emotion input data with a value within a predetermined range of emotion input data or with a position along a predetermined emotional scale. The translation engine may map the determined value to a value within a predetermined graphical or audible range or to a position along a predetermined graphical or audible scale. For each form of output type, there may be a specific translator for performing the conversion (e.g., a specific conversion algorithm, range, or scale to be applied depending on whether the output type is a visual representation, an audible representation, or a combination audio-visual representation).
[0028] For example, in the case of a simple "smiley," the emotional scale may run from sad to happy and the graphical scale from a mouth facing down, to flat, to a mouth facing up. In audio, the same representation might range from minor to major scales. The conversion might also affect more than one graphics/audio component. For example, an introvert feeling may enlarge the eyes and scale down the mouth. Predefined, generic rules and upper and lower limits for
each rule may play a role in creating the desired output representation for any value between those limits.
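The mapping described in the two preceding paragraphs can be sketched as a set of per-output-type translators: a normalized sad-to-happy value is mapped onto a mouth-curvature range for visual output and onto a minor-to-major choice for audio output, with clamping standing in for the predefined upper and lower limits. The specific ranges, thresholds, and the introvert rule's scale factors are illustrative assumptions.

```python
# Sketch of per-output-type translators; all numeric ranges are assumptions.

def clamp(value, lo=-1.0, hi=1.0):
    """Enforce a rule's predefined upper and lower limits."""
    return max(lo, min(hi, value))

def to_visual(valence):
    """Map sad (-1) .. happy (+1) onto a mouth curvature: down, flat, up."""
    return {"mouth_curve": clamp(valence)}    # -1.0 frown, 0.0 flat, +1.0 smile

def to_audio(valence):
    """Map the same emotional scale onto a minor-to-major tonality choice."""
    v = clamp(valence)
    return {"scale": "major" if v >= 0 else "minor", "intensity": abs(v)}

def apply_introvert(face, introvert):
    """One input may affect several components: larger eyes, smaller mouth."""
    i = clamp(introvert, 0.0, 1.0)
    face["eye_scale"] = 1.0 + 0.5 * i
    face["mouth_scale"] = 1.0 - 0.5 * i
    return face

TRANSLATORS = {"visual": to_visual, "audio": to_audio}

def translate(valence, output_type="visual"):
    """Select the translator appropriate to the requested output type."""
    return TRANSLATORS[output_type](valence)

print(translate(0.8, "visual"))                      # {'mouth_curve': 0.8}
print(translate(-0.4, "audio"))                      # {'scale': 'minor', 'intensity': 0.4}
print(apply_introvert(to_visual(0.2), introvert=0.9))
```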
[0029] The principles utilized by the engine might also include performing real-time, frame-by-frame analysis of the emotional information or data. The translation engine analyzes user input and translates such input into expressions of primary emotions, including joy, surprise, anger, disgust, sadness, contempt, and fear, or advanced emotions, including frustration and confusion; the results of such analyses and translations are output as any of a variety of audio/visual representations as further described herein. The analyses may take into account facial muscle movements based on the Facial Action Coding System (FACS).
[0030] The analyses may alternatively or also utilize landmarks and feature extraction in the context of recognizing emotions from vocal utterances and input. Such a process might first find landmarks in acoustic or vocal signal input. The landmark or landmarks might be used to extract other features. Such features may include the number of landmarks, voice onset time, syllable rate, syllable duration, timing features including unvoiced and voiced duration, as well as pitch and energy features. Such an analysis may occur as an integrated part of the translation engine. Such an analysis may also take place by way of accessing analytical operations provided by a third-party system communicatively coupled to the translation engine.
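As a rough sketch of the kinds of acoustic features listed above (and not the patent's analysis), short-time energy and a crude voiced/unvoiced estimate can be computed from a raw signal with NumPy; genuine landmark detection, syllable timing, and pitch tracking would require considerably more signal processing, possibly via a third-party service as noted.

```python
import numpy as np

def frame_features(signal, sample_rate=16000, frame_ms=25):
    """Crude per-frame energy and voiced/unvoiced estimates (illustrative only)."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

    energy = np.mean(frames ** 2, axis=1)                       # energy feature
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    voiced = (energy > np.median(energy)) & (zcr < 0.25)        # naive voicing guess

    return {
        "frames": n_frames,
        "mean_energy": float(np.mean(energy)),
        "voiced_ratio": float(np.mean(voiced)),                 # proxy for voiced duration
    }

# Synthetic one-second tone standing in for real microphone input.
t = np.linspace(0, 1, 16000, endpoint=False)
print(frame_features(0.1 * np.sin(2 * np.pi * 220 * t)))
```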
[0031] The aforementioned techniques are exemplary. Other techniques that are known in the art may be implemented or combined. The translation engine may be implemented in software (e.g., application 200 of FIG. 2) and embedded in non-transitory computer-readable storage media, such as a hard drive or other memory device that might be found on the likes of a mobile device (e.g., a tablet, e-reader, smart phone, smart watch, or similar device). The translation engine may also be stored and implemented on a network server (e.g., application server 150 of FIG. 1), which may be a part of a network or cloud-based computing system as depicted in FIG. 1. Input may be provided to the translation engine by way of the aforementioned hardware devices, which may also include software processing capabilities. As discussed in the context of network 120 of FIG. 1, the emotion input data may be provided and processed over a cloud-based network or as a part of a more
traditional network (e.g., a wide area network, a local area network, the Internet, an intranet, or proprietary network).
[0032] At step 330, once the emotion input data has been analyzed and translated by the translation engine, output is provided in the form of an electronic representation (e.g., an audio and/or visual representation). The electronic representation should be understood as inclusive of, but not limited to, graphics. Such representations can include emoticons, emojis, avatars, sound clips, images, video, and animations (e.g., GIFs).
[0033] FIG. 4 illustrates an exemplary electronic representation 400 resulting from the translation of a user emotion. Electronic representation 400 may be generated as a result of the translation methodology discussed in the context of FIG. 3 above.
Such an audio and/or visual representation, or other electronic representation (as well as, or in addition to, graphics or animation), may be provided by way of a display 410 that is a part of a mobile device, such as sender computing device 110 or receiver computing device 130 of FIG. 1. Display 410 may further include a plurality of emotion translation sliders 420, which may allow a user to custom-tailor or fine-tune electronic representation 400. Where display 410 is a display of receiver computing device 130, sliders 420 may instead display the slider configuration selected by the user of sender computing device 110 when generating electronic representation 400.
[0034] As discussed in the context of FIG. 1, computing devices 110 and 130 may each be a mobile device, such as an Android™ or iOS™ device, a laptop or desktop computing device, a network connected television, or a multi-panel display such as a billboard or conference room display. Audio may be provided by way of speakers or other sound equipment that may be integrated or part of a standalone system. Such a system may be synchronized and operate in conjunction with the visual output device or it may be audio only.
[0035] Electronic representation 400 output by the translation engine may be transmitted over a hardwired network connection or a wireless network connection. The network may support SMS-based and multimedia-based messaging (e.g., MMS or iMessage). Electronic representation 400, which may be audio and/or visual (including but not limited to video), may be an attachment to an email or some other
primary data transmission (e.g., a chat message). Electronic representation 400 may also take the form of stored data for later access. For example, electronic
representation 400 may be stored in a non-transitory computer-readable storage device, such as portable storage. Electronic representation 400 may be accompanied by statistical data or other types of metadata.
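Purely as an illustration of attaching a generated representation and its metadata to a primary transmission such as a chat message, the sketch below serializes a hypothetical payload to JSON; the field names are invented for this example.

```python
import json
import time

def build_message(text, representation, slider_config):
    """Bundle a chat message with the generated representation and metadata."""
    return {
        "text": text,                        # the primary data transmission
        "attachment": representation,        # e.g., emoticon parameters or a media reference
        "metadata": {
            "slider_config": slider_config,  # lets the receiver show the sender's slider settings
            "created_at": time.time(),
        },
    }

payload = build_message(
    "See you soon!",
    {"type": "emoticon", "mouth_curve": 0.9},
    {"happy_sad": 0.9, "wink": 0.3},
)
print(json.dumps(payload, indent=2))
```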
[0036] One exemplary use of audio and/or visual presentation of emotions by way of electronic representations 400 is a native emoticon or emoji messaging application. Such an application may send and receive emoticons (and optional text) between its users (e.g., from a user of sender computing device 110 to a user of receiver computing device 130 of FIG. 1). Such an application (e.g., application 200 of FIG. 2) may operate as a part of a proprietary network or service. The application might also or alternatively be integrated into another proprietary messaging application such as the Facebook™ application, the Instagram™ application, the Snapchat™ application, the WhatsApp™ application, and the Google Hangouts™ application. The aforementioned applications are exemplary and not intended to be limiting as to the scope of the innovative emotion translation technologies provided herein.
[0037] When a message arrives at a receiving device (e.g., receiver computing device 130 of FIG. 1), the application may provide a notification sound that matches the sender's emotion. For example, a smiley face might be accompanied by a happy notification sound whereas an irritated or angry emotion might result in a noisy or 'irritated' sound. A receiving user might be able to tell the sender's emotion from the notification sound alone, without even viewing a visual emoticon or emoji.
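One way such a notification sound might be chosen is a simple lookup from the sender's emotion to a bundled sound file, as sketched below; the emotion labels and file paths are invented for illustration.

```python
# Hypothetical emotion-to-notification-sound lookup (file paths are invented).
NOTIFICATION_SOUNDS = {
    "happy": "sounds/chime_bright.wav",
    "angry": "sounds/buzz_harsh.wav",
    "sad": "sounds/tone_low.wav",
}

def notification_sound(sender_emotion, default="sounds/neutral.wav"):
    """Pick a sound matching the sender's emotion, falling back to a neutral tone."""
    return NOTIFICATION_SOUNDS.get(sender_emotion, default)

print(notification_sound("angry"))   # sounds/buzz_harsh.wav
print(notification_sound("bored"))   # sounds/neutral.wav
```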
[0038] FIG. 5 illustrates an exemplary slider interface 500 for translating a user emotion. An electronic representation output by the translation engine can be influenced by or adjusted through an emotions-based interface like that illustrated in FIG. 5. Such adjustments may take place as the actual input, as a part of the input, or as a part of output processing, which may be before or after the output audio and/or visual representation has been generated. Adjustments made through an interface like that of FIG. 5 rely on various mathematical calculations correlated to emotional qualities such as being happy or sad, angry or worried, and so forth. As illustrated in FIG. 5, for instance, interface 500 may include a happy/sad slider 505,
an angry/worried slider 510, an extrovert slider 515, an introvert slider 520, a love slider 525, a hate slider 530, a sexy slider 535, a blush slider 540, a crying slider 545, a boy/girl slider 550, an OMG slider 555, a wink slider 560, and a tongue-out slider 565. The examples shown in FIG. 5 are merely illustrative. Persons of ordinary skill in the art will readily recognize and appreciate that other emotional possibilities and ranges are possible. Forms of input other than sliders are also possible, including numeric scaling, binary (yes/no) entry, or 'tick' boxes, to name but a few.
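The calculations hinted at above could, for instance, blend several slider positions into one set of rendering parameters for a face; the weights, ranges, and parameter names in this sketch are assumptions, not formulas from the patent.

```python
# Illustrative blending of slider values into face-rendering parameters.

def face_from_sliders(sliders):
    """Combine slider positions (assumed ranges noted below) into one face."""
    happy = sliders.get("happy_sad", 0.0)      # -1.0 sad .. +1.0 happy
    angry = sliders.get("angry_worried", 0.0)  # -1.0 worried .. +1.0 angry
    blush = sliders.get("blush", 0.0)          # 0.0 .. 1.0
    wink = sliders.get("wink", 0.0)            # 0.0 .. 1.0

    return {
        "mouth_curve": happy,
        "brow_angle_deg": 30.0 * angry,        # assumed maximum brow tilt
        "cheek_red": blush,
        "left_eye_open": 1.0 - wink,           # a full wink closes one eye
    }

print(face_from_sliders({"happy_sad": 0.6, "wink": 1.0}))
```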
[0039] FIG. 6 illustrates another exemplary method 600 for translating a user emotion into an electronic representation that includes machine learning. Method 600 may be performed by a computing device executing instructions stored in memory (e.g., application 200 of FIG. 2). At step 610, upon execution of the instructions by a processor of the computing device, the computing device may acknowledge received emotional input data (e.g., data representing a given emotional state of a human user). At step 620, the computing device may translate the received emotion input data into an electronic representation using a translation engine. At step 630, the computing device may output the generated electronic representation (e.g., an audio and/or visual representation or file). The computing device may then, at step 640, solicit user feedback from a user of the computing device. At step 650, the computing device may determine whether the user is satisfied with the electronic representation generated from the emotion data. When the user is not satisfied with the electronic representation, the computing device regenerates the electronic representation by returning to step 620 in a feedback loop. When the user is satisfied with the electronic representation, method 600 may conclude. By applying method 600, a computing system (e.g., sender computing device 110 of FIG. 1) may detect satisfaction with outputted electronic
representations as a part of user feedback. Such feedback might include user input provided in response to a direct query, an emotional sensor, or an interface like that shown in FIGS. 4 or 5. In the case of dissatisfaction, the system may make small modifications to similarly situated input data going forward. If the modification later proves to enhance satisfaction, the confidence behind such an adjustment may permeate through the system. Similar adjustments may be made in response to
positive feedback. Over time, the system itself can improve the conversion function with respect to specific users, conversations, or the system as a whole.
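A toy version of the feedback loop of FIG. 6 is sketched below: dissatisfaction nudges a learned correction that changes how similar input is translated next time. The update rule, step size, and the assumption about the direction of the correction are invented for illustration and are not the patent's learning method.

```python
# Toy feedback loop in the spirit of FIG. 6; the learning rule is an assumption.

class AdaptiveTranslator:
    def __init__(self):
        self.bias = 0.0                              # learned per-user correction

    def translate(self, valence):
        return {"mouth_curve": max(-1.0, min(1.0, valence + self.bias))}

    def feedback(self, valence, satisfied, step=0.1):
        """On dissatisfaction, nudge the correction and regenerate (step 620)."""
        if not satisfied:
            # Assumes the user wanted a happier-looking result; a real system
            # would infer the direction from richer feedback signals.
            self.bias += step
        return self.translate(valence)

translator = AdaptiveTranslator()
representation = translator.translate(0.2)
while representation["mouth_curve"] < 0.5:           # stand-in for "user not satisfied"
    representation = translator.feedback(0.2, satisfied=False)
print(representation)                                # user satisfied; loop concludes
```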
[0040] FIG. 7 illustrates an exemplary method 700 for the comparison of voluntary data and involuntary data. At step 710, voluntary data may be received from a user of a computing device. Voluntary data may be an emotional statement or an emotional expression associated with the user. At step 720, involuntary data may be obtained from a third party or even from the user him- or herself. At step 730, the voluntary data received from the user may be processed by the translation engine to produce an electronic representation of the emotion associated with the user. At step 740, the voluntary data may be one or more of: data received by way of a graphical user interface like the sliders of FIGS. 4 or 5; a vocal user interface whereby a user might use his or her voice and intonation to change the outcome; a physical interface whereby the user can use his or her voluntary movements to change the outcome with skeleton detection (e.g., by way of an accelerometer), hand detection, or movement detection; a textual interface whereby a user might write "I am very happy but a little afraid" to change the outcome; or a facial recognition interface whereby a user might use his or her face to express a facial expression, changing the outcome through camera-based face detection.
[0041] The involuntary data received at step 720 may be an input that comes from the user him- or herself but without the full control of the user. The involuntary input may be an emotional state that is determined or concluded from an involuntary output of the user (e.g., an increased heart rate that may be detected by a heart rate sensor and associated with a variety of different emotions, such as anger or exuberance). The involuntary data may be obtained from a third party and may be exemplified by one or more of the following: vocal (e.g., voice recognition, intonation detection, or speech rate detection); physical (e.g., detecting the rate and volume of user tics with movement detection); textual (e.g., typing rate detection); or a mood API. At step 750, method 700 may include comparing the voluntary data and the involuntary data in order to correct inferences of the involuntary data and other results to generate more accurate output (e.g., a more accurate electronic
representation 400 of FIG. 4).
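As a rough illustration of the step 750 comparison, the sketch below fuses a voluntary valence estimate with an involuntary arousal reading. The 0-to-1 scales, the weighting, and the function name are assumptions made for this example and are not taken from the disclosure.

```python
# Hypothetical fusion of voluntary and involuntary emotion data (step 750 sketch).
def fuse_emotion(voluntary_valence: float, involuntary_arousal: float,
                 voluntary_weight: float = 0.7) -> dict:
    """Return a corrected emotion estimate that favors the user's voluntary input.

    An elevated involuntary reading (e.g., heart rate) is ambiguous on its own --
    it is consistent with anger as well as exuberance -- so here it only scales
    the intensity of whatever the voluntary channel reports.
    """
    intensity = (voluntary_weight * abs(voluntary_valence - 0.5) * 2
                 + (1 - voluntary_weight) * involuntary_arousal)
    return {"valence": voluntary_valence, "intensity": min(1.0, round(intensity, 2))}


# "I am very happy but a little afraid" might score ~0.8 valence; a heart-rate
# sensor reporting high arousal (0.9) raises intensity without flipping valence.
print(fuse_emotion(0.8, 0.9))  # -> {'valence': 0.8, 'intensity': 0.69}
```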
[0042] The involuntary data received at step 720 may, for example, come from a third-party system. An embodiment of the presently disclosed technology may compare the voluntary data extracted in the context of the translation engine with third-party data, otherwise referred to as involuntary data. Through such a comparison, the following exemplary result, discussed for illustrative purposes, might be achieved. A user in one geographic locale may have a particular dislike of the summer season. Because of that dislike, and because of the particular locale of the user, the user might cause emoticons or other emotional output with a generally negative connotation to be generated. A third-party system, however, may at the same time be located in a cold-weather environment. That system might erroneously presume that the same user would be 'happy' when it is sunny and hot, because the third-party system was designed for a cold-weather environment. This is not the case, as the user clearly dislikes the summer season and is expressing that dislike in the summer locale. An embodiment like that described above can correct the erroneous presumptions of the third-party system.
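The locale example above might be approximated by the following sketch, in which a user's own voluntary history overrides a third-party weather heuristic. The weather rule, the history structure, and the function names are hypothetical.

```python
# Hypothetical correction of a third-party presumption using voluntary history.
def third_party_guess(weather: str) -> str:
    """A system built for a cold-weather locale assumes sun and heat mean happiness."""
    return "happy" if weather == "hot_and_sunny" else "neutral"


def corrected_emotion(weather: str, user_history: dict) -> str:
    """Prefer the emotion the user has repeatedly expressed for this condition."""
    return user_history.get(weather) or third_party_guess(weather)


history = {"hot_and_sunny": "displeased"}            # the user dislikes the summer season
print(third_party_guess("hot_and_sunny"))            # 'happy' (erroneous presumption)
print(corrected_emotion("hot_and_sunny", history))   # 'displeased' (corrected)
```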
[0043] FIG. 8 illustrates a plurality of exemplary facial expressions 800 created using a variety of emotion translation sliders. Exemplary translation sliders are discussed and illustrated in the context of FIG. 5. Facial expressions are symbolic of moods and a means of expressing them. Systems such as the Facial Action Coding System (FACS) classify moods into a complex matrix of codes. FIG. 8 illustrates expressions utilizing four exemplary sliders: happy/none/sad slider 810, angry/worried slider 820, none/introvert slider 830, and none/extrovert slider 840. The sliders, which may each be divided into a plurality of degrees or units, can be manipulated to represent the majority of human moods. With four exemplary scales of one hundred units each, for example, there would be 100^4 (i.e., 100,000,000) possible facial expressions.
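A minimal sketch of how the four sliders of FIG. 8 might be combined into a single expression descriptor is shown below; the descriptor format and function name are assumptions, and the rendering of an actual face is left abstract.

```python
# Hypothetical composition of a facial expression from the four FIG. 8 sliders.
UNITS = 100  # each slider divided into one hundred degrees


def expression_from_sliders(happy_sad: int, angry_worried: int,
                            introvert: int, extrovert: int) -> dict:
    """Bundle four slider positions (0-99 each) into an expression descriptor."""
    for value in (happy_sad, angry_worried, introvert, extrovert):
        if not 0 <= value < UNITS:
            raise ValueError("slider positions must lie within the slider's units")
    return {
        "happy_sad": happy_sad,          # slider 810
        "angry_worried": angry_worried,  # slider 820
        "introvert": introvert,          # slider 830
        "extrovert": extrovert,          # slider 840
    }


print(expression_from_sliders(90, 10, 0, 40))
print(UNITS ** 4)  # four independent 100-unit scales -> 100000000 combinations
```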
[0044] FIG. 9 illustrates an exemplary system 900 for implementing a computing device. The computing system 900 of FIG. 9 may be implemented in the context of sender computing device 110, receiver computing device 130, network server 140, application server 150, or database server 160 of FIG. 1. The computing system of FIG. 9 may include one or more processors 910 and memory 920. Main memory 920
may store, in part, instructions and data for execution by processor 910. Main memory 920 may store the executable code when in operation. Computing system 900 may further include a mass storage device 930, a portable storage medium drive 940, output devices 950, user input devices 960, a graphics display system 970, and peripheral devices 980.
[0045] The components shown in FIG. 9 are depicted as being connected via a single bus 990. The components may alternatively be connected through one or more data transport means. Processor 910 and main memory 920, for example, may be connected via a local microprocessor bus. Mass storage device 930, peripheral device(s) 980, portable storage device 940, and display system 970 may be connected via one or more input/output buses.
[0046] Mass storage device 930, which may be implemented with a magnetic disk drive or an optical disk drive, may be a non-volatile storage device for storing data and instructions for use by processor 910. Mass storage device 930 may store system software for implementing embodiments of the solution described herein for purposes of loading the software into main memory 920.
[0047] Portable storage device 940 may operate in conjunction with a portable nonvolatile storage medium, such as a compact disk or digital video disc, to input and output data and code to and from computer system 900. The system software for implementing embodiments of the present solution may be stored on such a portable medium and input to computer system 900 via portable storage device 940.
[0048] Input devices 960 may provide a portion of a user interface. Input devices 960 may include an alpha-numeric keypad, such as a keyboard, touch screen, or touchpad, for inputting alpha-numeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. Additionally, system 900 may include output devices 950, such as speakers, printers, network interfaces, monitors, and the like.
[0049] Display system 970 may include a liquid crystal display or other suitable display device. Display system 970 may receive textual and graphical information and may process the information for output to the display device.
[0050] Peripherals 980 may include any type of computer support device that adds functionality to computer system 900. Peripheral device 980 could be, for example, a modem or a router.
[0051] The components illustrated in computer system 900 of FIG. 9 are those typically found in computer systems that may be suitable for use with embodiments of the present solution. The depiction of such components is not intended to be exhaustive in nature, but is rather intended to represent a broad category of computer components that are well known in the art. Thus, system 900 may be a desktop computer, workstation, server, mainframe computer, laptop, tablet, smartphone or other mobile or hand-held computing device, or any other suitable computing device. Computer system 900 may also include various bus
configurations, networked platforms, multi-processor platforms, and the like.
Various operating systems may be used, such as a UNIX™ operating system, a LINUX™ operating system, a WINDOWS™ operating system, a MACINTOSH™ operating system, a PALM™ operating system, and other suitable operating systems.
[0052] The foregoing detailed description of the technology has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the technology and its practical application, and to enable others skilled in the art to utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology be defined by the claims.
Claims
1. A computer-implemented method for translating a user emotion into an electronic representation, the method comprising:
receiving at a computing device emotion input data associated with the user emotion; and
executing instructions stored in memory of the computing device, wherein execution of the instructions by a processor of the computing device:
analyzes the emotion input data, and
generates the electronic representation based on the emotion input data.
2. The computer-implemented method of claim 1, wherein the emotion input data includes emotion data associated with a value within a predetermined range of emotion data.
3. The computer-implemented method of claim 1, wherein the emotion input data includes audio data.
4. The computer-implemented method of claim 1, wherein the emotion input data includes visual data.
5. The computer-implemented method of claim 1, wherein the emotion input data includes text.
6. The computer-implemented method of claim 4, wherein the visual data includes image data captured by a camera.
7. The computer-implemented method of claim 1, wherein the emotion input data includes a Facial Action Coding System code.
8. The computer-implemented method of claim 1, wherein the electronic representation is a visual representation.
9. The computer-implemented method of claim 8, wherein the electronic representation is an emoticon.
10. The computer-implemented method of claim 8, wherein the electronic representation is an emoji.
11. The computer-implemented method of claim 1, wherein the electronic representation includes an audio representation of the user emotion.
12. The computer-implemented method of claim 1, wherein analyzing the emotion input data includes associating the emotion input data with a position along a predetermined emotion scale.
13. The computer-implemented method of claim 12, wherein analyzing the emotion input data further includes mapping the position along the emotion scale to a position along an electronic representation scale.
14. The computer-implemented method of claim 3, wherein analyzing the emotion input data includes detecting a landmark in the audio data.
15. The computer-implemented method of claim 14, wherein the landmark in the audio data includes at least one of a voice onset time, a syllable rate, a syllable duration, a voice duration, and a pitch.
16. A non-transitory computer-readable storage medium having a computer program embodied thereon, the computer program executable to perform a method for translating a user emotion into an electronic representation, the method comprising:
analyzing received emotion input data, the emotion input data associated with the user emotion; and
generating the electronic representation based on the emotion input data.
17. The non-transitory computer-readable storage medium of claim 16, wherein the electronic representation is a visual representation.
18. The non-transitory computer-readable storage medium of claim 16, wherein analyzing the emotion input data includes associating the emotion input data with a position along a predetermined emotion scale.
19. The non-transitory computer-readable storage medium of claim 18, wherein analyzing the emotion input data further includes mapping the position along the emotion scale to a position along an electronic representation scale.
20. A system for translating a user emotion into an electronic representation, the system comprising:
a sender computing device that receives emotion input data from a user, the emotion input data associated with the user emotion; and
a server that receives the emotion input data from the sender computing device over a communications network and executes instructions stored in memory of the server, wherein execution of the instructions by a processor of the server:
analyzes the emotion input data, and
generates the electronic representation based on the emotion input data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462026795P | 2014-07-21 | 2014-07-21 | |
US62/026,795 | 2014-07-21 | | |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2016014597A2 (en) | 2016-01-28 |
WO2016014597A3 WO2016014597A3 (en) | 2016-03-24 |
Family
ID=55163952
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2015/041419 WO2016014597A2 (en) | 2014-07-21 | 2015-07-21 | Translating emotions into electronic representations |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016014597A2 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016216407A1 (en) | 2016-08-31 | 2018-03-01 | BSH Hausgeräte GmbH | Individual communication support |
EP3318964A1 (en) * | 2016-11-07 | 2018-05-09 | NayaDaya Oy | Method, computer program product, computer readable medium, computer system and electronic apparatus for associating visual indication of emotion experienced by user in response to emotion-causing object or event with digital object |
EP3340077A1 (en) * | 2016-12-20 | 2018-06-27 | Beijing Xiaomi Mobile Software Co., Ltd. | Method and apparatus for inputting expression information |
CN106020504B (en) * | 2016-05-17 | 2018-11-27 | 百度在线网络技术(北京)有限公司 | Information output method and device |
US12216810B2 (en) * | 2020-02-26 | 2025-02-04 | Mursion, Inc. | Systems and methods for automated control of human inhabited characters |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060122834A1 (en) * | 2004-12-03 | 2006-06-08 | Bennett Ian M | Emotion detection device & method for use in distributed systems |
US20100249538A1 (en) * | 2009-03-24 | 2010-09-30 | Neurofocus, Inc. | Presentation measure using neurographics |
US8326002B2 (en) * | 2009-08-13 | 2012-12-04 | Sensory Logic, Inc. | Methods of facial coding scoring for optimally identifying consumers' responses to arrive at effective, incisive, actionable conclusions |
WO2011158010A1 (en) * | 2010-06-15 | 2011-12-22 | Jonathan Edward Bishop | Assisting human interaction |
US9207755B2 (en) * | 2011-12-20 | 2015-12-08 | Iconicast, LLC | Method and system for emotion tracking, tagging, and rating and communication |
- 2015-07-21: WO PCT/US2015/041419 patent/WO2016014597A2/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2016014597A3 (en) | 2016-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11593984B2 (en) | Using text for avatar animation | |
US11929072B2 (en) | Using textual input and user state information to generate reply content to present in response to the textual input | |
CN114787814B (en) | Anaphoric resolution | |
US10909331B2 (en) | Implicit identification of translation payload with neural machine translation | |
JP6725672B2 (en) | Identifying voice input that provides credentials | |
US9972304B2 (en) | Privacy preserving distributed evaluation framework for embedded personalized systems | |
US20210090314A1 (en) | Multimodal approach for avatar animation | |
DK179343B1 (en) | Intelligent task discovery | |
CN110797019B (en) | Multi-command single speech input method | |
TWI579714B (en) | Method, system, and computer readable storage medium for predictive text input | |
US20180330714A1 (en) | Machine learned systems | |
US20180349346A1 (en) | Lattice-based techniques for providing spelling corrections | |
JP2019102063A (en) | Method and apparatus for controlling page | |
US20180077095A1 (en) | Augmentation of Communications with Emotional Data | |
CN107679032A (en) | Voice changes error correction method and device | |
WO2015183699A1 (en) | Predictive messaging method | |
US10586528B2 (en) | Domain-specific speech recognizers in a digital medium environment | |
TW201629949A (en) | A caching apparatus for serving phonetic pronunciations | |
EP2965313A1 (en) | Speech recognition assisted evaluation on text-to-speech pronunciation issue detection | |
CN105074817A (en) | Systems and methods for switching processing modes using gestures | |
US20170061955A1 (en) | Facilitating dynamic and intelligent conversion of text into real user speech | |
CN104808794A (en) | Method and system for inputting lip language | |
WO2016014597A2 (en) | Translating emotions into electronic representations | |
US20210118232A1 (en) | Method and System for Translating Air Writing To An Augmented Reality Device | |
CN116762055A (en) | Sync VR notifications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15825563; Country of ref document: EP; Kind code of ref document: A2 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 15825563; Country of ref document: EP; Kind code of ref document: A2 |