US20180025004A1 - Process to provide audio/video/literature files and/or events/activities, based upon an emoji or icon associated to a personal feeling

Info

Publication number
US20180025004A1
US20180025004A1 (application US15/214,436)
Authority
US
United States
Prior art keywords
user
video
emoji
mood
music
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/214,436
Inventor
Eric Koenig
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US15/214,436
Publication of US20180025004A1
Priority to US16/297,019 (US11151187B2)
Status: Abandoned

Classifications

    • G06F16/435: Information retrieval of multimedia data; filtering based on additional data, e.g. user or group profiles
    • G06F16/632: Information retrieval of audio data; query formulation
    • G06F16/639: Information retrieval of audio data; presentation of query results using playlists
    • G06F16/686: Retrieval of audio data characterised by using metadata generated manually, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • G06F17/30029, G06F17/30752, G06F17/30772
    • G06F3/0482: Interaction techniques based on graphical user interfaces [GUI]; interaction with lists of selectable items, e.g. menus
    • G06F3/04842: Interaction techniques based on graphical user interfaces [GUI]; selection of displayed objects or displayed text elements
    • H04L67/04: Protocols specially adapted for terminals or networks with limited capabilities, or specially adapted for terminal portability
    • H04L67/125: Protocols specially adapted for proprietary or special-purpose networking environments, involving control of end-device applications over a network
    • H04L67/306: User profiles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Library & Information Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)
  • Human Computer Interaction (AREA)

Abstract

The current invention is a recommendation system that translates a user's mood into activities and/or files (music, video, and/or literature) that are personalized. The user selects an icon, emoticon, or emoji that represents their desired mood, and the system matches files and/or events and activities to that desired mood, personalized to each user.

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS (IF ANY)
  • None
  • BACKGROUND
  • 1. Field of the Invention
  • The present invention relates to a process to recommend audio/video/literature files and/or events/activities, more particularly associating these recommendations to a personal feeling that has been paired with an emoji or icon.
  • 2. Description of Prior Art
  • U.S. Pat. No. 8,255,810 by Moore, et al., issued on Aug. 28, 2012, is for a portable touch screen device, method, and graphical user interface for using emoji characters while in a locked mode. It discloses a computer-implemented method, performed at a portable electronic device with a touch screen display, that includes simultaneously displaying a character input area operable to display text character input and emoji character input selected by a user, a keyboard display area, and a plurality of emoji category icons. In response to detecting a gesture on a respective emoji category icon, the method also includes simultaneously displaying: a first subset of emoji character keys for the respective emoji category in the keyboard display area and a plurality of subset-sequence-indicia icons for the respective emoji category. The method also includes detecting a gesture in the keyboard display area and, in response: replacing display of the first subset of emoji character keys with display of a second subset of emoji character keys for the respective emoji category, and updating the information provided by the subset-sequence-indicia icons.
  • U.S. Pat. No. 8,918,339, issued Dec. 23, 2014 (United States Patent Application 20140279418 by Yigal Rubinstein et al., published on Sep. 18, 2014), is for ASSOCIATING AN INDICATION OF USER EMOTIONAL REACTION WITH CONTENT ITEMS PRESENTED BY A SOCIAL NETWORKING SYSTEM. It discloses that a social networking system user may associate an emoji representing the user's emotional reaction with a content item presented by the social networking system. The user is presented with one or more emoji maintained by the social networking system and selects an emoji for associating with the content item. If certain emoji are selected, the social networking system prompts the user for compensation or requests compensation from an entity associated with the selected emoji. The selected emoji is associated with the content item, and a connection, or other information, between the user and the object identifying the selected emoji is stored by the social networking system. The selected emoji may be displayed with the content item to the user and to other users connected to the user.
  • United States Patent Application 20160092937 by Steven Martin, filed on Mar. 31, 2016, is for Selectable Text Messaging Styles for Brand Owners. It discloses a system that includes a computing device with a memory configured to store instructions. The system also includes a processor to execute the instructions to perform operations that include sending information to a publisher that represents one or more immutable stylistic features for enhancing text messages. Operations also include sending information to the publisher that represents one or more conditions regarding the use of the one or more immutable stylistic features in text messages that include mutable content. Operations also include receiving feedback information in response to the stylistic features being used in one or more text messages that include mutable content.
  • United States Patent Application 20160142362 by Ana Fitzner, filed on May 19, 2016, is for CUSTOM ENCODED MESSAGES AMONGST A CUSTOMIZED SOCIAL GROUP. It discloses a system and method for sending custom encoded messages amongst a customized social group. A selection of intended recipients within a subgroup from a list of contacts is received on a client device. It is determined whether all of the selected recipients are capable of receiving a custom encoded message. If the determination is negative, the method includes sending an invitation email to one or more of the selected recipients. If it is affirmative, the method includes receiving a message from the user intended for the selected recipients. An assignment of a graphical symbol to an alphabet is received from the user. The assignment, associated with all of the intended recipients, is stored in a memory of the client device. The assignment and the message are transmitted to a remote processor for converting the message to the custom encoded message based on the assignment and transmitting them to the device.
  • United States Patent Application 20150341304 by Corinne Elizabeth Sherman et al., published on Nov. 26, 2015, is for PERSONALIZED SHARING AT THE USER LEVEL. It discloses a system comprising a computer-readable storage medium storing at least one program, and a computer-implemented method for generating and provisioning personalized messages. Consistent with some embodiments, the method may include receiving a request from a user to share content with one or more other users. In response to receiving the request, user profile data about the user may be accessed, and a portion of the user profile data may be selected. The method may further include generating a personalized message based on the selected portion of the user profile data, and communicating the personalized message to the one or more other users. It is assigned to EBAY INC.
  • United States Patent Application 20150334529 by Neilesh JAIN et al., published on Nov. 19, 2015, is for SHARING MOMENT EXPERIENCES. It discloses a system for sharing moment experiences. The system receives moment data from an input to a mobile device. The system receives geographic location information, time information, and contextual information that is local to the mobile device. The system creates a message about the moment data based on the geographic location information, the time information, and the contextual information. The system outputs the moment data with the message.
  • United States Patent Application 20150222586 by David Ebersman et al., published on Aug. 6, 2015, is for Ideograms Based on Sentiment Analysis. It discloses particular embodiments of a method that comprise analyzing a message to perform sentiment analysis with respect to at least a portion of the message. One or more sentiments associated with the at least a portion of the message may then be identified. One or more ideograms (e.g., written characters, symbols or images that represent an idea or thing), each corresponding to an identified sentiment, may then be suggested to a user for insertion into a message. Upon receiving a user selection of one or more of the ideograms in relation to some portion of the message, an association may be saved in a user-specific dictionary linking the user-selected one or more of the ideograms with the portion of the message. In particular embodiments, the sentiment analysis may incorporate social-networking information and/or historical ideogram usage information.
  • United States Patent Application 20150100537 by Jason Grieves et al., published on Apr. 9, 2015, is for Emoji for Text Predictions. It discloses techniques to employ emoji for text predictions. In one or more implementations, entry of characters is detected during interaction with a device. Prediction candidates corresponding to the detected characters are generated according to a language model that is configured to consider emoji along with words and phrases. The language model may make use of a mapping table that maps a plurality of emoji to corresponding words. The mapping table enables a text prediction engine to offer the emoji as alternatives for matching words. In addition or alternatively, the text prediction engine may be configured to analyze emoji as words within the model and generate probabilities and candidate rankings for predictions that include both emoji and words. User-specific emoji use may also be learned by monitoring a user's typing activity to adapt predictions to the user's particular usage of emoji.
  • There is still room for improvement in the art.
  • SUMMARY OF THE INVENTION
  • The current invention is a recommendation system that translates a user's mood into activities and/or files (music, video, and/or literature) that are personalized. The user selects an icon, emoticon, or emoji that represents their desired mood (hereinafter referred to as an Emoodji), and the system matches files and/or events and activities to that desired mood, personalized to each user.
  • The system also provides for a means to exchange these mood-themed files and activities as a form of “slang” language—enabling users to share music, video, literature, events, and activities through images instead of actual text.
  • The process has the following steps:
  • 1. Profile Creation: The user first enters some basic profile information, then they choose their desired mood icons (i.e. Emoodjis) from a series of preloaded icon options, and finally they set their preferred genres for each mood (i.e. which genres help them to feel those selected moods). For example, regarding music, the user has entered into the requisite data entry field, or selected from available preloaded options, that Pop, Dance, and Hip Hop make that user feel Happy, so when that user selects the “Happy” Emoodji (i.e. the Emoodji that user associated with the mood of being happy), the app plays those music genres.
  • 2. Music/Video/Literature/Event Generation: Once the user has created a profile, the software will provide music/video/literature files (or multi-media content) or an event to the user, based upon the selected Emoodji and the corresponding genres—for example, if Pop, Dance, and Hip Hop make the user feel Happy, the app will provide music that matches those music genres, or if Hiking, Camping, and Rafting make the user feel Adventurous, the app will provide suggestions on things to do that match those activities.
  • 3. Music/Video/Literature/Event Editing: In some embodiments, the user can edit the track that is provided to them via the system by adding additional instrumentation, vocals, sound bites, video clip, etc. The user can also edit the recommended literature file by adding content of their own choosing. They can similarly add extra activities to an event suggestion made by the app.
  • 4. Sharing/Collaboration: Users can send each other Emoodjis in the same fashion they send traditional emojis—as a graphic representation of a mood, but one that triggers multi-media content to play or literature to be displayed when clicked by the recipient, or asks the recipient to schedule or confirm an event.
  • The innovative process is more efficient, effective, accurate and functional than the current art.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Without restricting the full scope of this invention, the preferred form of this invention is illustrated in the following drawings:
  • FIG. 1 shows an overview of how users access the system;
  • FIG. 2 shows a sample of a login screen;
  • FIG. 3 shows a sample of a profile screen;
  • FIG. 4 displays the icons used to signify emotions;
  • FIG. 5 displays music being associated with the icon;
  • FIG. 6 displays a user choosing music based on their mood;
  • FIG. 7 shows a set of music tracks based on a mood;
  • FIG. 8 displays the edit function of the system;
  • FIG. 9 displays the ability to add video to a file;
  • FIG. 10 displays sharing of the file with others;
  • FIG. 11 shows the system sharing a file through social media;
  • FIG. 12 shows a recipient receiving a message with a file;
  • FIG. 13 shows the sharing of an audio file;
  • FIG. 14 shows Activity/Event Process Profile Creation;
  • FIG. 15 displays the Activity/Event Process Recommendation Step;
  • FIG. 16 displays the Activity/Event Process Customization Step; and
  • FIG. 17 displays the Activity/Event Process Sharing Step.
  • DESCRIPTION OF THE PREFERRED EMBODIMENT
  • There are a number of significant design features and improvements incorporated within the invention.
  • The current invention is a recommendation system 1 that translates a user's mood into activities and/or files (music, video, and/or literature) that are personalized. The user selects an icon, emoticon, or emoji that represents their desired mood (i.e. Emoodji), and the system matches files and/or events and activities to that desired mood, personalized to each user. The system also provides for a means to exchange these mood-themed files as a form of “slang” language—enabling users to share music, video, literature, events, and activities through images instead of actual text.
  • FIG. 1 displays the preferred embodiment of the system architecture 1 accessed through an Internet, Intranet and/or Wireless network 500. However, the system could be implemented on a device-to-device or client/server architecture as well.
  • In FIG. 1, the system 1 is accessed from a user's computing device 10 through a web browser over HTTP and/or HTTPS protocols 500, a wireless network, or a cell-phone-to-cell-phone connection. A computing device 20, such as a cell phone, that can access the system 1 must have some version of a CPU, CPU memory, a local hard disk, a keyboard/keypad/input and a display unit. The computing device 20 can be any desktop, laptop, tablet, smart phone or general-purpose computing device with an appropriate amount of memory suitable for this purpose and an active connection to the Internet 500. Computing devices like this are well known in the art and are not pertinent to the invention.
  • The system 1, data and processing code can reside in the non-transitory memory 310 of the one or more computing devices. The system 1 in the preferred embodiment would be written to act like a smart phone application (app).
  • In one embodiment, the process has the following steps:
  • 1. Profile Creation: The user first enters some basic profile information, then they choose their desired mood icons 200 (i.e. Emoodjis) from a series of preloaded icon options, and finally they set their preferred genres for each mood (i.e. which music, video, literature, and/or activity genres help them to feel those selected moods)—for example, Pop, Dance, and Hip Hop make the user feel Happy, so when the user selects the happy Emoodji, the app plays those music/video genres, or Hiking, Camping, and Rafting make the user feel Adventurous, so when the user selects the Adventurous Emoodji, the app provides suggestions on things to do that match those activity genres.
  • 2. Music/Video/Literature/Event Generation: Once the user 10 has created a profile, the app will provide music/video/literature/event files to the user, based upon the selected Emoodji 200 and the corresponding genres—for example, if Pop, Dance, and Hip Hop make the user feel Happy, the app will provide music that matches those music/video genres.
  • 3. Music/Video/Literature/Event Editing: In some embodiments, the user can edit the track that is provided to them via the system by adding additional instrumentation, vocals, sound bites, video clip, etc. The user can also edit the recommended literature file by adding content of their own choosing. They can similarly add extra activities to an event suggestion made by the app.
  • 4. Sharing/Collaboration: Users 10 can send each other Emoodjis in the same fashion they send traditional emojis—as a graphic representation of a mood, but one that triggers multi-media content to play or literature to be displayed when clicked by the recipient, or asks the recipient to schedule or confirm an event.
  • The process is defined in more detail below.
  • Profile Creation
  • The first step in the current invention process is the creation of a personalized user account (i.e. profile) that will be utilized to provide the user 10 with the most relevant music/video/literature/activity options:
  • 1. After the user downloads the app that contains the system data and execution files, the user will see a Welcome screen with an introduction outlining the uses of the app (its features and functionality) as shown in FIG. 2.
  • 2. Then, they will be directed to enter their personal data (for example, name, age, gender, address, etc.) as shown in FIG. 3.
  • 3. Next, the user 10 will select their preferred moods by choosing the associated icons 200, or Emojis (called Emoodjis in the current invention), that represent these moods (e.g. Happy, Energetic, Chill, Angry, etc.) as shown in FIG. 4. These Emoodjis 200 will be chosen from a set of preloaded icon options. In the preferred embodiment, the user can purchase additional Emoodji sets 200 through a one-time purchase or through enrollment in a recurring payment plan for premium services.
  • 4. Finally, for each selected Emoodji, the user 10 will tell the system 1 which genres they listen to or watch or read, or what events they participate in when they want to feel those moods as shown in FIG. 5. The system 1 has artificial intelligence (AI) that is used to create a correlation between mood and music/video/literature/activity 210 for that particular user 10.
  • The user 10 can assign their desired mood icons 200 to an activity or event such as eating, movies, sporting events, dancing or any other activity or social event. This can be done as a standalone system or in combination with music, videos, and/or literature.
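  • As a minimal sketch (not the patent's actual implementation), the profile created in the steps above could be stored as a simple mapping from each chosen Emoodji label to the genres the user paired with it; the Profile class and all field names below are illustrative assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        """Hypothetical user profile pairing each Emoodji with preferred genres."""
        name: str
        # Emoodji label -> genres the user says evoke that mood
        mood_genres: dict = field(default_factory=dict)

    profile = Profile(name="Alice")
    profile.mood_genres["Happy"] = ["Pop", "Dance", "Hip Hop"]
    profile.mood_genres["Chill"] = ["Jazz", "Soul", "Blues"]
    profile.mood_genres["Adventurous"] = ["Hiking", "Camping", "Rafting"]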
  • Music/Video/Literature/Event Generation
  • After the user 10 has selected their mood icons (Emoodjis) 200 and associated music, video, and literature genres 210 or events with each mood (i.e. what music/video genres make them feel that particular mood), the system 1 in the preferred embodiment provides the user with music/video/literature/activity options based upon their desired mood—i.e. after they select the Emoodji, the music/video/literature would come from one of the following: a Personal Music/Video/Literature Library, a Licensed Database, an Original Database, Original Compositions (AI) or Original Compositions (User). FIG. 6 shows a user 10 choosing music based on their mood or the mood that they want to be in.
  • The user 10 can match their existing music, video, and/or literature library on their mobile computing device, such as a cell phone, tablet or computer 20 (e.g. including music, video, and/or literature from a download music/video/literature site), with the system 1. The songs, videos, and literature 210 in the database are tagged by genre. The user selects the mood by selecting the appropriate icon (Emoodji) onscreen, which is translated into a genre by the system's software programming algorithms, for example, Chill=Jazz, Soul, Blues.
  • The system 1 then searches for songs, videos, or literature in the user's music/video/literature library that are tagged with the appropriate genres, and provides a selection of songs that match the associated music genres, as determined by the user 10, as shown in FIG. 7. So the user 10 picks the Emoodji 200 associated with their mood or desired mood, and the system 1 picks songs it associates with that mood.
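  • The mood-to-genre translation and tag search described above could look like the following minimal sketch, where the mapping table, library records, and function names are assumptions for illustration rather than the system's actual code:

    # Hypothetical mood-to-genre table (e.g. Chill=Jazz, Soul, Blues) and a
    # genre-tagged personal library.
    MOOD_TO_GENRES = {"Chill": ["Jazz", "Soul", "Blues"]}

    library = [
        {"title": "Blue in Green", "genre": "Jazz"},
        {"title": "Thunderstruck", "genre": "Rock"},
        {"title": "A Change Is Gonna Come", "genre": "Soul"},
    ]

    def tracks_for_mood(mood, library):
        """Return library entries whose genre tag matches the selected Emoodji's genres."""
        genres = set(MOOD_TO_GENRES.get(mood, []))
        return [track for track in library if track["genre"] in genres]

    print(tracks_for_mood("Chill", library))  # the Jazz and Soul entries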
  • Another option is for the system 1 to use a Licensed Database. A Licensed Database is an external database of songs/video/literature that have been licensed and archived via a centralized server. The songs in the music/video/literature library are tagged by genre 200. The user 10 selects the mood by selecting the appropriate icon 200 (Emoodji) onscreen, which is translated into a genre by the system's algorithms—for example, Chill=Jazz, Soul, Blues. In this example, the system searches for songs 210 in the licensed music/video database that are tagged with the appropriate genres 200. Next, the system 1 provides a selection of songs that match the associated music genres 200, as determined by the user 10.
  • The system 1 may also use an Original Database which is an external database of music/video/literature in non-transitory memory that have been created originally for the platform (i.e. rights to music/video/literature owned outright by the platform operators, via acquisition or work-for-hire) and archived via a centralized server. Again, the songs and videos 210 in the music/video library are tagged by genre. The user 10 selects the mood by selecting the appropriate icon 200 (Emoodji) onscreen, which is translated into a genre by the system's algorithms—for example, Chill=Jazz, Soul, Blues. The system 1 searches for songs or videos in the original music database that are tagged with the appropriate genres 200. The system 1 will provide a selection of songs or videos that match the associated music genres 200, as determined by the user 10.
  • The system 1 can create Original Compositions using its artificial intelligence (AI). The system's AI is programmed with music theory (i.e. what types of sounds, instruments, melodies, etc. are typical/inherent to certain genres). A database of “song parts” (i.e. instrumental components of a song—for example, drums, bass, guitar, horns, winds, strings, etc.) is established via a centralized server. The song parts in the music library are tagged by genre 200 (for example, certain drum tracks work best with hip hop, certain bass tracks work best with funk, certain horn tracks work best for jazz, etc.). The user 10 selects the mood by selecting the associated Emoodji 200, which is translated into a genre by the system's programming algorithms—for example, Chill=Jazz, Soul, Blues. The system 1 searches for song parts in the music database that are tagged with the appropriate genres 200. The system 1 composes a variety of songs that fit the selected genre, using its programmed music theory together with the song parts found in the database. The system may add in video, in accordance with the selected mood and appropriate genre. The system then provides the songs or videos to the user 10. The user 10 can add additional song or video parts (tracks), as identified by the AI (i.e. fitting the selected genre).
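  • One way to picture the AI assembly step is the toy sketch below: pick one genre-tagged part per instrument and combine them. The parts database, clip names, and random selection are illustrative assumptions; the patent does not specify this logic:

    import random

    # Hypothetical song-parts database, tagged by genre and instrument.
    SONG_PARTS = [
        {"instrument": "drums", "genre": "Jazz", "clip": "brush_kit.wav"},
        {"instrument": "drums", "genre": "Hip Hop", "clip": "boom_bap.wav"},
        {"instrument": "bass", "genre": "Jazz", "clip": "upright_walk.wav"},
        {"instrument": "horns", "genre": "Jazz", "clip": "muted_trumpet.wav"},
    ]

    def compose(genre, rng=random):
        """Assemble one part per instrument from the parts tagged with the genre."""
        by_instrument = {}
        for part in SONG_PARTS:
            if part["genre"] == genre:
                by_instrument.setdefault(part["instrument"], []).append(part)
        return {inst: rng.choice(parts)["clip"] for inst, parts in by_instrument.items()}

    print(compose("Jazz"))  # e.g. {'drums': 'brush_kit.wav', 'bass': ..., 'horns': ...}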
  • The system 1 can enable the user to create Original Compositions. The system's AI is programmed with music theory (i.e. what types of sounds, instruments, melodies, etc. are typical/inherent to certain genres 200). The system 1 maintains a database of “song parts” (i.e. instrumental components of a song, for example, drums, bass, guitar, horns, winds, strings, etc.) established via a centralized server. The song or video parts in the music/video library are tagged by genre (for example, certain drum tracks work best with hip hop, certain bass tracks work best with funk, certain horn tracks work best for jazz, etc.). The user 10 selects the mood by selecting the associated Emoodji 200, which is translated into a genre by the software programming algorithms, for example, Chill=Jazz, Soul, Blues. The system 1 searches for song parts in the music database that are tagged with the appropriate genres 200. The system 1 provides the potential song parts to the user, categorized by instrument as shown in FIG. 8. The user 10 selects a track for each possible instrument, as identified by the AI (i.e. fitting the selected genre 200). These selected tracks are then aggregated and used by the system 1 to create an Original Composition, as sketched below.
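  The user-driven variant might look like this sketch, where genre-fitting parts are presented grouped by instrument (as in FIG. 8) and the user's one-pick-per-instrument choices are aggregated. The JAZZ_PARTS table, file names, and function name are illustrative only.

```python
# Illustrative genre-fitting parts, grouped by instrument as presented in FIG. 8.
JAZZ_PARTS = {
    "drums": ["brush_kit_01.wav", "ride_swing_02.wav"],
    "bass":  ["upright_03.wav"],
    "horns": ["muted_trumpet_05.wav", "sax_lead_06.wav"],
}

def build_original_composition(choices: dict[str, int]) -> list[str]:
    """Aggregate the user's one-track-per-instrument selections into a composition."""
    return [JAZZ_PARTS[instrument][index] for instrument, index in choices.items()]

# The user 10 taps one option per instrument category:
print(build_original_composition({"drums": 1, "bass": 0, "horns": 0}))
# -> ['ride_swing_02.wav', 'upright_03.wav', 'muted_trumpet_05.wav']
```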
  • Music/Video/Literature/Event Editing
  • The system 1 would provide Music and Video Editing. The user 10 will be able to edit the audio/video track provided by the system 1, as shown in FIG. 8 (this is inherently possible with the tracks originally composed within the app, or through licensing arrangements for prerecorded media).
  • A user 10 will be given the option to customize their music track by layering in a variety of additional instruments (e.g. guitars, drums, bass, strings, brass, etc.) for a fee, as shown in FIG. 8. FIG. 9 shows where a video can be added to an audio track. For a video file, different video clips and editing options can be used to add to or modify the video. These video files may be related to the user's desired mood, as indicated by the selected Emoodji, and the associated video genre.
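  One plausible way to model such an editable project is sketched below: a base track plus an ordered list of purchased instrument layers and mood-matched video clips. The MediaProject class and all file names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MediaProject:
    """A user-editable project: a base audio track plus optional layers and video."""
    base_track: str
    layers: list[str] = field(default_factory=list)       # e.g. extra guitars, strings
    video_clips: list[str] = field(default_factory=list)  # mood-matched clips (FIG. 9)

    def add_layer(self, instrument_track: str) -> None:
        self.layers.append(instrument_track)  # the for-a-fee add-on from FIG. 8

    def add_video(self, clip: str) -> None:
        self.video_clips.append(clip)

project = MediaProject(base_track="chill_jazz_mix.wav")
project.add_layer("strings_pad.wav")
project.add_video("rainy_window.mp4")
print(project)
```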
  • After the system 1 recommends a restaurant based on the criteria, the user 10 can customize the choices, as shown in FIG. 15. The user 10 can review the information provided by the system about the recommended restaurants. The user 10 can choose one of the recommended restaurants or ask for another suggestion; they can then choose a reservation time and date, secure their seating, and have the system 1 add the event to their calendar. The system 1 can have its own calendar function or can interface with a user's calendar application.
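  A minimal sketch of the booking-to-calendar handoff, assuming an in-memory stand-in for the system's own calendar function. Reservation, book_and_schedule, and the restaurant name are illustrative; a real app might instead hand the event to the user's calendar application.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reservation:
    restaurant: str
    when: datetime
    party_size: int

calendar: list[Reservation] = []  # stand-in for the system's own calendar function

def book_and_schedule(restaurant: str, when: datetime, party_size: int) -> Reservation:
    """Secure the seating, then add the event to the user's calendar."""
    reservation = Reservation(restaurant, when, party_size)
    calendar.append(reservation)  # alternatively, hand off to the user's calendar app
    return reservation

book_and_schedule("Recommended Bistro", datetime(2016, 7, 19, 19, 30), party_size=2)
print(calendar)
```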
  • Sharing/Collaboration
  • The system 1 would allow for social interactions (e.g. sharing and collaborations) through one or more of the following options: In-App Messaging (Text), In-App Messaging (Voice), or a third-party (i.e. installed) Keyboard.
  • The system 1 can enable multiple users to contribute to a single song, video, or literary work (or piece of multi-media content): each user 10 can add to the literature, audio track, or video clip, share it with other users, and repeat this process until the work is finalized, as shown in FIG. 13. This contribution and editing can occur many times.
  • As shown in FIG. 10, the audio/video/literature files can be sent and shared through a number of means, such as In-App Messaging (Text). The user 10 creates multi-media content (or MMC), which can contain music, video, voice, graphics, etc. The user 10 selects ‘Share’ from within the system 1. The user 10 selects the desired recipients (individuals and/or groups) 220. The user 10 selects the ‘Text/Type/Message’ option. The user 10 can write an original message in a text entry field. The user 10 saves the text message. The message is added to the MMC prior to sending it to the selected recipient(s) 220. The user 10 chooses the social media hyperlink to send it on, as shown in FIG. 11.
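  A rough sketch of this text-share flow, assuming a simple MMC record whose note is attached before delivery. MMC, share_mmc, and the recipient handles are hypothetical, and the return value stands in for whatever transport actually delivers the message.

```python
from dataclasses import dataclass

@dataclass
class MMC:
    """Multi-media content: music, video, voice, graphics, plus an attached note."""
    media: list[str]
    note: str = ""

def share_mmc(mmc: MMC, recipients: list[str], note: str) -> dict[str, MMC]:
    """Attach the user's saved text message to the MMC, then send to each recipient."""
    mmc.note = note                             # the message is added prior to sending
    return {name: mmc for name in recipients}   # stand-in for the actual transport

outbox = share_mmc(MMC(media=["chill_mix.wav"]), ["alex", "sam"], "Thinking of you!")
print(outbox)
```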
  • When the recipient receives the message, they click an Emoodji 200 to play the MMC as shown in FIG. 12.
  • The system 1 has In-App Messaging (Voice). The user 10 creates MMC. The user 10 selects ‘Share’ from within the system screen as shown in FIG. 10. The user 10 selects the desired recipients 220 (individuals and/or groups). The user 10 selects the ‘Recording/Voice’ option. The user can create an original message via an audio/video recording. The user 10 saves the message. The message is added to the MMC prior to sending it to the selected recipient(s) 220. When the recipient 110 receives the message, they click an Emoodji 200 to play the MMC as shown in FIG. 12.
  • The user 10 can install a third-party Keyboard that features Emoodjis as the keys instead of traditional alphanumeric characters, as described below. The user 10 creates an MMC and saves their MMC to their Archive. The user 10 installs the Emoodji Keyboard app to their device. The keyboard can feature the Emoodjis selected by the user 10, or a variety of popular or common Emoodjis. This keyboard would act in a manner similar to animated GIF keyboards or meme keyboards, where the user 10 can search for an image by typing the theme into a search bar. With this app, the user presses the Emoodji 200 to select the associated mood/theme. The user 10 can access the Emoodji Keyboard from any text conversation: Chats, Direct Messages, Twitter, etc. The user 10 selects an Emoodji, triggering one of the following events to occur: the AI searches the recipient's music library to play a music/video/literature track tagged with the selected mood/theme (i.e. genre); the AI searches the recipient's Archive of Original Compositions to play a track tagged with the selected mood/theme (i.e. genre); or the user sends the Emoodji 200 to the recipient 110, which is hyperlinked to the app, so that upon pressing the Emoodji 200 in the text conversation, the AI is triggered to search the centralized (i.e. cloud-based) song parts database and compose a new Original Composition on the recipient's device. It is also possible for the AI to search the song/video/literature parts database and compose an original work on the user's device first, before sending it to the recipient.
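  The three trigger behaviors can be read as a simple fallback chain, sketched below under the assumption that library and archive lookups are plain mood-keyed dictionaries. Recipient, compose_from_cloud_parts, and on_emoodji_pressed are illustrative names only.

```python
# Hypothetical dispatch for a recipient tapping an Emoodji key in a conversation.
class Recipient:
    def __init__(self, library: dict[str, str], archive: dict[str, str]):
        self.library = library  # mood/genre -> track in the recipient's music library
        self.archive = archive  # mood/genre -> archived Original Composition

def compose_from_cloud_parts(mood: str) -> str:
    """Stand-in for composing a fresh track from the cloud song-parts database."""
    return f"new_composition_for_{mood.lower()}.wav"

def on_emoodji_pressed(mood: str, recipient: Recipient) -> str:
    """Try the recipient's library, then their archive, else compose something new."""
    return (recipient.library.get(mood)
            or recipient.archive.get(mood)
            or compose_from_cloud_parts(mood))

r = Recipient(library={}, archive={"Chill": "our_song.wav"})
print(on_emoodji_pressed("Chill", r))  # -> our_song.wav
```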
  • As shown in FIG. 16, the user 10 can access their saved activities in their calendar. The user 10 can then share the event/activity with friends and family. The user 10 can select from their contacts (or share through social media) the people that they want to share the event with. The system 1 can share through the original Emoodji selected when creating the activity (via email, text, social media, etc.). The people the system 1 shares with can confirm through the system 1 or through other confirmation means, such as social media, a phone call, or a text. They can also provide input on the event or suggest changes to it.
  • Any activity or event can be shared this way.
  • Operation
  • The current invention can work as an Audio/Video/Literature/Media Standalone App where the:
  • 1. User 10 selects moods by choosing from preloaded icon options (these can be customized by the user later),
  • 2. User 10 selects genre(s) of that media (i.e. audio, video, literature, or combination) that they associate with each mood, and
  • 3. User 10 selects mood via Emoodji; AI chooses appropriate genre option(s) that match that mood (based upon user's input in Step 2).
  • The current invention can also work as an Activity/Event Standalone App, covering activities and events in addition to media, where the:
  • 1. User 10 selects moods by choosing from preloaded icon options (these can be customized by the user 10 later),
  • 2. User 10 selects activities (i.e. eating, shopping, hiking, reading, etc.) that they associate with each mood, and
  • 3. User 10 selects mood via Emoodji; AI chooses appropriate activity option(s) that match that mood (based upon user's input in Step 2).
  • In activity pairing (i.e. matching mood to event, or Step 2 above), the user 10 selects moods, then actions—or they can select favorite actions, and then assign one mood each to those actions.
  • The system 1 can be an Integrated App—or General Items of Interest—where the:
  • 1. User 10 selects moods by choosing from preloaded icon options (these can be customized later),
  • 2. User 10 selects items from a variety of categories that they associate with each mood, including music genres, specific bands, movie genres, specific movie titles, book genres, specific book titles, food, activities, events, etc., and
  • 3. User 10 selects mood via Emoodji; AI chooses appropriate option(s) that match that mood (based upon user's input in Step 2).
  • As the user 10 engages with the app, the AI learns which options the user 10 favors and which options the user 10 ignores; the AI then updates itself accordingly to make more favorable recommendations, as in the toy sketch below.
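  A toy sketch of that feedback loop, assuming a per-option preference weight nudged up on engagement and down on ignores. The additive update and the names record_feedback/recommend are illustrative, not the patent's learning method.

```python
weights: dict[str, float] = {}  # option -> learned preference score

def record_feedback(option: str, engaged: bool, step: float = 0.1) -> None:
    """Nudge an option's weight up when the user engages, down when they ignore it."""
    weights[option] = weights.get(option, 0.0) + (step if engaged else -step)

def recommend(options: list[str]) -> list[str]:
    """Order candidate options by learned preference, most favored first."""
    return sorted(options, key=lambda option: weights.get(option, 0.0), reverse=True)

record_feedback("Jazz", engaged=True)
record_feedback("Blues", engaged=False)
print(recommend(["Blues", "Jazz", "Soul"]))  # -> ['Jazz', 'Soul', 'Blues']
```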
  • All of the above embodiments may be contained within one single software process, such as an app, or may be implemented in individual processes or apps, each designed to feature one or more of the specific features outlined above, whether music, video, or activity, or a combination thereof. Note that the system can also work with literary works and literature generation.
  • CONCLUSION
  • Although the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible. Therefore, the scope of the appended claims should not be limited to the description of the preferred versions contained herein. The system is not limited to any particular programming language, computer platform, or architecture.
  • As to a further discussion of the manner of usage and operation of the present invention, the same should be apparent from the above description. Accordingly, no further discussion relating to the manner of usage and operation will be provided. With respect to the above description, it is to be realized that the optimum dimensional relationships for the parts of the invention, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present invention.
  • Therefore, the foregoing is considered as illustrative only of the principles of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation shown and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.

Claims (20)

That which is claimed is:
1. A system comprising:
having a system that resides in the non-transitory memory of a computing device;
selecting an emoji to be associated with a mood; and
having the system associate a plurality of items to said emoji based on the mood.
2. The system according to claim 1 in having the item be audio files.
3. The system according to claim 2 wherein the files are native to the user's device.
4. The system according to claim 2 wherein the files are stored in a centralized server.
5. A system according to claim 1 in having the item be video files.
6. The system according to claim 5 wherein the files are native to the user's device.
7. The system according to claim 5 wherein the files are stored in a centralized server.
8. A system according to claim 1 in having the item be an event.
9. The system according to claim 8 wherein the system uses criteria to make suggestions on the event.
10. The system according to claim 1 in having the item be literature files.
11. A system according to claim 1 in having the system generate an item based on the emoji.
12. A system according to claim 11 in having the system use artificial intelligence to generate the item.
13. A system according to claim 11 in having the system share the item.
14. A system according to claim 13 allowing collaboration on the item.
15. A system according to claim 11 allowing modification to the item.
16. A system comprising:
having a system that resides in the non-transitory memory of a computing device;
selecting an emoji to be associated with a mood; and
having the system associate a plurality of items comprising one or more of videos, audios, events, or literature to said emoji based on the mood.
17. A system according to claim 16 in having the system generate an item based on the emoji.
18. A system according to claim 17 in having the system use artificial intelligence to generate the item.
19. A system according to claim 16 in having the system share the item.
20. A system according to claim 16 allowing collaboration on the item.
Priority and Related Applications (Family ID: 60988730)

• US15/214,436 (this application): priority date 2016-07-19; filing date 2016-07-19; published as US20180025004A1 on 2018-01-25; status: Abandoned.
• US16/297,019 (continuation of US15/214,436): priority date 2016-07-19; filing date 2019-03-08; published as US20190205328A1 on 2019-07-04; granted as US11151187B2 on 2021-10-19; status: Expired - Fee Related.

Title of both applications: Process to provide audio/video/literature files and/or events/activities, based upon an emoji or icon associated to a personal feeling

Country: US




Legal Events

• STPP (Information on status: patent application and granting procedure in general): NON FINAL ACTION MAILED
• STCB (Information on status: application discontinuation): ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION